A Method for Automatic Seamline Network Selection Based on Building Vectors
NASA Astrophysics Data System (ADS)
Li, P.; Dong, Y.; Hu, Y.; Li, X.; Tan, P.
2018-04-01
In order to improve the efficiency of large-scale orthophoto production for cities, this paper presents a method for automatic selection of the seamline network in large-scale orthophotos based on building vectors. First, a simple model of each building is built by combining the building's vector outline, its height, and the DEM, and the imaging area of the building on a single DOM is obtained. Then, the initial Voronoi network of the survey area is automatically generated from the nadir positions of all images. Finally, the final seamline network is obtained by automatically optimizing all nodes and seamlines in the network according to the imaging areas of the buildings. The experimental results show that the proposed method can not only quickly route the seamline network around buildings, but also retain the Voronoi network's property of minimizing projection distortion, thus effectively solving the problem of automatic seamline network selection in orthophoto mosaicking.
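To make the initial network step concrete, here is a minimal Python sketch, assuming hypothetical image nadir coordinates, that generates the initial Voronoi seamline network with scipy; the building-aware node and seamline optimization described above is not reproduced.

```python
# Minimal sketch of the initial seamline network step: build a Voronoi
# diagram from the nadir (bottom) points of all images. Point values and
# variable names are illustrative, not from the paper.
import numpy as np
from scipy.spatial import Voronoi

# Hypothetical nadir positions of the orthoimages in ground coordinates (m).
image_centers = np.array([
    [0.0, 0.0], [1000.0, 50.0], [2000.0, -30.0],
    [500.0, 900.0], [1500.0, 950.0], [2500.0, 880.0],
])

vor = Voronoi(image_centers)

# Each finite ridge between two cells is a candidate seamline segment;
# later stages would reroute segments that cross building imaging areas.
for (p, q), ridge in zip(vor.ridge_points, vor.ridge_vertices):
    if -1 in ridge:          # skip ridges extending to infinity
        continue
    a, b = vor.vertices[ridge[0]], vor.vertices[ridge[1]]
    print(f"seamline between images {p} and {q}: {a} -> {b}")
```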
Sultan, Mohammad M; Kiss, Gert; Shukla, Diwakar; Pande, Vijay S
2014-12-09
Given the large number of crystal structures and NMR ensembles that have been solved to date, classical molecular dynamics (MD) simulations have become powerful tools in the atomistic study of the kinetics and thermodynamics of biomolecular systems on ever increasing time scales. By virtue of the high-dimensional conformational state space that is explored, the interpretation of large-scale simulations faces difficulties not unlike those in the big data community. We address this challenge by introducing a method called clustering based feature selection (CB-FS) that employs a posterior analysis approach. It combines supervised machine learning (SML) and feature selection with Markov state models to automatically identify the relevant degrees of freedom that separate conformational states. We highlight the utility of the method in the evaluation of large-scale simulations and show that it can be used for the rapid and automated identification of relevant order parameters involved in the functional transitions of two exemplary cell-signaling proteins central to human disease states.
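The posterior-analysis idea behind CB-FS can be illustrated in a few lines. The sketch below, with synthetic placeholder data standing in for a featurized trajectory and its Markov-state labels, trains a supervised classifier and ranks features by importance; it illustrates the general approach, not the authors' implementation.

```python
# Sketch of the CB-FS idea under simplifying assumptions: conformations are
# featurized (e.g., as distances/dihedrals), labeled by their Markov-state
# assignment, and a supervised model ranks which degrees of freedom
# separate the states. Data here are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_frames, n_features = 2000, 50
X = rng.normal(size=(n_frames, n_features))          # featurized trajectory
labels = (X[:, 7] + 0.5 * X[:, 23] > 0).astype(int)  # stand-in MSM states

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
ranking = np.argsort(clf.feature_importances_)[::-1]
print("features most discriminative of the states:", ranking[:5])
```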
A new method of automatic landmark tagging for shape model construction via local curvature scale
NASA Astrophysics Data System (ADS)
Rueda, Sylvia; Udupa, Jayaram K.; Bai, Li
2008-03-01
Segmentation of organs in medical images is a difficult task that very often requires model-based approaches. To build the model, we need an annotated training set of shape examples with correspondences indicated among the shapes. Manual positioning of landmarks is a tedious, time-consuming, and error-prone task, and is almost impossible in 3D space. To overcome some of these drawbacks, we devised an automatic method based on the notion of c-scale, a new local scale concept. For each boundary element b, the arc length of the largest homogeneous curvature region connected to b is estimated, as well as the orientation of the tangent at b. With this shape description method, we can automatically locate mathematical landmarks selected at different levels of detail. The method avoids the use of landmarks for the generation of the mean shape. The selection of landmarks on the mean shape is done automatically using the c-scale method. These landmarks are then propagated to each shape in the training set, thereby defining the correspondences among the shapes. Altogether, 12 strategies are described along these lines. The methods are evaluated on 40 MRI foot data sets, the object of interest being the talus bone. The results show that, for the same number of landmarks, the proposed methods are more compact than manual and equally spaced annotations. The approach is applicable to spaces of any dimensionality, although we have focused in this paper on 2D shapes.
Spectral saliency via automatic adaptive amplitude spectrum analysis
NASA Astrophysics Data System (ADS)
Wang, Xiaodong; Dai, Jialun; Zhu, Yafei; Zheng, Haiyong; Qiao, Xiaoyan
2016-03-01
Suppressing nonsalient patterns by smoothing the amplitude spectrum at an appropriate scale has been shown to effectively detect visual saliency in the frequency domain. Different filter scales are required for different types of salient objects. We observe that the optimal scale for smoothing the amplitude spectrum shares a specific relation with the size of the salient region. Based on this observation and the bottom-up saliency detection characterized by spectrum scale-space analysis for natural images, we propose to detect visual saliency, especially with salient objects of different sizes and locations, via automatic adaptive amplitude spectrum analysis. We not only provide a new criterion for automatic optimal scale selection but also retain the saliency maps corresponding to different salient objects with meaningful saliency information by adaptive weighted combination. The performance is evaluated by quantitative and qualitative comparisons with three different kinds of metrics on the four most widely used datasets and one up-to-date large-scale dataset. The experimental results validate that our method outperforms the existing state-of-the-art saliency models for predicting human eye fixations in terms of accuracy and robustness.
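As a rough illustration of amplitude-spectrum smoothing, the sketch below filters the amplitude spectrum at a single fixed scale and reconstructs with the original phase; the paper's automatic scale selection and adaptive weighted combination are not reproduced, and all parameter values are assumptions.

```python
# Minimal sketch of saliency from amplitude-spectrum smoothing: filter the
# amplitude spectrum with a Gaussian at one scale and reconstruct using the
# original phase. The adaptive scale selection/weighting of the paper is
# not reproduced here.
import numpy as np
from scipy.ndimage import gaussian_filter

def spectral_saliency(image, scale=3.0):
    f = np.fft.fft2(image)
    amplitude, phase = np.abs(f), np.angle(f)
    smoothed = gaussian_filter(amplitude, sigma=scale)   # suppress non-salient spikes
    rec = np.fft.ifft2(smoothed * np.exp(1j * phase))
    saliency = np.abs(rec) ** 2
    return gaussian_filter(saliency, sigma=2.0)          # post-smoothing

sal = spectral_saliency(np.random.rand(128, 128))
print(sal.shape, sal.max())
```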
ADMAP (automatic data manipulation program)
NASA Technical Reports Server (NTRS)
Mann, F. I.
1971-01-01
Instructions are presented on the use of ADMAP (Automatic Data Manipulation Program), an aerospace data manipulation computer program. The program was developed to aid in processing, reducing, plotting, and publishing electric propulsion trajectory data generated by the low thrust optimization program, HILTOP. The program has the option of generating SC4020 electric plots, and therefore requires the SC4020 routines to be available at execution time (even if not used). Several general routines are present, including a cubic spline interpolation routine, an electric plotter dash line drawing routine, and single parameter and double parameter sorting routines. Many routines are tailored for the manipulation and plotting of electric propulsion data, including an automatic scale selection routine, an automatic curve labelling routine, and an automatic graph titling routine. Data are accepted from either punched cards or magnetic tape.
NASA Astrophysics Data System (ADS)
Álvarez, Charlens; Martínez, Fabio; Romero, Eduardo
2015-01-01
Pelvic magnetic resonance images (MRI) are used in prostate cancer radiotherapy (RT) as part of radiation planning. Modern protocols require a manual delineation, a tedious and variable activity that may take about 20 minutes per patient, even for trained experts. That considerable time is an important workflow burden in most radiological services. Automatic or semi-automatic methods might improve efficiency by decreasing measurement times while conserving the required accuracy. This work presents a fully automatic atlas-based segmentation strategy that selects the most similar templates for a new MRI using a robust multi-scale SURF analysis. A new segmentation is then achieved by a linear combination of the selected templates, which are previously non-rigidly registered towards the new image. The proposed method shows reliable segmentations, obtaining an average Dice coefficient of 79% when compared with the expert manual segmentation, under a leave-one-out scheme with the training database.
McNeilly, Clyde E.
1977-01-04
A device is provided for automatically selecting from a plurality of ranges of a scale of values to which a meter may be made responsive, that range which encompasses the value of an unknown parameter. A meter relay indicates whether the unknown is of greater or lesser value than the range to which the meter is then responsive. The rotatable part of a stepping relay is rotated in one direction or the other in response to the indication from the meter relay. Various positions of the rotatable part are associated with particular scales. Switching means are sensitive to the position of the rotatable part to couple the associated range to the meter.
Automatic measurement of images on astrometric plates
NASA Astrophysics Data System (ADS)
Ortiz Gil, A.; Lopez Garcia, A.; Martinez Gonzalez, J. M.; Yershov, V.
1994-04-01
We present some results on the process of automatic detection and measurement of objects in overlapped fields of astrometric plates. The main steps of our algorithm are the following: determination of the scale and tilt between the charge-coupled device (CCD) and microscope coordinate systems, and estimation of the signal-to-noise ratio in each field; image identification and improvement of its position and size; final image centering; image selection and storage. Several parameters allow the use of variable criteria for image identification, characterization and selection. Problems related to faint images and crowded fields will be approached by special techniques (morphological filters, histogram properties and fitting models).
The evolution and devolution of cognitive control: The costs of deliberation in a competitive world
Tomlin, Damon; Rand, David G.; Ludvig, Elliot A.; Cohen, Jonathan D.
2015-01-01
Dual-system theories of human cognition, under which fast automatic processes can complement or compete with slower deliberative processes, have not typically been incorporated into larger scale population models used in evolutionary biology, macroeconomics, or sociology. However, doing so may reveal important phenomena at the population level. Here, we introduce a novel model of the evolution of dual-system agents using a resource-consumption paradigm. By simulating agents with the capacity for both automatic and controlled processing, we illustrate how controlled processing may not always be selected over rigid, but rapid, automatic processing. Furthermore, even when controlled processing is advantageous, frequency-dependent effects may exist whereby the spread of control within the population undermines this advantage. As a result, the level of controlled processing in the population can oscillate persistently, or even go extinct in the long run. Our model illustrates how dual-system psychology can be incorporated into population-level evolutionary models, and how such a framework can be used to examine the dynamics of interaction between automatic and controlled processing that transpire over an evolutionary time scale. PMID:26078086
Automatic rock detection for in situ spectroscopy applications on Mars
NASA Astrophysics Data System (ADS)
Mahapatra, Pooja; Foing, Bernard H.
A novel algorithm for rock detection has been developed for effectively utilising Mars rovers, and enabling autonomous selection of target rocks that require close-contact spectroscopic measurements. The algorithm demarcates small rocks in terrain images as seen by cameras on a Mars rover during traverse. This information may be used by the rover for selection of geologically relevant sample rocks, and (in conjunction with a rangefinder) to pick up target samples using a robotic arm for automatic in situ determination of rock composition and mineralogy using, for example, a Raman spectrometer. Determining rock samples within the region that are of specific interest without physically approaching them significantly reduces time, power and risk. Input images in colour are converted to greyscale for intensity analysis. Bilateral filtering is used for texture removal while preserving rock boundaries. Unsharp masking is used for contrast enhancement. Sharp contrasts in intensities are detected using Canny edge detection, with thresholds that are calculated from the image obtained after contrast-limited adaptive histogram equalisation of the unsharp masked image. Scale-space representations are then generated by convolving this image with a Gaussian kernel. A scale-invariant blob detector (Laplacian of the Gaussian, LoG) detects blobs independently of their sizes, and therefore requires a multi-scale approach with automatic scale selection. The scale-space blob detector consists of convolution of the Canny edge-detected image with a scale-normalised LoG at several scales, and finding the maxima of the squared LoG response in scale-space. After the extraction of local intensity extrema, the intensity profiles along rays going out of the local extremum are investigated. An ellipse is fitted to the region determined by significant changes in the intensity profiles. The fitted ellipses are overlaid on the original Mars terrain image for a visual estimation of the rock detection accuracy, and the number of ellipses is counted. Since geometry and illumination have the least effect on small rocks, the proposed algorithm is effective in detecting small rocks (or bigger rocks at larger distances from the camera) that consist of a small fraction of image pixels. Acknowledgements: The first author would like to express her gratitude to the European Space Agency (ESA/ESTEC) and the International Lunar Exploration Working Group (ILEWG) for their support of this work.
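The multi-scale LoG stage with automatic scale selection can be sketched as follows, assuming a plain intensity image and illustrative sigma values; the pre-processing chain (bilateral filtering, unsharp masking, CLAHE, Canny) and the ellipse fitting described above are omitted.

```python
# Sketch of scale-invariant blob detection with automatic scale selection:
# convolve with a scale-normalized LoG over several sigmas and keep local
# maxima of the squared response in scale-space.
import numpy as np
from scipy.ndimage import gaussian_laplace, maximum_filter

def log_blobs(image, sigmas=(2, 4, 8, 16), threshold=0.05):
    # squared, scale-normalized LoG responses stacked along the scale axis
    stack = np.stack([(s ** 2 * gaussian_laplace(image, s)) ** 2 for s in sigmas])
    # local maxima in the joint (scale, row, col) space above a threshold
    peaks = (stack == maximum_filter(stack, size=3)) & (stack > threshold)
    scale_idx, rows, cols = np.nonzero(peaks)
    return [(r, c, sigmas[i]) for i, r, c in zip(scale_idx, rows, cols)]

image = np.zeros((100, 100)); image[40:50, 40:50] = 1.0   # toy "rock"
print(log_blobs(image)[:5])   # (row, col, characteristic scale)
```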
Dorofeeva, A A; Khrustalev, A V; Krylov, Iu V; Bocharov, D A; Negasheva, M A
2010-01-01
Digital images of the iris were obtained to study peculiarities of iris color during the anthropological examination of 578 students aged 16-24 years. Simultaneously with the registration of the digital images, a visual assessment of eye color was carried out using the traditional scale of Bunak, based on 12 ocular prostheses. Original software for automatic determination of iris color based on the 12-class scale of Bunak was designed, and a computer version of that scale was developed. The proposed software allows iris color to be determined with high validity based on numerical evaluation; its application may reduce the bias due to subjective assessment and methodological divergences among different researchers. The software designed for automatic determination of iris color may help develop both theoretical and applied anthropology, and it may be used in forensic and emergency medicine, sports medicine, medico-genetic counseling and professional selection.
Adaptive Spot Detection With Optimal Scale Selection in Fluorescence Microscopy Images.
Basset, Antoine; Boulanger, Jérôme; Salamero, Jean; Bouthemy, Patrick; Kervrann, Charles
2015-11-01
Accurately detecting subcellular particles in fluorescence microscopy is of primary interest for further quantitative analysis such as counting, tracking, or classification. Our primary goal is to segment vesicles likely to share nearly the same size in fluorescence microscopy images. Our method termed adaptive thresholding of Laplacian of Gaussian (LoG) images with autoselected scale (ATLAS) automatically selects the optimal scale corresponding to the most frequent spot size in the image. Four criteria are proposed and compared to determine the optimal scale in a scale-space framework. Then, the segmentation stage amounts to thresholding the LoG of the intensity image. In contrast to other methods, the threshold is locally adapted given a probability of false alarm (PFA) specified by the user for the whole set of images to be processed. The local threshold is automatically derived from the PFA value and local image statistics estimated in a window whose size is not a critical parameter. We also propose a new data set for benchmarking, consisting of six collections of one hundred images each, which exploits backgrounds extracted from real microscopy images. We have carried out an extensive comparative evaluation on several data sets with ground-truth, which demonstrates that ATLAS outperforms existing methods. ATLAS does not need any fine parameter tuning and requires very low computation time. Convincing results are also reported on real total internal reflection fluorescence microscopy images.
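A minimal sketch of the detection stage follows, assuming an approximately Gaussian local background and illustrative parameter values; it simplifies the ATLAS method rather than reproducing it, and omits the paper's scale selection criteria.

```python
# Sketch of the ATLAS-style detection stage: threshold the LoG-filtered
# image with a threshold adapted from local statistics and a user-set
# probability of false alarm (PFA).
import numpy as np
from scipy.ndimage import gaussian_laplace, uniform_filter
from scipy.stats import norm

def detect_spots(image, sigma=2.0, pfa=1e-3, win=31):
    log_img = -gaussian_laplace(image.astype(float), sigma)  # bright spots -> positive
    mean = uniform_filter(log_img, win)                      # local mean
    sq_mean = uniform_filter(log_img ** 2, win)
    std = np.sqrt(np.maximum(sq_mean - mean ** 2, 1e-12))    # local std
    thresh = mean + norm.isf(pfa) * std                      # local threshold from PFA
    return log_img > thresh

mask = detect_spots(np.random.rand(128, 128))
print(mask.sum(), "pixels flagged")
```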
Automatic segmentation and supervised learning-based selection of nuclei in cancer tissue images.
Nandy, Kaustav; Gudla, Prabhakar R; Amundsen, Ryan; Meaburn, Karen J; Misteli, Tom; Lockett, Stephen J
2012-09-01
Analysis of preferential localization of certain genes within the cell nuclei is emerging as a new technique for the diagnosis of breast cancer. Quantitation requires accurate segmentation of 100-200 cell nuclei in each tissue section to draw a statistically significant result. Thus, for large-scale analysis, manual processing is too time consuming and subjective. Fortuitously, acquired images generally contain many more nuclei than are needed for analysis. Therefore, we developed an integrated workflow that selects, following automatic segmentation, a subpopulation of accurately delineated nuclei for positioning of fluorescence in situ hybridization-labeled genes of interest. Segmentation was performed by a multistage watershed-based algorithm and screening by an artificial neural network-based pattern recognition engine. The performance of the workflow was quantified in terms of the fraction of automatically selected nuclei that were visually confirmed as well segmented and by the boundary accuracy of the well-segmented nuclei relative to a 2D dynamic programming-based reference segmentation method. Application of the method was demonstrated for discriminating normal and cancerous breast tissue sections based on the differential positioning of the HES5 gene. Automatic results agreed with manual analysis in 11 out of 14 cancers, all four normal cases, and all five noncancerous breast disease cases, thus showing the accuracy and robustness of the proposed approach. Published 2012 Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCarroll, R; UT Health Science Center, Graduate School of Biomedical Sciences, Houston, TX; Beadle, B
Purpose: To investigate and validate the use of an independent deformable-based contouring algorithm for automatic verification of auto-contoured structures in the head and neck towards fully automated treatment planning. Methods: Two independent automatic contouring algorithms [(1) Eclipse's Smart Segmentation followed by pixel-wise majority voting, (2) an in-house multi-atlas based method] were used to create contours of 6 normal structures of 10 head-and-neck patients. After rating by a radiation oncologist, the higher performing algorithm was selected as the primary contouring method, the other used for automatic verification of the primary. To determine the ability of the verification algorithm to detect incorrect contours, contours from the primary method were shifted from 0.5 to 2cm. Using a logit model the structure-specific minimum detectable shift was identified. The models were then applied to a set of twenty different patients and the sensitivity and specificity of the models verified. Results: Per physician rating, the multi-atlas method (4.8/5 point scale, with 3 rated as generally acceptable for planning purposes) was selected as primary and the Eclipse-based method (3.5/5) for verification. Mean distance to agreement and true positive rate were selected as covariates in an optimized logit model. These models, when applied to a group of twenty different patients, indicated that shifts could be detected at 0.5cm (brain), 0.75cm (mandible, cord), 1cm (brainstem, cochlea), or 1.25cm (parotid), with sensitivity and specificity greater than 0.95. If sensitivity and specificity constraints are reduced to 0.9, detectable shifts of mandible and brainstem were reduced by 0.25cm. These shifts represent additional safety margins which might be considered if auto-contours are used for automatic treatment planning without physician review. Conclusion: Automatically contoured structures can be automatically verified. This fully automated process could be used to flag auto-contours for special review or used with safety margins in a fully automatic treatment planning system.
Predicting habits of vegetable parenting practices to facilitate the design of change programmes
USDA-ARS?s Scientific Manuscript database
Habit has been defined as the automatic performance of a usual behaviour. The present paper reports the relationships of variables from a Model of Goal Directed Behavior to four scales in regard to parents' habits when feeding their children: habit of (i) actively involving child in selection of veg...
Wang, Hui; Xu, Lei; Fan, Zhanming; Liang, Junfu; Yan, Zixu; Sun, Zhonghua
2017-01-01
The aim of this study was to evaluate the workflow efficiency of a new automatic coronary-specific reconstruction technique (Smart Phase, GE Healthcare-SP) for selection of the best cardiac phase with least coronary motion when compared with expert manual selection (MS) of best phase in patients with high heart rate. A total of 46 patients with heart rates above 75 bpm who underwent single beat coronary computed tomography angiography (CCTA) were enrolled in this study. CCTA of all subjects were performed on a 256-detector row CT scanner (Revolution CT, GE Healthcare, Waukesha, Wisconsin, US). With the SP technique, the acquired phase range was automatically searched in 2% phase intervals during the reconstruction process to determine the optimal phase for coronary assessment, while for routine expert MS, reconstructions were performed at 5% intervals and a best phase was manually determined. The reconstruction and review times were recorded to measure the workflow efficiency for each method. Two reviewers subjectively assessed image quality for each coronary artery in the MS and SP reconstruction volumes using a 4-point grading scale. The average HR of the enrolled patients was 91.1 ± 19.0 bpm. A total of 204 vessels were assessed. The subjective image quality using SP was comparable to that of the MS, 1.45 ± 0.85 vs 1.43 ± 0.81, respectively (p = 0.88). The average time was 246 seconds for the manual best phase selection, and 98 seconds for the SP selection, resulting in an average time saving of 148 seconds (60%) with use of the SP algorithm. The coronary-specific automatic cardiac best phase selection technique (Smart Phase) improves clinical workflow in high heart rate patients and provides image quality comparable with manual cardiac best phase selection. Reconstruction of single-beat CCTA exams with SP can benefit users who are less experienced in CCTA image interpretation.
NASA Technical Reports Server (NTRS)
Endlich, R. M.; Wolf, D. E.
1980-01-01
The automatic cloud tracking system was applied to METEOSAT 6.7 micrometers water vapor measurements to learn whether the system can track the motions of water vapor patterns. Data for the midlatitudes, subtropics, and tropics were selected from a sequence of METEOSAT pictures for 25 April 1978. Trackable features in the water vapor patterns were identified using a clustering technique and the features were tracked by two different methods. In flat (low contrast) water vapor fields, the automatic motion computations were not reliable, but in areas where the water vapor fields contained small scale structure (such as in the vicinity of active weather phenomena) the computations were successful. Cloud motions were computed using METEOSAT infrared observations (including tropical convective systems and midlatitude jet stream cirrus).
Eichmann, Mischa; Kugel, Harald; Suslow, Thomas
2008-12-01
Difficulties in identifying and differentiating one's emotions are a central characteristic of alexithymia. In the present study, automatic activation of the fusiform gyrus to facial emotion was investigated as a function of alexithymia as assessed by the 20-item Toronto Alexithymia Scale. During 3 Tesla fMRI scanning, pictures of faces bearing sad, happy, and neutral expressions masked by neutral faces were presented to 22 healthy adults who also responded to the Toronto Alexithymia Scale. The fusiform gyrus was selected as the region of interest, and voxel values of this region were extracted, summarized as means, and tested among the different conditions (sad, happy, and neutral faces). Masked sad facial emotions were associated with greater bilateral activation of the fusiform gyrus than masked neutral faces. The subscale, Difficulty Identifying Feelings, was negatively correlated with the neural response of the fusiform gyrus to masked sad faces. The correlation results suggest that automatic hyporesponsiveness of the fusiform gyrus to negative emotion stimuli may reflect problems in recognizing one's emotions in everyday life.
Remote Sensing Analysis of Forest Disturbances
NASA Technical Reports Server (NTRS)
Asner, Gregory P. (Inventor)
2015-01-01
The present invention provides systems and methods to automatically analyze Landsat satellite data of forests. The present invention can easily be used to monitor any type of forest disturbance such as from selective logging, agriculture, cattle ranching, natural hazards (fire, wind events, storms), etc. The present invention provides a large-scale, high-resolution, automated remote sensing analysis of such disturbances.
Remote sensing analysis of forest disturbances
NASA Technical Reports Server (NTRS)
Asner, Gregory P. (Inventor)
2012-01-01
The present invention provides systems and methods to automatically analyze Landsat satellite data of forests. The present invention can easily be used to monitor any type of forest disturbance such as from selective logging, agriculture, cattle ranching, natural hazards (fire, wind events, storms), etc. The present invention provides a large-scale, high-resolution, automated remote sensing analysis of such disturbances.
NASA Astrophysics Data System (ADS)
Paiè, Petra; Bassi, Andrea; Bragheri, Francesca; Osellame, Roberto
2017-02-01
Selective plane illumination microscopy (SPIM) is an optical sectioning technique that allows imaging of biological samples at high spatio-temporal resolution. Standard SPIM devices require dedicated set-ups, complex sample preparation and accurate system alignment, thus limiting the automation of the technique, its accessibility and throughput. We present a millimeter-scaled optofluidic device that incorporates selective plane illumination and fully automatic sample delivery and scanning. To this end an integrated cylindrical lens and a three-dimensional fluidic network were fabricated by femtosecond laser micromachining into a single glass chip. This device can upgrade any standard fluorescence microscope to a SPIM system. We used SPIM on a CHIP to automatically scan biological samples under a conventional microscope, without the need of any motorized stage: tissue spheroids expressing fluorescent proteins were flowed in the microchannel at constant speed and their sections were acquired while passing through the light sheet. We demonstrate high-throughput imaging of the entire sample volume (with a rate of 30 samples/min), segmentation and quantification in thick (100-300 μm diameter) cellular spheroids. This optofluidic device gives access to SPIM analyses to non-expert end-users, opening the way to automatic and fast screening of a high number of samples at subcellular resolution.
Abusam, A; Keesman, K J; van Straten, G; Spanjers, H; Meinema, K
2001-01-01
When applied to large simulation models, the process of parameter estimation is also called calibration. Calibration of complex non-linear systems, such as activated sludge plants, is often not an easy task. On the one hand, manual calibration of such complex systems is usually time-consuming, and its results are often not reproducible. On the other hand, conventional automatic calibration methods are not always straightforward and often hampered by local minima problems. In this paper a new straightforward and automatic procedure, which is based on the response surface method (RSM) for selecting the best identifiable parameters, is proposed. In RSM, the process response (output) is related to the levels of the input variables in terms of a first- or second-order regression model. Usually, RSM is used to relate measured process output quantities to process conditions. However, in this paper RSM is used for selecting the dominant parameters, by evaluating parameters sensitivity in a predefined region. Good results obtained in calibration of ASM No. 1 for N-removal in a full-scale oxidation ditch proved that the proposed procedure is successful and reliable.
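The parameter-screening idea can be sketched as follows, with a synthetic stand-in for the activated sludge model: fit a second-order response surface to model outputs over coded parameter levels and rank parameters by the magnitude of their effects. Names and values are illustrative.

```python
# Sketch of the RSM idea for picking identifiable parameters: fit a
# second-order regression of a model output against scaled parameter
# levels and rank parameters by the magnitude of their linear effects.
# The "model" here is a synthetic stand-in, not ASM No. 1.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n_runs, n_params = 200, 6
levels = rng.uniform(-1, 1, size=(n_runs, n_params))        # coded levels
output = 3 * levels[:, 0] - 2 * levels[:, 2] ** 2 + rng.normal(0, 0.1, n_runs)

poly = PolynomialFeatures(degree=2, include_bias=False)
reg = LinearRegression().fit(poly.fit_transform(levels), output)

linear_effects = np.abs(reg.coef_[:n_params])               # first n are linear terms
print("parameter sensitivity ranking:", np.argsort(linear_effects)[::-1])
```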
Jones, Joseph L.; Fulford, Janice M.; Voss, Frank D.
2002-01-01
A system of numerical hydraulic modeling, geographic information system processing, and Internet map serving, supported by new data sources and application automation, was developed that generates inundation maps for forecast floods in near real time and makes them available through the Internet. Forecasts for flooding are generated by the National Weather Service (NWS) River Forecast Center (RFC); these forecasts are retrieved automatically by the system and prepared for input to a hydraulic model. The model, TrimR2D, is a new, robust, two-dimensional model capable of simulating wide varieties of discharge hydrographs and relatively long stream reaches. TrimR2D was calibrated for a 28-kilometer reach of the Snoqualmie River in Washington State, and is used to estimate flood extent, depth, arrival time, and peak time for the RFC forecast. The results of the model are processed automatically by a Geographic Information System (GIS) into maps of flood extent, depth, and arrival and peak times. These maps subsequently are processed into formats acceptable by an Internet map server (IMS). The IMS application is a user-friendly interface to access the maps over the Internet; it allows users to select what information they wish to see presented and allows the authors to define scale-dependent availability of map layers and their symbology (appearance of map features). For example, the IMS presents a background of a digital USGS 1:100,000-scale quadrangle at smaller scales, and automatically switches to an ortho-rectified aerial photograph (a digital photograph that has camera angle and tilt distortions removed) at larger scales so viewers can see ground features that help them identify their area of interest more effectively. For the user, the option exists to select either background at any scale. Similar options are provided for both the map creator and the viewer for the various flood maps. This combination of a robust model, emerging IMS software, and application interface programming should allow the technology developed in the pilot study to be applied to other river systems where NWS forecasts are provided routinely.
Automatic Calibration of Stereo-Cameras Using Ordinary Chess-Board Patterns
NASA Astrophysics Data System (ADS)
Prokos, A.; Kalisperakis, I.; Petsa, E.; Karras, G.
2012-07-01
Automation of camera calibration is facilitated by recording coded 2D patterns. Our toolbox for automatic camera calibration using images of simple chess-board patterns is freely available on the Internet. But it is unsuitable for stereo-cameras whose calibration implies recovering camera geometry and their true-to-scale relative orientation. In contrast to all reported methods requiring additional specific coding to establish an object space coordinate system, a toolbox for automatic stereo-camera calibration relying on ordinary chess-board patterns is presented here. First, the camera calibration algorithm is applied to all image pairs of the pattern to extract nodes of known spacing, order them in rows and columns, and estimate two independent camera parameter sets. The actual node correspondences on stereo-pairs remain unknown. Image pairs of a textured 3D scene are exploited for finding the fundamental matrix of the stereo-camera by applying RANSAC to point matches established with the SIFT algorithm. A node is then selected near the centre of the left image; its match on the right image is assumed as the node closest to the corresponding epipolar line. This yields matches for all nodes (since these have already been ordered), which should also satisfy the 2D epipolar geometry. Measures for avoiding mismatching are taken. With automatically estimated initial orientation values, a bundle adjustment is performed constraining all pairs on a common (scaled) relative orientation. Ambiguities regarding the actual exterior orientations of the stereo-camera with respect to the pattern are irrelevant. Results from this automatic method show typical precisions not above 1/4 pixel for 640×480 web cameras.
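Two steps of this pipeline, estimating the fundamental matrix from SIFT matches and matching a node to the closest epipolar line, might look like the following OpenCV sketch; the file names and node arrays are placeholders, not the authors' toolbox.

```python
# Sketch: fundamental matrix from SIFT matches on a textured scene pair,
# then pick the right-image node closest to the epipolar line of a
# left-image node. Image files and node arrays are hypothetical.
import numpy as np
import cv2

left = cv2.imread("scene_left.png", cv2.IMREAD_GRAYSCALE)    # placeholder file
right = cv2.imread("scene_right.png", cv2.IMREAD_GRAYSCALE)  # placeholder file

sift = cv2.SIFT_create()
k1, d1 = sift.detectAndCompute(left, None)
k2, d2 = sift.detectAndCompute(right, None)
matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(d1, d2)

pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
pts2 = np.float32([k2[m.trainIdx].pt for m in matches])
F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)

# Match a chess-board node: the right-image node nearest its epipolar line.
node_left = np.array([320.0, 240.0, 1.0])         # hypothetical node (homogeneous)
nodes_right = np.random.rand(48, 2) * [640, 480]  # hypothetical detected nodes
line = F @ node_left                              # epipolar line ax + by + c = 0
dist = np.abs(nodes_right @ line[:2] + line[2]) / np.hypot(line[0], line[1])
print("matched right node:", nodes_right[np.argmin(dist)])
```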
NASA Technical Reports Server (NTRS)
Quiroga, S. Q.
1977-01-01
The applicability of LANDSAT digital information to soil mapping is described. A compilation of all cartographic information and bibliography of the study area is made. LANDSAT MSS images on a scale of 1:250,000 are interpreted and a physiographic map with legend is prepared. The study area is inspected and a selection of the sample areas is made. A digital map of the different soil units is produced and the computer mapping units are checked against the soil units encountered in the field. The soil boundaries obtained by automatic mapping were not substantially changed by field work. The accuracy of the automatic mapping is rather high.
A New Automatic Method of Urban Areas Mapping in East Asia from LANDSAT Data
NASA Astrophysics Data System (ADS)
XU, R.; Jia, G.
2012-12-01
Cities, as places where human activities are concentrated, account for a small percentage of global land cover but are frequently cited as chief causes of, and solutions to, climate, biogeochemical, and hydrological processes at local, regional, and global scales. Accompanying uncontrolled economic growth, urban sprawl has been attributed to the accelerating integration of East Asia into the world economy and has involved dramatic changes in urban form and land use. To understand the impact of urban extent on biogeophysical processes, reliable mapping of built-up areas is particularly essential in East Asian cities, whose smaller patches, greater fragmentation, and lower fraction of natural land cover within the urban landscape distinguish them from cities in the West. Segmentation of urban land from other land-cover types using remote sensing imagery can be done by standard classification processes as well as by a logic-rule calculation based on spectral indices and their derivations. Efforts to establish such a logic rule with no threshold for automatic mapping are highly worthwhile. Existing automatic methods are reviewed, and then a proposed approach is introduced, including the calculation of the new index and the improved logic rule. Following this, existing automatic methods as well as the proposed approach are compared in a common context. Afterwards, the proposed approach is tested separately in cities of large, medium, and small scale in East Asia selected from different LANDSAT images. The results are promising, as the approach can efficiently segment urban areas, even in the presence of more complex East Asian cities. Key words: urban extraction; automatic method; logic rule; LANDSAT images; East Asia.
(Figure: the proposed approach applied to extraction of urban built-up areas in Guangzhou, China.)
Zeng, Xueqiang; Luo, Gang
2017-12-01
Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected. Also, the values of one or more model parameters termed hyper-parameters must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, miscellaneous automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method. We show that compared to a state of the art automatic selection method, our method can significantly reduce search time, classification error rate, and standard deviation of error rate due to randomization. This is major progress towards enabling fast turnaround in identifying high-quality solutions required by many machine learning-based clinical data analysis tasks.
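A simplified sketch of the progressive-sampling idea is given below: candidate configurations are scored on growing training subsets and the weaker half is pruned at each stage. Random candidates stand in for the paper's Bayesian optimization, so this illustrates the sampling schedule only.

```python
# Simplified sketch of progressive sampling for automatic algorithm and
# hyper-parameter selection: score candidate configurations on growing
# training subsets and keep only the better half at each stage.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
candidates = [LogisticRegression(C=c, max_iter=500) for c in (0.01, 0.1, 1, 10)]
candidates += [RandomForestClassifier(n_estimators=n, random_state=0) for n in (50, 200)]

for n in (250, 1000, 4000):                      # progressively larger samples
    scores = [cross_val_score(m, X[:n], y[:n], cv=3).mean() for m in candidates]
    order = np.argsort(scores)[::-1]
    candidates = [candidates[i] for i in order[:max(1, len(order) // 2)]]
print("selected configuration:", candidates[0])
```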
NASA Technical Reports Server (NTRS)
Greene, P. H.
1972-01-01
Both in practical engineering and in control of muscular systems, low level subsystems automatically provide crude approximations to the proper response. Through low level tuning of these approximations, the proper response variant can emerge from standardized high level commands. Such systems are expressly suited to emerging large scale integrated circuit technology. A computer, using symbolic descriptions of subsystem responses, can select and shape responses of low level digital or analog microcircuits. A mathematical theory that reveals significant informational units in this style of control and software for realizing such information structures are formulated.
NASA Technical Reports Server (NTRS)
Coggeshall, M. E.; Hoffer, R. M.
1973-01-01
Remote sensing equipment and automatic data processing techniques were employed as aids in the institution of improved forest resource management methods. On the basis of automatically calculated statistics derived from manually selected training samples, the feature selection processor of LARSYS selected, upon consideration of various groups of the four available spectral regions, a series of channel combinations whose automatic classification performances (for six cover types, including both deciduous and coniferous forest) were tested, analyzed, and further compared with automatic classification results obtained from digitized color infrared photography.
NASA Astrophysics Data System (ADS)
Schlupp, A.; Sira, C.; Schmitt, K.; Schaming, M.
2013-12-01
In charge of intensity estimations in France, BCSF has collected and manually analyzed more than 47,000 online individual macroseismic questionnaires since 2000, up to intensity VI. These macroseismic data allow us to estimate one SQI value (Single Questionnaire Intensity) for each form following the EMS-98 scale. The reliability of automatic intensity estimation is important as such estimates are today used for automatic shakemap communication and crisis management. Today, the automatic intensity estimation at BCSF is based on the direct use of thumbnails selected from a menu by the witnesses. Each thumbnail corresponds to an EMS-98 intensity value, allowing us to quickly issue a map of communal intensity by averaging the SQIs at each city. Afterwards, an expert manually analyzes each form to determine a definitive SQI. This work is time-consuming and no longer suitable considering the increasing number of testimonies at BCSF. Nevertheless, it can take into account incoherent answers. We tested several automatic methods (USGS algorithm, correlation coefficient, thumbnails) (Sira et al. 2013, IASPEI) and compared them with 'expert' SQIs. These methods gave us medium scores (between 50 and 60% of SQIs well determined, and 35 to 40% within plus or minus one intensity degree). The best fit was observed with the thumbnails. Here, we present new approaches based on three statistical ranking methods: 1) a multinomial logistic regression model, 2) discriminant analysis (DISQUAL), and 3) support vector machines (SVMs). The first two methods are standard, while the third is more recent. These methods could be applied because the BCSF already has more than 47,000 forms in its database and because their questions and answers are well adapted to statistical analysis. The ranking models could then be used as an automatic method constrained by expert analysis. The performance of the automatic methods and the reliability of the estimated SQI can be evaluated thanks to the fact that each definitive BCSF SQI is determined by expert analysis. We compare the SQIs obtained by these methods on our database and discuss the coherency and variations between the automatic and manual processes. These methods lead to high scores, with up to 85% of the forms well classified and most of the remaining forms classified with a shift of only one intensity degree. This allows us to use the ranking methods as the best automatic methods for fast SQI estimation and fast shakemap production. The next step to improve the use of these methods will be to identify explanations for the forms not classified at the correct value and a way to select the few remaining forms that should be analyzed by an expert. Note that beyond intensity VI, online questionnaires are insufficient and a field survey is indispensable to estimate intensity. For such surveys in France, BCSF leads a macroseismic intervention group (GIM).
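The first ranking method can be sketched with scikit-learn as below, using synthetic placeholder data in place of the BCSF questionnaire encodings and expert SQIs:

```python
# Sketch of the multinomial logistic regression ranking method: encoded
# questionnaire answers predict the expert-assigned intensity class.
# Data are synthetic placeholders for the BCSF forms.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_forms, n_answers = 5000, 30
X = rng.integers(0, 5, size=(n_forms, n_answers)).astype(float)       # coded answers
sqi = np.clip(X[:, :5].mean(axis=1).round().astype(int) + 1, 1, 6)    # stand-in expert SQI

X_tr, X_te, y_tr, y_te = train_test_split(X, sqi, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("share classified at the expert value:", model.score(X_te, y_te))
```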
Automatic rapid attachable warhead section
Trennel, A.J.
1994-05-10
Disclosed are a method and apparatus for automatically selecting warheads or reentry vehicles from a storage area containing a plurality of types of warheads or reentry vehicles, automatically selecting weapon carriers from a storage area containing at least one type of weapon carrier, manipulating and aligning the selected warheads or reentry vehicles and weapon carriers, and automatically coupling the warheads or reentry vehicles with the weapon carriers such that coupling of improperly selected warheads or reentry vehicles with weapon carriers is inhibited. Such inhibition enhances safety of operations and is achieved by a number of means including computer control of the process of selection and coupling and use of connectorless interfaces capable of assuring that improperly selected items will be rejected or rendered inoperable prior to coupling. Also disclosed are a method and apparatus wherein the stated principles pertaining to selection, coupling and inhibition are extended to apply to any item-to-be-carried and any carrying assembly. 10 figures.
Automatic rapid attachable warhead section
Trennel, Anthony J.
1994-05-10
Disclosed are a method and apparatus for (1) automatically selecting warheads or reentry vehicles from a storage area containing a plurality of types of warheads or reentry vehicles, (2) automatically selecting weapon carriers from a storage area containing at least one type of weapon carrier, (3) manipulating and aligning the selected warheads or reentry vehicles and weapon carriers, and (4) automatically coupling the warheads or reentry vehicles with the weapon carriers such that coupling of improperly selected warheads or reentry vehicles with weapon carriers is inhibited. Such inhibition enhances safety of operations and is achieved by a number of means including computer control of the process of selection and coupling and use of connectorless interfaces capable of assuring that improperly selected items will be rejected or rendered inoperable prior to coupling. Also disclosed are a method and apparatus wherein the stated principles pertaining to selection, coupling and inhibition are extended to apply to any item-to-be-carried and any carrying assembly.
Ivezic, Nenad; Potok, Thomas E.
2003-09-30
A method for automatically evaluating a manufacturing technique comprises the steps of: receiving from a user manufacturing process step parameters characterizing a manufacturing process; accepting from the user a selection for an analysis of a particular lean manufacturing technique; automatically compiling process step data for each process step in the manufacturing process; automatically calculating process metrics from a summation of the compiled process step data for each process step; and, presenting the automatically calculated process metrics to the user. A method for evaluating a transition from a batch manufacturing technique to a lean manufacturing technique can comprise the steps of: collecting manufacturing process step characterization parameters; selecting a lean manufacturing technique for analysis; communicating the selected lean manufacturing technique and the manufacturing process step characterization parameters to an automatic manufacturing technique evaluation engine having a mathematical model for generating manufacturing technique evaluation data; and, using the lean manufacturing technique evaluation data to determine whether to transition from an existing manufacturing technique to the selected lean manufacturing technique.
Automatic cloud tracking applied to GOES and Meteosat observations
NASA Technical Reports Server (NTRS)
Endlich, R. M.; Wolf, D. E.
1981-01-01
An improved automatic processing method for the tracking of cloud motions as revealed by satellite imagery is presented and applications of the method to GOES observations of Hurricane Eloise and Meteosat water vapor and infrared data are presented. The method is shown to involve steps of picture smoothing, target selection and the calculation of cloud motion vectors by the matching of a group at a given time with its best likeness at a later time, or by a cross-correlation computation. Cloud motion computations can be made in as many as four separate layers simultaneously. For data of 4 and 8 km resolution in the eye of Hurricane Eloise, the automatic system is found to provide results comparable in accuracy and coverage to those obtained by NASA analysts using the Atmospheric and Oceanographic Information Processing System, with results obtained by the pattern recognition and cross correlation computations differing by only fractions of a pixel. For Meteosat water vapor data from the tropics and midlatitudes, the automatic motion computations are found to be reliable only in areas where the water vapor fields contained small-scale structure, although excellent results are obtained using Meteosat IR data in the same regions. The automatic method thus appears to be competitive in accuracy and coverage with motion determination by human analysts.
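The matching-by-cross-correlation step can be illustrated with a short sketch; the frames below are synthetic stand-ins for satellite image sectors with a known displacement.

```python
# Sketch of the cross-correlation step of automatic cloud tracking: find
# where a template from the first image best matches a search window in
# the second image; the displacement gives the motion vector.
import numpy as np
from skimage.feature import match_template

rng = np.random.default_rng(0)
frame0 = rng.random((200, 200))
frame1 = np.roll(frame0, shift=(3, 5), axis=(0, 1))   # known displacement

template = frame0[80:112, 80:112]                     # target selected at time t
search = frame1[64:144, 64:144]                       # search window at t + dt

score = match_template(search, template)              # normalized cross-correlation
dy, dx = np.unravel_index(np.argmax(score), score.shape)
print("motion vector (pixels):", (dy + 64 - 80, dx + 64 - 80))
```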
Automatic image enhancement based on multi-scale image decomposition
NASA Astrophysics Data System (ADS)
Feng, Lu; Wu, Zhuangzhi; Pei, Luo; Long, Xiong
2014-01-01
In image processing and computational photography, automatic image enhancement is one of the long-range objectives. Recent automatic image enhancement methods take into account not only global semantics, such as correcting color hue and brightness imbalances, but also the local content of the image, such as human faces or the sky in a landscape. In this paper we describe a new scheme for automatic image enhancement that considers both the global semantics and the local content of the image. Our automatic image enhancement method employs a multi-scale edge-aware image decomposition approach to detect underexposed regions and enhance the detail of the salient content. The experimental results demonstrate the effectiveness of our approach compared to existing automatic enhancement methods.
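A rough sketch of enhancement via multi-scale edge-aware decomposition follows, using bilateral filtering as the edge-aware smoother; the file name, layer count, and boost factors are assumptions, and the paper's decomposition differs in detail.

```python
# Sketch of enhancement via multi-scale edge-aware decomposition: split the
# image into a base layer and detail layers with bilateral filtering at
# increasing scales, then boost detail and lift underexposed regions.
import cv2
import numpy as np

img = cv2.imread("photo.jpg").astype(np.float32) / 255.0   # placeholder file
base = img.copy()
details = []
for sigma in (10, 30):                       # progressively coarser smoothing
    smoother = cv2.bilateralFilter(base, d=9, sigmaColor=0.1, sigmaSpace=sigma)
    details.append(base - smoother)          # detail layer at this scale
    base = smoother

enhanced = base ** 0.8                       # gamma lift for underexposed base
for d in details:
    enhanced += 1.5 * d                      # boost detail layers
cv2.imwrite("enhanced.jpg", np.clip(enhanced * 255, 0, 255).astype(np.uint8))
```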
Hümmer, Christiane; Poppe, Carolin; Bunos, Milica; Stock, Belinda; Wingenfeld, Eva; Huppert, Volker; Stuth, Juliane; Reck, Kristina; Essl, Mike; Seifried, Erhard; Bonig, Halvard
2016-03-16
Automation of cell therapy manufacturing promises higher productivity of cell factories, more economical use of highly-trained (and costly) manufacturing staff, facilitation of processes requiring manufacturing steps at inconvenient hours, improved consistency of processing steps and other benefits. One of the most broadly disseminated engineered cell therapy products is immunomagnetically selected CD34+ hematopoietic "stem" cells (HSCs). As the GMP-compliant clinical automated device CliniMACS Prodigy is being programmed to perform ever more complex sequential manufacturing steps, we developed a CD34+ selection module for comparison with the standard semi-automatic CD34 "normal scale" selection process on CliniMACS Plus, applicable for 600 × 10^6 target cells out of 60 × 10^9 total cells. Three split-validation processings with healthy donor G-CSF-mobilized apheresis products were performed; feasibility, time consumption and product quality were assessed. All processes proceeded uneventfully. Prodigy runs took about 1 h longer than CliniMACS Plus runs, albeit with markedly less hands-on operator time, and are therefore also suitable for less experienced operators. Recovery of target cells was the same for both technologies. Although impurities, specifically T- and B-cells, were 5 ± 1.6-fold and 4 ± 0.4-fold higher in the Prodigy products (p = ns and p = 0.013 for T- and B-cell depletion, respectively), the T-cell content per kg of a virtual recipient receiving 4 × 10^6 CD34+ cells/kg was below 10 × 10^3/kg even in the worst Prodigy product and thus more than fivefold below the specification of CD34+ selected mismatched-donor stem cell products. The products' theoretical clinical usability is thus confirmed. This split-validation exercise of a relatively short and simple process exemplifies the potential of automatic cell manufacturing. Automation will further gain in attractiveness when applied to more complex processes requiring frequent interventions or handling at unfavourable working hours, such as re-targeting of T-cells.
Searchfield, Grant D; Linford, Tania; Kobayashi, Kei; Crowhen, David; Latzel, Matthias
2018-03-01
To compare preference for and performance of manually selected programmes with an automatic sound classifier, the Phonak AutoSense OS. A single-blind repeated-measures study. Participants were fitted with Phonak Virto V90 ITE aids; preferences for different listening programmes were compared across four sound scenarios (speech in: quiet, noise, loud noise, and a car). Following a 4-week trial, preferences were reassessed and each user's preferred programme was compared to the automatic classifier for sound quality and hearing in noise (HINT test) using a 12-loudspeaker array. Twenty-five participants with symmetrical moderate-severe sensorineural hearing loss. Participants' manual programme preferences for the scenarios varied considerably between and within sessions. A HINT Speech Reception Threshold (SRT) advantage was observed for the automatic classifier over participants' manual selections for speech in quiet, loud noise, and car noise. Sound quality ratings were similar for both manual and automatic selections. The use of a sound classifier is a viable alternative to manual programme selection.
First order augmentation to tensor voting for boundary inference and multiscale analysis in 3D.
Tong, Wai-Shun; Tang, Chi-Keung; Mordohai, Philippos; Medioni, Gérard
2004-05-01
Most computer vision applications require the reliable detection of boundaries. In the presence of outliers, missing data, orientation discontinuities, and occlusion, this problem is particularly challenging. We propose to address it by complementing the tensor voting framework, which was limited to second order properties, with first order representation and voting. First order voting fields and a mechanism to vote for 3D surface and volume boundaries and curve endpoints in 3D are defined. Boundary inference is also useful for a second difficult problem in grouping, namely, automatic scale selection. We propose an algorithm that automatically infers the smallest scale that can preserve the finest details. Our algorithm then proceeds with progressively larger scales to ensure continuity where it has not been achieved. Therefore, the proposed approach does not oversmooth features or delay the handling of boundaries and discontinuities until model misfit occurs. The interaction of smooth features, boundaries, and outliers is accommodated by the unified representation, making possible the perceptual organization of data in curves, surfaces, volumes, and their boundaries simultaneously. We present results on a variety of data sets to show the efficacy of the improved formalism.
Scale Space for Camera Invariant Features.
Puig, Luis; Guerrero, José J; Daniilidis, Kostas
2014-09-01
In this paper we propose a new approach to compute the scale space of any central projection system, such as catadioptric, fisheye or conventional cameras. Since these systems can be explained using a unified model, the single parameter that defines each type of system is used to automatically compute the corresponding Riemannian metric. This metric, combined with the partial differential equations framework on manifolds, allows us to compute the Laplace-Beltrami (LB) operator, enabling the computation of the scale space of any central projection system. Scale space is essential for intrinsic scale selection and neighborhood description in features like SIFT. We perform experiments with synthetic and real images to validate the generalization of our approach to any central projection system. We compare our approach with the best existing methods, showing competitive results in all types of cameras: catadioptric, fisheye, and perspective.
SHIELD: FITGALAXY -- A Software Package for Automatic Aperture Photometry of Extended Sources
NASA Astrophysics Data System (ADS)
Marshall, Melissa
2013-01-01
Determining the parameters of extended sources, such as galaxies, is a common but time-consuming task. Finding a photometric aperture that encompasses the majority of the flux of a source and identifying and excluding contaminating objects is often done by hand - a lengthy and difficult-to-reproduce process. To make extracting information from large data sets both quick and repeatable, I have developed a program called FITGALAXY, written in IDL. This program uses minimal user input to automatically fit an aperture to, and perform aperture and surface photometry on, an extended source. FITGALAXY also automatically traces the outlines of surface brightness thresholds and creates surface brightness profiles, which can then be used to determine the radial properties of a source. Finally, the program performs automatic masking of contaminating sources. Masks and apertures can be applied to multiple images (regardless of the WCS solution or plate scale) in order to accurately measure the same source at different wavelengths. I present the fluxes, as measured by the program, of a selection of galaxies from the Local Volume Legacy Survey. I then compare these results with the fluxes given by Dale et al. (2009) in order to assess the accuracy of FITGALAXY.
NASA Astrophysics Data System (ADS)
Gerlitz, Lars; Gafurov, Abror; Apel, Heiko; Unger-Sayesteh, Katy; Vorogushyn, Sergiy; Merz, Bruno
2016-04-01
Statistical climate forecast applications typically utilize a small set of large-scale SST or climate indices, such as ENSO, PDO or AMO, as predictor variables. If the predictive skill of these large-scale modes is insufficient, specific predictor variables such as customized SST patterns are frequently included. Hence statistically based climate forecast models are either based on a fixed number of climate indices (and thus might not consider important predictor variables) or are highly site specific and barely transferable to other regions. With the aim of developing an operational seasonal forecast model which is easily transferable to any region in the world, we present a generic data mining approach which automatically selects potential predictors from gridded SST observations and reanalysis-derived large-scale atmospheric circulation patterns and generates robust statistical relationships with posterior precipitation anomalies for user-selected target regions. Potential predictor variables are derived by means of a cellwise correlation analysis of precipitation anomalies with gridded global climate variables under consideration of varying lead times. Significantly correlated grid cells are subsequently aggregated to predictor regions by means of a variability-based cluster analysis. Finally, for every month and lead time, an individual random forest based forecast model is automatically calibrated and evaluated by means of the previously generated predictor variables. The model is exemplarily applied and evaluated for selected headwater catchments in Central and South Asia. Particularly for winter and spring precipitation (which is associated with westerly disturbances in the entire target domain), the model shows solid results with correlation coefficients up to 0.7, although the variability of precipitation rates is highly underestimated. Likewise, a certain skill of the model could be detected for the monsoonal precipitation amounts in the South Asian target areas. The skill of the model for the dry summer season in Central Asia and the transition seasons over South Asia is found to be low. A sensitivity analysis by means of well-known climate indices reveals the major large-scale controlling mechanisms for the seasonal precipitation climate of each target area. For the Central Asian target areas, both the El Nino Southern Oscillation and the North Atlantic Oscillation are identified as important controlling factors for precipitation totals during the moist spring season. Drought conditions are found to be triggered by a warm ENSO phase in combination with a positive phase of the NAO. For the monsoonal summer precipitation amounts over South Asia, the model suggests a distinct negative response to El Nino events.
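The cellwise correlation screening can be sketched as follows, with synthetic arrays standing in for lagged SST fields and the catchment precipitation anomaly; the cluster aggregation and random forest stages are omitted.

```python
# Sketch of the cellwise screening step: correlate the target seasonal
# precipitation anomaly with every SST grid cell at a given lead time and
# keep significantly correlated cells as predictor candidates.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
years, nlat, nlon = 40, 30, 60
sst = rng.normal(size=(years, nlat, nlon))                  # lagged SST anomalies
precip = 0.8 * sst[:, 10, 20] + rng.normal(0, 0.5, years)   # target anomaly

mask = np.zeros((nlat, nlon), dtype=bool)
for i in range(nlat):
    for j in range(nlon):
        r, p = pearsonr(sst[:, i, j], precip)
        mask[i, j] = p < 0.05                               # significant cells
print("candidate predictor cells:", mask.sum())
```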
LANDSAT and radar mapping of intrusive rocks in SE-Brazil
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Dossantos, A. R.; Dosanjos, C. E.; Moreira, J. C.; Barbosa, M. P.; Veneziani, P.
1982-01-01
The feasibility of intrusive rock mapping was investigated and criteria for regional geological mapping were established at the scale of 1:500,000 in polycyclic and polymetamorphic areas, using the logical method of photointerpretation of LANDSAT imagery and radar from the RADAMBRASIL project. The spectral behavior of intrusive rocks was evaluated using the interactive multispectral image analysis system (Image-100). The region of Campos (city) in northern Rio de Janeiro State was selected as the study area, and digital imagery processing and pattern recognition techniques were applied. Various maps at the 1:250,000 scale were obtained to evaluate the results of automatic data processing.
A Scale-Independent Clustering Method with Automatic Variable Selection Based on Trees
2014-03-01
[Fragmentary abstract; only excerpts survived extraction:] ... veterans fought. They then clustered the data and were able to identify three distinct post-combat syndromes associated with different eras, granting some legitimacy to proposed medical conditions such as the Gulf War Syndrome (Jones et al., 2002, pp. 321-324). ... D. MEASURING DISTANCES BETWEEN ... chosen so as to minimize the sum of squared errors of the response across the two regions (Equation 2.1). The average y for the left and right child ...
MeSH Now: automatic MeSH indexing at PubMed scale via learning to rank.
Mao, Yuqing; Lu, Zhiyong
2017-04-17
MeSH indexing is the task of assigning relevant MeSH terms based on a manual reading of scholarly publications by human indexers. The task is highly important for improving literature retrieval and many other scientific investigations in biomedical research. Unfortunately, given its manual nature, the process of MeSH indexing is both time-consuming (new articles are typically not indexed until 2 or 3 months after publication) and costly (approximately ten dollars per article). In response, automatic indexing by computers has been previously proposed and attempted but remains challenging. In order to advance the state of the art in automatic MeSH indexing, a community-wide shared task called BioASQ was recently organized. We propose MeSH Now, an integrated approach that first uses multiple strategies to generate a combined list of candidate MeSH terms for a target article. Through a novel learning-to-rank framework, MeSH Now then ranks the list of candidate terms based on their relevance to the target article. Finally, MeSH Now selects the highest-ranked MeSH terms via a post-processing module. We assessed MeSH Now on two separate benchmarking datasets using traditional precision, recall and F1-score metrics. In both evaluations, MeSH Now consistently achieved over 0.60 in F1-score, ranging from 0.610 to 0.612. Furthermore, additional experiments show that MeSH Now can be optimized by parallel computing in order to process MEDLINE documents on a large scale. We conclude that MeSH Now is a robust approach with state-of-the-art performance for automatic MeSH indexing, capable of processing PubMed-scale document collections within a reasonable time frame. http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/Demo/MeSHNow/
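To make the ranking step concrete, here is a minimal pointwise learning-to-rank sketch in Python: each candidate term is scored by a model trained on features of term-article pairs, and the top-ranked terms are kept in post-processing. The features, labels and top-10 cutoff are stand-ins; MeSH Now's actual learning-to-rank framework and feature set are more elaborate.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.random((500, 4))        # hypothetical candidate-term feature vectors
    y = (X @ np.array([2.0, 1.0, 0.0, -1.0])
         + rng.normal(0, 0.3, 500) > 1.2).astype(int)   # 1 = indexer kept the term

    ranker = LogisticRegression().fit(X, y)  # trained on already-indexed articles

    candidates = rng.random((25, 4))         # candidate terms for one new article
    relevance = ranker.predict_proba(candidates)[:, 1]
    top_terms = np.argsort(relevance)[::-1][:10]   # post-processing: keep top 10
    print(top_terms)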
Teijeiro, E J; Macías, R J; Morales, J M; Guerra, E; López, G; Alvarez, L M; Fernández, F; Maragoto, C; Seijo, F; Alvarez, E
The Neurosurgical Deep Recording System (NDRS), using a personal computer, takes the place of complex electronic equipment for recording and processing deep cerebral electrical activity as a guide in stereotaxic functional neurosurgery. It also increases the possibilities for presenting information in direct graphic form, with automatic management and sufficient flexibility to implement different analyses. This paper describes the possibilities of automatic simultaneous graphic representation in three almost orthogonal planes, available with the new 5.1 version of NDRS, so as to facilitate the analysis of anatomophysiological correlation in the localization of deep structures of the brain during minimal access surgery. This new version can automatically show the spatial behaviour of signals recorded along the path of the electrode inside the brain, superimposed simultaneously on sagittal, coronal and axial sections of an anatomical atlas of the brain, after automatically adjusting the scale to the dimensions of the brain of each individual patient. This may also be shown in a three-dimensional representation of the different intersecting planes. The NDRS system has been successfully used in Spain and Cuba in over 300 functional neurosurgery operations. The new version further facilitates the analysis of spatial anatomophysiological correlation for the localization of brain structures. This system has contributed to increasing the precision and safety of surgical target selection in the control of Parkinson's disease and other movement disorders.
Grohar: Automated Visualization of Genome-Scale Metabolic Models and Their Pathways.
Moškon, Miha; Zimic, Nikolaj; Mraz, Miha
2018-05-01
Genome-scale metabolic models (GEMs) have become a powerful tool for the investigation of the entire metabolism of the organism in silico. These models are, however, often extremely hard to reconstruct and also difficult to apply to the selected problem. Visualization of the GEM allows us to easier comprehend the model, to perform its graphical analysis, to find and correct the faulty relations, to identify the parts of the system with a designated function, etc. Even though several approaches for the automatic visualization of GEMs have been proposed, metabolic maps are still manually drawn or at least require large amount of manual curation. We present Grohar, a computational tool for automatic identification and visualization of GEM (sub)networks and their metabolic fluxes. These (sub)networks can be specified directly by listing the metabolites of interest or indirectly by providing reference metabolic pathways from different sources, such as KEGG, SBML, or Matlab file. These pathways are identified within the GEM using three different pathway alignment algorithms. Grohar also supports the visualization of the model adjustments (e.g., activation or inhibition of metabolic reactions) after perturbations are induced.
10 CFR 429.45 - Automatic commercial ice makers.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 3 2012-01-01 2012-01-01 false Automatic commercial ice makers. 429.45 Section 429.45... PRODUCTS AND COMMERCIAL AND INDUSTRIAL EQUIPMENT Certification § 429.45 Automatic commercial ice makers. (a... automatic commercial ice makers; and (2) For each basic model of automatic commercial ice maker selected for...
10 CFR 429.45 - Automatic commercial ice makers.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 3 2014-01-01 2014-01-01 false Automatic commercial ice makers. 429.45 Section 429.45... PRODUCTS AND COMMERCIAL AND INDUSTRIAL EQUIPMENT Certification § 429.45 Automatic commercial ice makers. (a... automatic commercial ice makers; and (2) For each basic model of automatic commercial ice maker selected for...
10 CFR 429.45 - Automatic commercial ice makers.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 3 2013-01-01 2013-01-01 false Automatic commercial ice makers. 429.45 Section 429.45... PRODUCTS AND COMMERCIAL AND INDUSTRIAL EQUIPMENT Certification § 429.45 Automatic commercial ice makers. (a... automatic commercial ice makers; and (2) For each basic model of automatic commercial ice maker selected for...
Linear segmentation algorithm for detecting layer boundary with lidar.
Mao, Feiyue; Gong, Wei; Logan, Timothy
2013-11-04
The automatic detection of aerosol- and cloud-layer boundaries (base and top) is important in atmospheric lidar data processing, because the boundary information is not only useful for environmental and climate studies, but can also be used as input for further data processing. Previous methods have shown limitations in defining the base and top and in setting the window size, and have neglected in-layer attenuation. To overcome these limitations, we present a new layer detection scheme for up-looking lidars based on linear segmentation with a reasonable threshold setting, boundary selection, and false-positive removal strategies. Preliminary results from both real and simulated data show that this algorithm not only detects the layer base as accurately as the simple multi-scale method, but also detects the layer top more accurately. Our algorithm can be applied directly to uncalibrated data without requiring additional measurements or window size selection.
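The gist of boundary detection by linear segmentation can be sketched as follows: fit short line segments along the profile and flag where the fitted slope crosses a threshold. This is a simplified stand-in for the paper's scheme (no false-positive removal or in-layer attenuation handling); the synthetic profile, segment length and threshold are illustrative.

    import numpy as np

    def segment_slopes(signal, seg_len=10):
        """Least-squares slope of each consecutive seg_len-sample piece."""
        x = np.arange(seg_len)
        n_seg = len(signal) // seg_len
        return np.array([np.polyfit(x, signal[i*seg_len:(i+1)*seg_len], 1)[0]
                         for i in range(n_seg)])

    # Synthetic range-corrected profile: molecular background plus one layer.
    z = np.linspace(0.0, 10.0, 300)                        # altitude in km
    prof = np.exp(-z)
    prof += np.where((z > 3) & (z < 5), 0.5*np.sin(np.pi*(z - 3)/2), 0.0)

    slopes = segment_slopes(prof)
    thr = 0.01                                             # slope threshold
    base = int(np.argmax(slopes > thr))       # first strongly rising segment
    falling = np.where(slopes < -thr)[0]
    falling = falling[falling > base]         # decreasing flank above the base
    top = int(falling[-1]) if falling.size else base
    print("layer base ~ %.2f km, layer top ~ %.2f km" % (z[base*10], z[top*10]))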
NASA Astrophysics Data System (ADS)
Enell, Carl-Fredrik; Kozlovsky, Alexander; Turunen, Tauno; Ulich, Thomas; Välitalo, Sirkku; Scotto, Carlo; Pezzopane, Michael
2016-03-01
This paper presents a comparison between standard ionospheric parameters manually and automatically scaled from ionograms recorded at the high-latitude Sodankylä Geophysical Observatory (SGO, ionosonde SO166, 64.1° geomagnetic latitude), located in the vicinity of the auroral oval. The study is based on 2610 ionograms recorded during the period June-December 2013. The automatic scaling was made by means of the Autoscala software. A few typical examples are shown to outline the method, and statistics are presented regarding the differences between manually and automatically scaled values of F2, F1, E and sporadic E (Es) layer parameters. We draw the conclusions that: 1. The F2 parameters scaled by Autoscala, foF2 and M(3000)F2, are reliable. 2. F1 is identified by Autoscala in significantly fewer cases (about 50 %) than in the manual routine, but if identified the values of foF1 are reliable. 3. Autoscala frequently (30 % of the cases) detects an E layer when the manual scaling process does not. When identified by both methods, the Autoscala E-layer parameters are close to those manually scaled, foE agreeing to within 0.4 MHz. 4. Es and parameters of Es identified by Autoscala are in many cases different from those of the manual scaling. Scaling of Es at auroral latitudes is often a difficult task.
Shaping Attention with Reward: Effects of Reward on Space- and Object-Based Selection
Shomstein, Sarah; Johnson, Jacoba
2014-01-01
The contribution of rewarded actions to automatic attentional selection remains obscure. We hypothesized that some forms of automatic orienting, such as object-based selection, can be completely abandoned in lieu of reward maximizing strategy. While presenting identical visual stimuli to the observer, in a set of two experiments, we manipulate what is being rewarded (different object targets or random object locations) and the type of reward received (money or points). It was observed that reward alone guides attentional selection, entirely predicting behavior. These results suggest that guidance of selective attention, while automatic, is flexible and can be adjusted in accordance with external non-sensory reward-based factors. PMID:24121412
NMR reaction monitoring in flow synthesis
Gomez, M Victoria; de la Hoz, Antonio
2017-01-01
Recent advances in the use of flow chemistry with in-line and on-line analysis by NMR are presented. The use of macro- and microreactors, coupled with standard and custom made NMR probes involving microcoils, incorporated into high resolution and benchtop NMR instruments is reviewed. Some recent selected applications have been collected, including synthetic applications, the determination of the kinetic and thermodynamic parameters and reaction optimization, even in single experiments and on the μL scale. Finally, software that allows automatic reaction monitoring and optimization is discussed. PMID:28326137
X-15 Research Results with a Selected Bibliography
1965-01-01
[Fragmentary bibliography excerpt; only fragments survived extraction:] ... temperatures and pressures. A suit that met these requirements was developed by the David C. Clark Co., which had created a means of giving the wearer high ... [citation fragment:] Fetterman, D. E. Jr., and Saltzman, E. J., "Comparison of Full-Scale Lift and Drag Characteristics of the X-15 Airplane with Wind-Tunnel ..." ... [index fragment:] Elasticity of air, 49; Data analysis of flights, 45; Electronics; Data-reduction, flight data, 35; Automatic damping, 75; David C. Clark Co., pressure suit ...
SAMuS: Service-Oriented Architecture for Multisensor Surveillance in Smart Homes
Van de Walle, Rik
2014-01-01
The design of a service-oriented architecture for multisensor surveillance in smart homes is presented as an integrated solution enabling automatic deployment, dynamic selection, and composition of sensors. Sensors are implemented as Web-connected devices with a uniform Web API. RESTdesc is used to describe the sensors, and a novel solution is presented to automatically compose Web APIs that can be applied with existing Semantic Web reasoners. We evaluated the solution by building a smart Kinect sensor that is able to dynamically switch between IR and RGB and that optimizes person detection by incorporating feedback from pressure sensors, thus demonstrating collaboration among sensors to enhance the detection of complex events. The performance results show that the platform scales to many Web APIs, as composition time remains limited to a few hundred milliseconds in almost all cases. PMID:24778579
NASA Astrophysics Data System (ADS)
Wang, Zhihua; Yang, Xiaomei; Lu, Chen; Yang, Fengshuo
2018-07-01
Automatic updating of land use/cover change (LUCC) databases using high spatial resolution images (HSRI) is important for environmental monitoring and policy making, especially for coastal areas, which connect the land and the ocean and tend to change frequently. Many object-based change detection methods have been proposed, especially methods combining historical LUCC data with HSRI. However, the scale parameter(s) used to segment the serial temporal images, which directly determine the average object size, are hard to choose without expert intervention. The samples transferred from historical LUCC data also need expert intervention to avoid insufficient or wrong samples. To choose the scale parameter(s), a Scale Self-Adapting Segmentation (SSAS) approach, based on exponential sampling of the scale parameter and location of the local maximum of a weighted local variance, is proposed to solve the scale selection problem when segmenting images constrained by LUCC data for change detection. To transfer samples, Knowledge Transfer (KT) is also proposed, in which a classifier trained on historical images with LUCC data is applied to the classification of the updated images. Comparison experiments were conducted in a coastal area of Zhujiang, China, using SPOT 5 images acquired in 2005 and 2010. The results reveal that (1) SSAS can segment images effectively without expert intervention, and (2) KT can also reach the maximum sample-transfer accuracy without expert intervention. The strategy SSAS + KT is a good choice if the historical image matches the LUCC data and the historical and updated images are obtained from the same source.
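A minimal sketch of the scale-selection idea, assuming a stand-in image: candidate scale parameters are sampled exponentially, a local-variance measure is evaluated at each scale, and the turning point of the resulting curve marks the selected scale. The weighting scheme and the actual segmentation step of SSAS are simplified away here, and the turn-over criterion is a common proxy for the local-maximum criterion named in the abstract.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def mean_local_variance(img, size):
        """Average variance inside size x size windows."""
        m = uniform_filter(img, size)
        m2 = uniform_filter(img * img, size)
        return float(np.mean(m2 - m * m))

    rng = np.random.default_rng(1)
    # Stand-in image: 16-pixel homogeneous patches plus a little noise.
    img = np.kron(rng.random((8, 8)), np.ones((16, 16)))
    img += 0.05 * rng.standard_normal(img.shape)

    scales = 2 ** np.arange(1, 7)              # exponential sampling: 2..64
    lv = np.array([mean_local_variance(img, int(s)) for s in scales])

    # The scale where the variance curve turns over (its gain peaks) is taken
    # as the segmentation scale; here it lands near the 16-pixel patch size.
    best = scales[int(np.argmax(np.diff(lv))) + 1]
    print("selected scale parameter:", best)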
10 CFR 431.135 - Units to be tested.
Code of Federal Regulations, 2011 CFR
2011-01-01
... EQUIPMENT Automatic Commercial Ice Makers Test Procedures § 431.135 Units to be tested. For each basic model of automatic commercial ice maker selected for testing, a sample of sufficient size shall be selected...
An Approximate Approach to Automatic Kernel Selection.
Ding, Lizhong; Liao, Shizhong
2016-02-02
Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.
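The computational virtue the paper exploits can be seen in the one-level case: a circulant matrix is diagonalized by the discrete Fourier transform, so its eigenvalues are the FFT of its first column and linear solves cost O(n log n) instead of O(n^3). A minimal illustration (the multilevel case applies this blockwise); the matrix entries are arbitrary stand-ins:

    import numpy as np
    from scipy.linalg import circulant

    c = np.array([4.0, 1.0, 0.5, 1.0])      # first column (symmetric -> real eigs)
    C = circulant(c)

    eig_fft = np.fft.fft(c)                  # eigenvalues via FFT, O(n log n)
    eig_dense = np.linalg.eigvals(C)         # dense eigendecomposition, O(n^3)
    print(np.sort(eig_fft.real), np.sort(eig_dense.real))   # identical spectra

    # Solving C x = b through the FFT instead of a dense factorization:
    b = np.ones(4)
    x = np.fft.ifft(np.fft.fft(b) / eig_fft).real
    print(np.allclose(C @ x, b))             # True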
Lv, Peijie; Liu, Jie; Zhang, Rui; Jia, Yan
2015-01-01
Objective: To assess the lesion conspicuity and image quality in CT evaluation of small (≤ 3 cm) hepatocellular carcinomas (HCCs) using automatic tube voltage selection (ATVS) and automatic tube current modulation (ATCM) with or without iterative reconstruction. Materials and Methods: One hundred and five patients with 123 HCC lesions were included. Fifty-seven patients were scanned using both ATVS and ATCM, and images were reconstructed using either filtered back-projection (FBP) (group A1) or sinogram-affirmed iterative reconstruction (SAFIRE) (group A2). Forty-eight patients were imaged using only ATCM, with a fixed tube potential of 120 kVp and FBP reconstruction (group B). Quantitative parameters (image noise in Hounsfield units and contrast-to-noise ratio of the aorta, the liver, and the hepatic tumors) and qualitative visual parameters (image noise, overall image quality, and lesion conspicuity, graded on a 5-point scale) were compared among the groups. Results: Group A2, scanned with automatically chosen tube voltages of 80 kVp and 100 kVp, ranked best in lesion conspicuity and subjective and objective image quality (p values ranging from < 0.001 to 0.004) among the three groups, except for overall image quality between group A2 and group B (p = 0.022). Group A1 showed higher image noise (p = 0.005) but similar lesion conspicuity and overall image quality compared with group B. The radiation dose in group A was 19% lower than that in group B (p = 0.022). Conclusion: CT scanning with the combined use of ATVS and ATCM and image reconstruction with the SAFIRE algorithm provides higher lesion conspicuity and better image quality for evaluating small hepatic HCCs, with a reduced radiation dose. PMID:25995682
Very large scale characterization of graphene mechanical devices using a colorimetry technique.
Cartamil-Bueno, Santiago Jose; Centeno, Alba; Zurutuza, Amaia; Steeneken, Peter Gerard; van der Zant, Herre Sjoerd Jan; Houri, Samer
2017-06-08
We use a scalable optical technique to characterize more than 21,000 circular nanomechanical devices made of suspended single- and double-layer graphene on cavities with different diameters (D) and depths (g). To maximize the contrast between suspended and broken membranes we used a model for selecting the optimal color filter. The method enables parallel and automated image processing for yield statistics. We find the survival probability to be correlated with a structural mechanics scaling parameter given by D^4/g^3. Moreover, we extract a median adhesion energy of Γ = 0.9 J m^-2 between the membrane and the native SiO2 at the bottom of the cavities.
Automating the selection of standard parallels for conic map projections
NASA Astrophysics Data System (ADS)
Šavrič, Bojan; Jenny, Bernhard
2016-05-01
Conic map projections are appropriate for mapping regions at medium and large scales with east-west extents at intermediate latitudes. Conic projections are appropriate for these cases because they show the mapped area with less distortion than other projections. In order to minimize the distortion of the mapped area, the two standard parallels of conic projections need to be selected carefully. Rules of thumb exist for placing the standard parallels based on the width-to-height ratio of the map. These rules of thumb are simple to apply, but do not result in maps with minimum distortion. More sophisticated methods also exist that determine standard parallels such that distortion in the mapped area is minimized. These methods are computationally expensive and cannot be used for real-time web mapping and GIS applications where the projection is adjusted automatically to the displayed area. This article presents a polynomial model that quickly provides the standard parallels for the three most common conic map projections: the Albers equal-area, the Lambert conformal, and the equidistant conic projection. The model defines the standard parallels with polynomial expressions based on the spatial extent of the mapped area. The spatial extent is defined by the length of the mapped central meridian segment, the central latitude of the displayed area, and the width-to-height ratio of the map. The polynomial model was derived from 3825 maps, each with a different spatial extent and with computationally determined standard parallels that minimize the mean scale distortion index. The resulting model is computationally simple and can be used for the automatic selection of the standard parallels of conic map projections in GIS software and web mapping applications.
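As a concrete point of reference, the simplest of the rules of thumb mentioned above is the classic "one-sixth rule": place each standard parallel one-sixth of the latitudinal extent inside the southern and northern limits of the map. A minimal sketch of that heuristic (this is what the polynomial model is meant to improve upon, not the article's model itself):

    def standard_parallels_one_sixth(lat_south: float, lat_north: float):
        """Return (phi1, phi2) in degrees for a conic projection."""
        extent = lat_north - lat_south
        return lat_south + extent / 6.0, lat_north - extent / 6.0

    # Example: a map of the conterminous United States, roughly 24N..49N.
    print(standard_parallels_one_sixth(24.0, 49.0))   # ~ (28.2, 44.8)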
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bizyaev, D. V.; Kautsch, S. J.; Mosenkov, A. V.
We present a catalog of true edge-on disk galaxies automatically selected from the Seventh Data Release of the Sloan Digital Sky Survey (SDSS). A visual inspection of the g, r, and i images of about 15,000 galaxies allowed us to split the initial sample of edge-on galaxy candidates into 4768 (31.8% of the initial sample) genuine edge-on galaxies, 8350 (55.7%) non-edge-on galaxies, and 1865 (12.5%) edge-on galaxies not suitable for simple automatic analysis because these objects either show signs of interaction and warps, or have nearby bright stars projected onto them. We added more candidate galaxies from the RFGC, EFIGI, RC3, and Galaxy Zoo catalogs found in the SDSS footprint. Our final sample consists of 5747 genuine edge-on galaxies. We estimate the structural parameters of the stellar disks (the stellar disk thickness, radial scale length, and central surface brightness) in the galaxies by analyzing photometric profiles in each of the g, r, and i images. We also perform simplified three-dimensional modeling of the light distribution in the stellar disks of edge-on galaxies from our sample. Our large sample is intended to be used for studying scaling relations in stellar disks and bulges and for estimating parameters of the thick disks in different types of galaxies via image stacking. In this paper, we present the sample selection procedure and a general description of the sample.
2017-01-01
In this paper, we propose a new automatic hyperparameter selection approach for determining the optimal network configuration (network structure and hyperparameters) for deep neural networks using particle swarm optimization (PSO) in combination with a steepest gradient descent algorithm. In the proposed approach, network configurations are coded as real-number m-dimensional vectors, which serve as the individuals of the PSO algorithm in the search procedure. During the search procedure, the PSO algorithm is employed to search for optimal network configurations via particles moving in a finite search space, and the steepest gradient descent algorithm is used to train the DNN classifier for a few training epochs (to find a local optimal solution) during the population evaluation of PSO. After the optimization scheme, the steepest gradient descent algorithm is run with more epochs, and the final solutions (pbest and gbest) of the PSO algorithm are used to train a final ensemble model and individual DNN classifiers, respectively. The local search ability of the steepest gradient descent algorithm and the global search capability of the PSO algorithm are exploited to determine a solution that is close to the global optimum. Experiments on hand-written character and biological activity prediction datasets show that the DNN classifiers trained with the network configurations expressed by the final PSO solutions, whether used to construct an ensemble model or as individual classifiers, outperform the random approach in terms of generalization performance. Therefore, the proposed approach can be regarded as an alternative tool for automatic network structure and parameter selection for deep neural networks. PMID:29236718
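A minimal PSO sketch of the outer search loop described above, assuming a stand-in fitness function: in the paper, evaluating a particle means training the DNN for a few epochs with steepest gradient descent, whereas here a simple quadratic surrogate keeps the example self-contained. The inertia and acceleration constants, bounds and encoding are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    def fitness(cfg):
        """Stand-in for a short DNN training run; lower is better.
        cfg = (log10 learning rate, number of hidden units)."""
        log_lr, n_hidden = cfg
        return (log_lr + 3.0) ** 2 + ((n_hidden - 64.0) / 32.0) ** 2

    n_particles, n_iter = 12, 30
    lo, hi = np.array([-5.0, 8.0]), np.array([-1.0, 256.0])
    x = rng.uniform(lo, hi, (n_particles, 2))        # swarm of configurations
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([fitness(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()

    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, 2))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([fitness(p) for p in x])
        better = f < pbest_f                          # update personal bests
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)].copy()      # update global best

    print("best (log10 lr, hidden units):", gbest)    # converges near (-3, 64)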
NASA Astrophysics Data System (ADS)
Wang, J.; Feng, B.
2016-12-01
Impervious surface area (ISA) has long been studied as an important input into moisture flux models. In general, ISA impedes groundwater recharge, increases stormflow and flood frequency, and alters in-stream and riparian habitats. Urban areas are among the richest ISA environments, and urban ISA mapping assists flood prevention and urban planning. Hyperspectral imagery (HI), with its ability to detect subtle spectral signatures, is an ideal candidate for urban ISA mapping. Mapping ISA from HI involves endmember (EM) selection, and the high degree of spatial and spectral heterogeneity of the urban environment makes this task difficult: a compromise is needed between the degree of automation and the representativeness of the method. This study tested one manual and two semi-automatic EM selection strategies. The manual method and the first semi-automatic method have been widely used in EM selection. The second semi-automatic method is relatively new and has previously been proposed only for moderate-spatial-resolution satellite imagery. The manual method visually selected the EM candidates from eight landcover types in the original image. The first semi-automatic method chose the EM candidates using a threshold over the pixel purity index (PPI) map. The second semi-automatic method used the triangular shape of the HI scatter plot in the n-dimension visualizer to identify the V-I-S (vegetation-impervious surface-soil) EM candidates: the pixels located at the corners of the triangle. The initial EM candidates from the three methods were further refined by three indices (EM average RMSE, minimum average spectral angle, and count-based EM selection), generating three spectral libraries, which were used to classify the test image with the spectral angle mapper. Accuracy reports were generated for the classification results. The overall accuracies are 85% for the manual method, 81% for the PPI method, and 87% for the V-I-S method. The V-I-S EM selection method performs best in this study, which demonstrates its value not only for moderate-spatial-resolution satellite images but also for the increasingly accessible high-spatial-resolution airborne images. This semi-automatic EM selection method can be adopted for a wide range of remote sensing images and provide ISA maps for hydrological analysis.
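A minimal sketch of the spectral angle mapper (SAM) rule used in the classification step: each pixel is assigned to the endmember whose spectrum subtends the smallest angle with the pixel spectrum. The endmember spectra and pixel values below are hypothetical five-band stand-ins.

    import numpy as np

    def spectral_angle(pixel, endmember):
        """Angle (radians) between two spectra; smaller = more similar."""
        cos = np.dot(pixel, endmember) / (np.linalg.norm(pixel)
                                          * np.linalg.norm(endmember))
        return np.arccos(np.clip(cos, -1.0, 1.0))

    endmembers = {                        # hypothetical V-I-S library spectra
        "vegetation": np.array([0.05, 0.08, 0.06, 0.50, 0.45]),
        "impervious": np.array([0.20, 0.22, 0.24, 0.26, 0.28]),
        "soil":       np.array([0.10, 0.14, 0.18, 0.30, 0.35]),
    }
    pixel = np.array([0.18, 0.21, 0.23, 0.27, 0.29])
    label = min(endmembers, key=lambda k: spectral_angle(pixel, endmembers[k]))
    print(label)                          # -> "impervious"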
Mahmoud, Jihan S R; Staten, Ruth Topsy; Lennie, Terry A; Hall, Lynne A
2015-05-01
Understanding young adults' anxiety requires applying a multidimensional approach to assess the psychosocial, behavioral, and cognitive aspects of this phenomenon. A hypothesized model of the relationships among coping style, thinking style, life satisfaction, social support, selected demographics, and anxiety among college students was tested using path analysis. A total of 257 undergraduate students aged 18-24 years completed an online survey. The independent variables were measured using the Multidimensional Scale of Perceived Social Support, the Brief Students' Multidimensional Life Satisfaction Scale, the Brief COPE Inventory, the Positive Automatic Thoughts Questionnaire, and the Cognition Checklist-Anxiety. The outcome, anxiety, was measured using the Anxiety subscale of the 21-item Depression Anxiety and Stress Scale. Only negative thinking and maladaptive coping had a direct relationship with anxiety. Negative thinking was the strongest predictor of both maladaptive coping and anxiety. These findings suggest that helping undergraduates manage their anxiety by reducing their negative thinking is critical. Designing and testing interventions to decrease negative thinking in college students is recommended for future research. © 2015 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Wu, Jie; Besnehard, Quentin; Marchessoux, Cédric
2011-03-01
Clinical studies for the validation of new medical imaging devices require hundreds of images. An important step in creating and tuning the study protocol is the classification of images into "difficult" and "easy" cases, based on features such as the complexity of the background and the visibility of the disease (lesions). An automatic background classification tool for mammograms would therefore help with such clinical studies. The classification tool presented here is based on a multi-content analysis (MCA) framework first developed to recognize the image content of computer screenshots. With the implementation of new texture features and a defined breast density scale, the MCA framework is able to classify digital mammograms automatically with satisfactory accuracy. The BI-RADS (Breast Imaging Reporting and Data System) density scale, which standardizes mammography reporting terminology and assessment and recommendation categories, is used for grouping the mammograms. Selected features are input into a decision tree classification scheme in the MCA framework, the so-called "weak classifier" (any classifier with a global error rate below 50%). With the AdaBoost iteration algorithm, these "weak classifiers" are combined into a "strong classifier" (a classifier with a low global error rate) for classifying each category. The classification results for one "strong classifier" show good accuracy with high true positive rates. For the four categories the results are TP = 90.38%, TN = 67.88%, FP = 32.12%, and FN = 9.62%.
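The weak-to-strong combination step can be illustrated with a minimal AdaBoost sketch in Python, assuming synthetic stand-in features; scikit-learn's default weak learner is a depth-1 decision tree (a "stump"), i.e., a classifier that only needs to beat 50% error, matching the description above.

    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier

    rng = np.random.default_rng(0)
    X = rng.standard_normal((400, 5))              # stand-ins for texture features
    y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # one density category vs. rest

    # Each boosting round fits a weak learner on reweighted data; the ensemble,
    # a weighted vote over all rounds, is the "strong classifier".
    strong = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X, y)
    print("training accuracy:", strong.score(X, y))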
Wang, Yang; Wang, Xiaohua; Liu, Fangnan; Jiang, Xiaoning; Xiao, Yun; Dong, Xuehan; Kong, Xianglei; Yang, Xuemei; Tian, Donghua; Qu, Zhiyong
2016-01-01
Few studies have looked at the relationship between psychological factors and the mental health status of pregnant women in rural China. The current study aims to explore the potential mediating effect of negative automatic thoughts between negative life events and antenatal depression. Data were collected in June 2012 and October 2012, and 495 rural pregnant women were interviewed. Depressive symptoms were measured by the Edinburgh Postnatal Depression Scale, stresses of pregnancy by the Pregnancy Pressure Scale, negative automatic thoughts by the Automatic Thoughts Questionnaire, and negative life events by the Life Events Scale for Pregnant Women. We used logistic regression and path analysis to test the mediating effect. The prevalence of antenatal depression was 13.7%. In the logistic regression, the only socio-demographic or health behavior factor significantly related to antenatal depression was sleep quality. Negative life events were not associated with depression in the fully adjusted model. Path analysis showed that the eventual direct and general effects of negative automatic thoughts were 0.39 and 0.51, respectively, larger than the effects of negative life events. This study suggests a potentially significant mediating effect of negative automatic thoughts: pregnant women with lower negative automatic thoughts scores were more likely to suffer less from the negative life events that may lead to antenatal depression.
Yücel, Basak; Kora, Kaan; Ozyalçín, Süleyman; Alçalar, Nilüfer; Ozdemir, Ozay; Yücel, Aysen
2002-03-01
The role of psychological factors related to headache has long been a focus of investigation. The aim of this study was to evaluate depression, automatic thoughts, alexithymia, and assertiveness in persons with tension-type headache and to compare the results with those from healthy controls. One hundred five subjects with tension-type headache (according to the criteria of the International Headache Society classification) and 70 controls were studied. The Beck Depression Inventory, Automatic Thoughts Scale, Toronto Alexithymia Scale, and Rathus Assertiveness Schedule were administered to both groups. Sociodemographic variables and headache features were evaluated via a semistructured scale. Compared with healthy controls, the subjects with headache had significantly higher scores on measures of depression, automatic thoughts, and alexithymia and lower scores on assertiveness. Subjects with chronic tension-type headache had higher depression and automatic thoughts scores than those with episodic tension-type headache. These findings suggested that persons with tension-type headache have high depression scores and also may have difficulty with expression of their emotions. Headache frequency appears to influence the likelihood of coexisting depression.
Agetsuma, Naoki; Koda, Ryosuke; Tsujino, Riyou; Agetsuma-Yanagihara, Yoshimi
2015-02-01
Population densities of wildlife species tend to be correlated with the resource productivity of their habitats. However, wildlife density has been greatly modified by increasing human influences. For effective conservation, we must first identify the significant factors that affect wildlife density, and then determine the extent of the areas in which those factors should be managed. Here, we propose a protocol that accomplishes these two tasks. The main threats to wildlife are thought to be habitat alteration and hunting, with increases in alien carnivores a recently arisen concern. We examined the effect of these anthropogenic disturbances, as well as natural factors, on the local density of Yakushima macaques (Macaca fuscata yakui). We surveyed macaque densities at 30 sites across their habitat using data from 403 automatic cameras. We quantified the effect of natural vegetation (broad-leaved forest, mixed coniferous/broad-leaved forest, etc.), altered vegetation (forestry area and agricultural land), hunting pressure, and the density of feral domestic dogs (Canis familiaris). The effect of each vegetation type was analyzed at numerous spatial scales (between 150- and 3,600-m radii from the camera locations) to determine the best scale for explaining macaque density (the effective spatial scale). A model-selection procedure (generalized linear mixed model) was used to detect significant factors affecting macaque density. We found the most effective spatial scale to be 400 m in radius, a scale corresponding to the group range size of the macaques. At this scale, the amount of broad-leaved forest was selected as a positive factor, whereas mixed forest and forestry area were selected as negative factors for macaque density. This study demonstrates the importance of simultaneously evaluating all possible factors of wildlife population density at the appropriate spatial scale. © 2014 Wiley Periodicals, Inc.
Generation algorithm of craniofacial structure contour in cephalometric images
NASA Astrophysics Data System (ADS)
Mondal, Tanmoy; Jain, Ashish; Sardana, H. K.
2010-02-01
Anatomical structure tracing on cephalograms is a significant step in cephalometric analysis. Computerized cephalometric analysis involves both manual and automatic approaches, and the manual approach is limited in accuracy and repeatability. In this paper we develop and test a novel method for automatic localization of craniofacial structures based on edges detected in the region of interest. Based on the grey-scale features of the different regions of cephalometric images, an algorithm for obtaining tissue contours is put forward. Using edge detection with a specific threshold, an improved bidirectional contour-tracing approach is proposed: after interactive selection of the starting edge pixels, the tracking process repeatedly searches for an edge pixel in the neighborhood of the previously found edge pixel to segment the image, and the craniofacial structures are then obtained. The effectiveness of the algorithm is demonstrated by preliminary experimental results obtained with the proposed method.
Global quasi-linearization (GQL) versus QSSA for a hydrogen-air auto-ignition problem.
Yu, Chunkan; Bykov, Viatcheslav; Maas, Ulrich
2018-04-25
A recently developed automatic reduction method for systems of chemical kinetics, the so-called Global Quasi-Linearization (GQL) method, has been implemented to study and reduce the dimension of a homogeneous combustion system. The results of applying the GQL and the Quasi-Steady State Assumption (QSSA) are compared. A number of drawbacks of the QSSA are discussed, e.g. the criteria for selecting QSS species and their sensitivity to system parameters, initial conditions, etc. To overcome these drawbacks, the GQL approach has been developed as a robust, automatic and scaling-invariant method for a global analysis of the system's timescale hierarchy and subsequent model reduction. In this work the auto-ignition problem of the hydrogen-air system is considered over a wide range of system parameters and initial conditions. The potential of the suggested approach to overcome most of the drawbacks of the standard approaches is illustrated.
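For readers unfamiliar with the QSSA being compared against, a minimal worked example on the toy mechanism A -> I -> P (rate constants k1, k2, with I a fast intermediate): setting d[I]/dt = k1[A] - k2[I] ≈ 0 gives [I] = (k1/k2)[A] and the reduced rate law d[P]/dt = k1[A]. The sketch below verifies this with sympy. Note the reduction is only valid when k2 >> k1, which is exactly the kind of parameter sensitivity the GQL method is designed to avoid.

    import sympy as sp

    k1, k2, A, I = sp.symbols("k1 k2 A I", positive=True)
    # Quasi-steady state for the intermediate: d[I]/dt = k1*A - k2*I = 0
    I_qss = sp.solve(sp.Eq(k1*A - k2*I, 0), I)[0]
    print(I_qss)                       # k1*A/k2
    dPdt = sp.simplify(k2 * I_qss)
    print(dPdt)                        # k1*A: the reduced one-step rate law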
Burgmans, Mark Christiaan; den Harder, J Michiel; Meershoek, Philippa; van den Berg, Nynke S; Chan, Shaun Xavier Ju Min; van Leeuwen, Fijs W B; van Erkel, Arian R
2017-06-01
To determine the accuracy of automatic and manual co-registration methods for image fusion of three-dimensional computed tomography (CT) with real-time ultrasonography (US) for image-guided liver interventions. CT images of a skills phantom with liver lesions were acquired and co-registered to US using GE Logiq E9 navigation software. Manual co-registration was compared to automatic and semiautomatic co-registration using an active tracker. Also, manual point registration was compared to plane registration with and without an additional translation point. Finally, comparison was made between manual and automatic selection of reference points. In each experiment, accuracy of the co-registration method was determined by measurement of the residual displacement in phantom lesions by two independent observers. Mean displacements for a superficial and deep liver lesion were comparable after manual and semiautomatic co-registration: 2.4 and 2.0 mm versus 2.0 and 2.5 mm, respectively. Both methods were significantly better than automatic co-registration: 5.9 and 5.2 mm residual displacement (p < 0.001; p < 0.01). The accuracy of manual point registration was higher than that of plane registration, the latter being heavily dependent on accurate matching of axial CT and US images by the operator. Automatic reference point selection resulted in significantly lower registration accuracy compared to manual point selection despite lower root-mean-square deviation (RMSD) values. The accuracy of manual and semiautomatic co-registration is better than that of automatic co-registration. For manual co-registration using a plane, choosing the correct plane orientation is an essential first step in the registration process. Automatic reference point selection based on RMSD values is error-prone.
Automatic Condensation of Electronic Publications by Sentence Selection.
ERIC Educational Resources Information Center
Brandow, Ronald; And Others
1995-01-01
Describes a system that performs automatic summaries of news from a large commercial news service encompassing 41 different publications. This system was compared to a system that used only the lead sentences of the texts. Lead-based summaries significantly outperformed the sentence-selection summaries. (AEF)
Ungvari, Gabor S; Goggins, William; Leung, Siu-Kau; Lee, Edwin; Gerevich, Jozsef
2009-02-01
No reports have yet been published on catatonia using latent class analysis (LCA). This study applied LCA to a large, diagnostically homogeneous sample of patients with chronic schizophrenia who also presented with catatonic symptoms. A random sample of 225 Chinese inpatients with DSM-IV schizophrenia was selected from the long-stay wards of a psychiatric hospital. Their psychopathology, extrapyramidal motor status and level of functioning were evaluated with standardized rating scales. Catatonia was rated using a modified version of the Bush-Francis Catatonia Rating Scale. LCA was then applied to the 178 patients who presented with at least one catatonic sign. In the LCA, a four-class solution was found to provide the best statistical fit. Classes 1, 2, 3 and 4 constituted 18%, 39.4%, 20.1% and 22.5% of the whole catatonic sample, respectively. Class 1 included patients with symptoms of 'automatic' phenomena (automatic obedience, Mitgehen, waxy flexibility). Class 2 comprised patients with 'repetitive/echo' phenomena (perseveration, stereotypy, verbigeration, mannerisms and grimacing). Class 3 contained patients with symptoms of 'withdrawal' (immobility, mutism, posturing, staring and withdrawal). Class 4 consisted of 'agitated/resistive' patients, who displayed symptoms of excitement, impulsivity, negativism and combativeness. The symptom composition of these 4 classes was nearly identical to that of the four factors identified by factor analysis in the same cohort of subjects in an earlier study. In multivariate regression analysis, the 'withdrawn' class was associated with higher scores on the Scale for the Assessment of Negative Symptoms and with lower and higher scores for negative and positive items, respectively, on the Nurses' Observation Scale for Inpatient Evaluation (NOSIE). The 'automatic' class was associated with lower values on the Simpson-Angus Extrapyramidal Side Effects Scale, and the 'repetitive/echo' class with higher scores on the NOSIE positive items. These results provide preliminary support for the notion that chronic schizophrenia patients with catatonic features can be classified into 4 distinct syndromal groups on the basis of their motor symptoms. Identifying distinct catatonic syndromes would help to find their biological substrates and to develop specific therapeutic measures.
Chen, Yibing; Ogata, Taiki; Ueyama, Tsuyoshi; Takada, Toshiyuki; Ota, Jun
2018-05-22
Machine vision is playing an increasingly important role in industrial applications, and the automated design of image recognition systems has been a subject of intense research. This study proposes a system for automatically designing the field-of-view (FOV) of a camera, the illumination strength and the parameters of a recognition algorithm. We formulated the design problem as an optimisation problem and used an experiment based on a hierarchical algorithm to solve it. Evaluation experiments using translucent plastic objects showed that the proposed system produced an effective solution with a wide FOV, recognition of all objects, and maximal positional and angular errors of 0.32 mm and 0.4° when all of the RGB (red, green and blue) channels were used for illumination and the R channel image was used for recognition. Although all-RGB illumination with a grey-scale image also yielded recognition of all objects, only a narrow FOV was selected. Moreover, full recognition was not achieved using only G illumination and a grey-scale image. The results show that the proposed method can automatically design the FOV, illumination and recognition algorithm parameters, and that tuning all RGB illumination channels is desirable even when single-channel or grey-scale images are used for recognition. PMID:29786665
Track-based event recognition in a realistic crowded environment
NASA Astrophysics Data System (ADS)
van Huis, Jasper R.; Bouma, Henri; Baan, Jan; Burghouts, Gertjan J.; Eendebak, Pieter T.; den Hollander, Richard J. M.; Dijk, Judith; van Rest, Jeroen H.
2014-10-01
Automatic detection of abnormal behavior in CCTV cameras is important for improving security in crowded environments, such as shopping malls, airports and railway stations. This behavior can be characterized at different time scales, e.g., by small-scale subtle and obvious actions or by large-scale walking patterns and interactions between people. For example, pickpocketing can be recognized by the actual snatch (small scale), by the perpetrator following the victim, or by interactions with an accomplice before and after the incident (longer time scale). This paper focuses on event recognition by detecting large-scale track-based patterns. Our event recognition method consists of several steps: pedestrian detection, object tracking, track-based feature computation and rule-based event classification. In the experiment, we focused on single-track actions (walk, run, loiter, stop, turn) and track interactions (pass, meet, merge, split). The experiment includes a controlled setup in which 10 actors perform these actions. The method is also applied to all tracks generated in a crowded shopping mall in a selected time frame. The results show that most of the actions can be detected reliably (on average 90%) at a low false positive rate (1.1%), and that the interactions obtain lower detection rates (70% at 0.3% FP). This method may become one of the components that assist operators in finding threatening behavior and enrich the selection of videos to be observed.
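A minimal sketch of the rule-based classification step for single-track actions, assuming tracks given as position sequences in metres; the speed thresholds and frame rate are illustrative, not those used in the paper.

    import numpy as np

    def classify_track(xy, fps=10.0, run_thr=2.5, walk_thr=0.3):
        """xy: (n, 2) positions in metres. Returns 'run', 'walk' or 'stop'."""
        speeds = np.linalg.norm(np.diff(xy, axis=0), axis=1) * fps  # m/s
        mean_speed = speeds.mean()
        if mean_speed > run_thr:
            return "run"
        if mean_speed > walk_thr:
            return "walk"
        return "stop"

    t = np.linspace(0, 5, 51)[:, None]                  # 5 s track at 10 Hz
    print(classify_track(np.hstack([1.4 * t, 0 * t])))  # ~1.4 m/s -> "walk"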
Swiderska, Zaneta; Korzynska, Anna; Markiewicz, Tomasz; Lorent, Malgorzata; Zak, Jakub; Wesolowska, Anna; Roszkowiak, Lukasz; Slodkowska, Janina; Grala, Bartlomiej
2015-01-01
Background. This paper presents a study concerning hot-spot selection in the assessment of whole-slide images of tissue sections collected from meningioma patients. The samples were immunohistochemically stained to determine the Ki-67/MIB-1 proliferation index, which is used for prognosis and treatment planning. Objective. Observer performance was examined by comparing the results of the proposed method of automatic hot-spot selection in whole-slide images, the results of traditional scoring under a microscope, and the results of a pathologist's manual hot-spot selection. Methods. The results of scoring the Ki-67 index using optical scoring under a microscope, software for Ki-67 index quantification based on hot spots selected by two pathologists (respectively, once and three times), and the same software applied to hot spots selected by the proposed automatic methods were compared using Kendall's tau-b statistics. Results. The results show intra- and interobserver agreement. The agreement between Ki-67 scoring with manual and automatic hot-spot selection is high, while the agreement between Ki-67 index scoring in whole-slide images and traditional microscopic examination is lower. Conclusions. The agreement observed for the three scoring methods shows that automating area selection is an effective tool for supporting physicians and for increasing the reliability of Ki-67 scoring in meningioma. PMID:26240787
Lim, Jiyeon; Park, Eun-Ah; Lee, Whal; Shim, Hackjoon; Chung, Jin Wook
2015-06-01
To assess the image quality and radiation exposure of 320-row area detector computed tomography (320-ADCT) coronary angiography with optimal tube voltage selection with the guidance of an automatic exposure control system in comparison with a body mass index (BMI)-adapted protocol. Twenty-two patients (study group) underwent 320-ADCT coronary angiography using an automatic exposure control system with the target standard deviation value of 33 as the image quality index and the lowest possible tube voltage. For comparison, a sex- and BMI-matched group (control group, n = 22) using a BMI-adapted protocol was established. Images of both groups were reconstructed by an iterative reconstruction algorithm. For objective evaluation of the image quality, image noise, vessel density, signal to noise ratio (SNR), and contrast to noise ratio (CNR) were measured. Two blinded readers then subjectively graded the image quality using a four-point scale (1: nondiagnostic to 4: excellent). Radiation exposure was also measured. Although the study group tended to show higher image noise (14.1 ± 3.6 vs. 9.3 ± 2.2 HU, P = 0.111) and higher vessel density (665.5 ± 161 vs. 498 ± 143 HU, P = 0.430) than the control group, the differences were not significant. There was no significant difference between the two groups for SNR (52.5 ± 19.2 vs. 60.6 ± 21.8, P = 0.729), CNR (57.0 ± 19.8 vs. 67.8 ± 23.3, P = 0.531), or subjective image quality scores (3.47 ± 0.55 vs. 3.59 ± 0.56, P = 0.960). However, radiation exposure was significantly reduced by 42 % in the study group (1.9 ± 0.8 vs. 3.6 ± 0.4 mSv, P = 0.003). Optimal tube voltage selection with the guidance of an automatic exposure control system in 320-ADCT coronary angiography allows substantial radiation reduction without significant impairment of image quality, compared to the results obtained using a BMI-based protocol.
Automatic learning-based beam angle selection for thoracic IMRT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amit, Guy; Marshall, Andrea; Purdie, Thomas G., E-mail: tom.purdie@rmp.uhn.ca
Purpose: The treatment of thoracic cancer using external beam radiation requires an optimal selection of the radiation beam directions to ensure effective coverage of the target volume and to avoid unnecessary treatment of normal healthy tissues. Intensity modulated radiation therapy (IMRT) planning is a lengthy process, which requires the planner to iterate between choosing beam angles, specifying dose-volume objectives and executing IMRT optimization. In thorax treatment planning, where there are no class solutions for beam placement, beam angle selection is performed manually, based on the planner's clinical experience. The purpose of this work is to propose and study a computationally efficient framework that utilizes machine learning to automatically select treatment beam angles. Such a framework may be helpful for reducing the overall planning workload. Methods: The authors introduce an automated beam selection method based on learning the relationships between beam angles and anatomical features. Using a large set of clinically approved IMRT plans, a random forest regression algorithm is trained to map a multitude of anatomical features into an individual beam score. An optimization scheme is then built to select and adjust the beam angles, considering the learned interbeam dependencies. The validity and quality of the automatically selected beams were evaluated using the manually selected beams from the corresponding clinical plans as the ground truth. Results: The analysis included 149 clinically approved thoracic IMRT plans. For a randomly selected test subset of 27 plans, IMRT plans were generated using automatically selected beams and compared to the clinical plans. The comparison of the predicted and the clinical beam angles demonstrated a good average correspondence between the two (angular distance 16.8° ± 10°, correlation 0.75 ± 0.2). The dose distributions of the semiautomatic and clinical plans were equivalent in terms of primary target volume coverage and organ-at-risk sparing, and were superior to plans produced with fixed sets of common beam angles. The great majority of the automatic plans (93%) were approved as clinically acceptable by three radiation therapy specialists. Conclusions: The results demonstrated the feasibility of utilizing a learning-based approach for automatic selection of beam angles in thoracic IMRT planning. The proposed method may assist in reducing the manual planning workload while sustaining plan quality.
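A minimal sketch of the learning step, assuming synthetic stand-ins for the anatomical features and beam scores; the real feature set, score definition and the interbeam optimization scheme are specific to the paper.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    n_beams, n_feat = 720, 12          # candidate angles x anatomical features
    X = rng.standard_normal((n_beams, n_feat))
    score = X[:, 0] - 0.5 * X[:, 3] + 0.1 * rng.standard_normal(n_beams)

    # Learn the mapping from anatomical features to an individual beam score.
    model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, score)

    # Rank candidate beam angles for a new case by predicted score
    # (e.g., keep the nine best for a nine-beam plan).
    X_new = rng.standard_normal((360, n_feat))     # one candidate per degree
    best_angles = np.argsort(model.predict(X_new))[::-1][:9]
    print("selected beam angles (deg):", np.sort(best_angles))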
NASA Astrophysics Data System (ADS)
Wang, Jinhu; Lindenbergh, Roderik; Menenti, Massimo
2017-06-01
Urban road environments contain a variety of objects, including different types of lamp poles and traffic signs. Monitoring these objects is traditionally conducted by visual inspection, which is time-consuming and expensive. Mobile laser scanning (MLS) systems sample the road environment efficiently by acquiring large and accurate point clouds. This work proposes a methodology for urban road object recognition from MLS point clouds. The proposed method uses, for the first time, shape descriptors of complete objects to match repetitive objects in large point clouds. To do so, a novel 3D multi-scale shape descriptor is introduced and embedded in a workflow that efficiently and automatically identifies different types of lamp poles and traffic signs. The workflow starts by tiling the raw point clouds along the scanning trajectory and identifying non-ground points. After voxelization of the non-ground points, connected voxels are clustered to form candidate objects. For automatic recognition of lamp poles and street signs, a 3D significant-eigenvector-based shape descriptor using voxels (SigVox) is introduced. The 3D SigVox descriptor is constructed by first subdividing the points with an octree into several levels. Next, the significant eigenvectors of the points in each voxel are determined by principal component analysis (PCA) and mapped onto the appropriate triangle of an icosahedron approximating a sphere. This step is repeated for different scales. By determining the similarity of 3D SigVox descriptors between candidate point clusters and training objects, street furniture is automatically identified. The feasibility and quality of the proposed method are verified on two point clouds acquired in opposite directions along a 4 km stretch of road. Six types of lamp poles and four types of road signs were selected as objects of interest. Ground-truth validation showed that the overall accuracy of the ~170 automatically recognized objects is approximately 95%. The results demonstrate that the proposed method is able to recognize street furniture in a practical scenario. The remaining difficult cases are touching objects, such as a lamp pole close to a tree.
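The per-voxel building block of the SigVox descriptor can be sketched as follows: PCA of the points inside a voxel yields its significant eigenvector (the dominant principal direction). The octree subdivision and the mapping onto icosahedron triangles are omitted here, and the synthetic points are illustrative.

    import numpy as np

    def significant_eigenvector(points):
        """Dominant principal direction of an (n, 3) point set."""
        centered = points - points.mean(axis=0)
        cov = centered.T @ centered / max(len(points) - 1, 1)
        eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
        return eigvecs[:, -1]                     # eigenvector of the largest one

    rng = np.random.default_rng(0)
    # Points spread mostly along the z-axis, as in a lamp-pole shaft voxel.
    pts = rng.standard_normal((200, 3)) * np.array([0.05, 0.05, 1.0])
    print(significant_eigenvector(pts))           # ~ (0, 0, +/-1)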
Chien, S H; Hsieh, M K; Li, H; Monnell, J; Dzombak, D; Vidic, R
2012-02-01
Pilot-scale cooling towers can be used to evaluate corrosion, scaling, and biofouling control strategies when using particular cooling system makeup water and particular operating conditions. To study the potential for using a number of different impaired waters as makeup water, a pilot-scale system capable of generating a 27,000 kJ/h heat load and maintaining recirculating water flow with a Reynolds number of 1.92 × 10⁴ was designed to study these critical processes under conditions that are similar to full-scale systems. The pilot-scale cooling tower was equipped with an automatic makeup water control system, automatic blowdown control system, semi-automatic biocide feeding system, and corrosion, scaling, and biofouling monitoring systems. Observed operational data revealed that the major operating parameters, including temperature change (6.6 °C), cycles of concentration (N = 4.6), water flow velocity (0.66 m/s), and air mass velocity (3660 kg/(h·m²)), were controlled quite well for an extended period of time (up to 2 months). Overall, the performance of the pilot-scale cooling towers using treated municipal wastewater was shown to be suitable to study critical processes (corrosion, scaling, biofouling) and evaluate cooling water management strategies for makeup waters of complex quality.
Automatic Tools for Enhancing the Collaborative Experience in Large Projects
NASA Astrophysics Data System (ADS)
Bourilkov, D.; Rodriquez, J. L.
2014-06-01
With the explosion of big data in many fields, the efficient management of knowledge about all aspects of the data analysis gains in importance. A key feature of collaboration in large scale projects is keeping a log of what is being done and how - for private use, reuse, and for sharing selected parts with collaborators and peers, often distributed geographically on an increasingly global scale. Even better if the log is automatically created on the fly while the scientist or software developer is working in a habitual way, without the need for extra efforts. This saves time and enables a team to do more with the same resources. The CODESH - COllaborative DEvelopment SHell - and CAVES - Collaborative Analysis Versioning Environment System projects address this problem in a novel way. They build on the concepts of virtual states and transitions to enhance the collaborative experience by providing automatic persistent virtual logbooks. CAVES is designed for sessions of distributed data analysis using the popular ROOT framework, while CODESH generalizes the approach for any type of work on the command line in typical UNIX shells like bash or tcsh. Repositories of sessions can be configured dynamically to record and make available the knowledge accumulated in the course of a scientific or software endeavor. Access can be controlled to define logbooks of private sessions or sessions shared within or between collaborating groups. A typical use case is building working scalable systems for analysis of Petascale volumes of data as encountered in the LHC experiments. Our approach is general enough to find applications in many fields.
Fully automatic time-window selection using machine learning for global adjoint tomography
NASA Astrophysics Data System (ADS)
Chen, Y.; Hill, J.; Lei, W.; Lefebvre, M. P.; Bozdag, E.; Komatitsch, D.; Tromp, J.
2017-12-01
Selecting time windows from seismograms such that the synthetic measurements (from simulations) and measured observations are sufficiently close is indispensable in a global adjoint tomography framework. The increasing amount of seismic data collected everyday around the world demands "intelligent" algorithms for seismic window selection. While the traditional FLEXWIN algorithm can be "automatic" to some extent, it still requires both human input and human knowledge or experience, and thus is not deemed to be fully automatic. The goal of intelligent window selection is to automatically select windows based on a learnt engine that is built upon a huge number of existing windows generated through the adjoint tomography project. We have formulated the automatic window selection problem as a classification problem. All possible misfit calculation windows are classified as either usable or unusable. Given a large number of windows with a known selection mode (select or not select), we train a neural network to predict the selection mode of an arbitrary input window. Currently, the five features we extract from the windows are its cross-correlation value, cross-correlation time lag, amplitude ratio between observed and synthetic data, window length, and minimum STA/LTA value. More features can be included in the future. We use these features to characterize each window for training a multilayer perceptron neural network (MPNN). Training the MPNN is equivalent to solving a non-linear optimization problem. We use backpropagation to derive the gradient of the loss function with respect to the weighting matrices and bias vectors, and use the mini-batch stochastic gradient method to iteratively optimize the MPNN. Numerical tests show that with a careful selection of the training data and a sufficient amount of training data, we are able to train a robust neural network that is capable of detecting the waveforms in arbitrary earthquake data with negligible detection error compared to existing selection methods (e.g. FLEXWIN). We will introduce in detail the mathematical formulation of the window-selection-oriented MPNN and show very encouraging results when applying the new algorithm to real earthquake data.
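A minimal stand-in for the described window classifier, assuming the five listed features are already extracted per window; scikit-learn's MLPClassifier also trains a multilayer perceptron with mini-batch SGD and backpropagation, as the abstract describes, though the layer sizes below are assumptions.

```python
# Sketch: train a small MLP on the five per-window features to predict
# the selection mode (1 = select, 0 = reject).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

FEATURES = ["cc_value", "cc_time_lag", "amplitude_ratio",
            "window_length", "min_sta_lta"]  # names are illustrative

def train_window_classifier(X, y):
    """X: (n_windows, 5) feature matrix; y: known selection modes."""
    clf = make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(32, 16), solver="sgd",
                      batch_size=64, max_iter=500, random_state=0))
    clf.fit(X, y)
    return clf
```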
On the use of variability time-scales as an early classifier of radio transients and variables
NASA Astrophysics Data System (ADS)
Pietka, M.; Staley, T. D.; Pretorius, M. L.; Fender, R. P.
2017-11-01
We have shown previously that a broad correlation between the peak radio luminosity and the variability time-scales, approximately L ∝ τ⁵, exists for variable synchrotron emitting sources and that different classes of astrophysical sources occupy different regions of luminosity and time-scale space. Based on those results, we investigate whether the most basic information available for a newly discovered radio variable or transient - their rise and/or decline rate - can be used to set initial constraints on the class of events from which they originate. We have analysed a sample of ≈800 synchrotron flares, selected from light curves of ≈90 sources observed at 5-8 GHz, representing a wide range of astrophysical phenomena, from flare stars to supermassive black holes. Selection of outbursts from the noisy radio light curves has been done automatically in order to ensure reproducibility of results. The distribution of rise/decline rates for the selected flares is modelled as a Gaussian probability distribution for each class of object, and further convolved with estimated areal density of that class in order to correct for the strong bias in our sample. We show in this way that comparing the measured variability time-scale of a radio transient/variable of unknown origin can provide an early, albeit approximate, classification of the object, and could form part of a suite of measurements used to provide early categorization of such events. Finally, we also discuss the effect scintillating sources will have on our ability to classify events based on their variability time-scales.
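The classification idea, a per-class Gaussian over variability rates weighted by an areal density, can be illustrated with placeholder numbers; the class names, means, spreads, and densities below are not values from the paper.

```python
# Sketch: posterior class probabilities for a measured (log) rise rate.
import numpy as np
from scipy.stats import norm

# per-class (mean, std) of log10 rise rate, and relative areal density
CLASSES = {
    "flare star": (-1.0, 0.5, 0.6),
    "XRB":        ( 0.5, 0.7, 0.3),
    "AGN":        ( 1.5, 0.6, 0.1),
}

def classify_rate(log_rate):
    post = {c: norm.pdf(log_rate, mu, sd) * density
            for c, (mu, sd, density) in CLASSES.items()}
    z = sum(post.values())
    return {c: p / z for c, p in post.items()}

print(classify_rate(0.2))  # e.g. probability per class for one event
```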
Automatic Scaling of Digisonde Ionograms Computer Program and Numerical Analysis Documentation,
1983-02-01
Automatic scaling of Digisonde ionograms by the ARTIST system is discussed, and reference is made to ARTIST's success in scaling over 8000 ionograms (... and Huang, 1983). This method is ideally suited for autoscaled results as discussed in Reference 1. The results of ARTIST are outputted to a standard... The ARTIST autoscaling routine has been tested on some 8000 ionograms from Goose Bay, Labrador, for the months of January, April, July, and September of 1980...
Multiple-Diode-Laser Gas-Detection Spectrometer
NASA Technical Reports Server (NTRS)
Webster, Christopher R.; Beer, Reinhard; Sander, Stanley P.
1988-01-01
Small concentrations of selected gases measured automatically. Proposed multiple-laser-diode spectrometer part of system for measuring automatically concentrations of selected gases at part-per-billion level. Array of laser/photodetector pairs measure infrared absorption spectrum of atmosphere along probing laser beams. Adaptable to terrestrial uses as monitoring pollution or control of industrial processes.
Code of Federal Regulations, 2010 CFR
2010-10-01
§ 236.311 — Signal control circuits, selection through track..., automatic interlocking. (a) The control circuits for aspects with indications more favorable than "proceed"... (49 CFR, Transportation, Vol. 4, revised as of 2010-10-01.)
Automatic Text Analysis Based on Transition Phenomena of Word Occurrences
ERIC Educational Resources Information Center
Pao, Miranda Lee
1978-01-01
Describes a method of selecting index terms directly from a word frequency list, an idea originally suggested by Goffman. Results of the analysis of word frequencies of two articles seem to indicate that the automated selection of index terms from a frequency list holds some promise for automatic indexing. (Author/MBR)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burgmans, Mark Christiaan, E-mail: m.c.burgmans@lumc.nl; Harder, J. Michiel den, E-mail: chiel.den.harder@gmail.com; Meershoek, Philippa, E-mail: P.Meershoek@lumc.nl
Purpose: To determine the accuracy of automatic and manual co-registration methods for image fusion of three-dimensional computed tomography (CT) with real-time ultrasonography (US) for image-guided liver interventions. Materials and Methods: CT images of a skills phantom with liver lesions were acquired and co-registered to US using GE Logiq E9 navigation software. Manual co-registration was compared to automatic and semiautomatic co-registration using an active tracker. Also, manual point registration was compared to plane registration with and without an additional translation point. Finally, comparison was made between manual and automatic selection of reference points. In each experiment, accuracy of the co-registration method was determined by measurement of the residual displacement in phantom lesions by two independent observers. Results: Mean displacements for a superficial and deep liver lesion were comparable after manual and semiautomatic co-registration: 2.4 and 2.0 mm versus 2.0 and 2.5 mm, respectively. Both methods were significantly better than automatic co-registration: 5.9 and 5.2 mm residual displacement (p < 0.001; p < 0.01). The accuracy of manual point registration was higher than that of plane registration, the latter being heavily dependent on accurate matching of axial CT and US images by the operator. Automatic reference point selection resulted in significantly lower registration accuracy compared to manual point selection despite lower root-mean-square deviation (RMSD) values. Conclusion: The accuracy of manual and semiautomatic co-registration is better than that of automatic co-registration. For manual co-registration using a plane, choosing the correct plane orientation is an essential first step in the registration process. Automatic reference point selection based on RMSD values is error-prone.
Automatic Scoring of Paper-and-Pencil Figural Responses. Research Report.
ERIC Educational Resources Information Center
Martinez, Michael E.; And Others
Large-scale testing is dominated by the multiple-choice question format. Widespread use of the format is due, in part, to the ease with which multiple-choice items can be scored automatically. This paper examines automatic scoring procedures for an alternative item type: figural response. Figural response items call for the completion or…
NASA Astrophysics Data System (ADS)
Lin, D.; Jarzabek-Rychard, M.; Schneider, D.; Maas, H.-G.
2018-05-01
An automatic building façade thermal texture mapping approach, using uncooled thermal camera data, is proposed in this paper. First, a shutter-less radiometric thermal camera calibration method is implemented to remove the large offset deviations caused by the changing ambient environment. Then, a 3D façade model is generated from an RGB image sequence using structure-from-motion (SfM) techniques. Subsequently, for each triangle in the 3D model, the optimal texture is selected by taking into consideration local image scale, object incident angle and image viewing angle, as well as occlusions. Afterwards, the selected textures can be further corrected using thermal radiant characteristics. Finally, a Gaussian filter is applied to smooth the texture seams; it outperforms the voted-texture strategy and thus helps, for instance, to reduce the false alarm rate in façade thermal leakage detection. Our approach is evaluated on a row of building façades located in Dresden, Germany.
Method and apparatus for converting static in-ground vehicle scales into weigh-in-motion systems
Muhs, Jeffrey D.; Scudiere, Matthew B.; Jordan, John K.
2002-01-01
An apparatus and method for converting in-ground static weighing scales for vehicles to weigh-in-motion systems. The apparatus upon conversion includes the existing in-ground static scale, peripheral switches and an electronic module for automatic computation of the weight. By monitoring the velocity, tire position, axle spacing, and real time output from existing static scales as a vehicle drives over the scales, the system determines when an axle of a vehicle is on the scale at a given time, monitors the combined weight output from any given axle combination on the scale(s) at any given time, and from these measurements automatically computes the weight of each individual axle and gross vehicle weight by an integration, integration approximation, and/or signal averaging technique.
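A signal-averaging version of the weight computation might look as follows; the on-scale mask derived from the peripheral switches and the tare value are assumptions, and the patent also covers integration and integration-approximation variants.

```python
# Sketch: estimate an axle weight by averaging the static scale output
# while that axle (combination) rests on the scale.
import numpy as np

def axle_weight(scale_signal, on_scale_mask, tare=0.0):
    """scale_signal: sampled scale output; on_scale_mask: boolean array,
    True while the axle combination of interest is on the scale."""
    samples = scale_signal[on_scale_mask]
    return samples.mean() - tare

def gross_vehicle_weight(axle_weights):
    """Gross weight is the sum of the individual axle estimates."""
    return float(np.sum(axle_weights))
```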
Morrogh-Bernard, Helen C; Husson, Simon J; Harsanto, Fransiskus A; Chivers, David J
2014-01-01
This study was conducted to see how orang-utans (Pongo pygmaeus wurmbii) were coping with fine-scale habitat disturbance in a selectively logged peat swamp forest in Central Kalimantan, Borneo. Seven habitat classes were defined, and orang-utans were found to use all of these, but were selective in their preference for certain classes over others. Overall, the tall forest classes (≥20 m) were preferred. They were preferred for feeding, irrespective of canopy connectivity, whereas classes with a connected canopy (canopy cover ≥75%), irrespective of canopy height, were preferred for resting and nesting, suggesting that tall trees are preferred for feeding and connected canopy for security and protection. The smaller forest classes (≤10 m high) were least preferred and were used mainly for travelling from patch to patch. Thus, selective logging is demonstrated here to be compatible with orang-utan survival as long as large food trees and patches of primary forest remain. Logged forest, therefore, should not automatically be designated as 'degraded'. These findings have important implications for forest management, forest classification and the designation of protected areas for orang-utan conservation.
Research directions in large scale systems and decentralized control
NASA Technical Reports Server (NTRS)
Tenney, R. R.
1980-01-01
Control theory provides a well established framework for dealing with automatic decision problems and a set of techniques for automatic decision making which exploit special structure, but it does not deal well with complexity. The potential exists for combining control theoretic and knowledge based concepts into a unified approach. The elements of control theory are diagrammed, including modern control and large scale systems.
Souza, Elisa Sebba Tosta de; Crippa, José Alexandre de Souza; Pasian, Sonia Regina; Martinez, José Antônio Baddini
2010-01-01
To develop a new scale aimed at evaluating smoking motivation by incorporating questions and domains from the 68-item Wisconsin Inventory of Smoking Dependence Motives (WISDM-68) into the Modified Reasons for Smoking Scale (MRSS). Nine WISDM-68 questions regarding affiliative attachment, cue exposure/associative processes, and weight control were added to the 21 questions of the MRSS. The new scale, together with the Fagerström Test for Nicotine Dependence (FTND), was administered to 311 smokers (214 males; mean age = 37.6 ± 10.8 years; mean number of cigarettes smoked per day = 15.0 ± 9.2), who also provided additional information. We used exploratory factor analysis in order to determine the factor structure of the scale. The influence that certain clinical features had on the scores of the final factor solution was also analyzed. The factor analysis revealed a 21-question solution grouped into nine factors: addiction, pleasure from smoking, tension reduction, stimulation, automatism, handling, social smoking, weight control, and affiliative attachment. For the overall scale, the Cronbach's alpha coefficient was 0.83. Females scored significantly higher for addiction, tension reduction, handling, weight control, and affiliative attachment than did males. The FTND score correlated positively with addiction, tension reduction, stimulation, automatism, social smoking, and affiliative attachment. The number of cigarettes smoked per day was associated with addiction, tension reduction, stimulation, automatism, affiliative attachment, and handling. The level of exhaled CO correlated positively with addiction, automatism, and affiliative attachment. The new scale provides an acceptable framework of motivational factors for smoking, with satisfactory psychometric properties and reliability.
Threshold automatic selection hybrid phase unwrapping algorithm for digital holographic microscopy
NASA Astrophysics Data System (ADS)
Zhou, Meiling; Min, Junwei; Yao, Baoli; Yu, Xianghua; Lei, Ming; Yan, Shaohui; Yang, Yanlong; Dan, Dan
2015-01-01
The conventional quality-guided (QG) phase unwrapping algorithm is hard to apply to digital holographic microscopy because of its long execution time. In this paper, we present a threshold automatic selection hybrid phase unwrapping algorithm that combines the existing QG algorithm and the flood-fill (FF) algorithm to solve this problem. The original wrapped phase map is divided into high- and low-quality sub-maps by selecting a threshold automatically, and then the FF and QG unwrapping algorithms are used on each level to unwrap the phase, respectively. The feasibility of the proposed method is proved by experimental results, and the execution speed is shown to be much faster than that of the original QG unwrapping algorithm.
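The partition step can be sketched as below, using Otsu's method as a stand-in for the automatic threshold (the paper's exact criterion may differ); the FF and QG unwrapping passes themselves are omitted.

```python
# Sketch: split the wrapped-phase quality map into high- and low-quality
# sub-maps; the fast flood-fill pass would handle one partition and the
# slower quality-guided pass the other.
import numpy as np
from skimage.filters import threshold_otsu

def split_by_quality(quality_map):
    """Return boolean masks of high- and low-quality pixels."""
    t = threshold_otsu(quality_map)   # assumed automatic threshold rule
    high = quality_map >= t
    return high, ~high
```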
Predicting habits of vegetable parenting practices to facilitate the design of change programmes.
Baranowski, Tom; Chen, Tzu-An; O'Connor, Teresia M; Hughes, Sheryl O; Diep, Cassandra S; Beltran, Alicia; Brand, Leah; Nicklas, Theresa; Baranowski, Janice
2016-08-01
Habit has been defined as the automatic performance of a usual behaviour. The present paper reports the relationships of variables from a Model of Goal Directed Behavior to four scales in regard to parents' habits when feeding their children: habit of (i) actively involving child in selection of vegetables; (ii) maintaining a positive vegetable environment; (iii) positive communications about vegetables; and (iv) controlling vegetable practices. We tested the hypothesis that the primary predictor of each habit variable would be the measure of the corresponding parenting practice. Design: Internet survey data from a mostly female sample; primary analyses employed regression modelling with backward deletion, controlling for demographics and parenting practices behaviour. Setting: Houston, Texas, USA. Subjects: Parents of 307 pre-school (3-5-year-old) children. Results: Three of the four models accounted for about 50 % of the variance in the parenting practices habit scales. Each habit scale was primarily predicted by the corresponding parenting practices scale (suggesting validity). The habit of active child involvement in vegetable selection was also most strongly predicted by two barriers and rudimentary self-efficacy; the habit of maintaining a positive vegetable environment by one barrier; the habit of maintaining positive communications about vegetables by an emotional scale; and the habit of controlling vegetable practices by a perceived behavioural control scale. The predictiveness of the psychosocial variables beyond parenting practices behaviour was modest. Conclusions: Discontinuing the habit of ineffective controlling parenting practices may require increasing the parent's perceived control of parenting practices, perhaps through simulated parent-child interactions.
[Automatic tracing of conversion scales from conventional units to the SI system of units].
Besozzi, M; Bianchi, P; Agrifoglio, L
1988-01-01
American medical journals, such as the Journal of the American Medical Association (JAMA) and the American Journal of Clinical Pathology (AJCP), the journal of the American Society of Clinical Pathologists (ASCP), are shifting to selected SI (Système International d'Unités) units for reporting measurements. Further discussion by the AMA, the ASCP and other organizations is required before consensus can be reached in the US medical community as to the extent of, and time frame for, conversion to SI for reporting clinical laboratory measurements; however, this decision will certainly greatly speed up the process of conversion in European countries too. Transition to SI units will require the use of different reference ranges, and there will be a potential for serious misinterpretation of laboratory data unless well-planned educational programs are instituted before the change. A simple program written in Microsoft Basic for automatically tracing on one's personal computer (PC) monitor a dual scale, in the conventional and in the SI system of units, is presented here. The program may be easily implemented and run on every PC operating under MS-DOS, equipped with a CGA or an AT&T 6300 graphics card; through the operating system the scales may also be printed on a dot-matrix graphics printer. We believe that this, and other tools of this kind, will be useful in the thorough educational process of those reading the reports, and will be an important factor in the success of conversion to SI reporting.
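A modern analogue of the described BASIC routine can be sketched in Python with matplotlib; glucose and its mg/dL to mmol/L factor are used as an example analyte and are not taken from the paper.

```python
# Sketch: draw a dual scale (conventional units below, SI above) on screen.
import numpy as np
import matplotlib.pyplot as plt

FACTOR = 1 / 18.016                      # glucose: mg/dL -> mmol/L

fig, ax = plt.subplots(figsize=(6, 1.5))
ax.set_xlim(0, 400)
ax.set_xticks(np.arange(0, 401, 50))     # conventional tick marks
ax.set_yticks([])
ax.set_xlabel("glucose, mg/dL (conventional)")

# secondary axis carries the SI scale via the conversion and its inverse
top = ax.secondary_xaxis("top", functions=(lambda x: x * FACTOR,
                                           lambda x: x / FACTOR))
top.set_xlabel("glucose, mmol/L (SI)")
plt.tight_layout()
plt.show()
```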
Cyber Surveillance for Flood Disasters
Lo, Shi-Wei; Wu, Jyh-Horng; Lin, Fang-Pang; Hsu, Ching-Han
2015-01-01
Regional heavy rainfall is usually caused by the influence of extreme weather conditions. Instant heavy rainfall often results in the flooding of rivers and the neighboring low-lying areas, which is responsible for a large number of casualties and considerable property loss. The existing precipitation forecast systems mostly focus on the analysis and forecast of large-scale areas but do not provide precise instant automatic monitoring and alert feedback for individual river areas and sections. Therefore, in this paper, we propose an easy method to automatically monitor the flood object of a specific area, based on the currently widely used remote cyber surveillance systems and image processing methods, in order to obtain instant flooding and waterlogging event feedback. The intrusion detection mode of these surveillance systems is used in this study, wherein a flood is considered a possible invasion object. Through the detection and verification of flood objects, automatic flood risk-level monitoring of specific individual river segments, as well as the automatic urban inundation detection, has become possible. The proposed method can better meet the practical needs of disaster prevention than the method of large-area forecasting. It also has several other advantages, such as flexibility in location selection, no requirement of a standard water-level ruler, and a relatively large field of view, when compared with the traditional water-level measurements using video screens. The results can offer prompt reference for appropriate disaster warning actions in small areas, making them more accurate and effective. PMID:25621609
Automatic allograft bone selection through band registration and its application to distal femur.
Zhang, Yu; Qiu, Lei; Li, Fengzan; Zhang, Qing; Zhang, Li; Niu, Xiaohui
2017-09-01
Clinical reports suggest that large bone defects can be effectively restored by allograft bone transplantation, in which allograft bone selection plays an important role. Moreover, there is a strong demand for developing automatic allograft bone selection methods, as automatic methods could greatly improve the management efficiency of large bone banks. Although several automatic methods have been presented to select the most suitable allograft bone from a massive allograft bone bank, these methods still suffer from inaccuracy. In this paper, we propose an effective allograft bone selection method that does not use the contralateral bones. Firstly, the allograft bone is globally aligned to the recipient bone by surface registration. Then, the global alignment is further refined through band registration. The band, defined as the recipient points within the lifted and lowered cutting planes, can involve more of the local structure of the defected segment. Therefore, our method achieves robust alignment and high registration accuracy between the allograft and recipient. Moreover, the existing contour method and surface method can be unified into one framework under our method by adjusting the lift and lower distances of the cutting planes. Finally, our method has been validated on a database of distal femurs. The experimental results indicate that our method outperforms the surface method and the contour method.
NASA Astrophysics Data System (ADS)
Widyaningrum, E.; Gorte, B. G. H.
2017-05-01
LiDAR data acquisition is recognized as one of the fastest solutions to provide base data for large-scale topographical base maps worldwide. Automatic LiDAR processing is believed to be one possible scheme to accelerate the provision of large-scale topographic base maps by the Geospatial Information Agency in Indonesia. As a progressively advancing technology, Geographic Information Systems (GIS) open possibilities for automatic processing and analysis of geospatial data. Considering further needs for spatial data sharing and integration, the one-stop processing of LiDAR data in a GIS environment is considered a powerful and efficient approach for base map provision. The quality of the automated topographic base map is assessed and analysed based on its completeness, correctness, quality, and the confusion matrix.
Floating-point scaling technique for sources separation automatic gain control
NASA Astrophysics Data System (ADS)
Fermas, A.; Belouchrani, A.; Ait-Mohamed, O.
2012-07-01
Based on the floating-point representation and taking advantage of the scaling factor indeterminacy in blind source separation (BSS) processing, we propose a scaling technique applied to the separation matrix to avoid saturation or weakness in the recovered source signals. This technique performs automatic gain control in an on-line BSS environment. We demonstrate the effectiveness of this technique using the implementation of a division-free BSS algorithm with two inputs and two outputs. The proposed technique is computationally cheap and efficient for a hardware implementation compared to Euclidean normalisation.
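A minimal sketch of the idea: rescaling each row of the separation matrix by a power of two touches only the floating-point exponent, which is what makes the scheme cheap in hardware. The row-wise normalization target below is an assumption, not the paper's exact rule.

```python
# Sketch: power-of-two automatic gain control on the separation matrix.
import math
import numpy as np

def agc_scale_rows(W):
    """Scale each row of W so its largest |entry| lies in [0.5, 1)."""
    W = np.asarray(W, dtype=float).copy()
    for i, row in enumerate(W):
        peak = np.max(np.abs(row))
        if peak > 0:
            _, e = math.frexp(peak)      # peak = m * 2**e, m in [0.5, 1)
            W[i] = row * 2.0 ** (-e)     # pure exponent shift, no division
    return W
```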
NASA Astrophysics Data System (ADS)
Duan, Fajie; Fu, Xiao; Jiang, Jiajia; Huang, Tingting; Ma, Ling; Zhang, Cong
2018-05-01
In this work, an automatic variable selection method for quantitative analysis of soil samples using laser-induced breakdown spectroscopy (LIBS) is proposed, which is based on full spectrum correction (FSC) and modified iterative predictor weighting-partial least squares (mIPW-PLS). The method features automatic selection without manual intervention. To illustrate the feasibility and effectiveness of the method, a comparison with a genetic algorithm (GA) and the successive projections algorithm (SPA) for the detection of different elements (copper, barium and chromium) in soil was implemented. The experimental results showed that all three methods could accomplish variable selection effectively, among which FSC-mIPW-PLS required a significantly shorter computation time (approximately 12 s for 40,000 initial variables) than the others. Moreover, improved quantification models were obtained with the variable selection approaches. The root mean square errors of prediction (RMSEP) of models utilizing the new method were 27.47 (copper), 37.15 (barium) and 39.70 (chromium) mg/kg, which showed prediction performance comparable to GA and SPA.
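A simplified iterative predictor weighting PLS loop in the spirit of mIPW-PLS follows; the paper's specific modifications and the FSC step are not reproduced, and the pruning schedule is an assumption.

```python
# Sketch: iteratively reweight spectral variables by PLS importance and
# drop the weakest ones.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def ipw_pls_select(X, y, n_components=5, n_iter=10, keep_frac=0.5):
    """X: (n_samples, n_wavelengths) spectra; y: element concentrations.
    Returns indices of the retained spectral variables."""
    idx = np.arange(X.shape[1])
    for _ in range(n_iter):
        pls = PLSRegression(n_components=min(n_components, len(idx)))
        pls.fit(X[:, idx], y)
        # importance ~ |regression coefficient| x variable spread
        importance = np.abs(pls.coef_.ravel()) * X[:, idx].std(axis=0)
        keep = max(int(len(idx) * keep_frac), n_components)
        order = np.argsort(importance)[::-1][:keep]
        idx = idx[np.sort(order)]
    return idx
```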
NASA Astrophysics Data System (ADS)
Imbalzano, Giulio; Anelli, Andrea; Giofré, Daniele; Klees, Sinja; Behler, Jörg; Ceriotti, Michele
2018-06-01
Machine learning of atomic-scale properties is revolutionizing molecular modeling, making it possible to evaluate inter-atomic potentials with first-principles accuracy, at a fraction of the costs. The accuracy, speed, and reliability of machine learning potentials, however, depend strongly on the way atomic configurations are represented, i.e., the choice of descriptors used as input for the machine learning method. The raw Cartesian coordinates are typically transformed in "fingerprints," or "symmetry functions," that are designed to encode, in addition to the structure, important properties of the potential energy surface like its invariances with respect to rotation, translation, and permutation of like atoms. Here we discuss automatic protocols to select a number of fingerprints out of a large pool of candidates, based on the correlations that are intrinsic to the training data. This procedure can greatly simplify the construction of neural network potentials that strike the best balance between accuracy and computational efficiency and has the potential to accelerate by orders of magnitude the evaluation of Gaussian approximation potentials based on the smooth overlap of atomic positions kernel. We present applications to the construction of neural network potentials for water and for an Al-Mg-Si alloy and to the prediction of the formation energies of small organic molecules using Gaussian process regression.
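One selection protocol of this kind, farthest-point sampling over candidate fingerprints with a correlation-based distance, can be sketched as follows; the paper also discusses a CUR-based protocol, and the details below (standardization, starting feature, distance metric) are illustrative choices.

```python
# Sketch: greedily pick fingerprints that are least correlated with the
# ones already selected (farthest-point sampling in feature space).
import numpy as np

def fps_select_features(X, n_select):
    """X: (n_structures, n_candidate_features). Returns column indices."""
    Xs = (X - X.mean(0)) / (X.std(0) + 1e-12)       # standardize columns
    corr = np.abs(np.nan_to_num(np.corrcoef(Xs, rowvar=False)))
    dist = 1.0 - corr                               # feature-feature distance
    selected = [int(np.argmax(np.var(X, axis=0)))]  # seed: max-variance feature
    d_min = dist[selected[0]].copy()
    for _ in range(n_select - 1):
        nxt = int(np.argmax(d_min))                 # farthest from chosen set
        selected.append(nxt)
        d_min = np.minimum(d_min, dist[nxt])
    return selected
```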
Automatic design of synthetic gene circuits through mixed integer non-linear programming.
Huynh, Linh; Kececioglu, John; Köppe, Matthias; Tagkopoulos, Ilias
2012-01-01
Automatic design of synthetic gene circuits poses a significant challenge to synthetic biology, primarily due to the complexity of biological systems and the lack of rigorous optimization methods that can cope with the combinatorial explosion as the number of biological parts increases. Current optimization methods for synthetic gene design rely on heuristic algorithms that are usually not deterministic, deliver sub-optimal solutions, and provide no guarantees on convergence or error bounds. Here, we introduce an optimization framework for the problem of part selection in synthetic gene circuits that is based on mixed integer non-linear programming (MINLP), which is a deterministic method that finds the globally optimal solution and guarantees convergence in finite time. Given a synthetic gene circuit, a library of characterized parts, and user-defined constraints, our method can find the optimal selection of parts that satisfies the constraints and best approximates the objective function given by the user. We evaluated the proposed method in the design of three synthetic circuits (a toggle switch, a transcriptional cascade, and a band detector), with both experimentally constructed and synthetic promoter libraries. Scalability and robustness analysis shows that the proposed framework scales well with the library size and the solution space. The work described here is a step towards a unifying, realistic framework for the automated design of biological circuits.
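For a toy library, the part-selection problem can be solved by plain enumeration; the sketch below conveys the objective/constraint structure but none of the MINLP machinery that makes the real method scale, and the callables are placeholders to be supplied by the user.

```python
# Toy sketch: exhaustive part selection for a tiny library.
import itertools

def select_parts(slots, library, feasible, objective):
    """slots: number of parts to choose; library: candidate parts;
    feasible(combo) -> bool encodes user constraints; objective(combo)
    -> float measures distance to the desired circuit behaviour."""
    best, best_val = None, float("inf")
    for combo in itertools.product(library, repeat=slots):
        if feasible(combo):
            val = objective(combo)
            if val < best_val:
                best, best_val = combo, val
    return best, best_val
```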
Mölder, Anna; Drury, Sarah; Costen, Nicholas; Hartshorne, Geraldine M; Czanner, Silvester
2015-02-01
Embryo selection in in vitro fertilization (IVF) treatment has traditionally been done manually using microscopy at intermittent time points during embryo development. Novel techniques have made it possible to monitor embryos using time lapse for long periods of time, and together with the reduced cost of data storage, this has opened the door to long-term time-lapse monitoring, and large amounts of image material is now routinely gathered. However, the analysis is still to a large extent performed manually, and images are mostly used as qualitative reference. To make full use of the increased amount of microscopic image material, (semi)automated computer-aided tools are needed. An additional benefit of automation is the establishment of standardization tools for embryo selection and transfer, making decisions more transparent and less subjective. Another is the possibility to gather and analyze data in a high-throughput manner, gathering data from multiple clinics and increasing our knowledge of early human embryo development. In this study, the extraction of data to automatically select and track spatio-temporal events and features from sets of embryo images has been achieved using localized variance based on the distribution of image grey scale levels. A retrospective cohort study was performed using time-lapse imaging data derived from 39 human embryos from seven couples, covering the time from fertilization up to 6.3 days. The profile of localized variance has been used to characterize syngamy, mitotic division and stages of cleavage, compaction, and blastocoel formation. Prior to analysis, focal plane and embryo location were automatically detected, limiting precomputational user interaction to a calibration step and usable for automatic detection of region of interest (ROI) regardless of the method of analysis. The results were validated against the opinion of clinical experts. © 2015 International Society for Advancement of Cytometry.
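The core measurement, a localized-variance map computed from the distribution of grey levels in a sliding window (var = E[x²] − E[x]²), is easy to sketch; the window size is an assumed parameter.

```python
# Sketch: localized variance of an embryo image via box filtering.
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(image, size=15):
    """Return a per-pixel variance map over a size x size window."""
    img = image.astype(float)
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img * img, size)
    return np.maximum(mean_sq - mean * mean, 0.0)  # clamp rounding error
```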
Integration and segregation of large-scale brain networks during short-term task automatization
Mohr, Holger; Wolfensteller, Uta; Betzel, Richard F.; Mišić, Bratislav; Sporns, Olaf; Richiardi, Jonas; Ruge, Hannes
2016-01-01
The human brain is organized into large-scale functional networks that can flexibly reconfigure their connectivity patterns, supporting both rapid adaptive control and long-term learning processes. However, it has remained unclear how short-term network dynamics support the rapid transformation of instructions into fluent behaviour. Comparing fMRI data of a learning sample (N=70) with a control sample (N=67), we find that increasingly efficient task processing during short-term practice is associated with a reorganization of large-scale network interactions. Practice-related efficiency gains are facilitated by enhanced coupling between the cingulo-opercular network and the dorsal attention network. Simultaneously, short-term task automatization is accompanied by decreasing activation of the fronto-parietal network, indicating a release of high-level cognitive control, and a segregation of the default mode network from task-related networks. These findings suggest that short-term task automatization is enabled by the brain's ability to rapidly reconfigure its large-scale network organization involving complementary integration and segregation processes. PMID:27808095
Partial polygon pruning of hydrographic features in automated generalization
Stum, Alexander K.; Buttenfield, Barbara P.; Stanislawski, Larry V.
2017-01-01
This paper demonstrates a working method to automatically detect and prune portions of waterbody polygons to support creation of a multi-scale hydrographic database. Water features are known to be sensitive to scale change; and thus multiple representations are required to maintain visual and geographic logic at smaller scales. Partial pruning of polygonal features—such as long and sinuous reservoir arms, stream channels that are too narrow at the target scale, and islands that begin to coalesce—entails concurrent management of the length and width of polygonal features as well as integrating pruned polygons with other generalized point and linear hydrographic features to maintain stream network connectivity. The implementation follows data representation standards developed by the U.S. Geological Survey (USGS) for the National Hydrography Dataset (NHD). Portions of polygonal rivers, streams, and canals are automatically characterized for width, length, and connectivity. This paper describes an algorithm for automatic detection and subsequent processing, and shows results for a sample of NHD subbasins in different landscape conditions in the United States.
Harati, Vida; Khayati, Rasoul; Farzan, Abdolreza
2011-07-01
Uncontrollable and unlimited cell growth leads to tumor genesis in the brain. If brain tumors are not diagnosed early and cured properly, they can cause permanent brain damage or even death to patients. As in all methods of treatment, any information about tumor position and size is important for successful treatment; hence, finding an accurate and fully automated method to provide this information to physicians is necessary. A fully automatic and accurate method for tumor region detection and segmentation in brain magnetic resonance (MR) images is suggested. The presented approach is an improved scale-based fuzzy connectedness (FC) algorithm in which the seed point is selected automatically. This algorithm is independent of the tumor type in terms of its pixel intensity. Tumor segmentation evaluation results based on similarity criteria (similarity index (SI) 92.89%, overlap fraction (OF) 91.75%, and extra fraction (EF) 3.95%) indicate a higher performance of the proposed approach compared to conventional methods, especially in MR images of tumor regions with low contrast. Thus, the suggested method is useful for increasing the ability of automatic estimation of tumor size and position in brain tissues, which provides more accurate investigation of the required surgery, chemotherapy, and radiotherapy procedures. Copyright © 2011 Elsevier Ltd. All rights reserved.
Edmands, William M B; Barupal, Dinesh K; Scalbert, Augustin
2015-03-01
MetMSLine represents a complete collection of functions in the R programming language as an accessible GUI for biomarker discovery in large-scale liquid-chromatography high-resolution mass spectral datasets from acquisition through to final metabolite identification forming a backend to output from any peak-picking software such as XCMS. MetMSLine automatically creates subdirectories, data tables and relevant figures at the following steps: (i) signal smoothing, normalization, filtration and noise transformation (PreProc.QC.LSC.R); (ii) PCA and automatic outlier removal (Auto.PCA.R); (iii) automatic regression, biomarker selection, hierarchical clustering and cluster ion/artefact identification (Auto.MV.Regress.R); (iv) Biomarker-MS/MS fragmentation spectra matching and fragment/neutral loss annotation (Auto.MS.MS.match.R) and (v) semi-targeted metabolite identification based on a list of theoretical masses obtained from public databases (DBAnnotate.R). All source code and suggested parameters are available in an un-encapsulated layout on http://wmbedmands.github.io/MetMSLine/. Readme files and a synthetic dataset of both X-variables (simulated LC-MS data), Y-variables (simulated continuous variables) and metabolite theoretical masses are also available on our GitHub repository. © The Author 2014. Published by Oxford University Press.
Automatic Classification of Tremor Severity in Parkinson's Disease Using a Wearable Device.
Jeon, Hyoseon; Lee, Woongwoo; Park, Hyeyoung; Lee, Hong Ji; Kim, Sang Kyong; Kim, Han Byul; Jeon, Beomseok; Park, Kwang Suk
2017-09-09
Although there is clinical demand for new technology that can accurately measure Parkinsonian tremors, automatic scoring of Parkinsonian tremors using machine-learning approaches has not yet been employed. This study aims to fill this gap by proposing machine-learning algorithms to predict Unified Parkinson's Disease Rating Scale (UPDRS) scores, similar to how neurologists rate scores in actual clinical practice. In this study, the tremor signals of 85 patients with Parkinson's disease (PD) were measured using a wrist-watch-type wearable device consisting of an accelerometer and a gyroscope. The displacement and angle signals were calculated from the measured acceleration and angular velocity, and the acceleration, angular velocity, displacement, and angle signals were used for analysis. Nineteen features were extracted from each signal, and a pairwise correlation strategy was used to reduce the number of feature dimensions. With the selected features, a decision tree (DT), support vector machine (SVM), discriminant analysis (DA), random forest (RF), and k-nearest-neighbor (kNN) algorithm were explored for automatic scoring of Parkinsonian tremor severity. The performance of the employed classifiers was analyzed using accuracy, recall, and precision, and compared to other findings in similar studies. Finally, the limitations and plans for further study are discussed.
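The pairwise-correlation pruning followed by one of the evaluated classifiers might be sketched as below; the 19 per-signal features are assumed precomputed, and the 0.9 cut-off is an arbitrary choice, not the paper's value.

```python
# Sketch: drop one feature of each highly correlated pair, then train a
# random forest to predict UPDRS tremor scores.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def prune_correlated(X, threshold=0.9):
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        # keep feature j only if it is weakly correlated with kept ones
        if all(corr[j, k] < threshold for k in keep):
            keep.append(j)
    return keep

def train_updrs_scorer(X, y):
    keep = prune_correlated(X)
    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    clf.fit(X[:, keep], y)        # y: UPDRS tremor score labels
    return clf, keep
```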
Automatic selection of arterial input function using tri-exponential models
NASA Astrophysics Data System (ADS)
Yao, Jianhua; Chen, Jeremy; Castro, Marcelo; Thomasson, David
2009-02-01
Dynamic Contrast Enhanced MRI (DCE-MRI) is one method for drug and tumor assessment. Selecting a consistent arterial input function (AIF) is necessary to calculate tissue and tumor pharmacokinetic parameters in DCE-MRI. This paper presents an automatic and robust method to select the AIF. The first stage is artery detection and segmentation, where knowledge about artery structure and dynamic signal intensity temporal properties of DCE-MRI is employed. The second stage is AIF model fitting and selection. A tri-exponential model is fitted for every candidate AIF using the Levenberg-Marquardt method, and the best fitted AIF is selected. Our method has been applied in DCE-MRIs of four different body parts: breast, brain, liver and prostate. The success rates in artery segmentation for 19 cases are 89.6%+/-15.9%. The pharmacokinetic parameters computed from the automatically selected AIFs are highly correlated with those from manually determined AIFs (R2=0.946, P(T<=t)=0.09). Our imaging-based tri-exponential AIF model demonstrated significant improvement over a previously proposed bi-exponential model.
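The model-fitting stage can be sketched with a tri-exponential model and scipy's curve_fit, which defaults to Levenberg-Marquardt for unconstrained problems, as in the paper; the exact parameterization and initial guesses below are assumptions.

```python
# Sketch: fit a tri-exponential AIF model to a candidate signal and
# report the residual used to rank candidates.
import numpy as np
from scipy.optimize import curve_fit

def tri_exp(t, a1, b1, a2, b2, a3, b3):
    return (a1 * np.exp(-b1 * t) + a2 * np.exp(-b2 * t)
            + a3 * np.exp(-b3 * t))

def fit_aif(t, signal):
    p0 = (signal.max(), 1.0, signal.max() / 2, 0.1,
          signal.max() / 4, 0.01)                   # rough initial guess
    params, _ = curve_fit(tri_exp, t, signal, p0=p0, maxfev=10000)
    residual = np.sum((tri_exp(t, *params) - signal) ** 2)
    return params, residual   # the best-fitted candidate AIF is selected
```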
Content-based analysis of Ki-67 stained meningioma specimens for automatic hot-spot selection.
Swiderska-Chadaj, Zaneta; Markiewicz, Tomasz; Grala, Bartlomiej; Lorent, Malgorzata
2016-10-07
Hot-spot based examination of immunohistochemically stained histological specimens is one of the most important procedures in pathomorphological practice. The development of image acquisition equipment and computational units allows for the automation of this process. Moreover, many technical problems occur in everyday histological material, which increases the complexity of the task; thus, a full context-based analysis of histological specimens is also needed in the quantification of immunohistochemically stained specimens. One of the most important reactions is the Ki-67 proliferation marker in meningiomas, the most frequent intracranial tumour. The aim of our study is to propose a context-based analysis of Ki-67 stained specimens of meningiomas for automatic selection of hot-spots. The proposed solution is based on textural analysis, mathematical morphology, feature ranking and classification, as well as on the proposed hot-spot gradual extinction algorithm, to allow for the proper detection of a set of hot-spot fields. The designed whole slide image processing scheme eliminates such artifacts as hemorrhages, folds or stained vessels from the region of interest. To validate the automatic results, a set of 104 meningioma specimens was selected, and twenty hot-spots inside them were identified independently by two experts. The Spearman rho correlation coefficient was used to compare the results, which were also analyzed with the help of a Bland-Altman plot. The results show that most of the cases (84) were automatically examined properly, with at most two fields of view affected by a technical problem; a further 13 had three such fields, and only seven specimens did not meet the requirement for automatic examination. Generally, the automatic system identifies hot-spot areas, and especially their maximum points, better. Analysis of the results confirms the very high concordance between the automatic Ki-67 examination and the experts' results, with a Spearman rho higher than 0.95. The proposed hot-spot selection algorithm, with an extended context-based analysis of whole slide images and the hot-spot gradual extinction algorithm, provides an efficient tool for simulating a manual examination. The presented results confirm that automatic examination of Ki-67 in meningiomas could be introduced in the near future.
Automatic alignment method for calibration of hydrometers
NASA Astrophysics Data System (ADS)
Lee, Y. J.; Chang, K. H.; Chon, J. C.; Oh, C. Y.
2004-04-01
This paper presents a new method to automatically align specific scale-marks for the calibration of hydrometers. A hydrometer calibration system adopting the new method consists of a vision system, a stepping motor, and software to control the system. The vision system is composed of a CCD camera and a frame grabber, and is used to acquire images. The stepping motor moves the camera, which is attached to the vessel containing a reference liquid, along the hydrometer. The operating program has two main functions: to process images from the camera to find the position of the horizontal plane and to control the stepping motor for the alignment of the horizontal plane with a particular scale-mark. Any system adopting this automatic alignment method is a convenient and precise means of calibrating a hydrometer. The performance of the proposed method is illustrated by comparing the calibration results using the automatic alignment method with those obtained using the manual method.
Classification of radiolarian images with hand-crafted and deep features
NASA Astrophysics Data System (ADS)
Keçeli, Ali Seydi; Kaya, Aydın; Keçeli, Seda Uzunçimen
2017-12-01
Radiolarians are planktonic protozoa and are important biostratigraphic and paleoenvironmental indicators for paleogeographic reconstructions. Radiolarian paleontology remains a low-cost and one of the most convenient ways to obtain dating of deep-ocean sediments. Traditional methods for identifying radiolarians are time-consuming and cannot scale to the granularity or scope necessary for large-scale studies. Automated image classification will allow these analyses to be made promptly. In this study, a method for automatic radiolarian image classification is proposed on scanning electron microscope (SEM) images of radiolarians to ease species identification of fossilized radiolarians. The proposed method uses both hand-crafted features, such as invariant moments, wavelet moments, Gabor features and basic morphological features, and deep features obtained from a pre-trained convolutional neural network (CNN). Feature selection is applied over the deep features to reduce their high dimensionality. Classification outcomes are analyzed to compare hand-crafted features, deep features, and their combinations. Results show that the deep features obtained from a pre-trained CNN are more discriminative than the hand-crafted ones. Additionally, feature selection reduces the computational cost of the classification algorithms and has no negative effect on classification accuracy.
NASA Technical Reports Server (NTRS)
Nguyen, Duc T.; Storaasli, Olaf O.; Qin, Jiangning; Qamar, Ramzi
1994-01-01
An automatic differentiation tool (ADIFOR) is incorporated into a finite element based structural analysis program for shape and non-shape design sensitivity analysis of structural systems. The entire analysis and sensitivity procedures are parallelized and vectorized for high performance computation. Small scale examples to verify the accuracy of the proposed program and a medium scale example to demonstrate the parallel vector performance on multiple CRAY C90 processors are included.
Mathematical algorithm for the automatic recognition of intestinal parasites.
Alva, Alicia; Cangalaya, Carla; Quiliano, Miguel; Krebs, Casey; Gilman, Robert H; Sheen, Patricia; Zimic, Mirko
2017-01-01
Parasitic infections are generally diagnosed by professionals trained to recognize the morphological characteristics of the eggs in microscopic images of fecal smears. However, this laboratory diagnosis requires medical specialists, who are lacking in many of the areas where these infections are most prevalent. In response to this public health issue, we developed software based on pattern recognition analysis of microscopic digital images of fecal smears, capable of automatically recognizing and diagnosing common human intestinal parasites. To this end, we selected 229, 124, 217, and 229 objects from microscopic images of fecal smears positive for Taenia sp., Trichuris trichiura, Diphyllobothrium latum, and Fasciola hepatica, respectively. Representative photographs were selected by a parasitologist. We then implemented our algorithm in the open source program SCILAB. The algorithm processes the image by first converting it to gray-scale, then applies a fourteen-step filtering process, and produces a skeletonized and tri-colored image. The features extracted fall into two general categories: geometric characteristics and brightness descriptions. Individual characteristics were quantified and evaluated with a logistic regression to model their ability to correctly identify each parasite separately. Subsequently, all algorithms were evaluated for false positive cross-reactivity with the other parasites studied, excepting Taenia sp., which shares very few morphological characteristics with the others. The principal result showed that our algorithm reached sensitivities between 99.10% and 100% and specificities between 98.13% and 98.38% in detecting each parasite separately. We did not find any cross-positivity in the algorithms for the three parasites evaluated. In conclusion, the results demonstrated the capacity of our computer algorithm to automatically recognize and diagnose Taenia sp., Trichuris trichiura, Diphyllobothrium latum, and Fasciola hepatica with high sensitivity and specificity.
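The classification stage compresses to one one-vs-rest logistic regression per species over the extracted geometric/brightness descriptors; the fourteen-step SCILAB filtering pipeline is not reproduced here, and the function layout is illustrative.

```python
# Sketch: per-parasite logistic-regression detectors over precomputed
# object descriptors.
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_parasite_detectors(X, labels, species):
    """X: (n_objects, n_features) descriptors; labels: species name per
    object. Returns one one-vs-rest detector per parasite species."""
    detectors = {}
    for sp in species:
        y = (np.asarray(labels) == sp).astype(int)
        detectors[sp] = LogisticRegression(max_iter=1000).fit(X, y)
    return detectors
```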
Scaling up high throughput field phenotyping of corn and soy research plots using ground rovers
NASA Astrophysics Data System (ADS)
Peshlov, Boyan; Nakarmi, Akash; Baldwin, Steven; Essner, Scott; French, Jasenka
2017-05-01
Crop improvement programs require large and meticulous selection processes that effectively and accurately collect and analyze data in order to generate quality plant products as efficiently as possible and to develop superior cropping and/or crop improvement methods. Typically, data collection for such testing is performed by field teams using hand-held instruments or manually-controlled devices. Although steps are taken to reduce error, data collected in this manner can be unreliable due to human error and fatigue, which reduces the ability to make accurate selection decisions. Monsanto engineering teams have developed a high-clearance mobile platform (Rover) as a step towards high throughput and high accuracy phenotyping at an industrial scale. The rovers are equipped with GPS navigation, multiple cameras and sensors, and on-board computers to acquire data and compute plant vigor metrics per plot. The supporting IT systems enable automatic path planning, plot identification, image and point cloud data QA/QC, and near real-time analysis where results are streamed to enterprise databases for additional statistical analysis and product advancement decisions. Since the rover program was launched in North America in 2013, the number of research plots we can analyze in a growing season has expanded dramatically. This work describes some of the successes and challenges in scaling up the rover platform for automated phenotyping to enable science at scale.
Automatic blood vessel based-liver segmentation using the portal phase abdominal CT
NASA Astrophysics Data System (ADS)
Maklad, Ahmed S.; Matsuhiro, Mikio; Suzuki, Hidenobu; Kawata, Yoshiki; Niki, Noboru; Shimada, Mitsuo; Iinuma, Gen
2018-02-01
Liver segmentation is the basis for computer-based planning of hepatic surgical interventions. In the diagnosis and analysis of hepatic diseases and in surgery planning, automatic segmentation of the liver is highly important. Blood vessels (BVs) have shown high performance as a basis for liver segmentation. In our previous work, we developed a semi-automatic method that segments the liver through the portal phase abdominal CT images in two stages. The first stage was interactive segmentation of abdominal blood vessels (ABVs) and their subsequent classification into hepatic (HBVs) and non-hepatic (non-HBVs). This stage had five interactions: a selective threshold for bone segmentation, selecting two seed points for kidney segmentation, selection of the inferior vena cava (IVC) entrance for starting ABVs segmentation, identification of the portal vein (PV) entrance to the liver, and identification of the IVC exit for classifying HBVs from other ABVs (non-HBVs). The second stage is automatic segmentation of the liver based on the segmented ABVs, as described in [4]. Towards full automation of our method, we developed a method [5] that segments ABVs automatically, tackling the first three interactions. In this paper, we propose full automation of the classification of ABVs into HBVs and non-HBVs and, consequently, full automation of the liver segmentation that we proposed in [4]. Results illustrate that the method is effective at segmentation of the liver through the portal abdominal CT images.
1982-01-01
physical reasoning and based on computational experience with similar equations. There is another non-automatic way: through proper scaling of all... (1979) for an automatic scheme for this scaling on a digital computer. Shampine (1980) reports a special definition of stiffness appropriate for... an analog for a laboratory that typically already has a digital computer. The digital is much more versatile. Also there does not yet exist "software
Analysis and Comparison of Some Automatic Vehicle Monitoring Systems
DOT National Transportation Integrated Search
1973-07-01
In 1970 UMTA solicited proposals and selected four companies to develop systems to demonstrate the feasibility of different automatic vehicle monitoring techniques. The demonstrations culminated in experiments in Philadelphia to assess the performance...
Automatic Evolution of Molecular Nanotechnology Designs
NASA Technical Reports Server (NTRS)
Globus, Al; Lawton, John; Wipke, Todd; Saini, Subhash (Technical Monitor)
1998-01-01
This paper describes strategies for automatically generating designs for analog circuits at the molecular level. Software maps out the edges and vertices of potential nanotechnology systems on graphs, then selects appropriate ones through evolutionary or genetic paradigms.
[Study on the automatic parameters identification of water pipe network model].
Jia, Hai-Feng; Zhao, Qi-Feng
2010-01-01
Based on an analysis of problems in the development and application of water pipe network models, automatic identification of model parameters is identified as a key bottleneck for model application in water supply enterprises. A methodology for automatic identification of water pipe network model parameters based on GIS and SCADA databases is proposed. The kernel algorithms of automatic parameter identification are then studied: RSA (Regionalized Sensitivity Analysis) is used for automatic recognition of sensitive parameters, and MCS (Monte Carlo Sampling) is used for automatic identification of parameters; a detailed technical route based on RSA and MCS is presented. A module for automatic identification of water pipe network model parameters was developed. Finally, a typical water pipe network was selected as a case study on automatic model parameter identification, and satisfactory results were achieved.
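As a rough illustration of the RSA/MCS idea described above (not the authors' implementation), the sketch below samples candidate parameters, runs a toy forward model in place of a real hydraulic solver, and keeps the "behavioral" samples whose simulated pressure matches a SCADA-style observation:

```python
# Illustrative Monte Carlo parameter identification: the forward model,
# observation, and acceptance tolerance are all invented stand-ins.
import numpy as np

rng = np.random.default_rng(1)

def simulate_pressure(roughness):
    # Hypothetical forward model: pressure drop grows with pipe roughness.
    return 50.0 - 12.0 * roughness + rng.normal(scale=0.1)

observed = 44.0                                  # SCADA measurement (made up)
samples = rng.uniform(0.2, 1.0, size=10_000)     # Monte Carlo sampling
errors = np.array([abs(simulate_pressure(r) - observed) for r in samples])
behavioral = samples[errors < 0.5]               # RSA-style acceptance
print(behavioral.mean(), behavioral.std())       # identified parameter range
```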
Thai Automatic Speech Recognition
2005-01-01
used in an external DARPA evaluation involving medical scenarios between an American doctor and a naïve monolingual Thai patient. 2. Thai Language... dictionary generation more challenging, and (3) the lack of word segmentation, which calls for automatic segmentation approaches to make n-gram language... requires a dictionary and provides various segmentation algorithms to automatically select suitable segmentations. Here we used a maximal matching
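The snippet breaks off at "maximal matching"; for readers unfamiliar with the technique, here is a minimal greedy maximal-matching segmenter. The toy dictionary and the fallback to single characters are illustrative choices, not details of the system described:

```python
# Greedy longest-match word segmentation, the basic idea behind maximal
# matching; the dictionary here is a toy example.
def maximal_matching(text, dictionary, max_word_len=6):
    words, i = [], 0
    while i < len(text):
        # Try the longest dictionary word starting at position i.
        for j in range(min(len(text), i + max_word_len), i, -1):
            if text[i:j] in dictionary or j == i + 1:
                words.append(text[i:j])   # fall back to one char if unknown
                i = j
                break
    return words

print(maximal_matching("thecatsat", {"the", "cat", "cats", "sat"}))
# ['the', 'cats', 'a', 't'] -- greedy matching picks 'cats', then falls back
# to single characters for the leftover material.
```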
Automated Detection of Microaneurysms Using Scale-Adapted Blob Analysis and Semi-Supervised Learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adal, Kedir M.; Sidebe, Desire; Ali, Sharib
2014-01-07
Despite several attempts, automated detection of microaneurysms (MAs) from digital fundus images remains an open issue, due to the subtle appearance of MAs against the surrounding tissue. In this paper, the microaneurysm detection problem is modeled as finding interest regions or blobs in an image, and an automatic local-scale selection technique is presented. Several scale-adapted region descriptors are then introduced to characterize these blob regions. A semi-supervised learning approach, which requires few manually annotated learning examples, is also proposed to train a classifier to detect true MAs. The developed system is built using only a few manually labeled and a large number of unlabeled retinal color fundus images. The performance of the overall system is evaluated on the Retinopathy Online Challenge (ROC) competition database. A competition performance measure (CPM) of 0.364 shows the competitiveness of the proposed system against state-of-the-art techniques, as well as the applicability of the proposed features to the analysis of fundus images.
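A sketch of scale-adapted blob detection with automatic scale selection, using scikit-image's Laplacian-of-Gaussian detector as a stand-in for the paper's method; the "fundus image" here is synthetic:

```python
# Blob detection with per-blob scale selection via the normalized LoG.
import numpy as np
from skimage.feature import blob_log

img = np.zeros((128, 128))
img[40:44, 40:44] = 1.0     # a small bright "lesion"
img[80:90, 80:90] = 1.0     # a larger one

# blob_log searches sigma in [min_sigma, max_sigma] and keeps, per blob,
# the scale that maximizes the scale-normalized LoG response.
blobs = blob_log(img, min_sigma=1, max_sigma=10, num_sigma=10, threshold=0.1)
for y, x, sigma in blobs:
    print(f"blob at ({x:.0f},{y:.0f}), selected scale sigma={sigma:.1f}")
```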
NASA Technical Reports Server (NTRS)
Saether, Erik; Glaessgen, Edward H.
2009-01-01
Atomistic simulations of intergranular fracture have indicated that grain-scale crack growth in polycrystalline metals can be direction dependent. At these material length scales, the atomic environment greatly influences the nature of intergranular crack propagation, through either brittle or ductile mechanisms, that are a function of adjacent grain orientation and direction of crack propagation. Methods have been developed to obtain cohesive zone models (CZM) directly from molecular dynamics simulations. These CZMs may be incorporated into decohesion finite element formulations to simulate fracture at larger length scales. A new directional decohesion element is presented that calculates the direction of Mode I opening and incorporates a material criterion for dislocation emission based on the local crystallographic environment to automatically select the CZM that best represents crack growth. The simulation of fracture in 2-D and 3-D aluminum polycrystals is used to illustrate the effect of parameterized CZMs and the effectiveness of directional decohesion finite elements.
Video auto stitching in multicamera surveillance system
NASA Astrophysics Data System (ADS)
He, Bin; Zhao, Gang; Liu, Qifang; Li, Yangyang
2012-01-01
This paper concerns the problem of automatic video stitching in a multi-camera surveillance system. Previous approaches have used multiple calibrated cameras for video mosaicking in large-scale monitoring applications. In this work, we formulate video stitching as a multi-image registration and blending problem in which not all cameras need to be calibrated, only a few selected master cameras. SURF is used to find matched pairs of image keypoints from different cameras, and the camera pose is then estimated and refined. A homography matrix is employed to calculate overlapping pixels, and a boundary resampling algorithm is finally implemented to blend the images. Simulation results demonstrate the efficiency of our method.
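A condensed OpenCV sketch of the matching, homography, and blending chain the abstract describes. ORB stands in for SURF (which is patented and only available in opencv-contrib builds), the input frames are hypothetical files, and the blend is a naive overwrite rather than a boundary resampling algorithm:

```python
# Keypoint matching + RANSAC homography + warp, as a stitching skeleton.
import cv2
import numpy as np

img1 = cv2.imread("cam1.png", cv2.IMREAD_GRAYSCALE)   # hypothetical frames
img2 = cv2.imread("cam2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(2000)
k1, d1 = orb.detectAndCompute(img1, None)
k2, d2 = orb.detectAndCompute(img2, None)

matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# RANSAC rejects outlier matches while estimating the homography.
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
mosaic = cv2.warpPerspective(img1, H, (img2.shape[1] * 2, img2.shape[0]))
mosaic[:, :img2.shape[1]] = img2    # naive overwrite in place of blending
cv2.imwrite("mosaic.png", mosaic)
```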
A consistent and uniform research earthquake catalog for the AlpArray region: preliminary results.
NASA Astrophysics Data System (ADS)
Molinari, I.; Bagagli, M.; Kissling, E. H.; Diehl, T.; Clinton, J. F.; Giardini, D.; Wiemer, S.
2017-12-01
The AlpArray initiative (www.alparray.ethz.ch) is a large-scale European collaboration (around 50 institutes involved) to study the entire Alpine orogen at high resolution with a variety of geoscientific methods. AlpArray provides unprecedentedly uniform station coverage of the region, with more than 650 broadband seismic stations, 300 of which are temporary. The AlpArray Seismic Network (AASN) is a joint effort of 25 institutes from 10 nations; it has operated since January 2016 and is expected to continue until the end of 2018. In this study, we establish a uniform earthquake catalog for the Greater Alpine region during the operation period of the AASN, with a target completeness of M2.5. The catalog has two main goals: 1) to calculate consistent and precise hypocenter locations and 2) to provide preliminary but uniform magnitudes across the region. The procedure is based on automatic high-quality P- and S-wave pickers, providing consistent phase arrival times in combination with a picking quality assessment. First, we detect all events in the region in 2016/2017 using an STA/LTA-based detector. Among the detected events, we select 50 geographically homogeneously distributed events with magnitudes ≥2.5 that are representative of the entire catalog. We manually pick the selected events to establish a consistent P- and S-phase reference data set, including arrival-time uncertainties. The reference data are used to adjust the automatic pickers and to assess their performance. In a first iteration, a simple P-picker algorithm is applied to the entire dataset, providing initial picks for the advanced MannekenPix (MPX) algorithm. In a second iteration, the MPX picker provides consistent and reliable automatic first-arrival P picks together with a pick-quality estimate. The derived automatic P picks are then used as initial values for a multi-component S-phase picking algorithm. Subsequently, automatic picks of all well-locatable earthquakes will be considered to calculate final minimum 1D P and S velocity models for the region with appropriate station corrections. Finally, all events are relocated with the NonLinLoc algorithm in combination with the updated 1D models. The proposed procedure represents a first step towards a uniform earthquake catalog for the entire Greater Alpine region using the AASN.
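For readers unfamiliar with STA/LTA detection, a bare-bones version of such a detector is sketched below; the window lengths, trigger threshold, and synthetic trace are illustrative, and production work would typically use a seismology library such as ObsPy:

```python
# Short-term-average / long-term-average detector on a synthetic trace.
import numpy as np

def sta_lta(trace, nsta, nlta):
    sq = trace.astype(float) ** 2
    csum = np.cumsum(sq)
    sta = (csum[nsta:] - csum[:-nsta]) / nsta    # short-term average energy
    lta = (csum[nlta:] - csum[:-nlta]) / nlta    # long-term average energy
    n = min(len(sta), len(lta))                  # align window ends
    return sta[-n:] / np.maximum(lta[-n:], 1e-12)

rng = np.random.default_rng(2)
trace = rng.normal(size=10_000)
trace[6000:6200] += rng.normal(scale=8.0, size=200)  # synthetic "event"
ratio = sta_lta(trace, nsta=50, nlta=1000)
print("first triggered samples:", np.flatnonzero(ratio > 4.0)[:5])
```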
Very Large Scale Integrated Circuits for Military Systems.
1981-01-01
ABBREVIATIONS: A/D Analog-to-Digital; AGC Automatic Gain Control; A/J Anti-Jam; ASP Advanced Signal Processor; AU Arithmetic Units; CAD Computer-Aided... electronic support measures (ESM) equipments (Ref. 23); in lieu of an adequate automatic processing capability, the function is now performed manually (Ref. 24), which involves... a human operator, displays, etc., and a sacrifice in performance (acquisition speed, saturation signal density). Various automatic processing
Use of automatic door closers improves fire safety.
Waterman, T E
1979-01-01
In a series of 16 full-scale fire tests, investigators at the IIT Research Institute concluded that automatic door control in the room of fire origin can significantly reduce the spread of toxic smoke and gases. The researchers also investigated the effects of sprinkler actuation and the functional relationship between sprinklers and automatic door closers. This report presents the results of the study and offers recommendations for health-care facilities.
Durbin, Kenneth R.; Tran, John C.; Zamdborg, Leonid; Sweet, Steve M. M.; Catherman, Adam D.; Lee, Ji Eun; Li, Mingxi; Kellie, John F.; Kelleher, Neil L.
2011-01-01
Applying high-throughput Top-Down MS to an entire proteome requires a yet-to-be-established model for data processing. Since Top-Down is becoming possible on a large scale, we report our latest software pipeline dedicated to capturing the full value of intact protein data in an automated fashion. For intact mass detection, we combine algorithms for processing MS1 data from both isotopically resolved (FT) and charge-state resolved (ion trap) LC-MS data, which are then linked to their fragment ions for database searching using ProSight. Automated determination of human keratin and tubulin isoforms is one result. Optimized for the intricacies of whole proteins, new software modules visualize proteome-scale data based on the LC retention time and intensity of intact masses, and enable selective detection of PTMs to automatically screen for acetylation, phosphorylation, and methylation. Software functionality was demonstrated using comparative LC-MS data from yeast strains in addition to human cells undergoing chemical stress. We present these advances as a key step toward realizing Top-Down MS on a proteomic scale. PMID:20848673
Hi-fidelity multi-scale local processing for visually optimized far-infrared Herschel images
NASA Astrophysics Data System (ADS)
Li Causi, G.; Schisano, E.; Liu, S. J.; Molinari, S.; Di Giorgio, A.
2016-07-01
In the context of the "Hi-Gal" multi-band full-plane mapping program for the Galactic Plane, as imaged by the Herschel far-infrared satellite, we have developed a semi-automatic tool that produces high-definition, high-quality color maps optimized for visual perception of extended features, such as bubbles and filaments, against high background variations. We project the map tiles of three selected bands onto a 3-channel panorama, which spans the central 130 degrees of galactic longitude by 2.8 degrees of galactic latitude, at a pixel scale of 3.2", in Cartesian galactic coordinates. We then process this image piecewise, applying a custom multi-scale local stretching algorithm, reinforced by a local multi-scale color balance. Finally, we apply an edge-preserving contrast enhancement to perform artifact-free detail sharpening. With this tool we have produced a stunning giga-pixel color image of the far-infrared Galactic Plane, made publicly available with the recent release of the Hi-Gal mosaics and compact source catalog.
Tan, Zhengguo; Hohage, Thorsten; Kalentev, Oleksandr; Joseph, Arun A; Wang, Xiaoqing; Voit, Dirk; Merboldt, K Dietmar; Frahm, Jens
2017-12-01
The purpose of this work is to develop an automatic method for the scaling of unknowns in model-based nonlinear inverse reconstructions and to evaluate its application to real-time phase-contrast (RT-PC) flow magnetic resonance imaging (MRI). Model-based MRI reconstructions of parametric maps which describe a physical or physiological function require the solution of a nonlinear inverse problem, because the list of unknowns in the extended MRI signal equation comprises multiple functional parameters and all coil sensitivity profiles. Iterative solutions therefore rely on an appropriate scaling of unknowns to numerically balance partial derivatives and regularization terms. The scaling of unknowns emerges as a self-adjoint and positive-definite matrix which is expressible by its maximal eigenvalue and solved by power iterations. The proposed method is applied to RT-PC flow MRI based on highly undersampled acquisitions. Experimental validations include numerical phantoms providing ground truth and a wide range of human studies in the ascending aorta, carotid arteries, deep veins during muscular exercise and cerebrospinal fluid during deep respiration. For RT-PC flow MRI, model-based reconstructions with automatic scaling not only offer velocity maps with high spatiotemporal acuity and much reduced phase noise, but also ensure fast convergence as well as accurate and precise velocities for all conditions tested, i.e. for different velocity ranges, vessel sizes and the simultaneous presence of signals with velocity aliasing. In summary, the proposed automatic scaling of unknowns in model-based MRI reconstructions yields quantitatively reliable velocities for RT-PC flow MRI in various experimental scenarios. Copyright © 2017 John Wiley & Sons, Ltd.
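The scaling step described above reduces to a largest-eigenvalue problem, which power iteration solves; a minimal sketch on a toy self-adjoint positive-definite matrix follows (the real operator acts on the full parameter and coil-profile vector):

```python
# Power iteration for the maximal eigenvalue of a symmetric PD matrix.
import numpy as np

def max_eigenvalue(A, iters=100):
    x = np.random.default_rng(3).normal(size=A.shape[0])
    for _ in range(iters):
        x = A @ x
        x /= np.linalg.norm(x)    # re-normalize each iteration
    return x @ A @ x               # Rayleigh quotient estimate

M = np.array([[4.0, 1.0], [1.0, 3.0]])   # toy self-adjoint PD matrix
print(max_eigenvalue(M), np.linalg.eigvalsh(M).max())  # should agree
```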
NASA Astrophysics Data System (ADS)
Tupas, M. E. A.; Dasallas, J. A.; Jiao, B. J. D.; Magallon, B. J. P.; Sempio, J. N. H.; Ramos, M. K. F.; Aranas, R. K. D.; Tamondong, A. M.
2017-10-01
The FAST-SIFT corner detector and descriptor extractor combination was used to automatically georeference DIWATA-1 Spaceborne Multispectral Imager (SMI) images. The Features from Accelerated Segment Test (FAST) algorithm detects corners or keypoints in an image, and these robustly detected keypoints have well-defined positions. Descriptors were computed using the Scale-Invariant Feature Transform (SIFT) extractor. The FAST-SIFT method effectively matched SMI same-subscene images detected by the NIR sensor. The method was also tested in stitching NIR images with varying subscenes swept by the camera. The slave images were matched to the master image, and the keypoints served as the ground control points. Keypoints are matched based on their descriptor vectors: nearest-neighbor matching is employed based on a metric distance between the descriptors, such as the Euclidean or city-block distance. Rough matching outputs not only correct matches but also faulty ones. A previous work on automatic georeferencing incorporates a geometric restriction; in this work, we applied a simplified version of that learning method. Random sample consensus (RANSAC) was used to eliminate fall-out matches and ensure the accuracy of the feature points from which the transformation parameters were derived: it identifies whether a point fits the transformation function and returns the inlier matches. The transformation matrix was solved by Affine, Projective, and Polynomial models. The accuracy of the automatic georeferencing method was determined by calculating the RMSE of interest points, selected randomly, between the master image and the transformed slave image.
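A sketch of the FAST-SIFT plus RANSAC chain described above, using OpenCV; the master/slave filenames are placeholders, SIFT requires OpenCV 4.4 or later, and only the affine model of the three tested is shown:

```python
# FAST corners + SIFT descriptors, RANSAC affine fit, and an RMSE check.
import cv2
import numpy as np

master = cv2.imread("master_nir.png", cv2.IMREAD_GRAYSCALE)  # placeholder files
slave = cv2.imread("slave_nir.png", cv2.IMREAD_GRAYSCALE)

fast = cv2.FastFeatureDetector_create()
sift = cv2.SIFT_create()
kp1 = fast.detect(master, None); kp1, d1 = sift.compute(master, kp1)
kp2 = fast.detect(slave, None);  kp2, d2 = sift.compute(slave, kp2)

matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(d1, d2)
src = np.float32([kp2[m.trainIdx].pt for m in matches])   # slave points
dst = np.float32([kp1[m.queryIdx].pt for m in matches])   # master points

# Affine model estimated with RANSAC; the inliers play the role of GCPs.
A, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)
proj = src @ A[:, :2].T + A[:, 2]
err2 = np.sum((proj - dst) ** 2, axis=1)
rmse = np.sqrt(err2[inliers.ravel() == 1].mean())
print("inliers:", int(inliers.sum()), "RMSE (px):", rmse)
```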
Automatic three-dimensional measurement of large-scale structure based on vision metrology.
Zhu, Zhaokun; Guan, Banglei; Zhang, Xiaohu; Li, Daokui; Yu, Qifeng
2014-01-01
All relevant key techniques involved in photogrammetric vision metrology for fully automatic 3D measurement of large-scale structures are studied. A new kind of coded target consisting of circular retroreflective discs is designed, and corresponding detection and recognition algorithms based on blob detection and clustering are presented. A three-stage strategy starting with view clustering is then proposed to achieve automatic network orientation. For the matching of non-coded targets, the concept of a matching path is proposed, and matches for each non-coded target are found by determining the optimal matching path, based on a novel voting strategy, among all possible ones. Experiments on the fixed keel of an airship have been conducted to verify the effectiveness and measuring accuracy of the proposed methods.
Detecting cheaters without thinking: testing the automaticity of the cheater detection module.
Van Lier, Jens; Revlin, Russell; De Neys, Wim
2013-01-01
Evolutionary psychologists have suggested that our brain is composed of evolved mechanisms. One extensively studied mechanism is the cheater detection module. This module would make people very good at detecting cheaters in a social exchange. A vast amount of research has illustrated performance facilitation on social contract selection tasks. This facilitation is attributed to the alleged automatic and isolated operation of the module (i.e., independent of general cognitive capacity). This study, using the selection task, tested the critical automaticity assumption in three experiments. Experiments 1 and 2 established that performance on social contract versions did not depend on cognitive capacity or age. Experiment 3 showed that experimentally burdening cognitive resources with a secondary task had no impact on performance on the social contract version. However, in all experiments, performance on a non-social contract version did depend on available cognitive capacity. Overall, findings validate the automatic and effortless nature of social exchange reasoning.
Towards the Optimal Pixel Size of dem for Automatic Mapping of Landslide Areas
NASA Astrophysics Data System (ADS)
Pawłuszek, K.; Borkowski, A.; Tarolli, P.
2017-05-01
Determining the appropriate spatial resolution of a digital elevation model (DEM) is a key step for effective landslide analysis based on remote sensing data. Several studies have demonstrated that choosing the finest DEM resolution is not always the best solution: different DEM resolutions can be applicable for different landslide applications. This study therefore aims to assess the influence of spatial resolution on automatic landslide mapping. A pixel-based approach using parametric and non-parametric classification methods, namely a feed-forward neural network (FFNN) and maximum likelihood classification (ML), was applied in this study. This additionally allowed us to determine the impact of the classification method on the selection of DEM resolution. Landslide-affected areas were mapped based on four DEMs generated at 1 m, 2 m, 5 m and 10 m spatial resolution from airborne laser scanning (ALS) data. The performance of the landslide mapping was then evaluated by applying a landslide inventory map and computing confusion matrices. The results of this study suggest that the finest DEM scale is not always the best fit, although working at 1 m DEM resolution on the micro-topography scale can show different results. The best performance was found using the 5 m DEM for FFNN and the 1 m DEM for ML classification.
Merlet, Benjamin; Paulhe, Nils; Vinson, Florence; Frainay, Clément; Chazalviel, Maxime; Poupin, Nathalie; Gloaguen, Yoann; Giacomoni, Franck; Jourdan, Fabien
2016-01-01
This article describes a generic programmatic method for mapping chemical compound libraries onto organism-specific metabolic networks from various databases (KEGG, BioCyc) and flat file formats (SBML and Matlab files). We show how this pipeline was successfully applied to decipher the coverage of chemical libraries set up by two metabolomics facilities, MetaboHub (French National infrastructure for metabolomics and fluxomics) and Glasgow Polyomics (GP), on the metabolic networks available in the MetExplore web server. The present generic protocol is designed to formalize and reduce the volume of information transfer between the library and the network database. Matching of metabolites between libraries and metabolic networks is based on InChIs or InChIKeys and therefore requires that these identifiers are specified in both libraries and networks. In addition to providing coverage statistics, the pipeline also allows the visualization of mapping results in the context of metabolic networks. To achieve this goal, we tackled issues of programmatic interaction between two servers, improvement of metabolite annotation in metabolic networks, and automatic loading of a mapping into the genome-scale metabolic network analysis tool MetExplore. It is important to note that this mapping can also be performed on a single organism of interest or a selection of organisms, and is thus not limited to large facilities.
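A toy version of the identifier-matching step described above: intersecting a compound library with a metabolic network on InChIKeys. All identifiers below are fabricated examples, not entries from the facilities' libraries:

```python
# Library-to-network matching on InChIKeys, plus a coverage statistic.
library = {
    "CPD1": "AAAAAAAAAAAAAA-BBBBBBBBBB-N",   # hypothetical InChIKeys
    "CPD2": "CCCCCCCCCCCCCC-DDDDDDDDDD-N",
}
network = {
    "M_glc__D_c": "CCCCCCCCCCCCCC-DDDDDDDDDD-N",
    "M_atp_c": "EEEEEEEEEEEEEE-FFFFFFFFFF-N",
}

by_key = {ik: mid for mid, ik in network.items()}     # index network by key
mapping = {cid: by_key[ik] for cid, ik in library.items() if ik in by_key}
coverage = len(mapping) / len(library)
print(mapping, f"coverage={coverage:.0%}")   # {'CPD2': 'M_glc__D_c'} 50%
```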
Colzato, Lorenza S; Steenbergen, Laura; Hommel, Bernhard
2018-01-23
The aim of the study was to throw more light on the relationship between rumination and cognitive-control processes. Seventy-eight adults were assessed with respect to rumination tendencies by means of the LEIDS-r before performing a Stroop task, an event-file task assessing the automatic retrieval of irrelevant information, an attentional set-shifting task, and the Attentional Network Task, which provided scores for alerting, orienting, and executive control functioning. The size of the Stroop effect and irrelevant retrieval in the event-file task were positively correlated with the tendency to ruminate, while all other scores did not correlate with any rumination scale. Controlling for depressive tendencies eliminated the Stroop-related finding (an observation that may account for previous failures to replicate), but not the event-file finding. Taken altogether, our results suggest that rumination does not affect attention, executive control, or response selection in general, but rather selectively impairs the control of stimulus-induced retrieval of irrelevant information.
The Lick-Gaertner automatic measuring system
NASA Technical Reports Server (NTRS)
Vasilevskis, S.; Popov, W. A.
1971-01-01
The Lick-Gaertner automatic equipment has been designed mainly for the measurement of stellar proper motions with reference to galaxies, and consists of two main components: the survey machine and the automatic measuring engine. The survey machine is used for initial inspection and selection of objects for subsequent measurement. Two plates, up to 17 x 17 inches each, are surveyed simultaneously by means of projection on a screen. The approximate positions of selected objects are measured by two optical screws: helical lines cut through an aluminum coating on glass cylinders. These approximate coordinates, to a precision of the order of 0.03 mm, are transmitted to a card punch by encoders connected with the cylinders.
Fromberger, Peter; Jordan, Kirsten; Steinkrauss, Henrike; von Herder, Jakob; Stolpmann, Georg; Kröner-Herwig, Birgit; Müller, Jürgen Leo
2013-05-01
Recent theories of sexuality highlight the importance of automatic and controlled attentional processes in viewing sexually relevant stimuli. The model of Spiering and Everaerd (2007) assumes that sexually relevant features of a stimulus are preattentively selected and automatically induce focal attention to these sexually relevant aspects. Whether this assumption proves true for pedophiles is unknown. It is the aim of this study to test this assumption empirically for people suffering from pedophilic interests. Twenty-two pedophiles, 8 nonpedophilic forensic controls, and 52 healthy controls simultaneously viewed the picture of a child and the picture of an adult while eye movements were measured. Entry time was assessed as a measure of automatic attentional processes, and relative fixation time was assessed as a measure of controlled attentional processes. Pedophiles demonstrated significantly shorter entry time to child stimuli than to adult stimuli. The opposite was the case for nonpedophiles, as they showed longer relative fixation time for adult stimuli; against all expectations, pedophiles also demonstrated longer relative fixation time for adult stimuli. The results confirmed the hypothesis that pedophiles automatically select sexually relevant stimuli (children). Contrary to all expectations, this automatic selection did not trigger focal attention to these sexually relevant pictures. Furthermore, pedophiles were first and longest attracted by the faces and pubic regions of children; nonpedophiles were first and longest attracted by the faces and breasts of adults. The results demonstrate, for the first time, that the face and pubic region are the most attracting regions in children for pedophiles. © 2013 American Psychological Association
Automatic selective attention as a function of sensory modality in aging.
Guerreiro, Maria J S; Adam, Jos J; Van Gerven, Pascal W M
2012-03-01
It was recently hypothesized that age-related differences in selective attention depend on sensory modality (Guerreiro, M. J. S., Murphy, D. R., & Van Gerven, P. W. M. (2010). The role of sensory modality in age-related distraction: A critical review and a renewed view. Psychological Bulletin, 136, 975-1022. doi:10.1037/a0020731). So far, this hypothesis has not been tested in automatic selective attention. The current study addressed this issue by investigating age-related differences in automatic spatial cueing effects (i.e., facilitation and inhibition of return [IOR]) across sensory modalities. Thirty younger (mean age = 22.4 years) and 25 older adults (mean age = 68.8 years) performed 4 left-right target localization tasks, involving all combinations of visual and auditory cues and targets. We used stimulus onset asynchronies (SOAs) of 100, 500, 1,000, and 1,500 ms between cue and target. The results showed facilitation (shorter reaction times with valid relative to invalid cues at shorter SOAs) in the unimodal auditory and in both cross-modal tasks but not in the unimodal visual task. In contrast, there was IOR (longer reaction times with valid relative to invalid cues at longer SOAs) in both unimodal tasks but not in either of the cross-modal tasks. Most important, these spatial cueing effects were independent of age. The results suggest that the modality hypothesis of age-related differences in selective attention does not extend into the realm of automatic selective attention.
NASA Astrophysics Data System (ADS)
Yusop, Hanafi M.; Ghazali, M. F.; Yusof, M. F. M.; Remli, M. A. Pi; Kamarulzaman, M. H.
2017-10-01
Recent studies suggest that the analysis of pressure transient signals is an accurate and low-cost method for leak and feature detection in water distribution systems. Transient phenomena occur due to sudden changes in the fluid's propagation in a pipeline system, caused by rapid pressure and flow fluctuations from events such as the rapid closing and opening of valves or pump failure. In this paper, the feasibility of the Hilbert-Huang transform (HHT) for analysing pressure transient signals is presented and discussed. HHT is a way to decompose a signal into intrinsic mode functions (IMFs). However, a drawback of HHT is the difficulty of selecting the suitable IMF for the subsequent data post-processing step, the Hilbert transform (HT). This paper shows that applying the integrated kurtosis-based algorithm for z-filter technique (I-Kaz) to the kurtosis ratio (I-Kaz-kurtosis) enables automatic selection of the IMF that should be used. The technique is demonstrated on a 57.90-meter medium-density polyethylene (MDPE) pipe installed with a single artificial leak. The analysis results using the I-Kaz-kurtosis ratio confirmed that the method can be used for automatic selection of the IMF even when the noise level ratio of the signal is low. Therefore, the I-Kaz-kurtosis ratio method is recommended as a means to implement automatic IMF selection for HHT analysis.
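A simplified stand-in for the I-Kaz-kurtosis selection is sketched below, assuming the PyEMD package for the decomposition and reducing the selection criterion to plain kurtosis (the published criterion is more elaborate); the transient signal is synthetic:

```python
# EMD decomposition, kurtosis-based IMF selection, then the Hilbert envelope.
import numpy as np
from scipy.stats import kurtosis
from scipy.signal import hilbert
from PyEMD import EMD

t = np.linspace(0, 1, 4096)
rng = np.random.default_rng(4)
signal = np.sin(2 * np.pi * 50 * t) + 0.3 * rng.normal(size=t.size)
signal[2000:2050] += 3.0 * np.exp(-np.arange(50) / 10.0)  # leak-like transient

imfs = EMD()(signal)                            # intrinsic mode functions
scores = [kurtosis(imf) for imf in imfs]        # impulsiveness per IMF
best = int(np.argmax(scores))                   # automatic IMF selection
envelope = np.abs(hilbert(imfs[best]))          # HT of the chosen IMF
print(f"selected IMF {best} of {len(imfs)}; "
      f"peak envelope at t={t[envelope.argmax()]:.3f}")
```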
Hirose, Tomoaki; Igami, Tsuyoshi; Koga, Kusuto; Hayashi, Yuichiro; Ebata, Tomoki; Yokoyama, Yukihiro; Sugawara, Gen; Mizuno, Takashi; Yamaguchi, Junpei; Mori, Kensaku; Nagino, Masato
2017-03-01
Fusion angiography using reconstructed multidetector-row computed tomography (MDCT) images and cholangiography using reconstructed MDCT images with a cholangiographic agent include an anatomical gap due to the different periods of MDCT scanning. To overcome such gaps, we attempted to develop a cholangiography procedure that automatically reconstructs a cholangiogram from portal-phase MDCT images. The automatically produced cholangiography procedure utilized an original software program developed by the Graduate School of Information Science, Nagoya University. This program constructed five candidate biliary tracts and automatically selected one as the candidate for cholangiography. The clinical value of the automatically produced cholangiography procedure was estimated based on a comparison with manually produced cholangiography. Automatically produced cholangiograms were reconstructed for 20 patients who underwent MDCT scanning before biliary drainage for distal biliary obstruction. The procedure was able to extract the 5 main biliary branches and the 21 subsegmental biliary branches in 55% and 25% of the cases, respectively. The extent of aberrant connections and aberrant extractions outside the biliary tract was acceptable. Among all of the cholangiograms, 5 were clinically applicable with no correction, 8 were applicable with modest improvements, and 3 produced a correct cholangiography before automatic selection. Although our procedure requires further improvement based on the analysis of additional patient data, it may represent an alternative to direct cholangiography in the future.
Automatic Design of Synthetic Gene Circuits through Mixed Integer Non-linear Programming
Huynh, Linh; Kececioglu, John; Köppe, Matthias; Tagkopoulos, Ilias
2012-01-01
Automatic design of synthetic gene circuits poses a significant challenge to synthetic biology, primarily due to the complexity of biological systems and the lack of rigorous optimization methods that can cope with the combinatorial explosion as the number of biological parts increases. Current optimization methods for synthetic gene design rely on heuristic algorithms that are usually not deterministic, deliver sub-optimal solutions, and provide no guarantees of convergence or error bounds. Here, we introduce an optimization framework for the problem of part selection in synthetic gene circuits that is based on mixed integer non-linear programming (MINLP), a deterministic method that finds the globally optimal solution and guarantees convergence in finite time. Given a synthetic gene circuit, a library of characterized parts, and user-defined constraints, our method can find the optimal selection of parts that satisfies the constraints and best approximates the objective function given by the user. We evaluated the proposed method in the design of three synthetic circuits (a toggle switch, a transcriptional cascade, and a band detector), with both experimentally constructed and synthetic promoter libraries. Scalability and robustness analysis shows that the proposed framework scales well with the library size and the solution space. The work described here is a step towards a unifying, realistic framework for the automated design of biological circuits. PMID:22536398
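A toy linearized relative of the part-selection problem, formulated with PuLP, is sketched below; the paper's method is a true MINLP solved deterministically, whereas this sketch only illustrates how binary part choices and constraints enter an optimization model. The parts, strengths, and target value are invented:

```python
# Binary part selection as a small integer program (toy, linearized).
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, value

promoters = {"pLac": 1.2, "pTet": 3.5, "pBAD": 2.1}   # hypothetical strengths
target = 2.0                                           # desired expression level

prob = LpProblem("part_selection", LpMinimize)
pick = {p: LpVariable(f"use_{p}", cat="Binary") for p in promoters}
err = LpVariable("abs_error", lowBound=0)

prob += err                                            # minimize |output - target|
output = lpSum(pick[p] * s for p, s in promoters.items())
prob += output - target <= err                         # linearized absolute value
prob += target - output <= err
prob += lpSum(pick.values()) == 1                      # choose exactly one part

prob.solve()
print({p: int(value(v)) for p, v in pick.items()}, "error:", value(err))
```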
NASA Astrophysics Data System (ADS)
Fredouille, Corinne; Pouchoulin, Gilles; Ghio, Alain; Revis, Joana; Bonastre, Jean-François; Giovanni, Antoine
2009-12-01
This paper addresses voice disorder assessment. It proposes an original back-and-forth methodology involving an automatic classification system as well as the knowledge of human experts (machine learning experts, phoneticians, and pathologists). The goal of this methodology is to bring a better understanding of the acoustic phenomena related to dysphonia. The automatic system was validated on a dysphonic corpus (80 female voices) rated according to the GRBAS perceptual scale by an expert jury. First, focusing on the frequency domain, the classification system showed the value of the 0-3000 Hz frequency band for the classification task based on the GRBAS scale. An automatic phonemic analysis then underlined the significance of consonants, and more surprisingly of unvoiced consonants, for the same classification task. Submitted to the human experts, these observations led to a manual analysis of unvoiced plosives, which highlighted a lengthening of voice onset time (VOT) with dysphonia severity, validated by a preliminary statistical analysis.
MatchGUI: A Graphical MATLAB-Based Tool for Automatic Image Co-Registration
NASA Technical Reports Server (NTRS)
Ansar, Adnan I.
2011-01-01
MatchGUI software, based on MATLAB, automatically matches two images and displays the match result by superimposing one image on the other. A slider bar allows focus to shift between the two images. There are tools for zoom, auto-crop to the overlap region, and basic image markup. Given a pair of ortho-rectified images (focused primarily on Mars orbital imagery for now), this software automatically co-registers the imagery so that corresponding image pixels are aligned. MatchGUI requires minimal user input and performs a registration over scale and in-plane rotation fully automatically.
Automatic assessment of functional health decline in older adults based on smart home data.
Alberdi Aramendi, Ane; Weakley, Alyssa; Aztiria Goenaga, Asier; Schmitter-Edgecombe, Maureen; Cook, Diane J
2018-05-01
In the context of an aging population, tools must be developed to help elderly people live independently. The goal of this paper is to evaluate the possibility of using unobtrusively collected, activity-aware smart home behavioral data to automatically detect one of the most common consequences of aging: functional health decline. After gathering longitudinal smart home data from 29 older adults for an average of more than two years, we automatically labeled the data with corresponding activity classes and extracted time-series statistics containing 10 behavioral features. Using these data, we created regression models to predict absolute and standardized functional health scores, as well as classification models to detect reliable absolute change and positive and negative fluctuations in everyday functioning. Functional health was assessed every six months by means of the Instrumental Activities of Daily Living-Compensation (IADL-C) scale. Results show that the total IADL-C score and subscores can be predicted by means of activity-aware smart home data, as can reliable changes in these scores. Positive and negative fluctuations in everyday functioning are harder to detect using in-home behavioral data, yet changes in social skills have been shown to be predictable. Future work must focus on improving the sensitivity of the presented models and performing an in-depth feature selection to improve overall accuracy. Copyright © 2018 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Guthoff, Rudolf F.; Zhivov, Andrey; Stachs, Oliver
2010-02-01
The aim of the study was to produce two-dimensional reconstruction maps of the living corneal sub-basal nerve plexus by in vivo laser scanning confocal microscopy in real time. CLSM source data (frame rate 30 Hz, 384x384 pixels) were used to create large-scale maps of the scanned area by selecting the Automatic Real Time (ART) composite mode. The mapping algorithm is based on an affine transformation. Microscopy of the sub-basal nerve plexus was performed on normal and LASIK eyes as well as on rabbit eyes. Real-time mapping of the sub-basal nerve plexus was performed at large scale, up to a size of 3.2 mm x 3.2 mm. The developed method enables real-time in vivo mapping of the sub-basal nerve plexus, which is strictly necessary for statistically sound conclusions about morphometric plexus alterations.
Tooth labeling in cone-beam CT using deep convolutional neural network for forensic identification
NASA Astrophysics Data System (ADS)
Miki, Yuma; Muramatsu, Chisako; Hayashi, Tatsuro; Zhou, Xiangrong; Hara, Takeshi; Katsumata, Akitoshi; Fujita, Hiroshi
2017-03-01
In large disasters, dental records play an important role in forensic identification. However, filing dental charts for corpses is not an easy task for general dentists, and it is laborious and time-consuming work in cases of large-scale disasters. We have been investigating a tooth-labeling method for dental cone-beam CT images for the purpose of automatic filing of dental charts. In our method, individual teeth in CT images are detected and classified into seven tooth types using a deep convolutional neural network. We employed a fully convolutional network using the AlexNet architecture for detecting each tooth and applied our previous method, using a regular AlexNet, for classifying the detected teeth into the seven tooth types. From 52 CT volumes obtained by two imaging systems, five images each were randomly selected as test data, and the remaining 42 cases were used as training data. The results showed a tooth detection accuracy of 77.4% with an average of 5.8 false detections per image. The results indicate the potential utility of the proposed method for automatic recording of dental information.
Apparatus enables automatic microanalysis of body fluids
NASA Technical Reports Server (NTRS)
Soffen, G. A.; Stuart, J. L.
1966-01-01
Apparatus will automatically and quantitatively determine body fluid constituents which are amenable to analysis by fluorometry or colorimetry. The results of the tests are displayed as percentages of full scale deflection on a strip-chart recorder. The apparatus can also be adapted for microanalysis of various other fluids.
Automatic transducer switching provides accurate wide range measurement of pressure differential
NASA Technical Reports Server (NTRS)
Yoder, S. K.
1967-01-01
Automatic pressure transducer switching network sequentially selects any one of a number of limited-range transducers as gas pressure rises or falls, extending the range of measurement and lessening the chances of damage due to high pressure.
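The switching logic amounts to picking the narrowest-range transducer that still covers the current pressure, for the best resolution; a sketch with invented ranges and a headroom margin:

```python
# Range-switching logic: narrowest covering range wins, with 10% headroom
# before switching up, so a rising pressure never exceeds a transducer's limit.
def select_transducer(pressure_psi, ranges=((0, 1), (0, 10), (0, 100))):
    for lo, hi in ranges:                   # ranges sorted narrow to wide
        if lo <= pressure_psi <= 0.9 * hi:  # keep 10% headroom
            return (lo, hi)
    return ranges[-1]                        # fall back to the widest range

for p in (0.4, 5.0, 42.0):
    print(p, "->", select_transducer(p))
```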
Mento, Giovanni
2017-12-01
A main distinction has been proposed between voluntary and automatic mechanisms underlying temporal orienting (TO) of selective attention. Voluntary TO implies the endogenous directing of attention induced by symbolic cues. Conversely, automatic TO is exogenously instantiated by the physical properties of stimuli. A well-known example of automatic TO is sequential effects (SEs), which refer to the adjustments in participants' behavioral performance as a function of the trial-by-trial sequential distribution of the foreperiod between two stimuli. In this study a group of healthy adults underwent a cued reaction time task purposely designed to assess both voluntary and automatic TO. During the task, both post-cue and post-target event-related potentials (ERPs) were recorded by means of a high spatial resolution EEG system. In the results of the post-cue analysis, the P3a and P3b were identified as two distinct ERP markers showing distinguishable spatiotemporal features and reflecting automatic and voluntary a priori expectancy generation, respectively. The brain source reconstruction further revealed that distinct cortical circuits supported these two temporally dissociable components. Namely, the voluntary P3b was supported by a left sensorimotor network, while the automatic P3a was generated by a more distributed frontoparietal circuit. Additionally, post-cue contingent negative variation (CNV) and post-target P3 modulations were observed as common markers of voluntary and automatic expectancy implementation and response selection, although partially dissociable neural networks subserved these two mechanisms. Overall, these results provide new electrophysiological evidence suggesting that distinct neural substrates can be recruited depending on the voluntary or automatic cognitive nature of the cognitive mechanisms subserving TO. Copyright © 2017 Elsevier Ltd. All rights reserved.
Searching for Ultra-cool Objects at the Limits of Large-scale Surveys
NASA Astrophysics Data System (ADS)
Pinfield, D. J.; Patel, K.; Zhang, Z.; Gomes, J.; Burningham, B.; Day-Jones, A. C.; Jenkins, J.
2011-12-01
We have made a search (to Y=19.6) of the UKIDSS Large Area Survey (LAS DR7) for objects detected only in the Y-band. We have identified and removed contamination due to solar system objects, dust specks in the WFCAM optical path, persistence in the WFCAM detectors, and other sources of spurious single-source Y-detections in the UKIDSS LAS database. In addition to our automated selection procedure, we have visually inspected the ~600 automatically selected candidates to provide an additional level of quality filtering. This has resulted in 55 good candidates that await follow-up observations to confirm their nature. Ultra-cool LAS Y-only objects would have blue Y-J colours combined with very red optical-NIR SEDs - characteristics shared by Jupiter, and suggested by an extrapolation of the Y-J colour trend seen for the latest T dwarfs currently known.
Discriminative region extraction and feature selection based on the combination of SURF and saliency
NASA Astrophysics Data System (ADS)
Deng, Li; Wang, Chunhong; Rao, Changhui
2011-08-01
The objective of this paper is to provide a possible optimization of the salient region algorithm, which is extensively used in recognizing and learning object categories. The salient region algorithm has the advantages of intra-class tolerance, global scoring of features, and automatic selection of prominent scale within a certain range. However, its major limitation is performance, which is what we attempt to improve. The algorithm can be accelerated by reducing the number of pixels involved in the saliency calculation. We use interest points detected by the fast-Hessian detector of SURF as candidate features for the saliency operation, rather than the whole pixel set of the image. This implementation is thereby called Saliency-based Optimization over SURF (SOSU for short). Experiments show that bringing in such a fast detector significantly speeds up the algorithm, while robustness to intra-class diversity ensures object recognition accuracy.
Recent Research on the Automated Mass Measuring System
NASA Astrophysics Data System (ADS)
Yao, Hong; Ren, Xiao-Ping; Wang, Jian; Zhong, Rui-Lin; Ding, Jing-An
The research and development of robotic mass measurement systems, together with representative automatic systems, are introduced in this paper, and a sub-multiple calibration scheme adopted on the fully automatic CCR10 system is then discussed. The automatic robot system can perform dissemination of the mass scale without any manual intervention, as well as fast calibration of weight samples against a reference weight. Finally, an evaluation of the expanded uncertainty is given.
StochKit2: software for discrete stochastic simulation of biochemical systems with events.
Sanft, Kevin R; Wu, Sheng; Roh, Min; Fu, Jin; Lim, Rone Kwei; Petzold, Linda R
2011-09-01
StochKit2 is the first major upgrade of the popular StochKit stochastic simulation software package. StochKit2 provides highly efficient implementations of several variants of Gillespie's stochastic simulation algorithm (SSA), and tau-leaping with automatic step size selection. StochKit2 features include automatic selection of the optimal SSA method based on model properties, event handling, and automatic parallelism on multicore architectures. The underlying structure of the code has been completely updated to provide a flexible framework for extending its functionality. StochKit2 runs on Linux/Unix, Mac OS X and Windows. It is freely available under GPL version 3 and can be downloaded from http://sourceforge.net/projects/stochkit/. petzold@engineering.ucsb.edu.
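For readers unfamiliar with Gillespie's direct method, a minimal SSA for a single decay reaction A -> B is sketched below; StochKit2 implements far more general and efficient variants (including tau-leaping and automatic method selection). Rate and counts are arbitrary:

```python
# Direct-method SSA for one reaction A -> B with propensity k*A.
import numpy as np

rng = np.random.default_rng(5)
k, A, t, t_end = 0.1, 100, 0.0, 50.0

while t < t_end and A > 0:
    propensity = k * A                       # a0 for the single reaction
    t += rng.exponential(1.0 / propensity)   # waiting time to next reaction
    A -= 1                                   # fire A -> B
print(f"A remaining at t={t:.1f}: {A}")
```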
Zhu, Jingbo; Liu, Baoyue; Shan, Shibo; Ding, Yanl; Kou, Zinong; Xiao, Wei
2015-08-01
In order to meet the needs of efficient purification of products from natural resources, this paper developed an automatic vacuum liquid chromatography device (AUTO-VLC) and applied it to the component separation of petroleum ether extracts of Schisandra chinensis (Turcz.) Baill. The device comprised a solvent system, a 10-position distribution valve, a 3-position change valve, dynamic axial compression chromatographic columns of three diameters, and a 10-position fraction valve. A programmable logic controller (PLC), the S7-200, was adopted to realize automatic control and monitoring of mobile phase changing, column selection, separation time setting, and fraction collection. The separation results showed that six fractions (S1-S6) of different chemical components were obtained from 100 g of the Schisandra chinensis (Turcz.) Baill. petroleum ether phase by the AUTO-VLC with a 150 mm diameter dynamic axial compression chromatographic column. A new method for screening the VLC separation parameters by multiple-development TLC was developed and confirmed. The initial mobile phase of the AUTO-VLC was selected by requiring the Rf of all target compounds to range from 0 to 0.45 in the first development on the TLC; the gradient elution ratio was selected according to the k value (the slope of the linear function of Rf value versus the number of developments on the TLC) and the resolution of the target compounds; the number of elutions (n) was calculated by the formula n ≈ ΔRf/k. A total of four compounds with purities above 85% and 13 other components were separated from S5 under the selected conditions in only 17 h. The development of the automatic VLC and its method are therefore significant for the automatic and systematic separation of traditional Chinese medicines.
NASA Astrophysics Data System (ADS)
Durocher, M.; Mostofi Zadeh, S.; Burn, D. H.; Ashkar, F.
2017-12-01
Floods are one of the most costly hazards, and frequency analysis of river discharges is an important part of the tools at our disposal to evaluate their inherent risks and to provide an adequate response. In comparison to the common examination of annual streamflow maxima, peaks over threshold (POT) is an interesting alternative that makes better use of the available information by including more than one flood event per year (on average). However, a major challenge is the selection of a satisfactory threshold above which peaks are assumed to respect the conditions necessary for an adequate estimation of the risk. Additionally, studies have shown that POT is a valuable approach for investigating the evolution of flood regimes in the context of climate change. Recently, automatic procedures for threshold selection were suggested to guide this important choice, which otherwise relies on graphical tools and expert judgment. Furthermore, an objective automatic procedure allows the analysis to be quickly repeated on a large number of samples, which is useful for large databases or for uncertainty analysis based on resampling. This study investigates the impact of such procedures in a case study including many sites across Canada. A simulation study is conducted to evaluate the bias and predictive power of the automatic procedures in similar conditions, as well as the power of derived nonstationarity tests. The results obtained are also evaluated in the light of expert judgments established in a previous study. Ultimately, this study provides a thorough examination of the considerations that need to be addressed when conducting POT analysis using automatic threshold selection.
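One common automatic threshold selection heuristic, sketched below on invented data, fits a generalized Pareto distribution to the exceedances of each candidate threshold and picks the lowest threshold beyond which the estimated shape parameter stabilizes; this illustrates the idea, not the specific procedures evaluated in the study:

```python
# GPD-based threshold stability check for peaks-over-threshold analysis.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(6)
flows = rng.gumbel(loc=100, scale=25, size=5000)     # synthetic peak flows

candidates = np.quantile(flows, np.linspace(0.80, 0.98, 10))
shapes = []
for u in candidates:
    excess = flows[flows > u] - u
    c, _, _ = genpareto.fit(excess, floc=0)          # fix location at 0
    shapes.append(c)

# Crude stability rule: smallest threshold whose shape estimate is already
# close to the estimate at the highest candidate threshold.
stable = next(u for u, c in zip(candidates, shapes) if abs(c - shapes[-1]) < 0.05)
print("selected threshold:", round(stable, 1))
```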
Ilunga-Mbuyamba, Elisee; Avina-Cervantes, Juan Gabriel; Cepeda-Negrete, Jonathan; Ibarra-Manzano, Mario Alberto; Chalopin, Claire
2017-12-01
Brain tumor segmentation is a routine process in a clinical setting and provides useful information for diagnosis and treatment planning. Manual segmentation, performed by physicians or radiologists, is a time-consuming task due to the large quantity of medical data generated presently. Hence, automatic segmentation methods are needed, and several approaches have been introduced in recent years, including Localized Region-based Active Contour Models (LRACMs). There are many popular LRACMs, but each of them presents strengths and weaknesses. In this paper, the automatic selection of an LRACM based on image content and its application to brain tumor segmentation is presented. Thereby, a framework to select one of three LRACMs, i.e., Local Gaussian Distribution Fitting (LGDF), localized Chan-Vese (C-V), and the Localized Active Contour Model with Background Intensity Compensation (LACM-BIC), is proposed. Twelve visual features are extracted to properly select the method that should process a given input image. The system is based on a supervised approach. Applied specifically to Magnetic Resonance Imaging (MRI) images, the experiments showed that the proposed system is able to correctly select the suitable LRACM to handle a specific image. Consequently, the selection framework achieves better accuracy than the three LRACMs applied separately. Copyright © 2017 Elsevier Ltd. All rights reserved.
Automatic Real Time Ionogram Scaler with True Height Analysis - Artist
1983-07-01
scaled. The corresponding autoscaled values were compared with the manually scaled h'F, h'F2, fminF, foE, foEs, h'E and h'Es. The ARTIST program... [OCR-garbled report-cover text omitted] ...TYPE OF REPORT & PERIOD COVERED: Scientific Report No. 7, AUTOMATIC REAL TIME IONOGRAM SCALER WITH TRUE HEIGHT ANALYSIS - ARTIST
Self-aggregation in scaled principal component space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, Chris H.Q.; He, Xiaofeng; Zha, Hongyuan
2001-10-05
Automatic grouping of voluminous data into meaningful structures is a challenging task frequently encountered in broad areas of science, engineering and information processing. These data clustering tasks are frequently performed in Euclidean space or a subspace chosen from principal component analysis (PCA). Here we describe a space obtained by a nonlinear scaling of PCA in which data objects self-aggregate automatically into clusters. Projection into this space gives sharp distinctions among clusters. Gene expression profiles of cancer tissue subtypes, Web hyperlink structure and Internet newsgroups are analyzed to illustrate interesting properties of the space.
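An illustrative analogue of the idea (a simplification, not the authors' exact nonlinear scaling): project data into a PCA subspace, rescale the projections, and let a simple clusterer pick up the self-aggregated groups:

```python
# PCA projection + row scaling + clustering on two synthetic Gaussian groups.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
X = np.vstack([rng.normal(0, 1, (100, 20)),
               rng.normal(4, 1, (100, 20))])       # two latent clusters

Z = PCA(n_components=2).fit_transform(X)
Z /= np.linalg.norm(Z, axis=1, keepdims=True)      # scale rows to unit norm
labels = KMeans(n_clusters=2, n_init=10).fit_predict(Z)
print("cluster sizes:", np.bincount(labels))       # expect roughly 100/100
```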
Revell, James; Mirmehdi, Majid; McNally, Donal
2005-06-01
We present the development and validation of an image-based speckle tracking methodology for determining temporal two-dimensional (2-D) axial and lateral displacement and strain fields from ultrasound video streams. We refine a multiple-scale region matching approach, incorporating novel solutions to known speckle tracking problems. Key contributions include automatic similarity measure selection to adapt to varying speckle density, quantified trajectory fields, and spatiotemporal elastograms. Results are validated using tissue-mimicking phantoms and in vitro data before being applied to in vivo musculoskeletal ultrasound sequences. The method presented has the potential to improve clinical knowledge of tendon pathology arising from carpal tunnel syndrome, inflammation from implants, sports injuries, and many other conditions.
Mark, Daniel B.; Anstrom, Kevin J.; McNulty, Steven E.; Flaker, Greg C.; Tonkin, Andrew M.; Smith, Warren M.; Toff, William D.; Dorian, Paul; Clapp-Channing, Nancy E.; Anderson, Jill; Johnson, George; Schron, Eleanor B.; Poole, Jeanne E.; Lee, Kerry L.; Bardy, Gust H.
2010-01-01
Background Public access automatic external defibrillators (AEDs) can save lives, but most deaths from out-of-hospital sudden cardiac arrest occur at home. The Home Automatic External Defibrillator Trial (HAT) found no survival advantage for adding a home AED to cardiopulmonary resuscitation (CPR) training for 7001 patients with a prior anterior wall myocardial infarction. Quality of life (QOL) outcomes for both the patient and spouse/companion were secondary endpoints. Methods A subset of 1007 study patients and their spouse/companions was randomly selected for ascertainment of QOL by structured interview at baseline and 12 and 24 months following enrollment. The primary QOL measures were the Medical Outcomes Study 36-Item Short-Form (SF-36) psychological well-being (reflecting anxiety and depression) and vitality (reflecting energy and fatigue) subscales. Results For patients and spouse/companions, the psychological well-being and vitality scales did not differ significantly between those randomly assigned an AED plus CPR training and controls who received CPR training only. None of the other QOL measures collected showed a clinically and statistically significant difference between treatment groups. Patients in the AED group were more likely to report being extremely or quite a bit reassured by their treatment assignment. Spouse/companions in the AED group reported being less often nervous about the possibility of using AED/CPR treatment than those in the CPR group. Conclusions Adding access to a home AED to CPR training did not affect quality of life either for patients with a prior anterior myocardial infarction or their spouse/companion but did provide more reassurance to the patients without increasing anxiety for spouse/companions. PMID:20362722
AdjScales: Visualizing Differences between Adjectives for Language Learners
NASA Astrophysics Data System (ADS)
Sheinman, Vera; Tokunaga, Takenobu
In this study we introduce AdjScales, a method for scaling similar adjectives by their strength. It combines existing Web-based computational linguistic techniques in order to automatically differentiate similar adjectives that describe the same property by strength. Though this kind of information is rarely present in most lexical resources and dictionaries, it may be useful for language learners who try to distinguish between similar words. Additionally, learners might gain from a simple visualization of these differences using unidimensional scales. The method is evaluated by comparison with annotations of a subset of adjectives from WordNet by four native English speakers. It is also compared against two non-native speakers of English. The collected annotation is an interesting resource in its own right. This work is a first step toward automatic differentiation of meaning between similar words for language learners. AdjScales can be useful for lexical resource enhancement.
NASA Technical Reports Server (NTRS)
Hasler, A. F.; Strong, J.; Woodward, R. H.; Pierce, H.
1991-01-01
Results are presented on an automatic stereo analysis of cloud-top heights from nearly simultaneous satellite image pairs from the GOES and NOAA satellites, using a massively parallel processor computer. Comparisons of computer-derived height fields and manually analyzed fields show that the automatic analysis technique shows promise for performing routine stereo analysis in a real-time environment, providing a useful forecasting tool by augmenting observational data sets of severe thunderstorms and hurricanes. Simulations using synthetic stereo data show that it is possible to automatically resolve small-scale features such as 4000-m-diam clouds to about 1500 m in the vertical.
Code of Federal Regulations, 2011 CFR
2011-01-01
... may conduct a review of the test records. The Secretary may then conduct enforcement testing of that...) For automatic commercial ice makers, as well as commercial refrigerators, freezers, and refrigerator... numbers to select the units to be tested. (ii) For automatic commercial ice makers, as well as commercial...
Detecting Cheaters without Thinking: Testing the Automaticity of the Cheater Detection Module
Van Lier, Jens; Revlin, Russell; De Neys, Wim
2013-01-01
Evolutionary psychologists have suggested that our brain is composed of evolved mechanisms. One extensively studied mechanism is the cheater detection module. This module would make people very good at detecting cheaters in a social exchange. A vast amount of research has illustrated performance facilitation on social contract selection tasks. This facilitation is attributed to the alleged automatic and isolated operation of the module (i.e., independent of general cognitive capacity). This study, using the selection task, tested the critical automaticity assumption in three experiments. Experiments 1 and 2 established that performance on social contract versions did not depend on cognitive capacity or age. Experiment 3 showed that experimentally burdening cognitive resources with a secondary task had no impact on performance on the social contract version. However, in all experiments, performance on a non-social contract version did depend on available cognitive capacity. Overall, findings validate the automatic and effortless nature of social exchange reasoning. PMID:23342012
Approximation, abstraction and decomposition in search and optimization
NASA Technical Reports Server (NTRS)
Ellman, Thomas
1992-01-01
In this paper, I discuss four different areas of my research. One portion of my research has focused on automatic synthesis of search control heuristics for constraint satisfaction problems (CSPs). I have developed techniques for automatically synthesizing two types of heuristics for CSPs; one such type, filtering functions, is used to remove portions of a search space from consideration. Another portion of my research is focused on automatic synthesis of hierarchic algorithms for solving constraint satisfaction problems (CSPs). I have developed a technique for constructing hierarchic problem solvers based on numeric interval algebra. Another portion of my research is focused on automatic decomposition of design optimization problems. We are using the design of racing yacht hulls as a testbed domain for this research. Decomposition is especially important in the design of complex physical shapes such as yacht hulls. Another portion of my research is focused on intelligent model selection in design optimization. The model selection problem results from the difficulty of using exact models to analyze the performance of candidate designs.
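As an aside, the idea of a filtering function can be illustrated with a small sketch (not Ellman's synthesized heuristics): prune domain values that are inconsistent with a partial assignment before branching. The map-colouring domains and constraints below are invented.

```python
# Minimal CSP domain filtering (forward-checking style pruning).
def filter_domains(domains, constraints, assignment):
    """Remove values incompatible with the partial assignment.
    `constraints` maps a pair of variables to a compatibility predicate."""
    pruned = {v: set(vals) for v, vals in domains.items()}
    for (x, y), ok in constraints.items():
        if x in assignment and y not in assignment:
            pruned[y] = {vy for vy in pruned[y] if ok(assignment[x], vy)}
    return pruned

# Tiny map-colouring example: adjacent regions must differ.
domains = {"A": {"r", "g"}, "B": {"r", "g"}, "C": {"r", "g"}}
constraints = {("A", "B"): lambda a, b: a != b,
               ("A", "C"): lambda a, b: a != b}
print(filter_domains(domains, constraints, {"A": "r"}))
# B and C are pruned to {'g'}
```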
Visualization of conserved structures by fusing highly variable datasets.
Silverstein, Jonathan C; Chhadia, Ankur; Dech, Fred
2002-01-01
Skill, effort, and time are required to identify and visualize anatomic structures in three-dimensions from radiological data. Fundamentally, automating these processes requires a technique that uses symbolic information not in the dynamic range of the voxel data. We were developing such a technique based on mutual information for automatic multi-modality image fusion (MIAMI Fuse, University of Michigan). This system previously demonstrated facility at fusing one voxel dataset with integrated symbolic structure information to a CT dataset (different scale and resolution) from the same person. The next step of development of our technique was aimed at accommodating the variability of anatomy from patient to patient by using warping to fuse our standard dataset to arbitrary patient CT datasets. A standard symbolic information dataset was created from the full color Visible Human Female by segmenting the liver parenchyma, portal veins, and hepatic veins and overwriting each set of voxels with a fixed color. Two arbitrarily selected patient CT scans of the abdomen were used for reference datasets. We used the warping functions in MIAMI Fuse to align the standard structure data to each patient scan. The key to successful fusion was the focused use of multiple warping control points that place themselves around the structure of interest automatically. The user assigns only a few initial control points to align the scans. Fusion 1 and 2 transformed the atlas with 27 points around the liver to CT1 and CT2 respectively. Fusion 3 transformed the atlas with 45 control points around the liver to CT1 and Fusion 4 transformed the atlas with 5 control points around the portal vein. The CT dataset is augmented with the transformed standard structure dataset, such that the warped structure masks are visualized in combination with the original patient dataset. This combined volume visualization is then rendered interactively in stereo on the ImmersaDesk in an immersive Virtual Reality (VR) environment. The accuracy of the fusions was determined qualitatively by comparing the transformed atlas overlaid on the appropriate CT. It was examined for where the transformed structure atlas was incorrectly overlaid (false positive) and where it was incorrectly not overlaid (false negative). According to this method, fusions 1 and 2 were correct roughly 50-75% of the time, while fusions 3 and 4 were correct roughly 75-100%. The CT dataset augmented with transformed dataset was viewed arbitrarily in user-centered perspective stereo taking advantage of features such as scaling, windowing and volumetric region of interest selection. This process of auto-coloring conserved structures in variable datasets is a step toward the goal of a broader, standardized automatic structure visualization method for radiological data. If successful it would permit identification, visualization or deletion of structures in radiological data by semi-automatically applying canonical structure information to the radiological data (not just processing and visualization of the data's intrinsic dynamic range). More sophisticated selection of control points and patterns of warping may allow for more accurate transforms, and thus advances in visualization, simulation, education, diagnostics, and treatment planning.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhukov, A. V.; Komarov, A. N.; Safronov, A. N.
The principles of central control of the power generating units of thermal power plants by automatic secondary frequency and active power flow regulation systems, and the algorithms for interactions between automatic power control systems for the power production units in thermal power plants and centralized systems for automatic frequency and power regulation, are discussed. The order of switching the power generating units of thermal power plants over to control by a centralized system for automatic frequency and power regulation and by the Central Coordinating System for automatic frequency and power regulation is presented. The results of full-scale system tests of the control of power generating units of the Kirishskaya, Stavropol, and Perm GRES (State Regional Electric Power Plants) by the Central Coordinating System for automatic frequency and power regulation at the United Power System of Russia on September 23-25, 2008, are reported.
[The effects of interpretation bias for social events and automatic thoughts on social anxiety].
Aizawa, Naoki
2015-08-01
Many studies have demonstrated that individuals with social anxiety interpret ambiguous social situations negatively. It is, however, not clear whether the interpretation bias contributes to social anxiety distinctly from depressive automatic thoughts. The present study investigated the effects of negative interpretation bias and automatic thoughts on social anxiety. The Social Intent Interpretation-Questionnaire, which measures the tendency to interpret ambiguous social events as implying others' rejective intents, the short Japanese version of the Automatic Thoughts Questionnaire-Revised, and the Anthropophobic Tendency Scale were administered to 317 university students. Covariance structure analysis indicated that both rejective intent interpretation bias and negative automatic thoughts contributed to mental distress in social situations, mediated by a sense of powerlessness and excessive concern about self and others in social situations. Positive automatic thoughts reduced mental distress. These results indicate the importance of interpretation bias and negative automatic thoughts in the development and maintenance of social anxiety. Implications for understanding the cognitive features of social anxiety are discussed.
Uncovering effects of self-control and stimulus-driven action selection on the sense of agency.
Wang, Yuru; Damen, Tom G E; Aarts, Henk
2017-10-01
The sense of agency refers to feelings of causing one's own action and resulting effect. Previous research indicates that voluntary action selection is an important factor in shaping the sense of agency. Whereas the volitional nature of the sense of agency is well documented, the present study examined whether agency is modulated when action selection shifts from self-control to a more automatic stimulus-driven process. Seventy-two participants performed an auditory Simon task including congruent and incongruent trials to generate automatic stimulus-driven vs. more self-control driven action, respectively. Responses in the Simon task produced a tone and agency was assessed with the intentional binding task - an implicit measure of agency. Results showed a Simon effect and temporal binding effect. However, temporal binding was independent of congruency. These findings suggest that temporal binding, a window to the sense of agency, emerges for both automatic stimulus-driven actions and self-controlled actions. Copyright © 2017 Elsevier Inc. All rights reserved.
Automatic counting and classification of bacterial colonies using hyperspectral imaging
USDA-ARS?s Scientific Manuscript database
Detection and counting of bacterial colonies on agar plates is a routine microbiology practice to get a rough estimate of the number of viable cells in a sample. There have been a variety of different automatic colony counting systems and software algorithms mainly based on color or gray-scale pictu...
Automatic detection of Parkinson's disease in running speech spoken in three different languages.
Orozco-Arroyave, J R; Hönig, F; Arias-Londoño, J D; Vargas-Bonilla, J F; Daqrouq, K; Skodda, S; Rusz, J; Nöth, E
2016-01-01
The aim of this study is the analysis of continuous speech signals of people with Parkinson's disease (PD) considering recordings in different languages (Spanish, German, and Czech). A method for the characterization of the speech signals, based on the automatic segmentation of utterances into voiced and unvoiced frames, is addressed here. The energy content of the unvoiced sounds is modeled using 12 Mel-frequency cepstral coefficients and 25 bands scaled according to the Bark scale. Four speech tasks comprising isolated words, rapid repetition of the syllables /pa/-/ta/-/ka/, sentences, and read texts are evaluated. The method proves to be more accurate than classical approaches in the automatic classification of speech of people with PD and healthy controls. The accuracies range from 85% to 99% depending on the language and the speech task. Cross-language experiments are also performed, confirming the robustness and generalization capability of the method, with accuracies ranging from 60% to 99%. This work represents a step forward for the development of computer-aided tools for the automatic assessment of dysarthric speech signals in multiple languages.
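A rough sketch of the unvoiced-frame characterization step is shown below, assuming librosa; the file name, frame sizes, and the simple zero-crossing-rate voicing rule are illustrative stand-ins for the authors' segmentation (the Bark-band energies are omitted).

```python
# Crude unvoiced-frame MFCC extraction; thresholds are illustrative.
import numpy as np
import librosa

y, sr = librosa.load("speech.wav", sr=16000)    # hypothetical recording
frame, hop = 512, 256
zcr = librosa.feature.zero_crossing_rate(y, frame_length=frame, hop_length=hop)[0]
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=12, n_fft=frame, hop_length=hop)

n = min(zcr.size, mfcc.shape[1])
unvoiced = zcr[:n] > 0.25            # high ZCR as a crude unvoiced indicator
features = mfcc[:, :n][:, unvoiced]  # 12 MFCCs restricted to unvoiced frames
print(features.mean(axis=1))         # per-coefficient means, one input to a classifier
```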
DOT National Transportation Integrated Search
1989-06-01
Author's abstract: A nonrandom sample of 120 disproportionately short, tall, and overweight drivers compared the comfort and convenience of the automatic safety belt systems used in seventeen automobiles. Nine vehicles had motorized shoulder belts wi...
Automatic, Multiple Assessment Options in Undergraduate Meteorology Education
ERIC Educational Resources Information Center
Kahl, Jonathan D. W.
2017-01-01
Since 2008, automatic, multiple assessment options have been utilised in selected undergraduate meteorology courses at the University of Wisconsin--Milwaukee. Motivated by a desire to reduce stress among students, the assessment methodology includes examination-heavy and homework-heavy alternatives, differing by an adjustable 15% of the overall…
1995-06-01
Energy efficient, 30 and 40 watt ballasts are Rapid Start, thermally protected, automatic resetting, Class P, high or low power factor as required, CBM certified, sound rated A, unless...
Application of industrial robots in automatic disassembly line of waste LCD displays
NASA Astrophysics Data System (ADS)
Wang, Sujuan
2017-11-01
In the automatic disassembly line of waste LCD displays, LCD displays are disassembled into plastic shells, metal shields, circuit boards, and LCD panels. Two industrial robots are used to cut metal shields and remove circuit boards in this automatic disassembly line. The functions of these two industrial robots, the solutions to the critical issues of model selection, the interfaces with PLCs, and the workflows are described in detail in this paper.
Chen, Yang; Ren, Xiaofeng; Zhang, Guo-Qiang; Xu, Rong
2013-01-01
Visual information is a crucial aspect of medical knowledge. Building a comprehensive medical image base, in the spirit of the Unified Medical Language System (UMLS), would greatly benefit patient education and self-care. However, collection and annotation of such a large-scale image base is challenging. To combine visual object detection techniques with medical ontology to automatically mine web photos and retrieve a large number of disease manifestation images with minimal manual labeling effort. As a proof of concept, we first learnt five organ detectors on three detection scales for eyes, ears, lips, hands, and feet. Given a disease, we used information from the UMLS to select affected body parts, ran the pretrained organ detectors on web images, and combined the detection outputs to retrieve disease images. Compared with a supervised image retrieval approach that requires training images for every disease, our ontology-guided approach exploits shared visual information of body parts across diseases. In retrieving 2220 web images of 32 diseases, we reduced manual labeling effort to 15.6% while improving the average precision by 3.9% from 77.7% to 81.6%. For 40.6% of the diseases, we improved the precision by 10%. The results confirm the concept that the web is a feasible source for automatic disease image retrieval for health image database construction. Our approach requires a small amount of manual effort to collect complex disease images, and to annotate them by standard medical ontology terms.
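The retrieval pipeline can be illustrated with a toy sketch: an ontology maps a disease to affected body parts (a stand-in for the UMLS lookup), and pretrained part detectors, stubbed here, filter candidate web images. All names, scores, and thresholds below are invented.

```python
# Toy ontology-guided image retrieval; detectors are stubs.
disease_to_parts = {"conjunctivitis": ["eye"], "athlete's foot": ["foot"]}

def eye_detector(image):    # stub: a real detector returns a confidence score
    return 0.9 if "eye" in image else 0.1

def foot_detector(image):
    return 0.8 if "foot" in image else 0.05

detectors = {"eye": eye_detector, "foot": foot_detector}

def retrieve(disease, web_images, threshold=0.5):
    """Keep images where any detector for an affected body part fires."""
    parts = disease_to_parts[disease]
    return [im for im in web_images
            if max(detectors[p](im) for p in parts) > threshold]

print(retrieve("conjunctivitis", ["eye_closeup.jpg", "hand.jpg"]))
# -> ['eye_closeup.jpg']
```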
Ryu, Hee Wook; Cho, Kyung-Suk; Lee, Tae-Ho
2011-04-01
The performance of a pilot-scale anti-clogging biofilter system (ABS) was evaluated over a period of 125 days for treating ammonia and volatile organic compounds emitted from a full-scale food waste-composting facility. The pilot-scale ABS was designed to intermittently and automatically remove excess biomass using an agitator. When the pressure drop in the polyurethane filter bed increased to a set point (50 mm H(2)O m(-1)), due to excess biomass accumulation, the agitator was automatically activated by the differential pressure switch, without biofilter shutdown. A high removal efficiency (97-99%) was stably maintained for the 125 days after an acclimation period of 1 week, even though the inlet gas concentrations fluctuated from 0.16 to 0.55 g m(-3). Due to the intermittent automatic agitation of the filter bed, the biomass concentration and pressure drop in the biofilter were maintained within the range of 1.1-2.0 g-DCW g PU(-1) and below 50 mm H(2)O m(-1), respectively. Copyright © 2011 Elsevier Ltd. All rights reserved.
Adal, Kedir M; Sidibé, Désiré; Ali, Sharib; Chaum, Edward; Karnowski, Thomas P; Mériaudeau, Fabrice
2014-04-01
Despite several attempts, automated detection of microaneurysms (MAs) from digital fundus images remains an open issue. This is due to the subtle nature of MAs against the surrounding tissues. In this paper, the microaneurysm detection problem is modeled as finding interest regions or blobs from an image, and an automatic local-scale selection technique is presented. Several scale-adapted region descriptors are introduced to characterize these blob regions. A semi-supervised learning approach, which requires few manually annotated learning examples, is also proposed to train a classifier which can detect true MAs. The developed system is built using only a few manually labeled and a large number of unlabeled retinal color fundus images. The performance of the overall system is evaluated on the Retinopathy Online Challenge (ROC) competition database. A competition performance measure (CPM) of 0.364 shows the competitiveness of the proposed system against state-of-the-art techniques as well as the applicability of the proposed features to analyze fundus images. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
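A minimal sketch of scale-adapted blob detection for candidate MAs follows, using a Laplacian-of-Gaussian detector with per-blob scale selection from scikit-image; it stands in for the paper's local-scale method, and the file name, sigma range, and threshold are illustrative.

```python
# LoG blob detection with automatic per-blob scale selection.
from skimage import io, color
from skimage.feature import blob_log

img = color.rgb2gray(io.imread("fundus.png"))   # placeholder RGB fundus image
inverted = 1.0 - img          # MAs appear as dark blobs on a brighter background
blobs = blob_log(inverted, min_sigma=1, max_sigma=5, num_sigma=9, threshold=0.1)
for y, x, sigma in blobs:     # sigma is the automatically selected local scale
    print(f"candidate MA at ({x:.0f}, {y:.0f}), scale sigma = {sigma:.2f}")
```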
78 FR 11609 - Special Conditions: Embraer S.A., Model EMB-550 Airplane; Landing Pitchover Condition
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-19
... automatic braking system. The applicable airworthiness regulations do not contain adequate or appropriate... with an automatic braking system. This feature is a pilot-selectable function that allows earlier braking at landing without pilot pedal input. When the autobrake system is armed before landing, it...
Electrophysiological Evidence of Automatic Early Semantic Processing
ERIC Educational Resources Information Center
Hinojosa, Jose A.; Martin-Loeches, Manuel; Munoz, Francisco; Casado, Pilar; Pozo, Miguel A.
2004-01-01
This study investigates the automatic-controlled nature of early semantic processing by means of the Recognition Potential (RP), an event-related potential response that reflects lexical selection processes. For this purpose tasks differing in their processing requirements were used. Half of the participants performed a physical task involving a…
Unsupervised MDP Value Selection for Automating ITS Capabilities
ERIC Educational Resources Information Center
Stamper, John; Barnes, Tiffany
2009-01-01
We seek to simplify the creation of intelligent tutors by using student data acquired from standard computer aided instruction (CAI) in conjunction with educational data mining methods to automatically generate adaptive hints. In our previous work, we have automatically generated hints for logic tutoring by constructing a Markov Decision Process…
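The MDP idea can be sketched in miniature (this is a toy, not the authors' system): states are partial solutions mined from student logs, actions are solution steps, and value iteration scores each state so the best-valued successor can be offered as a hint. The graph, reward, and discount below are invented.

```python
# Toy value iteration over a student-state graph for hint selection.
GAMMA = 0.9
transitions = {                       # hypothetical state -> {action: next_state}
    "start": {"step_a": "mid1", "step_b": "mid2"},
    "mid1": {"step_c": "goal"},
    "mid2": {"step_d": "mid1"},
    "goal": {},
}
reward = {"goal": 100.0}

V = {s: 0.0 for s in transitions}
for _ in range(50):                   # value iteration converges on this tiny graph
    V = {s: reward.get(s, 0.0)
            + max((GAMMA * V[s2] for s2 in succ.values()), default=0.0)
         for s, succ in transitions.items()}

def hint(state):
    """Suggest the action leading to the highest-valued successor state."""
    succ = transitions[state]
    return max(succ, key=lambda a: V[succ[a]]) if succ else None

print(V["start"], hint("start"))      # step_a beats step_b (shorter path to goal)
```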
A wavelet based method for automatic detection of slow eye movements: a pilot study.
Magosso, Elisa; Provini, Federica; Montagna, Pasquale; Ursino, Mauro
2006-11-01
Electro-oculographic (EOG) activity during the wake-sleep transition is characterized by the appearance of slow eye movements (SEM). The present work describes an algorithm for the automatic localisation of SEM events from EOG recordings. The algorithm is based on a wavelet multiresolution analysis of the difference between right and left EOG tracings, and includes three main steps: (i) wavelet decomposition down to 10 detail levels (i.e., 10 scales), using the Daubechies order-4 wavelet; (ii) computation of energy in 0.5 s time steps at each level of decomposition; (iii) construction of a non-linear discriminant function expressing the relative energy of high-scale details to both high- and low-scale details. The main assumption is that the value of the discriminant function increases above a given threshold during SEM episodes due to energy redistribution toward higher scales. Ten EOG recordings from ten male patients with obstructive sleep apnea syndrome were used. All tracings included a period from pre-sleep wakefulness to stage 2 sleep. Two experts inspected the tracings separately to score SEMs. A reference set of SEMs (gold standard) was obtained by joint examination by both experts. Parameters of the discriminant function were assigned on three tracings (design set) to minimize the disagreement between the system classification and classification by the two experts; the algorithm was then tested on the remaining seven tracings (test set). Results show that the agreement between the algorithm and the gold standard was 80.44 ± 4.09%, the sensitivity of the algorithm was 67.2 ± 7.37%, and the selectivity 83.93 ± 8.65%. However, most errors were not caused by an inability of the system to detect intervals with SEM activity against NON-SEM intervals, but were due to a different localisation of the beginning and end of some SEM episodes. The proposed method may be a valuable tool for computerized EOG analysis.
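A condensed sketch of the three-step detector follows, assuming PyWavelets; the synthetic signal, sampling rate, four-level "slow" split, and 0.6 threshold are illustrative, and the 0.5 s windowing is reduced to whole-epoch energies for brevity.

```python
# Wavelet energy-ratio SEM discriminant on a synthetic epoch.
import numpy as np
import pywt

fs = 128                                         # assumed sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)
eog = np.sin(2 * np.pi * 0.3 * t) + 0.1 * np.random.randn(t.size)  # SEM-like slow wave

coeffs = pywt.wavedec(eog, "db4", level=10)      # step (i): 10 detail levels
details = coeffs[1:]                             # coarsest detail first, finest last
energies = np.array([np.sum(d ** 2) for d in details])  # step (ii), epoch-wise

slow = energies[:4].sum()                        # high-scale (slow) details
discriminant = slow / energies.sum()             # step (iii): relative slow energy
print("SEM-like epoch" if discriminant > 0.6 else "non-SEM epoch")
```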
Automatic sequential fluid handling with multilayer microfluidic sample isolated pumping
Liu, Jixiao; Fu, Hai; Yang, Tianhang; Li, Songjing
2015-01-01
To sequentially handle fluids is of great significance in quantitative biology, analytical chemistry, and bioassays. However, the technological options are limited when building such microfluidic sequential processing systems, and one of the encountered challenges is the need for reliable, efficient, and mass-production available microfluidic pumping methods. Herein, we present a bubble-free and pumping-control unified liquid handling method that is compatible with large-scale manufacture, termed multilayer microfluidic sample isolated pumping (mμSIP). The core part of the mμSIP is the selective permeable membrane that isolates the fluidic layer from the pneumatic layer. The air diffusion from the fluidic channel network into the degassing pneumatic channel network leads to fluidic channel pressure variation, which further results in consistent bubble-free liquid pumping into the channels and the dead-end chambers. We characterize the mμSIP by comparing the fluidic actuation processes with different parameters and a flow rate range of 0.013 μl/s to 0.097 μl/s is observed in the experiments. As the proof of concept, we demonstrate an automatic sequential fluid handling system aiming at digital assays and immunoassays, which further proves the unified pumping-control and suggests that the mμSIP is suitable for functional microfluidic assays with minimal operations. We believe that the mμSIP technology and demonstrated automatic sequential fluid handling system would enrich the microfluidic toolbox and benefit further inventions. PMID:26487904
Otero, José; Palacios, Ana; Suárez, Rosario; Junco, Luis
2014-01-01
When selecting relevant inputs in modeling problems with low quality data, the ranking of the most informative inputs is also uncertain. In this paper, this issue is addressed through a new procedure that allows different crisp feature selection algorithms to be extended to vague data. The partial knowledge about the ordinal position of each feature is modelled by means of a possibility distribution, and a ranking is applied to sort these distributions. It is shown that this technique makes the most use of the available information in some vague datasets. The approach is demonstrated in a real-world application. In the context of massive online computer science courses, methods are sought for automatically assigning the student a grade on the basis of code metrics. Feature selection methods are used to find the metrics involved in the most meaningful predictions. In this study, 800 source code files, collected and revised by the authors in classroom Computer Science lectures taught between 2013 and 2014, are analyzed with the proposed technique, and the most relevant metrics for the automatic grading task are discussed. PMID:25114967
Navarro, Pedro J; Fernández-Isla, Carlos; Alcover, Pedro María; Suardíaz, Juan
2016-07-27
This paper presents a robust method for defect detection in textures, entropy-based automatic selection of the wavelet decomposition level (EADL), based on a wavelet reconstruction scheme, for detecting defects in a wide variety of structural and statistical textures. Two main features are presented. The first is an original use of the normalized absolute function value (NABS), calculated from the wavelet coefficients derived at various decomposition levels, to identify textures where the defect can be isolated by eliminating the texture pattern in the first decomposition level. The second is the use of Shannon's entropy, calculated over detail subimages, for automatic selection of the band for image reconstruction; unlike other techniques, such as those based on the co-occurrence matrix or on energy calculation, this yields a lower decomposition level, avoiding excessive degradation of the image and allowing more accurate defect segmentation. A metric analysis of the results of the proposed method with nine different thresholding algorithms determined that selecting the appropriate thresholding method is important to achieve optimum performance in defect detection. As a consequence, several different thresholding algorithms are proposed depending on the type of texture.
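A minimal sketch of entropy-guided selection of the reconstruction band follows, in the spirit of EADL but not the authors' exact criterion; "texture.png" and the Haar wavelet are placeholders.

```python
# Shannon entropy of detail subimages per wavelet decomposition level.
import numpy as np
import pywt
from skimage import io, color

img = color.rgb2gray(io.imread("texture.png")).astype(float)

def detail_entropy(detail, bins=64):
    """Shannon entropy of the magnitude histogram of one detail subimage."""
    hist, _ = np.histogram(np.abs(detail), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

coeffs = pywt.wavedec2(img, "haar", level=4)
# coeffs[0] is the approximation; the rest are (cH, cV, cD) detail
# triples ordered from the coarsest level to the finest
for k, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
    h = np.mean([detail_entropy(c) for c in (cH, cV, cD)])
    print(f"detail band {k} (coarsest first): mean entropy = {h:.3f}")
# the band whose entropy satisfies the selection rule would then be
# used to reconstruct the image for defect segmentation
```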
Psychometric Properties of the Children's Automatic Thoughts Scale (CATS) in Chinese Adolescents.
Sun, Ling; Rapee, Ronald M; Tao, Xuan; Yan, Yulei; Wang, Shanshan; Xu, Wei; Wang, Jianping
2015-08-01
The Children's Automatic Thoughts Scale (CATS) is a 40-item self-report questionnaire designed to measure children's negative thoughts. This study examined the psychometric properties of the Chinese translation of the CATS. Participants included 1,993 students (average age = 14.73) from three schools in Mainland China. A subsample of the participants was retested after 4 weeks. Confirmatory factor analysis replicated the original structure with four first-order factors loading on a single higher-order factor. The convergent and divergent validity of the CATS were good. The CATS demonstrated high internal consistency and test-retest reliability. Boys scored higher on the CATS-hostility subscale, but there were no other gender differences. Older adolescents (15-18 years) reported higher scores than younger adolescents (12-14 years) on the total score and on the physical threat, social threat, and hostility subscales. The CATS proved to be a reliable and valid measure of automatic thoughts in Chinese adolescents.
ERIC Educational Resources Information Center
DOLBY, J.L.; AND OTHERS
The study is concerned with the linguistic problems involved in text compression--extracting, indexing, and the automatic creation of special-purpose citation dictionaries. In spite of early success in using large-scale computers to automate certain human tasks, these problems remain among the most difficult to solve. Essentially, the problem is to…
[Development of a Software for Automatically Generated Contours in Eclipse TPS].
Xie, Zhao; Hu, Jinyou; Zou, Lian; Zhang, Weisha; Zou, Yuxin; Luo, Kelin; Liu, Xiangxiang; Yu, Luxin
2015-03-01
Automatic generation of planning targets and auxiliary contours has been achieved in Eclipse TPS 11.0. The scripting language AutoHotkey was used to develop a software tool for automatically generating contours in Eclipse TPS. This software is named Contour Auto Margin (CAM) and is composed of operational functions for contours, script-generation visualization, and script file operations. Ten cases of different cancers were selected separately; in Eclipse TPS 11.0, the scripts generated by the software could not only automatically generate contours but also perform contour post-processing. For the different cancers, there was no difference between automatically generated contours and manually created contours. CAM is a user-friendly and powerful software tool that can generate contours automatically and quickly in Eclipse TPS 11.0. With the help of CAM, plan preparation time is greatly reduced and the working efficiency of radiation therapy physicists improved.
Automatic Fastening Large Structures: a New Approach
NASA Technical Reports Server (NTRS)
Lumley, D. F.
1985-01-01
The external tank (ET) intertank structure for the space shuttle, a 27.5 ft diameter, 22.5 ft long, externally stiffened, mechanically fastened skin-stringer-frame structure, was a labor-intensive manual structure built on a modified Saturn tooling position. A new approach was developed based on half-section subassemblies. The heart of this manufacturing approach will be a 33 ft high vertical automatic riveting system with a 28 ft rotary positioner coming on-line in mid 1985. The Automatic Riveting System incorporates many of the latest automatic riveting technologies. Key features include: vertical columns with two sets of independently operating CNC drill-riveting heads; capability to drill, insert, and upset any one-piece fastener up to 3/8 inch diameter, including slugs, without displacing the workpiece; offset bucking ram with programmable rotation and deep retraction; vision system for automatic parts program re-synchronization and part edge margin control; and an automatic rivet selection/handling system.
DELINEATING SUBTYPES OF SELF-INJURIOUS BEHAVIOR MAINTAINED BY AUTOMATIC REINFORCEMENT
Hagopian, Louis P.; Rooker, Griffin W.; Zarcone, Jennifer R.
2016-01-01
Self-injurious behavior (SIB) is maintained by automatic reinforcement in roughly 25% of cases. Automatically reinforced SIB typically has been considered a single functional category, and is less understood than socially reinforced SIB. Subtyping automatically reinforced SIB into functional categories has the potential to guide the development of more targeted interventions and increase our understanding of its biological underpinnings. The current study involved an analysis of 39 individuals with automatically reinforced SIB and a comparison group of 13 individuals with socially reinforced SIB. Automatically reinforced SIB was categorized into 3 subtypes based on patterns of responding in the functional analysis and the presence of self-restraint. These response features were selected as the basis for subtyping on the premise that they could reflect functional properties of SIB unique to each subtype. Analysis of treatment data revealed important differences across subtypes and provides preliminary support to warrant additional research on this proposed subtyping model. PMID:26223959
Patrick, Regan E; Rastogi, Anuj; Christensen, Bruce K
2015-01-01
Adaptive emotional responding relies on dual automatic and effortful processing streams. Dual-stream models of schizophrenia (SCZ) posit a selective deficit in neural circuits that govern goal-directed, effortful processes versus reactive, automatic processes. This imbalance suggests that when patients are confronted with competing automatic and effortful emotional response cues, they will exhibit diminished effortful responding and intact, possibly elevated, automatic responding compared to controls. This prediction was evaluated using a modified version of the face-vignette task (FVT). Participants viewed emotional faces (automatic response cue) paired with vignettes (effortful response cue) that signalled a different emotion category and were instructed to discriminate the manifest emotion. Patients made fewer vignette responses and more face responses than controls. However, the relationship between group and FVT responding was moderated by IQ and reading comprehension ability. These results replicate and extend previous research and provide tentative support for the abnormal conflict resolution between automatic and effortful emotional processing predicted by dual-stream models of SCZ.
A simulator evaluation of an automatic terminal approach system
NASA Technical Reports Server (NTRS)
Hinton, D. A.
1983-01-01
The automatic terminal approach system (ATAS) is a concept for improving the pilot/machine interface with cockpit automation. The ATAS can automatically fly a published instrument approach by using stored instrument approach data to automatically tune airplane avionics, control the airplane's autopilot, and display status information to the pilot. A piloted simulation study was conducted to determine the feasibility of an ATAS, determine pilot acceptance, and examine pilot/ATAS interaction. Seven instrument-rated pilots each flew four instrument approaches with a baseline heading-select autopilot mode. The ATAS runs resulted in lower flight technical error, lower pilot workload, and fewer blunders than with the baseline autopilot. The ATAS status display enabled the pilots to maintain situational awareness during the automatic approaches. The system was well accepted by the pilots.
Automatic vibration mode selection and excitation; combining modal filtering with autoresonance
NASA Astrophysics Data System (ADS)
Davis, Solomon; Bucher, Izhak
2018-02-01
Autoresonance is a well-known nonlinear feedback method used for automatically exciting a system at its natural frequency. Though highly effective in exciting single degree of freedom systems, in its simplest form it lacks a mechanism for choosing the mode of excitation when more than one is present. In this case a single mode will be automatically excited, but this mode cannot be chosen or changed. In this paper a new method for automatically exciting a general second-order system at any desired natural frequency using Autoresonance is proposed. The article begins by deriving a concise expression for the frequency of the limit cycle induced by an Autoresonance feedback loop enclosed on the system. The expression is based on modal decomposition and provides valuable insight into the behavior of a system controlled in this way. With this expression, a method for selecting and exciting a desired mode naturally follows by combining Autoresonance with Modal Filtering. By taking suitable linear combinations of the sensor signals, one can effectively "filter out" all the unwanted modes by orthogonality. The desired mode's natural frequency is then automatically reflected in the limit cycle. In experiments the technique has proven extremely robust, even when the amplitude of the desired mode is significantly smaller than the others and the modal filters are greatly inaccurate.
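A small sketch of the modal-filtering step follows: with the mode-shape matrix known, the pseudo-inverse gives sensor weights that pass one modal coordinate and reject the others by orthogonality. The 3-DOF mode shapes, frequencies, and signals below are synthetic illustrations, not the paper's system.

```python
# Modal filtering via the pseudo-inverse of the mode-shape matrix.
import numpy as np

Phi = np.array([[1.0,  1.0,  1.0],    # columns: mode shapes sampled
                [2.0,  0.0, -2.0],    # at three sensor locations
                [1.0, -1.0,  1.0]])
W = np.linalg.pinv(Phi)               # row k of W isolates modal coordinate k

t = np.linspace(0, 1, 1000)
q = np.vstack([np.sin(2 * np.pi * 5 * t),    # modal coordinates at
               np.sin(2 * np.pi * 17 * t),   # 5, 17, and 40 Hz
               np.sin(2 * np.pi * 40 * t)])
sensors = Phi @ q                     # simulated sensor signals (mixed modes)

mode2 = W[1] @ sensors                # linear combination passing only mode 2
print(np.allclose(mode2, q[1]))       # True: the other modes are rejected
```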
Semi-automatic assessment of skin capillary density: proof of principle and validation.
Gronenschild, E H B M; Muris, D M J; Schram, M T; Karaca, U; Stehouwer, C D A; Houben, A J H M
2013-11-01
Skin capillary density and recruitment have been proven to be relevant measures of microvascular function. Unfortunately, the assessment of skin capillary density from movie files is very time-consuming, since this is done manually. This impedes the use of this technique in large-scale studies. We aimed to develop a (semi-)automated assessment of skin capillary density. CapiAna (Capillary Analysis) is a newly developed semi-automatic image analysis application. The technique involves four steps: 1) movement correction, 2) selection of the frame range and positioning of the region of interest (ROI), 3) automatic detection of capillaries, and 4) manual correction of detected capillaries. To gain insight into the performance of the technique, skin capillary density was measured in twenty participants (ten women; mean age 56.2 [42-72] years). To investigate the agreement between CapiAna and the classic manual counting procedure, we used weighted Deming regression and Bland-Altman analyses. In addition, intra- and inter-observer coefficients of variation (CVs) and differences in analysis time were assessed. We found a good agreement between CapiAna and the classic manual method, with a Pearson's correlation coefficient (r) of 0.95 (P<0.001) and a Deming regression coefficient of 1.01 (95%CI: 0.91; 1.10). In addition, we found no significant differences between the two methods, with an intercept of the Deming regression of 1.75 (-6.04; 9.54), while the Bland-Altman analysis showed a mean difference (bias) of 2.0 (-13.5; 18.4) capillaries/mm(2). The intra- and inter-observer CVs of CapiAna were 2.5% and 5.6% respectively, while for the classic manual counting procedure these were 3.2% and 7.2%, respectively. Finally, the analysis time for CapiAna ranged between 25 and 35 min versus 80 and 95 min for the manual counting procedure. We have developed a semi-automatic image analysis application (CapiAna) for the assessment of skin capillary density, which agrees well with the classic manual counting procedure, is time-saving, and has better reproducibility than the classic manual counting procedure. As a result, the use of skin capillaroscopy is feasible in large-scale studies, which importantly extends the possibilities to perform microcirculation research in humans.
ERIC Educational Resources Information Center
Woodman, Geoffrey F.; Luck, Steven J.
2007-01-01
In many theories of cognition, researchers propose that working memory and perception operate interactively. For example, in previous studies researchers have suggested that sensory inputs matching the contents of working memory will have an automatic advantage in the competition for processing resources. The authors tested this hypothesis by…
A Neurobiological Theory of Automaticity in Perceptual Categorization
ERIC Educational Resources Information Center
Ashby, F. Gregory; Ennis, John M.; Spiering, Brian J.
2007-01-01
A biologically detailed computational model is described of how categorization judgments become automatic in tasks that depend on procedural learning. The model assumes 2 neural pathways from sensory association cortex to the premotor area that mediates response selection. A longer and slower path projects to the premotor area via the striatum,…
Study of the Acquisition of Peripheral Equipment for Use with Automatic Data Processing Systems.
ERIC Educational Resources Information Center
Comptroller General of the U.S., Washington, DC.
The General Accounting Office (GAO) performed this study because: preliminary indications showed that significant savings could be achieved in the procurement of selected computer components; the Federal Government is investing increasing amounts of money in Automatic Data Processing (ADP) equipment; and there is a widespread congressional…
ERIC Educational Resources Information Center
Army Ordnance Center and School, Aberdeen Proving Ground, MD.
These two texts and student workbook for a secondary/postsecondary-level correspondence course in automatic data processing comprise one of a number of military-developed curriculum packages selected for adaptation to vocational instruction and curriculum development in a civilian setting. The purpose stated for the individualized, self-paced…
An automatic detection software for differential reflection spectroscopy
NASA Astrophysics Data System (ADS)
Yuksel, Seniha Esen; Dubroca, Thierry; Hummel, Rolf E.; Gader, Paul D.
2012-06-01
Recent terrorist attacks have spurred the need for a large-scale explosives detector. Our group has developed differential reflection spectroscopy, which can detect explosive residue on surfaces such as parcels, cargo, and luggage. In short, broadband ultraviolet and visible light is shone onto a material (such as a parcel) moving on a conveyor belt. Upon reflection off the surface, the light intensity is recorded with a spectrograph (spectrometer in combination with a CCD camera). This reflected light intensity is then subtracted and normalized with the next data point collected, resulting in differential reflection spectra in the 200-500 nm range. Explosives show spectral fingerprints at specific wavelengths; for example, the spectrum of 2,4,6-trinitrotoluene (TNT) shows an absorption edge at 420 nm. Additionally, we have developed automated software which detects the characteristic features of explosives. One of the biggest challenges for the algorithm is to reach a practical limit of detection. In this study, we introduce our automatic detection software, which is a combination of principal component analysis and support vector machines. Finally, we present the sensitivity and selectivity response of our algorithm as a function of the amount of explosive detected on a given surface.
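A schematic of such a PCA-plus-SVM detector is sketched below, assuming scikit-learn; the random spectra and labels are placeholders for measured 200-500 nm differential reflection spectra.

```python
# PCA dimensionality reduction feeding an SVM classifier.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 300))      # 200 spectra x 300 wavelength bins
y = rng.integers(0, 2, size=200)     # 1 = explosive residue present

model = make_pipeline(PCA(n_components=10), SVC(kernel="rbf", C=1.0))
model.fit(X[:150], y[:150])          # train on labeled spectra
print(model.score(X[150:], y[150:])) # held-out detection accuracy
```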
Wang, Bei; Wang, Xingyu; Ikeda, Akio; Nagamine, Takashi; Shibasaki, Hiroshi; Nakamura, Masatoshi
2014-01-01
EEG (electroencephalograph) interpretation is important for the diagnosis of neurological disorders. The proper adjustment of the montage can highlight the EEG rhythm of interest and avoid false interpretation. The aim of this study was to develop an automatic reference selection method to identify a suitable reference. The results may contribute to the accurate inspection of the distribution of EEG rhythms for quantitative EEG interpretation. The method includes two pre-judgements and one iterative detection module. The diffuse case is initially identified by pre-judgement 1 when intermittent rhythmic waveforms occur over large areas along the scalp. For the diffuse case, either the earlobe reference or the averaged reference is adopted, depending on whether pre-judgement 2 finds the earlobe reference to be affected. An iterative detection algorithm is developed for the localised case, when the signal is distributed in a small area of the brain. The suitable averaged reference is finally determined based on the detected focal and distributed electrodes. The presented technique was applied to the pathological EEG recordings of nine patients. One example of the diffuse case is introduced, illustrating the results of the pre-judgements: the diffusely intermittent rhythmic slow wave is identified and the effect of an active earlobe reference is analysed. Two examples of the localised case are presented, indicating the results of the iterative detection module; the focal and distributed electrodes are detected automatically during the iterative algorithm. The identification of diffuse and localised activity was satisfactory compared with visual inspection. The EEG rhythm of interest can be highlighted using a suitably selected reference. The implementation of an automatic reference selection method is helpful to detect the distribution of an EEG rhythm, which can improve the accuracy of EEG interpretation during both visual inspection and automatic interpretation. Copyright © 2013 IPEM. Published by Elsevier Ltd. All rights reserved.
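A minimal sketch of the re-referencing operation that underlies such montage adjustment follows: subtracting the mean of a chosen electrode subset gives an averaged reference. The data and the electrode subset are synthetic placeholders for what the selection method would produce.

```python
# Re-referencing multichannel EEG to an averaged reference.
import numpy as np

rng = np.random.default_rng(0)
eeg = rng.normal(size=(19, 2560))     # 19 channels x 10 s at 256 Hz
reference_set = [0, 1, 2, 5, 7]       # electrodes picked by the selection step

ref = eeg[reference_set].mean(axis=0) # averaged reference signal
re_referenced = eeg - ref             # subtract from every channel
print(re_referenced.shape)            # (19, 2560)
```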
An Automated Blur Detection Method for Histological Whole Slide Imaging
Moles Lopez, Xavier; D'Andrea, Etienne; Barbot, Paul; Bridoux, Anne-Sophie; Rorive, Sandrine; Salmon, Isabelle; Debeir, Olivier; Decaestecker, Christine
2013-01-01
Whole slide scanners are novel devices that enable high-resolution imaging of an entire histological slide. Furthermore, the imaging is achieved in only a few minutes, which enables image rendering of large-scale studies involving multiple immunohistochemistry biomarkers. Although whole slide imaging has improved considerably, locally poor focusing causes blurred regions of the image. These artifacts may strongly affect the quality of subsequent analyses, making a slide review process mandatory. This tedious and time-consuming task requires the scanner operator to carefully assess the virtual slide and to manually select new focus points. We propose a statistical learning method that provides early image quality feedback and automatically identifies regions of the image that require additional focus points. PMID:24349343
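A simple stand-in for per-region blur scoring is sketched below (the paper trains a statistical model on image features; here a variance-of-Laplacian sharpness score flags low-focus tiles). The file name, tile size, and threshold are illustrative.

```python
# Variance-of-Laplacian sharpness scoring of image tiles.
import numpy as np
from scipy.ndimage import laplace
from skimage import io, color

img = color.rgb2gray(io.imread("slide_region.png"))  # placeholder RGB region
tile = 256
for i in range(0, img.shape[0] - tile + 1, tile):
    for j in range(0, img.shape[1] - tile + 1, tile):
        sharpness = laplace(img[i:i + tile, j:j + tile]).var()
        if sharpness < 1e-4:          # low response suggests poor focus
            print(f"tile ({i},{j}) likely blurred -> add a focus point here")
```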
Chameleon Coatings: Adaptive Surfaces to Reduce Friction and Wear in Extreme Environments
NASA Astrophysics Data System (ADS)
Muratore, C.; Voevodin, A. A.
2009-08-01
Adaptive nanocomposite coating materials that automatically and reversibly adjust their surface composition and morphology via multiple mechanisms are a promising development for the reduction of friction and wear over broad ranges of ambient conditions encountered in aerospace applications, such as cycling of temperature and atmospheric composition. Materials selection for these composites is based on extensive study of interactions occurring between solid lubricants and their surroundings, especially with novel in situ surface characterization techniques used to identify adaptive behavior on size scales ranging from 10⁻¹⁰ to 10⁻⁴ m. Recent insights on operative solid-lubricant mechanisms and their dependency upon the ambient environment are reviewed as a basis for a discussion of the state of the art in solid-lubricant materials.
The effects of visual search efficiency on object-based attention
Rosen, Maya; Cutrone, Elizabeth; Behrmann, Marlene
2017-01-01
The attentional prioritization hypothesis of object-based attention (Shomstein & Yantis in Perception & Psychophysics, 64, 41–51, 2002) suggests a two-stage selection process comprising an automatic spatial gradient and flexible strategic (prioritization) selection. The combined attentional priorities of these two stages of object-based selection determine the order in which participants will search the display for the presence of a target. The strategic process has often been likened to a prioritized visual search. By modifying the double-rectangle cueing paradigm (Egly, Driver, & Rafal in Journal of Experimental Psychology: General, 123, 161–177, 1994) and placing it in the context of a larger-scale visual search, we examined how the prioritization search is affected by search efficiency. By probing both targets located on the cued object and targets external to the cued object, we found that the attentional priority surrounding a selected object is strongly modulated by search mode. However, the ordering of the prioritization search is unaffected by search mode. The data also provide evidence that standard spatial visual search and object-based prioritization search may rely on distinct mechanisms. These results provide insight into the interactions between the mode of visual search and object-based selection, and help define the modulatory consequences of search efficiency for object-based attention. PMID:25832192
Automatic assembly of micro-optical components
NASA Astrophysics Data System (ADS)
Gengenbach, Ulrich K.
1996-12-01
Automatic assembly becomes an important issue as hybrid micro systems enter industrial fabrication. Moving from a laboratory scale production with manual assembly and bonding processes to automatic assembly requires a thorough re- evaluation of the design, the characteristics of the individual components and of the processes involved. Parts supply for automatic operation, sensitive and intelligent grippers adapted to size, surface and material properties of the microcomponents gain importance when the superior sensory and handling skills of a human are to be replaced by a machine. This holds in particular for the automatic assembly of micro-optical components. The paper outlines these issues exemplified at the automatic assembly of a micro-optical duplexer consisting of a micro-optical bench fabricated by the LIGA technique, two spherical lenses, a wavelength filter and an optical fiber. Spherical lenses, wavelength filter and optical fiber are supplied by third party vendors, which raises the question of parts supply for automatic assembly. The bonding processes for these components include press fit and adhesive bonding. The prototype assembly system with all relevant components e.g. handling system, parts supply, grippers and control is described. Results of first automatic assembly tests are presented.
Queiroz, Polyane Mazucatto; Rovaris, Karla; Santaella, Gustavo Machado; Haiter-Neto, Francisco; Freitas, Deborah Queiroz
2017-01-01
To calculate root canal volume and surface area in microCT images, image segmentation by threshold selection is required; the threshold can be determined by visual or automatic methods. Visual determination is influenced by the operator's visual acuity, while the automatic method is done entirely by computer algorithms. The aim was to compare visual and automatic segmentation, and to determine the influence of the operator's visual acuity on the reproducibility of root canal volume and area measurements. Images from 31 extracted human anterior teeth were scanned with a μCT scanner. Three experienced examiners performed visual image segmentation, and threshold values were recorded. Automatic segmentation was done using the "Automatic Threshold Tool" available in the dedicated software provided by the scanner's manufacturer. Volume and area measurements were performed using the threshold values determined both visually and automatically. The paired Student's t-test showed no significant difference between visual and automatic segmentation methods regarding root canal volume measurements (p=0.93) and root canal surface area (p=0.79). Although visual and automatic segmentation methods can be used to determine the threshold and calculate root canal volume and surface area, the automatic method may be the most suitable for ensuring the reproducibility of threshold determination.
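The manufacturer's "Automatic Threshold Tool" algorithm is not specified in the abstract; as an illustration, Otsu's method is one common automatic way to pick a threshold separating canal (air) from dentin. The file name and voxel size in the sketch below are assumptions.

```python
# Otsu automatic thresholding of a microCT slice.
import numpy as np
from skimage import io
from skimage.filters import threshold_otsu

slice_img = io.imread("microct_slice.png")  # placeholder grayscale microCT slice
t = threshold_otsu(slice_img)               # automatically determined threshold
canal = slice_img < t                       # canal voxels are darker than dentin
voxel_vol = 0.02 ** 3                       # assumed 20 um isotropic voxels, in mm^3
print(f"threshold = {t}, canal volume in slice ~ {canal.sum() * voxel_vol:.3f} mm^3")
```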
NASA Astrophysics Data System (ADS)
Liu, Q.; Jing, L.; Li, Y.; Tang, Y.; Li, H.; Lin, Q.
2016-04-01
For the purpose of forest management, high resolution LIDAR and optical remote sensing imagery is used for treetop detection, tree crown delineation, and classification. The purpose of this study is to develop a self-adjusted dominant-scale calculation method and a new crown horizontal cutting method for the tree canopy height model (CHM) to detect and delineate tree crowns from LIDAR, under the hypothesis that a treetop is a radiometric or altitudinal maximum and tree crowns consist of multi-scale branches. The major concept of the method is to develop an automatic feature-scale selection strategy on the CHM, and a multi-scale morphological reconstruction-open crown decomposition (MRCD) to obtain morphological multi-scale features of the CHM by: cutting the CHM from treetop to the ground; analysing and refining the dominant multiple scales with differential horizontal profiles to get treetops; and segmenting the LiDAR CHM using a watershed segmentation approach marked with MRCD treetops. This method solves the problem of false detections on CHM side surfaces extracted by the traditional morphological opening canopy segment (MOCS) method. The novel MRCD delineates more accurate and quantitative multi-scale features of the CHM, and enables more accurate detection and segmentation of treetops and crowns. Besides, the MRCD method can also be extended to tree crown extraction from high-resolution optical remote sensing imagery. In an experiment on an aerial LiDAR CHM of a forest with multi-scale tree crowns, the proposed method yielded high-quality tree crown maps.
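A compact sketch of marker-controlled watershed crown delineation on a CHM follows, using local height maxima as treetop markers; this illustrates the final segmentation step only, not the MRCD scale selection. "chm.tif", the 5 px marker spacing, and the 2 m height floor are assumptions.

```python
# Treetop markers + watershed segmentation on a canopy height model.
import numpy as np
from skimage import io
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

chm = io.imread("chm.tif").astype(float)
tops = peak_local_max(chm, min_distance=5, threshold_abs=2.0)  # treetop candidates
markers = np.zeros(chm.shape, dtype=int)
markers[tuple(tops.T)] = np.arange(1, len(tops) + 1)
crowns = watershed(-chm, markers, mask=chm > 2.0)  # one label per crown
print(f"{len(tops)} treetops, {crowns.max()} crowns delineated")
```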
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dise, J; Liang, X; Lin, L
Purpose: To evaluate an automatic interstitial catheter digitization algorithm that reduces treatment planning time and provides a means for adaptive re-planning in HDR brachytherapy of gynecologic cancers. Methods: The semi-automatic catheter digitization tool utilizes a region growing algorithm in conjunction with a spline model of the catheters. The CT images were first pre-processed to enhance the contrast between the catheters and soft tissue. Several seed locations were selected in each catheter for the region growing algorithm. The spline model of the catheters assisted in the region growing by preventing inter-catheter cross-over caused by air or metal artifacts. Source dwell positions from day one CT scans were applied to subsequent CTs and forward calculated using the automatically digitized catheter positions. This method was applied to 10 patients who had received HDR interstitial brachytherapy on an IRB approved image-guided radiation therapy protocol. The prescribed dose was 18.75 or 20 Gy delivered in 5 fractions, twice daily, over 3 consecutive days. Dosimetric comparisons were made between automatic and manual digitization on day two CTs. Results: The region growing algorithm, assisted by the spline model of the catheters, was able to digitize all catheters. The difference between automatic and manually digitized positions was 0.8±0.3 mm. The digitization time ranged from 34 minutes to 43 minutes with a mean digitization time of 37 minutes. The bulk of the time was spent on manual selection of initial seed positions and spline parameter adjustments. There was no significant difference in dosimetric parameters between the automatic and manually digitized plans. D90% to the CTV was 91.5±4.4% for the manual digitization versus 91.4±4.4% for the automatic digitization (p=0.56). Conclusion: A region growing algorithm was developed to semi-automatically digitize interstitial catheters in HDR brachytherapy using the Syed-Neblett template. This automatic digitization tool was shown to be accurate compared to manual digitization.
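A bare-bones sketch of seeded region growing, the core operation of the digitization tool, is given below (the spline constraint and contrast pre-processing are omitted); the volume, seed, and intensity window are synthetic.

```python
# 6-connected seeded region growing in a 3D volume.
import numpy as np
from collections import deque

def region_grow(vol, seed, lo, hi):
    """Collect 6-connected voxels whose intensity lies in [lo, hi]."""
    grown, queue = {seed}, deque([seed])
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            p = (z + dz, y + dy, x + dx)
            if all(0 <= c < s for c, s in zip(p, vol.shape)) \
                    and p not in grown and lo <= vol[p] <= hi:
                grown.add(p)
                queue.append(p)
    return grown

vol = np.zeros((20, 20, 20))
vol[5:15, 10, 10] = 2000.0                    # toy high-intensity catheter track
print(len(region_grow(vol, (5, 10, 10), 1500, 2500)))   # -> 10 voxels
```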
Urschler, Martin; Grassegger, Sabine; Štern, Darko
2015-01-01
Age estimation of individuals is important in human biology and has various medical and forensic applications. Recent interest in MR-based methods aims to investigate alternatives for established methods involving ionising radiation. Automatic, software-based methods additionally promise improved estimation objectivity. The objective was to investigate how informative automatically selected image features are regarding their ability to discriminate age, by exploring a recently proposed software-based age estimation method for MR images of the left hand and wrist. One hundred and two MR datasets of left hand images are used to evaluate age estimation performance; the method consists of bone and epiphyseal gap volume localisation, computation of one age regression model per bone mapping image features to age, and fusion of the individual bone age predictions into a final age estimate. Quantitative results of the software-based method show an age estimation performance with a mean absolute difference of 0.85 years (SD = 0.58 years) to chronological age, as determined by a cross-validation experiment. Qualitatively, it is demonstrated how feature selection works and which image features of skeletal maturation are automatically chosen to model the non-linear regression function. Feasibility of automatic age estimation based on MRI data is shown, and the selected image features are found to be informative for describing anatomical changes during physical maturation in male adolescents.
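A conceptual sketch of the per-bone regression-and-fusion scheme follows; the feature vectors, regressor choice, and data are invented, so only the structure (one model per bone, fused predictions) mirrors the text.

```python
# One regressor per bone; per-bone age estimates averaged into a final age.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n_subj, n_bones, n_feat = 100, 13, 20
ages = rng.uniform(13.0, 19.0, n_subj)                 # chronological ages
X = [rng.normal(size=(n_subj, n_feat)) for _ in range(n_bones)]

models = [RandomForestRegressor(n_estimators=50, random_state=0)
          .fit(Xb[:80], ages[:80]) for Xb in X]
per_bone = np.stack([m.predict(Xb[80:]) for m, Xb in zip(models, X)])
fused = per_bone.mean(axis=0)                          # fuse per-bone estimates
print(np.abs(fused - ages[80:]).mean())                # mean absolute difference
```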
Machine Beats Experts: Automatic Discovery of Skill Models for Data-Driven Online Course Refinement
ERIC Educational Resources Information Center
Matsuda, Noboru; Furukawa, Tadanobu; Bier, Norman; Faloutsos, Christos
2015-01-01
How can we automatically determine which skills must be mastered for the successful completion of an online course? Large-scale online courses (e.g., MOOCs) often contain a broad range of contents frequently intended to be a semester's worth of materials; this breadth often makes it difficult to articulate an accurate set of skills and knowledge…
An Algorithm for Automatic Checking of Exercises in a Dynamic Geometry System: iGeom
ERIC Educational Resources Information Center
Isotani, Seiji; de Oliveira Brandao, Leonidas
2008-01-01
One of the key issues in e-learning environments is the possibility of creating and evaluating exercises. However, the lack of tools supporting the authoring and automatic checking of exercises for specifics topics (e.g., geometry) drastically reduces advantages in the use of e-learning environments on a larger scale, as usually happens in Brazil.…
[A wavelet-transform-based method for the automatic detection of late-type stars].
Liu, Zhong-tian; Zhao, Rrui-zhen; Zhao, Yong-heng; Wu, Fu-chao
2005-07-01
The LAMOST project, the world's largest sky survey project, urgently needs an automatic late-type star detection system. However, to our knowledge, no effective methods for automatic late-type star detection have been reported in the literature up to now. The present study is intended to explore possible ways to deal with this issue. Here, by "late-type stars" we mean those stars with strong molecular absorption bands, including oxygen-rich M, L and T type stars and carbon-rich C stars. Based on experimental results, the authors find that after a wavelet transform with 5 scales on late-type star spectra, the frequency spectrum of the transformed coefficients at the 5th scale consistently manifests a unimodal distribution, with the energy of the frequency spectrum largely concentrated in a small neighborhood centered around the unique peak. However, for the spectra of other celestial bodies, the corresponding frequency spectrum is multimodal and the energy is dispersed. Based on this finding, the authors present a wavelet-transform-based automatic late-type star detection method. The proposed method is shown by extensive experiments to be practical and robust.
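A rough sketch of the reported criterion follows, assuming PyWavelets: Fourier-transform the 5th-scale wavelet coefficients of a spectrum and measure how concentrated the energy is around the single peak. The random "spectrum", window width, and 0.5 cutoff are illustrative only.

```python
# Energy concentration of the Fourier spectrum of 5th-scale coefficients.
import numpy as np
import pywt

flux = np.random.rand(2048)                 # placeholder 1-D stellar spectrum
coeffs = pywt.wavedec(flux, "db4", level=5)
c5 = coeffs[1]                              # detail coefficients at the 5th scale
spec = np.abs(np.fft.rfft(c5)) ** 2         # frequency spectrum of the coefficients

peak = int(np.argmax(spec))
win = slice(max(peak - 3, 0), peak + 4)     # small neighborhood around the peak
concentration = spec[win].sum() / spec.sum()
print("late-type candidate" if concentration > 0.5 else "other object")
```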
A new fast scanning system for the measurement of large angle tracks in nuclear emulsions
NASA Astrophysics Data System (ADS)
Alexandrov, A.; Buonaura, A.; Consiglio, L.; D'Ambrosio, N.; De Lellis, G.; Di Crescenzo, A.; Di Marco, N.; Galati, G.; Lauria, A.; Montesi, M. C.; Pupilli, F.; Shchedrina, T.; Tioukov, V.; Vladymyrov, M.
2015-11-01
Nuclear emulsions have been widely used in particle physics to identify new particles through the observation of their decays, thanks to their unique spatial resolution. Nevertheless, before the advent of automatic scanning systems, emulsion analysis was very demanding in terms of well-trained manpower. For this reason, they were gradually replaced by electronic detectors until the '90s, when automatic microscopes started to be developed in Japan and in Europe. Automatic scanning was essential to conceive large scale emulsion-based neutrino experiments like CHORUS, DONUT and OPERA. Standard scanning systems were initially designed to recognize tracks within a limited angular acceptance (θ ≲ 30°), where θ is the track angle with respect to a line perpendicular to the emulsion plane. In this paper we describe the implementation of a novel fast automatic scanning system aimed at extending the track recognition to the full angular range and improving the present scanning speed. Indeed, nuclear emulsions have no intrinsic limit on detectable particle directions. This improvement opens new perspectives for the use of nuclear emulsions in several fields beyond large scale neutrino experiments, such as muon radiography, medical applications and directional dark matter detection.
Stereophotogrammetry in studies of riparian vegetation dynamics
NASA Astrophysics Data System (ADS)
Hortobagyi, Borbala; Vautier, Franck; Corenblit, Dov; Steiger, Johannes
2014-05-01
Riparian vegetation responds to hydrogeomorphic disturbances and also controls sediment deposition and erosion. Spatio-temporal riparian vegetation dynamics within fluvial corridors have been quantified in many studies using aerial photographs and GIS. However, this approach does not allow the consideration of woody vegetation growth rates (i.e. the vertical dimension), which are fundamental when studying feedbacks between the processes of fluvial landform construction and vegetation establishment and succession. We built 3D photogrammetric models of vegetation height based on aerial film and digital photographs from sites of the Allier and Garonne Rivers (France). The models were produced at two different spatial scales and with two different methods. The "large" scale corresponds to the reach of the river corridor on the Allier River (photograph taken in 2009) and the "small" scale to river bars of the Allier (photographs taken in 2002 and 2009) and Garonne Rivers (photographs taken in 2000, 2002, 2006 and 2010). At the corridor scale, we generated vegetation height models using an automatic procedure. This method is fast but can only be used with digital photographs. At the bar scale, we constructed the models manually using 3D visualization on screen. This technique showed good results for digital as well as film photographs but is very time-consuming. A diachronic study was performed in order to investigate vegetation succession by distinguishing three different classes according to vegetation height: herbs (<1 m), shrubs (1-4 m) and trees (>4 m). Both methods, i.e. automatic and manual, were employed to study the evolution of the three vegetation classes and the recruitment of new vegetation patches. A comparison was conducted between the vegetation height given by the models (automatic and manual) and the vegetation height measured in the field. The manually produced models (small scale) had a precision of 0.5-1 m, allowing the quantification of woody vegetation growth rates. Thus, our results show that the manual method we developed is accurate enough to quantify vegetation growth rates at small scales, whereas the less accurate automatic method is appropriate for studying vegetation succession at the corridor scale. Both methods are complementary and will contribute to a further exploration of the mutual relationships between hydrogeomorphic processes, topography and vegetation dynamics within alluvial systems, adding the quantification of the vertical dimension of riparian vegetation to their spatio-temporal characteristics.
Design of an automatic weight scale for an isolette
NASA Technical Reports Server (NTRS)
Peterka, R. J.; Griffin, W.
1974-01-01
The design of an infant weight scale that fits into an isolette without disturbing its controlled atmosphere is reported. The scale platform uses strain gauges to electronically measure the deflections of cantilever beams positioned at its four corners. The weight of the infant is proportional to the sum of the output voltages produced by the gauges on each beam of the scale.
NASA Astrophysics Data System (ADS)
Yusop, Hanafi M.; Ghazali, M. F.; Yusof, M. F. M.; PiRemli, M. A.; Karollah, B.; Rusman
2017-10-01
Pressure transients occur due to sudden changes in the fluid filling a pipeline system, caused by rapid pressure and flow fluctuations such as those produced by quickly closing and opening a valve. This research applies the Hilbert-Huang Transform (HHT) to analyse the pressure transient signal. However, this method has difficulty in selecting the intrinsic mode function (IMF) suitable for the subsequent post-processing step, the Hilbert Transform (HT). This paper proposes the Integrated Kurtosis-based Algorithm for z-filter Technique (I-kaz) to kurtosis ratio (I-kaz-kurtosis), which allows automatic selection of the IMF that should be used. A synthetic pressure transient signal was generated using transmission line modelling (TLM) in order to test the effectiveness of I-kaz as an autonomous IMF selector. A straight fluid network was designed in TLM, with a higher resistance fixed at one point to act as a leak and connected to a pipe feature (junction, pipe fitting or blockage). The analysis results reveal that the I-kaz-kurtosis ratio can be used for automatic IMF selection even when the noise level of the signal is low. The I-kaz-kurtosis ratio is therefore recommended for automatic selection of the IMF in HHT analysis.
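A compact sketch of the automatic IMF selection step, assuming the PyEMD package for the empirical mode decomposition; scoring each IMF by its kurtosis is a simplified stand-in for the paper's I-kaz-kurtosis ratio, whose exact definition is not reproduced here.

    # Sketch: decompose the pressure transient into IMFs, score each IMF,
    # and pass the winner to the Hilbert transform step. Plain kurtosis is a
    # simplified stand-in for the I-kaz-kurtosis ratio of the paper.
    import numpy as np
    from scipy.stats import kurtosis
    from PyEMD import EMD  # pip install EMD-signal

    def select_imf(signal: np.ndarray) -> np.ndarray:
        imfs = EMD()(signal)                       # rows are IMFs
        scores = [kurtosis(imf) for imf in imfs]   # transient-sensitive score
        return imfs[int(np.argmax(scores))]        # IMF for the Hilbert step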
Momeni, Saba; Pourghassem, Hossein
2014-08-01
Recently, image fusion has gained a prominent role in medical image processing and is useful for diagnosing and treating many diseases. Digital subtraction angiography is one of the most widely used imaging modalities for diagnosing brain vascular diseases and for radiosurgery of the brain. This paper proposes an automatic fuzzy-based multi-temporal fusion algorithm for 2-D digital subtraction angiography images. In this algorithm, for blood vessel map extraction, the valuable frames of the brain angiography video are automatically determined to form the digital subtraction angiography images, based on a novel definition of the vessel dispersion generated by the injected contrast material. The proposed fusion scheme contains different fusion methods for high and low frequency contents, based on the coefficient characteristics of the wrapping-based second-generation curvelet transform and a novel content selection strategy. The content selection strategy is defined based on the sample correlation of the curvelet transform coefficients. In the proposed fuzzy-based fusion scheme, the selection of curvelet coefficients is optimized by applying weighted averaging and maximum selection rules for the high frequency coefficients. For the low frequency coefficients, the maximum selection rule based on a local energy criterion is applied for better visual perception. The fusion algorithm is evaluated on a brain angiography image dataset consisting of one hundred 2-D internal carotid rotational angiography videos. The obtained results demonstrate the effectiveness and efficiency of the proposed fusion algorithm in comparison with common and basic fusion algorithms.
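The two fusion rules can be sketched on generic transform subbands as below; the arrays stand in for curvelet coefficients of the two source images, and the sign-agreement switch between weighted averaging and maximum selection is an illustrative simplification of the fuzzy selection.

    # Sketch of the two fusion rules on generic transform coefficients:
    # high-frequency subbands fuse by weighted averaging or maximum selection;
    # low-frequency subbands take the coefficient with larger local energy.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def fuse_high(a: np.ndarray, b: np.ndarray, w: float = 0.5) -> np.ndarray:
        # Simplified: weighted average where signs agree, else max selection.
        averaged = w * a + (1 - w) * b
        maxsel = np.where(np.abs(a) >= np.abs(b), a, b)
        return np.where(np.sign(a) == np.sign(b), averaged, maxsel)

    def fuse_low(a: np.ndarray, b: np.ndarray, size: int = 3) -> np.ndarray:
        # Maximum selection rule driven by local energy in a size x size window.
        ea = uniform_filter(a * a, size)
        eb = uniform_filter(b * b, size)
        return np.where(ea >= eb, a, b)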
NASA Astrophysics Data System (ADS)
Sahal, A.; Leone, F.; Péroche, M.
2013-07-01
Small amplitude tsunamis have impacted the French Mediterranean shore (French Riviera) in past centuries. Some caused casualties; others only generated economic losses. While the North Atlantic and Mediterranean tsunami warning system is being tested and is almost operational, no awareness and preparedness measures are being implemented at the local scale. Evacuation must be considered along the French Riviera, but no plan exists within communities. Through the case study of the Cannes-Antibes region, we show that various approaches can provide local stakeholders with evacuation capacity assessments for developing adapted evacuation plans. The complementarity between large- and small-scale approaches is demonstrated with the use of macro-simulators (graph-based) and micro-simulators (multi-agent-based) to select shelter points and choose evacuation routes for pedestrians located on the beach. The first automatically selects shelter points and measures and maps their accessibility. The second reveals potential congestion issues during pedestrian evacuations and provides leads for improving the urban environment. Temporal accessibility to shelters is compared with potential local and distal tsunami travel times, showing a 40 min deficit for adequate crisis management in the first scenario and a 30 min surplus in the second.
Decision Variants for the Automatic Determination of Optimal Feature Subset in RF-RFE.
Chen, Qi; Meng, Zhaopeng; Liu, Xinyi; Jin, Qianguo; Su, Ran
2018-06-15
Feature selection, which identifies a set of the most informative features from the original feature space, has been widely used to simplify predictors. Recursive feature elimination (RFE), one of the most popular feature selection approaches, is effective in data dimension reduction and efficiency increase. A ranking of features, as well as candidate subsets with the corresponding accuracy, is produced through RFE. The subset with the highest accuracy (HA) or a preset number of features (PreNum) is often used as the final subset. However, the former may lead to a large number of features being selected, and if there is no prior knowledge about the preset number, the final subset selection is often ambiguous and subjective. A proper decision variant is in high demand to automatically determine the optimal subset. In this study, we conduct pioneering work to explore the decision variant after obtaining a list of candidate subsets from RFE. We provide a detailed analysis and comparison of several decision variants to automatically select the optimal feature subset. A random forest (RF)-recursive feature elimination (RF-RFE) algorithm and a voting strategy are introduced. We validated the variants on two very different molecular biology datasets, one from a toxicogenomic study and the other from protein sequence analysis. The study provides an automated way to determine the optimal feature subset when using RF-RFE.
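As a concrete baseline, scikit-learn's RFECV automates the final-subset decision by cross-validated accuracy, which is one of the decision variants such a study can be compared against; the synthetic data and settings below are illustrative.

    # Sketch: recursive feature elimination with a random forest, letting
    # cross-validated accuracy decide the final subset size automatically.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import RFECV

    X, y = make_classification(n_samples=300, n_features=40, n_informative=8,
                               random_state=0)
    selector = RFECV(RandomForestClassifier(n_estimators=100, random_state=0),
                     step=1, cv=5, scoring="accuracy")
    selector.fit(X, y)
    print("optimal number of features:", selector.n_features_)
    print("selected mask:", selector.support_)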
NASA Astrophysics Data System (ADS)
Poiata, Natalia; Vilotte, Jean-Pierre; Bernard, Pascal; Satriano, Claudio; Obara, Kazushige
2018-02-01
In this study, we demonstrate the capability of an automatic network-based detection and location method to extract and analyse different components of tectonic tremor activity, by analysing a 9-day energetic tectonic tremor sequence occurring at the down-dip extension of the subducting slab in southwestern Japan. The applied method exploits the coherency of multi-scale, frequency-selective characteristics of non-stationary signals recorded across the seismic network. The use of different characteristic functions in the signal processing step of the method allows extracting and locating the sources of short-duration impulsive signal transients associated with low-frequency earthquakes and of longer-duration energy transients during the tectonic tremor sequence. Frequency-dependent characteristic functions, based on higher-order statistics of the seismic signals, are used for the detection and location of low-frequency earthquakes. This yields a more complete (~6.5 times more events) and better time-resolved catalogue of low-frequency earthquakes than the routine catalogue provided by the Japan Meteorological Agency. As such, this catalogue allows resolving the space-time evolution of low-frequency earthquake activity in great detail, unravelling spatial and temporal clustering, modulation in response to tides, and different scales of space-time migration patterns. In the second part of the study, the detection and source location of longer-duration signal energy transients within the tectonic tremor sequence is performed using characteristic functions built from smoothed frequency-dependent energy envelopes. This leads to a catalogue of longer-duration energy sources during the tectonic tremor sequence, characterized by their durations and by 3-D spatial likelihood maps of the energy-release source regions. The summary 3-D likelihood map for the 9-day tectonic tremor sequence, built from this catalogue, exhibits an along-strike spatial segmentation of the long-duration energy-release regions, matching the large-scale clustering features evidenced by the low-frequency earthquake activity analysis. Further examination of the two catalogues showed that the extracted short-duration low-frequency earthquake activity coincides in space, within about 10-15 km, with the longer-duration energy sources during the tectonic tremor sequence. This observation provides a potential constraint on the size of the longer-duration energy-radiating source region in relation to the clustering of low-frequency earthquake activity during the analysed tremor sequence. We show that advanced statistical network-based methods offer new capabilities for automatic high-resolution detection, location and monitoring of different scale-components of tectonic tremor activity, enriching existing slow earthquake catalogues. Systematic application of such methods to large continuous data sets will allow imaging the slow transient seismic energy-release activity at higher resolution and, therefore, provide new insights into the underlying multi-scale mechanisms of slow earthquake generation.
Extraction and LOD control of colored interval volumes
NASA Astrophysics Data System (ADS)
Miyamura, Hiroko N.; Takeshima, Yuriko; Fujishiro, Issei; Saito, Takafumi
2005-03-01
Interval volume serves as a generalized isosurface and represents a three-dimensional subvolume for which the associated scalar field values lie within a user-specified closed interval. In general, it is not an easy task for novices to specify the scalar field interval corresponding to their ROIs. In order to extract interval volumes from which desirable geometric features can be mined effectively, we propose a suggestive technique which extracts interval volumes automatically based on a global examination of the field contrast structure. Also proposed here is a simplification scheme for decimating the resulting triangle patches to realize efficient transmission and rendering of large-scale interval volumes. Color distributions as well as geometric features are taken into account to select the best edges to be collapsed. In addition, when a user wants to selectively display and analyze the original dataset, the simplified dataset is restored to the original quality. Several simulated and acquired datasets are used to demonstrate the effectiveness of the present methods.
Voxel classification based airway tree segmentation
NASA Astrophysics Data System (ADS)
Lo, Pechin; de Bruijne, Marleen
2008-03-01
This paper presents a voxel classification based method for segmenting the human airway tree in volumetric computed tomography (CT) images. In contrast to standard methods that use only voxel intensities, our method uses a more complex appearance model based on a set of local image appearance features and Kth nearest neighbor (KNN) classification. The optimal set of features for classification is selected automatically from a large set of features describing the local image structure at several scales. The use of multiple features enables the appearance model to differentiate between airway tree voxels and other voxels of similar intensities in the lung, thus making the segmentation robust to pathologies such as emphysema. The classifier is trained on imperfect segmentations that can easily be obtained using region growing with a manual threshold selection. Experiments show that the proposed method results in a more robust segmentation that can grow into the smaller airway branches without leaking into emphysematous areas, and is able to segment many branches that are not present in the training set.
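A toy sketch of the two ingredients, KNN classification and automatic selection from a larger feature set, using scikit-learn; the feature dimensions, labels and neighbour count are illustrative, not the paper's configuration.

    # Sketch: KNN voxel classification with automatic feature selection.
    # X holds per-voxel multi-scale appearance features, y marks airway voxels
    # from an imperfect region-growing segmentation; all sizes are toy values.
    import numpy as np
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 30))                # 30 candidate local features
    y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # toy airway/background labels

    knn = KNeighborsClassifier(n_neighbors=15)
    model = make_pipeline(
        SequentialFeatureSelector(knn, n_features_to_select=8,
                                  direction="forward"),
        knn,
    )
    model.fit(X, y)
    print("training accuracy:", model.score(X, y))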
A method for feature selection of APT samples based on entropy
NASA Astrophysics Data System (ADS)
Du, Zhenyu; Li, Yihong; Hu, Jinsong
2018-05-01
By studying known APT attack events in depth, this paper proposes a feature selection method for APT samples and a logic expression generation algorithm, IOCG (Indicator of Compromise Generate). The algorithm automatically generates machine-readable IOCs (Indicators of Compromise), addressing the limitations of existing IOCs, whose logical relationships are fixed, whose number of logical items cannot change, and which are too large in scale to generate an expression for a given sample. At the same time, it reduces the processing time wasted on redundant and useless APT samples, improves the sharing rate of analysis information, and helps respond actively to a complex and volatile APT attack situation. The samples were divided into an experimental set and a training set, and the algorithm was used to generate logical expressions for the training set with the IOC_Aware plug-in; the generated expressions were then compared against the detection results. The experimental results show that the algorithm is effective and can improve the detection effect.
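A minimal sketch of entropy-based feature scoring, using information gain to rank binary indicator features; this illustrates the entropy criterion only and is not the IOCG algorithm itself.

    # Sketch: rank candidate indicator features for APT samples by
    # information gain, i.e. the reduction in label entropy after a split.
    import numpy as np

    def entropy(labels: np.ndarray) -> float:
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    def information_gain(feature: np.ndarray, labels: np.ndarray) -> float:
        gain = entropy(labels)
        for value in np.unique(feature):
            mask = feature == value
            gain -= mask.mean() * entropy(labels[mask])
        return gain

    # X: (n_samples, n_features) of 0/1 indicators, y: malware-family labels.
    def rank_features(X: np.ndarray, y: np.ndarray) -> np.ndarray:
        gains = [information_gain(X[:, j], y) for j in range(X.shape[1])]
        return np.argsort(gains)[::-1]  # best indicators first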
Carraro, Luciana; Castelli, Luigi; Macchiella, Claudia
2011-01-01
Research has widely explored the differences between conservatives and liberals, and it has also recently been demonstrated that conservatives display different reactions toward valenced stimuli. However, previous studies have not yet fully illuminated the cognitive underpinnings of these differences. In the current work, we argue that political ideology is related to selective attention processes, so that negative stimuli are more likely to automatically grab the attention of conservatives as compared to liberals. In Experiment 1, we demonstrated that negative (vs. positive) information impaired the performance of conservatives, more than liberals, in an Emotional Stroop Task. This finding was confirmed in Experiment 2 and in Experiment 3 employing a Dot-Probe Task, demonstrating that threatening stimuli were more likely to attract the attention of conservatives. Overall, the results support the conclusion that people embracing conservative views of the world display automatic selective attention to negative stimuli. PMID:22096486
Howell, Peter; Sackin, Stevie; Glenn, Kazan
2007-01-01
This program of work is intended to develop automatic recognition procedures to locate and assess stuttered dysfluencies. This article and the following one together develop and test recognizers for repetitions and prolongations. The automatic recognizers classify the speech in two stages: in the first, the speech is segmented, and in the second, the segments are categorized. The units that are segmented are words. Here, assessments by human judges on the speech of 12 children who stutter are described using a corresponding procedure. The accuracy of word boundary placement across judges, the categorization of words as fluent, repetition or prolongation, and the duration of the different fluency categories are reported. These measures allow reliable instances of repetitions and prolongations to be selected for training and assessing the recognizers in the subsequent paper. PMID:9328878
Design of Automatic Extraction Algorithm of Knowledge Points for MOOCs
Chen, Haijian; Han, Dongmei; Zhao, Lina
2015-01-01
In recent years, Massive Open Online Courses (MOOCs) have become very popular among college students and have a powerful impact on academic institutions. In the MOOC environment, knowledge discovery and knowledge sharing are very important, and they are currently often achieved by ontology techniques. In building an ontology, automatic extraction technology is crucial. Because general text mining algorithms do not perform well on online courses, we designed an automatic extraction of course knowledge points (AECKP) algorithm for online courses. It includes document classification, Chinese word segmentation, and POS tagging for each document. The Vector Space Model (VSM) is used to calculate similarity, and weights are designed to optimize the TF-IDF algorithm output values; the terms with the highest scores are selected as knowledge points. Course documents for “C programming language” were selected for the experiment in this study. The results show that the proposed approach can achieve a satisfactory accuracy rate and recall rate. PMID:26448738
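A condensed sketch of the scoring idea, TF-IDF in a vector space model with cosine similarity; the toy English corpus skips the Chinese word segmentation and POS tagging steps, and the aggregation rule is an illustrative stand-in for the AECKP weighting.

    # Sketch: score candidate terms by TF-IDF, weighting documents by their
    # average cosine similarity to the rest of the corpus (VSM step).
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    docs = ["pointer arithmetic and arrays",
            "for loop and while loop syntax",
            "function pointer usage",
            "array indexing inside loops"]           # stand-in course documents

    vec = TfidfVectorizer()
    tfidf = vec.fit_transform(docs)                  # (n_docs, n_terms)
    weights = cosine_similarity(tfidf).mean(axis=1)  # document centrality
    scores = np.asarray(tfidf.T @ weights).ravel()   # weighted TF-IDF per term
    terms = vec.get_feature_names_out()
    top = scores.argsort()[::-1][:5]
    print("candidate knowledge points:", [terms[i] for i in top])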
POPCORN: a Supervisory Control Simulation for Workload and Performance Research
NASA Technical Reports Server (NTRS)
Hart, S. G.; Battiste, V.; Lester, P. T.
1984-01-01
A multi-task simulation of a semi-automatic supervisory control system was developed to provide an environment in which training, operator strategy development, failure detection and resolution, levels of automation, and operator workload can be investigated. The goal was to develop a well-defined, but realistically complex, task that would lend itself to model-based analysis. The name of the task (POPCORN) reflects the visual display, which depicts different task elements milling around waiting to be released and popping out to be performed. The operator's task was to complete each of 100 task elements, represented by different symbols, by selecting a target task and entering the desired command. The simulated automatic system then completed the selected function automatically. Highly significant differences in performance, strategy, and rated workload were found as a function of all experimental manipulations (except reward/penalty).
A Multiple-range Self-balancing Thermocouple Potentiometer
NASA Technical Reports Server (NTRS)
Warshawsky, I; Estrin, M
1951-01-01
A multiple-range potentiometer circuit is described that provides automatic measurement of temperatures or temperature differences with any one of several thermocouple-material pairs. Techniques of automatic reference junction compensation, span adjustment, and zero suppression are described that permit rapid selection of range and wire material, without the necessity for restandardization, by setting of two external tap switches.
Chinese Journal of Lasers (Selected Articles),
1986-04-22
properties. We first investigated silicate-based glasses, then other inorganic glasses such as borate, phosphate, germanate and tellurate ... of the growth of high-melting-temperature oxides, several upward-pulling single-crystal furnaces with high-precision mechanical movement and high ... automatic electronic weighing systems, and programmable automatic movement correction systems. The reliability of most of these control systems
An automatic camera device for measuring waterfowl use
Cowardin, L.M.; Ashe, J.E.
1965-01-01
A Yashica Sequelle camera was modified and equipped with a timing device so that it would take pictures automatically at 15-minute intervals. Several of these cameras were used to photograph randomly selected quadrats located in different marsh habitats. The number of birds photographed in the different areas was used as an index of waterfowl use.
Dissociating Working Memory Updating and Automatic Updating: The Reference-Back Paradigm
ERIC Educational Resources Information Center
Rac-Lubashevsky, Rachel; Kessler, Yoav
2016-01-01
Working memory (WM) updating is a controlled process through which relevant information in the environment is selected to enter the gate to WM and substitute its contents. We suggest that there is also an automatic form of updating, which influences performance in many tasks and is primarily manifested in reaction time sequential effects. The goal…
ERIC Educational Resources Information Center
Okurut, Jeje Moses
2018-01-01
The impact of the automatic promotion practice on students dropping out of Uganda's primary education was assessed using propensity scores in a difference-in-differences analysis. The analysis strategy was instrumental in addressing the selection bias problem, as well as biases arising from common trends over time and permanent latent…
Optimizing Input/Output Using Adaptive File System Policies
NASA Technical Reports Server (NTRS)
Madhyastha, Tara M.; Elford, Christopher L.; Reed, Daniel A.
1996-01-01
Parallel input/output characterization studies and experiments with flexible resource management algorithms indicate that adaptivity is crucial to file system performance. In this paper we propose an automatic technique for selecting and refining file system policies based on application access patterns and execution environment. An automatic classification framework allows the file system to select appropriate caching and pre-fetching policies, while performance sensors provide feedback used to tune policy parameters for specific system environments. To illustrate the potential performance improvements possible using adaptive file system policies, we present results from experiments involving classification-based and performance-based steering.
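A toy sketch of the classification-based steering idea: infer an access pattern from recent request offsets and map it to a caching/prefetching policy; the thresholds and policy table are illustrative assumptions, not the paper's framework.

    # Sketch: classify an access pattern from recent request offsets and
    # pick a prefetching policy; names and depths are illustrative.
    def classify_access_pattern(offsets, block=4096):
        strides = [b - a for a, b in zip(offsets, offsets[1:])]
        if all(s == block for s in strides):
            return "sequential"
        if len(set(strides)) == 1:
            return "strided"
        return "random"

    POLICY = {
        "sequential": ("readahead", 64),          # aggressive prefetch depth
        "strided":    ("strided-prefetch", 16),
        "random":     ("demand-paging", 0),       # prefetching wastes bandwidth
    }

    pattern = classify_access_pattern([0, 4096, 8192, 12288])
    print(pattern, "->", POLICY[pattern])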
Automatic thermal control switches. [for use in Space Shuttle borne Get Away Special container
NASA Technical Reports Server (NTRS)
Wing, L. D.
1982-01-01
Two automatic, flexible connection thermal control switches have been designed and tested in a thermal vacuum facility and in the Get Away Special (GAS) container flown on the third Shuttle flight. The switches are complementary in that one switch passes heat when the plate on which it is mounted exceeds some selected temperature and the other switch will pass heat only when the mounting plate temperature is below the selected value. Both switches are driven and controlled by phase-change capsule motors and require no other power source or thermal sensors.
Virgolin, Marco; van Dijk, Irma W E M; Wiersma, Jan; Ronckers, Cécile M; Witteveen, Cees; Bel, Arjan; Alderliesten, Tanja; Bosman, Peter A N
2018-04-01
The aim of this study is to establish the first step toward a novel and highly individualized three-dimensional (3D) dose distribution reconstruction method, based on CT scans and organ delineations of recently treated patients. Specifically, we assess the feasibility of automatically selecting the CT scan of a recently treated childhood cancer patient who is similar to a given historically treated child who suffered from Wilms' tumor. A cohort of 37 recently treated children between 2 and 6 yr old is considered. Five potential notions of ground-truth similarity are proposed, each focusing on different anatomical aspects. These notions are automatically computed from CT scans of the abdomen and 3D organ delineations (liver, spleen, spinal cord, external body contour). The first is based on deformable image registration, the second on the Dice similarity coefficient, the third on the Hausdorff distance, the fourth on pairwise organ distances, and the last is computed by means of the overlap volume histogram. The relationship between typically available features of historically treated patients and the proposed ground-truth notions of similarity is studied by adopting state-of-the-art machine learning techniques, including random forests. The feasibility of automatically selecting the most similar patient is then assessed by comparing ground-truth rankings of similarity with predicted rankings. Similarities (mainly) based on the external abdomen shape and on the pairwise organ distances are highly correlated (Pearson r ≥ 0.70) and are successfully modeled with random forests based on historically recorded features (pseudo-R² ≥ 0.69). In contrast, similarities based on the shape of internal organs cannot be modeled. For the similarities that random forests can reliably model, an estimation of feature relevance indicates that the abdominal diameters and weight are the most important. Experiments on automatically selecting similar patients lead to coarse, yet quite robust results: the most similar patient is retrieved only 22% of the time; however, the error in worst-case scenarios is limited, with the fourth most similar patient being retrieved. The results demonstrate that automatically selecting similar patients is feasible when focusing on the shape of the external abdomen and on the position of internal organs. Moreover, whereas the common practice in phantom-based dose reconstruction is to select a representative phantom using age, height, and weight as discriminant factors for any treatment scenario, our analysis of abdominal tumor treatment in children shows that the most relevant features are weight and the anterior-posterior and left-right abdominal diameters. © 2018 American Association of Physicists in Medicine.
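A minimal sketch of the modelling-and-ranking step with synthetic data; the feature columns follow the abstract, while the ground-truth similarity values are synthetic and the in-sample ranking is purely illustrative.

    # Sketch: learn a ground-truth similarity from recorded features, then
    # rank the recent cohort for a given historical patient.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    # columns: weight, anterior-posterior diameter, left-right diameter, height
    features = rng.normal(size=(37, 4))
    similarity = rng.uniform(size=37)   # ground-truth notion (synthetic here)

    model = RandomForestRegressor(n_estimators=300, random_state=0)
    model.fit(features, similarity)
    # In-sample ranking for illustration only; a real study would predict
    # similarities for held-out historical patients.
    ranking = np.argsort(model.predict(features))[::-1]
    print("retrieved most similar patient:", ranking[0])
    print("feature relevance:", model.feature_importances_)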
NASA Astrophysics Data System (ADS)
Hostache, Renaud; Chini, Marco; Matgen, Patrick; Giustarini, Laura
2013-04-01
There is a clear need for developing innovative processing chains based on earth observation (EO) data to generate products supporting emergency response and flood management at a global scale. Here an automatic flood mapping application is introduced, currently hosted on the Grid Processing on Demand (G-POD) Fast Access to Imagery (Faire) environment of the European Space Agency. The main objective of the online application is to deliver flooded areas using both recent and historical acquisitions of SAR data in an operational framework. The method can be applied to both medium and high resolution SAR images. The flood mapping application consists of two main blocks: 1) a set of query tools for selecting the "crisis image" and the optimal corresponding pre-flood "reference image" from the G-POD archive; 2) an algorithm for extracting flooded areas using the previously selected "crisis image" and "reference image". The proposed method is a hybrid methodology combining histogram thresholding, region growing and change detection, enabling automatic, objective and reliable flood extent extraction from SAR images. The method is based on the calibration of a statistical distribution of "open water" backscatter values inferred from SAR images of floods. Change detection with respect to a pre-flood reference image helps reduce over-detection of inundated areas. The algorithms are computationally efficient and operate with minimum data requirements, taking as input only a flood image and a reference image. Stakeholders in flood management and service providers are able to log onto the flood mapping application to get support for the retrieval, from the rolling archive, of the most appropriate pre-flood reference image. Potential users will also be able to apply the implemented flood delineation algorithm. Case studies of several recent high magnitude flooding events (e.g. the July 2007 Severn River flood, UK, and the March 2010 Red River flood, US) observed by high-resolution SAR sensors as well as airborne photography highlight the advantages and limitations of the online application. A mid-term target is the exploitation of ESA SENTINEL 1 SAR data streams. In the long term, an extension of the application is foreseen for systematically extracting flooded areas from all SAR images acquired on a daily, weekly or monthly basis. On-going research activities investigate the usefulness of the method for mapping flood hazard at a global scale using databases of historic SAR remote sensing-derived flood inundation maps.
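A schematic sketch of the hybrid chain, thresholding, region growing and change detection, on calibrated backscatter images in dB; all thresholds are illustrative assumptions, not the calibrated statistical distribution used by the application.

    # Sketch: hybrid flood extraction from a SAR crisis image and a pre-flood
    # reference image (both calibrated backscatter in dB).
    import numpy as np
    from scipy import ndimage

    def extract_flood(crisis_db, reference_db,
                      seed_db=-18.0, grow_db=-15.0, min_change_db=3.0):
        seeds = crisis_db < seed_db            # confident open-water pixels
        candidate = crisis_db < grow_db        # region-growing tolerance
        labels, _ = ndimage.label(candidate)
        keep = np.unique(labels[seeds])        # components touching a seed
        grown = np.isin(labels, keep[keep != 0])
        # Change detection: require a clear backscatter drop vs. the
        # reference image to reduce over-detection of permanently smooth areas.
        return grown & (reference_db - crisis_db > min_change_db)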
Sprengers, Andre M J; Caan, Matthan W A; Moerman, Kevin M; Nederveen, Aart J; Lamerichs, Rolf M; Stoker, Jaap
2013-04-01
This study proposes a scale-space-based algorithm for the automated segmentation of single-shot tagged images of modest SNR. The algorithm was furthermore designed for the analysis of discontinuous or shearing types of motion, i.e. the segmentation of broken tag patterns. The proposed algorithm utilises a non-linear scale space for automatic segmentation of single-shot tagged images. The algorithm's ability to automatically segment tagged shearing motion was evaluated in a numerical simulation and in vivo. A typical shearing deformation was simulated in a Shepp-Logan phantom, allowing for quantitative evaluation of the algorithm's success rate as a function of both SNR and the amount of deformation. For a qualitative in vivo evaluation, tagged images showing deformations in the calf muscles and eye movement of a healthy volunteer were acquired. Both the numerical simulation and the in vivo tagged data demonstrated the algorithm's ability to automatically segment single-shot tagged MR, provided that the SNR of the images is above 10 and the amount of deformation does not exceed the tag spacing. The latter constraint can be met by adjusting the tag delay or the tag spacing. The scale-space-based algorithm for automatic segmentation of single-shot tagged MR enables the application of tagged MR to complex (shearing) deformations and the processing of datasets with relatively low SNR.
Selecting Cases for Intensive Analysis: A Diversity of Goals and Methods
ERIC Educational Resources Information Center
Gerring, John; Cojocaru, Lee
2016-01-01
This study revisits the task of case selection in case study research, proposing a new typology of strategies that is explicit, disaggregated, and relatively comprehensive. A secondary goal is to explore the prospects for case selection by "algorithm," aka "ex ante," "automatic," "quantitative,"…
Ungvari, Gabor S; Goggins, William; Leung, Siu-Kau; Gerevich, Jozsef
2007-03-30
Previous factor analyses of catatonia have yielded conflicting results for several reasons, including small and/or diagnostically heterogeneous samples and incomparability or lack of standardized assessment. This study examined the factor structure of catatonia in a large, diagnostically homogeneous sample of patients with chronic schizophrenia, using standardized rating instruments. A random sample of 225 Chinese inpatients diagnosed with schizophrenia according to DSM-IV criteria was selected from the long-stay wards of a psychiatric hospital. They were assessed with a battery of rating scales measuring psychopathology, extrapyramidal motor status, and level of functioning. Catatonia was rated using the Bush-Francis Catatonia Rating Scale. Factor analysis using principal component analysis and Varimax rotation with Kaiser normalization was performed. Four factors were identified, with eigenvalues of 3.27, 2.58, 2.28 and 1.88. The percentages of variance explained by the four factors were 15.9%, 12.0%, 11.8% and 10.2%, respectively, and together they explained 49.9% of the total variance. Factor 1 loaded on "negative/withdrawn" phenomena, Factor 2 on "automatic" phenomena, Factor 3 on "repetitive/echo" phenomena and Factor 4 on "agitated/resistive" phenomena. In multivariate linear regression analysis, negative symptoms and akinesia were associated with 'negative' catatonic symptoms; antipsychotic doses and atypical antipsychotics with 'automatic' symptoms; length of current admission, severity of psychopathology and younger age at onset with 'repetitive' symptoms; and age, poor functioning and severity of psychopathology with 'agitated' catatonic symptom scores. The results support recent findings that four main factors underlie catatonic signs/symptoms in chronic schizophrenia.
Validation of automatic segmentation of ribs for NTCP modeling.
Stam, Barbara; Peulen, Heike; Rossi, Maddalena M G; Belderbos, José S A; Sonke, Jan-Jakob
2016-03-01
Determination of a dose-effect relation for rib fractures in a large patient group has been limited by the time consuming manual delineation of ribs. Automatic segmentation could facilitate such an analysis. We determine the accuracy of automatic rib segmentation in the context of normal tissue complication probability modeling (NTCP). Forty-one patients with stage I/II non-small cell lung cancer treated with SBRT to 54 Gy in 3 fractions were selected. Using the 4DCT derived mid-ventilation planning CT, all ribs were manually contoured and automatically segmented. Accuracy of segmentation was assessed using volumetric, shape and dosimetric measures. Manual and automatic dosimetric parameters Dx and EUD were tested for equivalence using the Two One-Sided T-test (TOST), and assessed for agreement using Bland-Altman analysis. NTCP models based on manual and automatic segmentation were compared. Automatic segmentation was comparable with the manual delineation in radial direction, but larger near the costal cartilage and vertebrae. Manual and automatic Dx and EUD were significantly equivalent. The Bland-Altman analysis showed good agreement. The two NTCP models were very similar. Automatic rib segmentation was significantly equivalent to manual delineation and can be used for NTCP modeling in a large patient group. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Quality Control of True Height Profiles Obtained Automatically from Digital Ionograms.
1982-05-01
Keywords: Ionosphere; Digisonde; Electron Density Profile; Ionogram; Autoscaling; ARTIST. Abstract: ...analysis technique currently used with the ionogram traces scaled automatically by the ARTIST software [Reinisch and Huang, 1983; Reinisch et al., 1984], and the generalized polynomial analysis technique POLAN [Titheridge, 1985], using the same ARTIST-identified ionogram traces. 2. To determine how
The PR2D (Place, Route in 2-Dimensions) automatic layout computer program handbook
NASA Technical Reports Server (NTRS)
Edge, T. M.
1978-01-01
Place, Route in 2-Dimensions is a standard cell automatic layout computer program for generating large scale integrated/metal oxide semiconductor arrays. The program was utilized successfully for a number of years in both government and private sectors but until now was undocumented. The compilation, loading, and execution of the program on a Sigma V CP-V operating system is described.
NASA Astrophysics Data System (ADS)
Horton, Pascal; Weingartner, Rolf; Obled, Charles; Jaboyedoff, Michel
2017-04-01
Analogue methods (AMs) rely on the hypothesis that similar situations, in terms of atmospheric circulation, are likely to result in similar local or regional weather conditions. These methods consist of sampling a certain number of past situations, based on different synoptic-scale meteorological variables (predictors), in order to construct a probabilistic prediction for a local weather variable of interest (predictand). They are often used for daily precipitation prediction, whether in the context of real-time forecasting, reconstruction of past weather conditions, or future climate impact studies. The relationship between predictors and predictands is defined by several parameters (predictor variable, spatial and temporal windows used for the comparison, analogy criteria, and number of analogues), which are often calibrated by means of a semi-automatic sequential procedure that has strong limitations. AMs may include several subsampling levels (e.g. first sorting a set of analogues in terms of circulation, then restricting to those with a similar moisture status). The parameter space of the AMs can be very complex, with substantial co-dependencies between the parameters. Thus, global optimization techniques are likely to be necessary for calibrating most AM variants, as they can optimize all parameters of all analogy levels simultaneously. Genetic algorithms (GAs) were found to be successful in finding optimal values of AM parameters. They take parameter interdependencies into account and objectively select parameters that previously had to be chosen manually (such as the pressure levels and the temporal windows of the predictor variables), thus obviating the need to assess a large number of combinations. The performance scores of the optimized methods increased compared to reference methods, and to an even greater extent for days with high precipitation totals. The resulting parameters were found to be relevant and spatially coherent. Moreover, they were obtained automatically and objectively, which reduces the effort invested in exploration attempts when adapting the method to a new region or to a new predictand. In addition, the approach allowed for new degrees of freedom, such as a weighting between the pressure levels and non-overlapping spatial windows. Genetic algorithms were then used further in order to automatically select predictor variables and analogy criteria. This produced interesting outputs, providing new predictor-criterion combinations. However, some limitations of the approach were encountered, and expert input is likely to remain necessary. Nevertheless, letting GAs explore a dataset for the best predictor for a predictand of interest is certainly a useful tool, particularly when applied to a new predictand or a new region with different climatic characteristics.
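A bare-bones genetic algorithm over two AM parameters gives the flavour of the calibration; the skill function is a placeholder for a real verification score (e.g. CRPS over a calibration period), and all ranges and rates are illustrative.

    # Sketch: GA calibrating two analogue-method parameters (spatial window
    # half-width, number of analogues) against a placeholder skill function.
    import random

    def skill(window, n_analogues):            # stand-in objective
        return -((window - 7) ** 2 + (n_analogues - 25) ** 2)

    def evolve(pop_size=30, generations=40):
        pop = [(random.randint(1, 15), random.randint(5, 60))
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=lambda p: skill(*p), reverse=True)
            parents = pop[: pop_size // 2]      # truncation selection
            children = []
            while len(children) < pop_size - len(parents):
                (w1, n1), (w2, n2) = random.sample(parents, 2)
                w, n = random.choice([w1, w2]), random.choice([n1, n2])
                if random.random() < 0.2:       # mutation
                    w = max(1, w + random.randint(-2, 2))
                    n = max(5, n + random.randint(-5, 5))
                children.append((w, n))
            pop = parents + children
        return max(pop, key=lambda p: skill(*p))

    print("calibrated (window, n_analogues):", evolve())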
Berlin, Ivan; Singleton, Edward G; Pedarriosse, Anne-Marie; Lancrenon, Sylvie; Rames, Alexis; Aubin, Henri-Jean; Niaura, Raymond
2003-11-01
To assess the validity of the French version of the Modified Reasons for Smoking Scale (MRSS), and to identify which smoking patterns differentiate male and female smokers, which are related to tobacco dependence (as assessed by the Fagerström Test for Nicotine Dependence, FTND), to mood (Beck Depression Inventory II) and to affect (Positive and Negative Affect Schedule), and which are predictors of successful quitting. Three hundred and thirty smokers [(mean ± SD) aged 40 ± 9 years, 145 (44%) women, mean FTND score: 6.2 ± 2], candidates for a smoking cessation programme and smoking at least 15 cigarettes/day. Factor analysis of the 21-item scale gave the optimal fit for a seven-factor model, which accounted for 62.3% of the total variance. The following factors were identified: 'addictive smoking', 'pleasure from smoking', 'tension reduction/relaxation', 'social smoking', 'stimulation', 'habit/automatism' and 'handling'. The 'addictive smoking' score increased in a dose-dependent manner with the number of cigarettes smoked per day; the 'habit/automatism' score was significantly higher with more than 20 cigarettes per day than with ≤ 20 cigarettes per day. The reasons for smoking were different for males and females: females scored higher on 'tension reduction/relaxation', 'stimulation' and 'social smoking'. A high level of dependence (FTND ≥ 6) was associated with significantly higher scores only on 'addictive smoking', the association being stronger in females. Time to first cigarette after awakening was associated with higher 'addictive smoking' and 'habit/automatism' scores (P < 0.001). In a multivariate logistic regression, failed quitting was predicted by a higher habit/automatism score (odds ratio = 1.44, 95% CI = 1.06-1.95, P = 0.02) and a greater number of cigarettes smoked per day (odds ratio = 1.03, 95% CI = 1.01-1.06, P = 0.03). The questionnaire yielded a coherent factor structure; women smoked more for tension reduction/relaxation, stimulation and social reasons than men; addictive smoking and automatic smoking behaviour were similar in both sexes and were strongly associated with a high level of nicotine dependence; the 'habit/automatism' score predicted failure to quit over and above cigarettes per day.
A Robust Automatic Ionospheric O/X Mode Separation Technique for Vertical Incidence Sounders
NASA Astrophysics Data System (ADS)
Harris, T. J.; Pederick, L. H.
2017-12-01
The sounding of the ionosphere by a vertical incidence sounder (VIS) is the oldest and most common technique for determining the state of the ionosphere. The automatic extraction of relevant ionospheric parameters from the ionogram image, referred to as scaling, is important for the effective utilization of data from large ionospheric sounder networks. Due to the Earth's magnetic field, the ionosphere is birefringent at radio frequencies, so a VIS will typically see two distinct returns for each frequency. For the automatic scaling of ionograms, it is highly desirable to be able to separate the two modes. Defence Science and Technology Group has developed a new VIS solution which is based on direct digital receiver technology and includes an algorithm to separate the O and X modes. This algorithm can provide high-quality separation even in difficult ionospheric conditions. In this paper we describe the algorithm and demonstrate its consistency and reliability in successfully separating 99.4% of the ionograms during a 27 day experimental campaign under sometimes demanding ionospheric conditions.
Automatic script identification from images using cluster-based templates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hochberg, J.; Kerns, L.; Kelly, P.
We have developed a technique for automatically identifying the script used to generate a document that is stored electronically in bit image form. Our approach differs from previous work in that the distinctions among scripts are discovered by an automatic learning procedure, without any hands-on analysis. We first develop a set of representative symbols (templates) for each script in our database (Cyrillic, Roman, etc.). We do this by identifying all textual symbols in a set of training documents, scaling each symbol to a fixed size, clustering similar symbols, pruning minor clusters, and finding each cluster's centroid. To identify a new document's script, we identify and scale a subset of symbols from the document and compare them to the templates for each script. We choose the script whose templates provide the best match. Our current system distinguishes among the Armenian, Burmese, Chinese, Cyrillic, Ethiopic, Greek, Hebrew, Japanese, Korean, Roman, and Thai scripts with over 90% accuracy.
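A small sketch of the cluster-based template idea using k-means; the cluster count and the nearest-template distance are illustrative choices, not the exact procedure of the system described.

    # Sketch: build per-script templates by clustering fixed-size symbol
    # bitmaps, then identify a document's script by the best template match.
    import numpy as np
    from sklearn.cluster import KMeans

    def build_templates(symbols: np.ndarray, k: int = 50) -> np.ndarray:
        # symbols: (n_symbols, h*w) scaled, flattened bitmaps for one script
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(symbols)
        return km.cluster_centers_

    def match_score(symbols: np.ndarray, templates: np.ndarray) -> float:
        # Mean distance from each symbol to its closest template (lower = better).
        d = np.linalg.norm(symbols[:, None, :] - templates[None, :, :], axis=2)
        return float(d.min(axis=1).mean())

    def identify(symbols: np.ndarray, templates_by_script: dict) -> str:
        return min(templates_by_script,
                   key=lambda s: match_score(symbols, templates_by_script[s]))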
Amini, Reza; Sabourin, Catherine; De Koninck, Joseph
2011-12-01
The scientific study of dreams requires the most objective methods to reliably analyze dream content. In this context, artificial intelligence should prove useful for an automatic and non-subjective scoring technique. Past research has utilized word search and emotional affiliation methods to model and automatically match human judges' scoring of dream reports' negative emotional tone. The current study added word associations to improve the model's accuracy. Word associations were established using the words' frequency of co-occurrence with their defining words as found in a dictionary and an encyclopedia. It was hypothesized that this addition would facilitate the machine learning model and improve its predictability beyond that of previous models. With a sample of 458 dreams, this model demonstrated an improvement in accuracy from 59% to 63% (kappa = .485) on the negative emotional tone scale, and for the first time reached an accuracy of 77% (kappa = .520) on the positive scale. Copyright © 2011 Elsevier Inc. All rights reserved.
Yavuzer, Yasemin; Karataş, Zeynep
2013-01-01
This study aimed to examine the mediating role of anger in the relationship between automatic thoughts and physical aggression in adolescents. The study included 224 adolescents in the 9th grade of 3 different high schools in central Burdur during the 2011-2012 academic year. Participants completed the Aggression Questionnaire and the Automatic Thoughts Scale in their classrooms during counseling sessions. Data were analyzed using simple and multiple linear regression analysis. There were positive correlations between the adolescents' automatic thoughts, physical aggression, and anger. According to the regression analysis, automatic thoughts effectively predicted the level of physical aggression (b = 0.233, P < 0.001) and anger (b = 0.325, P < 0.001). Analysis of the mediating role of anger showed that anger fully mediated the relationship between automatic thoughts and physical aggression (Sobel z = 5.646, P < 0.001). Providing adolescents with anger management skills training is very important for the prevention of physical aggression. Such training programs should include components related to developing an awareness of dysfunctional and anger-triggering automatic thoughts and to changing them. As the study group included adolescents from Burdur, the findings can only be generalized to groups with similar characteristics.
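For reference, the Sobel statistic used in such mediation tests has the standard textbook form below, where a and b are the coefficients of the two mediation paths and s_a, s_b their standard errors; this is general background, not a formula quoted from the study.

    z = \frac{a\,b}{\sqrt{b^{2} s_{a}^{2} + a^{2} s_{b}^{2}}}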
Automatic sleep scoring: a search for an optimal combination of measures.
Krakovská, Anna; Mezeiová, Kristína
2011-09-01
The objective of this study is to find the best set of characteristics of polysomnographic signals for the automatic classification of sleep stages. A selection was made from 74 measures, including linear spectral measures, interdependency measures, and nonlinear measures of complexity, computed for the all-night polysomnographic recordings of 20 healthy subjects. The adopted multidimensional analysis involved quadratic discriminant analysis, a forward selection procedure, and selection by the best-subset procedure. Two situations were considered: the use of four polysomnographic signals (EEG, EMG, EOG, and ECG) and the use of the EEG alone. For the given database, the best automatic sleep classifier achieved approximately an 81% agreement with the hypnograms of experts. The classifier was based on the following 14 features of polysomnographic signals: the ratio of powers in the beta and delta frequency ranges (EEG, channel C3), the fractal exponent (EMG), the variance (EOG), the absolute power in the sigma 1 band (EEG, C3), the relative power in the delta 2 band (EEG, O2), theta/gamma (EEG, C3), theta/alpha (EEG, O1), sigma/gamma (EEG, C4), the coherence in the delta 1 band (EEG, O1-O2), the entropy (EMG), the absolute theta 2 (EEG, Fp1), theta/alpha (EEG, Fp1), the sigma 2 coherence (EEG, O1-C3), and the zero-crossing rate (ECG). However, even with only four features, sleep scoring could be performed with a 74% accuracy, which is comparable to the inter-rater agreement between two independent specialists. We have shown that 4-14 carefully selected polysomnographic features were sufficient for successful sleep scoring. The efficiency of the corresponding automatic classifiers was verified and conclusively demonstrated on all-night recordings from healthy adults. Copyright © 2011 Elsevier B.V. All rights reserved.
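A compact sketch of the multidimensional analysis, quadratic discriminant analysis wrapped in forward feature selection, on synthetic data shaped like the 74-measure setting; labels, sizes and scores here are illustrative only.

    # Sketch: forward feature selection around quadratic discriminant
    # analysis; the data are synthetic stand-ins for 74 polysomnographic
    # measures per 30-s epoch with 5 toy sleep-stage labels.
    import numpy as np
    from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(600, 74))      # 74 candidate measures per epoch
    y = rng.integers(0, 5, size=600)    # toy sleep-stage labels

    qda = QuadraticDiscriminantAnalysis()
    sfs = SequentialFeatureSelector(qda, n_features_to_select=14,
                                    direction="forward", cv=5)
    sfs.fit(X, y)
    picked = np.flatnonzero(sfs.get_support())
    print("selected measures:", picked)
    print("CV accuracy:", cross_val_score(qda, X[:, picked], y, cv=5).mean())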
Glnemo2: Interactive Visualization 3D Program
NASA Astrophysics Data System (ADS)
Lambert, Jean-Charles
2011-10-01
Glnemo2 is an interactive 3D visualization program developed in C++ using the OpenGL library and the Nokia QT 4.X API. It displays in 3D the particle positions of the different components of an nbody snapshot. It quickly gives a lot of information about the data (shape, density areas, formation of structures such as spirals, bars, or peanuts). It allows for in/out zooms, rotations, changes of scale, translations, selection of different groups of particles, and plots in different blending colors. It can color particles according to their density or temperature, play with the density threshold, trace orbits, display different time steps, take automatic screenshots to make movies, select particles using the mouse, and fly over a simulation using a given camera path. All these features are accessible from a very intuitive graphic user interface. Glnemo2 supports a wide range of input file formats (Nemo, Gadget 1 and 2, phiGrape, Ramses, list of files, realtime gyrfalcON simulation), which are automatically detected at loading time without user intervention. Glnemo2 uses a plugin mechanism to load the data, so that it is easy to add a new file reader. It is powered by a 3D engine which uses the latest OpenGL technology, such as shaders (glsl), vertex buffer objects, and frame buffer objects, and takes into account the power of the graphics card used in order to accelerate the rendering. With a fast GPU, millions of particles can be rendered in real time. Glnemo2 runs on Linux, Windows (using the minGW compiler), and Mac OS X, thanks to the QT4 API.
Automatic color preference correction for color reproduction
NASA Astrophysics Data System (ADS)
Tsukada, Masato; Funayama, Chisato; Tajima, Johji
2000-12-01
The reproduction of natural objects in color images has attracted a great deal of attention. Reproducing more pleasing colors of natural objects is one of the methods available to improve image quality. We developed an automatic color correction method to maintain preferred color reproduction for three significant categories: facial skin color, green grass, and blue sky. In this method, a representative color in an object area to be corrected is automatically extracted from the input image, and a set of color correction parameters is selected depending on the representative color. The improvement in image quality for reproductions of natural images was more than 93 percent in subjective experiments. These results show the usefulness of our automatic color correction method for the reproduction of preferred colors.
Automatic programming of simulation models
NASA Technical Reports Server (NTRS)
Schroer, Bernard J.; Tseng, Fan T.; Zhang, Shou X.; Dwan, Wen S.
1990-01-01
The concepts of software engineering were used to improve the simulation modeling environment. Emphasis was placed on the application of an element of rapid prototyping, or automatic programming, to assist the modeler in defining the problem specification. Once the problem specification has been defined, an automatic code generator is used to write the simulation code. Two domains were selected for evaluating the concepts of software engineering for discrete event simulation: a manufacturing domain and a spacecraft countdown network sequence. The specific tasks were to: (1) define the software requirements for a graphical user interface to the Automatic Manufacturing Programming System (AMPS); (2) develop a graphical user interface for AMPS; and (3) compare the AMPS graphical interface with the AMPS interactive user interface.
NASA Astrophysics Data System (ADS)
Gao, M.; Li, J.
2018-04-01
Geometric correction is an important preprocessing step in the application of GF4 PMS imagery. Geometric correction based on the manual selection of ground control points is time-consuming and laborious. The more common method, based on a reference image, is automatic image registration, which involves several steps and parameters. For the multi-spectral sensor GF4 PMS, we need to identify the best combination of parameters and steps. This study mainly focuses on the following issues: the necessity of Rational Polynomial Coefficients (RPC) correction before automatic registration, the choice of base band for the automatic registration, and the configuration of the GF4 PMS spatial resolution.
Application of quantum-behaved particle swarm optimization to motor imagery EEG classification.
Hsu, Wei-Yen
2013-12-01
In this study, we propose a recognition system for single-trial analysis of motor imagery (MI) electroencephalogram (EEG) data. Applying event-related brain potential (ERP) data acquired from the sensorimotor cortices, the system chiefly consists of automatic artifact elimination, feature extraction, feature selection, and classification. In addition to the use of independent component analysis, a similarity measure is proposed to further remove the electrooculographic (EOG) artifacts automatically. Several potential features, such as wavelet-fractal features, are then extracted for subsequent classification. Next, quantum-behaved particle swarm optimization (QPSO) is used to select features from the feature combination. Finally, the selected sub-features are classified by a support vector machine (SVM). Compared with approaches without artifact elimination, with feature selection using a genetic algorithm (GA), and with feature classification using Fisher's linear discriminant (FLD) on MI data from two data sets for eight subjects, the results indicate that the proposed method is promising for brain-computer interface (BCI) applications.
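A self-contained sketch of the QPSO update applied to a relaxed feature mask; the objective is a toy stand-in for cross-validated classification error, and the contraction-expansion coefficient beta, the particle count and the 0.5 binarization threshold are typical but assumed values.

    # Sketch of the quantum-behaved PSO update on a relaxed (0-1) feature
    # mask; entries above 0.5 count as selected features.
    import numpy as np

    def qpso(objective, dim, n_particles=20, iters=100, beta=0.75, seed=0):
        rng = np.random.default_rng(seed)
        x = rng.uniform(0, 1, (n_particles, dim))
        pbest = x.copy()
        pbest_f = np.array([objective(p) for p in pbest])
        for _ in range(iters):
            gbest = pbest[np.argmin(pbest_f)]          # global best position
            mbest = pbest.mean(axis=0)                 # mean of personal bests
            phi = rng.uniform(size=(n_particles, dim))
            attractor = phi * pbest + (1 - phi) * gbest
            u = rng.uniform(1e-12, 1.0, size=(n_particles, dim))
            sign = np.where(rng.uniform(size=(n_particles, dim)) < 0.5, -1.0, 1.0)
            x = np.clip(attractor + sign * beta * np.abs(mbest - x) * np.log(1 / u),
                        0, 1)
            f = np.array([objective(p) for p in x])
            improved = f < pbest_f
            pbest[improved], pbest_f[improved] = x[improved], f[improved]
        return pbest[np.argmin(pbest_f)] > 0.5         # final feature mask

    target = np.array([1, 0, 1, 0, 0, 1, 0, 0])        # hidden useful features
    error = lambda m: float(np.abs((m > 0.5).astype(float) - target).sum())
    print("selected features:", np.flatnonzero(qpso(error, dim=8)))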
Alternating evolutionary pressure in a genetic algorithm facilitates protein model selection
Offman, Marc N; Tournier, Alexander L; Bates, Paul A
2008-01-01
Background Automatic protein modelling pipelines are becoming ever more accurate; this has come hand in hand with an increasingly complicated interplay between all components involved. Nevertheless, there is still room for improvement in template selection, refinement and protein model selection. Results In the context of an automatic modelling pipeline, we analysed each step separately, revealing several non-intuitive trends, and explored a new strategy for protein conformation sampling using Genetic Algorithms (GA). We applied the concept of alternating evolutionary pressure (AEP), i.e. intermediate rounds within the GA runs in which unrestrained, linear growth of the model populations is allowed. Conclusion This approach improves the overall performance of the GA by allowing models to overcome local energy barriers. AEP enabled the selection of the best models for 40% of all targets, compared to 25% for a normal GA. PMID:18673557
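A minimal sketch of the AEP idea, assuming generic placeholder `energy` and `mutate` functions rather than the paper's modelling pipeline: in a normal generation the population is truncated to the fittest models, while in an AEP generation the truncation step is skipped so the population grows without selective pressure.

    import random

    def aep_ga(init_pop, energy, mutate, n_gen=100, pop_size=50, aep_every=10):
        # Genetic algorithm with alternating evolutionary pressure (AEP): every
        # `aep_every`-th generation skips selection, letting the population grow
        # unrestrained and helping models escape local energy minima.
        pop = list(init_pop)
        for gen in range(n_gen):
            offspring = [mutate(random.choice(pop)) for _ in range(pop_size)]
            pop.extend(offspring)
            if gen % aep_every:              # normal generation: apply selection
                pop.sort(key=energy)         # lower energy = better model
                pop = pop[:pop_size]
            # AEP generation: no truncation, population grows linearly
        pop.sort(key=energy)
        return pop[0]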
Mueller, David S.
2013-01-01
profiles from the entire cross section and multiple transects to determine a mean profile for the measurement. The use of an exponent derived from normalized data from the entire cross section is shown to be valid for applying the power velocity distribution law to the computation of the unmeasured discharge in a cross section. Selected statistics are combined with empirically derived criteria to automatically select the appropriate extrapolation methods. A graphical user interface (GUI) provides the user with tools to visually evaluate the automatically selected extrapolation methods and change them manually, as necessary. The sensitivity of the total discharge to the available extrapolation methods is presented in the GUI. Use of extrap by field hydrographers has demonstrated that it is a more accurate and efficient way of determining the appropriate extrapolation methods than the tools currently (2012) provided in the ADCP manufacturers' software.
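As a rough illustration of the power velocity distribution law mentioned above, the sketch below fits the exponent to normalized depth/velocity data in log space and integrates the fitted profile over an unmeasured layer; the variable names and layer geometry are illustrative assumptions, not extrap's actual implementation.

    import numpy as np

    def fit_power_law(z_norm, v_norm):
        # Fit v = a * z**b to normalized depth/velocity pairs (least squares in log space).
        b, log_a = np.polyfit(np.log(z_norm), np.log(v_norm), 1)
        return np.exp(log_a), b

    def unmeasured_discharge(a, b, z_lo, z_hi, width):
        # Integrate the fitted profile over an unmeasured layer of the cross section.
        return width * a * (z_hi ** (b + 1) - z_lo ** (b + 1)) / (b + 1)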
System for definition of the central-chest vasculature
NASA Astrophysics Data System (ADS)
Taeprasartsit, Pinyo; Higgins, William E.
2009-02-01
Accurate definition of the central-chest vasculature from three-dimensional (3D) multi-detector CT (MDCT) images is important for pulmonary applications. For instance, the aorta and pulmonary artery help in automatic definition of the Mountain lymph-node stations for lung-cancer staging. This work presents a system for defining major vascular structures in the central chest. The system provides automatic methods for extracting the aorta and pulmonary artery and semi-automatic methods for extracting the other major central chest arteries/veins, such as the superior vena cava and azygos vein. Automatic aorta and pulmonary artery extraction are performed by model fitting and selection. The system also extracts certain vascular structure information to validate outputs. A semi-automatic method extracts vasculature by finding the medial axes between provided important sites. Results of the system are applied to lymph-node station definition and guidance of bronchoscopic biopsy.
ERIC Educational Resources Information Center
Khatib, Mohammad; Fat'hi, Jalil
2011-01-01
Prompted by the recent shift of attention from just focusing on the top-down processing in L2 reading towards considering the basic component, bottom-up processing, the role of phonological component has also enjoyed popularity among a selected circle of SLA investigators (Koda, 2005). This study investigated the effect of the automatization of…
Exploring the Developmental Changes in Automatic Two-Digit Number Processing
ERIC Educational Resources Information Center
Chan, Winnie Wai Lan; Au, Terry K.; Tang, Joey
2011-01-01
Even when two-digit numbers are irrelevant to the task at hand, adults process them. Do children process numbers automatically, and if so, what kind of information is activated? In a novel dot-number Stroop task, children (Grades 1-5) and adults were shown two different two-digit numbers made up of dots. Participants were asked to select the…
Tsugawa, Hiroshi; Arita, Masanori; Kanazawa, Mitsuhiro; Ogiwara, Atsushi; Bamba, Takeshi; Fukusaki, Eiichiro
2013-05-21
We developed a new software program, MRMPROBS, for widely targeted metabolomics using the large-scale multiple reaction monitoring (MRM) mode. This strategy has become increasingly popular for the simultaneous analysis of up to several hundred metabolites with high sensitivity, selectivity, and quantitative capability. However, the traditional practice of assessing measured metabolomics data without probabilistic criteria is not only time-consuming but also subjective and ad hoc. Our program overcomes these problems by detecting and identifying metabolites automatically, separating isomeric metabolites, and removing background noise using a probabilistic score defined as the odds ratio from an optimized multivariate logistic regression model. The program also provides a user-friendly graphical interface to curate and organize data matrices and to apply principal component analyses and statistical tests. As a demonstration, we conducted a widely targeted metabolome analysis (152 metabolites) of propagating Saccharomyces cerevisiae measured at 15 time points by gas and liquid chromatography coupled to triple quadrupole mass spectrometry. MRMPROBS is a useful and practical tool for the assessment of large-scale MRM data from any instrument or experimental condition.
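The probabilistic score can be illustrated with a small sketch: a logistic regression is trained on curated peak annotations, and each candidate peak is then scored by its posterior odds. The two features and the training values below are hypothetical placeholders, not MRMPROBS's actual model.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical training set: one row per annotated MRM peak, with features such as
    # retention-time deviation (min) and peak-shape correlation between transitions.
    X_train = np.array([[0.02, 0.98], [0.05, 0.95], [0.90, 0.30], [1.20, 0.10]])
    y_train = np.array([1, 1, 0, 0])          # 1 = true metabolite peak, 0 = noise

    model = LogisticRegression().fit(X_train, y_train)

    def peak_odds(features):
        # Posterior odds that a candidate peak is a real metabolite signal.
        p = model.predict_proba(np.atleast_2d(features))[0, 1]
        return p / (1 - p)

    print(peak_odds([0.03, 0.97]))  # large odds -> accept; small odds -> background noise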
Schott, Benjamin; Traub, Manuel; Schlagenhauf, Cornelia; Takamiya, Masanari; Antritter, Thomas; Bartschat, Andreas; Löffler, Katharina; Blessing, Denis; Otte, Jens C; Kobitski, Andrei Y; Nienhaus, G Ulrich; Strähle, Uwe; Mikut, Ralf; Stegmaier, Johannes
2018-04-01
State-of-the-art light-sheet and confocal microscopes allow recording of entire embryos in 3D and over time (3D+t) for many hours. Fluorescently labeled structures can be segmented and tracked automatically in these terabyte-scale 3D+t images, resulting in thousands of cell migration trajectories that provide detailed insights into large-scale tissue reorganization at the cellular level. Here we present EmbryoMiner, a new interactive open-source framework suitable for in-depth analyses and comparisons of entire embryos, including an extensive set of trajectory features. Starting at the whole-embryo level, the framework can be used to iteratively focus on a region of interest within the embryo, to investigate and test specific trajectory-based hypotheses and to extract quantitative features from the isolated trajectories. Thus, the new framework provides a valuable new way to quantitatively compare corresponding anatomical regions in different embryos that were manually selected based on biological prior knowledge. As a proof of concept, we analyzed 3D+t light-sheet microscopy images of zebrafish embryos, showcasing potential user applications that can be performed using the new framework.
Fast hierarchical knowledge-based approach for human face detection in color images
NASA Astrophysics Data System (ADS)
Jiang, Jun; Gong, Jie; Zhang, Guilin; Hu, Ruolan
2001-09-01
This paper presents a fast hierarchical knowledge-based approach for automatically detecting multi-scale upright faces in still color images. The approach consists of three levels. At the highest level, skin-like regions are determined by a skin model based on the hue and saturation attributes in HSV color space, as well as the red and green attributes in normalized color space. In level 2, a new eye model is devised to select face candidates within the segmented skin-like regions. An important feature of the eye model is that it is independent of face scale, so faces of different scales can be found by scanning the image only once, which greatly reduces the computation time of face detection. In level 3, a face mosaic image model, consistent with the physical structure of the human face, is applied to judge whether a face is present in each candidate region. This model includes edge and gray rules. Experimental results show that the approach is highly robust and fast, with wide application prospects in human-computer interaction, visual telephony, etc.
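A minimal sketch of the level-1 skin segmentation, combining an HSV hue/saturation gate with normalized red/green ratios; all threshold values below are illustrative assumptions, not those of the paper.

    import cv2
    import numpy as np

    def skin_mask(bgr_image):
        # Gate 1: hue/saturation bounds in HSV space (OpenCV hue range is 0-179).
        hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
        mask_hsv = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))
        # Gate 2: red/green attributes in normalized (r, g) color space.
        b, g, r = cv2.split(bgr_image.astype(np.float32))
        s = b + g + r + 1e-6
        mask_rg = ((r / s > 0.36) & (g / s > 0.28) & (g / s < 0.36)).astype(np.uint8) * 255
        # Skin-like pixels must satisfy both color models.
        return cv2.bitwise_and(mask_hsv, mask_rg)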
Gronchi, G; Righi, S; Pierguidi, L; Giovannelli, F; Murasecco, I; Viggiano, M P
2018-04-01
The positivity effect in the elderly consists of an attentional preference for positive information as well as avoidance of negative information. Extant theories predict that the positivity effect depends either on controlled attentional processes (socio-emotional selectivity theory) or on an automatic gating selection mechanism (dynamic integration theory). This study examined the role of automatic and controlled attention in the positivity effect. Two dot-probe tasks (with stimulus durations of 100 ms and 500 ms, respectively) were employed to compare the attentional bias of 35 elderly people to that of 35 young adults. The stimuli were expressive faces displaying neutral, disgusted, fearful, and happy expressions. In comparison to young people, the elderly allocated more attention to happy faces at 100 ms and tended to avoid fearful faces at 500 ms. The findings are not predicted by either theory taken alone, but support the hypothesis that the positivity effect in the elderly is driven by two different processes: an automatic attention bias toward positive stimuli, and a controlled mechanism that diverts attention away from negative stimuli. Copyright © 2018 Elsevier B.V. All rights reserved.
Rivolo, Simone; Nagel, Eike; Smith, Nicolas P; Lee, Jack
2014-01-01
Coronary wave intensity analysis (cWIA) is a technique capable of separating the effects of proximal arterial haemodynamics from those of cardiac mechanics. The ability of cWIA to establish a mechanistic link between coronary haemodynamic measurements and the underlying pathophysiology has been widely demonstrated, and the prognostic value of a cWIA-derived metric has recently been proved. However, the clinical application of cWIA has been hindered by its strong operator dependence, mainly ascribable to the sensitivity of cWIA-derived indices to the pre-processing parameters. Specifically, as recently demonstrated, the cWIA-derived metrics are strongly sensitive to the Savitzky-Golay (S-G) filter typically used to smooth the acquired traces. This is mainly due to the inability of the S-G filter to deal with the different timescale features present in the measured waveforms. We therefore propose an adaptive S-G algorithm that automatically selects the optimal filter parameters pointwise. The accuracy of the new algorithm is assessed against a cWIA gold standard, provided by a newly developed in-silico cWIA modelling framework, when physiological noise is added to the simulated traces. The adaptive S-G algorithm, when used to automatically select the polynomial degree of the S-G filter, provides satisfactory results, with ≤ 10% error for all metrics at all noise levels tested. The newly proposed method therefore makes cWIA fully automatic and operator-independent, opening the possibility of multi-centre trials.
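A simplified stand-in for the adaptive S-G idea: several candidate polynomial orders are precomputed and, at each sample, the lowest order whose local residual is consistent with an estimated noise level is kept. The window length, candidate orders, and noise rule are assumptions, not the authors' algorithm.

    import numpy as np
    from scipy.signal import savgol_filter

    def adaptive_savgol(x, window=11, orders=(2, 3, 4, 5)):
        x = np.asarray(x, dtype=float)
        candidates = [savgol_filter(x, window, k) for k in orders]
        sigma = np.std(np.diff(x)) / np.sqrt(2)      # crude noise-level estimate
        half = window // 2
        y = np.empty_like(x)
        for i in range(len(x)):
            lo, hi = max(0, i - half), min(len(x), i + half + 1)
            y[i] = candidates[-1][i]                 # fallback: highest order
            for c in candidates:                     # lowest adequate order wins
                if np.mean(np.abs(c[lo:hi] - x[lo:hi])) <= 1.5 * sigma:
                    y[i] = c[i]
                    break
        return y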
Automatic location of L/H transition times for physical studies with a large statistical basis
NASA Astrophysics Data System (ADS)
González, S.; Vega, J.; Murari, A.; Pereira, A.; Dormido-Canto, S.; Ramírez, J. M.; contributors, JET-EFDA
2012-06-01
Completely automatic techniques to estimate and validate L/H transition times can be essential in L/H transition analyses. The generation of databases with hundreds of transition times and without human intervention is an important step towards (a) L/H transition physics analysis, (b) validation of L/H theoretical models and (c) creation of L/H scaling laws. An entirely unattended methodology is presented in this paper to build large databases of transition times in JET from time series data. The proposed technique has been applied to a dataset of 551 JET discharges between campaigns C21 and C26. For discharges that show a clear signature in the time series, the transition is located using the localization properties of the wavelet transform; this prediction is accurate, with an uncertainty interval of ±3.2 ms. Discharges without a clear pattern in the time series are handled by an L/H mode classifier trained on the discharges with a clear signature; in this case, the estimation error has a distribution with mean 27.9 ms and standard deviation 37.62 ms. Two different regression methods have been applied to the measurements acquired at the transition times identified by the automatic system. The obtained scaling laws for the threshold power are not significantly different from those obtained using data at transition times determined manually by experts. The automatic methods make it possible to perform physical studies with a large number of discharges, showing, for example, that there are statistically different types of transitions characterized by different scaling laws.
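The locating property of the wavelet transform can be illustrated with a Haar-style step detector: the response of a two-sided moving-average kernel peaks where the signal's mean level changes abruptly. This is a minimal sketch, not the JET implementation.

    import numpy as np

    def haar_transition_time(signal, scale=16):
        # Haar-like kernel: +1 over `scale` samples, -1 over the next `scale` samples.
        kernel = np.concatenate([np.ones(scale), -np.ones(scale)]) / scale
        response = np.convolve(signal, kernel, mode="same")
        # The magnitude of the response peaks at an abrupt change in mean level.
        return int(np.abs(response).argmax())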
National Earthquake Information Center Seismic Event Detections on Multiple Scales
NASA Astrophysics Data System (ADS)
Patton, J.; Yeck, W. L.; Benz, H.; Earle, P. S.; Soto-Cordero, L.; Johnson, C. E.
2017-12-01
The U.S. Geological Survey National Earthquake Information Center (NEIC) monitors seismicity on local, regional, and global scales using automatic picks from more than 2,000 near-real time seismic stations. This presents unique challenges in automated event detection due to the high variability in data quality, network geometries and density, and distance-dependent variability in observed seismic signals. To lower the overall detection threshold while minimizing false detection rates, NEIC has begun to test the incorporation of new detection and picking algorithms, including multiband (Lomax et al., 2012) and kurtosis (Baillard et al., 2014) pickers, and a new Bayesian associator (Glass 3.0). The Glass 3.0 associator allows for simultaneous processing of variably scaled detection grids, each with a unique set of nucleation criteria (e.g., nucleation threshold, minimum associated picks, nucleation phases) to meet specific monitoring goals. We test the efficacy of these new tools on event detection in networks of various scales and geometries, compare our results with previous catalogs, and discuss lessons learned. For example, we find that on local and regional scales, rapid nucleation of small events may require event nucleation with both P and higher-amplitude secondary phases (e.g., S or Lg). We provide examples of the implementation of a scale-independent associator for an induced seismicity sequence (local-scale), a large aftershock sequence (regional-scale), and for monitoring global seismicity. Baillard, C., Crawford, W. C., Ballu, V., Hibert, C., & Mangeney, A. (2014). An automatic kurtosis-based P- and S-phase picker designed for local seismic networks. Bulletin of the Seismological Society of America, 104(1), 394-409. Lomax, A., Satriano, C., & Vassallo, M. (2012). Automatic picker developments and optimization: FilterPicker - a robust, broadband picker for real-time seismic monitoring and earthquake early-warning, Seism. Res. Lett., 83, 531-540, doi: 10.1785/gssrl.83.3.531.
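In the spirit of the kurtosis picker cited above (Baillard et al., 2014), a sliding-window kurtosis rises sharply at a phase onset; the minimal sketch below treats the steepest kurtosis increase as the pick, with the window length as an assumption.

    import numpy as np
    from scipy.stats import kurtosis

    def kurtosis_pick(trace, window=200):
        # Kurtosis of each sliding window; impulsive energy entering the window
        # makes the amplitude distribution heavy-tailed, so kurtosis jumps.
        k = np.array([kurtosis(trace[i:i + window]) for i in range(len(trace) - window)])
        # Pick at the steepest kurtosis increase, offset back to trace coordinates.
        return int(np.diff(k).argmax()) + window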
Mujtaba, Ghulam; Shuib, Liyana; Raj, Ram Gopal; Rajandram, Retnagowri; Shaikh, Khairunisa; Al-Garadi, Mohammed Ali
2017-01-01
Objectives Widespread implementation of electronic databases has improved the accessibility of plaintext clinical information for supplementary use. Numerous machine learning techniques, such as supervised machine learning approaches or ontology-based approaches, have been employed to obtain useful information from plaintext clinical data. This study proposes an automatic multi-class classification system to predict accident-related causes of death from plaintext autopsy reports through expert-driven feature selection with supervised automatic text classification decision models. Methods Accident-related autopsy reports were obtained from one of the largest hospitals in Kuala Lumpur. These reports belong to nine different accident-related causes of death. A master feature vector was prepared by extracting unigram features with lexical categorization from the collected autopsy reports. This master feature vector was used to detect the cause of death [according to the International Classification of Diseases version 10 (ICD-10)] through five automated feature selection schemes, the proposed expert-driven approach, five feature subset sizes, and five machine learning classifiers. Model performance was evaluated using precisionM, recallM, F-measureM, accuracy, and area under the ROC curve. Four baselines were used to compare the results with the proposed system. Results Random forest and J48 decision models parameterized using expert-driven feature selection yielded the highest evaluation measures (approaching 85% to 90% for most metrics) with a feature subset size of 30. The proposed system also showed an improvement of approximately 14% to 16% in overall accuracy compared with the existing techniques and four baselines. Conclusion The proposed system is feasible and practical for automatic classification of ICD-10 cause of death from autopsy reports. It assists pathologists in accurately and rapidly determining the underlying cause of death based on autopsy findings. Furthermore, the proposed expert-driven feature selection approach and the findings are generally applicable to other kinds of plaintext clinical reports. PMID:28166263
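A sketch of the overall pipeline under stated assumptions: unigram features, a top-k filter (chi-squared here, standing in for the expert-driven selection; the paper's best subset size was 30), and a random forest. The two toy reports and labels are hypothetical placeholders, included only so the snippet runs end to end.

    from sklearn.pipeline import Pipeline
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.feature_selection import SelectKBest, chi2
    from sklearn.ensemble import RandomForestClassifier

    clf = Pipeline([
        ("unigrams", CountVectorizer(ngram_range=(1, 1))),  # unigram master feature vector
        ("select", SelectKBest(chi2, k=5)),                 # paper's best subset size was 30
        ("forest", RandomForestClassifier(n_estimators=100)),
    ])

    # Hypothetical mini-corpus of autopsy-report snippets and ICD-10-style labels.
    reports = ["blunt force trauma to the head", "drowning with water in lungs"]
    labels = ["transport accident", "accidental drowning"]
    clf.fit(reports, labels)
    print(clf.predict(["water found in the lungs"]))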
Mutual information-based facial expression recognition
NASA Astrophysics Data System (ADS)
Hazar, Mliki; Hammami, Mohamed; Hanêne, Ben-Abdallah
2013-12-01
This paper introduces a novel low-computation discriminative-region representation for the expression analysis task. The proposed approach relies on studies in psychology showing that most of the regions descriptive of, and responsible for, facial expression are located around certain face parts. The contribution of this work lies in a new approach that supports automatic facial expression recognition based on automatic region selection. The region selection step, performed using the Mutual Information (MI) technique, aims to select the descriptive regions responsible for facial expression. For facial feature extraction, we applied Local Binary Patterns (LBP) to the gradient image to encode salient micro-patterns of facial expressions. Experimental studies show that using discriminative regions provides better results than using the whole face while reducing the feature vector dimension.
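A minimal sketch of MI-based region selection, assuming region_features is a list of per-region feature matrices (e.g., flattened LBP histograms, one matrix of shape n_samples x n_dims per candidate face region) and labels holds the expression classes; the ranking rule is an illustrative assumption.

    import numpy as np
    from sklearn.feature_selection import mutual_info_classif

    def select_regions(region_features, labels, n_regions=5):
        # Score each candidate region by the mean mutual information between
        # its feature dimensions and the expression label.
        mi = np.array([mutual_info_classif(f, labels).mean() for f in region_features])
        # Keep the indices of the most discriminative regions.
        return np.argsort(mi)[::-1][:n_regions]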
NASA Technical Reports Server (NTRS)
Spirkovska, Liljana (Inventor)
2006-01-01
Method and system for automatically displaying, visually and/or audibly and/or by an audible alarm signal, relevant weather data for an identified aircraft pilot, when each of a selected subset of measured or estimated aviation situation parameters, corresponding to a given aviation situation, has a value lying in a selected range. Each range for a particular pilot may be a default range, may be entered by the pilot and/or may be automatically determined from experience and may be subsequently edited by the pilot to change a range and to add or delete parameters describing a situation for which a display should be provided. The pilot can also verbally activate an audible display or visual display of selected information by verbal entry of a first command or a second command, respectively, that specifies the information required.
Kolbl, Sabina; Paloczi, Attila; Panjan, Jože; Stres, Blaž
2014-02-01
The primary aim of the study was to develop and validate an in-house upscale of the Automatic Methane Potential Test System II for studying real-time inocula and real-scale substrates in batch, codigestion and enzyme-enhanced hydrolysis experiments, in addition to semi-continuous operation of the developed equipment and experiments testing inoculum functional quality. The successful upscale to 5 L enabled comparison of different process configurations with shorter preparation times, acceptable accuracy, and the high throughput needed for industrial decision making. The adoption of the same scales, equipment and methodologies in batch and semi-continuous tests mirroring those at full-scale biogas plants resulted in matching methane yields between the two laboratory tests and full scale, confirming the increased decision-making value of the approach for industrial operations. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Postadjian, T.; Le Bris, A.; Sahbi, H.; Mallet, C.
2017-05-01
Semantic classification is a core remote sensing task, as it provides the fundamental input for land-cover map generation. The recent literature has shown the superior performance of deep convolutional neural networks (DCNN) for many classification tasks, including the automatic analysis of Very High Spatial Resolution (VHR) geospatial images. Most recent initiatives have focused on very high discrimination capacity combined with accurate object boundary retrieval; current architectures are therefore well tailored to urban scenes over restricted areas but are not designed for large-scale use. This paper presents an end-to-end automatic processing chain, based on DCNNs, that aims at large-scale classification of VHR satellite images (here SPOT 6/7). Since this work assesses, through various experiments, the potential of DCNNs for country-scale VHR land-cover map generation, a simple yet effective architecture is proposed that efficiently discriminates the main classes of interest (namely buildings, roads, water, crops, and vegetated areas) by exploiting existing VHR land-cover maps for training.
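A toy sketch of a small fully convolutional classifier for per-pixel labeling of the five classes; the layer sizes, band count, and framework choice (PyTorch) are illustrative assumptions, not the architecture of the paper.

    import torch
    import torch.nn as nn

    # Per-pixel classification of 5 land-cover classes from a 4-band patch
    # (e.g. R, G, B, NIR as delivered by SPOT 6/7).
    model = nn.Sequential(
        nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 5, 1),                     # 1x1 conv -> per-class scores
    )
    scores = model(torch.randn(1, 4, 256, 256))  # logits of shape (1, 5, 256, 256)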
Su, Jung-Jeng; Huang, Jeng-Fang; Wang, Yi-Lei; Hong, Yu-Ya
2018-06-15
The objective of this study was to solve water pollution problems related to duck house wastewater by developing a novel treatment technology. A pilot-scale sequencing batch reactor (SBR) system using different hydraulic retention times (HRTs) for treating duck house wastewater was developed and applied. Experimental results showed that the removal efficiency of chemical oxygen demand from untreated duck house wastewater was 98.4, 98.4, 87.8, and 72.5% for HRTs of 5, 3, 1, and 0.5 d, respectively, and the removal efficiency of biochemical oxygen demand was 99.6, 99.3, 90.4, and 58.0%, respectively. The pilot-scale SBR system was effective and deemed capable of treating duck house wastewater. Applying an automatic SBR system on site is feasible, based on a previous case study of farm-scale automatic SBR systems for piggery wastewater treatment.
Large-scale Rectangular Ruler Automated Verification Device
NASA Astrophysics Data System (ADS)
Chen, Hao; Chang, Luping; Xing, Minjian; Xie, Xie
2018-03-01
This paper introduces a large-scale rectangular ruler automated verification device consisting of a photoelectric autocollimator, a self-designed mechanical drive car, and an automatic data acquisition system. The mechanical design covers the optical axis, the drive, the fixture, and the wheels. The control system design covers hardware and software: the hardware is based on a single-chip microcontroller, and the software implements the photoelectric autocollimator measurement and the automatic data acquisition process. The device acquires perpendicularity measurement data automatically, and its reliability is verified by experimental comparison. The results meet the requirements of the right-angle test procedure.
Colagiorgio, P; Romano, F; Sardi, F; Moraschini, M; Sozzi, A; Bejor, M; Ricevuti, G; Buizza, A; Ramat, S
2014-01-01
The problem of correct fall risk assessment is becoming more and more critical with the ageing of the population. In spite of available approaches allowing a quantitative analysis of the human movement control system's performance, the clinical assessment and diagnostic approach to fall risk still relies mostly on non-quantitative exams, such as clinical scales. This work documents our current effort to develop a novel method to assess balance control abilities through a system implementing automatic evaluation of exercises drawn from balance assessment scales. Our aim is to overcome the classical limits of these scales, i.e. limited granularity and poor inter-/intra-examiner reliability, and to obtain objective scores and more detailed information for predicting fall risk. We used a Microsoft Kinect to record subjects' movements while they performed challenging exercises drawn from clinical balance scales. We then computed a set of parameters quantifying the execution of the exercises and fed them to a supervised classifier to perform a classification based on the clinical score. We obtained good accuracy (~82%) and especially high sensitivity (~83%).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jamieson, Kevin; Davis, IV, Warren L.
Active learning methods automatically adapt data collection by selecting the most informative samples in order to accelerate machine learning. Because of this, real-world testing and comparing of active learning algorithms requires collecting new datasets (adaptively), rather than simply applying algorithms to benchmark datasets, as is the norm in (passive) machine learning research. To facilitate the development, testing and deployment of active learning for real applications, we have built an open-source software system for large-scale active learning research and experimentation. The system, called NEXT, provides a unique platform for real-world, reproducible active learning research. This paper details the challenges of building the system and demonstrates its capabilities with several experiments. The results show how experimentation can help expose strengths and weaknesses of active learning algorithms, in sometimes unexpected and enlightening ways.
NASA Technical Reports Server (NTRS)
Davis, Robert P.; Underwood, Ian M.
1987-01-01
The use of database management systems (DBMS) and AI to minimize human involvement in the planning of optical navigation pictures for interplanetary space probes is discussed, with application to the Galileo mission. Parameters characterizing the desirability of candidate pictures, and the program generating them, are described. How these parameters automatically build picture records in a database, and the definition of the database structure, are then discussed. The various rules, priorities, and constraints used in selecting pictures are also described. An example is provided of an expert system, written in Prolog, for automatically performing the selection process.
Automatically operable self-leveling load table
NASA Technical Reports Server (NTRS)
Burch, J. L. (Inventor)
1974-01-01
A self-leveling load table is described which is automatically maintained level by selectively opening and closing solenoid valves for inserting and removing air from chambers under the table. The table is floated in a fluid by nine air chambers beneath the top of the table. These chambers are open at the bottom and four oppositely located chambers are used for leveling the table by having the air increased or decreased by means of a flexible hose. Air bearing pendulums are used for selectively energizing solenoid valves which either apply pressurized air to the chamber or evacuate air from the chamber by means of a vacuum source.
MASGOMAS PROJECT: a new automatic tool for cluster search in IR photometric surveys
NASA Astrophysics Data System (ADS)
Rübke, K.; Herrero, A.; Borissova, J.; Ramirez-Alegria, S.; García, M.; Marin-Franch, A.
2015-05-01
The Milky Way is expected to contain a large number of young massive (a few thousand solar masses) stellar clusters, born in dense cores of gas and dust. Yet their known number remains small. We have started a programme to search for such clusters, MASGOMAS (MAssive Stars in Galactic Obscured MAssive clusterS). Initially, we selected promising candidates by visual inspection of infrared images. In a second phase of the project we presented a semi-automatic method to search for obscured massive clusters, which resulted in the identification of new massive clusters such as MASGOMAS-1 (more than 10,000 solar masses) and MASGOMAS-4 (a double-cored association of about 3,000 solar masses). We have now developed a new automatic tool for MASGOMAS that allows the identification of a large number of massive cluster candidates from the 2MASS and VVV catalogues. Cluster candidates fulfilling criteria appropriate for massive OB stars are thus selected in an efficient and objective way. We present the results from this tool and the observations of the first selected cluster, and discuss the implications for Milky Way structure.
An Automatic Prediction of Epileptic Seizures Using Cloud Computing and Wireless Sensor Networks.
Sareen, Sanjay; Sood, Sandeep K; Gupta, Sunil Kumar
2016-11-01
Epilepsy is one of the most common neurological disorders, characterized by the spontaneous and unforeseeable occurrence of seizures. Automatic seizure prediction can protect patients from accidents and save lives. In this article, we propose a mobile-based framework that automatically predicts seizures using the information contained in electroencephalography (EEG) signals. Wireless sensor technology is used to capture the EEG signals of patients, and cloud-based services collect and analyze the EEG data from the patient's mobile phone. Features are extracted from the EEG signal using the fast Walsh-Hadamard transform (FWHT), and Higher Order Spectral Analysis (HOSA) is applied to the FWHT coefficients to select the feature set relevant to the normal, preictal, and ictal states of seizure. We subsequently exploit the selected features as input to a k-means classifier to detect epileptic seizure states in a reasonable time. The performance of the proposed model was tested on the Amazon EC2 cloud and compared in terms of execution time and accuracy. The findings show that, with the selected HOS-based features, we were able to achieve a classification accuracy of 94.6%.
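The FWHT itself is compact enough to sketch; the classic in-place butterfly below assumes the EEG segment length is a power of two (the HOSA-based feature selection and the k-means step are not shown).

    import numpy as np

    def fwht(x):
        # Fast Walsh-Hadamard transform; input length must be a power of two.
        x = np.asarray(x, dtype=float).copy()
        h = 1
        while h < len(x):
            for i in range(0, len(x), 2 * h):
                a, b = x[i:i + h].copy(), x[i + h:i + 2 * h].copy()
                x[i:i + h], x[i + h:i + 2 * h] = a + b, a - b  # butterfly step
            h *= 2
        return x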
Wang, Lin; Liu, Simin; Niu, Tianhua; Xu, Xin
2005-03-18
Single nucleotide polymorphisms (SNPs) provide an important tool in pinpointing susceptibility genes for complex diseases and in unveiling human molecular evolution. Selection and retrieval of an optimal SNP set from publicly available databases have emerged as the foremost bottlenecks in designing large-scale linkage disequilibrium studies, particularly in case-control settings. We describe the architectural structure and implementations of a novel software program, SNPHunter, which allows for both ad hoc-mode and batch-mode SNP search, automatic SNP filtering, and retrieval of SNP data, including physical position, function class, flanking sequences at user-defined lengths, and heterozygosity from NCBI dbSNP. The SNP data extracted from dbSNP via SNPHunter can be exported and saved in plain text format for further down-stream analyses. As an illustration, we applied SNPHunter for selecting SNPs for 10 major candidate genes for type 2 diabetes, including CAPN10, FABP4, IL6, NOS3, PPARG, TNF, UCP2, CRP, ESR1, and AR. SNPHunter constitutes an efficient and user-friendly tool for SNP screening, selection, and acquisition. The executable and user's manual are available at http://www.hsph.harvard.edu/ppg/software.htm
Zerouali, Younes; Jemel, Boutheina; Godbout, Roger
2010-01-13
The link between decreased levels of attention and total sleep deprivation is well known, but the respective contributions of slow wave sleep (SWS) and rapid eye movement (REM) sleep are still largely unknown. The aim of this study was to characterize the effects of sleep deprivation during the SWS phase (i.e., early night sleep) and the REM phase (i.e., late night sleep) on tasks that tap automatic and selective attention; these two forms of attention were indexed respectively by the "mismatch negativity" (MMN) and "negative difference" (Nd) event-related potential (ERP) difference waves. Ten young adult participants followed a three-night sleep protocol: each received one night of full sleep (F), one night of sleep deprivation during the first half of the night (H1), and one night of sleep deprivation during the second half of the night (H2). MMN and Nd were recorded on the morning following each night during two auditory oddball tasks that tapped automatic and selective attention. The effect of the sleep deprivation condition was assessed using ERP amplitude measures and the standardized low-resolution electromagnetic tomography (sLORETA) method. ERP results revealed significant MMN amplitude reduction over frontal and temporal recording areas following the H2 night compared to F and H1, indicating reductions in levels of automatic attention. In addition, Nd amplitude over the parietal recording area was significantly increased following the H2 night compared to F and H1. sLORETA findings show significant changes from the F to the H2 night in frontal cortex activity, decreasing during the automatic attention task but increasing during the selective attention task. No significant change in brain activity was observed after the H1 night. The restoration of attention processes is thus mainly achieved during REM sleep, which confirms results from previous studies in rat models. The anterior cortex seems to be more sensitive to sleep loss, while the parietal cortex acts as a compensatory resource to restore cognitive performance in a task context.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reyhan, M; Yue, N
Purpose: To validate an automated image processing algorithm designed to detect the center of radiochromic film used for in vivo film dosimetry against the current gold standard of manual selection. Methods: An image processing algorithm was developed to automatically select the region of interest (ROI) in *.tiff images that contain multiple pieces of radiochromic film (0.5×1.3 cm²). After a user has linked a calibration file to the processing algorithm and selected a *.tiff file for processing, an ROI is automatically detected for all films by a combination of thresholding and erosion, which removes edges and any additional markings for orientation. Calibration is applied to the mean pixel values from the ROIs and a *.tiff image is output displaying the original image with an overlay of the ROIs and the measured doses. Validation of the algorithm was determined by comparing in vivo dose determined using the current gold standard (manually drawn ROIs) versus automated ROIs for n=420 scanned films. Bland-Altman analysis, paired t-test, and linear regression were performed to demonstrate agreement between the processes. Results: The measured doses ranged from 0.2 to 886.6 cGy. Bland-Altman analysis of the two techniques (automatic minus manual) revealed a bias of -0.28 cGy and a 95% confidence interval of (5.5 cGy, -6.1 cGy). These values demonstrate excellent agreement between the two techniques. Paired t-test results showed no statistical differences between the two techniques, p=0.98. Linear regression with a forced zero intercept demonstrated that Automatic = 0.997 × Manual, with a Pearson correlation coefficient of 0.999. The minimal differences between the two techniques may be explained by the fact that the hand-drawn ROIs were not identical to the automatically selected ones. The average processing time was 6.7 seconds in Matlab on an Intel Core2 Duo processor. Conclusion: An automated image processing algorithm has been developed and validated, which will help minimize user interaction and processing time of radiochromic film used for in vivo dosimetry.
Development of automatic body condition scoring using a low-cost 3-dimensional Kinect camera.
Spoliansky, Roii; Edan, Yael; Parmet, Yisrael; Halachmi, Ilan
2016-09-01
Body condition scoring (BCS) is a farm-management tool for estimating dairy cows' energy reserves. Today, BCS is performed manually by experts. This paper presents a 3-dimensional algorithm that provides a topographical understanding of the cow's body to estimate BCS. An automatic BCS system consisting of a Kinect camera (Microsoft Corp., Redmond, WA) triggered by a passive infrared motion detector was designed and implemented. Image processing and regression algorithms were developed and included the following steps: (1) image restoration, the removal of noise; (2) object recognition and separation, identification and separation of the cows; (3) movie and image selection, selection of movies and frames that include the relevant data; (4) image rotation, alignment of the cow parallel to the x-axis; and (5) image cropping and normalization, removal of irrelevant data, setting the image size to 150×200 pixels, and normalizing image values. All steps were performed automatically, including image selection and classification. Fourteen individual features per cow, derived from the cows' topography, were automatically extracted from the movies and from the farm's herd-management records. These features appear to be measurable in a commercial farm. Manual BCS was performed by a trained expert and compared with the output of the training set. A regression model was developed, correlating the features with the manual BCS references. Data were acquired for 4 d, resulting in a database of 422 movies of 101 cows. Movies containing cows' back ends were automatically selected (389 movies). The data were divided into a training set of 81 cows and a test set of 20 cows; both sets included the identical full range of BCS classes. Accuracy tests gave a mean absolute error of 0.26, median absolute error of 0.19, and coefficient of determination of 0.75, with 100% correct classification within 1 step and 91% correct classification within a half step for BCS classes. Results indicated good repeatability, with all standard deviations under 0.33. The algorithm is independent of the background and requires 10 cows for training with approximately 30 movies of 4 s each. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
All-automatic swimmer tracking system based on an optimized scaled composite JTC technique
NASA Astrophysics Data System (ADS)
Benarab, D.; Napoléon, T.; Alfalou, A.; Verney, A.; Hellard, P.
2016-04-01
In this paper, an all-automatic optimized-JTC-based swimmer tracking system is proposed and evaluated on a real video database from national and international swimming competitions (French National Championships, Limoges 2015; FINA World Championships, Barcelona 2013 and Kazan 2015). First, the swimming pool is calibrated using the DLT (Direct Linear Transformation) algorithm. DLT calculates the homography matrix given a sufficient set of correspondence points between pixel and metric coordinates, thereby taking into account the dimensions of the swimming pool and the type of the swim. Once the swimming pool is calibrated, we extract the lane. Then we apply a motion detection approach to globally detect the swimmer in this lane. Next, we apply our optimized scaled composite JTC, which consists of creating an adapted input plane that contains the predicted region and the head reference image. The latter is generated using a composite filter of fin images chosen from the database. The dimensions of this reference are scaled according to the ratio between the head's dimension and the width of the swimming lane. Finally, the proposed approach improves on the performance of our previous tracking method by adding a detection module, achieving an all-automatic swimmer tracking system.
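A minimal sketch of the DLT calibration step: with at least four pixel-to-metric correspondences (e.g., lane-line intersections of known pool coordinates), the homography is the null vector of the stacked constraint matrix, recovered here via SVD.

    import numpy as np

    def dlt_homography(px, metric):
        # Estimate the 3x3 homography H mapping pixel coords to pool coords from
        # >= 4 correspondences px[i] <-> metric[i], via the DLT algorithm.
        A = []
        for (x, y), (X, Y) in zip(px, metric):
            A.append([-x, -y, -1, 0, 0, 0, x * X, y * X, X])
            A.append([0, 0, 0, -x, -y, -1, x * Y, y * Y, Y])
        _, _, vt = np.linalg.svd(np.array(A, dtype=float))
        H = vt[-1].reshape(3, 3)      # null vector = homography up to scale
        return H / H[2, 2]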
Principal visual word discovery for automatic license plate detection.
Zhou, Wengang; Li, Houqiang; Lu, Yijuan; Tian, Qi
2012-09-01
License plate detection is widely considered a solved problem, with many systems already in operation. However, existing algorithms and systems work well only under controlled conditions. Many challenges remain for license plate detection in an open environment, such as varying observation angles, background clutter, scale changes, multiple plates, and uneven illumination. In this paper, we propose a novel scheme to automatically locate license plates by principal visual word (PVW) discovery and local feature matching. Observing that characters in different license plates are duplicates of each other, we bring in the idea of using the bag-of-words (BoW) model popularly applied in partial-duplicate image search. Unlike the classic BoW model, for each plate character we automatically discover the PVW characterized by its geometric context. Given a new image, the license plates are extracted by matching local features with the PVW. Besides license plate detection, our approach can also be extended to the detection of logos and trademarks. Owing to the invariance of the scale-invariant feature transform (SIFT) feature, our method adaptively deals with various changes in the license plates, such as rotation, scaling, and illumination. Promising results of the proposed approach are demonstrated in an experimental study of license plate detection.
Automatic humidification system to support the assessment of food drying processes
NASA Astrophysics Data System (ADS)
Ortiz Hernández, B. D.; Carreño Olejua, A. R.; Castellanos Olarte, J. M.
2016-07-01
This work presents the main features of an automatic humidification system that supplies drying air matching the environmental conditions of different climate zones. This conditioned air is then used to assess the drying process of different agro-industrial products at the Automation and Control for Agro-industrial Processes Laboratory of the Pontifical Bolivarian University of Bucaramanga, Colombia. The automatic system allows creating and improving control strategies to supply drying air under specified conditions of temperature and humidity. The development of automatic routines to control and acquire real-time data was made possible by robust control systems and suitable instrumentation. The signals are read into controller memory, where they are scaled and transferred to a memory unit; the data can then be accessed via the IP address to perform supervision tasks. One important characteristic of this automatic system is the Dynamic Data Exchange (DDE) server, which allows direct communication between the control unit and the computer used to build experimental curves.
INITIAL APPLICATION OF THE ADAPTIVE GRID AIR POLLUTION MODEL
The paper discusses an adaptive-grid algorithm used in air pollution models. The algorithm reduces errors related to insufficient grid resolution by automatically refining the grid scales in regions of high interest, while coarsening the grid scales in other parts of the domain.
Gould van Praag, Cassandra D; Garfinkel, Sarah; Ward, Jamie; Bor, Daniel; Seth, Anil K
2016-07-29
In grapheme-colour synaesthesia (GCS), the presentation of letters or numbers induces an additional 'concurrent' experience of colour. Early functional MRI (fMRI) investigations of GCS reported activation in colour-selective area V4 during the concurrent experience. However, others have failed to replicate this key finding. We reasoned that individual differences in synaesthetic phenomenology might explain this inconsistency in the literature. To test this hypothesis, we examined fMRI BOLD responses in a group of grapheme-colour synaesthetes (n=20) and matched controls (n=20) while characterising the individual phenomenology of the synaesthetes along dimensions of 'automaticity' and 'localisation'. We used an independent functional localiser to identify colour-selective areas in both groups. Activations in these areas were then assessed during achromatic synaesthesia-inducing and non-inducing conditions; we also explored whole brain activations, where we sought to replicate the existing literature regarding synaesthesia effects. Controls showed no significant activations in the contrast of inducing > non-inducing synaesthetic stimuli, in colour-selective ROIs or at the whole brain level. In the synaesthete group, we correlated activation within colour-selective ROIs with individual differences in phenomenology using the Coloured Letters and Numbers (CLaN) questionnaire which measures, amongst other attributes, the subjective automaticity/attention in synaesthetic concurrents, and their spatial localisation. Supporting our hypothesis, we found significant correlations between individual measures of synaesthetic phenomenology and BOLD responses in colour-selective areas, when contrasting inducing against non-inducing stimuli. Specifically, left-hemisphere colour area responses were stronger for synaesthetes scoring high on phenomenological localisation and automaticity/attention, while right-hemisphere colour area responses showed a relationship with localisation only. In exploratory whole brain analyses, the BOLD response within several other areas was also correlated with these phenomenological factors, including the intra-parietal sulcus, insula, precentral and supplementary motor areas. Our findings reveal a network of regions underlying synaesthetic phenomenology and they help reconcile the diversity of previous results regarding colour-selective BOLD responses during synaesthesia, by establishing a bridge between neural responses and individual synaesthetic phenomenology. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhongqin, G.; Chen, Y.
2017-12-01
Abstract Quickly and automatically identifying the spatial distribution of landslides is essential for landslide hazard prevention, mitigation, and assessment. It remains a challenging job owing to the complicated characteristics and vague boundaries of landslide areas in imagery. High resolution remote sensing images have multiple scales, complex spatial distributions, and abundant features; object-oriented image classification methods can make full use of this information and thus effectively detect landslides after a hazard has happened. In this research we present a new semi-supervised workflow that takes advantage of recent object-oriented image analysis and machine learning algorithms to quickly locate landslides of different origins in areas of southwest China. Besides the usual sequence of image segmentation, feature selection, object classification, and error testing, this workflow combines feature selection with classifier selection. The features utilized include normalized difference vegetation index (NDVI) change, textural features derived from gray level co-occurrence matrices (GLCM), spectral features, etc. The improvements in this study show that the algorithm removes significant redundant features and makes full use of the classifiers. These improvements lead to higher accuracy in delineating the shapes of landslides in high resolution remote sensing images, with flexibility for different kinds of landslides.
Arkill, Kenton P.; Mantell, Judith M.; Plant, Simon R.; Verkade, Paul; Palmer, Richard E.
2015-01-01
A three-dimensional reconstruction of a nano-scale aqueous object can be achieved by taking a series of transmission electron micrographs tilted at different angles in vitreous ice: cryo-Transmission Electron Tomography. Presented here is a novel method of fine alignment for the tilt series. Size-selected gold clusters of ~2.7 nm (Au561 ± 14), ~3.2 nm (Au923 ± 22), and ~4.3 nm (Au2057 ± 45) in diameter were deposited onto separate graphene oxide films overlaying holes on amorphous carbon grids. After plunge freezing and subsequent transfer to cryo-Transmission Electron Tomography, the resulting tomograms have excellent (de-)focus and alignment properties during automatic acquisition. Fine alignment is accurate when the evenly distributed 3.2 nm gold particles are used as fiducial markers, demonstrated with a reconstruction of a tobacco mosaic virus. Using a graphene oxide film means the fiducial markers are not interfering with the ice bound sample and that automated collection is consistent. The use of pre-deposited size-selected clusters means there is no aggregation and a user defined concentration. The size-selected clusters are mono-dispersed and can be produced in a wide size range including 2–5 nm in diameter. The use of size-selected clusters on a graphene oxide films represents a significant technical advance for 3D cryo-electron microscopy. PMID:25783049
ERIC Educational Resources Information Center
Muris, Peter; Mayer, Birgit; den Adel, Madelon; Roos, Tamara; van Wamelen, Julie
2009-01-01
The purpose of the present study was to evaluate negative automatic thoughts and anxiety control as predictors of change produced by cognitive-behavioral treatment of youths with anxiety disorders. Forty-five high-anxious children aged between 9 and 12 years who were selected from the primary school population, received a standardized CBT…
AFETR Instrumentation Handbook
1971-09-01
of time. From this, vehicle velocity and acceleration can be computed. LOCATION Three Askanias are mobile and may be located at selected universal... Being mobile, these cinetheodolites may be placed for optimum launch coverage. Preprogrammed focusing is provided for automatic focus from 2000 and 8000... console trailer. IR (lead sulfide sensor) Automatic Tracking System with 1 to 20 miles range. Elevation range: -10 deg to +90 deg. Azimuth range: 350
Automatic assessment of voice quality according to the GRBAS scale.
Sáenz-Lechón, Nicolás; Godino-Llorente, Juan I; Osma-Ruiz, Víctor; Blanco-Velasco, Manuel; Cruz-Roldán, Fernando
2006-01-01
Nowadays, the most widespread techniques for measuring voice quality are based on perceptual evaluation by well-trained professionals. The GRBAS scale is a widely used method for perceptual evaluation of voice quality; it is standard in Japan and there is increasing interest in it in both Europe and the United States. However, this technique requires well-trained experts and depends on the evaluator's expertise and psycho-physical state, and great variability is observed between the assessments of different evaluators. An objective method providing such a measurement of voice quality would therefore be very valuable. In this paper, automatic assessment of voice quality is addressed by means of short-term mel-cepstral parameters (MFCC) and learning vector quantization (LVQ) in a pattern recognition stage. Results show that this approach provides acceptable results for this purpose, with accuracy around 65% in the best case.
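A sketch of the two stages under stated assumptions: averaged MFCCs as the short-term features and a minimal LVQ1 with one prototype per grade. The function names, the 13-coefficient choice, and the training constants are illustrative, not the authors' configuration.

    import numpy as np
    import librosa

    def grbas_features(wav_path):
        # Short-term MFCCs averaged over the sustained vowel recording.
        y, sr = librosa.load(wav_path, sr=None)
        return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)

    def lvq1_train(X, y, n_epochs=30, lr=0.05):
        # Minimal LVQ1: one prototype per GRBAS grade (0-3), nudged toward
        # same-class samples and away from different-class ones.
        classes = np.unique(y)
        protos = np.array([X[y == c].mean(axis=0) for c in classes])
        for _ in range(n_epochs):
            for xi, yi in zip(X, y):
                j = np.linalg.norm(protos - xi, axis=1).argmin()  # nearest prototype
                step = lr if classes[j] == yi else -lr
                protos[j] += step * (xi - protos[j])
        return classes, protos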
McCabe, David P; Roediger, Henry L; Karpicke, Jeffrey D
2011-04-01
Dual-process theories of retrieval suggest that controlled and automatic processing contribute to memory performance. Free recall tests are often considered pure measures of recollection, assessing only the controlled process. We report two experiments demonstrating that automatic processes also influence free recall. Experiment 1 used inclusion and exclusion tasks to estimate recollection and automaticity in free recall, adopting a new variant of the process dissociation procedure. Dividing attention during study selectively reduced the recollection estimate but did not affect the automatic component. In Experiment 2, we replicated the results of Experiment 1, and subjects additionally reported remember-know-guess judgments during recall in the inclusion condition. In the latter task, dividing attention during study reduced remember judgments for studied items, but know responses were unaffected. Results from both methods indicated that free recall is partly driven by automatic processes. Thus, we conclude that retrieval in free recall tests is not driven solely by conscious recollection (or remembering) but also by automatic influences of the same sort believed to drive priming on implicit memory tests. Sometimes items come to mind without volition in free recall.
Automatic comparison of striation marks and automatic classification of shoe prints
NASA Astrophysics Data System (ADS)
Geradts, Zeno J.; Keijzer, Jan; Keereweer, Isaac
1995-09-01
A database for toolmarks (named TRAX) and a database for footwear outsole designs (named REBEZO) have been developed on a PC. The databases are filled with video images and administrative data about the toolmarks and the footwear designs. For TRAX, an algorithm for the automatic comparison of the digitized striation patterns has been developed. The algorithm appears to work well for deep and complete striation marks and will be implemented in TRAX. For REBEZO, some effort has been put into the automatic classification of outsole patterns. The algorithm first segments the shoe profile; Fourier features are then selected for the separate elements and classified with a neural network. Future developments will include information on invariant moments of the shape and on rotation angle in the neural network.
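A minimal sketch of striation-pattern comparison, assuming each mark has been reduced to a 1-D depth or intensity profile of equal length: the profiles are z-normalized and the peak normalized cross-correlation over lateral shifts serves as the similarity score. This illustrates the general idea, not TRAX's actual algorithm.

    import numpy as np

    def striation_similarity(profile_a, profile_b, max_shift=50):
        # Z-normalize both profiles so the score is amplitude-invariant.
        a = (profile_a - profile_a.mean()) / profile_a.std()
        b = (profile_b - profile_b.mean()) / profile_b.std()
        best = -1.0
        for s in range(-max_shift, max_shift + 1):   # slide one mark over the other
            if s >= 0:
                x, yv = a[s:], b[:len(b) - s]
            else:
                x, yv = a[:s], b[-s:]
            n = min(len(x), len(yv))
            best = max(best, float(np.dot(x[:n], yv[:n]) / n))
        return best   # close to 1.0 for marks made by the same tool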
Comparison of Acceleration Techniques for Selected Low-Level Bioinformatics Operations
Langenkämper, Daniel; Jakobi, Tobias; Feld, Dustin; Jelonek, Lukas; Goesmann, Alexander; Nattkemper, Tim W.
2016-01-01
Within recent years, clock rates of modern processors have stagnated while the demand for computing power continues to grow. This applies particularly to the fields of life sciences and bioinformatics, where new technologies keep creating rapidly growing piles of raw data at increasing speed. The number of cores per processor has increased in an attempt to compensate for the slight increments of clock rates. This technological shift demands changes in software development, especially in the field of high performance computing, where parallelization techniques are gaining in importance due to the pressing issue of large datasets generated by, e.g., modern genomics. This paper presents an overview of state-of-the-art manual and automatic acceleration techniques and lists applications employing these in different areas of sequence informatics. Furthermore, we provide examples of automatic acceleration of two use cases to show typical problems and gains of transforming a serial application into a parallel one. The paper should aid the reader in choosing a technique for the problem at hand. We compare four state-of-the-art automatic acceleration approaches (OpenMP, PluTo-SICA, PPCG, and OpenACC); their performance as well as their applicability for selected use cases is discussed. While optimizations targeting the CPU worked better in the complex k-mer use case, optimizers for Graphics Processing Units (GPUs) performed better in the matrix multiplication example; GPU performance is only superior beyond a certain problem size, due to data migration overhead. We show that automatic code parallelization is feasible with current compiler software and yields significant increases in execution speed. Automatic optimizers for the CPU are mature and usually require no additional manual adjustment. In contrast, some automatic parallelizers targeting GPUs still lack maturity and are limited to simple statements and structures. PMID:26904094
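Manual parallelization of the k-mer use case can be sketched with a Python process pool; this is only analogous to, not an example of, the compiler-driven OpenMP/PluTo-SICA/PPCG approaches compared in the paper, which parallelize serial C code automatically.

    from collections import Counter
    from multiprocessing import Pool

    def kmers(args):
        # Count k-mers in one sequence (the paper's complex k-mer use case).
        seq, k = args
        return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

    def parallel_kmer_count(sequences, k=8, workers=4):
        # Distribute sequences over worker processes, then merge the partial counts.
        with Pool(workers) as pool:
            counts = pool.map(kmers, [(s, k) for s in sequences])
        total = Counter()
        for c in counts:
            total.update(c)
        return total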
Papageorgiou, Eirini; Nieuwenhuys, Angela; Desloovere, Kaat
2017-01-01
Background: This study aimed to improve the automatic probabilistic classification of joint motion gait patterns in children with cerebral palsy by using the expert knowledge available via a recently developed Delphi-consensus study. To this end, this study applied both Naïve Bayes and Logistic Regression classification with varying degrees of usage of the expert knowledge (expert-defined and discretized features). A database of 356 patients and 1719 gait trials was used to validate the classification performance for eleven joint motions. Hypotheses: Two main hypotheses stated that: (1) joint motion patterns in children with CP, obtained through a Delphi-consensus study, can be automatically classified following a probabilistic approach, with an accuracy similar to clinical expert classification, and (2) the inclusion of clinical expert knowledge in the selection of relevant gait features and the discretization of continuous features increases the performance of automatic probabilistic joint motion classification. Findings: This study provided objective evidence supporting the first hypothesis. Automatic probabilistic gait classification using the expert knowledge available from the Delphi-consensus study resulted in accuracy (91%) similar to that obtained with two expert raters (90%), and higher accuracy than that obtained with non-expert raters (78%). Regarding the second hypothesis, this study demonstrated that the use of more advanced machine learning techniques, such as automatic feature selection and discretization, instead of expert-defined and discretized features can result in slightly higher joint motion classification performance. However, the increase in performance is limited and does not outweigh the additional computational cost and the higher risk of loss of clinical interpretability, which threatens clinical acceptance and applicability. PMID:28570616
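A minimal sketch of the probabilistic classification setup, with placeholder data standing in for the gait features and Delphi-consensus labels (not the authors' pipeline):

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

# Placeholder data: rows = gait trials, columns = expert-defined gait features
# (e.g., joint angles at selected gait-cycle events); y = pattern labels.
X = np.random.rand(1719, 10)
y = np.random.randint(0, 4, 1719)

clf = GaussianNB()                                  # Naive Bayes classifier
accuracy = cross_val_score(clf, X, y, cv=5).mean()  # cross-validated accuracy
```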
Layout compliance for triple patterning lithography: an iterative approach
NASA Astrophysics Data System (ADS)
Yu, Bei; Garreton, Gilda; Pan, David Z.
2014-10-01
As the semiconductor process scales further down, the industry encounters many lithography-related issues. At the 14 nm logic node and beyond, triple patterning lithography (TPL) is one of the most promising techniques for the Metal1 layer and possibly the Via0 layer. Layout decomposition, one of the most challenging problems in TPL, has recently received increasing attention from both industry and academia. Ideally, the decomposer should point out locations in the layout that are not triple-patterning decomposable and therefore require manual intervention by designers. A traditional decomposition flow is an iterative process, where each iteration consists of an automatic layout decomposition step and a manual layout modification task. However, due to the NP-hardness of triple patterning layout decomposition, automatic full-chip layout decomposition requires long computational time, and therefore design closure issues linger in the traditional flow. To address this issue, we present a novel incremental layout decomposition framework to facilitate accelerated iterative decomposition. In the first iteration, our decomposer not only points out all conflicts, but also provides suggestions for fixing them. After the layout modification, instead of solving the full-chip problem from scratch, our decomposer provides a quick solution for a selected portion of the layout. We believe this framework is efficient in terms of performance and designer-friendly.
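At its core, TPL layout decomposition is 3-coloring of the conflict graph, where features closer than the coloring distance must be assigned different masks. A minimal backtracking sketch of that core problem (illustrative only, not the authors' incremental algorithm):

```python
import networkx as nx  # conflict graph: nodes = layout features, edges = spacing conflicts

def three_color(g: nx.Graph):
    """Backtracking 3-coloring; returns a node->mask assignment, or None if
    the layout is not triple-patterning decomposable."""
    nodes = list(g.nodes)
    colors = {}

    def assign(i: int) -> bool:
        if i == len(nodes):
            return True
        node = nodes[i]
        for c in range(3):  # the three exposure masks
            if all(colors.get(nbr) != c for nbr in g.neighbors(node)):
                colors[node] = c
                if assign(i + 1):
                    return True
                del colors[node]
        return False

    return colors if assign(0) else None
```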
Predictors of Mental Health Symptoms, Automatic Thoughts, and Self-Esteem Among University Students.
Hiçdurmaz, Duygu; İnci, Figen; Karahan, Sevilay
2017-01-01
University students are a risk group regarding mental health, and many mental health problems are frequent in this group. Sociodemographic factors, such as level of income, and familial factors, such as the relationship with the father, are reported to be associated with mental health symptoms, automatic thoughts, and self-esteem. There are also interrelations between mental health problems, automatic thoughts, and self-esteem. The extent of the predictive effect of each of these variables on automatic thoughts, self-esteem, and mental health symptoms is not known. We aimed to determine the predictive factors of mental health symptoms, automatic thoughts, and self-esteem in university students. Participants were 530 students enrolled at a university in Turkey during the 2014-2015 academic year. Data were collected using a student information form, the Brief Symptom Inventory, the Automatic Thoughts Questionnaire, and the Rosenberg Self-Esteem Scale. Mental health symptoms, self-esteem, perception of the relationship with the father, and level of income as a student significantly predicted automatic thoughts. Automatic thoughts, mental health symptoms, participation in family decisions, and age had significant predictive effects on self-esteem. Finally, automatic thoughts, self-esteem, age, and perception of the relationship with the father had significant predictive effects on mental health symptoms. The predictive factors revealed in our study provide important information to practitioners and researchers by showing the elements that need to be screened in the mental health of university students and the issues that need to be included in counseling activities.
Development of an automated ultrasonic testing system
NASA Astrophysics Data System (ADS)
Shuxiang, Jiao; Wong, Brian Stephen
2005-04-01
Non-destructive testing is necessary in areas where defects in structures emerge over time due to wear and tear, and structural integrity must be maintained for continued usability. However, manual testing has many limitations: high training cost, a long training procedure, and, worse, inconsistent test results. A prime objective of this project is to develop an automatic non-destructive testing system for the shaft of a wheel axle of a railway carriage. Various methods, such as neural networks, pattern recognition methods and knowledge-based systems, can be used for this artificial intelligence problem. In this paper, a statistical pattern recognition approach, the classification tree, is applied. Before feature selection, a thorough study of the ultrasonic signals produced was carried out. Based on the analysis of the ultrasonic signals, three signal processing methods were developed to enhance them: cross-correlation, zero-phase filtering, and averaging. The aim of this step is to reduce the noise and make the signal character more distinguishable. Four features are selected: (1) the autoregressive model coefficients, (2) the standard deviation, (3) the Pearson correlation, and (4) the dispersion uniformity degree. A classification tree is then created and applied to recognize the peak positions and amplitudes. A search for local maxima is carried out before feature computation; this procedure saves much computation time in real-time testing. Based on this algorithm, a software package called SOFRA was developed to recognize the peaks, calibrate automatically, and test a simulated shaft automatically. The automatic calibration procedure and the automatic shaft testing procedure are developed.
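A hedged sketch of the first two enhancement steps named above, with placeholder band edges and sampling rate (the actual values would depend on the transducer):

```python
import numpy as np
from scipy.signal import butter, filtfilt, correlate

fs = 50e6                                          # assumed sampling rate (Hz)
b, a = butter(4, [1e6, 10e6], btype="band", fs=fs) # placeholder pass band

def enhance(ascan: np.ndarray, reference_pulse: np.ndarray) -> np.ndarray:
    """Zero-phase band-pass filtering (no peak-position shift), then
    cross-correlation with a reference pulse to emphasise true echoes."""
    filtered = filtfilt(b, a, ascan)  # forward-backward filtering -> zero phase
    return correlate(filtered, reference_pulse, mode="same")
```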
NASA Astrophysics Data System (ADS)
Chawla, Viveak Kumar; Chanda, Arindam Kumar; Angra, Surjit
2018-03-01
A flexible manufacturing system (FMS) consists of several programmable production work centers, material handling systems (MHSs), assembly stations, and automatic storage and retrieval systems. In an FMS, automatic guided vehicles (AGVs) play a vital role in material handling operations and enhance the overall performance of the FMS. To achieve low makespan and high throughput in FMS operations, it is imperative to integrate the production work center schedules with the AGV schedules. The production schedule for the work centers is generated by applying the Giffler and Thompson algorithm under four kinds of hybrid priority dispatching rules. The clonal selection algorithm (CSA) is then applied for simultaneous scheduling to reduce backtracking as well as the distance traveled by AGVs within the FMS facility. The proposed procedure is computationally tested on a benchmark FMS configuration from the literature, and the findings clearly indicate that the CSA yields the best results in comparison with the other methods applied from the literature.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qiu, J; Washington University in St Louis, St Louis, MO; Li, H. Harlod
Purpose: In RT patient setup 2D images, tissues often cannot be seen well due to the lack of image contrast. Contrast enhancement features provided by image reviewing software, e.g. Mosaiq and ARIA, require manual selection of the image processing filters and parameters, and are thus inefficient and cannot be automated. In this work, we developed a novel method to automatically enhance the 2D RT image contrast to allow automatic verification of patient daily setups as a prerequisite step of automatic patient safety assurance. Methods: The new method is based on contrast limited adaptive histogram equalization (CLAHE) and high-pass filtering algorithms. The most important innovation is to automatically select the optimal parameters by optimizing the image contrast. The image processing procedure includes the following steps: 1) background and noise removal, 2) high-pass filtering by subtracting the Gaussian-smoothed result, and 3) histogram equalization using the CLAHE algorithm. Three parameters were determined through an iterative optimization based on the interior-point constrained optimization algorithm: the Gaussian smoothing weighting factor, and the CLAHE block size and clip limit parameters. The goal of the optimization is to maximize the entropy of the processed result. Results: A total of 42 RT images were processed. The results were visually evaluated by RT physicians and physicists. About 48% of the images processed by the new method were ranked as excellent. In comparison, only 29% and 18% of the images processed by the basic CLAHE algorithm and by the basic window-level adjustment process, respectively, were ranked as excellent. Conclusion: This new image contrast enhancement method is robust and automatic, and is able to significantly outperform the basic CLAHE algorithm and the manual window-level adjustment process that are currently used in clinical 2D image review software tools.
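A simplified sketch of the parameter-selection idea, maximizing the entropy of the CLAHE output; a coarse grid search stands in for the interior-point optimization, and the background-removal and high-pass steps are omitted:

```python
import numpy as np
from skimage import exposure, measure

def auto_clahe(img, clip_limits=(0.005, 0.01, 0.02, 0.04),
               kernel_sizes=(32, 64, 128)):
    """Pick the CLAHE block size and clip limit that maximize the entropy of
    the enhanced image (grid-search stand-in for the iterative optimizer)."""
    best, best_entropy = None, -np.inf
    for clip in clip_limits:
        for ks in kernel_sizes:
            out = exposure.equalize_adapthist(img, kernel_size=ks, clip_limit=clip)
            h = measure.shannon_entropy(out)
            if h > best_entropy:
                best, best_entropy = out, h
    return best
```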
Gennaro, G; Ballaminut, A; Contento, G
2017-09-01
This study aims to illustrate a multiparametric automatic method for monitoring the long-term reproducibility of digital mammography systems, and its application on a large scale. Twenty-five digital mammography systems employed within a regional screening programme were controlled weekly using the same type of phantom, whose images were analysed by an automatic software tool. To assess system reproducibility levels, 15 image quality indices (IQIs) were extracted and compared with the corresponding indices previously determined by a baseline procedure. The coefficients of variation (COVs) of the IQIs were used to assess the overall variability. A total of 2553 phantom images were collected from the 25 digital mammography systems from March 2013 to December 2014. Most of the systems showed excellent image quality reproducibility over the surveillance interval, with mean variability below 5%. The variability of each IQI was below 5%, with the exception of one index associated with the smallest phantom objects (0.25 mm), which was below 10%. The method applied for the reproducibility tests (multi-detail phantoms, a cloud-based automatic software tool measuring multiple image quality indices, and statistical process control) was proven to be effective, applicable on a large scale and suitable for any type of digital mammography system. • Reproducibility of mammography image quality should be monitored by appropriate quality controls. • Use of automatic software tools allows image quality evaluation by multiple indices. • System reproducibility can be assessed comparing current index value with baseline data. • Overall system reproducibility of modern digital mammography systems is excellent. • The method proposed and applied is cost-effective and easily scalable.
Classification of independent components of EEG into multiple artifact classes.
Frølich, Laura; Andersen, Tobias S; Mørup, Morten
2015-01-01
In this study, we aim to automatically identify multiple artifact types in EEG. We used multinomial regression to classify independent components of EEG data, selecting from 65 spatial, spectral, and temporal features of independent components using forward selection. The classifier identified neural and five nonneural types of components. Between subjects within studies, high classification performances were obtained. Between studies, however, classification was more difficult. For neural versus nonneural classifications, performance was on par with previous results obtained by others. We found that automatic separation of multiple artifact classes is possible with a small feature set. Our method can reduce manual workload and allow for the selective removal of artifact classes. Identifying artifacts during EEG recording may be used to instruct subjects to refrain from activity causing them. Copyright © 2014 Society for Psychophysiological Research.
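A minimal sketch of the classification core, multinomial logistic regression with forward feature selection, using placeholder data in place of the 65 IC features (not the authors' code):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import SequentialFeatureSelector

# Placeholder data: 65 spatial/spectral/temporal features per independent
# component; y = neural (0) or one of five artifact classes (1..5).
X = np.random.rand(500, 65)
y = np.random.randint(0, 6, 500)

base = LogisticRegression(max_iter=1000)
fwd = SequentialFeatureSelector(base, n_features_to_select=10,
                                direction="forward", cv=5)
fwd.fit(X, y)
clf = base.fit(X[:, fwd.get_support()], y)  # final multinomial classifier
```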
Hippert, Henrique S; Taylor, James W
2010-04-01
Artificial neural networks have frequently been proposed for electricity load forecasting because of their capabilities for the nonlinear modelling of large multivariate data sets. Modelling with neural networks is not an easy task though; two of the main challenges are defining the appropriate level of model complexity, and choosing the input variables. This paper evaluates techniques for automatic neural network modelling within a Bayesian framework, as applied to six samples containing daily load and weather data for four different countries. We analyse input selection as carried out by the Bayesian 'automatic relevance determination', and the usefulness of the Bayesian 'evidence' for the selection of the best structure (in terms of number of neurones), as compared to methods based on cross-validation. Copyright 2009 Elsevier Ltd. All rights reserved.
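The paper's models are Bayesian neural networks; as a compact illustration of the automatic relevance determination mechanism itself, here is a linear ARD sketch with invented inputs (the same prior logic, but not the paper's model):

```python
import numpy as np
from sklearn.linear_model import ARDRegression

# Invented inputs: lagged loads, weather terms, calendar dummies, etc.
X = np.random.rand(365, 12)
true_w = np.array([1.5, 0, 0, 0.8] + [0] * 8)    # only two relevant inputs
y = X @ true_w + 0.1 * np.random.randn(365)

ard = ARDRegression().fit(X, y)
relevant = np.where(ard.lambda_ < 1e3)[0]  # low weight precision -> relevant input
```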
SU-E-T-362: Automatic Catheter Reconstruction of Flap Applicators in HDR Surface Brachytherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buzurovic, I; Devlin, P; Hansen, J
2014-06-01
Purpose: Catheter reconstruction is crucial for the accurate delivery of radiation dose in HDR brachytherapy. The process becomes complicated and time-consuming for large superficial clinical targets with a complex topology. A novel method for the automatic catheter reconstruction of flap applicators is proposed in this study. Methods: We have developed a program package capable of image manipulation, using C++ class libraries of The Visualization Toolkit (VTK) software system. The workflow for automatic catheter reconstruction is: a) an anchor point is placed, in 3D or in the axial view of the first slice, at the tip of the first, last and middle points for the curved surface; b) similar points are placed on the last slice of the image set; c) the surface detection algorithm automatically registers the points to the images and applies the surface reconstruction filter; d) a structured grid surface is then generated through the center of the treatment catheters, placed at a distance of 5 mm from the patient's skin. As a result, a mesh-style plane is generated with the reconstructed catheters placed 10 mm apart. To demonstrate automatic catheter reconstruction, we used CT images of patients diagnosed with cutaneous T-cell lymphoma and imaged with Freiburg-Flap-Applicators (Nucletron™-Elekta, Netherlands). The coordinates for each catheter were generated and compared to the control points selected during the manual reconstruction for 16 catheters and 368 control points. Results: The variation of the catheter tip positions between the automatically and manually reconstructed catheters was 0.17 mm (SD = 0.23 mm). The position difference between the manually selected catheter control points and the corresponding points obtained automatically was 0.17 mm in the x-direction (SD = 0.23 mm), 0.13 mm in the y-direction (SD = 0.22 mm), and 0.14 mm in the z-direction (SD = 0.24 mm). Conclusion: This study shows the feasibility of the automatic catheter reconstruction of flap applicators with a high level of positioning accuracy. Implementation of this technique has the potential to decrease planning time and may improve overall quality in superficial brachytherapy.
Van de Velde, Joris; Wouters, Johan; Vercauteren, Tom; De Gersem, Werner; Achten, Eric; De Neve, Wilfried; Van Hoof, Tom
2015-12-23
The present study aimed to measure the effect of a morphometric atlas selection strategy on the accuracy of multi-atlas-based BP autosegmentation using the commercially available software package ADMIRE® and to determine the optimal number of selected atlases to use. Autosegmentation accuracy was measured by comparing all generated automatic BP segmentations with anatomically validated gold standard segmentations that were developed using cadavers. Twelve cadaver computed tomography (CT) atlases were included in the study. One atlas was selected as a patient in ADMIRE®, and multi-atlas-based BP autosegmentation was first performed with a group of morphometrically preselected atlases. In this group, the atlases were selected on the basis of similarity in the shoulder protraction position with the patient. The number of selected atlases used started at two and increased up to eight. Subsequently, a group of randomly chosen, non-selected atlases was used. In this second group, every possible combination of 2 to 8 random atlases was used for multi-atlas-based BP autosegmentation. For both groups, the average Dice similarity coefficient (DSC), Jaccard index (JI) and Inclusion index (INI) were calculated, measuring the similarity of the generated automatic BP segmentations and the gold standard segmentation. Similarity indices of both groups were compared using an independent sample t-test, and the optimal number of selected atlases was investigated using an equivalence trial. For each number of atlases, the average similarity indices of the morphometrically selected atlas group were significantly higher than those of the random group (p < 0.05). In this study, the highest similarity indices were achieved using multi-atlas autosegmentation with 6 selected atlases (average DSC = 0.598; average JI = 0.434; average INI = 0.733). Morphometric atlas selection on the basis of the protraction position of the patient significantly improves multi-atlas-based BP autosegmentation accuracy. In this study, the optimal number of selected atlases used was six, but for definitive conclusions about the optimal number of atlases and to improve the autosegmentation accuracy for clinical use, more atlases need to be included.
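The three similarity indices are simple overlap ratios of binary masks; a short sketch (the inclusion index definition here is an assumption, as conventions vary):

```python
import numpy as np

def similarity_indices(auto: np.ndarray, gold: np.ndarray):
    """DSC, JI and an inclusion index for two binary segmentation masks.
    INI here is the fraction of the gold standard covered by the automatic
    segmentation (an assumed definition)."""
    a, g = auto.astype(bool), gold.astype(bool)
    inter = np.logical_and(a, g).sum()
    union = np.logical_or(a, g).sum()
    dsc = 2 * inter / (a.sum() + g.sum())  # Dice similarity coefficient
    ji = inter / union                     # Jaccard index
    ini = inter / g.sum()                  # inclusion index (assumption)
    return dsc, ji, ini
```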
Chen, Hsin-Chen; Jia, Wenyan; Yue, Yaofeng; Li, Zhaoxin; Sun, Yung-Nien; Fernstrom, John D.; Sun, Mingui
2013-01-01
Dietary assessment is important in health maintenance and intervention in many chronic conditions, such as obesity, diabetes, and cardiovascular disease. However, there is currently a lack of convenient methods for measuring the volume of food (portion size) in real-life settings. We present a computational method to estimate food volume from a single photographic image of food contained on a typical dining plate. First, we calculate the food location with respect to a 3D camera coordinate system using the plate as a scale reference. Then, the food is segmented automatically from the background in the image. Adaptive thresholding and snake modeling are implemented based on several image features, such as color contrast, regional color homogeneity and curve bending degree. Next, a 3D model representing the general shape of the food (e.g., a cylinder, a sphere, etc.) is selected from a pre-constructed shape model library. The position, orientation and scale of the selected shape model are determined by registering the projected 3D model and the food contour in the image, where the properties of the reference are used as constraints. Experimental results using various realistically shaped foods with known volumes demonstrated satisfactory performance of our image-based food volume measurement method even if the 3D geometric surface of the food is not completely represented in the input image. PMID:24223474
Morphometric Atlas Selection for Automatic Brachial Plexus Segmentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van de Velde, Joris, E-mail: joris.vandevelde@ugent.be; Department of Radiotherapy, Ghent University, Ghent; Wouters, Johan
Purpose: The purpose of this study was to determine the effects of atlas selection based on different morphometric parameters on the accuracy of automatic brachial plexus (BP) segmentation for radiation therapy planning. The segmentation accuracy was measured by comparing all of the generated automatic segmentations with anatomically validated gold standard atlases developed using cadavers. Methods and Materials: Twelve cadaver computed tomography (CT) atlases (3 males, 9 females; mean age: 73 years) were included in the study. One atlas was selected to serve as a patient, and the other 11 atlases were registered separately onto this “patient” using deformable image registration. This procedure was repeated for every atlas as a patient. Next, the Dice and Jaccard similarity indices and inclusion index were calculated for every registered BP against the original gold standard BP. In parallel, differences in several morphometric parameters that may influence the BP segmentation accuracy were measured for the different atlases. Specific brachial plexus-related CT-visible bony points were used to define the morphometric parameters. Subsequently, correlations between the similarity indices and morphometric parameters were calculated. Results: A clear negative correlation between the difference in protraction-retraction distance and the similarity indices was observed (mean Pearson correlation coefficient = −0.546). All of the other investigated Pearson correlation coefficients were weak. Conclusions: Differences in the shoulder protraction-retraction position between the atlas and the patient during planning CT influence the BP autosegmentation accuracy. A greater difference in the protraction-retraction distance between the atlas and the patient reduces the accuracy of the automatic BP segmentation result.
Wallace, Adam N; Vyhmeister, Ross; Bagade, Swapnil; Chatterjee, Arindam; Hicks, Brandon; Ramirez-Giraldo, Juan Carlos; McKinstry, Robert C
2015-06-01
Cerebrospinal fluid shunts are primarily used for the treatment of hydrocephalus. Shunt complications may necessitate multiple non-contrast head CT scans, resulting in potentially high levels of radiation dose starting at an early age. A new head CT protocol using automatic exposure control and automated tube potential selection has been implemented at our institution to reduce radiation exposure. The purpose of this study was to evaluate the reduction in radiation dose achieved by this protocol compared with a protocol with fixed parameters. A retrospective sample of 60 non-contrast head CT scans assessing for cerebrospinal fluid shunt malfunction was identified, 30 of which were performed with each protocol. The radiation doses of the two protocols were compared using the volume CT dose index and dose-length product. The diagnostic acceptability and quality of each scan were evaluated by three independent readers. The new protocol lowered the average volume CT dose index from 15.2 to 9.2 mGy, representing a 39% reduction (P < 0.01; 95% CI 35-44%), and lowered the dose-length product from 259.5 to 151.2 mGy·cm, representing a 42% reduction (P < 0.01; 95% CI 34-50%). The new protocol produced diagnostically acceptable scans with image quality comparable to the fixed parameter protocol. A pediatric shunt non-contrast head CT protocol using automatic exposure control and automated tube potential selection reduced patient radiation dose compared with a fixed parameter protocol while producing diagnostic images of comparable quality.
Validity of Scores for a Developmental Writing Scale Based on Automated Scoring
ERIC Educational Resources Information Center
Attali, Yigal; Powers, Donald
2009-01-01
A developmental writing scale for timed essay-writing performance was created on the basis of automatically computed indicators of writing fluency, word choice, and conventions of standard written English. In a large-scale data collection effort that involved a national sample of more than 12,000 students from 4th, 6th, 8th, 10th, and 12th grade,…
NASA Astrophysics Data System (ADS)
Al-Jumaili, Safaa Kh.; Pearson, Matthew R.; Holford, Karen M.; Eaton, Mark J.; Pullin, Rhys
2016-05-01
An easy-to-use, fast-to-apply, cost-effective, and very accurate non-destructive testing (NDT) technique for damage localisation in complex structures is key for the uptake of structural health monitoring (SHM) systems. Acoustic emission (AE) is a viable technique that can be used for SHM, and one of its most attractive features is the ability to locate AE sources. The time of arrival (TOA) technique is traditionally used to locate AE sources, and relies on the assumption of constant wave speed within the material and an uninterrupted propagation path between the source and the sensor. In complex structural geometries and complex materials such as composites, this assumption is no longer valid. Delta T mapping was developed in Cardiff in order to overcome these limitations; this technique uses artificial sources on an area of interest to create training maps. These are used to locate subsequent AE sources. However, operator expertise is required to select the best data from the training maps and to choose the correct parameters to locate the sources, which can be a time-consuming process. This paper presents a new and improved, fully automatic delta T mapping technique in which a clustering algorithm is used to automatically identify and select the highly correlated events at each grid point, whilst the "Minimum Difference" approach is used to determine the source location. This removes the requirement for operator expertise, saving time and preventing human errors. A thorough assessment was conducted to evaluate the performance and robustness of the new technique. In the initial test, the results showed an excellent reduction in running time as well as improved accuracy in locating AE sources, as a result of the automatic selection of the training data. Furthermore, because the process is performed automatically, the technique is now very simple and reliable, since a potential source of error related to manual manipulation is removed.
RoboTAP: Target priorities for robotic microlensing observations
NASA Astrophysics Data System (ADS)
Hundertmark, M.; Street, R. A.; Tsapras, Y.; Bachelet, E.; Dominik, M.; Horne, K.; Bozza, V.; Bramich, D. M.; Cassan, A.; D'Ago, G.; Figuera Jaimes, R.; Kains, N.; Ranc, C.; Schmidt, R. W.; Snodgrass, C.; Wambsganss, J.; Steele, I. A.; Mao, S.; Ment, K.; Menzies, J.; Li, Z.; Cross, S.; Maoz, D.; Shvartzvald, Y.
2018-01-01
Context. The ability to automatically select scientifically-important transient events from an alert stream of many such events, and to conduct follow-up observations in response, will become increasingly important in astronomy. With wide-angle time domain surveys pushing to fainter limiting magnitudes, the demand for follow-up of transient alerts far exceeds our follow-up telescope resources, and effective target prioritization becomes essential. The RoboNet-II microlensing program is a pathfinder project, which has developed an automated target selection process (RoboTAP) for gravitational microlensing events, which are observed in real time using the Las Cumbres Observatory telescope network. Aims: Follow-up telescopes typically have a much smaller field of view than surveys; therefore, the most promising microlensing events must be automatically selected at any given time from an annual sample exceeding 2000 events. The main challenge is to select between events with a high planet detection sensitivity, with the aim of detecting many planets and characterizing planetary anomalies. Methods: Our target selection algorithm is a hybrid system based on estimates of the planet detection zones around a microlens. It follows automatic anomaly alerts and respects the expected survey coverage of specific events. Results: We introduce the RoboTAP algorithm, whose purpose is to select and prioritize microlensing events with high sensitivity to planetary companions. In this work, we determine the planet sensitivity of the RoboNet follow-up program and provide a working example of how a broker can be designed for a real-life transient science program conducting follow-up observations in response to alerts; we explore the issues that will confront similar programs being developed for the Large Synoptic Survey Telescope (LSST) and other time domain surveys.
Management of natural resources through automatic cartographic inventory
NASA Technical Reports Server (NTRS)
Rey, P. A.; Gourinard, Y.; Cambou, F. (Principal Investigator)
1974-01-01
The author has identified the following significant results. Significant correspondence codes relating ERTS imagery to ground truth from vegetation and geology maps have been established. The use of color equidensity and color composite methods for selecting zones of equal densitometric value on ERTS imagery was perfected. The primary interest of the temporal color composite is stressed. A chain of transfer operations from ERTS imagery to the automatic mapping of natural resources was developed.
Miconi, Thomas; VanRullen, Rufin
2016-02-01
Visual attention has many effects on neural responses, producing complex changes in firing rates, as well as modifying the structure and size of receptive fields, both in topological and feature space. Several existing models of attention suggest that these effects arise from selective modulation of neural inputs. However, anatomical and physiological observations suggest that attentional modulation targets higher levels of the visual system (such as V4 or MT) rather than input areas (such as V1). Here we propose a simple mechanism that explains how a top-down attentional modulation, falling on higher visual areas, can produce the observed effects of attention on neural responses. Our model requires only the existence of modulatory feedback connections between areas, and short-range lateral inhibition within each area. Feedback connections redistribute the top-down modulation to lower areas, which in turn alters the inputs of other higher-area cells, including those that did not receive the initial modulation. This produces firing rate modulations and receptive field shifts. Simultaneously, short-range lateral inhibition between neighboring cells produces competitive effects that are automatically scaled to receptive field size in any given area. Our model reproduces the observed attentional effects on response rates (response gain, input gain, biased competition automatically scaled to receptive field size) and receptive field structure (shifts and resizing of receptive fields both spatially and in complex feature space), without modifying model parameters. Our model also makes the novel prediction that attentional effects on response curves should shift from response gain to contrast gain as the spatial focus of attention drifts away from the studied cell.
Evolutionary game dynamics of controlled and automatic decision-making
NASA Astrophysics Data System (ADS)
Toupo, Danielle F. P.; Strogatz, Steven H.; Cohen, Jonathan D.; Rand, David G.
2015-07-01
We integrate dual-process theories of human cognition with evolutionary game theory to study the evolution of automatic and controlled decision-making processes. We introduce a model in which agents who make decisions using either automatic or controlled processing compete with each other for survival. Agents using automatic processing act quickly and so are more likely to acquire resources, but agents using controlled processing are better planners and so make more effective use of the resources they have. Using the replicator equation, we characterize the conditions under which automatic or controlled agents dominate, when coexistence is possible and when bistability occurs. We then extend the replicator equation to consider feedback between the state of the population and the environment. Under conditions in which having a greater proportion of controlled agents either enriches the environment or enhances the competitive advantage of automatic agents, we find that limit cycles can occur, leading to persistent oscillations in the population dynamics. Critically, however, these limit cycles only emerge when feedback occurs on a sufficiently long time scale. Our results shed light on the connection between evolution and human cognition and suggest necessary conditions for the rise and fall of rationality.
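A minimal sketch of the single-population replicator equation described above, with invented payoff functions in place of the paper's (the qualitative ingredients only, not the actual model):

```python
import numpy as np
from scipy.integrate import solve_ivp

# x = fraction of automatic agents; payoffs are illustrative placeholders.
def f_auto(x):       return 1.2 - 0.5 * x   # fast resource acquisition
def f_controlled(x): return 0.8 + 0.6 * x   # better planning, frequency-dependent

def replicator(t, y):
    x = y[0]
    return [x * (1 - x) * (f_auto(x) - f_controlled(x))]  # dx/dt

sol = solve_ivp(replicator, (0.0, 50.0), [0.1], dense_output=True)
```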
NASA Astrophysics Data System (ADS)
Battistini, Alessandro; Rosi, Ascanio; Segoni, Samuele; Catani, Filippo; Casagli, Nicola
2017-04-01
Landslide inventories are basic data for large scale landslide modelling; e.g., they are needed to calibrate and validate rainfall thresholds, physically based models and early warning systems. The setting up of landslide inventories with traditional methods (e.g. remote sensing, field surveys and manual retrieval of data from technical reports and local newspapers) is time consuming. The objective of this work is to automatically set up a landslide inventory using a state-of-the-art semantic engine based on data mining of online news (Battistini et al., 2013) and to evaluate whether the automatically generated inventory can be used to validate a regional scale landslide warning system based on rainfall thresholds. The semantic engine scanned internet news in real time over a 50-month test period. At the end of the process, an inventory of approximately 900 landslides was set up for the Tuscany region (23,000 km2, Italy). The inventory was compared with the outputs of the regional landslide early warning system based on rainfall thresholds, and a good correspondence was found: e.g., 84% of the events reported in the news are correctly identified by the model. In addition, the cases of non-correspondence were forwarded to the rainfall threshold developers, who used these inputs to update some of the thresholds. On the basis of the results obtained, we conclude that automatic validation of landslide models using geolocalized landslide event feedback is possible. The source of data for validation can be obtained directly from the internet using an appropriate semantic engine. We also automated the validation procedure, which is based on a comparison between forecasts and reported events. We verified that our approach can be used for near-real-time validation of the warning system and for a semi-automatic update of the rainfall thresholds, which could lead to an improvement of the forecasting effectiveness of the warning system. In the near future, the proposed procedure could operate in continuous time and could allow for a periodic update of landslide hazard models and landslide early warning systems with minimum human intervention. References: Battistini, A., Segoni, S., Manzo, G., Catani, F., Casagli, N. (2013). Web data mining for automatic inventory of geohazards at national scale. Applied Geography, 43, 147-158.
Automatic crack detection and classification method for subway tunnel safety monitoring.
Zhang, Wenyu; Zhang, Zhenjiang; Qi, Dapeng; Liu, Yun
2014-10-16
Cracks are an important indicator reflecting the safety status of infrastructures. This paper presents an automatic crack detection and classification methodology for subway tunnel safety monitoring. With the application of high-speed complementary metal-oxide-semiconductor (CMOS) industrial cameras, the tunnel surface can be captured and stored in digital images. In the next step, the local dark regions with potential crack defects are segmented from the original gray-scale images by utilizing morphological image processing techniques and thresholding operations. In the feature extraction process, we present a distance histogram based shape descriptor that effectively describes the spatial shape difference between cracks and other irrelevant objects. Along with other features, the classification results successfully remove over 90% of misidentified objects. Also, compared with the original gray-scale images, over 90% of the crack length is preserved in the final output binary images. The proposed approach was tested on the safety monitoring for Beijing Subway Line 1. The experimental results revealed the rules of parameter settings and also proved that the proposed approach is effective and efficient for automatic crack detection and classification.
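A rough sketch of the dark-region segmentation step (morphological enhancement plus thresholding); the kernel size and area cutoff are placeholders, and the feature-based classifier is reduced to a simple size filter:

```python
import cv2
import numpy as np

def segment_dark_regions(gray: np.ndarray) -> np.ndarray:
    """Segment local dark regions (crack candidates) from a gray-scale image."""
    # black-hat transform highlights thin dark structures against the background
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)
    # Otsu thresholding turns the enhanced image into a binary candidate mask
    _, mask = cv2.threshold(blackhat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # small-object removal as a crude stand-in for the feature-based classifier
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    out = np.zeros_like(mask)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] > 50:
            out[labels == i] = 255
    return out
```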
NASA Astrophysics Data System (ADS)
Adi, K.; Widodo, A. P.; Widodo, C. E.; Pamungkas, A.; Putranto, A. B.
2018-05-01
Traffic monitoring on roads requires counting the number of vehicles passing along the road. This is particularly important for highway transportation management. Therefore, it is necessary to develop a system that is able to count the number of vehicles automatically. Video processing methods can count the number of vehicles automatically. This research developed a vehicle counting system for a toll road. The system includes video acquisition, frame extraction, and image processing for each frame. Video acquisition was conducted in the morning, at noon, in the afternoon, and in the evening. The system employs background subtraction and morphology methods on gray-scale images for vehicle counting. The best vehicle counting results were obtained in the morning, with a counting accuracy of 86.36%, whereas the lowest accuracy was in the evening, at 21.43%. The difference between the morning and evening results is caused by the different illumination, which changes the pixel values in the images.
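A deliberately simplified sketch of such a pipeline (background subtraction, morphology, blob counting); the file name, thresholds, and counting rule are hypothetical, and a real counter would need tracking to avoid double counts:

```python
import cv2

cap = cv2.VideoCapture("toll_road.mp4")  # hypothetical input video
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
count = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    fg = subtractor.apply(gray)
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel)   # remove speckle noise
    fg = cv2.morphologyEx(fg, cv2.MORPH_CLOSE, kernel)  # fill vehicle blobs
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h > 2000 and y + h > 400:  # blob crosses a hypothetical counting line
            count += 1
```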
Fiber optic crossbar switch for automatically patching optical signals
NASA Technical Reports Server (NTRS)
Bell, C. H. (Inventor)
1983-01-01
A system for automatically switching fiber optic data signals between a plurality of input optical fibers and selected ones of a plurality of output fibers is described. The system includes optical detectors connected to each of the input fibers for converting the optical data signals appearing at the respective input fibers to RF signals. A plurality of RF-to-optical signal converters are arranged in rows and columns. The output of each optical detector is applied to a respective row of optical signal converters and is converted back to an optical signal when a particular optical signal converter is selectively activated by a DC voltage.
Automatic Generation of Building Models with Levels of Detail 1-3
NASA Astrophysics Data System (ADS)
Nguatem, W.; Drauschke, M.; Mayer, H.
2016-06-01
We present a workflow for the automatic generation of building models with levels of detail (LOD) 1 to 3 according to the CityGML standard (Gröger et al., 2012). We start by orienting unsorted image sets (Mayer et al., 2012), compute depth maps using semi-global matching (SGM) (Hirschmüller, 2008), and fuse these depth maps to reconstruct dense 3D point clouds (Kuhn et al., 2014). Based on planes segmented from these point clouds, we have developed a stochastic method for roof model selection (Nguatem et al., 2013) and window model selection (Nguatem et al., 2014). We demonstrate our workflow up to the export into CityGML.
[Development of a Compared Software for Automatically Generated DVH in Eclipse TPS].
Xie, Zhao; Luo, Kelin; Zou, Lian; Hu, Jinyou
2016-03-01
The aim of this study was to automatically calculate the dose-volume histogram (DVH) for a treatment plan and compare it with the requirements of the physician's prescription. The scripting language AutoHotkey and the programming language C# were used to develop comparison software for automatically generated DVHs in the Eclipse TPS. The software, named Show Dose Volume Histogram (ShowDVH), comprises prescription document generation, DVH operation functions, visualization, and DVH comparison report generation. For ten cases of different cancers, ShowDVH in Eclipse TPS 11.0 not only automatically generated DVH reports but also accurately determined whether the treatment plans met the prescription requirements; the reports then guided the setting of optimization parameters for intensity-modulated radiation therapy. ShowDVH is user-friendly and powerful software that quickly generates DVH comparison reports in Eclipse TPS 11.0; it greatly reduces plan design time and improves the working efficiency of radiation therapy physicists.
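As an illustration of the kind of check such software automates, here is a minimal sketch of computing a cumulative DVH and testing a prescription-style constraint (names and the constraint are hypothetical; this is not the ShowDVH code):

```python
import numpy as np

def cumulative_dvh(structure_dose_gy: np.ndarray, bin_gy: float = 0.1):
    """Cumulative DVH: % of the structure volume receiving at least each dose
    level; `structure_dose_gy` holds the dose of every voxel in one ROI."""
    levels = np.arange(0.0, structure_dose_gy.max() + bin_gy, bin_gy)
    volume_pct = np.array([100.0 * np.mean(structure_dose_gy >= d) for d in levels])
    return levels, volume_pct

def meets_constraint(structure_dose_gy, level_gy, min_volume_pct):
    """Prescription-style check, e.g. 'V50Gy >= 95%' (hypothetical constraint)."""
    return 100.0 * np.mean(structure_dose_gy >= level_gy) >= min_volume_pct
```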
Zhu, Chengcheng; Patterson, Andrew J; Thomas, Owen M; Sadat, Umar; Graves, Martin J; Gillard, Jonathan H
2013-04-01
Luminal stenosis is used for selecting the optimal management strategy for patients with carotid artery disease. The aim of this study is to evaluate the reproducibility of carotid stenosis quantification using manual and automated segmentation methods with submillimeter through-plane resolution multi-detector CT angiography (MDCTA). 35 patients with carotid artery disease and >30% luminal stenosis, as identified by carotid duplex imaging, underwent contrast-enhanced MDCTA. Two experienced CT readers quantified carotid stenosis using NASCET criteria from axial source images, reconstructed maximum intensity projections (MIP), and 3D carotid geometry automatically segmented by an open-source toolkit (the Vascular Modelling Toolkit, VMTK). Good agreement among the measurements using axial images, MIP and automatic segmentation was observed. The automatic segmentation method shows better inter-observer agreement between the readers (intra-class correlation coefficient (ICC): 0.99 for diameter stenosis measurement) than manual measurement of axial (ICC = 0.82) and MIP (ICC = 0.86) images. Carotid stenosis quantification using an automatic segmentation method has higher reproducibility compared with manual methods.
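NASCET-style percent stenosis is a simple ratio of diameters; a one-function sketch (hypothetical helper, measurements in mm):

```python
def nascet_stenosis_pct(minimal_lumen_diam_mm: float,
                        distal_normal_diam_mm: float) -> float:
    """Percent diameter stenosis by the NASCET criterion: narrowest residual
    lumen relative to the normal internal carotid diameter distal to the lesion."""
    return 100.0 * (1.0 - minimal_lumen_diam_mm / distal_normal_diam_mm)
```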
Roberts, Walter; Fillmore, Mark T.; Milich, Richard
2011-01-01
Researchers in the cognitive sciences recognize a fundamental distinction between automatic and intentional mechanisms of inhibitory control. The use of eye-tracking tasks to assess selective attention has led to a better understanding of this distinction in specific populations such as children with attention-deficit/hyperactivity disorder (ADHD). This study examined automatic and intentional inhibitory control mechanisms in adults with ADHD using a saccadic interference (SI) task and a delayed ocular response (DOR) task. Thirty adults with ADHD were compared to 27 comparison adults on measures of inhibitory control. The DOR task showed that adults with ADHD were less able than comparison adults to inhibit a reflexive saccade towards the sudden appearance of a stimulus in the periphery. However, SI task performance showed that the ADHD group did not differ significantly from the comparison group on a measure of automatic inhibitory control. These findings suggest a dissociation between automatic and intentional inhibitory deficits in adults with ADHD. PMID:21058752
Integrated Approach to Reconstruction of Microbial Regulatory Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodionov, Dmitry A; Novichkov, Pavel S
2013-11-04
This project had the goal of developing an integrated bioinformatics platform for genome-scale inference and visualization of transcriptional regulatory networks (TRNs) in bacterial genomes. The work was done at the Sanford-Burnham Medical Research Institute (SBMRI, P.I. D.A. Rodionov) and Lawrence Berkeley National Laboratory (LBNL, co-P.I. P.S. Novichkov). The developed computational resources include: (1) the RegPredict web platform for TRN inference and regulon reconstruction in microbial genomes, and (2) the RegPrecise database for the collection, visualization and comparative analysis of transcriptional regulons reconstructed by comparative genomics. These analytical resources were selected as key components in the DOE Systems Biology KnowledgeBase (SBKB). The high-quality data accumulated in RegPrecise will provide essential datasets of reference regulons in diverse microbes to enable automatic reconstruction of draft TRNs in newly sequenced genomes. We outline our progress toward the three aims of this grant proposal, which were: (1) develop an integrated platform for genome-scale regulon reconstruction; (2) infer regulatory annotations in several groups of bacteria and build reference collections of microbial regulons; and (3) develop a KnowledgeBase on microbial transcriptional regulation.
Improving automatic peptide mass fingerprint protein identification by combining many peak sets.
Rögnvaldsson, Thorsteinn; Häkkinen, Jari; Lindberg, Claes; Marko-Varga, György; Potthast, Frank; Samuelsson, Jim
2004-08-05
An automated peak picking strategy is presented in which several peak sets with different signal-to-noise levels are combined to form a more reliable statement on the protein identity. The strategy is compared against both manual peak picking and industry-standard automated peak picking on a set of mass spectra obtained after tryptic in-gel digestion of 2D-gel samples from human fetal fibroblasts. The set contains spectra ranging from strong to weak, and the proposed multiple-scale method is shown to be much better on weak spectra than the industry-standard method and a human operator, and equal in performance to these on strong and medium-strong spectra. It is also demonstrated that peak sets selected by a human operator display considerable variability and that it is impossible to speak of a single "true" peak set for a given spectrum. The described multiple-scale strategy both avoids time-consuming parameter tuning and exceeds the human operator in protein identification efficiency. The strategy therefore promises reliable automated user-independent protein identification using peptide mass fingerprints.
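One way to combine peak sets picked at several signal-to-noise levels is to score each peak by how many levels it survives; a hedged sketch of that idea (not the authors' implementation):

```python
import numpy as np
from scipy.signal import find_peaks

def multiscale_peaks(spectrum: np.ndarray, snr_levels=(2, 4, 8)) -> dict:
    """Pick peaks at several SNR thresholds; the returned score (1..len(snr_levels))
    says at how many scales each peak survives, which a downstream search
    engine could use to weight its contribution to identification."""
    noise = np.median(np.abs(spectrum - np.median(spectrum)))  # robust noise scale
    score = {}
    for snr in snr_levels:
        idx, _ = find_peaks(spectrum, height=snr * noise)
        for i in idx:
            score[i] = score.get(i, 0) + 1
    return score  # {peak index: number of scales at which it was detected}
```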
Long-term variations of fluxes of solar protons and helium isotopes
NASA Astrophysics Data System (ADS)
Anufriev, G. S.
2012-11-01
The fluxes of hydrogen and helium isotopes in the solar wind are reconstructed over a long time scale, from the present back to 600 million years ago. Abundances of helium isotopes, obtained from the helium isotopic analysis of 8 lunar soil samples, were used as initial data in the reconstruction procedure. The samples were taken from various levels of the 1.6-m core of lunar soil delivered by the automatic Luna-24 station in 1976. The data on modern hydrogen and helium fluxes were used as well. The developed reconstruction procedure allowed various solar wind components to be separated in a "gross" composition. Proton flux variations over the interval of 600 million years do not exceed 40%. Helium flux variations reach a factor of 1.5-2 relative to the average value. Most likely, this circumstance is caused by considerable variations in the number of coronal mass ejections (CMEs) enriched in helium. The arguments in favor of solar activity polycyclicity on a long time scale are discussed.
Daffner, Kirk R; Alperin, Brittany R; Mott, Katherine K; Holcomb, Phillip J
2014-01-22
Older adults exhibit diminished ability to inhibit the processing of visual stimuli that are supposed to be ignored. The extent to which age-related changes in early visual processing contribute to impairments in selective attention remains to be determined. Here, 103 adults, 18-85 years of age, completed a color selective attention task in which they were asked to attend to a specified color and respond to designated target letters. An optimal approach would be to initially filter according to color and then process letter forms in the attend color to identify targets. An asymmetric N170 ERP component (larger amplitude over left posterior hemisphere sites) was used as a marker of the early automatic processing of letter forms. Young and middle-aged adults did not generate an asymmetric N170 component. In contrast, young-old and old-old adults produced a larger N170 over the left hemisphere. Furthermore, older adults generated a larger N170 to letter than nonletter stimuli over the left, but not right hemisphere. More asymmetric N170 responses predicted greater allocation of late selection resources to target letters in the ignore color, as indexed by P3b amplitude. These results suggest that unlike their younger counterparts, older adults automatically process stimuli as letters early in the selection process, when it would be more efficient to attend to color only. The inability to ignore letters early in the processing stream helps explain the age-related increase in subsequent processing of target letter forms presented in the ignore color.
NASA Technical Reports Server (NTRS)
Tarabalka, Y.; Tilton, J. C.; Benediktsson, J. A.; Chanussot, J.
2012-01-01
The Hierarchical SEGmentation (HSEG) algorithm, which combines region object finding with region object clustering, has given good performance for multi- and hyperspectral image analysis. This technique produces at its output a hierarchical set of image segmentations. The automated selection of a single segmentation level is often necessary. We propose and investigate the use of automatically selected markers for this purpose. In this paper, a novel marker-based HSEG (M-HSEG) method for spectral-spatial classification of hyperspectral images is proposed. Two classification-based approaches for automatic marker selection are adapted and compared. Then, a novel constrained marker-based HSEG algorithm is applied, resulting in a spectral-spatial classification map. Three different implementations of the M-HSEG method are proposed and their performances in terms of classification accuracies are compared. The experimental results, presented for three hyperspectral airborne images, demonstrate that the proposed approach yields accurate segmentation and classification maps, and thus is attractive for remote sensing image analysis.
Lymph node detection in IASLC-defined zones on PET/CT images
NASA Astrophysics Data System (ADS)
Song, Yihua; Udupa, Jayaram K.; Odhner, Dewey; Tong, Yubing; Torigian, Drew A.
2016-03-01
Lymph node detection is challenging due to the low contrast between lymph nodes and surrounding soft tissues and to the variation in nodal size and shape. In this paper, we propose several novel ideas which are combined into a system that operates on positron emission tomography/computed tomography (PET/CT) images to detect abnormal thoracic nodes. First, our previous Automatic Anatomy Recognition (AAR) approach is modified so that lymph node zones, predominantly following International Association for the Study of Lung Cancer (IASLC) specifications, are modeled as objects arranged in a hierarchy along with key anatomic anchor objects. This fuzzy anatomy model, built from diagnostic CT images, is then deployed on PET/CT images for automatically recognizing the zones. A novel globular filter (g-filter) to detect blob-like objects over a specified range of sizes is designed to detect the most likely locations and sizes of diseased nodes. Abnormal nodes within each automatically localized zone are subsequently detected via combined use of different items of information at various scales: lymph node zone model poses found at recognition, indicating the geographic layout of node clusters at the global level; the g-filter response, which homes in on and carefully selects node-like globular objects at the node level; and CT and PET gray values within only the most plausible nodal regions at the voxel level. The models are built from 25 diagnostic CT scans and refined for an object hierarchy based on a separate set of 20 diagnostic CT scans. Node detection is tested on an additional set of 20 PET/CT scans. Our preliminary results indicate node detection sensitivity and specificity at around 90% and 85%, respectively.
Meyer, C R; Boes, J L; Kim, B; Bland, P H; Zasadny, K R; Kison, P V; Koral, K; Frey, K A; Wahl, R L
1997-04-01
This paper applies and evaluates an automatic mutual information-based registration algorithm across a broad spectrum of multimodal volume data sets. The algorithm requires little or no pre-processing, minimal user input, and easily implements either affine (i.e., linear) or thin-plate spline (TPS) warped registrations. We have evaluated the algorithm in phantom studies as well as in selected cases where few other algorithms could perform as well, if at all, to demonstrate the value of this new method. Pairs of multimodal gray-scale volume data sets were registered by iteratively changing registration parameters to maximize mutual information. Quantitative registration errors were assessed in registrations of a thorax phantom using PET/CT and in the National Library of Medicine's Visible Male using MRI T2-/T1-weighted acquisitions. Registrations of diverse clinical data sets were demonstrated, including rotate-translate mapping of PET/MRI brain scans with significant missing data, full affine mapping of thoracic PET/CT, and rotate-translate mapping of abdominal SPECT/CT. A five-point TPS-warped registration of thoracic PET/CT is also demonstrated. The registration algorithm converged in times ranging between 3.5 and 31 min for affine clinical registrations and 57 min for TPS warping. Mean error vector lengths for rotate-translate registrations were measured to be subvoxel in phantoms. More importantly, the rotate-translate algorithm performs well even with missing data. The demonstrated clinical fusions are qualitatively excellent at all levels. We conclude that such automatic, rapid, robust algorithms significantly increase the likelihood that multimodality registrations will be routinely used to aid clinical diagnoses and post-therapeutic assessment in the near future.
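For illustration, the mutual-information objective that such registration maximizes can be sketched from a joint gray-value histogram. This is a minimal numpy version, not the authors' implementation; the optimization loop that perturbs the affine or TPS parameters is omitted, and the bin count is an arbitrary choice:

import numpy as np

def mutual_information(a, b, bins=64):
    # Joint histogram of corresponding gray values in the two volumes
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                  # joint probability
    px = pxy.sum(axis=1, keepdims=True)      # marginal of volume a
    py = pxy.sum(axis=0, keepdims=True)      # marginal of volume b
    nz = pxy > 0                             # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Toy check: MI of a volume against a noisy copy of itself
a = np.random.default_rng(0).random((32, 32, 8))
b = a + 0.05 * np.random.default_rng(1).random((32, 32, 8))
print(mutual_information(a, b))

An optimizer would repeatedly resample one volume under trial transform parameters and keep the parameters that maximize this score.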
Woodman, Geoffrey F.; Luck, Steven J.
2007-01-01
In many theories of cognition, researchers propose that working memory and perception operate interactively. For example, in previous studies researchers have suggested that sensory inputs matching the contents of working memory will have an automatic advantage in the competition for processing resources. The authors tested this hypothesis by requiring observers to perform a visual search task while concurrently maintaining object representations in visual working memory. The hypothesis that working memory activation produces a simple but uncontrollable bias signal leads to the prediction that items matching the contents of working memory will automatically capture attention. However, no evidence for automatic attentional capture was obtained; instead, the participants avoided attending to these items. Thus, the contents of working memory can be used in a flexible manner for facilitation or inhibition of processing. PMID:17469973
Woodman, Geoffrey F; Luck, Steven J
2007-04-01
In many theories of cognition, researchers propose that working memory and perception operate interactively. For example, in previous studies researchers have suggested that sensory inputs matching the contents of working memory will have an automatic advantage in the competition for processing resources. The authors tested this hypothesis by requiring observers to perform a visual search task while concurrently maintaining object representations in visual working memory. The hypothesis that working memory activation produces a simple but uncontrollable bias signal leads to the prediction that items matching the contents of working memory will automatically capture attention. However, no evidence for automatic attentional capture was obtained; instead, the participants avoided attending to these items. Thus, the contents of working memory can be used in a flexible manner for facilitation or inhibition of processing.
Automatic morphological classification of galaxy images
Shamir, Lior
2009-01-01
We describe an image analysis supervised learning algorithm that can automatically classify galaxy images. The algorithm is first trained using manually classified images of elliptical, spiral, and edge-on galaxies. A large set of image features is extracted from each image, and the most informative features are selected using Fisher scores. Test images can then be classified using a simple Weighted Nearest Neighbor rule such that the Fisher scores are used as the feature weights. Experimental results show that galaxy images from Galaxy Zoo can be classified automatically into spiral, elliptical, and edge-on galaxies with an accuracy of ~90% compared to classifications carried out by the author. Full compilable source code of the algorithm is available for free download, and its general-purpose nature makes it suitable for other uses that involve automatic image analysis of celestial objects. PMID:20161594
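The selection-and-classification core of such a scheme is compact enough to sketch. The following is a hedged illustration, not the author's released code; feature extraction from galaxy images is replaced by random stand-in data, and the function names are mine:

import numpy as np

def fisher_scores(X, y):
    # Per-feature Fisher score: between-class scatter of the class
    # means over the summed within-class variances.
    classes = np.unique(y)
    mu = X.mean(axis=0)
    num = sum(np.sum(y == c) * (X[y == c].mean(axis=0) - mu) ** 2
              for c in classes)
    den = sum(np.sum(y == c) * X[y == c].var(axis=0) for c in classes)
    return num / (den + 1e-12)

def classify_wnn(x, X_train, y_train, w):
    # Weighted nearest neighbor: Fisher scores weight each feature
    # in the distance computation.
    d = np.sqrt(((X_train - x) ** 2 * w).sum(axis=1))
    return y_train[np.argmin(d)]

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 50))          # stand-in image features
y = rng.integers(0, 3, 120)             # 0=spiral, 1=elliptical, 2=edge-on
w = fisher_scores(X, y)
top = np.argsort(w)[::-1][:10]          # keep the most informative features
x_new = rng.normal(size=50)
print(classify_wnn(x_new[top], X[:, top], y, w[top]))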
Ionospheric Research Using Digital Ionosondes.
1983-07-01
[Extraction residue from the report's table of contents and figure list; recoverable headings include: true-height analysis with ARTIST; chemical release experiments at Natal; ionospheric heating experiments at Arecibo; Digisonde; ARTIST ionogram prints and automatic profiles from Thule; comparisons of manual and automatic scalings; ARTIST initialization and output.]
Gül, A I; Simsek, G; Karaaslan, Ö; Inanir, S
2015-08-01
Automatic thoughts are measurable cognitive markers of the psychopathology and coping styles of individuals. This study measured and compared the automatic thoughts of patients with generalized anxiety disorder (GAD), major depressive disorder (MDD), and generalized social phobia (GSP). Fifty-two patients with GAD, 53 with MDD, and 50 with GSP and 52 healthy controls completed the validated Automatic Thoughts Questionnaire (ATQ) and a structured psychiatric interview. Patients with GAD, MDD, and GSP also completed the validated Generalized Anxiety Disorder-7 questionnaire, the Beck Depression Inventory (BDI), and the Liebowitz Social Anxiety Scale (LSAS) to determine the severity of their illnesses. All scales were completed after diagnosis and before treatment. The ATQ scores of all pairs of groups were compared. The ATQ scores of the GAD, MDD, and GSP groups were significantly higher than those of the control group. We also found significant correlations among scores on the GAD-7, BDI, and LSAS. The mean age of patients with GSP (30.90 ± 8.35 years) was lower than that of the other groups. The significantly higher ATQ scores of the MDD, GAD, and GSP groups, compared with the control group, underscore the common cognitive psychopathology characterizing these three disorders. This finding confirms that similar cognitive therapy approaches should be effective for these patients. This study is the first to compare GAD, MDD, and GSP from a cognitive perspective.
a Model Study of Small-Scale World Map Generalization
NASA Astrophysics Data System (ADS)
Cheng, Y.; Yin, Y.; Li, C. M.; Wu, W.; Guo, P. P.; Ma, X. L.; Hu, F. M.
2018-04-01
With globalization and rapid development, every field is taking an increasing interest in physical geography and human economics, and there is a surging worldwide demand for small-scale world maps in large formats. Further study of automated mapping technology, especially the production of small-scale world maps at a global extent, is a key problem the cartographic field needs to solve. In light of this, this paper adopts an improved model (with map and data separated) for map generalization, which separates geographic data from cartographic data and mainly comprises a cross-platform symbol library and an automatic map-making knowledge engine. In the cross-platform symbol library, the cartographic symbols and the physical symbols in the geographic information are configured at all scale levels. The automatic map-making knowledge engine consists of 97 types, 1,086 subtypes, 21,845 basic algorithms, and over 2,500 relevant functional modules. In order to evaluate the accuracy and visual effect of our model for topographic and thematic maps, we take world map generalization at small scale as an example. After the generalization process, combining and simplifying the scattered islands makes the map more legible at the 1:2.1 billion scale, and the map features become more complete and accurate. The model not only enhances map generalization at various scales significantly, but also achieves integration among map-making at various scales, suggesting that it provides a reference for cartographic generalization across scales.
NASA Astrophysics Data System (ADS)
Mueller, David S.
2013-04-01
Selection of the appropriate extrapolation methods for computing the discharge in the unmeasured top and bottom parts of a moving-boat acoustic Doppler current profiler (ADCP) streamflow measurement is critical to the total discharge computation. The software tool, extrap, combines normalized velocity profiles from the entire cross section and multiple transects to determine a mean profile for the measurement. The use of an exponent derived from normalized data from the entire cross section is shown to be valid for applying the power velocity distribution law to the computation of the unmeasured discharge in a cross section. Selected statistics are combined with empirically derived criteria to automatically select the appropriate extrapolation methods. A graphical user interface (GUI) provides the user with tools to visually evaluate the automatically selected extrapolation methods and change them manually, as necessary. The sensitivity of the total discharge to the available extrapolation methods is presented in the GUI. Use of extrap by field hydrographers has demonstrated that it is a more accurate and efficient method of determining the appropriate extrapolation methods than the tools currently (2012) provided in the ADCP manufacturers' software.
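The power velocity distribution law underlying this extrapolation can be sketched as a least-squares fit in log space followed by analytic integration over the unmeasured layers. This is a minimal illustration of the idea, not the extrap tool itself; extrap's statistics and selection criteria are more involved, and the synthetic profile below assumes the classic 1/6 exponent:

import numpy as np

def fit_power_law(z, v):
    # Fit v = a * z**b in log space; z is normalized height above
    # the bed, v the normalized velocity.
    b, log_a = np.polyfit(np.log(z), np.log(v), 1)
    return np.exp(log_a), b

def layer_discharge(a, b, z_lo, z_hi, width):
    # Integral of a * z**b over [z_lo, z_hi], times channel width
    return width * a * (z_hi ** (b + 1) - z_lo ** (b + 1)) / (b + 1)

z = np.linspace(0.2, 0.9, 15)        # measured portion (synthetic)
v = 1.2 * z ** (1 / 6)
a, b = fit_power_law(z, v)
q_top = layer_discharge(a, b, 0.9, 1.0, width=50.0)  # unmeasured top
q_bot = layer_discharge(a, b, 0.0, 0.1, width=50.0)  # unmeasured bottom
print(b, q_top, q_bot)               # b recovers ~1/6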
Local pulmonary structure classification for computer-aided nodule detection
NASA Astrophysics Data System (ADS)
Bahlmann, Claus; Li, Xianlin; Okada, Kazunori
2006-03-01
We propose a new method of classifying the local structure types, such as nodules, vessels, and junctions, in thoracic CT scans. This classification is important in the context of computer-aided detection (CAD) of lung nodules. The proposed method can be used as a post-processing component of any lung CAD system. In such a scenario, the classification results provide an effective means of removing false positives caused by vessels and junctions, thus improving overall performance. As its main advantage, the proposed solution transforms the complex problem of classifying various 3D topological structures into a much simpler 2D data clustering problem, to which more generic and flexible solutions are available in the literature, and which is better suited for visualization. Given a nodule candidate, our solution first robustly fits an anisotropic Gaussian to the data. The resulting Gaussian center and spread parameters are used to affine-normalize the data domain so as to warp the fitted anisotropic ellipsoid into a fixed-size isotropic sphere. We propose an automatic method to extract a 3D spherical manifold containing the appropriate bounding surface of the target structure. Scale selection is performed by a data-driven entropy minimization approach. The manifold is analyzed for high-intensity clusters, corresponding to protruding structures. Techniques involve EM clustering with automatic mode number estimation, directional statistics, and hierarchical clustering with a modified Bhattacharyya distance. The estimated number of high-intensity clusters explicitly determines the type of pulmonary structure: nodule (0), attached nodule (1), vessel (2), or junction (3 or more). We show accurate classification results for selected examples in thoracic CT scans. This local procedure is more flexible and efficient than the current state of the art and will help to improve the accuracy of general lung CAD systems.
Seman, Ali; Sapawi, Azizian Mohd; Salleh, Mohd Zaki
2015-06-01
Y-chromosome short tandem repeats (Y-STRs) are genetic markers with practical applications in human identification. However, where mass identification is required (e.g., in the aftermath of disasters with significant fatalities), the efficiency of the process could be improved with new statistical approaches. Clustering applications are relatively new tools for large-scale comparative genotyping, and the k-Approximate Modal Haplotype (k-AMH), an efficient algorithm for clustering large-scale Y-STR data, represents a promising method for developing these tools. In this study we improved the k-AMH and produced three new algorithms: the Nk-AMH I (including a new initial cluster center selection), the Nk-AMH II (including a new dominant weighting value), and the Nk-AMH III (combining I and II). The Nk-AMH III was the superior algorithm, with mean clustering accuracy that increased in four out of six datasets and remained at 100% in the other two. Additionally, the Nk-AMH III achieved a 2% higher overall mean clustering accuracy score than the k-AMH, as well as optimal accuracy for all datasets (0.84-1.00). With inclusion of the two new methods, the Nk-AMH III produced an optimal solution for clustering Y-STR data; thus, the algorithm has potential for further development towards fully automatic clustering of any large-scale genotypic data.
Nie, Binbin; Liang, Shengxiang; Jiang, Xiaofeng; Duan, Shaofeng; Huang, Qi; Zhang, Tianhao; Li, Panlong; Liu, Hua; Shan, Baoci
2018-06-07
Positron emission tomography (PET) imaging of functional metabolism has been widely used to investigate functional recovery and to evaluate therapeutic efficacy after stroke. The voxel intensity of a PET image is the most important indicator of cellular activity, but it is affected by other factors such as the basal metabolic ratio of each subject. In order to locate dysfunctional regions accurately, intensity normalization by a scale factor is a prerequisite in the data analysis, and the global mean value is most widely used for this purpose. However, this is unsuitable for stroke studies. Alternatively, a scale factor calculated from a specified reference region, comprising neither hyper- nor hypo-metabolic voxels, is also used. But there is no such recognized reference region for stroke studies. Therefore, we propose a fully data-driven automatic method for unbiased scale factor generation. The factor is generated iteratively until the residual deviation between two successive scale factors falls below 5%. Both simulated and real stroke data were used for evaluation, and the results suggest that our unbiased scale factor has better sensitivity and accuracy for stroke studies.
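One plausible reading of the iterative procedure can be sketched as follows; the z-score threshold, convergence rule, and reference mean below are illustrative assumptions, not the paper's exact algorithm:

import numpy as np

def unbiased_scale_factor(img, ref_mean, z_thresh=2.0, tol=0.05, max_iter=50):
    # Iteratively re-estimate a normalization scale factor from voxels
    # that are neither hyper- nor hypo-metabolic under the current
    # normalization, until successive factors agree within tol.
    vox = img[img > 0].astype(float)
    scale = vox.mean() / ref_mean            # start from the global mean
    for _ in range(max_iter):
        normed = vox / scale
        z = (normed - ref_mean) / normed.std()
        keep = np.abs(z) < z_thresh          # drop extreme voxels
        new_scale = vox[keep].mean() / ref_mean
        if abs(new_scale - scale) / scale < tol:
            return new_scale
        scale = new_scale
    return scale

img = np.random.default_rng(0).gamma(2.0, 50.0, size=(64, 64, 32))
print(unbiased_scale_factor(img, ref_mean=100.0))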
Automatic seed selection for segmentation of liver cirrhosis in laparoscopic sequences
NASA Astrophysics Data System (ADS)
Sinha, Rahul; Marcinczak, Jan Marek; Grigat, Rolf-Rainer
2014-03-01
For computer aided diagnosis based on laparoscopic sequences, image segmentation is one of the basic steps which define the success of all further processing. However, many image segmentation algorithms require prior knowledge which is given by interaction with the clinician. We propose an automatic seed selection algorithm for segmentation of liver cirrhosis in laparoscopic sequences which assigns each pixel a probability of being cirrhotic liver tissue or background tissue. Our approach is based on a trained classifier using SIFT and RGB features with PCA. Due to the unique illumination conditions in laparoscopic sequences of the liver, a very low dimensional feature space can be used for classification via logistic regression. The methodology is evaluated on 718 cirrhotic liver and background patches that are taken from laparoscopic sequences of 7 patients. Using a linear classifier we achieve a precision of 91% in a leave-one-patient-out cross-validation. Furthermore, we demonstrate that with logistic probability estimates, seeds with high certainty of being cirrhotic liver tissue can be obtained. For example, our precision of liver seeds increases to 98.5% if only seeds with more than 95% probability of being liver are used. Finally, these automatically selected seeds can be used as priors in Graph Cuts which is demonstrated in this paper.
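The seed-selection step reduces to thresholding logistic probability estimates. A minimal sklearn sketch under stated assumptions: the features are random stand-ins, and the 131-dimensional size assumes 128 SIFT dimensions plus 3 RGB channels, which the abstract does not spell out:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X_train = rng.normal(size=(718, 131))   # stand-in SIFT+RGB patch features
y_train = rng.integers(0, 2, 718)       # 1 = cirrhotic liver, 0 = background

clf = make_pipeline(PCA(n_components=10), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)

X_new = rng.normal(size=(100, 131))
proba = clf.predict_proba(X_new)[:, 1]  # P(liver | features)
seeds = np.where(proba > 0.95)[0]       # keep only high-certainty seeds
print(seeds)                            # may be empty on random data

The selected indices would then seed a Graph Cuts segmentation, as the abstract describes.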
On the analysis of local and global features for hyperemia grading
NASA Astrophysics Data System (ADS)
Sánchez, L.; Barreira, N.; Sánchez, N.; Mosquera, A.; Pena-Verdeal, H.; Yebra-Pimentel, E.
2017-03-01
In optometry, hyperemia is the accumulation of blood flow in the conjunctival tissue. Dry eye syndrome and allergic conjunctivitis are two of its main causes. Its main symptom is the presence of a red hue in the eye, which optometrists evaluate subjectively against a grading scale. In this paper, we propose an automatic approach to the problem of hyperemia grading in the bulbar conjunctiva. We compute several image features on images of the patients' eyes, analyse the relations among them by using feature selection techniques, and map the feature vector of each image to a value in the appropriate grading range by means of machine learning techniques. We analyse different areas of the conjunctiva to evaluate their importance for the diagnosis. Our results show that it is possible to mimic the experts' behaviour through the proposed approach.
A device for automatically measuring and supervising the critical care patient's urine output.
Otero, Abraham; Palacios, Francisco; Akinfiev, Teodor; Fernández, Roemi
2010-01-01
Critical care units are equipped with commercial monitoring devices capable of sensing patients' physiological parameters and supervising the achievement of the established therapeutic goals. This avoids human errors in this task and considerably decreases the workload of the healthcare staff. However, at present there still is a very relevant physiological parameter that is measured and supervised manually by the critical care units' healthcare staff: urine output. This paper presents a patent-pending device capable of automatically recording and supervising the urine output of a critical care patient. A high precision scale is used to measure the weight of a commercial urine meter. On the scale's pan there is a support frame made up of Bosch profiles that isolates the scale from force transmission from the patient's bed, and guarantees that the urine flows properly through the urine meter input tube. The scale's readings are sent to a PC via Bluetooth where an application supervises the achievement of the therapeutic goals. The device is currently undergoing tests at a research unit associated with the University Hospital of Getafe in Spain.
Lee, Unseok; Chang, Sungyul; Putra, Gian Anantrio; Kim, Hyoungseok; Kim, Dong Hwan
2018-01-01
A high-throughput plant phenotyping system automatically observes and grows many plant samples. Many plant sample images are acquired by the system to determine the characteristics of the plants (populations). Stable image acquisition and processing is very important to accurately determine the characteristics. However, hardware for acquiring plant images rapidly and stably, while minimizing plant stress, is lacking. Moreover, most software cannot adequately handle large-scale plant imaging. To address these problems, we developed a new, automated, high-throughput plant phenotyping system using simple and robust hardware, and an automated plant-imaging-analysis pipeline consisting of machine-learning-based plant segmentation. Our hardware acquires images reliably and quickly and minimizes plant stress. Furthermore, the images are processed automatically. In particular, large-scale plant-image datasets can be segmented precisely using a classifier developed using a superpixel-based machine-learning algorithm (Random Forest), and variations in plant parameters (such as area) over time can be assessed using the segmented images. We performed comparative evaluations to identify an appropriate learning algorithm for our proposed system, and tested three robust learning algorithms. We developed not only an automatic analysis pipeline but also a convenient means of plant-growth analysis that provides a learning data interface and visualization of plant growth trends. Thus, our system allows end-users such as plant biologists to analyze plant growth via large-scale plant image data easily.
A quality score for coronary artery tree extraction results
NASA Astrophysics Data System (ADS)
Cao, Qing; Broersen, Alexander; Kitslaar, Pieter H.; Lelieveldt, Boudewijn P. F.; Dijkstra, Jouke
2018-02-01
Coronary artery trees (CATs) are often extracted to aid the fully automatic analysis of coronary artery disease on coronary computed tomography angiography (CCTA) images. Automatically extracted CATs often miss some arteries or include wrong extractions, which require manual correction before successive steps can be performed. When analyzing a large number of datasets, a manual quality check of the extraction results is time-consuming. This paper presents a method to automatically calculate quality scores for extracted CATs in terms of the clinical significance of the extracted arteries and the completeness of the extracted CAT. Both right dominant (RD) and left dominant (LD) anatomical statistical models are generated and exploited in developing the quality score. To automatically determine which model should be used, a dominance type detection method is also designed. Experiments are performed on the automatically extracted and manually refined CATs from 42 datasets to evaluate the proposed quality score. In 39 (92.9%) cases, the proposed method assigns higher scores to the manually refined CATs than to the automatically extracted ones. On a 100-point scale, the average scores for automatically extracted and manually refined CATs are 82.0 (±15.8) and 88.9 (±5.4), respectively. The proposed quality score will assist the automatic processing of CAT extractions for large cohorts containing both RD and LD cases. To the best of our knowledge, this is the first time that a general quality score for an extracted CAT has been presented.
NASA Astrophysics Data System (ADS)
Song, Huan; Hu, Yaogai; Jiang, Chunhua; Zhou, Chen; Zhao, Zhengyu; Zou, Xianjian
2016-12-01
Scaling an oblique ionogram plays an important role in obtaining the ionospheric structure at the midpoint of the oblique sounding path. This paper proposes an automatic scaling method to extract the trace and parameters of an oblique ionogram based on a hybrid genetic algorithm (HGA). The 10 extracted parameters come from the F2 and Es layers, such as the maximum observation frequency, critical frequency, and virtual height. The method adopts a quasi-parabolic (QP) model to describe the F2 layer's electron density profile, which is used to synthesize the trace. It utilizes the secant theorem, Martyn's equivalent path theorem, image processing technology, and the echoes' characteristics to determine the best-fit values of seven parameters and the initial values of the remaining three QP-model parameters, whose search spaces are the input data required by the HGA. The HGA then searches these spaces for the three parameters' best-fit values based on the fitness between the synthesized trace and the real trace. In order to verify the performance of the method, 240 oblique ionograms were scaled and the results compared with manual scaling results and with the inversion results of the corresponding vertical ionograms. The comparison shows that the scaling results are accurate, or at least adequate, 60-90% of the time.
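For reference, the standard quasi-parabolic layer (in its usual Croft-Hoogasian form) can be written in a few lines. The abstract does not spell out its exact parameterization, so treat this as the conventional form rather than the paper's:

import numpy as np

def qp_electron_density(r, nm, rm, ym):
    # Quasi-parabolic layer: Ne(r) = Nm * [1 - ((r - rm)/ym)**2 * (rb/r)**2]
    # inside the layer, with rb = rm - ym the base radius; zero elsewhere.
    # nm: peak density, rm: geocentric radius of the peak, ym: semi-thickness.
    rb = rm - ym
    ne = nm * (1.0 - ((r - rm) / ym) ** 2 * (rb / r) ** 2)
    return np.where((r >= rb) & (ne > 0), ne, 0.0)

r = np.linspace(6440e3, 6640e3, 5)   # geocentric radii in meters
print(qp_electron_density(r, nm=1e12, rm=6620e3, ym=100e3))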
The Selective Task Trainer: The Expert Solution.
ERIC Educational Resources Information Center
Gerson, Charles W.
1995-01-01
Examines simulator classification and design in light of new technology, current research, and a changing focus for using flight simulators in the military, and proposes a selective task trainer that addresses the expert's performance needs. Highlights include motor skill physiology; retention; automaticity skills; the novice to expert…
2011-06-01
[Extraction residue from a report on feature selection; recoverable content: the work concerns implementing and evaluating many feature selection algorithms, citing A. Mucciardi and E. Gose, "A comparison of seven techniques for choosing subsets of pattern recognition properties."]
NASA Astrophysics Data System (ADS)
Chen, Dongyue; Lin, Jianhui; Li, Yanping
2018-06-01
Complementary ensemble empirical mode decomposition (CEEMD) was developed to address the mode-mixing problem of the empirical mode decomposition (EMD) method. Compared to ensemble empirical mode decomposition (EEMD), the CEEMD method reduces residue noise in the signal reconstruction. However, both CEEMD and EEMD need a sufficiently large ensemble number to reduce the residue noise, which incurs a high computational cost. Moreover, the selection of intrinsic mode functions (IMFs) for further analysis usually depends on experience. A modified CEEMD method and an IMF evaluation index are proposed with the aim of reducing the computational cost and selecting IMFs automatically. A simulated signal and in-service high-speed train gearbox vibration signals are employed to validate the proposed method. The results demonstrate that the modified CEEMD can decompose the signal efficiently at a lower computational cost, and that the IMF evaluation index can select the meaningful IMFs automatically.
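The paper's specific IMF evaluation index is not given in the abstract; a commonly used stand-in is the normalized correlation between each IMF and the raw signal, sketched here with an illustrative threshold:

import numpy as np

def select_imfs(signal, imfs, thresh=0.3):
    # Keep IMFs whose absolute correlation with the raw signal
    # exceeds the threshold; residue-like IMFs are dropped.
    idx = []
    for k, imf in enumerate(imfs):
        c = abs(np.corrcoef(signal, imf)[0, 1])
        if c > thresh:
            idx.append(k)
    return idx

t = np.linspace(0, 1, 1024)
raw = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)
imfs = [0.5 * np.sin(2 * np.pi * 50 * t),                      # meaningful
        np.sin(2 * np.pi * 5 * t),                             # meaningful
        1e-3 * np.random.default_rng(0).normal(size=t.size)]   # residue-like
print(select_imfs(raw, imfs))   # -> [0, 1]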
An intelligent identification algorithm for the monoclonal picking instrument
NASA Astrophysics Data System (ADS)
Yan, Hua; Zhang, Rongfu; Yuan, Xujun; Wang, Qun
2017-11-01
Traditional colony selection is mainly performed manually, which is inefficient and highly subjective. Therefore, it is important to develop an automatic monoclonal-picking instrument, and the critical stage of automatic monoclonal picking and intelligent optimal selection is the intelligent identification algorithm. This paper proposes an auto-screening algorithm based on a Support Vector Machine (SVM), a supervised learning method, which uses colony morphological characteristics to classify colonies accurately. From the basic morphological features of a colony, the system computes a series of morphological parameters step by step. A maximal-margin classifier is established and, combined with analysis of the colony's growth trend, used to select monoclonal colonies. Experimental results showed that the auto-screening algorithm can separate regular colonies from the rest, meeting the requirements on the various parameters.
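A minimal sketch of such an SVM-based screen, with random stand-in morphological features (the paper's exact feature set and kernel are not specified in the abstract):

import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
# Stand-in per-colony features, e.g., area, perimeter, circularity,
# mean intensity (illustrative choices, not the paper's list).
X = rng.normal(size=(200, 4))
y = rng.integers(0, 2, 200)          # 1 = regular colony, 0 = reject

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)                        # maximal-margin classifier
picks = clf.predict(X)               # colonies flagged for picking
print(picks[:10])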
GAFFE: a gaze-attentive fixation finding engine.
Rajashekar, U; van der Linde, I; Bovik, A C; Cormack, L K
2008-04-01
The ability to automatically detect visually interesting regions in images has many practical applications, especially in the design of active machine vision and automatic visual surveillance systems. Analysis of the statistics of image features at observers' gaze can provide insights into the mechanisms of fixation selection in humans. Using a foveated analysis framework, we studied the statistics of four low-level local image features: luminance, contrast, and bandpass outputs of both luminance and contrast, and discovered that image patches around human fixations had, on average, higher values of each of these features than image patches selected at random. Contrast-bandpass showed the greatest difference between human and random fixations, followed by luminance-bandpass, RMS contrast, and luminance. Using these measurements, we present a new algorithm that selects image regions as likely candidates for fixation. These regions are shown to correlate well with fixations recorded from human observers.
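Two of the four local features are simple to compute per patch; the bandpass variants would apply a bandpass filter first. A rough numpy/scipy sketch, with an illustrative difference-of-Gaussians standing in for the paper's bandpass filter and an arbitrary filter scale:

import numpy as np
from scipy.ndimage import gaussian_filter

def patch_features(patch, sigma=2.0):
    # Mean luminance, RMS contrast, and a crude bandpass response
    lum = patch.mean()
    rms = patch.std() / (lum + 1e-12)             # RMS contrast
    dog = patch - gaussian_filter(patch, sigma)   # difference-of-Gaussians
    return lum, rms, np.abs(dog).mean()

patch = np.random.default_rng(0).random((32, 32))
print(patch_features(patch))

Ranking candidate patches by such features, and favoring those with high bandpass-contrast values, follows the fixation statistics the abstract reports.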
A general graphical user interface for automatic reliability modeling
NASA Technical Reports Server (NTRS)
Liceaga, Carlos A.; Siewiorek, Daniel P.
1991-01-01
Reported here is a general Graphical User Interface (GUI) for automatic reliability modeling of Processor Memory Switch (PMS) structures using a Markov model. This GUI is based on a hierarchy of windows. One window has graphical editing capabilities for specifying the system's communication structure, hierarchy, reconfiguration capabilities, and requirements. Other windows have field texts, popup menus, and buttons for specifying parameters and selecting actions. An example application of the GUI is given.
Study of the cerrado vegetation in the Federal District area from orbital data. M.S. Thesis
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Aoki, H.; Dossantos, J. R.
1980-01-01
The physiognomic units of cerrado in the area of the Distrito Federal (DF) were studied through visual and automatic analysis of products provided by the Multispectral Scanning System (MSS) of LANDSAT. The visual analysis of the black-and-white multispectral images, at the 1:250,000 scale, was based on texture and tonal patterns. The automatic analysis of the computer compatible tapes (CCT) was made by means of the IMAGE-100 system. The following conclusions were obtained: (1) the delimitation of cerrado vegetation forms can be made by both visual and automatic analysis; (2) in the visual analysis, the principal parameter used to discriminate the cerrado forms was the tonal pattern, independently of the season, and channel 5 gave the best information; (3) in the automatic analysis, the data of the four MSS channels can be used to discriminate the cerrado forms; and (4) in the automatic analysis, combinations of the four channels gave more information for separating cerrado units when soil types were considered.
Recent advances in automatic alignment system for the National Ignition Facility
NASA Astrophysics Data System (ADS)
Wilhelmsen, Karl; Awwal, Abdul A. S.; Kalantar, Dan; Leach, Richard; Lowe-Webb, Roger; McGuigan, David; Miller Kamm, Vicki
2011-03-01
The automatic alignment system for the National Ignition Facility (NIF) is a large-scale parallel system that directs all 192 laser beams along the 300-m optical path to a 50-micron focus at the target chamber in less than 50 minutes. The system automatically commands 9,000 stepping motors to adjust mirrors and other optics based upon images acquired from high-resolution digital cameras viewing the beams at various locations. Forty-five control loops per beamline request image processing services running on a Linux cluster to analyze these images of the beams and references, and automatically steer the beams toward the target. This paper discusses upgrades to the NIF automatic alignment system to handle new alignment needs and evolving requirements related to the various types of experiments performed. As NIF becomes a continuously operated system and more experiments are performed, performance monitoring is increasingly important for maintenance and commissioning work. Data collected during operations are analyzed to tune the laser and to target maintenance work. Handling evolving alignment and maintenance needs is expected over the planned 30-year operational life of NIF.
PDE based scheme for multi-modal medical image watermarking.
Aherrahrou, N; Tairi, H
2015-11-25
This work deals with copyright protection of digital images, an issue that concerns the protection of intellectual property rights. It is an important issue, with a large number of medical images exchanged on the Internet every day, so ensuring the integrity and authenticity of received images is a challenging task. Digital watermarking techniques have been proposed as a valid solution to this problem. It is worth mentioning that Region Of Interest (ROI)/Region Of Non-Interest (RONI) selection is a significant limitation from which most ROI/RONI-based watermarking schemes suffer, and which in turn limits their applicability. Generally, the ROI/RONI is defined by a radiologist or a computer-aided selection tool, which is not efficient for an institute or health care system where a large number of images must be processed. Therefore, developing an automatic ROI/RONI selection is a challenging task. The major aim of this work is to develop an automatic selection algorithm for the embedding region based on the Partial Differential Equation (PDE) method, thus avoiding ROI/RONI selection problems including (1) computational overhead, (2) time consumption, and (3) modality-dependent selection. The algorithm is evaluated in terms of imperceptibility, robustness, tamper localization, and recovery using MRI, ultrasound, CT, and X-ray gray-scale medical images. From experiments conducted on a database of 100 medical images of four modalities, it can be inferred that our method achieves high imperceptibility while showing good robustness against attacks. Furthermore, the experimental results confirm the effectiveness of the proposed algorithm in detecting and recovering various types of tampering. The highest PSNR value reached over the 100 images is 94.746 dB, while the lowest is 60.1272 dB, which demonstrates the high imperceptibility of the proposed method. Moreover, the Normalized Correlation (NC) between the original watermark and the corresponding extracted watermark, computed for all 100 images, is greater than or equal to 0.998, indicating that the extracted watermark is very similar to the original for all modalities. The key features of our proposed method are that it (1) increases the robustness of the watermark against attacks; (2) provides more transparency for the embedded watermark; (3) provides more authenticity and integrity protection for the content of medical images; and (4) requires minimal ROI/RONI selection complexity.
Fan, Jianping; Gao, Yuli; Luo, Hangzai
2008-03-01
In this paper, we have developed a new scheme for automatically achieving multilevel annotation of large-scale images. To achieve a more complete representation of the various visual properties of the images, both global and local visual features are extracted for image content representation. To tackle the problem of huge intra-concept visual diversity, multiple types of kernels are integrated to characterize the diverse visual similarity relationships between images more precisely, and a multiple kernel learning algorithm is developed for SVM image classifier training. To address the problem of huge inter-concept visual similarity, a novel multitask learning algorithm is developed to learn correlated classifiers for the sibling image concepts under the same parent concept and enhance their discrimination and adaptation power significantly. To tackle the problem of huge intra-concept visual diversity for the image concepts at the higher levels of the concept ontology, a novel hierarchical boosting algorithm is developed to learn their ensemble classifiers hierarchically. In order to assist users in selecting more effective hypotheses for image classifier training, we have developed a novel hyperbolic framework for large-scale image visualization and interactive hypothesis assessment. Our experiments on large-scale image collections have obtained very positive results.
Steigerwald, Sarah N.; Park, Jason; Hardy, Krista M.; Gillman, Lawrence; Vergis, Ashley S.
2015-01-01
Background: Considerable resources have been invested in both low- and high-fidelity simulators in surgical training. The purpose of this study was to investigate whether the Fundamentals of Laparoscopic Surgery (FLS, low-fidelity box trainer) and LapVR (high-fidelity virtual reality) training systems correlate with operative performance on the Global Operative Assessment of Laparoscopic Skills (GOALS) global rating scale using a porcine cholecystectomy model in a novice surgical group with minimal laparoscopic experience. Methods: Fourteen postgraduate year 1 surgical residents with minimal laparoscopic experience performed tasks from the FLS program and the LapVR simulator as well as a live porcine laparoscopic cholecystectomy. Performance was evaluated using standardized FLS metrics, automatic computer evaluations, and a validated global rating scale. Results: Overall, FLS score did not show an association with GOALS global rating scale score on the porcine cholecystectomy. None of the five LapVR task scores were significantly associated with GOALS score on the porcine cholecystectomy. Conclusions: Neither the low-fidelity box trainer nor the high-fidelity virtual simulator demonstrated significant correlation with GOALS operative scores. These findings offer caution against the use of these modalities for brief assessments of novice surgical trainees, especially for predictive or selection purposes. PMID:26641071
Reevaluation of pollen quantitation by an automatic pollen counter.
Muradil, Mutarifu; Okamoto, Yoshitaka; Yonekura, Syuji; Chazono, Hideaki; Hisamitsu, Minako; Horiguchi, Shigetoshi; Hanazawa, Toyoyuki; Takahashi, Yukie; Yokota, Kunihiko; Okumura, Satoshi
2010-01-01
Accurate and detailed pollen monitoring is useful for selection of medication and for allergen avoidance in patients with allergic rhinitis. Burkard and Durham pollen samplers are commonly used, but are labor and time intensive. In contrast, automatic pollen counters allow simple real-time pollen counting; however, these instruments have difficulty in distinguishing pollen from small nonpollen airborne particles. Misidentification and underestimation rates for an automatic pollen counter were examined to improve the accuracy of the pollen count. The characteristics of the automatic pollen counter were determined in a chamber study with exposure to cedar pollens or soil grains. The cedar pollen counts were monitored in 2006 and 2007, and compared with those from a Durham sampler. The pollen counts from the automatic counter showed a good correlation (r > 0.7) with those from the Durham sampler when pollen dispersal was high, but a poor correlation (r < 0.5) when pollen dispersal was low. The new correction method, which took into account the misidentification and underestimation, improved this correlation to r > 0.7 during the pollen season. The accuracy of automatic pollen counting can be improved using a correction to include rates of underestimation and misidentification in a particular geographical area.
OpenSim: A Flexible Distributed Neural Network Simulator with Automatic Interactive Graphics.
Jarosch, Andreas; Leber, Jean Francois
1997-06-01
An object-oriented simulator called OpenSim is presented that achieves a high degree of flexibility by relying on a small set of building blocks. The state variables and algorithms put in this framework can easily be accessed through a command shell. This allows one to distribute a large-scale simulation over several workstations and to generate the interactive graphics automatically. OpenSim opens new possibilities for cooperation among Neural Network researchers. Copyright 1997 Elsevier Science Ltd.
Modeling and Performance Optimization of Large-Scale Data-Communication Networks.
1981-06-01
[Extraction residue from the report's body and reference list; recoverable content: a Min-Hop flow-assignment strategy that modifies the flow assignment to satisfy end-to-end delay constraints, with references including Y. Ho, M. Kastner, and E. Wong, "Teams, market signalling, and information theory," IEEE Trans. Automat. Contr.]
Automatic item generation implemented for measuring artistic judgment aptitude.
Bezruczko, Nikolaus
2014-01-01
Automatic item generation (AIG) is a broad class of methods being developed to address psychometric issues arising from internet- and computer-based testing. In general, these issues concern the efficiency, validity, and diagnostic usefulness of large-scale mental testing. The rapid rise to prominence of AIG methods, and their implicit perspective on mental testing, is bringing painful scrutiny to many sacred psychometric assumptions. This report reviews basic AIG ideas, then presents conceptual foundations, image model development, and an operational application to artistic judgment aptitude testing.
Bellman Ford algorithm - in Routing Information Protocol (RIP)
NASA Astrophysics Data System (ADS)
Krianto Sulaiman, Oris; Mahmud Siregar, Amir; Nasution, Khairuddin; Haramaini, Tasliyah
2018-04-01
A large-scale network needs routing that can handle a large number of users, and one solution for coping with such networks is to use a routing protocol. There are two types of routing: static and dynamic. Static routes are entered manually by the network administrator, while dynamic routes are formed automatically from the existing network. Dynamic routing is efficient for extensive networks precisely because routes are formed automatically. The Routing Information Protocol (RIP) is a dynamic routing protocol that uses the Bellman-Ford algorithm, which searches for the best path through the network by leveraging the cost of each link; thus, the Bellman-Ford algorithm underlying RIP can optimize existing networks.
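The distance-vector computation RIP performs is the classic Bellman-Ford relaxation (RIP uses hop counts as link costs, capped at 15, with 16 meaning unreachable). A minimal sketch:

def bellman_ford(edges, n, src):
    # Relax every edge repeatedly; n nodes, edges = [(u, v, cost), ...].
    INF = float("inf")
    dist = [INF] * n
    dist[src] = 0
    for _ in range(n - 1):              # at most n-1 relaxation rounds
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist

# Example: 4 routers, link costs as hop counts
print(bellman_ford([(0, 1, 1), (1, 2, 1), (0, 2, 3), (2, 3, 1)], 4, 0))
# -> [0, 1, 2, 3]

In RIP itself, each router runs this relaxation distributively, exchanging its distance vector with neighbors rather than seeing the whole edge list.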
Eller, Achim; Wuest, Wolfgang; Scharf, Michael; Brand, Michael; Achenbach, Stephan; Uder, Michael; Lell, Michael M
2013-12-01
To evaluate an automated attenuation-based kV-selection in computed tomography of the chest with respect to radiation dose and image quality, compared to a standard 120 kV protocol. 104 patients were examined using a 128-slice scanner. Fifty examinations (58 ± 15 years, study group) were performed using automated adaptation of the tube potential (100-140 kV) based on the attenuation profile of the scout scan, and 54 examinations (62 ± 14 years, control group) with a fixed 120 kV. The estimated CT dose index (CTDI) of the software-proposed setting was compared with that of a 120 kV protocol. After the scan, the CTDI volume (CTDIvol) and dose-length product (DLP) were recorded. Image quality was assessed by region-of-interest (ROI) measurements, and subjective image quality by two observers on a 4-point scale (3 = excellent, 0 = not diagnostic). The algorithm selected 100 kV in 78% and 120 kV in 22% of cases. Overall CTDIvol reduction was 26.6% (34% in the 100 kV subgroup) and overall DLP reduction was 22.8% (32.1% in the 100 kV subgroup) (all p < 0.001). Subjective image quality was excellent in both groups. The attenuation-based kV-selection algorithm enables a relevant dose reduction (~27%) in chest CT while keeping image quality parameters at high levels. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Automatic peak selection by a Benjamini-Hochberg-based algorithm.
Abbas, Ahmed; Kong, Xin-Bing; Liu, Zhi; Jing, Bing-Yi; Gao, Xin
2013-01-01
A common issue in bioinformatics is that computational methods often generate a large number of predictions sorted according to certain confidence scores. A key problem is then determining how many predictions must be selected to include most of the true predictions while maintaining reasonably high precision. In nuclear magnetic resonance (NMR)-based protein structure determination, for instance, computational peak picking methods are becoming more and more common, although expert-knowledge remains the method of choice to determine how many peaks among thousands of candidate peaks should be taken into consideration to capture the true peaks. Here, we propose a Benjamini-Hochberg (B-H)-based approach that automatically selects the number of peaks. We formulate the peak selection problem as a multiple testing problem. Given a candidate peak list sorted by either volumes or intensities, we first convert the peaks into p-values and then apply the B-H-based algorithm to automatically select the number of peaks. The proposed approach is tested on the state-of-the-art peak picking methods, including WaVPeak [1] and PICKY [2]. Compared with the traditional fixed number-based approach, our approach returns significantly more true peaks. For instance, by combining WaVPeak or PICKY with the proposed method, the missing peak rates are on average reduced by 20% and 26%, respectively, in a benchmark set of 32 spectra extracted from eight proteins. The consensus of the B-H-selected peaks from both WaVPeak and PICKY achieves 88% recall and 83% precision, which significantly outperforms each individual method and the consensus method without using the B-H algorithm. The proposed method can be used as a standard procedure for any peak picking method and straightforwardly applied to some other prediction selection problems in bioinformatics. The source code, documentation and example data of the proposed method is available at http://sfb.kaust.edu.sa/pages/software.aspx.
Automatic Peak Selection by a Benjamini-Hochberg-Based Algorithm
Abbas, Ahmed; Kong, Xin-Bing; Liu, Zhi; Jing, Bing-Yi; Gao, Xin
2013-01-01
A common issue in bioinformatics is that computational methods often generate a large number of predictions sorted according to certain confidence scores. A key problem is then determining how many predictions must be selected to include most of the true predictions while maintaining reasonably high precision. In nuclear magnetic resonance (NMR)-based protein structure determination, for instance, computational peak picking methods are becoming more and more common, although expert-knowledge remains the method of choice to determine how many peaks among thousands of candidate peaks should be taken into consideration to capture the true peaks. Here, we propose a Benjamini-Hochberg (B-H)-based approach that automatically selects the number of peaks. We formulate the peak selection problem as a multiple testing problem. Given a candidate peak list sorted by either volumes or intensities, we first convert the peaks into p-values and then apply the B-H-based algorithm to automatically select the number of peaks. The proposed approach is tested on the state-of-the-art peak picking methods, including WaVPeak [1] and PICKY [2]. Compared with the traditional fixed number-based approach, our approach returns significantly more true peaks. For instance, by combining WaVPeak or PICKY with the proposed method, the missing peak rates are on average reduced by 20% and 26%, respectively, in a benchmark set of 32 spectra extracted from eight proteins. The consensus of the B-H-selected peaks from both WaVPeak and PICKY achieves 88% recall and 83% precision, which significantly outperforms each individual method and the consensus method without using the B-H algorithm. The proposed method can be used as a standard procedure for any peak picking method and straightforwardly applied to some other prediction selection problems in bioinformatics. The source code, documentation and example data of the proposed method is available at http://sfb.kaust.edu.sa/pages/software.aspx. PMID:23308147
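The B-H step-up rule itself is a few lines. A minimal sketch, assuming the peak-to-p-value conversion has already been done as the abstract describes:

import numpy as np

def bh_select(pvals, alpha=0.05):
    # Benjamini-Hochberg step-up rule: keep the k best-ranked candidates,
    # where k is the largest i with p_(i) <= alpha * i / m.
    p = np.sort(np.asarray(pvals))
    m = len(p)
    below = np.nonzero(p <= alpha * np.arange(1, m + 1) / m)[0]
    return 0 if below.size == 0 else int(below[-1]) + 1

# Candidate peaks sorted by volume/intensity, converted to p-values:
print(bh_select([0.001, 0.008, 0.04, 0.20, 0.64]))   # -> 2 peaks kept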
A prototype system to support evidence-based practice.
Demner-Fushman, Dina; Seckman, Charlotte; Fisher, Cheryl; Hauser, Susan E; Clayton, Jennifer; Thoma, George R
2008-11-06
Translating evidence into clinical practice is a complex process that depends on the availability of evidence, the environment into which the research evidence is translated, and the system that facilitates the translation. This paper presents InfoBot, a system designed for automatic delivery of patient-specific information from evidence-based resources. A prototype system has been implemented to support development of individualized patient care plans. The prototype explores possibilities to automatically extract patients' problems from the interdisciplinary team notes and query evidence-based resources using the extracted terms. Using 4,335 de-identified interdisciplinary team notes for 525 patients, the system automatically extracted biomedical terminology from 4,219 notes and linked resources to 260 patient records. Sixty of those records (15 each for Pediatrics, Oncology & Hematology, Medical & Surgical, and Behavioral Health units) have been selected for an ongoing evaluation of the quality of automatically proactively delivered evidence and its usefulness in development of care plans.
Automatic inference of indexing rules for MEDLINE
Névéol, Aurélie; Shooshan, Sonya E; Claveau, Vincent
2008-01-01
Background: Indexing is a crucial step in any information retrieval system. In MEDLINE, a widely used database of the biomedical literature, the indexing process involves the selection of Medical Subject Headings in order to describe the subject matter of articles. The need for automatic tools to assist MEDLINE indexers in this task is growing with the increasing number of publications being added to MEDLINE. Methods: In this paper, we describe the use and the customization of Inductive Logic Programming (ILP) to infer indexing rules that may be used to produce automatic indexing recommendations for MEDLINE indexers. Results: Our results show that this original ILP-based approach outperforms manual rules when they exist. In addition, the use of ILP rules also improves the overall performance of the Medical Text Indexer (MTI), a system producing automatic indexing recommendations for MEDLINE. Conclusion: We expect the sets of ILP rules obtained in this experiment to be integrated into MTI. PMID:19025687
Automatic inference of indexing rules for MEDLINE.
Névéol, Aurélie; Shooshan, Sonya E; Claveau, Vincent
2008-11-19
Indexing is a crucial step in any information retrieval system. In MEDLINE, a widely used database of the biomedical literature, the indexing process involves the selection of Medical Subject Headings in order to describe the subject matter of articles. The need for automatic tools to assist MEDLINE indexers in this task is growing with the increasing number of publications being added to MEDLINE. In this paper, we describe the use and the customization of Inductive Logic Programming (ILP) to infer indexing rules that may be used to produce automatic indexing recommendations for MEDLINE indexers. Our results show that this original ILP-based approach outperforms manual rules when they exist. In addition, the use of ILP rules also improves the overall performance of the Medical Text Indexer (MTI), a system producing automatic indexing recommendations for MEDLINE. We expect the sets of ILP rules obtained in this experiment to be integrated into MTI.
Using suggestion to model different types of automatic writing.
Walsh, E; Mehta, M A; Oakley, D A; Guilmette, D N; Gabay, A; Halligan, P W; Deeley, Q
2014-05-01
Our sense of self includes awareness of our thoughts and movements, and our control over them. This feeling can be altered or lost in neuropsychiatric disorders as well as in phenomena such as "automatic writing" whereby writing is attributed to an external source. Here, we employed suggestion in highly hypnotically suggestible participants to model various experiences of automatic writing during a sentence completion task. Results showed that the induction of hypnosis, without additional suggestion, was associated with a small but significant reduction of control, ownership, and awareness for writing. Targeted suggestions produced a double dissociation between thought and movement components of writing, for both feelings of control and ownership, and additionally, reduced awareness of writing. Overall, suggestion produced selective alterations in the control, ownership, and awareness of thought and motor components of writing, thus enabling key aspects of automatic writing, observed across different clinical and cultural settings, to be modelled. Copyright © 2014. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Wang, Bei; Sugi, Takenao; Wang, Xingyu; Nakamura, Masatoshi
Data for human sleep studies may be affected by internal and external influences. The recorded sleep data contain complex and stochastic factors, which increase the difficulty of applying computerized sleep stage determination techniques in clinical practice. The aim of this study is to develop an automatic sleep stage determination system that is optimized for variable sleep data. The methodology comprises two modules: expert knowledge database construction and automatic sleep stage determination. Visual inspection by a qualified clinician is used to obtain the probability density functions of the parameters during the learning process of expert knowledge database construction. Parameter selection is introduced in order to make the algorithm flexible. Automatic sleep stage determination is then performed based on conditional probability. The results showed close agreement with visual inspection by the clinician. The developed system can meet customized requirements in hospitals and institutions.
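One way to read the conditional-probability step is as a naive-Bayes decision over per-stage parameter densities. The stage names, parameters, and Gaussian densities below are invented for illustration; the paper learns its densities from clinician-scored records:

import numpy as np
from scipy.stats import norm

# Stand-in expert knowledge database: per-stage (mean, sd) for each
# parameter, as would be learned from visually inspected recordings.
stages = {"Wake": [(50, 10), (30, 8)],
          "REM":  [(20, 6),  (45, 9)],
          "NREM": [(10, 4),  (70, 12)]}

def determine_stage(x, prior=None):
    # Pick the stage maximizing P(stage) * prod_i p(x_i | stage)
    prior = prior or {s: 1 / len(stages) for s in stages}
    score = {s: prior[s] * np.prod([norm.pdf(xi, m, sd)
                                    for xi, (m, sd) in zip(x, params)])
             for s, params in stages.items()}
    return max(score, key=score.get)

print(determine_stage([12, 68]))   # -> "NREM"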
A Prototype System to Support Evidence-based Practice
Demner-Fushman, Dina; Seckman, Charlotte; Fisher, Cheryl; Hauser, Susan E.; Clayton, Jennifer; Thoma, George R.
2008-01-01
Translating evidence into clinical practice is a complex process that depends on the availability of evidence, the environment into which the research evidence is translated, and the system that facilitates the translation. This paper presents InfoBot, a system designed for automatic delivery of patient-specific information from evidence-based resources. A prototype system has been implemented to support development of individualized patient care plans. The prototype explores possibilities to automatically extract patients’ problems from the interdisciplinary team notes and query evidence-based resources using the extracted terms. Using 4,335 de-identified interdisciplinary team notes for 525 patients, the system automatically extracted biomedical terminology from 4,219 notes and linked resources to 260 patient records. Sixty of those records (15 each for Pediatrics, Oncology & Hematology, Medical & Surgical, and Behavioral Health units) have been selected for an ongoing evaluation of the quality of automatically proactively delivered evidence and its usefulness in development of care plans. PMID:18998835
2011-01-01
Background: Bioinformatics data analysis often uses a linear mixture model representing samples as an additive mixture of components. Properly constrained blind matrix factorization methods extract those components using mixture samples only. However, automatic selection of the extracted components to be retained for classification analysis remains an open issue. Results: The method proposed here is applied to well-studied protein and genomic datasets of ovarian, prostate and colon cancers to extract components for disease prediction. It achieves average sensitivities of 96.2% (sd = 2.7%), 97.6% (sd = 2.8%) and 90.8% (sd = 5.5%) and average specificities of 93.6% (sd = 4.1%), 99% (sd = 2.2%) and 79.4% (sd = 9.8%) in 100 independent two-fold cross-validations. Conclusions: We propose an additive mixture model of a sample for feature extraction using, in principle, sparseness-constrained factorization on a sample-by-sample basis. As opposed to this, existing methods factorize the complete dataset simultaneously. The sample model is composed of a reference sample representing the control and/or case (disease) groups and a test sample. Each sample is decomposed into two or more components that are selected automatically (without using label information) as control specific, case specific and not differentially expressed (neutral). The number of components is determined by cross-validation. Automatic assignment of features (m/z ratios or genes) to a particular component is based on thresholds estimated from each sample directly. Due to the locality of the decomposition, the strength of the expression of each feature across the samples can vary, yet the features will still be allocated to the related disease and/or control specific component. Since label information is not used in the selection process, case and control specific components can be used for classification; that is not the case with standard factorization methods. Moreover, the component selected by the proposed method as disease specific can be interpreted as a sub-mode and retained for further analysis to identify potential biomarkers. As opposed to standard matrix factorization methods, this can be achieved on a sample (experiment)-by-sample basis. Postulating one or more components with indifferent features enables their removal from the disease and control specific components on a sample-by-sample basis. This yields selected components with reduced complexity and, generally, increases prediction accuracy. PMID:22208882
Musculoskeletal Simulation Model Generation from MRI Data Sets and Motion Capture Data
NASA Astrophysics Data System (ADS)
Schmid, Jérôme; Sandholm, Anders; Chung, François; Thalmann, Daniel; Delingette, Hervé; Magnenat-Thalmann, Nadia
Today computer models and computer simulations of the musculoskeletal system are widely used to study the mechanisms behind human gait and its disorders. The common way of creating musculoskeletal models is to use a generic musculoskeletal model based on data derived from anatomical and biomechanical studies of cadaverous specimens. To adapt this generic model to a specific subject, the usual approach is to scale it. This scaling has been reported to introduce several errors because it does not always account for subject-specific anatomical differences. To address this, a novel semi-automatic workflow is proposed that creates subject-specific musculoskeletal models from magnetic resonance imaging (MRI) data sets and motion capture data. Based on subject-specific medical data and a model-based automatic segmentation approach, an accurate model of the anatomy can be produced while avoiding the scaling operation. This anatomical model, coupled with motion capture data, joint kinematics information, and muscle-tendon actuators, is finally used to create a subject-specific musculoskeletal model.
Automatic co-registration of 3D multi-sensor point clouds
NASA Astrophysics Data System (ADS)
Persad, Ravi Ancil; Armenakis, Costas
2017-08-01
We propose an approach for the automatic coarse alignment of 3D point clouds which have been acquired from various platforms. The method is based on 2D keypoint matching performed on height map images of the point clouds. Initially, a multi-scale wavelet keypoint detector is applied, followed by adaptive non-maxima suppression. A scale, rotation and translation-invariant descriptor is then computed for all keypoints. The descriptor is built using the log-polar mapping of Gabor filter derivatives in combination with the so-called Rapid Transform. In the final step, source and target height map keypoint correspondences are determined using a bi-directional nearest neighbour similarity check, together with a threshold-free modified-RANSAC. Experiments with urban and non-urban scenes are presented and results show scale errors ranging from 0.01 to 0.03, 3D rotation errors in the order of 0.2° to 0.3° and 3D translation errors from 0.09 m to 1.1 m.
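As an illustration of the final correspondence step, a minimal Python sketch of a bi-directional nearest-neighbour similarity check between source and target descriptor sets; descriptor construction and the threshold-free modified RANSAC are omitted, and all names are assumptions of this sketch.

```python
import numpy as np
from scipy.spatial import cKDTree

def mutual_nearest_neighbours(desc_src, desc_tgt):
    """desc_src, desc_tgt: (n, d) arrays of keypoint descriptors.
    Returns index pairs (i, j) where source descriptor i and target
    descriptor j are each other's nearest neighbour."""
    tree_tgt = cKDTree(desc_tgt)
    tree_src = cKDTree(desc_src)
    _, fwd = tree_tgt.query(desc_src)   # nearest target for each source
    _, bwd = tree_src.query(desc_tgt)   # nearest source for each target
    return [(i, j) for i, j in enumerate(fwd) if bwd[j] == i]
```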
An Automatic Instrument to Study the Spatial Scaling Behavior of Emissivity
Tian, Jing; Zhang, Renhua; Su, Hongbo; Sun, Xiaomin; Chen, Shaohui; Xia, Jun
2008-01-01
In this paper, the design of an automatic instrument for measuring the spatial distribution of land surface emissivity is presented, which makes direct in situ measurement of the spatial distribution of emissivity possible. The significance of this new instrument lies in two aspects. One is that it helps to investigate the spatial scaling behavior of emissivity and temperature; the other is that the design of the instrument provides theoretical and practical foundations for implementing measurements of the surface emissivity distribution on airborne or spaceborne platforms. To improve the accuracy of the measurements, the emissivity measurement and its uncertainty are examined in a series of carefully designed experiments. The impact of the variation of target temperature and the environmental irradiance on the measurement of emissivity is analyzed as well. In addition, the ideal temperature difference between the hot environment and the cool environment is obtained based on numerical simulations. Finally, the scaling behavior of surface emissivity caused by the heterogeneity of the target is discussed. PMID:27879735
A System for Automatically Generating Scheduling Heuristics
NASA Technical Reports Server (NTRS)
Morris, Robert
1996-01-01
The goal of this research is to improve the performance of automated schedulers by designing and implementing an algorithm that automatically generates heuristics for selecting a schedule. The particular application selected for applying this method is the problem of scheduling telescope observations, handled by a system called the Associate Principal Astronomer (APA). The input to the APA scheduler is a set of observation requests submitted by one or more astronomers. Each observation request specifies an observation program as well as scheduling constraints and preferences associated with the program. The scheduler employs greedy heuristic search to synthesize a schedule that satisfies all hard constraints of the domain and achieves a good score with respect to soft constraints expressed as an objective function established by an astronomer-user.
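A minimal Python sketch of greedy heuristic search of this general kind: requests are placed one at a time, always choosing the feasible slot that best improves an objective scoring the soft constraints. The request structure and the feasibility/scoring callbacks are illustrative assumptions, not the APA's actual interfaces.

```python
def greedy_schedule(requests, slots, feasible, score):
    """requests: iterable of request dicts (assumed to carry a 'priority');
    slots: hashable candidate time slots;
    feasible(req, slot, schedule) enforces the hard constraints;
    score(schedule) evaluates the soft-constraint objective."""
    schedule = {}
    for req in sorted(requests, key=lambda r: r.get("priority", 0), reverse=True):
        best_slot, best_score = None, float("-inf")
        for slot in slots:
            if slot not in schedule and feasible(req, slot, schedule):
                trial = dict(schedule, **{slot: req})   # tentative placement
                s = score(trial)
                if s > best_score:
                    best_slot, best_score = slot, s
        if best_slot is not None:                       # place greedily
            schedule[best_slot] = req
    return schedule
```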
Automatic welding detection by an intelligent tool pipe inspection
NASA Astrophysics Data System (ADS)
Arizmendi, C. J.; Garcia, W. L.; Quintero, M. A.
2015-07-01
This work provides a model based on machine learning techniques for weld recognition, based on signals obtained through an in-line inspection tool called a "smart pig" in oil and gas pipelines. The model uses a signal noise reduction phase by means of pre-processing algorithms and attribute-selection techniques. The noise reduction techniques were selected after a literature review and testing with survey data. Subsequently, the model was trained using recognition and classification algorithms, specifically artificial neural networks and support vector machines. Finally, the trained model was validated with different data sets and the performance was measured with cross-validation and ROC analysis. The results show that it is possible to identify welds automatically with an efficiency between 90 and 98 percent.
Bevel Gear Driver and Method Having Torque Limit Selection
NASA Technical Reports Server (NTRS)
Cook, Joseph S., Jr. (Inventor)
1997-01-01
Methods and apparatus are provided for a torque driver including an axially displaceable gear with a biasing assembly to bias the displaceable gear into an engagement position. A rotatable cap is provided with a micrometer dial to select a desired output torque. An intermediate bevel gear assembly is disposed between an input gear and an output gear. A gear tooth profile provides a separation force that overcomes the bias to limit torque at a desired torque limit. The torque limit is adjustable and may be adjusted manually or automatically depending on the type of biasing assembly provided. A clutch assembly automatically limits axial force applied to a fastener by the operator to avoid alteration of the desired torque limit.
Self-evaluated automatic classifier as a decision-support tool for sleep/wake staging.
Charbonnier, S; Zoubek, L; Lesecq, S; Chapotot, F
2011-06-01
An automatic sleep/wake stage classifier that deals with the presence of artifacts and provides a confidence index with each decision is proposed. The decision system is composed of two stages: the first stage checks each 20 s epoch of polysomnographic signals (EEG, EOG and EMG) for the presence of artifacts and selects the artifact-free signals. The second stage classifies the epoch using one classifier selected out of four, using feature inputs extracted from the artifact-free signals only. A confidence index is associated with each decision made, depending on the classifier used and on the class assigned, so that the user's confidence in the automatic decision is increased. The two-stage system was tested on a large database of 46 night recordings. It reached an overall accuracy of 85.5%, with improved ability to discern NREM I stage from REM sleep. It was shown that only 7% of the database was classified with a low confidence index, and thus should be re-evaluated by an expert physiologist, which makes the system an efficient decision-support tool. Copyright © 2011 Elsevier Ltd. All rights reserved.
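A schematic Python sketch of the two-stage decision structure described above: artifact checking per signal, classifier selection by which signals are clean, and a confidence index attached to the decision. The toy features, the dispatch on signal subsets, and the confidence lookup table are assumptions of this sketch, not the published system.

```python
import numpy as np

def classify_epoch(epoch, artifact_free, classifiers, confidence_table):
    """epoch: dict of 20 s signal arrays keyed 'EEG', 'EOG', 'EMG';
    artifact_free(signal) -> bool (stage 1 artifact check);
    classifiers: maps a frozenset of clean signal names to a fitted classifier;
    confidence_table[(clean, label)]: confidence index per classifier/class.
    Assumes at least one signal is artifact-free."""
    clean = frozenset(s for s in epoch if artifact_free(epoch[s]))
    feats = np.concatenate([[epoch[s].mean(), epoch[s].std()]
                            for s in sorted(clean)])      # toy features only
    clf = classifiers[clean]                              # stage 2: pick classifier
    label = clf.predict(feats.reshape(1, -1))[0]
    return label, confidence_table[(clean, label)]        # decision + confidence
```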
NASA Astrophysics Data System (ADS)
Yoon, S.; Won, M.; Jang, K.; Lim, J.
2016-12-01
As there has been a recent increase in cases of forest fires in North Korea spreading southward through the De-Militarized Zone (DMZ), ensuring a proper response to such events has been a challenge. Therefore, in order to respond to and manage these forest fires appropriately, an improvement in forest fire predictability through integration of mountain weather information observed at the most suitable sites is necessary. This study is a proactive case in which a spatial analysis and an on-site assessment method were developed for selecting optimum sites for mountain weather observation in national forests. For the spatial analysis, class 1 and 2 forest fire danger areas over the past 10 years, accessibility within a maximum of 100 m, Automatic Weather Station (AWS) redundancy within 2.5 km, and mountain terrain higher than 200 m were analyzed. A final overlay analysis was performed to select the candidates for the field assessment. The sites selected through spatial analysis were quantitatively evaluated based on the optimal meteorological environment, forest and hiking trail accessibility, AWS redundancy, and the availability of wireless communication and solar-powered electricity. Sites with a total score of 70 or higher were accepted as adequate. At the final selected sites, an automatic mountain meteorology observation station (AMOS) was established, and integration of the mountain weather data with Korea Meteorological Administration (KMA) weather data improved forest fire predictability in South Korea by 10%. Given these results, we expect that establishing automatic mountain meteorology observation stations at optimal sites in inaccessible areas and integrating mountain weather data will improve the predictability of forest fires.
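A minimal Python sketch of the two-phase selection logic: a spatial screen followed by a weighted on-site assessment with a 70-point acceptance threshold. The criterion names and weights are illustrative assumptions, not the study's exact rubric.

```python
def screen_candidate(site):
    """Phase 1: spatial screening of a candidate site (dict of attributes)."""
    return (site["fire_danger_class"] in (1, 2)      # class 1-2 danger area
            and site["access_distance_m"] <= 100     # accessibility <= 100 m
            and site["nearest_aws_km"] > 2.5         # no AWS within 2.5 km
            and site["elevation_m"] >= 200)          # mountain terrain

def assess_candidate(scores, weights=None):
    """Phase 2: weighted field assessment; accept when total >= 70/100.
    scores: dict of per-criterion scores in [0, 1]."""
    weights = weights or {"meteorology": 30, "access": 20,
                          "aws_redundancy": 20, "comms_power": 30}
    total = sum(weights[k] * scores[k] for k in weights)
    return total, total >= 70
```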
Forkert, N D; Cheng, B; Kemmling, A; Thomalla, G; Fiehler, J
2014-01-01
The objective of this work is to present the software tool ANTONIA, which has been developed to facilitate a quantitative analysis of perfusion-weighted MRI (PWI) datasets in general as well as the subsequent multi-parametric analysis of additional datasets for the specific purpose of acute ischemic stroke patient dataset evaluation. Three different methods for the analysis of DSC or DCE PWI datasets are currently implemented in ANTONIA, which can be case-specifically selected based on the study protocol. These methods comprise a curve fitting method as well as a deconvolution-based and deconvolution-free method integrating a previously defined arterial input function. The perfusion analysis is extended for the purpose of acute ischemic stroke analysis by additional methods that enable an automatic atlas-based selection of the arterial input function, an analysis of the perfusion-diffusion and DWI-FLAIR mismatch as well as segmentation-based volumetric analyses. For reliability evaluation, the described software tool was used by two observers for quantitative analysis of 15 datasets from acute ischemic stroke patients to extract the acute lesion core volume, FLAIR ratio, perfusion-diffusion mismatch volume with manually as well as automatically selected arterial input functions, and follow-up lesion volume. The results of this evaluation revealed that the described software tool leads to highly reproducible results for all parameters if the automatic arterial input function selection method is used. Due to the broad selection of processing methods that are available in the software tool, ANTONIA is especially helpful to support image-based perfusion and acute ischemic stroke research projects.
Cheng, George Shu-Xing; Mulkey, Steven L; Wang, Qiang; Chow, Andrew J
2013-11-26
A method and apparatus for intelligently controlling continuous process variables. A Dream Controller comprises an Intelligent Engine mechanism and a number of Model-Free Adaptive (MFA) controllers, each of which is suitable to control a process with specific behaviors. The Intelligent Engine can automatically select the appropriate MFA controller and its parameters so that the Dream Controller can be easily used by people with limited control experience and those who do not have the time to commission, tune, and maintain automatic controllers.
A State-of-the-Art Assessment of Automatic Name Placement.
1986-08-01
develop an automatic name placement system. Balodis, M., "Positioning of typography on maps," Proc. ACSM Fall Convention, Salt Lake City, Utah, Sept. 1983, pp. 28-44. This article deals with the selection of typography for maps. It describes psycho-visual experiments with groups of individuals. (Also available as Tech. Rept. IPL-TR-063, Rensselaer Polytechnic Institute, Troy, NY 12181, May 1984.)
Automatic microscopy for mitotic cell location.
NASA Technical Reports Server (NTRS)
Herron, J.; Ranshaw, R.; Castle, J.; Wald, N.
1972-01-01
Advances are reported in the development of an automatic microscope with which to locate hematologic or other cells in mitosis for subsequent chromosome analysis. The system under development is designed to perform the functions of: slide scanning to locate metaphase cells; conversion of images of selected cells into binary form; and on-line computer analysis of the digitized image for significant cytogenetic data. Cell detection criteria are evaluated using a test sample of 100 mitotic cells and 100 artifacts.
Examination of a cognitive model of stress, burnout, and intention to resign for Japanese nurses.
Ohue, Takashi; Moriyama, Michiko; Nakaya, Takashi
2011-06-01
A reduction in burnout is required to decrease the voluntary turnover of nurses. This study was carried out with the aim of establishing a cognitive model of stress, burnout, and intention to resign for nurses. A questionnaire survey was administered to 336 nurses (27 male and 309 female) who had worked for ≤5 years at a hospital with multiple departments. The survey included an evaluation of burnout (Maslach Burnout Inventory), stress (Nursing Job Stressor Scale), automatic thoughts (Automatic Thoughts Questionnaire-Revised), and irrational beliefs (Japanese Irrational Belief Test), in addition to the intention to resign. The stressors that affected burnout in the nurses included conflict with other nursing staff, nursing role conflict, qualitative workload, quantitative workload, and conflict with patients. The irrational beliefs that were related to burnout included dependence, problem avoidance, and helplessness. In order to examine the automatic thoughts affecting burnout, groups with low and high negative automatic thoughts and low and high positive automatic thoughts were established. A two-way ANOVA showed a significant interaction of these factors with emotional exhaustion, but no significant interaction for depersonalization or personal accomplishment, for which only the main effects were significant. The final model showed a process of "stressor → irrational beliefs → negative automatic thoughts/positive automatic thoughts → burnout". In addition, a relationship between burnout and an intention to resign was shown. These results suggest that stress and burnout in nurses might be prevented and that the number of nurses who leave their position could be decreased by changing irrational beliefs to rational beliefs, decreasing negative automatic thoughts, and facilitating positive automatic thoughts. © 2010 The Authors. Japan Journal of Nursing Science © 2010 Japan Academy of Nursing Science.
Design of automatic leveling and centering system of theodolite
NASA Astrophysics Data System (ADS)
Liu, Chun-tong; He, Zhen-Xin; Huang, Xian-xiang; Zhan, Ying
2012-09-01
To realize theodolite automation and improve azimuth angle measurement, a theodolite automatic leveling and centering system with leveling error compensation is designed, covering the system solution, key component selection, the mechanical structure for leveling and centering, and the system software. The redesigned leveling feet are driven by a DC servo motor, and an electronic centering control device is installed. Using high-precision tilt sensors as horizontal skew detection sensors ensures the effectiveness of the leveling error compensation. The aiming round mark center is located using digital image processing through a surface array CCD, and the leveling measurement precision can reach the pixel level, which makes accurate centering of the theodolite possible. Finally, experiments are conducted using the automatic leveling and centering system of the theodolite. The results show the leveling and centering system can realize automatic operation with a high centering accuracy of 0.04 mm. The measurement precision of the orientation angle after leveling error compensation is improved compared with that of the traditional method. The automatic leveling and centering system of the theodolite can satisfy the requirements of measuring precision and automation.
NASA Astrophysics Data System (ADS)
Yu, Le; Zhang, Dengrong; Holden, Eun-Jung
2008-07-01
Automatic registration of multi-source remote-sensing images is a difficult task as it must deal with the varying illuminations and resolutions of the images, different perspectives and the local deformations within the images. This paper proposes a fully automatic and fast non-rigid image registration technique that addresses those issues. The proposed technique performs a pre-registration process that coarsely aligns the input image to the reference image by automatically detecting their matching points by using the scale invariant feature transform (SIFT) method and an affine transformation model. Once the coarse registration is completed, it performs a fine-scale registration process based on a piecewise linear transformation technique using feature points that are detected by the Harris corner detector. The registration process first finds, in succession, tie point pairs between the input and the reference image by detecting Harris corners and applying a cross-matching strategy based on a wavelet pyramid for fast search speed. Tie point pairs with large errors are pruned by an error-checking step. The input image is then rectified by using triangulated irregular networks (TINs) to deal with irregular local deformations caused by the fluctuation of the terrain. For each triangular facet of the TIN, affine transformations are estimated and applied for rectification. Experiments with Quickbird, SPOT5, SPOT4, TM remote-sensing images of the Hangzhou area in China demonstrate the efficiency and the accuracy of the proposed technique for multi-source remote-sensing image registration.
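A minimal Python sketch of the coarse pre-registration step: SIFT keypoint matching followed by robust affine estimation with OpenCV. The fine-scale piecewise linear (TIN-based) stage is omitted, and the ratio-test and RANSAC parameter values are assumptions of this sketch.

```python
import cv2
import numpy as np

def coarse_register(input_img, reference_img):
    """Coarsely align input_img to reference_img (both grayscale arrays)."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(input_img, None)
    k2, d2 = sift.detectAndCompute(reference_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(d1, d2, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]  # ratio test
    src = np.float32([k1[m.queryIdx].pt for m in good])
    dst = np.float32([k2[m.trainIdx].pt for m in good])
    A, _ = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)  # robust affine fit
    h, w = reference_img.shape[:2]
    return cv2.warpAffine(input_img, A, (w, h))
```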
The Study of Cognitive Change Process on Depression during Aerobic Exercises.
Sadeghi, Kheirollah; Ahmadi, Seyed Mojtaba; Moghadam, Arash Parsa; Parvizifard, Aliakbar
2017-04-01
Several studies have shown that aerobic exercise is effective in treating depression and improving mental health. There are various theories explaining why aerobic exercise is effective in the treatment of depression and the improvement of mental health, but there are few studies showing how cognitive components and depression change during aerobic exercise. The current study was carried out to investigate the cognitive change process during aerobic exercise in depressed students. This study was conducted through structural equation modeling; the study sample included 85 depressed students. Participants were selected through a purposive sampling method. The Beck Depression Inventory (BDI-II), the Automatic Thoughts Questionnaire (ATQ), and the Dysfunctional Attitude Scale (DAS) were used as the data collection instruments. The participants received eight sessions of aerobic exercise (three times a week). The obtained data were analysed with AMOS-18 and SPSS 18 software. The results showed that depression (p=0.001), automatic thoughts (frequency p=0.413, beliefs p=0.676) and dysfunctional assumptions (p=0.219) decreased during aerobic exercise; however, the reduction was statistically significant only for depression. The causal and consequential models did not fit the data, whereas the partially and fully interactive models provided an adequate fit, with the fully interactive model providing the best fit. It seems that aerobic exercise reduces cognitive components separately, leading to reduced depression.
NASA Astrophysics Data System (ADS)
Romagnan, Jean Baptiste; Aldamman, Lama; Gasparini, Stéphane; Nival, Paul; Aubert, Anaïs; Jamet, Jean Louis; Stemmann, Lars
2016-10-01
The present work aims to show that high-throughput imaging systems can be useful to estimate mesozooplankton community size and taxonomic descriptors that can be the basis for consistent large-scale monitoring of plankton communities. Such monitoring is required by the European Marine Strategy Framework Directive (MSFD) in order to ensure the Good Environmental Status (GES) of European coastal and offshore marine ecosystems. Time- and cost-effective automatic techniques are of high interest in this context. An imaging-based protocol has been applied to a high-frequency time series (every second day between April 2003 and April 2004, on average) of zooplankton obtained at a coastal site of the NW Mediterranean Sea, Villefranche Bay. One hundred and eighty-four mesozooplankton net-collected samples were analysed with a ZooScan and an associated semi-automatic classification technique. The constitution of a learning set designed to maximize copepod identification, with more than 10,000 objects, enabled the automatic sorting of copepods with an accuracy of 91% (true positives) and a contamination of 14% (false positives). Twenty-seven samples were then chosen from the total copepod time series for detailed visual sorting of copepods after automatic identification. This method enabled the description of the dynamics of two well-known copepod species, Centropages typicus and Temora stylifera, and 7 other taxonomically broader copepod groups, in terms of size, biovolume and abundance-size distributions (size spectra). Also, total copepod size spectra underwent significant changes during the sampling period. These changes could be partially related to changes in the copepod assemblage taxonomic composition and size distributions. This study shows that the use of high-throughput imaging systems is of great interest for extracting relevant coarse (i.e. total abundance, size structure) and detailed (i.e. selected species dynamics) descriptors of zooplankton dynamics. Innovative zooplankton analyses are therefore proposed and open the way for further development of zooplankton community indicators of change.
NASA Technical Reports Server (NTRS)
Theodore, Colin R.
2010-01-01
A full-scale wind tunnel test to evaluate the effects of Individual Blade Control (IBC) on the performance, vibration, noise and loads of a UH-60A rotor was recently completed in the National Full-Scale Aerodynamics Complex (NFAC) 40- by 80-Foot Wind Tunnel [1]. A key component of this wind tunnel test was an automatic rotor trim control system that allowed the rotor trim state to be set more precisely, quickly and repeatably than was possible with the rotor operator setting the trim condition manually. The trim control system was also able to maintain the desired trim condition through changes in IBC actuation both in open- and closed-loop IBC modes, and through long-period transients in wind tunnel flow. This ability of the trim control system to automatically set and maintain a steady rotor trim enabled the effects of different IBC inputs to be compared at common trim conditions and to perform these tests quickly without requiring the rotor operator to re-trim the rotor. The trim control system described in this paper was developed specifically for use during the IBC wind tunnel test.
Polarization transformation as an algorithm for automatic generalization and quality assessment
NASA Astrophysics Data System (ADS)
Qian, Haizhong; Meng, Liqiu
2007-06-01
For decades it has been a dream of cartographers to computationally mimic the generalization processes in human brains for the derivation of various small-scale target maps or databases from a large-scale source map or database. This paper addresses in a systematic way the polarization transformation (PT) - a new algorithm that serves both the purpose of automatic generalization of discrete features and that of quality assurance. By means of PT, two-dimensional point clusters or line networks in the Cartesian system can be transformed into a polar coordinate system, which then can be unfolded as a single spectrum line r = f(α), where r and α stand for the polar radius and the polar angle respectively. After the transformation, the original features will correspond to nodes on the spectrum line delimited between 0° and 360° along the horizontal axis, and between the minimum and maximum polar radius along the vertical axis. Since PT is a lossless transformation, it allows a straightforward analysis and comparison of the original and generalized distributions, so automatic generalization and quality assurance can be done in this way. Examples illustrate that the PT algorithm meets the requirements of generalization of discrete spatial features and does so in a more rigorous way.
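A minimal Python sketch of the polarization transformation for a 2D point cluster: Cartesian points are mapped to polar coordinates about a pole and unfolded into the spectrum line r = f(α). The choice of the cluster centroid as the pole is an assumption made for illustration.

```python
import numpy as np

def polarization_transform(points):
    """points: (n, 2) array of x, y coordinates.
    Returns (alpha_deg, r) sorted by polar angle, i.e. the spectrum line."""
    pole = points.mean(axis=0)                      # assumed pole: centroid
    dx, dy = (points - pole).T
    r = np.hypot(dx, dy)                            # polar radius
    alpha = np.degrees(np.arctan2(dy, dx)) % 360.0  # polar angle in [0, 360)
    order = np.argsort(alpha)
    return alpha[order], r[order]                   # nodes of the spectrum line
```

Because the mapping keeps both r and α for every feature (given the pole), it is lossless and the original distribution can be recovered from the spectrum line.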
Nguyen, Nga; Vandenbroucke, Laurent; Hernández, Alfredo; Pham, Tu; Beuchée, Alain; Pladys, Patrick
2017-05-01
This study examined the heart rate variability characteristics associated with early-onset neonatal sepsis in a prospective, observational controlled study. Eligible patients were full-term neonates hospitalised with clinical signs that suggested early-onset sepsis and a C-reactive protein of >10 mg/L. Sepsis was considered proven in cases of symptomatic septicaemia, meningitis, pneumonia or enterocolitis. Heart rate variability parameters (n = 16) were assessed from five-, 15- and 30-minute stationary sequences automatically selected from electrocardiographic recordings performed at admission and compared with a control group using the U-test with post hoc Benjamini-Yekutieli correction. Stationary sequences corresponded to the periods with the lowest changes of heart rate variability over time. A total of 40 full-term infants were enrolled, including 14 with proven sepsis. The mean duration of the cardiac cycle length was lower in the proven sepsis group than in the control group (n = 11), without other significant changes in heart rate variability parameters. These durations, measured in five-minute stationary periods, were 406 (367-433) ms in the proven sepsis group versus 507 (463-522) ms in the control group (p < 0.05). Early-onset neonatal sepsis was associated with a high mean heart rate measured during automatically selected stationary periods. ©2017 Foundation Acta Paediatrica. Published by John Wiley & Sons Ltd.
Higher Education: Reputation Effects, Signal Distortions, and Propitious Selection
ERIC Educational Resources Information Center
Savitskaya, E. V.; Altunina, N. S.
2017-01-01
We attempt to prove the hypothesis that, under certain conditions, a phenomenon of propitious selection may arise on the higher education market: When talented university entrants favor applying to branded universities, the latter are able to automatically build up a positive reputation without having to actually improve the quality of their…
Automatic Selection of Suitable Sentences for Language Learning Exercises
ERIC Educational Resources Information Center
Pilán, Ildikó; Volodina, Elena; Johansson, Richard
2013-01-01
In our study we investigated second and foreign language (L2) sentence readability, an area little explored so far in the case of several languages, including Swedish. The outcome of our research consists of two methods for sentence selection from native language corpora based on Natural Language Processing (NLP) and machine learning (ML)…
Directional Microphone Hearing Aids in School Environments: Working toward Optimization
ERIC Educational Resources Information Center
Ricketts, Todd A.; Picou, Erin M.; Galster, Jason
2017-01-01
Purpose: The hearing aid microphone setting (omnidirectional or directional) can be selected manually or automatically. This study examined the percentage of time the microphone setting selected using each method was judged to provide the best signalto-noise ratio (SNR) for the talkers of interest in school environments. Method: A total of 26…
Vehicle-to-Grid Automatic Load Sharing with Driver Preference in Micro-Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Yubo; Nazaripouya, Hamidreza; Chu, Chi-Cheng
Integration of Electric Vehicles (EVs) with the power grid not only brings new challenges for load management, but also opportunities for distributed storage and generation. This paper comprehensively models and analyzes distributed Vehicle-to-Grid (V2G) for automatic load sharing with driver preference. In a micro-grid with limited communications, V2G EVs need to decide load sharing based on their own power and voltage profiles. A droop-based controller taking driver preference into account is proposed in this paper to address the distributed control of EVs. Simulations are designed for three fundamental V2G automatic load sharing scenarios that include all system dynamics of such applications. Simulation results demonstrate that active power sharing is achieved proportionally among V2G EVs with consideration of driver preference. In addition, the results also verify the system stability and reactive power sharing analysis in the system modelling, which sheds light on large-scale V2G automatic load sharing in more complicated cases.
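A minimal Python sketch of a droop law with a driver-preference limit, in the spirit of the controller described: each EV sets its active power locally from the voltage deviation, capped by how much capacity the driver is willing to share. The gains and the preference model are illustrative assumptions, not the paper's controller.

```python
def v2g_droop_power(v_meas, v_nom=1.0, droop_gain=20.0,
                    p_max=10.0, preference=1.0):
    """v_meas, v_nom: bus voltage in per-unit; p_max: rated power in kW;
    preference in [0, 1] (0 = unwilling to discharge, 1 = full capacity).
    Returns commanded active power (positive = inject to grid)."""
    p_cmd = droop_gain * (v_nom - v_meas) * p_max   # droop law
    limit = preference * p_max                      # driver-preference cap
    return max(-limit, min(limit, p_cmd))
```

With identical droop gains, EVs sharing a bus inject power proportionally to their preference-scaled ratings, which is how proportional sharing emerges without communication.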
A cloud-based system for automatic glaucoma screening.
Fengshou Yin; Damon Wing Kee Wong; Ying Quan; Ai Ping Yow; Ngan Meng Tan; Gopalakrishnan, Kavitha; Beng Hai Lee; Yanwu Xu; Zhuo Zhang; Jun Cheng; Jiang Liu
2015-08-01
In recent years, there has been increasing interest in the use of automatic computer-based systems for the detection of eye diseases, including glaucoma. However, these systems are usually standalone software with basic functions only, limiting their usage on a large scale. In this paper, we introduce an online cloud-based system for automatic glaucoma screening through the use of medical image-based pattern classification technologies. It is designed in a hybrid cloud pattern to offer both accessibility and enhanced security. Raw data, including the patient's medical condition and fundus image, and the resultant medical reports are collected and distributed through the public cloud tier. In the private cloud tier, automatic analysis and assessment of colour retinal fundus images are performed. The ubiquitous anywhere-access nature of the system through the cloud platform facilitates a more efficient and cost-effective means of glaucoma screening, allowing the disease to be detected earlier and enabling early intervention and more efficient disease management.
Semi-Automatic Extraction Algorithm for Images of the Ciliary Muscle
Kao, Chiu-Yen; Richdale, Kathryn; Sinnott, Loraine T.; Ernst, Lauren E.; Bailey, Melissa D.
2011-01-01
Purpose To develop and evaluate a semi-automatic algorithm for segmentation and morphological assessment of the dimensions of the ciliary muscle in Visante™ Anterior Segment Optical Coherence Tomography images. Methods Geometric distortions in Visante images analyzed as binary files were assessed by imaging an optical flat and human donor tissue. The appropriate pixel/mm conversion factor to use for air (n = 1) was estimated by imaging calibration spheres. A semi-automatic algorithm was developed to extract the dimensions of the ciliary muscle from Visante images. Measurements were also made manually using Visante software calipers. Intraclass correlation coefficients (ICC) and Bland-Altman analyses were used to compare the methods. A multilevel model was fitted to estimate the variance of algorithm measurements that was due to differences within and between examiners in scleral spur selection versus biological variability. Results The optical flat and the human donor tissue were imaged and appeared without geometric distortions in binary file format. Bland-Altman analyses revealed that caliper measurements tended to underestimate ciliary muscle thickness at 3 mm posterior to the scleral spur in subjects with the thickest ciliary muscles (t = 3.6, p < 0.001). The percent variance due to within- or between-examiner differences in scleral spur selection was found to be small (6%) when compared to the variance due to biological differences across subjects (80%). Using the mean of measurements from three images achieved an estimated ICC of 0.85. Conclusions The semi-automatic algorithm successfully segmented the ciliary muscle for further measurement. Using the algorithm to follow the scleral curvature to locate more posterior measurements is critical to avoid underestimating thickness measurements. This semi-automatic algorithm will allow for repeatable, efficient, and masked ciliary muscle measurements in large datasets. PMID:21169877
The One to Multiple Automatic High Accuracy Registration of Terrestrial LIDAR and Optical Images
NASA Astrophysics Data System (ADS)
Wang, Y.; Hu, C.; Xia, G.; Xue, H.
2018-04-01
The registration of terrestrial laser point clouds with close-range images is a key step in high-precision 3D reconstruction of cultural relic objects. Given the current demand for high texture resolution in this field, registering point cloud and image data during object reconstruction leads to a one-point-cloud-to-multiple-images problem. In current commercial software, this pairwise registration is achieved by manually partitioning the point cloud data, manually matching point cloud and image data, and manually selecting corresponding two-dimensional points between the image and the point cloud; this process not only greatly reduces working efficiency, but also degrades the registration accuracy and causes texture seams in the colored point cloud. To solve these problems, this paper takes a whole-object image as intermediate data and uses matching techniques to establish an automatic one-to-one correspondence between the point cloud and multiple images. Matching between the point cloud's central-projection reflection-intensity image and the optical images is applied to automatically match corresponding feature points, and a Rodrigues-matrix spatial similarity transformation model with weight selection iteration is used to achieve automatic high-accuracy registration of the two kinds of data. This method is expected to serve high-precision, high-efficiency automatic 3D reconstruction of cultural relic objects, and has scientific research value and practical significance.
Zhang, Ling; Chen, Siping; Chin, Chien Ting; Wang, Tianfu; Li, Shengli
2012-08-01
To assist radiologists and decrease interobserver variability when using 2D ultrasonography (US) to locate the standardized plane of early gestational sac (SPGS) and to perform gestational sac (GS) biometric measurements. In this paper, the authors report the design of the first automatic solution, called "intelligent scanning" (IS), for selecting SPGS and performing biometric measurements using real-time 2D US. First, the GS is efficiently and precisely located in each ultrasound frame by exploiting a coarse to fine detection scheme based on the training of two cascade AdaBoost classifiers. Next, the SPGS are automatically selected by eliminating false positives. This is accomplished using local context information based on the relative position of anatomies in the image sequence. Finally, a database-guided multiscale normalized cuts algorithm is proposed to generate the initial contour of the GS, based on which the GS is automatically segmented for measurement by a modified snake model. This system was validated on 31 ultrasound videos involving 31 pregnant volunteers. The differences between system performance and radiologist performance with respect to SPGS selection and length and depth (diameter) measurements are 7.5% ± 5.0%, 5.5% ± 5.2%, and 6.5% ± 4.6%, respectively. Additional validations prove that the IS precision is in the range of interobserver variability. Our system can display the SPGS along with biometric measurements in approximately three seconds after the video ends, when using a 1.9 GHz dual-core computer. IS of the GS from 2D real-time US is a practical, reproducible, and reliable approach.
Dual current readout for precision plating
NASA Technical Reports Server (NTRS)
Iceland, W. F.
1970-01-01
Bistable amplifier prevents damage in the low range circuitry of a dual scale ammeter. It senses the current and switches automatically to the high range circuitry as the current rises above a preset level.
Somasundaram, Karuppanagounder; Ezhilarasan, Kamalanathan
2015-01-01
To develop an automatic skull stripping method for magnetic resonance imaging (MRI) of human head scans. The proposed method is based on gray scale transformation and morphological operations. The proposed method has been tested with 20 volumes of normal T1-weighted images taken from the Internet Brain Segmentation Repository. Experimental results show that the proposed method gives better results than the popular skull stripping methods Brain Extraction Tool and Brain Surface Extractor. The average values of the Jaccard and Dice coefficients are 0.93 and 0.962, respectively. In this article, we have proposed a novel skull stripping method using intensity transformation and morphological operations. It is a method of low computational complexity, but it gives results competitive with or better than those of the popular skull stripping methods Brain Surface Extractor and Brain Extraction Tool.
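A minimal Python sketch of skull stripping by gray-scale transformation plus morphological operations, in the spirit of the method above; the Otsu threshold, structuring-element sizes, and largest-component heuristic are assumptions of this sketch, not the authors' exact pipeline.

```python
import numpy as np
from scipy import ndimage
from skimage import filters, morphology

def strip_skull(slice2d):
    """slice2d: 2D T1-weighted MRI slice. Returns a binary brain mask."""
    img = slice2d.astype(float)
    img = (img - img.min()) / (img.max() - img.min() + 1e-10)   # rescale to [0, 1]
    mask = img > filters.threshold_otsu(img)                    # initial foreground
    mask = morphology.binary_erosion(mask, morphology.disk(3))  # detach skull bridges
    labels, n = ndimage.label(mask)
    if n:
        sizes = ndimage.sum(mask, labels, range(1, n + 1))
        mask = labels == (int(np.argmax(sizes)) + 1)            # largest blob = brain
    mask = morphology.binary_dilation(mask, morphology.disk(3)) # restore boundary
    return ndimage.binary_fill_holes(mask)
```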
NASA Astrophysics Data System (ADS)
Liu, Likun
2018-01-01
In the field of remote sensing image processing, segmentation is a preliminary step for later analysis, semi-automatic human interpretation, and fully automatic machine recognition and learning. Since 2000, the object-oriented remote sensing image processing method and its basic ideas have prevailed, with the Fractal Net Evolution Approach (FNEA) multi-scale segmentation algorithm at their core. This paper focuses on the study and improvement of that algorithm: it analyzes existing segmentation algorithms and selects the watershed algorithm as the optimal initialization. Meanwhile, the algorithm is modified by adjusting an area parameter and then combining the area parameter with a heterogeneity parameter. Several experiments are then carried out to show that the modified FNEA algorithm achieves better segmentation results than a traditional pixel-based method (an FCM algorithm based on neighborhood information) and than the plain combination of FNEA and watershed.
A New Strategy to Land Precisely on the Northern Plains of Mars
NASA Technical Reports Server (NTRS)
Cheng, Yang; Huertas, Andres
2010-01-01
During the Phoenix mission landing site selection process, the Mars Reconnaissance Orbiter (MRO) High Resolution Imaging Science Experiment (HiRISE) images revealed widely spread and dense rock fields in the northern plains. Automatic rock mapping and subsequent statistical analyses showed 30-90% CFA (cumulative fractional area) covered by rocks larger than 1 meter in dense rock fields around craters. Less dense rock fields had 5-30% rock coverage in terrain away from craters. Detectable meter-scale boulders were found nearly everywhere. These rocks present a risk to spacecraft safety during landing. However, they are the most salient topographic features in this region, and can be good landmarks for spacecraft localization during landing. In this paper we present a novel strategy that uses abundance of rocks in northern plains for spacecraft localization. The paper discusses this approach in three sections: a rock-based landmark terrain relative navigation (TRN) algorithm; the TRN algorithm feasibility; and conclusions.
A general system for automatic biomedical image segmentation using intensity neighborhoods.
Chen, Cheng; Ozolek, John A; Wang, Wei; Rohde, Gustavo K
2011-01-01
Image segmentation is important, with applications to several problems in biology and medicine. While extensively researched, current segmentation methods generally perform adequately in the applications for which they were designed, but often require extensive modifications or calibrations before being used in a different application. We describe an approach that, with few modifications, can be used in a variety of image segmentation problems. The approach is based on a supervised learning strategy that utilizes intensity neighborhoods to assign each pixel in a test image its correct class based on training data. We describe methods for modeling rotations and variations in scale as well as a subset selection for training the classifiers. We show that the performance of our approach in tissue segmentation tasks in magnetic resonance and histopathology microscopy images, as well as nuclei segmentation from fluorescence microscopy images, is similar to or better than several algorithms specifically designed for each of these applications.
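A minimal Python sketch of segmentation via intensity neighborhoods: each pixel is represented by the vector of intensities in a small patch around it and classified by a supervised learner trained on a labeled image. The patch size and the random-forest classifier are illustrative assumptions; rotation/scale modeling and training-subset selection are omitted.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def patch_features(img, half=2):
    """Return an (n_pixels, (2*half+1)**2) matrix of intensity neighborhoods."""
    p = np.pad(img, half, mode="reflect")
    h, w = img.shape
    feats = [p[i:i + h, j:j + w].ravel()          # shifted views of the image
             for i in range(2 * half + 1) for j in range(2 * half + 1)]
    return np.stack(feats, axis=1)

def train_and_segment(train_img, train_labels, test_img):
    """train_labels: per-pixel class array matching train_img's shape."""
    clf = RandomForestClassifier(n_estimators=100)
    clf.fit(patch_features(train_img), train_labels.ravel())
    return clf.predict(patch_features(test_img)).reshape(test_img.shape)
```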
LOFT data acquisition and visual display system (DAVDS) presentation program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bullock, M.G.; Miyasaki, F.S.
1976-03-01
The Data Acquisition and Visual Display System (DAVDS) at the Loss-of-Fluid Test Facility (LOFT) has 742-data-channel recording capability, of which 576 channels are recorded digitally. The purpose of this computer program is to graphically present the data acquired and/or processed by the LOFT DAVDS. This program takes specially created plot data buffers of up to 1024 words and generates time history plots on the system electrostatic printer-plotter. The data can be extracted from two system input devices: magnetic disk or digital magnetic tape. Versatility has been designed into the program by providing the user three methods of scaling plots: automatic, control record, and manual. The time required to produce a plot on the system electrostatic printer-plotter varies from 30 to 90 seconds depending on the options selected. The basic computer and program details are described.
Improved dynamical scaling analysis using the kernel method for nonequilibrium relaxation.
Echinaka, Yuki; Ozeki, Yukiyasu
2016-10-01
The dynamical scaling analysis for the Kosterlitz-Thouless transition in the nonequilibrium relaxation method is improved by the use of Bayesian statistics and the kernel method. This allows data to be fitted to a scaling function without using any parametric model function, which makes the results more reliable and reproducible and enables automatic and faster parameter estimation. Applying this method, the bootstrap method is introduced and a numerical discrimination for the transition type is proposed.
Criteria for the assessment of analyser practicability
Biosca, C.; Galimany, R.
1993-01-01
This article lists the theoretical criteria that need to be considered to assess the practicability of an automatic analyser. Two essential sets of criteria should be taken into account when selecting an automatic analyser: 'reliability' and 'practicability'. Practicability covers the features that provide information about the suitability of an analyser for specific working conditions. These practicability criteria are classified in this article and include the environment; work organization; versatility and flexibility; safety controls; staff training; and maintenance and operational costs. PMID:18924972
A method for automatically abstracting visual documents
NASA Technical Reports Server (NTRS)
Rorvig, Mark E.
1994-01-01
Visual documents--motion sequences on film, videotape, and digital recording--constitute a major source of information for the Space Agency, as well as all other government and private sector entities. This article describes a method for automatically selecting key frames from visual documents. These frames may in turn be used to represent the total image sequence of visual documents in visual libraries, hypermedia systems, and training. The algorithm reduces 51 minutes of video sequences to 134 frames, a reduction of information in the range of 700:1.
Master/Programmable-Slave Computer
NASA Technical Reports Server (NTRS)
Smaistrla, David; Hall, William A.
1990-01-01
Unique modular computer features compactness, low power, mass storage of data, multiprocessing, and choice of various input/output modes. Master processor communicates with user via usual keyboard and video display terminal. Coordinates operations of as many as 24 slave processors, each dedicated to different experiment. Each slave circuit card includes slave microprocessor and assortment of input/output circuits for communication with external equipment, with master processor, and with other slave processors. Adaptable to industrial process control with selectable degrees of automatic control, automatic and/or manual monitoring, and manual intervention.
Nier, A.O.C.
1959-08-25
A voltage switching apparatus is described for use with a mass spectrometer in the concentration analysis of several components of a gas mixture. The system automatically varies the voltage on the accelerating electrode of the mass spectrometer through a program of voltages which corresponds to the particular gas components under analysis. Automatic operation may be discontinued at any time to permit the operator to manually select any desired predetermined accelerating voltage. Further, the system may be manually adjusted to vary the accelerating voltage over a wide range.
Machine for Automatic Bacteriological Pour Plate Preparation
Sharpe, A. N.; Biggs, D. R.; Oliver, R. J.
1972-01-01
A fully automatic system for preparing poured plates for bacteriological analyses has been constructed and tested. The machine can make decimal dilutions of bacterial suspensions, dispense measured amounts into petri dishes, add molten agar, mix the dish contents, and label the dishes with sample and dilution numbers at the rate of 2,000 dishes per 8-hr day. In addition, the machine can be programmed to select different media so that plates for different types of bacteriological analysis may be made automatically from the same sample. The machine uses only the components of the media and sterile polystyrene petri dishes; requirements for all other materials, such as sterile pipettes and capped bottles of diluents and agar, are eliminated. PMID:4560475
Corner detection and sorting method based on improved Harris algorithm in camera calibration
NASA Astrophysics Data System (ADS)
Xiao, Ying; Wang, Yonghong; Dan, Xizuo; Huang, Anqi; Hu, Yue; Yang, Lianxiang
2016-11-01
In the traditional Harris corner detection algorithm, the threshold used to eliminate false corners is selected manually. In order to detect corners automatically, an improved algorithm which combines Harris with the circular boundary theory of corners is proposed in this paper. After detecting accurate corner coordinates by using the Harris and Forstner algorithms, false corners within the chessboard pattern of the calibration plate can be eliminated automatically by using circular boundary theory. Moreover, a corner sorting method based on an improved calibration plate is proposed to eliminate false background corners and sort the remaining corners in order. Experimental results show that the proposed algorithms can eliminate all false corners and sort the remaining corners correctly and automatically.
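A minimal Python sketch pairing Harris detection with a circular-boundary check of the kind described: on a chessboard, a true X-corner shows exactly four black/white transitions on a small circle around it. The radius, circle sampling, and response threshold are assumptions of this sketch.

```python
import cv2
import numpy as np

def chessboard_corners(gray, radius=5, n_samples=32):
    """gray: single-channel image. Returns (x, y) corners that pass the
    circular-boundary test."""
    resp = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    cand = np.argwhere(resp > 0.01 * resp.max())          # Harris candidates
    mean = gray.mean()
    keep = []
    for y, x in cand:
        t = 2 * np.pi * np.arange(n_samples) / n_samples  # sample a circle
        ys = np.clip((y + radius * np.sin(t)).astype(int), 0, gray.shape[0] - 1)
        xs = np.clip((x + radius * np.cos(t)).astype(int), 0, gray.shape[1] - 1)
        ring = gray[ys, xs] > mean                        # binarized boundary
        transitions = np.count_nonzero(ring != np.roll(ring, 1))
        if transitions == 4:                              # X-corner signature
            keep.append((int(x), int(y)))
    return keep
```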
A semi-automatic traffic sign detection, classification, and positioning system
NASA Astrophysics Data System (ADS)
Creusen, I. M.; Hazelhoff, L.; de With, P. H. N.
2012-01-01
The availability of large-scale databases containing street-level panoramic images offers the possibility to perform semi-automatic surveying of real-world objects such as traffic signs. These inventories can be performed significantly more efficiently than using conventional methods. Governmental agencies are interested in these inventories for maintenance and safety reasons. This paper introduces a complete semi-automatic traffic sign inventory system. The system consists of several components. First, a detection algorithm locates the 2D position of the traffic signs in the panoramic images. Second, a classification algorithm is used to identify the traffic sign. Third, the 3D position of the traffic sign is calculated using the GPS position of the photographs. Finally, the results are listed in a table for quick inspection and are also visualized in a web browser.
A fast and automatic mosaic method for high-resolution satellite images
NASA Astrophysics Data System (ADS)
Chen, Hongshun; He, Hui; Xiao, Hongyu; Huang, Jing
2015-12-01
We propose a fast and fully automatic mosaic method for high-resolution satellite images. First, the overlapping rectangle is computed according to the geographical locations of the reference and mosaic images, and feature points are extracted from both images by a scale-invariant feature transform (SIFT) algorithm, only within the overlapped region. Then, the RANSAC method is used to match the feature points of both images. Finally, the two images are fused into a seamless panoramic image by a simple linear weighted fusion method or another method. The proposed method is implemented in C++ based on OpenCV and GDAL, and tested with Worldview-2 multispectral images with a spatial resolution of 2 meters. Results show that the proposed method can detect feature points efficiently and mosaic images automatically.
ExcelAutomat: a tool for systematic processing of files as applied to quantum chemical calculations
NASA Astrophysics Data System (ADS)
Laloo, Jalal Z. A.; Laloo, Nassirah; Rhyman, Lydia; Ramasami, Ponnadurai
2017-07-01
The processing of the input and output files of quantum chemical calculations often necessitates a spreadsheet as a key component of the workflow. Spreadsheet packages with a built-in programming language editor can automate the steps involved and thus provide a direct link between processing files and the spreadsheet. This helps to reduce user-interventions as well as the need to switch between different programs to carry out each step. The ExcelAutomat tool is the implementation of this method in Microsoft Excel (MS Excel) using the default Visual Basic for Application (VBA) programming language. The code in ExcelAutomat was adapted to work with the platform-independent open-source LibreOffice Calc, which also supports VBA. ExcelAutomat provides an interface through the spreadsheet to automate repetitive tasks such as merging input files, splitting, parsing and compiling data from output files, and generation of unique filenames. Selected extracted parameters can be retrieved as variables which can be included in custom codes for a tailored approach. ExcelAutomat works with Gaussian files and is adapted for use with other computational packages including the non-commercial GAMESS. ExcelAutomat is available as a downloadable MS Excel workbook or as a LibreOffice workbook.
Zheng, Rencheng; Yamabe, Shigeyuki; Nakano, Kimihiko; Suda, Yoshihiro
2015-01-01
Nowadays insight into human-machine interaction is a critical topic with the large-scale development of intelligent vehicles. Biosignal analysis can provide a deeper understanding of driver behaviors that may indicate rationally practical use of automatic technology. Therefore, this study concentrates on biosignal analysis to quantitatively evaluate the mental stress of drivers during automatic driving of trucks, with vehicles set a close gap distance apart to reduce air resistance and save energy consumption. By application of two wearable sensor systems, continuous measurement was realized for palmar perspiration and masseter electromyography, and a biosignal processing method was proposed to assess mental stress levels. In a driving simulator experiment, ten participants completed automatic driving with 4, 8, and 12 m gap distances from the preceding vehicle, and manual driving with about a 25 m gap distance as a reference. It was found that mental stress significantly increased when the gap distances decreased, and an abrupt increase in the mental stress of drivers was also observed accompanying a sudden change of the gap distance during automatic driving, which corresponded to significantly higher ride discomfort according to subjective reports. PMID:25738768
Automatic thermographic image defect detection of composites
NASA Astrophysics Data System (ADS)
Luo, Bin; Liebenberg, Bjorn; Raymont, Jeff; Santospirito, SP
2011-05-01
Detecting defects, and especially reliably measuring defect sizes, are critical objectives in automatic NDT defect detection applications. In this work, the Sentence software is proposed for the analysis of pulsed thermography and near IR images of composite materials. Furthermore, the Sentence software delivers an end-to-end, user friendly platform for engineers to perform complete manual inspections, as well as tools that allow senior engineers to develop inspection templates and profiles, reducing the requisite thermographic skill level of the operating engineer. Finally, the Sentence software can also offer complete independence of operator decisions by the fully automated "Beep on Defect" detection functionality. The end-to-end automatic inspection system includes sub-systems for defining a panel profile, generating an inspection plan, controlling a robot-arm and capturing thermographic images to detect defects. A statistical model has been built to analyze the entire image, evaluate grey-scale ranges, import sentencing criteria and automatically detect impact damage defects. A full width half maximum algorithm has been used to quantify the flaw sizes. The identified defects are imported into the sentencing engine which then sentences (automatically compares analysis results against acceptance criteria) the inspection by comparing the most significant defect or group of defects against the inspection standards.
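A minimal Python sketch of full-width-half-maximum flaw sizing on a 1D contrast profile taken across a defect in a thermographic image; the linear interpolation at the half-maximum crossings and the pixel-pitch scaling are assumptions of this sketch.

```python
import numpy as np

def fwhm(profile, pixel_pitch_mm=1.0):
    """profile: 1D array of defect contrast (background-subtracted).
    Returns the full width at half maximum in millimetres."""
    p = np.asarray(profile, dtype=float)
    half = p.max() / 2.0
    above = np.where(p >= half)[0]
    if above.size < 2:
        return 0.0
    i0, i1 = above[0], above[-1]
    # interpolate each half-maximum crossing for sub-pixel width
    left = i0 - (p[i0] - half) / (p[i0] - p[i0 - 1]) if i0 > 0 else i0
    right = i1 + (p[i1] - half) / (p[i1] - p[i1 + 1]) if i1 < p.size - 1 else i1
    return (right - left) * pixel_pitch_mm
```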
The QT Scale: A Weight Scale Measuring the QTc Interval.
Couderc, Jean-Philippe; Beshaw, Connor; Niu, Xiaodan; Serrano-Finetti, Ernesto; Casas, Oscar; Pallas-Areny, Ramon; Rosero, Spencer; Zareba, Wojciech
2017-01-01
Despite the strong evidence of the clinical utility of QTc prolongation as a surrogate marker of cardiac risk, QTc measurement is not part of clinical routine either in hospitals or in physician offices. We evaluated a novel device ("the QT scale") to measure heart rate (HR) and the QTc interval. The QT scale is a weight scale embedding an ECG acquisition system with four limb sensors (feet and hands: leads I, II, and III). We evaluated the reliability of the QT scale in healthy subjects (cohort 1) and cardiac patients (cohorts 2 and 3), considering a learning cohort (cohort 2) and two validation cohorts. The QT scale and the standard 12-lead recorder were compared using the intraclass correlation coefficient (ICC) in cohorts 2 and 3. Absolute values of heart rate and QTc intervals between manual and automatic measurements using ECGs from the QT scale and a clinical device were compared in cohort 1. We enrolled 16 subjects in cohort 1 (8 w, 8 m; 32 ± 8 vs 34 ± 10 years, P = 0.7), 51 patients in cohort 2 (13 w, 38 m; 61 ± 16 vs 58 ± 18 years, P = 0.6), and 13 AF patients in cohort 3 (4 w, 9 m; 63 ± 10 vs 64 ± 10 years, P = 0.9). Similar automatic heart rate and QTc were delivered by the scale and the clinical device in cohort 1: paired differences in RR and QTc were -7 ± 34 milliseconds (P = 0.37) and 3.4 ± 28.6 milliseconds (P = 0.64), respectively. Measurement stability was slightly lower in ECGs from the QT scale than in those from the clinical device (ICC: 91% vs 80%) in cohort 3. The "QT scale device" delivers valid heart rate and QTc interval measurements. © 2016 Wiley Periodicals, Inc.
Atlas ranking and selection for automatic segmentation of the esophagus from CT scans
NASA Astrophysics Data System (ADS)
Yang, Jinzhong; Haas, Benjamin; Fang, Raymond; Beadle, Beth M.; Garden, Adam S.; Liao, Zhongxing; Zhang, Lifei; Balter, Peter; Court, Laurence
2017-12-01
In radiation treatment planning, the esophagus is an important organ-at-risk that should be spared in patients with head and neck cancer or thoracic cancer who undergo intensity-modulated radiation therapy. However, automatic segmentation of the esophagus from CT scans is extremely challenging because of the structure's inconsistent intensity, low contrast against the surrounding tissues, complex and variable shape and location, and random air bubbles. The goal of this study is to develop an online atlas selection approach that chooses a subset of optimal atlases for multi-atlas segmentation to delineate the esophagus automatically. We performed atlas selection in two phases. In the first phase, we used the correlation coefficient of the image content in a cubic region between each atlas and the new image to evaluate their similarity and to rank the atlases in an atlas pool. A subset of atlases was selected based on this ranking, and deformable image registration was performed to generate deformed contours and deformed images in the new image space. In the second phase of atlas selection, we used the Kullback-Leibler divergence to measure the similarity of local-intensity histograms between the new image and each of the deformed images, and these measurements were used to rank the previously selected atlases. Deformed contours were overlapped sequentially, from the most to the least similar, and the overlap ratio was examined. We further identified a subset of optimal atlases by analyzing the variation of the overlap ratio versus the number of atlases. The deformed contours from these optimal atlases were fused using a modified simultaneous truth and performance level estimation algorithm to produce the final segmentation. The approach was validated with promising results using both internal data sets (21 head and neck cancer patients and 15 thoracic cancer patients) and external data sets (30 thoracic patients).
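The two ranking measures described here reduce to a few lines of NumPy. The sketch below is a minimal illustration, not the authors' code: `ncc` scores the cubic-region similarity used in phase 1, and `kl_divergence` scores the local-intensity histograms used in phase 2; the bin count and intensity range are illustrative assumptions.

```python
import numpy as np

def ncc(a, b):
    """Phase 1: correlation coefficient between two image blocks."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def kl_divergence(p_img, q_img, bins=64, rng=(0, 255)):
    """Phase 2: KL divergence between local intensity histograms."""
    p, _ = np.histogram(p_img, bins=bins, range=rng)
    q, _ = np.histogram(q_img, bins=bins, range=rng)
    p = p / (p.sum() + 1e-12) + 1e-12   # normalize and avoid log(0)
    q = q / (q.sum() + 1e-12) + 1e-12
    return float(np.sum(p * np.log(p / q)))

def rank_atlases(new_roi, atlas_rois):
    """Indices of atlases sorted from most to least similar to new_roi."""
    scores = [ncc(new_roi, a) for a in atlas_rois]
    return list(np.argsort(scores)[::-1])
```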
Dynamic multiplexed analysis method using ion mobility spectrometer
Belov, Mikhail E [Richland, WA
2010-05-18
A method for multiplexed analysis using an ion mobility spectrometer (IMS), in which the effectiveness and efficiency of the multiplexed method are optimized by automatically adjusting the rates of passage of analyte materials through the IMS drift tube during operation of the system. This automatic adjustment is performed by the IMS instrument itself after determining the appropriate levels of adjustment according to the method of the present invention. In one example, the adjustment of the rates of passage for these materials is determined by quantifying the total number of analyte molecules delivered to the ion trap in a preselected period of time, comparing this number to the charge capacity of the ion trap, selecting a gate opening sequence, and implementing the selected gate opening sequence to obtain a preselected rate of analytes within the IMS drift tube.
An automatic optimum kernel-size selection technique for edge enhancement
Chavez, Pat S.; Bauer, Brian P.
1982-01-01
Edge enhancement is a technique that can be considered, to first order, a correction for the modulation transfer function of an imaging system. Digital imaging systems sample a continuous function at discrete intervals, so high-frequency information cannot be recorded at the same precision as lower-frequency data. Because of this, fine detail or edge information in digital images is lost. Spatial filtering techniques can be used to enhance the fine-detail information that does exist in the digital image, but the appropriate filter size depends on the type of area being processed. The authors have developed a technique that uses the horizontal first difference to automatically select the optimum kernel size for enhancing the edges contained in the image.
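A rough sketch of the idea, under our own assumptions, is shown below: the mean absolute horizontal first difference serves as a busyness measure, and busier imagery is mapped to a smaller kernel. The specific mapping from activity to kernel size is hypothetical, not the authors' rule.

```python
import numpy as np

def select_kernel_size(image, sizes=(3, 5, 7, 9, 11)):
    """Map horizontal first-difference activity to a kernel size.

    Busier (high-frequency) imagery gets a smaller kernel; smoother
    imagery gets a larger one. The quantization rule is hypothetical.
    """
    img = np.asarray(image, dtype=float)
    # mean absolute horizontal first difference as a busyness measure
    activity = np.abs(np.diff(img, axis=1)).mean()
    busy = activity / (img.max() - img.min() + 1e-12)
    idx = int((1.0 - min(busy * 10.0, 1.0)) * (len(sizes) - 1))
    return sizes[idx]
```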
Melles, Reinhilde J; ter Kuile, Moniek M; Dewitte, Marieke; van Lankveld, Jacques J D M; Brauer, Marieke; de Jong, Peter J
2014-03-01
The intense fear response to vaginal penetration in women with lifelong vaginismus, who have never been able to experience coitus, may reflect negative automatic and deliberate appraisals of vaginal penetration stimuli, which might be modified by exposure treatment. The aim of this study was to examine whether (i) sexual stimuli elicit relatively strong automatic and deliberate threat associations, as well as relatively negative automatic and deliberate global affective associations, in women with vaginismus compared with symptom-free women; and (ii) these automatic and more deliberate attitudes can be modified by therapist-aided exposure treatment. A single-target Implicit Association Test (st-IAT) was used to index automatic threat associations, and an Affective Simon Task (AST) to index global automatic affective associations. Participants were women with lifelong vaginismus (N = 68) and women without sexual problems (N = 70). The vaginismus group was randomly allocated to treatment (n = 34) or a waiting-list control condition (n = 34). Indices of automatic threat associations were obtained with the st-IAT and of automatic global affective associations with the AST; visual analogue scales (VAS) were used to assess deliberate appraisals of the sexual pictures (fear and global positive affect). More deliberate fear and weaker global positive affective associations with sexual stimuli were found in women with vaginismus. Following therapist-aided exposure treatment, the strength of fear was strongly reduced, whereas global positive affective associations were strengthened. Automatic associations did not differ between women with and without vaginismus and did not change following treatment. Relatively stronger negative (threat or global affect) associations with sexual stimuli in vaginismus thus appeared restricted to the deliberate level. Therapist-aided exposure treatment was effective in reducing subjective fear of sexual penetration stimuli and led to more global positive affective associations with sexual stimuli. The impact of exposure might be further improved by strengthening the association between vaginal penetration and positive affect (e.g., by using counter-conditioning techniques). © 2013 International Society for Sexual Medicine.
A procedural method for the efficient implementation of full-custom VLSI designs
NASA Technical Reports Server (NTRS)
Belk, P.; Hickey, N.
1987-01-01
An embedded language system for the layout of very large scale integration (VLSI) circuits is examined. It is shown that through the judicious use of this system, a large variety of circuits can be designed with circuit density and performance comparable to traditional full-custom design methods, but with design costs more comparable to semi-custom design methods. The high performance of this methodology is attributable to the flexibility of procedural descriptions of VLSI layouts and to a number of automatic and semi-automatic tools within the system.
Nimbi, Filippo Maria; Tripodi, Francesca; Simonelli, Chiara; Nobre, Pedro
2018-03-01
The Sexual Modes Questionnaire (SMQ) is a validated and widely used tool to assess the association among negative automatic thoughts, emotions, and sexual response during sexual activity in men and women. The aim was to test the psychometric characteristics of the Italian version of the SMQ, focusing on the Automatic Thoughts subscale (SMQ-AT). After linguistic translation, the psychometric properties (internal consistency, construct validity, and discriminant validity) were evaluated. 1,051 participants (425 men and 626 women; 776 healthy and 275 from clinical groups complaining of sexual problems) took part in the present study. 2 confirmatory factor analyses were conducted to test the fit of the original factor structures of the SMQ versions. In addition, 2 principal component analyses were performed to identify 2 new factorial structures that were further validated with confirmatory factor analyses. Cronbach α and composite reliability were used as internal consistency measures, and differences between clinical and control groups were examined to test the discriminant validity of the male and female versions. The associations with emotions and sexual functioning measures are also reported. Principal component analyses identified 5 factors in the male version: erection concerns thoughts, lack of erotic thoughts, age- and body-related thoughts, negative thoughts toward sex, and worries about partner's evaluation and failure anticipation thoughts. In the female version 6 factors were found: sexual abuse thoughts, lack of erotic thoughts, low self-body image thoughts, failure and disengagement thoughts, sexual passivity and control, and partner's lack of affection. Confirmatory factor analysis supported the adequacy of the factor structure for men and women. Moreover, the SMQ showed a strong association with emotional response and sexual functioning, differentiating between clinical and control groups. This measure is useful for evaluating patients, designing interventions focused on negative automatic thoughts during sexual activity, and developing multicultural research. This study reports on the translation and validation of the Italian version of a clinically useful and widely used measure assessing automatic thoughts during sexual activity. Limits regarding the sampling technique and use of the Automatic Thoughts subscale are discussed in the article. The present findings support the validity and internal consistency of the Italian version of the SMQ-AT and allow the assessment of negative automatic thoughts during sexual activity for clinical and research purposes. Nimbi FM, Tripodi F, Simonelli C, Nobre P. Sexual Modes Questionnaire (SMQ): Translation and Psychometric Properties of the Italian Version of the Automatic Thought Scale. J Sex Med 2018;15:396-409. Copyright © 2018 International Society for Sexual Medicine. Published by Elsevier Inc. All rights reserved.
Imbalance in Multiple Sclerosis: A Result of Slowed Spinal Somatosensory Conduction
Cameron, Michelle H.; Horak, Fay B.; Herndon, Robert R.; Bourdette, Dennis
2009-01-01
Balance problems and falls are common in people with multiple sclerosis (MS), but their cause and nature are not well understood. MS affects many areas of the central nervous system that can impact postural responses to maintain balance, including the cerebellum and the spinal cord. Cerebellar balance disorders are associated with normal latencies but reduced scaling of postural responses. We therefore examined the latency and scaling of automatic postural responses, and their relationship to somatosensory evoked potentials (SSEPs), in 10 people with MS and imbalance and 10 age- and sex-matched healthy controls. The latency and scaling of postural responses to backward surface translations of 5 different velocities and amplitudes, and the latency of spinal and supraspinal somatosensory conduction, were examined. Subjects with MS had large but markedly delayed automatic postural responses compared with controls (latencies: 161 ± 31 ms vs 102 ± 21 ms, p < 0.01), and these postural response latencies correlated with the latencies of their spinal SSEPs (r = 0.73, p < 0.01). Subjects with MS also had normal or excessive scaling of postural response amplitude to perturbation velocity and amplitude. Longer-latency postural responses were associated with less velocity scaling and more amplitude scaling. Balance deficits in people with MS appear to be caused by slowed spinal somatosensory conduction and not by cerebellar involvement. People with MS appear to compensate for their slowed spinal somatosensory conduction by increasing the amplitude scaling and the magnitude of their postural responses. PMID:18570015
Automatically Determining Scale Within Unstructured Point Clouds
NASA Astrophysics Data System (ADS)
Kadamen, Jayren; Sithole, George
2016-06-01
Three-dimensional models obtained from imagery have an arbitrary scale and therefore have to be scaled. Automatically scaling these models requires the detection of objects in them, which can be computationally intensive. Real-time object detection may pose problems for applications such as indoor navigation. This investigation poses the idea that relational cues, specifically height ratios, within indoor environments may offer an easier means to obtain scales for models created from imagery. The investigation aimed to show two things: (a) that the size of objects, especially their height off the ground, is consistent within an environment, and (b) that based on this consistency, objects can be identified and their general size used to scale a model. To test the idea, a hypothesis is first tested on a terrestrial lidar scan of an indoor environment. Later, as a proof of concept, the same test is applied to a model created from imagery. The most notable finding was that objects can be detected more readily by studying the ratio between the dimensions of objects whose dimensions are defined by human physiology. For example, the dimensions of desks and chairs are related to the height of an average person. In the test, the difference between generalised and actual dimensions of objects was assessed. A maximum difference of 3.96% (2.93 cm) was observed from automated scaling. By analysing the ratio between the heights (distance from the floor) of the tops of objects in a room, identification was also achieved.
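A minimal sketch of the scaling step, assuming reference heights for a few physiology-driven objects, is given below; the reference table and the simple mean-of-ratios estimator are illustrative assumptions rather than the paper's procedure.

```python
import numpy as np

# Hypothetical reference heights (metres) of physiology-driven objects.
REFERENCE_HEIGHTS = {"desk_top": 0.75, "chair_seat": 0.45, "door_top": 2.0}

def scale_from_height_ratio(model_heights, reference=REFERENCE_HEIGHTS):
    """Global scale factor for an arbitrarily scaled model.

    model_heights maps object labels to the heights of object tops above
    the floor, measured in model units; the scale is the mean ratio of
    true height to model height.
    """
    ratios = [reference[k] / h for k, h in model_heights.items()
              if k in reference]
    return float(np.mean(ratios))

# A desk top at 1.5 model units and a door top at 4.0 imply 1 unit = 0.5 m.
print(scale_from_height_ratio({"desk_top": 1.5, "door_top": 4.0}))
```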
Mlynek, Georg; Lehner, Anita; Neuhold, Jana; Leeb, Sarah; Kostan, Julius; Charnagalov, Alexej; Stolt-Bergner, Peggy; Djinović-Carugo, Kristina; Pinotsis, Nikos
2014-06-01
Expression in Escherichia coli represents the simplest and most cost-effective means for the production of recombinant proteins. This is a routine task in structural biology and biochemistry, where milligrams of the target protein are required at high purity and monodispersity. To achieve these criteria, the user often needs to screen several constructs under different expression and purification conditions in parallel. We describe a pipeline, implemented in the Center for Optimized Structural Studies, that enables the systematic screening of expression and purification conditions for recombinant proteins and relies on a series of logical decisions. We first use bioinformatics tools to design a series of protein fragments, which we clone in parallel and subsequently screen at small scale for optimal expression and purification conditions. Based on a scoring system that assesses soluble expression, we then select the top-ranking targets for large-scale purification. In establishing our pipeline, emphasis was put on streamlining the processes so that they can be easily, though not necessarily, automated. In a typical run of about 2 weeks, we are able to prepare and perform small-scale expression screens for 20-100 different constructs followed by large-scale purification of at least 4-6 proteins. The major advantage of our approach is its flexibility, which allows for easy adoption, either partially or entirely, by any average hypothesis-driven laboratory in a manual or robot-assisted manner.
Causes of cine image quality deterioration in cardiac catheterization laboratories.
Levin, D C; Dunham, L R; Stueve, R
1983-10-01
Deterioration of cineangiographic image quality can result from malfunctions or technical errors at a number of points along the cine imaging chain: generator and automatic brightness control, x-ray tube, x-ray beam geometry, image intensifier, optics, cine camera, cine film, film processing, and cine projector. Such malfunctions or errors can result in loss of image contrast, loss of spatial resolution, improper control of film optical density (brightness), or some combination thereof. While the electronic and photographic technology involved is complex, physicians who perform cardiac catheterization should be conversant with the problems and what can be done to solve them. Catheterization laboratory personnel have control over a number of factors that directly affect image quality, including radiation dose rate per cine frame, kilovoltage or pulse width (depending on type of automatic brightness control), cine run time, selection of small or large focal spot, proper object-intensifier distance and beam collimation, aperture of the cine camera lens, selection of cine film, processing temperature, processing immersion time, and selection of developer.
Chatter detection in milling process based on VMD and energy entropy
NASA Astrophysics Data System (ADS)
Liu, Changfu; Zhu, Lida; Ni, Chenbing
2018-05-01
This paper presents a novel approach to detecting milling chatter based on Variational Mode Decomposition (VMD) and energy entropy. VMD has already been employed in feature extraction from non-stationary signals, but parameters such as the number of modes (K) and the quadratic penalty (α) need to be selected empirically when the raw signal is decomposed. To address the selection of K and α, an automatic, kurtosis-based selection method for the VMD parameters is proposed in this paper. When chatter occurs in the milling process, energy is absorbed into the chatter frequency bands; to detect these bands automatically, a chatter detection method based on energy entropy is presented. A vibration signal containing a chatter frequency is simulated, and three groups of experiments representing three cutting conditions are conducted. To verify the effectiveness of the presented method, chatter feature extraction is successfully performed on both the simulated and experimental signals. The simulation and experimental results show that the proposed method can effectively detect chatter.
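The energy-entropy indicator itself is straightforward to sketch. Assuming the raw vibration signal has already been decomposed into modes (e.g., by a VMD implementation), the function below computes the entropy of the energy distribution over the modes; this is an illustration of the indicator, not the authors' detection pipeline.

```python
import numpy as np

def energy_entropy(modes):
    """Entropy of the energy distribution over decomposed modes.

    `modes` holds the K modes returned by the VMD step. Stable cutting
    spreads energy across modes (high entropy); chatter concentrates
    energy in a few bands, so the entropy drops.
    """
    e = np.array([np.sum(np.asarray(m, dtype=float) ** 2) for m in modes])
    p = e / (e.sum() + 1e-12)              # energy fraction per mode
    return float(-np.sum(p * np.log(p + 1e-12)))
```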
De Neys, Wim
2006-06-01
Human reasoning has been shown to overly rely on intuitive, heuristic processing instead of a more demanding analytic inference process. Four experiments tested the central claim of current dual-process theories that analytic operations involve time-consuming executive processing whereas the heuristic system would operate automatically. Participants solved conjunction fallacy problems and indicative and deontic selection tasks. Experiment 1 established that making correct analytic inferences demanded more processing time than did making heuristic inferences. Experiment 2 showed that burdening the executive resources with an attention-demanding secondary task decreased correct, analytic responding and boosted the rate of conjunction fallacies and indicative matching card selections. Results were replicated in Experiments 3 and 4 with a different secondary-task procedure. Involvement of executive resources for the deontic selection task was less clear. Findings validate basic processing assumptions of the dual-process framework and complete the correlational research programme of K. E. Stanovich and R. F. West (2000).
NASA Astrophysics Data System (ADS)
Wang, Jianing; Liu, Yuan; Noble, Jack H.; Dawant, Benoit M.
2017-02-01
Medical image registration establishes a correspondence between images of biological structures and is at the core of many applications. Commonly used deformable image registration methods depend on a good preregistration initialization. The initialization can be performed by localizing homologous landmarks and calculating a point-based transformation between the images. The selection of landmarks, however, is important. In this work, we present a learning-based method to automatically find a set of robust landmarks in 3D MR image volumes of the head to initialize non-rigid transformations. To validate our method, these selected landmarks are localized in unknown image volumes and used to compute a smoothing thin-plate spline transformation that registers the atlas to the volumes. The transformed atlas image is then used as the preregistration initialization of an intensity-based non-rigid registration algorithm. We show that the registration accuracy of this algorithm is statistically significantly improved when using the presented registration initialization over a standard intensity-based affine registration.
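As a rough illustration of the initialization step, the sketch below fits a smoothing thin-plate spline between hypothetical corresponding landmark sets using SciPy's RBFInterpolator; the coordinates, noise level, and smoothing value are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical corresponding landmarks: rows are (x, y, z) positions of
# the same anatomical points in the atlas and in the new volume.
atlas_pts = np.array([[10., 20., 30.], [40., 25., 32.], [22., 60., 28.],
                      [35., 45., 50.], [15., 50., 45.]])
target_pts = atlas_pts + np.random.default_rng(0).normal(0, 0.5, atlas_pts.shape)

# Smoothing > 0 relaxes exact interpolation to tolerate localization error.
tps = RBFInterpolator(atlas_pts, target_pts,
                      kernel='thin_plate_spline', smoothing=1.0)

# Warp arbitrary atlas coordinates into the target volume's space.
print(tps(np.array([[20., 30., 35.]])))
```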
Khotanlou, Hassan; Afrasiabi, Mahlagha
2012-10-01
This paper presents a new feature selection approach for automatically extracting multiple sclerosis (MS) lesions in three-dimensional (3D) magnetic resonance (MR) images. The presented method is applicable to different types of MS lesions. In this method, T1, T2, and fluid-attenuated inversion recovery (FLAIR) images are first preprocessed. In the next phase, effective features for extracting MS lesions are selected using a genetic algorithm (GA). The fitness function of the GA is the Similarity Index (SI) of a support vector machine (SVM) classifier. The results obtained on different types of lesions were evaluated by comparison with manual segmentations. The algorithm was evaluated on 15 real 3D MR images using several measures. The SI between MS regions determined by the proposed method and by radiologists was 87% on average. Experiments and comparisons with other methods show the effectiveness and efficiency of the proposed approach.
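A compact sketch of the GA wrapper is given below. It uses cross-validated SVM accuracy as a stand-in fitness (the paper uses the Similarity Index against manual segmentations, which requires the imaging data); the population size, selection, crossover, and mutation settings are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    """Stand-in fitness: cross-validated SVM accuracy on the feature subset."""
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(), X[:, mask], y, cv=3).mean()

def ga_select(X, y, pop=20, gens=15, p_mut=0.05):
    """Binary-mask GA: truncation selection, one-point crossover,
    bit-flip mutation."""
    n = X.shape[1]
    population = rng.random((pop, n)) < 0.5
    for _ in range(gens):
        scores = np.array([fitness(ind, X, y) for ind in population])
        parents = population[np.argsort(scores)[::-1][: pop // 2]]
        children = []
        for _ in range(pop - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n)
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(n) < p_mut
            children.append(child)
        population = np.vstack([parents, children])
    scores = np.array([fitness(ind, X, y) for ind in population])
    return population[np.argmax(scores)]

if __name__ == "__main__":
    from sklearn.datasets import make_classification
    X, y = make_classification(n_samples=120, n_features=20,
                               n_informative=5, random_state=0)
    print(ga_select(X, y).nonzero()[0])   # indices of selected features
```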
A semi-automatic 2D-to-3D video conversion with adaptive key-frame selection
NASA Astrophysics Data System (ADS)
Ju, Kuanyu; Xiong, Hongkai
2014-11-01
To compensate for the deficit of 3D content, 2D-to-3D video conversion has recently attracted more attention from both industrial and academic communities. Semi-automatic 2D-to-3D conversion, which estimates the depth of non-key-frames from key-frames, is more desirable owing to its advantage of balancing labor cost and 3D effects. The location of key-frames plays a role in the quality of depth propagation. This paper proposes a semi-automatic 2D-to-3D scheme with adaptive key-frame selection to make temporal continuity more reliable and to reduce the depth propagation errors caused by occlusion. Potential key-frames are localized in terms of clustered color variation and motion intensity. The distance between key-frames is also taken into account to keep the accumulated propagation errors under control and to guarantee minimal user interaction. Once their depth maps are aligned with user interaction, the depth maps of non-key-frames are automatically propagated by shifted bilateral filtering. Considering that the depth of objects may change due to object motion or camera zoom, a bi-directional depth propagation scheme is adopted in which a non-key frame is interpolated from two adjacent key frames. The experimental results show that the proposed scheme performs better than existing 2D-to-3D schemes with a fixed key-frame interval.
ERIC Educational Resources Information Center
Yoncheva, Yuliya N.; Maurer, Urs; Zevin, Jason D.; McCandliss, Bruce D.
2013-01-01
ERP responses to spoken words are sensitive to both rhyming effects and effects of associated spelling patterns. Are such effects automatically elicited by spoken words or dependent on selectively attending to phonology? To address this question, ERP responses to spoken word pairs were investigated under two equally demanding listening tasks that…
2015-12-01
however, solutions to these issues. A weighted mean can be used in place of the grand mean and the STATA software automatically handles the assignment of...covariance matrices between groups (i.e., sphericity) using the multivariate test of means provided in STATA 12.1. This test checks whether or not
Developing operation algorithms for vision subsystems in autonomous mobile robots
NASA Astrophysics Data System (ADS)
Shikhman, M. V.; Shidlovskiy, S. V.
2018-05-01
The paper analyzes algorithms for selecting keypoints in images for the subsequent automatic detection of people and obstacles. The algorithm is based on the histogram of oriented gradients (HOG) and the support vector machine method. The combination of these methods allows successful selection of dynamic and static objects. The algorithm can be applied in various autonomous mobile robots.
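OpenCV ships a HOG descriptor with a pretrained pedestrian SVM, which gives a minimal working sketch of the HOG + SVM pipeline this abstract builds on; the file name is hypothetical, and the detector parameters are common defaults rather than the authors' settings.

```python
import cv2

# OpenCV's HOG descriptor with its pretrained pedestrian SVM.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("frame.png")            # hypothetical camera frame
rects, weights = hog.detectMultiScale(frame, winStride=(8, 8),
                                      padding=(16, 16), scale=1.05)
for (x, y, w, h), score in zip(rects, weights):
    # draw a box around each detected person
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```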
Spelling Test Generator--Volume 1: English. [CD-ROM].
ERIC Educational Resources Information Center
Aud, Joel; DeWolfe, Rosemary; Gintz, Christopher; Griswold, Scott; Hefter, Richard; Lowery, Adam; Richards, Maureen; Yi, Song Choi
This software product makes the manipulation of the more than 3000 most commonly used words in the English language easy to select and manipulate into various activities for elementary and middle school students. Users of the program have a variety of options: the program can automatically select words based on their age/grade level, frequency of…
IADE: a system for intelligent automatic design of bioisosteric analogs
NASA Astrophysics Data System (ADS)
Ertl, Peter; Lewis, Richard
2012-11-01
IADE, a software system supporting molecular modellers through the automatic design of non-classical bioisosteric analogs, scaffold hopping and fragment growing, is presented. The program combines sophisticated cheminformatics functionalities for constructing novel analogs and filtering them based on their drug-likeness and synthetic accessibility using automatic structure-based design capabilities: the best candidates are selected according to their similarity to the template ligand and to their interactions with the protein binding site. IADE works in an iterative manner, improving the fitness of designed molecules in every generation until structures with optimal properties are identified. The program frees molecular modellers from routine, repetitive tasks, allowing them to focus on analysis and evaluation of the automatically designed analogs, considerably enhancing their work efficiency as well as the area of chemical space that can be covered. The performance of IADE is illustrated through a case study of the design of a nonclassical bioisosteric analog of a farnesyltransferase inhibitor—an analog that has won a recent "Design a Molecule" competition.
Semi-automatic brain tumor segmentation by constrained MRFs using structural trajectories.
Zhao, Liang; Wu, Wei; Corso, Jason J
2013-01-01
Quantifying the volume and growth of a brain tumor is a primary prognostic measure and hence has received much attention in the medical imaging community. Most methods have sought a fully automatic segmentation, but the variability in shape and appearance of brain tumors has limited their success and further adoption in the clinic. In reaction, we present a semi-automatic brain tumor segmentation framework for multi-channel magnetic resonance (MR) images. This framework does not require prior model construction and only requires manual labels on one automatically selected slice. All other slices are labeled by an iterative multi-label Markov random field optimization with hard constraints. Structural trajectories (the medical image analog of optical flow) and 3D image over-segmentation are used to capture pixel correspondences between consecutive slices for pixel labeling. We show robustness and effectiveness through an evaluation on the 2012 MICCAI BRATS Challenge dataset; our results indicate superior performance to baselines and demonstrate the utility of the constrained MRF formulation.
Comparison of the efficiency between two sampling plans for aflatoxins analysis in maize
Mallmann, Adriano Olnei; Marchioro, Alexandro; Oliveira, Maurício Schneider; Rauber, Ricardo Hummes; Dilkin, Paulo; Mallmann, Carlos Augusto
2014-01-01
The variance and performance of two sampling plans for aflatoxin quantification in maize were evaluated. Eight lots of maize were sampled using two plans: manual, using a sampling spear for kernels; and automatic, using a continuous flow to collect milled maize. Total variance and the sampling, preparation, and analysis variances were determined and compared between plans through multifactor analysis of variance. Four theoretical distribution models were used to compare the aflatoxin quantification distributions in the eight maize lots. The acceptance and rejection probabilities for a lot at a given aflatoxin concentration were determined using the variance and the selected distribution model to build the operating characteristic (OC) curves. Sampling and total variance were lower for the automatic plan. The OC curve of the automatic plan reduced both consumer and producer risks in comparison to the manual plan. The automatic plan is more efficient than the manual one because it more accurately reflects the real aflatoxin contamination in maize. PMID:24948911
GIS Data Based Automatic High-Fidelity 3D Road Network Modeling
NASA Technical Reports Server (NTRS)
Wang, Jie; Shen, Yuzhong
2011-01-01
3D road models are widely used in many computer applications such as racing games and driving simulations. However, almost all high-fidelity 3D road models have been generated manually by professional artists at the expense of intensive labor. There are very few existing methods for automatically generating high-fidelity 3D road networks, especially those existing in the real world. This paper presents a novel approach that can automatically produce high-fidelity 3D road network models from real 2D road GIS data that mainly contain road centerline information. The proposed method first builds parametric representations of the road centerlines through segmentation and fitting. A basic set of civil engineering rules for road design (e.g., cross slope, superelevation, grade) is then selected in order to generate realistic road surfaces in compliance with these rules. While the proposed method applies to any type of road, this paper mainly addresses the automatic generation of complex traffic interchanges and intersections, which are the most sophisticated elements in road networks.
Hashimoto, Shinichi; Ogihara, Hiroyuki; Suenaga, Masato; Fujita, Yusuke; Terai, Shuji; Hamamoto, Yoshihiko; Sakaida, Isao
2017-08-01
Visibility in capsule endoscopic images is presently evaluated through intermittent analysis of frames selected by a physician. It is thus subjective and not quantitative. A method to automatically quantify the visibility on capsule endoscopic images has not been reported. Generally, when designing automated image recognition programs, physicians must provide a training image; this process is called supervised learning. We aimed to develop a novel automated self-learning quantification system to identify visible areas on capsule endoscopic images. The technique was developed using 200 capsule endoscopic images retrospectively selected from each of three patients. The rate of detection of visible areas on capsule endoscopic images between a supervised learning program, using training images labeled by a physician, and our novel automated self-learning program, using unlabeled training images without intervention by a physician, was compared. The rate of detection of visible areas was equivalent for the supervised learning program and for our automatic self-learning program. The visible areas automatically identified by self-learning program correlated to the areas identified by an experienced physician. We developed a novel self-learning automated program to identify visible areas in capsule endoscopic images.
Towards automatic planning for manufacturing generative processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
CALTON,TERRI L.
2000-05-24
Generative process planning describes methods process engineers use to modify manufacturing/process plans after designs are complete. A completed design may be the result of the introduction of a new product based on an old design, an assembly upgrade, or modified product designs used for a family of similar products. An engineer designs an assembly and then creates plans capturing manufacturing processes, including assembly sequences, component joining methods, part costs, labor costs, etc. When new products originate as a result of an upgrade, component geometry may change, and/or additional components and subassemblies may be added to or omitted from the original design. As a result, process engineers are forced to create new plans. This is further complicated by the fact that the process engineer must manually generate these plans for each product upgrade. To generate new assembly plans for product upgrades, engineers must manually re-specify the manufacturing plan selection criteria and re-run the planners. To remedy this problem, special-purpose assembly planning algorithms have been developed to automatically recognize design modifications and automatically apply previously defined manufacturing plan selection criteria and constraints.
Dabbah, M A; Graham, J; Petropoulos, I N; Tavakoli, M; Malik, R A
2011-10-01
Diabetic peripheral neuropathy (DPN) is one of the most common long term complications of diabetes. Corneal confocal microscopy (CCM) image analysis is a novel non-invasive technique which quantifies corneal nerve fibre damage and enables diagnosis of DPN. This paper presents an automatic analysis and classification system for detecting nerve fibres in CCM images based on a multi-scale adaptive dual-model detection algorithm. The algorithm exploits the curvilinear structure of the nerve fibres and adapts itself to the local image information. Detected nerve fibres are then quantified and used as feature vectors for classification using random forest (RF) and neural networks (NNT) classifiers. We show, in a comparative study with other well known curvilinear detectors, that the best performance is achieved by the multi-scale dual model in conjunction with the NNT classifier. An evaluation of clinical effectiveness shows that the performance of the automated system matches that of ground-truth defined by expert manual annotation. Copyright © 2011 Elsevier B.V. All rights reserved.
Using the Chandra Source-Finding Algorithm to Automatically Identify Solar X-ray Bright Points
NASA Technical Reports Server (NTRS)
Adams, Mitzi L.; Tennant, A.; Cirtain, J. M.
2009-01-01
This poster details a bright point identification technique that is used to find sources in Chandra X-ray data. The algorithm, part of a program called LEXTRCT, searches for regions of a given size that are above a minimum signal-to-noise ratio. The algorithm allows selected pixels to be excluded from the source-finding, thus allowing exclusion of saturated pixels (from flares and/or active regions). For Chandra data the noise is determined by photon counting statistics, whereas solar telescopes typically integrate a flux. The calculated signal-to-noise ratio is therefore formally incorrect, but we find we can scale the number to get reasonable results. For example, Nakakubo and Hara (1998) find 297 bright points in a September 11, 1996 Yohkoh image; with judicious selection of the signal-to-noise ratio, our algorithm finds 300 sources. To further assess the efficacy of the algorithm, we analyze a SOHO/EIT image (195 Angstroms) and compare results with those published in the literature (McIntosh and Gurman, 2005). Finally, we analyze three sets of data from Hinode, representing different parts of the decline to minimum of the solar cycle.
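A toy version of the box-based signal-to-noise search is sketched below, assuming Poisson (photon-counting) noise and a median background; the box scanning scheme and the exclusion-mask handling are our own simplifications, not LEXTRCT code.

```python
import numpy as np

def find_sources(img, box=5, min_snr=4.0, exclude=None):
    """Flag box regions whose background-subtracted counts exceed an
    S/N cut, assuming Poisson (photon-counting) noise.

    Pixels set in the boolean `exclude` mask (e.g. saturated pixels
    from flares or active regions) are zeroed before the search. For
    flux-integrating solar data the Poisson estimate is formally wrong,
    but a scale factor folded into min_snr can compensate.
    """
    img = np.asarray(img, dtype=float)
    if exclude is not None:
        img = np.where(exclude, 0.0, img)
    bg = np.median(img)
    sources = []
    for i in range(0, img.shape[0] - box, box):
        for j in range(0, img.shape[1] - box, box):
            counts = img[i:i + box, j:j + box].sum()
            signal = counts - bg * box * box
            noise = np.sqrt(max(counts, 1.0))
            if signal / noise >= min_snr:
                sources.append((i, j))
    return sources
```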
Hebart, Martin N.; Görgen, Kai; Haynes, John-Dylan
2015-01-01
The multivariate analysis of brain signals has recently sparked a great amount of interest, yet accessible and versatile tools to carry out decoding analyses are scarce. Here we introduce The Decoding Toolbox (TDT) which represents a user-friendly, powerful and flexible package for multivariate analysis of functional brain imaging data. TDT is written in Matlab and equipped with an interface to the widely used brain data analysis package SPM. The toolbox allows running fast whole-brain analyses, region-of-interest analyses and searchlight analyses, using machine learning classifiers, pattern correlation analysis, or representational similarity analysis. It offers automatic creation and visualization of diverse cross-validation schemes, feature scaling, nested parameter selection, a variety of feature selection methods, multiclass capabilities, and pattern reconstruction from classifier weights. While basic users can implement a generic analysis in one line of code, advanced users can extend the toolbox to their needs or exploit the structure to combine it with external high-performance classification toolboxes. The toolbox comes with an example data set which can be used to try out the various analysis methods. Taken together, TDT offers a promising option for researchers who want to employ multivariate analyses of brain activity patterns. PMID:25610393
Automatic Matching of Large Scale Images and Terrestrial LIDAR Based on App Synergy of Mobile Phone
NASA Astrophysics Data System (ADS)
Xia, G.; Hu, C.
2018-04-01
The digitalization of cultural heritage based on terrestrial laser scanning technology has been widely applied. High-precision scanning and high-resolution photography of cultural relics are the main methods of data acquisition. Reconstruction with a complete point cloud and high-resolution images requires the matching of images and point clouds, the acquisition of homologous feature points, data registration, etc. However, the one-to-one correspondence between an image and its corresponding point cloud depends on inefficient manual search. The effective classification and management of large numbers of images, and the matching of large-scale images with their corresponding point clouds, are therefore the focus of this research. In this paper, we propose automatic matching of large scale images and terrestrial LiDAR based on the app synergy of a mobile phone. Firstly, we develop an Android app that takes pictures and records related classification information. Secondly, all the images are automatically grouped using the recorded information. Thirdly, a matching algorithm is used to match the global and local images. According to the one-to-one correspondence between the global image and the point cloud reflection intensity image, the automatic matching of each image and its corresponding laser point cloud is realized. Finally, the mapping relationship between global images, local images, and intensity images is established from homologous feature points, enabling visual management and querying of the images.
Perceiving pain in others: validation of a dual processing model.
McCrystal, Kalie N; Craig, Kenneth D; Versloot, Judith; Fashler, Samantha R; Jones, Daniel N
2011-05-01
Accurate perception of another person's painful distress would appear to be accomplished through sensitivity to both automatic (unintentional, reflexive) and controlled (intentional, purposive) behavioural expression. We examined whether observers would construe diverse behavioural cues as falling within these domains, consistent with cognitive neuroscience findings describing activation of both automatic and controlled neuroregulatory processes. Using online survey methodology, 308 research participants rated behavioural cues as "goal directed vs. non-goal directed," "conscious vs. unconscious," "uncontrolled vs. controlled," "fast vs. slow," "intentional (deliberate) vs. unintentional," "stimulus driven (obligatory) vs. self driven," and "requiring contemplation vs. not requiring contemplation." The behavioural cues were the 39 items provided by the PROMIS pain behaviour bank, constructed to be representative of the diverse possibilities for pain expression. Inter-item correlations among rating scales provided evidence of sufficient internal consistency justifying a single score on an automatic/controlled dimension (excluding the inconsistent fast vs. slow scale). An initial exploratory factor analysis on 151 participant data sets yielded factors consistent with "controlled" and "automatic" actions, as well as behaviours characterized as "ambiguous." A confirmatory factor analysis using the remaining 151 data sets replicated EFA findings, supporting theoretical predictions that observers would distinguish immediate, reflexive, and spontaneous reactions (primarily facial expression and paralinguistic features of speech) from purposeful and controlled expression (verbal behaviour, instrumental behaviour requiring ongoing, integrated responses). There are implicit dispositions to organize cues signaling pain in others into the well-defined categories predicted by dual process theory. Copyright © 2011 International Association for the Study of Pain. Published by Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Halyo, N.
1983-01-01
The design and development of a 3-D Digital Integrated Automatic Landing System (DIALS) for the Terminal Configured Vehicle (TCV) Research Aircraft, a B-737-100, is described. The system was designed using sampled-data Linear Quadratic Gaussian (LQG) methods, resulting in a direct digital design with a modern control structure consisting of a Kalman filter followed by a control gain matrix, all operating at 10 Hz. DIALS uses Microwave Landing System (MLS) position, body-mounted accelerometers, and on-board sensors usually available on commercial aircraft, but does not use inertial platforms. The phases of the final approach considered are localizer and glideslope capture, which may be performed simultaneously; localizer and steep glideslope track or hold; crab/decrab; and flare to touchdown. DIALS captures, tracks and flares from steep glideslopes ranging from 2.5 deg to 5.5 deg, selected prior to glideslope capture. DIALS is the first modern-control-design automatic landing system to be successfully flight tested. The results of an initial nonlinear simulation are presented here.
Automatic segmentation of the prostate on CT images using deep learning and multi-atlas fusion
NASA Astrophysics Data System (ADS)
Ma, Ling; Guo, Rongrong; Zhang, Guoyi; Tade, Funmilayo; Schuster, David M.; Nieh, Peter; Master, Viraj; Fei, Baowei
2017-02-01
Automatic segmentation of the prostate on CT images has many applications in prostate cancer diagnosis and therapy. However, prostate CT image segmentation is challenging because of the low contrast of soft tissue on CT images. In this paper, we propose an automatic segmentation method that combines deep learning and multi-atlas refinement. First, instead of segmenting the whole image, we extract a region of interest (ROI) to discard irrelevant regions. Then, we use convolutional neural networks (CNN) to learn deep features for distinguishing prostate pixels from non-prostate pixels and obtain preliminary segmentation results. The CNN learns deep features adapted to the data automatically, in contrast to handcrafted features. Finally, we select similar atlases to refine the initial segmentation results. The proposed method was evaluated on a dataset of 92 prostate CT images. Experimental results show that our method achieved a Dice similarity coefficient of 86.80% compared with manual segmentation. The deep learning based method can provide a useful tool for automatic segmentation of the prostate on CT images and thus can have a variety of clinical applications.
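For reference, the evaluation metric quoted here (the Dice similarity coefficient) is easy to state in code; the sketch below is the standard definition applied to two binary masks, not code from the paper.

```python
import numpy as np

def dice(seg, ref):
    """Dice similarity coefficient between two binary masks."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    inter = np.logical_and(seg, ref).sum()
    return 2.0 * inter / (seg.sum() + ref.sum() + 1e-12)
```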
NASA Astrophysics Data System (ADS)
Poiata, Natalia; Vilotte, Jean-Pierre; Bernard, Pascal; Satriano, Claudio; Obara, Kazushige
2018-06-01
In this study, we demonstrate the capability of an automatic network-based detection and location method to extract and analyse different components of tectonic tremor activity by analysing a 9-day energetic tectonic tremor sequence occurring at the downdip extension of the subducting slab in southwestern Japan. The applied method exploits the coherency of multiscale, frequency-selective characteristics of non-stationary signals recorded across the seismic network. The use of different characteristic functions in the signal processing step of the method allows the sources of short-duration impulsive signal transients associated with low-frequency earthquakes, and of longer-duration energy transients during the tectonic tremor sequence, to be extracted and located. Frequency-dependent characteristic functions, based on higher-order statistical properties of the seismic signals, are used for the detection and location of low-frequency earthquakes. This yields a more complete (˜6.5 times more events) and better time-resolved catalogue of low-frequency earthquakes than the routine catalogue provided by the Japan Meteorological Agency. As such, this catalogue resolves the space-time evolution of low-frequency earthquake activity in great detail, unravelling spatial and temporal clustering, modulation in response to tides, and different scales of space-time migration patterns. In the second part of the study, the detection and source location of longer-duration signal energy transients within the tectonic tremor sequence are performed using characteristic functions built from smoothed frequency-dependent energy envelopes. This leads to a catalogue of longer-duration energy sources during the tectonic tremor sequence, characterized by their durations and 3-D spatial likelihood maps of the energy-release source regions. The summary 3-D likelihood map for the 9-day tectonic tremor sequence, built from this catalogue, exhibits an along-strike spatial segmentation of the long-duration energy-release regions, matching the large-scale clustering features evidenced by the low-frequency earthquake activity analysis. Further examination of the two catalogues showed that the extracted short-duration low-frequency earthquake activity coincides in space, to within about 10-15 km, with the longer-duration energy sources during the tectonic tremor sequence. This observation provides a potential constraint on the size of the longer-duration energy-radiating source region in relation to the clustering of low-frequency earthquake activity during the analysed tectonic tremor sequence. We show that advanced statistical network-based methods offer new capabilities for automatic high-resolution detection, location and monitoring of different scale-components of tectonic tremor activity, enriching existing slow earthquake catalogues. Systematic application of such methods to large continuous data sets will allow imaging of slow transient seismic energy-release activity at higher resolution and will therefore provide new insights into the underlying multiscale mechanisms of slow earthquake generation.
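As a small illustration of the higher-order-statistics characteristic functions mentioned above, the sketch below computes sliding-window kurtosis on a single trace; the windowing parameters are arbitrary, and the per-band filtering and network-coherence steps of the actual method are omitted.

```python
import numpy as np
from scipy.stats import kurtosis

def kurtosis_cf(trace, win=200, step=50):
    """Sliding-window kurtosis of a single seismic trace.

    Impulsive transients (e.g. low-frequency earthquakes) push the
    Fisher kurtosis well above its Gaussian value of 0, so peaks of
    this function mark candidate detections.
    """
    x = np.asarray(trace, dtype=float)
    starts = range(0, len(x) - win, step)
    return np.array([kurtosis(x[s:s + win]) for s in starts])
```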
Image smoothing and enhancement via min/max curvature flow
NASA Astrophysics Data System (ADS)
Malladi, Ravikanth; Sethian, James A.
1996-03-01
We present a class of PDE-based algorithms suitable for a wide range of image processing applications. The techniques are applicable to both salt-and-pepper gray-scale noise and full- image continuous noise present in black and white images, gray-scale images, texture images and color images. At the core, the techniques rely on a level set formulation of evolving curves and surfaces and the viscosity in profile evolution. Essentially, the method consists of moving the isointensity contours in an image under curvature dependent speed laws to achieve enhancement. Compared to existing techniques, our approach has several distinct advantages. First, it contains only one enhancement parameter, which in most cases is automatically chosen. Second, the scheme automatically stops smoothing at some optimal point; continued application of the scheme produces no further change. Third, the method is one of the fastest possible schemes based on a curvature-controlled approach.
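A heavily simplified sketch of one explicit min/max flow update is given below; the curvature formula is the standard level-set expression, but the 3x3 neighborhood switch rule and the time step are our own simplifications of the scheme.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def minmax_curvature_step(I, dt=0.1):
    """One explicit min/max curvature-flow update of image I.

    kappa is the curvature of the isointensity contour through each
    pixel; switching between min(kappa, 0) and max(kappa, 0) based on
    the local mean is what lets the smoothing stop on its own.
    """
    Iy, Ix = np.gradient(I)
    Ixy, Ixx = np.gradient(Ix)
    Iyy, _ = np.gradient(Iy)
    grad2 = Ix**2 + Iy**2 + 1e-12
    kappa = (Ixx * Iy**2 - 2 * Ix * Iy * Ixy + Iyy * Ix**2) / grad2**1.5
    local_mean = uniform_filter(I, size=3)
    F = np.where(local_mean < I, np.minimum(kappa, 0), np.maximum(kappa, 0))
    return I + dt * F * np.sqrt(grad2)
```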
High-throughput automatic defect review for 300mm blank wafers with atomic force microscope
NASA Astrophysics Data System (ADS)
Zandiatashbar, Ardavan; Kim, Byong; Yoo, Young-kook; Lee, Keibock; Jo, Ahjin; Lee, Ju Suk; Cho, Sang-Joon; Park, Sang-il
2015-03-01
While feature sizes in lithography processes continue to shrink, defect sizes on blank wafers become more comparable to device sizes. Defects with nm-scale characteristic sizes can be misclassified by automated optical inspection (AOI) and require post-processing for proper classification. The atomic force microscope (AFM) is known to provide, by mechanical probing, high lateral resolution and the highest vertical resolution among all techniques. However, its low throughput and limited tip life, in addition to the laborious effort of finding the defects, have been the major limitations of this technique. In this paper we introduce automatic defect review (ADR) AFM as a post-inspection metrology tool for defect study and classification on 300 mm blank wafers that overcomes the limitations stated above. The ADR AFM provides a high-throughput, high-resolution, and non-destructive means of obtaining 3D information for nm-scale defect review and classification.
NASA Astrophysics Data System (ADS)
Xu, Chao; Zhou, Dongxiang; Zhai, Yongping; Liu, Yunhui
2015-12-01
This paper realizes the automatic segmentation and classification of Mycobacterium tuberculosis bacilli with conventional light microscopy. First, candidate bacillus objects are segmented by the marker-based watershed transform. The markers are obtained by adaptive threshold segmentation based on an adaptive-scale Gaussian filter, whose scale is determined according to the color model of the bacillus objects. The candidate objects are then extracted integrally after region merging and elimination of contaminations. Second, the shapes of the bacillus objects are characterized by the Hu moments, compactness, eccentricity, and roughness, which are used to classify single, touching, and non-bacillus objects. We evaluated logistic regression, random forest, and intersection kernel support vector machine classifiers for classifying the bacillus objects. Experimental results demonstrate that the proposed method yields high robustness and accuracy; the logistic regression classifier performs best, with an accuracy of 91.68%.
Disentangling Complexity in Bayesian Automatic Adaptive Quadrature
NASA Astrophysics Data System (ADS)
Adam, Gheorghe; Adam, Sanda
2018-02-01
The paper describes a Bayesian automatic adaptive quadrature (BAAQ) solution for numerical integration which is simultaneously robust, reliable, and efficient. Detailed discussion is provided of three main factors which contribute to the enhancement of these features: (1) refinement of the m-panel automatic adaptive scheme through the use of integration-domain-length-scale-adapted quadrature sums; (2) fast early problem complexity assessment - enables the non-transitive choice among three execution paths: (i) immediate termination (exceptional cases); (ii) pessimistic - involves time and resource consuming Bayesian inference resulting in radical reformulation of the problem to be solved; (iii) optimistic - asks exclusively for subrange subdivision by bisection; (3) use of the weaker accuracy target from the two possible ones (the input accuracy specifications and the intrinsic integrand properties respectively) - results in maximum possible solution accuracy under minimum possible computing time.
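The "optimistic" execution path, where only subrange bisection is needed, corresponds to classic adaptive quadrature. The sketch below is a textbook adaptive Simpson rule with a Richardson error estimate, offered as an illustration of that path rather than the BAAQ algorithm itself.

```python
import math

def adaptive_simpson(f, a, b, tol=1e-8, depth=0, max_depth=30):
    """Recursive bisection quadrature with a Richardson error estimate.

    Simpson's rule on [a, b] is compared with its two-half refinement;
    the subrange is accepted or bisected accordingly.
    """
    c, h = 0.5 * (a + b), b - a
    fa, fb, fc = f(a), f(b), f(c)
    s_whole = h / 6 * (fa + 4 * fc + fb)
    s_left = h / 12 * (fa + 4 * f(0.5 * (a + c)) + fc)
    s_right = h / 12 * (fc + 4 * f(0.5 * (c + b)) + fb)
    err = (s_left + s_right - s_whole) / 15
    if abs(err) <= tol or depth >= max_depth:
        return s_left + s_right + err
    return (adaptive_simpson(f, a, c, tol / 2, depth + 1, max_depth) +
            adaptive_simpson(f, c, b, tol / 2, depth + 1, max_depth))

print(adaptive_simpson(math.sin, 0.0, math.pi))   # ≈ 2.0
```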
NASA Astrophysics Data System (ADS)
Lafolie, François; Cousin, Isabelle; Mollier, Alain; Pot, Valérie; Moitrier, Nicolas; Balesdent, Jérome; Bruckler, Laurent; Moitrier, Nathalie; Nouguier, Cédric; Richard, Guy
2014-05-01
Models describing soil functioning are valuable tools for addressing challenging issues related to agricultural production, soil protection, and biogeochemical cycles. Coupling models that address different scientific fields is required in order to develop numerical tools able to simulate the complex interactions and feedbacks occurring within a soil profile interacting with climate and human activities. We present here a component-based modelling platform named "VSoil" that aims at designing, developing, implementing and coupling numerical representations of biogeochemical and physical processes in soil, from the aggregate to the profile scale. The platform consists of four softwares: i) Vsoil_Processes, dedicated to the conceptual description of processes and of their inputs and outputs; ii) Vsoil_Modules, devoted to the development of numerical representations of elementary processes as modules; iii) Vsoil_Models, which permits the coupling of modules to create models; and iv) Vsoil_Player, for running the model and the primary analysis of results. The platform is designed as a collaborative tool, helping scientists share not only their models but also the scientific knowledge on which the models are built. The platform is based on the idea that processes of any kind can be described and characterized by their inputs (state variables required) and their outputs. The links between the processes are automatically detected by the platform software. For any process, several numerical representations (modules) can be developed and made available to platform users. When developing modules, the platform takes care of many aspects of the development task so that the user can focus on the numerical calculations. Fortran 2008 and C++ are the supported languages, and existing codes can easily be incorporated into platform modules. Building a model from available modules simply requires selecting the processes being accounted for and, for each process, a module. During this task, the platform displays the available modules and checks their compatibility. The model (main program) is automatically created when compatible modules have been selected for all the processes. A GUI is automatically generated to help the user provide parameters and initial situations. Numerical results can be immediately visualized, archived and exported. The platform also provides facilities for carrying out sensitivity analyses. Parameter estimation and links with databases are being developed. The platform can be freely downloaded from the web site (http://www.inra.fr/sol_virtuel/) with a set of processes, variables, modules and models. It is designed so that any user can add their own components. These add-ons can be shared with co-workers by means of an export/import mechanism using e-mail, and can also be made available to the whole community of platform users when developers ask for it. A filtering tool is available to explore the content of the platform (processes, variables, modules, models).
NASA Technical Reports Server (NTRS)
Lien, Mei-Ching; Proctor, Robert W.
2002-01-01
The purpose of this paper was to provide insight into the nature of response selection by reviewing the literature on stimulus-response compatibility (SRC) effects and the psychological refractory period (PRP) effect individually and jointly. The empirical findings and theoretical explanations of SRC effects that have been studied within a single-task context suggest that there are two response-selection routes: automatic activation and intentional translation. In contrast, all major PRP models reviewed in this paper have treated response selection as a single processing stage. In particular, the response-selection bottleneck (RSB) model assumes that the processing of Task 1 and Task 2 comprises two separate streams and that the PRP effect is due to a bottleneck located at response selection. Yet, considerable evidence from studies of SRC in the PRP paradigm shows that the processing of the two tasks is more interactive than is suggested by the RSB model and by most other models of the PRP effect. The major implication drawn from the studies of SRC effects in the PRP context is that response activation is a distinct process from final response selection. Response activation is based on both long-term and short-term task-defined S-R associations and occurs automatically and in parallel for the two tasks. The final response selection is an intentional act required even for highly compatible and practiced tasks and is restricted to processing one task at a time. Investigations of SRC effects and response-selection variables in dual-task contexts should be conducted more systematically because they provide significant insight into the nature of response-selection mechanisms.
Exploiting the systematic review protocol for classification of medical abstracts.
Frunza, Oana; Inkpen, Diana; Matwin, Stan; Klement, William; O'Blenis, Peter
2011-01-01
To determine whether the automatic classification of documents can be useful in systematic reviews on medical topics, and specifically whether the performance of the automatic classification can be enhanced by using the particular protocol of questions employed by the human reviewers to create multiple classifiers. The test collection is the data used in a large-scale systematic review on the topic of the dissemination strategy of health care services for elderly people. From a group of 47,274 abstracts marked by human reviewers to be included in or excluded from further screening, we randomly selected 20,000 as a training set, with the remaining 27,274 becoming a separate test set. As a machine learning algorithm we used complement naïve Bayes. We tested both a global classification method, where a single classifier is trained on instances of abstracts and their classification (i.e., included or excluded), and a novel per-question classification method that trains a separate classifier for each question of the systematic review protocol. For the per-question method we tested four ways of combining the results of the classifiers trained for the individual questions. As evaluation measures, we calculated precision and recall for several settings of the two methods. It is most important not to exclude any relevant documents (i.e., to attain high recall for the class of interest), but it is also desirable to exclude most of the non-relevant documents (i.e., to attain high precision on the class of interest) in order to reduce the human workload. For the global method, the highest recall was 67.8% and the highest precision was 37.9%. For the per-question method, the highest recall was 99.2% and the highest precision was 63%. The human-machine workflow proposed in this paper achieved a recall of 99.6% and a precision of 17.8%. The per-question method, which combines classifiers following the specific protocol of the review, leads to better results than the global method in terms of recall. Because neither method is efficient enough to classify abstracts reliably by itself, the technology should be applied in a semi-automatic way, with a human expert still involved. When the workflow includes one human expert and the trained automatic classifier, recall improves to an acceptable level, showing that automatic classification techniques can reduce the human workload in the process of building a systematic review. Copyright © 2010 Elsevier B.V. All rights reserved.
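A minimal scikit-learn sketch of the per-question idea follows. The inclusive-OR combination shown is only a plausible stand-in for one of the four combination schemes the authors tested, and the data and names here are illustrative, not from the study.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import ComplementNB

def train_per_question(abstracts, labels_per_question):
    # labels_per_question: dict mapping protocol question -> 0/1 label per abstract
    vec = TfidfVectorizer(min_df=2)
    X = vec.fit_transform(abstracts)
    clfs = {q: ComplementNB().fit(X, y) for q, y in labels_per_question.items()}
    return vec, clfs

def predict_include(vec, clfs, new_abstracts):
    # Include an abstract if the classifier for any question votes "include";
    # OR-combination favours recall, the priority in systematic reviews.
    X = vec.transform(new_abstracts)
    votes = np.array([clf.predict(X) for clf in clfs.values()])
    return votes.any(axis=0).astype(int)

abstracts = ["elderly care dissemination trial", "unrelated surgical technique"] * 10
labels = {"Q1": [1, 0] * 10, "Q2": [1, 0] * 10}  # toy per-question annotations
vec, clfs = train_per_question(abstracts, labels)
print(predict_include(vec, clfs, ["dissemination of care for elderly"]))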
NLM at TREC 2012 Medical Records Track
2012-11-01
automatic runs are not significantly above the medians. As in 2011, we conclude that the existing search engines are mature enough to support cohort selection tasks, and the quality of the queries could be
Application of automatic threshold in dynamic target recognition with low contrast
NASA Astrophysics Data System (ADS)
Miao, Hua; Guo, Xiaoming; Chen, Yu
2014-11-01
Hybrid photoelectric joint transform correlators can realize automatic real-time recognition with high precision through the combination of optical and electronic devices. When recognizing low-contrast targets with a photoelectric joint transform correlator, only four to five frames of a dynamic target can be recognized without any processing, because of differences in attitude, brightness and grayscale between target and template. A CCD camera is used to capture the dynamic target images at 25 frames per second. Automatic thresholding has many advantages, such as fast processing, effective shielding of noise interference, enhancement of the diffraction energy of useful information, and better preservation of the outlines of target and template, so it plays a very important role in target recognition with optical correlation methods. However, a threshold obtained automatically by the program cannot achieve the best recognition results for dynamic targets, because the outline information is broken to some extent; in most cases the optimal threshold is obtained by manual intervention. Aiming at the characteristics of dynamic targets, an improved automatic thresholding procedure is implemented by multiplying the Otsu threshold of target and template by a scale coefficient of the processed image and combining the result with mathematical morphology. With this improved automatic thresholding, the optimal threshold can be obtained automatically for dynamic low-contrast target images. The recognition rate of dynamic targets is improved through a decreased background noise effect and increased correlation information. A series of dynamic tank images moving at about 70 km/h is used as target images. Without any processing, the 1st frame of this series can correlate only with the 3rd frame. With Otsu thresholding, the 80th frame can be recognized; with the improved automatic threshold processing of the joint images, this number increases to 89 frames. The experimental results show that the improved automatic threshold processing is of particular value for the recognition of dynamic targets with low contrast.
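As a concrete illustration of the scaled-Otsu step described above, here is a short OpenCV sketch; the scale coefficient k, the kernel size, and the file name are assumptions for illustration, not values from the paper.

import cv2
import numpy as np

def improved_threshold(gray, k=0.85):
    # Otsu gives a global threshold; scaling it by k (standing in for the
    # paper's scale coefficient) shifts the cut before binarization.
    otsu_t, _ = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    _, binary = cv2.threshold(gray, k * otsu_t, 255, cv2.THRESH_BINARY)
    # Morphological opening removes speckle noise while keeping outlines
    kernel = np.ones((3, 3), np.uint8)
    return cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)

frame = cv2.imread("tank_frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder file
mask = improved_threshold(frame)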
Koch, Saskia B J; Klumpers, Floris; Zhang, Wei; Hashemi, Mahur M; Kaldewaij, Reinoud; van Ast, Vanessa A; Smit, Annika S; Roelofs, Karin
2017-01-01
Background: Control over automatic tendencies is often compromised in challenging situations when people fall back on automatic defensive reactions, such as freeze-fight-flight responses. Stress-induced lack of control over automatic defensive responses constitutes a problem endemic to high-risk professions, such as the police. Difficulties controlling automatic defensive responses may not only impair split-second decisions under threat, but also increase the risk for and persistence of posttraumatic stress disorder (PTSD) symptoms. However, the significance of these automatic defensive responses in the development and maintenance of trauma-related symptoms remains unclear due to a shortage of large-scale prospective studies. Objective: The 'Police-in-Action' study is conducted to investigate the role of automatic defensive responses in the development and maintenance of PTSD symptomatology after trauma exposure. Methods: In this prospective study, 340 police recruits from the Dutch Police Academy are tested before (wave 1; pre-exposure) and after (wave 2; post-exposure) their first emergency aid experiences as police officers. The two waves of data assessment are separated by approximately 15 months. To control for unspecific time effects, a well-matched control group of civilians (n = 85) is also tested twice, approximately 15 months apart, but without being frequently exposed to potentially traumatic events. Main outcomes are associations between (changes in) behavioural, psychophysiological, endocrine and neural markers of automatic defensive responses and development of trauma-related symptoms after trauma exposure in police recruits. Discussion: This prospective study in a large group of primary responders enables us to distinguish predisposing from acquired neurobiological abnormalities in automatic defensive responses, associated with the development of trauma-related symptoms. Identifying neurobiological correlates of (vulnerability for) trauma-related psychopathology may greatly improve screening for individuals at risk for developing PTSD symptomatology and offer valuable targets for (early preventive) interventions for PTSD.
Automatic affective appraisal of sexual penetration stimuli in women with vaginismus or dyspareunia.
Huijding, Jorg; Borg, Charmaine; Weijmar-Schultz, Willibrord; de Jong, Peter J
2011-03-01
Current psychological views are that negative appraisals of sexual stimuli lie at the core of sexual dysfunctions. It is important to differentiate between deliberate appraisals and more automatic appraisals, as research has shown that the former are most relevant to controllable behaviors, and the latter are most relevant to reflexive behaviors. Accordingly, it can be hypothesized that in women with vaginismus, the persistent difficulty to allow vaginal entry is due to global negative automatic affective appraisals that trigger reflexive pelvic floor muscle contraction at the prospect of penetration. To test whether sexual penetration pictures elicited global negative automatic affective appraisals in women with vaginismus or dyspareunia and to examine whether deliberate appraisals and automatic appraisals differed between the two patient groups. Women with persistent vaginismus (N = 24), dyspareunia (N = 23), or no sexual complaints (N = 30) completed a pictorial Extrinsic Affective Simon Task (EAST), and then made a global affective assessment of the EAST stimuli using visual analogue scales (VAS). The EAST assessed global automatic affective appraisals of sexual penetration stimuli, while the VAS assessed global deliberate affective appraisals of these stimuli. Automatic affective appraisals of sexual penetration stimuli tended to be positive, independent of the presence of sexual complaints. Deliberate appraisals of the same stimuli were significantly more negative in the women with vaginismus than in the dyspareunia group and control group, while the latter two groups did not differ in their appraisals. Unexpectedly, deliberate appraisals seemed to be most important in vaginismus, whereas dyspareunia did not seem to implicate negative deliberate or automatic affective appraisals. These findings dispute the view that global automatic affect lies at the core of vaginismus and indicate that a useful element in therapeutic interventions may be the modification of deliberate global affective appraisals of sexual penetration (e.g., via counter-conditioning). © 2010 International Society for Sexual Medicine.
NASA Astrophysics Data System (ADS)
Wojs, J.
2016-09-01
The paper shows that a simplified, shorter examination of an object, feasible in laboratory classes, can produce results similar to those obtained in a full scientific investigation of the device using extensive equipment. A thorough investigation of an object, in this case an automatic clutch device, enabled identification of the quantities that most significantly affect its operation. Knowledge of these most sensitive quantities allows the teaching process to focus on simplified measurement of only selected quantities, on the basis of which the examined object can be verified with a positive or negative result.
Sabet, Mahsheed; O'Connor, Daryl J.; Greer, Peter B.
2011-01-01
There have been several manual, semi‐automatic and fully‐automatic methods proposed for verification of the position of the mechanical isocenter as part of the comprehensive quality assurance programs required for linear accelerator‐based stereotactic radiosurgery/radiotherapy (SRS/SRT) treatments. In this paper, a systematic review has been carried out to discuss the present methods for isocenter verification and compare their characteristics, to help physicists select an appropriate quality assurance routine. PACS numbers: 87.53.Ly, 87.56.Fc, 87.56.‐v PMID:22089022
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pelaia II, Thomas A.
2015-06-30
There is a need for software that allows a tour guide to present different tracks of slides and then return to the default slide show automatically upon completion. A mobile solution is needed for trade shows. DiTour is an iPad/iPhone app that pulls presentation content from a website, stores it on the device and presents it on a connected display. A tour guide can select a track to present and it will automatically return to the default track after a timeout. It offers a mobile solution which is ideal for trade shows.
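The auto-return behaviour described here reduces to a resettable timer. The following small Python sketch illustrates the logic only; it is not DiTour's actual code, and all names are hypothetical.

import threading

class TrackPlayer:
    def __init__(self, default_track, timeout_s=60.0):
        self.default_track = default_track
        self.timeout_s = timeout_s
        self.current = default_track
        self._timer = None

    def select(self, track):
        # A guide picks a track; schedule the fall-back to the default show.
        self.current = track
        if self._timer is not None:
            self._timer.cancel()
        self._timer = threading.Timer(self.timeout_s, self._revert)
        self._timer.start()

    def _revert(self):
        self.current = self.default_track  # back to the default slide show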
Study of Automatic Image Rectification and Registration of Scanned Historical Aerial Photographs
NASA Astrophysics Data System (ADS)
Chen, H. R.; Tseng, Y. H.
2016-06-01
Historical aerial photographs provide direct evidence of past conditions. The Research Center for Humanities and Social Sciences (RCHSS) of Taiwan's Academia Sinica has collected and scanned numerous historical maps and aerial images of Taiwan and China. Some maps or images have been geo-referenced manually, but most historical aerial images have not been registered, since no GPS or IMU data were available in the past to assist orientation. In our research, we developed an automatic process for matching historical aerial images by SIFT (Scale Invariant Feature Transform), to handle the great quantity of images by computer vision. SIFT is one of the most popular methods for image feature extraction and matching. The algorithm turns extreme values in scale space into invariant image features, which are robust to changes in rotation, scale, noise, and illumination. We also use RANSAC (Random Sample Consensus) to remove outliers and obtain good conjugate points between photographs. Finally, we manually add control points for registration through least-squares adjustment based on the collinearity equations. In the future, we can use the image feature points of more photographs to build a control image database. Every new image will be treated as a query image: if its feature points match features in the database, the query image probably overlaps with control images. As the database grows, more and more query images can be matched and aligned automatically. Other research on environmental change across time periods can then be investigated with these geo-referenced temporal spatial data.
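The feature-matching pipeline described above can be sketched with OpenCV's SIFT and RANSAC homography estimation; the ratio-test threshold, reprojection tolerance, and file names below are illustrative assumptions, not values from the paper.

import cv2
import numpy as np

img1 = cv2.imread("historical.png", cv2.IMREAD_GRAYSCALE)  # placeholder files
img2 = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test keeps only distinctive matches
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# RANSAC rejects outliers, leaving conjugate points between the photographs
src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)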
Intended actions and unexpected outcomes: automatic and controlled processing in a rapid motor task
Cheyne, Douglas O.; Ferrari, Paul; Cheyne, James A.
2012-01-01
Human action involves a combination of controlled and automatic behavior. These processes may interact in tasks requiring rapid response selection or inhibition, where temporal constraints preclude timely intervention by conscious, controlled processes over automatized prepotent responses. Such contexts tend to produce frequent errors, but also rapidly executed correct responses, both of which may sometimes be perceived as surprising, unintended, or “automatic”. In order to identify neural processes underlying these two aspects of cognitive control, we measured neuromagnetic brain activity in 12 right-handed subjects during manual responses to rapidly presented digits, with an infrequent target digit that required switching response hand (bimanual task) or response finger (unimanual task). Automaticity of responding was evidenced by response speeding (shorter response times) prior to both failed and fast correct switches. Consistent with this automaticity interpretation of fast correct switches, we observed bilateral motor preparation, as indexed by suppression of beta band (15–30 Hz) oscillations in motor cortex, prior to processing of the switch cue in the bimanual task. In contrast, right frontal theta activity (4–8 Hz) accompanying correct switch responses began after cue onset, suggesting that it reflected controlled inhibition of the default response. Further, this activity was reduced on fast correct switch trials, suggesting a more automatic mode of inhibitory control. We also observed post-movement error-related negativity (ERN)-like responses and theta band increases in medial and anterior frontal regions that were significantly larger on error trials, and may reflect a combination of error and delayed inhibitory signals. We conclude that both automatic and controlled processes are engaged in parallel during rapid motor tasks, and that the relative strength and timing of these processes may underlie both optimal task performance and subjective experiences of automaticity or control. PMID:22912612
NASA Astrophysics Data System (ADS)
Tamayo-Mas, Elena; Bianchi, Marco; Mansour, Majdi
2018-03-01
This study investigates the impact of model complexity and multi-scale prior hydrogeological data on the interpretation of pumping test data in a dual-porosity aquifer (the Chalk aquifer in England, UK). In order to characterize the hydrogeological properties, different approaches are applied, ranging from a traditional analytical solution (the Theis approach) to more sophisticated numerical models with automatically calibrated input parameters. Comparisons of results from the different approaches show that neither traditional analytical solutions nor a numerical model assuming a homogeneous and isotropic aquifer can adequately explain the observed drawdowns. A better reproduction of the observed drawdowns in all seven monitoring locations is instead achieved when medium- and local-scale prior information about the vertical hydraulic conductivity (K) distribution is used to constrain the model calibration process. In particular, the integration of medium-scale vertical K variations based on flowmeter measurements led to an improvement in the goodness-of-fit of the simulated drawdowns of about 30%. Further improvements (up to 70%) were observed when a simple upscaling approach was used to integrate small-scale K data to constrain the automatic calibration process of the numerical model. Although the analysis focuses on a specific case study, these results provide insights into the representativeness of estimates of hydrogeological properties based on different interpretations of pumping test data, and promote the integration of multi-scale data for the characterization of heterogeneous aquifers in complex hydrogeological settings.
An EEG-based functional connectivity measure for automatic detection of alcohol use disorder.
Mumtaz, Wajid; Saad, Mohamad Naufal B Mohamad; Kamel, Nidal; Ali, Syed Saad Azhar; Malik, Aamir Saeed
2018-01-01
Abnormal alcohol consumption can cause toxicity and alter the human brain's structure and function, a condition termed alcohol use disorder (AUD). Unfortunately, the conventional screening methods for AUD patients are subjective and manual; objective methods are needed to perform automatic screening. Electroencephalographic (EEG) data have been utilized to study differences in brain signals between alcoholics and healthy controls, which could be further developed into an automatic screening tool for alcoholism. In this work, resting-state EEG-derived features were utilized as input data to the proposed feature selection and classification method, with the aim of automatically classifying AUD patients and healthy controls. Validation of the proposed method involved real EEG data acquired from 30 AUD patients and 30 age-matched healthy controls. Resting-state EEG-derived features such as synchronization likelihood (SL) were computed across 19 scalp locations, resulting in 513 features. The features were then rank-ordered to select the most discriminant ones, using a rank-based feature selection method with the receiver operating characteristic (ROC) as criterion. A reduced set of the most discriminant features was thus identified and used for the classification of AUD patients and healthy controls. Three different classification models were used: Support Vector Machine (SVM), Naïve Bayes (NB), and Logistic Regression (LR). The study achieved SVM classification accuracy = 98%, sensitivity = 99.9%, specificity = 95%, and f-measure = 0.97; LR accuracy = 91.7%, sensitivity = 86.66%, specificity = 96.6%, and f-measure = 0.90; NB accuracy = 93.6%, sensitivity = 100%, specificity = 87.9%, and f-measure = 0.95. The SL features could be utilized as objective markers to screen AUD patients and healthy controls. Copyright © 2017 Elsevier B.V. All rights reserved.
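A compact sketch of ROC-based feature ranking followed by SVM classification, using scikit-learn. The synthetic data and the number of retained features are illustrative assumptions, and in a rigorous evaluation the ranking would be nested inside the cross-validation folds.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=60, n_features=513, n_informative=20,
                           random_state=0)  # stand-in for 513 SL features

# Rank features by how far each one's ROC AUC departs from chance (0.5)
aucs = np.array([roc_auc_score(y, X[:, j]) for j in range(X.shape[1])])
order = np.argsort(np.abs(aucs - 0.5))[::-1]

X_reduced = X[:, order[:50]]  # keep the 50 most discriminant features
print(cross_val_score(SVC(kernel="rbf"), X_reduced, y, cv=10).mean())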
Yu, Sheng; Liao, Katherine P; Shaw, Stanley Y; Gainer, Vivian S; Churchill, Susanne E; Szolovits, Peter; Murphy, Shawn N; Kohane, Isaac S; Cai, Tianxi
2015-09-01
Analysis of narrative (text) data from electronic health records (EHRs) can improve population-scale phenotyping for clinical and genetic research. Currently, selection of text features for phenotyping algorithms is slow and laborious, requiring extensive and iterative involvement by domain experts. This paper introduces a method to develop phenotyping algorithms in an unbiased manner by automatically extracting and selecting informative features, which can be comparable to expert-curated ones in classification accuracy. Comprehensive medical concepts were collected from publicly available knowledge sources in an automated, unbiased fashion. Natural language processing (NLP) revealed the occurrence patterns of these concepts in EHR narrative notes, which enabled selection of informative features for phenotype classification. When combined with additional codified features, a penalized logistic regression model was trained to classify the target phenotype. We applied the method to develop algorithms to identify patients with rheumatoid arthritis (RA) and, among those with rheumatoid arthritis, coronary artery disease (CAD) cases, using a large multi-institutional EHR. The areas under the receiver operating characteristic curves (AUC) for classifying RA and CAD using models trained with automated features were 0.951 and 0.929, respectively, compared to AUCs of 0.938 and 0.929 for models trained with expert-curated features. Models trained with NLP text features selected through an unbiased, automated procedure achieved comparable or slightly higher accuracy than those trained with expert-curated features. The majority of the selected model features were interpretable. The proposed automated feature extraction method, generating highly accurate phenotyping algorithms with improved efficiency, is a significant step toward high-throughput phenotyping. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
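The penalized regression step can be sketched as an L1-regularized logistic model, which zeroes out uninformative features. The data below are synthetic placeholders for the NLP-derived and codified feature columns, and the regularization strength is an illustrative assumption.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for concept-occurrence and codified feature columns
X, y = make_classification(n_samples=500, n_features=200, n_informative=15,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# The L1 penalty drives uninformative feature weights to exactly zero
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
model.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
print("features retained:", int((model.coef_ != 0).sum()))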
ERIC Educational Resources Information Center
Ledford, Bruce R.; Ledford, Suzanne Y.
This study investigated whether grade six students' self-esteem could be affected by the presentation of a selected stimulus below the threshold of conscious awareness via the medium of a specially prepared paper. It also investigated whether any statistically significant differences existed between the effects on self-esteem of a selected…
Secure web-based invocation of large-scale plasma simulation codes
NASA Astrophysics Data System (ADS)
Dimitrov, D. A.; Busby, R.; Exby, J.; Bruhwiler, D. L.; Cary, J. R.
2004-12-01
We present our design and initial implementation of a web-based system for running Particle-In-Cell (PIC) plasma simulation codes, both in serial and in parallel, with automatic post-processing and generation of visual diagnostics.
Application of computer vision to automatic prescription verification in pharmaceutical mail order
NASA Astrophysics Data System (ADS)
Alouani, Ali T.
2005-05-01
In large-volume pharmaceutical mail order, before prescriptions are shipped, licensed pharmacists ensure that the drug in the bottle matches the information provided in the patient's prescription. Typically, the pharmacist has about 2 sec to complete the verification of one prescription. Performing about 1800 prescription verifications per hour is tedious and can generate human errors as a result of visual and brain fatigue. Available automatic drug verification systems are limited to a single pill at a time, which is not suitable for large-volume pharmaceutical mail order, where a prescription can have as many as 60 pills and where thousands of prescriptions are filled every day. In an attempt to reduce human fatigue and cost and to limit human error, the automatic prescription verification system (APVS) was invented to meet the needs of large-scale pharmaceutical mail order. This paper deals with the design and implementation of the first prototype online automatic prescription verification machine to perform the same task currently done by a pharmacist; the emphasis here is on the visual aspects of the machine. The system has been successfully tested on 43,000 prescriptions.
2014-01-01
Background: Previous efforts such as Assessing Care of Vulnerable Elders (ACOVE) provide quality indicators for assessing the care of elderly patients, but thus far little has been done to leverage this knowledge to improve care for these patients. We describe a clinical decision support system to improve general practitioner (GP) adherence to ACOVE quality indicators, and a protocol for investigating its impact on GPs’ adherence to the rules. Design: We propose two randomized controlled trials among a group of Dutch GP teams on adherence to ACOVE quality indicators. In both trials a clinical decision support system provides un-intrusive feedback appearing as a color-coded, dynamically updated list of items needing attention. The first trial pertains to real-time automatically verifiable rules. The second trial concerns non-automatically verifiable rules (adherence cannot be established by the clinical decision support system itself, but the GPs report whether they will adhere to the rules). In both trials we will randomize teams of GPs caring for the same patients into two groups, A and B. For the automatically verifiable rules, group A GPs receive support only for a specific inter-related subset of rules, and group B GPs receive support only for the remainder of the rules. For non-automatically verifiable rules, group A GPs receive feedback framed as actions with positive consequences, and group B GPs receive feedback framed as inaction with negative consequences. GPs indicate whether they adhere to non-automatically verifiable rules. In both trials, the main outcome measure is mean adherence, automatically derived or self-reported, to the rules. Discussion: We relied on active end-user involvement in selecting the rules to support, and on a model for providing feedback displayed as color-coded real-time messages concerning the patient visiting the GP at that time, without interrupting the GP’s workflow with pop-ups. While these aspects are believed to increase clinical decision support system acceptance and its impact on adherence to the selected clinical rules, systems with these properties have not yet been evaluated. Trial registration: Controlled Trials NTR3566 PMID:24642339
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, T; Zhou, L; Li, Y
Purpose: For intensity-modulated radiotherapy, plan optimization is time consuming, with difficulties in selecting objectives and constraints and their relative weights. A fast and automatic multi-objective optimization algorithm able to predict optimal constraints and manage their trade-offs can help to solve this problem. Our purpose is to develop such a framework and algorithm for general inverse planning. Methods: The proposed multi-objective optimization framework contains three main components: prediction of initial dosimetric constraints, further adjustment of constraints, and plan optimization. We first use our previously developed in-house geometry-dosimetry correlation model to predict the optimal patient-specific dosimetric endpoints and treat them as initial dosimetric constraints. Second, we build an endpoint (organ) priority list and a constraint adjustment rule to repeatedly tune these constraints from their initial values, until no single endpoint has room for further improvement. Last, we implement a voxel-independent FMO algorithm for optimization. During the optimization, a model for tuning the voxel weighting factors with respect to the constraints is created. For framework and algorithm evaluation, we randomly selected 20 IMRT prostate cases from the clinic and compared them with our automatically generated plans, in both efficiency and plan quality. Results: For each evaluated plan, the proposed multi-objective framework ran fluently and automatically. The number of voxel weighting factor iterations varied from 10 to 30 under an updated constraint, and the number of constraint tuning steps varied from 20 to 30 per case, until no stricter constraint was allowed. The average total time for the whole optimization procedure was ∼30 min. Comparing the DVHs, better OAR dose sparing was observed in the automatically generated plans for 13 of the 20 cases, while the others gave competitive results. Conclusion: We have successfully developed a fast and automatic multi-objective optimization for intensity-modulated radiotherapy. This work is supported by the National Natural Science Foundation of China (No. 81571771).
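The priority-driven constraint adjustment can be pictured as a loop that repeatedly tries to tighten each endpoint's limit while a feasible plan still exists. The following toy Python sketch uses a feasibility callback standing in for the FMO solver; the step size and all names are illustrative assumptions, not the authors' algorithm as published.

def tighten_constraints(constraints, priority, is_feasible, step=0.02):
    # constraints: dict endpoint -> dose limit; priority: endpoints in
    # decreasing importance; is_feasible: does a plan satisfying all
    # limits exist (stand-in for running the FMO optimization)?
    # Assumes limits eventually become infeasible, so the loop terminates.
    improved = True
    while improved:
        improved = False
        for endpoint in priority:
            trial = dict(constraints)
            trial[endpoint] = constraints[endpoint] * (1.0 - step)  # stricter
            if is_feasible(trial):
                constraints = trial
                improved = True
    return constraints  # no endpoint has room for further improvement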
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yixing; Hong, Tianzhen; Piette, Mary Ann
2017-08-07
Buildings in cities consume 30–70% of total primary energy, and improving building energy efficiency is one of the key strategies towards sustainable urbanization. Urban building energy models (UBEM) can support city managers to evaluate and prioritize energy conservation measures (ECMs) for investment and the design of incentive and rebate programs. This paper presents the retrofit analysis feature of City Building Energy Saver (CityBES) to automatically generate and simulate UBEM using EnergyPlus based on cities’ building datasets and user-selected ECMs. CityBES is a new open web-based tool to support city-scale building energy efficiency strategic plans and programs. The technical details of using CityBES for UBEM generation and simulation are introduced, including the workflow, key assumptions, and major databases. Also presented is a case study that analyzes the potential retrofit energy use and energy cost savings of five individual ECMs and two measure packages for 940 office and retail buildings in six city districts in northeast San Francisco, United States. The results show that: (1) all five measures together can save 23–38% of site energy per building; (2) replacing lighting with light-emitting diode lamps and adding air economizers to existing heating, ventilation and air-conditioning (HVAC) systems are most cost-effective with an average payback of 2.0 and 4.3 years, respectively; and (3) it is not economical to upgrade HVAC systems or replace windows in San Francisco due to the city's mild climate and minimal cooling and heating loads. Furthermore, the CityBES retrofit analysis feature does not require users to have deep knowledge of building systems or technologies for the generation and simulation of building energy models, which helps overcome major technical barriers for city managers and their consultants to adopt UBEM.
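The payback figures quoted above follow from simple-payback arithmetic: retrofit cost divided by annual energy cost savings. A one-line Python illustration (the dollar amounts are made up, not values from the study):

def simple_payback_years(retrofit_cost, annual_cost_savings):
    return retrofit_cost / annual_cost_savings

# e.g. a $20,000 LED retrofit saving $10,000/year pays back in 2.0 years
print(simple_payback_years(20_000, 10_000))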
Variable Selection for Regression Models of Percentile Flows
NASA Astrophysics Data System (ADS)
Fouad, G.
2017-12-01
Percentile flows describe the flow magnitude equaled or exceeded for a given percent of time, and are widely used in water resource management. However, these statistics are normally unavailable since most basins are ungauged. Percentile flows of ungauged basins are often predicted using regression models based on readily observable basin characteristics, such as mean elevation. The number of these independent variables is too large to evaluate all possible models. A subset of models is typically evaluated using automatic procedures, like stepwise regression. This ignores a large variety of methods from the field of feature (variable) selection, as well as physical understanding of percentile flows. A study of 918 basins in the United States was conducted to compare an automatic regression procedure to the following variable selection methods: (1) principal component analysis, (2) correlation analysis, (3) random forests, (4) genetic programming, (5) Bayesian networks, and (6) physical understanding. The automatic regression procedure performed better only than principal component analysis. Poor performance of the regression procedure was due to a commonly used filter for multicollinearity, which rejected the strongest models because they had cross-correlated independent variables. Multicollinearity did not decrease model performance in validation because of a representative set of calibration basins. Variable selection methods based strictly on predictive power (methods 2–5 above) performed similarly, likely indicating a limit to the predictive power of the variables. Similar performance was also reached using variables selected based on physical understanding, a finding that substantiates recent calls to emphasize physical understanding in modeling for predictions in ungauged basins. The strongest variables highlighted the importance of geology and land cover, whereas widely used topographic variables were the weakest predictors. Variables suffered from a high degree of multicollinearity, possibly illustrating the co-evolution of climatic and physiographic conditions. Given the ineffectiveness of many variables used here, future work should develop new variables that target specific processes associated with percentile flows.
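To make the comparison concrete, here is a scikit-learn sketch contrasting an automatic forward-selection procedure (a stand-in for stepwise regression) with a random-forest importance ranking, two of the approaches compared in the study; the synthetic data and the number of selected variables are illustrative assumptions.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression

# Synthetic stand-in for basin characteristics and a percentile flow
X, y = make_regression(n_samples=300, n_features=25, n_informative=6,
                       random_state=1)

# Forward stepwise-style selection for a linear regression model
sfs = SequentialFeatureSelector(LinearRegression(), n_features_to_select=6)
sfs.fit(X, y)
print("stepwise picks:", np.flatnonzero(sfs.get_support()))

# Random-forest importances as an alternative selection criterion
rf = RandomForestRegressor(n_estimators=200, random_state=1).fit(X, y)
print("random forest top 6:", np.argsort(rf.feature_importances_)[::-1][:6])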
Gyssels, Elodie; Bohy, Pascale; Cornil, Arnaud; van Muylem, Alain; Howarth, Nigel; Gevenois, Pierre A; Tack, Denis
2016-01-01
The aim of the study was to compare radiation dose and image quality between the "average" and the "very strong" automatic exposure control (AEC) strength curves. Images reconstructed with filtered back-projection techniques and radiation dose data of unenhanced helical chest computed tomography (CT) examinations obtained at 2 hospitals (hospital A, hospital B) using the same scanner devices and acquisition protocols but different AEC strength curves were evaluated over a 3-month period. The selected AEC strength curve applied to "slim" patients (diameter <32 cm estimated from the attenuation automatically measured on the topogram) was "average" in hospital A and "very strong" in hospital B. Two radiologists with 13 and 24 years of experience scored the image quality of the lung parenchyma and the mediastinum on a 5-point scale. The patients' effective diameter, the delivered CT dose index volume, and dose-length products were recorded. A total of 410 patients were included. The average body mass index was 24.0 kg/m² in hospital A and 24.8 kg/m² in hospital B. There was no significant difference between hospitals with respect to age, sex ratio, weight, height, body mass index, effective diameters, and image quality scores for each radiologist (P ranging from 0.050 to 1.000). The mean CT dose index volume for the entire population was 2.0 mGy and was significantly lower in hospital B with the "very strong" AEC curve as compared with hospital A (-11%, P=0.001). The mean dose-length product delivered in this population (mean weight 70 kg) was 68 mGy·cm, corresponding to an effective dose of 0.95 mSv. Changing the AEC strength curve from "average" to "very strong" for slim patients maintains image quality and reduces the radiation dose to <1 mSv in routine chest CT examinations reconstructed with filtered back-projection techniques.
Occupational Survey Report. AFSC 4A2X1 Biomedical Equipment
2004-05-01
Equipment items and associated percentages (table excerpt; ellipses mark omitted rows):
Electrocardiograms – 70
Hospital Beds, Electric – 67
Surgical Lamps – 67
Hospital Beds, Manual – 66
Audiometers – 64
Dental Curing Units – 63
Dental Handpieces – 63
…Pumps – 78
Pulse Oximeters – 78
Dental Chairs – 76
Blood Pressure Monitors, Automatic – 74
Examination Lamps – 72
Examination Tables – 72
Blood Pressure Cuffs – 71
…Exercise Bicycles – 63
Dental Amalgamators – 62
Scales or Balances, other than Pediatric – 62
Scales or Balances, Pediatric – 61
(Column: First-Enlistment Personnel)
Prigent, Sylvain; Nielsen, Jens Christian; Frisvad, Jens Christian; Nielsen, Jens
2018-06-05
Modelling of metabolism at the genome scale has proved to be an efficient method for explaining observed phenotypic traits in living organisms. Further, it can be used as a means of predicting the effect of genetic modifications, e.g., for the development of microbial cell factories. With the increasing amount of genome sequencing data available, a need exists to accurately and efficiently generate genome-scale metabolic models (GEMs) of non-model organisms, for which data are sparse. In this study, we present an automatic reconstruction approach applied to 24 Penicillium species, which have potential for the production of pharmaceutical secondary metabolites or are used in the manufacturing of food products such as cheeses. The models were based on the MetaCyc database and a previously published Penicillium GEM, and gave rise to comprehensive genome-scale metabolic descriptions. The models showed that while central carbon metabolism is highly conserved, secondary metabolic pathways represent the main diversity among the species. The automatic reconstruction approach presented in this study can be applied to generate GEMs of other understudied organisms, and the developed GEMs are a useful resource for the study of Penicillium metabolism, for example with the scope of developing novel cell factories. This article is protected by copyright. All rights reserved.
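As context for how such GEMs are typically consumed downstream, here is a minimal cobrapy sketch of loading a reconstructed model and running flux balance analysis; this is not the authors' reconstruction pipeline, and the file name is a placeholder.

import cobra

model = cobra.io.read_sbml_model("penicillium_gem.xml")  # placeholder file
solution = model.optimize()  # flux balance analysis on the default objective
print("predicted growth rate:", solution.objective_value)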