Sample records for suitable adaptive automatic

  1. Auto-adaptive finite element meshes

    NASA Technical Reports Server (NTRS)

    Richter, Roland; Leyland, Penelope

    1995-01-01

    Accurate capturing of discontinuities within compressible flow computations is achieved by coupling a suitable solver with an automatic adaptive mesh algorithm for unstructured triangular meshes. The mesh adaptation procedures developed rely on non-hierarchical dynamical local refinement/derefinement techniques, which hence enable structural optimization as well as geometrical optimization. The methods described are applied to a number of the ICASE test cases that are particularly interesting for unsteady flow simulations.

  2. Parallel three-dimensional magnetotelluric inversion using adaptive finite-element method. Part I: theory and synthetic study

    NASA Astrophysics Data System (ADS)

    Grayver, Alexander V.

    2015-07-01

    This paper presents a distributed magnetotelluric inversion scheme based on the adaptive finite-element method (FEM). The key novel aspect of the introduced algorithm is the use of automatic mesh refinement techniques for both forward and inverse modelling. These techniques alleviate the tedious and subjective procedure of choosing a suitable model parametrization. To avoid overparametrization, the meshes for the forward and inverse problems were decoupled. For the calculation of accurate electromagnetic (EM) responses, an automatic mesh refinement algorithm based on a goal-oriented error estimator has been adopted. For further efficiency gains, the EM fields for each frequency were calculated using independent meshes in order to account for the substantially different spatial behaviour of the fields over a wide range of frequencies. An automatic approach for efficient initial mesh design in inverse problems, based on the linearized model resolution matrix, was developed. To make this algorithm suitable for large-scale problems, it was proposed to use a low-rank approximation of the linearized model resolution matrix. In order to fill the gap between the initial and true model complexities and to resolve emerging 3-D structures better, an algorithm for adaptive inverse mesh refinement was derived. Within this algorithm, spatial variations of the imaged parameter are calculated and the mesh is refined in the neighborhoods of the points with the largest variations. A series of numerical tests was performed to demonstrate the utility of the presented algorithms. Adaptive mesh refinement based on the model resolution estimates provides an efficient tool to derive initial meshes which account for arbitrary survey layouts, data types, frequency content and measurement uncertainties. Furthermore, the algorithm is capable of delivering meshes suitable for resolving features on multiple scales while keeping the number of unknowns low. However, such meshes exhibit a dependency on the initial model guess. Additionally, it is demonstrated that adaptive mesh refinement can be particularly efficient in resolving complex shapes. The implemented inversion scheme was able to resolve a hemispherical object with sufficient resolution, starting from a coarse discretization and refining the mesh adaptively in a fully automatic process. The code is able to harness the computational power of modern distributed platforms and is shown to work with models consisting of millions of degrees of freedom. Significant computational savings were achieved by using locally refined decoupled meshes.
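    The variation-driven refinement criterion described above can be sketched in a few lines. This is a hypothetical 1-D illustration, not the paper's 3-D implementation: the cell structure, the inheritance of the parameter value by child cells, and the `top_k` cutoff are all simplifying assumptions.

```python
# Illustrative sketch: bisect the cells whose imaged parameter differs most
# from a neighbouring cell, mimicking the adaptive inverse-mesh criterion.

def refine_by_variation(cells, values, top_k=2):
    """Split the top_k cells whose parameter differs most from a neighbour.

    cells  -- list of (left, right) cell bounds
    values -- imaged parameter per cell
    """
    # Variation of each cell: largest jump to an adjacent cell.
    variation = []
    for i, v in enumerate(values):
        neighbours = []
        if i > 0:
            neighbours.append(abs(v - values[i - 1]))
        if i < len(values) - 1:
            neighbours.append(abs(v - values[i + 1]))
        variation.append(max(neighbours))

    # Mark the cells with the largest variation and bisect them.
    marked = sorted(range(len(cells)), key=lambda i: variation[i])[-top_k:]
    new_cells, new_values = [], []
    for i, (a, b) in enumerate(cells):
        if i in marked:
            mid = 0.5 * (a + b)
            new_cells += [(a, mid), (mid, b)]
            new_values += [values[i], values[i]]  # children inherit the value
        else:
            new_cells.append((a, b))
            new_values.append(values[i])
    return new_cells, new_values

cells = [(0, 1), (1, 2), (2, 3), (3, 4)]
values = [1.0, 1.1, 5.0, 5.1]  # sharp contrast between cells 1 and 2
refined, _ = refine_by_variation(cells, values, top_k=2)
print(len(refined))  # -> 6 (the two cells flanking the contrast are bisected)
```

    In the paper the same idea operates on 3-D finite-element meshes, with the refinement driven by the evolving inversion model rather than a fixed value array.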

  3. Dream controller

    DOEpatents

    Cheng, George Shu-Xing; Mulkey, Steven L; Wang, Qiang; Chow, Andrew J

    2013-11-26

    A method and apparatus for intelligently controlling continuous process variables. A Dream Controller comprises an Intelligent Engine mechanism and a number of Model-Free Adaptive (MFA) controllers, each of which is suitable to control a process with specific behaviors. The Intelligent Engine can automatically select the appropriate MFA controller and its parameters so that the Dream Controller can be easily used by people with limited control experience and those who do not have the time to commission, tune, and maintain automatic controllers.

  4. Design and realization of an AEC&AGC system for the CCD aerial camera

    NASA Astrophysics Data System (ADS)

    Liu, Hai ying; Feng, Bing; Wang, Peng; Li, Yan; Wei, Hao yun

    2015-08-01

    An AEC and AGC (Automatic Exposure Control and Automatic Gain Control) system was designed for a CCD aerial camera with fixed aperture and electronic shutter. The normal AEC and AGC algorithm is not suitable for an aerial camera, since the camera always takes high-resolution photographs during high-speed motion. The AEC and AGC system adjusts the electronic shutter and camera gain automatically according to the target brightness and the moving speed of the aircraft. An automatic Gamma correction is applied before the image is output, so that the image is better suited to viewing and analysis by the human eye. The AEC and AGC system avoids underexposure, overexposure, and image blurring caused by fast motion or environmental vibration. A series of tests proved that the system meets the requirements of the camera system, with fast adjustment speed, high adaptability and high reliability in severe, complex environments.

  5. Adaptive pseudolinear compensators of dynamic characteristics of automatic control systems

    NASA Astrophysics Data System (ADS)

    Skorospeshkin, M. V.; Sukhodoev, M. S.; Timoshenko, E. A.; Lenskiy, F. V.

    2016-04-01

    Adaptive pseudolinear gain and phase compensators of dynamic characteristics of automatic control systems are suggested. The automatic control system performance with adaptive compensators has been explored. The efficiency of pseudolinear adaptive compensators in the automatic control systems with time-varying parameters has been demonstrated.

  6. Automatic Learning of Fine Operating Rules for Online Power System Security Control.

    PubMed

    Sun, Hongbin; Zhao, Feng; Wang, Hao; Wang, Kang; Jiang, Weiyong; Guo, Qinglai; Zhang, Boming; Wehenkel, Louis

    2016-08-01

    Fine operating rules for security control and an automatic system for their online discovery were developed to adapt to the development of smart grids. The automatic system uses the real-time system state to determine critical flowgates, and then a continuation power flow-based security analysis is used to compute the initial transfer capability of critical flowgates. Next, the system applies Monte Carlo simulations of expected short-term operating condition changes, followed by feature selection and a linear least-squares fitting of the fine operating rules. The proposed system was validated both on an academic test system and on a provincial power system in China. The results indicated that the derived rules provide accuracy and good interpretability and are suitable for real-time power system security control. The use of high-performance computing systems enables these fine operating rules to be refreshed online every 15 min.
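    The final fitting stage can be illustrated with a closed-form ordinary least-squares fit. The single `load` feature and the synthetic samples below are hypothetical stand-ins for the selected flowgate features, not the paper's rule set.

```python
# Illustrative sketch: fit a linear "fine operating rule" margin = a*load + b
# from Monte Carlo samples of operating conditions, by ordinary least squares.

def fit_linear_rule(loads, margins):
    """Return (a, b) minimising sum((a*x + b - y)^2)."""
    n = len(loads)
    mx = sum(loads) / n
    my = sum(margins) / n
    sxx = sum((x - mx) ** 2 for x in loads)
    sxy = sum((x - mx) * (y - my) for x, y in zip(loads, margins))
    a = sxy / sxx
    b = my - a * mx
    return a, b

# Synthetic Monte Carlo samples: security margin shrinks as flowgate load rises.
loads = [100, 120, 140, 160, 180]
margins = [80, 60, 40, 20, 0]
a, b = fit_linear_rule(loads, margins)
print(round(a, 3), round(b, 3))  # -> -1.0 180.0
```

    In practice the rule would be multivariate over the selected features; the closed-form slope/intercept above is the one-feature special case.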

  7. Computer aided segmentation of kidneys using locally shape constrained deformable models on CT images

    NASA Astrophysics Data System (ADS)

    Erdt, Marius; Sakas, Georgios

    2010-03-01

    This work presents a novel approach for model-based segmentation of the kidney in images acquired by Computed Tomography (CT). The developed computer-aided segmentation system is expected to support computer-aided diagnosis and operation planning. We have developed a deformable-model approach based on local shape constraints that prevents the model from deforming into neighboring structures while allowing the global shape to adapt freely to the data. Those local constraints are derived from the anatomical structure of the kidney and the presence and appearance of neighboring organs. The adaptation process is guided by a rule-based deformation logic in order to improve the robustness of the segmentation in areas of diffuse organ boundaries. Our workflow consists of two steps: 1) user-guided positioning, and 2) automatic model adaptation using affine and free-form deformation in order to robustly extract the kidney. In cases which show pronounced pathologies, the system also offers real-time mesh editing tools for a quick refinement of the segmentation result. Evaluation results based on 30 clinical cases using CT data sets show an average Dice correlation coefficient of 93% compared to the ground truth. The results are therefore in most cases comparable to manual delineation. Computation times of the automatic adaptation step are below 6 seconds, which makes the proposed system suitable for application in clinical practice.

  8. Generation of Adaptive Gait Patterns for Quadruped Robot with CPG Network including Motor Dynamic Model

    NASA Astrophysics Data System (ADS)

    Son, Yurak; Kamano, Takuya; Yasuno, Takashi; Suzuki, Takayuki; Harada, Hironobu

    This paper describes the generation of adaptive gait patterns using new Central Pattern Generators (CPGs), including motor dynamic models, for a quadruped robot under various environments. The CPGs act as flexible oscillators of the joints and generate the desired joint angles. The CPGs are mutually connected, and the sets of their coupling parameters are adjusted by a genetic algorithm so that the quadruped robot can realize stable and adequate gait patterns. As a result, suitable CPG networks are obtained not only for a straight walking gait pattern but also for rotation gait patterns. Experimental results demonstrate that the proposed CPG networks are effective in automatically adjusting the adaptive gait patterns for the tested quadruped robot under various environments. Furthermore, target tracking control based on image processing is achieved by combining the generated gait patterns.

  9. Thermographic techniques and adapted algorithms for automatic detection of foreign bodies in food

    NASA Astrophysics Data System (ADS)

    Meinlschmidt, Peter; Maergner, Volker

    2003-04-01

    At the moment, foreign substances in food are detected mainly by mechanical and optical methods as well as ultrasonic techniques, and are then removed from the further process. These techniques detect a large portion of the foreign substances due to their different mass (mechanical sieving), their different colour (optical methods) and their different surface density (ultrasonic detection). Despite these numerous methods, a considerable portion of the foreign substances remains undetected. In order to recognise the materials still undetected, a complementary detection method would be desirable that removes from the production process the foreign substances not registered by the above-mentioned methods. In a project with 13 partners from the food industry, the Fraunhofer-Institut für Holzforschung (WKI) and the Technische Universität are trying to adapt thermography for the detection of foreign bodies in the food industry. After the initial tests turned out to be very promising for the differentiation of foodstuffs and foreign substances, more detailed investigations were carried out to develop suitable algorithms for the automatic detection of foreign bodies. In order to achieve, besides the mere visual detection of foreign substances, an automatic detection under production conditions, extensive experience in image processing and pattern recognition is exploited. Results for the detection of foreign bodies will be presented at the conference, showing the advantages and disadvantages of using grey-level, statistical and morphological image processing techniques.

  10. 30 CFR 27.23 - Automatic warning device.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Automatic warning device. 27.23 Section 27.23... Automatic warning device. (a) An automatic warning device shall be suitably constructed for incorporation in... automatic warning device shall include an alarm signal (audible or colored light), which shall be made to...

  11. 30 CFR 27.23 - Automatic warning device.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Automatic warning device. 27.23 Section 27.23... Automatic warning device. (a) An automatic warning device shall be suitably constructed for incorporation in... automatic warning device shall include an alarm signal (audible or colored light), which shall be made to...

  12. 30 CFR 27.23 - Automatic warning device.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Automatic warning device. 27.23 Section 27.23... Automatic warning device. (a) An automatic warning device shall be suitably constructed for incorporation in... automatic warning device shall include an alarm signal (audible or colored light), which shall be made to...

  13. 30 CFR 27.23 - Automatic warning device.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Automatic warning device. 27.23 Section 27.23... Automatic warning device. (a) An automatic warning device shall be suitably constructed for incorporation in... automatic warning device shall include an alarm signal (audible or colored light), which shall be made to...

  14. Stable adaptive PI control for permanent magnet synchronous motor drive based on improved JITL technique.

    PubMed

    Zheng, Shiqi; Tang, Xiaoqi; Song, Bao; Lu, Shaowu; Ye, Bosheng

    2013-07-01

    In this paper, a stable adaptive PI control strategy based on the improved just-in-time learning (IJITL) technique is proposed for permanent magnet synchronous motor (PMSM) drive. Firstly, the traditional JITL technique is improved. The new IJITL technique has a lower computational burden than traditional JITL and is more suitable for online identification of the PMSM drive system, which has stringent real-time requirements. In this way, the PMSM drive system is identified by the IJITL technique, which provides information to an adaptive PI controller. Secondly, the adaptive PI controller is designed in the discrete-time domain and is composed of a PI controller and a supervisory controller. The PI controller is capable of automatically tuning the control gains online based on the gradient descent method, and the supervisory controller is developed to eliminate the effect of the approximation error introduced by the PI controller upon the system stability in the Lyapunov sense. Finally, experimental results on the PMSM drive system show accurate identification and favorable tracking performance. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
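    The gradient-descent gain tuning can be sketched in an MIT-rule style on a toy plant. The first-order plant model, the learning rate, and the omission of the supervisory (Lyapunov) term are all simplifying assumptions; this is not the paper's PMSM controller.

```python
# Minimal sketch of gradient-descent PI gain adaptation on a first-order
# plant with known positive gain sign (MIT-rule flavour).

def run_adaptive_pi(steps=400, r=1.0, eta=0.02):
    kp, ki = 0.1, 0.0          # initial PI gains
    y, integ = 0.0, 0.0        # plant output and error integral
    for _ in range(steps):
        e = r - y
        integ += e
        u = kp * e + ki * integ            # PI control law
        # Gradient-descent updates on 0.5*e^2 (plant gain sign absorbed
        # into the positive learning rate eta):
        kp += eta * e * e
        ki += eta * e * integ
        y = 0.9 * y + 0.1 * u              # first-order plant model
    return y, kp, ki

y, kp, ki = run_adaptive_pi()
print(kp > 0.1, abs(1.0 - y) < 0.2)  # gains adapt; output approaches setpoint
```

    The integral action drives the steady-state error to zero once the adapted loop is stable; the paper's supervisory controller additionally bounds the adaptation to guarantee stability, which this sketch omits.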

  15. Severity-Based Adaptation with Limited Data for ASR to Aid Dysarthric Speakers

    PubMed Central

    Mustafa, Mumtaz Begum; Salim, Siti Salwah; Mohamed, Noraini; Al-Qatab, Bassam; Siong, Chng Eng

    2014-01-01

    Automatic speech recognition (ASR) is currently used in many assistive technologies, such as helping individuals with speech impairment in their communication ability. One challenge in ASR for speech-impaired individuals is the difficulty in obtaining a good speech database of impaired speakers for building an effective speech acoustic model. Because there are very few existing databases of impaired speech, which are also limited in size, the obvious solution to build a speech acoustic model of impaired speech is by employing adaptation techniques. However, issues that have not been addressed in existing studies in the area of adaptation for speech impairment are as follows: (1) identifying the most effective adaptation technique for impaired speech; and (2) the use of suitable source models to build an effective impaired-speech acoustic model. This research investigates the above-mentioned two issues on dysarthria, a type of speech impairment affecting millions of people. We applied both unimpaired and impaired speech as the source model with well-known adaptation techniques like the maximum likelihood linear regression (MLLR) and the constrained MLLR (C-MLLR). The recognition accuracy of each impaired speech acoustic model is measured in terms of word error rate (WER), with further assessments, including phoneme insertion, substitution and deletion rates. Unimpaired speech when combined with limited high-quality speech-impaired data improves performance of ASR systems in recognising severely impaired dysarthric speech. The C-MLLR adaptation technique was also found to be better than MLLR in recognising mildly and moderately impaired speech based on the statistical analysis of the WER. It was found that phoneme substitution was the biggest contributing factor in WER in dysarthric speech for all levels of severity.
The results show that the speech acoustic models derived from suitable adaptation techniques improve the performance of ASR systems in recognising impaired speech with limited adaptation data. PMID:24466004
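    The WER metric used to score the adapted models is the word-level Levenshtein (edit) distance normalised by the reference length. A minimal stdlib sketch:

```python
# Word error rate: minimum number of word substitutions, deletions and
# insertions turning the reference into the hypothesis, over reference length.

def wer(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i                           # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j                           # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub,                # substitution (or match)
                           dp[i - 1][j] + 1,   # deletion
                           dp[i][j - 1] + 1)   # insertion
    return dp[-1][-1] / len(ref)

print(round(wer("the cat sat on the mat", "the cat sat on mat"), 3))  # -> 0.167
```

    The phoneme insertion, substitution and deletion rates reported in the study come from the same alignment, counted per error type instead of summed.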

  16. Multimodal Translation System Using Texture-Mapped Lip-Sync Images for Video Mail and Automatic Dubbing Applications

    NASA Astrophysics Data System (ADS)

    Morishima, Shigeo; Nakamura, Satoshi

    2004-12-01

    We introduce a multimodal English-to-Japanese and Japanese-to-English translation system that also translates the speaker's speech motion by synchronizing it to the translated speech. This system also introduces both a face synthesis technique that can generate any viseme lip shape and a face tracking technique that can estimate the original position and rotation of a speaker's face in an image sequence. To retain the speaker's facial expression, we substitute only the speech organ's image with the synthesized one, which is made by a 3D wire-frame model that is adaptable to any speaker. Our approach provides translated image synthesis with an extremely small database. The tracking motion of the face from a video image is performed by template matching. In this system, the translation and rotation of the face are detected by using a 3D personal face model whose texture is captured from a video frame. We also propose a method to customize the personal face model by using our GUI tool. By combining these techniques and the translated voice synthesis technique, an automatic multimodal translation can be achieved that is suitable for video mail or automatic dubbing systems into other languages.

  17. A dual-adaptive support-based stereo matching algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Yin; Zhang, Yun

    2017-07-01

    Many stereo matching algorithms use fixed color thresholds and a rigid cross skeleton to segment supports (viz., the Cross method), which, however, does not work well for different images. To address this issue, this paper proposes a novel dual adaptive support (viz., DAS)-based stereo matching method, which uses both appearance and shape information of a local region to segment supports automatically, and then integrates the DAS-based cost aggregation with the absolute difference plus census transform cost, scanline optimization and disparity refinement to develop a stereo matching system. The performance of the DAS method is also evaluated in the Middlebury benchmark and by comparing with the Cross method. The results show that the average error for the DAS method is 25.06% lower than that for the Cross method, indicating that the proposed method is more accurate, has fewer parameters and is suitable for parallel computing.
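    The absolute-difference-plus-census-transform matching cost named above can be sketched as follows. The 3x3 census window and the equal weighting of the two terms are illustrative choices, not the paper's parameters.

```python
# Matching cost sketch: pixel absolute difference plus Hamming distance of
# 3x3 census bit strings, for a candidate disparity d.

def census(img, x, y):
    """3x3 census bit string: 1 where a neighbour is darker than the centre."""
    c = img[y][x]
    bits = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dx == 0 and dy == 0:
                continue
            bits = (bits << 1) | (img[y + dy][x + dx] < c)
    return bits

def ad_census_cost(left, right, x, y, d, w_ad=1.0, w_census=1.0):
    """Combined cost for matching left (x, y) to right (x - d, y)."""
    ad = abs(left[y][x] - right[y][x - d])
    ham = bin(census(left, x, y) ^ census(right, x - d, y)).count("1")
    return w_ad * ad + w_census * ham

left  = [[10, 20, 30, 40],
         [10, 25, 35, 40],
         [10, 20, 30, 40]]
right = [[20, 30, 40, 40],   # left image shifted right-to-left by one pixel
         [25, 35, 40, 40],
         [20, 30, 40, 40]]
# The true disparity 1 should cost less than disparity 0 at pixel (2, 1).
print(ad_census_cost(left, right, 2, 1, 1) < ad_census_cost(left, right, 2, 1, 0))  # -> True
```

    In the full system this per-pixel cost is aggregated over the adaptively segmented support region before scanline optimization and disparity refinement.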

  18. Real-time range acquisition by adaptive structured light.

    PubMed

    Koninckx, Thomas P; Van Gool, Luc

    2006-03-01

    The goal of this paper is to provide a "self-adaptive" system for real-time range acquisition. Reconstructions are based on a single frame structured light illumination. Instead of using generic, static coding that is supposed to work under all circumstances, system adaptation is proposed. This occurs on-the-fly and renders the system more robust against instant scene variability and creates suitable patterns at startup. A continuous trade-off between speed and quality is made. A weighted combination of different coding cues--based upon pattern color, geometry, and tracking--yields a robust way to solve the correspondence problem. The individual coding cues are automatically adapted within a considered family of patterns. The weights to combine them are based on the average consistency with the result within a small time-window. The integration itself is done by reformulating the problem as a graph cut. Also, the camera-projector configuration is taken into account for generating the projection patterns. The correctness of the range maps is not guaranteed, but an estimation of the uncertainty is provided for each part of the reconstruction. Our prototype is implemented using unmodified consumer hardware only and, therefore, is cheap. Frame rates vary between 10 and 25 fps, dependent on scene complexity.

  19. SVAS3: Strain Vector Aided Sensorization of Soft Structures.

    PubMed

    Culha, Utku; Nurzaman, Surya G; Clemens, Frank; Iida, Fumiya

    2014-07-17

    Soft material structures exhibit high deformability and conformability which can be useful for many engineering applications such as robots adapting to unstructured and dynamic environments. However, the fact that they have almost infinite degrees of freedom challenges conventional sensory systems and sensorization approaches due to the difficulties in adapting to soft structure deformations. In this paper, we address this challenge by proposing a novel method which designs flexible sensor morphologies to sense soft material deformations by using a functional material called conductive thermoplastic elastomer (CTPE). This model-based design method, called Strain Vector Aided Sensorization of Soft Structures (SVAS3), provides a simulation platform which analyzes soft body deformations and automatically finds suitable locations for CTPE-based strain gauge sensors to gather strain information which best characterizes the deformation. Our chosen sensor material, CTPE, exhibits a set of unique behaviors in terms of the relationship between strain length and electrical conductivity, elasticity, and shape adaptability, allowing us to flexibly design sensor morphologies that can best capture strain distributions in a given soft structure. We evaluate the performance of our approach by both simulated and real-world experiments and discuss its potential and limitations.

  20. Performance Analysis of the SIFT Operator for Automatic Feature Extraction and Matching in Photogrammetric Applications.

    PubMed

    Lingua, Andrea; Marenchino, Davide; Nex, Francesco

    2009-01-01

    In the photogrammetry field, interest in region detectors, which are widely used in Computer Vision, is quickly increasing due to the availability of new techniques. Images acquired by Mobile Mapping Technology, Oblique Photogrammetric Cameras or Unmanned Aerial Vehicles do not observe normal acquisition conditions. Feature extraction and matching techniques, which are traditionally used in photogrammetry, are usually inefficient for these applications as they are unable to provide reliable results under extreme geometrical conditions (convergent taking geometry, strong affine transformations, etc.) and for bad-textured images. A performance analysis of the SIFT technique in aerial and close-range photogrammetric applications is presented in this paper. The goal is to establish the suitability of the SIFT technique for automatic tie point extraction and approximate DSM (Digital Surface Model) generation. First, the performances of the SIFT operator have been compared with those provided by feature extraction and matching techniques used in photogrammetry. All these techniques have been implemented by the authors and validated on aerial and terrestrial images. Moreover, an auto-adaptive version of the SIFT operator has been developed, in order to improve the performances of the SIFT detector in relation to the texture of the images. The Auto-Adaptive SIFT operator (A² SIFT) has been validated on several aerial images, with particular attention to large scale aerial images acquired using mini-UAV systems.

  1. Automatic video shot boundary detection using k-means clustering and improved adaptive dual threshold comparison

    NASA Astrophysics Data System (ADS)

    Sa, Qila; Wang, Zhihui

    2018-03-01

    At present, content-based video retrieval (CBVR) is the mainstream video retrieval method, using the video's own features to perform automatic identification and retrieval. This method involves a key technology, i.e. shot segmentation. In this paper, a method of automatic video shot boundary detection with K-means clustering and improved adaptive dual threshold comparison is proposed. First, the visual features of every frame are extracted and divided into two categories using the K-means clustering algorithm, namely, one with significant change and one with no significant change. Then, for the classification results, the improved adaptive dual threshold comparison method is used to determine the abrupt as well as gradual shot boundaries. Finally, an automatic video shot boundary detection system is achieved.
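    The three steps above can be condensed into a sketch. The scalar frame-difference feature, the fixed high/low thresholds and the min/max k-means initialisation are illustrative simplifications of the adaptive method described.

```python
# Sketch of the pipeline: 1-D k-means (k=2) splits frame-difference features
# into "significant" vs "insignificant" change, then dual thresholds label
# candidate frames as abrupt cuts or gradual transitions.

def kmeans_1d(values, iters=20):
    lo, hi = min(values), max(values)   # initial centroids at the extremes
    for _ in range(iters):
        a = [v for v in values if abs(v - lo) <= abs(v - hi)]
        b = [v for v in values if abs(v - lo) > abs(v - hi)]
        lo = sum(a) / len(a)
        hi = sum(b) / len(b)
    return lo, hi

def detect_shots(diffs, high=0.7, low=0.2):
    lo_c, hi_c = kmeans_1d(diffs)
    boundaries = []
    for i, d in enumerate(diffs):
        # Only frames in the "significant change" cluster are candidates.
        if abs(d - hi_c) < abs(d - lo_c):
            if d >= high:
                boundaries.append((i, "cut"))
            elif d >= low:
                boundaries.append((i, "gradual"))
    return boundaries

# Inter-frame difference magnitudes: one sharp cut, one gradual transition.
diffs = [0.05, 0.04, 0.9, 0.05, 0.45, 0.5, 0.05, 0.04]
print(detect_shots(diffs))  # -> [(2, 'cut'), (4, 'gradual'), (5, 'gradual')]
```

    In the adaptive method the two thresholds are derived from the clustering result rather than fixed, and the per-frame feature is a multi-dimensional visual descriptor rather than a scalar.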

  2. Design of a mobile brain computer interface-based smart multimedia controller.

    PubMed

    Tseng, Kevin C; Lin, Bor-Shing; Wong, Alice May-Kuen; Lin, Bor-Shyh

    2015-03-06

    Music is a way of expressing our feelings and emotions. Suitable music can positively affect people. However, current multimedia control methods, such as manual selection or automatic random mechanisms, which are now applied broadly in MP3 and CD players, cannot adaptively select suitable music according to the user's physiological state. In this study, a brain computer interface-based smart multimedia controller was proposed to select music in different situations according to the user's physiological state. Here, a commercial mobile tablet was used as the multimedia platform, and a wireless multi-channel electroencephalograph (EEG) acquisition module was designed for real-time EEG monitoring. A smart multimedia control program built into the multimedia platform was developed to analyze the user's EEG features and select music according to his/her state. The relationship between the user's state and music, sorted by listener preference, was also examined in this study. The experimental results show that real-time music biofeedback according to a user's EEG features may positively improve the user's attention state.

  3. Method and system for spatial data input, manipulation and distribution via an adaptive wireless transceiver

    NASA Technical Reports Server (NTRS)

    Wang, Ray (Inventor)

    2009-01-01

    A method and system for spatial data manipulation input and distribution via an adaptive wireless transceiver. The method and system include a wireless transceiver for automatically and adaptively controlling wireless transmissions using a Waveform-DNA method. The wireless transceiver can operate simultaneously over both the short and long distances. The wireless transceiver is automatically adaptive and wireless devices can send and receive wireless digital and analog data from various sources rapidly in real-time via available networks and network services.

  4. Using Multi-threading for the Automatic Load Balancing of 2D Adaptive Finite Element Meshes

    NASA Technical Reports Server (NTRS)

    Heber, Gerd; Biswas, Rupak; Thulasiraman, Parimala; Gao, Guang R.; Saini, Subhash (Technical Monitor)

    1998-01-01

    In this paper, we present a multi-threaded approach for the automatic load balancing of adaptive finite element (FE) meshes. The platform of our choice is the EARTH multi-threaded system, which offers sufficient capabilities to tackle this problem. We implement the adaptation phase of FE applications on triangular meshes and exploit the EARTH token mechanism to automatically balance the resulting irregular and highly nonuniform workload. We discuss the results of our experiments on EARTH-SP2, an implementation of EARTH on the IBM SP2, with different load balancing strategies that are built into the runtime system.

  5. Train Control and Operations

    DOT National Transportation Integrated Search

    1971-06-01

    ATO (automatic train operation) and ATC (automatic train control) systems are evaluated relative to available technology and cost-benefit. The technological evaluation shows that suitable mathematical models of the dynamics of long trains are require...

  6. Scaffolding and Integrated Assessment in Computer Assisted Learning (CAL) for Children with Learning Disabilities

    ERIC Educational Resources Information Center

    Beale, Ivan L.

    2005-01-01

    Computer assisted learning (CAL) can involve a computerised intelligent learning environment, defined as an environment capable of automatically, dynamically and continuously adapting to the learning context. One aspect of this adaptive capability involves automatic adjustment of instructional procedures in response to each learner's performance,…

  7. A Self-Organizing Incremental Neural Network based on local distribution learning.

    PubMed

    Xing, Youlu; Shi, Xiaofeng; Shen, Furao; Zhou, Ke; Zhao, Jinxi

    2016-12-01

    In this paper, we propose an unsupervised incremental learning neural network based on local distribution learning, which is called the Local Distribution Self-Organizing Incremental Neural Network (LD-SOINN). The LD-SOINN combines the advantages of incremental learning and matrix learning. It can automatically discover suitable nodes to fit the learning data in an incremental way without a priori knowledge such as the structure of the network. The nodes of the network store rich local information regarding the learning data. The adaptive vigilance parameter guarantees that LD-SOINN is able to add new nodes for new knowledge automatically, while the number of nodes does not grow without bound. As the learning process continues, nodes that are close to each other and have similar principal components are merged to obtain a concise local representation, which we call a relaxation data representation. A denoising process based on density is designed to reduce the influence of noise. Experiments show that the LD-SOINN performs well on both artificial and real-world data. Copyright © 2016 Elsevier Ltd. All rights reserved.
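    The vigilance-driven growth rule can be sketched in 1-D. The real LD-SOINN additionally stores local distribution (matrix) information, merges nearby nodes and filters noise by density, all omitted from this toy version.

```python
# Sketch of incremental, vigilance-gated learning: each sample either refines
# its nearest node or, if farther than the vigilance threshold, becomes a node.

def learn_incremental(samples, vigilance=1.0, rate=0.1):
    nodes = []
    for x in samples:
        if not nodes:
            nodes.append(x)
            continue
        nearest = min(range(len(nodes)), key=lambda i: abs(nodes[i] - x))
        if abs(nodes[nearest] - x) > vigilance:
            nodes.append(x)                                 # new knowledge
        else:
            nodes[nearest] += rate * (x - nodes[nearest])   # refine old node
    return nodes

# Two well-separated 1-D clusters yield two nodes, not one node per sample.
nodes = learn_incremental([0.0, 0.2, 0.1, 5.0, 5.2, 5.1])
print(len(nodes))  # -> 2
```

    This shows why the vigilance parameter bounds growth: samples close to existing knowledge refine it instead of adding nodes.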

  8. Improving emergency medical dispatching with emphasis on mass-casualty incidents.

    PubMed

    Kleinoscheg, Gabriel; Burgsteiner, Harald; Bernroider, Martin; Kiechle, Günter; Obermayer, Maria

    2014-01-01

    Dispatching ambulances is a demanding and stressful task for dispatchers. This is especially true in the case of mass-casualty incidents. Therefore, the aim of this work was to investigate if, and to what extent, the dispatch operation of the Red Cross Salzburg can be optimized on such occasions with a computerized system. The basic problem of a dynamic multi-vehicle Dial-a-Ride Problem with time windows was enhanced according to the requirements of the Red Cross Salzburg. The general objective was to minimize the total mileage covered by ambulances and the waiting time of patients. Furthermore, in case of emergencies, suitable adaptations to a plan should be carried out automatically. Consequently, the problem is solved by using the Adaptive Large Neighborhood Search. Evaluation results indicate that the system outperforms a human dispatcher by between 2.5% and 36% in total costs within 1 minute of runtime. Moreover, the system's response time when a plan has to be updated is less than 1 minute on average.
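    The Adaptive Large Neighborhood Search loop (destroy part of a plan, repair it greedily, and adaptively reweight the operators that succeed) can be sketched on a toy 1-D routing instance. The operator set, reward scheme and hill-climbing acceptance below are illustrative assumptions, not the authors' configuration.

```python
import random

def route_cost(route):
    """Total travel distance of a route over 1-D stop positions."""
    return sum(abs(route[i + 1] - route[i]) for i in range(len(route) - 1))

def alns(points, iters=200, seed=0):
    rng = random.Random(seed)

    def random_removal(r):
        return rng.sample(range(len(r)), 2)

    def worst_removal(r):
        # Remove the stops whose removal shortens the route the most.
        def saving(i):
            return route_cost(r) - route_cost(r[:i] + r[i + 1:])
        return sorted(range(len(r)), key=saving, reverse=True)[:2]

    destroyers = [random_removal, worst_removal]
    weights = [1.0, 1.0]                      # adaptive operator weights
    best = cur = points[:]
    for _ in range(iters):
        # Roulette-wheel selection of a destroy operator by weight.
        op = rng.choices(range(len(destroyers)), weights=weights)[0]
        removed_idx = sorted(destroyers[op](cur), reverse=True)
        partial = cur[:]
        removed = [partial.pop(i) for i in removed_idx]
        # Greedy repair: insert each stop where it raises the cost least.
        for p in removed:
            pos = min(range(len(partial) + 1),
                      key=lambda k: route_cost(partial[:k] + [p] + partial[k:]))
            partial.insert(pos, p)
        if route_cost(partial) < route_cost(cur):  # accept improvements only
            cur = partial
            weights[op] += 0.1                     # reward successful operator
        if route_cost(cur) < route_cost(best):
            best = cur[:]
    return best

stops = [3, 9, 1, 7, 5]                       # 1-D stop locations
best = alns(stops)
print(route_cost(best))
```

    The real system works on a dynamic Dial-a-Ride Problem with time windows and vehicle capacities; the adaptive weighting idea, however, is exactly this: operators that produce accepted plans are selected more often.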

  9. Automated Kinematics Equations Generation and Constrained Motion Planning Resolution for Modular and Reconfigurable Robots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pin, Francois G.; Love, Lonnie L.; Jung, David L.

    2004-03-29

    Contrary to the repetitive tasks performed by industrial robots, the tasks in most DOE missions, such as environmental restoration or Decontamination and Decommissioning (D&D), can be characterized as "batches-of-one", in which robots must be capable of adapting to changes in constraints, tools, environment, criteria and configuration. No commercially available robot control code is suitable for use with such widely varying conditions. In this talk we present our development of a "generic code" to allow real-time (at loop rate) robot behavior adaptation to changes in task objectives, tools, number and type of constraints, modes of control, or kinematics configuration. We present the analytical framework underlying our approach and detail the design of its two major modules: one for the automatic generation of the kinematics equations when the robot configuration or tools change, and one for motion planning under time-varying constraints. Sample problems illustrating the capabilities of the developed system are presented.

  10. Feature determination from powered wheelchair user joystick input characteristics for adapting driving assistance.

    PubMed

    Gillham, Michael; Pepper, Matthew; Kelly, Steve; Howells, Gareth

    2017-01-01

    Background: Many powered wheelchair users find that their medical condition, and hence their ability to drive the wheelchair, changes over time. To maintain their independent mobility, the powered chair requires adjustment over time to suit the user's needs, so regular input from healthcare professionals is required. These limited resources can mean the user has to wait weeks for appointments, losing independent mobility in the meantime and consequently affecting their quality of life and that of their family and carers. To provide an adaptive assistive driving system, a range of features needs to be identified which are suitable for initial system setup and can automatically provide data for re-calibration over the long term. Methods: A questionnaire was designed to collect information from powered wheelchair users about their symptoms and how these changed over time. Another group of volunteer participants was asked to drive a test platform and complete, as quickly as possible, a course representing manoeuvring in a very confined space. Two of those participants were also monitored over a longer period in their normal daily home environment. Features thought to be suitable were examined using pattern recognition classifiers to determine their suitability for identifying the changing user input over time. Results: The results are not designed to provide absolute insight into individual user behaviour, as no ground truth of the users' ability has been determined; they do nevertheless demonstrate the utility of the measured features to provide evidence of the users' changing ability over time whilst driving a powered wheelchair. Conclusions: Determining the driving features and adjustable elements provides the initial step towards developing an adaptable assistive technology for the user once the ground truths of the individual and their machine have been learned by a smart pattern recognition system.


  12. Toward Agent Programs with Circuit Semantics

    NASA Technical Reports Server (NTRS)

    Nilsson, Nils J.

    1992-01-01

    New ideas are presented for computing and organizing actions for autonomous agents in dynamic environments: environments in which the agent's current situation cannot always be accurately discerned and in which the effects of actions cannot always be reliably predicted. The notion of 'circuit semantics' for programs based on 'teleo-reactive trees' is introduced. Program execution builds a combinational circuit which receives sensory inputs and controls actions. These formalisms embody a high degree of inherent conditionality and thus yield programs that are suitably reactive to their environments. At the same time, the actions computed by the programs are guided by the overall goals of the agent. The paper also speculates about how programs using these ideas could be automatically generated by artificial intelligence planning systems and adapted by learning methods.

  13. Study on application of adaptive fuzzy control and neural network in the automatic leveling system

    NASA Astrophysics Data System (ADS)

    Xu, Xiping; Zhao, Zizhao; Lan, Weiyong; Sha, Lei; Qian, Cheng

    2015-04-01

    This paper discusses the application of adaptive fuzzy control and the BP neural network algorithm in a large-platform automatic leveling control system. The purpose is to develop a measurement system that levels quickly, so that platform-mounted measurement equipment can reach level rapidly and the efficiency of precision measurement work is improved. The paper focuses on analysis of the automatic leveling system based on a fuzzy controller, combines the fuzzy controller with a BP neural network, and uses the BP algorithm to refine the experience-based fuzzy rules, thereby constructing an adaptive fuzzy control system. The learning rate of the BP algorithm is also adjusted at run time to accelerate convergence. Simulation results show that the proposed control method can effectively improve the leveling precision of the automatic leveling system and shorten the leveling time.
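The run-time learning-rate adjustment the abstract mentions is commonly implemented with a simple heuristic: grow the rate while the loss keeps falling, shrink it when a step overshoots. The sketch below shows that generic rule on a toy quadratic "leveling error"; it is an assumed stand-in, not the paper's fuzzy-BP controller, and the factors 1.05/0.5 are illustrative.

```python
def adaptive_lr_descent(grad, x0, lr=0.1, up=1.05, down=0.5, steps=50):
    """Gradient descent with a run-time-adapted learning rate:
    accept a step and speed up when the loss decreases,
    reject it and slow down when it increases (generic sketch)."""
    loss = lambda x: 0.5 * x * x       # toy leveling-error surface
    x, prev = x0, loss(x0)
    for _ in range(steps):
        x_new = x - lr * grad(x)
        cur = loss(x_new)
        if cur < prev:                 # good step: accept, grow the rate
            x, prev, lr = x_new, cur, lr * up
        else:                          # overshoot: reject, shrink the rate
            lr *= down
    return x
```

Starting far from level (x0 = 10), the adapted rate drives the error to near zero in 50 steps, whereas a fixed small rate would converge more slowly.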

  14. Disentangling Complexity in Bayesian Automatic Adaptive Quadrature

    NASA Astrophysics Data System (ADS)

    Adam, Gheorghe; Adam, Sanda

    2018-02-01

    The paper describes a Bayesian automatic adaptive quadrature (BAAQ) solution for numerical integration which is simultaneously robust, reliable, and efficient. A detailed discussion is provided of three main factors which contribute to the enhancement of these features: (1) refinement of the m-panel automatic adaptive scheme through the use of integration-domain-length-scale-adapted quadrature sums; (2) fast early assessment of problem complexity, which enables a non-transitive choice among three execution paths: (i) immediate termination (exceptional cases); (ii) pessimistic, involving time- and resource-consuming Bayesian inference that results in a radical reformulation of the problem to be solved; (iii) optimistic, asking exclusively for subrange subdivision by bisection; (3) use of the weaker of the two possible accuracy targets (the input accuracy specifications and the intrinsic integrand properties, respectively), which yields the maximum possible solution accuracy in the minimum possible computing time.
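The "optimistic" path of subrange subdivision by bisection is the classical adaptive quadrature scheme that BAAQ refines. A textbook version is sketched below: Simpson's rule on a panel is compared against its two half-panels, and the panel is split until the local error estimate meets the tolerance. This is the generic bisection algorithm, not the Bayesian machinery of the paper.

```python
def adaptive_quad(f, a, b, tol=1e-8):
    """Classical adaptive quadrature by interval bisection with
    Simpson's rule and the standard err/15 Richardson correction."""
    def simpson(lo, hi):
        mid = 0.5 * (lo + hi)
        return (hi - lo) / 6.0 * (f(lo) + 4.0 * f(mid) + f(hi))
    def recurse(lo, hi, whole, eps):
        mid = 0.5 * (lo + hi)
        left, right = simpson(lo, mid), simpson(mid, hi)
        err = left + right - whole
        if abs(err) < 15.0 * eps:      # standard Simpson error test
            return left + right + err / 15.0
        # bisect: each half inherits half the error budget
        return recurse(lo, mid, left, eps / 2) + recurse(mid, hi, right, eps / 2)
    return recurse(a, b, simpson(a, b), tol)
```

Since Simpson's rule is exact for cubics, integrating x³ over [0, 2] returns the exact value 4 with no subdivision at all; harder integrands trigger recursive bisection only where the local error demands it.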

  15. SVAS3: Strain Vector Aided Sensorization of Soft Structures

    PubMed Central

    Culha, Utku; Nurzaman, Surya G.; Clemens, Frank; Iida, Fumiya

    2014-01-01

    Soft material structures exhibit high deformability and conformability, which can be useful for many engineering applications such as robots adapting to unstructured and dynamic environments. However, the fact that they have almost infinite degrees of freedom challenges conventional sensory systems and sensorization approaches, due to the difficulties in adapting to soft structure deformations. In this paper, we address this challenge by proposing a novel method which designs flexible sensor morphologies to sense soft material deformations using a functional material called conductive thermoplastic elastomer (CTPE). This model-based design method, called Strain Vector Aided Sensorization of Soft Structures (SVAS3), provides a simulation platform which analyzes soft body deformations and automatically finds suitable locations for CTPE-based strain gauge sensors to gather the strain information that best characterizes the deformation. Our chosen sensor material CTPE exhibits a set of unique behaviors in terms of electrical conductivity as a function of strain length, elasticity, and shape adaptability, allowing us to flexibly design sensor morphologies that can best capture strain distributions in a given soft structure. We evaluate the performance of our approach in both simulated and real-world experiments and discuss its potential and limitations. PMID:25036332

  16. Irregular and adaptive sampling for automatic geophysic measure systems

    NASA Astrophysics Data System (ADS)

    Avagnina, Davide; Lo Presti, Letizia; Mulassano, Paolo

    2000-07-01

    In this paper a sampling method, based on an irregular and adaptive strategy, is described. It can be used as an automatic guide for rovers designed to explore terrestrial and planetary environments. Starting from the hypothesis that an exploratory vehicle is equipped with a payload able to acquire measurements of quantities of interest, the method is able to detect objects of interest from measured points and to realize adaptive sampling, while describing the uninteresting background only coarsely.

  17. Sensors for rate responsive pacing

    PubMed Central

    Dell'Orto, Simonetta; Valli, Paolo; Greco, Enrico Maria

    2004-01-01

    Advances in pacemaker technology in the 1980s generated a wide variety of complex multiprogrammable pacemakers and pacing modes. The aim of the present review is to address the different rate responsive pacing modalities presently available with respect to physiological situations and pathological conditions. Rate adaptive pacing has been shown to improve exercise capacity in patients with chronotropic incompetence. A number of activity and metabolic sensors have been proposed and used for rate control. However, all sensors used to match pacing rate to metabolic demands show typical limitations. To overcome these weaknesses, the use of two sensors has been proposed: an unspecific but fast-reacting sensor is combined with a more specific but slower metabolic one. Clinical studies have demonstrated that this methodology is suitable for reproducing normal sinus behavior during different types and loads of exercise. Sensor combinations require adequate sensor blending and cross-checking, possibly controlled by automatic algorithms, for sensor optimization and simplicity of programming. Assessment, and possibly deactivation, of some automatic functions should also be possible to maximize the benefits of the dual-sensor system in particular conditions. This is of special relevance in patients whose myocardial contractility is limited, such as subjects with implantable defibrillators and biventricular pacemakers. The concept of closed-loop pacing, implementing negative feedback between pacing rate and the control signal, will provide new opportunities to optimize dual-sensor systems and deserves further investigation. The integration of rate adaptive pacing into defibrillators is the natural consequence of technical evolution. PMID:16943981
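The blending and cross-checking of a fast activity sensor with a slow metabolic sensor can be sketched numerically. All signal names, scalings, and rate limits below are illustrative assumptions for the sketch, not values from any device: the activity signal dominates early in exercise, the metabolic signal (here minute ventilation) takes over as it rises, and a cross-check vetoes activity-driven increases the metabolic sensor does not confirm.

```python
def blended_rate(activity, minute_vent, base=60, max_rate=140):
    """Toy dual-sensor rate blending with cross-checking.
    Inputs are normalized sensor signals in [0, 1]; output is a
    pacing rate in bpm (illustrative values only)."""
    act_rate = base + 80 * min(max(activity, 0.0), 1.0)     # fast, unspecific
    met_rate = base + 80 * min(max(minute_vent, 0.0), 1.0)  # slow, specific
    # cross-check: without metabolic confirmation, cap the activity boost
    if minute_vent < 0.1:
        act_rate = min(act_rate, base + 20)
    # blending: weight shifts toward the metabolic sensor as it responds
    w = min(max(minute_vent, 0.0), 1.0)
    rate = (1 - w) * act_rate + w * met_rate
    return min(rate, max_rate)
```

At rest both sensors are silent and the base rate holds; a vibration artifact (high activity, no ventilation response) is capped by the cross-check; sustained exercise confirmed by both sensors drives the rate to its programmed maximum.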

  18. Thai Automatic Speech Recognition

    DTIC Science & Technology

    2005-01-01

    used in an external DARPA evaluation involving medical scenarios between an American Doctor and a naïve monolingual Thai patient. 2. Thai Language... dictionary generation more challenging, and (3) the lack of word segmentation, which calls for automatic segmentation approaches to make n-gram language...requires a dictionary and provides various segmentation algorithms to automatically select suitable segmentations. Here we used a maximal matching

  19. A review of automatic patient identification options for public health care centers with restricted budgets.

    PubMed

    García-Betances, Rebeca I; Huerta, Mónica K

    2012-01-01

    A comparative review is presented of available technologies suitable for automatic reading of patient identification bracelet tags. Existing technologies' backgrounds, characteristics, advantages and disadvantages are described in relation to their possible use by public health care centers with budgetary limitations. A comparative assessment is presented of suitable automatic identification systems based on graphic codes, both one-dimensional (1D) and two-dimensional (2D), printed on labels, as well as those based on radio frequency identification (RFID) tags. The analysis looks at the tradeoffs of these technologies to provide guidance to hospital administrators looking to deploy patient identification technology. The results suggest that affordable automatic patient identification systems can be easily and inexpensively implemented using 2D codes printed on low-cost bracelet labels, which can then be read and automatically decoded by ordinary mobile smart phones. Because of mobile smart phones' present versatility and ubiquity, the implementation and operation of 2D code, and especially Quick Response® (QR) Code, technology emerges as a very attractive alternative for automating patient identification processes in low-budget situations.


  1. Design of a Mobile Brain Computer Interface-Based Smart Multimedia Controller

    PubMed Central

    Tseng, Kevin C.; Lin, Bor-Shing; Wong, Alice May-Kuen; Lin, Bor-Shyh

    2015-01-01

    Music is a way of expressing our feelings and emotions. Suitable music can positively affect people. However, current multimedia control methods, such as manual selection or automatic random mechanisms, which are now applied broadly in MP3 and CD players, cannot adaptively select suitable music according to the user’s physiological state. In this study, a brain computer interface-based smart multimedia controller was proposed to select music in different situations according to the user’s physiological state. Here, a commercial mobile tablet was used as the multimedia platform, and a wireless multi-channel electroencephalograph (EEG) acquisition module was designed for real-time EEG monitoring. A smart multimedia control program built into the multimedia platform was developed to analyze the user’s EEG features and select music according to his/her state. The relationship between the user’s state and music sorted by listener preference was also examined in this study. The experimental results show that real-time music biofeedback according to a user’s EEG features may positively improve the user’s attention state. PMID:25756862

  2. Automatic depth grading tool to successfully adapt stereoscopic 3D content to digital cinema and home viewing environments

    NASA Astrophysics Data System (ADS)

    Thébault, Cédric; Doyen, Didier; Routhier, Pierre; Borel, Thierry

    2013-03-01

    To ensure an immersive, yet comfortable experience, significant work is required during post-production to adapt the stereoscopic 3D (S3D) content to the targeted display and its environment. On the one hand, the content needs to be reconverged using horizontal image translation (HIT) so as to harmonize the depth across the shots. On the other hand, to prevent edge violation, specific re-convergence is required and depending on the viewing conditions floating windows need to be positioned. In order to simplify this time-consuming work we propose a depth grading tool that automatically adapts S3D content to digital cinema or home viewing environments. Based on a disparity map, a stereo point of interest in each shot is automatically evaluated. This point of interest is used for depth matching, i.e. to position the objects of interest of consecutive shots in a same plane so as to reduce visual fatigue. The tool adapts the re-convergence to avoid edge-violation, hyper-convergence and hyper-divergence. Floating windows are also automatically positioned. The method has been tested on various types of S3D content, and the results have been validated by a stereographer.
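Horizontal image translation (HIT), the basic re-convergence operation described above, amounts to sliding one eye's view sideways: shifting every pixel of the right-eye image by the same amount changes every on-screen disparity by that amount, moving the whole scene forward or backward in depth. The sketch below assumes a sign convention and zero-fills the vacated columns; a real depth grader would crop the frame or apply floating windows instead.

```python
import numpy as np

def horizontal_image_translation(right_img, shift):
    """Shift the right-eye image horizontally by `shift` pixels to
    re-converge a stereo pair.  Vacated columns are zero-filled
    (a simplification; production tools crop or float the window)."""
    out = np.zeros_like(right_img)
    if shift > 0:
        out[:, shift:] = right_img[:, :-shift]
    elif shift < 0:
        out[:, :shift] = right_img[:, -shift:]
    else:
        out[:] = right_img
    return out
```

Because the same translation is applied to every shot's point of interest, consecutive shots can be depth-matched by choosing each shot's shift so that its object of interest lands at the same disparity plane.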

  3. Book4All: A Tool to Make an e-Book More Accessible to Students with Vision/Visual-Impairments

    NASA Astrophysics Data System (ADS)

    Calabrò, Antonello; Contini, Elia; Leporini, Barbara

    Empowering people who are blind or otherwise visually impaired includes ensuring that products and electronic materials incorporate a broad range of accessibility features and work well with screen readers and other assistive technology devices. This is particularly important for students with vision impairments. Unfortunately, authors and publishers often do not include specific criteria when preparing the contents. Consequently, e-books can be inadequate for blind and low vision users, especially for students. In this paper we describe a semi-automatic tool developed to support operators who adapt e-documents for visually impaired students. The proposed tool can be used to convert a PDF e-book into a more suitable accessible and usable format readable on desktop computer or on mobile devices.

  4. MEANS FOR CONTROLLING A NUCLEAR REACTOR

    DOEpatents

    Wilson, V.C.; Overbeck, W.P.; Slotin, L.; Froman, D.K.

    1957-12-17

    This patent relates to nuclear reactors of the type using a solid neutron absorbing material as a means for controlling the reproduction ratio of the system and thereby the power output. Elongated rods of neutron absorbing material, such as boron steel for example, are adapted to be inserted into and removed from the core of the reactor by electric motors and suitable drive means. The motors and drive means are controlled by means responsive to the neutron density, such as ionization chambers. The control system is designed to be responsive also to the rate of change in neutron density, to automatically maintain the total power output at a substantially constant predetermined value. A safety rod means responsive to neutron density is also provided for keeping the power output below a predetermined maximum value at all times.

  5. A deterministic particle method for one-dimensional reaction-diffusion equations

    NASA Technical Reports Server (NTRS)

    Mascagni, Michael

    1995-01-01

    We derive a deterministic particle method for the solution of nonlinear reaction-diffusion equations in one spatial dimension. This deterministic method is an analog of a Monte Carlo method for the solution of these problems that has been previously investigated by the author. The deterministic method leads to the consideration of a system of ordinary differential equations for the positions of suitably defined particles. We then consider the time explicit and implicit methods for this system of ordinary differential equations and we study a Picard and Newton iteration for the solution of the implicit system. Next we solve numerically this system and study the discretization error both analytically and numerically. Numerical computation shows that this deterministic method is automatically adaptive to large gradients in the solution.
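One standard way to make the diffusion part of such an equation deterministic, in the spirit the abstract describes, is to replace random walks with particles that move down the gradient of the log of a kernel density estimate of the particle cloud. The sketch below shows that generic construction in 1-D; it is an assumed illustration of the deterministic-particle idea, not the paper's exact scheme, and the kernel width and step sizes are arbitrary.

```python
import math

def diffuse_particles(xs, D=1.0, h=0.5, dt=0.01, steps=100):
    """Deterministic particle sketch of 1-D diffusion: each particle
    moves with velocity -D * (log rho)'(x), where rho is a Gaussian
    kernel density estimate of the particle positions."""
    xs = list(xs)
    K = lambda r: math.exp(-0.5 * (r / h) ** 2)
    for _ in range(steps):
        vs = []
        for x in xs:
            num = sum(K(x - y) * (x - y) for y in xs)   # ~ -rho'(x) * h^2
            den = sum(K(x - y) for y in xs)             # ~  rho(x)
            vs.append(D * num / (den * h * h))          # = -D * (log rho)'(x)
        xs = [x + dt * v for x, v in zip(xs, vs)]       # explicit Euler step
    return xs
```

A tight symmetric cluster spreads out over time, mimicking diffusion, while the center of mass is preserved; the positions evolve by ordinary differential equations, so no random numbers are involved.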

  6. Optical Automatic Car Identification (OACI) Field Test Program

    DOT National Transportation Integrated Search

    1976-05-01

    The results of the Optical Automatic Car Identification (OACI) tests at Chicago conducted from August 16 to September 4, 1975 are presented. The main purpose of this test was to determine the suitability of optics as a principle of operation for an a...

  7. Segmentation of the heart and major vascular structures in cardiovascular CT images

    NASA Astrophysics Data System (ADS)

    Peters, J.; Ecabert, O.; Lorenz, C.; von Berg, J.; Walker, M. J.; Ivanc, T. B.; Vembar, M.; Olszewski, M. E.; Weese, J.

    2008-03-01

    Segmentation of organs in medical images can be successfully performed with shape-constrained deformable models. A surface mesh is attracted to detected image boundaries by an external energy, while an internal energy keeps the mesh similar to expected shapes. Complex organs like the heart with its four chambers can be automatically segmented using a suitable shape variability model based on piecewise affine degrees of freedom. In this paper, we extend the approach to also segment highly variable vascular structures. We introduce a dedicated framework to adapt an extended mesh model to freely bending vessels. This is achieved by subdividing each vessel into (short) tube-shaped segments ("tubelets"). These are assigned individual similarity transformations for local orientation and scaling. Proper adaptation is achieved by progressively adapting distal vessel parts to the image only after proximal neighbor tubelets have already converged. In addition, each newly activated tubelet inherits the local orientation and scale of the preceding one. To arrive at a joint segmentation of chambers and vasculature, we extended a previous model comprising endocardial surfaces of the four chambers, the left ventricular epicardium, and a pulmonary artery trunk. Newly added are the aorta (ascending and descending plus arch), superior and inferior vena cava, coronary sinus, and four pulmonary veins. These vessels are organized as stacks of triangulated rings. This mesh configuration is most suitable for defining tubelet segments. On 36 CT data sets reconstructed at several cardiac phases from 17 patients, segmentation accuracies of 0.61-0.80 mm are obtained for the cardiac chambers. For the visible parts of the newly added great vessels, surface accuracies of 0.47-1.17 mm are obtained (larger errors are associated with faintly contrasted venous structures).

  8. Automated Image Analysis of Lung Branching Morphogenesis from Microscopic Images of Fetal Rat Explants

    PubMed Central

    Rodrigues, Pedro L.; Rodrigues, Nuno F.; Duque, Duarte; Granja, Sara; Correia-Pinto, Jorge; Vilaça, João L.

    2014-01-01

    Background. Regulating mechanisms of branching morphogenesis of fetal rat lung explants have been an essential tool for molecular research. This work presents a new methodology to accurately quantify the epithelium, outer contour, and peripheral airway buds of lung explants during cellular development from microscopic images. Methods. The outer contour was defined using an adaptive and multiscale threshold algorithm whose level was automatically calculated based on an entropy maximization criterion. The inner lung epithelium was defined by a clustering procedure that groups small image regions according to the minimum description length principle and local statistical properties. Finally, the number of peripheral buds was counted as the branched ends of the skeleton in a skeletonized image of the inner lung epithelium. Results. The time for lung branching morphometric analysis was reduced by 98% compared with the manual method. Best results were obtained in the first two days of cellular development, with smaller standard deviations. Nonsignificant differences were found between the automatic and manual results on all culture days. Conclusions. The proposed method introduces a series of advantages related to its intuitive use and accuracy, making the technique suitable for images with different lighting characteristics and allowing a reliable comparison between different researchers. PMID:25250057
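Counting buds as "branched ends of the skeleton" has a simple pixel-level reading: an endpoint is a skeleton pixel with exactly one 8-connected skeleton neighbour. The sketch below implements that endpoint test directly; it assumes the skeleton is already computed (the paper's thresholding, clustering, and skeletonization steps are not reproduced here).

```python
import numpy as np

def count_branch_ends(skel):
    """Count endpoints of a binary skeleton image: pixels with exactly
    one 8-connected skeleton neighbour (a proxy for peripheral buds)."""
    skel = (np.asarray(skel) > 0).astype(int)
    padded = np.pad(skel, 1)                     # avoid border checks
    ends = 0
    for i in range(1, padded.shape[0] - 1):
        for j in range(1, padded.shape[1] - 1):
            if padded[i, j]:
                # neighbours in the 3x3 window, excluding the pixel itself
                n = padded[i-1:i+2, j-1:j+2].sum() - 1
                if n == 1:
                    ends += 1
    return ends
```

A small Y-shaped skeleton has three such endpoints, matching its three free branch tips.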

  9. Invited review: technical solutions for analysis of milk constituents and abnormal milk.

    PubMed

    Brandt, M; Haeussermann, A; Hartung, E

    2010-02-01

    Information about constituents of milk and visual alterations can be used for management support in improving mastitis detection, monitoring fertility and reproduction, and adapting individual diets. Numerous sensors that gather this information are either currently available or in development. Nevertheless, there is still a need to adapt these sensors to special requirements of on-farm utilization such as robustness, calibration and maintenance, costs, operating cycle duration, and high sensitivity and specificity. This paper provides an overview of available sensors, ongoing research, and areas of application for analysis of milk constituents. Currently, the recognition of abnormal milk and the control of udder health is achieved mainly by recording electrical conductivity and changes in milk color. Further indicators of inflammation were recently investigated either to satisfy the high specificity necessary for automatic separation of milk or to create reliable alarm lists. Likewise, milk composition, especially fat:protein ratio, milk urea nitrogen content, and concentration of ketone bodies, provides suitable information about energy and protein supply, roughage fraction in the diet, and metabolic imbalances in dairy cows. In this regard, future prospects are to use frequent on-farm measurements of milk constituents for short-term automatic nutritional management. Finally, measuring progesterone concentration in milk helps farmers detect ovulation, pregnancy, and infertility. Monitoring systems for on-farm or on-line analysis of milk composition are mostly based on infrared spectroscopy, optical methods, biosensors, or sensor arrays. Their calibration and maintenance requirements have to be checked thoroughly before they can be regularly implemented on dairy farms. Copyright 2010 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  10. Automatic Organ Localization for Adaptive Radiation Therapy for Prostate Cancer

    DTIC Science & Technology

    2005-05-01

    and provides a framework for task 3. Key Research Accomplishments: * Comparison of manual segmentation with our automatic method, using several... well as manual segmentations by a different rater. * Computation of the actual cumulative dose delivered to both the cancerous and critical healthy... adaptive treatment of prostate or other cancer. As a result, all such work must be done manually. However, manual segmentation of the tumor and neighboring

  11. ACIR: automatic cochlea image registration

    NASA Astrophysics Data System (ADS)

    Al-Dhamari, Ibraheem; Bauer, Sabine; Paulus, Dietrich; Lissek, Friedrich; Jacob, Roland

    2017-02-01

    Efficient Cochlear Implant (CI) surgery requires prior knowledge of the cochlea's size and its characteristics. This information helps to select suitable implants for different patients. To obtain these measurements, a segmentation method for cochlea medical images is needed. An important pre-processing step for good cochlea segmentation is efficient image registration. The cochlea's small size and complex structure, in addition to the different resolutions and head positions during imaging, pose a major challenge for the automated registration of the different image modalities. In this paper, an Automatic Cochlea Image Registration (ACIR) method for multimodal human cochlea images is proposed. This method is based on using small areas that have clear structures in both input images instead of registering the complete image. It uses the Adaptive Stochastic Gradient Descent Optimizer (ASGD) and Mattes's Mutual Information metric (MMI) to estimate 3D rigid transform parameters. State-of-the-art medical image registration optimizers published over the last two years are studied and compared quantitatively using the standard Dice Similarity Coefficient (DSC). ACIR requires only 4.86 seconds on average to align cochlea images automatically and to put all the modalities in the same spatial locations without human intervention. The source code is based on the tool elastix and is provided for free as a 3D Slicer plugin. Another contribution of this work is a proposed public cochlea standard dataset, which can be downloaded for free from a public XNAT server.

  12. Using Multithreading for the Automatic Load Balancing of 2D Adaptive Finite Element Meshes

    NASA Technical Reports Server (NTRS)

    Heber, Gerd; Biswas, Rupak; Thulasiraman, Parimala; Gao, Guang R.; Bailey, David H. (Technical Monitor)

    1998-01-01

    In this paper, we present a multi-threaded approach for the automatic load balancing of adaptive finite element (FE) meshes. The platform of our choice is the EARTH multi-threaded system which offers sufficient capabilities to tackle this problem. We implement the question phase of FE applications on triangular meshes, and exploit the EARTH token mechanism to automatically balance the resulting irregular and highly nonuniform workload. We discuss the results of our experiments on EARTH-SP2, an implementation of EARTH on the IBM SP2, with different load balancing strategies that are built into the runtime system.

  13. A hierarchical structure for automatic meshing and adaptive FEM analysis

    NASA Technical Reports Server (NTRS)

    Kela, Ajay; Saxena, Mukul; Perucchio, Renato

    1987-01-01

    A new algorithm is discussed for automatically generating, from solid models of mechanical parts, finite element meshes organized as spatially addressable quaternary trees (for 2-D work) or octal trees (for 3-D work). Because such meshes are inherently hierarchical as well as spatially addressable, they permit efficient substructuring techniques to be used for both global analysis and incremental remeshing and reanalysis. The global and incremental techniques are summarized, and some results are presented from an experimental closed-loop 2-D system in which meshing, analysis, error evaluation, and remeshing and reanalysis are done automatically and adaptively. The implementation for 3-D work is briefly discussed.
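
    The spatially addressable quaternary-tree organization can be sketched as below. The refinement predicate, depth limit, and dict representation are illustrative assumptions, not the paper's data structure:

```python
def build_quadtree(x, y, size, needs_refinement, depth=0, max_depth=6):
    """Recursively subdivide a square cell (x, y, size) into four
    children while the supplied refinement predicate asks for it.
    Returns a nested dict; leaves carry their bounding square."""
    if depth >= max_depth or not needs_refinement(x, y, size):
        return {"x": x, "y": y, "size": size, "leaf": True}
    half = size / 2.0
    return {
        "x": x, "y": y, "size": size, "leaf": False,
        "children": [
            build_quadtree(x,        y,        half, needs_refinement, depth + 1, max_depth),
            build_quadtree(x + half, y,        half, needs_refinement, depth + 1, max_depth),
            build_quadtree(x,        y + half, half, needs_refinement, depth + 1, max_depth),
            build_quadtree(x + half, y + half, half, needs_refinement, depth + 1, max_depth),
        ],
    }

def count_leaves(node):
    """Number of leaf cells, i.e. elements of the resulting mesh."""
    if node["leaf"]:
        return 1
    return sum(count_leaves(c) for c in node["children"])

# refine only near the origin: the cell touching (0, 0) keeps splitting
tree = build_quadtree(0.0, 0.0, 1.0, lambda x, y, s: x == 0.0 and y == 0.0, max_depth=3)
print(count_leaves(tree))  # → 10
```

    The spatial addressability comes from the fixed child ordering: a path of child indices from the root identifies a cell and its location.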

  14. Optimizing Input/Output Using Adaptive File System Policies

    NASA Technical Reports Server (NTRS)

    Madhyastha, Tara M.; Elford, Christopher L.; Reed, Daniel A.

    1996-01-01

    Parallel input/output characterization studies and experiments with flexible resource management algorithms indicate that adaptivity is crucial to file system performance. In this paper we propose an automatic technique for selecting and refining file system policies based on application access patterns and execution environment. An automatic classification framework allows the file system to select appropriate caching and pre-fetching policies, while performance sensors provide feedback used to tune policy parameters for specific system environments. To illustrate the potential performance improvements possible using adaptive file system policies, we present results from experiments involving classification-based and performance-based steering.
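
    A minimal sketch of classification-based policy selection, assuming a crude sequential-versus-random classifier; the threshold and policy names are invented for illustration, not taken from the paper:

```python
def classify_access_pattern(offsets, block_size=4096):
    """Crude classifier: if most consecutive requests advance by
    exactly one block, call the pattern 'sequential', else 'random'."""
    if len(offsets) < 2:
        return "sequential"
    sequential_steps = sum(
        1 for a, b in zip(offsets, offsets[1:]) if b - a == block_size
    )
    frac = sequential_steps / (len(offsets) - 1)
    return "sequential" if frac >= 0.8 else "random"

def choose_policy(pattern):
    """Map the detected pattern to a caching/prefetching policy
    (hypothetical policy names)."""
    return {"sequential": "readahead", "random": "demand-paging"}[pattern]

seq = [i * 4096 for i in range(10)]
print(choose_policy(classify_access_pattern(seq)))  # → readahead
```

    In the framework described above, performance sensors would then tune the chosen policy's parameters (e.g., readahead depth) from observed throughput.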

  15. Locomotor adaptation is modulated by observing the actions of others

    PubMed Central

    Patel, Mitesh; Roberts, R. Edward; Riyaz, Mohammed U.; Ahmed, Maroof; Buckwell, David; Bunday, Karen; Ahmad, Hena; Kaski, Diego; Arshad, Qadeer

    2015-01-01

    Observing the motor actions of another person could facilitate compensatory motor behavior in the passive observer. Here we explored whether action observation alone can induce automatic locomotor adaptation in humans. To explore this possibility, we used the “broken escalator” paradigm. Conventionally this involves stepping upon a stationary sled after having previously experienced it actually moving (Moving trials). This history of motion produces a locomotor aftereffect when subsequently stepping onto a stationary sled. We found that viewing an actor perform the Moving trials was sufficient to generate a locomotor aftereffect in the observer, the size of which was significantly correlated with the size of the movement (postural sway) observed. Crucially, the effect is specific to watching the task being performed, as no motor adaptation occurs after simply viewing the sled move in isolation. These findings demonstrate that locomotor adaptation in humans can be driven purely by action observation, with the brain adapting motor plans in response to the size of the observed individual's motion. This mechanism may be mediated by a mirror neuron system that automatically adapts behavior to minimize movement errors and improve motor skills through social cues, although further neurophysiological studies are required to support this theory. These data suggest that merely observing the gait of another person in a challenging environment is sufficient to generate appropriate postural countermeasures, implying the existence of an automatic mechanism for adapting locomotor behavior. PMID:26156386

  16. High-throughput image analysis of tumor spheroids: a user-friendly software application to measure the size of spheroids automatically and accurately.

    PubMed

    Chen, Wenjin; Wong, Chung; Vosburgh, Evan; Levine, Arnold J; Foran, David J; Xu, Eugenia Y

    2014-07-08

    The increasing number of applications of three-dimensional (3D) tumor spheroids as an in vitro model for drug discovery requires their adaptation to large-scale screening formats in every step of a drug screen, including large-scale image analysis. Currently there is no ready-to-use, free image analysis software to meet this large-scale format. Most existing methods involve manually drawing the length and width of the imaged 3D spheroids, which is a tedious and time-consuming process. This study presents a high-throughput image analysis software application, SpheroidSizer, which measures the major and minor axial lengths of imaged 3D tumor spheroids automatically and accurately, calculates the volume of each individual 3D tumor spheroid, and then outputs the results in two different spreadsheet forms for easy manipulation in subsequent data analysis. The main advantage of this software is its powerful image analysis application adapted for large numbers of images; it provides a high-throughput computation and quality-control workflow. The estimated time to process 1,000 images is about 15 min on a minimally configured laptop, or around 1 min on a multi-core performance workstation. The graphical user interface (GUI) is also designed for easy quality control, and users can manually override the computer results. The key method used in this software is adapted from the active contour algorithm, also known as Snakes, which is especially suitable for images with the uneven illumination and noisy backgrounds that often plague automated image processing in high-throughput screens. The complementary "Manual Initialize" and "Hand Draw" tools give SpheroidSizer the flexibility to deal with various types of spheroids and images of diverse quality. This high-throughput image analysis software remarkably reduces labor and speeds up the analysis process; implementing it is beneficial for making 3D tumor spheroids a routine in vitro model for drug screens in industry and academia.
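
    One common way to turn fitted major and minor axial lengths into a volume is the modified-ellipsoid formula V = (π/6)·L·W²; whether SpheroidSizer uses exactly this formula is an assumption here, made only to illustrate the axial-length-to-volume step:

```python
import math

def spheroid_volume(major_axis, minor_axis):
    """Approximate spheroid volume from its major (L) and minor (W)
    axial lengths via V = (pi / 6) * L * W**2 (a common convention
    for tumor volume; assumed, not confirmed, for SpheroidSizer).
    Both axes must be in the same unit."""
    return math.pi / 6.0 * major_axis * minor_axis ** 2

# a sphere is the special case L == W (diameter 2 → volume 4*pi/3)
print(abs(spheroid_volume(2.0, 2.0) - 4.0 / 3.0 * math.pi) < 1e-12)  # → True
```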

  17. Automatic speech recognition in air traffic control

    NASA Technical Reports Server (NTRS)

    Karlsson, Joakim

    1990-01-01

    Automatic Speech Recognition (ASR) technology and its application to the Air Traffic Control system are described. The advantages of applying ASR to Air Traffic Control, as well as criteria for choosing a suitable ASR system are presented. Results from previous research and directions for future work at the Flight Transportation Laboratory are outlined.

  18. Automatic Dance Lesson Generation

    ERIC Educational Resources Information Center

    Yang, Yang; Leung, H.; Yue, Lihua; Deng, LiQun

    2012-01-01

    In this paper, an automatic lesson generation system is presented which is suitable in a learning-by-mimicking scenario where the learning objects can be represented as multiattribute time series data. The dance is used as an example in this paper to illustrate the idea. Given a dance motion sequence as the input, the proposed lesson generation…

  19. Automatic Clustering Using Multi-objective Particle Swarm and Simulated Annealing

    PubMed Central

    Abubaker, Ahmad; Baharum, Adam; Alrefaei, Mahmoud

    2015-01-01

    This paper puts forward a new automatic clustering algorithm based on Multi-Objective Particle Swarm Optimization and Simulated Annealing, “MOPSOSA”. The proposed algorithm is capable of automatic clustering which is appropriate for partitioning datasets to a suitable number of clusters. MOPSOSA combines the features of the multi-objective based particle swarm optimization (PSO) and the Multi-Objective Simulated Annealing (MOSA). Three cluster validity indices were optimized simultaneously to establish the suitable number of clusters and the appropriate clustering for a dataset. The first cluster validity index is centred on Euclidean distance, the second on the point symmetry distance, and the last cluster validity index is based on short distance. A number of algorithms have been compared with the MOPSOSA algorithm in resolving clustering problems by determining the actual number of clusters and optimal clustering. Computational experiments were carried out to study fourteen artificial and five real life datasets. PMID:26132309

  20. Automatic water inventory, collecting, and dispensing unit

    NASA Technical Reports Server (NTRS)

    Hall, J. B., Jr.; Williams, E. F.

    1972-01-01

    Two cylindrical tanks with piston bladders and associated components for automatic filling and emptying use liquid inventory readout devices to control water flow. The unit provides for adaptive water collection, storage, and dispensing in a weightless environment.

  1. Virtual reality based adaptive dose assessment method for arbitrary geometries in nuclear facility decommissioning.

    PubMed

    Liu, Yong-Kuo; Chao, Nan; Xia, Hong; Peng, Min-Jun; Ayodeji, Abiodun

    2018-05-17

    This paper presents an improved and efficient virtual reality-based adaptive dose assessment method (VRBAM) applicable to the cutting and dismantling tasks in nuclear facility decommissioning. The method combines the modeling strength of virtual reality with the flexibility of adaptive technology. The initial geometry is designed with the three-dimensional computer-aided design tools, and a hybrid model composed of cuboids and a point-cloud is generated automatically according to the virtual model of the object. In order to improve the efficiency of dose calculation while retaining accuracy, the hybrid model is converted to a weighted point-cloud model, and the point kernels are generated by adaptively simplifying the weighted point-cloud model according to the detector position, an approach that is suitable for arbitrary geometries. The dose rates are calculated with the Point-Kernel method. To account for radiation scattering effects, buildup factors are calculated with the Geometric-Progression formula in the fitting function. The geometric modeling capability of VRBAM was verified by simulating basic geometries, which included a convex surface, a concave surface, a flat surface and their combination. The simulation results show that the VRBAM is more flexible and superior to other approaches in modeling complex geometries. In this paper, the computation time and dose rate results obtained from the proposed method were also compared with those obtained using the MCNP code and an earlier virtual reality-based method (VRBM) developed by the same authors. © 2018 IOP Publishing Ltd.
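
    The Point-Kernel method described above can be sketched as a sum of attenuated inverse-square contributions over the generated kernels. The buildup factor below is a toy linear stand-in for the paper's Geometric-Progression fit, which depends on tabulated material coefficients:

```python
import math

def point_kernel_dose(kernels, detector, mu):
    """Point-kernel dose estimate at a detector position.
    kernels: list of (x, y, z, source_strength); mu: linear attenuation
    coefficient (1/m).  The buildup factor B = 1 + mu*r is a simple
    illustrative stand-in for the Geometric-Progression formula."""
    total = 0.0
    for x, y, z, s in kernels:
        r = math.dist((x, y, z), detector)
        buildup = 1.0 + mu * r
        total += s * buildup * math.exp(-mu * r) / (4.0 * math.pi * r * r)
    return total

# a single unit-strength kernel 1 m away with no attenuation (mu = 0)
# gives the bare inverse-square value 1 / (4*pi)
print(point_kernel_dose([(0, 0, 0, 1.0)], (1, 0, 0), mu=0.0))
```

    In the VRBAM workflow the kernel list would come from the adaptively simplified weighted point-cloud model, regenerated per detector position.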

  2. Method and apparatus for telemetry adaptive bandwidth compression

    NASA Technical Reports Server (NTRS)

    Graham, Olin L.

    1987-01-01

    Methods and apparatus are provided for automatic and/or manual adaptive bandwidth compression of telemetry. An adaptive sampler samples a video signal from a scanning sensor and generates a sequence of sampled fields. Each field, together with range rate information from the sensor, is then sequentially transmitted to and stored in a multiple and adaptive field storage means. The field storage means then, in response to an automatic or manual control signal, transfers the stored sampled field signals to a video monitor in a form for sequential or simultaneous display of a desired number of stored signal fields. The sampling ratio of the adaptive sampler, the relative proportion of available communication bandwidth allocated respectively to transmitted data and video information, and the number of fields simultaneously displayed are manually or automatically adjustable in functional relationship to each other and the detected range rate. In one embodiment, when relatively little or no scene motion is detected, the control signal maximizes the sampling ratio and causes simultaneous display of all stored fields, thus maximizing resolution and the bandwidth available for data transmission. When increased scene motion is detected, the control signal is adjusted accordingly to cause display of fewer fields. If greater resolution is desired, the control signal is adjusted to increase the sampling ratio.
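
    A toy sketch of the motion-dependent trade-off described above; the thresholds, sampling ratios, and field counts are invented for illustration and do not come from the patent:

```python
def adapt_bandwidth(scene_motion, max_fields=8):
    """Pick a sampling ratio and number of displayed fields from a
    normalized scene-motion estimate (0 = static, 1 = fast motion).
    All numeric values here are illustrative assumptions."""
    if scene_motion < 0.1:      # essentially static scene:
        return {"sampling_ratio": 1.0, "fields_displayed": max_fields}
    if scene_motion < 0.5:      # moderate motion: trade some of each
        return {"sampling_ratio": 0.5, "fields_displayed": max_fields // 2}
    # fast motion: display fewer fields, sample sparsely
    return {"sampling_ratio": 0.25, "fields_displayed": 1}

print(adapt_bandwidth(0.05))
```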

  3. Cyber-Physical System Security With Deceptive Virtual Hosts for Industrial Control Networks

    DOE PAGES

    Vollmer, Todd; Manic, Milos

    2014-05-01

    A challenge facing industrial control network administrators is protecting the typically large number of connected assets for which they are responsible. These cyber devices may be tightly coupled with the physical processes they control and human induced failures risk dire real-world consequences. Dynamic virtual honeypots are effective tools for observing and attracting network intruder activity. This paper presents a design and implementation for self-configuring honeypots that passively examine control system network traffic and actively adapt to the observed environment. In contrast to prior work in the field, six tools were analyzed for suitability of network entity information gathering. Ettercap, an established network security tool not commonly used in this capacity, outperformed the other tools and was chosen for implementation. Utilizing Ettercap XML output, a novel four-step algorithm was developed for autonomous creation and update of a Honeyd configuration. This algorithm was tested on an existing small campus grid and sensor network by execution of a collaborative usage scenario. Automatically created virtual hosts were deployed in concert with an anomaly behavior (AB) system in an attack scenario. Virtual hosts were automatically configured with unique emulated network stack behaviors for 92% of the targeted devices. The AB system alerted on 100% of the monitored emulated devices.

  4. Edge Sharpness Assessment by Parametric Modeling: Application to Magnetic Resonance Imaging.

    PubMed

    Ahmad, R; Ding, Y; Simonetti, O P

    2015-05-01

    In biomedical imaging, edge sharpness is an important yet often overlooked image quality metric. In this work, a semi-automatic method to quantify edge sharpness in the presence of significant noise is presented with application to magnetic resonance imaging (MRI). The method is based on parametric modeling of image edges. First, an edge map is automatically generated and one or more edges-of-interest (EOI) are manually selected using a graphical user interface. Multiple exclusion criteria are then enforced to eliminate edge pixels that are potentially not suitable for sharpness assessment. Second, at each pixel of the EOI, an image intensity profile is read along a small line segment that runs locally normal to the EOI. Third, the profiles corresponding to all EOI pixels are individually fitted with a sigmoid function characterized by four parameters, including one that represents edge sharpness. Last, the distribution of the sharpness parameter is used to quantify edge sharpness. For validation, the method is applied to simulated data as well as MRI data from both phantom imaging and cine imaging experiments. This method allows for fast, quantitative evaluation of edge sharpness even in images with poor signal-to-noise ratio. Although the utility of this method is demonstrated for MRI, it can be adapted for other medical imaging applications.

  5. Efficient self-organizing multilayer neural network for nonlinear system modeling.

    PubMed

    Han, Hong-Gui; Wang, Li-Dan; Qiao, Jun-Fei

    2013-07-01

    It has been shown extensively that the dynamic behaviors of a neural system are strongly influenced by the network architecture and learning process. To establish an artificial neural network (ANN) with a self-organizing architecture and a suitable learning algorithm for nonlinear system modeling, an automatic axon-neural network (AANN) is investigated in the following respects. First, the network architecture is constructed automatically, changing both the number of hidden neurons and the topology of the neural network during the training process. The adaptive connecting-and-pruning algorithm (ACP) introduced here is a mixed-mode operation, equivalent to pruning or adding connections between neurons as well as directly inserting required neurons. Secondly, the weights are adjusted using a feedforward computation (FC) to obtain the gradient information during learning. Unlike most previous studies, AANN is able to self-organize both the architecture and the weights, improving network performance. The proposed AANN has also been tested on a number of benchmark problems, ranging from nonlinear function approximation to nonlinear system modeling. The experimental results show that AANN can outperform some existing neural networks. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  6. Adaptive Intelligent Support to Improve Peer Tutoring in Algebra

    ERIC Educational Resources Information Center

    Walker, Erin; Rummel, Nikol; Koedinger, Kenneth R.

    2014-01-01

    Adaptive collaborative learning support (ACLS) involves collaborative learning environments that adapt their characteristics, and sometimes provide intelligent hints and feedback, to improve individual students' collaborative interactions. ACLS often involves a system that can automatically assess student dialogue, model effective and…

  7. Automatically Generated Vegetation Density Maps with LiDAR Survey for Orienteering Purpose

    NASA Astrophysics Data System (ADS)

    Petrovič, Dušan

    2018-05-01

    The focus of our research was to automatically generate the most adequate vegetation density maps for orienteering purposes. The Karttapullautin application, which requires LiDAR data, was used for the automated generation of vegetation density maps. A part of the orienteering map of the Kazlje-Tomaj area was used to compare the graphical display of vegetation density. By varying the parameter settings in the Karttapullautin application, we changed how the vegetation density of the automatically generated map was presented and tried to match it as closely as possible with the orienteering map of Kazlje-Tomaj. By comparing the resulting vegetation density maps, the most suitable parameter settings for automatically generating maps of other areas were proposed as well.

  8. Approaches to the automatic generation and control of finite element meshes

    NASA Technical Reports Server (NTRS)

    Shephard, Mark S.

    1987-01-01

    The algorithmic approaches being taken to the development of finite element mesh generators capable of automatically discretizing general domains without the need for user intervention are discussed. It is demonstrated that because of the modeling demands placed on an automatic mesh generator, all the approaches taken to date produce unstructured meshes. Consideration is also given to both a priori and a posteriori mesh control devices for automatic mesh generators as well as their integration with geometric modeling and adaptive analysis procedures.

  9. Automated encoding of clinical documents based on natural language processing.

    PubMed

    Friedman, Carol; Shagina, Lyudmila; Lussier, Yves; Hripcsak, George

    2004-01-01

    The aim of this study was to develop a method based on natural language processing (NLP) that automatically maps an entire clinical document to codes with modifiers and to quantitatively evaluate the method. An existing NLP system, MedLEE, was adapted to automatically generate codes. The method involves matching of structured output generated by MedLEE, consisting of findings and modifiers, to obtain the most specific code. Recall and precision applied to Unified Medical Language System (UMLS) coding were evaluated in two separate studies. Recall was measured using a test set of 150 randomly selected sentences, which were processed using MedLEE. Results were compared with a reference standard determined manually by seven experts. Precision was measured using a second test set of 150 randomly selected sentences from which UMLS codes were automatically generated by the method and then validated by experts. Recall of the system for UMLS coding of all terms was .77 (95% CI .72-.81), and for coding terms that had corresponding UMLS codes recall was .83 (.79-.87). Recall of the system for extracting all terms was .84 (.81-.88). Recall of the experts ranged from .69 to .91 for extracting terms. The precision of the system was .89 (.87-.91), and precision of the experts ranged from .61 to .91. Extraction of relevant clinical information and UMLS coding were accomplished using a method based on NLP. The method appeared to be comparable to or better than the six experts. The advantage of the method is that it maps text to codes along with other related information, rendering the coded output suitable for effective retrieval.
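
    The recall/precision evaluation against a reference standard reduces to set arithmetic over generated and reference codes; the code strings below are placeholders, not actual UMLS identifiers:

```python
def recall_precision(system_codes, reference_codes):
    """Recall and precision of automatically generated codes against a
    manually determined reference standard (both given as sets)."""
    system = set(system_codes)
    reference = set(reference_codes)
    true_pos = len(system & reference)
    recall = true_pos / len(reference) if reference else 1.0
    precision = true_pos / len(system) if system else 1.0
    return recall, precision

# one of two reference codes found; one of two system codes correct
r, p = recall_precision({"code-a", "code-b"}, {"code-a", "code-c"})
print(r, p)  # → 0.5 0.5
```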

  10. Perception Evolution Network Based on Cognition Deepening Model--Adapting to the Emergence of New Sensory Receptor.

    PubMed

    Xing, Youlu; Shen, Furao; Zhao, Jinxi

    2016-03-01

    The proposed perception evolution network (PEN) is a biologically inspired neural network model for unsupervised learning and online incremental learning. It is able to automatically learn suitable prototypes from learning data in an incremental way, and it does not require the predefined prototype number or the predefined similarity threshold. Meanwhile, being more advanced than the existing unsupervised neural network model, PEN permits the emergence of a new dimension of perception in the perception field of the network. When a new dimension of perception is introduced, PEN is able to integrate the new dimensional sensory inputs with the learned prototypes, i.e., the prototypes are mapped to a high-dimensional space, which consists of both the original dimension and the new dimension of the sensory inputs. In the experiment, artificial data and real-world data are used to test the proposed PEN, and the results show that PEN can work effectively.
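
    A greatly simplified sketch of incremental prototype learning in this spirit. Note a deliberate difference: unlike PEN, which needs no predefined similarity threshold, this toy version takes one explicitly, supplied only for illustration:

```python
import math

def learn_prototypes(samples, distance_threshold):
    """Online prototype learning: a sample close to an existing
    prototype nudges that prototype toward it (running mean);
    otherwise the sample founds a new prototype."""
    prototypes = []
    counts = []
    for s in samples:
        if prototypes:
            d, i = min((math.dist(s, p), i) for i, p in enumerate(prototypes))
            if d <= distance_threshold:
                counts[i] += 1
                lr = 1.0 / counts[i]  # running-mean learning rate
                prototypes[i] = tuple(p + lr * (x - p)
                                      for p, x in zip(prototypes[i], s))
                continue
        prototypes.append(tuple(s))
        counts.append(1)
    return prototypes

# two well-separated clusters yield two prototypes
protos = learn_prototypes([(0, 0), (0.1, 0), (5, 5), (5, 5.1)], 1.0)
print(len(protos))  # → 2
```

    PEN's further step, integrating a newly appearing sensory dimension, would correspond to extending the stored prototype tuples into the higher-dimensional input space.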

  11. Automation in visual inspection tasks: X-ray luggage screening supported by a system of direct, indirect or adaptable cueing with low and high system reliability.

    PubMed

    Chavaillaz, Alain; Schwaninger, Adrian; Michel, Stefan; Sauer, Juergen

    2018-05-25

    The present study evaluated three automation modes for improving performance in an X-ray luggage screening task. 140 participants were asked to detect the presence of prohibited items in X-ray images of cabin luggage. Twenty participants conducted this task without automatic support (control group), whereas the others worked with either indirect cues (system indicated the target presence without specifying its location), or direct cues (system pointed out the exact target location) or adaptable automation (participants could freely choose between no cue, direct and indirect cues). Furthermore, automatic support reliability was manipulated (low vs. high). The results showed a clear advantage for direct cues regarding detection performance and response time. No benefits were observed for adaptable automation. Finally, high automation reliability led to better performance and higher operator trust. The findings overall confirmed that automatic support systems for luggage screening should be designed such that they provide direct, highly reliable cues.

  12. Automatic and user-centric approaches to video summary evaluation

    NASA Astrophysics Data System (ADS)

    Taskiran, Cuneyt M.; Bentley, Frank

    2007-01-01

    Automatic video summarization has become an active research topic in content-based video processing. However, not much emphasis has been placed on developing rigorous summary evaluation methods or on developing summarization systems based on a clear understanding of user needs, obtained through user-centered design. In this paper we address these two topics and propose an automatic video summary evaluation algorithm adapted from the text summarization domain.

  13. Adaptive Self Tuning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peterson, Matthew; Draelos, Timothy; Knox, Hunter

    2017-05-02

    The AST software includes numeric methods to 1) adjust STA/LTA signal detector trigger level (TL) values and 2) filter detections for a network of sensors. AST adapts TL values to the current state of the environment by leveraging cooperation within a neighborhood of sensors. The key metric that guides the dynamic tuning is consistency of each sensor with its nearest neighbors: TL values are automatically adjusted on a per station basis to be more or less sensitive to produce consistent agreement of detections in its neighborhood. The AST algorithm adapts in near real-time to changing conditions in an attempt to automatically self-tune a signal detector to identify (detect) only signals from events of interest.
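
    The underlying STA/LTA detection that AST tunes can be sketched as below; the window lengths and trigger level are illustrative, and AST's neighborhood-based TL adjustment itself is not shown:

```python
def sta_lta(signal, sta_len, lta_len):
    """Classic short-term-average / long-term-average ratio, computed
    at the last sample of each full LTA window."""
    ratios = []
    for i in range(lta_len, len(signal) + 1):
        lta = sum(abs(v) for v in signal[i - lta_len:i]) / lta_len
        sta = sum(abs(v) for v in signal[i - sta_len:i]) / sta_len
        ratios.append(sta / lta if lta > 0 else 0.0)
    return ratios

def detect(signal, sta_len, lta_len, trigger_level):
    """True if any STA/LTA ratio exceeds the trigger level (TL)."""
    return any(r > trigger_level for r in sta_lta(signal, sta_len, lta_len))

# quiet background, then a burst: only the burst trips a TL of 3
quiet = [1.0] * 20
burst = [1.0] * 16 + [10.0] * 4
print(detect(quiet, sta_len=4, lta_len=20, trigger_level=3.0),
      detect(burst, sta_len=4, lta_len=20, trigger_level=3.0))  # → False True
```

    In AST, the `trigger_level` argument is what gets raised or lowered per station so that detections agree across the sensor neighborhood.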

  14. Rapid Spoligotyping of Mycobacterium tuberculosis Complex Bacteria by Use of a Microarray System with Automatic Data Processing and Assignment

    PubMed Central

    Ruettger, Anke; Nieter, Johanna; Skrypnyk, Artem; Engelmann, Ines; Ziegler, Albrecht; Moser, Irmgard; Monecke, Stefan; Ehricht, Ralf

    2012-01-01

    Membrane-based spoligotyping has been converted to DNA microarray format to qualify it for high-throughput testing. We have shown the assay's validity and suitability for direct typing from tissue and detecting new spoligotypes. Advantages of the microarray methodology include rapidity, ease of operation, automatic data processing, and affordability. PMID:22553239

  15. Rapid spoligotyping of Mycobacterium tuberculosis complex bacteria by use of a microarray system with automatic data processing and assignment.

    PubMed

    Ruettger, Anke; Nieter, Johanna; Skrypnyk, Artem; Engelmann, Ines; Ziegler, Albrecht; Moser, Irmgard; Monecke, Stefan; Ehricht, Ralf; Sachse, Konrad

    2012-07-01

    Membrane-based spoligotyping has been converted to DNA microarray format to qualify it for high-throughput testing. We have shown the assay's validity and suitability for direct typing from tissue and detecting new spoligotypes. Advantages of the microarray methodology include rapidity, ease of operation, automatic data processing, and affordability.

  16. Automatic Adaptation to Fast Input Changes in a Time-Invariant Neural Circuit

    PubMed Central

    Bharioke, Arjun; Chklovskii, Dmitri B.

    2015-01-01

    Neurons must faithfully encode signals that can vary over many orders of magnitude despite having only limited dynamic ranges. For a correlated signal, this dynamic range constraint can be relieved by subtracting away components of the signal that can be predicted from the past, a strategy known as predictive coding, which relies on learning the input statistics. However, the statistics of natural input signals can also vary over very short time scales, e.g., following saccades across a visual scene. To maintain a reduced transmission cost for signals with rapidly varying statistics, neuronal circuits implementing predictive coding must also rapidly adapt their properties. Experimentally, in different sensory modalities, sensory neurons have shown such adaptations within 100 ms of an input change. Here, we show first that linear neurons connected in a feedback inhibitory circuit can implement predictive coding. We then show that adding a rectification nonlinearity to such a feedback inhibitory circuit allows it to automatically adapt and approximate the performance of an optimal linear predictive coding network, over a wide range of inputs, while keeping its underlying temporal and synaptic properties unchanged. We demonstrate that the resulting changes to the linearized temporal filters of this nonlinear network match the fast adaptations observed experimentally in different sensory modalities, in different vertebrate species. Therefore, the nonlinear feedback inhibitory network can provide automatic adaptation to fast varying signals, maintaining the dynamic range necessary for accurate neuronal transmission of natural inputs. PMID:26247884
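
    The dynamic-range benefit of predictive coding can be seen with the simplest one-step linear predictor: transmit only the residual between each sample and its prediction. This illustrates the coding principle only, not the paper's feedback inhibitory circuit or its rectification nonlinearity:

```python
def predictive_code(signal):
    """Predict each sample by the previous one and keep only the
    residual.  For a correlated signal the residuals span a much
    smaller range than the raw values."""
    residuals = [signal[0]]  # first sample transmitted as-is
    for prev, cur in zip(signal, signal[1:]):
        residuals.append(cur - prev)
    return residuals

def decode(residuals):
    """Invert the coding by cumulative summation (lossless)."""
    out, total = [], 0
    for r in residuals:
        total += r
        out.append(total)
    return out

ramp = list(range(100, 200))          # slowly varying, highly correlated
coded = predictive_code(ramp)
# residuals after the first sample are all 1, and decoding is exact
print(max(coded[1:]), decode(coded) == ramp)  # → 1 True
```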

  17. ExcelAutomat: a tool for systematic processing of files as applied to quantum chemical calculations

    NASA Astrophysics Data System (ADS)

    Laloo, Jalal Z. A.; Laloo, Nassirah; Rhyman, Lydia; Ramasami, Ponnadurai

    2017-07-01

    The processing of the input and output files of quantum chemical calculations often necessitates a spreadsheet as a key component of the workflow. Spreadsheet packages with a built-in programming language editor can automate the steps involved and thus provide a direct link between processing files and the spreadsheet. This helps to reduce user-interventions as well as the need to switch between different programs to carry out each step. The ExcelAutomat tool is the implementation of this method in Microsoft Excel (MS Excel) using the default Visual Basic for Application (VBA) programming language. The code in ExcelAutomat was adapted to work with the platform-independent open-source LibreOffice Calc, which also supports VBA. ExcelAutomat provides an interface through the spreadsheet to automate repetitive tasks such as merging input files, splitting, parsing and compiling data from output files, and generation of unique filenames. Selected extracted parameters can be retrieved as variables which can be included in custom codes for a tailored approach. ExcelAutomat works with Gaussian files and is adapted for use with other computational packages including the non-commercial GAMESS. ExcelAutomat is available as a downloadable MS Excel workbook or as a LibreOffice workbook.

  18. ExcelAutomat: a tool for systematic processing of files as applied to quantum chemical calculations.

    PubMed

    Laloo, Jalal Z A; Laloo, Nassirah; Rhyman, Lydia; Ramasami, Ponnadurai

    2017-07-01

    The processing of the input and output files of quantum chemical calculations often necessitates a spreadsheet as a key component of the workflow. Spreadsheet packages with a built-in programming language editor can automate the steps involved and thus provide a direct link between processing files and the spreadsheet. This helps to reduce user-interventions as well as the need to switch between different programs to carry out each step. The ExcelAutomat tool is the implementation of this method in Microsoft Excel (MS Excel) using the default Visual Basic for Application (VBA) programming language. The code in ExcelAutomat was adapted to work with the platform-independent open-source LibreOffice Calc, which also supports VBA. ExcelAutomat provides an interface through the spreadsheet to automate repetitive tasks such as merging input files, splitting, parsing and compiling data from output files, and generation of unique filenames. Selected extracted parameters can be retrieved as variables which can be included in custom codes for a tailored approach. ExcelAutomat works with Gaussian files and is adapted for use with other computational packages including the non-commercial GAMESS. ExcelAutomat is available as a downloadable MS Excel workbook or as a LibreOffice workbook.

  19. FIREFLY LUCIFERASE ATP ASSAY DEVELOPMENT FOR MONITORING BACTERIAL CONCENTRATIONS IN WATER SUPPLIES

    EPA Science Inventory

    This research program was initiated to develop a rapid, automatable system for measuring total viable microorganisms in potable drinking water supplies using the firefly luciferase ATP assay. The assay was adapted to an automatable flow system that provided comparable sensitivity...

  20. A design of LED adaptive dimming lighting system based on incremental PID controller

    NASA Astrophysics Data System (ADS)

    He, Xiangyan; Xiao, Zexin; He, Shaojia

    2010-11-01

    As a new-generation energy-saving lighting source, LED is applied widely in various technology and industry fields, and the requirements on its adaptive lighting technology are increasingly rigorous, especially in automatic on-line detection systems. In this paper, a closed-loop feedback LED adaptive dimming system based on an incremental PID controller is designed, consisting of a MEGA16 micro-controller unit (MCU), the BH1750 ambient light sensor chip with an Inter-Integrated Circuit (I2C) interface, and a constant-current driving circuit. A given value of the light intensity required for the on-line detection environment is saved to a register of the MCU. The optical intensity, detected by the BH1750 chip in real time, is converted to a digital signal by the chip's AD converter and then transmitted to the MEGA16 chip through the I2C serial bus. Since the variation law of light intensity in the on-line detection environment is usually difficult to establish, an incremental Proportional-Integral-Differential (PID) algorithm is applied in this system. The control variable obtained by the incremental PID determines the duty cycle of the Pulse-Width Modulation (PWM); consequently, the LED's forward current is adjusted by PWM and the luminous intensity of the detection environment is stabilized adaptively. The coefficients of the incremental PID were obtained experimentally. Compared with traditional LED dimming systems, this design offers anti-interference, simple construction, fast response, and high stability through the use of the incremental PID algorithm and the BH1750 chip with the I2C serial bus. Therefore, it is suitable for adaptive on-line detection applications.
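
    The incremental (velocity-form) PID update described above can be sketched as follows. The gains and the toy linear plant standing in for the LED and light sensor are illustrative assumptions, not the values tuned in the paper:

```python
class IncrementalPID:
    """Incremental (velocity-form) PID: each step yields an increment
    du = Kp*(e_k - e_{k-1}) + Ki*e_k + Kd*(e_k - 2*e_{k-1} + e_{k-2}),
    which is accumulated into the PWM duty cycle by the caller."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.e1 = 0.0  # e_{k-1}
        self.e2 = 0.0  # e_{k-2}

    def step(self, error):
        du = (self.kp * (error - self.e1)
              + self.ki * error
              + self.kd * (error - 2 * self.e1 + self.e2))
        self.e2, self.e1 = self.e1, error
        return du

pid = IncrementalPID(kp=0.5, ki=0.1, kd=0.05)  # illustrative gains
duty = 0.0
setpoint, lux = 300.0, 200.0
for _ in range(200):            # toy plant: lux rises linearly with duty
    duty += pid.step(setpoint - lux)
    lux = 200.0 + duty          # assumed plant gain of 1 lux per duty unit
print(abs(setpoint - lux) < 1.0)  # → True (closed loop settles on target)
```

    The velocity form is convenient on small MCUs: it needs only the last two errors, and accumulating increments avoids integral windup in the stored sum.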

  1. A Numerical Study of Three Moving-Grid Methods for One-Dimensional Partial Differential Equations Which Are Based on the Method of Lines

    NASA Astrophysics Data System (ADS)

    Furzeland, R. M.; Verwer, J. G.; Zegeling, P. A.

    1990-08-01

    In recent years, several sophisticated packages based on the method of lines (MOL) have been developed for the automatic numerical integration of time-dependent problems in partial differential equations (PDEs), notably for problems in one space dimension. These packages greatly benefit from the very successful developments of automatic stiff ordinary differential equation solvers. However, from the PDE point of view, they integrate only in a semiautomatic way in the sense that they automatically adjust the time step sizes, but use just a fixed space grid, chosen a priori, for the entire calculation. For solutions possessing sharp spatial transitions that move, e.g., travelling wave fronts or emerging boundary and interior layers, a grid held fixed for the entire calculation is computationally inefficient, since for a good solution this grid often must contain a very large number of nodes. In such cases methods which attempt automatically to adjust the sizes of both the space and the time steps are likely to be more successful in efficiently resolving critical regions of high spatial and temporal activity. Methods and codes that operate this way belong to the realm of adaptive or moving-grid methods. Following the MOL approach, this paper is devoted to an evaluation and comparison, mainly based on extensive numerical tests, of three moving-grid methods for 1D problems, viz., the finite-element method of Miller and co-workers, the method published by Petzold, and a method based on ideas adopted from Dorfi and Drury. Our examination of these three methods is aimed at assessing which is the most suitable from the point of view of retaining the acknowledged features of reliability, robustness, and efficiency of the conventional MOL approach. Therefore, considerable attention is paid to the temporal performance of the methods.
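
As a minimal illustration of the conventional MOL approach the paper evaluates (a fixed spatial grid, with time integration of the resulting ODE system), here is a sketch for the 1D heat equation; the grid size, time step, and backward-Euler integrator are arbitrary choices, not taken from the paper.

```python
import numpy as np

# Method of lines: semi-discretise u_t = u_xx on a FIXED spatial grid,
# then integrate the ODE system du/dt = A u in time (backward Euler here).
n = 51
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
A = (np.diag(np.ones(n - 1), 1) - 2 * np.eye(n) + np.diag(np.ones(n - 1), -1)) / dx**2
A[0, :] = 0.0
A[-1, :] = 0.0                          # homogeneous Dirichlet boundaries

u = np.sin(np.pi * x)                   # initial profile, u = 0 at both ends
dt = 1e-3
M = np.eye(n) - dt * A                  # backward Euler system matrix
for _ in range(100):                    # integrate to t = 0.1
    u = np.linalg.solve(M, u)

# Exact solution of the heat equation: exp(-pi^2 t) * sin(pi x)
exact = np.exp(-np.pi**2 * 0.1) * np.sin(np.pi * x)
err = np.max(np.abs(u - exact))
print(err < 0.01)  # True: agrees with the exact solution to within 0.01
```

For a smooth solution like this, the fixed grid is adequate; the moving-grid methods compared in the paper address the case where a steep front would force such a fixed grid to carry far more nodes.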

  2. Optimal and adaptive methods of processing hydroacoustic signals (review)

    NASA Astrophysics Data System (ADS)

    Malyshkin, G. S.; Sidel'nikov, G. B.

    2014-09-01

    Different methods of optimal and adaptive processing of hydroacoustic signals under multipath propagation and scattering are considered. Advantages and drawbacks of the classical adaptive (Capon, MUSIC, and Johnson) algorithms and "fast" projection algorithms are analyzed for the case of multipath propagation and scattering of strong signals. The classical optimal approaches to detecting multipath signals are presented. A mechanism of controlled normalization of strong signals is proposed to automatically detect weak signals. The results of simulating the operation of different detection algorithms for a linear equidistant array under multipath propagation and scattering are presented. An automatic detector based on classical or fast projection algorithms is analyzed, which estimates the background using either median filtering or the method of bilateral spatial contrast.
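
A minimal sketch of the classical Capon (MVDR) spatial spectrum for a linear equidistant array, one of the algorithms reviewed above; the array geometry, source direction, and SNR are invented for illustration.

```python
import numpy as np

# Capon spatial spectrum: P(theta) = 1 / (a^H R^-1 a).
rng = np.random.default_rng(0)
m, snapshots = 8, 400                   # sensors, time samples
d = 0.5                                 # element spacing in wavelengths

def steering(theta_deg):
    k = 2 * np.pi * d * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * k * np.arange(m))

# One strong source at +20 degrees plus white sensor noise.
s = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
X = np.outer(steering(20.0), 5.0 * s)
X += 0.5 * (rng.standard_normal((m, snapshots))
            + 1j * rng.standard_normal((m, snapshots)))

R = X @ X.conj().T / snapshots          # sample covariance matrix
Rinv = np.linalg.inv(R)

grid = np.arange(-90.0, 90.5, 0.5)
p = np.array([1.0 / np.real(steering(t).conj() @ Rinv @ steering(t))
              for t in grid])
print(grid[np.argmax(p)])               # spectrum peak, near +20 degrees
```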

  3. Identification of suitable fundus images using automated quality assessment methods.

    PubMed

    Şevik, Uğur; Köse, Cemal; Berber, Tolga; Erdöl, Hidayet

    2014-04-01

    Retinal image quality assessment (IQA) is a crucial process for automated retinal image analysis systems to obtain an accurate and successful diagnosis of retinal diseases. Consequently, the first step in a good retinal image analysis system is measuring the quality of the input image. We present an approach for finding medically suitable retinal images for retinal diagnosis. We used a three-class grading system that consists of good, bad, and outlier classes. We created a retinal image quality dataset with a total of 216 consecutive images called the Diabetic Retinopathy Image Database. We identified the suitable images within the good images for automatic retinal image analysis systems using a novel method. Subsequently, we evaluated our retinal image suitability approach using the Digital Retinal Images for Vessel Extraction and Standard Diabetic Retinopathy Database Calibration level 1 public datasets. The results were measured through the F1 metric, which is a harmonic mean of precision and recall metrics. The highest F1 scores of the IQA tests were 99.60%, 96.50%, and 85.00% for good, bad, and outlier classes, respectively. Additionally, the accuracy of our suitable image detection approach was 98.08%. Our approach can be integrated into any automatic retinal analysis system with sufficient performance scores.
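
The F1 metric used above is the harmonic mean of precision and recall; a small sketch (the counts below are made up):

```python
# F1 from true positives, false positives, and false negatives.
def f1_score(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# e.g. 96 true positives, 2 false positives, 6 false negatives
print(round(f1_score(96, 2, 6), 4))  # 0.96
```

Equivalently, F1 = 2*TP / (2*TP + FP + FN), which makes it insensitive to the (often large) count of true negatives.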

  4. Apparatus enables automatic microanalysis of body fluids

    NASA Technical Reports Server (NTRS)

    Soffen, G. A.; Stuart, J. L.

    1966-01-01

    Apparatus will automatically and quantitatively determine body fluid constituents which are amenable to analysis by fluorometry or colorimetry. The results of the tests are displayed as percentages of full scale deflection on a strip-chart recorder. The apparatus can also be adapted for microanalysis of various other fluids.

  5. Automatic reference selection for quantitative EEG interpretation: identification of diffuse/localised activity and the active earlobe reference, iterative detection of the distribution of EEG rhythms.

    PubMed

    Wang, Bei; Wang, Xingyu; Ikeda, Akio; Nagamine, Takashi; Shibasaki, Hiroshi; Nakamura, Masatoshi

    2014-01-01

    EEG (electroencephalogram) interpretation is important for the diagnosis of neurological disorders. Proper adjustment of the montage can highlight the EEG rhythm of interest and avoid false interpretation. The aim of this study was to develop an automatic reference selection method to identify a suitable reference; the results may contribute to accurate inspection of the distribution of EEG rhythms for quantitative EEG interpretation. The method includes two pre-judgements and one iterative detection module. The diffuse case is initially identified by pre-judgement 1 when intermittent rhythmic waveforms occur over large areas of the scalp. For the diffuse case, either the earlobe reference or the averaged reference is adopted, depending on pre-judgement 2's assessment of whether the earlobe reference is active. An iterative detection algorithm is developed for the localised case, when the signal is distributed in a small area of the brain. The suitable averaged reference is finally determined based on the detected focal and distributed electrodes. The technique was applied to the pathological EEG recordings of nine patients. One example of the diffuse case is introduced by illustrating the results of the pre-judgements: the diffusely intermittent rhythmic slow wave is identified and the effect of the active earlobe reference is analysed. Two examples of the localised case are presented, indicating the results of the iterative detection module; the focal and distributed electrodes are detected automatically during the iterations. The identification of diffuse and localised activity was satisfactory compared with visual inspection. The EEG rhythm of interest can be highlighted using a suitably selected reference, and an automatic reference selection method helps detect the distribution of an EEG rhythm, improving the accuracy of EEG interpretation during both visual inspection and automatic interpretation.
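
A minimal sketch of the averaged (common average) reference mentioned above; the channel count and data are synthetic.

```python
import numpy as np

# Re-reference a multichannel EEG segment to the common average reference,
# as done when the earlobe reference is judged "active".
rng = np.random.default_rng(1)
eeg = rng.standard_normal((19, 1000))   # 19 channels x 1000 samples

avg_ref = eeg.mean(axis=0)              # common average reference signal
eeg_car = eeg - avg_ref                 # subtract it from every channel

# After average referencing, the across-channel mean is numerically zero.
print(np.allclose(eeg_car.mean(axis=0), 0.0))  # True
```

The paper's refinement is to choose *which* electrodes enter that average, excluding detected focal electrodes so the reference itself stays neutral.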

  6. SU-C-202-03: A Tool for Automatic Calculation of Delivered Dose Variation for Off-Line Adaptive Therapy Using Cone Beam CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, B; Lee, S; Chen, S

    Purpose: Monitoring the delivered dose is an important task for adaptive radiotherapy (ART) and for determining when to re-plan. A software tool that enables automatic delivered-dose calculation using cone-beam CT (CBCT) has been developed and tested. Methods: The tool consists of four components: a CBCT Collecting Module (CCM), a Plan Registration Module (PRM), a Dose Calculation Module (DCM), and an Evaluation and Action Module (EAM). The CCM is triggered periodically (e.g. every day at 1:00 AM) to search for newly acquired CBCTs of patients of interest and then exports the DICOM files of the images and the related registrations defined in ARIA, after which it triggers the PRM. The PRM imports the DICOM images and registrations and links the CBCTs to the patient's treatment plan in the planning system (RayStation V4.5, RaySearch, Stockholm, Sweden). A pre-determined CT-to-density table is automatically applied for dose calculation. The current version of the DCM uses a rigid registration that regards the treatment isocenter of the CBCT as the isocenter of the treatment plan, and then starts the dose calculation automatically. The EAM evaluates the plan using pre-determined plan evaluation parameters: PTV dose-volume metrics and critical organ doses. The tool has been tested on 10 patients. Results: Plans are generated and saved automatically, in order of treatment date, in the Adaptive Planning module of the RayStation planning system, without any manual intervention. Once the CTV dose deviates by more than 3%, both email and page alerts are sent to the patient's physician and physicist so that the case can be examined closely. Conclusion: The tool is capable of performing automatic dose tracking and of alerting clinicians when an action is needed. It is clinically useful for off-line adaptive therapy to catch any gross error. A practical way of determining the alarm level for OARs is under development.
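
The EAM's alert rule can be sketched as below; the function name and dose values are hypothetical, and only the 3% CTV-deviation threshold comes from the abstract.

```python
# Flag a case for review when the recalculated CTV dose deviates from the
# planned dose by more than 3%. Alerting mechanics (email/page) are stubbed.
def needs_review(planned_ctv_dose, delivered_ctv_dose, tolerance=0.03):
    deviation = abs(delivered_ctv_dose - planned_ctv_dose) / planned_ctv_dose
    return deviation > tolerance

print(needs_review(70.0, 67.5))  # 3.57% deviation -> True
print(needs_review(70.0, 69.0))  # 1.43% deviation -> False
```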

  7. Managing hardwood-softwood mixtures for future forests in eastern North America: assessing suitability to projected climate change

    Treesearch

    John M. Kabrick; Kenneth L. Clark; Anthony W. D'Amato; Daniel C. Dey; Laura S. Kenefic; Christel C. Kern; Benjamin O. Knapp; David A. MacLean; Patricia Raymond; Justin D. Waskiewicz

    2017-01-01

    Despite growing interest in management strategies for climate change adaptation, there are few methods for assessing the ability of stands to endure or adapt to projected future climates. We developed a means for assigning climate "Compatibility" and "Adaptability" scores to stands for assessing the suitability of tree species for projected climate...

  8. Automatic Training of Rat Cyborgs for Navigation.

    PubMed

    Yu, Yipeng; Wu, Zhaohui; Xu, Kedi; Gong, Yongyue; Zheng, Nenggan; Zheng, Xiaoxiang; Pan, Gang

    2016-01-01

    A rat cyborg system refers to a biological rat implanted with microelectrodes in its brain, via which external electrical stimuli can be delivered into the brain in vivo to control its behaviors. Rat cyborgs have various applications in emergencies, such as search and rescue in disasters. Prior to a rat cyborg becoming controllable, a lot of effort is required to train it to adapt to the electrical stimuli. In this paper, we build a vision-based automatic training system for rat cyborgs to replace the time-consuming manual training procedure. A hierarchical framework is proposed to facilitate the colearning between rats and machines. In the framework, the behavioral states of a rat cyborg are visually sensed by a camera, a parameterized state machine is employed to model the training action transitions triggered by the rat's behavioral states, and an adaptive adjustment policy is developed to adaptively adjust the stimulation intensity. The experimental results of three rat cyborgs prove the effectiveness of our system. To the best of our knowledge, this study is the first to tackle automatic training of animal cyborgs.
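
A hedged sketch of a parameterized state machine for training-action transitions with adaptive intensity adjustment; the states, behaviours, and adjustment rule below are invented, not the paper's actual protocol.

```python
# Behavioural states (from the vision module) trigger training actions;
# the stimulation intensity adapts to the rat's responsiveness.
TRANSITIONS = {
    ("waiting", "moved_forward"): "reward",
    ("waiting", "stopped"): "prompt_left",
    ("prompting", "turned_left"): "reward",
    ("prompting", "no_response"): "increase_intensity",
}

def train_step(state, observed_behavior, intensity):
    action = TRANSITIONS.get((state, observed_behavior), "wait")
    if action == "increase_intensity":
        intensity = min(intensity + 1, 10)   # adaptive adjustment, capped
    elif action == "reward":
        intensity = max(intensity - 1, 1)    # back off when responsive
    return action, intensity

action, intensity = train_step("prompting", "no_response", 5)
print(action, intensity)  # increase_intensity 6
```

Parameterizing the transition table and the adjustment bounds is what lets the same machinery be tuned per animal without rewriting the training loop.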

  10. Spectral saliency via automatic adaptive amplitude spectrum analysis

    NASA Astrophysics Data System (ADS)

    Wang, Xiaodong; Dai, Jialun; Zhu, Yafei; Zheng, Haiyong; Qiao, Xiaoyan

    2016-03-01

    Suppressing nonsalient patterns by smoothing the amplitude spectrum at an appropriate scale has been shown to effectively detect visual saliency in the frequency domain, but different filter scales are required for different types of salient objects. We observe that the optimal scale for smoothing the amplitude spectrum shares a specific relation with the size of the salient region. Based on this observation, and on bottom-up saliency detection characterized by spectrum scale-space analysis of natural images, we propose to detect visual saliency, especially for salient objects of different sizes and locations, via automatic adaptive amplitude spectrum analysis. We not only provide a new criterion for automatic optimal scale selection but also retain the saliency maps corresponding to different salient objects, preserving meaningful saliency information through adaptive weighted combination. Quantitative and qualitative comparisons are evaluated with three kinds of metrics on the four most widely used datasets and one recent large-scale dataset. The experimental results validate that our method outperforms existing state-of-the-art saliency models for predicting human eye fixations in terms of accuracy and robustness.

  11. Logs Analysis of Adapted Pedagogical Scenarios Generated by a Simulation Serious Game Architecture

    ERIC Educational Resources Information Center

    Callies, Sophie; Gravel, Mathieu; Beaudry, Eric; Basque, Josianne

    2017-01-01

    This paper presents an architecture designed for simulation serious games, which automatically generates game-based scenarios adapted to learner's learning progression. We present three central modules of the architecture: (1) the learner model, (2) the adaptation module and (3) the logs module. The learner model estimates the progression of the…

  12. Unsupervised MDP Value Selection for Automating ITS Capabilities

    ERIC Educational Resources Information Center

    Stamper, John; Barnes, Tiffany

    2009-01-01

    We seek to simplify the creation of intelligent tutors by using student data acquired from standard computer aided instruction (CAI) in conjunction with educational data mining methods to automatically generate adaptive hints. In our previous work, we have automatically generated hints for logic tutoring by constructing a Markov Decision Process…
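
A toy sketch of the MDP machinery behind such hint generation: value iteration over a state graph of partial solutions, with a hint pointing to the highest-value successor. The graph and rewards below are invented.

```python
GAMMA = 0.9
# state -> list of (next_state, reward); "goal" is the completed solution
GRAPH = {
    "start": [("step_a", 0.0), ("step_b", 0.0)],
    "step_a": [("goal", 1.0)],
    "step_b": [("step_a", 0.0)],
    "goal": [],
}

def value_iteration(graph, sweeps=50):
    """Deterministic value iteration: V(s) = max over successors."""
    v = {s: 0.0 for s in graph}
    for _ in range(sweeps):
        for s, succs in graph.items():
            if succs:
                v[s] = max(r + GAMMA * v[t] for t, r in succs)
    return v

def best_hint(state, graph, v):
    """Hint = the successor state with the highest backed-up value."""
    return max(graph[state], key=lambda tr: tr[1] + GAMMA * v[tr[0]])[0]

v = value_iteration(GRAPH)
print(best_hint("start", GRAPH, v))  # step_a
```

In the actual approach, the transition graph and rewards are estimated from logged student data rather than written by hand.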

  13. Candida albicans Germ-Tube Antibody: Evaluation of a New Automatic Assay for Diagnosing Invasive Candidiasis in ICU Patients.

    PubMed

    Parra-Sánchez, Manuel; Zakariya-Yousef Breval, Ismail; Castro Méndez, Carmen; García-Rey, Silvia; Loza Vazquez, Ana; Úbeda Iglesias, Alejandro; Macías Guerrero, Desiree; Romero Mejías, Ana; León Gil, Cristobal; Martín-Mazuelos, Estrella

    2017-08-01

    Testing for the Candida albicans germ-tube antibody by IFA IgG assay (CAGTA) is used to detect invasive candidiasis. However, most available assays lack automation and rapid single-sample testing. The CAGTA assay was therefore adapted to an automatic monotest system, the invasive candidiasis (CAGTA) VirClia® IgG monotest: a chemiluminescence assay with ready-to-use reagents that provides a rapid, objective result. The CAGTA assay was compared with the automatic VirClia® monotest in order to establish the diagnostic reliability, accuracy, and usefulness of the method. A prospective study of 361 samples from 179 non-neutropenic, critically ill adult patients was conducted, including 21 patients with candidemia, 18 with intra-abdominal candidiasis, 84 with Candida spp. colonization, and 56 with culture-negative samples, as well as samples from ten healthy subjects. Overall agreement between the two assays was 85.3%. Both assays were compared with the gold-standard method to determine sensitivity, specificity, and positive and negative predictive values. In patients with candidemia, the values for the CAGTA and VirClia® assays were 76.2 versus 85.7%, 80.3 versus 75.8%, 55.2 versus 52.9%, and 91.4 versus 94.3%, respectively. The corresponding values in patients with intra-abdominal candidiasis were 61.1 versus 66.7%, 80.3 versus 75.8%, 45.8 versus 42.9%, and 88.3 versus 89.3%, respectively. No differences were found according to the species of Candida isolated in culture, except for Candida albicans and C. parapsilosis, for which VirClia® performed better than CAGTA. According to these results, the automated VirClia® assay is a reliable, rapid, and very easy to perform technique as a tool for the diagnosis of invasive candidiasis.

  14. Design and implementation of monitoring and evaluation of healthcare organization management

    NASA Astrophysics Data System (ADS)

    Charalampos, Platis; Emmanouil, Zoulias; Dimitrios, Iracleous; Lappa, Evaggelia

    2017-09-01

    The management of a healthcare organization is monitored using a suitably designed questionnaire administered to 271 nurses working in a Greek hospital. The data are fed to an automatic data mining system to obtain a series of models with which to analyse, visualise and study the obtained information. Hidden patterns, correlations and interdependencies are investigated and the results are presented analytically.

  15. A posteriori error determination and grid adaptation for AMR and ALE computational fluid dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lapenta, G. M.

    2002-01-01

    We discuss grid adaptation for application to AMR and ALE codes. Two new contributions are presented. First, a new method to locate the regions where truncation error is being created due to insufficient accuracy: the operator recovery error origin (OREO) detector. The OREO detector is automatic, reliable, easy to implement and extremely inexpensive. Second, a new grid motion technique is presented for application to ALE codes. The method is based on the Brackbill-Saltzman approach, but it is directly linked to the OREO detector and moves the grid automatically to minimize the error.
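
A 1-D analogue of error-driven grid motion can be sketched by equidistributing a monitor function (standing in here for an error indicator such as OREO); the arc-length monitor and the test profile are illustrative, not from the paper.

```python
import numpy as np

def equidistribute(x, monitor, n_new=None):
    """Place nodes so each cell holds an equal share of the monitor integral."""
    n_new = n_new or len(x)
    w = 0.5 * (monitor[:-1] + monitor[1:]) * np.diff(x)   # trapezoid cell weights
    cum = np.concatenate([[0.0], np.cumsum(w)])           # cumulative integral
    targets = np.linspace(0.0, cum[-1], n_new)
    return np.interp(targets, cum, x)

x = np.linspace(0.0, 1.0, 41)
u = np.tanh(50 * (x - 0.5))                      # steep front at x = 0.5
monitor = np.sqrt(1.0 + np.gradient(u, x) ** 2)  # arc-length monitor
x_new = equidistribute(x, monitor)

# Nodes cluster where the monitor is large: the smallest cell sits near
# the front at x = 0.5.
print(x_new[np.argmin(np.diff(x_new))])
```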

  16. The design of digital-adaptive controllers for VTOL aircraft

    NASA Technical Reports Server (NTRS)

    Stengel, R. F.; Broussard, J. R.; Berry, P. W.

    1976-01-01

    Design procedures for VTOL automatic control systems have been developed and are presented. Using linear-optimal estimation and control techniques as a starting point, digital-adaptive control laws have been designed for the VALT Research Aircraft, a tandem-rotor helicopter which is equipped for fully automatic flight in terminal area operations. These control laws are designed to interface with velocity-command and attitude-command guidance logic, which could be used in short-haul VTOL operations. Developments reported here include new algorithms for designing non-zero-set-point digital regulators, design procedures for rate-limited systems, and algorithms for dynamic control trim setting.
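
As a hedged sketch of the linear-optimal starting point mentioned above (not the VALT design itself), a discrete-time LQR gain can be computed by backward Riccati iteration; the double-integrator model and the weights are illustrative.

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete LQR gain via fixed-point iteration of the Riccati equation."""
    P = Q.copy()
    for _ in range(iters):
        BT_P = B.T @ P
        K = np.linalg.solve(R + BT_P @ B, BT_P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])   # position/velocity kinematics
B = np.array([[0.0], [dt]])
K = dlqr(A, B, Q=np.eye(2), R=np.array([[1.0]]))

# Closed-loop poles must lie inside the unit circle.
poles = np.linalg.eigvals(A - B @ K)
print(np.all(np.abs(poles) < 1.0))  # True: closed loop is stable
```

A non-zero-set-point regulator, as in the paper, wraps such a gain around trim values so the commanded state, not the origin, is the equilibrium.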

  17. An 8-node tetrahedral finite element suitable for explicit transient dynamic simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Key, S.W.; Heinstein, M.W.; Stone, C.M.

    1997-12-31

    Considerable effort has been expended in perfecting the algorithmic properties of 8-node hexahedral finite elements. Today the element is well understood and performs exceptionally well when used in modeling three-dimensional explicit transient dynamic events. However, the automatic generation of all-hexahedral meshes remains an elusive achievement. The alternative, the automatically meshable 4-node tetrahedral finite element, is a notoriously poor performer, and the 10-node quadratic tetrahedral finite element, while numerically a better performer, is computationally expensive. To use the all-tetrahedral mesh generation available today, the authors have explored the creation of a quality 8-node tetrahedral finite element (a 4-node tetrahedral finite element enriched with four midface nodal points). The derivation of the element's gradient operator, studies in obtaining a suitable mass lumping, and the element's performance in applications are presented. In particular, they examine the 8-node tetrahedral finite element's behavior in longitudinal plane wave propagation, in transverse cylindrical wave propagation, and in simulating Taylor bar impacts. The element samples only constant strain states and, therefore, has 12 hourglass modes. In this regard, it bears similarities to the 8-node, mean-quadrature hexahedral finite element. Given automatic all-tetrahedral meshing, the 8-node, constant-strain tetrahedral finite element is a suitable replacement for the 8-node hexahedral finite element and hand-built meshes.
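
For illustration, here is the simplest mass-lumping scheme (row-sum lumping) applied to the consistent mass matrix of a linear 4-node tetrahedron; the paper's 8-node element requires a more careful lumping study, and the density and volume below are arbitrary.

```python
import numpy as np

# Consistent mass matrix of a linear tet: M_ij = rho*V/20 * (1 + delta_ij),
# i.e. rho*V/10 on the diagonal and rho*V/20 off-diagonal.
rho, V = 1.0, 6.0
M = rho * V / 20.0 * (np.ones((4, 4)) + np.eye(4))

M_lumped = np.diag(M.sum(axis=1))       # row-sum lumping

# Lumping preserves total mass: the diagonal entries sum to rho*V.
print(np.isclose(M_lumped.trace(), rho * V))  # True
```

A diagonal (lumped) mass matrix is what makes explicit transient dynamics cheap: each nodal acceleration is obtained by a scalar division instead of a matrix solve.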

  18. Multiple-Diode-Laser Gas-Detection Spectrometer

    NASA Technical Reports Server (NTRS)

    Webster, Christopher R.; Beer, Reinhard; Sander, Stanley P.

    1988-01-01

    Small concentrations of selected gases measured automatically. Proposed multiple-laser-diode spectrometer is part of a system for automatically measuring concentrations of selected gases at the part-per-billion level. Array of laser/photodetector pairs measures infrared absorption spectrum of atmosphere along probing laser beams. Adaptable to terrestrial uses such as pollution monitoring or control of industrial processes.

  19. Automatic Data Processing, 4-1. Military Curriculum Materials for Vocational and Technical Education.

    ERIC Educational Resources Information Center

    Army Ordnance Center and School, Aberdeen Proving Ground, MD.

    These two texts and student workbook for a secondary/postsecondary-level correspondence course in automatic data processing comprise one of a number of military-developed curriculum packages selected for adaptation to vocational instruction and curriculum development in a civilian setting. The purpose stated for the individualized, self-paced…

  20. Speaker-Machine Interaction in Automatic Speech Recognition. Technical Report.

    ERIC Educational Resources Information Center

    Makhoul, John I.

    The feasibility and limitations of speaker adaptation in improving the performance of a "fixed" (speaker-independent) automatic speech recognition system were examined. A fixed vocabulary of 55 syllables is used in the recognition system which contains 11 stops and fricatives and five tense vowels. The results of an experiment on speaker…

  1. The Automatic Sweetheart: An Assignment in a History of Psychology Course

    ERIC Educational Resources Information Center

    Sibicky, Mark E.

    2007-01-01

    This article describes an assignment in a History of Psychology course used to enhance student retention of material and increase student interest and discussion of the long-standing debate between humanistic and mechanistic models in psychology. Adapted from William James's (1955) automatic sweetheart question, the assignment asks students to…

  2. Text Structuration Leading to an Automatic Summary System: RAFI.

    ERIC Educational Resources Information Center

    Lehman, Abderrafih

    1999-01-01

    Describes the design and construction of Resume Automatique a Fragments Indicateurs (RAFI), a system of automatic text summary which sums up scientific and technical texts. The RAFI system transforms a long source text into several versions of more condensed texts, using discourse analysis, to make searching easier; it could be adapted to the…

  3. A taxonomy of explanations in a general practitioner clinic for patients with persistent "medically unexplained" physical symptoms.

    PubMed

    Morton, LaKrista; Elliott, Alison; Cleland, Jennifer; Deary, Vincent; Burton, Christopher

    2017-02-01

    To develop a taxonomy of explanations for patients with persistent physical symptoms, we analysed doctors' explanations from two studies of a moderately intensive consultation intervention for patients with multiple, often "medically unexplained," physical symptoms. We used a constant comparative method to develop a taxonomy, which was then applied to all verbatim explanations. We analysed 138 explanations provided by five general practitioners to 38 patients. The taxonomy comprised explanation types and explanation components. Three explanation types described the overall structure of the explanations: Rational Adaptive, Automatic Adaptive, and Complex. These differed in terms of who or what was given agency within the explanation. Three explanation components described the content of the explanation: Facts - generic statements about normal or dysfunctional processes; Causes - person-specific statements about proximal or distal causes of symptoms; Mechanisms - processes by which symptoms arise or persist in the individual. Most explanations conformed to one type and contained several components. This novel taxonomy permits detailed classification of explanation types and content. Explanation types appear to carry different implications of agency. The taxonomy is suitable for examining explanations and developing prototype explanatory scripts in both training and research settings.

  4. Evaluation of three automatic oxygen therapy control algorithms on ventilated low birth weight neonates.

    PubMed

    Morozoff, Edmund P; Smyth, John A

    2009-01-01

    Neonates with underdeveloped lungs often require oxygen therapy. During the course of oxygen therapy, elevated levels of blood oxygenation (hyperoxemia) must be avoided or the risk of chronic lung disease or retinal damage is increased; low levels of blood oxygen (hypoxemia) may lead to permanent brain tissue damage and, in some cases, mortality. A closed-loop controller that automatically administers oxygen therapy using three algorithms - state machine, adaptive model, and proportional-integral-derivative (PID) - was applied to seven ventilated low birth weight neonates and compared to manual oxygen therapy. All three automatic control algorithms demonstrated their ability to improve on manual oxygen therapy by increasing periods of normoxemia and reducing the need for manual FiO2 adjustments. Of the three control algorithms, the adaptive model showed the best performance, with 0.25 manual adjustments per hour and 73% of time spent within the target SpO2 +/- 3%.

  5. Comparison of automatic and visual methods used for image segmentation in Endodontics: a microCT study.

    PubMed

    Queiroz, Polyane Mazucatto; Rovaris, Karla; Santaella, Gustavo Machado; Haiter-Neto, Francisco; Freitas, Deborah Queiroz

    2017-01-01

    To calculate root canal volume and surface area in microCT images, image segmentation by selecting threshold values is required; the threshold can be determined by visual or automatic methods. Visual determination is influenced by the operator's visual acuity, while the automatic method is done entirely by computer algorithms. The aims were to compare visual and automatic segmentation, and to determine the influence of the operator's visual acuity on the reproducibility of root canal volume and area measurements. Images from 31 extracted human anterior teeth were scanned with a μCT scanner. Three experienced examiners performed visual image segmentation, and threshold values were recorded. Automatic segmentation was done using the "Automatic Threshold Tool" available in the dedicated software provided by the scanner's manufacturer. Volume and area measurements were performed using the threshold values determined both visually and automatically. The paired Student's t-test showed no significant difference between the visual and automatic segmentation methods regarding root canal volume (p=0.93) and root canal surface area (p=0.79). Although both visual and automatic segmentation methods can be used to determine the threshold and calculate root canal volume and surface area, the automatic method may be the most suitable for ensuring the reproducibility of threshold determination.
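
The study's "Automatic Threshold Tool" is vendor software whose algorithm is not described; Otsu's method is a standard stand-in for automatic threshold selection and can be sketched as follows (the bimodal toy data are invented).

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Pick the threshold maximizing between-class variance (Otsu's method)."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                       # class-0 probability
    mu = np.cumsum(p * centers)             # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    return centers[np.nanargmax(between)]

# Bimodal toy data: dark background vs bright root-canal voxels.
rng = np.random.default_rng(2)
vals = np.concatenate([rng.normal(50, 10, 5000), rng.normal(200, 15, 1000)])
t = otsu_threshold(vals)
print(70 < t < 180)  # True: threshold falls in the gap between the modes
```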

  6. Automatic segmentation of right ventricle on ultrasound images using sparse matrix transform and level set

    NASA Astrophysics Data System (ADS)

    Qin, Xulei; Cong, Zhibin; Halig, Luma V.; Fei, Baowei

    2013-03-01

    An automatic framework is proposed to segment the right ventricle on ultrasound images. The method can automatically segment both epicardial and endocardial boundaries from a continuous echocardiography series by combining a sparse matrix transform (SMT), a training model, and a localized region-based level set. First, the sparse matrix transform extracts the main motion regions of the myocardium as eigenimages by analyzing the statistical information of the images. Second, a training model of the right ventricle is registered to the extracted eigenimages in order to automatically detect the main location of the right ventricle and the corresponding transform relationship between the training model and the SMT-extracted results in the series. Third, the training model is then adjusted as an adapted initialization for the segmentation of each image in the series. Finally, based on the adapted initializations, a localized region-based level set algorithm is applied to segment both epicardial and endocardial boundaries of the right ventricle from the whole series. Experimental results from real subject data validated the performance of the proposed framework in segmenting the right ventricle from echocardiography. The mean Dice scores for the epicardial and endocardial boundaries are 89.1% +/- 2.3% and 83.6% +/- 7.3%, respectively. The automatic segmentation method based on the sparse matrix transform and level set can provide a useful tool for quantitative cardiac imaging.

  7. Automatic orbital GTAW welding: Highest quality welds for tomorrow's high-performance systems

    NASA Technical Reports Server (NTRS)

    Henon, B. K.

    1985-01-01

    Automatic orbital gas tungsten arc welding (GTAW), or TIG welding, is certain to play an increasingly prominent role in tomorrow's technology. The welds are of the highest quality, and the repeatability of automatic welding is vastly superior to that of manual welding. Since less heat is applied to the weld during automatic welding than during manual welding, there is less change in the metallurgical properties of the parent material. The possibility of accurate control and the cleanliness of the automatic GTAW process make it highly suitable for welding the more exotic and expensive materials now widely used in the aerospace and hydrospace industries. Titanium, stainless steel, Inconel, and Incoloy, as well as aluminum, can all be welded automatically to the highest quality specifications. Automatic orbital GTAW equipment is available for the fusion butt welding of tube-to-tube joints as well as tube to auto-buttweld fittings. The same equipment can also be used for the fusion butt welding of pipe up to 6 inches in diameter with a wall thickness of up to 0.154 inches.

  8. Validation of automatic landmark identification for atlas-based segmentation for radiation treatment planning of the head-and-neck region

    NASA Astrophysics Data System (ADS)

    Leavens, Claudia; Vik, Torbjørn; Schulz, Heinrich; Allaire, Stéphane; Kim, John; Dawson, Laura; O'Sullivan, Brian; Breen, Stephen; Jaffray, David; Pekar, Vladimir

    2008-03-01

    Manual contouring of target volumes and organs at risk in radiation therapy is extremely time-consuming, in particular for treating the head-and-neck area, where a single patient treatment plan can take several hours to contour. As radiation treatment delivery moves towards adaptive treatment, the need for more efficient segmentation techniques will increase. We are developing a method for automatic model-based segmentation of the head and neck. This process can be broken down into three main steps: i) automatic landmark identification in the image dataset of interest, ii) automatic landmark-based initialization of deformable surface models to the patient image dataset, and iii) adaptation of the deformable models to the patient-specific anatomical boundaries of interest. In this paper, we focus on the validation of the first step of this method, quantifying the results of our automatic landmark identification method. We use an image atlas formed by applying thin-plate spline (TPS) interpolation to ten atlas datasets, using 27 manually identified landmarks in each atlas/training dataset. The principal variation modes returned by principal component analysis (PCA) of the landmark positions were used by an automatic registration algorithm, which sought the corresponding landmarks in the clinical dataset of interest using a controlled random search algorithm. Applying a run time of 60 seconds to the random search, a root mean square (rms) distance to the ground-truth landmark position of 9.5 +/- 0.6 mm was calculated for the identified landmarks. Automatic segmentation of the brain, mandible and brain stem, using the detected landmarks, is demonstrated.
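    The principal variation modes of landmark positions mentioned above come from a standard PCA of the stacked landmark coordinates. A sketch with synthetic training shapes (all data below are hypothetical, sized to match the record's 10 datasets of 27 landmarks):

```python
import numpy as np

# Ten hypothetical training shapes, each with 27 (x, y) landmarks flattened to 54-vectors
rng = np.random.default_rng(0)
mean_shape = rng.uniform(0, 100, size=54)
training = mean_shape + rng.normal(0, 2.0, size=(10, 54))

# PCA: centre the landmark vectors, then eigendecompose their covariance
centred = training - training.mean(axis=0)
cov = centred.T @ centred / (len(training) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]          # principal modes, largest variance first
modes = eigvecs[:, order]
variances = eigvals[order]

# With 10 samples in 54 dimensions, at most 9 modes carry variance
print(int(np.sum(variances > 1e-8)))  # 9
```

    A registration algorithm can then search only along the leading `modes`, which constrains candidate landmark configurations to plausible shapes.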

  9. Recognizing lexical and semantic change patterns in evolving life science ontologies to inform mapping adaptation.

    PubMed

    Dos Reis, Julio Cesar; Dinh, Duy; Da Silveira, Marcos; Pruski, Cédric; Reynaud-Delaître, Chantal

    2015-03-01

Mappings established between life science ontologies require significant effort to keep up to date, owing to the size and frequent evolution of these ontologies. Consequently, automatic methods for applying modifications to mappings are in high demand. The accuracy of such methods relies on the available description of ontology evolution, especially regarding concepts involved in mappings. However, from one ontology version to another, a deeper understanding of the ontology changes relevant for supporting mapping adaptation is typically lacking. This research work defines a set of change patterns at the level of concept attributes and proposes original methods to automatically recognize instances of these patterns based on the similarity between attributes denoting the evolving concepts. This investigation evaluates the benefits of the proposed methods and the influence of the recognized change patterns on the selection of strategies for mapping adaptation. The findings are summarized as follows: (1) the Precision (>60%) and Recall (>35%) achieved by comparing manually identified change patterns with the automatically recognized ones; (2) a set of potential impacts of recognized change patterns on the way mappings are adapted. We found that the detected correlations cover ∼66% of the mapping adaptation actions with a positive impact; and (3) the influence of the similarity coefficient calculated between concept attributes on the performance of the recognition algorithms. The experimental evaluations conducted with real life science ontologies showed the effectiveness of our approach in accurately characterizing ontology evolution at the level of concept attributes. This investigation confirmed the relevance of the proposed change patterns for supporting decisions on mapping adaptation. Copyright © 2014 Elsevier B.V. All rights reserved.
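    Attribute-level similarity of the kind such recognition methods rely on can be illustrated with a token-based Jaccard coefficient. The pattern labels and threshold below are illustrative, not the definitions used in the paper:

```python
def jaccard(a: str, b: str) -> float:
    """Token-level Jaccard similarity between two attribute strings."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta and not tb:
        return 1.0
    return len(ta & tb) / len(ta | tb)

def change_pattern(old: str, new: str, threshold: float = 0.5) -> str:
    """Crude pattern label: unchanged, revised (similar wording), or replaced."""
    if old == new:
        return "unchanged"
    return "revised" if jaccard(old, new) >= threshold else "replaced"

print(change_pattern("myocardial infarction", "acute myocardial infarction"))  # revised
print(change_pattern("heart attack", "renal failure"))                         # replaced
```

    A mapping adaptation strategy could then, for example, keep mappings to "revised" concepts while flagging mappings to "replaced" concepts for review.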

  10. In-hardware demonstration of model-independent adaptive tuning of noisy systems with arbitrary phase drift

    DOE PAGES

    Scheinker, Alexander; Baily, Scott; Young, Daniel; ...

    2014-08-01

In this work, an implementation of a recently developed model-independent adaptive control scheme for tuning uncertain and time-varying systems is demonstrated on the Los Alamos linear particle accelerator. The main benefits of the algorithm are its simplicity, its ability to handle an arbitrary number of components without increased complexity, and its extreme robustness to measurement noise, a property that is both analytically proven and demonstrated in the experiments performed. We report on the application of this algorithm for simultaneous tuning of two buncher radio frequency (RF) cavities, in order to maximize beam acceptance into the accelerating electromagnetic field cavities of the machine, with the tuning based only on a noisy measurement of the surviving beam current downstream from the two bunching cavities. The algorithm automatically responds to arbitrary phase shifts of the cavity phases, re-tuning the cavity settings and maximizing beam acceptance. Because it is model independent, it can be utilized for continuous adaptation to time variation of a large system, such as that due to thermal drift or damage to components, in which the remaining functional components would be automatically re-tuned to compensate for the failing ones. We start by discussing the general model-independent adaptive scheme and how it may be digitally applied to a large class of multi-parameter uncertain systems, and then present our experimental results.
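    The class of model-independent schemes described here can be sketched as extremum seeking: each parameter dithers at its own frequency, and the measured (noisy) cost modulates the dither phase, which on average drives the parameters toward the optimum with no gradient or model information. The cost function, gains, and frequencies below are all hypothetical stand-ins, not the accelerator's:

```python
import math
import random

def noisy_cost(theta):
    """Hypothetical tuning objective (unknown to the algorithm): squared
    distance to an unknown optimal setting, plus measurement noise."""
    opt = [0.7, -0.3]
    d2 = sum((t - o) ** 2 for t, o in zip(theta, opt))
    return d2 + random.gauss(0, 0.01)

# Model-independent extremum seeking: each parameter dithers at a distinct
# frequency; the measured cost enters only through the dither phase, which
# on average pushes the parameters downhill.
random.seed(1)
theta = [0.0, 0.0]
dt, k, alpha = 0.01, 4.0, 0.5
omegas = [30.0, 37.0]                      # distinct frequencies per parameter
for step in range(20000):
    t = step * dt
    c = noisy_cost(theta)                  # single scalar feedback measurement
    for i, w in enumerate(omegas):
        theta[i] += dt * math.sqrt(alpha * w) * math.cos(w * t + k * c)

print(theta)                               # settles near the unknown optimum
```

    The dither never fully vanishes, so the parameters keep oscillating around the optimum, which is what lets the scheme track a slowly drifting system.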

  11. Adaptive sleep-wake discrimination for wearable devices.

    PubMed

    Karlen, Walter; Floreano, Dario

    2011-04-01

    Sleep/wake classification systems that rely on physiological signals suffer from intersubject differences that make accurate classification with a single, subject-independent model difficult. To overcome the limitations of intersubject variability, we suggest a novel online adaptation technique that updates the sleep/wake classifier in real time. The objective of the present study was to evaluate the performance of a newly developed adaptive classification algorithm that was embedded on a wearable sleep/wake classification system called SleePic. The algorithm processed ECG and respiratory effort signals for the classification task and applied behavioral measurements (obtained from accelerometer and press-button data) for the automatic adaptation task. When trained as a subject-independent classifier algorithm, the SleePic device was only able to correctly classify 74.94 ± 6.76% of the human-rated sleep/wake data. By using the suggested automatic adaptation method, the mean classification accuracy could be significantly improved to 92.98 ± 3.19%. A subject-independent classifier based on activity data only showed a comparable accuracy of 90.44 ± 3.57%. We demonstrated that subject-independent models used for online sleep-wake classification can successfully be adapted to previously unseen subjects without the intervention of human experts or off-line calibration.

  12. Physiological Self-Regulation and Adaptive Automation

    NASA Technical Reports Server (NTRS)

    Prinzell, Lawrence J.; Pope, Alan T.; Freeman, Frederick G.

    2007-01-01

Adaptive automation has been proposed as a solution to current problems of human-automation interaction. Past research has shown the potential of this advanced form of automation to enhance pilot engagement and lower cognitive workload. However, concerns have been voiced regarding issues, such as automation surprises, associated with the use of adaptive automation. This study examined the use of psychophysiological self-regulation training with adaptive automation, which may help pilots deal with these problems through the enhancement of cognitive resource management skills. Eighteen participants were assigned to 3 groups (self-regulation training, false feedback, and control) and performed resource management, monitoring, and tracking tasks from the Multiple Attribute Task Battery. The tracking task was cycled between 3 levels of task difficulty (automatic, adaptive aiding, manual) on the basis of the electroencephalogram-derived engagement index. The other two tasks remained in automatic mode, which included a single automation failure. Participants who had received self-regulation training performed significantly better and reported lower National Aeronautics and Space Administration Task Load Index scores than participants in the false feedback and control groups. The theoretical and practical implications of these results for adaptive automation are discussed.
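    The engagement index in this line of work is typically derived from EEG band powers as beta over alpha plus theta. A sketch of how such an index might drive task allocation; the thresholds and hysteresis band below are illustrative, not the study's values:

```python
def engagement_index(beta: float, alpha: float, theta: float) -> float:
    """EEG engagement index beta / (alpha + theta)."""
    return beta / (alpha + theta)

def next_mode(current_mode: str, index: float,
              low: float = 0.4, high: float = 0.7) -> str:
    """Illustrative negative-feedback allocation: offload the task when
    engagement is high, hand back manual control when engagement drops.
    (Thresholds are hypothetical.)"""
    if index < low:
        return "manual"          # low engagement: give the operator the task
    if index > high:
        return "automatic"       # high engagement: offload to automation
    return current_mode          # hysteresis band: keep the current mode

mode = "automatic"
for beta, alpha, theta in [(8.0, 6.0, 5.0), (3.0, 6.0, 5.0), (6.0, 6.0, 5.0)]:
    mode = next_mode(mode, engagement_index(beta, alpha, theta))
    print(mode)  # automatic, manual, manual
```

    The hysteresis band prevents rapid mode oscillation when the index hovers near a threshold, one source of the "automation surprises" mentioned above.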

  13. Adaptive Personalized Training Games for Individual and Collaborative Rehabilitation of People with Multiple Sclerosis

    PubMed Central

    2014-01-01

Any rehabilitation involves people who are unique individuals with their own characteristics and rehabilitation needs, including patients suffering from Multiple Sclerosis (MS). The pronounced variation in MS symptoms and disease severity creates a need to accommodate patient diversity and support adaptive personalized training that meets every patient's rehabilitation needs. In this paper, we focus on integrating adaptivity and personalization into rehabilitation training for MS patients. We introduced the automatic adjustment of difficulty levels as an adaptation that can be provided in individual and collaborative rehabilitation training exercises for MS patients. Two user studies were carried out with nine MS patients to investigate the outcome of this adaptation. The findings showed that adaptive personalized training trajectories were successfully provided to MS patients according to their individual training progress, which was appreciated by the patients and the therapist. They considered the automatic adjustment of difficulty levels to provide more variety in the training and to minimize the therapist's involvement in setting up the training. With regard to social interaction in the collaborative training exercise, we observed some social behaviors between the patients and their training partner which indicated the development of social interaction during the training. PMID:24982862
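    Automatic difficulty adjustment of this kind is often implemented as a staircase rule on recent performance. A minimal sketch; the success-rate thresholds and level range are hypothetical, not the study's parameters:

```python
def adjust_difficulty(level: int, success_rate: float,
                      low: float = 0.6, high: float = 0.8,
                      min_level: int = 1, max_level: int = 10) -> int:
    """Staircase-style adjustment: step the difficulty up when the patient
    performs well, down when they struggle, otherwise hold it steady."""
    if success_rate > high:
        return min(level + 1, max_level)
    if success_rate < low:
        return max(level - 1, min_level)
    return level

level = 5
for rate in [0.9, 0.9, 0.5, 0.7]:
    level = adjust_difficulty(level, rate)
print(level)  # 5 -> 6 -> 7 -> 6, then held at 6
```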

  14. Towards autonomous neuroprosthetic control using Hebbian reinforcement learning.

    PubMed

    Mahmoudi, Babak; Pohlmeyer, Eric A; Prins, Noeline W; Geng, Shijia; Sanchez, Justin C

    2013-12-01

    Our goal was to design an adaptive neuroprosthetic controller that could learn the mapping from neural states to prosthetic actions and automatically adjust adaptation using only a binary evaluative feedback as a measure of desirability/undesirability of performance. Hebbian reinforcement learning (HRL) in a connectionist network was used for the design of the adaptive controller. The method combines the efficiency of supervised learning with the generality of reinforcement learning. The convergence properties of this approach were studied using both closed-loop control simulations and open-loop simulations that used primate neural data from robot-assisted reaching tasks. The HRL controller was able to perform classification and regression tasks using its episodic and sequential learning modes, respectively. In our experiments, the HRL controller quickly achieved convergence to an effective control policy, followed by robust performance. The controller also automatically stopped adapting the parameters after converging to a satisfactory control policy. Additionally, when the input neural vector was reorganized, the controller resumed adaptation to maintain performance. By estimating an evaluative feedback directly from the user, the HRL control algorithm may provide an efficient method for autonomous adaptation of neuroprosthetic systems. This method may enable the user to teach the controller the desired behavior using only a simple feedback signal.
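    The core idea, a Hebbian weight update whose sign is gated by a binary evaluative feedback signal, can be sketched on a toy classification task. Everything below (network size, learning rate, task rule) is a hypothetical illustration, not the paper's controller:

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_actions = 8, 2
W = rng.normal(0, 0.1, size=(n_actions, n_inputs))

def act(x, W, noise=0.1):
    """Pick the action with the highest (noisy) activation; the noise
    provides exploration early in learning."""
    return int(np.argmax(W @ x + rng.normal(0, noise, size=W.shape[0])))

def hebbian_rl_update(W, x, action, reward, lr=0.05):
    """Hebbian reinforcement update: binary feedback (+1 good / -1 bad)
    gates the sign of the pre/post correlation term for the chosen action."""
    W[action] += lr * reward * x
    return W

# Toy task: the 'correct' action is 0 when input unit 0 is active, else 1
for trial in range(500):
    x = np.zeros(n_inputs)
    x[rng.integers(n_inputs)] = 1.0
    a = act(x, W)
    target = 0 if x[0] == 1.0 else 1
    reward = 1.0 if a == target else -1.0   # binary evaluative feedback
    W = hebbian_rl_update(W, x, a, reward)

# After training, the policy follows the rule on a clean input
x = np.zeros(n_inputs)
x[0] = 1.0
print(int(np.argmax(W @ x)))  # 0
```

    Note how wrong choices are actively punished: the chosen action's weights shrink until another action wins, which is what lets the controller resume adaptation if the input mapping is reorganized.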

  15. ArControl: An Arduino-Based Comprehensive Behavioral Platform with Real-Time Performance.

    PubMed

    Chen, Xinfeng; Li, Haohong

    2017-01-01

Studying animal behavior in the lab requires reliably delivering stimuli and monitoring responses. We constructed a comprehensive behavioral platform (ArControl: Arduino Control Platform), an affordable, easy-to-use, high-performance solution combining software and hardware components. The hardware component consisted of an Arduino UNO board and a simple drive circuit. As for software, ArControl provided a stand-alone and intuitive GUI (graphical user interface) application that did not require users to master scripts. The experiment data were automatically recorded with the built-in DAQ (data acquisition) function. ArControl also allowed the behavioral schedule to be stored entirely in, and operated on, the Arduino chip. This made ArControl a genuine real-time system with high temporal resolution (<1 ms). We tested ArControl using strict performance measurements and two mouse behavioral experiments. The results showed that ArControl was an adaptive and reliable system suitable for behavioral research.

  16. ArControl: An Arduino-Based Comprehensive Behavioral Platform with Real-Time Performance

    PubMed Central

    Chen, Xinfeng; Li, Haohong

    2017-01-01

Studying animal behavior in the lab requires reliably delivering stimuli and monitoring responses. We constructed a comprehensive behavioral platform (ArControl: Arduino Control Platform), an affordable, easy-to-use, high-performance solution combining software and hardware components. The hardware component consisted of an Arduino UNO board and a simple drive circuit. As for software, ArControl provided a stand-alone and intuitive GUI (graphical user interface) application that did not require users to master scripts. The experiment data were automatically recorded with the built-in DAQ (data acquisition) function. ArControl also allowed the behavioral schedule to be stored entirely in, and operated on, the Arduino chip. This made ArControl a genuine real-time system with high temporal resolution (<1 ms). We tested ArControl using strict performance measurements and two mouse behavioral experiments. The results showed that ArControl was an adaptive and reliable system suitable for behavioral research. PMID:29321735

  17. Morphological self-organizing feature map neural network with applications to automatic target recognition

    NASA Astrophysics Data System (ADS)

    Zhang, Shijun; Jing, Zhongliang; Li, Jianxun

    2005-01-01

The rotation-invariant feature of the target is obtained using the multi-direction feature extraction property of the steerable filter. Combining the morphological top-hat transform with the self-organizing feature map neural network, the adaptive topological region is selected. Using the erosion operation, shrinkage of the topological region is achieved. The steerable-filter-based morphological self-organizing feature map neural network is applied to automatic target recognition of binary standard patterns and real-world infrared sequence images. Compared with the Hamming network and morphological shared-weight networks, the proposed method achieves a higher recognition rate, robust adaptability, quick training, and better generalization.
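    The white top-hat transform mentioned above is the image minus its morphological opening (erosion followed by dilation), which keeps bright details smaller than the structuring element. A self-contained sketch using plain sliding-window min/max (a 3x3 flat structuring element, chosen here for illustration):

```python
import numpy as np

def erode(img, k):
    """Grayscale erosion with a flat k x k structuring element (edge padding)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = p[i:i + k, j:j + k].min()
    return out

def dilate(img, k):
    """Grayscale dilation with a flat k x k structuring element (edge padding)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = p[i:i + k, j:j + k].max()
    return out

def white_tophat(img, k=3):
    """White top-hat: image minus its opening; bright details smaller than
    the structuring element survive, large-scale background is removed."""
    return img - dilate(erode(img, k), k)

# A single bright blob on a flat background survives the top-hat unchanged
img = np.zeros((7, 7))
img[3, 3] = 5.0
th = white_tophat(img, k=3)
print(th[3, 3], th.sum())  # 5.0 5.0
```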

  18. Development of a prototype automatic controller for liquid cooling garment inlet temperature

    NASA Technical Reports Server (NTRS)

    Weaver, C. S.; Webbon, B. W.; Montgomery, L. D.

    1982-01-01

The development of computer control of liquid cooled garment (LCG) inlet temperature is described. An adaptive model of the LCG is used to predict the heat-removal rates for various inlet temperatures. An experimental system containing a microcomputer was constructed. The LCG inlet and outlet temperatures and the heat exchanger outlet temperature form the inputs to the computer. The adaptive model prediction method of control is successful during tests where the inlet temperature is automatically chosen by the computer. It is concluded that the program can be implemented in a microprocessor of a size that is practical for a life-support backpack.

  19. Control Automation in Undersea Search and Manipulation

    NASA Technical Reports Server (NTRS)

    Weltman, Gershon; Freedy, Amos

    1974-01-01

    Automatic decision making and control mechanisms of the type termed "adaptive" or "intelligent" offer unique advantages for exploration and manipulation of the undersea environment, particularly at great depths. Because they are able to carry out human-like functions autonomously, such mechanisms can aid and extend the capabilities of the human operator. This paper reviews past and present work in the areas of adaptive control and robotics with the purpose of establishing logical guidelines for the application of automatic techniques underwater. Experimental research data are used to illustrate the importance of information feedback, personnel training, and methods of control allocation in the interaction between operator and intelligent machine.

  20. A New Feedback-Based Method for Parameter Adaptation in Image Processing Routines.

    PubMed

    Khan, Arif Ul Maula; Mikut, Ralf; Reischl, Markus

    2016-01-01

The parametrization of automatic image processing routines is time-consuming when many image processing parameters are involved. An expert can tune parameters sequentially to get the desired results. This may not be productive for applications with difficult image analysis tasks, e.g. when high noise and shading levels are present in an image or images vary in their characteristics due to different acquisition conditions. Parameters then need to be tuned simultaneously. We propose a framework to improve standard image segmentation methods by using feedback-based automatic parameter adaptation. Moreover, we compare algorithms by implementing them in a feedforward fashion and then adapting their parameters. This comparison is evaluated on a benchmark data set that contains challenging image distortions in increasing degrees. This enables us to compare different standard image segmentation algorithms in feedback vs. feedforward implementations by evaluating their segmentation quality and robustness. We also propose an efficient way of performing automatic image analysis when only abstract ground truth is present. Such a framework evaluates the robustness of different image processing pipelines using a graded data set. This is useful for both end-users and experts.
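    Feedback-based parameter adaptation can be illustrated with the simplest possible case: a single segmentation threshold adjusted until a feedback signal matches an abstract ground truth (here, a hypothetical expected foreground fraction). This is a generic sketch, not the framework of the paper:

```python
import numpy as np

def segment(img, threshold):
    """Trivial segmentation routine with one tunable parameter."""
    return img > threshold

def adapt_threshold(img, target_fraction=0.2, lo=0.0, hi=1.0, iters=30):
    """Feedback loop: bisect on the threshold until the measured foreground
    fraction (the feedback signal) matches the abstract ground truth."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if segment(img, mid).mean() > target_fraction:
            lo = mid           # too much foreground: raise the threshold
        else:
            hi = mid           # too little foreground: lower it
    return (lo + hi) / 2

rng = np.random.default_rng(0)
img = rng.random((64, 64))                 # stand-in for a noisy image
t = adapt_threshold(img)
print(round(segment(img, t).mean(), 2))    # 0.2
```

    With several interacting parameters the same idea generalizes to an optimizer over the parameter vector, which is what makes simultaneous tuning tractable.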

  1. Automatic vasculature identification in coronary angiograms by adaptive geometrical tracking.

    PubMed

    Xiao, Ruoxiu; Yang, Jian; Goyal, Mahima; Liu, Yue; Wang, Yongtian

    2013-01-01

Owing to the uneven distribution of contrast agents and the perspective projection principle of X-ray imaging, the vasculature in an angiographic image has low contrast and is generally superimposed on other organic tissues; therefore, it is very difficult to identify the vasculature and quantitatively estimate blood flow directly from angiographic images. In this paper, we propose a fully automatic algorithm named adaptive geometrical vessel tracking (AGVT) for coronary artery identification in X-ray angiograms. Initially, the ridge enhancement (RE) image is obtained utilizing multiscale Hessian information. Then, automatic initialization procedures, including seed point detection and initial direction determination, are performed on the RE image. The extracted ridge points can be adjusted to the geometrical centerline points adaptively through diameter estimation. Bifurcations are identified by discriminating the connecting relationships of the tracked ridge points. Finally, all the tracked centerlines are merged and smoothed by classifying the connecting components on the vascular structures. Synthetic angiographic images and clinical angiograms are used to evaluate the performance of the proposed algorithm. The proposed algorithm is compared with two other vascular tracking techniques in terms of efficiency and accuracy, which demonstrates successful application of the proposed segmentation and extraction scheme in vasculature identification.
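    Hessian-based ridge enhancement rests on the fact that along a bright ridge the smaller Hessian eigenvalue is strongly negative. A single-scale sketch on a synthetic "vessel" (the paper uses this idea over multiple scales; the per-scale Gaussian smoothing is omitted here for brevity):

```python
import numpy as np

def ridge_measure(img):
    """Single-scale Hessian ridge measure: -lambda_min, clipped at zero,
    highlights bright ridge pixels."""
    gy, gx = np.gradient(img)              # first derivatives (rows, cols)
    gyy, _ = np.gradient(gy)
    gxy, gxx = np.gradient(gx)
    tr = gxx + gyy                         # trace of the 2x2 Hessian
    det = gxx * gyy - gxy * gxy            # determinant (symmetric Hessian)
    disc = np.sqrt(np.maximum(tr ** 2 / 4.0 - det, 0.0))
    lam_min = tr / 2.0 - disc              # smaller eigenvalue
    return np.maximum(-lam_min, 0.0)

# Synthetic image: one bright vertical "vessel" on a dark background
x = np.arange(32) - 16.0
img = np.tile(np.exp(-x ** 2 / 8.0), (32, 1))
re = ridge_measure(img)
print(re[16, 16] > re[16, 4])  # True: response peaks on the ridge centreline
```

    Seed points for tracking can then be taken as local maxima of the RE image, and initial directions from the Hessian eigenvector along the ridge.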

  2. A New Feedback-Based Method for Parameter Adaptation in Image Processing Routines

    PubMed Central

    Mikut, Ralf; Reischl, Markus

    2016-01-01

The parametrization of automatic image processing routines is time-consuming when many image processing parameters are involved. An expert can tune parameters sequentially to get the desired results. This may not be productive for applications with difficult image analysis tasks, e.g. when high noise and shading levels are present in an image or images vary in their characteristics due to different acquisition conditions. Parameters then need to be tuned simultaneously. We propose a framework to improve standard image segmentation methods by using feedback-based automatic parameter adaptation. Moreover, we compare algorithms by implementing them in a feedforward fashion and then adapting their parameters. This comparison is evaluated on a benchmark data set that contains challenging image distortions in increasing degrees. This enables us to compare different standard image segmentation algorithms in feedback vs. feedforward implementations by evaluating their segmentation quality and robustness. We also propose an efficient way of performing automatic image analysis when only abstract ground truth is present. Such a framework evaluates the robustness of different image processing pipelines using a graded data set. This is useful for both end-users and experts. PMID:27764213

  3. Feedback Control of Rotor Overspeed

    NASA Technical Reports Server (NTRS)

    Churchill, G. B.

    1984-01-01

    Feedback system for automatically governing helicopter rotor speed promises to lessen pilot's workload, enhance maneuverability, and protect airframe. With suitable modifications, concept applied to control speed of electrical generators, automotive engines and other machinery.

  4. Study of Adaptive Mathematical Models for Deriving Automated Pilot Performance Measurement Techniques. Volume I. Model Development.

    ERIC Educational Resources Information Center

    Connelly, Edward A.; And Others

    A new approach to deriving human performance measures and criteria for use in automatically evaluating trainee performance is documented in this report. The ultimate application of the research is to provide methods for automatically measuring pilot performance in a flight simulator or from recorded in-flight data. An efficient method of…

  5. Study of Adaptive Mathematical Models for Deriving Automated Pilot Performance Measurement Techniques. Volume II. Appendices. Final Report.

    ERIC Educational Resources Information Center

    Connelly, E. M.; And Others

    A new approach to deriving human performance measures and criteria for use in automatically evaluating trainee performance is described. Ultimately, this approach will allow automatic measurement of pilot performance in a flight simulator or from recorded in-flight data. An efficient method of representing performance data within a computer is…

  6. Using Automatic Item Generation to Meet the Increasing Item Demands of High-Stakes Educational and Occupational Assessment

    ERIC Educational Resources Information Center

    Arendasy, Martin E.; Sommer, Markus

    2012-01-01

    The use of new test administration technologies such as computerized adaptive testing in high-stakes educational and occupational assessments demands large item pools. Classic item construction processes and previous approaches to automatic item generation faced the problems of a considerable loss of items after the item calibration phase. In this…

  7. 38 CFR 17.157 - Definition-adaptive equipment.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... includes, but is not limited to, a basic automatic transmission, power steering, power brakes, power window lifts, power seats, air-conditioning equipment when necessary for the health and safety of the veteran... MEDICAL Automotive Equipment and Driver Training § 17.157 Definition-adaptive equipment. The term...

  8. Estimating the quality of pasturage in the municipality of Paragominas (PA) by means of automatic analysis of LANDSAT data

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Dossantos, A. P.; Novo, E. M. L. D.; Duarte, V.

    1981-01-01

    The use of LANDSAT data to evaluate pasture quality in the Amazon region is demonstrated. Pasture degradation in deforested areas of a traditional tropical forest cattle-raising region was estimated. Automatic analysis using interactive multispectral analysis (IMAGE-100) shows that 24% of the deforested areas were occupied by natural vegetation regrowth, 24% by exposed soil, 15% by degraded pastures, and 46% was suitable grazing land.

  9. Automatic single questionnaire intensity (SQI, EMS98 scale) estimation using ranking models built on the existing BCSF database

    NASA Astrophysics Data System (ADS)

    Schlupp, A.; Sira, C.; Schmitt, K.; Schaming, M.

    2013-12-01

In charge of intensity estimations in France, BCSF has collected and manually analyzed more than 47,000 online individual macroseismic questionnaires since 2000, up to intensity VI. These macroseismic data allow us to estimate one SQI value (Single Questionnaire Intensity) for each form following the EMS98 scale. The reliability of the automatic intensity estimation is important, as the estimates are today used for automatic shakemap communication and crisis management. Today, the automatic intensity estimation at BCSF is based on the direct use of thumbnails selected on a menu by the witnesses. Each thumbnail corresponds to an EMS-98 intensity value, allowing us to quickly issue a communal intensity map by averaging the SQIs at each city. Afterwards, an expert manually analyzes each form to determine a definitive SQI. This work is time-consuming and no longer suitable given the increasing number of testimonies at BCSF; it can, however, take incoherent answers into account. We tested several automatic methods (USGS algorithm, correlation coefficient, thumbnails) (Sira et al. 2013, IASPEI) and compared them with 'expert' SQIs. These methods gave medium scores (50 to 60% of SQIs correctly determined, and 35 to 40% within plus or minus one intensity degree). The best fit was observed with the thumbnails. Here, we present new approaches based on three statistical ranking methods: 1) a multinomial logistic regression model, 2) discriminant analysis (DISQUAL), and 3) support vector machines (SVMs). The first two methods are standard, while the third is more recent. These methods could be applied because the BCSF already has more than 47,000 forms in its database and because their questions and answers are well adapted to statistical analysis. The ranking models could then be used as automatic methods constrained by expert analysis.
The performance of the automatic methods and the reliability of the estimated SQI can be evaluated thanks to the fact that each definitive BCSF SQI is determined by an expert analysis. We compare the SQIs obtained by these methods from our database and discuss the coherency and variations between automatic and manual processes. These methods lead to high scores, with up to 85% of the forms correctly classified and most of the remaining forms classified with only a shift of one intensity degree. This allows us to use the ranking methods as the best automatic methods for fast SQI estimation and to produce fast shakemaps. The next step, to improve the use of these methods, will be to identify explanations for the forms not classified at the correct value and a way to select the few remaining forms that should be analyzed by the expert. Note that beyond intensity VI, online questionnaires are insufficient and a field survey is indispensable to estimate intensity. For such surveys, in France, BCSF leads a macroseismic intervention group (GIM).

  10. Comparison of a brain-based adaptive system and a manual adaptable system for invoking automation.

    PubMed

    Bailey, Nathan R; Scerbo, Mark W; Freeman, Frederick G; Mikulka, Peter J; Scott, Lorissa A

    2006-01-01

    Two experiments are presented examining adaptive and adaptable methods for invoking automation. Empirical investigations of adaptive automation have focused on methods used to invoke automation or on automation-related performance implications. However, no research has addressed whether performance benefits associated with brain-based systems exceed those in which users have control over task allocations. Participants performed monitoring and resource management tasks as well as a tracking task that shifted between automatic and manual modes. In the first experiment, participants worked with an adaptive system that used their electroencephalographic signals to switch the tracking task between automatic and manual modes. Participants were also divided between high- and low-reliability conditions for the system-monitoring task as well as high- and low-complacency potential. For the second experiment, participants operated an adaptable system that gave them manual control over task allocations. Results indicated increased situation awareness (SA) of gauge instrument settings for individuals high in complacency potential using the adaptive system. In addition, participants who had control over automation performed more poorly on the resource management task and reported higher levels of workload. A comparison between systems also revealed enhanced SA of gauge instrument settings and decreased workload in the adaptive condition. The present results suggest that brain-based adaptive automation systems may enhance perceptual level SA while reducing mental workload relative to systems requiring user-initiated control. Potential applications include automated systems for which operator monitoring performance and high-workload conditions are of concern.

  11. INITIAL APPLICATION OF THE ADAPTIVE GRID AIR POLLUTION MODEL

    EPA Science Inventory

    The paper discusses an adaptive-grid algorithm used in air pollution models. The algorithm reduces errors related to insufficient grid resolution by automatically refining the grid scales in regions of high interest. Meanwhile the grid scales are coarsened in other parts of the d...

  12. Continuously Adaptive vs. Discrete Changes of Task Difficulty in the Training of a Complex Perceptual-Motor Task.

    ERIC Educational Resources Information Center

    Wood, Milton E.

The purpose of the effort was to determine the benefits to be derived from the adaptive training technique of automatically adjusting task difficulty as a function of student skill during early learning of a complex perceptual-motor task. A digital computer provided the task dynamics, scoring, and adaptive control of a second-order, two-axis,…

  13. Automatic Artifact Removal from Electroencephalogram Data Based on A Priori Artifact Information.

    PubMed

    Zhang, Chi; Tong, Li; Zeng, Ying; Jiang, Jingfang; Bu, Haibing; Yan, Bin; Li, Jianxin

    2015-01-01

Electroencephalogram (EEG) is susceptible to various nonneural physiological artifacts. Automatic artifact removal from EEG data remains a key challenge for extracting relevant information from brain activities. To adapt to variable subjects and EEG acquisition environments, this paper presents an automatic online artifact removal method based on a priori artifact information. The combination of discrete wavelet transform and independent component analysis (ICA), wavelet-ICA, was utilized to separate artifact components. The artifact components were then automatically identified using a priori artifact information, which was acquired in advance. Subsequently, signal reconstruction without the artifact components was performed to obtain artifact-free signals. The results showed that, using this automatic online artifact removal method, there were statistically significant improvements in the classification accuracies of both experiments, namely, motor imagery and emotion recognition.
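    The identification step, matching separated components against a priori artifact information, can be sketched with normalized cross-correlation against a previously recorded artifact template. The signals, template, and threshold below are all hypothetical; the separation itself (wavelet-ICA) is assumed to have already produced the components:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 2, 512)

# Hypothetical separated components (as would come out of wavelet-ICA):
neural = np.sin(2 * np.pi * 10 * t)            # 10 Hz neural oscillation
blink = np.exp(-((t - 1.0) ** 2) / 0.002)      # eye-blink-like transient
components = np.vstack([neural, blink])

# A priori artifact information: a blink template recorded in advance
prior_blink = np.exp(-((t - 0.9) ** 2) / 0.002)

def artifact_indices(components, template, threshold=0.3):
    """Flag components whose peak normalized cross-correlation with the
    a priori artifact template exceeds a (hypothetical) threshold."""
    flags = []
    tpl = (template - template.mean()) / template.std()
    for i, c in enumerate(components):
        cn = (c - c.mean()) / c.std()
        xc = np.correlate(cn, tpl, mode="full") / len(tpl)
        if np.abs(xc).max() > threshold:
            flags.append(i)
    return flags

bad = artifact_indices(components, prior_blink)
# Reconstruct the signal from the non-flagged components only
clean = components[[i for i in range(len(components)) if i not in bad]].sum(axis=0)
print(bad)  # [1]
```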

  14. Automatic Artifact Removal from Electroencephalogram Data Based on A Priori Artifact Information

    PubMed Central

    Zhang, Chi; Tong, Li; Zeng, Ying; Jiang, Jingfang; Bu, Haibing; Li, Jianxin

    2015-01-01

    Electroencephalogram (EEG) is susceptible to various nonneural physiological artifacts. Automatic artifact removal from EEG data remains a key challenge for extracting relevant information from brain activities. To adapt to variable subjects and EEG acquisition environments, this paper presents an automatic online artifact removal method based on a priori artifact information. The combination of discrete wavelet transform and independent component analysis (ICA), wavelet-ICA, was utilized to separate artifact components. The artifact components were then automatically identified using a priori artifact information, which was acquired in advance. Subsequently, signal reconstruction without artifact components was performed to obtain artifact-free signals. The results showed that, using this automatic online artifact removal method, there were statistically significant improvements in classification accuracy in both experiments, namely motor imagery and emotion recognition. PMID:26380294

  15. Automatic morphological classification of galaxy images

    PubMed Central

    Shamir, Lior

    2009-01-01

    We describe an image analysis supervised learning algorithm that can automatically classify galaxy images. The algorithm is first trained using manually classified images of elliptical, spiral, and edge-on galaxies. A large set of image features is extracted from each image, and the most informative features are selected using Fisher scores. Test images can then be classified using a simple Weighted Nearest Neighbor rule such that the Fisher scores are used as the feature weights. Experimental results show that galaxy images from Galaxy Zoo can be classified automatically into spiral, elliptical, and edge-on galaxies with an accuracy of ~90% compared to classifications carried out by the author. The full compilable source code of the algorithm is available for free download, and its general-purpose nature makes it suitable for other uses that involve automatic image analysis of celestial objects. PMID:20161594
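
    The Fisher-score feature weighting and weighted nearest neighbor rule described above can be sketched in pure NumPy. The synthetic two-feature data below stand in for the real image features and are invented for illustration.

```python
import numpy as np

def fisher_scores(X, y):
    # Per-feature ratio of between-class variance to within-class variance.
    classes = np.unique(y)
    mu = X.mean(axis=0)
    between = sum((X[y == c].mean(axis=0) - mu) ** 2 * (y == c).sum()
                  for c in classes)
    within = sum(((X[y == c] - X[y == c].mean(axis=0)) ** 2).sum(axis=0)
                 for c in classes)
    return between / (within + 1e-12)

def wnn_classify(x, X_train, y_train, w):
    # Weighted nearest neighbor: Fisher scores weight each feature's
    # contribution to the distance, down-weighting uninformative features.
    d = np.sqrt(((X_train - x) ** 2 * w).sum(axis=1))
    return y_train[np.argmin(d)]

# Two classes; feature 0 is informative, feature 1 is pure noise.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal([5, 0], 1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
w = fisher_scores(X, y)  # w[0] should dominate w[1]
```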

  16. Toward automatic finite element analysis

    NASA Technical Reports Server (NTRS)

    Kela, Ajay; Perucchio, Renato; Voelcker, Herbert

    1987-01-01

    Two problems must be solved if the finite element method is to become a reliable and affordable blackbox engineering tool. Finite element meshes must be generated automatically from computer aided design databases and mesh analysis must be made self-adaptive. The experimental system described solves both problems in 2-D through spatial and analytical substructuring techniques that are now being extended into 3-D.

  17. Spectral-Element Seismic Wave Propagation Codes for both Forward Modeling in Complex Media and Adjoint Tomography

    NASA Astrophysics Data System (ADS)

    Smith, J. A.; Peter, D. B.; Tromp, J.; Komatitsch, D.; Lefebvre, M. P.

    2015-12-01

    We present both SPECFEM3D_Cartesian and SPECFEM3D_GLOBE open-source codes, representing high-performance numerical wave solvers simulating seismic wave propagation for local-, regional-, and global-scale application. These codes are suitable for both forward propagation in complex media and tomographic imaging. Both solvers compute highly accurate seismic wave fields using the continuous Galerkin spectral-element method on unstructured meshes. Lateral variations in compressional- and shear-wave speeds, density, as well as 3D attenuation Q models, topography and fluid-solid coupling are all readily included in both codes. For global simulations, effects due to rotation, ellipticity, the oceans, 3D crustal models, and self-gravitation are additionally included. Both packages provide forward and adjoint functionality suitable for adjoint tomography on high-performance computing architectures. We highlight the most recent release of the global version which includes improved performance, simultaneous MPI runs, OpenCL and CUDA support via an automatic source-to-source transformation library (BOAST), parallel I/O readers and writers for databases using ADIOS and seismograms using the recently developed Adaptable Seismic Data Format (ASDF) with built-in provenance. This makes our spectral-element solvers current state-of-the-art, open-source community codes for high-performance seismic wave propagation on arbitrarily complex 3D models. Together with these solvers, we provide full-waveform inversion tools to image the Earth's interior at unprecedented resolution.

  18. Automatic Incubator-type Temperature Control System for Brain Hypothermia Treatment

    NASA Astrophysics Data System (ADS)

    Gaohua, Lu; Wakamatsu, Hidetoshi

    An automatic air-cooling incubator is proposed to replace the manual water-cooling blanket for controlling brain tissue temperature during brain hypothermia treatment. Its feasibility is discussed theoretically as follows: First, an adult patient with the cooling incubator is modeled as a linear dynamical patient-incubator biothermal system. The patient is represented by an 18-compartment structure and described by its state equations. The air-cooling incubator provides almost the same cooling effect as the water-cooling blanket if a light breeze of around 3 m/s is circulated in the incubator. Then, in order to control the brain temperature automatically, an adaptive-optimal control algorithm is adopted, with the patient-blanket therapeutic system considered as a reference model. Finally, the brain temperature of the patient-incubator biothermal system is controlled to follow the given reference temperature course, for which the adaptive algorithm is confirmed to be useful under unknown environmental changes and/or metabolic rate changes of the patient in the incubating system. Thus, the present work supports the development of the automatic air-cooling incubator for better temperature regulation in brain hypothermia treatment in the ICU.

  19. Functional relationships among monitoring performance: Subjective report of thought process and compromising states of awareness

    NASA Technical Reports Server (NTRS)

    Freeman, Frederick

    1995-01-01

    A biocybernetic system for use in adaptive automation was evaluated using EEG indices based on the beta, alpha, and theta bandwidths. Subjects performed a compensatory tracking task while their EEG was recorded and one of three engagement indices was derived: beta/(alpha + theta), beta/alpha, or 1/alpha. The task was switched between manual and automatic modes as a function of the subjects' level of engagement and whether they were under a positive or negative feedback condition. It was hypothesized that negative feedback would produce more switches between manual and automatic modes, and that the beta/(alpha + theta) index would produce the strongest effect. The results confirmed these hypotheses. There were no systematic changes in these effects over three 16-minute trials. Tracking performance was found to be better under negative feedback. An analysis of the different EEG bands under positive and negative feedback in manual and automatic modes found more beta power in the positive feedback/manual condition and less in the positive feedback/automatic condition. The opposite effect was observed for alpha and theta power. The implications of biocybernetic systems for adaptive automation are discussed.
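
    The engagement indices and mode-switching logic of the biocybernetic system described above can be sketched as follows. The code shows the strongest index, beta/(alpha + theta); the baseline-comparison switching policy is an assumption for illustration, as this record does not give the study's exact criterion.

```python
def engagement_index(beta, alpha, theta):
    # Task-engagement index from EEG band powers: beta/(alpha + theta).
    # Higher values indicate greater engagement.
    return beta / (alpha + theta)

def next_mode(index, baseline, negative_feedback=True):
    # Negative feedback: low engagement hands the task back to the
    # operator (manual) to re-engage them; high engagement lets the
    # system automate it. Positive feedback reverses the contingency.
    if negative_feedback:
        return "manual" if index < baseline else "automatic"
    return "automatic" if index < baseline else "manual"
```

    Under negative feedback the closed loop counteracts drifts in engagement, which is consistent with the reported finding of more mode switches and better tracking in that condition.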

  20. Conflict Adaptation Depends on Task Structure

    ERIC Educational Resources Information Center

    Akcay, Caglar; Hazeltine, Eliot

    2008-01-01

    The dependence of the Simon effect on the correspondence of the previous trial can be explained by the conflict-monitoring theory, which holds that a control system adjusts automatic activation from irrelevant stimulus information (conflict adaptation) on the basis of the congruency of the previous trial. The authors report on 4 experiments…

  1. Convergence of an hp-Adaptive Finite Element Strategy in Two and Three Space-Dimensions

    NASA Astrophysics Data System (ADS)

    Bürg, Markus; Dörfler, Willy

    2010-09-01

    We show convergence of an automatic hp-adaptive refinement strategy for the finite element method on the elliptic boundary value problem. The strategy is a generalization of a refinement strategy proposed for one-dimensional situations to problems in two and three space-dimensions.
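
    This record does not detail the refinement strategy itself. As a hedged illustration, a marking step commonly used in convergent adaptive FEM loops is bulk ('Dörfler') marking, sketched here in plain Python; it is not necessarily the exact strategy analyzed in the paper.

```python
def dorfler_marking(errors, theta=0.5):
    # Mark the smallest set of elements whose combined squared error
    # indicator reaches a fixed fraction theta of the total squared error.
    order = sorted(range(len(errors)), key=lambda i: -errors[i])
    total = sum(e * e for e in errors)
    marked, acc = [], 0.0
    for i in order:
        marked.append(i)
        acc += errors[i] ** 2
        if acc >= theta * total:
            break
    return sorted(marked)
```

    The marked elements are then refined (in h, p, or both), and the solve-estimate-mark-refine loop repeats until the estimated error is below tolerance.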

  2. Dynamic Learner Profiling and Automatic Learner Classification for Adaptive E-Learning Environment

    ERIC Educational Resources Information Center

    Premlatha, K. R.; Dharani, B.; Geetha, T. V.

    2016-01-01

    E-learning allows learners individually to learn "anywhere, anytime" and offers immediate access to specific information. However, learners have different behaviors, learning styles, attitudes, and aptitudes, which affect their learning process, and therefore learning environments need to adapt according to these differences, so as to…

  3. Criteria for the assessment of analyser practicability

    PubMed Central

    Biosca, C.; Galimany, R.

    1993-01-01

    This article lists the theoretical criteria that need to be considered to assess the practicability of an automatic analyser. Two essential sets of criteria should be taken into account when selecting an automatic analyser: ‘reliability’ and ‘practicability’. Practicability covers the features that provide information about the suitability of an analyser for specific working conditions. These practicability criteria are classified in this article and include the environment; work organization; versatility and flexibility; safety controls; staff training; and maintenance and operational costs. PMID:18924972

  4. A system for programming experiments and for recording and analyzing data automatically

    PubMed Central

    Herrick, Robert M.; Denelsbeck, John S.

    1963-01-01

    A system designed for use in complex operant conditioning experiments is described. Some of its key features are: (a) plugboards that permit the experimenter to change either from one program to another or from one analysis to another in less than a minute, (b) time-sharing of permanently-wired, electronic logic components, (c) recordings suitable for automatic analyses. Included are flow diagrams of the system and sample logic diagrams for programming experiments and for analyzing data. PMID:14055967

  5. Automatic focusing system of BSST in Antarctic

    NASA Astrophysics Data System (ADS)

    Tang, Peng-Yi; Liu, Jia-Jing; Zhang, Guang-yu; Wang, Jian

    2015-10-01

    Automatic focusing (AF) technology plays an important role in modern astronomical telescopes. Based on the focusing requirements of BSST (Bright Star Survey Telescope) in Antarctica, an AF system was set up. In this design, OpenCV functions are used to find stars, and the area, HFD, or FWHM algorithms can be selected to compute the focus metric. A curve-fitting method is used to find the focus position as the camera moves. The design is suitable for an unattended small telescope.
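
    The curve-fitting step of such an AF system can be sketched with NumPy: simulate a V-shaped HFD curve around an assumed best-focus position, fit a parabola, and take its vertex as the focus position. The linear HFD model, focuser positions, and noise level are all invented for illustration.

```python
import numpy as np

# Simulated focus sweep: star HFD grows roughly linearly with distance
# from best focus (a "V-curve"), plus measurement noise.
best = 1500.0
positions = np.arange(1300, 1701, 50, dtype=float)  # focuser steps
rng = np.random.default_rng(1)
hfd = 2.0 + 0.01 * np.abs(positions - best) + rng.normal(0, 0.05, positions.size)

# Fit a parabola to the focus metric; its vertex estimates best focus.
a, b, c = np.polyfit(positions, hfd, 2)
focus = -b / (2 * a)
```

    In practice the telescope would then drive the focuser to the estimated position and optionally repeat the sweep over a narrower range to refine it.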

  6. Semi-automatic version of the potentiometric titration method for characterization of uranium compounds.

    PubMed

    Cristiano, Bárbara F G; Delgado, José Ubiratan; da Silva, José Wanderley S; de Barros, Pedro D; de Araújo, Radier M S; Dias, Fábio C; Lopes, Ricardo T

    2012-09-01

    The potentiometric titration method was used for characterization of uranium compounds to be applied in intercomparison programs. The method is applied with traceability assured using a potassium dichromate primary standard. A semi-automatic version was developed to reduce the analysis time and the operator variation. The standard uncertainty in determining the total concentration of uranium was around 0.01%, which is suitable for uranium characterization and compatible with those obtained by manual techniques.

  7. An automatic method to detect and track the glottal gap from high speed videoendoscopic images.

    PubMed

    Andrade-Miranda, Gustavo; Godino-Llorente, Juan I; Moro-Velázquez, Laureano; Gómez-García, Jorge Andrés

    2015-10-29

    The image-based analysis of vocal fold vibration plays an important role in the diagnosis of voice disorders. The analysis is based not only on direct observation of the video sequences, but also on an objective characterization of the phonation process by means of features extracted from the recorded images. However, such analysis depends on a prior, accurate identification of the glottal gap, which is the most challenging step for any further automatic assessment of vocal fold vibration. In this work, a complete framework to automatically segment and track the glottal area (or glottal gap) is proposed. The algorithm identifies a region of interest (ROI) that is adapted over time, and combines active contours and the watershed transform for the final delineation of the glottis; an automatic procedure for synthesizing different videokymograms is also proposed. Thanks to the ROI implementation, the technique is robust to camera shifting, and objective tests proved the effectiveness and performance of the approach in the most challenging scenario, namely when closure of the vocal folds is incomplete. The novelty of the proposed algorithm lies in the use of temporal information to identify an adaptive ROI, and the use of watershed merging combined with active contours for glottis delimitation. Additionally, an automatic procedure for synthesizing multiline VKGs by identification of the glottal main axis is developed.

  8. Review of Medical Image Classification using the Adaptive Neuro-Fuzzy Inference System

    PubMed Central

    Hosseini, Monireh Sheikh; Zekri, Maryam

    2012-01-01

    Image classification is an issue that utilizes image processing, pattern recognition and classification methods. Automatic medical image classification is a progressive area in image classification, and it is expected to be more developed in the future. Because of this fact, automatic diagnosis can assist pathologists by providing second opinions and reducing their workload. This paper reviews the application of the adaptive neuro-fuzzy inference system (ANFIS) as a classifier in medical image classification during the past 16 years. ANFIS is a fuzzy inference system (FIS) implemented in the framework of an adaptive fuzzy neural network. It combines the explicit knowledge representation of an FIS with the learning power of artificial neural networks. The objective of ANFIS is to integrate the best features of fuzzy systems and neural networks. A brief comparison with other classifiers, main advantages and drawbacks of this classifier are investigated. PMID:23493054

  9. Autonomous beating rate adaptation in human stem cell-derived cardiomyocytes

    PubMed Central

    Eng, George; Lee, Benjamin W.; Protas, Lev; Gagliardi, Mark; Brown, Kristy; Kass, Robert S.; Keller, Gordon; Robinson, Richard B.; Vunjak-Novakovic, Gordana

    2016-01-01

    The therapeutic success of human stem cell-derived cardiomyocytes critically depends on their ability to respond to and integrate with the surrounding electromechanical environment. Currently, the immaturity of human cardiomyocytes derived from stem cells limits their utility for regenerative medicine and biological research. We hypothesize that biomimetic electrical signals regulate the intrinsic beating properties of cardiomyocytes. Here we show that electrical conditioning of human stem cell-derived cardiomyocytes in three-dimensional culture promotes cardiomyocyte maturation, alters their automaticity and enhances connexin expression. Cardiomyocytes adapt their autonomous beating rate to the frequency at which they were stimulated, an effect mediated by the emergence of a rapidly depolarizing cell population, and the expression of hERG. This rate-adaptive behaviour is long lasting and transferable to the surrounding cardiomyocytes. Thus, electrical conditioning may be used to promote cardiomyocyte maturation and establish their automaticity, with implications for cell-based reduction of arrhythmia during heart regeneration. PMID:26785135

  10. Modelling Errors in Automatic Speech Recognition for Dysarthric Speakers

    NASA Astrophysics Data System (ADS)

    Caballero Morales, Santiago Omar; Cox, Stephen J.

    2009-12-01

    Dysarthria is a motor speech disorder characterized by weakness, paralysis, or poor coordination of the muscles responsible for speech. Although automatic speech recognition (ASR) systems have been developed for disordered speech, factors such as low intelligibility and limited phonemic repertoire decrease speech recognition accuracy, making conventional speaker adaptation algorithms perform poorly on dysarthric speakers. In this work, rather than adapting the acoustic models, we model the errors made by the speaker and attempt to correct them. For this task, two techniques have been developed: (1) a set of "metamodels" that incorporate a model of the speaker's phonetic confusion matrix into the ASR process; (2) a cascade of weighted finite-state transducers at the confusion matrix, word, and language levels. Both techniques attempt to correct the errors made at the phonetic level and make use of a language model to find the best estimate of the correct word sequence. Our experiments show that both techniques outperform standard adaptation techniques.
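
    The confusion-matrix idea underlying both techniques can be sketched with a hypothetical three-phone inventory. All probabilities below are invented, and the paper's actual metamodels and weighted finite-state transducer cascade are considerably more elaborate; this only illustrates recovering the intended phone from the recognized one by Bayes' rule.

```python
# Hypothetical speaker confusion matrix: rows are intended phones,
# columns are recognized phones. This speaker often produces "b"
# when intending "p".
confusion = {
    "p": {"p": 0.35, "b": 0.60, "t": 0.05},
    "b": {"p": 0.10, "b": 0.80, "t": 0.10},
    "t": {"p": 0.05, "b": 0.15, "t": 0.80},
}
prior = {"p": 0.5, "b": 0.1, "t": 0.4}  # hypothetical phone priors

def most_likely_intended(observed):
    # Bayes' rule: argmax over intended of P(observed|intended) * P(intended).
    return max(prior, key=lambda ph: confusion[ph][observed] * prior[ph])
```

    Here an observed "b" is corrected to an intended "p", because the speaker's confusion pattern and the prior make that the more probable explanation; a language model would then rescore the corrected phone sequences at the word level.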

  11. Effortful versus automatic emotional processing in schizophrenia: Insights from a face-vignette task.

    PubMed

    Patrick, Regan E; Rastogi, Anuj; Christensen, Bruce K

    2015-01-01

    Adaptive emotional responding relies on dual automatic and effortful processing streams. Dual-stream models of schizophrenia (SCZ) posit a selective deficit in neural circuits that govern goal-directed, effortful processes versus reactive, automatic processes. This imbalance suggests that when patients are confronted with competing automatic and effortful emotional response cues, they will exhibit diminished effortful responding and intact, possibly elevated, automatic responding compared to controls. This prediction was evaluated using a modified version of the face-vignette task (FVT). Participants viewed emotional faces (automatic response cue) paired with vignettes (effortful response cue) that signalled a different emotion category and were instructed to discriminate the manifest emotion. Patients made less vignette and more face responses than controls. However, the relationship between group and FVT responding was moderated by IQ and reading comprehension ability. These results replicate and extend previous research and provide tentative support for abnormal conflict resolution between automatic and effortful emotional processing predicted by dual-stream models of SCZ.

  12. A consideration of the operation of automatic production machines.

    PubMed

    Hoshi, Toshiro; Sugimoto, Noboru

    2015-01-01

    At worksites, various automatic production machines are in use to release workers from muscular labor or labor in detrimental environments. On the other hand, a large number of industrial accidents have been caused by automatic production machines. In view of this, this paper considers the operation of automatic production machines from the viewpoint of accident prevention, and points out two types of machine operation: operation for which quick performance is required (operation that is not permitted to be delayed), and operation for which composed performance is required (operation that is not permitted to be performed in haste). These operations are distinguished by operation buttons of suitable colors and shapes. This paper shows that these characteristics are evaluated as "asymmetric on the time-axis". Here, in order for workers to accept the risk of automatic production machines, it is preconditioned in general that harm should be sufficiently small or avoidance of harm should be easy. In this connection, this paper shows the possibility of facilitating the acceptance of the risk of automatic production machines by enhancing this asymmetry on the time-axis.

  13. A consideration of the operation of automatic production machines

    PubMed Central

    HOSHI, Toshiro; SUGIMOTO, Noboru

    2015-01-01

    At worksites, various automatic production machines are in use to release workers from muscular labor or labor in detrimental environments. On the other hand, a large number of industrial accidents have been caused by automatic production machines. In view of this, this paper considers the operation of automatic production machines from the viewpoint of accident prevention, and points out two types of machine operation: operation for which quick performance is required (operation that is not permitted to be delayed), and operation for which composed performance is required (operation that is not permitted to be performed in haste). These operations are distinguished by operation buttons of suitable colors and shapes. This paper shows that these characteristics are evaluated as “asymmetric on the time-axis”. Here, in order for workers to accept the risk of automatic production machines, it is preconditioned in general that harm should be sufficiently small or avoidance of harm should be easy. In this connection, this paper shows the possibility of facilitating the acceptance of the risk of automatic production machines by enhancing this asymmetry on the time-axis. PMID:25739898

  14. Adaptation of aeronautical engines to high altitude flying

    NASA Technical Reports Server (NTRS)

    Kutzbach, K

    1923-01-01

    Issues and techniques relative to the adaptation of aircraft engines to high altitude flight are discussed. Covered here are the limits of engine output, modifications and characteristics of high altitude engines, the influence of air density on the proportions of fuel mixtures, methods of varying the proportions of fuel mixtures, the automatic prevention of fuel waste, and the design and application of air pressure regulators to high altitude flying. Summary: 1. Limits of engine output. 2. High altitude engines. 3. Influence of air density on proportions of mixture. 4. Methods of varying proportions of mixture. 5. Automatic prevention of fuel waste. 6. Design and application of air pressure regulators to high altitude flying.

  15. WE-A-17A-06: Evaluation of An Automatic Interstitial Catheter Digitization Algorithm That Reduces Treatment Planning Time and Provides a Means for Adaptive Re-Planning in HDR Brachytherapy of Gynecologic Cancers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dise, J; Liang, X; Lin, L

    Purpose: To evaluate an automatic interstitial catheter digitization algorithm that reduces treatment planning time and provides a means for adaptive re-planning in HDR brachytherapy of gynecologic cancers. Methods: The semi-automatic catheter digitization tool utilizes a region growing algorithm in conjunction with a spline model of the catheters. The CT images were first pre-processed to enhance the contrast between the catheters and soft tissue. Several seed locations were selected in each catheter for the region growing algorithm. The spline model of the catheters assisted the region growing by preventing inter-catheter cross-over caused by air or metal artifacts. Source dwell positions from day one CT scans were applied to subsequent CTs and forward calculated using the automatically digitized catheter positions. This method was applied to 10 patients who had received HDR interstitial brachytherapy on an IRB-approved image-guided radiation therapy protocol. The prescribed dose was 18.75 or 20 Gy delivered in 5 fractions, twice daily, over 3 consecutive days. Dosimetric comparisons were made between automatic and manual digitization on day two CTs. Results: The region growing algorithm, assisted by the spline model of the catheters, was able to digitize all catheters. The difference between automatically and manually digitized positions was 0.8±0.3 mm. The digitization time ranged from 34 to 43 minutes, with a mean of 37 minutes. The bulk of the time was spent on manual selection of initial seed positions and spline parameter adjustments. There was no significant difference in dosimetric parameters between the automatically and manually digitized plans. D90% to the CTV was 91.5±4.4% for manual digitization versus 91.4±4.4% for automatic digitization (p=0.56). Conclusion: A region growing algorithm was developed to semi-automatically digitize interstitial catheters in HDR brachytherapy using the Syed-Neblett template. This automatic digitization tool was shown to be accurate compared to manual digitization.
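
    The region growing step at the core of such a digitization tool can be sketched in 2-D. The real algorithm works on 3-D CT volumes with spline-model constraints; the toy image, seed location, and intensity threshold below are invented for illustration.

```python
from collections import deque

def region_grow(image, seed, threshold):
    # Grow a 4-connected region from `seed`, accepting neighbours whose
    # intensity is within `threshold` of the seed intensity (BFS).
    rows, cols = len(image), len(image[0])
    sr, sc = seed
    region, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in region
                    and abs(image[nr][nc] - image[sr][sc]) <= threshold):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region

# Bright catheter track (value 9) running through soft tissue (value 1).
img = [
    [1, 9, 1, 1],
    [1, 9, 1, 1],
    [1, 9, 9, 1],
    [1, 1, 9, 1],
]
track = region_grow(img, (0, 1), 2)  # grows along the bright track only
```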

  16. Automating the application of smart materials for protein crystallization.

    PubMed

    Khurshid, Sahir; Govada, Lata; El-Sharif, Hazim F; Reddy, Subrayal M; Chayen, Naomi E

    2015-03-01

    The fabrication and validation of the first semi-liquid nonprotein nucleating agent to be administered automatically to crystallization trials is reported. This research builds upon prior demonstration of the suitability of molecularly imprinted polymers (MIPs; known as 'smart materials') for inducing protein crystal growth. Modified MIPs of altered texture suitable for high-throughput trials are demonstrated to improve crystal quality and to increase the probability of success when screening for suitable crystallization conditions. The application of these materials is simple, time-efficient and will provide a potent tool for structural biologists embarking on crystallization trials.

  17. Automatic humidification system to support the assessment of food drying processes

    NASA Astrophysics Data System (ADS)

    Ortiz Hernández, B. D.; Carreño Olejua, A. R.; Castellanos Olarte, J. M.

    2016-07-01

    This work shows the main features of an automatic humidification system that provides drying air matching the environmental conditions of different climate zones. This conditioned air is then used to assess the drying process of different agro-industrial products at the Automation and Control for Agro-industrial Processes Laboratory of the Pontifical Bolivarian University of Bucaramanga, Colombia. The automatic system allows creating and improving control strategies to supply drying air under specified conditions of temperature and humidity. The development of automatic routines to control and acquire real-time data was made possible by the use of robust control systems and suitable instrumentation. The signals are read and directed to a controller memory, where they are scaled and transferred to a memory unit. Using the IP address, it is possible to access the data to perform supervision tasks. One important characteristic of this automatic system is the Dynamic Data Exchange (DDE) server, which allows direct communication between the control unit and the computer used to build experimental curves.

  18. Learning-based image preprocessing for robust computer-aided detection

    NASA Astrophysics Data System (ADS)

    Raghupathi, Laks; Devarakota, Pandu R.; Wolf, Matthias

    2013-03-01

    Recent studies have shown that low dose computed tomography (LDCT) can be an effective screening tool to reduce lung cancer mortality. Computer-aided detection (CAD) would be a beneficial second reader for radiologists in such cases. Studies demonstrate that while iterative reconstructions (IR) improve LDCT diagnostic quality, they degrade CAD performance significantly (increased false positives) when applied directly. For improving CAD performance, solutions such as retraining with newer data or applying a standard preprocessing technique may not suffice due to the high prevalence of CT scanners and non-uniform acquisition protocols. Here, we present a learning-based framework that can adaptively transform a wide variety of input data to boost an existing CAD's performance. This not only enhances robustness but also applicability in clinical workflows. Our solution consists of automatically applying a suitable pre-processing filter to the given image based on its characteristics. This requires the preparation of ground truth (GT), i.e., choosing for each image the filter that results in improved CAD performance. Accordingly, we propose an efficient consolidation process with a novel metric. Using key anatomical landmarks, we then derive consistent feature descriptors for a classification scheme that uses a priority mechanism to automatically choose an optimal preprocessing filter. We demonstrate CAD prototype performance improvement using hospital-scale datasets acquired from North America, Europe, and Asia. Though we demonstrated our results for a lung nodule CAD, this scheme is straightforward to extend to other post-processing tools dedicated to other organs and modalities.

  19. Binding of motion and colour is early and automatic.

    PubMed

    Blaser, Erik; Papathomas, Thomas; Vidnyánszky, Zoltán

    2005-04-01

    At what stages of the human visual hierarchy different features are bound together, and whether this binding requires attention, is still highly debated. We used a colour-contingent motion after-effect (CCMAE) to study the binding of colour and motion signals. The logic of our approach was as follows: if CCMAEs can be evoked by targeted adaptation of early motion processing stages, without allowing for feedback from higher motion integration stages, then this would support our hypothesis that colour and motion are bound automatically on the basis of spatiotemporally local information. Our results show for the first time that CCMAEs can be evoked by adaptation to a locally paired opposite-motion dot display, a stimulus that, importantly, is known to trigger direction-specific responses in the primary visual cortex yet results in strong inhibition of the directional responses in area MT of macaques as well as in area MT+ in humans and, indeed, is perceived only as motionless flicker. The magnitude of the CCMAE in the locally paired condition was not significantly different from control conditions where the different directions were spatiotemporally separated (i.e. not locally paired) and therefore perceived as two moving fields. These findings provide evidence that adaptation at an early, local motion stage, and only adaptation at this stage, underlies this CCMAE, which in turn implies that spatiotemporally coincident colour and motion signals are bound automatically, most probably as early as cortical area V1, even when the association between colour and motion is perceptually inaccessible.

  20. Phase coherence adaptive processor for automatic signal detection and identification

    NASA Astrophysics Data System (ADS)

    Wagstaff, Ronald A.

    2006-05-01

    A continuously adapting acoustic signal processor with an automatic detection/decision aid is presented. Its purpose is to preserve the signals of tactical interest, and filter out other signals and noise. It utilizes single sensor or beamformed spectral data and transforms the signal and noise phase angles into "aligned phase angles" (APA). The APA increase the phase temporal coherence of signals and leave the noise incoherent. Coherence thresholds are set, which are representative of the type of source "threat vehicle" and the geographic area or volume in which it is operating. These thresholds separate signals, based on the "quality" of their APA coherence. An example is presented in which signals from a submerged source in the ocean are preserved, while clutter signals from ships and noise are entirely eliminated. Furthermore, the "signals of interest" were identified by the processor's automatic detection aid. Similar performance is expected for air and ground vehicles. The processor's equations are formulated in such a manner that they can be tuned to eliminate noise and exploit signal, based on the "quality" of their APA temporal coherence. The mathematical formulation for this processor is presented, including the method by which the processor continuously self-adapts. Results show nearly complete elimination of noise, with only the selected category of signals remaining, and accompanying enhancements in spectral and spatial resolution. In most cases, the concept of signal-to-noise ratio loses significance, and "adaptive automatic detection/decision aid" is more relevant.
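
    The APA transform itself is not specified in this record, but the underlying idea, screening signals by the temporal coherence of their spectral phase, can be sketched with NumPy. A steady tone keeps an aligned phase from segment to segment while noise does not; the segment lengths, frequencies, and thresholds below are invented for illustration.

```python
import numpy as np

def phase_coherence(x, nseg, bin_k):
    # Split the series into nseg segments, take the FFT phase in one
    # frequency bin per segment, and measure how tightly those phases
    # cluster (mean resultant length: 0 = incoherent, 1 = coherent).
    segs = np.reshape(x[: len(x) // nseg * nseg], (nseg, -1))
    phases = np.angle(np.fft.rfft(segs, axis=1)[:, bin_k])
    return abs(np.exp(1j * phases).mean())

rng = np.random.default_rng(0)
n, nseg = 4096, 16                        # 16 segments of 256 samples
t = np.arange(n)
tone = np.sin(2 * np.pi * 8 * t / 256)    # lands exactly in FFT bin 8
noise = rng.normal(0, 1, n)
```

    A coherence threshold on this statistic then passes the tone (even buried in noise) while rejecting noise-only bins, which is the screening behaviour the processor exploits.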

  1. Effects of single cortisol administrations on human affect reviewed: Coping with stress through adaptive regulation of automatic cognitive processing.

    PubMed

    Putman, Peter; Roelofs, Karin

    2011-05-01

    The human stress hormone cortisol may facilitate effective coping after psychological stress. In apparent agreement, administration of cortisol has been demonstrated to reduce fear in response to stressors. For anxious patients with phobias or posttraumatic stress disorder this has been ascribed to hypothetical inhibition of retrieval of traumatic memories. However, such stress-protective effects may also work via adaptive regulation of early cognitive processing of threatening information from the environment. This paper selectively reviews the available literature on effects of single cortisol administrations on affect and early cognitive processing of affectively significant information. The resulting working hypothesis is that the immediate effects of a high concentration of cortisol may facilitate stress-coping via inhibition of automatic processing of goal-irrelevant threatening information and through increased automatic approach-avoidance responses in early emotional processing. Limitations in the existing literature and suggestions for future directions are briefly discussed. Copyright © 2010 Elsevier Ltd. All rights reserved.

  2. Automatic knee cartilage delineation using inheritable segmentation

    NASA Astrophysics Data System (ADS)

    Dries, Sebastian P. M.; Pekar, Vladimir; Bystrov, Daniel; Heese, Harald S.; Blaffert, Thomas; Bos, Clemens; van Muiswinkel, Arianne M. C.

    2008-03-01

    We present a fully automatic method for segmentation of knee joint cartilage from fat-suppressed MRI. The method first applies 3-D model-based segmentation technology, which reliably segments the femur, patella, and tibia by iterative adaptation of the model according to image gradients. Thin plate spline interpolation is used in the next step to position deformable cartilage models for each of the three bones with reference to the segmented bone models. After initialization, the cartilage models are fine-adjusted by automatic iterative adaptation to the image data based on gray value gradients. The method has been validated on a collection of 8 (3 left, 5 right) fat-suppressed datasets and demonstrated a sensitivity of 83+/-6% compared to manual segmentation on a per-voxel basis as the primary endpoint. Gross cartilage volume measurement yielded an average error of 9+/-7% as the secondary endpoint. Because cartilage is a thin structure, even small deviations in distance result in large errors on a per-voxel basis, rendering the primary endpoint a hard criterion.

  3. An automatic holographic adaptive phoropter

    NASA Astrophysics Data System (ADS)

    Amirsolaimani, Babak; Peyghambarian, N.; Schwiegerling, Jim; Bablumyan, Arkady; Savidis, Nickolaos; Peyman, Gholam

    2017-08-01

    Phoropters are the most common instrument used to detect refractive errors. During a refractive exam, lenses are flipped in front of the patient, who looks at the eye chart and tries to read the symbols. The procedure is fully dependent on the cooperation of the patient to read the eye chart, provides only a subjective measurement of visual acuity, and can at best provide a rough estimate of the patient's vision. Phoropters require a skilled examiner, which makes them difficult to use for mass screenings and for testing young children and the elderly. We have developed a simplified, lightweight automatic phoropter that can measure the optical error of the eye objectively without requiring the patient's input. The automatic holographic adaptive phoropter is based on a Shack-Hartmann wavefront sensor and three computer-controlled fluidic lenses. The fluidic lens system is designed to provide power and astigmatic corrections over a large range in less than 20 seconds, without the need for verbal feedback from the patient.

  4. Adaptive and automatic red blood cell counting method based on microscopic hyperspectral imaging technology

    NASA Astrophysics Data System (ADS)

    Liu, Xi; Zhou, Mei; Qiu, Song; Sun, Li; Liu, Hongying; Li, Qingli; Wang, Yiting

    2017-12-01

    Red blood cell counting, as a routine examination, plays an important role in medical diagnoses. Although automated hematology analyzers are widely used, manual microscopic examination by a hematologist or pathologist is still unavoidable, which is time-consuming and error-prone. This paper proposes a fully automatic red blood cell counting method which is based on microscopic hyperspectral imaging of blood smears and combines spatial and spectral information to achieve high precision. The acquired hyperspectral image data of the blood smear in the visible and near-infrared spectral range are firstly preprocessed, and then a quadratic blind linear unmixing algorithm is used to get endmember abundance images. Based on mathematical morphological operations and an adaptive Otsu’s method, a binarization process is performed on the abundance images. Finally, the connected component labeling algorithm with magnification-based parameter setting is applied to automatically select the binary images of red blood cell cytoplasm. Experimental results show that the proposed method performs well and has potential for clinical applications.

  5. Gradient maintenance: A new algorithm for fast online replanning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahunbay, Ergun E., E-mail: eahunbay@mcw.edu; Li, X. Allen

    2015-06-15

    Purpose: Clinical use of online adaptive replanning has been hampered by the impractically long time required to delineate volumes based on the image of the day. The authors propose a new replanning algorithm, named gradient maintenance (GM), which does not require the delineation of organs at risk (OARs), and can enhance automation, drastically reducing planning time and improving consistency and throughput of online replanning. Methods: The proposed GM algorithm is based on the hypothesis that if the dose gradient toward each OAR in the daily anatomy can be maintained the same as that in the original plan, the intended quality of the original plan would be preserved in the adaptive plan. The algorithm requires a series of partial concentric rings (PCRs) to be automatically generated around the target toward each OAR on the planning and daily images. The PCRs are used in the daily optimization objective function. The PCR dose constraints are generated with dose–volume data extracted from the original plan. To demonstrate this idea, GM plans generated using daily images acquired with an in-room CT were compared to regular optimization and image guided radiation therapy repositioning plans for representative prostate and pancreatic cancer cases. Results: The adaptive replanning using the GM algorithm, requiring only the target contour from the CT of the day, can be completed within 5 min without using high-power hardware. The obtained adaptive plans were almost as good as the regular optimization plans and were better than the repositioning plans for the cases studied. Conclusions: The newly proposed GM replanning algorithm, requiring only target delineation, not full delineation of OARs, substantially increased planning speed for online adaptive replanning. The preliminary results indicate that the GM algorithm may be a solution to improve the ability for automation and may be especially suitable for sites with small-to-medium size targets surrounded by several critical structures.
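    The PCR construction is only described qualitatively above, so the following is a speculative 2-D sketch of one way such rings could be generated: bands of distance from the target surface, kept only in the angular sector facing a given OAR. The grid, target and OAR positions, ring width, and sector angle are all assumptions for illustration, not the authors' actual construction.

```python
import numpy as np

# Illustrative 2-D geometry: a circular target and an OAR "direction".
grid = np.indices((64, 64)).transpose(1, 2, 0).astype(float)
target_center = np.array([32.0, 32.0])
target_radius = 6.0
oar_center = np.array([32.0, 52.0])

# Signed distance outside the target surface.
d = np.linalg.norm(grid - target_center, axis=-1) - target_radius

# Angular sector of pixels facing the OAR (here a 90-degree cone).
vec = grid - target_center
to_oar = oar_center - target_center
cosang = (vec @ to_oar) / (np.linalg.norm(vec, axis=-1) * np.linalg.norm(to_oar) + 1e-9)
facing = cosang > np.cos(np.deg2rad(45))

# Partial concentric rings: disjoint distance bands within the sector.
ring_width = 3.0
n_rings = 4
pcrs = []
for k in range(n_rings):
    band = (d > k * ring_width) & (d <= (k + 1) * ring_width)
    pcrs.append(band & facing)
```

    In the GM scheme each such ring would then receive a dose constraint taken from the original plan's dose falloff at the corresponding distance, so that optimizing against the rings maintains the gradient toward the OAR.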

  6. AWARE: Adaptive Software Monitoring and Dynamic Reconfiguration for Critical Infrastructure Protection

    DTIC Science & Technology

    2015-04-29

    …in which we applied these adaptation patterns to an adaptive news web server intended to tolerate extremely heavy, unexpected loads. To address… collection of existing models used as benchmarks for OO-based refactoring and an existing web-based repository called REMODD to provide users with model… invariant properties. Specifically, we developed Avida-MDE (based on the Avida digital evolution platform) to support the automatic generation of software…

  7. Automated monitoring of recovered water quality

    NASA Technical Reports Server (NTRS)

    Misselhorn, J. E.; Hartung, W. H.; Witz, S. W.

    1974-01-01

    A laboratory prototype water quality monitoring system provides automatic online monitoring of the chemical, physical, and bacteriological properties of recovered water and signals malfunctions in the water recovery system. The monitor incorporates commercially available sensors, suitably modified, wherever possible.

  8. Development of Advanced Verification and Validation Procedures and Tools for the Certification of Learning Systems in Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Jacklin, Stephen; Schumann, Johann; Gupta, Pramod; Richard, Michael; Guenther, Kurt; Soares, Fola

    2005-01-01

    Adaptive control technologies that incorporate learning algorithms have been proposed to enable automatic flight control and vehicle recovery, autonomous flight, and to maintain vehicle performance in the face of unknown, changing, or poorly defined operating environments. In order for adaptive control systems to be used in safety-critical aerospace applications, they must be proven to be highly safe and reliable. Rigorous methods for adaptive software verification and validation must be developed to ensure that control system software failures will not occur. Of central importance in this regard is the need to establish reliable methods that guarantee convergent learning, rapid convergence (learning) rate, and algorithm stability. This paper presents the major problems of adaptive control systems that use learning to improve performance. The paper then presents the major procedures and tools presently developed or currently being developed to enable the verification, validation, and ultimate certification of these adaptive control systems. These technologies include the application of automated program analysis methods, techniques to improve the learning process, analytical methods to verify stability, methods to automatically synthesize code, simulation and test methods, and tools to provide on-line software assurance.

  9. Proposals for best-quality immunohistochemical staining of paraffin-embedded brain tissue slides in forensics.

    PubMed

    Trautz, Florian; Dreßler, Jan; Stassart, Ruth; Müller, Wolf; Ondruschka, Benjamin

    2018-01-03

    Immunohistochemistry (IHC) has become an integral part of forensic histopathology over the last decades. However, the underlying methods for IHC vary greatly depending on the institution, creating a lack of comparability. The aim of this study was to assess the optimal approach for different technical aspects of IHC, in order to improve and standardize this procedure. Therefore, qualitative results from manual and automatic IHC staining of brain samples were compared, as well as potential differences in the suitability of common IHC glass slides. Further, possibilities of image digitalization and connected issues were investigated. In our study, automatic staining showed more consistent staining results compared to manual staining procedures. Digitalization and digital post-processing considerably facilitated direct analysis and reproducibility assessment. No differences were found between commercially available microscopic glass slides regarding their suitability for IHC brain research, but a certain rate of tissue loss should be expected during the staining process.

  10. Parametric diagnosis of the adaptive gas path in the automatic control system of the aircraft engine

    NASA Astrophysics Data System (ADS)

    Kuznetsova, T. A.

    2017-01-01

    The paper presents an adaptive multimode mathematical model of the gas-turbine aircraft engine (GTE) embedded in the automatic control system (ACS). The mathematical model is based on the throttle performances and is characterized by high accuracy of engine parameter identification in stationary and dynamic modes. The proposed on-board engine model is a linearized low-level state-space simulation. The engine health is identified through the influence coefficient matrix, which is determined by the GTE high-level mathematical model based on measurements of gas-dynamic parameters. In the automatic control algorithm, the sum of squared deviations between the parameters of the mathematical model and the real GTE is minimized. The proposed mathematical model is effectively used for detecting gas path defects in on-line GTE health monitoring. The accuracy of the on-board mathematical model embedded in the ACS determines the quality of adaptive control and the reliability of the engine. To improve identification accuracy and ensure stability, the Monte Carlo numerical method was used. A parametric diagnostic algorithm based on the LPτ-sequence was developed and tested. Analysis of the results suggests that the developed algorithms achieve higher identification accuracy and reliability than similar models used in practice.
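    The identification step, minimizing the sum of squared deviations between model parameters and measured gas-dynamic parameters, can be sketched as an ordinary least-squares fit of a linearized model. The regressor matrix, coefficient values, and noise level below are illustrative stand-ins, not the GTE model itself.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical "health" coefficients of a linearized engine model
# y = H @ theta, to be recovered from noisy measurements.
true_theta = np.array([1.2, -0.4, 0.8])
H = rng.normal(size=(50, 3))                   # regressors from measured parameters
y = H @ true_theta + rng.normal(0, 0.01, 50)   # noisy gas-dynamic measurements

# Least squares: minimize ||H @ theta - y||^2 over theta.
theta_hat, *_ = np.linalg.lstsq(H, y, rcond=None)
residual = np.linalg.norm(H @ theta_hat - y)
```

    In the paper this fit is additionally stabilized with Monte Carlo sampling over an LPτ-sequence; the plain least-squares step above is only the core of that procedure.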

  11. Online automatic tuning and control for fed-batch cultivation

    PubMed Central

    van Straten, Gerrit; van der Pol, Leo A.; van Boxtel, Anton J. B.

    2007-01-01

    Performance of controllers applied in biotechnological production is often below expectation. Online automatic tuning has the capability to improve control performance by adjusting control parameters. This work presents automatic tuning approaches for model reference specific growth rate control during fed-batch cultivation. The approaches are direct methods that use the error between observed specific growth rate and its set point; systematic perturbations of the cultivation are not necessary. Two automatic tuning methods proved to be efficient, in which the adaptation rate is based on a combination of the error, squared error and integral error. These methods are relatively simple and robust against disturbances, parameter uncertainties, and initialization errors. Application of the specific growth rate controller yields a stable system. The controller and automatic tuning methods are qualified by simulations and laboratory experiments with Bordetella pertussis. PMID:18157554
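    A minimal sketch of the tuning idea above, an adaptation rate built from a combination of the error, squared error, and integral error, is shown below for a first-order surrogate loop. The plant model, gain weights, and set point are assumed for illustration; the paper's model reference controller for specific growth rate is more elaborate.

```python
import numpy as np

dt, tau, setpoint = 0.1, 5.0, 0.1
g1, g2, g3 = 0.05, 0.5, 0.01   # assumed tuning-law weights

y, K, int_e = 0.0, 0.1, 0.0    # output, adaptive gain, integral error
errors = []
for _ in range(2000):
    e = setpoint - y
    int_e += e * dt
    u = K * e                          # proportional control action
    y += dt * (u - y) / tau            # first-order plant response
    # adaptation rate combines error, squared error and integral error
    K += dt * (g1 * abs(e) + g2 * e**2 + g3 * abs(int_e))
    errors.append(abs(e))

early, late = np.mean(errors[:200]), np.mean(errors[-200:])
```

    Because every term in the update is non-negative while a tracking error persists, the gain keeps rising until the error shrinks, which is the qualitative behaviour the abstract reports.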

  12. Adaptive inferential sensors based on evolving fuzzy models.

    PubMed

    Angelov, Plamen; Kordon, Arthur

    2010-04-01

    A new technique for the design and use of inferential sensors in the process industry is proposed in this paper, based on the recently introduced concept of evolving fuzzy models (EFMs). They address the challenge that the modern process industry faces today, namely, to develop adaptive and self-calibrating online inferential sensors that reduce maintenance costs while keeping high precision and interpretability/transparency. The proposed new methodology makes it possible for inferential sensors to recalibrate automatically, which significantly reduces the life-cycle effort for their maintenance. This is achieved by the adaptive and flexible open-structure EFM used. The novelty of this paper lies in the following: (1) the overall concept of inferential sensors with an evolving and self-developing structure learned from data streams; (2) the new methodology for online automatic selection of the input variables that are most relevant for the prediction; (3) the technique to automatically detect a shift in the data pattern using the age of the clusters (and fuzzy rules); (4) the online standardization technique used by the learning procedure of the evolving model; and (5) the application of this innovative approach to several real-life industrial processes from the chemical industry (evolving inferential sensors, namely, eSensors, were used for predicting the chemical properties of different products in The Dow Chemical Company, Freeport, TX). It should be noted, however, that the methodology and conclusions of this paper are valid for the broader area of chemical and process industries in general. The results demonstrate that interpretable inferential sensors with a simple structure can be designed automatically from the data stream in real time, and predict various process variables of interest.
The proposed approach can be used as a basis for the development of a new generation of adaptive and evolving inferential sensors that can address the challenges of the modern advanced process industry.

  13. Features: Real-Time Adaptive Feature and Document Learning for Web Search.

    ERIC Educational Resources Information Center

    Chen, Zhixiang; Meng, Xiannong; Fowler, Richard H.; Zhu, Binhai

    2001-01-01

    Describes Features, an intelligent Web search engine that is able to perform real-time adaptive feature (i.e., keyword) and document learning. Explains how Features learns from users' document relevance feedback and automatically extracts and suggests indexing keywords relevant to a search query, and learns from users' keyword relevance feedback…

  14. Pilot Clinical Application of an Adaptive Robotic System for Young Children with Autism

    ERIC Educational Resources Information Center

    Bekele, Esubalew; Crittendon, Julie A.; Swanson, Amy; Sarkar, Nilanjan; Warren, Zachary E.

    2014-01-01

    It has been argued that clinical applications of advanced technology may hold promise for addressing impairments associated with autism spectrum disorders. This pilot feasibility study evaluated the application of a novel adaptive robot-mediated system capable of both administering and automatically adjusting joint attention prompts to a small…

  15. Adaptable Learning Assistant for Item Bank Management

    ERIC Educational Resources Information Center

    Nuntiyagul, Atorn; Naruedomkul, Kanlaya; Cercone, Nick; Wongsawang, Damras

    2008-01-01

    We present PKIP, an adaptable learning assistant tool for managing question items in item banks. PKIP is not only able to automatically assist educational users to categorize the question items into predefined categories by their contents but also to correctly retrieve the items by specifying the category and/or the difficulty level. PKIP adapts…

  16. Adaptive System Modeling for Spacecraft Simulation

    NASA Technical Reports Server (NTRS)

    Thomas, Justin

    2011-01-01

    This invention introduces a methodology and associated software tools for automatically learning spacecraft system models without any assumptions regarding system behavior. Data stream mining techniques were used to learn models for critical portions of the International Space Station (ISS) Electrical Power System (EPS). Evaluation on historical ISS telemetry data shows that adaptive system modeling reduces simulation error anywhere from 50 to 90 percent over existing approaches. The purpose of the methodology is to outline how someone can create accurate system models from sensor (telemetry) data. The purpose of the software is to support the methodology. The software provides analysis tools to design the adaptive models. The software also provides the algorithms to initially build system models and continuously update them from the latest streaming sensor data. The main strengths are as follows: it creates accurate spacecraft system models without in-depth system knowledge or any assumptions about system behavior; automatically updates and calibrates system models using the latest streaming sensor data; creates device-specific models that capture the exact behavior of devices of the same type; adapts to evolving systems; and can reduce computational complexity (faster simulations).
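    The continuous update of device models from streaming telemetry can be illustrated with a standard online estimator. The sketch below uses recursive least squares on a linear surrogate model; the actual data stream mining techniques applied to the ISS EPS are not specified in this summary, so the model form and data here are assumed stand-ins.

```python
import numpy as np

rng = np.random.default_rng(3)
w_true = np.array([2.0, -1.0])   # hidden device behaviour to be learned

P = np.eye(2) * 1e3              # parameter covariance (large = uninformed)
w = np.zeros(2)                  # current model estimate
for _ in range(500):
    x = rng.normal(size=2)                    # latest telemetry features
    y = x @ w_true + rng.normal(0, 0.05)      # latest sensor reading
    # standard recursive least squares update, one sample at a time
    k = P @ x / (1.0 + x @ P @ x)
    w = w + k * (y - x @ w)
    P = P - np.outer(k, x @ P)
```

    Because the estimate is refreshed per sample rather than refit in batch, the model tracks a device as it drifts, which matches the "automatically updates/calibrates" strength claimed above.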

  17. Second-order sliding mode controller with model reference adaptation for automatic train operation

    NASA Astrophysics Data System (ADS)

    Ganesan, M.; Ezhilarasi, D.; Benni, Jijo

    2017-11-01

    In this paper, a new approach to model reference based adaptive second-order sliding mode control together with adaptive state feedback is presented to control the longitudinal dynamic motion of a high-speed train for automatic train operation, with the objective of minimal-jerk travel for the passengers. The nonlinear dynamic model for the longitudinal motion of the train, comprising locomotive and coach subsystems, is constructed using a multiple point-mass model by considering the forces acting on the vehicle. An adaptation scheme using the Lyapunov criterion is derived to tune the controller gains by considering a linear, stable reference model that ensures the stability of the system in closed loop. The effectiveness of the controller's tracking performance is tested under uncertain passenger load, variations of the coupler-draft gear parameters and propulsion resistance coefficients, and environmental disturbances due to side wind and wet rail conditions. The results demonstrate improved tracking performance of the proposed control scheme, with the least jerk under maximum parameter uncertainties, when compared to constant-gain second-order sliding mode control.
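    A reduced illustration of Lyapunov-based model reference adaptation is given below for a first-order surrogate of the longitudinal dynamics. The actual controller is a second-order sliding mode design on a multiple point-mass model; the plant constants, reference model, reference command, and adaptation gain here are all assumptions.

```python
import numpy as np

dt = 0.01
a, b = 0.5, 1.0          # assumed unknown plant: dx/dt = a*x + b*u
a_m, b_m = -2.0, 2.0     # stable reference model: dxm/dt = a_m*xm + b_m*r
gamma = 2.0              # adaptation gain (assumed)

x = xm = 0.0
th1 = th2 = 0.0          # adaptive feedforward / feedback gains
errs = []
for k in range(8000):
    r = 1.0 if (k * dt) % 4 < 2 else -1.0   # square-wave speed command
    u = th1 * r + th2 * x
    e = x - xm                               # model-following error
    # Lyapunov adaptation laws (sign(b) = +1 assumed known):
    # V = e^2/2 + (b/2/gamma)(th1~^2 + th2~^2) gives dV/dt = a_m*e^2 <= 0
    th1 -= gamma * e * r * dt
    th2 -= gamma * e * x * dt
    x += dt * (a * x + b * u)
    xm += dt * (a_m * xm + b_m * r)
    errs.append(abs(e))

early, late = np.mean(errs[:500]), np.mean(errs[-500:])
```

    The same Lyapunov argument, with a sliding variable in place of the plain error, underlies the gain tuning the abstract describes.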

  18. Uncovering the true nature of deformation microstructures using 3D analysis methods

    NASA Astrophysics Data System (ADS)

    Ferry, M.; Quadir, M. Z.; Afrin, N.; Xu, W.; Loeb, A.; Soe, B.; McMahon, C.; George, C.; Bassman, L.

    2015-08-01

    Three-dimensional electron backscatter diffraction (3D EBSD) has emerged as a powerful technique for generating 3D crystallographic information in reasonably large volumes of a microstructure. The technique uses a focused ion beam (FIB) as a high precision serial sectioning device for generating consecutive ion milled surfaces of a material, with each milled surface subsequently mapped by EBSD. The successive EBSD maps are combined using a suitable post-processing method to generate a crystallographic volume of the microstructure. The first part of this paper shows the usefulness of 3D EBSD for understanding the origin of various structural features associated with the plastic deformation of metals. The second part describes a new method for automatically identifying the various types of low and high angle boundaries found in deformed and annealed metals, particularly those associated with grains exhibiting subtle and gradual variations in orientation. We have adapted a 2D image segmentation technique, fast multiscale clustering, to 3D EBSD data using a novel variance function to accommodate quaternion data. This adaptation is capable of segmenting based on subtle and gradual variation as well as on sharp boundaries within the data. We demonstrate the excellent capabilities of this technique with application to 3D EBSD data sets generated from a range of cold rolled and annealed metals described in the paper.

  19. Efficient seeding and defragmentation of curvature streamlines for colonic polyp detection

    NASA Astrophysics Data System (ADS)

    Zhao, Lingxiao; Botha, Charl P.; Truyen, Roel; Vos, Frans M.; Post, Frits H.

    2008-03-01

    Many computer aided diagnosis (CAD) schemes have been developed for colon cancer detection using Virtual Colonoscopy (VC). In earlier work, we developed an automatic polyp detection method integrating flow visualization techniques, that forms part of the CAD functionality of an existing Virtual Colonoscopy pipeline. Curvature streamlines were used to characterize polyp surface shape. Features derived from curvature streamlines correlated highly with true polyp detections. During testing with a large number of patient data sets, we found that the correlation between streamline features and true polyps could be affected by noise and our streamline generation technique. The seeding and spacing constraints and CT noise could lead to streamline fragmentation, which reduced the discriminating power of our streamline features. In this paper, we present two major improvements of our curvature streamline generation. First, we adapted our streamline seeding strategy to the local surface properties and made the streamline generation faster. It generates a significantly smaller number of seeds but still results in a comparable and suitable streamline distribution. Second, based on our observation that longer streamlines are better surface shape descriptors, we improved our streamline tracing algorithm to produce longer streamlines. Our improved techniques are more efficient and also guide the streamline geometry to correspond better to colonic surface shape. These two adaptations support a robust and high correlation between our streamline features and true positive detections and lead to better polyp detection results.

  20. Examining Myddosome Formation by Luminescence-Based Mammalian Interactome Mapping (LUMIER).

    PubMed

    Wolz, Olaf-Oliver; Koegl, Manfred; Weber, Alexander N R

    2018-01-01

    Recent structural, biochemical, and functional studies have led to the notion that many of the post-receptor signaling complexes in innate immunity have a multimeric, multi-protein architecture whose hierarchical assembly is vital for function. The Myddosome is a post-receptor complex in the cytoplasmic signaling of Toll-like receptors (TLR) and the Interleukin-1 receptor (IL-1R), involving the proteins MyD88, IL-1R-associated kinase 4 (IRAK4), and IRAK2. Its importance is strikingly illustrated by the fact that rare germline mutations in MYD88 causing high susceptibility to infections are characterized by failure to assemble Myddosomes; conversely, gain-of-function MYD88 mutations leading to oncogenic hyperactivation of NF-κB show increased Myddosome formation. Reliable methods to probe Myddosome formation experimentally are therefore vital to further study the properties of this important post-receptor complex and its role in innate immunity, such as its regulation by posttranslational modification. Compared to structural and biochemical analyses, luminescence-based mammalian interactome mapping (LUMIER) is a straightforward, automatable, quantifiable, and versatile technique to study protein-protein interactions in a physiologically relevant context. We adapted LUMIER for Myddosome analysis and provide here a basic background of this technique, suitable experimental protocols, and its potential for medium-throughput screening. The principles presented herein can be adapted to other signaling pathways.

  1. Font adaptive word indexing of modern printed documents.

    PubMed

    Marinai, Simone; Marino, Emanuele; Soda, Giovanni

    2006-08-01

    We propose an approach for the word-level indexing of modern printed documents which are difficult to recognize using current OCR engines. By means of word-level indexing, it is possible to retrieve the position of words in a document, enabling queries involving proximity of terms. Web search engines implement this kind of indexing, allowing users to retrieve Web pages on the basis of their textual content. Nowadays, digital libraries hold collections of digitized documents that can be retrieved either by browsing the document images or relying on appropriate metadata assembled by domain experts. Word indexing tools would therefore increase the access to these collections. The proposed system is designed to index homogeneous document collections by automatically adapting to different languages and font styles without relying on OCR engines for character recognition. The approach is based on three main ideas: the use of Self Organizing Maps (SOM) to perform unsupervised character clustering, the definition of a suitable vector-based word representation whose size depends on the word aspect-ratio, and the run-time alignment of the query word with indexed words to deal with broken and touching characters. The most appropriate applications are for processing modern printed documents (17th to 19th centuries) where current OCR engines are less accurate. Our experimental analysis addresses six data sets containing documents ranging from books of the 17th century to contemporary journals.
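    The unsupervised character clustering step can be sketched with a tiny Self-Organizing Map. The 2-D feature vectors below stand in for character-image descriptors, and the map size, learning-rate schedule, and neighbourhood width are illustrative choices, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(4)
# three tight clusters standing in for three recurring glyph shapes
data = np.vstack([rng.normal(c, 0.05, size=(60, 2))
                  for c in ([0.0, 0.0], [1.0, 0.0], [0.0, 1.0])])
rng.shuffle(data)

n_units, n_steps = 6, 3000
weights = rng.normal(0.5, 0.1, size=(n_units, 2))  # 1-D map of 6 units

for t in range(n_steps):
    x = data[t % len(data)]
    frac = 1.0 - t / n_steps
    lr = 0.5 * frac                       # decaying learning rate
    sigma = max(1e-3, frac)               # shrinking neighbourhood width
    bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
    for j in range(n_units):
        h = np.exp(-((j - bmu) ** 2) / (2 * sigma ** 2))
        weights[j] += lr * h * (x - weights[j])   # pull neighbourhood toward x

# mean quantization error: distance from each sample to its best unit
qe = np.mean([np.min(np.linalg.norm(weights - x, axis=1)) for x in data])
```

    After training, each map unit acts as a cluster prototype for one recurring character shape, and a word can be encoded as the sequence of best-matching units of its characters, which is the basis of the word representation described above.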

  2. Computerized adaptive control weld skate with CCTV weld guidance project

    NASA Technical Reports Server (NTRS)

    Wall, W. A.

    1976-01-01

    This report summarizes progress of the automatic computerized weld skate development portion of the Computerized Weld Skate with Closed Circuit Television (CCTV) Arc Guidance Project. The main goal of the project is to develop an automatic welding skate demonstration model equipped with CCTV weld guidance. The three main goals of the overall project are to: (1) develop a demonstration model computerized weld skate system, (2) develop a demonstration model automatic CCTV guidance system, and (3) integrate the two systems into a demonstration model of computerized weld skate with CCTV weld guidance for welding contoured parts.

  3. A Noise-Assisted Data Analysis Method for Automatic EOG-Based Sleep Stage Classification Using Ensemble Learning.

    PubMed

    Olesen, Alexander Neergaard; Christensen, Julie A E; Sorensen, Helge B D; Jennum, Poul J

    2016-08-01

    Reducing the number of recording modalities for sleep staging research can benefit both researchers and patients, provided that the results are as accurate as those of conventional systems. This paper investigates the possibility of exploiting the multisource nature of electrooculography (EOG) signals by presenting a method for automatic sleep staging using the complete ensemble empirical mode decomposition with adaptive noise algorithm and a random forest classifier. It achieves a high overall accuracy of 82% and a Cohen's kappa of 0.74, indicating substantial agreement between automatic and manual scoring.

  4. Instance-based categorization: automatic versus intentional forms of retrieval.

    PubMed

    Neal, A; Hesketh, B; Andrews, S

    1995-03-01

    Two experiments are reported which attempt to disentangle the relative contribution of intentional and automatic forms of retrieval to instance-based categorization. A financial decision-making task was used in which subjects had to decide whether a bank would approve loans for a series of applicants. Experiment 1 found that categorization was sensitive to instance-specific knowledge, even when subjects had practiced using a simple rule. L. L. Jacoby's (1991) process-dissociation procedure was adapted for use in Experiment 2 to infer the relative contribution of intentional and automatic retrieval processes to categorization decisions. The results provided (1) strong evidence that intentional retrieval processes influence categorization, and (2) some preliminary evidence suggesting that automatic retrieval processes may also contribute to categorization decisions.

  5. An integrated framework for assessing vulnerability to climate change and developing adaptation strategies for coffee growing families in Mesoamerica.

    PubMed

    Baca, María; Läderach, Peter; Haggar, Jeremy; Schroth, Götz; Ovalle, Oriana

    2014-01-01

    The Mesoamerican region is considered to be one of the areas in the world most vulnerable to climate change. We developed a framework for quantifying the vulnerability of the livelihoods of coffee growers in Mesoamerica at regional and local levels and to identify adaptation strategies. Following the Intergovernmental Panel on Climate Change (IPCC) concepts, vulnerability was defined as the combination of exposure, sensitivity and adaptive capacity. To quantify exposure, changes in the climatic suitability for coffee and other crops were predicted through niche modelling based on historical climate data and locations of coffee growing areas from Mexico, Guatemala, El Salvador and Nicaragua. Future climate projections were generated from 19 Global Circulation Models. Focus groups were used to identify nine indicators of sensitivity and eleven indicators of adaptive capacity, which were evaluated through semi-structured interviews with 558 coffee producers. Exposure, sensitivity and adaptive capacity were then condensed into an index of vulnerability, and adaptation strategies were identified in participatory workshops. Models predict that all target countries will experience a decrease in climatic suitability for growing Arabica coffee, with the highest suitability loss for El Salvador and the lowest loss for Mexico. High vulnerability resulted from loss in climatic suitability for coffee production and high sensitivity through variability of yields and out-migration of the work force. This was combined with low adaptive capacity, as evidenced by poor post-harvest infrastructure and, in some cases, poor access to credit and low levels of social organization. Nevertheless, the specific contributors to vulnerability varied strongly among countries, municipalities and families, making general trends difficult to identify. Flexible strategies for adaptation are therefore needed.
Families need the support of governments and of institutions specialized in climate change impacts, as well as stronger farmer organizations, to enable adaptation strategies to be adjusted to local needs and conditions.
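
    A minimal sketch of condensing the three IPCC components into a single index might look as follows; the indicators, the [0, 1] normalisation and the equal weighting are illustrative assumptions, not the paper's actual formula:

```python
def vulnerability_index(exposure, sensitivity, adaptive_capacity):
    """Condense the three IPCC components into one score in [0, 1].

    Inputs are lists of indicators already normalised to [0, 1].
    The equal weighting and combination rule are illustrative
    assumptions, not the study's exact index.
    """
    mean = lambda xs: sum(xs) / len(xs)
    e, s, a = mean(exposure), mean(sensitivity), mean(adaptive_capacity)
    # High exposure and sensitivity raise vulnerability;
    # high adaptive capacity lowers it.
    return (e + s + (1.0 - a)) / 3.0

# Example: high suitability loss, variable yields, weak organisation.
print(round(vulnerability_index([0.8], [0.7, 0.6], [0.2, 0.3]), 3))
```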

  6. An Integrated Framework for Assessing Vulnerability to Climate Change and Developing Adaptation Strategies for Coffee Growing Families in Mesoamerica

    PubMed Central

    Baca, María; Läderach, Peter; Haggar, Jeremy; Schroth, Götz; Ovalle, Oriana

    2014-01-01

    The Mesoamerican region is considered to be one of the areas in the world most vulnerable to climate change. We developed a framework for quantifying the vulnerability of the livelihoods of coffee growers in Mesoamerica at regional and local levels and to identify adaptation strategies. Following the Intergovernmental Panel on Climate Change (IPCC) concepts, vulnerability was defined as the combination of exposure, sensitivity and adaptive capacity. To quantify exposure, changes in the climatic suitability for coffee and other crops were predicted through niche modelling based on historical climate data and locations of coffee growing areas from Mexico, Guatemala, El Salvador and Nicaragua. Future climate projections were generated from 19 Global Circulation Models. Focus groups were used to identify nine indicators of sensitivity and eleven indicators of adaptive capacity, which were evaluated through semi-structured interviews with 558 coffee producers. Exposure, sensitivity and adaptive capacity were then condensed into an index of vulnerability, and adaptation strategies were identified in participatory workshops. Models predict that all target countries will experience a decrease in climatic suitability for growing Arabica coffee, with the highest suitability loss for El Salvador and the lowest loss for Mexico. High vulnerability resulted from loss in climatic suitability for coffee production and high sensitivity through variability of yields and out-migration of the workforce. This was combined with low adaptive capacity, as evidenced by poor post-harvest infrastructure and, in some cases, poor access to credit and low levels of social organization. Nevertheless, the specific contributors to vulnerability varied strongly among countries, municipalities and families, making general trends difficult to identify. Flexible strategies for adaptation are therefore needed.
Families need the support of governments and of institutions specialized in climate change impacts, as well as stronger farmer organizations, to enable adaptation strategies to be adjusted to local needs and conditions. PMID:24586328

  7. Model Checking Satellite Operational Procedures

    NASA Astrophysics Data System (ADS)

    Cavaliere, Federico; Mari, Federico; Melatti, Igor; Minei, Giovanni; Salvo, Ivano; Tronci, Enrico; Verzino, Giovanni; Yushtein, Yuri

    2011-08-01

    We present a model checking approach for the automatic verification of satellite operational procedures (OPs). Building a model for a complex system such as a satellite is a hard task. We overcome this obstacle by using a suitable simulator (SIMSAT) for the satellite. Our approach aims at improving OP quality assurance by automatic exhaustive exploration of all possible simulation scenarios. Moreover, our solution decreases OP verification costs by using a model checker (CMurphi) to automatically drive the simulator. We model OPs as user-executed programs observing the simulator telemetries and sending telecommands to the simulator. To assess the feasibility of our approach, we present experimental results on a simple but meaningful scenario. Our results show that we can save up to 90% of verification time.
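
    The exhaustive exploration of simulation scenarios can be sketched as a breadth-first search over simulator states; the `actions`/`step` interface below is an illustrative stand-in, not SIMSAT's or CMurphi's API:

```python
from collections import deque

def explore(initial_state, actions, step, invariant):
    """Exhaustively explore all reachable simulation states (BFS),
    checking an invariant in each. `step` stands in for driving the
    simulator (send a telecommand, read telemetry). Returns the
    first violating state, or None if the invariant always holds."""
    seen = {initial_state}
    queue = deque([initial_state])
    while queue:
        state = queue.popleft()
        if not invariant(state):
            return state
        for action in actions(state):
            nxt = step(state, action)
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None

# Toy OP: a counter that must never exceed 3.
bad = explore(0,
              actions=lambda s: ["inc"] if s < 5 else [],
              step=lambda s, a: s + 1,
              invariant=lambda s: s <= 3)
print(bad)  # first state violating the invariant
```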

  8. Precision Targeting With a Tracking Adaptive Optics Scanning Laser Ophthalmoscope

    DTIC Science & Technology

    2006-01-01

    automatic high-resolution mosaic generation, and automatic blink detection and tracking re-lock were also tested. The system has the potential to become an...structures can lead to earlier detection of retinal diseases such as age-related macular degeneration (AMD) and diabetic retinopathy (DR). Combined...optics systems sense perturbations in the detected wave-front and apply corrections to an optical element that flatten the wave-front and allow near

  9. An algorithm for automatic parameter adjustment for brain extraction in BrainSuite

    NASA Astrophysics Data System (ADS)

    Rajagopal, Gautham; Joshi, Anand A.; Leahy, Richard M.

    2017-02-01

    Brain Extraction (classification of brain and non-brain tissue) of MRI brain images is a crucial pre-processing step necessary for imaging-based anatomical studies of the human brain. Several automated methods and software tools are available for performing this task, but differences in MR image parameters (pulse sequence, resolution) and instrument- and subject-dependent noise and artefacts affect the performance of these automated methods. We describe and evaluate a method that automatically adapts the default parameters of the Brain Surface Extraction (BSE) algorithm to optimize a cost function chosen to reflect accurate brain extraction. BSE uses a combination of anisotropic filtering, Marr-Hildreth edge detection, and binary morphology for brain extraction. Our algorithm automatically adapts four parameters associated with these steps to maximize the brain surface area to volume ratio. We evaluate the method on a total of 109 brain volumes with ground truth brain masks generated by an expert user. A quantitative evaluation of the performance of the proposed algorithm showed an improvement in the mean (s.d.) Dice coefficient from 0.8969 (0.0376) for default parameters to 0.9509 (0.0504) for the optimized case. These results indicate that automatic parameter optimization can result in significant improvements in definition of the brain mask.
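
    The parameter adaptation amounts to searching the parameter space for the setting that maximises a cost function; a grid-search sketch with a stand-in cost (the real cost is the brain surface-area-to-volume ratio computed from a BSE run) might look like:

```python
import itertools

def tune(cost, grids):
    """Pick the parameter combination maximising `cost` by exhaustive
    grid search. A stand-in for BSE's four-parameter adaptation; the
    real pipeline evaluates surface area / volume per setting."""
    return max(itertools.product(*grids), key=lambda p: cost(*p))

# Toy cost with a known optimum at (3, 0.6, 1, 2); in practice the
# four parameters would be the filtering/edge/morphology settings.
cost = lambda d, e, s, t: (-(d - 3) ** 2 - (e - 0.6) ** 2
                           - (s - 1) ** 2 - (t - 2) ** 2)
grids = [range(1, 6), [0.4, 0.6, 0.8], [0, 1, 2], range(4)]
print(tune(cost, grids))  # (3, 0.6, 1, 2)
```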

  10. Expert Knowledge-Based Automatic Sleep Stage Determination by Multi-Valued Decision Making Method

    NASA Astrophysics Data System (ADS)

    Wang, Bei; Sugi, Takenao; Kawana, Fusae; Wang, Xingyu; Nakamura, Masatoshi

    In this study, an expert knowledge-based automatic sleep stage determination system working on a multi-valued decision making method is developed. Visual inspection by a qualified clinician is adopted to obtain the expert knowledge database. The expert knowledge database consists of probability density functions of parameters for various sleep stages. Sleep stages are determined automatically according to the conditional probability. In total, four subjects participated. The automatic sleep stage determination results showed close agreement with visual inspection for the sleep stages of awake, REM (rapid eye movement), light sleep and deep sleep. The constructed expert knowledge database reflects the distributions of characteristic parameters and can adapt to the variable sleep data found in hospitals. The developed automatic determination technique based on expert knowledge of visual inspection can be an assistant tool enabling further inspection of sleep disorder cases in clinical practice.
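
    The conditional-probability determination can be sketched as a naive-Bayes-style classifier over per-stage parameter PDFs; the Gaussian database values below are illustrative, not the clinical expert-knowledge database:

```python
import math

def gauss(x, mu, sigma):
    """Gaussian PDF, standing in for the empirical per-stage PDFs."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical expert-knowledge database: per-stage (mean, sd) for two
# parameters (e.g. delta power, EMG level); the values are made up.
DB = {
    "awake": [(0.1, 0.1), (0.8, 0.2)],
    "REM":   [(0.2, 0.1), (0.2, 0.1)],
    "light": [(0.5, 0.15), (0.4, 0.15)],
    "deep":  [(0.9, 0.1), (0.3, 0.1)],
}

def classify(params):
    """Pick the stage with the highest likelihood, assuming
    independent parameters (naive-Bayes-style combination)."""
    def likelihood(stage):
        return math.prod(gauss(x, mu, sd)
                         for x, (mu, sd) in zip(params, DB[stage]))
    return max(DB, key=likelihood)

print(classify([0.85, 0.25]))  # high delta, low EMG -> deep sleep
```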

  11. Digital dental surface registration with laser scanner for orthodontics set-up planning

    NASA Astrophysics Data System (ADS)

    Alcaniz-Raya, Mariano L.; Albalat, Salvador E.; Grau Colomer, Vincente; Monserrat, Carlos A.

    1997-05-01

    We present an optical measuring system based on laser structured light, suitable for daily use in orthodontic clinics, that fits four main requirements: (1) to avoid the use of stone models; (2) to automatically discriminate geometric points belonging to teeth and gum; (3) to automatically calculate the diagnostic parameters used by orthodontists; (4) to use low-cost, easy-to-use technology suitable for future commercial use. The proposed technique is based on the hydrocolloid moulds that orthodontists use to obtain stone models. These moulds of the inside of the patient's mouth are made of highly fluid materials, such as alginate or hydrocolloids, that capture fine details of dental anatomy. Alginate moulds are both very easy to obtain and inexpensive. Once captured, alginate moulds are digitized by means of a newly developed and patented 3D dental scanner. The scanner relies on optical triangulation: a laser line is projected onto the alginate mould surface, and the deformation of the line gives uncalibrated shape information. Relative linear movement of the mould with respect to the sensor head yields further sections, producing a full 3D uncalibrated dentition model. The device uses redundant CCDs in the sensor head and a servo-controlled linear axis for mould movement. The last step is calibration to obtain real, precise X, Y, Z coordinates. The whole process runs automatically. The scanner has been specially adapted to capture 3D dental anatomy in order to fulfil specific requirements such as scanning time, accuracy, security and correct acquisition of 'hidden points' in the alginate mould. Measurements on phantoms of known geometry, quite similar to dental anatomy, show errors of less than 0.1 mm. Scanning the complete dental anatomy takes 2 minutes, and generating the 3D graphics of the dental cast takes approximately 30 seconds on a Pentium-based PC.
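
    The optical triangulation principle behind such a scanner can be illustrated with a simplified camera-laser geometry (the actual patented layout and calibration differ):

```python
import math

def depth_from_pixel(u, f, b, theta):
    """Depth of a laser spot by optical triangulation.

    Simplified geometry (an assumption, not the scanner's layout):
    the camera at the origin looks along +z with focal length f
    (pixels); the laser sits at baseline b on the x-axis and fires at
    angle theta (radians) toward the optical axis, so beam points are
    (b - z*tan(theta), z). The spot images at pixel u = f*x/z, and
    solving for z gives z = f*b / (u + f*tan(theta)).
    """
    return f * b / (u + f * math.tan(theta))

# A point at z = 100 mm with f = 500 px, b = 50 mm, theta = 10 deg:
theta = math.radians(10)
z_true = 100.0
u = 500 * (50 - z_true * math.tan(theta)) / z_true  # forward projection
print(round(depth_from_pixel(u, 500, 50, theta), 6))  # recovers 100.0
```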

  12. The system neurophysiological basis of non-adaptive cognitive control: Inhibition of implicit learning mediated by right prefrontal regions.

    PubMed

    Stock, Ann-Kathrin; Steenbergen, Laura; Colzato, Lorenza; Beste, Christian

    2016-12-01

    Cognitive control is adaptive in the sense that it inhibits automatic processes to optimize goal-directed behavior, but high levels of control may also have detrimental effects in case they suppress beneficial automatisms. Until now, the system neurophysiological mechanisms and functional neuroanatomy underlying these adverse effects of cognitive control have remained elusive. This question was examined by analyzing the automatic exploitation of a beneficial implicit predictive feature under conditions of high versus low cognitive control demands, combining event-related potentials (ERPs) and source localization. It was found that cognitive control prohibits the beneficial automatic exploitation of additional implicit information when task demands are high. Bottom-up perceptual and attentional selection processes (P1 and N1 ERPs) are not modulated by this, but the automatic exploitation of beneficial predictive information in case of low cognitive control demands was associated with larger response-locked P3 amplitudes and stronger activation of the right inferior frontal gyrus (rIFG, BA47). This suggests that the rIFG plays a key role in the detection of relevant task cues, the exploitation of alternative task sets, and the automatic (bottom-up) implementation and reprogramming of action plans. Moreover, N450 amplitudes were larger under high cognitive control demands, which was associated with activity differences in the right medial frontal gyrus (BA9). This most likely reflects a stronger exploitation of explicit task sets which hinders the exploration of the implicit beneficial information in case of high cognitive control demands. Hum Brain Mapp 37:4511-4522, 2016. © 2016 Wiley Periodicals, Inc.

  13. Cognition and balance control: does processing of explicit contextual cues of impending perturbations modulate automatic postural responses?

    PubMed

    Coelho, Daniel Boari; Teixeira, Luis Augusto

    2017-08-01

    Processing of predictive contextual cues of an impending perturbation is thought to induce adaptive postural responses. Cueing in previous research has been provided through repeated perturbations with a constant foreperiod. This experimental strategy confounds explicit predictive cueing with adaptation and non-specific properties of temporal cueing. Two experiments were performed to assess those factors separately. To perturb upright balance, the base of support was suddenly displaced backwards in three amplitudes: 5, 10 and 15 cm. In Experiment 1, we tested the effect of cueing the amplitude of the impending postural perturbation by means of visual signals, and the effect of adaptation to repeated exposures by comparing block versus random sequences of perturbation. In Experiment 2, we evaluated separately the effects of cueing the characteristics of an impending balance perturbation and cueing the timing of perturbation onset. Results from Experiment 1 showed that the block sequence of perturbations led to increased stability of automatic postural responses, and modulation of magnitude and onset latency of muscular responses. Results from Experiment 2 showed that only the condition cueing timing of platform translation onset led to increased balance stability and modulation of onset latency of muscular responses. Conversely, cueing platform displacement amplitude failed to induce any effects on automatic postural responses in both experiments. Our findings support the interpretation of improved postural responses via optimized sensorimotor processes, at the same time that cast doubt on the notion that cognitive processing of explicit contextual cues advancing the magnitude of an impending perturbation can preset adaptive postural responses.

  14. Automatic Domain Adaptation of Word Sense Disambiguation Based on Sublanguage Semantic Schemata Applied to Clinical Narrative

    ERIC Educational Resources Information Center

    Patterson, Olga

    2012-01-01

    Domain adaptation of natural language processing systems is challenging because it requires human expertise. While manual effort is effective in creating a high quality knowledge base, it is expensive and time consuming. Clinical text adds another layer of complexity to the task due to privacy and confidentiality restrictions that hinder the…

  15. Adding Statistical Machine Translation Adaptation to Computer-Assisted Translation

    DTIC Science & Technology

    2013-09-01

    are automatically searched and used to suggest possible translations; (2) spell-checkers; (3) glossaries; (4) dictionaries; (5) alignment and...matching against TMs to propose translations; spell-checking, glossary, and dictionary look-up; support for multiple file formats; regular expressions...on Telecommunications. Tehran, 2012, 822–826. Bertoldi, N.; Federico, M. Domain Adaptation for Statistical Machine Translation with Monolingual

  16. Automatic assembly of micro-optical components

    NASA Astrophysics Data System (ADS)

    Gengenbach, Ulrich K.

    1996-12-01

    Automatic assembly becomes an important issue as hybrid microsystems enter industrial fabrication. Moving from laboratory-scale production with manual assembly and bonding processes to automatic assembly requires a thorough re-evaluation of the design, the characteristics of the individual components, and the processes involved. Parts supply for automatic operation and sensitive, intelligent grippers adapted to the size, surface and material properties of the microcomponents gain importance when the superior sensory and handling skills of a human are to be replaced by a machine. This holds in particular for the automatic assembly of micro-optical components. The paper outlines these issues, exemplified by the automatic assembly of a micro-optical duplexer consisting of a micro-optical bench fabricated by the LIGA technique, two spherical lenses, a wavelength filter and an optical fiber. The spherical lenses, wavelength filter and optical fiber are supplied by third-party vendors, which raises the question of parts supply for automatic assembly. The bonding processes for these components include press fit and adhesive bonding. The prototype assembly system with all relevant components, e.g. handling system, parts supply, grippers and control, is described. Results of the first automatic assembly tests are presented.

  17. An automatic, closed-circuit oxygen consumption apparatus for small animals.

    PubMed

    Stock, M J

    1975-11-01

    An apparatus suitable for the continuous measurement of oxygen consumption of rats and mice is described. The system uses a motorized syringe dispenser to deliver fixed volumes of oxygen to a closed animal chamber. The dispenser is controlled by a micro-differential pressure switch to maintain chamber pressure slightly above ambient. The rate of oxygen consumption is determined by timing the interval between successive operations of the dispenser. The system has proved suitable for a range of experimental conditions and treatments.
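
    The rate computation described (a fixed volume per dispenser stroke, timed intervals between strokes) can be sketched as follows; the stroke volume and intervals are illustrative values, not the paper's:

```python
def o2_consumption_rate(stroke_volume_ml, intervals_s):
    """Oxygen consumption in a closed-circuit system: each dispenser
    stroke delivers a fixed volume of O2, so the consumption rate is
    that volume divided by the mean interval between strokes.
    Returns ml O2 per minute. Values used below are illustrative."""
    mean_interval = sum(intervals_s) / len(intervals_s)
    return stroke_volume_ml / mean_interval * 60

# 5 ml strokes roughly every 20 s -> 15 ml/min
print(round(o2_consumption_rate(5.0, [19.5, 20.0, 20.5]), 2))
```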

  18. J-Adaptive estimation with estimated noise statistics. [for orbit determination

    NASA Technical Reports Server (NTRS)

    Jazwinski, A. H.; Hipkins, C.

    1975-01-01

    The J-Adaptive estimator described by Jazwinski and Hipkins (1972) is extended to include the simultaneous estimation of the statistics of the unmodeled system accelerations. With the aid of simulations it is demonstrated that the J-Adaptive estimator with estimated noise statistics can automatically estimate satellite orbits to an accuracy comparable with the data noise levels, when excellent, continuous tracking coverage is available. Such tracking coverage will be available from satellite-to-satellite tracking.
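
    A minimal scalar sketch of estimating noise statistics from the data is given below; it uses textbook innovation-based adaptation of the process-noise variance in a random-walk Kalman filter, not the actual J-Adaptive formulation for orbital accelerations:

```python
def adaptive_kalman(zs, r, window=10):
    """Scalar random-walk Kalman filter that re-estimates its process
    noise q from recent innovations (innovation-based adaptation, in
    the spirit of estimating unmodeled-acceleration statistics; the
    J-Adaptive estimator itself differs). Returns filtered states."""
    x, p, q = zs[0], 1.0, 1e-3
    innovations, out = [], []
    for z in zs:
        p = p + q                 # predict (random-walk dynamics)
        k = p / (p + r)           # Kalman gain
        nu = z - x                # innovation
        x = x + k * nu            # measurement update
        p = (1 - k) * p
        innovations.append(nu)
        recent = innovations[-window:]
        # Match q to the innovation spread in excess of measurement noise.
        s = sum(v * v for v in recent) / len(recent)
        q = max(s - r, 1e-6)
        out.append(x)
    return out

# Quiet segment, then a jump the filter must follow by inflating q:
zs = [0.0, 0.1, -0.1, 0.05, 0.0, 5.0, 5.1, 4.9, 5.0, 5.05]
est = adaptive_kalman(zs, r=0.01)
print(round(est[-1], 2))  # tracks the jump to ~5
```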

  19. AdaFF: Adaptive Failure-Handling Framework for Composite Web Services

    NASA Astrophysics Data System (ADS)

    Kim, Yuna; Lee, Wan Yeon; Kim, Kyong Hoon; Kim, Jong

    In this paper, we propose a novel Web service composition framework which dynamically accommodates various failure recovery requirements. In the proposed framework, called Adaptive Failure-handling Framework (AdaFF), failure-handling submodules are prepared during the design of a composite service, and some of them are systematically selected and automatically combined with the composite Web service at service instantiation in accordance with the requirements of individual users. In contrast, existing frameworks cannot adapt failure-handling behaviors to users' requirements. AdaFF rapidly delivers a composite service supporting requirement-matched failure handling without manual development, and contributes to flexible composite Web service design in that service architects need not be concerned with failure handling or the variable requirements of users. For proof of concept, we implemented a prototype system of AdaFF, which automatically generates a composite service instance in Web Services Business Process Execution Language (WS-BPEL) according to the user's requirements specified in XML format and executes the generated instance on the ActiveBPEL engine.
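
    The selection of requirement-matched failure-handling submodules at instantiation time can be sketched as follows; the handler names and the composition rule are illustrative, not AdaFF's actual WS-BPEL generation:

```python
# Hypothetical registry of failure-handling submodules prepared at
# design time; the names and wrapping scheme are illustrative.
HANDLERS = {
    "retry":      lambda op: f"retry({op}, max=3)",
    "substitute": lambda op: f"substitute({op}, backup_service)",
    "abort":      lambda op: f"abort({op})",
}

def instantiate(composite_ops, requirement):
    """Combine the requirement-matched handler with each operation
    of the composite service at instantiation time."""
    handler = HANDLERS[requirement]
    return [handler(op) for op in composite_ops]

# Two users get differently wrapped instances of the same service:
print(instantiate(["book_flight", "book_hotel"], "retry"))
print(instantiate(["book_flight", "book_hotel"], "abort"))
```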

  20. The ALICE-HMPID Detector Control System: Its evolution towards an expert and adaptive system

    NASA Astrophysics Data System (ADS)

    De Cataldo, G.; Franco, A.; Pastore, C.; Sgura, I.; Volpe, G.

    2011-05-01

    The High Momentum Particle IDentification (HMPID) detector is a proximity-focusing Ring Imaging Cherenkov (RICH) detector for charged hadron identification. The HMPID is based on liquid C6F14 as the radiator medium and on a 10 m2 CsI-coated, pad-segmented photocathode of MWPCs for UV Cherenkov photon detection. To ensure full remote control, the HMPID is equipped with a detector control system (DCS) responding to industrial standards for robustness and reliability. It has been implemented using PVSS as the Slow Control And Data Acquisition (SCADA) environment, Programmable Logic Controllers as control devices and Finite State Machines for modular and automatic command execution. In the perspective of reducing human presence at the experiment site, this paper focuses on the DCS evolution towards an expert and adaptive control system, providing, respectively, automatic error recovery and stable detector performance. HAL9000, the first prototype of the HMPID expert system, is then presented. Finally, an analysis of the possible application of the adaptive features is provided.

  1. Learning without labeling: domain adaptation for ultrasound transducer localization.

    PubMed

    Heimann, Tobias; Mountney, Peter; John, Matthias; Ionasec, Razvan

    2013-01-01

    The fusion of image data from trans-esophageal echography (TEE) and X-ray fluoroscopy is attracting increasing interest in minimally-invasive treatment of structural heart disease. In order to calculate the needed transform between both imaging systems, we employ a discriminative learning based approach to localize the TEE transducer in X-ray images. Instead of time-consuming manual labeling, we generate the required training data automatically from a single volumetric image of the transducer. In order to adapt this system to real X-ray data, we use unlabeled fluoroscopy images to estimate differences in feature space density and correct covariate shift by instance weighting. An evaluation on more than 1900 images reveals that our approach reduces detection failures by 95% compared to cross validation on the test set and improves the localization error from 1.5 to 0.8 mm. Due to the automatic generation of training data, the proposed system is highly flexible and can be adapted to any medical device with minimal efforts.
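
    Instance weighting for covariate shift can be sketched with a histogram density-ratio estimate over a single feature; the paper's estimator works in a learned feature space, so this only illustrates the principle:

```python
from collections import Counter

def instance_weights(source, target, bins=5, lo=0.0, hi=1.0):
    """Importance weights w(x) = p_target(x) / p_source(x) from
    histogram density estimates over one scalar feature. A minimal
    sketch of correcting covariate shift by instance weighting; the
    feature, binning and ranges are illustrative assumptions."""
    width = (hi - lo) / bins
    def hist(xs):
        c = Counter(min(int((x - lo) / width), bins - 1) for x in xs)
        return [c[i] / len(xs) for i in range(bins)]
    ps, pt = hist(source), hist(target)
    def weight(x):
        b = min(int((x - lo) / width), bins - 1)
        return pt[b] / ps[b] if ps[b] > 0 else 0.0
    return [weight(x) for x in source]

# Synthetic (labeled) data clusters low, real (unlabeled) data higher:
source = [0.1, 0.15, 0.2, 0.55, 0.6]
target = [0.5, 0.55, 0.6, 0.65, 0.7]
w = instance_weights(source, target)
# Training examples resembling the target domain get upweighted:
print([round(x, 2) for x in w])
```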

  2. Development of Test Article Building Block (TABB) for deployable platform systems

    NASA Technical Reports Server (NTRS)

    Greenberg, H. S.; Barbour, R. T.

    1984-01-01

    The concept of a Test Article Building Block (TABB) is described. The TABB is a ground test article that is representative of a future building block that can be used to construct LEO and GEO deployable space platforms for communications and scientific payloads. This building block contains a main housing within which the entire structure, utilities, and deployment/retraction mechanism are stowed during launch. The end adapter secures the foregoing components to the housing during launch. The main housing and adapter provide the necessary building-block-to-building-block attachments for automatically deployable platforms. Removal from the shuttle cargo bay can be accomplished with the remote manipulator system (RMS) and/or the handling and positioning aid (HAPA). In this concept, all the electrical connections are in place prior to launch with automatic latches for payload attachment provided on either the end adapters or housings. The housings also can contain orbiter docking ports for payload installation and maintenance.

  3. Advanced adaptive computational methods for Navier-Stokes simulations in rotorcraft aerodynamics

    NASA Technical Reports Server (NTRS)

    Stowers, S. T.; Bass, J. M.; Oden, J. T.

    1993-01-01

    A phase 2 research and development effort was conducted in the area of transonic, compressible, inviscid flows, with the ultimate goal of numerically modeling the complex flows inherent in advanced helicopter blade designs. The algorithms and methodologies developed are classified as adaptive methods: they use error estimation techniques to approximate the local numerical error and automatically refine or unrefine the mesh so as to deliver a given level of accuracy. The result is a scheme which attempts to produce the best possible results with the least number of grid points, degrees of freedom, and operations. These types of schemes automatically locate and resolve shocks, shear layers, and other flow details to an accuracy level specified by the user of the code. The phase 1 work involved a feasibility study of h-adaptive methods for steady viscous flows, with emphasis on accurate simulation of vortex initiation, migration, and interaction. The phase 2 effort focused on extending these algorithms and methodologies to a three-dimensional topology.
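
    The refine-until-accurate loop of an h-adaptive method can be sketched in one dimension; the error estimator below (deviation of the midpoint value from a linear interpolant) is a simple stand-in for a flow solver's estimator:

```python
import math

def adapt(f, a, b, tol, depth=0):
    """h-adaptive refinement sketch: estimate the local error on cell
    [a, b] and split the cell until the estimate meets `tol` (or a
    depth cap). Returns the list of cell edges. The estimator is an
    illustrative stand-in, not a CFD error indicator."""
    mid = (a + b) / 2
    # Error estimate: deviation of f(mid) from the linear interpolant.
    err = abs(f(mid) - (f(a) + f(b)) / 2) * (b - a)
    if err < tol or depth > 12:
        return [a, b]
    left = adapt(f, a, mid, tol, depth + 1)
    return left[:-1] + adapt(f, mid, b, tol, depth + 1)

# A sharp shock-like tanh profile attracts cells near x = 0.3:
edges = adapt(lambda x: math.tanh(50 * (x - 0.3)), 0.0, 1.0, 1e-3)
print(len(edges))  # far more cells cluster around the steep region
```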

  4. Comparison of Document Index Graph Using TextRank and HITS Weighting Method in Automatic Text Summarization

    NASA Astrophysics Data System (ADS)

    Hadyan, Fadhlil; Shaufiah; Arif Bijaksana, Moch.

    2017-01-01

    Automatic summarization systems help readers grasp the core information of a long text instantly by summarizing it automatically. Many summarization systems have been developed, but several problems remain. This work proposes a summarization method based on a document index graph. The method adapts the PageRank and HITS formulas, originally used to rank web pages, to score the words in the sentences of a text document. The expected outcome is a system that can summarize a single document by combining a document index graph with TextRank and HITS to improve the quality of the automatically produced summary.
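
    The TextRank half of the comparison is essentially PageRank run on a similarity graph; a sketch over sentences (the document index graph construction and the HITS variant are not reproduced here) might look like:

```python
def textrank(sim, d=0.85, iters=50):
    """TextRank scores from a symmetric similarity matrix `sim`
    (weighted PageRank, power iteration). A minimal sketch of the
    scoring step only; graph construction is assumed done."""
    n = len(sim)
    scores = [1.0] * n
    out = [sum(row) for row in sim]  # total outgoing weight per node
    for _ in range(iters):
        scores = [(1 - d) + d * sum(sim[j][i] / out[j] * scores[j]
                                    for j in range(n) if out[j] > 0)
                  for i in range(n)]
    return scores

# Toy 3-sentence graph: sentence 0 overlaps both others heavily.
sim = [[0, 2, 2],
       [2, 0, 1],
       [2, 1, 0]]
scores = textrank(sim)
best = max(range(3), key=lambda i: scores[i])
print(best)  # the most central sentence is the summary pick
```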

  5. 800 MHz Communication Survey of the Los Angeles Area

    DOT National Transportation Integrated Search

    1979-01-01

    During the first half of 1978, as part of the Multi-User Automatic Vehicle Monitoring (AVM) Program, a survey was conducted to determine the suitability of utilizing the 800-900 MHz band as the primary carrier of digital communication data pertaining...

  6. Automatic RBG-depth-pressure anthropometric analysis and individualised sleep solution prescription.

    PubMed

    Esquirol Caussa, Jordi; Palmero Cantariño, Cristina; Bayo Tallón, Vanessa; Cos Morera, Miquel Àngel; Escalera, Sergio; Sánchez, David; Sánchez Padilla, Maider; Serrano Domínguez, Noelia; Relats Vilageliu, Mireia

    2017-08-01

    Sleep surfaces must adapt to individual somatotypic features to maintain comfortable, convenient and healthy sleep, preventing diseases and injuries. Determining the most adequate rest surface for an individual can often be a complex and subjective question. The aim was to design and validate an automatic multimodal somatotype determination model that recommends an individually designed mattress-topper-pillow combination. An automated prescription model for an individualised sleep system was designed and validated through single-image 2D-3D analysis and body pressure distribution, to objectively determine optimal individual sleep surfaces combining five mattress densities, three toppers and three cervical pillows. A final study (n = 151) and re-analysis (n = 117) defined and validated the model, showing high correlations between calculated and real data (>85% in height and body circumferences, 89.9% in weight, 80.4% in body mass index and more than 70% in morphotype categorisation). The somatotype determination model can accurately prescribe an individualised sleep solution. This can be useful for healthy people and for health centres that need to adapt sleep surfaces to people with special needs. Next steps will increase the model's accuracy and analyse whether the prescribed individualised sleep solution can improve sleep quantity and quality; future studies will also adapt the model to mattresses with technological improvements and tailor-made production, and will define interfaces for people with special needs.

  7. Gaussian mixtures on tensor fields for segmentation: applications to medical imaging.

    PubMed

    de Luis-García, Rodrigo; Westin, Carl-Fredrik; Alberola-López, Carlos

    2011-01-01

    In this paper, we introduce a new approach for tensor field segmentation based on the definition of mixtures of Gaussians on tensors as a statistical model. Working over the well-known Geodesic Active Regions segmentation framework, this scheme presents several interesting advantages. First, it yields a more flexible model than the use of a single Gaussian distribution, which enables the method to better adapt to the complexity of the data. Second, it can work directly on tensor-valued images or, through a parallel scheme that processes independently the intensity and the local structure tensor, on scalar textured images. Two different applications have been considered to show the suitability of the proposed method for medical imaging segmentation. First, we address DT-MRI segmentation on a dataset of 32 volumes, showing a successful segmentation of the corpus callosum and favourable comparisons with related approaches in the literature. Second, the segmentation of bones from hand radiographs is studied, and a complete automatic-semiautomatic approach has been developed that makes use of anatomical prior knowledge to produce accurate segmentation results. Copyright © 2010 Elsevier Ltd. All rights reserved.
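
    The mixture model can be illustrated in its scalar form with a two-component 1-D Gaussian mixture fitted by EM; the paper replaces these scalar Gaussians with Gaussians defined on tensor-valued data, so this is only the analogue:

```python
import math

def em_gmm_1d(xs, iters=50):
    """EM for a two-component 1-D Gaussian mixture: the scalar
    analogue of the mixtures-on-tensors model. Initialisation and
    iteration count are simple illustrative choices."""
    s = sorted(xs)
    m1, m2 = s[len(s) // 4], s[3 * len(s) // 4]   # crude init
    v1 = v2 = (s[-1] - s[0]) ** 2 / 12 or 1.0
    w = 0.5
    pdf = lambda x, m, v: (math.exp(-(x - m) ** 2 / (2 * v))
                           / math.sqrt(2 * math.pi * v))
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point.
        r = [w * pdf(x, m1, v1)
             / (w * pdf(x, m1, v1) + (1 - w) * pdf(x, m2, v2))
             for x in xs]
        # M-step: reweighted means, variances, mixing weight.
        n1 = sum(r); n2 = len(xs) - n1
        m1 = sum(ri * x for ri, x in zip(r, xs)) / n1
        m2 = sum((1 - ri) * x for ri, x in zip(r, xs)) / n2
        v1 = max(sum(ri * (x - m1) ** 2 for ri, x in zip(r, xs)) / n1, 1e-6)
        v2 = max(sum((1 - ri) * (x - m2) ** 2 for ri, x in zip(r, xs)) / n2, 1e-6)
        w = n1 / len(xs)
    return (m1, v1), (m2, v2), w

xs = [0.9, 1.0, 1.1, 1.0, 4.9, 5.0, 5.1, 5.0]
(c1, _), (c2, _), w = em_gmm_1d(xs)
print(round(min(c1, c2), 1), round(max(c1, c2), 1))  # clusters near 1 and 5
```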

  8. Introduction to Adaptive Methods for Differential Equations

    NASA Astrophysics Data System (ADS)

    Eriksson, Kenneth; Estep, Don; Hansbo, Peter; Johnson, Claes

    Knowing thus the Algorithm of this calculus, which I call Differential Calculus, all differential equations can be solved by a common method (Gottfried Wilhelm von Leibniz, 1646-1719).When, several years ago, I saw for the first time an instrument which, when carried, automatically records the number of steps taken by a pedestrian, it occurred to me at once that the entire arithmetic could be subjected to a similar kind of machinery so that not only addition and subtraction, but also multiplication and division, could be accomplished by a suitably arranged machine easily, promptly and with sure results. For it is unworthy of excellent men to lose hours like slaves in the labour of calculations, which could safely be left to anyone else if the machine was used. And now that we may give final praise to the machine, we may say that it will be desirable to all who are engaged in computations which, as is well known, are the managers of financial affairs, the administrators of others estates, merchants, surveyors, navigators, astronomers, and those connected with any of the crafts that use mathematics (Leibniz).

  9. Design for Verification: Using Design Patterns to Build Reliable Systems

    NASA Technical Reports Server (NTRS)

    Mehlitz, Peter C.; Penix, John; Koga, Dennis (Technical Monitor)

    2003-01-01

    Components so far have been mainly used in commercial software development to reduce time to market. While some effort has been spent on formal aspects of components, most of this was done in the context of programming language or operating system framework integration. As a consequence, increased reliability of composed systems is mainly regarded as a side effect of a more rigid testing of pre-fabricated components. In contrast to this, Design for Verification (D4V) puts the focus on component specific property guarantees, which are used to design systems with high reliability requirements. D4V components are domain specific design pattern instances with well-defined property guarantees and usage rules, which are suitable for automatic verification. The guaranteed properties are explicitly used to select components according to key system requirements. The D4V hypothesis is that the same general architecture and design principles leading to good modularity, extensibility and complexity/functionality ratio can be adapted to overcome some of the limitations of conventional reliability assurance measures, such as too large a state space or too many execution paths.

  10. Explicit instructions and consolidation promote rewiring of automatic behaviors in the human mind.

    PubMed

    Szegedi-Hallgató, Emese; Janacsek, Karolina; Vékony, Teodóra; Tasi, Lia Andrea; Kerepes, Leila; Hompoth, Emőke Adrienn; Bálint, Anna; Németh, Dezső

    2017-06-29

    One major challenge in human behavior and brain sciences is to understand how we can rewire already existing perceptual, motor, cognitive, and social skills or habits. Here we aimed to characterize one aspect of rewiring, namely, how we can update our knowledge of sequential/statistical regularities when they change. The dynamics of rewiring was explored from learning to consolidation using a unique experimental design which is suitable to capture the effect of implicit and explicit processing and the proactive and retroactive interference. Our results indicate that humans can rewire their knowledge of such regularities incidentally, and consolidation has a critical role in this process. Moreover, old and new knowledge can coexist, leading to effective adaptivity of the human mind in the changing environment, although the execution of the recently acquired knowledge may be more fluent than the execution of the previously learned one. These findings can contribute to a better understanding of the cognitive processes underlying behavior change, and can provide insights into how we can boost behavior change in various contexts, such as sports, educational settings or psychotherapy.

  11. Stereoacuity versus fixation disparity as indicators for vergence accuracy under prismatic stress.

    PubMed

    Kromeier, Miriam; Schmitt, Christina; Bach, Michael; Kommerell, Guntram

    2003-01-01

Fixation disparity has been widely used as an indicator of vergence accuracy under prismatic stress. However, the targets used for measuring fixation disparity contain artificial features in that the fusional contours are thinned out. We considered that stereoacuity might be a preferable indicator of vergence accuracy, as stereo targets represent natural viewing conditions. We measured fixation disparity with a computer adaptation of Ogle's test and stereoacuity with the automatic Freiburg Stereoacuity Test. Eight subjects were examined under increasing base-in and base-out prisms. The response of fixation disparity to prismatic stress revealed the curve types described by Ogle and Crone. All eight subjects reached a stereoscopic threshold below 10 arcsec. In seven subjects the stereoscopic threshold increased before double vision occurred. Our data suggest that stereoacuity is suitable for assessing the range of binocular vision under prismatic stress. As stereoacuity has the advantage over fixation disparity that it can be measured without introducing artificial viewing conditions, we suggest exploring whether stereoacuity under prismatic stress would be more meaningful in the work-up of asthenopic patients than is fixation disparity.

  12. Study on portable optical 3D coordinate measuring system

    NASA Astrophysics Data System (ADS)

    Ren, Tongqun; Zhu, Jigui; Guo, Yinbiao

    2009-05-01

A portable optical 3D coordinate measuring system based on digital Close Range Photogrammetry (CRP) technology and binocular stereo vision theory is researched. Three highly stable infrared LEDs are mounted on a hand-held target to provide measurement features and to establish the target coordinate system. Field calibration based on ray intersection is performed for the intersecting binocular measurement system, composed of two cameras, using a reference ruler. The hand-held target, controlled via Bluetooth wireless communication, is moved freely to carry out contact measurement. The position of the ceramic contact ball is accurately pre-calibrated. The coordinates of the target feature points are obtained from the binocular stereo vision model using the stereo image pairs taken by the cameras. Combining radius compensation for the contact ball with residual error correction, object points are resolved by transfer of axes using the target coordinate system as an intermediary. This system is suitable for on-site large-scale measurement because of its excellent portability, high precision, wide measuring volume, great adaptability and high degree of automation. Tests show that the measuring precision is close to +/-0.1 mm/m.

  13. Real-time detection of small and dim moving objects in IR video sequences using a robust background estimator and a noise-adaptive double thresholding

    NASA Astrophysics Data System (ADS)

    Zingoni, Andrea; Diani, Marco; Corsini, Giovanni

    2016-10-01

    We developed an algorithm for automatically detecting small and poorly contrasted (dim) moving objects in real-time, within video sequences acquired through a steady infrared camera. The algorithm is suitable for different situations since it is independent of the background characteristics and of changes in illumination. Unlike other solutions, small objects of any size (up to single-pixel), either hotter or colder than the background, can be successfully detected. The algorithm is based on accurately estimating the background at the pixel level and then rejecting it. A novel approach permits background estimation to be robust to changes in the scene illumination and to noise, and not to be biased by the transit of moving objects. Care was taken in avoiding computationally costly procedures, in order to ensure the real-time performance even using low-cost hardware. The algorithm was tested on a dataset of 12 video sequences acquired in different conditions, providing promising results in terms of detection rate and false alarm rate, independently of background and objects characteristics. In addition, the detection map was produced frame by frame in real-time, using cheap commercial hardware. The algorithm is particularly suitable for applications in the fields of video-surveillance and computer vision. Its reliability and speed permit it to be used also in critical situations, like in search and rescue, defence and disaster monitoring.
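
    The background-rejection scheme described above can be illustrated with a toy sketch (not the authors' algorithm): a per-pixel running-average background, a running noise estimate, and a noise-adaptive double threshold in which weak detections are kept only when adjacent to strong ones. All parameter names and values here are illustrative assumptions, and a real implementation would operate on 2-D infrared frames rather than the 1-D rows used below.

```python
# Toy sketch (illustrative only) of per-pixel background estimation with a
# noise-adaptive double threshold, run on 1-D "image rows" for brevity.

def detect_moving(frames, alpha=0.05, k_strong=5.0, k_weak=2.5):
    """Return per-frame detection masks for a sequence of 1-D frames."""
    n = len(frames[0])
    background = list(frames[0])   # initialise background from the first frame
    noise = [1.0] * n              # running estimate of residual noise level
    masks = []
    for frame in frames[1:]:
        residual = [abs(frame[i] - background[i]) for i in range(n)]
        # Strong/weak thresholds scale with the local noise estimate.
        strong = [residual[i] > k_strong * noise[i] for i in range(n)]
        weak = [residual[i] > k_weak * noise[i] for i in range(n)]
        # Double thresholding: keep weak pixels only next to a strong pixel.
        mask = [
            strong[i] or (weak[i] and (
                (i > 0 and strong[i - 1]) or (i + 1 < n and strong[i + 1])))
            for i in range(n)
        ]
        for i in range(n):
            if not mask[i]:
                # Update estimates only where no object transits, so that the
                # background is not biased by moving objects.
                background[i] += alpha * (frame[i] - background[i])
                noise[i] += alpha * (residual[i] - noise[i])
        masks.append(mask)
    return masks
```

    Because both thresholds are multiples of the running noise estimate, the detector adapts automatically to quieter or noisier backgrounds, which is the core idea behind the noise-adaptive double thresholding named in the title.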

  14. Automated high-performance cIMT measurement techniques using patented AtheroEdge™: a screening and home monitoring system.

    PubMed

    Molinari, Filippo; Meiburger, Kristen M; Suri, Jasjit

    2011-01-01

The evaluation of the carotid artery wall is fundamental for the assessment of cardiovascular risk. This paper presents the general architecture of an automatic strategy that segments the lumen-intima and media-adventitia borders, classified under the class of patented AtheroEdge™ systems (Global Biomedical Technologies, Inc., CA, USA). Guidelines for producing accurate and repeatable measurements of the intima-media thickness are provided, and the problem of the different distance metrics one can adopt is addressed. We compared the results of a completely automatic algorithm that we developed with those of a semi-automatic algorithm, and show final segmentation results for both techniques. The overall rationale is to provide user-independent, high-performance techniques suitable for screening and remote monitoring.

  15. Automatic control of the preload in adaptive friction drives of chemical production machines

    NASA Astrophysics Data System (ADS)

    Balakin, P. D.

    2017-08-01

Based on the principle of endowing systems with the ability to adapt to actual parameters and operating conditions, an energy-efficient mechanical system built around a friction gear with automatically controlled preload is proposed; it keeps the mechanical efficiency of the transforming drive path adequate under multimode operation. This is achieved by an integrated control loop which, operating according to the laws of motion and powered by the energy of the main power flow, automatically changes the kinematic dimension of the section and, hence, the value of the preload in the friction contact. The given relations between forces and deformations in the control loop are required at the conceptual design stage to determine the design dimensions of power transmission elements with these new properties.

  16. Automatic Data Distribution for CFD Applications on Structured Grids

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Yan, Jerry

    1999-01-01

Data distribution is an important step in the implementation of any parallel algorithm. The data distribution determines data traffic and utilization of the interconnection network, and affects the overall code efficiency. In recent years a number of data distribution methods have been developed and used in real programs for improving data traffic. We use some of these methods for translating data dependence and affinity relations into data distribution directives. We describe an automatic data alignment and placement tool (ADAPT) which implements these methods and show its results for some CFD codes (NPB and ARC3D). The algorithms for program analysis and derivation of data distributions implemented in ADAPT are efficient three-pass algorithms. Most have linear complexity, with the exception of some graph algorithms of complexity O(n^4) in the worst case.

  17. [Realization of an adaptive method of simulating the process of temperature change of a cadaver on a microcomputer].

    PubMed

    Shved, E F; Novikov, P I; Vlasov, A Iu

    1989-01-01

A program based on a mathematical model of the temperature change of a dead body was developed for estimating the postmortem interval. Automatic retrieval of the problem solution was performed on programmable microcalculators of the "Electronica MK-61" type using an adaptive approach. Diagnostic accuracy, in the case of a body kept under constant cooling conditions, is +/- 3%.
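
    For illustration only, the idea of inverting a cooling model to estimate the postmortem interval can be sketched with a single-exponential (Newtonian) cooling law. This is a deliberately simplified stand-in for the model used in the paper, and the rate constant and initial body temperature below are arbitrary assumptions.

```python
import math

# Newtonian cooling: T(t) = T_env + (T0 - T_env) * exp(-k * t).
# Solving for t gives the elapsed time since death. The constant k and the
# initial temperature T0 are assumed values for this sketch, not the paper's.

def postmortem_interval(t_measured, t_env, t0=37.0, k=0.08):
    """Hours since death under a simple Newtonian cooling model."""
    return math.log((t0 - t_env) / (t_measured - t_env)) / k

def body_temperature(t_hours, t_env, t0=37.0, k=0.08):
    """Forward model, useful for checking the inversion."""
    return t_env + (t0 - t_env) * math.exp(-k * t_hours)
```

    Round-tripping the forward model through the inversion recovers the elapsed time, which is the basic consistency check any such estimator must pass.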

  18. An Approach for Automatic Generation of Adaptive Hypermedia in Education with Multilingual Knowledge Discovery Techniques

    ERIC Educational Resources Information Center

    Alfonseca, Enrique; Rodriguez, Pilar; Perez, Diana

    2007-01-01

    This work describes a framework that combines techniques from Adaptive Hypermedia and Natural Language processing in order to create, in a fully automated way, on-line information systems from linear texts in electronic format, such as textbooks. The process is divided into two steps: an "off-line" processing step, which analyses the source text,…

  19. Conceptual design of semi-automatic wheelbarrow to overcome ergonomics problems among palm oil plantation workers

    NASA Astrophysics Data System (ADS)

    Nawik, N. S. M.; Deros, B. M.; Rahman, M. N. A.; Sukadarin, E. H.; Nordin, N.; Tamrin, S. B. M.; Bakar, S. A.; Norzan, M. L.

    2015-12-01

An ergonomics problem is one of the main issues faced by palm oil plantation workers, especially during harvesting and collection of fresh fruit bunches (FFB). The intensive manual handling and labor activities involved have been associated with a high prevalence of musculoskeletal disorders (MSDs) among palm oil plantation workers. New and safe technology for machines and equipment in palm oil plantations is very important to help workers reduce risks and injuries while working. The aim of this research is to improve the design of a wheelbarrow so that it is suitable for workers on small oil palm plantations. The wheelbarrow design was drawn using CATIA ergonomic features. The ergonomics assessment was performed by comparison with the existing wheelbarrow design. The conceptual design was developed based on the problems that had been reported by workers. From the analysis of these problems, a concept design of an ergonomic semi-automatic wheelbarrow, safe and suitable for palm oil plantation workers, was finally produced.

  20. Development of a high-resolution automatic digital (urine/electrolytes) flow volume and rate measurement system of miniature size

    NASA Technical Reports Server (NTRS)

    Liu, F. F.

    1975-01-01

    To aid in the quantitative analysis of man's physiological rhythms, a flowmeter to measure circadian patterns of electrolyte excretion during various environmental stresses was developed. One initial flowmeter was designed and fabricated, the sensor of which is the approximate size of a wristwatch. The detector section includes a special type of dielectric integrating type sensor which automatically controls, activates, and deactivates the flow sensor data output by determining the presence or absence of fluid flow in the system, including operation under zero-G conditions. The detector also provides qualitative data on the composition of the fluid. A compact electronic system was developed to indicate flow rate as well as total volume per release or the cumulative volume of several releases in digital/analog forms suitable for readout or telemetry. A suitable data readout instrument is also provided. Calibration and statistical analyses of the performance functions required of the flowmeter were also conducted.

  1. Fully automatic segmentation of femurs with medullary canal definition in high and in low resolution CT scans.

    PubMed

    Almeida, Diogo F; Ruben, Rui B; Folgado, João; Fernandes, Paulo R; Audenaert, Emmanuel; Verhegghe, Benedict; De Beule, Matthieu

    2016-12-01

Femur segmentation can be an important tool in orthopedic surgical planning. However, in order to overcome the need for an experienced user with extensive knowledge of the techniques, segmentation should be fully automatic. In this paper a new fully automatic femur segmentation method for CT images is presented. This method is also able to define the medullary canal automatically and performs well even in low resolution CT scans. Fully automatic femoral segmentation was performed by adapting a template mesh of the femoral volume to the medical images. To achieve this, an adaptation of the active shape model (ASM) technique based on the statistical shape model (SSM) and local appearance model (LAM) of the femur, with a novel initialization method, was used to drive the template mesh deformation to fit the in-image femoral shape in a time-effective approach. With the proposed method a 98% convergence rate was achieved. For the high resolution CT image group the average error is less than 1 mm. For the low resolution image group the results are also accurate, with an average error of less than 1.5 mm. The proposed segmentation pipeline is accurate, robust and completely user free. The method is robust to patient orientation, image artifacts and poorly defined edges. The results excelled even in CT images with a significant slice thickness, i.e., above 5 mm. Medullary canal segmentation increases the geometric information that can be used in orthopedic surgical planning or in finite element analysis.

  2. Automatic, semi-automatic and manual validation of urban drainage data.

    PubMed

    Branisavljević, N; Prodanović, D; Pavlović, D

    2010-01-01

Advances in sensor technology and the possibility of automated long-distance data transmission have made continuous measurements the preferable way of monitoring urban drainage processes. Usually, the collected data have to be processed by an expert in order to detect and mark wrong data, remove them and replace them with interpolated data. In general, the first step in detecting wrong, anomalous data is called data quality assessment or data validation. Data validation consists of three parts: data preparation, validation score generation and score interpretation. This paper presents the overall framework for a data quality improvement system, suitable for automatic, semi-automatic or manual operation. The first two steps of the validation process are explained in more detail, using several validation methods on the same set of real-case data from the Belgrade sewer system. The final part of the validation process, the interpretation of the scores, needs to be investigated further on the developed system.
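
    As a hedged sketch of the first two validation steps (score generation, with interpolation standing in for data replacement), the following toy example scores each sample with a range check and a rate-of-change check. The method names, thresholds and the pessimistic combination rule are assumptions, not the validation methods actually used in the paper.

```python
# Illustrative validation-score generation for a sensor time series:
# each method scores every sample in [0, 1]; low scores flag suspect data.

def validate(series, lo=0.0, hi=5.0, max_step=1.0):
    """Score each sample with a range check and a rate-of-change check."""
    scores = []
    for i, v in enumerate(series):
        range_ok = 1.0 if lo <= v <= hi else 0.0
        step = abs(v - series[i - 1]) if i > 0 else 0.0
        step_ok = 1.0 if step <= max_step else 0.0
        scores.append(min(range_ok, step_ok))  # pessimistic combination
    return scores

def repair(series, scores, threshold=0.5):
    """Replace low-score samples by linear interpolation of valid neighbours."""
    out = list(series)
    valid = [i for i, s in enumerate(scores) if s >= threshold]
    for i, s in enumerate(scores):
        if s < threshold:
            left = max((j for j in valid if j < i), default=None)
            right = min((j for j in valid if j > i), default=None)
            if left is not None and right is not None:
                w = (i - left) / (right - left)
                out[i] = (1 - w) * series[left] + w * series[right]
    return out
```

    Note that the rate-of-change check also flags the sample immediately after a spike; a real system would interpret the scores (the third validation step) before deciding which samples to replace.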

  3. Automatic detection of cardiac cycle and measurement of the mitral annulus diameter in 4D TEE images

    NASA Astrophysics Data System (ADS)

    Graser, Bastian; Hien, Maximilian; Rauch, Helmut; Meinzer, Hans-Peter; Heimann, Tobias

    2012-02-01

Mitral regurgitation is a widespread problem. For successful surgical treatment, quantification of the mitral annulus, especially its diameter, is essential. Time-resolved 3D transesophageal echocardiography (TEE) is suitable for this task. Yet, manual measurement in four dimensions is extremely time consuming, which confirms the need for automatic quantification methods. The method we propose is capable of automatically detecting the cardiac cycle phase (systole or diastole) for each time step and measuring the mitral annulus diameter. This is done using total variation noise filtering, the graph cut segmentation algorithm and morphological operators. An evaluation took place using expert measurements on 4D TEE data of 13 patients. The cardiac cycle phase was detected correctly on 78% of all images and the mitral annulus diameter was measured with an average error of 3.08 mm. Its fully automatic processing makes the method easy to use in the clinical workflow, and it provides the surgeon with helpful information.

  4. Detecting brain tumor in pathological slides using hyperspectral imaging

    PubMed Central

    Ortega, Samuel; Fabelo, Himar; Camacho, Rafael; de la Luz Plaza, María; Callicó, Gustavo M.; Sarmiento, Roberto

    2018-01-01

    Hyperspectral imaging (HSI) is an emerging technology for medical diagnosis. This research work presents a proof-of-concept on the use of HSI data to automatically detect human brain tumor tissue in pathological slides. The samples, consisting of hyperspectral cubes collected from 400 nm to 1000 nm, were acquired from ten different patients diagnosed with high-grade glioma. Based on the diagnosis provided by pathologists, a spectral library of normal and tumor tissues was created and processed using three different supervised classification algorithms. Results prove that HSI is a suitable technique to automatically detect high-grade tumors from pathological slides. PMID:29552415

  5. Detecting brain tumor in pathological slides using hyperspectral imaging.

    PubMed

    Ortega, Samuel; Fabelo, Himar; Camacho, Rafael; de la Luz Plaza, María; Callicó, Gustavo M; Sarmiento, Roberto

    2018-02-01

    Hyperspectral imaging (HSI) is an emerging technology for medical diagnosis. This research work presents a proof-of-concept on the use of HSI data to automatically detect human brain tumor tissue in pathological slides. The samples, consisting of hyperspectral cubes collected from 400 nm to 1000 nm, were acquired from ten different patients diagnosed with high-grade glioma. Based on the diagnosis provided by pathologists, a spectral library of normal and tumor tissues was created and processed using three different supervised classification algorithms. Results prove that HSI is a suitable technique to automatically detect high-grade tumors from pathological slides.

  6. Automatic indexing of compound words based on mutual information for Korean text retrieval

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pan Koo Kim; Yoo Kun Cho

In this paper, we present an automatic indexing technique for compound words suitable for an agglutinative language, specifically Korean. First, we present the construction conditions for composing compound words as indexing terms. We also present the decomposition rules applicable to consecutive nouns to extract the full content of a text. Finally, we propose a measure to estimate the usefulness of a term, mutual information, to calculate the degree of word association of compound words, based on information-theoretic notions. By applying this method, our system has raised the precision rate for compound words from 72% to 87%.
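
    The mutual-information criterion can be sketched as pointwise mutual information computed from corpus counts. The counts and the decision threshold below are toy assumptions for illustration, not figures from the paper.

```python
import math

# PMI(x, y) = log2( P(x, y) / (P(x) * P(y)) ): high values mean the two
# consecutive nouns co-occur far more often than chance would predict,
# suggesting they form a compound index term.

def mutual_information(pair_count, x_count, y_count, n_pairs, n_words):
    p_xy = pair_count / n_pairs
    p_x = x_count / n_words
    p_y = y_count / n_words
    return math.log2(p_xy / (p_x * p_y))

def is_compound(pair_count, x_count, y_count, n_pairs, n_words, threshold=3.0):
    """Index two consecutive nouns as one compound term if PMI >= threshold."""
    return mutual_information(pair_count, x_count, y_count,
                              n_pairs, n_words) >= threshold
```

    For statistically independent words the score is near zero, so a positive threshold separates genuine compounds from chance adjacencies.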

  7. Automatic Lamp and Fan Control Based on Microcontroller

    NASA Astrophysics Data System (ADS)

    Widyaningrum, V. T.; Pramudita, Y. D.

    2018-01-01

In general, automation can be described as a process following pre-determined sequential steps with little or no human exertion. Automation is achieved with the use of various sensors suitable for observing the production processes, actuators, and different techniques and devices. In this research, the automation system developed comprises an automatic lamp and an automatic fan for a smart home. Both of these systems are processed using an Arduino Mega 2560 microcontroller. The microcontroller is used to obtain values of physical conditions through the sensors connected to it. The automatic lamp system requires an LDR (Light Dependent Resistor) sensor to detect light, while the automatic fan system requires a DHT11 sensor to detect temperature. In the tests that were carried out, the lamp and fan worked properly. The lamp turns on automatically when the light begins to darken, and turns off automatically when the light begins to brighten again. In addition, it can also be concluded that the readings of LDR sensors placed outside the room differ from the readings of LDR sensors placed inside the room, because the light intensity received by the LDR sensor in the room is blocked by the wall of the house or by other objects. The fan turns on automatically when the temperature is greater than 25°C, and the fan speed can also be adjusted. The fan turns off automatically when the temperature is less than or equal to 25°C.
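
    The decision rules described in the abstract can be sketched as follows. The original system runs as Arduino firmware; this Python sketch only mirrors the logic, and the LDR scale, the PWM speed mapping and its 40 °C upper bound are assumptions (only the 25 °C fan threshold comes from the text).

```python
LIGHT_THRESHOLD = 500   # LDR reading below this means "dark" (assumed scale)
TEMP_THRESHOLD = 25.0   # fan switches on above 25 degrees C (from the text)

def lamp_state(ldr_reading):
    """Lamp turns on when the ambient light falls below the threshold."""
    return ldr_reading < LIGHT_THRESHOLD

def fan_state(temperature_c):
    """Fan turns on above 25 C and off at or below it."""
    return temperature_c > TEMP_THRESHOLD

def fan_speed(temperature_c, min_temp=25.0, max_temp=40.0):
    """Map temperature to a 0-255 PWM duty cycle (assumed linear mapping)."""
    if temperature_c <= min_temp:
        return 0
    frac = min((temperature_c - min_temp) / (max_temp - min_temp), 1.0)
    return int(round(frac * 255))
```

    On the actual hardware these return values would drive digital output pins and an `analogWrite`-style PWM output rather than being returned to a caller.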

  8. Using Virtualization and Automatic Evaluation: Adapting Network Services Management Courses to the EHEA

    ERIC Educational Resources Information Center

    Ros, S.; Robles-Gomez, A.; Hernandez, R.; Caminero, A. C.; Pastor, R.

    2012-01-01

    This paper outlines the adaptation of a course on the management of network services in operating systems, called NetServicesOS, to the context of the new European Higher Education Area (EHEA). NetServicesOS is a mandatory course in one of the official graduate programs in the Faculty of Computer Science at the Universidad Nacional de Educacion a…

  9. Development of an Advanced, Automatic, Ultrasonic NDE Imaging System via Adaptive Learning Network Signal Processing Techniques

    DTIC Science & Technology

    1981-03-13


  10. Real time computer controlled weld skate

    NASA Technical Reports Server (NTRS)

    Wall, W. A., Jr.

    1977-01-01

A real-time, adaptive-control, automatic welding system was developed. This system utilizes the general-case geometrical relationships between a weldment and a weld skate to precisely maintain constant weld speed and torch angle along a contoured workpiece. The system is compatible with the gas tungsten arc weld process or can be adapted to other weld processes. Heli-arc cutting and machine tool routing operations are possible applications.

  11. Event-based knowledge elicitation of operating room management decision-making using scenarios adapted from information systems data

    PubMed Central

    2011-01-01

    Background No systematic process has previously been described for a needs assessment that identifies the operating room (OR) management decisions made by the anesthesiologists and nurse managers at a facility that do not maximize the efficiency of use of OR time. We evaluated whether event-based knowledge elicitation can be used practically for rapid assessment of OR management decision-making at facilities, whether scenarios can be adapted automatically from information systems data, and the usefulness of the approach. Methods A process of event-based knowledge elicitation was developed to assess OR management decision-making that may reduce the efficiency of use of OR time. Hypothetical scenarios addressing every OR management decision influencing OR efficiency were created from published examples. Scenarios are adapted, so that cues about conditions are accurate and appropriate for each facility (e.g., if OR 1 is used as an example in a scenario, the listed procedure is a type of procedure performed at the facility in OR 1). Adaptation is performed automatically using the facility's OR information system or anesthesia information management system (AIMS) data for most scenarios (43 of 45). Performing the needs assessment takes approximately 1 hour of local managers' time while they decide if their decisions are consistent with the described scenarios. A table of contents of the indexed scenarios is created automatically, providing a simple version of problem solving using case-based reasoning. For example, a new OR manager wanting to know the best way to decide whether to move a case can look in the chapter on "Moving Cases on the Day of Surgery" to find a scenario that describes the situation being encountered. Results Scenarios have been adapted and used at 22 hospitals. Few changes in decisions were needed to increase the efficiency of use of OR time. 
The few changes were heterogeneous among hospitals, showing the usefulness of individualized assessments. Conclusions Our technical advance is the development and use of automated event-based knowledge elicitation to identify suboptimal OR management decisions that decrease the efficiency of use of OR time. The adapted scenarios can be used in future decision-making. PMID:21214905

  12. Event-based knowledge elicitation of operating room management decision-making using scenarios adapted from information systems data.

    PubMed

    Dexter, Franklin; Wachtel, Ruth E; Epstein, Richard H

    2011-01-07

    No systematic process has previously been described for a needs assessment that identifies the operating room (OR) management decisions made by the anesthesiologists and nurse managers at a facility that do not maximize the efficiency of use of OR time. We evaluated whether event-based knowledge elicitation can be used practically for rapid assessment of OR management decision-making at facilities, whether scenarios can be adapted automatically from information systems data, and the usefulness of the approach. A process of event-based knowledge elicitation was developed to assess OR management decision-making that may reduce the efficiency of use of OR time. Hypothetical scenarios addressing every OR management decision influencing OR efficiency were created from published examples. Scenarios are adapted, so that cues about conditions are accurate and appropriate for each facility (e.g., if OR 1 is used as an example in a scenario, the listed procedure is a type of procedure performed at the facility in OR 1). Adaptation is performed automatically using the facility's OR information system or anesthesia information management system (AIMS) data for most scenarios (43 of 45). Performing the needs assessment takes approximately 1 hour of local managers' time while they decide if their decisions are consistent with the described scenarios. A table of contents of the indexed scenarios is created automatically, providing a simple version of problem solving using case-based reasoning. For example, a new OR manager wanting to know the best way to decide whether to move a case can look in the chapter on "Moving Cases on the Day of Surgery" to find a scenario that describes the situation being encountered. Scenarios have been adapted and used at 22 hospitals. Few changes in decisions were needed to increase the efficiency of use of OR time. The few changes were heterogeneous among hospitals, showing the usefulness of individualized assessments. 
Our technical advance is the development and use of automated event-based knowledge elicitation to identify suboptimal OR management decisions that decrease the efficiency of use of OR time. The adapted scenarios can be used in future decision-making.

  13. Sapc - Application for Adapting Scanned Analogue Photographs to Use Them in Structure from Motion Technology

    NASA Astrophysics Data System (ADS)

    Salach, A.

    2017-05-01

The documentary value of scanned analogue photographs is invaluable. A large and rich collection of archival photographs is often the only source of information about the past of a selected area. This paper presents a method for adapting scanned analogue photographs to a form suitable for use in Structure from Motion technology. For this purpose, an automatic algorithm was developed and implemented in an application called SAPC (Scanned Aerial Photographs Correction), which transforms the scans into a form whose characteristics are similar to images captured by a digital camera. The images created as output by the program are characterized by the same principal point position in each photo and the same resolution, achieved by cutting out the black photo frame. Additionally, SAPC generates a binary image file that can mask the areas of the fiducial marks. In the experimental section, scanned analogue photographs of Warsaw, captured in 1986, were used in two variants: unprocessed and processed with the SAPC application. An in-depth analysis was conducted of the influence of the SAPC transformation on the quality of the spatial orientation of the photographs. Block adjustment through aerial triangulation was calculated using two SfM software products, Agisoft PhotoScan and Pix4D, and their results were compared with results obtained from professional photogrammetric software, Trimble Inpho. The author concluded that pre-processing in the SAPC application had a positive impact on the quality of the block orientation of scanned analogue photographs using SfM technology.

  14. Modeling Cooperative Threads to Project GPU Performance for Adaptive Parallelism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meng, Jiayuan; Uram, Thomas; Morozov, Vitali A.

Most accelerators, such as graphics processing units (GPUs) and vector processors, are particularly suitable for accelerating massively parallel workloads. On the other hand, conventional workloads are developed for multi-core parallelism, which often scales to only a few dozen OpenMP threads. When hardware threads significantly outnumber the degree of parallelism in the outer loop, programmers are challenged with efficient hardware utilization. A common solution is to further exploit the parallelism hidden deep in the code structure. Such parallelism is less structured: parallel and sequential loops may be imperfectly nested within each other, neighboring inner loops may exhibit different concurrency patterns (e.g., Reduction vs. Forall) yet have to be parallelized in the same parallel section, and many input-dependent transformations have to be explored. A programmer often employs a larger group of hardware threads to cooperatively walk through a smaller outer-loop partition and adaptively exploit any encountered parallelism. This process is time-consuming and error-prone, yet the risk of gaining little or no performance remains high for such workloads. To reduce risk and guide implementation, we propose a technique to model workloads with limited parallelism that can automatically explore and evaluate transformations involving cooperative threads. Eventually, our framework projects the best achievable performance and the most promising transformations without implementing GPU code or using physical hardware. We envision our technique being integrated into future compilers or optimization frameworks for autotuning.

  15. Space-time adaptive solution of inverse problems with the discrete adjoint method

    NASA Astrophysics Data System (ADS)

    Alexe, Mihai; Sandu, Adrian

    2014-08-01

    This paper develops a framework for the construction and analysis of discrete adjoint sensitivities in the context of time dependent, adaptive grid, adaptive step models. Discrete adjoints are attractive in practice since they can be generated with low effort using automatic differentiation. However, this approach brings several important challenges. The space-time adjoint of the forward numerical scheme may be inconsistent with the continuous adjoint equations. A reduction in accuracy of the discrete adjoint sensitivities may appear due to the inter-grid transfer operators. Moreover, the optimization algorithm may need to accommodate state and gradient vectors whose dimensions change between iterations. This work shows that several of these potential issues can be avoided through a multi-level optimization strategy using discontinuous Galerkin (DG) hp-adaptive discretizations paired with Runge-Kutta (RK) time integration. We extend the concept of dual (adjoint) consistency to space-time RK-DG discretizations, which are then shown to be well suited for the adaptive solution of time-dependent inverse problems. Furthermore, we prove that DG mesh transfer operators on general meshes are also dual consistent. This allows the simultaneous derivation of the discrete adjoint for both the numerical solver and the mesh transfer logic with an automatic code generation mechanism such as algorithmic differentiation (AD), potentially speeding up development of large-scale simulation codes. The theoretical analysis is supported by numerical results reported for a two-dimensional non-stationary inverse problem.
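
    The discrete-adjoint machinery underlying such sensitivity computations can be stated compactly in its standard generic form (this is the textbook recursion, not a formula quoted from the paper):

```latex
% Forward model advanced step by step:
%   x_{k+1} = N_k(x_k), \quad k = 0, \dots, K-1.
% For a cost function J(x_K), the adjoint variables satisfy the reverse sweep
\[
  \lambda_K = \left(\frac{\partial J}{\partial x_K}\right)^{\!T}, \qquad
  \lambda_k = \left(\frac{\partial N_k}{\partial x_k}\right)^{\!T} \lambda_{k+1},
  \qquad k = K-1, \dots, 0,
\]
% and the gradient with respect to the initial state is
\[
  \nabla_{x_0} J = \lambda_0 .
\]
```

    This reverse sweep is exactly what automatic (algorithmic) differentiation of the forward code generates, which is why the paper can derive adjoints for both the RK-DG solver and the mesh transfer operators mechanically.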

  16. Assessing the impact of graphical quality on automatic text recognition in digital maps

    NASA Astrophysics Data System (ADS)

    Chiang, Yao-Yi; Leyk, Stefan; Honarvar Nazari, Narges; Moghaddam, Sima; Tan, Tian Xiang

    2016-08-01

    Converting geographic features (e.g., place names) in map images into a vector format is the first step for incorporating cartographic information into a geographic information system (GIS). With the advancement in computational power and algorithm design, map processing systems have been considerably improved over the last decade. However, the fundamental map processing techniques such as color image segmentation, (map) layer separation, and object recognition are sensitive to minor variations in graphical properties of the input image (e.g., scanning resolution). As a result, most map processing results would not meet user expectations if the user does not "properly" scan the map of interest, pre-process the map image (e.g., using compression or not), and train the processing system accordingly. These issues could slow down the further advancement of map processing techniques, as such unsuccessful attempts create a discouraged user community, and less sophisticated tools would be perceived as more viable solutions. Thus, it is important to understand what kinds of maps are suitable for automatic map processing and what types of results and process-related errors can be expected. In this paper, we shed light on these questions by using a typical map processing task, text recognition, to discuss a number of map instances that vary in suitability for automatic processing. We also present an extensive experiment on a diverse set of scanned historical maps to provide measures of baseline performance of a standard text recognition tool under varying map conditions (graphical quality) and text representations (which can vary even within the same map sheet). Our experimental results help the user understand what to expect when a fully or semi-automatic map processing system is used to process a scanned map with certain (varying) graphical properties and complexities in map content.

  17. A Sequential Insect Dispenser for Behavioral Experiments

    ERIC Educational Resources Information Center

    Gans, Carl; Mix, Harold

    1974-01-01

    Describes the construction and operation of an automatic insect dispenser suitable for feeding small vertebrates that are being maintained for behavioral experiments. The food morsels are ejected from their chambers by an air jet, and may be directed at a particular portion of the cage or distributed to different areas. (JR)

  18. 30 CFR 56.13021 - High-pressure hose connections.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false High-pressure hose connections. 56.13021... and Boilers § 56.13021 High-pressure hose connections. Except where automatic shutoff valves are used, safety chains or other suitable locking devices shall be used at connections to machines of high-pressure...

  19. Validation of Symbolic Expressions in Circuit Analysis E-Learning

    ERIC Educational Resources Information Center

    Weyten, L.; Rombouts, P.; Catteau, B.; De Bock, M.

    2011-01-01

    Symbolic circuit analysis is a cornerstone of electrical engineering education. Solving a suitable set of selected problems is essential to developing professional skills in the field. A new method is presented for automatic validation of circuit equations representing a student's intermediate steps in the solving process. Providing this immediate…

  20. Master/Programmable-Slave Computer

    NASA Technical Reports Server (NTRS)

    Smaistrla, David; Hall, William A.

    1990-01-01

    Unique modular computer features compactness, low power, mass storage of data, multiprocessing, and choice of various input/output modes. Master processor communicates with user via usual keyboard and video display terminal. Coordinates operations of as many as 24 slave processors, each dedicated to different experiment. Each slave circuit card includes slave microprocessor and assortment of input/output circuits for communication with external equipment, with master processor, and with other slave processors. Adaptable to industrial process control with selectable degrees of automatic control, automatic and/or manual monitoring, and manual intervention.

  1. A comparison of flight and simulation data for three automatic landing system control laws for the Augmentor wing jet STOL research airplane

    NASA Technical Reports Server (NTRS)

    Feinreich, B.; Gevaert, G.

    1980-01-01

    Automatic flare and decrab control laws for conventional takeoff and landing aircraft were adapted to the unique requirements of the powered lift short takeoff and landing airplane. Three longitudinal autoland control laws were developed. Direct lift and direct drag control were used in the longitudinal axis. A fast time simulation was used for the control law synthesis, with emphasis on stochastic performance prediction and evaluation. Good correlation with flight test results was obtained.

  2. Patient-Specific Seizure Detection in Long-Term EEG Using Signal-Derived Empirical Mode Decomposition (EMD)-based Dictionary Approach.

    PubMed

    Kaleem, Muhammad; Gurve, Dharmendra; Guergachi, Aziz; Krishnan, Sridhar

    2018-06-25

    The objective of the work described in this paper is the development of a computationally efficient methodology for patient-specific automatic seizure detection in long-term multi-channel EEG recordings. Approach: A novel patient-specific seizure detection approach based on a signal-derived Empirical Mode Decomposition (EMD) dictionary is proposed. For this purpose, we use an empirical framework for EMD-based dictionary creation and learning, inspired by traditional dictionary learning methods, in which the EMD-based dictionary is learned from the multi-channel EEG data being analyzed for automatic seizure detection. We present the algorithm for dictionary creation and learning, whose purpose is to learn dictionaries with a small number of atoms. Using training signals belonging to seizure and non-seizure classes, an initial dictionary, termed the raw dictionary, is formed. The atoms of the raw dictionary are composed of intrinsic mode functions obtained after decomposition of the training signals using the empirical mode decomposition algorithm. The raw dictionary is then trained using a learning algorithm, resulting in a substantial decrease in the number of atoms in the trained dictionary. The trained dictionary is then used for automatic seizure detection, such that the coefficients of orthogonal projections of test signals against the trained dictionary form the features used for classification of test signals into seizure and non-seizure classes. Thus no hand-engineered features have to be extracted from the data as in traditional seizure detection approaches. Main results: The performance of the proposed approach is validated using the CHB-MIT benchmark database, and averaged accuracy, sensitivity and specificity values of 92.9%, 94.3% and 91.5%, respectively, are obtained using a support vector machine classifier and five-fold cross-validation. 
These results are compared with other approaches using the same database, and the suitability of the approach for seizure detection in long-term multi-channel EEG recordings is discussed. Significance: The proposed approach describes a computationally efficient method for automatic seizure detection in long-term multi-channel EEG recordings. The method does not rely on hand-engineered features, as are required in traditional approaches. Furthermore, once the dictionary is formed and trained, it can be used for automatic seizure detection of newly recorded data, making the approach well suited to long-term multi-channel EEG recordings. © 2018 IOP Publishing Ltd.
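    A sketch of the projection-feature step, under stated assumptions: random unit-norm atoms stand in for the intrinsic-mode-function atoms a real EMD dictionary would contain, and the least-squares coefficients play the role of the classification features.

```python
import numpy as np

# Hypothetical dictionary of 8 atoms over 256-sample epochs; real atoms would
# be intrinsic mode functions from EMD of training EEG, not random vectors.
rng = np.random.default_rng(1)
D = rng.standard_normal((256, 8))
D /= np.linalg.norm(D, axis=0)               # unit-norm atoms

def projection_features(x, dictionary):
    """Least-squares projection coefficients of signal x against the atoms."""
    coeffs, *_ = np.linalg.lstsq(dictionary, x, rcond=None)
    return coeffs

# A test "signal" built from atoms 0 and 7; projection recovers the weights,
# which would then feed a classifier (e.g., an SVM, as in the paper).
x = D @ np.array([3.0, 0, 0, 0, 0, 0, 0, 1.0])
features = projection_features(x, D)
```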

  3. An Adaptively-Refined, Cartesian, Cell-Based Scheme for the Euler and Navier-Stokes Equations. Ph.D. Thesis - Michigan Univ.

    NASA Technical Reports Server (NTRS)

    Coirier, William John

    1994-01-01

    A Cartesian, cell-based scheme for solving the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, polygonal 'cut' cells are created. The geometry of the cut cells is computed using polygon-clipping algorithms. The grid is stored in a binary-tree data structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded, with a limited linear reconstruction of the primitive variables used to provide input states to an approximate Riemann solver for computing the fluxes between neighboring cells. A multi-stage time-stepping scheme is used to reach a steady-state solution. Validation of the Euler solver with benchmark numerical and exact solutions is presented. An assessment of the accuracy of the approach is made by uniform and adaptive grid refinements for a steady, transonic, exact solution to the Euler equations. The error of the approach is directly compared to a structured solver formulation. A non-smooth flow is also assessed for grid convergence, comparing uniform and adaptively refined results. Several formulations of the viscous terms are assessed analytically, both for accuracy and positivity. The two best formulations are used to compute adaptively refined solutions of the Navier-Stokes equations. These solutions are compared to each other, and to experimental results and/or theory, for a series of low and moderate Reynolds number flow fields. The most suitable viscous discretization is demonstrated for geometrically complicated internal flows. 
For flows at high Reynolds numbers, both an altered grid-generation procedure and a different formulation of the viscous terms are shown to be necessary. A hybrid Cartesian/body-fitted grid generation approach is demonstrated. In addition, a grid-generation procedure based on body-aligned cell cutting, coupled with a viscous stencil-construction procedure based on quadratic programming, is presented.
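    The recursive-subdivision idea can be sketched with a toy quadtree; the refinement criterion (distance to a hypothetical body near the origin) and the depth limit are invented for illustration:

```python
# Toy quadtree refinement of a single root Cartesian cell. Each cell is
# (x, y, size); a cell is split into four children until the criterion fails
# or the depth limit is reached, mimicking geometry-driven subdivision.
def refine(cell, needs_refinement, depth=0, max_depth=4):
    x, y, s = cell
    if depth == max_depth or not needs_refinement(cell):
        return [cell]                         # leaf cell
    h = s / 2
    children = [(x, y, h), (x + h, y, h), (x, y + h, h), (x + h, y + h, h)]
    return [leaf for c in children
            for leaf in refine(c, needs_refinement, depth + 1, max_depth)]

# Refine toward a "body" near the origin: cells whose corner lies within
# distance 0.3 of (0, 0) are subdivided.
leaves = refine((0.0, 0.0, 1.0),
                lambda c: (c[0] ** 2 + c[1] ** 2) ** 0.5 < 0.3)
```

The leaves tile the unit square exactly, with the smallest cells concentrated near the origin, which is the behavior the binary-tree storage in the paper is designed to support.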

  4. Robust Cell Detection of Histopathological Brain Tumor Images Using Sparse Reconstruction and Adaptive Dictionary Selection

    PubMed Central

    Su, Hai; Xing, Fuyong; Yang, Lin

    2016-01-01

    Successful diagnostic and prognostic stratification, treatment outcome prediction, and therapy planning depend on reproducible and accurate pathology analysis. Computer aided diagnosis (CAD) is a useful tool to help doctors make better decisions in cancer diagnosis and treatment. Accurate cell detection is often an essential prerequisite for subsequent cellular analysis. The major challenge of robust brain tumor nuclei/cell detection is to handle significant variations in cell appearance and to split touching cells. In this paper, we present an automatic cell detection framework using sparse reconstruction and adaptive dictionary learning. The main contributions of our method are: 1) a sparse reconstruction based approach to split touching cells; 2) an adaptive dictionary learning method used to handle cell appearance variations. The proposed method has been extensively tested on a data set with more than 2000 cells extracted from 32 whole slide scanned images. The automatic cell detection results are compared with the manually annotated ground truth and other state-of-the-art cell detection algorithms. The proposed method achieves the best cell detection accuracy, with an F1 score of 0.96. PMID:26812706

  5. Natural language processing and visualization in the molecular imaging domain.

    PubMed

    Tulipano, P Karina; Tao, Ying; Millar, William S; Zanzonico, Pat; Kolbert, Katherine; Xu, Hua; Yu, Hong; Chen, Lifeng; Lussier, Yves A; Friedman, Carol

    2007-06-01

    Molecular imaging is at the crossroads of genomic sciences and medical imaging. Information within the molecular imaging literature could be used to link to genomic and imaging information resources and to organize and index images in a way that is potentially useful to researchers. A number of natural language processing (NLP) systems are available to automatically extract information from genomic literature. One existing NLP system, known as BioMedLEE, automatically extracts biological information consisting of biomolecular substances and phenotypic data. This paper focuses on the adaptation, evaluation, and application of BioMedLEE to the molecular imaging domain. In order to adapt BioMedLEE for this domain, we extend an existing molecular imaging terminology and incorporate it into BioMedLEE. BioMedLEE's performance is assessed with a formal evaluation study. The system's performance, measured as recall and precision, is 0.74 (95% CI: [.70-.76]) and 0.70 (95% CI: [.63-.76]), respectively. We adapt a Java viewer known as PGviewer for the simultaneous visualization of images with NLP-extracted information.

  6. Self-Learning Adaptive Umbrella Sampling Method for the Determination of Free Energy Landscapes in Multiple Dimensions

    PubMed Central

    Wojtas-Niziurski, Wojciech; Meng, Yilin; Roux, Benoit; Bernèche, Simon

    2013-01-01

    The potential of mean force describing conformational changes of biomolecules is a central quantity that determines the function of biomolecular systems. Calculating an energy landscape of a process that depends on three or more reaction coordinates might require substantial computational power, making some multidimensional calculations practically impossible. Here, we present an efficient automated umbrella sampling strategy for calculating multidimensional potentials of mean force. The method progressively learns by itself, through a feedback mechanism, which regions of a multidimensional space are worth exploring and automatically generates a set of umbrella sampling windows that is adapted to the system. The self-learning adaptive umbrella sampling method is first explained with illustrative examples based on simplified reduced model systems, and then applied to two non-trivial situations: the conformational equilibrium of the pentapeptide Met-enkephalin in solution and ion permeation in the KcsA potassium channel. With this method, it is demonstrated that a significantly smaller number of umbrella windows needs to be employed to characterize the free energy landscape over the most relevant regions without any loss in accuracy. PMID:23814508
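    The self-learning idea can be caricatured in one dimension: new umbrella windows are spawned only next to windows whose estimated free energy lies below a cutoff, so high-energy regions are never visited. The surface, cutoff, and integer window grid below are all invented for illustration.

```python
# Grow umbrella windows outward from a starting window, adding a neighbor only
# if its energy is below the cutoff (a stand-in for the paper's feedback rule).
def grow_windows(energy, start, cutoff, step=1):
    active, seen = [start], {start}
    for center in active:                     # breadth-first expansion
        for nb in (center - step, center + step):
            if nb not in seen and energy(nb) < cutoff:
                seen.add(nb)
                active.append(nb)
    return sorted(seen)

# Double-well surface with a barrier of height 9 at x = 0: a cutoff of 10 lets
# the windows flood both wells and the barrier, but not the steep outer walls.
def double_well(x):
    return min((x - 3) ** 2, (x + 3) ** 2)

windows = grow_windows(double_well, start=-3, cutoff=10.0)
```

With a cutoff below the barrier height, only the starting well is covered, which is exactly the window economy the method exploits in higher dimensions.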

  7. Automatic detection of cone photoreceptors in split detector adaptive optics scanning light ophthalmoscope images.

    PubMed

    Cunefare, David; Cooper, Robert F; Higgins, Brian; Katz, David F; Dubra, Alfredo; Carroll, Joseph; Farsiu, Sina

    2016-05-01

    Quantitative analysis of the cone photoreceptor mosaic in the living retina is potentially useful for early diagnosis and prognosis of many ocular diseases. Non-confocal split detector based adaptive optics scanning light ophthalmoscope (AOSLO) imaging reveals the cone photoreceptor inner segment mosaics often not visualized on confocal AOSLO imaging. Despite recent advances in automated cone segmentation algorithms for confocal AOSLO imagery, quantitative analysis of split detector AOSLO images is currently a time-consuming manual process. In this paper, we present the fully automatic adaptive filtering and local detection (AFLD) method for detecting cones in split detector AOSLO images. We validated our algorithm on 80 images from 10 subjects, showing an overall mean Dice's coefficient of 0.95 (standard deviation 0.03), when comparing our AFLD algorithm to an expert grader. This is comparable to the inter-observer Dice's coefficient of 0.94 (standard deviation 0.04). To the best of our knowledge, this is the first validated, fully-automated segmentation method which has been applied to split detector AOSLO images.
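    For reference, Dice's coefficient between the algorithm's detections and a grader's is twice the overlap divided by the total count; a minimal sketch with made-up cone coordinates:

```python
# Dice's coefficient between two detection sets: 2|A ∩ B| / (|A| + |B|).
def dice(a, b):
    if not a and not b:
        return 1.0                  # both empty: perfect agreement by convention
    return 2 * len(a & b) / (len(a) + len(b))

# Hypothetical matched cone positions from the algorithm and an expert grader.
algorithm = {(10, 12), (40, 33), (55, 80), (90, 14)}
grader = {(10, 12), (40, 33), (55, 80), (70, 70)}
score = dice(algorithm, grader)     # 3 shared out of 4 + 4 detections
```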

  8. Automatic detection of multiple UXO-like targets using magnetic anomaly inversion and self-adaptive fuzzy c-means clustering

    NASA Astrophysics Data System (ADS)

    Yin, Gang; Zhang, Yingtang; Fan, Hongbo; Ren, Guoquan; Li, Zhining

    2017-12-01

    We have developed a method for automatically detecting UXO-like targets based on magnetic anomaly inversion and self-adaptive fuzzy c-means clustering. Magnetic anomaly inversion methods are used to estimate the initial locations of multiple UXO-like sources. Although these initial locations have some errors with respect to the real positions, they form dense clouds around the actual positions of the magnetic sources. We then use the self-adaptive fuzzy c-means clustering algorithm to cluster these initial locations. The estimated number of cluster centroids represents the number of targets, and the cluster centroids are regarded as the locations of the magnetic targets. The effectiveness of the method has been demonstrated using synthetic datasets. Computational results show that the proposed method can be applied to the case of several UXO-like targets randomly scattered within a confined, shallow subsurface volume. A field test was carried out to test the validity of the proposed method, and the experimental results show that the prearranged magnets can be detected unambiguously and located precisely.
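    A compact fuzzy c-means sketch with a fixed cluster count (the paper's self-adaptive variant also estimates the number of clusters, which is omitted here); the point clouds below mimic noisy inversion estimates around two hypothetical targets:

```python
import numpy as np

# Standard fuzzy c-means: soft memberships u and weighted centroid updates.
def fuzzy_c_means(x, c, m=2.0, iters=50):
    # Deterministic spread-out initialization from the data points.
    centers = x[np.linspace(0, len(x) - 1, c).astype(int)]
    for _ in range(iters):
        d = np.linalg.norm(x[:, None] - centers[None], axis=2) + 1e-12
        # u[k, i] = 1 / sum_j (d[k, i] / d[k, j]) ** (2 / (m - 1))
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
        w = u ** m
        centers = (w.T @ x) / w.sum(axis=0)[:, None]
    return centers, u

# Two clouds of noisy location estimates around "targets" at (0,0) and (5,5).
pts = np.vstack([np.random.default_rng(1).normal([0, 0], 0.1, (50, 2)),
                 np.random.default_rng(2).normal([5, 5], 0.1, (50, 2))])
centers, membership = fuzzy_c_means(pts, 2)
```

The recovered centroids land on the two cloud centers, which is the role the cluster centroids play as target locations in the paper.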

  9. A methodology for post-mainshock probabilistic assessment of building collapse risk

    USGS Publications Warehouse

    Luco, N.; Gerstenberger, M.C.; Uma, S.R.; Ryu, H.; Liel, A.B.; Raghunandan, M.

    2011-01-01

    This paper presents a methodology for post-earthquake probabilistic risk (of damage) assessment that we propose in order to develop a computational tool for automatic or semi-automatic assessment. The methodology utilizes the same so-called risk integral which can be used for pre-earthquake probabilistic assessment. The risk integral couples (i) ground motion hazard information for the location of a structure of interest with (ii) knowledge of the fragility of the structure with respect to potential ground motion intensities. In the proposed post-mainshock methodology, the ground motion hazard component of the risk integral is adapted to account for aftershocks which are deliberately excluded from typical pre-earthquake hazard assessments and which decrease in frequency with the time elapsed since the mainshock. Correspondingly, the structural fragility component is adapted to account for any damage caused by the mainshock, as well as any uncertainty in the extent of this damage. The result of the adapted risk integral is a fully-probabilistic quantification of post-mainshock seismic risk that can inform emergency response mobilization, inspection prioritization, and re-occupancy decisions.
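    As a toy illustration of the risk integral (with an invented power-law hazard curve and lognormal fragility, not values from the paper), the collapse rate is the fragility integrated against the absolute slope of the hazard curve:

```python
import numpy as np
from math import erf, log, sqrt

# Lognormal fragility: P(collapse | ground-motion intensity im).
# Median capacity 0.8 and dispersion 0.4 are made-up illustrative numbers.
def fragility(im, median=0.8, beta=0.4):
    return 0.5 * (1 + erf(log(im / median) / (beta * sqrt(2))))

im = np.linspace(0.01, 3.0, 3000)
# Toy hazard curve lambda(im) = 1e-4 * im^-2.5, so |d lambda / d im| is:
hazard_slope = 2.5e-4 * im ** -3.5
g = np.array([fragility(x) for x in im]) * hazard_slope
collapse_rate = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(im))   # trapezoid rule
```

For this power-law/lognormal pair the integral has a known closed form, lambda(median) * exp(k^2 * beta^2 / 2) with k = 2.5, roughly 2.9e-4 per year, which the numerical quadrature approaches up to the truncation of the intensity range.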

  10. A prototype automatic phase compensation module

    NASA Technical Reports Server (NTRS)

    Terry, John D.

    1992-01-01

    The growing demands for high-gain and accurate satellite communication systems will necessitate the utilization of large reflector systems. One area of concern for reflector-based satellite communication is large-scale surface deformation due to thermal effects. These distortions, when present, can degrade the performance of the reflector system appreciably. This performance degradation is manifested by a decrease in peak gain, an increase in sidelobe level, and pointing errors. It is essential to compensate for these distortion effects and to maintain the required system performance in the operating space environment. For this reason the development of a technique to offset the degradation effects is highly desirable. Currently, most research is directed at developing better materials for the reflector. These materials have a lower coefficient of linear expansion, thereby reducing the surface errors. Alternatively, one can minimize the distortion effects of these large-scale errors by adaptive phased array compensation. Adaptive phased array techniques have been studied extensively at NASA and elsewhere. Presented in this paper is a prototype automatic phase compensation module, designed and built at NASA Lewis Research Center, which is the first stage of development for an adaptive array compensation module.

  11. Automobile or other conveyance and adaptive equipment certificate of eligibility for veterans or members of the armed forces with amyotrophic lateral sclerosis. Interim final rule.

    PubMed

    2015-02-25

    The Department of Veterans Affairs (VA) is amending its adjudication regulation regarding certificates of eligibility for financial assistance in the purchase of an automobile or other conveyance and adaptive equipment. The amendment authorizes automatic issuance of a certificate of eligibility for financial assistance in the purchase of an automobile or other conveyance and adaptive equipment to all veterans with service-connected amyotrophic lateral sclerosis (ALS) and members of the Armed Forces serving on active duty with ALS.

  12. Adaptive hyperspectral imager: design, modeling, and control

    NASA Astrophysics Data System (ADS)

    McGregor, Scot; Lacroix, Simon; Monmayrant, Antoine

    2015-08-01

    An adaptive, hyperspectral imager is presented. We propose a system with easily adaptable spectral resolution, adjustable acquisition time, and high spatial resolution that is independent of spectral resolution. The system makes it possible to define a variety of acquisition schemes, and in particular near-snapshot acquisitions that may be used to measure the spectral content of given or automatically detected regions of interest. The proposed system is modelled and simulated, and tests on a first prototype validate the approach, achieving near-snapshot spectral acquisitions without resorting to computationally heavy post-processing or cumbersome calibration.

  13. Temperature actuated automatic safety rod release

    DOEpatents

    Hutter, E.; Pardini, J.A.; Walker, D.E.

    1984-03-13

    A temperature-actuated apparatus is disclosed for releasably supporting a safety rod in a nuclear reactor, comprising a safety rod upper adapter having a retention means, a drive shaft which houses the upper adapter, and a bimetallic means supported within the drive shaft and having at least one ledge which engages a retention means of the safety rod upper adapter. A pre-determined increase in temperature causes the bimetallic means to deform so that the ledge disengages from the retention means, whereby the bimetallic means releases the safety rod into the core of the reactor.

  14. Temperature actuated automatic safety rod release

    DOEpatents

    Hutter, Ernest; Pardini, John A.; Walker, David E.

    1987-01-01

    A temperature-actuated apparatus is disclosed for releasably supporting a safety rod in a nuclear reactor, comprising a safety rod upper adapter having a retention means, a drive shaft which houses the upper adapter, and a bimetallic means supported within the drive shaft and having at least one ledge which engages a retention means of the safety rod upper adapter. A pre-determined increase in temperature causes the bimetallic means to deform so that the ledge disengages from the retention means, whereby the bimetallic means releases the safety rod into the core of the reactor.

  15. Adaptive image inversion of contrast 3D echocardiography for enabling automated analysis.

    PubMed

    Shaheen, Anjuman; Rajpoot, Kashif

    2015-08-01

    Contrast 3D echocardiography (C3DE) is commonly used to enhance the visual quality of ultrasound images in comparison with non-contrast 3D echocardiography (3DE). Although the image quality in C3DE is perceived to be improved for visual analysis, it actually deteriorates for the purpose of automatic or semi-automatic analysis due to higher speckle noise and intensity inhomogeneity. Therefore, LV endocardial feature extraction and segmentation from C3DE images remains a challenging problem. To address this challenge, this work proposes an adaptive pre-processing method to invert the appearance of the C3DE image. The image inversion is based on an image intensity threshold value which is automatically estimated through image histogram analysis. In the inverted appearance, the LV cavity appears dark while the myocardium appears bright, thus making it similar in appearance to a 3DE image. Moreover, the resulting inverted image has a high-contrast, low-noise appearance, yielding a strong LV endocardium boundary and facilitating feature extraction for segmentation. Our results demonstrate that the inverted appearance of the contrast image enables the subsequent LV segmentation. Copyright © 2015 Elsevier Ltd. All rights reserved.
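    A minimal sketch of the inversion idea, with Otsu's method standing in for the paper's (unspecified) histogram analysis; the one-dimensional "image" and all intensity values below are invented:

```python
import numpy as np

# Estimate an intensity threshold from the histogram via Otsu's between-class
# variance criterion.
def otsu_threshold(img, bins=256):
    hist, edges = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                          # class-0 weight at each split
    mu = np.cumsum(p * centers)                # class-0 cumulative mean mass
    between = (mu[-1] * w0 - mu) ** 2 / (w0 * (1 - w0) + 1e-12)
    return centers[np.argmax(between)]

# Invert intensities so the bright contrast-filled cavity becomes dark; in the
# paper the estimated threshold drives the inversion, here we simply report it.
def invert(img):
    t = otsu_threshold(img)
    return img.max() + img.min() - img, t

frame = np.concatenate([np.full(500, 200.0),   # bright LV cavity pixels
                        np.full(500, 40.0)])   # darker myocardium pixels
inverted, thresh = invert(frame)
```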

  16. [Study of automatic marine oil spills detection using imaging spectroscopy].

    PubMed

    Liu, De-Lian; Han, Liang; Zhang, Jian-Qi

    2013-11-01

    To reduce manual auxiliary work in the oil spill detection process, an automatic oil spill detection method based on an adaptive matched filter is presented. Firstly, the characteristics of the reflectance spectral signature of the C-H bond in oil spills are analyzed, and an oil spill spectral signature extraction model is designed using the spectral feature of the C-H bond. It is then used to obtain the reference spectral signature for the following oil spill detection step. Secondly, the reflectance spectral signatures of sea water, clouds, and oil spills are compared, and the bands that show large differences among them are selected. Using these bands, the sea water pixels are segmented and the background parameters are then calculated. Finally, the classical adaptive matched filter from target detection is improved and introduced for oil spill detection. The proposed method is applied to real airborne visible/infrared imaging spectrometer (AVIRIS) hyperspectral imagery captured during the Deepwater Horizon oil spill in the Gulf of Mexico. The results show that the proposed method has high efficiency, does not need manual auxiliary work, and can be used for automatic detection of marine oil spills.
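    The core of a generic matched-filter detector can be sketched in a few lines; the band count, signatures, and noise statistics below are invented, and the paper's specific improvements are not reproduced:

```python
import numpy as np

# Matched filter against a reference signature s, whitened by the background
# covariance, normalized so a pixel equal to s scores 1.
def matched_filter_scores(pixels, s, background):
    cov = np.cov(background, rowvar=False) + 1e-6 * np.eye(s.size)
    mu = background.mean(axis=0)
    w = np.linalg.solve(cov, s - mu)
    w /= (s - mu) @ w
    return (pixels - mu) @ w

rng = np.random.default_rng(3)
bands = 6
water = rng.normal(0.2, 0.01, (200, bands))     # background (sea water) pixels
oil_sig = np.full(bands, 0.6)                   # hypothetical oil reference signature
scene = np.vstack([water[:5], oil_sig[None]])   # five water pixels plus one oil pixel
scores = matched_filter_scores(scene, oil_sig, water)
```

Thresholding the scores separates the oil pixel (score near 1) from the water pixels (scores near 0), which is the detection step the paper adapts to AVIRIS imagery.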

  17. Automatic analysis of microscopic images of red blood cell aggregates

    NASA Astrophysics Data System (ADS)

    Menichini, Pablo A.; Larese, Mónica G.; Riquelme, Bibiana D.

    2015-06-01

    Red blood cell aggregation is one of the most important factors in blood viscosity at stasis or at very low rates of flow. The basic structure of aggregates is a linear array of cells, commonly termed rouleaux. Enhanced or abnormal aggregation is seen in clinical conditions such as diabetes and hypertension, producing alterations in the microcirculation, some of which can be analyzed through the characterization of aggregated cells. Frequently, image processing and analysis for the characterization of RBC aggregation have been done manually or semi-automatically using interactive tools. We propose a system that processes images of RBC aggregation and automatically obtains the characterization and quantification of the different types of RBC aggregates. The present technique could be adopted as a routine method in hemorheological and clinical biochemistry laboratories, because this automatic method is rapid, efficient, and economical, and at the same time independent of the user performing the analysis (ensuring repeatability).

  18. Adaptive video-based vehicle classification technique for monitoring traffic : [executive summary].

    DOT National Transportation Integrated Search

    2015-08-01

    Federal Highway Administration (FHWA) recommends axle-based classification standards to map : passenger vehicles, single unit trucks, and multi-unit trucks, at Automatic Traffic Recorder (ATR) stations : statewide. Many state Departments of Transport...

  19. Assessing vulnerability of giant pandas to climate change in the Qinling Mountains of China.

    PubMed

    Li, Jia; Liu, Fang; Xue, Yadong; Zhang, Yu; Li, Diqiang

    2017-06-01

    Climate change might pose an additional threat to the already vulnerable giant panda (Ailuropoda melanoleuca). Effective conservation efforts require projections of the vulnerability of the giant panda in facing climate change and proactive strategies to reduce emerging climate-related threats. We used the maximum entropy model to assess the vulnerability of the giant panda to climate change in the Qinling Mountains of China. The results of modeling included the following findings: (1) the area of suitable habitat for giant pandas was projected to decrease by 281 km² from climate change by the 2050s; (2) the mean elevation of suitable habitat of the giant panda was predicted to shift 30 m higher due to climate change over this period; (3) the network of nature reserves protects 61.73% of current suitable habitat for the species, and 59.23% of future suitable habitat; (4) current suitable habitat mainly located in Chenggu, Taibai, and Yangxian counties (with a total area of 987 km²) was predicted to be vulnerable. Assessing the vulnerability of the giant panda provided adaptive strategies for conservation programs and national park construction. We proposed adaptation strategies to ameliorate the predicted impacts of climate change on the giant panda, including establishing and adjusting reserves, establishing habitat corridors, improving adaptive capacity to climate change, and strengthening monitoring of the giant panda.

  20. Chest CT window settings with multiscale adaptive histogram equalization: pilot study.

    PubMed

    Fayad, Laura M; Jin, Yinpeng; Laine, Andrew F; Berkmen, Yahya M; Pearson, Gregory D; Freedman, Benjamin; Van Heertum, Ronald

    2002-06-01

    Multiscale adaptive histogram equalization (MAHE), a wavelet-based algorithm, was investigated as a method of automatic simultaneous display of the full dynamic contrast range of a computed tomographic image. Interpretation times were significantly lower for MAHE-enhanced images compared with those for conventionally displayed images. Diagnostic accuracy, however, was insufficient in this pilot study to allow recommendation of MAHE as a replacement for conventional window display.

  1. Speech Recognition as a Support Service for Deaf and Hard of Hearing Students: Adaptation and Evaluation. Final Report to Spencer Foundation.

    ERIC Educational Resources Information Center

    Stinson, Michael; Elliot, Lisa; McKee, Barbara; Coyne, Gina

    This report discusses a project that adapted new automatic speech recognition (ASR) technology to provide real-time speech-to-text transcription as a support service for students who are deaf and hard of hearing (D/HH). In this system, as the teacher speaks, a hearing intermediary, or captionist, dictates into the speech recognition system in a…

  2. DyKOSMap: A framework for mapping adaptation between biomedical knowledge organization systems.

    PubMed

    Dos Reis, Julio Cesar; Pruski, Cédric; Da Silveira, Marcos; Reynaud-Delaître, Chantal

    2015-06-01

    Knowledge Organization Systems (KOS) and their associated mappings play a central role in several decision support systems. However, by virtue of knowledge evolution, KOS entities are modified over time, impacting mappings and potentially rendering them invalid. This requires semi-automatic methods to keep such semantic correspondences up-to-date as the KOSs evolve. We define a complete and original framework based on formal heuristics that drives the adaptation of KOS mappings. Our approach takes into account the definition of established mappings, the evolution of KOS and the possible changes that can be applied to mappings. This study experimentally evaluates the proposed heuristics and the entire framework on realistic case studies borrowed from the biomedical domain, using official mappings between several biomedical KOSs. We demonstrate the overall performance of the approach over biomedical datasets of different characteristics and sizes. Our findings reveal the effectiveness, in terms of precision, recall and F-measure, of the suggested heuristics and methods defining the framework to adapt mappings affected by KOS evolution. The obtained results contribute to improving the quality of mappings over time. The proposed framework can adapt mappings largely automatically, thus facilitating the maintenance task. The implemented algorithms and tools support and minimize the work of users in charge of KOS mapping maintenance. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. Automatic patient-adaptive bleeding detection in a capsule endoscopy

    NASA Astrophysics Data System (ADS)

    Jung, Yun Sub; Kim, Yong Ho; Lee, Dong Ha; Lee, Sang Ho; Song, Jeong Joo; Kim, Jong Hyo

    2009-02-01

We present a method for patient-adaptive detection of bleeding regions in Capsule Endoscopy (CE) images. The CE system has a 320x320 resolution and transmits 3 images per second to the receiver over a recording period of roughly 10 hours. We have developed a technique to detect bleeding automatically using a color spectrum transformation (CST) method. However, because of irregular imaging conditions, such as organ differences, patient differences and illumination changes, detection performance is not uniform. To solve this problem, the detection method in this paper includes a parameter compensation step which corrects for irregular image conditions using a color balance index (CBI). We investigated color balance across 2 million sequential images. Based on this preliminary result, we defined ΔCBI to represent the deviation of an image's color balance from the standard small-bowel color balance. The ΔCBI feature value is extracted from each image and used in the CST method as a parameter compensation constant. After candidate pixels were detected using the CST method, they were labeled and examined for bleeding characteristics. We tested our method on 4,800 images from 12 patient data sets (9 abnormal, 3 normal). Our experimental results show that the proposed method improves sensitivity and specificity from 80.87% and 74.25% (without patient adaptation) to 94.87% and 96.12% (with patient adaptation).
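The patient-adaptive compensation idea can be illustrated with a small sketch. The abstract does not give the exact CST or CBI definitions, so the reference balance, the per-channel index, and the threshold shift below are all hypothetical stand-ins for the paper's formulation:

```python
import numpy as np

# Assumed standard small-bowel colour balance (share of total intensity per
# R, G, B channel) -- an illustrative constant, not the paper's value.
REFERENCE_BALANCE = np.array([0.45, 0.35, 0.20])

def color_balance_index(image):
    """Fraction of total intensity carried by each colour channel."""
    channel_sums = image.reshape(-1, 3).sum(axis=0).astype(float)
    return channel_sums / channel_sums.sum()

def compensated_red_threshold(image, base_threshold=0.55):
    """Shift a red-dominance threshold by the deviation from the reference balance."""
    delta_cbi = color_balance_index(image) - REFERENCE_BALANCE
    return base_threshold + delta_cbi[0]  # redder-than-usual scenes need a higher bar

def detect_bleeding_pixels(image, base_threshold=0.55):
    """Flag pixels whose red share exceeds the patient-adapted threshold."""
    rgb = image.astype(float)
    red_share = rgb[..., 0] / np.clip(rgb.sum(axis=-1), 1e-6, None)
    return red_share > compensated_red_threshold(image, base_threshold)
```

The point of the sketch is only the adaptation mechanism: the same base threshold is tightened or relaxed per image according to how far its overall colour balance sits from the reference.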

  4. Robust Machine Learning-Based Correction on Automatic Segmentation of the Cerebellum and Brainstem.

    PubMed

    Wang, Jun Yi; Ngo, Michael M; Hessl, David; Hagerman, Randi J; Rivera, Susan M

    2016-01-01

Automated segmentation is a useful method for studying large brain structures such as the cerebellum and brainstem. However, automated segmentation may lead to inaccuracies and/or undesirable boundaries. The goal of the present study was to investigate whether SegAdapter, a machine learning-based method, is useful for automatically correcting large segmentation errors and disagreements in anatomical definition. We further assessed the robustness of the method with respect to training set size, differences in head coil usage, and amount of brain atrophy. High-resolution T1-weighted images were acquired from 30 healthy controls scanned with either an 8-channel or 32-channel head coil. Ten patients, who suffered from brain atrophy because of fragile X-associated tremor/ataxia syndrome, were scanned using the 32-channel head coil. The initial segmentations of the cerebellum and brainstem were generated automatically using Freesurfer. Subsequently, Freesurfer's segmentations were both manually corrected to serve as the gold standard and automatically corrected by SegAdapter. Using only 5 scans in the training set, spatial overlap with manual segmentation, measured by the Dice coefficient, improved significantly from 0.956 (for Freesurfer segmentation) to 0.978 (for SegAdapter-corrected segmentation) for the cerebellum and from 0.821 to 0.954 for the brainstem. Reducing the training set size to 2 scans decreased the Dice coefficient by ≤0.002 for the cerebellum and ≤0.005 for the brainstem compared to a training set of 5 scans in corrective learning. The method was also robust to differences between the training set and the test set in head coil usage and amount of brain atrophy, which reduced spatial overlap by less than 0.01.
These results suggest that the combination of automated segmentation and corrective learning provides a valuable method for accurate and efficient segmentation of the cerebellum and brainstem, particularly in large-scale neuroimaging studies, and potentially for segmenting other neural regions as well.
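The Dice coefficient used above to score spatial overlap is straightforward to compute from two binary masks:

```python
import numpy as np

def dice_coefficient(seg_a, seg_b):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement by convention
    return 2.0 * np.logical_and(a, b).sum() / denom
```

Identical masks give 1.0 and disjoint masks give 0.0, so the reported improvement from 0.821 to 0.954 for the brainstem corresponds to a substantially larger shared volume relative to the two mask sizes.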

  5. Robust Machine Learning-Based Correction on Automatic Segmentation of the Cerebellum and Brainstem

    PubMed Central

    Wang, Jun Yi; Ngo, Michael M.; Hessl, David; Hagerman, Randi J.; Rivera, Susan M.

    2016-01-01

Automated segmentation is a useful method for studying large brain structures such as the cerebellum and brainstem. However, automated segmentation may lead to inaccuracies and/or undesirable boundaries. The goal of the present study was to investigate whether SegAdapter, a machine learning-based method, is useful for automatically correcting large segmentation errors and disagreements in anatomical definition. We further assessed the robustness of the method with respect to training set size, differences in head coil usage, and amount of brain atrophy. High-resolution T1-weighted images were acquired from 30 healthy controls scanned with either an 8-channel or 32-channel head coil. Ten patients, who suffered from brain atrophy because of fragile X-associated tremor/ataxia syndrome, were scanned using the 32-channel head coil. The initial segmentations of the cerebellum and brainstem were generated automatically using Freesurfer. Subsequently, Freesurfer's segmentations were both manually corrected to serve as the gold standard and automatically corrected by SegAdapter. Using only 5 scans in the training set, spatial overlap with manual segmentation, measured by the Dice coefficient, improved significantly from 0.956 (for Freesurfer segmentation) to 0.978 (for SegAdapter-corrected segmentation) for the cerebellum and from 0.821 to 0.954 for the brainstem. Reducing the training set size to 2 scans decreased the Dice coefficient by ≤0.002 for the cerebellum and ≤0.005 for the brainstem compared to a training set of 5 scans in corrective learning. The method was also robust to differences between the training set and the test set in head coil usage and amount of brain atrophy, which reduced spatial overlap by less than 0.01.
These results suggest that the combination of automated segmentation and corrective learning provides a valuable method for accurate and efficient segmentation of the cerebellum and brainstem, particularly in large-scale neuroimaging studies, and potentially for segmenting other neural regions as well. PMID:27213683

  6. An automatic dose verification system for adaptive radiotherapy for helical tomotherapy

    NASA Astrophysics Data System (ADS)

    Mo, Xiaohu; Chen, Mingli; Parnell, Donald; Olivera, Gustavo; Galmarini, Daniel; Lu, Weiguo

    2014-03-01

Purpose: During a typical 5-7 week course of external beam radiotherapy, differences can arise between the planned and actual patient anatomy and positioning, caused for example by patient weight loss or treatment setup variations. The discrepancies between planned and delivered doses resulting from these differences could be significant, especially in IMRT, where dose distributions tightly conform to target volumes while avoiding organs-at-risk. We developed an automatic system to monitor delivered dose using daily imaging. Methods: For each treatment, a merged image is generated by registering the daily pre-treatment setup image and the planning CT using treatment position information extracted from the Tomotherapy archive. The treatment dose is then computed on this merged image using our in-house convolution-superposition based dose calculator implemented on GPU. The deformation field between the merged and planning CT is computed using the Morphon algorithm. The planning structures and treatment doses are subsequently warped for analysis and dose accumulation. All results are saved in DICOM format with private tags and organized in a database. Due to the overwhelming amount of information generated, a customizable tolerance system is used to flag potential treatment errors or significant anatomical changes. A web-based system and a DICOM-RT viewer were developed for reporting and reviewing the results. Results: More than 30 patients were analysed retrospectively. Our in-house dose calculator achieved a 97% gamma passing rate, evaluated with 2% dose difference and 2 mm distance-to-agreement against the Tomotherapy-calculated dose, which is considered sufficient for adaptive radiotherapy purposes. Evaluation of the deformable registration through visual inspection showed acceptable and consistent results, except for cases with large or unrealistic deformation. Our automatic flagging system was able to catch significant patient setup errors or anatomical changes.
Conclusions: We developed an automatic dose verification system that quantifies treatment doses and provides the information necessary for adaptive planning without impeding clinical workflows.
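The 2%/2 mm gamma evaluation mentioned in the results can be sketched in its simplest form: a 1-D, global, brute-force gamma analysis. A clinical implementation works on 3-D dose grids with sub-voxel interpolation, which this toy deliberately omits:

```python
import numpy as np

def gamma_pass_rate(ref_dose, eval_dose, positions, dose_tol=0.02, dist_tol=2.0):
    """Global 1-D gamma analysis: for each evaluated point, search all reference
    points for the minimum combined dose-difference / distance-to-agreement
    metric, then report the fraction of points with gamma <= 1."""
    ref = np.asarray(ref_dose, float)
    ev = np.asarray(eval_dose, float)
    pos = np.asarray(positions, float)
    dose_norm = dose_tol * ref.max()          # global criterion: % of maximum dose
    gammas = np.empty_like(ev)
    for i, (d_i, x_i) in enumerate(zip(ev, pos)):
        dd = (d_i - ref) / dose_norm          # dose-difference term
        dx = (x_i - pos) / dist_tol           # distance term (mm)
        gammas[i] = np.sqrt(dd**2 + dx**2).min()
    return (gammas <= 1.0).mean()             # fraction of points passing
```

With `dose_tol=0.02` and `dist_tol=2.0` this mirrors the 2%/2 mm criterion; a pass rate of 0.97 would correspond to the 97% figure reported above.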

  7. An Adaptive Monitoring Scheme for Automatic Control of Anaesthesia in dynamic surgical environments based on Bispectral Index and Blood Pressure.

    PubMed

    Yu, Yu-Ning; Doctor, Faiyaz; Fan, Shou-Zen; Shieh, Jiann-Shing

    2018-04-13

During surgical procedures, the bispectral index (BIS) is a well-known measure used to determine the patient's depth of anesthesia (DOA). However, BIS readings can be subject to interference from many factors during surgery, and other parameters such as blood pressure (BP) and heart rate (HR) can provide more stable indicators. Nevertheless, anesthesiologists still consider BIS the primary measure for determining whether the patient is correctly anaesthetized, while relying on the other physiological parameters to monitor and ensure that the patient's status is maintained. The automatic control of anesthesia administration using intelligent control systems has been the subject of recent research, aiming to relieve the anesthetist of manually adjusting drug dosage in response to physiological changes in order to sustain DOA. A system proposed for the automatic control of anesthesia based on type-2 Self-Organizing Fuzzy Logic Controllers (T2-SOFLCs) has been shown to be effective in the control of DOA under simulated scenarios, while contending with uncertainties due to signal noise and dynamic changes in the pharmacodynamic (PD) and pharmacokinetic (PK) effects of the drug on the body. This study considers both BIS and BP as part of an adaptive automatic control scheme, which can switch between monitoring either parameter in response to changes in the availability and reliability of BIS signals during surgery. Simulations of different control schemes were conducted using BIS data obtained during real surgical procedures to emulate noise and interference factors. The use of either or both parameters for controlling the delivery of Propofol to maintain safe target set points for DOA is evaluated.
The results show that combining BIS and BP in the proposed adaptive control scheme can maintain the target set points and the correct amount of drug in the body even with intermittent loss of the BIS signal, which could otherwise disrupt an automated control system.
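The fallback idea, switching the feedback source when BIS becomes unreliable, can be illustrated with a deliberately minimal proportional-dosing sketch. This is not the authors' T2-SOFLC controller: the set points, gain, and linear error mapping below are all illustrative assumptions:

```python
def select_feedback(bis, bp, bis_valid):
    """Choose the feedback error source: BIS when its signal is reliable,
    otherwise mean arterial blood pressure (both mapped to a unitless error)."""
    BIS_TARGET, BP_TARGET = 50.0, 80.0   # assumed set points, not clinical values
    if bis_valid:
        return (bis - BIS_TARGET) / 100.0
    return (bp - BP_TARGET) / 100.0

def propofol_rate(bis, bp, bis_valid, base_rate=5.0, gain=10.0):
    """Minimal proportional dosing sketch: raise the infusion rate when the
    patient is lighter than the target, lower it when deeper (never negative)."""
    error = select_feedback(bis, bp, bis_valid)
    return max(0.0, base_rate + gain * error)
```

The design point is continuity of control: when `bis_valid` drops out, the BP term takes over as the error signal, so the infusion rate never loses its feedback loop entirely.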

  8. Information processing requirements for on-board monitoring of automatic landing

    NASA Technical Reports Server (NTRS)

    Sorensen, J. A.; Karmarkar, J. S.

    1977-01-01

    A systematic procedure is presented for determining the information processing requirements for on-board monitoring of automatic landing systems. The monitoring system detects landing anomalies through use of appropriate statistical tests. The time-to-correct aircraft perturbations is determined from covariance analyses using a sequence of suitable aircraft/autoland/pilot models. The covariance results are used to establish landing safety and a fault recovery operating envelope via an event outcome tree. This procedure is demonstrated with examples using the NASA Terminal Configured Vehicle (B-737 aircraft). The procedure can also be used to define decision height, assess monitoring implementation requirements, and evaluate alternate autoland configurations.

  9. Frequency control of wind turbine in power system

    NASA Astrophysics Data System (ADS)

    Xu, Huawei

    2018-06-01

In order to improve the overall frequency stability of the power system, automatic generation control and secondary frequency adjustment were applied. Automatic generation control was introduced into power generation planning, and a doubly-fed wind generator power regulation model suitable for secondary frequency regulation was established. The results showed that this method satisfied the basic requirements of frequency regulation control for power systems with large-scale wind power access and improved the stability and reliability of power system operation. The proposed frequency control method and strategy are relatively simple yet markedly effective: the system frequency quickly reaches a steady state. The approach is worth applying and promoting.

  10. Automatic Generation of Test Oracles - From Pilot Studies to Application

    NASA Technical Reports Server (NTRS)

    Feather, Martin S.; Smith, Ben

    1998-01-01

There is a trend towards the increased use of automation in V&V. Automation can yield savings in time and effort. For critical systems, where thorough V&V is required, these savings can be substantial. We describe a progression from pilot studies to the development and use of V&V automation. We used pilot studies to ascertain opportunities for, and the suitability of, automating various analyses whose results would contribute to V&V. These studies culminated in the development of an automatic generator of automated test oracles. This was then applied and extended in the course of testing an AI planning system that is a key component of an autonomous spacecraft.

  11. GOAL - A test engineer oriented language. [Ground Operations Aerospace Language for coding automatic test

    NASA Technical Reports Server (NTRS)

    Mitchell, T. R.

    1974-01-01

    The development of a test engineer oriented language has been under way at the Kennedy Space Center for several years. The result of this effort is the Ground Operations Aerospace Language, GOAL, a self-documenting, high-order language suitable for coding automatic test, checkout and launch procedures. GOAL is a highly readable, writable, retainable language that is easily learned by nonprogramming oriented engineers. It is sufficiently powerful for use at all levels of Space Shuttle ground processing, from line replaceable unit checkout to integrated launch day operations. This paper will relate the language development, and describe GOAL and its applications.

  12. Automatic selection of optimal Savitzky-Golay filter parameters for Coronary Wave Intensity Analysis.

    PubMed

    Rivolo, Simone; Nagel, Eike; Smith, Nicolas P; Lee, Jack

    2014-01-01

Coronary Wave Intensity Analysis (cWIA) is a technique capable of separating the effects of proximal arterial haemodynamics from cardiac mechanics. The ability of cWIA to establish a mechanistic link between coronary haemodynamic measurements and the underlying pathophysiology has been widely demonstrated, and the prognostic value of a cWIA-derived metric has recently been proven. However, the clinical application of cWIA has been hindered by a strong dependence on the practitioner, mainly ascribable to the sensitivity of cWIA-derived indices to the pre-processing parameters. Specifically, as recently demonstrated, the cWIA-derived metrics are strongly sensitive to the Savitzky-Golay (S-G) filter typically used to smooth the acquired traces. This is mainly due to the inability of the S-G filter to deal with the different timescale features present in the measured waveforms. We therefore propose to apply an adaptive S-G algorithm that automatically selects the optimal filter parameters pointwise. The accuracy of the newly proposed algorithm is assessed against a cWIA gold standard, provided by a newly developed in-silico cWIA modelling framework, when physiological noise is added to the simulated traces. The adaptive S-G algorithm, when used to automatically select the polynomial degree of the S-G filter, provides satisfactory results with ≤10% error for all the metrics through all the levels of noise tested. The newly proposed method therefore makes cWIA fully automatic and practitioner-independent, opening the possibility of multi-centre trials.
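The pointwise parameter selection can be sketched as follows, assuming a penalised-residual criterion for choosing the polynomial degree at each sample (the paper's actual selection rule may differ):

```python
import numpy as np

def adaptive_savgol(signal, window=11, degrees=(1, 2, 3, 4), penalty=0.5):
    """Pointwise-adaptive Savitzky-Golay-style smoother (a sketch of the idea,
    not the paper's estimator): at each sample, fit every candidate polynomial
    degree over a centred window, keep the degree minimising an RMS residual
    plus a roughness penalty, and evaluate that fit at the sample."""
    y = np.asarray(signal, float)
    half = window // 2
    out = np.empty_like(y)
    for i in range(len(y)):
        lo, hi = max(0, i - half), min(len(y), i + half + 1)
        xs = np.arange(lo, hi, dtype=float) - i      # window abscissae centred on i
        seg = y[lo:hi]
        best = None
        for deg in degrees:
            coeffs = np.polyfit(xs, seg, deg)
            resid = np.sqrt(np.mean((np.polyval(coeffs, xs) - seg) ** 2))
            score = resid + penalty * deg            # penalise needless wiggliness
            if best is None or score < best[0]:
                best = (score, np.polyval(coeffs, 0.0))
        out[i] = best[1]
    return out
```

Low-degree fits win on slowly varying stretches while sharper features force a higher degree locally, which is the behaviour a fixed-parameter S-G filter cannot provide.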

  13. A semi-automatic 2D-to-3D video conversion with adaptive key-frame selection

    NASA Astrophysics Data System (ADS)

    Ju, Kuanyu; Xiong, Hongkai

    2014-11-01

To compensate for the deficit of 3D content, 2D-to-3D video conversion (2D-to-3D) has recently attracted increasing attention from both industrial and academic communities. Semi-automatic 2D-to-3D conversion, which estimates the depth of non-key-frames from key-frames, is particularly desirable owing to its balance of labor cost against 3D effect quality. The location of key-frames plays a key role in the quality of depth propagation. This paper proposes a semi-automatic 2D-to-3D scheme with adaptive key-frame selection to keep temporal continuity more reliable and to reduce the depth propagation errors caused by occlusion. Potential key-frames are localized in terms of clustered color variation and motion intensity. The key-frame interval is also taken into account to keep the accumulated propagation errors under control and to guarantee minimal user interaction. Once the key-frame depth maps are aligned with user interaction, the non-key-frame depth maps are automatically propagated by shifted bilateral filtering. Considering that the depth of objects may change due to object motion or camera zoom, a bi-directional depth propagation scheme is adopted in which a non-key frame is interpolated from the two adjacent key frames. The experimental results show that the proposed scheme outperforms existing 2D-to-3D schemes with a fixed key-frame interval.
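A hypothetical sketch of adaptive key-frame selection in this spirit, using mean absolute frame difference as a crude combined colour-variation/motion proxy and capping the key-frame interval (the paper's clustering-based scoring is richer than this):

```python
import numpy as np

def select_key_frames(frames, change_thresh=0.3, max_interval=10):
    """Accumulate per-frame change and start a new key frame whenever the
    accumulated change, or the interval cap, is exceeded.  `frames` is a
    sequence of equally shaped float arrays with values in [0, 1]."""
    keys = [0]                                     # first frame is always a key
    accumulated = 0.0
    for t in range(1, len(frames)):
        accumulated += float(np.mean(np.abs(frames[t] - frames[t - 1])))
        if accumulated >= change_thresh or t - keys[-1] >= max_interval:
            keys.append(t)                         # content changed too much, or
            accumulated = 0.0                      # interval cap reached: new key
    return keys
```

Static passages thus get sparse keys (only the interval cap fires), while rapid motion or colour change, where depth propagation would accumulate errors, triggers keys early.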

  14. Automatic brightness control of laser spot vision inspection system

    NASA Astrophysics Data System (ADS)

    Han, Yang; Zhang, Zhaoxia; Chen, Xiaodong; Yu, Daoyin

    2009-10-01

The laser spot detection system aims to locate the center of the laser spot after long-distance transmission. The accuracy of positioning the laser spot center depends heavily on the system's ability to control brightness. In this paper, a high-performance automatic brightness control system is designed using an FPGA. Brightness is controlled by a combination of an auto aperture (video driver) and an adaptive exposure algorithm, so that clear, properly exposed images are obtained under different illumination conditions. The automatic brightness control system creates favorable conditions for the subsequent positioning of the laser spot center, and experimental results show that the measurement accuracy of the system is effectively guaranteed: the average error of the spot center is within 0.5 mm.

  15. Generic method for automatic bladder segmentation on cone beam CT using a patient-specific bladder shape model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schoot, A. J. A. J. van de, E-mail: a.j.schootvande@amc.uva.nl; Schooneveldt, G.; Wognum, S.

Purpose: The aim of this study is to develop and validate a generic method for automatic bladder segmentation on cone beam computed tomography (CBCT), independent of gender and treatment position (prone or supine), using only pretreatment imaging data. Methods: Data of 20 patients, treated for tumors in the pelvic region with the entire bladder visible on CT and CBCT, were divided into four equally sized groups based on gender and treatment position. The full and empty bladder contours, which can be acquired with pretreatment CT imaging, were used to generate a patient-specific bladder shape model. This model was used to guide the segmentation process on CBCT. To obtain the bladder segmentation, the reference bladder contour was deformed iteratively by maximizing the cross-correlation between directional grey value gradients over the reference and CBCT bladder edge. To overcome incorrect segmentations caused by CBCT image artifacts, automatic adaptations were implemented. Moreover, locally incorrect segmentations could be adapted manually. After each adapted segmentation, the bladder shape model was expanded and new shape patterns were calculated for subsequent segmentations. All available CBCTs were used to validate the segmentation algorithm. The bladder segmentations were validated by comparison with manual delineations, and segmentation performance was quantified using the Dice similarity coefficient (DSC), surface distance error (SDE) and SD of contour-to-contour distances. Also, bladder volumes obtained by manual delineations and segmentations were compared using a Bland-Altman error analysis. Results: The mean DSC, mean SDE, and mean SD of contour-to-contour distances between segmentations and manual delineations were 0.87, 0.27 cm and 0.22 cm (female, prone), 0.85, 0.28 cm and 0.22 cm (female, supine), 0.89, 0.21 cm and 0.17 cm (male, supine) and 0.88, 0.23 cm and 0.17 cm (male, prone), respectively.
Manual local adaptations improved the segmentation results significantly (p < 0.01) based on DSC (6.72%) and SD of contour-to-contour distances (0.08 cm) and decreased the 95% confidence intervals of the bladder volume differences. Moreover, expanding the shape model improved the segmentation results significantly (p < 0.01) based on DSC and SD of contour-to-contour distances. Conclusions: This patient-specific shape model based automatic bladder segmentation method on CBCT is accurate and generic. Our segmentation method only needs two pretreatment imaging data sets as prior knowledge, is independent of patient gender and patient treatment position and has the possibility to manually adapt the segmentation locally.

  16. Probabilistic co-adaptive brain-computer interfacing

    NASA Astrophysics Data System (ADS)

    Bryan, Matthew J.; Martin, Stefan A.; Cheung, Willy; Rao, Rajesh P. N.

    2013-12-01

    Objective. Brain-computer interfaces (BCIs) are confronted with two fundamental challenges: (a) the uncertainty associated with decoding noisy brain signals, and (b) the need for co-adaptation between the brain and the interface so as to cooperatively achieve a common goal in a task. We seek to mitigate these challenges. Approach. We introduce a new approach to brain-computer interfacing based on partially observable Markov decision processes (POMDPs). POMDPs provide a principled approach to handling uncertainty and achieving co-adaptation in the following manner: (1) Bayesian inference is used to compute posterior probability distributions (‘beliefs’) over brain and environment state, and (2) actions are selected based on entire belief distributions in order to maximize total expected reward; by employing methods from reinforcement learning, the POMDP’s reward function can be updated over time to allow for co-adaptive behaviour. Main results. We illustrate our approach using a simple non-invasive BCI which optimizes the speed-accuracy trade-off for individual subjects based on the signal-to-noise characteristics of their brain signals. We additionally demonstrate that the POMDP BCI can automatically detect changes in the user’s control strategy and can co-adaptively switch control strategies on-the-fly to maximize expected reward. Significance. Our results suggest that the framework of POMDPs offers a promising approach for designing BCIs that can handle uncertainty in neural signals and co-adapt with the user on an ongoing basis. The fact that the POMDP BCI maintains a probability distribution over the user’s brain state allows a much more powerful form of decision making than traditional BCI approaches, which have typically been based on the output of classifiers or regression techniques. 
Furthermore, the co-adaptation of the system allows the BCI to make online improvements to its behaviour, adjusting itself automatically to the user’s changing circumstances.
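The belief maintenance at the heart of this approach is the standard POMDP Bayes update, b'(s') ∝ O[a](s', o) · Σ_s T[a](s, s') · b(s). A minimal sketch, with illustrative two-state matrices rather than any real BCI model:

```python
import numpy as np

def update_belief(belief, action, observation, T, O):
    """One Bayesian belief update over hidden states.
    T[a][s, s'] is the transition probability under action a, and O[a][s', o]
    the likelihood of observation o from next state s'."""
    predicted = T[action].T @ np.asarray(belief, float)   # predict step
    unnormalised = O[action][:, observation] * predicted  # weight by evidence
    return unnormalised / unnormalised.sum()              # renormalise
```

Actions are then selected to maximise expected reward over this whole belief distribution, which is what lets the controller trade off speed against accuracy rather than committing to a single classifier output.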

  17. HOLA: Human-like Orthogonal Network Layout.

    PubMed

    Kieffer, Steve; Dwyer, Tim; Marriott, Kim; Wybrow, Michael

    2016-01-01

Over the last 50 years a wide variety of automatic network layout algorithms have been developed. Some are fast heuristic techniques suitable for networks with hundreds of thousands of nodes, while others are multi-stage frameworks for higher-quality layout of smaller networks. However, despite decades of research, no current algorithm produces layouts of quality comparable to those created by a human. We give a new "human-centred" methodology for automatic network layout algorithm design that is intended to overcome this deficiency. User studies are first used to identify the aesthetic criteria algorithms should encode; then an algorithm is developed that is informed by these criteria; and finally, a follow-up study evaluates the algorithm's output. We have used this new methodology to develop an automatic orthogonal network layout method, HOLA, that achieves measurably better layout (by user study) than the best available orthogonal layout algorithm and produces layouts of comparable quality to those produced by hand.

  18. A recent advance in the automatic indexing of the biomedical literature.

    PubMed

    Névéol, Aurélie; Shooshan, Sonya E; Humphrey, Susanne M; Mork, James G; Aronson, Alan R

    2009-10-01

    The volume of biomedical literature has experienced explosive growth in recent years. This is reflected in the corresponding increase in the size of MEDLINE, the largest bibliographic database of biomedical citations. Indexers at the US National Library of Medicine (NLM) need efficient tools to help them accommodate the ensuing workload. After reviewing issues in the automatic assignment of Medical Subject Headings (MeSH terms) to biomedical text, we focus more specifically on the new subheading attachment feature for NLM's Medical Text Indexer (MTI). Natural Language Processing, statistical, and machine learning methods of producing automatic MeSH main heading/subheading pair recommendations were assessed independently and combined. The best combination achieves 48% precision and 30% recall. After validation by NLM indexers, a suitable combination of the methods presented in this paper was integrated into MTI as a subheading attachment feature producing MeSH indexing recommendations compliant with current state-of-the-art indexing practice.
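For reference, the F-measure implied by the reported 48% precision and 30% recall follows from the usual harmonic mean:

```python
def f_measure(precision, recall):
    """F1 score: the harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

With the figures above, `f_measure(0.48, 0.30)` comes to roughly 0.37, reflecting that the harmonic mean is pulled toward the weaker of the two scores.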

  19. Reflector automatic acquisition and pointing based on auto-collimation theodolite.

    PubMed

    Luo, Jun; Wang, Zhiqian; Wen, Zhuoman; Li, Mingzhu; Liu, Shaojin; Shen, Chengwu

    2018-01-01

    An auto-collimation theodolite (ACT) for reflector automatic acquisition and pointing is designed based on the principle of autocollimators and theodolites. First, the principle of auto-collimation and theodolites is reviewed, and then the coaxial ACT structure is developed. Subsequently, the acquisition and pointing strategies for reflector measurements are presented, which first quickly acquires the target over a wide range and then points the laser spot to the charge coupled device zero position. Finally, experiments are conducted to verify the acquisition and pointing performance, including the calibration of the ACT, the comparison of the acquisition mode and pointing mode, and the accuracy measurement in horizontal and vertical directions. In both directions, a measurement accuracy of ±3″ is achieved. The presented ACT is suitable for automatic pointing and monitoring the reflector over a small scanning area and can be used in a wide range of applications such as bridge structure monitoring and cooperative target aiming.

  20. Automatic Invocation Linking for Collaborative Web-Based Corpora

    NASA Astrophysics Data System (ADS)

    Gardner, James; Krowne, Aaron; Xiong, Li

Collaborative online encyclopedias or knowledge bases such as Wikipedia and PlanetMath are becoming increasingly popular because of their open access, comprehensive and interlinked content, rapid and continual updates, and community interactivity. To understand a particular concept in these knowledge bases, a reader needs to learn about related and underlying concepts. In this chapter, we introduce the problem of invocation linking for collaborative encyclopedia or knowledge bases, review the state of the art for invocation linking including the popular linking system of Wikipedia, discuss the problems and challenges of automatic linking, and present the NNexus approach, an abstraction and generalization of the automatic linking system used by PlanetMath.org. The chapter emphasizes both research problems and practical design issues through discussion of real world scenarios and hence is suitable for both researchers in web intelligence and practitioners looking to adopt the techniques.
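A toy version of such automatic invocation linking, using a longest-label-first scan and wiki-style brackets (NNexus's real pipeline, with its concept index and classification-based disambiguation, is far more involved than this):

```python
import re

def _inside_link(text, pos):
    """True if position pos falls inside an already inserted [[...]] link."""
    return text[:pos].count('[[') > text[:pos].count(']]')

def link_concepts(text, concept_labels):
    """Scan the text for known concept labels, longest first (so 'abelian
    group' beats 'group'), and wrap the first free occurrence of each label
    in a wiki-style [[...]] link."""
    for label in sorted(concept_labels, key=len, reverse=True):
        pattern = re.compile(r'\b' + re.escape(label) + r'\b', re.IGNORECASE)
        for match in pattern.finditer(text):
            if not _inside_link(text, match.start()):
                text = (text[:match.start()] + '[[' + match.group(0) + ']]'
                        + text[match.end():])
                break                 # link only the first free occurrence
    return text
```

Longest-first matching and the inside-link guard are the two details that keep nested labels from being double-linked, which is the basic correctness problem any invocation linker must solve.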

  1. Automatic Conflict Detection on Contracts

    NASA Astrophysics Data System (ADS)

    Fenech, Stephen; Pace, Gordon J.; Schneider, Gerardo

Many software applications are based on collaborating, yet competing, agents or virtual organisations exchanging services. Contracts, expressing obligations, permissions and prohibitions of the different actors, can be used to protect the interests of the organisations engaged in such service exchange. However, the potentially dynamic composition of services with different contracts, and the combination of service contracts with local contracts, can give rise to unexpected conflicts, exposing the need for automatic techniques for contract analysis. In this paper we look at automatic analysis techniques for contracts written in the contract language CL. We present a trace semantics of CL suitable for conflict analysis, and a decision procedure for detecting conflicts (together with its proof of soundness, completeness and termination). We also discuss its implementation and look into applications of the contract analysis approach we present. These techniques are applied to a small case study of an airline check-in desk.
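A drastically simplified illustration of conflict detection over contract clauses. The paper's analysis works on the trace semantics of CL; the clause encoding and conflict rules here are toy assumptions:

```python
from itertools import combinations

def find_conflicts(clauses, mutually_exclusive=frozenset()):
    """Toy conflict check.  A clause is a pair (modality, action) with modality
    'O' (obligation), 'P' (permission) or 'F' (prohibition).  Flag a conflict
    when the same action is both forbidden and obliged/permitted, or when two
    obligations target actions declared mutually exclusive."""
    conflicts = []
    for c1, c2 in combinations(clauses, 2):
        (m1, a1), (m2, a2) = c1, c2
        if a1 == a2 and {m1, m2} in ({'O', 'F'}, {'P', 'F'}):
            conflicts.append((c1, c2))          # forbidden yet obliged/permitted
        elif m1 == m2 == 'O' and frozenset((a1, a2)) in mutually_exclusive:
            conflicts.append((c1, c2))          # two incompatible obligations
    return conflicts
```

For the airline case study one might, for example, declare `open_desk` and `close_desk` mutually exclusive and check that no composed contract obliges both at once.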

  2. Intelligent vision guide for automatic ventilation grommet insertion into the tympanic membrane.

    PubMed

    Gao, Wenchao; Tan, Kok Kiong; Liang, Wenyu; Gan, Chee Wee; Lim, Hsueh Yee

    2016-03-01

    Otitis media with effusion is a worldwide ear disease. The current treatment is to surgically insert a ventilation grommet into the tympanic membrane. A robotic device allowing automatic grommet insertion has been designed in a previous study; however, the part of the membrane where the malleus bone is attached to the inner surface is to be avoided during the insertion process. This paper proposes a synergy of optical flow technique and a gradient vector flow active contours algorithm to achieve an online tracking of the malleus under endoscopic vision, to guide the working channel to move efficiently during the surgery. The proposed method shows a more stable and accurate tracking performance than the current tracking methods in preclinical tests. With satisfactory tracking results, vision guidance of a suitable insertion spot can be provided to the device to perform the surgery in an automatic way. Copyright © 2015 John Wiley & Sons, Ltd.

  3. Towards Automatic Processing of Virtual City Models for Simulations

    NASA Astrophysics Data System (ADS)

    Piepereit, R.; Schilling, A.; Alam, N.; Wewetzer, M.; Pries, M.; Coors, V.

    2016-10-01

Especially in the field of numerical simulations, such as flow and acoustic simulations, the interest in using virtual 3D models to optimize urban systems is increasing. The few instances in which simulations have already been carried out in practice involved an extremely high, and therefore uneconomical, manual effort to process the models. The differing ways models are captured in Geographic Information Systems (GIS) and Computer Aided Engineering (CAE) further increase the already very high complexity of the processing. To obtain virtual 3D models suitable for simulation, we developed a tool for automatic processing with the goal of establishing ties between the worlds of GIS and CAE. In this paper we introduce a way to use Coons surfaces for the automatic processing of building models in LoD2, and investigate ways to simplify LoD3 models in order to remove information that is unnecessary for a numerical simulation.
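A bilinearly blended Coons patch, the surface construction named above, can be evaluated directly from its four boundary curves; the standard formula is the sum of the two ruled (lofted) surfaces minus the bilinear corner interpolant:

```python
import numpy as np

def coons_patch(c0, c1, d0, d1, u, v):
    """Evaluate a bilinearly blended Coons patch at (u, v) in [0, 1]^2.
    c0(u), c1(u) are the bottom/top boundary curves, d0(v), d1(v) the
    left/right ones; the four curves must agree at the patch corners."""
    ruled_u = (1 - v) * c0(u) + v * c1(u)        # loft between bottom and top
    ruled_v = (1 - u) * d0(v) + u * d1(v)        # loft between left and right
    bilinear = ((1 - u) * (1 - v) * c0(0.0) + u * (1 - v) * c0(1.0)
                + (1 - u) * v * c1(0.0) + u * v * c1(1.0))
    return ruled_u + ruled_v - bilinear          # corner correction term
```

By construction the patch reproduces all four boundary curves exactly, which is what makes Coons surfaces attractive for stitching building facades into watertight, simulation-ready geometry.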

  4. Reflector automatic acquisition and pointing based on auto-collimation theodolite

    NASA Astrophysics Data System (ADS)

    Luo, Jun; Wang, Zhiqian; Wen, Zhuoman; Li, Mingzhu; Liu, Shaojin; Shen, Chengwu

    2018-01-01

    An auto-collimation theodolite (ACT) for automatic reflector acquisition and pointing is designed based on the principles of autocollimators and theodolites. First, the principles of auto-collimation and theodolites are reviewed, and the coaxial ACT structure is developed. Subsequently, the acquisition and pointing strategies for reflector measurements are presented: the target is first acquired quickly over a wide range, and the laser spot is then pointed to the charge-coupled device zero position. Finally, experiments are conducted to verify the acquisition and pointing performance, including the calibration of the ACT, a comparison of the acquisition and pointing modes, and accuracy measurements in the horizontal and vertical directions. In both directions, a measurement accuracy of ±3″ is achieved. The presented ACT is suitable for automatic pointing and monitoring of a reflector over a small scanning area and can be used in a wide range of applications such as bridge structure monitoring and cooperative target aiming.

  5. A novel automated spike sorting algorithm with adaptable feature extraction.

    PubMed

    Bestel, Robert; Daus, Andreas W; Thielemann, Christiane

    2012-10-15

    To study the electrophysiological properties of neuronal networks, in vitro studies based on microelectrode arrays have become a viable analysis tool. Although the field is progressing constantly, a challenging task remains: the development of an efficient spike sorting algorithm that allows accurate signal analysis at the single-cell level. Most sorting algorithms currently available extract only a specific feature type, such as the principal components or wavelet coefficients of the measured spike signals, in order to separate the different spike shapes generated by different neurons. However, due to the great variety in the obtained spike shapes, deriving an optimal feature set is still a very complex issue that current algorithms struggle with. To address this problem, we propose a novel algorithm that (i) extracts a variety of geometric, wavelet and principal-component-based features and (ii) automatically derives the feature subset most suitable for sorting an individual set of spike signals. The new approach evaluates the probability distribution of the obtained spike features and determines the candidates most suitable for the actual spike sorting. These candidates form an individually adjusted set of spike features, allowing the various shapes present in the obtained neuronal signal to be separated by a subsequent expectation-maximisation clustering algorithm. Test results with simulated data files and data obtained from chick embryonic neurons cultured on microelectrode arrays showed excellent classification, indicating the superior performance of the described approach. Copyright © 2012 Elsevier B.V. All rights reserved.

  6. Overhead tray for cable test system

    NASA Technical Reports Server (NTRS)

    Saltz, K. T.

    1976-01-01

    System consists of overhead slotted tray, series of compatible adapter cables, and automatic test set which consists of control console and cable-switching console. System reduces hookup time and also reduces cost of fabricating and storing test cables.

  7. Adaptive driving beam headlights : visibility, glare and measurement considerations.

    DOT National Transportation Integrated Search

    2016-06-01

    Recent developments in solid-state lighting, sensor and control technologies are making new : configurations for vehicle forward lighting feasible. Building on systems that automatically switch from : high- to low-beam headlights in the presence of o...

  8. Research in Parallel Algorithms and Software for Computational Aerosciences

    DOT National Transportation Integrated Search

    1996-04-01

    Phase I is complete for the development of a Computational Fluid Dynamics : with automatic grid generation and adaptation for the Euler : analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian : grid code developed at Lockheed...

  9. Adaptive artificial neural network for autonomous robot control

    NASA Technical Reports Server (NTRS)

    Arras, Michael K.; Protzel, Peter W.; Palumbo, Daniel L.

    1992-01-01

    The topics are presented in viewgraph form and include: neural network controller for robot arm positioning with visual feedback; initial training of the arm; automatic recovery from cumulative fault scenarios; and error reduction by iterative fine movements.

  10. AutoCellSeg: robust automatic colony forming unit (CFU)/cell analysis using adaptive image segmentation and easy-to-use post-editing techniques.

    PubMed

    Khan, Arif Ul Maula; Torelli, Angelo; Wolf, Ivo; Gretz, Norbert

    2018-05-08

    In biological assays, automated cell/colony segmentation and counting is imperative owing to huge image sets. Problems caused by drifting image acquisition conditions, background noise and high variation in colony features demand a user-friendly, adaptive and robust image processing/analysis method. We present AutoCellSeg (based on MATLAB), which implements a supervised, automatic and robust image segmentation method. AutoCellSeg utilizes multi-thresholding aided by a feedback-based watershed algorithm that takes segmentation plausibility criteria into account. It is usable in different operation modes and intuitively enables the user to select object features interactively for supervised image segmentation. It allows the user to correct results with a graphical interface. This publicly available tool outperforms tools like OpenCFU and CellProfiler in terms of accuracy and provides many additional useful features for end-users.

  11. Toward Automatic Verification of Goal-Oriented Flow Simulations

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.

    2014-01-01

    We demonstrate the power of adaptive mesh refinement with adjoint-based error estimates in verification of simulations governed by the steady Euler equations. The flow equations are discretized using a finite volume scheme on a Cartesian mesh with cut cells at the wall boundaries. The discretization error in selected simulation outputs is estimated using the method of adjoint-weighted residuals. Practical aspects of the implementation are emphasized, particularly in the formulation of the refinement criterion and the mesh adaptation strategy. Following a thorough code verification example, we demonstrate simulation verification of two- and three-dimensional problems. These involve an airfoil performance database, a pressure signature of a body in supersonic flow and a launch abort with strong jet interactions. The results show reliable estimates and automatic control of discretization error in all simulations at an affordable computational cost. Moreover, the approach remains effective even when theoretical assumptions, e.g., steady-state and solution smoothness, are relaxed.

  12. The Researching on Evaluation of Automatic Voltage Control Based on Improved Zoning Methodology

    NASA Astrophysics Data System (ADS)

    Xiao-jun, ZHU; Ang, FU; Guang-de, DONG; Rui-miao, WANG; De-fen, ZHU

    2018-03-01

    As power systems grow in size and structural complexity, hierarchically structured automatic voltage control (AVC) has become a research focus. In this paper, a reduced control model is built, and an adaptive reduced control model is investigated to improve the voltage control effect. The theories of HCSD, HCVS, SKC and FCM are introduced, and the effect of different zoning methodologies on coordinated voltage regulation is also investigated. A generic framework for evaluating the performance of coordinated voltage regulation is built. Finally, the IEEE-96 system is used to divide the network. The 2383-bus Polish system is used to verify that the selection of a zoning methodology affects not only the coordinated voltage regulation operation but also its robustness to erroneous data, and a comprehensive generic framework for evaluating its performance is proposed. The New England 39-bus network is used to verify the performance of the adaptive reduced control models.

  13. Visual perception system and method for a humanoid robot

    NASA Technical Reports Server (NTRS)

    Chelian, Suhas E. (Inventor); Linn, Douglas Martin (Inventor); Wampler, II, Charles W. (Inventor); Bridgwater, Lyndon (Inventor); Wells, James W. (Inventor); Mc Kay, Neil David (Inventor)

    2012-01-01

    A robotic system includes a humanoid robot with robotic joints each moveable using an actuator(s), and a distributed controller for controlling the movement of each of the robotic joints. The controller includes a visual perception module (VPM) for visually identifying and tracking an object in the field of view of the robot under threshold lighting conditions. The VPM includes optical devices for collecting an image of the object, a positional extraction device, and a host machine having an algorithm for processing the image and positional information. The algorithm visually identifies and tracks the object, and automatically adapts an exposure time of the optical devices to prevent feature data loss of the image under the threshold lighting conditions. A method of identifying and tracking the object includes collecting the image, extracting positional information of the object, and automatically adapting the exposure time to thereby prevent feature data loss of the image.

  14. Automatic segmentation and classification of mycobacterium tuberculosis with conventional light microscopy

    NASA Astrophysics Data System (ADS)

    Xu, Chao; Zhou, Dongxiang; Zhai, Yongping; Liu, Yunhui

    2015-12-01

    This paper realizes the automatic segmentation and classification of Mycobacterium tuberculosis with conventional light microscopy. First, the candidate bacillus objects are segmented by the marker-based watershed transform. The markers are obtained by an adaptive threshold segmentation based on an adaptive-scale Gaussian filter, whose scale is determined according to the color model of the bacillus objects. The candidate objects are then extracted integrally after region merging and contamination elimination. Second, the shapes of the bacillus objects are characterized by the Hu moments, compactness, eccentricity, and roughness, which are used to classify single, touching and non-bacillus objects. We evaluated logistic regression, random forest, and intersection-kernel support vector machine classifiers for classifying the bacillus objects. Experimental results demonstrate that the proposed method achieves high robustness and accuracy. The logistic regression classifier performs best, with an accuracy of 91.68%.
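    The paper derives its threshold from an adaptive-scale Gaussian filter; as a hedged stand-in for that step, the sketch below shows automatic threshold selection with Otsu's classic between-class-variance criterion on a toy 8-bit intensity list (the toy image values are assumptions).

```python
# Hedged illustration of automatic threshold selection: Otsu's method picks
# the grey level that maximises the between-class variance of the histogram.

def otsu_threshold(pixels, levels=256):
    """Return the grey level that maximises between-class variance."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w_bg, sum_bg = 0, -1.0, 0, 0.0
    for t in range(levels):
        w_bg += hist[t]                      # background pixel count
        if w_bg == 0:
            continue
        w_fg = total - w_bg                  # foreground pixel count
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Toy bimodal image: dark background around 30, bright objects around 200.
image = [30, 32, 28, 31, 29, 30, 200, 198, 202, 199]
t = otsu_threshold(image)
mask = [1 if p > t else 0 for p in image]
print(t, mask)
```

The resulting binary mask would then seed the marker-based watershed step described in the abstract.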

  15. 46 CFR 62.25-10 - Manual alternate control systems.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... automatic primary control system failure; (2) Be suitable for manual control for prolonged periods; (3) Be... (46 CFR Part 62, AUTOMATION, General Requirements for All Automated Vital Systems, § 62.25-10 Manual alternate control systems.)

  16. Automatic Selection of Suitable Sentences for Language Learning Exercises

    ERIC Educational Resources Information Center

    Pilán, Ildikó; Volodina, Elena; Johansson, Richard

    2013-01-01

    In our study we investigated second and foreign language (L2) sentence readability, an area little explored so far in the case of several languages, including Swedish. The outcome of our research consists of two methods for sentence selection from native language corpora based on Natural Language Processing (NLP) and machine learning (ML)…

  17. The Suitability of Cloud-Based Speech Recognition Engines for Language Learning

    ERIC Educational Resources Information Center

    Daniels, Paul; Iwago, Koji

    2017-01-01

    As online automatic speech recognition (ASR) engines become more accurate and more widely implemented with call software, it becomes important to evaluate the effectiveness and the accuracy of these recognition engines using authentic speech samples. This study investigates two of the most prominent cloud-based speech recognition engines--Apple's…

  18. 46 CFR 98.25-40 - Valves, fittings, and accessories.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ..., United States of America Standard 300-pound standard minimum, fitted with suitable soft gasket material... shut-off valves located as close to the tank as possible. (d) Excess flow valves where required by this section shall close automatically at the rated flow of vapor or liquid as specified by the manufacturer...

  19. Commercial applications

    NASA Technical Reports Server (NTRS)

    Togai, Masaki

    1990-01-01

    Viewgraphs on commercial applications of fuzzy logic in Japan are presented. Topics covered include: suitable application area of fuzzy theory; characteristics of fuzzy control; fuzzy closed-loop controller; Mitsubishi heavy air conditioner; predictive fuzzy control; the Sendai subway system; automatic transmission; fuzzy logic-based command system for antilock braking system; fuzzy feed-forward controller; and fuzzy auto-tuning system.

  20. A preliminary investigation of inlet unstart effects on a high-speed civil transport concept

    NASA Technical Reports Server (NTRS)

    Domack, Christopher S.

    1991-01-01

    Vehicle motions resulting from a supersonic mixed-compression inlet unstart were examined to determine if the unstart constituted a hazard severe enough to warrant rejection of mixed-compression inlets on high-speed civil transport (HSCT) concepts. A simple kinematic analysis of an inlet unstart during cruise was performed for a Mach 2.4, 250-passenger HSCT concept using data from a wind-tunnel test of a representative configuration with unstarted inlets simulated. A survey of previously published research on inlet unstart effects, including simulation and flight test data for the YF-12, XB-70, and Concorde aircraft, was conducted to validate the calculated results. It was concluded that, when countered by suitable automatic propulsion and flight control systems, the vehicle dynamics induced by an inlet unstart are not severe enough to preclude the use of mixed-compression inlets on an HSCT from a passenger safety standpoint. The ability to provide suitable automatic controls appears to be within the current state of the art. However, the passenger startle and discomfort caused by the noise, vibration, and cabin motions associated with an inlet unstart remain a concern.

  1. Gaussian processes: a method for automatic QSAR modeling of ADME properties.

    PubMed

    Obrezanova, Olga; Csanyi, Gabor; Gola, Joelle M R; Segall, Matthew D

    2007-01-01

    In this article, we discuss the application of the Gaussian Process method for the prediction of absorption, distribution, metabolism, and excretion (ADME) properties. The method, based on a Bayesian probabilistic approach, is widely used in the field of machine learning but has rarely been applied in quantitative structure-activity relationship and ADME modeling. The method is suitable for modeling nonlinear relationships, does not require subjective determination of the model parameters, works for a large number of descriptors, and is inherently resistant to overtraining. The performance of Gaussian Processes compares well with and often exceeds that of artificial neural networks. Due to these features, the Gaussian Processes technique is eminently suitable for automatic model generation, one of the demands of modern drug discovery. Here, we describe the basic concept of the method in the context of regression problems and illustrate its application to the modeling of several ADME properties: blood-brain barrier, hERG inhibition, and aqueous solubility at pH 7.4. We also compare Gaussian Processes with other modeling techniques.
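    A minimal sketch of Gaussian Process regression with an RBF kernel, the core technique described above. The kernel length scale, noise level and toy data are assumptions for illustration; production work would use an established GP library rather than this hand-rolled solver.

```python
# Hedged sketch: GP regression posterior mean, mean(x*) = k*^T (K + sI)^-1 y,
# computed with a small Gaussian-elimination solver (illustration only).
import math

def rbf(x1, x2, length=1.0):
    """Squared-exponential (RBF) covariance between two scalar inputs."""
    return math.exp(-0.5 * ((x1 - x2) / length) ** 2)

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gp_predict(xs, ys, x_star, noise=1e-6):
    """Posterior mean at x_star under a zero-mean GP prior."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j]) + (noise if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    alpha = solve(K, ys)                       # alpha = (K + noise*I)^-1 y
    k_star = [rbf(x, x_star) for x in xs]
    return sum(k * a for k, a in zip(k_star, alpha))

# Toy data: noisy samples of a smooth trend.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 0.8, 0.9, 0.1]
print(round(gp_predict(xs, ys, 1.5), 3))       # posterior mean between samples
```

Because the prediction is a closed-form linear algebra expression, no iterative training is needed, which is part of why the method lends itself to the automatic model generation the abstract emphasises.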

  2. Application of computer vision to automatic prescription verification in pharmaceutical mail order

    NASA Astrophysics Data System (ADS)

    Alouani, Ali T.

    2005-05-01

    In large-volume pharmaceutical mail order, before prescriptions are shipped, licensed pharmacists ensure that the drug in the bottle matches the information provided in the patient prescription. Typically, the pharmacist has about 2 sec to complete the verification of one prescription. Performing about 1800 prescription verifications per hour is tedious and can generate human errors as a result of visual and brain fatigue. Available automatic drug verification systems are limited to a single pill at a time, which is not suitable for large-volume pharmaceutical mail order, where a prescription can have as many as 60 pills and thousands of prescriptions are filled every day. In an attempt to reduce human fatigue and cost and to limit human error, the automatic prescription verification system (APVS) was invented to meet the needs of large-scale pharmaceutical mail order. This paper deals with the design and implementation of the first prototype online automatic prescription verification machine to perform the same task currently done by a pharmacist. The emphasis here is on the visual aspects of the machine. The system has been successfully tested on 43,000 prescriptions.

  3. Assessment of Climate Suitability of Maize in South Korea

    NASA Astrophysics Data System (ADS)

    Hyun, S.; Choi, D.; Seo, B.

    2017-12-01

    Assessing areas suitable for crops would be useful for designing alternative cropping systems as an adaptation option to climate change. Although suitable areas could be identified using a crop growth model, this would require a number of input parameters, including cultivar and soil. Instead, a simple climate suitability model, e.g., the EcoCrop model, can be used to assess the climate suitability of a major grain crop. The objective of this study was to assess the climate suitability for maize using the EcoCrop model under climate change conditions in Korea. Long-term climate data for 2000-2100 were compiled from a weather data source. The EcoCrop model implemented in R was used to determine the climate suitability index at each grid cell. Overall, the EcoCrop model tended to identify suitable areas for maize production near the coast, whereas the actual major production areas are located inland. It is likely that the discrepancy between assessed and actual crop production areas results from the socioeconomic aspects of maize production. Because the price of maize is considerably low, maize has been grown in areas where moisture and temperature conditions are less than optimal. In part, the simple algorithm used to predict climate suitability may have caused a relatively large error in the assessment under present climate conditions. In the 2050s, the climate suitability for maize increases over large areas in the southern and western parts of Korea. In particular, the plains near the coastal region have a considerably greater suitability index in the future compared with mountainous areas. The expansion of areas suitable for maize would support crop production policy making, such as reallocating rice production area to other crops given the considerably lower demand for rice in Korea.
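    The EcoCrop-style suitability calculation can be sketched as piecewise-linear ("trapezoidal") response curves over climate variables, combined by a most-limiting-factor rule. The threshold values below are illustrative assumptions, not the calibrated maize parameters used in the study.

```python
# Hedged sketch of an EcoCrop-style climate suitability index in [0, 1].

def trapezoid(x, absolute_min, optimal_min, optimal_max, absolute_max):
    """Piecewise-linear suitability: 0 outside the absolute range,
    1 inside the optimal range, linear ramps in between."""
    if x <= absolute_min or x >= absolute_max:
        return 0.0
    if optimal_min <= x <= optimal_max:
        return 1.0
    if x < optimal_min:
        return (x - absolute_min) / (optimal_min - absolute_min)
    return (absolute_max - x) / (absolute_max - optimal_max)

def climate_suitability(temp_c, rain_mm):
    # Illustrative maize-like thresholds (growing-season mean temperature
    # in degrees C and total precipitation in mm) -- assumed values.
    t_score = trapezoid(temp_c, 10.0, 18.0, 33.0, 47.0)
    r_score = trapezoid(rain_mm, 400.0, 600.0, 1200.0, 1800.0)
    return min(t_score, r_score)   # the most limiting factor dominates

print(climate_suitability(25.0, 900.0))   # inside both optimal ranges
print(climate_suitability(14.0, 900.0))   # on the cool ramp
```

Evaluating such a function on gridded climate projections for each cell yields the kind of suitability maps the study compares across present and 2050s climates.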

  4. Automated Coronal Loop Identification using Digital Image Processing Techniques

    NASA Astrophysics Data System (ADS)

    Lee, J. K.; Gary, G. A.; Newman, T. S.

    2003-05-01

    The results of a Master's thesis study of computer algorithms for automatic extraction and identification (i.e., collectively, "detection") of optically thin, 3-dimensional (solar) coronal-loop center "lines" from extreme ultraviolet and X-ray 2-dimensional images will be presented. The center lines, which can be considered splines, are proxies for magnetic field lines. Detecting the loops is challenging because there are no unique shapes, the loop edges are often indistinct, and photon and detector noise heavily influence the images. Three techniques for detecting the projected magnetic field lines have been considered and will be described: (i) linear feature recognition of local patterns (related to the inertia-tensor concept), (ii) parameter-space inference via the Hough transform, and (iii) topologically adaptive contours (snakes) that constrain curvature and continuity. Since coronal loop topology is dominated by the magnetic field structure, a first-order magnetic field approximation using multiple dipoles provides a priori information that has also been incorporated into the detection process. Synthesized images have been generated to benchmark the suitability of the three techniques, and their performance on both synthesized and solar images will be presented and numerically evaluated. The process of automatic detection of coronal loops is important in the reconstruction of the coronal magnetic field, where the derived magnetic field lines provide a boundary condition for magnetic models (cf. Gary (2001, Solar Phys., 203, 71) and Wiegelmann & Neukirch (2002, Solar Phys., 208, 233)). This work was supported by NASA's Office of Space Science - Solar and Heliospheric Physics Supporting Research and Technology Program.
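    Technique (ii) above can be sketched as a voting procedure in (rho, theta) parameter space. The toy edge map below is an assumption; actual loop detection would operate on edge pixels extracted from solar images, and loops require curve rather than straight-line models.

```python
# Hedged sketch: a standard Hough transform that votes edge pixels into
# (rho, theta) bins for the line rho = x*cos(theta) + y*sin(theta).
import math

def hough_lines(points, n_theta=180, rho_step=1.0):
    """Accumulate (rho_bin, theta_bin) votes; theta bins are 1 degree wide."""
    votes = {}
    for x, y in points:
        for i in range(n_theta):
            theta = math.pi * i / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            key = (round(rho / rho_step), i)
            votes[key] = votes.get(key, 0) + 1
    return votes

# Toy edge map: five collinear points on the line y = x.
points = [(10 * i, 10 * i) for i in range(5)]
(best_rho_bin, best_theta_bin), count = max(
    hough_lines(points).items(), key=lambda kv: kv[1])
# The line y = x satisfies rho = 0 at theta = 135 degrees.
print(best_theta_bin, best_rho_bin, count)
```

The dominant accumulator cell recovers the line's normal angle and distance from the origin, which is the inference-in-parameter-space idea the abstract refers to.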

  5. The determination of high-resolution spatio-temporal glacier motion fields from time-lapse sequences

    NASA Astrophysics Data System (ADS)

    Schwalbe, Ellen; Maas, Hans-Gerd

    2017-12-01

    This paper presents a comprehensive method for the determination of glacier surface motion vector fields at high spatial and temporal resolution. These vector fields can be derived from monocular terrestrial camera image sequences and are a valuable data source for glaciological analysis of the motion behaviour of glaciers. The measurement concepts for the acquisition of image sequences are presented, and an automated monoscopic image sequence processing chain is developed. Motion vector fields can be derived with high precision by applying automatic subpixel-accuracy image matching techniques on grey value patterns in the image sequences. Well-established matching techniques have been adapted to the special characteristics of the glacier data in order to achieve high reliability in automatic image sequence processing, including the handling of moving shadows as well as motion effects induced by small instabilities in the camera set-up. Suitable geo-referencing techniques were developed to transform image measurements into a reference coordinate system. The result of monoscopic image sequence analysis is a dense raster of glacier surface point trajectories for each image sequence. Each translation vector component in these trajectories can be determined with an accuracy of a few centimetres for points at a distance of several kilometres from the camera. Extensive practical validation experiments have shown that motion vector and trajectory fields derived from monocular image sequences can be used for the determination of high-resolution velocity fields of glaciers, including the analysis of tidal effects on glacier movement, the investigation of a glacier's motion behaviour during calving events, the determination of the position and migration of the grounding line and the detection of subglacial channels during glacier lake outburst floods.
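    The subpixel-accuracy matching step can be illustrated with normalised cross-correlation over integer shifts, refined by parabolic interpolation of the correlation peak, a common sub-pixel trick; the paper's actual matching scheme may differ, and the 1-D signal below is a toy assumption standing in for 2-D grey-value patterns.

```python
# Hedged sketch: NCC template matching with parabolic sub-pixel refinement.
import math

def ncc(a, b):
    """Normalised cross-correlation of two equal-length sequences."""
    ma = sum(a) / len(a); mb = sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def subpixel_shift(signal, template):
    """Start index of the best-matching window, refined to sub-pixel accuracy."""
    scores = [ncc(signal[s:s + len(template)], template)
              for s in range(len(signal) - len(template) + 1)]
    k = max(range(len(scores)), key=scores.__getitem__)
    if 0 < k < len(scores) - 1:          # parabola through peak and neighbours
        l, c, r = scores[k - 1], scores[k], scores[k + 1]
        denom = l - 2 * c + r
        if denom != 0:
            return k + 0.5 * (l - r) / denom
    return float(k)

template = [0.0, 1.0, 0.0]
signal = [0.0, 0.0, 0.2, 1.0, 0.2, 0.0, 0.0]   # bright feature centred at index 3
print(subpixel_shift(signal, template))        # window start; peak is at start + 1
```

Tracking such shifts frame to frame, for a dense grid of patches, is what turns an image sequence into the trajectory raster described above.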

  6. Using the GeoFEST Faulted Region Simulation System

    NASA Technical Reports Server (NTRS)

    Parker, Jay W.; Lyzenga, Gregory A.; Donnellan, Andrea; Judd, Michele A.; Norton, Charles D.; Baker, Teresa; Tisdale, Edwin R.; Li, Peggy

    2004-01-01

    GeoFEST (the Geophysical Finite Element Simulation Tool) simulates stress evolution, fault slip and plastic/elastic processes in realistic materials, and so is suitable for earthquake cycle studies in regions such as Southern California. Many new capabilities and means of access for GeoFEST are now supported. New abilities include MPI-based cluster parallel computing using automatic PYRAMID/Parmetis-based mesh partitioning, automatic mesh generation for layered media with rectangular faults, and results visualization that is integrated with remote sensing data. The parallel GeoFEST application has been successfully run on over a half-dozen computers, including Intel Xeon clusters, Itanium II and Altix machines, and the Apple G5 cluster. It is not separately optimized for different machines, but relies on good domain partitioning for load balance and low communication, and careful writing of the parallel diagonally preconditioned conjugate gradient solver to keep communication overhead low. Demonstrated thousand-step solutions for over a million finite elements on 64 processors require under three hours, and scaling tests show high efficiency when using more than (order of) 4000 elements per processor. The source code and documentation for GeoFEST are available at no cost from the Open Channel Foundation. In addition, GeoFEST may be used through a browser-based portal environment available to approved users. That environment includes semi-automated geometry creation and mesh generation tools, GeoFEST, and RIVA-based visualization tools that include the ability to generate a flyover animation showing deformations and topography. Work is in progress to support simulation of a region with several faults using 16 million elements, using a strain energy metric to adapt the mesh to faithfully represent the solution in a region of widely varying strain.

  7. Development of Portable Automatic Number Plate Recognition System on Android Mobile Phone

    NASA Astrophysics Data System (ADS)

    Mutholib, Abdul; Gunawan, Teddy S.; Chebil, Jalel; Kartiwi, Mira

    2013-12-01

    The Automatic Number Plate Recognition (ANPR) system plays a central role in access control and security applications such as tracking stolen vehicles, enforcing traffic violations (speed traps) and parking management. In this paper, a portable ANPR implemented on an Android mobile phone is presented. The main challenges in a mobile application include higher coding efficiency, reduced computational complexity, and improved flexibility. Significant effort has gone into finding a suitable and adaptive algorithm for implementing ANPR on a mobile phone. An ANPR system for a mobile phone needs to be optimized for the phone's limited CPU and memory resources, while exploiting its ability to geo-tag captured images using GPS coordinates and to access an online database storing vehicle information. The design of the portable ANPR on an Android mobile phone is described as follows. First, a graphical user interface (GUI) for capturing images with the built-in camera was developed to acquire Malaysian vehicle plate numbers. Second, the raw image was preprocessed using contrast enhancement. Next, character segmentation using a fixed pitch and optical character recognition (OCR) using a neural network were applied to extract the text and numbers; both the character segmentation and the OCR used the Tesseract library from Google Inc. The proposed portable ANPR algorithm was implemented and simulated using the Android SDK on a computer. Based on the experimental results, the proposed system recognizes license plate numbers with 90.86% accuracy and requires only 2 seconds on average per plate. This compares well with previous systems running on a desktop PC, which reported recognition rates from 91.59% to 98% and recognition times from 0.284 to 1.5 seconds.
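    The fixed-pitch character segmentation step can be sketched as cutting the plate region into equal-width cells, one per expected character. The 1-D column list below is a toy stand-in (an assumption) for the pixel columns of a binarised plate image.

```python
# Hedged sketch of fixed-pitch segmentation: equal-width character cells.

def fixed_pitch_segments(width, n_chars):
    """Return (start, end) column ranges for n_chars equal-width cells."""
    pitch = width / n_chars
    return [(round(i * pitch), round((i + 1) * pitch)) for i in range(n_chars)]

def segment_columns(columns, n_chars):
    """Split a list of column profiles into one slice per character."""
    return [columns[a:b] for a, b in fixed_pitch_segments(len(columns), n_chars)]

# Toy plate: 12 columns, 4 characters -> 3 columns per character.
columns = list(range(12))
print(segment_columns(columns, 4))
```

Each cell would then be handed to the OCR stage individually; fixed pitch works when plate fonts are monospaced, which is why it suits standardised number plates.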

  8. RefMoB, a Reflectivity Feature Model-Based Automated Method for Measuring Four Outer Retinal Hyperreflective Bands in Optical Coherence Tomography

    PubMed Central

    Ross, Douglas H.; Clark, Mark E.; Godara, Pooja; Huisingh, Carrie; McGwin, Gerald; Owsley, Cynthia; Litts, Katie M.; Spaide, Richard F.; Sloan, Kenneth R.; Curcio, Christine A.

    2015-01-01

    Purpose. To validate a model-driven method (RefMoB) of automatically describing the four outer retinal hyperreflective bands revealed by spectral-domain optical coherence tomography (SDOCT), for comparison with histology of normal macula; to report thickness and position of bands, particularly band 2 (ellipsoid zone [EZ], commonly called IS/OS). Methods. Foveal and superior perifoveal scans of seven SDOCT volumes of five individuals aged 28 to 69 years with healthy maculas were used (seven eyes for validation, five eyes for measurement). RefMoB determines band thickness and position by a multistage procedure that models reflectivities as a summation of Gaussians. Band thickness and positions were compared with those obtained by manual evaluators for the same scans, and compared with an independent published histological dataset. Results. Agreement among manual evaluators was moderate. Relative to manual evaluation, RefMoB reported reduced thickness and vertical shifts in band positions in a band-specific manner for both simulated and empirical data. In foveal and perifoveal scans, band 1 was thick relative to the anatomical external limiting membrane, band 2 aligned with the outer one-third of the anatomical IS ellipsoid, and band 3 (IZ, interdigitation of retinal pigment epithelium and photoreceptors) was cleanly delineated. Conclusions. RefMoB is suitable for automatic description of the location and thickness of the four outer retinal hyperreflective bands. Initial results suggest that band 2 aligns with the outer ellipsoid, thus supporting its recent designation as EZ. Automated and objective delineation of band 3 will help investigations of structural biomarkers of dark-adaptation changes in aging. PMID:26132776

  9. Reconfigurable environmentally adaptive computing

    NASA Technical Reports Server (NTRS)

    Coxe, Robin L. (Inventor); Galica, Gary E. (Inventor)

    2008-01-01

    Described are methods and apparatus, including computer program products, for reconfigurable environmentally adaptive computing technology. An environmental signal representative of an external environmental condition is received. A processing configuration is automatically selected, based on the environmental signal, from a plurality of processing configurations. A reconfigurable processing element is reconfigured to operate according to the selected processing configuration. In some examples, the environmental condition is detected and the environmental signal is generated based on the detected condition.

  10. Automatic evaluations and exercise setting preference in frequent exercisers.

    PubMed

    Antoniewicz, Franziska; Brand, Ralf

    2014-12-01

    The goals of this study were to test whether exercise-related stimuli can elicit automatic evaluative responses and whether automatic evaluations reflect exercise setting preference in highly active exercisers. An adapted version of the Affect Misattribution Procedure was employed. Seventy-two highly active exercisers (26 ± 9.03 years; 43% female) were subliminally primed (7 ms) with pictures depicting typical fitness center scenarios or gray rectangles (control primes). After each prime, participants consciously evaluated the "pleasantness" of a Chinese symbol. Controlled evaluations were measured with a questionnaire and were more positive in participants who regularly visited fitness centers than in those who reported avoiding this exercise setting. Only center exercisers gave automatic positive evaluations of the fitness center setting (partial eta squared = .08). It is proposed that a subliminal Affect Misattribution Procedure paradigm can elicit automatic evaluations to exercising and that, in highly active exercisers, these evaluations play a role in decisions about the exercise setting rather than the amounts of physical exercise. Findings are interpreted in terms of a dual systems theory of social information processing and behavior.

  11. Automatic Cell Segmentation in Fluorescence Images of Confluent Cell Monolayers Using Multi-object Geometric Deformable Model.

    PubMed

    Yang, Zhen; Bogovic, John A; Carass, Aaron; Ye, Mao; Searson, Peter C; Prince, Jerry L

    2013-03-13

    With the rapid development of microscopy for cell imaging, there is a strong and growing demand for image analysis software to quantitatively study cell morphology. Automatic cell segmentation is an important step in image analysis. Despite substantial progress, there is still a need to improve the accuracy, efficiency, and adaptability to different cell morphologies. In this paper, we propose a fully automatic method for segmenting cells in fluorescence images of confluent cell monolayers. This method addresses several challenges through a combination of ideas. 1) It realizes a fully automatic segmentation process by first detecting the cell nuclei as initial seeds and then using a multi-object geometric deformable model (MGDM) for final segmentation. 2) To deal with different defects in the fluorescence images, the cell junctions are enhanced by applying an order-statistic filter and principal curvature based image operator. 3) The final segmentation using MGDM promotes robust and accurate segmentation results, and guarantees no overlaps and gaps between neighboring cells. The automatic segmentation results are compared with manually delineated cells, and the average Dice coefficient over all distinguishable cells is 0.88.

  12. Towards an automatic wind speed and direction profiler for Wide Field adaptive optics systems

    NASA Astrophysics Data System (ADS)

    Sivo, G.; Turchi, A.; Masciadri, E.; Guesalaga, A.; Neichel, B.

    2018-05-01

Wide Field Adaptive Optics (WFAO) systems are among the most sophisticated adaptive optics (AO) systems available today on large telescopes. Knowledge of the vertical spatio-temporal distribution of wind speed (WS) and direction (WD) is fundamental to optimizing the performance of such systems. Previous studies have already proved that the Gemini Multi-Conjugate AO system (GeMS) is able to retrieve measurements of the WS and WD stratification using the SLOpe Detection And Ranging (SLODAR) technique and to store these measurements in the telemetry data. In order to assess the reliability of these estimates and of the SLODAR technique applied to such complex AO systems, in this study we compared WS and WD values retrieved from GeMS with those obtained with the atmospheric model Meso-NH on a rich statistical sample of nights. It has previously been proved that the latter technique provides excellent agreement with a large sample of radiosoundings, both in statistical terms and on individual flights. It can therefore be considered an independent reference. The excellent agreement between GeMS measurements and the model that we find in this study proves the robustness of the SLODAR approach. To bypass the complex procedures necessary to achieve automatic wind measurements with GeMS, we propose a simple automatic method to monitor nightly WS and WD using Meso-NH model estimates. Such a method can be applied to any present or next-generation facility supported by WFAO systems. The interest of this study therefore extends well beyond the optimization of GeMS performance.

  13. New generation of the health monitoring system SMS 2001

    NASA Astrophysics Data System (ADS)

    Berndt, Rolf-Dietrich; Schwesinger, Peter

    2001-08-01

The Structure Monitoring System SMS 2001 (patent applied for) is a modular multi-component measurement device for use under outdoor conditions. In addition to the usual continuous (static) measurement of, e.g., environmental parameters and structure-related responses, the SMS can also automatically register short-term dynamic events at measurement frequencies of up to 1 kHz. A large range of electrical sensors can be used, and a solar-based power supply can be provided on demand. The SMS 2001 is highly adaptable, space-saving in its geometric structure, and can meet widely varying user demands. The system is applicable preferably to small and medium-sized concrete and steel structures (besides buildings and bridges, also special cases). It is suitable for supporting the efficient concept of controlled lifetime extension, especially in the case of pre-damaged structures. The interactive communication between the SMS and the central office is completely remote-controlled; two-point or multi-point connections over the internet can be realized. The measurement data are stored in a central database. Safe access, supported by software modules, can be organized at different levels, e.g., for scientific evaluation, service reasons, or the needs of authorities.

  14. Empirical Mode Decomposition and Neural Networks on FPGA for Fault Diagnosis in Induction Motors

    PubMed Central

    Garcia-Perez, Arturo; Osornio-Rios, Roque Alfredo; Romero-Troncoso, Rene de Jesus

    2014-01-01

Nowadays, many industrial applications require online systems that combine several processing techniques in order to offer solutions to complex problems, such as the detection and classification of multiple faults in induction motors. In this work, a novel digital structure is presented that implements the empirical mode decomposition (EMD) for processing nonstationary and nonlinear signals using the full spline-cubic function; it is combined with an adaptive linear network (ADALINE)-based frequency estimator and a feedforward neural network (FFNN)-based classifier to provide an intelligent methodology for the automatic diagnosis, during the startup transient, of motor faults such as one and two broken rotor bars, bearing defects, and unbalance. Moreover, implementing the overall methodology in a field-programmable gate array (FPGA) allows online, real-time operation thanks to its parallelism and high-performance capabilities as a system-on-a-chip (SoC) solution. The detection and classification results show the effectiveness of the proposed fused techniques; moreover, the high precision and minimal resource usage of the developed digital structures make them a suitable and low-cost solution for this and many other industrial applications. PMID:24678281
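As a rough illustration of the EMD idea referenced above (a NumPy sketch of the sifting principle, not the paper's FPGA spline-cubic implementation), each intrinsic mode function (IMF) is obtained by repeatedly subtracting the mean of the extrema envelopes, and the IMFs plus the residue reconstruct the signal by construction:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift_imf(x, n_sift=10):
    """One intrinsic mode function (IMF): repeatedly subtract the mean
    of the cubic-spline envelopes through the maxima and minima."""
    h = x.copy()
    t = np.arange(len(x))
    for _ in range(n_sift):
        maxima = argrelextrema(h, np.greater)[0]
        minima = argrelextrema(h, np.less)[0]
        if len(maxima) < 4 or len(minima) < 4:
            break  # too few extrema for a stable spline envelope
        upper = CubicSpline(maxima, h[maxima])(t)
        lower = CubicSpline(minima, h[minima])(t)
        h = h - (upper + lower) / 2.0
    return h

def emd(x, max_imfs=4):
    """Decompose x into IMFs plus a residue; by construction the IMFs
    and the residue sum back to the original signal."""
    imfs, residue = [], x.copy()
    for _ in range(max_imfs):
        imf = sift_imf(residue)
        imfs.append(imf)
        residue = residue - imf
        # stop once the residue has (almost) no oscillations left
        if len(argrelextrema(residue, np.greater)[0]) < 4:
            break
    return imfs, residue
```

A practical implementation would add stricter IMF stopping criteria and boundary handling; this sketch only conveys the decomposition structure.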

  15. Gendermetrics.NET: a novel software for analyzing the gender representation in scientific authoring.

    PubMed

    Bendels, Michael H K; Brüggmann, Dörthe; Schöffel, Norman; Groneberg, David A

    2016-01-01

    Imbalances in female career promotion are believed to be strong in the field of academic science. A primary parameter to analyze gender inequalities is the gender authoring in scientific publications. Since the presently available data on gender distribution is largely limited to underpowered studies, we here develop a new approach to analyze authors' genders in large bibliometric databases. A SQL-Server based multiuser software suite was developed that serves as an integrative tool for analyzing bibliometric data with a special emphasis on gender and topographical analysis. The presented system allows seamless integration, inspection, modification, evaluation and visualization of bibliometric data. By providing an adaptive and almost fully automatic integration and analysis process, the inter-individual variability of analysis is kept at a low level. Depending on the scientific question, the system enables the user to perform a scientometric analysis including its visualization within a short period of time. In summary, a new software suite for analyzing gender representations in scientific articles was established. The system is suitable for the comparative analysis of scientific structures on the level of continents, countries, cities, city regions, institutions, research fields and journals.

  16. Empirical mode decomposition and neural networks on FPGA for fault diagnosis in induction motors.

    PubMed

    Camarena-Martinez, David; Valtierra-Rodriguez, Martin; Garcia-Perez, Arturo; Osornio-Rios, Roque Alfredo; Romero-Troncoso, Rene de Jesus

    2014-01-01

Nowadays, many industrial applications require online systems that combine several processing techniques in order to offer solutions to complex problems, such as the detection and classification of multiple faults in induction motors. In this work, a novel digital structure is presented that implements the empirical mode decomposition (EMD) for processing nonstationary and nonlinear signals using the full spline-cubic function; it is combined with an adaptive linear network (ADALINE)-based frequency estimator and a feedforward neural network (FFNN)-based classifier to provide an intelligent methodology for the automatic diagnosis, during the startup transient, of motor faults such as one and two broken rotor bars, bearing defects, and unbalance. Moreover, implementing the overall methodology in a field-programmable gate array (FPGA) allows online, real-time operation thanks to its parallelism and high-performance capabilities as a system-on-a-chip (SoC) solution. The detection and classification results show the effectiveness of the proposed fused techniques; moreover, the high precision and minimal resource usage of the developed digital structures make them a suitable and low-cost solution for this and many other industrial applications.

  17. Importance of balanced architectures in the design of high-performance imaging systems

    NASA Astrophysics Data System (ADS)

    Sgro, Joseph A.; Stanton, Paul C.

    1999-03-01

Imaging systems employed in demanding military and industrial applications, such as automatic target recognition and computer vision, typically require real-time high-performance computing resources. While high-performance computing systems have traditionally relied on proprietary architectures and custom components, recent advances in high-performance general-purpose microprocessor technology have produced an abundance of low-cost components suitable for use in high-performance computing systems. A common pitfall in the design of high-performance imaging systems, particularly systems employing scalable multiprocessor architectures, is the failure to balance computational and memory bandwidth. The performance of standard cluster designs, for example, in which several processors share a common memory bus, is typically constrained by memory bandwidth. The characteristic symptom of this problem is the failure of system performance to scale as more processors are added. The problem is exacerbated if I/O and memory functions share the same bus. The recent introduction of microprocessors with large internal caches and high-performance external memory interfaces makes it practical to design high-performance imaging systems with balanced computational and memory bandwidth. Real-world examples of such designs are presented, along with a discussion of adapting algorithm design to best utilize the available memory bandwidth.

  18. Model-Based Learning of Local Image Features for Unsupervised Texture Segmentation

    NASA Astrophysics Data System (ADS)

    Kiechle, Martin; Storath, Martin; Weinmann, Andreas; Kleinsteuber, Martin

    2018-04-01

    Features that capture well the textural patterns of a certain class of images are crucial for the performance of texture segmentation methods. The manual selection of features or designing new ones can be a tedious task. Therefore, it is desirable to automatically adapt the features to a certain image or class of images. Typically, this requires a large set of training images with similar textures and ground truth segmentation. In this work, we propose a framework to learn features for texture segmentation when no such training data is available. The cost function for our learning process is constructed to match a commonly used segmentation model, the piecewise constant Mumford-Shah model. This means that the features are learned such that they provide an approximately piecewise constant feature image with a small jump set. Based on this idea, we develop a two-stage algorithm which first learns suitable convolutional features and then performs a segmentation. We note that the features can be learned from a small set of images, from a single image, or even from image patches. The proposed method achieves a competitive rank in the Prague texture segmentation benchmark, and it is effective for segmenting histological images.
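The segmentation model named above is, in its standard formulation (the paper's exact notation may differ), the piecewise constant Mumford-Shah (Potts) functional: given the learned feature image f on a domain Ω, one seeks a piecewise constant approximation u whose jump set J_u is short:

```latex
% Piecewise constant (Potts) Mumford-Shah model: approximate the
% learned feature image f by a piecewise constant u, penalizing the
% (d-1)-dimensional measure of the jump set J_u with weight gamma > 0.
\min_{u\ \mathrm{p.c.}} \;
  \gamma \, \mathcal{H}^{d-1}(J_u)
  \; + \; \int_{\Omega} \lVert u(x) - f(x) \rVert_2^2 \, dx
```

Learning features so that f is itself approximately piecewise constant with a small jump set is what makes this minimization a suitable training objective in the absence of ground-truth segmentations.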

  19. Automatic target detection using binary template matching

    NASA Astrophysics Data System (ADS)

    Jun, Dong-San; Sun, Sun-Gu; Park, HyunWook

    2005-03-01

    This paper presents a new automatic target detection (ATD) algorithm to detect targets such as battle tanks and armored personal carriers in ground-to-ground scenarios. Whereas most ATD algorithms were developed for forward-looking infrared (FLIR) images, we have developed an ATD algorithm for charge-coupled device (CCD) images, which have superior quality to FLIR images in daylight. The proposed algorithm uses fast binary template matching with an adaptive binarization, which is robust to various light conditions in CCD images and saves computation time. Experimental results show that the proposed method has good detection performance.
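One common form of adaptive binarization is Otsu's method, which picks a per-image threshold by maximizing between-class variance; the abstract does not specify which adaptive scheme the authors use, so the sketch below is an illustrative stand-in. Once images are binary, template matching reduces to counting agreeing pixels:

```python
import numpy as np

def otsu_threshold(img):
    """Histogram-based Otsu threshold: choose the gray level that
    maximizes the between-class variance of the two resulting classes."""
    hist, edges = np.histogram(img.ravel(), bins=256, range=(0, 256))
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = 0.0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (p[:t] * centers[:t]).sum() / w0
        m1 = (p[t:] * centers[t:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_t, best_var = centers[t], var
    return best_t

def binary_match_score(window, template):
    """Fraction of pixels on which two binary patches agree."""
    return float(np.mean(window == template))
```

In a detector, the threshold would be recomputed per image (or per region) so the binarization adapts to changing light conditions, and the match score would be evaluated at each candidate window position.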

  20. The design of a microscopic system for typical fluorescent in-situ hybridization applications

    NASA Astrophysics Data System (ADS)

    Yi, Dingrong; Xie, Shaochuan

    2013-12-01

Fluorescence in situ hybridization (FISH) is a modern molecular biology technique used for detecting genetic abnormalities in the number and structure of chromosomes and genes. The FISH technique is typically employed for prenatal diagnosis of congenital dementia in obstetrics and gynecology departments. It is also routinely used to identify breast cancer patients who qualify for HER2-targeted therapy, for whom the prescription is known to be highly effective. During the microscopic observation phase, the technician typically needs to count the green and red probe dots contained in a single nucleus and calculate their ratio, a procedure that must be repeated for hundreds of nuclei. Successful implementation of FISH tests critically depends on a suitable fluorescence microscope, which is primarily imported from overseas because the complexity of such a system is beyond the maturity of the domestic optoelectronic industry. In this paper, the typical requirements of a fluorescence microscope suitable for FISH applications are first reviewed. The focus of the paper is on the system design and computational methods of an automatic fluorescence microscope with high-magnification APO objectives, a fast-spinning automatic filter wheel, an automatic shutter, a cooled CCD camera used as a photodetector, and a software platform for image acquisition, registration, pseudo-color generation, multi-channel fusing and multi-focus fusion. Preliminary results from FISH experiments indicate that this system satisfies routine FISH microscopic observation tasks.

  1. Automatic detection of measurement points for non-contact vibrometer-based diagnosis of cardiac arrhythmias

    NASA Astrophysics Data System (ADS)

    Metzler, Jürgen; Kroschel, Kristian; Willersinn, Dieter

    2017-03-01

Monitoring of the heart rhythm is the cornerstone of the diagnosis of cardiac arrhythmias. It is done by means of electrocardiography, which relies on electrodes attached to the patient's skin. We present a new system approach based on the so-called vibrocardiogram that allows automatic non-contact registration of the heart rhythm. Because of the contactless principle, the technique offers potential advantages in medical fields such as emergency medicine (burn patients) or premature baby care, where adhesive electrodes are not easily applicable. A laser-based, mobile, contactless vibrometer for on-site diagnostics that works on the principle of laser Doppler vibrometry allows the acquisition of vital functions in the form of a vibrocardiogram. Preliminary clinical studies at the Klinikum Karlsruhe have shown that the region around the carotid artery and the chest region are appropriate for this purpose. However, the challenge is to find a suitable measurement point in these parts of the body, which differs from person to person due to, e.g., physiological properties of the skin. We therefore propose a new Microsoft Kinect-based approach. Once a suitable measurement area on the appropriate part of the body is detected by processing the Kinect data, the vibrometer is automatically aligned to an initial location within this area. Vibrocardiograms at different locations within this area are then acquired successively until a sufficient measurement quality is achieved. This optimal location is found by exploiting the autocorrelation function.

  2. Automatic Processing and Interpretation of Long Records of Endogenous Micro-Seismicity: the Case of the Super-Sauze Soft-Rock Landslide.

    NASA Astrophysics Data System (ADS)

    Provost, F.; Malet, J. P.; Hibert, C.; Doubre, C.

    2017-12-01

The Super-Sauze landslide is a clay-rich landslide located in the Southern French Alps. The landslide exhibits a complex pattern of deformation: a large number of rockfalls are observed in the 100 m high main scarp, while the deformation of the upper part of the accumulated material is mainly governed by material shearing along stable in-situ crests. Several fissures are observed locally. The shallowest layer of the accumulated material tends to behave in a brittle manner but may undergo fluidization and/or rapid acceleration. Previous studies have demonstrated the presence of rich endogenous micro-seismicity associated with the deformation of the landslide. However, the lack of long-term seismic records and suitable processing chains has prevented a full interpretation of the links between external forcings, deformation and the recorded seismic signals. Since 2013, two permanent seismic arrays have been installed in the upper part of the landslide. We here present the methodology adopted to process this dataset. The processing chain consists of a set of automated methods for robust detection, classification and location of the recorded seismicity. Thousands of events are detected and subsequently classified automatically. The classification method is based on describing the signal through attributes (e.g., waveform and spectral-content properties). These attributes are used as inputs to a Random Forest machine-learning algorithm that classifies the signal into four classes: endogenous micro-quakes, rockfalls, regional earthquakes and natural/anthropogenic noise. The endogenous landslide sources (i.e., micro-quakes and rockfalls) are then located. The location method is adapted to the type of event. The micro-quakes are located with a 3D velocity model derived from a seismic tomography campaign and an optimization of the first-arrival picking using the inter-trace correlation of the P-wave arrivals.
The rockfalls are located by optimizing the inter-trace correlation of the whole signal. We analyze the temporal relationships of the endogenous seismic events with rainfall and landslide displacements. Sub-families of landslide micro-quakes are also identified and an interpretation of their source mechanism is proposed from their signal properties and spatial location.
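The attribute-based Random Forest classification step can be sketched as follows. The three attributes and the synthetic events below are illustrative stand-ins, not the study's actual feature set or data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical per-event signal attributes (duration, dominant
# frequency, an envelope-shape measure); names are illustrative only.
def make_events(n, duration, peak_freq, shape):
    return np.column_stack([
        rng.normal(duration, 0.5, n),   # duration (s)
        rng.normal(peak_freq, 2.0, n),  # dominant frequency (Hz)
        rng.normal(shape, 0.3, n),      # envelope-shape attribute
    ])

# Four classes, mirroring the processing chain described above.
X = np.vstack([
    make_events(100, 1.0, 30.0, 0.2),   # endogenous micro-quakes
    make_events(100, 5.0, 15.0, 0.8),   # rockfalls
    make_events(100, 20.0, 5.0, 0.5),   # regional earthquakes
    make_events(100, 3.0, 50.0, 1.5),   # noise
])
y = np.repeat(["micro-quake", "rockfall", "regional", "noise"], 100)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
```

In practice the attribute vector is far richer (dozens of waveform and spectral descriptors), and performance would be judged on held-out events rather than training data.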

  3. Substantiation of Structure of Adaptive Control Systems for Motor Units

    NASA Astrophysics Data System (ADS)

    Ovsyannikov, S. I.

    2018-05-01

The article describes the development of new electronic control systems, in particular for motor units of small-sized agricultural equipment. Based on an analysis of traffic control systems, the main direction of development for conceptual motor-unit designs has been defined. Systems have been developed that control the course motion of the motor unit automatically by means of adaptive techniques. The article presents structural models of the conceptual motor units based on electrical control of the drive motors and on adaptive systems that make the motor units completely automated.

  4. Multi-Level Adaptive Techniques (MLAT) for singular-perturbation problems

    NASA Technical Reports Server (NTRS)

    Brandt, A.

    1978-01-01

    The multilevel (multigrid) adaptive technique, a general strategy of solving continuous problems by cycling between coarser and finer levels of discretization is described. It provides very fast general solvers, together with adaptive, nearly optimal discretization schemes. In the process, boundary layers are automatically either resolved or skipped, depending on a control function which expresses the computational goal. The global error decreases exponentially as a function of the overall computational work, in a uniform rate independent of the magnitude of the singular-perturbation terms. The key is high-order uniformly stable difference equations, and uniformly smoothing relaxation schemes.

  5. An Item-Driven Adaptive Design for Calibrating Pretest Items. Research Report. ETS RR-14-38

    ERIC Educational Resources Information Center

    Ali, Usama S.; Chang, Hua-Hua

    2014-01-01

    Adaptive testing is advantageous in that it provides more efficient ability estimates with fewer items than linear testing does. Item-driven adaptive pretesting may also offer similar advantages, and verification of such a hypothesis about item calibration was the main objective of this study. A suitability index (SI) was introduced to adaptively…

  6. A simplified financial model for automatic meter reading

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ward, S.M.

    1994-01-15

The financial model proposed here (which can be easily adapted for electric, gas, or water) combines aspects of "life cycle," "consumer value" and "revenue based" approaches and addresses intangible benefits. A simple value tree of one-word descriptions clarifies the relationship between level of investment and level of value, visually relating increased value to increased cost. The model computes the numerical present values of capital costs, recurring costs, and revenue benefits over a 15-year period for seven configurations: manual reading of existing or replacement standard meters (MMR), manual reading using electronic, hand-held retrievers (EMR), remote reading of inaccessible meters via hard-wired receptacles (RMR), remote reading of meters adapted with pulse generators (RMR-P), remote reading of meters adapted with absolute dial encoders (RMR-E), offsite reading over a few hundred feet with mobile radio (OMR), and fully automatic reading using telephone or an equivalent network (AMR). In the model, of course, the costs of installing the configurations are clearly listed under each column. The model requires only four annualized inputs and seven fixed-cost inputs that are rather easy to obtain.
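The present-value computation at the core of such a model can be sketched generically; the helper names, discount rate and cash-flow categories below are illustrative, not the article's figures:

```python
def present_value(cashflows, rate):
    """Discount a stream of end-of-year cash flows to present value."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows, start=1))

def net_value(capital_cost, annual_cost, annual_benefit, rate, years=15):
    """Net present value of a metering configuration over `years` years:
    discounted annual net benefits minus the up-front capital cost."""
    annual_net = [annual_benefit - annual_cost] * years
    return present_value(annual_net, rate) - capital_cost
```

Running `net_value` for each of the seven configurations with its own capital and recurring costs reproduces the kind of side-by-side 15-year comparison the model performs.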

  7. Cellular neural network-based hybrid approach toward automatic image registration

    NASA Astrophysics Data System (ADS)

    Arun, Pattathal VijayaKumar; Katiyar, Sunil Kumar

    2013-01-01

Image registration is a key component of various image processing operations that involve the analysis of different image data sets. Automatic image registration has seen the application of many intelligent methodologies over the past decade; however, the inability to properly model object shape as well as contextual information has limited the attainable accuracy. A framework for accurate feature shape modeling and adaptive resampling using advanced techniques such as vector machines, cellular neural networks (CNN), the scale invariant feature transform (SIFT), coresets, and cellular automata is proposed. The CNN has been found to be effective in improving the feature matching and resampling stages of registration, and the complexity of the approach has been considerably reduced using coreset optimization. The salient features of this work are CNN-based SIFT feature point optimization, adaptive resampling, and intelligent object modeling. The developed methodology has been compared with contemporary methods using different statistical measures. Investigations on various satellite images revealed that considerable success was achieved with the approach. The system dynamically uses spectral and spatial information to represent contextual knowledge via a CNN-Prolog approach. The methodology is also shown to be effective in providing intelligent interpretation and adaptive resampling.

  8. Reconstruction of sparse-view X-ray computed tomography using adaptive iterative algorithms.

    PubMed

    Liu, Li; Lin, Weikai; Jin, Mingwu

    2015-01-01

In this paper, we propose two reconstruction algorithms for sparse-view X-ray computed tomography (CT). Treating the reconstruction problem as data-fidelity-constrained total variation (TV) minimization, both algorithms adopt an alternating two-stage strategy: projection onto convex sets (POCS) for the data fidelity and non-negativity constraints, and steepest descent for TV minimization. The novelty of this work is to determine the iterative parameters automatically from the data, thus avoiding tedious manual parameter tuning. In TV minimization, the step sizes of steepest descent are adaptively adjusted according to the difference from the POCS update in either the projection domain or the image domain, while the step size of the algebraic reconstruction technique (ART) in POCS is determined based on the data noise level. In addition, projection errors are compared with the error bound to decide whether to perform ART, so as to reduce computational costs. The performance of the proposed methods is studied and evaluated using both simulated and physical phantom data. Our methods with automatic parameter tuning achieve similar, if not better, reconstruction performance compared to a representative two-stage algorithm. Copyright © 2014 Elsevier Ltd. All rights reserved.
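One alternation of such a two-stage scheme can be sketched as follows. This is a minimal NumPy illustration of the POCS-then-TV-descent structure with the TV step size tied to the size of the POCS change, not the authors' exact update rules or parameter formulas:

```python
import numpy as np

def tv_gradient(img, eps=1e-8):
    """(Sub)gradient of smoothed isotropic total variation of a 2-D image:
    minus the divergence of the normalized forward-difference gradient."""
    gx = np.diff(img, axis=0, append=img[-1:, :])
    gy = np.diff(img, axis=1, append=img[:, -1:])
    norm = np.sqrt(gx**2 + gy**2 + eps)
    div_x = np.diff(gx / norm, axis=0, prepend=np.zeros((1, img.shape[1])))
    div_y = np.diff(gy / norm, axis=1, prepend=np.zeros((img.shape[0], 1)))
    return -(div_x + div_y)

def pocs_tv_step(img, A, b, art_step=1.0, n_tv=5):
    """One alternation: ART-like data-fidelity update with a
    non-negativity projection, then TV steepest descent with a step
    size scaled by the magnitude of the POCS change."""
    # POCS stage: one simultaneous ART sweep, then clip to >= 0
    residual = b - A @ img.ravel()
    update = art_step * (A.T @ residual) / (A**2).sum(axis=0).clip(1e-8)
    img_pocs = np.clip(img + update.reshape(img.shape), 0, None)
    # TV stage: step size adapted to the POCS change in the image domain
    dp = np.linalg.norm(img_pocs - img)
    out = img_pocs.copy()
    for _ in range(n_tv):
        g = tv_gradient(out)
        gnorm = np.linalg.norm(g)
        if gnorm > 0:
            out = out - 0.2 * dp * g / gnorm
    return out
```

Here `A` is the system (projection) matrix and `b` the measured projection data; in a real reconstruction, `A` is applied matrix-free and the alternation is iterated to convergence.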

  9. Active materials for automotive adaptive forward lighting Part 1: system requirements vs. material properties

    NASA Astrophysics Data System (ADS)

    Keefe, Andrew C.; Browne, Alan L.; Johnson, Nancy L.

    2011-04-01

Adaptive Frontlighting Systems (AFS in GM usage) improve visibility by automatically optimizing the beam pattern to accommodate road, driving and environmental conditions. By moving, modifying, and/or adding light during nighttime, inclement weather, or sharp turns, the driver is presented with dynamic illumination not possible with static lighting systems. The objective of this GM-HRL collaborative research project was to assess the potential of active materials to decrease the cost, mass, and packaging volume of current electric stepper-motor AFS designs. Solid-state active material actuators, if proved suitable for this application, could be less expensive than electric motors and have lower part counts, reduced size and weight, and lower acoustic and EMF noise. This paper documents Part 1 of the collaborative study, which assessed technically mature, commercially available active materials for use as actuators. Candidate materials should reduce cost and improve AFS capabilities, such as increased angular velocity on swivel. Additional benefits to AFS resulting from active material actuators, such as a lower part count, were also to be identified. In addition, several notional approaches to AFS were documented to illustrate the potential function, which is developed more fully in Part 2. Part 1 succeeded in verifying the feasibility of using two active materials for AFS: shape memory alloys and piezoelectrics. In particular, this demonstration showed that all application requirements, including those on actuation speed, force, and cyclic stability needed to manipulate the filament assembly and/or the reflector, could be met by piezoelectrics (as ultrasonic motors) and SMA wire actuators.

  10. [Not Available].

    PubMed

    Burgot, J L

    1978-06-01

    Acids conjugated to various phenothiazine derivatives are titrated directly with sodium hydroxide, by means of an automatic thermometric titrimeter. The titration curves have sharp breaks, suitable for analytical use, and these are discussed, in the case of promethazine hydrochloride, as functions of various parameters such as pK(a), the solubility of the product and the enthalpy of neutralization (determined in this work).

  11. The SIETTE Automatic Assessment Environment

    ERIC Educational Resources Information Center

    Conejo, Ricardo; Guzmán, Eduardo; Trella, Monica

    2016-01-01

    This article describes the evolution and current state of the domain-independent Siette assessment environment. Siette supports different assessment methods--including classical test theory, item response theory, and computer adaptive testing--and integrates them with multidimensional student models used by intelligent educational systems.…

  12. Development and preliminary testing of an automatic turning movements identification system : final report, February 2010.

    DOT National Transportation Integrated Search

    2010-02-01

    It is important for many applications, such as intersection delay estimation and adaptive signal : control, to obtain vehicle turning movement information at signalized intersections. However, : vehicle turning movement information is very time consu...

  13. Automatic detection of tweets reporting cases of influenza like illnesses in Australia

    PubMed Central

    2015-01-01

    Early detection of disease outbreaks is critical for disease spread control and management. In this work we investigate the suitability of statistical machine learning approaches to automatically detect Twitter messages (tweets) that are likely to report cases of possible influenza like illnesses (ILI). Empirical results obtained on a large set of tweets originating from the state of Victoria, Australia, in a 3.5 month period show evidence that machine learning classifiers are effective in identifying tweets that mention possible cases of ILI (up to 0.736 F-measure, i.e. the harmonic mean of precision and recall), regardless of the specific technique implemented by the classifier investigated in the study. PMID:25870759
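The F-measure quoted above is the harmonic mean of precision and recall; as a quick reference, it can be computed as:

```python
def f_measure(precision, recall):
    """F1 score: harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Because the harmonic mean is dominated by the smaller of the two values, a classifier reaching 0.736 must have both reasonably high precision and reasonably high recall.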

  14. Enhancement of L-Threonine Production by Controlling Sequential Carbon-Nitrogen Ratios during Fermentation.

    PubMed

    Lee, Hyeok-Won; Lee, Hee-Suk; Kim, Chun-Suk; Lee, Jin-Gyeom; Kim, Won-Kyo; Lee, Eun-Gyo; Lee, Hong-Weon

    2018-02-28

Controlling the residual glucose concentration is important for improving productivity in L-threonine fermentation. In this study, we developed a procedure to automatically control the feeding quantity of glucose solution as a function of the ammonia-water consumption rate. The feeding ratio (R_C/N) of glucose to ammonia water was predetermined via a stoichiometric approach, on the basis of the glucose and ammonia-water consumption rates. In a 5-L fermenter, 102 g/l of L-threonine was obtained using our combined glucose-ammonia water feeding strategy, which was then successfully applied in a 500-L fermenter (89 g/l). We therefore conclude that an automatic combined feeding strategy is suitable for improving L-threonine production.

  15. Simulation and visualization of fundamental optics phenomenon by LabVIEW

    NASA Astrophysics Data System (ADS)

    Lyu, Bohan

    2017-08-01

Most instructors teach complex phenomena using equations and static illustrations, without interactive multimedia, and students usually memorize the phenomena by taking notes. However, notes and complex formulas alone cannot help users visualize the behavior of a photonic system. LabVIEW is a good tool for automatic measurement, and the simplicity of coding in LabVIEW makes it suitable not only for automatic measurement but also for the simulation and visualization of fundamental optics phenomena. In this paper, five simple optics phenomena are discussed and simulated with LabVIEW: Snell's law, Hermite-Gaussian transverse beam modes, square- and circular-aperture diffraction, polarized waves and the Poincaré sphere, and finally the Fabry-Perot etalon in the spectral domain.

  16. Automatic visual monitoring of welding procedure in stainless steel kegs

    NASA Astrophysics Data System (ADS)

    Leo, Marco; Del Coco, Marco; Carcagnì, Pierluigi; Spagnolo, Paolo; Mazzeo, Pier Luigi; Distante, Cosimo; Zecca, Raffaele

    2018-05-01

    In this paper a system for automatic visual monitoring of the welding process in dry stainless steel kegs for food storage is proposed. In the considered manufacturing process, the upper and lower skirts are welded to the vessel by means of Tungsten Inert Gas (TIG) welding. During the process several problems can arise: 1) residuals on the bottom, 2) a darker weld, 3) excessive/poor penetration and 4) outgrowths. The proposed system addresses all four of these problems, and its inspection performance has been evaluated on a large set of kegs, demonstrating both reliability in terms of defect detection and suitability for introduction into the manufacturing system in terms of computational cost.

  17. An adaptive spatio-temporal Gaussian filter for processing cardiac optical mapping data.

    PubMed

    Pollnow, S; Pilia, N; Schwaderlapp, G; Loewe, A; Dössel, O; Lenis, G

    2018-06-04

    Optical mapping is widely used as a tool to investigate cardiac electrophysiology in ex vivo preparations. Digital filtering of fluorescence-optical data is an important requirement for robust subsequent data analysis and still a challenge when processing data acquired from thin mammalian myocardium. Therefore, we propose and investigate an adaptive spatio-temporal Gaussian filter for processing optical mapping signals from this kind of tissue, which usually has a low signal-to-noise ratio (SNR). We demonstrate how the filtering parameters can be chosen automatically without additional user input. For systematic comparison of this filter with standard filtering methods from the literature, we generated synthetic signals representing optical recordings from atrial myocardium of a rat heart with varying SNR. Furthermore, all filter methods were applied to experimental data from an ex vivo setup. Our filter outperformed the other filter methods regarding local activation time detection at SNRs smaller than 3 dB, which are typical for these signals. At higher SNRs, the proposed filter performed slightly worse than the methods from the literature. In conclusion, the proposed adaptive spatio-temporal Gaussian filter is an appropriate tool for investigating fluorescence-optical data with low SNR. In contrast to the other investigated filters, the spatio-temporal filter parameters were adapted automatically. Copyright © 2018 Elsevier Ltd. All rights reserved.
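A minimal sketch of the core idea: a temporal (1-D) Gaussian filter whose width is adapted to an SNR estimate, so that noisier signals receive stronger smoothing. The sigma-versus-SNR mapping below is hypothetical, not the authors' published rule:

```python
import math

# Hedged sketch of SNR-adaptive Gaussian smoothing in one (temporal)
# dimension; the sigma(SNR) mapping is an illustrative assumption.
def gaussian_kernel(sigma):
    half = max(1, int(3 * sigma))
    w = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-half, half + 1)]
    s = sum(w)
    return [x / s for x in w]  # normalized weights

def adaptive_smooth(signal, snr_db):
    # Hypothetical rule: lower SNR -> larger sigma -> stronger smoothing.
    sigma = max(0.5, 6.0 / (1.0 + max(snr_db, 0.0)))
    k = gaussian_kernel(sigma)
    half = len(k) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, kj in enumerate(k):
            idx = min(max(i + j - half, 0), len(signal) - 1)  # clamp edges
            acc += kj * signal[idx]
        out.append(acc)
    return out

# Smoothing a unit impulse spreads it over neighboring samples.
smoothed = adaptive_smooth([0, 0, 1, 0, 0], snr_db=3.0)
```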

  18. Automatic joint alignment measurements in pre- and post-operative long leg standing radiographs.

    PubMed

    Goossen, A; Weber, G M; Dries, S P M

    2012-01-01

    For diagnosis or treatment assessment of knee joint osteoarthritis it is required to measure bone morphometry from radiographic images. We propose a method for automatic measurement of joint alignment from pre-operative as well as post-operative radiographs. In a two-step approach, we first detect and segment any implants or other artificial objects within the image. We exploit physical characteristics and avoid prior shape information to cope with the vast number of implant types. Subsequently, we exploit the implant delineations to adapt the initialization and adaptation phases of a dedicated bone segmentation scheme using deformable template models. Implant and bone contours are fused to derive the final joint segmentation and thus the alignment measurements. We evaluated our method on clinical long leg radiographs and compared both the initialization rate, corresponding to the number of images successfully processed by the proposed algorithm, and the accuracy of the alignment measurement. Ground truth was generated by an experienced orthopedic surgeon; for comparison, a second reader reevaluated the measurements. Experiments on two sets of 70 and 120 digital radiographs show that 92% of the joints could be processed automatically and that the measurements derived by the automatic method are comparable to a human reader for pre-operative as well as post-operative images, with a typical error of 0.7° and correlations of r = 0.82 to r = 0.99 with the ground truth. The proposed method allows deriving objective measures of joint alignment from clinical radiographs. Its accuracy and precision are on par with a human reader for all evaluated measurements.

  19. Fully automatic hp-adaptivity for acoustic and electromagnetic scattering in three dimensions

    NASA Astrophysics Data System (ADS)

    Kurtz, Jason Patrick

    We present an algorithm for fully automatic hp-adaptivity for finite element approximations of elliptic and Maxwell boundary value problems in three dimensions. The algorithm automatically generates a sequence of coarse grids, and a corresponding sequence of fine grids, such that the energy norm of the error decreases exponentially with respect to the number of degrees of freedom in either sequence. At each step, we employ a discrete optimization algorithm to determine the refinements for the current coarse grid such that the projection-based interpolation error for the current fine grid solution decreases at an optimal rate with respect to the number of degrees of freedom added by the refinement. The refinements are restricted only by the requirement that the resulting mesh is at most 1-irregular, but they may be anisotropic in both element size h and order of approximation p. While we cannot prove that our method converges, we present numerical evidence of exponential convergence for a diverse suite of model problems from acoustic and electromagnetic scattering. In particular we show that our method is well suited to the automatic resolution of exterior problems truncated by the introduction of a perfectly matched layer. To enable and accelerate the solution of these problems on commodity hardware, we include a detailed account of three critical aspects of our implementation, namely an efficient implementation of sum factorization, several efficient interfaces to the direct multifrontal solver MUMPS, and some fast direct solvers for the computation of a sequence of nested projections.

  20. Design of efficient and simple interface testing equipment for opto-electric tracking system

    NASA Astrophysics Data System (ADS)

    Liu, Qiong; Deng, Chao; Tian, Jing; Mao, Yao

    2016-10-01

    Interface testing for an opto-electric tracking system is important for assuring system performance: it verifies, at different levels, whether each electronic interface matches its communication protocol as designed. Opto-electric tracking systems nowadays are complex, composed of many functional units. Usually, interface testing is executed between fully manufactured units, and therefore depends heavily on the design and manufacturing progress of each unit as well as on the people involved; as a result, it often takes days or weeks, which is inefficient. To solve this problem, this paper proposes efficient and simple interface testing equipment for opto-electric tracking systems, consisting of optional interface circuit cards, a processor and a test program. The hardware cards provide the matched hardware interface(s), easily supplied by a hardware engineer. Automatic code generation is employed to adapt to new communication protocols: automatic acquisition of protocol items, automatic construction of the code architecture and automatic encoding are used to quickly generate a new, adapted test program. After a few simple steps, customized interface testing equipment with a matching test program and interface(s) is ready for a system awaiting test within minutes. The equipment has been used to test all or part of the interfaces of many opto-electric tracking systems, reducing test time from days to hours and greatly improving test efficiency, with high software quality and stability and without manual coding. Used as a common tool, the proposed equipment changes the traditional interface testing method and yields much higher efficiency.

  1. Development of an automatic rotational orthosis for walking with arm swing.

    PubMed

    Fang, Juan; Yang, Guo-Yuan; Xie, Le

    2017-07-01

    Interlimb neural coupling is often observed during normal gait and is postulated to be important for gait restoration. In order to provide a testbed for investigation of interlimb neural coupling, we previously developed a rotational orthosis for walking with arm swing (ROWAS). The present study aimed to develop and evaluate the feasibility of a new system, viz. an automatic ROWAS (aROWAS). We developed the mechanical structures of aROWAS in SolidWorks, and implemented the concept in a prototype. Normal gait data from walking at various speeds were used as reference trajectories of the shoulder, hip, knee and ankle joints. The aROWAS prototype was tested in three able-bodied subjects. The prototype could automatically adjust to size and height, and automatically produced adaptable coordinated performance in the upper and lower limbs, with joint profiles similar to those occurring in normal gait. The subjects reported better acceptance in aROWAS than in ROWAS. The aROWAS system was deemed feasible among able-bodied subjects.

  2. Neural networks: Alternatives to conventional techniques for automatic docking

    NASA Technical Reports Server (NTRS)

    Vinz, Bradley L.

    1994-01-01

    Automatic docking of orbiting spacecraft is a crucial operation involving the identification of vehicle orientation as well as complex approach dynamics. The chaser spacecraft must be able to recognize the target spacecraft within a scene and achieve accurate closing maneuvers. In a video-based system, a target scene must be captured and transformed into a pattern of pixels. Successful recognition lies in the interpretation of this pattern. Due to their powerful pattern recognition capabilities, artificial neural networks offer a potential role in interpretation and automatic docking processes. Neural networks can reduce the computational time required by existing image processing and control software. In addition, neural networks are capable of recognizing and adapting to changes in their dynamic environment, enabling enhanced performance, redundancy, and fault tolerance. Most neural networks are robust to failure, capable of continued operation with a slight degradation in performance after minor failures. This paper discusses the particular automatic docking tasks neural networks can perform as viable alternatives to conventional techniques.

  3. A hybrid 3D region growing and 4D curvature analysis-based automatic abdominal blood vessel segmentation through contrast enhanced CT

    NASA Astrophysics Data System (ADS)

    Maklad, Ahmed S.; Matsuhiro, Mikio; Suzuki, Hidenobu; Kawata, Yoshiki; Niki, Noboru; Shimada, Mitsuo; Iinuma, Gen

    2017-03-01

    In abdominal disease diagnosis and the planning of various abdominal surgeries, segmentation of abdominal blood vessels (ABVs) is an essential task. Automatic segmentation enables fast and accurate processing of ABVs. We propose a fully automatic approach for segmenting ABVs in contrast-enhanced CT images by a hybrid of 3D region growing and 4D curvature analysis. The proposed method comprises three stages. First, candidates for bone, kidneys, ABVs and heart are segmented by an auto-adapted threshold. Second, bone is auto-segmented and classified into spine, ribs and pelvis. Third, ABVs are automatically segmented in two sub-steps: (1) the kidneys and the abdominal part of the heart are segmented; (2) ABVs are segmented by a hybrid approach that integrates 3D region growing and 4D curvature analysis. Results are compared with two conventional methods and show that the proposed method is very promising in segmenting and classifying bone and segmenting whole ABVs, and may have potential utility in clinical use.

  4. Integrating the automatic and the controlled: Strategies in Semantic Priming in an Attractor Network with Latching Dynamics

    PubMed Central

    Lerner, Itamar; Bentin, Shlomo; Shriki, Oren

    2014-01-01

    Semantic priming has long been recognized to reflect, along with automatic semantic mechanisms, the contribution of controlled strategies. However, previous theories of controlled priming were mostly qualitative, lacking common grounds with modern mathematical models of automatic priming based on neural networks. Recently, we have introduced a novel attractor network model of automatic semantic priming with latching dynamics. Here, we extend this work to show how the same model can also account for important findings regarding controlled processes. Assuming the rate of semantic transitions in the network can be adapted using simple reinforcement learning, we show how basic findings attributed to controlled processes in priming can be achieved, including their dependency on stimulus onset asynchrony and relatedness proportion and their unique effect on associative, category-exemplar, mediated and backward prime-target relations. We discuss how our mechanism relates to the classic expectancy theory and how it can be further extended in future developments of the model. PMID:24890261

  5. Automated muscle fiber type population analysis with ImageJ of whole rat muscles using rapid myosin heavy chain immunohistochemistry.

    PubMed

    Bergmeister, Konstantin D; Gröger, Marion; Aman, Martin; Willensdorfer, Anna; Manzano-Szalai, Krisztina; Salminger, Stefan; Aszmann, Oskar C

    2016-08-01

    Skeletal muscle consists of different fiber types which adapt to exercise, aging, disease, or trauma. Here we present a protocol for fast staining, automatic acquisition, and quantification of fiber populations with ImageJ. Biceps and lumbrical muscles were harvested from Sprague-Dawley rats. Quadruple immunohistochemical staining was performed on single sections using antibodies against myosin heavy chains and secondary fluorescent antibodies. Slides were scanned automatically with a slide scanner. Manual and automatic analyses were performed and compared statistically. The protocol provided rapid and reliable staining for automated image acquisition. Comparison of manual and automatic data yielded Pearson correlation coefficients of 0.645-0.841 for biceps and 0.564-0.673 for lumbrical muscles. Relative fiber populations were accurate to within ± 4%. This protocol provides a reliable tool for quantification of muscle fiber populations. Using freely available software, it decreases the time required to analyze whole muscle sections. Muscle Nerve 54: 292-299, 2016. © 2016 Wiley Periodicals, Inc.
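The relative fiber populations reported above are simple proportions over per-fiber type labels; a minimal sketch (the labels below are hypothetical):

```python
from collections import Counter

# Hedged sketch: relative fiber-type populations (in percent) from a
# list of per-fiber type labels. Labels and counts are made up.
def fiber_populations(labels):
    counts = Counter(labels)
    total = len(labels)
    return {t: 100.0 * n / total for t, n in counts.items()}

# Hypothetical muscle section with 100 typed fibers.
pops = fiber_populations(["I"] * 30 + ["IIa"] * 50 + ["IIb"] * 20)
```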

  6. Adaptive Neural Network Algorithm for Power Control in Nuclear Power Plants

    NASA Astrophysics Data System (ADS)

    Masri Husam Fayiz, Al

    2017-01-01

    The aim of this paper is to design, test and evaluate a prototype of an adaptive neural network algorithm for the power control system of a nuclear power plant. Power control is one of the fundamental tasks in this field, and research is therefore constantly conducted to improve the reactor power control process. Currently, in the Department of Automation at the National Research Nuclear University (NRNU) MEPhI, numerous studies are applying various artificial intelligence methodologies (expert systems, neural networks, fuzzy systems and genetic algorithms) to enhance the performance, safety, efficiency and reliability of nuclear power plants. In particular, a study of an adaptive intelligent power regulator in the control systems of nuclear power reactors is being undertaken to enhance performance and to minimize the output error of the Automatic Power Controller (APC), on the basis of a multifunctional computer analyzer (simulator) of the Water-Water Energetic Reactor, known in Russian as Vodo-Vodyanoi Energetichesky Reaktor (VVER). In this paper, a block diagram of an adaptive reactor power controller was built on the basis of an intelligent control algorithm. By applying intelligent neural network principles, it is possible to improve the quality and dynamics of any control system in accordance with the principles of adaptive control: an adaptive control system adjusts the controller's parameters according to changes in the characteristics of the control object or external disturbances. This project demonstrates that a propitious option for an automatic power controller in nuclear power plants is a control system constructed on intelligent neural network algorithms.

  7. Adaptive Semantic and Social Web-based learning and assessment environment for the STEM

    NASA Astrophysics Data System (ADS)

    Babaie, Hassan; Atchison, Chris; Sunderraman, Rajshekhar

    2014-05-01

    We are building a cloud- and Semantic Web-based personalized, adaptive learning environment for the STEM fields that integrates and leverages Social Web technologies to allow instructors and authors of learning material to collaborate in semi-automatic development and update of their common domain and task ontologies and building their learning resources. The semi-automatic ontology learning and development minimize issues related to the design and maintenance of domain ontologies by knowledge engineers who do not have any knowledge of the domain. The social web component of the personal adaptive system will allow individual and group learners to interact with each other and discuss their own learning experience and understanding of course material, and resolve issues related to their class assignments. The adaptive system will be capable of representing key knowledge concepts in different ways and difficulty levels based on learners' differences, and lead to different understanding of the same STEM content by different learners. It will adapt specific pedagogical strategies to individual learners based on their characteristics, cognition, and preferences, allow authors to assemble remotely accessed learning material into courses, and provide facilities for instructors to assess (in real time) the perception of students of course material, monitor their progress in the learning process, and generate timely feedback based on their understanding or misconceptions. The system applies a set of ontologies that structure the learning process, with multiple user friendly Web interfaces. 
These include the learning ontology (models learning objects, educational resources, and learning goal); context ontology (supports adaptive strategy by detecting student situation), domain ontology (structures concepts and context), learner ontology (models student profile, preferences, and behavior), task ontologies, technological ontology (defines devices and places that surround the student), pedagogy ontology, and learner ontology (defines time constraint, comment, profile).

  8. Flexible Early Warning Systems with Workflows and Decision Tables

    NASA Astrophysics Data System (ADS)

    Riedel, F.; Chaves, F.; Zeiner, H.

    2012-04-01

    An essential part of early warning systems and systems for crisis management are decision support systems that facilitate communication and collaboration. Often official policies specify how different organizations collaborate and what information is communicated to whom. For early warning systems it is crucial that information is exchanged dynamically in a timely manner and all participants get exactly the information they need to fulfil their role in the crisis management process. Information technology obviously lends itself to automate parts of the process. We have experienced however that in current operational systems the information logistics processes are hard-coded, even though they are subject to change. In addition, systems are tailored to the policies and requirements of a certain organization and changes can require major software refactoring. We seek to develop a system that can be deployed and adapted to multiple organizations with different dynamic runtime policies. A major requirement for such a system is that changes can be applied locally without affecting larger parts of the system. In addition to the flexibility regarding changes in policies and processes, the system needs to be able to evolve; when new information sources become available, it should be possible to integrate and use these in the decision process. In general, this kind of flexibility comes with a significant increase in complexity. This implies that only IT professionals can maintain a system that can be reconfigured and adapted; end-users are unable to utilise the provided flexibility. In the business world similar problems arise and previous work suggested using business process management systems (BPMS) or workflow management systems (WfMS) to guide and automate early warning processes or crisis management plans. 
However, the usability and flexibility of current WfMS are limited, because current notations and user interfaces are still not suitable for end-users, and workflows are usually only suited for rigid processes. We show how improvements can be achieved by using decision tables and rule-based adaptive workflows. Decision tables have been shown to be an intuitive tool that can be used by domain experts to express rule sets that can be interpreted automatically at runtime. Adaptive workflows use a rule-based approach to increase the flexibility of workflows by providing mechanisms to adapt workflows based on context changes, human intervention and availability of services. The combination of workflows, decision tables and rule-based adaption creates a framework that opens up new possibilities for flexible and adaptable workflows, especially, for use in early warning and crisis management systems.

  9. Interactive vs. automatic ultrasound image segmentation methods for staging hepatic lipidosis.

    PubMed

    Weijers, Gert; Starke, Alexander; Haudum, Alois; Thijssen, Johan M; Rehage, Jürgen; De Korte, Chris L

    2010-07-01

    The aim of this study was to test the hypothesis that automatic segmentation of vessels in ultrasound (US) images can produce similar or better results in grading fatty livers than interactive segmentation. A study was performed in postpartum dairy cows (N=151), as an animal model of human fatty liver disease, to test this hypothesis. Five transcutaneous and five intraoperative US liver images were acquired in each animal and a liver biopsy was taken. In liver tissue samples, triacylglycerol (TAG) was measured by biochemical analysis, and hepatic diseases other than hepatic lipidosis were excluded by histopathologic examination. Ultrasonic tissue characterization (UTC) parameters (mean echo level, standard deviation (SD) of echo level, signal-to-noise ratio (SNR), residual attenuation coefficient (ResAtt), and axial and lateral speckle size) were derived using a computer-aided US (CAUS) protocol and software package. First, the liver tissue was interactively segmented by two observers. With increasing fat content, fewer hepatic vessels were visible in the ultrasound images and, therefore, a smaller proportion of the liver needed to be excluded from these images. Automatic-segmentation algorithms were then implemented to investigate whether better results could be achieved than with the subjective and time-consuming interactive-segmentation procedure. The automatic-segmentation algorithms were based on both fixed and adaptive thresholding techniques in combination with a 'speckle'-shaped moving-window exclusion technique. All data were analyzed with and without the postprocessing contained in CAUS and with the different automated-segmentation techniques. This enabled us to study the effect of the applied postprocessing steps on single and multiple linear regressions of the various UTC parameters with TAG. Improved correlations for all US parameters were found when using automatic-segmentation techniques.
    Stepwise multiple linear-regression formulas were derived and used to predict the TAG level in the liver. Receiver-operating-characteristic (ROC) analysis was applied to assess the performance and area under the curve (AUC) of predicting TAG and to compare the sensitivity and specificity of the methods. The best speckle-size estimates and overall performance (R2 = 0.71, AUC = 0.94) were achieved using an SNR-based adaptive automatic-segmentation method (TAG threshold used: 50 mg/g liver wet weight). Automatic segmentation is thus feasible and profitable.
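A minimal sketch of the adaptive-thresholding idea used for vessel exclusion: pixels whose echo level falls well below the local statistics are treated as vessel lumen and excluded from the liver-parenchyma measurements. The mean-minus-k-standard-deviations rule below is an illustrative stand-in, not the authors' exact SNR-based criterion:

```python
import statistics

# Hedged sketch: exclude low-echo (vessel) pixels from a region of
# interest using an adaptive threshold derived from the data itself.
# The rule (mean - k * stdev) is an illustrative assumption.
def segment_parenchyma(pixels, k=1.0):
    mu = statistics.mean(pixels)
    sd = statistics.pstdev(pixels)
    thresh = mu - k * sd
    return [p for p in pixels if p >= thresh]

# Hypothetical echo levels: bright parenchyma with two dark vessel pixels.
kept = segment_parenchyma([100, 102, 98, 101, 10, 99, 12, 100])
```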

  10. Adaptive Oceanographic Sampling in a Coastal Environment Using Autonomous Gliding Vehicles

    DTIC Science & Technology

    2003-08-01

    cost autonomous vehicles with near-global range and modular sensor payload. Particular emphasis is placed on the development of adaptive sampling...environment. Secondary objectives include continued development of adaptive sampling strategies suitable for large fleets of slow-moving autonomous ... vehicles , and development and implementation of new oceanographic sensors and sampling methodologies. The main task completed was a complete redesign of

  11. Use of Time Information in Models behind Adaptive System for Building Fluency in Mathematics

    ERIC Educational Resources Information Center

    Rihák, Jirí

    2015-01-01

    In this work we introduce the system for adaptive practice of foundations of mathematics. Adaptivity of the system is primarily provided by selection of suitable tasks, which uses information from a domain model and a student model. The domain model does not use prerequisites but works with splitting skills to more concrete sub-skills. The student…

  12. Adaptive 3D single-block grids for the computation of viscous flows around wings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hagmeijer, R.; Kok, J.C.

    1996-12-31

    A robust algorithm for the adaptation of a 3D single-block structured grid suitable for the computation of viscous flows around a wing is presented and demonstrated by application to the ONERA M6 wing. The effects of grid adaptation on the flow solution and on accuracy improvements are analyzed. Reynolds number variations are studied.

  13. Automatic motor task selection via a bandit algorithm for a brain-controlled button

    NASA Astrophysics Data System (ADS)

    Fruitet, Joan; Carpentier, Alexandra; Munos, Rémi; Clerc, Maureen

    2013-02-01

    Objective. Brain-computer interfaces (BCIs) based on sensorimotor rhythms use a variety of motor tasks, such as imagining moving the right or left hand, the feet or the tongue. Finding the tasks that yield best performance, specifically to each user, is a time-consuming preliminary phase to a BCI experiment. This study presents a new adaptive procedure to automatically select (online) the most promising motor task for an asynchronous brain-controlled button. Approach. We develop for this purpose an adaptive algorithm UCB-classif based on the stochastic bandit theory and design an EEG experiment to test our method. We compare (offline) the adaptive algorithm to a naïve selection strategy which uses uniformly distributed samples from each task. We also run the adaptive algorithm online to fully validate the approach. Main results. By not wasting time on inefficient tasks, and focusing on the most promising ones, this algorithm results in a faster task selection and a more efficient use of the BCI training session. More precisely, the offline analysis reveals that the use of this algorithm can reduce the time needed to select the most appropriate task by almost half without loss in precision, or alternatively, allow us to investigate twice the number of tasks within a similar time span. Online tests confirm that the method leads to an optimal task selection. Significance. This study is the first one to optimize the task selection phase by an adaptive procedure. By increasing the number of tasks that can be tested in a given time span, the proposed method could contribute to reducing ‘BCI illiteracy’.
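The UCB-classif idea above can be sketched with a standard UCB1-style selection rule: at each round, pick the motor task with the highest empirical accuracy plus an exploration bonus. This is a toy stand-in with made-up per-task accuracies, not the authors' EEG pipeline:

```python
import math
import random

# Hedged sketch: UCB1-style arm selection over candidate motor tasks.
# The per-task accuracies are hypothetical.
def ucb_select(counts, rewards, t, c=1.0):
    for i in range(len(counts)):
        if counts[i] == 0:
            return i  # try every task once first
    best, best_score = 0, -1.0
    for i in range(len(counts)):
        # empirical mean + exploration bonus
        score = rewards[i] / counts[i] + c * math.sqrt(math.log(t) / counts[i])
        if score > best_score:
            best, best_score = i, score
    return best

random.seed(0)
true_acc = [0.55, 0.60, 0.75]  # hypothetical per-task classification accuracies
counts, rewards = [0, 0, 0], [0.0, 0.0, 0.0]
for t in range(1, 301):
    arm = ucb_select(counts, rewards, t)
    counts[arm] += 1
    rewards[arm] += 1.0 if random.random() < true_acc[arm] else 0.0
```

Over the rounds, the selection concentrates on the most promising task while still sampling the others occasionally, which is the time-saving behavior the study exploits.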

  14. RAD-ADAPT: Software for modelling clonogenic assay data in radiation biology.

    PubMed

    Zhang, Yaping; Hu, Kaiqiang; Beumer, Jan H; Bakkenist, Christopher J; D'Argenio, David Z

    2017-04-01

    We present a comprehensive software program, RAD-ADAPT, for the quantitative analysis of clonogenic assays in radiation biology. Two commonly used models for clonogenic assay analysis, the linear-quadratic model and the single-hit multi-target model, are included in the software. RAD-ADAPT uses a maximum likelihood estimation method to obtain parameter estimates under the assumption that cell colony count data follow a Poisson distribution. The program has an intuitive interface, generates model prediction plots, tabulates model parameter estimates, and allows automatic statistical comparison of parameters between different groups. The RAD-ADAPT interface is written using the statistical software R, and the underlying computations are accomplished by the ADAPT software system for pharmacokinetic/pharmacodynamic systems analysis. The use of RAD-ADAPT is demonstrated using an example that examines the impact of pharmacologic ATM and ATR kinase inhibition on the human lung cancer cell line A549 after ionizing radiation. Copyright © 2017 Elsevier B.V. All rights reserved.
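The linear-quadratic model named above gives the surviving fraction S(D) = exp(-(αD + βD²)); a minimal sketch with illustrative (not fitted) parameter values:

```python
import math

# Linear-quadratic cell-survival model: S(D) = exp(-(alpha*D + beta*D^2)).
# Alpha and beta here are illustrative, not estimates from RAD-ADAPT.
def lq_survival(dose_gy, alpha, beta):
    return math.exp(-(alpha * dose_gy + beta * dose_gy ** 2))

# Surviving fraction at a 2 Gy dose for hypothetical alpha/beta values.
s2 = lq_survival(2.0, alpha=0.3, beta=0.03)
```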

  15. Automatic detection of snow avalanches in continuous seismic data using hidden Markov models

    NASA Astrophysics Data System (ADS)

    Heck, Matthias; Hammer, Conny; van Herwijnen, Alec; Schweizer, Jürg; Fäh, Donat

    2018-01-01

    Snow avalanches generate seismic signals, as do many other mass movements. Detection of avalanches by seismic monitoring is highly relevant for assessing avalanche danger. In contrast to other seismic events, signals generated by avalanches do not have a characteristic first arrival, nor is it possible to detect different wave phases. In addition, the moving-source character of avalanches increases the intricacy of the signals. Although it is possible to visually detect seismic signals produced by avalanches, reliable automatic detection methods for all types of avalanches do not exist yet. We therefore evaluate whether hidden Markov models (HMMs) are suitable for the automatic detection of avalanches in continuous seismic data. We analyzed data recorded during the winter season 2010 by a seismic array deployed in an avalanche starting zone above Davos, Switzerland. We re-evaluated a reference catalogue containing 385 events by grouping the events into seven probability classes. Since most of the data consist of noise, we first applied a simple amplitude threshold to reduce the amount of data. As the first classification results were unsatisfactory, we analyzed the temporal behavior of the seismic signals for the whole data set and found a high variability in the signals. We therefore applied further post-processing steps to reduce the number of false alarms: defining a minimal duration for a detected event, implementing a voting-based approach and analyzing the coherence of the detected events. We obtained the best classification results for events detected by at least five sensors and with a minimal duration of 12 s. These processing steps allowed us to identify two periods of high avalanche activity, suggesting that HMMs are suitable for the automatic detection of avalanches in seismic data.
    However, our results also showed that more sensitive sensors and more appropriate sensor locations are needed to improve the signal-to-noise ratio of the signals and therefore the classification.
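The post-processing rules described above (minimum duration of 12 s, detection by at least five sensors) can be sketched as a simple filter over candidate events; the event tuples below are hypothetical:

```python
# Hedged sketch of the post-processing criteria: keep a detection only
# if it lasts at least 12 s and was flagged by at least five sensors.
# Event tuples are (start_s, end_s, n_sensors), all hypothetical.
def filter_detections(events, min_dur_s=12.0, min_sensors=5):
    return [e for e in events
            if (e[1] - e[0]) >= min_dur_s and e[2] >= min_sensors]

events = [
    (10.0, 30.0, 6),   # long enough, enough sensors -> kept
    (50.0, 55.0, 7),   # too short -> rejected
    (80.0, 100.0, 3),  # too few sensors -> rejected
]
kept = filter_detections(events)
```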

  16. Automating the application of smart materials for protein crystallization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khurshid, Sahir; Govada, Lata; EL-Sharif, Hazim F.

    2015-03-01

    The first semi-liquid, non-protein nucleating agent for automated protein crystallization trials is described. This ‘smart material’ is demonstrated to induce crystal growth and will provide a simple, cost-effective tool for scientists in academia and industry. The fabrication and validation of the first semi-liquid non-protein nucleating agent to be administered automatically to crystallization trials is reported. This research builds upon the prior demonstration that molecularly imprinted polymers (MIPs, known as ‘smart materials’) are suitable for inducing protein crystal growth. Modified MIPs of altered texture suitable for high-throughput trials are demonstrated to improve crystal quality and to increase the probability of success when screening for suitable crystallization conditions. The application of these materials is simple, time-efficient and will provide a potent tool for structural biologists embarking on crystallization trials.

  17. Global image analysis to determine suitability for text-based image personalization

    NASA Astrophysics Data System (ADS)

    Ding, Hengzhou; Bala, Raja; Fan, Zhigang; Bouman, Charles A.; Allebach, Jan P.

    2012-03-01

    Image personalization has recently attracted growing interest. Images with variable elements such as text usually appear much more appealing to their recipients. In this paper, we describe a method to pre-analyze an image and automatically suggest to the user the most suitable regions within it for text-based personalization. The method is based on input gathered from experiments conducted with professional designers. It has been observed that spatially smooth regions and regions with existing text (e.g. signage or banners) are the best candidates for personalization. This gives rise to two sets of corresponding algorithms: one for identifying smooth areas, and one for locating text regions. Furthermore, based on the smooth and text regions found in the image, we derive an overall metric to rate the image in terms of its suitability for personalization (SFP).
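    The smooth-region idea above can be illustrated with a toy stand-in for the suitability metric: flag image blocks whose intensity variance falls below a threshold, then score the image by the fraction of smooth blocks. This is not the authors' algorithm; the block size and threshold are arbitrary choices for illustration.

```python
# Illustrative sketch: rate an image's suitability for text overlay by
# the fraction of low-variance (smooth) blocks. A toy stand-in for the
# SFP metric described in the abstract; parameters are hypothetical.

def block_variance(block):
    n = len(block)
    mean = sum(block) / n
    return sum((v - mean) ** 2 for v in block) / n

def sfp_score(image, block_size=4, var_threshold=25.0):
    """image: 2-D list of grayscale values; returns fraction of smooth blocks."""
    h, w = len(image), len(image[0])
    smooth = total = 0
    for y in range(0, h - block_size + 1, block_size):
        for x in range(0, w - block_size + 1, block_size):
            pixels = [image[y + dy][x + dx]
                      for dy in range(block_size) for dx in range(block_size)]
            total += 1
            if block_variance(pixels) < var_threshold:
                smooth += 1
    return smooth / total if total else 0.0

flat = [[128] * 8 for _ in range(8)]                                  # smooth
noisy = [[(x * 97 + y * 57) % 256 for x in range(8)] for y in range(8)]
print(sfp_score(flat), sfp_score(noisy))  # 1.0 0.0
```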

  18. Adaptive Neural Networks for Automatic Negotiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sakas, D. P.; Vlachos, D. S.; Simos, T. E.

    The use of fuzzy logic and fuzzy neural networks has been found effective for modelling the uncertain relations between the parameters of a negotiation procedure. The problem with these configurations is that they are static: any new knowledge from theory or experiment leads to the construction of entirely new models. To overcome this difficulty, we apply in this work an adaptive neural topology to model the negotiation process. Finally, a simple simulation is carried out to test the new method.

  19. System integration of pattern recognition, adaptive aided, upper limb prostheses

    NASA Technical Reports Server (NTRS)

    Lyman, J.; Freedy, A.; Solomonow, M.

    1975-01-01

    The requirements for successful integration of a computer-aided control system for multi-degree-of-freedom artificial arms are discussed. Specifications are established for a system which shares control between a human amputee and an automatic control subsystem. The approach integrates the following subsystems: (1) myoelectric pattern recognition; (2) adaptive computer aiding; (3) local reflex control; (4) prosthetic sensory feedback; and (5) an externally energized arm with the functions of prehension, wrist rotation, elbow extension and flexion, and humeral rotation.

  20. Thread concept for automatic task parallelization in image analysis

    NASA Astrophysics Data System (ADS)

    Lueckenhaus, Maximilian; Eckstein, Wolfgang

    1998-09-01

    Parallel processing of image analysis tasks is an essential method to speed up image processing and helps to exploit the full capacity of distributed systems. However, writing parallel code is a difficult and time-consuming process and often leads to an architecture-dependent program that has to be re-implemented when the hardware changes. It is therefore highly desirable to perform the parallelization automatically. For this we have developed a special kind of thread concept for image analysis tasks. Threads derived from one subtask may share objects and run in the same context but may follow different threads of execution and work on different data in parallel. In this paper we describe the basics of our thread concept and show how it can be used as the basis of automatic task parallelization to speed up image processing. We further illustrate the design and implementation of an agent-based system that uses image analysis threads for generating and processing parallel programs while taking the available hardware into account. The tests made with our system prototype show that the thread concept combined with the agent paradigm is suitable for speeding up image processing by automatic parallelization of image analysis tasks.
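    The core idea, worker threads sharing one context while analyzing different data in parallel, can be sketched with Python's standard thread pool. This is a minimal illustration, not the authors' system: the tiling scheme and the brightness measure are invented for the example.

```python
# Minimal sketch of data-parallel image analysis: split an image into
# tiles and let threads sharing the same context process them in
# parallel. ThreadPoolExecutor stands in for the paper's thread concept.

from concurrent.futures import ThreadPoolExecutor

def mean_brightness(tile):
    pixels = [p for row in tile for p in row]
    return sum(pixels) / len(pixels)

def analyze_in_parallel(image, n_tiles=4, max_workers=4):
    """Split rows into n_tiles horizontal bands and analyze them in parallel."""
    rows_per_tile = len(image) // n_tiles
    tiles = [image[i * rows_per_tile:(i + 1) * rows_per_tile]
             for i in range(n_tiles)]
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(mean_brightness, tiles))  # order is preserved

image = [[row * 10] * 8 for row in range(8)]  # brightness grows with row index
print(analyze_in_parallel(image))  # [5.0, 25.0, 45.0, 65.0]
```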

  1. Automatic building identification under bomb damage conditions

    NASA Astrophysics Data System (ADS)

    Woodley, Robert; Noll, Warren; Barker, Joseph; Wunsch, Donald C., II

    2009-05-01

    Given the vast amount of image intelligence utilized in support of planning and executing military operations, a passive automated image processing capability for target identification is urgently required. Furthermore, transmitting large image streams from remote locations would quickly consume the available bandwidth (BW), precipitating the need for processing to occur at the sensor location. This paper addresses the problem of automatic target recognition for battle damage assessment (BDA). We utilize an Adaptive Resonance Theory approach to cluster templates of target buildings. The results show that the network successfully distinguishes targets from non-targets in a virtual test bed environment.

  2. Users manual for the Variable dimension Automatic Synthesis Program (VASP)

    NASA Technical Reports Server (NTRS)

    White, J. S.; Lee, H. Q.

    1971-01-01

    A dictionary and some example problems for the Variable dimension Automatic Synthesis Program (VASP) are presented. The dictionary contains a description of each subroutine and instructions on its use. The example problems give the user a better perspective on the use of VASP for solving problems in modern control theory. These example problems include dynamic response, optimal control gain, solution of the sampled-data matrix Riccati equation, matrix decomposition, and the pseudoinverse of a matrix. Listings of all subroutines are also included. The VASP program has been adapted to run in conversational mode on the Ames 360/67 computer.

  3. Ontology-based automatic generation of computerized cognitive exercises.

    PubMed

    Leonardi, Giorgio; Panzarasa, Silvia; Quaglini, Silvana

    2011-01-01

    Computer-based approaches can add great value to traditional paper-based approaches for cognitive rehabilitation. The management of a large number of stimuli and the use of multimedia features make it possible to improve the patient's involvement and to reuse and recombine stimuli to create new exercises, whose difficulty level should be adapted to the patient's performance. This work proposes an ontological organization of the stimuli to support the automatic generation of new exercises, tailored to the patient's preferences and skills, and its integration into a commercial cognitive rehabilitation tool. The possibilities offered by this approach are presented with the help of real examples.

  4. Real time microcontroller implementation of an adaptive myoelectric filter.

    PubMed

    Bagwell, P J; Chappell, P H

    1995-03-01

    This paper describes a real-time digital adaptive filter for processing myoelectric signals. The filter time constant is automatically selected by the adaptation algorithm, giving a significant improvement over linear filters for estimating muscle force and controlling a prosthetic device. Interference from mains sources often causes problems for myoelectric processing, so 50 Hz and all harmonic frequencies are reduced by an averaging filter and a differential process. This makes practical electrode placement and contact less critical and time-consuming. An economical real-time implementation is essential for a prosthetic controller, and this is achieved using an Intel 80C196KC microcontroller.
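    The mains-suppression idea above can be illustrated numerically: a moving average whose window spans exactly one 50 Hz period has nulls at 50 Hz and at every harmonic, so mains pickup averages to zero while slower force-related trends pass. This is a sketch of the principle only, with an assumed 1000 Hz sampling rate and a synthetic signal; it is not the paper's filter.

```python
# Sketch of mains-interference suppression by averaging over one 50 Hz
# period. Sampling rate and signal are illustrative assumptions.

import math

FS = 1000          # sampling rate (Hz), an assumption for this sketch
PERIOD = FS // 50  # samples in one 50 Hz mains cycle (20 here)

def mains_average(signal):
    """Average over one mains period; cancels 50 Hz and its harmonics."""
    return [sum(signal[i:i + PERIOD]) / PERIOD
            for i in range(len(signal) - PERIOD + 1)]

# Constant 1.0 "muscle" level plus strong 50 Hz interference:
t = [n / FS for n in range(200)]
raw = [1.0 + 0.8 * math.sin(2 * math.pi * 50 * tt) for tt in t]
clean = mains_average(raw)
print(round(max(abs(c - 1.0) for c in clean), 6))  # 0.0
```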

  5. MARZ: Manual and automatic redshifting software

    NASA Astrophysics Data System (ADS)

    Hinton, S. R.; Davis, Tamara M.; Lidman, C.; Glazebrook, K.; Lewis, G. F.

    2016-04-01

    The Australian Dark Energy Survey (OzDES) is a 100-night spectroscopic survey underway on the Anglo-Australian Telescope using the fibre-fed 2-degree-field (2dF) spectrograph. We have developed a new redshifting application, MARZ, with greater usability, flexibility, and the capacity to analyse a wider range of object types than the RUNZ software package previously used for redshifting spectra from 2dF. MARZ is an open-source, client-based JavaScript web application which provides an intuitive interface and powerful automatic matching capabilities on spectra generated from the AAOmega spectrograph to produce high-quality spectroscopic redshift measurements. The software can be run interactively or via the command line, and is easily adaptable to other instruments and pipelines if conforming to the current FITS file standard is not possible. Behind the scenes, a modified version of the AUTOZ cross-correlation algorithm is used to match input spectra against a variety of stellar and galaxy templates, and automatic matching performance for OzDES spectra has increased from 54% (RUNZ) to 91% (MARZ). Spectra not matched correctly by the automatic algorithm can easily be redshifted manually by cycling through the automatic results, comparing manually against templates, or marking spectral features.
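    Cross-correlation redshifting of the kind AUTOZ performs can be illustrated in miniature: compare an observed spectrum with a rest-frame template over a grid of trial redshifts and report the trial that maximizes the correlation. Everything below is a toy assumption (a single hypothetical emission line, a linear wavelength grid); real pipelines work in log-wavelength space with continuum subtraction.

```python
# Toy cross-correlation redshift estimator, greatly simplified relative
# to AUTOZ/MARZ. Template, grids, and line position are hypothetical.

def correlate(a, b):
    return sum(x * y for x, y in zip(a, b))

def template_flux(wavelength, z=0.0):
    """Hypothetical template: one emission line at 400 nm rest frame."""
    line = 400.0 * (1 + z)
    return 1.0 if abs(wavelength - line) < 1.0 else 0.0

def best_redshift(observed_wl, observed_flux, z_grid):
    scores = []
    for z in z_grid:
        model = [template_flux(wl, z) for wl in observed_wl]
        scores.append((correlate(observed_flux, model), z))
    return max(scores)[1]  # trial redshift with the highest correlation

wl = [380 + 0.5 * i for i in range(200)]        # 380-479.5 nm grid
obs = [template_flux(w, z=0.1) for w in wl]     # "observed" spectrum at z = 0.1
z_grid = [i / 100 for i in range(21)]           # trial redshifts 0.00-0.20
print(best_redshift(wl, obs, z_grid))  # 0.1
```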

  6. Automatic segmentation of the prostate on CT images using deep learning and multi-atlas fusion

    NASA Astrophysics Data System (ADS)

    Ma, Ling; Guo, Rongrong; Zhang, Guoyi; Tade, Funmilayo; Schuster, David M.; Nieh, Peter; Master, Viraj; Fei, Baowei

    2017-02-01

    Automatic segmentation of the prostate on CT images has many applications in prostate cancer diagnosis and therapy. However, prostate CT image segmentation is challenging because of the low contrast of soft tissue on CT images. In this paper, we propose an automatic segmentation method combining a deep learning method and multi-atlas refinement. First, instead of segmenting the whole image, we extract a region of interest (ROI) to exclude irrelevant regions. Then, we use convolutional neural networks (CNN) to learn deep features for distinguishing prostate pixels from non-prostate pixels and obtain preliminary segmentation results. Unlike handcrafted features, the CNN automatically learns deep features adapted to the data. Finally, we select similar atlases to refine the initial segmentation results. The proposed method has been evaluated on a dataset of 92 prostate CT images. Experimental results show that our method achieved a Dice similarity coefficient of 86.80% as compared to the manual segmentation. The deep learning based method can provide a useful tool for automatic segmentation of the prostate on CT images and thus can have a variety of clinical applications.
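    The evaluation metric quoted above, the Dice similarity coefficient, measures the overlap between an automatic segmentation A and a manual reference B as 2|A∩B| / (|A| + |B|). A minimal sketch on pixel coordinate sets (the example masks are invented):

```python
# Dice similarity coefficient between two segmentations given as sets
# of pixel coordinates. The example masks below are illustrative.

def dice(a, b):
    """a, b: sets of segmented pixel coordinates."""
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

auto = {(x, y) for x in range(10) for y in range(10)}        # 100 pixels
manual = {(x, y) for x in range(1, 10) for y in range(10)}   # 90 pixels
print(round(dice(auto, manual), 4))  # 0.9474
```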

  7. Application of fluidic lens technology to an adaptive holographic optical element see-through autophoropter

    NASA Astrophysics Data System (ADS)

    Chancy, Carl H.

    A device for performing an objective eye exam has been developed to automatically determine ophthalmic prescriptions. The closed-loop fluidic auto-phoropter has been designed, modeled, fabricated and tested for the automatic measurement and correction of a patient's prescriptions. The adaptive phoropter is designed through the combination of a spherical-powered fluidic lens and two cylindrical fluidic lenses that are oriented 45° relative to each other. In addition, the system incorporates Shack-Hartmann wavefront sensing technology to identify the eye's wavefront error and corresponding prescription. Using the wavefront error information, the fluidic auto-phoropter nulls the eye's lower-order wavefront error by applying the appropriate volumes to the fluidic lenses. The combination of the Shack-Hartmann wavefront sensor and the fluidic auto-phoropter allows for the identification and control of spherical refractive error, as well as cylinder error and axis, thus creating a truly automated refractometer and corrective system. The fluidic auto-phoropter is capable of correcting defocus error ranging from -20D to 20D and astigmatism from -10D to 10D. The transmissive see-through design allows for the observation of natural scenes through the system at varying object planes with no additional imaging optics in the patient's line of sight. In this research, two generations of the fluidic auto-phoropter are designed and tested; the first generation uses traditional glass optics for the measurement channel. The second generation of the fluidic auto-phoropter takes advantage of the progress in the development of holographic optical elements (HOEs) to replace all the traditional glass optics. The addition of the HOEs has enabled the development of a more compact, inexpensive and easily reproducible system without compromising its performance. 
    Additionally, the fluidic lenses were tested during a National Aeronautics and Space Administration (NASA) parabolic flight campaign to determine the effect of varying gravitational acceleration on the performance and image quality of the fluidic lenses. Wavefront analysis indicated that flight turbulence and gravitational acceleration ranging from zero g (microgravity) to 2 g (hypergravity) had minimal effect on the performance of the fluidic lenses, apart from small changes in defocus, making them suitable for potential use in a portable space-based fluidic auto-phoropter.

  8. Image quality and radiation reduction of 320-row area detector CT coronary angiography with optimal tube voltage selection and an automatic exposure control system: comparison with body mass index-adapted protocol.

    PubMed

    Lim, Jiyeon; Park, Eun-Ah; Lee, Whal; Shim, Hackjoon; Chung, Jin Wook

    2015-06-01

    To assess the image quality and radiation exposure of 320-row area detector computed tomography (320-ADCT) coronary angiography with optimal tube voltage selection under the guidance of an automatic exposure control system, in comparison with a body mass index (BMI)-adapted protocol. Twenty-two patients (study group) underwent 320-ADCT coronary angiography using an automatic exposure control system with a target standard deviation value of 33 as the image quality index and the lowest possible tube voltage. For comparison, a sex- and BMI-matched group (control group, n = 22) using a BMI-adapted protocol was established. Images of both groups were reconstructed by an iterative reconstruction algorithm. For objective evaluation of the image quality, image noise, vessel density, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) were measured. Two blinded readers then subjectively graded the image quality using a four-point scale (1: nondiagnostic to 4: excellent). Radiation exposure was also measured. Although the study group tended to show higher image noise (14.1 ± 3.6 vs. 9.3 ± 2.2 HU, P = 0.111) and higher vessel density (665.5 ± 161 vs. 498 ± 143 HU, P = 0.430) than the control group, the differences were not significant. There was no significant difference between the two groups in SNR (52.5 ± 19.2 vs. 60.6 ± 21.8, P = 0.729), CNR (57.0 ± 19.8 vs. 67.8 ± 23.3, P = 0.531), or subjective image quality scores (3.47 ± 0.55 vs. 3.59 ± 0.56, P = 0.960). However, radiation exposure was significantly reduced by 42% in the study group (1.9 ± 0.8 vs. 3.6 ± 0.4 mSv, P = 0.003). Optimal tube voltage selection under the guidance of an automatic exposure control system in 320-ADCT coronary angiography allows substantial radiation reduction without significant impairment of image quality, compared with a BMI-based protocol.

  9. Time-Domain Receiver Function Deconvolution using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Moreira, L. P.

    2017-12-01

    Receiver functions (RF) are a well-known method for crust modelling using passive seismological signals. Many different techniques have been developed to calculate the RF traces by applying a deconvolution to the radial and vertical seismogram components. A popular method uses a spectral division of both components, which requires human intervention to apply the water-level procedure to avoid instabilities from division by small numbers. One of the most widely used methods is an iterative procedure that estimates the RF peaks, convolves them with the vertical-component seismogram, and compares the result with the radial component. This method is suitable for automatic processing; however, several RF traces turn out invalid due to peak estimation failure. In this work we propose a deconvolution algorithm using a genetic algorithm (GA) to estimate the RF peaks. This method operates entirely in the time domain, avoiding time-to-frequency calculations (and vice versa), and is fully suitable for automatic processing. Estimated peaks can be used to generate RF traces in a seismogram format for visualization. The RF trace quality is similar for high-magnitude events, but there are fewer failures in the RF calculation for smaller events, increasing the overall performance for stations with a high number of events.
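    The GA-deconvolution idea can be sketched in miniature: let each individual encode a candidate RF peak lag, score it by the misfit between the radial trace and the vertical trace convolved with that peak (with the amplitude fitted by least squares), and evolve the population with tournament selection, mutation, and elitism. This is a toy with one peak and synthetic traces, not the author's implementation; all names and parameters are hypothetical, and the random seed is fixed for reproducibility.

```python
# Toy genetic-algorithm deconvolution in the spirit of the abstract:
# evolve the lag of a single RF peak; the amplitude at each lag is
# obtained analytically by least squares. Entirely synthetic data.

import random

random.seed(1)

def shift_conv(vertical, lag, n):
    """Convolution of the vertical trace with a unit spike at `lag`."""
    out = [0.0] * n
    for i, v in enumerate(vertical):
        if i + lag < n:
            out[i + lag] = v
    return out

def best_amp_and_misfit(vertical, radial, lag):
    n = len(radial)
    unit = shift_conv(vertical, lag, n)
    energy = sum(u * u for u in unit)
    amp = sum(u * r for u, r in zip(unit, radial)) / energy
    misfit = sum((r - amp * u) ** 2 for r, u in zip(radial, unit))
    return amp, misfit

def ga_find_lag(vertical, radial, pop_size=40, generations=80):
    n = len(radial)
    pop = [random.randrange(n) for _ in range(pop_size)]
    def misfit_of(lag):
        return best_amp_and_misfit(vertical, radial, lag)[1]
    for _ in range(generations):
        new_pop = [min(pop, key=misfit_of)]          # elitism: keep the best
        while len(new_pop) < pop_size:
            a, b = random.sample(pop, 2)             # tournament of two
            parent = min(a, b, key=misfit_of)
            child = min(n - 1, max(0, parent + random.choice([-2, -1, 0, 1, 2])))
            new_pop.append(child)                    # mutated offspring
        pop = new_pop
    lag = min(pop, key=misfit_of)
    return lag, best_amp_and_misfit(vertical, radial, lag)[0]

vertical = [1.0, 0.5, 0.2] + [0.0] * 47              # synthetic source wavelet
radial = [0.7 * v for v in shift_conv(vertical, 12, 50)]  # true lag 12, amp 0.7
lag, amp = ga_find_lag(vertical, radial)
print(lag, round(amp, 2))
```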

  10. On the possibility of producing definitive magnetic observatory data within less than one year

    NASA Astrophysics Data System (ADS)

    Mandić, Igor; Korte, Monika

    2017-04-01

    Geomagnetic observatory data are fundamental in geomagnetic field studies and are widely used in other applications. Often they are combined with satellite and ground survey data. Unfortunately, definitive observatory data only become available with a time lag ranging from several months up to more than a year. The reason for this lag is the annual production of the final calibration values, i.e. the baselines that are used to correct preliminary data from continuously recording magnetometers. In this paper we show that the preparation of definitive geomagnetic data is possible within a calendar year and present an original method for prompt and automatic estimation of the observatory baselines. The new baselines, obtained in a mostly automatic manner, are compared with the baselines reported on INTERMAGNET DVDs for the 2009-2011 period. The high quality of the baselines obtained by the proposed method indicates its suitability for data processing in fully automatic observatories once automated absolute instruments are deployed at remote sites.

  11. A Recent Advance in the Automatic Indexing of the Biomedical Literature

    PubMed Central

    Névéol, Aurélie; Shooshan, Sonya E.; Humphrey, Susanne M.; Mork, James G.; Aronson, Alan R.

    2009-01-01

    The volume of biomedical literature has experienced explosive growth in recent years. This is reflected in the corresponding increase in the size of MEDLINE®, the largest bibliographic database of biomedical citations. Indexers at the U.S. National Library of Medicine (NLM) need efficient tools to help them accommodate the ensuing workload. After reviewing issues in the automatic assignment of Medical Subject Headings (MeSH® terms) to biomedical text, we focus more specifically on the new subheading attachment feature for NLM’s Medical Text Indexer (MTI). Natural Language Processing, statistical, and machine learning methods of producing automatic MeSH main heading/subheading pair recommendations were assessed independently and combined. The best combination achieves 48% precision and 30% recall. After validation by NLM indexers, a suitable combination of the methods presented in this paper was integrated into MTI as a subheading attachment feature producing MeSH indexing recommendations compliant with current state-of-the-art indexing practice. PMID:19166973
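    The precision and recall figures quoted above follow directly from raw counts of correct and missed recommendations. A minimal sketch, with hypothetical counts chosen so the rates match the abstract's 48% and 30%:

```python
# Precision and recall from raw counts. The counts are illustrative,
# chosen only so the resulting rates match the abstract's 48% / 30%.

def precision_recall(true_pos, false_pos, false_neg):
    precision = true_pos / (true_pos + false_pos)  # fraction of output correct
    recall = true_pos / (true_pos + false_neg)     # fraction of truth found
    return precision, recall

p, r = precision_recall(true_pos=48, false_pos=52, false_neg=112)
print(round(p, 2), round(r, 2))  # 0.48 0.3
```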

  12. An ultra low-power CMOS automatic action potential detector.

    PubMed

    Gosselin, Benoit; Sawan, Mohamad

    2009-08-01

    We present a low-power complementary metal-oxide-semiconductor (CMOS) analog integrated biopotential detector intended for neural recording in wireless multichannel implants. The proposed detector achieves accurate automatic discrimination of action potentials (APs) from the background activity by means of an energy-based preprocessor and a linear delay element. This strategy improves the integrity of detected waveforms and promises better performance in neural prostheses. The delay element is implemented with a low-power continuous-time filter using a ninth-order equiripple allpass transfer function. All circuit building blocks use subthreshold OTAs employing dedicated circuit techniques to achieve ultra-low power and high dynamic range. The circuit functions in the submicrowatt range: the implemented 0.18-μm CMOS chip dissipates 780 nW and occupies 0.07 mm², making it suitable for massive integration in a multichannel device with modest overhead. The fabricated detector successfully detects APs automatically from the underlying background activity. Testbench validation results obtained with synthetic neural waveforms are presented.

  13. Motorization of a surgical microscope for intra-operative navigation and intuitive control.

    PubMed

    Finke, M; Schweikard, A

    2010-09-01

    During surgical procedures, various medical systems, e.g. a microscope or C-arm, are used. Their precise and repeatable manual positioning can be very cumbersome and interrupts the surgeon's workflow. Robotized systems can assist the surgeon, but they require suitable kinematics and control. However, positioning must be fast, flexible and intuitive. We describe a fully motorized surgical microscope. Hardware components as well as implemented applications are specified. The kinematic equations are described and a novel control concept is proposed. Our microscope combines fast manual handling with accurate, automatic positioning. Intuitive control is provided by a small remote control mounted on one of the surgical instruments. Positioning accuracy and repeatability are < 1 mm, and vibrations caused by automatic movements fade away in about 1 s. The robotic system assists the surgeon, so that he can position the microscope precisely and repeatedly without interrupting the clinical workflow. The combination of manual and automatic control guarantees fast and flexible positioning during surgical procedures. Copyright 2010 John Wiley & Sons, Ltd.

  14. Development and Long-Term Verification of Stereo Vision Sensor System for Controlling Safety at Railroad Crossing

    NASA Astrophysics Data System (ADS)

    Hosotani, Daisuke; Yoda, Ikushi; Hishiyama, Yoshiyuki; Sakaue, Katsuhiko

    Many people are involved in accidents every year at railroad crossings, but there is no suitable sensor for detecting pedestrians. We are therefore developing a ubiquitous stereo-vision-based system for ensuring safety at railroad crossings. In this system, stereo cameras are installed at the corners and pointed toward the center of the railroad crossing to monitor the passage of people. The system determines automatically and in real time whether anyone or anything is inside the railroad crossing, and whether anyone remains in the crossing. The system can be configured to automatically switch over to a surveillance monitor or automatically connect to an emergency brake system in the event of trouble. We have developed an original stereo-vision device and installed a remote-controlled experimental system, applying the human detection algorithm, at a commercial railroad crossing. We then stored and analyzed image data and tracking data over two years toward standardization of the system requirement specification.

  15. Case-based synthesis in automatic advertising creation system

    NASA Astrophysics Data System (ADS)

    Zhuang, Yueting; Pan, Yunhe

    1995-08-01

    Advertising (ad) design is an important design area. Though many interactive ad-design software packages have come into commercial use, none of them supports the intelligent part of the work: automatic ad creation. The potential for this is enormous. This paper describes our current work on an automatic advertising creation system (AACS). After careful analysis of the mental behavior of a human ad designer, we conclude that a case-based approach is appropriate for modelling it. A model for AACS is given in the paper. A case in advertising is described in two parts, the creation process and the configuration of the ad picture, with detailed data structures given in the paper. Along with the case representation, we put forward a synthesis algorithm. Issues such as similarity measure computation and case adaptation are also discussed.

  16. Robust extraction of the aorta and pulmonary artery from 3D MDCT image data

    NASA Astrophysics Data System (ADS)

    Taeprasartsit, Pinyo; Higgins, William E.

    2010-03-01

    Accurate definition of the aorta and pulmonary artery from three-dimensional (3D) multi-detector CT (MDCT) images is important for pulmonary applications. This work presents robust methods for defining the aorta and pulmonary artery in the central chest. The methods work on both contrast-enhanced and non-contrast 3D MDCT image data. The automatic methods use a common approach employing model fitting and selection and adaptive refinement. In the occasional event that more precise vascular extraction is desired or the method fails, an alternate semi-automatic fail-safe method is available. The semi-automatic method extracts the vasculature by extending the medial axes in a user-guided direction. A ground-truth study over a series of 40 human 3D MDCT images demonstrates the efficacy, accuracy, robustness, and efficiency of the methods.

  17. Pilot control through the TAFCOS automatic flight control system

    NASA Technical Reports Server (NTRS)

    Wehrend, W. R., Jr.

    1979-01-01

    The set of flight control logic used in a recently completed flight test program to evaluate the total automatic flight control system (TAFCOS), with the controller operating in a fully automatic mode, was used to perform an unmanned simulation on an IBM 360 computer in which the TAFCOS concept was extended to provide a multilevel pilot interface. A pilot TAFCOS interface for direct pilot control by use of a velocity-control-wheel-steering mode was defined, as well as a means for calling up conventional autopilot modes. It is concluded that the TAFCOS structure is easily adaptable to the addition of pilot control through a stick-wheel-throttle control similar to conventional airplane controls. Conventional autopilot modes, such as airspeed-hold, altitude-hold, heading-hold, and flight-path-angle-hold, can also be included.

  18. Automatic segmentation of the puborectalis muscle in 3D transperineal ultrasound.

    PubMed

    van den Noort, Frieda; Grob, Anique T M; Slump, Cornelis H; van der Vaart, Carl H; van Stralen, Marijn

    2017-10-11

    The introduction of 3D analysis of the puborectalis muscle for diagnostic purposes into daily practice is hindered by the need for appropriate training of the observers. Automatic 3D segmentation of the puborectalis muscle in 3D transperineal ultrasound may aid its adoption in clinical practice. A manual 3D segmentation protocol was developed to segment the puborectalis muscle. Data from 20 women in their first trimester of pregnancy were used to validate the reproducibility of this protocol. For automatic segmentation, active appearance models of the puborectalis muscle were developed. These models were trained using manual segmentation data of 50 women. The performance of both manual and automatic segmentation was analyzed by measuring the overlap and distance between the segmentations. The intraclass correlation coefficients (ICC) and their 95% confidence intervals were also determined for mean echogenicity and volume of the puborectalis muscle. The ICC values for mean echogenicity (0.968-0.991) and volume (0.626-0.910) are good to very good for both automatic and manual segmentation. The overlap and distance results for manual segmentation are as expected, showing only a few pixels (2-3) of mismatch on average and a reasonable overlap. Based on overlap and distance, 5 mismatches in automatic segmentation were detected, resulting in an automatic segmentation success rate of 90%. In conclusion, this study presents reliable manual and automatic 3D segmentation of the puborectalis muscle. This will facilitate future investigation of the puborectalis muscle and allows for reliable measurement of clinically valuable parameters such as mean echogenicity. This article is protected by copyright. All rights reserved.

  19. Microprocessor Control For Liquid-Cooled Garment

    NASA Technical Reports Server (NTRS)

    Weaver, Charles S.

    1990-01-01

    Automatic control system maintains temperature of water-cooled garment within comfort zone while wearer's level of physical activity varies. Uncomfortable overshoots and undershoots of temperature eliminated. Designed for use in space suit, adaptable to other protective garments and to enclosed environments operating according to similar principles.

  20. Personalization of Reading Passages Improves Vocabulary Acquisition

    ERIC Educational Resources Information Center

    Heilman, Michael; Collins-Thompson, Kevyn; Callan, Jamie; Eskenazi, Maxine; Juffs, Alan; Wilson, Lois

    2010-01-01

    The REAP tutoring system provides individualized and adaptive English as a Second Language vocabulary practice. REAP can automatically personalize instruction by providing practice readings about topics that match interests as well as domain-based, cognitive objectives. While most previous research on motivation in intelligent tutoring…

  1. Applying modern psychometric techniques to melodic discrimination testing: Item response theory, computerised adaptive testing, and automatic item generation.

    PubMed

    Harrison, Peter M C; Collins, Tom; Müllensiefen, Daniel

    2017-06-15

    Modern psychometric theory provides many useful tools for ability testing, such as item response theory, computerised adaptive testing, and automatic item generation. However, these techniques have yet to be integrated into mainstream psychological practice. This is unfortunate, because modern psychometric techniques can bring many benefits, including sophisticated reliability measures, improved construct validity, avoidance of exposure effects, and improved efficiency. In the present research we therefore use these techniques to develop a new test of a well-studied psychological capacity: melodic discrimination, the ability to detect differences between melodies. We calibrate and validate this test in a series of studies. Studies 1 and 2 respectively calibrate and validate an initial test version, while Studies 3 and 4 calibrate and validate an updated test version incorporating additional easy items. The results support the new test's viability, with evidence for strong reliability and construct validity. We discuss how these modern psychometric techniques may also be profitably applied to other areas of music psychology and psychological science in general.

  2. Robot friendly probe and socket assembly

    NASA Technical Reports Server (NTRS)

    Nyberg, Karen L. (Inventor)

    1994-01-01

    A probe and socket assembly for serving as a mechanical interface between structures is presented. The assembly comprises a socket having a housing adapted for connection to a first supporting structure, and a probe which is readily connectable to a second structure and is designed to be easily grappled and manipulated by a robotic device for insertion and coupling with the socket. Cooperable automatic locking means are provided on the probe shaft and socket housing for automatically locking the probe in the socket when the probe is inserted a predetermined distance. A second cooperable locking means on the probe shaft and housing is adapted for actuation after the probe has been inserted the predetermined distance. Actuation means mounted on the probe respond to the grip of the probe handle by a gripping device, such as a robot, by conditioning the probe for insertion; they also respond to release of the grip by actuating the second locking means to provide a hard lock of the probe in the socket.


  3. A cost-effective strategy for nonoscillatory convection without clipping

    NASA Technical Reports Server (NTRS)

    Leonard, B. P.; Niknafs, H. S.

    1990-01-01

    Clipping of narrow extrema and distortion of smooth profiles is a well-known problem associated with so-called high-resolution nonoscillatory convection schemes. A strategy is presented for accurately simulating highly convective flows containing discontinuities such as density fronts or shock waves, without distorting smooth profiles or clipping narrow local extrema. The convection algorithm is based on non-artificially-diffusive third-order upwinding in smooth regions, with automatic adaptive stencil expansion to (in principle, arbitrarily) higher-order upwinding locally, in regions of rapidly changing gradients. This is highly cost-effective because the wider stencil is used only where needed: in isolated narrow regions. A recently developed universal limiter assures sharp monotonic resolution of discontinuities without introducing artificial diffusion or numerical compression. An adaptive discriminator is constructed to distinguish between spurious overshoots and physical peaks; this automatically relaxes the limiter near local turning points, thereby avoiding loss of resolution in narrow extrema. Examples are given for one-dimensional pure convection of scalar profiles at constant velocity.
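    The flavour of limited upwind convection can be illustrated with a scheme simpler than the one described above: a second-order upwind-biased face value with a minmod flux limiter for 1-D linear advection. This is a stand-in only; the paper's third-order upwinding, adaptive stencil expansion, and universal limiter with peak discrimination are more elaborate:

```python
import numpy as np

def limited_upwind_step(q, c):
    """One conservative update of 1-D linear advection (u > 0, periodic
    domain, Courant number c) with a minmod flux limiter; TVD for
    0 <= c <= 1, so no new extrema are created."""
    qm = np.roll(q, 1)              # q[i-1]
    qp = np.roll(q, -1)             # q[i+1]
    dq = qp - q                     # downwind difference
    dqm = q - qm                    # upwind difference
    r = dqm / np.where(dq == 0.0, 1.0, dq)      # smoothness ratio
    phi = np.maximum(0.0, np.minimum(1.0, r))   # minmod limiter
    face = q + 0.5 * (1.0 - c) * phi * dq       # limited face value q_{i+1/2}
    return q - c * (face - np.roll(face, 1))    # flux-form update
```

    Advecting a square wave with this step conserves the total integral exactly and keeps the solution within its initial bounds, while staying sharper than first-order upwinding.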

  4. Image acquisition device of inspection robot based on adaptive rotation regulation of polarizer

    NASA Astrophysics Data System (ADS)

    Dong, Maoqi; Wang, Xingguang; Liang, Tao; Yang, Guoqing; Zhang, Chuangyou; Gao, Faqin

    2017-12-01

    An image acquisition device for an inspection robot with adaptive polarization adjustment is proposed. The device comprises the inspection robot body, the image acquisition mechanism, the polarizer, and the automatic polarizer actuating device. The image acquisition mechanism is mounted at the front of the inspection robot body to collect image data of equipment in the substation. The polarizer is fixed on its automatic actuating device and installed in front of the image acquisition mechanism; the optical axis of the camera passes perpendicularly through the polarizer, and the polarizer rotates about the optical axis of the visible-light camera. The simulation results show that the system resolves the image blurring caused by glare, reflections, and shadows, so that the robot can observe details of the operating status of electrical equipment. Full coverage of the inspection robot's observation targets in the substation is achieved, which helps ensure the safe operation of the substation equipment.

  5. A 1.4 ppm/°C bandgap voltage reference with automatic curvature-compensation technique

    NASA Astrophysics Data System (ADS)

    Zhou, Zekun; Yu, Hongming; Shi, Yue; Zhang, Bo

    2017-12-01

    A high-precision bandgap voltage reference (BGR) with a novel curvature-compensation scheme is proposed in this paper. The temperature coefficient (TC) can be automatically optimized with a built-in adaptive curvature-compensation technique realized through digital control. First, an exponential curvature-compensation method is adopted to reduce the TC to a certain degree, especially in the low temperature range. Then, the temperature drift of the BGR in the higher temperature range is further minimized by dynamically tracking the zero-temperature-coefficient point as temperature changes. With the help of the proposed adaptive signal processing, the output voltage of the BGR maintains an approximately zero TC over a wider temperature range. Experimental results for the proposed BGR, implemented in a 0.35-μm BCD process, show that a TC of 1.4 ppm/°C is achieved at a supply voltage of 3.6 V, and that the power supply rejection of the proposed circuit is -67 dB.

  6. Inference of segmented color and texture description by tensor voting.

    PubMed

    Jia, Jiaya; Tang, Chi-Keung

    2004-06-01

    A robust synthesis method is proposed to automatically infer missing color and texture information from a damaged 2D image by (N)D tensor voting (N > 3). The same approach is generalized to range and 3D data in the presence of occlusion, missing data and noise. Our method translates texture information into an adaptive (N)D tensor, followed by a voting process that infers noniteratively the optimal color values in the (N)D texture space. A two-step method is proposed. First, we perform segmentation based on insufficient geometry, color, and texture information in the input, and extrapolate partitioning boundaries by either 2D or 3D tensor voting to generate a complete segmentation for the input. Missing colors are synthesized using (N)D tensor voting in each segment. Different feature scales in the input are automatically adapted by our tensor scale analysis. Results on a variety of difficult inputs demonstrate the effectiveness of our tensor voting approach.
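    The core accumulation step of second-order stick voting can be sketched in 2D: each token casts the outer product of its normal, decayed by a Gaussian of distance, and eigen-analysis of the accumulated tensor yields a curve saliency. This drastically simplified sketch omits the curved vote field and the (N)D color/texture dimensions used in the paper:

```python
import numpy as np

def stick_votes(points, normals, sigma=1.0):
    """Accumulate simplified 2D stick votes at every token: the outer
    product of each caster's normal, weighted by a Gaussian decay of
    distance. (Full tensor voting also bends votes along circular arcs.)"""
    n_pts = len(points)
    tensors = np.zeros((n_pts, 2, 2))
    for i in range(n_pts):
        for j in range(n_pts):
            d2 = np.sum((points[i] - points[j]) ** 2)
            w = np.exp(-d2 / sigma**2)
            tensors[i] += w * np.outer(normals[j], normals[j])
    return tensors

def saliency(tensors):
    """Stick saliency lambda1 - lambda2 per token; large values mean the
    token lies on a salient curve supported by its neighbours."""
    eig = np.linalg.eigvalsh(tensors)   # eigenvalues in ascending order
    return eig[:, 1] - eig[:, 0]
```

    Tokens on a line with consistent normals reinforce each other and score higher than an isolated outlier, which only receives its own vote.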

  7. Selected Topics from LVCSR Research for Asian Languages at Tokyo Tech

    NASA Astrophysics Data System (ADS)

    Furui, Sadaoki

    This paper presents our recent work on building Large Vocabulary Continuous Speech Recognition (LVCSR) systems for the Thai, Indonesian, and Chinese languages. For Thai, since there is no word boundary in the written form, we have proposed a new method for automatically creating word-like units from a text corpus, and applied topic and speaking-style adaptation to the language model to recognize spoken-style utterances. For Indonesian, we have applied proper-noun-specific adaptation to acoustic modeling, and rule-based English-to-Indonesian phoneme mapping to solve the problem of large variation in proper noun and English word pronunciation in a spoken-query information retrieval system. In spoken Chinese, long organization names are frequently abbreviated, and abbreviated utterances cannot be recognized if the abbreviations are not included in the dictionary. We have proposed a new method for automatically generating Chinese abbreviations, and by expanding the vocabulary using the generated abbreviations, we have significantly improved the performance of spoken-query-based search.

  8. A Comparative Study of Acousto-Optic Time-Integrating Correlators for Adaptive Jamming Cancellation

    DTIC Science & Technology

    1997-10-01

    This final report presents a comparative study of the space-integrating and time-integrating configurations of an acousto-optic correlator...systematically evaluate all existing acousto-optic correlator architectures and to determine which would be most suitable for adaptive jamming

  9. Underwater Acoustic Propagation and Communications: A Coupled Research Program

    DTIC Science & Technology

    2015-06-15

    coding technique suitable for both SIMO and MIMO systems. 4. an adaptive OFDM modulation technique, whereby the transmitter acts in response to...timate based adaptation for SIMO and MIMO systems in an interactive turbo-equalization framework were developed and analyzed. MIMO and SISO

  10. Self-adaptive multi-objective harmony search for optimal design of water distribution networks

    NASA Astrophysics Data System (ADS)

    Choi, Young Hwan; Lee, Ho Min; Yoo, Do Guen; Kim, Joong Hoon

    2017-11-01

    In multi-objective optimization computing, it is important to assign suitable parameters to each optimization problem to obtain better solutions. In this study, a self-adaptive multi-objective harmony search (SaMOHS) algorithm is developed to apply the parameter-setting-free technique, which is an example of a self-adaptive methodology. The SaMOHS algorithm attempts to remove some of the inconvenience from parameter setting and selects the most adaptive parameters during the iterative solution search process. To verify the proposed algorithm, an optimal least cost water distribution network design problem is applied to three different target networks. The results are compared with other well-known algorithms such as multi-objective harmony search and the non-dominated sorting genetic algorithm-II. The efficiency of the proposed algorithm is quantified by suitable performance indices. The results indicate that SaMOHS can be efficiently applied to the search for Pareto-optimal solutions in a multi-objective solution space.
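    The underlying harmony-search loop can be sketched for a single objective. The parameter schedule below is a simple illustrative choice; SaMOHS instead derives its parameters with the parameter-setting-free technique and handles multiple objectives via Pareto ranking:

```python
import numpy as np

def harmony_search(f, bounds, hm_size=20, iters=2000, seed=0):
    """Minimal single-objective harmony search minimizing f over box
    bounds (lo, hi). Pitch-adjustment rate and bandwidth follow a simple
    schedule, standing in for SaMOHS's self-adaptive parameter selection."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = len(lo)
    hm = rng.uniform(lo, hi, size=(hm_size, dim))   # harmony memory
    cost = np.array([f(x) for x in hm])
    for t in range(iters):
        hmcr = 0.9                                   # memory-consideration rate
        par = 0.1 + 0.4 * t / iters                  # adapted pitch-adjust rate
        bw = 0.05 * (hi - lo) * np.exp(-3.0 * t / iters)  # shrinking bandwidth
        new = np.empty(dim)
        for d in range(dim):
            if rng.random() < hmcr:
                new[d] = hm[rng.integers(hm_size), d]    # pick from memory
                if rng.random() < par:
                    new[d] += rng.uniform(-1, 1) * bw[d]  # pitch adjustment
            else:
                new[d] = rng.uniform(lo[d], hi[d])       # random consideration
        new = np.clip(new, lo, hi)
        c = f(new)
        worst = int(np.argmax(cost))
        if c < cost[worst]:                          # replace worst harmony
            hm[worst], cost[worst] = new, c
    return hm[np.argmin(cost)], float(cost.min())
```

    On a smooth test function such as the 2-D sphere, the loop reliably drives the best harmony toward the optimum.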

  11. ConvAn: a convergence analyzing tool for optimization of biochemical networks.

    PubMed

    Kostromins, Andrejs; Mozga, Ivars; Stalidzans, Egils

    2012-01-01

    Dynamic models of biochemical networks are usually described as systems of nonlinear differential equations. When models are optimized for parameter estimation or for the design of new properties, mainly numerical methods are used. This causes problems of optimization predictability, as most numerical optimization methods have stochastic properties and the convergence of the objective function to the global optimum is hardly predictable. Determining a suitable optimization method and the necessary duration of optimization becomes critical when a large number of combinations of adjustable parameters must be evaluated or when the dynamic models are large. This task is complex owing to the variety of optimization methods and software tools and to the nonlinearity of models in different parameter spaces. The software tool ConvAn analyzes the statistical properties of convergence dynamics for optimization runs with a particular optimization method, model, software tool, set of optimization-method parameters, and number of adjustable parameters of the model. Convergence curves can be normalized automatically to enable comparison of different methods and models on the same scale. With the biochemistry-adapted graphical user interface of ConvAn, it is possible to compare optimization methods in terms of their ability to find the global optimum, or values close to it, and the computational time needed to reach them, and to estimate optimization performance for different numbers of adjustable parameters. The functionality of ConvAn enables statistical assessment of the necessary optimization time as a function of the required optimization accuracy. Optimization methods that are unsuitable for a particular optimization task can be rejected if they show poor repeatability or convergence properties. The software ConvAn is freely available at www.biosystems.lv/convan. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  12. Automatic identification of epileptic seizures from EEG signals using linear programming boosting.

    PubMed

    Hassan, Ahnaf Rashik; Subasi, Abdulhamit

    2016-11-01

    Computerized epileptic seizure detection is essential for expediting epilepsy diagnosis and research and for assisting medical professionals. Moreover, the implementation of a low-power, portable epilepsy monitoring device requires a reliable and successful seizure detection scheme. In this work, the problem of automated epilepsy seizure detection using single-channel EEG signals has been addressed. At first, segments of EEG signals are decomposed using a newly proposed signal processing scheme, namely complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN). Six spectral moments are extracted from the CEEMDAN mode functions, and training and testing matrices are formed afterward. These matrices are fed into the classifier to identify epileptic seizures from EEG signal segments. In this work, we implement an ensemble-learning-based machine learning algorithm, namely linear programming boosting (LPBoost), to perform classification. The efficacy of spectral features in the CEEMDAN domain is validated by graphical and statistical analyses. The performance of CEEMDAN is compared to those of its predecessors to further inspect its suitability. The effectiveness and appropriateness of LPBoost are demonstrated in comparison with commonly used classification models. Resubstitution and 10-fold cross-validation error analyses confirm the superior performance of the proposed scheme. The algorithmic performance of our epilepsy seizure identification scheme is also evaluated against state-of-the-art works in the literature. Experimental outcomes demonstrate that the proposed seizure detection scheme performs better than existing works in terms of accuracy, sensitivity, specificity, and Cohen's kappa coefficient. It can be anticipated that, owing to its use of only one channel of EEG signal, the proposed method will be suitable for device implementation, relieve clinicians of the onus of analyzing large volumes of data manually, and expedite epilepsy diagnosis. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
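    The feature-extraction step (spectral moments of each decomposed mode) can be sketched as follows. CEEMDAN itself would supply the mode functions, for example via the PyEMD package; here any 1-D array stands in for a mode, and the moment definition used is one common convention, not necessarily the paper's exact formulation:

```python
import numpy as np

def spectral_moments(x, fs, n_moments=6):
    """First n_moments spectral moments of a 1-D signal: moments of the
    normalized power spectrum treated as a distribution over frequency."""
    psd = np.abs(np.fft.rfft(x)) ** 2            # one-sided power spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)  # frequency axis in Hz
    psd = psd / psd.sum()                        # normalize to sum to 1
    return np.array([np.sum(freqs**k * psd) for k in range(1, n_moments + 1)])
```

    For a pure sinusoid the first moment recovers the tone frequency, which makes the feature's behaviour easy to check.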

  13. Dynamics modeling and adaptive control of flexible manipulators

    NASA Technical Reports Server (NTRS)

    Sasiadek, J. Z.

    1991-01-01

    An application of Model Reference Adaptive Control (MRAC) to the position and force control of flexible manipulators and robots is presented. A single-link flexible manipulator is analyzed. The problem was to develop an accurate mathematical model of a flexible robot. The objective is to show that the adaptive control works better than 'conventional' systems and is suitable for flexible structure control.
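    The MRAC idea can be illustrated with the classic MIT-rule gain-adaptation example for a first-order plant; this is a textbook sketch with illustrative constants, far simpler than a flexible-link manipulator model:

```python
import numpy as np

def mrac_gain_adaptation(k_plant=2.0, k_model=1.0, gamma=1.0,
                         dt=0.01, t_end=50.0):
    """MIT-rule MRAC for the plant y' = -y + k_plant*u with reference
    model ym' = -ym + k_model*r and control u = theta*r. The adjustable
    gain theta should converge to k_model / k_plant."""
    y = ym = 0.0
    theta = 0.0
    for step in range(int(t_end / dt)):
        t = step * dt
        r = np.sin(t)                  # persistently exciting reference
        u = theta * r                  # adjustable feedforward controller
        e = y - ym                     # model-following error
        theta -= gamma * e * ym * dt   # MIT adaptation rule
        y += (-y + k_plant * u) * dt   # forward-Euler plant update
        ym += (-ym + k_model * r) * dt # forward-Euler model update
    return theta
```

    With the defaults, theta settles near k_model / k_plant = 0.5, after which the plant output follows the reference model.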

  14. Monitoring groundwater and river interaction along the Hanford reach of the Columbia River

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campbell, M.D.

    1994-04-01

    As an adjunct to efficient Hanford Site characterization and remediation of groundwater contamination, an automatic monitor network has been used to measure Columbia River and adjacent groundwater levels in several areas of the Hanford Site since 1991. Water levels, temperatures, and electrical conductivity measured by the automatic monitor network provided an initial database with which to calibrate models and from which to infer groundwater and river water interactions for site characterization and remediation activities. Measurements of the dynamic river/aquifer system were made simultaneously at 1-hr intervals, with a quality suitable for hydrologic modeling and for computer model calibration and testing. This report describes the equipment, procedures, and results from measurements made in 1993.

  15. Study of smartphone suitability for mapping of skin chromophores

    NASA Astrophysics Data System (ADS)

    Kuzmina, Ilona; Lacis, Matiss; Spigulis, Janis; Berzina, Anna; Valeine, Lauma

    2015-09-01

    RGB (red-green-blue) technique for mapping skin chromophores by smartphones is proposed and studied. Three smartphones of different manufacturers were tested on skin phantoms and in vivo on benign skin lesions using a specially designed light source for illumination. Hemoglobin and melanin indices obtained by these smartphones showed differences in both tests. In vitro tests showed an increment of hemoglobin and melanin indices with the concentration of chromophores in phantoms. In vivo tests indicated higher hemoglobin index in hemangiomas than in nevi and healthy skin, and nevi showed higher melanin index compared to the healthy skin. Smartphones that allow switching off the automatic camera settings provided useful data, while those with "embedded" automatic settings appear to be useless for distant skin chromophore mapping.

  16. Study of smartphone suitability for mapping of skin chromophores.

    PubMed

    Kuzmina, Ilona; Lacis, Matiss; Spigulis, Janis; Berzina, Anna; Valeine, Lauma

    2015-09-01

    RGB (red-green-blue) technique for mapping skin chromophores by smartphones is proposed and studied. Three smartphones of different manufacturers were tested on skin phantoms and in vivo on benign skin lesions using a specially designed light source for illumination. Hemoglobin and melanin indices obtained by these smartphones showed differences in both tests. In vitro tests showed an increment of hemoglobin and melanin indices with the concentration of chromophores in phantoms. In vivo tests indicated higher hemoglobin index in hemangiomas than in nevi and healthy skin, and nevi showed higher melanin index compared to the healthy skin. Smartphones that allow switching off the automatic camera settings provided useful data, while those with “embedded” automatic settings appear to be useless for distant skin chromophore mapping.

  17. Automatic detection of white-light flare kernels in SDO/HMI intensitygrams

    NASA Astrophysics Data System (ADS)

    Mravcová, Lucia; Švanda, Michal

    2017-11-01

    Solar flares with broadband emission in the white-light range of the electromagnetic spectrum are among the most enigmatic phenomena on the Sun. The origin of the white-light emission is not entirely understood. We aim to systematically study the visible-light emission connected to solar flares in SDO/HMI observations. We developed a code for the automatic detection of flare kernels with HMI intensity brightenings and study the properties of the detected candidates. The code was tuned and tested and, with little effort, could be applied to any suitable data set. By studying a few flare examples, we found indications that the HMI intensity brightening might be an artefact of the simplified procedure used to compute the HMI observables.

  18. Automatic Flushing Unit With Cleanliness Monitor

    NASA Technical Reports Server (NTRS)

    Hildebrandt, N. E.

    1982-01-01

    Liquid-level probe kept clean, and therefore at peak accuracy, by unit that flushes probe with solvent, monitors effluent for contamination, and determines when probe is particle-free. Approach may be adaptable to industrial cleaning tasks such as flushing filters and pipes and ensuring that manufactured parts have been adequately cleaned.

  19. Accelerometer-controlled automatic braking system

    NASA Technical Reports Server (NTRS)

    Dreher, R. C.; Sleeper, R. K.; Nayadley, J. R., Sr.

    1973-01-01

    Braking system, which employs angular accelerometer to control wheel braking and results in low level of tire slip, has been developed and tested. Tests indicate that system is feasible for operations on surfaces of different slipperinesses. System restricts tire slip and is capable of adapting to rapidly-changing surface conditions.

  20. Organizational Adaptative Behavior: The Complex Perspective of Individuals-Tasks Interaction

    NASA Astrophysics Data System (ADS)

    Wu, Jiang; Sun, Duoyong; Hu, Bin; Zhang, Yu

    Organizations with different organizational structures behave differently when responding to environmental changes. In this paper, we use a computational model to examine organizational adaptation along four dimensions: Agility, Robustness, Resilience, and Survivability. We analyze the dynamics of organizational adaptation in a simulation study from the complex perspective of the interaction between tasks and individuals in a sales enterprise. The simulation studies in different scenarios show that more flexible communication between employees and fewer hierarchy levels with suitable centralization can improve organizational adaptation.

  1. Multi-limit unsymmetrical MLIBD image restoration algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Cheng, Yiping; Chen, Zai-wang; Bo, Chen

    2012-11-01

    A novel multi-limit unsymmetrical iterative blind deconvolution (MLIBD) algorithm is presented to enhance the performance of adaptive optics image restoration. The algorithm improves the reliability of iterative blind deconvolution by introducing a bandwidth limit into the frequency domain of the point spread function (PSF), and adopts dynamic estimation of the PSF support region to improve convergence speed. The unsymmetrical factor is computed automatically to increase adaptivity. Image deconvolution experiments comparing Richardson-Lucy IBD and MLIBD were performed; the results indicate that with the MLIBD method the iteration count is reduced by 22.4% and the peak signal-to-noise ratio is improved by 10.18 dB. The MLIBD algorithm performs notably well in restoring the FK5-857 adaptive optics images and double-star adaptive optics images.
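    The Richardson-Lucy update at the core of IBD-type schemes can be sketched for a known PSF. MLIBD additionally alternates an analogous PSF update, applies the frequency-domain bandwidth limit, and estimates the PSF support dynamically; none of that is shown in this minimal sketch:

```python
import numpy as np

def rl_deconvolve(blurred, psf, iters=30):
    """Richardson-Lucy deconvolution with a known, normalized PSF, using
    FFT-based circular convolution. Multiplicative updates keep the
    estimate non-negative and conserve total flux."""
    psf = psf / psf.sum()
    otf = np.fft.fft2(np.fft.ifftshift(psf))     # optical transfer function
    est = np.full_like(blurred, blurred.mean())  # flat positive initial guess
    for _ in range(iters):
        conv = np.real(np.fft.ifft2(np.fft.fft2(est) * otf))
        ratio = blurred / np.maximum(conv, 1e-12)
        corr = np.real(np.fft.ifft2(np.fft.fft2(ratio) * np.conj(otf)))
        est = est * corr
    return est
```

    Deconvolving a Gaussian-blurred point source sharpens the peak while keeping the total flux unchanged.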

  2. Development of cost-effective Hordeum chilense DNA markers: molecular aids for marker-assisted cereal breeding.

    PubMed

    Hernández, P; Dorado, G; Ramírez, M C; Laurie, D A; Snape, J W; Martín, A

    2003-01-01

    Hordeum chilense is a potential source of useful genes for wheat breeding. The use of this wild species to increase genetic variation in wheat will be greatly facilitated by marker-assisted introgression. In recent years, the search for the most suitable DNA marker system for tagging H. chilense genomic regions in a wheat background has led to the development of RAPD and SCAR markers for this species. RAPDs represent an easy way of quickly generating suitable introgression markers, but their use is limited in heterogeneous wheat genetic backgrounds. SCARs are more specific assays, suitable for automation or multiplexing. Direct sequencing of RAPD products is a cost-effective approach that reduces the labour and cost of SCAR development. SSR and STS primers originally developed for wheat and barley are additional sources of genetic markers. Practical applications of the different marker approaches for obtaining derived introgression products are described.

  3. Social Adaptation of New Immigrant Students: Cultural Scripts, Roles, and Symbolic Interactionism

    ERIC Educational Resources Information Center

    Ukasoanya, Grace

    2014-01-01

    It is important that counselors understand the socio-cultural dimensions of social adaptation among immigrant students. While many psychological theories could provide suitable frameworks for examining these, in this article, I argue that symbolic interactionism could provide an additional valuable framework for (a) exploring the intersections of…

  4. A comparison of hardware description languages. [describing digital systems structure and behavior to a computer

    NASA Technical Reports Server (NTRS)

    Shiva, S. G.

    1978-01-01

    Several high-level languages that have evolved over the past few years for describing and simulating the structure and behavior of digital systems on digital computers are assessed. The characteristics of the four prominent languages (CDL, DDL, AHPL, ISP) are summarized. A criterion for selecting a suitable hardware description language for use in an automatic integrated circuit design environment is provided.

  5. 3D automatic Cartesian grid generation for Euler flows

    NASA Technical Reports Server (NTRS)

    Melton, John E.; Enomoto, Francis Y.; Berger, Marsha J.

    1993-01-01

    We describe a Cartesian grid strategy for the study of three dimensional inviscid flows about arbitrary geometries that uses both conventional and CAD/CAM surface geometry databases. Initial applications of the technique are presented. The elimination of the body-fitted constraint allows the grid generation process to be automated, significantly reducing the time and effort required to develop suitable computational grids for inviscid flowfield simulations.

  6. Holding Cargo in Place With Foam

    NASA Technical Reports Server (NTRS)

    Fisher, T. T.

    1985-01-01

    Foam fills entire container to protect cargo from shock and vibration. Originally developed for stowing space debris and spent satellites in Space Shuttle for return to Earth, encapsulation concept suitable for preparing shipments carried by truck, boat, or airplane. Equipment automatically injects polyurethane foam into its interior to hold cargo securely in place. Container of rectangular or other cross section built to match shape of vehicle used.

  7. Automatic load sharing in inverter modules

    NASA Technical Reports Server (NTRS)

    Nagano, S.

    1979-01-01

    Active feedback loads transistors equally with little power loss. Circuit is suitable for balancing modular inverters in spacecraft, computer power supplies, solar-electric power generators, and electric vehicles. Current-balancing circuit senses differences between the collector current of each power transistor and the average value of the load currents for all power transistors. Principle is effective not only in fixed duty-cycle inverters but also in converters operating at variable duty cycles.

  8. Submerged arc welding of heavy plate

    NASA Technical Reports Server (NTRS)

    Wilson, R. A.

    1972-01-01

    The submerged arc process is particularly suitable for heavy plate welding because of its ability to combine very high deposition rates with excellent quality. It does these things without the smoke and spatter that often accompany other processes. It is available today in several forms aimed at fabricators of heavy sections with long, short, or round-about welds. Tandem-arc fully automatic equipment is particularly suitable for long heavy welds where speed and deposition rate are of the first order. An attachment called long stick-out, which makes use of the IR drop across long electrode extensions, can be included on this equipment to increase deposition rates by 50% or more.

  9. A dual growing method for the automatic extraction of individual trees from mobile laser scanning data

    NASA Astrophysics Data System (ADS)

    Li, Lin; Li, Dalin; Zhu, Haihong; Li, You

    2016-10-01

    Street trees interlaced with other objects in cluttered point clouds of urban scenes inhibit the automatic extraction of individual trees. This paper proposes a method for the automatic extraction of individual trees from mobile laser scanning data, according to the general constitution of trees. Two components of each individual tree, a trunk and a crown, can be extracted by the dual growing method. This method consists of coarse classification, through which most of the artifacts are removed; automatic selection of appropriate seeds for individual trees, by which the common manual initial setting is avoided; a dual growing process that separates one tree from others by circumscribing the trunk within an adaptive growing radius and segmenting the crown within constrained growing regions; and a refining process that disentangles a single trunk from the other objects interlaced with it. The method is verified on two datasets with over 98% completeness and over 96% correctness. The low mean absolute percentage errors in capturing the morphological parameters of individual trees indicate that this method can output individual trees with high precision.
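    The growing step at the heart of such methods can be sketched as fixed-radius region growing from a seed point: repeatedly absorb points within a radius of any already-absorbed point. This single fixed radius is a stand-in for the paper's adaptive trunk radius and constrained crown regions:

```python
import numpy as np
from collections import deque

def grow_region(points, seed_idx, radius):
    """Breadth-first region growing over an (n, 3) point array: starting
    from seed_idx, absorb every point within `radius` of any point
    already in the region. Returns the indices of the grown region."""
    n = len(points)
    in_region = np.zeros(n, dtype=bool)
    in_region[seed_idx] = True
    queue = deque([seed_idx])
    while queue:
        i = queue.popleft()
        d = np.linalg.norm(points - points[i], axis=1)  # brute-force search
        for j in np.flatnonzero((d <= radius) & ~in_region):
            in_region[j] = True
            queue.append(j)
    return np.flatnonzero(in_region)
```

    Growing from a seed in one cluster stops at gaps wider than the radius, so well-separated trees end up in separate regions; a k-d tree would replace the brute-force neighbour search at scale.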

  10. A new methodology for automatic detection of reference points in 3D cephalometry: A pilot study.

    PubMed

    Ed-Dhahraouy, Mohammed; Riri, Hicham; Ezzahmouly, Manal; Bourzgui, Farid; El Moutaoukkil, Abdelmajid

    2018-04-05

    The aim of this study was to develop a new method for the automatic detection of reference points in 3D cephalometry to overcome the limits of 2D cephalometric analyses. A specific application was designed using the C++ language for automatic and manual identification of 21 (reference) points on the craniofacial structures. Our algorithm is based on the implementation of an anatomical and geometrical network adapted to the craniofacial structure. This network was constructed based on anatomical knowledge of the 3D cephalometric (reference) points. The proposed algorithm was tested on five CBCT images. The proposed approach for automatic 3D cephalometric identification was able to detect the 21 points with a mean error of 2.32 mm. In this pilot study, we propose an automated methodology for the identification of the 3D cephalometric (reference) points. A larger sample will be used in the future to assess the method's validity and reliability. Copyright © 2018 CEO. Published by Elsevier Masson SAS. All rights reserved.

  11. Intelligent virtual teacher

    NASA Astrophysics Data System (ADS)

    Takács, Ondřej; Kostolányová, Kateřina

    2016-06-01

    This paper describes the Virtual Teacher, which uses a set of rules to automatically adapt the way of teaching. Each rule consists of two parts: a condition on various student properties or the learning situation, and a conclusion that specifies particular adaptation parameters. The rules can be used for general adaptation in any subject, or they can be specific to one subject. The rule-based system of the Virtual Teacher is intended for use in pedagogical experiments in adaptive e-learning and is therefore designed for users without an education in computer science. The Virtual Teacher was used in the dissertation theses of two students, who carried out two pedagogical experiments. This paper also describes the phase of simulating and modeling the theoretically prepared adaptive process in a modeling tool that has all the required parameters and was created especially for this purpose. The experiments are conducted on groups of virtual students using virtual study material.
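    A condition-conclusion rule system of the kind described can be sketched in a few lines; the student properties and adaptation parameters below are illustrative, not the Virtual Teacher's actual schema:

```python
# Minimal condition -> conclusion rule engine: each rule pairs a predicate
# over the student's properties with a dict of adaptation parameters.
def adapt(student, rules):
    """Apply every rule whose condition matches the student; later rules
    override earlier ones for the same adaptation parameter."""
    params = {}
    for condition, conclusion in rules:
        if condition(student):
            params.update(conclusion)
    return params

# Illustrative rules (property names and values are hypothetical).
rules = [
    (lambda s: s["auditory"] > 0.7, {"media": "audio"}),
    (lambda s: s["knowledge"] < 0.3, {"depth": "basic", "pace": "slow"}),
    (lambda s: s["motivation"] < 0.5, {"encouragement": "frequent"}),
]
```

    A student profile is evaluated against all rules at once, and the merged conclusions parametrize the presentation of the study material.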

  12. Large-field-of-view, modular, stabilized, adaptive-optics-based scanning laser ophthalmoscope.

    PubMed

    Burns, Stephen A; Tumbar, Remy; Elsner, Ann E; Ferguson, Daniel; Hammer, Daniel X

    2007-05-01

    We describe the design and performance of an adaptive optics retinal imager that is optimized for use during dynamic correction for eye movements. The system incorporates a retinal tracker and stabilizer, a wide-field line scan scanning laser ophthalmoscope (SLO), and a high-resolution microelectromechanical-systems-based adaptive optics SLO. The detection system incorporates selection and positioning of confocal apertures, allowing measurement of images arising from different portions of the double pass retinal point-spread function (psf). System performance was excellent. The adaptive optics increased the brightness and contrast for small confocal apertures by more than 2x and decreased the brightness of images obtained with displaced apertures, confirming the ability of the adaptive optics system to improve the psf. The retinal image was stabilized to within 18 μm 90% of the time. Stabilization was sufficient for cross-correlation techniques to automatically align the images.

  13. An Adaptive Unstructured Grid Method by Grid Subdivision, Local Remeshing, and Grid Movement

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    1999-01-01

    An unstructured grid adaptation technique has been developed and successfully applied to several three dimensional inviscid flow test cases. The approach is based on a combination of grid subdivision, local remeshing, and grid movement. For solution adaptive grids, the surface triangulation is locally refined by grid subdivision, and the tetrahedral grid in the field is partially remeshed at locations of dominant flow features. A grid redistribution strategy is employed for geometric adaptation of volume grids to moving or deforming surfaces. The method is automatic and fast and is designed for modular coupling with different solvers. Several steady state test cases with different inviscid flow features were tested for grid/solution adaptation. In all cases, the dominant flow features, such as shocks and vortices, were accurately and efficiently predicted with the present approach. A new and robust method of moving tetrahedral "viscous" grids is also presented and demonstrated on a three-dimensional example.
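    The marking-and-subdivision idea can be reduced to one dimension as a toy analogue of the 3-D subdivision and remeshing described above: bisect every interval whose solution jump exceeds a tolerance and repeat, so points concentrate where the feature is:

```python
import numpy as np

def refine_grid(x, f, tol, max_passes=10):
    """1-D solution-adaptive subdivision: repeatedly bisect any interval
    of the sorted grid x where the jump in f across it exceeds tol."""
    for _ in range(max_passes):
        y = f(x)
        jumps = np.abs(np.diff(y))
        marks = np.flatnonzero(jumps > tol)   # intervals flagged for refinement
        if marks.size == 0:
            break                             # grid resolves the solution
        mids = 0.5 * (x[marks] + x[marks + 1])
        x = np.sort(np.concatenate([x, mids]))
    return x
```

    Applied to a steep tanh profile, refinement clusters the new points around the front while the smooth regions keep their coarse spacing.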

  14. Large Field of View, Modular, Stabilized, Adaptive-Optics-Based Scanning Laser Ophthalmoscope

    PubMed Central

    Burns, Stephen A.; Tumbar, Remy; Elsner, Ann E.; Ferguson, Daniel; Hammer, Daniel X.

    2007-01-01

    We describe the design and performance of an adaptive optics retinal imager that is optimized for use during dynamic correction for eye movements. The system incorporates a retinal tracker and stabilizer, a wide-field line scan scanning laser ophthalmoscope (SLO), and a high-resolution MEMS-based adaptive optics SLO. The detection system incorporates selection and positioning of confocal apertures, allowing measurement of images arising from different portions of the double-pass retinal point-spread function (psf). System performance was excellent. The adaptive optics increased the brightness and contrast for small confocal apertures by more than 2x and decreased the brightness of images obtained with displaced apertures, confirming the ability of the adaptive optics system to improve the point-spread function. The retinal image was stabilized to within 18 microns 90% of the time. Stabilization was sufficient for cross-correlation techniques to automatically align the images. PMID:17429477

  15. Non-motor tasks improve adaptive brain-computer interface performance in users with severe motor impairment

    PubMed Central

    Faller, Josef; Scherer, Reinhold; Friedrich, Elisabeth V. C.; Costa, Ursula; Opisso, Eloy; Medina, Josep; Müller-Putz, Gernot R.

    2014-01-01

    Individuals with severe motor impairment can use event-related desynchronization (ERD) based BCIs as assistive technology. Auto-calibrating and adaptive ERD-based BCIs that users control with motor imagery tasks (“SMR-AdBCI”) have proven effective for healthy users. We aim to find an improved configuration of such an adaptive ERD-based BCI for individuals with severe motor impairment as a result of spinal cord injury (SCI) or stroke. We hypothesized that an adaptive ERD-based BCI that automatically selects a user-specific class combination from motor-related and non-motor-related mental tasks during initial auto-calibration (“Auto-AdBCI”) could allow for higher control performance than a conventional SMR-AdBCI. To answer this question we performed offline analyses on two sessions (21 data sets total) of cue-guided, five-class electroencephalography (EEG) data recorded from individuals with SCI or stroke. On data from the twelve individuals in Session 1, we first identified three bipolar derivations for the SMR-AdBCI. In a similar way, we determined three bipolar derivations and four mental tasks for the Auto-AdBCI. We then simulated both the SMR-AdBCI and the Auto-AdBCI configurations on the unseen data from the nine participants in Session 2 and compared the results. On the unseen data of Session 2 from individuals with SCI or stroke, we found that automatically selecting a user-specific class combination from motor-related and non-motor-related mental tasks during initial auto-calibration (Auto-AdBCI) significantly (p < 0.01) improved classification performance compared to an adaptive ERD-based BCI that only used motor imagery tasks (SMR-AdBCI; average accuracy of 75.7% vs. 66.3%). PMID:25368546

  16. Modeling the Perceptual Learning of Novel Dialect Features

    ERIC Educational Resources Information Center

    Tatman, Rachael

    2017-01-01

    All language use reflects the user's social identity in systematic ways. While humans can easily adapt to this sociolinguistic variation, automatic speech recognition (ASR) systems continue to struggle with it. This dissertation makes three main contributions. The first is to provide evidence that modern state-of-the-art commercial ASR systems…

  17. Using Web-Based Practice to Enhance Mathematics Learning and Achievement

    ERIC Educational Resources Information Center

    Nguyen, Diem M.; Kulm, Gerald

    2005-01-01

    This article describes 1) the special features and accessibility of an innovative web-based practice instrument (WebMA) designed with randomized short-answer, matching and multiple choice items incorporated with automatically adapted feedback for middle school students; and 2) an exploratory study that compares the effects and contributions of…

  18. Neural network applications in telecommunications

    NASA Technical Reports Server (NTRS)

    Alspector, Joshua

    1994-01-01

    Neural network capabilities include automatic and organized handling of complex information, quick adaptation to continuously changing environments, nonlinear modeling, and parallel implementation. This viewgraph presentation presents Bellcore work on applications, learning chip computational function, learning system block diagram, neural network equalization, broadband access control, calling-card fraud detection, software reliability prediction, and conclusions.

  19. An automatic rat brain extraction method based on a deformable surface model.

    PubMed

    Li, Jiehua; Liu, Xiaofeng; Zhuo, Jiachen; Gullapalli, Rao P; Zara, Jason M

    2013-08-15

    The extraction of the brain from the skull in medical images is a necessary first step before image registration or segmentation. While pre-clinical MR imaging studies on small animals, such as rats, are increasing, fully automatic image processing techniques specific to small animal studies remain lacking. In this paper, we present an automatic rat brain extraction method, the Rat Brain Deformable model method (RBD), which adapts the popular human brain extraction tool (BET) through the incorporation of information on the brain geometry and MR image characteristics of the rat brain. The robustness of the method was demonstrated on T2-weighted MR images of 64 rats and compared with other brain extraction methods (BET, PCNN, PCNN-3D). The results demonstrate that RBD reliably extracts the rat brain with high accuracy (>92% volume overlap) and is robust against signal inhomogeneity in the images. Copyright © 2013 Elsevier B.V. All rights reserved.

  20. Diffraction phase microscopy realized with an automatic digital pinhole

    NASA Astrophysics Data System (ADS)

    Zheng, Cheng; Zhou, Renjie; Kuang, Cuifang; Zhao, Guangyuan; Zhang, Zhimin; Liu, Xu

    2017-12-01

    We report a novel approach to diffraction phase microscopy (DPM) with automatic pinhole alignment. The pinhole, which serves as a spatial low-pass filter to generate a uniform reference beam, is made out of a liquid crystal display (LCD) device that allows for electrical control. We have made DPM more accessible to users, while maintaining high phase measurement sensitivity and accuracy, by exploring low-cost optical components and replacing the tedious pinhole alignment process with an automatic pinhole optical alignment procedure. Due to the flexibility in modifying its size and shape, this LCD device serves as a universal filter, requiring no future replacement. Moreover, a graphical user interface for real-time phase imaging has also been developed using a USB CMOS camera. Experimental results, including height maps of a bead sample and the dynamics of live red blood cells (RBCs), are also presented, making this system ready for broad adoption in biological imaging and material metrology.

  1. Automatic Tortuosity-Based Retinopathy of Prematurity Screening System

    NASA Astrophysics Data System (ADS)

    Sukkaew, Lassada; Uyyanonvara, Bunyarit; Makhanov, Stanislav S.; Barman, Sarah; Pangputhipong, Pannet

    Retinopathy of Prematurity (ROP) is an infant disease characterized by increased dilation and tortuosity of the retinal blood vessels. Automatic tortuosity evaluation from retinal digital images is very useful for assisting ophthalmologists in ROP screening and for preventing childhood blindness. This paper proposes a method to automatically classify retinal images as tortuous or non-tortuous. The process imitates expert ophthalmologists' screening by searching for clearly tortuous vessel segments. First, a skeleton of the retinal blood vessels is extracted from the original infant retinal image using a series of morphological operators. Next, we propose to partition the blood vessels recursively using an adaptive linear interpolation scheme. Finally, the tortuosity is calculated based on the curvature of the resulting vessel segments. The retinal images are then classified into two classes using the segments characterized by the highest tortuosity. For an optimal set of training parameters, the prediction accuracy is as high as 100%.

  2. Experimental investigation of an accelerometer controlled automatic braking system

    NASA Technical Reports Server (NTRS)

    Dreher, R. C.; Sleeper, R. K.; Nayadley, J. R., Sr.

    1972-01-01

    An investigation was made to determine the feasibility of an automatic braking system for arresting the motion of an airplane by sensing and controlling braked wheel decelerations. The system was tested on a rotating drum dynamometer by using an automotive tire, wheel, and disk-brake assembly under conditions which included two tire loadings, wet and dry surfaces, and a range of ground speeds up to 70 knots. The controlling parameters were the rates at which brake pressure was applied and released and the Command Deceleration Level which governed the wheel deceleration by controlling the brake operation. Limited tests were also made with the automatic braking system installed on a ground vehicle in an effort to provide a more realistic proof of its feasibility. The results of this investigation indicate that a braking system which utilizes wheel decelerations as the control variable to restrict tire slip is feasible and capable of adapting to rapidly changing surface conditions.
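    The control loop described above — applying brake pressure at a fixed rate until the sensed wheel deceleration exceeds the Command Deceleration Level, then releasing pressure to let the wheel spin back up — can be sketched as a single control step (a hypothetical simplification, not the tested hardware; the rate parameters are illustrative):

```python
def brake_controller(decel, command_level, pressure, apply_rate, release_rate, dt):
    """One step of a deceleration-limited braking law.

    If the measured wheel deceleration `decel` exceeds the Command
    Deceleration Level, brake pressure is released at `release_rate`;
    otherwise it is applied at `apply_rate`. Pressure cannot go negative.
    """
    if decel > command_level:
        return max(0.0, pressure - release_rate * dt)  # releasing
    return pressure + apply_rate * dt                  # applying
```

    Calling this at each sensor sample restricts tire slip: as deceleration spikes on a slippery patch, pressure bleeds off until the wheel recovers.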

  3. Automatic Train Operation Using Autonomic Prediction of Train Runs

    NASA Astrophysics Data System (ADS)

    Asuka, Masashi; Kataoka, Kenji; Komaya, Kiyotoshi; Nishida, Syogo

    In this paper, we present an automatic train control method adaptable to disturbed train traffic conditions. The proposed method presumes that the detected time of a home track clearance is transmitted to trains approaching the station via Digital ATC (Automatic Train Control) equipment. Using this information, each train controls its acceleration by a method that consists of two approaches. First, by setting a designated restricted speed, the train controls its running time so as to arrive at the next station in accordance with the predicted delay. Second, the train predicts the time at which it will reach the current braking pattern generated by Digital ATC, along with the time at which the braking pattern will move ahead. By comparing the two, the train chooses the coasting drive mode in advance to avoid deceleration at the current braking pattern. We evaluated the effectiveness of the proposed method with respect to driving conditions, energy consumption and reduction of delays by simulation.

  4. SA-SOM algorithm for detecting communities in complex networks

    NASA Astrophysics Data System (ADS)

    Chen, Luogeng; Wang, Yanran; Huang, Xiaoming; Hu, Mengyu; Hu, Fang

    2017-10-01

    Community detection is currently a hot topic. Based on the self-organizing map (SOM) algorithm, this paper introduces the idea of self-adaptation (SA), by which the number of communities can be identified automatically, and proposes a novel algorithm, SA-SOM, for detecting communities in complex networks. Several representative real-world networks and a set of computer-generated networks produced by the LFR benchmark are utilized to verify the accuracy and efficiency of this algorithm. The experimental findings demonstrate that the algorithm can identify communities automatically, accurately and efficiently, and that it also achieves higher values of modularity, NMI and density than the SOM algorithm does.
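    Among the quality measures cited above, modularity is the standard one; a minimal sketch of Newman's modularity for an undirected graph (the textbook definition, independent of the SA-SOM code) is:

```python
import numpy as np

def modularity(adj, labels):
    """Newman's modularity Q for an undirected graph.

    adj: symmetric (n, n) adjacency matrix; labels: community id per node.
    Q compares the within-community edge weight against the expectation
    for a random graph with the same degree sequence; higher is better.
    """
    A = np.asarray(adj, dtype=float)
    k = A.sum(axis=1)              # node degrees
    two_m = A.sum()                # 2 * total edge weight
    same = np.equal.outer(labels, labels)   # True where nodes share a community
    return ((A - np.outer(k, k) / two_m) * same).sum() / two_m
```

    For two disconnected triangles split into their natural communities this gives Q = 0.5, while putting all six nodes in one community gives Q = 0.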

  5. Prioritization of brain MRI volumes using medical image perception model and tumor region segmentation.

    PubMed

    Mehmood, Irfan; Ejaz, Naveed; Sajjad, Muhammad; Baik, Sung Wook

    2013-10-01

    The objective of the present study is to explore prioritization methods in diagnostic imaging modalities to automatically determine the contents of medical images. In this paper, we propose an efficient prioritization of brain MRI. First, a model of the radiologists' visual perception is adapted to identify salient regions. This saliency information is then used as an automatic label for accurate segmentation of the brain lesion to determine the scientific value of that image. The qualitative and quantitative results prove that the rankings generated by the proposed method are closer to the rankings created by radiologists. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. Assume-Guarantee Abstraction Refinement Meets Hybrid Systems

    NASA Technical Reports Server (NTRS)

    Bogomolov, Sergiy; Frehse, Goran; Greitschus, Marius; Grosu, Radu; Pasareanu, Corina S.; Podelski, Andreas; Strump, Thomas

    2014-01-01

    Compositional verification techniques in the assume-guarantee style have been successfully applied to transition systems to efficiently reduce the search space by leveraging the compositional nature of the systems under consideration. We adapt these techniques to the domain of hybrid systems with affine dynamics. To build assumptions we introduce an abstraction based on location merging. We integrate the assume-guarantee style analysis with automatic abstraction refinement. We have implemented our approach in the symbolic hybrid model checker SpaceEx. The evaluation shows its practical potential. To the best of our knowledge, this is the first work combining assume-guarantee reasoning with automatic abstraction refinement in the context of hybrid automata.

  7. Automatic laser welding and milling with in situ inline coherent imaging.

    PubMed

    Webster, P J L; Wright, L G; Ji, Y; Galbraith, C M; Kinross, A W; Van Vlack, C; Fraser, J M

    2014-11-01

    Although new affordable high-power laser technologies enable many processing applications in science and industry, depth control remains a serious technical challenge. In this Letter we show that inline coherent imaging (ICI), with line rates up to 312 kHz and microsecond-duration capture times, is capable of directly measuring laser penetration depth, in a process as violent as kW-class keyhole welding. We exploit ICI's high speed, high dynamic range, and robustness to interference from other optical sources to achieve automatic, adaptive control of laser welding, as well as ablation, achieving 3D micron-scale sculpting in vastly different heterogeneous biological materials.

  8. Modernized build and test infrastructure for control software at ESO: highly flexible building, testing, and automatic quality practices for telescope control software

    NASA Astrophysics Data System (ADS)

    Pellegrin, F.; Jeram, B.; Haucke, J.; Feyrin, S.

    2016-07-01

    The paper describes the introduction of a new automated build and test infrastructure, based on the open-source software Jenkins, into the ESO Very Large Telescope control software to replace the preexisting in-house solution. A brief introduction to software quality practices is given, together with a description of the previous solution, its limitations, and the new upcoming requirements. The modifications required to adapt the new system, how they were applied to the current software, and the results obtained are described. An overview of how the new system may be used in future projects is also presented.

  9. Towards automating the discovery of certain innovative design principles through a clustering-based optimization technique

    NASA Astrophysics Data System (ADS)

    Bandaru, Sunith; Deb, Kalyanmoy

    2011-09-01

    In this article, a methodology is proposed for automatically extracting innovative design principles which make a system or process (subject to conflicting objectives) optimal using its Pareto-optimal dataset. Such 'higher knowledge' would not only help designers to execute the system better, but also enable them to predict how changes in one variable would affect other variables if the system has to retain its optimal behaviour. This in turn would help solve other similar systems with different parameter settings easily without the need to perform a fresh optimization task. The proposed methodology uses a clustering-based optimization technique and is capable of discovering hidden functional relationships between the variables, objective and constraint functions and any other function that the designer wishes to include as a 'basis function'. A number of engineering design problems are considered for which the mathematical structure of these explicit relationships exists and has been revealed by a previous study. A comparison with the multivariate adaptive regression splines (MARS) approach reveals the practicality of the proposed approach due to its ability to find meaningful design principles. The success of this procedure for automated innovization is highly encouraging and indicates its suitability for further development in tackling more complex design scenarios.

  10. A physics-based fractional order model and state of energy estimation for lithium ion batteries. Part II: Parameter identification and state of energy estimation for LiFePO4 battery

    NASA Astrophysics Data System (ADS)

    Li, Xiaoyu; Pan, Ke; Fan, Guodong; Lu, Rengui; Zhu, Chunbo; Rizzoni, Giorgio; Canova, Marcello

    2017-11-01

    State of energy (SOE) is an important index for the electrochemical energy storage system in electric vehicles. In this paper, a robust state of energy estimation method in combination with a physical model parameter identification method is proposed to achieve accurate battery state estimation at different operating conditions and different aging stages. A physics-based fractional order model with variable solid-state diffusivity (FOM-VSSD) is used to characterize the dynamic performance of a LiFePO4/graphite battery. In order to update the model parameters automatically at different aging stages, a multi-step model parameter identification method based on lexicographic optimization is specially designed for electric vehicle operating conditions. As the battery's available energy changes with the applied load current profile, the relationship between the remaining energy loss and the state of charge, the average current, and the average squared current is modeled. The SOE at different operating conditions and aging stages is estimated based on an adaptive fractional order extended Kalman filter (AFEKF). Validation results show that the overall SOE estimation error is within ±5%. The proposed method is suitable for online electric vehicle applications.

  11. Numerical Study of Variation of Mechanical Properties of a Binary Aluminum Alloy with Respect to Its Grain Shapes †

    PubMed Central

    Sharifi, Hamid; Larouche, Daniel

    2014-01-01

    To study the variation of the mechanical behavior of binary aluminum-copper alloys with respect to their microstructure, a numerical simulation of their granular structure was carried out. The microstructures are created by repeated inclusion of predefined basic grain shapes into a representative volume element until a given volume percentage of the α-phase is reached. Depending on the grain orientations, coalescence of the grains can be performed. Different granular microstructures are created by using different basic grain shapes. With a suitable set of basic grain shapes, the modeled microstructure exhibits a realistic aluminum alloy microstructure, which can be adapted to a particular cooling condition. Our granular models are automatically converted to a finite element model. The effect of grain shapes and sizes on the variation of the elastic modulus and plasticity of such a heterogeneous domain was investigated. Our results show that for a given α-phase fraction with different grain shapes and sizes, the elastic moduli and yield stresses are almost the same, but the ultimate stress and elongation are more affected. Moreover, we found that the distribution of the θ phases inside the α phases is more important than the grain shape itself. PMID:28788607

  12. Multiresponsive Kinematics and Robotics of Surface-Patterned Polymer Film.

    PubMed

    Liang, Shumin; Qiu, Xiaxin; Yuan, Jun; Huang, Wei; Du, Xuemin; Zhang, Lidong

    2018-06-06

    Soft robots, sensors, and energy harvesters require materials that are capable of converting external stimuli to visible deformations, especially when shape-programmable deformations are desired. Herein, we develop a polymer film that can reversibly respond to humidity, heating, and acetone vapors with the generation of shape-programmable large deformations. A poly(vinylidene fluoride) film, which provides the acetone responsiveness, is designed with microchannel patterns created on one side using templates, and the microchannel-patterned side is then treated with hygroscopic 3-aminopropyltriethoxysilane (APTES) to add humidity/heating-responsive elements. The APTES-modified microchannels lead to anisotropic flexural modulus and hygroscopicity in the film, resulting in shape-programmed kinematics that depend on the orientation of the surface microchannels. When the microchannels align at oblique/right angles with respect to the long axis of the film strips, coiling/curling motions can be generated in response to the stimuli, with better motion performance found in the humidity- and heating-driven systems. Utilized in self-adaptive soft robots, this material exhibits prominent toughness, powerful strength, and long endurance in converting humidity and heat to mechanical work, including the transportation of lightweight objects, an automatic sensing cap, and the mimicking of crawling in nature. We thus believe that this material, with its shape-programmable multisensing capability, might be suitable for soft machines and robotics.

  13. A fast and automatic fusion algorithm for unregistered multi-exposure image sequence

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Yu, Feihong

    2014-09-01

    The human visual system (HVS) can perceive all the brightness levels of a scene through visual adaptation. However, the dynamic range of most commercial digital cameras and display devices is smaller than that of the human eye, which implies that low dynamic range (LDR) images captured by a normal digital camera may lose image details. We propose an efficient approach to high dynamic range (HDR) image fusion that copes with image displacement and image blur degradation in a computationally efficient manner, which is suitable for implementation on mobile devices. The image registration algorithms proposed in the previous literature are unable to meet the efficiency and performance requirements of mobile devices. In this paper, we select the Oriented FAST and Rotated BRIEF (ORB) detector to extract local image structures. The descriptor used in a multi-exposure image fusion algorithm has to be fast and robust to illumination variations and geometric deformations, and the ORB descriptor is the best candidate for our algorithm. Further, we perform an improved RANdom SAmple Consensus (RANSAC) algorithm to reject incorrect matches. For the fusion of images, a new approach based on the Stationary Wavelet Transform (SWT) is used. The experimental results demonstrate that the proposed algorithm generates high-quality images at low computational cost. Comparisons with a number of other feature matching methods show that our method achieves better performance.
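    The match-rejection step can be illustrated with a bare-bones RANSAC sketch (hypothetical code; the paper's improved RANSAC presumably fits a richer motion model such as a homography, whereas a pure 2D translation keeps the idea compact):

```python
import numpy as np

def ransac_translation(src, dst, iters=200, tol=2.0, seed=0):
    """Estimate a 2D translation between matched keypoints with RANSAC.

    src, dst: (N, 2) arrays of matched keypoint coordinates. Repeatedly
    hypothesize a translation from one random match, count how many
    matches agree within `tol` pixels, keep the largest consensus set,
    and refit the translation on its inliers.
    """
    rng = np.random.default_rng(seed)
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(src))          # one pair suffices for a translation
        t = dst[i] - src[i]
        inliers = np.linalg.norm(dst - (src + t), axis=1) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)  # refit on inliers
    return t, best_inliers
```

    Matches flagged `False` are the incorrect correspondences that would otherwise corrupt the alignment before fusion.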

  14. All-in-one 3D printed microscopy chamber for multidimensional imaging, the UniverSlide.

    PubMed

    Alessandri, Kevin; Andrique, Laetitia; Feyeux, Maxime; Bikfalvi, Andreas; Nassoy, Pierre; Recher, Gaëlle

    2017-02-10

    While live 3D high-resolution microscopy techniques are developing rapidly, their use for biological applications is partially hampered by practical difficulties such as the lack of a versatile sample chamber. Here, we propose the design of a multi-usage observation chamber adapted for live 3D bio-imaging. We show the usefulness and practicality of this chamber, which we named the UniverSlide, for live imaging of two case examples, namely multicellular systems encapsulated in sub-millimeter hydrogel shells and zebrafish larvae. We also demonstrate its versatility and compatibility with all microscopy devices by using upright or inverted microscope configurations after loading the UniverSlide with fixed or living samples. Further, the device is applicable for medium/high-throughput screening and automated multi-position image acquisition, providing a constraint-free but stable and parallelized immobilization of the samples. The frame of the UniverSlide is fabricated using a stereolithography 3D printer, has the size of a microscopy slide, is autoclavable and is sealed with a removable lid, which makes it suitable for use in a controlled culture environment. We describe in detail how to build this chamber and we provide all the files necessary to print the different pieces in the lab.

  15. All-in-one 3D printed microscopy chamber for multidimensional imaging, the UniverSlide

    PubMed Central

    Alessandri, Kevin; Andrique, Laetitia; Feyeux, Maxime; Bikfalvi, Andreas; Nassoy, Pierre; Recher, Gaëlle

    2017-01-01

    While live 3D high-resolution microscopy techniques are developing rapidly, their use for biological applications is partially hampered by practical difficulties such as the lack of a versatile sample chamber. Here, we propose the design of a multi-usage observation chamber adapted for live 3D bio-imaging. We show the usefulness and practicality of this chamber, which we named the UniverSlide, for live imaging of two case examples, namely multicellular systems encapsulated in sub-millimeter hydrogel shells and zebrafish larvae. We also demonstrate its versatility and compatibility with all microscopy devices by using upright or inverted microscope configurations after loading the UniverSlide with fixed or living samples. Further, the device is applicable for medium/high-throughput screening and automated multi-position image acquisition, providing a constraint-free but stable and parallelized immobilization of the samples. The frame of the UniverSlide is fabricated using a stereolithography 3D printer, has the size of a microscopy slide, is autoclavable and is sealed with a removable lid, which makes it suitable for use in a controlled culture environment. We describe in detail how to build this chamber and we provide all the files necessary to print the different pieces in the lab. PMID:28186188

  16. Replacing the AMOR with the miniDOAS in the ammonia monitoring network in the Netherlands

    NASA Astrophysics Data System (ADS)

    Berkhout, Augustinus J. C.; Swart, Daan P. J.; Volten, Hester; Gast, Lou F. L.; Haaima, Marty; Verboom, Hans; Stefess, Guus; Hafkenscheid, Theo; Hoogerbrugge, Ronald

    2017-11-01

    In this paper we present the continued development of the miniDOAS, an active differential optical absorption spectroscopy (DOAS) instrument used to measure ammonia concentrations in ambient air. The miniDOAS has been adapted for use in the Dutch National Air Quality Monitoring Network. The miniDOAS replaces the life-expired continuous-flow denuder ammonia monitor (AMOR). From September 2014 to December 2015, both instruments measured in parallel before the change from AMOR to miniDOAS was made. The instruments were deployed at six monitoring stations throughout the Netherlands. We report on the results of this intercomparison. Both instruments show a good uptime of ca. 90 %, adequate for an automatic monitoring network. Although both instruments produce 1 min values of ammonia concentrations, a direct comparison on short timescales such as minutes or hours does not give meaningful results because the AMOR response to changing ammonia concentrations is slow. Comparisons between daily and monthly values show good agreement. For monthly averages, we find a small average offset of 0.65 ± 0.28 µg m-3 and a slope of 1.034 ± 0.028, with the miniDOAS measuring slightly higher than the AMOR. The fast time resolution of the miniDOAS makes the instrument suitable not only for monitoring but also for process studies.

  17. A Method for Extracting Road Boundary Information from Crowdsourcing Vehicle GPS Trajectories.

    PubMed

    Yang, Wei; Ai, Tinghua; Lu, Wei

    2018-04-19

    Crowdsourcing trajectory data is an important approach for accessing and updating road information. In this paper, we present a novel approach for extracting road boundary information from crowdsourced vehicle traces based on Delaunay triangulation (DT). First, an optimization and interpolation method is proposed to filter abnormal trace segments from raw global positioning system (GPS) traces and to interpolate the optimized segments adaptively so that there are enough tracking points. Second, the DT and the Voronoi diagram are constructed within the interpolated tracking lines to calculate road boundary descriptors from the areas of the Voronoi cells and the lengths of the triangle edges. A road boundary detection model is then established by integrating the boundary descriptors and trajectory movement features (e.g., direction) through the DT. Third, the boundary detection model is used to detect the road boundary from the DT constructed from the trajectory lines, and a region-growing method based on seed polygons is proposed to extract the road boundary. Experiments were conducted using the GPS traces of taxis in Beijing, China, and the results show that the proposed method is suitable for extracting the road boundary from low-frequency GPS traces, multi-type road structures, and different time intervals. Compared with two existing methods, the automatically extracted boundary information was shown to be of higher quality.
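    The triangle-edge-length descriptor mentioned above can be sketched with SciPy (a hypothetical illustration, not the authors' code; the Voronoi-cell-area descriptor and the movement features are omitted for brevity):

```python
import numpy as np
from scipy.spatial import Delaunay

def long_boundary_edges(points, factor=2.0):
    """Flag Delaunay edges much longer than the mean edge length.

    Triangulate the tracking points and return the edges whose length
    exceeds `factor` times the mean; such edges tend to bridge the gap
    between separate roads or lanes and thus hint at road boundaries.
    """
    pts = np.asarray(points, float)
    tri = Delaunay(pts)
    edges = set()
    for simplex in tri.simplices:              # collect unique edges
        for a, b in ((0, 1), (1, 2), (0, 2)):
            edges.add(tuple(sorted((simplex[a], simplex[b]))))
    edges = np.array(sorted(edges))
    lengths = np.linalg.norm(pts[edges[:, 0]] - pts[edges[:, 1]], axis=1)
    return edges[lengths > factor * lengths.mean()]
```

    On two dense parallel tracks the flagged edges are exactly those spanning the gap between them; `factor` is an illustrative tuning knob, not a value from the paper.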

  18. A Method for Extracting Road Boundary Information from Crowdsourcing Vehicle GPS Trajectories

    PubMed Central

    Yang, Wei

    2018-01-01

    Crowdsourcing trajectory data is an important approach for accessing and updating road information. In this paper, we present a novel approach for extracting road boundary information from crowdsourced vehicle traces based on Delaunay triangulation (DT). First, an optimization and interpolation method is proposed to filter abnormal trace segments from raw global positioning system (GPS) traces and to interpolate the optimized segments adaptively so that there are enough tracking points. Second, the DT and the Voronoi diagram are constructed within the interpolated tracking lines to calculate road boundary descriptors from the areas of the Voronoi cells and the lengths of the triangle edges. A road boundary detection model is then established by integrating the boundary descriptors and trajectory movement features (e.g., direction) through the DT. Third, the boundary detection model is used to detect the road boundary from the DT constructed from the trajectory lines, and a region-growing method based on seed polygons is proposed to extract the road boundary. Experiments were conducted using the GPS traces of taxis in Beijing, China, and the results show that the proposed method is suitable for extracting the road boundary from low-frequency GPS traces, multi-type road structures, and different time intervals. Compared with two existing methods, the automatically extracted boundary information was shown to be of higher quality. PMID:29671792

  19. All-in-one 3D printed microscopy chamber for multidimensional imaging, the UniverSlide

    NASA Astrophysics Data System (ADS)

    Alessandri, Kevin; Andrique, Laetitia; Feyeux, Maxime; Bikfalvi, Andreas; Nassoy, Pierre; Recher, Gaëlle

    2017-02-01

    While live 3D high-resolution microscopy techniques are developing rapidly, their use for biological applications is partially hampered by practical difficulties such as the lack of a versatile sample chamber. Here, we propose the design of a multi-usage observation chamber adapted for live 3D bio-imaging. We show the usefulness and practicality of this chamber, which we named the UniverSlide, for live imaging of two case examples, namely multicellular systems encapsulated in sub-millimeter hydrogel shells and zebrafish larvae. We also demonstrate its versatility and compatibility with all microscopy devices by using upright or inverted microscope configurations after loading the UniverSlide with fixed or living samples. Further, the device is applicable for medium/high-throughput screening and automated multi-position image acquisition, providing a constraint-free but stable and parallelized immobilization of the samples. The frame of the UniverSlide is fabricated using a stereolithography 3D printer, has the size of a microscopy slide, is autoclavable and is sealed with a removable lid, which makes it suitable for use in a controlled culture environment. We describe in detail how to build this chamber and we provide all the files necessary to print the different pieces in the lab.

  20. Development and application of a backscatter lidar forward operator for quantitative validation of aerosol dispersion models and future data assimilation

    NASA Astrophysics Data System (ADS)

    Geisinger, Armin; Behrendt, Andreas; Wulfmeyer, Volker; Strohbach, Jens; Förstner, Jochen; Potthast, Roland

    2017-12-01

    A new backscatter lidar forward operator was developed which is based on the distinct calculation of the aerosols' backscatter and extinction properties. The forward operator was adapted to the COSMO-ART ash dispersion simulation of the Eyjafjallajökull eruption in 2010. While the particle number concentration was provided as a model output variable, the scattering properties of each individual particle type were determined by dedicated scattering calculations. Sensitivity studies were performed to estimate the uncertainties related to the assumed particle properties. Scattering calculations for several types of non-spherical particles required the use of T-matrix routines. Because the backscatter and extinction properties of the model's volcanic ash size classes are calculated separately, the sensitivity studies could be performed for each size class individually, which is not possible for forward models based on a fixed lidar ratio. Finally, the forward-modeled lidar profiles were compared to automated ceilometer lidar (ACL) measurements both qualitatively and quantitatively, with the attenuated backscatter coefficient chosen as a suitable physical quantity. As the ACL measurements were not calibrated automatically, their calibration had to be performed using satellite lidar and ground-based Raman lidar measurements. A slight overestimation of the model-predicted volcanic ash number density was observed. Major requirements for future assimilation of ACL data have been identified, namely, the availability of calibrated lidar measurement data, a scattering database for atmospheric aerosols, a better representation and coverage of aerosols by the ash dispersion model, and further investigation into backscatter lidar forward operators which calculate the backscatter coefficient directly for each individual aerosol type. The introduced forward operator offers the flexibility to be adapted to a multitude of model systems and measurement setups.
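
    The attenuated backscatter coefficient that such an operator forward-models follows from the single-scattering lidar equation: beta_att(z) = beta(z) * exp(-2 * integral from 0 to z of alpha(z') dz'). The numerical sketch below illustrates that relation only; it is not the COSMO-ART operator, and the profile shapes are assumptions.

```python
import numpy as np

def attenuated_backscatter(z, beta, alpha):
    """Forward-model the attenuated backscatter coefficient
    beta_att(z) = beta(z) * exp(-2 * tau(z)), where tau(z) is the
    cumulative extinction from the ground to height z."""
    dz = np.diff(z)
    # cumulative trapezoidal integral of the extinction profile
    tau = np.concatenate(([0.0],
                          np.cumsum(0.5 * (alpha[1:] + alpha[:-1]) * dz)))
    return beta * np.exp(-2.0 * tau)
```

    For a layer with constant extinction, the modeled profile decays exponentially with twice the optical depth, which is the quantity a ceilometer actually reports.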

  1. Adaptive fusion of infrared and visible images in dynamic scene

    NASA Astrophysics Data System (ADS)

    Yang, Guang; Yin, Yafeng; Man, Hong; Desai, Sachi

    2011-11-01

    Multi-modality sensor fusion has been widely employed in various surveillance and military applications. A variety of image fusion techniques, including PCA, wavelet, curvelet and HSV, have been proposed in recent years to improve human visual perception for object detection. One of the main challenges for visible and infrared image fusion is to automatically determine an optimal fusion strategy for different input scenes at an acceptable computational cost. In this paper, we propose a fast and adaptive feature-selection-based image fusion method to obtain a high-contrast image from visible and infrared sensors for target detection. First, fuzzy c-means clustering is applied to the infrared image to highlight possible hotspot regions, which are considered potential target locations. The region surrounding the target area is then segmented as the background region. Image fusion is applied locally on the selected target and background regions by computing different linear combinations of color components from the registered visible and infrared images. After obtaining the different fused images, histogram distributions are computed over these local fusion images as the fusion feature set. The variance ratio, a measure based on Linear Discriminant Analysis (LDA), is employed to rank the feature set, and the most discriminative feature is selected for fusing the whole image. As the feature selection is performed over time, the process dynamically determines the most suitable feature for image fusion in different scenes. Experiments were conducted on the OSU Color-Thermal database and the TNO Human Factors dataset. The fusion results indicate that our proposed method achieves competitive performance compared with other fusion algorithms at a relatively low computational cost.
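
    A variance-ratio score of the kind used for feature ranking can be sketched as below. This is an illustrative reading of the criterion (prefer features whose values are tightly clustered within the target and background classes but well separated between them); the exact normalization is an assumption, not the paper's formula.

```python
import numpy as np

def variance_ratio(fg, bg):
    """Score a feature by the variance of the pooled samples divided by
    the sum of the within-class variances: large when target (fg) and
    background (bg) values are compact but far apart."""
    pooled = np.concatenate([fg, bg])
    return np.var(pooled) / (np.var(fg) + np.var(bg) + 1e-12)
```

    Ranking candidate fusion features by this score and keeping the top one mirrors the selection step described in the abstract.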

  2. Neural Flight Control System

    NASA Technical Reports Server (NTRS)

    Gundy-Burlet, Karen

    2003-01-01

    The Neural Flight Control System (NFCS) was developed to address the need for control systems that can be produced and tested at lower cost, easily adapted to prototype vehicles, and for flight systems that can accommodate damaged control surfaces or changes to aircraft stability and control characteristics resulting from failures or accidents. NFCS utilizes a neural-network-based flight control algorithm which automatically compensates for a broad spectrum of unanticipated damage or failures of an aircraft in flight. Pilot stick and rudder pedal inputs are fed into a reference model which produces pitch, roll and yaw rate commands. The reference model frequencies and gains can be set to provide handling quality characteristics suitable for the aircraft of interest. The rate commands are used in conjunction with estimates of the aircraft's stability and control (S&C) derivatives by a simplified Dynamic Inverse controller to produce virtual elevator, aileron and rudder commands. These virtual surface deflection commands are optimally distributed across the aircraft's available control surfaces using linear programming theory. Sensor data are compared with the reference model rate commands to produce an error signal. A Proportional/Integral (PI) error controller "winds up" on the error signal and adds an augmented command to the reference model output, with the effect of zeroing the error signal. In order to provide more consistent handling qualities for the pilot, neural networks learn the behavior of the error controller and add in the augmented command before the integrator winds up. In the case of damage sufficient to affect the handling qualities of the aircraft, an Adaptive Critic is utilized to reduce the reference model frequencies and gains to stay within the flyable envelope of the aircraft.
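
    The PI error controller described above can be sketched in a few lines; the gains, time step, and the first-order plant used to exercise it are illustrative assumptions, not NFCS values.

```python
class PIController:
    """Proportional/Integral error controller of the kind described:
    it 'winds up' on the rate error and outputs an augmented command
    that drives the error toward zero."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def update(self, error):
        self.integral += error * self.dt  # integrator winds up on the error
        return self.kp * error + self.ki * self.integral

# Exercise the controller: track a constant rate command of 1.0
# through a crude first-order plant whose output lags the command.
pi = PIController(kp=2.0, ki=1.0, dt=0.01)
rate, cmd = 0.0, 1.0
for _ in range(2000):
    u = pi.update(cmd - rate)
    rate += (u - rate) * 0.01  # first-order lag plant (illustrative)
```

    The integral term is what zeroes the steady-state error; in the NFCS the neural networks pre-empt it by learning the augmented command the integrator would otherwise have to build up.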

  3. Suitability evaluation tool for lands (rice, corn and soybean) as mobile application

    NASA Astrophysics Data System (ADS)

    Rahim, S. E.; Supli, A. A.; Damiri, N.

    2017-09-01

    Evaluation of land suitability for specific purposes, e.g., for food crops, is essential as a means of understanding the determining factors to be considered in managing a land successfully. A framework for evaluating land suitability for agricultural purposes was first introduced by the Food and Agriculture Organization (FAO) in the late 1970s. Used manually, the framework is time consuming and unappealing to land users. Therefore, the authors have developed an effective tool by transforming the FAO framework into a smart mobile application. This application is designed using simple language for each factor and a rule-based system (RBS) algorithm. The factors involved are soil type, depth of soil solum, soil fertility, soil pH, drainage, risk of flood, etc. Suitability in this paper is limited to rice, corn and soybean. The application was found to be easier to understand and can automatically determine the suitability of land. Usability testing was also conducted with 75 respondents; the results placed usability in the "very good" classification. The program is urgently needed by land managers, farmers, lecturers, students and government officials (planners) to help them manage their land more easily for a better future.
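
    A rule-based suitability check in the spirit of the FAO framework can be sketched as follows; the factor names and thresholds here are hypothetical placeholders, not the rules used by the actual application.

```python
# Hypothetical rules for one crop: each factor maps to a pass/fail
# predicate, mimicking a simple rule-based system (RBS).
RICE_RULES = {
    "soil_ph":     lambda v: 5.5 <= v <= 7.0,
    "solum_depth": lambda v: v >= 50,            # cm, assumed threshold
    "flood_risk":  lambda v: v in ("none", "low"),
}

def suitability(crop_rules, land):
    """Return ('S', []) if every rule passes (suitable), otherwise
    ('N', failed_factors), echoing the FAO suitable/not-suitable split."""
    failed = [f for f, rule in crop_rules.items() if not rule(land[f])]
    return ("S", failed) if not failed else ("N", failed)
```

    Reporting the failed factors alongside the class is what lets such an app explain *why* a plot is unsuitable, rather than just labeling it.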

  4. Small RNA Library Preparation Method for Next-Generation Sequencing Using Chemical Modifications to Prevent Adapter Dimer Formation.

    PubMed

    Shore, Sabrina; Henderson, Jordana M; Lebedev, Alexandre; Salcedo, Michelle P; Zon, Gerald; McCaffrey, Anton P; Paul, Natasha; Hogrefe, Richard I

    2016-01-01

    For most sample types, the automation of RNA and DNA sample preparation workflows enables high throughput next-generation sequencing (NGS) library preparation. Greater adoption of small RNA (sRNA) sequencing has been hindered by high sample input requirements and inherent ligation side products formed during library preparation. These side products, known as adapter dimer, are very similar in size to the tagged library. Most sRNA library preparation strategies thus employ a gel purification step to isolate tagged library from adapter dimer contaminants. At very low sample inputs, adapter dimer side products dominate the reaction and limit the sensitivity of this technique. Here we address the need for improved specificity of sRNA library preparation workflows with a novel library preparation approach that uses modified adapters to suppress adapter dimer formation. This workflow allows for lower sample inputs and elimination of the gel purification step, which in turn allows for an automatable sRNA library preparation protocol.

  5. Does Visuomotor Adaptation Proceed in Stages? An Examination of the Learning Model by Chein and Schneider (2012).

    PubMed

    Simon, Anja; Bock, Otmar

    2015-01-01

    A new 3-stage model based on neuroimaging evidence was proposed by Chein and Schneider (2012). Each stage is associated with different brain regions and draws on distinct cognitive abilities: the first stage on creativity, the second on selective attention, and the third on automatic processing. The purpose of the present study was to scrutinize the validity of this model for one popular learning paradigm, visuomotor adaptation. Participants completed tests of creativity, selective attention and automated processing before participating in a pointing task with adaptation to a 60° rotation of visual feedback. To examine the relationship between cognitive abilities and motor learning at different times of practice, associations between cognitive and adaptation scores were calculated repeatedly throughout adaptation. The authors found no benefit of high creativity for adaptive performance. High levels of selective attention were positively associated with early adaptation, but hardly with late adaptation and de-adaptation. High levels of automated execution were beneficial for late adaptation, but hardly for early adaptation and de-adaptation. From this we conclude that Chein and Schneider's first learning stage is difficult to confirm by research on visuomotor adaptation, and that the other 2 learning stages relate to workaround strategies rather than to actual adaptive recalibration.

  6. Automatic allograft bone selection through band registration and its application to distal femur.

    PubMed

    Zhang, Yu; Qiu, Lei; Li, Fengzan; Zhang, Qing; Zhang, Li; Niu, Xiaohui

    2017-09-01

    Clinical reports suggest that large bone defects can be effectively restored by allograft bone transplantation, in which allograft bone selection plays an important role. Moreover, there is strong demand for automatic allograft bone selection methods, as they could greatly improve the management efficiency of large bone banks. Although several automatic methods have been presented to select the most suitable allograft bone from a massive allograft bone bank, these methods still suffer from inaccuracy. In this paper, we propose an effective allograft bone selection method that does not require the contralateral bones. First, the allograft bone is globally aligned to the recipient bone by surface registration. Then, the global alignment is further refined through band registration. The band, defined as the recipient points within the lifted and lowered cutting planes, involves more of the local structure of the defect segment. Therefore, our method achieves robust alignment and high registration accuracy between allograft and recipient. Moreover, the existing contour method and surface method can be unified into one framework under our method by adjusting the lifting and lowering distances of the cutting planes. Finally, our method has been validated on a database of distal femurs. The experimental results indicate that our method outperforms both the surface method and the contour method.
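
    The initial global alignment step can be illustrated with the standard least-squares rigid transform (Kabsch algorithm) between corresponding point sets; this is a generic sketch under the assumption of known correspondences, and the paper's actual surface and band registration details are not reproduced here.

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst,
    the kind of step used to globally align an allograft surface to
    the recipient before local refinement."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)           # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

    In practice correspondences are unknown, so methods like ICP alternate this closed-form solve with nearest-neighbor matching.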

  7. Automatic non-proliferative diabetic retinopathy screening system based on color fundus image.

    PubMed

    Xiao, Zhitao; Zhang, Xinpeng; Geng, Lei; Zhang, Fang; Wu, Jun; Tong, Jun; Ogunbona, Philip O; Shan, Chunyan

    2017-10-26

    Non-proliferative diabetic retinopathy is the early stage of diabetic retinopathy. Automatic detection of non-proliferative diabetic retinopathy is significant for clinical diagnosis, early screening and monitoring of disease progression. This paper introduces the design and implementation of an automatic system for screening non-proliferative diabetic retinopathy based on color fundus images. First, the fundus structures, including blood vessels, optic disc and macula, are extracted and localized. In particular, a new optic disc localization method using parabolic fitting is proposed, based on the physiological structural characteristics of the optic disc and blood vessels. Then, early lesions, such as microaneurysms, hemorrhages and hard exudates, are detected based on their respective characteristics. An equivalent optical model simulating the human eye is designed based on the anatomical structure of the retina, and the main structures and early lesions are reconstructed in 3D space for better visualization. Finally, the severity of each image is graded according to the international criteria for diabetic retinopathy. The system has been tested on public databases and on images from hospitals. Experimental results demonstrate that the proposed system achieves high accuracy in detecting the main structures and early lesions, and the severity classification for non-proliferative diabetic retinopathy is likewise accurate. Our system can assist ophthalmologists in clinical diagnosis, automatic screening and monitoring of disease progression.

  8. DALMATIAN: An Algorithm for Automatic Cell Detection and Counting in 3D.

    PubMed

    Shuvaev, Sergey A; Lazutkin, Alexander A; Kedrov, Alexander V; Anokhin, Konstantin V; Enikolopov, Grigori N; Koulakov, Alexei A

    2017-01-01

    Current 3D imaging methods, including optical projection tomography, light-sheet microscopy, block-face imaging, and serial two-photon tomography, enable visualization of large samples of biological tissue. The large volumes of data obtained at high resolution require automatic image processing techniques, such as algorithms for automatic cell detection or, more generally, point-like object detection. Current approaches to automated cell detection struggle with particular cell types, cell populations of differing brightness, non-uniform staining, and overlapping cells. In this study, we present a set of algorithms for robust automatic cell detection in 3D. Our algorithms are suitable for, but not limited to, whole brain regions and individual brain sections. We used a watershed procedure to split regional maxima representing overlapping cells, and developed a bootstrap Gaussian fit procedure to evaluate the statistical significance of detected cells. We compared the cell detection quality of our algorithm and other software using 42 samples representing 6 staining and imaging techniques. The results provided by our algorithm matched manual expert quantification with signal-to-noise-dependent confidence, including for samples with cells of differing brightness, non-uniform staining, and overlapping cells, across whole brain regions and individual tissue sections. Our algorithm provided the best cell detection quality among the tested free and commercial software.
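
    The seed-detection step that precedes watershed splitting can be sketched as regional-maxima detection above an intensity threshold. This simplified version omits the watershed and bootstrap Gaussian fit stages and is not the DALMATIAN implementation; the filter size and threshold are assumptions.

```python
import numpy as np
from scipy import ndimage

def detect_cells(volume, threshold):
    """Point-like object detection in a 3D stack: voxels that are
    regional maxima of their 3x3x3 neighborhood and brighter than
    `threshold` seed the candidate cells; connected seeds are labeled
    and summarized by their intensity-weighted centers."""
    is_max = volume == ndimage.maximum_filter(volume, size=3)
    seeds = is_max & (volume > threshold)
    labels, n = ndimage.label(seeds)
    return ndimage.center_of_mass(volume, labels, list(range(1, n + 1)))
```

    Overlapping cells appear as one merged seed region here, which is exactly the failure mode the watershed split in the full algorithm addresses.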

  9. Achieving Accurate Automatic Sleep Staging on Manually Pre-processed EEG Data Through Synchronization Feature Extraction and Graph Metrics.

    PubMed

    Chriskos, Panteleimon; Frantzidis, Christos A; Gkivogkli, Polyxeni T; Bamidis, Panagiotis D; Kourtidou-Papadeli, Chrysoula

    2018-01-01

    Sleep staging, the process of assigning labels to epochs of sleep according to the sleep stage to which they belong, is an arduous, time-consuming and error-prone process, as the initial recordings are quite often polluted by noise from different sources. To properly analyze such data and extract clinical knowledge, noise components must be removed or alleviated. In this paper, a pre-processing and subsequent sleep staging pipeline for the analysis of electroencephalographic sleep signals is described. Two novel methods of functional connectivity estimation, Synchronization Likelihood (SL) and Relative Wavelet Entropy (RWE), are comparatively investigated for automatic sleep staging through manually pre-processed electroencephalographic recordings. A multi-step process that renders signals suitable for further analysis is described first. Then, two methods that achieve computerized sleep staging by extracting synchronization features from electroencephalographic recordings are proposed. These are based on bivariate features which provide a functional overview of the brain network, in contrast to most proposed methods, which rely on extracting univariate time and frequency features. Annotation of sleep epochs is achieved by training classifiers on the extracted features, which are in turn able to accurately classify new epochs. Analysis of data from sleep experiments in a randomized, controlled bed-rest study, organized by the European Space Agency and conducted in the "ENVIHAB" facility of the Institute of Aerospace Medicine at the German Aerospace Center (DLR) in Cologne, Germany, attains high accuracy rates of over 90%, based on ground truth obtained from manual sleep staging by two experienced sleep experts. Therefore, it can be concluded that the above feature extraction methods are suitable for semi-automatic sleep staging. PMID:29628883

  11. Automatic patient alignment system using 3D ultrasound.

    PubMed

    Kaar, Marcus; Figl, Michael; Hoffmann, Rainer; Birkfellner, Wolfgang; Stock, Markus; Georg, Dietmar; Goldner, Gregor; Hummel, Johann

    2013-04-01

    Recent developments in radiation therapy, such as intensity-modulated radiotherapy (IMRT) or dose painting, promise better dose distribution on the tumor. For effective application of these methods, the exact positioning of the patient and the localization of the irradiated organ and surrounding structures are crucial. Especially with respect to the treatment of the prostate, ultrasound (US) allows differentiation between soft tissues and has therefore been applied in various repositioning systems, such as BAT or Clarity. The authors built a new system which uses 3D US at both sites, the CT room and the intervention room, and applies 3D/3D US/US registration for automatic repositioning. In a first step, the authors applied image preprocessing methods to prepare the US images for an optimal registration process. Five different metrics were evaluated for the 3D/3D registration procedure. To find the image metric which fits a particular patient best, three 3D US images were taken at the CT site and registered to each other, and a US registration error was calculated from these results. The most successful image metric was then applied in the US/US registration process. The success of the whole repositioning method was assessed using the results of an ExacTrac system as the gold standard. The US/US registration error was found to be 2.99 ± 1.54 mm with the Mattes mutual information metric (eleven patients), which proved to be the most suitable of the assessed metrics. For the complete repositioning chain, the error amounted to 4.15 ± 1.20 mm (ten patients). The authors developed a patient repositioning system which works automatically, without the need for user interaction, with an accuracy that appears suitable for clinical application.
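
    The mutual information family of metrics (of which the Mattes variant proved most suitable here) can be illustrated with a plain joint-histogram estimate. Mattes MI itself uses B-spline Parzen windowing, so the version below is a simplified stand-in with an assumed bin count.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information between two images: high when
    the intensity of one image predicts the other, which is what a
    registration optimizer maximizes over rigid transforms."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()                 # joint probability
    px, py = p.sum(axis=1), p.sum(axis=0)  # marginals
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px[:, None] * py[None, :])[nz])).sum())
```

    An image is maximally informative about itself, so MI of an image with itself exceeds MI with any scrambled version, which is the property registration exploits.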

  12. Toward Ensuring Health Equity: Readability and Cultural Equivalence of OMERACT Patient-reported Outcome Measures.

    PubMed

    Petkovic, Jennifer; Epstein, Jonathan; Buchbinder, Rachelle; Welch, Vivian; Rader, Tamara; Lyddiatt, Anne; Clerehan, Rosemary; Christensen, Robin; Boonen, Annelies; Goel, Niti; Maxwell, Lara J; Toupin-April, Karine; De Wit, Maarten; Barton, Jennifer; Flurey, Caroline; Jull, Janet; Barnabe, Cheryl; Sreih, Antoine G; Campbell, Willemina; Pohl, Christoph; Duruöz, Mehmet Tuncay; Singh, Jasvinder A; Tugwell, Peter S; Guillemin, Francis

    2015-12-01

    The goal of the Outcome Measures in Rheumatology (OMERACT) 12 (2014) equity working group was to determine whether and how the comprehensibility of patient-reported outcome measures (PROM) should be assessed, to ensure suitability for people with low literacy and differing cultures. The English, Dutch, French, and Turkish Health Assessment Questionnaires and the English and French Osteoarthritis Knee and Hip Quality of Life questionnaires were evaluated by applying 3 readability formulas: Flesch Reading Ease, Flesch-Kincaid grade level, and Simple Measure of Gobbledygook; and a new tool, the Evaluative Linguistic Framework for Questionnaires, developed to assess the text quality of questionnaires. We also considered a study assessing cross-cultural adaptation with/without back-translation and/or expert committee. The results of this preconference work were presented to the equity working group participants to gain their perspectives on the importance of comprehensibility and cross-cultural adaptation for PROM. Thirty-one OMERACT delegates attended the equity session. Twenty-six participants agreed that PROM should be assessed for comprehensibility and that suitable methods should be used (4 abstained, 1 no). Twenty-two participants agreed that the cultural equivalency of PROM should be assessed and that suitable methods should be used (7 abstained, 2 no). Special interest group participants identified challenges with cross-cultural adaptation, including the resources required, and suggested patient involvement to improve translation and adaptation. Future work will include consensus exercises on what methods are required to ensure PROM are appropriate for people with low literacy and different cultures. PMID:26077410
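
    Of the three readability formulas applied, the Flesch Reading Ease is the simplest to reproduce: FRE = 206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words), with higher scores meaning easier text. The sketch below uses a rough vowel-group syllable heuristic, so its scores will differ slightly from dedicated readability tools.

```python
import re

def flesch_reading_ease(text):
    """Flesch Reading Ease with a crude syllable counter: each run of
    vowels counts as one syllable, with a trailing silent 'e' dropped."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)

    def syllables(word):
        groups = len(re.findall(r"[aeiouy]+", word.lower()))
        if word.lower().endswith("e") and groups > 1:
            groups -= 1  # silent final 'e'
        return max(1, groups)

    syl = sum(syllables(w) for w in words)
    return 206.835 - 1.015 * len(words) / sentences - 84.6 * syl / len(words)
```

    Short, monosyllabic questionnaire items score near the top of the scale, which is why such formulas flag dense clinical wording for low-literacy readers.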

  14. Quantitative analysis of the patellofemoral motion pattern using semi-automatic processing of 4D CT data.

    PubMed

    Forsberg, Daniel; Lindblom, Maria; Quick, Petter; Gauffin, Håkan

    2016-09-01

    The aim was to present a semi-automatic method with minimal user interaction for quantitative analysis of the patellofemoral motion pattern. 4D CT data capturing the patellofemoral motion pattern during continuous flexion and extension were collected for five patients prone to patellar luxation, both pre- and post-surgically. In the proposed method, an observer places landmarks in a single 3D volume, which are then automatically propagated to the other volumes in the time sequence. From the landmarks in each volume, the patellar displacement, the patellar tilt and the angle between femur and tibia are computed. Evaluation of observer variability showed the proposed semi-automatic method to be favorable over a fully manual counterpart, with an observer variability of approximately 1.5° for the angle between femur and tibia, 1.5 mm for the patellar displacement, and 4.0°-5.0° for the patellar tilt. The proposed method showed that surgery reduced the patellar displacement and tilt at maximum extension by approximately 10-15 mm and 15°-20° for three patients, with less evident differences for two of the patients. A semi-automatic method suitable for quantifying the patellofemoral motion pattern as captured by 4D CT data has been presented. Its observer variability is on par with that of other methods, but with the distinct advantage of supporting continuous motion during image acquisition.
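
    A landmark-derived measure such as the angle between femur and tibia reduces to the angle between two vectors meeting at a shared landmark. The sketch below is generic; the paper's exact landmark definitions are not given here, so the point names are placeholders.

```python
import numpy as np

def angle_deg(p_center, p_a, p_b):
    """Angle in degrees at p_center between landmark points p_a and p_b,
    the basic computation behind landmark-based joint angles."""
    u = np.asarray(p_a, float) - np.asarray(p_center, float)
    v = np.asarray(p_b, float) - np.asarray(p_center, float)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
```

    Applying this to the propagated landmarks in every time frame yields the angle-versus-flexion curves compared pre- and post-surgically.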

  15. Effects of 99mTc-TRODAT-1 drug template on image quantitative analysis

    PubMed Central

    Yang, Bang-Hung; Chou, Yuan-Hwa; Wang, Shyh-Jen; Chen, Jyh-Cheng

    2018-01-01

    99mTc-TRODAT-1 is a drug that can bind to dopamine transporters in living organisms and is often used in SPECT imaging to observe changes in dopamine activity uptake in the striatum. It is therefore currently widely used in studies on the clinical diagnosis of Parkinson's disease (PD) and movement-related disorders. In conventional 99mTc-TRODAT-1 SPECT image evaluation, visual inspection or manual selection of an ROI for semiquantitative analysis is mainly used to observe and evaluate the degree of striatal defects. However, these methods depend on the subjective opinions of observers, which leads to human error, and they suffer from long duration, increased effort and low reproducibility. To solve this problem, this study aimed to establish an automatic semiquantitative analytical method for 99mTc-TRODAT-1. This method combines three drug templates (one built-in SPECT template in the SPM software and two self-generated, MRI-based and HMPAO-based, TRODAT-1 templates) for the semiquantitative analysis of a striatal phantom and clinical images. At the same time, the results of automatic analysis with the three templates were compared with the results of a conventional manual analysis to examine the feasibility of automatic analysis and the effects of the drug templates on the automatic semiquantitative results. The comparison showed that the MRI-based TRODAT-1 template generated from MRI images is the most suitable template for 99mTc-TRODAT-1 automatic semiquantitative analysis. PMID:29543874

  16. Behavioral Ecology of Captive Species: Using Bibliographic Information to Assess Pet Suitability of Mammal Species

    PubMed Central

    Koene, Paul; de Mol, Rudi M.; Ipema, Bert

    2016-01-01

    Which mammal species are suitable to be kept as pets? To answer this question, many factors have to be considered. Animals have many adaptations to the natural environment in which they evolved, and these may cause adaptation problems and/or risks in captivity. Problems may be visible in behavior, welfare, health, and/or human–animal interaction, resulting, for example, in stereotypies, disease, and fear. A framework was developed in which bibliographic information on mammal species from the wild and captive environments is collected and assessed by three teams of animal scientists. One-liners from the literature about the behavioral ecology, health and welfare, and human–animal relationship of 90 mammal species were collected in a database by team 1, and the strength of behavioral needs and risks was assessed by team 2. Based on summaries of those strengths, the suitability of each mammal species was assessed by team 3. Stakeholder involvement in supplying bibliographic information and assessments was encouraged. Combining the individual and subjective assessments of the scientists using statistical methods makes the final rank ordering of the species' suitability as pets less biased and more objective. The framework is dynamic and produces an initial rank-ordered list of the pet suitability of 90 mammal species, methods to add new mammal species to the list or remove animals from it, and a method to incorporate stakeholder assessments. A model was developed that allows provisional classification of pet suitability. Periodic updates of the pet suitability framework are expected to produce an updated list with increased reliability and accuracy. Furthermore, the framework could be extended to assess the pet suitability of species from other animal groups, e.g., birds, reptiles, and amphibians. PMID:27243023

  17. A new airborne laser rangefinder dynamic target simulator for non-stationary environment

    NASA Astrophysics Data System (ADS)

    Ma, Pengge; Pang, Dongdong; Yi, Yang

    2017-11-01

For non-stationary environment simulation in laser range finder product testing, a new dynamic target simulation system is studied. First, the three-pulse laser ranging principle, the composition of the laser target signal, and its mathematical representation are introduced. Then, the actual non-stationary working environment of the laser range finder is analyzed, which shows that real sunshine background light clutter and the target shielding effect in the laser echo are the main influencing factors. After that, the dynamic laser target signal simulation method is given. Finally, the implementation of an automatic test system based on an arbitrary waveform generator is described. Practical application shows that the new echo signal automatic test system can simulate the real laser ranging environment of the laser range finder and is suitable for performance testing of products.

  18. Algorithm based on regional separation for automatic grain boundary extraction using improved mean shift method

    NASA Astrophysics Data System (ADS)

    Zhenying, Xu; Jiandong, Zhu; Qi, Zhang; Yamba, Philip

    2018-06-01

    Metallographic microscopy shows that the vast majority of metal materials are composed of many small grains; the grain size of a metal is important for determining the tensile strength, toughness, plasticity, and other mechanical properties. In order to quantitatively evaluate grain size in metals, grain boundaries must be identified in metallographic images. Based on the phenomenon of grain boundary blurring or disconnection in metallographic images, this study develops an algorithm based on regional separation for automatically extracting grain boundaries by an improved mean shift method. Experimental observation shows that the grain boundaries obtained by the proposed algorithm are highly complete and accurate. This research has practical value because the proposed algorithm is suitable for grain boundary extraction from most metallographic images.
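    The record above gives no implementation details, but the core idea of mean-shift-based boundary extraction can be illustrated with a minimal sketch: each pixel is repeatedly pulled toward the mean intensity of nearby pixels with similar gray levels, which flattens grain interiors while preserving the jumps at boundaries; the boundaries are then the remaining intensity discontinuities. The bandwidths `hs`/`hr` and the synthetic two-grain image are illustrative assumptions, not the paper's algorithm or data.

```python
import numpy as np

def mean_shift_filter(img, hs=2, hr=0.2, iters=5):
    """Edge-preserving smoothing: move each pixel toward the mean intensity
    of spatial neighbours (within hs) whose gray levels lie within hr."""
    out = img.astype(float).copy()
    H, W = img.shape
    for _ in range(iters):
        new = out.copy()
        for y in range(H):
            for x in range(W):
                y0, y1 = max(0, y - hs), min(H, y + hs + 1)
                x0, x1 = max(0, x - hs), min(W, x + hs + 1)
                patch = out[y0:y1, x0:x1]
                mask = np.abs(patch - out[y, x]) <= hr
                new[y, x] = patch[mask].mean()
        out = new
    return out

# Two synthetic "grains" separated by a blurred boundary band.
img = np.zeros((20, 20))
img[:, 10:] = 1.0
img[:, 9:11] = 0.5                       # blurred boundary pixels
smoothed = mean_shift_filter(img)
# Grain boundaries appear where the smoothed intensity still jumps.
edges = np.abs(np.diff(smoothed, axis=1)) > 0.3
```

    In a real metallographic pipeline this filtering step would be followed by the paper's regional separation and boundary completion stages.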

  19. Learning Petri net models of non-linear gene interactions.

    PubMed

    Mayo, Michael

    2005-10-01

    Understanding how an individual's genetic make-up influences their risk of disease is a problem of paramount importance. Although machine-learning techniques are able to uncover the relationships between genotype and disease, the problem of automatically building the best biochemical model or "explanation" of the relationship has received less attention. In this paper, I describe a method based on random hill climbing that automatically builds Petri net models of non-linear (or multi-factorial) disease-causing gene-gene interactions. Petri nets are a suitable formalism for this problem, because they are used to model concurrent, dynamic processes analogous to biochemical reaction networks. I show that this method is routinely able to identify perfect Petri net models for three disease-causing gene-gene interactions recently reported in the literature.
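    To make the Petri net formalism mentioned above concrete, here is a minimal token-firing sketch: places hold tokens, and a transition fires only when every input place has a token, consuming the inputs and producing the outputs. The place names (`g1`, `g2`, `risk`) and the single joint transition are hypothetical stand-ins for a gene-gene interaction, not the models learned in the paper.

```python
# Minimal Petri net: a transition is enabled when all its input places
# hold at least one token; firing consumes inputs and produces outputs.
class PetriNet:
    def __init__(self, marking, transitions):
        self.marking = dict(marking)        # place -> token count
        self.transitions = transitions      # list of (inputs, outputs)

    def enabled(self, t):
        ins, _ = self.transitions[t]
        return all(self.marking.get(p, 0) > 0 for p in ins)

    def fire(self, t):
        ins, outs = self.transitions[t]
        for p in ins:
            self.marking[p] -= 1
        for p in outs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Hypothetical non-linear interaction: "risk" requires BOTH gene variants.
net = PetriNet({"g1": 1, "g2": 1},
               [(["g1", "g2"], ["risk"])])
if net.enabled(0):
    net.fire(0)
```

    A hill-climbing learner, as described in the abstract, would repeatedly mutate such a net's structure and keep mutations that better reproduce observed genotype-risk data.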

  20. English as a Second Language for Adults: A Curriculum Guide.

    ERIC Educational Resources Information Center

    Selman, Mary; And Others

    To help improve English as a Second Language (ESL) programs for adult learners, this curriculum guide provides informative materials for the teacher and 30 sections of lessons suitable for adaptation by the teacher. Teacher information includes materials on language teaching and learning, use of the guide, needs assessment, adapting lesson plans,…

  1. Development and Validation of a Computer Adaptive EFL Test

    ERIC Educational Resources Information Center

    He, Lianzhen; Min, Shangchao

    2017-01-01

    The first aim of this study was to develop a computer adaptive EFL test (CALT) that assesses test takers' listening and reading proficiency in English with dichotomous items and polytomous testlets. We reported in detail on the development of the CALT, including item banking, determination of suitable item response theory (IRT) models for item…

  2. The Theory about CD-CAT Based on FCA and Its Application

    ERIC Educational Resources Information Center

    Shuqun, Yang; Shuliang, Ding; Zhiqiang, Yao

    2009-01-01

    Cognitive diagnosis (CD) plays an important role in intelligent tutoring system. Computerized adaptive testing (CAT) is adaptive, fair, and efficient, which is suitable to large-scale examination. Traditional cognitive diagnostic test needs quite large number of items, the efficient and tailored CAT could be a remedy for it, so the CAT with…

  3. Using the U.S. "Test of Financial Literacy" in Germany--Adaptation and Validation

    ERIC Educational Resources Information Center

    Förster, Manuel; Happ, Roland; Molerov, Dimitar

    2017-01-01

    In this article, the authors present the adaptation and validation processes conducted to render the American "Test of Financial Literacy" (TFL) suitable for use in Germany (TFL-G). First, they outline the translation procedure followed and the various cultural adjustments made in line with international standards. Next, they present…

  4. Dominant forest tree species are potentially vulnerable to climate change over large portions of their range even at high latitudes

    PubMed Central

    de Blois, Sylvie

    2016-01-01

    Projecting suitable conditions for a species as a function of future climate provides a reasonable, although admittedly imperfect, spatially explicit estimate of species vulnerability associated with climate change. Projections emphasizing range shifts at continental scale, however, can mask contrasting patterns at local or regional scale where management and policy decisions are made. Moreover, models usually show potential for areas to become climatically unsuitable, remain suitable, or become suitable for a particular species with climate change, but each of these outcomes raises markedly different ecological and management issues. Managing forest decline at sites where climatic stress is projected to increase is likely to be the most immediate challenge resulting from climate change. Here we assess habitat suitability with climate change for five dominant tree species of eastern North American forests, focusing on areas of greatest vulnerability (loss of suitability in the baseline range) in Quebec (Canada) rather than opportunities (increase in suitability). Results show that these species are at risk of maladaptation over a remarkably large proportion of their baseline range. Depending on species, 5–21% of currently climatically suitable habitats are projected to be at risk of becoming unsuitable. This suggests that species that have traditionally defined whole regional vegetation assemblages could become less adapted to these regions, with significant impact on ecosystems and forest economy. In spite of their well-recognised limitations and the uncertainty that remains, regionally-explicit risk assessment approaches remain one of the best options to convey that message and the need for climate policies and forest management adaptation strategies. PMID:27478706

  5. Dominant forest tree species are potentially vulnerable to climate change over large portions of their range even at high latitudes.

    PubMed

    Périé, Catherine; de Blois, Sylvie

    2016-01-01

    Projecting suitable conditions for a species as a function of future climate provides a reasonable, although admittedly imperfect, spatially explicit estimate of species vulnerability associated with climate change. Projections emphasizing range shifts at continental scale, however, can mask contrasting patterns at local or regional scale where management and policy decisions are made. Moreover, models usually show potential for areas to become climatically unsuitable, remain suitable, or become suitable for a particular species with climate change, but each of these outcomes raises markedly different ecological and management issues. Managing forest decline at sites where climatic stress is projected to increase is likely to be the most immediate challenge resulting from climate change. Here we assess habitat suitability with climate change for five dominant tree species of eastern North American forests, focusing on areas of greatest vulnerability (loss of suitability in the baseline range) in Quebec (Canada) rather than opportunities (increase in suitability). Results show that these species are at risk of maladaptation over a remarkably large proportion of their baseline range. Depending on species, 5-21% of currently climatically suitable habitats are projected to be at risk of becoming unsuitable. This suggests that species that have traditionally defined whole regional vegetation assemblages could become less adapted to these regions, with significant impact on ecosystems and forest economy. In spite of their well-recognised limitations and the uncertainty that remains, regionally-explicit risk assessment approaches remain one of the best options to convey that message and the need for climate policies and forest management adaptation strategies.

  6. Adaptive Time Stepping for Transient Network Flow Simulation in Rocket Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok K.; Ravindran, S. S.

    2017-01-01

Fluid and thermal transients found in rocket propulsion systems, such as propellant feedline systems, are complex processes involving fast phases followed by slow phases. Their time-accurate computation therefore requires the use of a short time step initially, followed by a much larger time step. Yet there are instances that involve fast-slow-fast phases. In this paper, we present a feedback-control-based adaptive time stepping algorithm and discuss its use in network flow simulation of fluid and thermal transients. The time step is automatically controlled during the simulation by monitoring changes in certain key variables and by feedback. In order to demonstrate the viability of time adaptivity for engineering problems, we applied it to simulate water hammer and cryogenic chilldown in pipelines. Our comparison and validation demonstrate the accuracy and efficiency of this adaptive strategy.
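    The feedback idea described above can be sketched in a few lines: an error estimate is compared against a tolerance, and the step size is shrunk during fast transients and grown (with a cap) during slow phases. The specific controller exponent, safety factor, and growth limits below are standard textbook choices, not the paper's algorithm.

```python
# Elementary feedback controller for the time step: shrink when the local
# error estimate exceeds the tolerance, grow (capped) when it is well below.
def adapt_dt(dt, err, tol, order=1, grow=2.0, shrink=0.1, safety=0.9):
    factor = safety * (tol / err) ** (1.0 / (order + 1)) if err > 0 else grow
    return dt * min(grow, max(shrink, factor))

# Fast transient (large error) forces a small step; slow phase lets it grow.
dt = 1e-3
dt_fast = adapt_dt(dt, err=1e-2, tol=1e-4)   # error 100x too big -> shrink
dt_slow = adapt_dt(dt, err=1e-6, tol=1e-4)   # error well below tol -> grow
```

    Applied each step, such a controller naturally handles the fast-slow-fast sequences mentioned in the abstract, since the step size simply tracks the error estimate.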

  7. Dynamic Testing and Automatic Repair of Reconfigurable Wiring Harnesses

    DTIC Science & Technology

    2006-11-27

Switch: An M×N grid of switches configured to provide an M-input, N-output routing network. Permutation Network: A permutation network performs an...wiring reduces the effective advantage of their reduced switch count, particularly when considering that regular grids (crossbar switches being a...are connected to. The outline circuit shown in Fig. 20 shows how a suitable 'discovery probe' might be implemented. The circuit shows a UART

  8. Miniaturization of the Clonogenic Assay Using Confluence Measurement

    PubMed Central

    Mayr, Christian; Beyreis, Marlena; Dobias, Heidemarie; Gaisberger, Martin; Pichler, Martin; Ritter, Markus; Jakab, Martin; Neureiter, Daniel; Kiesslich, Tobias

    2018-01-01

    The clonogenic assay is a widely used method to study the ability of cells to ‘infinitely’ produce progeny and is, therefore, used as a tool in tumor biology to measure tumor-initiating capacity and stem cell status. However, the standard protocol of using 6-well plates has several disadvantages. By miniaturizing the assay to a 96-well microplate format, as well as by utilizing the confluence detection function of a multimode reader, we here describe a new and modified protocol that allows comprehensive experimental setups and a non-endpoint, label-free semi-automatic analysis. Comparison of bright field images with confluence images demonstrated robust and reproducible detection of clones by the confluence detection function. Moreover, time-resolved non-endpoint confluence measurement of the same well showed that semi-automatic analysis was suitable for determining the mean size and colony number. By treating cells with an inhibitor of clonogenic growth (PTC-209), we show that our modified protocol is suitable for comprehensive (broad concentration range, addition of technical replicates) concentration- and time-resolved analysis of the effect of substances or treatments on clonogenic growth. In summary, this protocol represents a time- and cost-effective alternative to the commonly used 6-well protocol (with endpoint staining) and also provides additional information about the kinetics of clonogenic growth. PMID:29510509

  9. Mobile-Dose: A Dose-Meter Designed for Use in Automatic Machineries for Dose Manipulation in Nuclear Medicine

    NASA Astrophysics Data System (ADS)

    de Asmundis, Riccardo; Boiano, Alfonso; Ramaglia, Antonio

    2008-06-01

    Mobile-Dose has been designed for a very innovative use: integration in a robotic machine for the automatic preparation of radioactive doses to be injected into patients in Nuclear Medicine Departments, with real-time measurement of the activity under preparation. Mobile-Dose gives a constant measurement of the dose during the filling of vials or syringes, triggering the end of the filling process based on a predefined dose limit. Several applications of Mobile-Dose have been delivered worldwide, from Italian hospitals and clinics to European and Japanese ones. The design of such an instrument and its integration in robotic machinery was requested by an Italian company specialised in radiation protection tools for nuclear applications in the period 2001-2003. At the time of its design, no commercial instruments with a suitable interfacing capability to the external world appeared to exist; we designed it to satisfy all the strict requirements arising from the medical aspects (precision within 10%, repeatability, stability, time response) and from the industrial design principles that are mandatory to ensure good reliability in such a complicated environment. The instrument is also suitable for standalone use, thanks to its portability and compactness and to the intelligent operator panel programmed for this purpose.

  10. Detection technology research on the one-way clutch of automatic brake adjuster

    NASA Astrophysics Data System (ADS)

    Jiang, Wensong; Luo, Zai; Lu, Yi

    2013-10-01

    In this article, we provide a new testing method to evaluate the acceptable quality of the one-way clutch of an automatic brake adjuster. To analyze the suitable adjusting brake moment that keeps the automatic brake adjuster from failing, we build a mechanical model of the one-way clutch according to its structure and working principle. The ranges of the adjusting brake moment, both clockwise and anti-clockwise, can be calculated through the mechanical model of the one-way clutch. Its critical moments are taken as the ideal values of the adjusting brake moment to evaluate the acceptable quality of the one-way clutch of the automatic brake adjuster. We calculate the ideal values of the critical moment for different one-way clutch structures based on the mechanical model before the adjusting brake moment test begins. In addition, an experimental apparatus, with a measurement uncertainty of ±0.1 N·m, is specially designed to test the adjusting brake moment both clockwise and anti-clockwise. We can then judge the acceptable quality of the one-way clutch of an automatic brake adjuster by comparing the test results with the ideal values instead of with the EXP. In fact, the evaluation standard for the adjusting brake moment currently applied in China still uses the EXP provided by the manufacturer, but this becomes unavailable when the material of the one-way clutch changes. Five kinds of automatic brake adjusters are used in a verification experiment to confirm the accuracy of the test method. The experimental results show that the experimental values of the adjusting brake moment, both clockwise and anti-clockwise, are within the ranges of the theoretical results. The testing method provided in this article fully meets the requirements of the manufacturer's standard.

  11. Application of automatic threshold in dynamic target recognition with low contrast

    NASA Astrophysics Data System (ADS)

    Miao, Hua; Guo, Xiaoming; Chen, Yu

    2014-11-01

    Hybrid photoelectric joint transform correlators can realize automatic real-time recognition with high precision through the combination of optical and electronic devices. When recognizing targets with low contrast using a photoelectric joint transform correlator, because of differences in attitude, brightness, and grayscale between target and template, only four to five frames of a dynamic target can be recognized without any processing. A CCD camera is used to capture the dynamic target images at a speed of 25 frames per second. Automatic thresholding has many advantages, such as fast processing speed, effective shielding of noise interference, enhancement of the diffraction energy of useful information, and better preservation of the outlines of target and template, so this method plays a very important role in target recognition with the optical correlation method. However, the threshold obtained automatically by the program cannot achieve the best recognition results for dynamic targets, because the outline information is broken to some extent; in most cases the optimal threshold is obtained by manual intervention. Aiming at the characteristics of dynamic targets, an improved automatic threshold procedure is implemented by multiplying the Otsu threshold of target and template by a scale coefficient of the processed image and combining it with mathematical morphology. The optimal threshold can then be obtained automatically for dynamic low-contrast target images. The recognition rate of dynamic targets is improved through reduced background noise and increased correlation information. A series of dynamic tank images moving at about 70 km/h is adopted as target images. Without any processing, the 1st frame of this series can correlate only with the 3rd frame. With the Otsu threshold, the 80th frame can be recognized; with the improved automatic threshold processing of the joint images, this number increases to 89 frames. Experimental results show that the improved automatic threshold processing has particular application value for the recognition of dynamic targets with low contrast.
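    The Otsu threshold that the abstract builds on is itself easy to compute from an image histogram: choose the gray level that maximizes the between-class variance of the resulting foreground/background split. The sketch below, in NumPy, also applies an illustrative scale coefficient of 0.8 as an example of the paper's modification; the coefficient value and the synthetic bimodal image are assumptions.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: pick the gray level maximising between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                      # class-0 probability
    mu = np.cumsum(p * np.arange(256))        # class-0 cumulative mean
    mu_t = mu[-1]                             # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return int(np.argmax(sigma_b))

# Bimodal test image: dark background (40), bright target (200).
img = np.concatenate([np.full(500, 40), np.full(500, 200)]).astype(np.uint8)
t = otsu_threshold(img)
scaled_t = int(0.8 * t)   # a scale coefficient < 1 keeps more faint outline pixels
```

    Lowering the threshold this way is one plausible reading of why the scaled variant preserves broken outline information better than plain Otsu on low-contrast frames.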

  12. Quantification of regional fat volume in rat MRI

    NASA Astrophysics Data System (ADS)

    Sacha, Jaroslaw P.; Cockman, Michael D.; Dufresne, Thomas E.; Trokhan, Darren

    2003-05-01

    Multiple initiatives in the pharmaceutical and beauty care industries are directed at identifying therapies for weight management. Body composition measurements are critical for such initiatives. Imaging technologies that can be used to measure body composition noninvasively include DXA (dual energy x-ray absorptiometry) and MRI (magnetic resonance imaging). Unlike other approaches, MRI provides the ability to perform localized measurements of fat distribution. Several factors complicate the automatic delineation of fat regions and quantification of fat volumes. These include motion artifacts, field non-uniformity, brightness and contrast variations, chemical shift misregistration, and ambiguity in delineating anatomical structures. We have developed an approach to deal practically with those challenges. The approach is implemented in a package, the Fat Volume Tool, for automatic detection of fat tissue in MR images of the rat abdomen, including automatic discrimination between abdominal and subcutaneous regions. We suppress motion artifacts using masking based on detection of implicit landmarks in the images. Adaptive object extraction is used to compensate for intensity variations. This approach enables us to perform fat tissue detection and quantification in a fully automated manner. The package can also operate in manual mode, which can be used for verification of the automatic analysis or for performing supervised segmentation. In supervised segmentation, the operator has the ability to interact with the automatic segmentation procedures to touch-up or completely overwrite intermediate segmentation steps. The operator's interventions steer the automatic segmentation steps that follow. This improves the efficiency and quality of the final segmentation. Semi-automatic segmentation tools (interactive region growing, live-wire, etc.) improve both the accuracy and throughput of the operator when working in manual mode. 
The quality of automatic segmentation has been evaluated by comparing the results of fully automated analysis to manual analysis of the same images. The comparison shows a high degree of correlation that validates the quality of the automatic segmentation approach.

  13. SU-F-J-194: Development of Dose-Based Image Guided Proton Therapy Workflow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pham, R; Sun, B; Zhao, T

    Purpose: To implement image-guided proton therapy (IGPT) based on daily proton dose distribution. Methods: Unlike x-ray therapy, simple alignment based on anatomy cannot ensure proper dose coverage in proton therapy. Anatomy changes along the beam path may lead to underdosing the target, or overdosing the organ-at-risk (OAR). With an in-room mobile computed tomography (CT) system, we are developing a dose-based IGPT software tool that allows patient positioning and treatment adaptation based on daily dose distributions. During an IGPT treatment, daily CT images are acquired in treatment position. After initial positioning based on rigid image registration, the proton dose distribution is calculated on the daily CT images. The target and OARs are automatically delineated via deformable image registration. Dose distributions are evaluated to decide if repositioning or plan adaptation is necessary in order to achieve proper coverage of the target and sparing of OARs. Besides online dose-based image guidance, the software tool can also map daily treatment doses to the treatment planning CT images for offline adaptive treatment. Results: An in-room helical CT system is commissioned for IGPT purposes. It produces accurate CT numbers that allow proton dose calculation. GPU-based deformable image registration algorithms are developed and evaluated for automatic ROI delineation and dose mapping. The online and offline IGPT functionalities are evaluated with daily CT images of the proton patients. Conclusion: The online and offline IGPT software tool may improve the safety and quality of proton treatment by allowing dose-based IGPT and adaptive proton treatments. Research is partially supported by Mevion Medical Systems.

  14. A New Semi-Automatic Approach to Find Suitable Virtual Electrodes in Arrays Using an Interpolation Strategy.

    PubMed

    Salchow, Christina; Valtin, Markus; Seel, Thomas; Schauer, Thomas

    2016-06-13

    Functional Electrical Stimulation via electrode arrays enables the user to form virtual electrodes (VEs) of dynamic shape, size, and position. We developed a feedback-control-assisted manual search strategy which allows the therapist to conveniently and continuously modify VEs to find a good stimulation area. This works for applications in which the desired movement consists of at least two degrees of freedom. The virtual electrode can be moved to arbitrary locations within the array, and each involved element is stimulated with an individual intensity. Meanwhile, the applied global stimulation intensity is controlled automatically to meet a predefined angle for one degree of freedom. This enables the therapist to concentrate on the remaining degree(s) of freedom while changing the VE position. This feedback-control-assisted approach aims to integrate the user's opinion and the patient's sensation. Therefore, our method bridges the gap between manual search and fully automatic identification procedures for array electrodes. Measurements in four healthy volunteers were performed to demonstrate the usefulness of our concept, using a 24-element array to generate wrist and hand extension.

  15. An adaptive Hidden Markov Model for activity recognition based on a wearable multi-sensor device

    USDA-ARS?s Scientific Manuscript database

    Human activity recognition is important in the study of personal health, wellness and lifestyle. In order to acquire human activity information from the personal space, many wearable multi-sensor devices have been developed. In this paper, a novel technique for automatic activity recognition based o...

  16. CASE: A Configurable Argumentation Support Engine

    ERIC Educational Resources Information Center

    Scheuer, O.; McLaren, B. M.

    2013-01-01

    One of the main challenges in tapping the full potential of modern educational software is to devise mechanisms to automatically analyze and adaptively support students' problem solving and learning. A number of such approaches have been developed to teach argumentation skills in domains as diverse as science, the Law, and ethics. Yet,…

  17. Social Capital Practices as Adaptive Drivers for Local Adjustment of New Public Management in Schools

    ERIC Educational Resources Information Center

    Olesen, Kristian Gylling; Hasle, Peter; Sørensen, Ole H.

    2016-01-01

    New public management (NPM) reforms have typically undermined teachers' autonomy, values, and status in society. This article questions whether such reforms automatically have these outcomes or whether and how possibilities for local adjustment of such reforms may prevent negative outcomes. Drawing on empirical case studies from two Danish…

  18. Innovations in e-Business: Can Government Contracting be Adapted to Use Crowdsourcing and Open Innovation?

    DTIC Science & Technology

    2010-09-01

    cards and software in almost any computer can communicate with each other seamlessly. The cable modem protocol is another example of competing...streaming: it means that special client software applications known as podcatchers (such as Apple Inc.’s iTunes or Nullsoft’s Winamp) can automatically

  19. [Effective implementation of change into routine work. Thinking over ways and means of a learning experience in cardiology].

    PubMed

    Angelino, Elisabetta

    2014-03-01

    Effective implementation of change in patient care is a substantive problem. Organizational learning is viewed as a process of seeking, selecting, and adapting new "routines" to improve performance, but learning from experience is not automatic; rather, it may result from action and reflection within the organization.

  20. A Study of Adaptive Relevance Feedback - UIUC TREC-2008 Relevance Feedback Experiments

    DTIC Science & Technology

    2008-11-01

    terms. Journal of the American Society for Information Science, 27(3):129–146, 1976. [7] J. J. Rocchio. Relevance feedback in information retrieval. In...In The SMART Retrieval System: Experiments in Automatic Document Processing, pages 313–323. Prentice-Hall Inc., 1971. [8] Gerard Salton and Chris

  1. A Conversational Intelligent Tutoring System to Automatically Predict Learning Styles

    ERIC Educational Resources Information Center

    Latham, Annabel; Crockett, Keeley; McLean, David; Edmonds, Bruce

    2012-01-01

    This paper proposes a generic methodology and architecture for developing a novel conversational intelligent tutoring system (CITS) called Oscar that leads a tutoring conversation and dynamically predicts and adapts to a student's learning style. Oscar aims to mimic a human tutor by implicitly modelling the learning style during tutoring, and…

  2. Postural perturbations: new insights for treatment of balance disorders

    NASA Technical Reports Server (NTRS)

    Horak, F. B.; Henry, S. M.; Shumway-Cook, A.; Peterson, B. W. (Principal Investigator)

    1997-01-01

    This article reviews the neural control of posture as understood through studies of automatic responses to mechanical perturbations. Recent studies of responses to postural perturbations have provided a new view of how postural stability is controlled, and this view has profound implications for physical therapy practice. We discuss the implications for rehabilitation of balance disorders and demonstrate how an understanding of the specific systems underlying postural control can help to focus and enrich our therapeutic approaches. By understanding the basic systems underlying control of balance, such as strategy selection, rapid latencies, coordinated temporal spatial patterns, force control, and context-specific adaptations, therapists can focus their treatment on each patient's specific impairments. Research on postural responses to surface translations has shown that balance is not based on a fixed set of equilibrium reflexes but on a flexible, functional motor skill that can adapt with training and experience. More research is needed to determine the extent to which quantification of automatic postural responses has practical implications for predicting falls in patients with constraints in their postural control system.

  3. Adaptive optimization of reference intensity for optical coherence imaging using galvanometric mirror tilting method

    NASA Astrophysics Data System (ADS)

    Kim, Ji-hyun; Han, Jae-Ho; Jeong, Jichai

    2015-09-01

    Integration time and reference intensity are important factors in achieving a high signal-to-noise ratio (SNR) and sensitivity in optical coherence tomography (OCT). In this context, we present an adaptive optimization method for the reference intensity in an OCT setup. The reference intensity is automatically controlled by tilting the beam position using a galvanometric scanning mirror system. Before sample scanning, the OCT system acquires a two-dimensional intensity map with normalized intensities and color-space variables using false-color mapping. The system then increases or decreases the reference intensity following the map data, optimizing it with a given algorithm. In our experiments, the proposed method successfully corrected the reference intensity while maintaining the spectral shape, allowed the integration time to be changed without manual recalibration of the reference intensity, and prevented image degradation due to over-saturation or insufficient reference intensity. SNR and sensitivity could also be improved by increasing the integration time with automatic adjustment of the reference intensity. We believe that our findings can significantly aid the optimization of SNR and sensitivity in optical coherence tomography systems.

  4. Supporting Teachers to Automatically Build Accessible Pedagogical Resources: The APEINTA Project

    NASA Astrophysics Data System (ADS)

    Iglesias, Ana; Moreno, Lourdes; Jiménez, Javier

    Most universities in Europe have started adapting towards a common educational space in accordance with the European Higher Education Area (EHEA). The social dimension of the Bologna Process is a constituent part of the EHEA and a necessary condition for its attractiveness and competitiveness. Two of the main features of the social dimension are equal access for all students and lifelong learning. One of the main problems of the adaptation process to the EHEA is that teachers have no previous references and models for developing new pedagogical experiences accessible to all students, regardless of their abilities, capabilities, or accessibility needs. The APEINTA project presented in this paper can be used as a helpful tool for teachers to cope with the teaching demands of the EHEA, helping them to automatically build accessible pedagogical resources even when they are not accessibility experts. This educational project was successfully used in 2009 in two different degrees at the Carlos III University of Madrid: Computer Science, and Library and Information Science.

  5. Automatic analysis of diabetic peripheral neuropathy using multi-scale quantitative morphology of nerve fibres in corneal confocal microscopy imaging.

    PubMed

    Dabbah, M A; Graham, J; Petropoulos, I N; Tavakoli, M; Malik, R A

    2011-10-01

    Diabetic peripheral neuropathy (DPN) is one of the most common long-term complications of diabetes. Corneal confocal microscopy (CCM) image analysis is a novel non-invasive technique which quantifies corneal nerve fibre damage and enables diagnosis of DPN. This paper presents an automatic analysis and classification system for detecting nerve fibres in CCM images based on a multi-scale adaptive dual-model detection algorithm. The algorithm exploits the curvilinear structure of the nerve fibres and adapts itself to the local image information. Detected nerve fibres are then quantified and used as feature vectors for classification using random forest (RF) and neural networks (NNT) classifiers. We show, in a comparative study with other well known curvilinear detectors, that the best performance is achieved by the multi-scale dual model in conjunction with the NNT classifier. An evaluation of clinical effectiveness shows that the performance of the automated system matches that of ground-truth defined by expert manual annotation. Copyright © 2011 Elsevier B.V. All rights reserved.
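    As a rough, one-dimensional illustration of multi-scale curvilinear detection (not the authors' dual-model algorithm, which operates on 2-D CCM images), one can take the per-pixel maximum of a curvature-based ridge response over several smoothing scales:

    ```python
    def ridge_response(profile, scale):
        """Negative second difference of a box-smoothed profile: bright,
        narrow structures (a fibre seen in cross-section) respond strongly."""
        n = len(profile)
        sm = []
        for i in range(n):
            lo, hi = max(0, i - scale), min(n, i + scale + 1)
            sm.append(sum(profile[lo:hi]) / (hi - lo))
        # ridges have strongly negative curvature, so negate it
        return [0.0 if i in (0, n - 1)
                else -(sm[i - 1] - 2 * sm[i] + sm[i + 1])
                for i in range(n)]

    def multiscale_detect(profile, scales=(1, 2, 3), thresh=0.5):
        """Per-pixel maximum response over scales, then a fixed threshold --
        the multi-scale adaptation idea only, in sketch form."""
        responses = [ridge_response(profile, s) for s in scales]
        best = [max(r[i] for r in responses) for i in range(len(profile))]
        return [i for i, v in enumerate(best) if v > thresh]
    ```

    The real detector additionally adapts a local fibre model to orientation and contrast; this sketch only shows why responses from several scales are combined.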

  6. A Joint Time-Frequency and Matrix Decomposition Feature Extraction Methodology for Pathological Voice Classification

    NASA Astrophysics Data System (ADS)

    Ghoraani, Behnaz; Krishnan, Sridhar

    2009-12-01

    The number of people affected by speech problems is increasing as the modern world places increasing demands on the human voice via mobile telephones, voice recognition software, and interpersonal verbal communications. In this paper, we propose a novel methodology for automatic pattern classification of pathological voices. The main contribution of this paper is the extraction of meaningful and unique features using an adaptive time-frequency distribution (TFD) and nonnegative matrix factorization (NMF). We construct the adaptive TFD as an effective signal analysis domain to dynamically track the nonstationarity in the speech, and utilize NMF as a matrix decomposition (MD) technique to quantify the constructed TFD. The proposed method extracts meaningful and unique features from the joint TFD of the speech, and automatically identifies and measures the abnormality of the signal. Depending on its abnormality measure, each signal is classified as normal or pathological. The proposed method is applied to the Massachusetts Eye and Ear Infirmary (MEEI) voice disorders database, which consists of 161 pathological and 51 normal speakers, achieving an overall classification accuracy of 98.6%.
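    The matrix-decomposition stage can be illustrated with a generic Lee-Seung multiplicative-update NMF; the adaptive TFD construction is omitted, and this plain factorization is only a stand-in for the feature-extraction step described in the abstract:

    ```python
    import random

    def nmf(V, r=2, iters=300, seed=0):
        """Factor a nonnegative matrix V ~ W @ H (Frobenius objective)
        via Lee-Seung multiplicative updates. Generic sketch only, not
        the authors' implementation."""
        rnd = random.Random(seed)
        m, n = len(V), len(V[0])
        W = [[rnd.random() + 0.1 for _ in range(r)] for _ in range(m)]
        H = [[rnd.random() + 0.1 for _ in range(n)] for _ in range(r)]

        def matmul(A, B):
            return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
                     for j in range(len(B[0]))] for i in range(len(A))]

        def T(A):
            return [list(row) for row in zip(*A)]

        eps = 1e-9
        for _ in range(iters):
            WH = matmul(W, H)
            # H <- H * (W^T V) / (W^T W H)
            WtV, WtWH = matmul(T(W), V), matmul(T(W), WH)
            H = [[H[a][j] * WtV[a][j] / (WtWH[a][j] + eps)
                  for j in range(n)] for a in range(r)]
            WH = matmul(W, H)
            # W <- W * (V H^T) / (W H H^T)
            VHt, WHHt = matmul(V, T(H)), matmul(WH, T(H))
            W = [[W[i][a] * VHt[i][a] / (WHHt[i][a] + eps)
                  for a in range(r)] for i in range(m)]
        return W, H
    ```

    In the paper's pipeline, V would be the (nonnegative) TFD of a voice signal, and the columns of W / rows of H would serve as the base and coefficient features fed to the abnormality measure.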

  7. Motion-adaptive model-assisted compatible coding with spatiotemporal scalability

    NASA Astrophysics Data System (ADS)

    Lee, JaeBeom; Eleftheriadis, Alexandros

    1997-01-01

    We introduce the concept of motion-adaptive spatio-temporal model-assisted compatible (MA-STMAC) coding, a technique to selectively encode areas of different importance to the human eye, in terms of space and time, in moving images while taking object motion into account. The earlier STMAC approach was proposed based on the fact that human 'eye contact' and 'lip synchronization' are very important in person-to-person communication. Several areas, including the eyes and lips, need different types of quality, since different areas have different perceptual significance to human observers. The approach provides a better rate-distortion tradeoff than conventional image coding techniques based on MPEG-1, MPEG-2, H.261, as well as H.263. STMAC coding is applied on top of an encoder, taking full advantage of its core design. Model motion tracking in our previous STMAC approach was not automatic. The proposed MA-STMAC coding considers the motion of the human face within the STMAC concept using automatic area detection. Experimental results are given using ITU-T H.263, addressing very low bit-rate compression.

  8. Adaptive optics retinal imaging with automatic detection of the pupil and its boundary in real time using Shack-Hartmann images.

    PubMed

    de Castro, Alberto; Sawides, Lucie; Qi, Xiaofeng; Burns, Stephen A

    2017-08-20

    Retinal imaging with an adaptive optics (AO) system usually requires that the eye be centered and stable relative to the exit pupil of the system. Aberrations are then typically corrected inside a fixed circular pupil. This approach can be restrictive when imaging some subjects, since the pupil may not be round and maintaining a stable head position can be difficult. In this paper, we present an automatic algorithm that relaxes these constraints. An image quality metric is computed for each spot of the Shack-Hartmann image to detect the pupil and its boundary, and the control algorithm is applied only to regions within the subject's pupil. Images on a model eye as well as for five subjects were obtained to show that a system exit pupil larger than the subject's eye pupil could be used for AO retinal imaging without a reduction in image quality. This algorithm automates the task of selecting pupil size. It also may relax constraints on centering the subject's pupil and on the shape of the pupil.
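    A minimal sketch of the pupil-detection idea: classify each Shack-Hartmann lenslet from a per-spot quality metric and restrict the control loop to lenslets inside the pupil. The metric and relative threshold below are illustrative assumptions, not the paper's algorithm:

    ```python
    def pupil_mask(spot_quality, rel_thresh=0.3):
        """Mark a lenslet as inside the pupil when its spot-quality metric
        exceeds a fraction of the brightest spot (metric and threshold are
        illustrative assumptions)."""
        peak = max(max(row) for row in spot_quality)
        cut = rel_thresh * peak
        return [[q >= cut for q in row] for q_row in [None] or []
                for row in spot_quality]

    def active_lenslets(mask):
        """Lenslet coordinates the AO control loop should actually use --
        everything outside the detected pupil is ignored."""
        return [(i, j) for i, row in enumerate(mask)
                for j, inside in enumerate(row) if inside]
    ```

    Recomputing the mask every frame is what lets the system track a non-circular or moving pupil instead of assuming a fixed circular exit pupil.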

  9. Balancing Automatic-Controlled Behaviors and Emotional-Salience States: A Dynamic Executive Functioning Hypothesis.

    PubMed

    Kluwe-Schiavon, Bruno; Viola, Thiago W; Sanvicente-Vieira, Breno; Malloy-Diniz, Leandro F; Grassi-Oliveira, Rodrigo

    2016-01-01

    Recently, there has been growing interest in understanding how executive functions are conceptualized in psychopathology. Since several models have been proposed, the major issue lies within the definition of executive functioning itself. Theoretical discussions have emerged, narrowing the boundaries between "hot" and "cold" executive functions or between self-regulation and cognitive control. Nevertheless, the definition of executive functions is far from a consensual proposition and it has been suggested that these models might be outdated. Current efforts indicate that human behavior and cognition are by-products of many brain systems operating and interacting at different levels, and therefore, it is very simplistic to assume a dualistic perspective of information processing. Based upon an adaptive perspective, we discuss how executive functions could emerge from the ability to solve immediate problems and to generalize successful strategies, as well as from the ability to synthesize and to classify environmental information in order to predict context and future. We present an executive functioning perspective that emerges from the dynamic balance between automatic-controlled behaviors and an emotional-salience state. According to our perspective, the adaptive role of executive functioning is to automatize efficient solutions simultaneously with cognitive demand, enabling individuals to engage such processes with increasingly complex problems. Understanding executive functioning as a mediator of stress and cognitive engagement not only fosters discussions concerning individual differences, but also offers an important paradigm to understand executive functioning as a continuum process rather than a categorical and multicomponent structure.

  10. Balancing Automatic-Controlled Behaviors and Emotional-Salience States: A Dynamic Executive Functioning Hypothesis

    PubMed Central

    Kluwe-Schiavon, Bruno; Viola, Thiago W.; Sanvicente-Vieira, Breno; Malloy-Diniz, Leandro F.; Grassi-Oliveira, Rodrigo

    2017-01-01

    Recently, there has been growing interest in understanding how executive functions are conceptualized in psychopathology. Since several models have been proposed, the major issue lies within the definition of executive functioning itself. Theoretical discussions have emerged, narrowing the boundaries between “hot” and “cold” executive functions or between self-regulation and cognitive control. Nevertheless, the definition of executive functions is far from a consensual proposition and it has been suggested that these models might be outdated. Current efforts indicate that human behavior and cognition are by-products of many brain systems operating and interacting at different levels, and therefore, it is very simplistic to assume a dualistic perspective of information processing. Based upon an adaptive perspective, we discuss how executive functions could emerge from the ability to solve immediate problems and to generalize successful strategies, as well as from the ability to synthesize and to classify environmental information in order to predict context and future. We present an executive functioning perspective that emerges from the dynamic balance between automatic-controlled behaviors and an emotional-salience state. According to our perspective, the adaptive role of executive functioning is to automatize efficient solutions simultaneously with cognitive demand, enabling individuals to engage such processes with increasingly complex problems. Understanding executive functioning as a mediator of stress and cognitive engagement not only fosters discussions concerning individual differences, but also offers an important paradigm to understand executive functioning as a continuum process rather than a categorical and multicomponent structure. PMID:28154541

  11. Toward cognitive pipelines of medical assistance algorithms.

    PubMed

    Philipp, Patrick; Maleshkova, Maria; Katic, Darko; Weber, Christian; Götz, Michael; Rettinger, Achim; Speidel, Stefanie; Kämpgen, Benedikt; Nolden, Marco; Wekerle, Anna-Laura; Dillmann, Rüdiger; Kenngott, Hannes; Müller, Beat; Studer, Rudi

    2016-09-01

    Assistance algorithms for medical tasks have great potential to support physicians in their daily work. However, medicine is also one of the most demanding domains for computer-based support systems, since medical assistance tasks are complex and the practical experience of the physician is crucial. Recent developments in the area of cognitive computing appear to be well suited to tackle medicine as an application domain. We propose a system based on the idea of cognitive computing, consisting of auto-configurable medical assistance algorithms and their self-adapting combination. The system enables automatic execution of new algorithms, provided they are made available as Medical Cognitive Apps and are registered in a central semantic repository. Learning components can be added to the system to optimize the results in cases where numerous Medical Cognitive Apps are available for the same task. Our prototypical implementation is applied to the areas of surgical phase recognition based on sensor data and image processing for tumor progression mappings. Our results suggest that such assistance algorithms can be automatically configured into execution pipelines, candidate results can be automatically scored and combined, and the system can learn from experience. Furthermore, our evaluation shows that the Medical Cognitive Apps provide the same correct results as local execution and run in a reasonable amount of time. The proposed solution is applicable to a variety of medical use cases and effectively supports the automated and self-adaptive configuration of cognitive pipelines based on medical interpretation algorithms.
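    The score-and-combine step for competing apps might look as follows; the registry and scorer interfaces here are illustrative assumptions, not the system's actual API:

    ```python
    def run_task(task, registry, scorer):
        """Run every app registered for the task's type, score each
        candidate result, and return the best-scoring (app, result) pair.
        registry: {task_type: [(app_name, callable), ...]} (assumed shape)
        scorer:   callable(task, result) -> float (assumed interface)
        """
        apps = registry.get(task["type"], [])
        if not apps:
            raise LookupError("no app registered for task type %r" % task["type"])
        candidates = [(name, app(task)) for name, app in apps]
        return max(candidates, key=lambda pair: scorer(task, pair[1]))
    ```

    A learning component would correspond to updating `scorer` from experience, so that the dispatcher's choice among apps improves over time.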

  12. Research in digital adaptive flight controllers

    NASA Technical Reports Server (NTRS)

    Kaufman, H.

    1976-01-01

    A design study of adaptive control logic suitable for implementation in modern airborne digital flight computers was conducted. Both explicit controllers, which directly utilize parameter identification, and implicit controllers, which do not require identification, were considered. Extensive analytical and simulation efforts resulted in the recommendation of two explicit digital adaptive flight controllers. Weighted least squares estimation procedures were interfaced with control logic developed using either optimal regulator theory or single-stage performance indices.
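    A generic sketch of the parameter-identification step an explicit adaptive controller relies on, here exponentially weighted recursive least squares (the regressor model and forgetting factor are generic choices, not the report's):

    ```python
    def recursive_least_squares(samples, lam=0.98):
        """Exponentially weighted recursive least squares.
        samples: list of (phi, y) pairs, phi a regressor list.
        Returns the parameter estimate theta minimizing the weighted
        squared prediction error."""
        n = len(samples[0][0])
        theta = [0.0] * n
        # large initial covariance: weak prior on the parameters
        P = [[(1e4 if i == j else 0.0) for j in range(n)] for i in range(n)]
        for phi, y in samples:
            Pphi = [sum(P[i][j] * phi[j] for j in range(n)) for i in range(n)]
            denom = lam + sum(phi[i] * Pphi[i] for i in range(n))
            K = [p / denom for p in Pphi]          # gain vector
            err = y - sum(theta[i] * phi[i] for i in range(n))
            theta = [theta[i] + K[i] * err for i in range(n)]
            P = [[(P[i][j] - K[i] * Pphi[j]) / lam for j in range(n)]
                 for i in range(n)]
        return theta
    ```

    In an explicit adaptive flight controller, the identified parameters would then be fed into the regulator-gain computation at each update.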

  13. Exploring Suitable Emotion-Focused Strategies in Helping Students to Regulate Their Emotional State in a Tutoring System: Malaysian Case Study

    ERIC Educational Resources Information Center

    Yusoff, Mohd Zaliman Mohd; Zin, Nor Azan Mat

    2013-01-01

    Introduction: This study explored the suitable emotion-focused strategies in helping students to regulate their emotional state in a self-regulated tutoring system. Method: A questionnaire which consists of 25 different regulation strategies adapted from Way of Coping Questionnaire (WCQ) was used to determine the strategies deployed by the…

  14. Predicting Potential Changes in Suitable Habitat and Distribution by 2100 for Tree Species of the Eastern United States

    Treesearch

    Louis R Iverson; Anantha M. Prasad; Mark W. Schwartz; Mark W. Schwartz

    2005-01-01

    We predict current distribution and abundance for tree species present in eastern North America, and subsequently estimate potential suitable habitat for those species under a changed climate with 2 x CO2. We used a series of statistical models (i.e., Regression Tree Analysis (RTA), Multivariate Adaptive Regression Splines (MARS), Bagging Trees (...

  15. Chameleon Coatings: Adaptive Surfaces to Reduce Friction and Wear in Extreme Environments

    NASA Astrophysics Data System (ADS)

    Muratore, C.; Voevodin, A. A.

    2009-08-01

    Adaptive nanocomposite coating materials that automatically and reversibly adjust their surface composition and morphology via multiple mechanisms are a promising development for the reduction of friction and wear over broad ranges of ambient conditions encountered in aerospace applications, such as cycling of temperature and atmospheric composition. Materials selection for these composites is based on extensive study of interactions occurring between solid lubricants and their surroundings, especially with novel in situ surface characterization techniques used to identify adaptive behavior on size scales ranging from 10-10 to 10-4 m. Recent insights on operative solid-lubricant mechanisms and their dependency upon the ambient environment are reviewed as a basis for a discussion of the state of the art in solid-lubricant materials.

  16. Adaptive Spot Detection With Optimal Scale Selection in Fluorescence Microscopy Images.

    PubMed

    Basset, Antoine; Boulanger, Jérôme; Salamero, Jean; Bouthemy, Patrick; Kervrann, Charles

    2015-11-01

    Accurately detecting subcellular particles in fluorescence microscopy is of primary interest for further quantitative analysis such as counting, tracking, or classification. Our primary goal is to segment vesicles likely to share nearly the same size in fluorescence microscopy images. Our method termed adaptive thresholding of Laplacian of Gaussian (LoG) images with autoselected scale (ATLAS) automatically selects the optimal scale corresponding to the most frequent spot size in the image. Four criteria are proposed and compared to determine the optimal scale in a scale-space framework. Then, the segmentation stage amounts to thresholding the LoG of the intensity image. In contrast to other methods, the threshold is locally adapted given a probability of false alarm (PFA) specified by the user for the whole set of images to be processed. The local threshold is automatically derived from the PFA value and local image statistics estimated in a window whose size is not a critical parameter. We also propose a new data set for benchmarking, consisting of six collections of one hundred images each, which exploits backgrounds extracted from real microscopy images. We have carried out an extensive comparative evaluation on several data sets with ground-truth, which demonstrates that ATLAS outperforms existing methods. ATLAS does not need any fine parameter tuning and requires very low computation time. Convincing results are also reported on real total internal reflection fluorescence microscopy images.
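    A simplified sketch of the scale-selection and local-thresholding ideas (ATLAS's actual criteria and PFA-to-threshold mapping differ):

    ```python
    import statistics

    def strong_maxima(response):
        """Count local maxima rising one standard deviation above the
        mean response -- a simplified stand-in for ATLAS's criteria."""
        cut = statistics.mean(response) + statistics.pstdev(response)
        return sum(1 for i in range(1, len(response) - 1)
                   if response[i] > cut
                   and response[i] >= response[i - 1]
                   and response[i] >= response[i + 1])

    def select_scale(responses_by_scale):
        """Pick the LoG scale producing the most strong maxima, i.e. the
        scale matching the most frequent spot size in the image."""
        return max(responses_by_scale,
                   key=lambda s: strong_maxima(responses_by_scale[s]))

    def local_threshold(window, z):
        """Detection threshold from local statistics; z would be derived
        from the user-specified probability of false alarm (PFA)."""
        return statistics.mean(window) + z * statistics.pstdev(window)
    ```

    The key property the sketch preserves is that the threshold comes from local statistics plus a global PFA, so no per-image parameter tuning is needed.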

  17. An experimental study of cutting performances in machining of nimonic super alloy GH2312

    NASA Astrophysics Data System (ADS)

    Du, Jinfu; Wang, Xi; Xu, Min; Mao, Jin; Zhao, Xinglong

    2018-05-01

    Nimonic super alloys are extensively used in the aerospace industry because of their unique properties. Since they are quite costly and difficult to machine, cutting tools wear quickly. To address this problem, an experiment was carried out on a numerically controlled slitting automatic lathe to analyze tool wear and the surface quality of parts machined from nimonic super alloy GH2132 with different cutters. Suitable cutters, reasonable cutting data and cutting speeds were selected, and some conclusions are drawn. An excellent coated tool, compared with other hard alloy cutters, together with suitable cutting data, greatly improves production efficiency and product quality, and can fully meet the processing requirements of nimonic super alloy GH2312.

  18. Data base manipulation for assessment of multiresource suitability and land change

    NASA Technical Reports Server (NTRS)

    Colwell, J.; Sanders, P.; Davis, G.; Thomson, F. (Principal Investigator)

    1981-01-01

    Progress is reported in three tasks which support the overall objectives of the renewable resources inventory task of the AgRISTARS program. In the first task, the geometric correction algorithms of the Master Data Processor were investigated to determine the utility of data corrected by this processor for U.S. Forest Service uses. The second task involved investigation of logic to form blobs as a precursor step to automatic change detection involving two dates of LANDSAT data. Some routine procedures for selecting BLOB (spatial averaging) parameters were developed. In the third task, a major effort was made to develop land suitability modeling approaches for timber, grazing, and wildlife habitat in support of resource planning efforts on the San Juan National Forest.

  19. A Comparison of a Brain-Based Adaptive System and a Manual Adaptable System for Invoking Automation

    NASA Technical Reports Server (NTRS)

    Bailey, Nathan R.; Scerbo, Mark W.; Freeman, Frederick G.; Mikulka, Peter J.; Scott, Lorissa A.

    2004-01-01

    Two experiments are presented that examine alternative methods for invoking automation. In each experiment, participants were asked to perform simultaneously a monitoring task and a resource management task as well as a tracking task that changed between automatic and manual modes. The monitoring task required participants to detect failures of an automated system to correct aberrant conditions under either high or low system reliability. Performance on each task was assessed as well as situation awareness and subjective workload. In the first experiment, half of the participants worked with a brain-based system that used their EEG signals to switch the tracking task between automatic and manual modes. The remaining participants were yoked to participants from the adaptive condition and received the same schedule of mode switches, but their EEG had no effect on the automation. Within each group, half of the participants were assigned to either the low or high reliability monitoring task. In addition, within each combination of automation invocation and system reliability, participants were separated into high and low complacency potential groups. The results revealed no significant effects of automation invocation on the performance measures; however, the high complacency individuals demonstrated better situation awareness when working with the adaptive automation system. The second experiment was the same as the first with one important exception. Automation was invoked manually. Thus, half of the participants pressed a button to invoke automation for 10 s. The remaining participants were yoked to participants from the adaptable condition and received the same schedule of mode switches, but they had no control over the automation. The results showed that participants who could invoke automation performed more poorly on the resource management task and reported higher levels of subjective workload. 
Further, those who invoked automation more frequently performed more poorly on the tracking task and reported higher levels of subjective workload. A comparison of the adaptive condition in the first experiment and the adaptable condition in the second experiment revealed only one significant difference: subjective workload was higher in the adaptable condition. Overall, the results show that a brain-based, adaptive automation system may facilitate situation awareness for those individuals who are more complacent toward automation. By contrast, requiring operators to invoke automation manually may have some detrimental impact on performance and does appear to increase subjective workload relative to an adaptive system.

  20. Automatic detection of referral patients due to retinal pathologies through data mining.

    PubMed

    Quellec, Gwenolé; Lamard, Mathieu; Erginay, Ali; Chabouis, Agnès; Massin, Pascale; Cochener, Béatrice; Cazuguel, Guy

    2016-04-01

    With the increased prevalence of retinal pathologies, automating the detection of these pathologies is becoming more and more relevant. In the past few years, many algorithms have been developed for the automated detection of a specific pathology, typically diabetic retinopathy, using eye fundus photography. No matter how good these algorithms are, we believe many clinicians would not use automatic detection tools focusing on a single pathology and ignoring any other pathology present in the patient's retinas. To solve this issue, an algorithm for characterizing the appearance of abnormal retinas, as well as the appearance of the normal ones, is presented. This algorithm does not focus on individual images: it considers examination records consisting of multiple photographs of each retina, together with contextual information about the patient. Specifically, it relies on data mining in order to learn diagnosis rules from characterizations of fundus examination records. The main novelty is that the content of examination records (images and context) is characterized at multiple levels of spatial and lexical granularity: 1) spatial flexibility is ensured by an adaptive decomposition of composite retinal images into a cascade of regions, 2) lexical granularity is ensured by an adaptive decomposition of the feature space into a cascade of visual words. This multigranular representation allows for great flexibility in automatically characterizing normality and abnormality: it is possible to generate diagnosis rules whose precision and generalization ability can be traded off depending on data availability. A variation on usual data mining algorithms, originally designed to mine static data, is proposed so that contextual and visual data at adaptive granularity levels can be mined. This framework was evaluated in e-ophtha, a dataset of 25,702 examination records from the OPHDIAT screening network, as well as in the publicly-available Messidor dataset. 
It was successfully applied to the detection of patients that should be referred to an ophthalmologist and also to the specific detection of several pathologies. Copyright © 2016 Elsevier B.V. All rights reserved.
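    The cascade-of-regions decomposition can be illustrated with a fixed-depth quadtree; the paper's decomposition is adaptive and data-driven, which this sketch does not attempt:

    ```python
    def quadtree(region, depth, max_depth):
        """Recursively split an image domain (x, y, w, h) into a cascade
        of regions. The split criterion here is just depth -- a stand-in
        for the paper's adaptive spatial decomposition."""
        x, y, w, h = region
        regions = [region]
        if depth < max_depth and w > 1 and h > 1:
            hw, hh = w // 2, h // 2
            for sub in ((x, y, hw, hh), (x + hw, y, w - hw, hh),
                        (x, y + hh, hw, h - hh),
                        (x + hw, y + hh, w - hw, h - hh)):
                regions += quadtree(sub, depth + 1, max_depth)
        return regions
    ```

    Characterizing every region in such a cascade (rather than only the leaves) is what gives the method its multiple levels of spatial granularity.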
