Science.gov

Sample records for semi-automatic computer system

  1. Semi-automatic process partitioning for parallel computation

    NASA Technical Reports Server (NTRS)

    Koelbel, Charles; Mehrotra, Piyush; Vanrosendale, John

    1988-01-01

    On current multiprocessor architectures one must carefully distribute data in memory in order to achieve high performance. Process partitioning is the operation of rewriting an algorithm as a collection of tasks, each operating primarily on its own portion of the data, to carry out the computation in parallel. A semi-automatic approach to process partitioning is considered in which the compiler, guided by advice from the user, automatically transforms programs into such an interacting task system. This approach is illustrated with a picture processing example written in BLAZE, which is transformed into a task system maximizing locality of memory reference.

  2. Semi-automatic ultrasonic full-breast scanner and computer-assisted detection system for breast cancer mass screening

    NASA Astrophysics Data System (ADS)

    Takada, Etsuo; Ikedo, Yuji; Fukuoka, Daisuke; Hara, Takeshi; Fujita, Hiroshi; Endo, Tokiko; Morita, Takako

    2007-03-01

    Breast cancer mass screening is widely performed by mammography, but in populations with dense breasts ultrasonography is much more effective for cancer detection. For this purpose it is necessary to develop special ultrasonic equipment and a complete system for breast mass screening, designing the scanner, image recorder, and viewer with CAD (computer-assisted detection) as a single system. The authors developed an automatic scanner that scans one breast within 30 seconds. An electronic linear probe visualizes a width of 6 cm, and the probe travels three paths over each breast. Ultrasonic images are recorded as movie files, which are handled by a microcomputer as volume data. Doctors can diagnose by rapid digital viewing with a 3D function, and unilateral or bilateral images can be shown on one screen. The viewer also contains a reporting function. This system is considered to have sufficient capability for ultrasonic breast cancer mass screening.

  3. A semi-automatic 3D laser scan system design

    NASA Astrophysics Data System (ADS)

    Xiong, Hanwei; Pan, Ming; Zhang, Xiangwei

    2009-11-01

    Digital 3D models are now used everywhere, from the traditional fields of industrial design and artistic design to heritage conservation. Although laser scanning is very useful for obtaining dense samples of an object, such instruments are currently expensive and usually need to be connected to a computer with a stable power supply, which prevents their use for fieldwork. In this paper, a new semi-automatic 3D laser scan method is proposed using two line laser sources. The planes projected from the laser sources are orthogonal; one is fixed relative to the camera, and the other can be rotated about a settled axis. Before scanning, the system must be calibrated to obtain the camera parameters, the position of the fixed laser plane and the settled axis. During scanning, the fixed laser plane and the camera form a conventional structured-light system, and the 3D positions of the intersection curves of the fixed laser plane with the object can be computed. The other laser plane is rotated manually or mechanically, and its position can be determined from the point where it crosses the fixed laser plane on the object, so the coordinates of the swept points can be obtained. The new system can be used without a computer (the data can be processed later), which makes it suitable for fieldwork. A scanning case is given at the end.
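
    The core geometric step described above, recovering a 3D point by intersecting a camera viewing ray with the calibrated fixed laser plane, can be sketched as follows. This is a minimal illustration assuming a pinhole camera with known intrinsics and a plane given in normal form; the numerical values and function names are hypothetical, not taken from the paper.

```python
import numpy as np

def backproject_pixel(u, v, K):
    """Return the viewing-ray direction of pixel (u, v) for a pinhole camera
    with intrinsic matrix K (camera centred at the origin)."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    return ray / np.linalg.norm(ray)

def intersect_ray_with_plane(ray_dir, plane_n, plane_d):
    """Intersect the ray X = t * ray_dir (t > 0) with the plane n.X + d = 0."""
    denom = plane_n @ ray_dir
    if abs(denom) < 1e-9:
        return None  # ray parallel to the laser plane
    t = -plane_d / denom
    return t * ray_dir if t > 0 else None

# Hypothetical calibration values: camera intrinsics and the fixed laser plane.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
plane_n = np.array([0.0, -0.7071, 0.7071])   # unit normal of the fixed plane
plane_d = -150.0                              # plane offset (mm)

# A pixel lying on the imaged laser stripe is lifted to a 3D point.
point_3d = intersect_ray_with_plane(backproject_pixel(400, 260, K), plane_n, plane_d)
print(point_3d)
```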

  4. Semi-automatic aircraft control system

    NASA Technical Reports Server (NTRS)

    Gilson, Richard D. (Inventor)

    1978-01-01

    A flight control type system which provides a tactile readout to the hand of a pilot for directing elevator control during both approach to flare-out and departure maneuvers. For altitudes above flare-out, the system sums the instantaneous coefficient of lift signals of a lift transducer with a generated signal representing ideal coefficient of lift for approach to flare-out, i.e., a value of about 30% below stall. Error signals resulting from the summation are read out by the noted tactile device. Below flare altitude, an altitude responsive variation is summed with the signal representing ideal coefficient of lift to provide error signal readout.
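
    A minimal sketch of the signal summation the patent abstract describes: the error signal driving the tactile readout is the difference between the measured coefficient of lift and an ideal value about 30% below stall, with an altitude-dependent term summed in below flare altitude. The gain and numbers are illustrative assumptions, not values from the patent.

```python
def elevator_error_signal(cl_measured, cl_stall, altitude, flare_altitude,
                          altitude_gain=0.002):
    """Error signal driving the tactile readout.

    Above flare altitude the reference is an 'ideal' approach coefficient of
    lift, about 30% below stall; below flare altitude an altitude-dependent
    term is summed with that reference (the gain value is illustrative).
    """
    cl_ideal = 0.7 * cl_stall                      # ~30% below stall
    if altitude < flare_altitude:
        cl_ideal += altitude_gain * (flare_altitude - altitude)
    return cl_measured - cl_ideal

# Example: aircraft slightly above the approach reference lift coefficient.
print(elevator_error_signal(cl_measured=1.05, cl_stall=1.4,
                            altitude=200.0, flare_altitude=50.0))
```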

  5. Semi-automatic volumetrics system to parcellate ROI on neocortex

    NASA Astrophysics Data System (ADS)

    Tan, Ou; Ichimiya, Tetsuya; Yasuno, Fumihiko; Suhara, Tetsuya

    2002-05-01

    A template-based, semi-automatic volumetrics system, BrainVol, is built to divide any given patient brain into neocortical and subcortical regions. The standard regions are given as standard ROIs drawn on a standard brain volume. After normalization between the standard MR image and the patient MR image, the subcortical ROI boundaries are refined based on gray matter. The neocortical ROIs are refined by sulcus information that is semi-automatically marked on the patient brain. The segmentation is then applied to the 4D PET image of the same patient for calculation of the TAC (time-activity curve) by co-registration between MR and PET.

  6. A semi-automatic Parachute Separation System for Balloon Payloads

    NASA Astrophysics Data System (ADS)

    Farman, M. E.; Barsic, J. E.

    When operating stratospheric balloons with scientific payloads at the National Scientific Balloon Facility, the current practice for separating the payload from the parachute after descent requires the sending of manual commands over a UHF channel from the chase aircraft or the ground control site. While this procedure generally works well, there have been occasions when, due to shadowing of the receive antenna, unfavorable aircraft attitude or even lack of a chase aircraft, the command has not been received and the parachute has failed to separate. In these circumstances, the payload may be dragged, with the consequent danger of damage to expensive and sometimes irreplaceable scientific instrumentation. The NSBF has developed a system designed to automatically separate the parachute without the necessity for commanding after touchdown. The most important criterion for such a design is that it should be fail-safe; a free-fall of the payload would of course be a disaster. This design incorporates many safety features and underwent extensive evaluation and testing for several years before it was adopted operationally. It is currently used as a backup to the commanded release, activated only when a chase aircraft is not available, at night or in exceptionally poor visibility conditions. This paper describes the design, development, testing and operation of the system, which is known as the Semi-Automatic Parachute Release (SAPR).

  7. Semi-automatic microdrive system for positioning electrodes during electrophysiological recordings from rat brain

    NASA Astrophysics Data System (ADS)

    Dabrowski, Piotr; Kublik, Ewa; Mozaryn, Jakub

    2015-09-01

    Electrophysiological recording of neuronal action potentials from behaving animals requires portable, precise and reliable devices for positioning multiple microelectrodes in the brain. We propose a semi-automatic microdrive system for independent positioning of up to 8 electrodes (or tetrodes) in a rat (or larger animals). The device is intended for chronic, long-term recording applications in freely moving animals. Our design is based on independent stepper motors with lead screws, offering single steps of ~ μm controlled semi-automatically from the computer. A microdrive system prototype for one electrode was developed and tested. Because of the lack of systematic test procedures dedicated to such applications, we propose an evaluation of the prototype similar to the ISO norm for industrial robots. To this end we designed and implemented magnetic linear and rotary encoders that provide information about electrode displacement and motor shaft movement. On the basis of these measurements we estimated the repeatability, accuracy and backlash of the drive. According to the given assumptions and preliminary tests, the device should provide greater accuracy than hand-controlled manipulators available on the market. Automatic positioning will also shorten the course of the experiment and improve the acquisition of signals from multiple neuronal populations.

  8. A semi-automatic parachute separation system for balloon payloads

    NASA Astrophysics Data System (ADS)

    Farman, M.

    At the National Scientific Balloon Facility (NSBF), when operating stratospheric balloons with scientific payloads, the current practice for separating the payload from the parachute after descent requires the sending of commands, over a UHF uplink, from the chase airplane or the ground control site. While this generally works well, there have been occasions when, due to shadowing of the receive antenna or unfavorable aircraft attitude, the command has not been received and the parachute has failed to separate. In these circumstances the payload may be dragged for long distances before being recovered, with consequent danger of damage to expensive and sometimes irreplaceable scientific instrumentation. The NSBF has therefore proposed a system which would automatically separate the parachute without the necessity for commanding after touchdown. Such a system is now under development. Mechanical automatic release systems have been tried in the past with only limited success. The current design uses an electronic system based on a tilt sensor which measures the angle that the suspension train subtends relative to the gravity vector. With the suspension vertical, there is minimum output from the sensor. When the payload touches down, the parachute tilts and, in any tilt direction, the sensor output increases until a predetermined threshold is reached. At this point, a threshold detector is activated which fires the pyrotechnic cutter to release the parachute. The threshold level is adjustable prior to the flight to enable the optimum tilt angle to be determined from flight experience. The system will not operate until armed by command. This command is sent during the descent when communication with the on-board systems is still normally reliable. A safety interlock is included to inhibit arming if the threshold is already exceeded at the time the command is sent. While this is intended to be the primary system, the manual option would be retained as a back-up. A market
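
    The tilt-threshold detection and arming interlock described above can be sketched as a small state machine. This is an illustrative toy, not the NSBF implementation; the threshold value and names are assumed.

```python
class ParachuteReleaseLogic:
    """Sketch of the semi-automatic release logic described in the abstract.

    The tilt sensor reads near zero with the suspension vertical and increases
    with tilt in any direction; firing is only possible after an arm command,
    and arming is refused if the tilt is already above threshold (interlock).
    Threshold and names are illustrative assumptions.
    """

    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.armed = False
        self.fired = False

    def arm(self, tilt_reading):
        # Safety interlock: refuse to arm if the sensor already exceeds the threshold.
        if tilt_reading < self.threshold:
            self.armed = True
        return self.armed

    def update(self, tilt_reading):
        # Fire the pyrotechnic cutter once the armed system sees touchdown tilt.
        if self.armed and not self.fired and tilt_reading >= self.threshold:
            self.fired = True
        return self.fired

logic = ParachuteReleaseLogic(threshold=0.5)
logic.arm(tilt_reading=0.05)           # armed during descent, suspension vertical
print(logic.update(tilt_reading=0.8))  # True: parachute released at touchdown
```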

  9. A semi-automatic traffic sign detection, classification, and positioning system

    NASA Astrophysics Data System (ADS)

    Creusen, I. M.; Hazelhoff, L.; de With, P. H. N.

    2012-01-01

    The availability of large-scale databases containing street-level panoramic images offers the possibility to perform semi-automatic surveying of real-world objects such as traffic signs. These inventories can be performed significantly more efficiently than using conventional methods. Governmental agencies are interested in these inventories for maintenance and safety reasons. This paper introduces a complete semi-automatic traffic sign inventory system. The system consists of several components. First, a detection algorithm locates the 2D position of the traffic signs in the panoramic images. Second, a classification algorithm is used to identify the traffic sign. Third, the 3D position of the traffic sign is calculated using the GPS position of the photographs. Finally, the results are listed in a table for quick inspection and are also visualized in a web browser.

  10. Automatic and semi-automatic approaches for arteriolar-to-venular computation in retinal photographs

    NASA Astrophysics Data System (ADS)

    Mendonça, Ana Maria; Remeseiro, Beatriz; Dashtbozorg, Behdad; Campilho, Aurélio

    2017-03-01

    The Arteriolar-to-Venular Ratio (AVR) is a popular dimensionless measure which allows the assessment of patients' condition for the early diagnosis of different diseases, including hypertension and diabetic retinopathy. This paper presents two new approaches for AVR computation in retinal photographs which include a sequence of automated processing steps: vessel segmentation, caliber measurement, optic disc segmentation, artery/vein classification, region of interest delineation, and AVR calculation. Both approaches have been tested on the INSPIRE-AVR dataset, and compared with a ground truth provided by two medical specialists. The obtained results demonstrate the reliability of the fully automatic approach, which provides AVR ratios very similar to those of at least one of the observers. Furthermore, the semi-automatic approach, which includes manual modification of the artery/vein classification if needed, allows the error to be reduced significantly, to a level below the human error.
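
    The abstract does not spell out the AVR formula; as a rough illustration, the ratio can be taken between summary arteriolar and venular calibers measured in the region of interest. The sketch below uses a plain mean as a stand-in for the Parr-Hubbard/Knudtson summary formulas used in practice.

```python
import numpy as np

def arteriolar_to_venular_ratio(artery_calibers_um, vein_calibers_um):
    """Simplified AVR: ratio of summary arteriolar to venular caliber.

    Real AVR pipelines combine vessel calibers with the Parr-Hubbard or
    Knudtson formulas; a plain mean is used here as an illustrative stand-in.
    """
    crae = np.mean(artery_calibers_um)   # central retinal artery equivalent (simplified)
    crve = np.mean(vein_calibers_um)     # central retinal vein equivalent (simplified)
    return crae / crve

# Calibers (in micrometres) measured in the region of interest around the optic disc.
print(arteriolar_to_venular_ratio([95, 102, 88, 110], [140, 155, 132, 160]))
```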

  11. A Semi-Automatic, Remote-Controlled Video Observation System for Transient Luminous Events

    NASA Astrophysics Data System (ADS)

    Allin, T.; Neubert, T.; Laursen, S.; Rasmussen, I. L.; Soula, S.

    2003-12-01

    In support of global ELF/VLF observations, HF measurements in France, and conjugate photometry/VLF observations in South Africa, we developed and operated a semi-automatic, remotely controlled video system for the observation of middle-atmospheric transient luminous events (TLEs). Installed at the Pic du Midi Observatory in Southern France, the system was operational during the period from July 18 to September 15, 2003. The video system, based on two low-light, non-intensified CCD video cameras, was mounted on top of a motorized pan/tilt unit. The cameras and the pan/tilt unit were controlled over serial links from a local computer, and the video outputs were distributed to a pair of PCI frame grabbers in the computer. This setup allowed remote users to log in and operate the system over the internet. Event detection software provided a means of recording and time-stamping single TLE video fields and thus eliminated the need for continuous human monitoring of TLE activity. The computer recorded and analyzed two parallel video streams at the full 50 Hz field rate, while uploading status images, TLE images, and system logs to a remote web server. The system detected more than 130 TLEs - mostly sprites - distributed over 9 active evenings. We have thus demonstrated the feasibility of remote agents for TLE observations, which are likely to find use in future ground-based TLE observation campaigns, or to be installed at remote sites in support of space-borne or other global TLE observation efforts.

  12. Semi-automatic segmentation of subcutaneous tumours from micro-computed tomography images

    NASA Astrophysics Data System (ADS)

    Ali, Rehan; Gunduz-Demir, Cigdem; Szilágyi, Tünde; Durkee, Ben; Graves, Edward E.

    2013-11-01

    This paper outlines the first attempt to segment the boundary of preclinical subcutaneous tumours, which are frequently used in cancer research, from micro-computed tomography (microCT) image data. MicroCT images provide low tissue contrast, and the tumour-to-muscle interface is hard to determine; however, faint features exist which enable the boundary to be located. These are used as the basis of our semi-automatic segmentation algorithm. Local phase feature detection is used to highlight the faint boundary features, and a level set-based active contour is used to generate smooth contours that fit the sparse boundary features. The algorithm is validated against manually drawn contours and micro-positron emission tomography (microPET) images. When compared against manual expert segmentations, it was consistently able to segment at least 70% of the tumour region (n = 39) in both easy and difficult cases, and over a broad range of tumour volumes. When compared against tumour microPET data, it was able to capture over 80% of the functional microPET volume. Based on these results, we demonstrate the feasibility of subcutaneous tumour segmentation from microCT image data without the assistance of exogenous contrast agents. Our approach is a proof of concept that can be used as the foundation for further research, and to facilitate this, the code is open-source and available from www.setuvo.com.

  13. FishCam - A semi-automatic video-based monitoring system of fish migration

    NASA Astrophysics Data System (ADS)

    Kratzert, Frederik; Mader, Helmut

    2016-04-01

    One of the main objectives of the Water Framework Directive is to preserve and restore the continuum of river networks. Regarding vertebrate migration, fish passes are a widely used measure to overcome anthropogenic constructions. The functionality of this measure needs to be verified by monitoring. In this study we propose a newly developed monitoring system, named FishCam, to observe fish migration, especially in fish passes, without contact and without imposing stress on the fish. To avoid time- and cost-consuming field work for fish pass monitoring, this project aims to develop a semi-automatic monitoring system that enables a continuous observation of fish migration. The system consists of a detection tunnel and a high-resolution camera, and is mainly based on the technology of security cameras. If changes in the image, e.g. by migrating fish or drifting particles, are detected by a motion sensor, the camera system starts recording and continues until no further motion is detectable. An ongoing key challenge in this project is the development of robust software, which counts, measures and classifies the passing fish. To achieve this goal, many different computer vision tasks and classification steps have to be combined: moving objects have to be detected and separated from the static part of the image, objects have to be tracked throughout the entire video, and fish have to be separated from non-fish objects (e.g. foliage and woody debris, shadows and light reflections). Subsequently, the length of all detected fish needs to be determined and fish should be classified into species. The classification into fish and non-fish objects is realized through ensembles of state-of-the-art classifiers on a single image per object. The choice of the best image for classification is implemented through a newly developed "fish benchmark" value, which compares the actual shape of the object with a schematic side-view fish model. To enable an automatization of the
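
    The motion-triggered recording step described above (start recording on motion, stop when none is detectable) could be approximated with a background-subtraction trigger such as the sketch below. The OpenCV subtractor, file name and thresholds are assumptions, not the FishCam implementation.

```python
import cv2

def motion_triggered_frames(video_path, min_foreground_pixels=500):
    """Yield only frames in which the background subtractor reports motion,
    mimicking the 'record while something moves through the tunnel' behaviour.
    Parameters are illustrative."""
    capture = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        foreground = subtractor.apply(frame)
        if cv2.countNonZero(foreground) > min_foreground_pixels:
            yield frame
    capture.release()

# for frame in motion_triggered_frames("fishpass.mp4"):
#     ...  # track, measure and classify the detected object
```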

  14. [Optimization of genomic DNA extraction with magnetic bead- based semi-automatic system].

    PubMed

    Ling, Jie; Wang, Hao; Zhang, Shuai; Zhang, Dan-dan; Lai, Mao-de; Zhu, Yi-min

    2012-05-01

    To develop a rapid and effective method for genomic DNA extraction with a magnetic bead-based semi-automatic system, DNA was extracted from whole blood samples semi-automatically with a nucleic acid automatic extraction system. The concentration and purity of the samples were determined by UV spectrophotometer. An orthogonal design was used to analyze the main effects of lysis time, blood volume, magnetic bead quantity and ethanol concentration on the DNA yield, as well as the two-way interactions of these factors. Lysis time, blood volume, magnetic bead quantity and ethanol concentration were associated with DNA yield (P<0.05), but no interaction existed. DNA yield was higher under the condition of 15 min lysis time, 100 μl blood volume, 80 μl magnetic beads and 80% ethanol. A significant association was found between magnetic bead quantity and DNA purity OD260/OD280 (P=0.008). An interaction of blood volume and lysis time also existed (P=0.013). DNA purity was better when the extraction condition was 40 μl magnetic beads, 15 min lysis time and 100 μl blood volume. Magnetic beads and ethanol concentration were associated with DNA purity OD260/OD230 (P=0.017 and P<0.05); the result was better with 40 μl magnetic beads and 80% ethanol. The results indicate that the optimized conditions with 40 μl magnetic beads will generate higher-quality genomic DNA from whole blood samples.

  15. Integrating different tracking systems in football: multiple camera semi-automatic system, local position measurement and GPS technologies.

    PubMed

    Buchheit, Martin; Allen, Adam; Poon, Tsz Kit; Modonutti, Mattia; Gregson, Warren; Di Salvo, Valter

    2014-12-01

    During the past decade substantial development of computer-aided tracking technology has occurred. Therefore, we aimed to provide calibration equations to allow the interchangeability of different tracking technologies used in soccer. Eighty-two highly trained soccer players (U14-U17) were monitored during training and one match. Player activity was collected simultaneously with a semi-automatic multiple-camera system (Prozone), local position measurement (LPM) technology (Inmotio) and two global positioning systems (GPSports and VX). Data were analysed with respect to three different field dimensions (small, <30 m², to full pitch, match). Variables provided by the systems were compared, and calibration equations (linear regression models) between each system were calculated for each field dimension. Most metrics differed between the 4 systems, with the magnitude of the differences dependent on both pitch size and the variable of interest. Trivial-to-small between-system differences in total distance were noted. However, high-intensity running distance (>14.4 km·h⁻¹) was slightly-to-moderately greater when tracked with Prozone, and accelerations small-to-very-largely greater with LPM. For most of the equations, the typical error of the estimate was of a moderate magnitude. Interchangeability of the different tracking systems is possible with the provided equations, but care is required given their moderate typical error of the estimate.
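
    The calibration equations mentioned above are linear regression models fitted per variable and per pitch size; a minimal sketch with illustrative data follows.

```python
import numpy as np

def calibration_equation(system_a_values, system_b_values):
    """Fit a linear calibration y = slope * x + intercept that converts
    measurements of one tracking system into the scale of another, as done
    per variable and per pitch size in the study above (data are illustrative)."""
    slope, intercept = np.polyfit(system_a_values, system_b_values, deg=1)
    return slope, intercept

# High-intensity running distance (m) reported by two systems for the same players.
gps = np.array([310.0, 420.0, 515.0, 610.0, 740.0])
camera = np.array([335.0, 450.0, 560.0, 650.0, 800.0])
slope, intercept = calibration_equation(gps, camera)
print(f"camera_estimate = {slope:.2f} * gps + {intercept:.1f}")
```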

  16. A semi-automatic computer-aided method for surgical template design

    NASA Astrophysics Data System (ADS)

    Chen, Xiaojun; Xu, Lu; Yang, Yue; Egger, Jan

    2016-02-01

    This paper presents a generalized integrated framework for semi-automatic surgical template design. Several algorithms were implemented, including mesh segmentation, offset surface generation, collision detection and ruled surface generation, and special software named TemDesigner was developed. With a simple user interface, a customized template can be semi-automatically designed according to the preoperative plan. Firstly, mesh segmentation with a signed scalar per vertex is utilized to partition the inner surface from the input surface mesh based on the indicated point loop. Then, the offset surface of the inner surface is obtained through contouring the distance field of the inner surface, and segmented to generate the outer surface. A ruled surface is employed to connect the inner and outer surfaces. Finally, drilling tubes are generated according to the preoperative plan through collision detection and merging. The method has been applied to template design for various kinds of surgeries, including oral implantology, cervical pedicle screw insertion, iliosacral screw insertion and osteotomy, demonstrating its efficiency, functionality and generality.

  17. A semi-automatic computer-aided method for surgical template design.

    PubMed

    Chen, Xiaojun; Xu, Lu; Yang, Yue; Egger, Jan

    2016-02-04

    This paper presents a generalized integrated framework for semi-automatic surgical template design. Several algorithms were implemented, including mesh segmentation, offset surface generation, collision detection and ruled surface generation, and special software named TemDesigner was developed. With a simple user interface, a customized template can be semi-automatically designed according to the preoperative plan. Firstly, mesh segmentation with a signed scalar per vertex is utilized to partition the inner surface from the input surface mesh based on the indicated point loop. Then, the offset surface of the inner surface is obtained through contouring the distance field of the inner surface, and segmented to generate the outer surface. A ruled surface is employed to connect the inner and outer surfaces. Finally, drilling tubes are generated according to the preoperative plan through collision detection and merging. The method has been applied to template design for various kinds of surgeries, including oral implantology, cervical pedicle screw insertion, iliosacral screw insertion and osteotomy, demonstrating its efficiency, functionality and generality.

  18. A semi-automatic computer-aided method for surgical template design

    PubMed Central

    Chen, Xiaojun; Xu, Lu; Yang, Yue; Egger, Jan

    2016-01-01

    This paper presents a generalized integrated framework for semi-automatic surgical template design. Several algorithms were implemented, including mesh segmentation, offset surface generation, collision detection and ruled surface generation, and special software named TemDesigner was developed. With a simple user interface, a customized template can be semi-automatically designed according to the preoperative plan. Firstly, mesh segmentation with a signed scalar per vertex is utilized to partition the inner surface from the input surface mesh based on the indicated point loop. Then, the offset surface of the inner surface is obtained through contouring the distance field of the inner surface, and segmented to generate the outer surface. A ruled surface is employed to connect the inner and outer surfaces. Finally, drilling tubes are generated according to the preoperative plan through collision detection and merging. The method has been applied to template design for various kinds of surgeries, including oral implantology, cervical pedicle screw insertion, iliosacral screw insertion and osteotomy, demonstrating its efficiency, functionality and generality. PMID:26843434

  19. Semi-automatic system for UV images analysis of historical musical instruments

    NASA Astrophysics Data System (ADS)

    Dondi, Piercarlo; Invernizzi, Claudia; Licchelli, Maurizio; Lombardi, Luca; Malagodi, Marco; Rovetta, Tommaso

    2015-06-01

    The selection of representative areas to be analyzed is a common problem in the study of Cultural Heritage items. UV fluorescence photography is an extensively used technique to highlight specific surface features which cannot be observed in visible light (e.g. parts that were restored or treated with different materials), and it proves to be very effective in the study of historical musical instruments. In this work we propose a new semi-automatic solution for selecting areas with the same perceived color (a simple clue of similar materials) on UV photos, using a specifically designed interactive tool. The proposed method works in two steps: (i) the user selects a small rectangular area of the image; (ii) the program automatically highlights all the areas that have the same color as the selected input. The identification is made by analyzing the image in the HSV color model, the model closest to human perception. The achievable result is more accurate than a manual selection, because it can also detect points that users do not recognize as similar due to perceptual illusions. The application has been developed following usability guidelines, and its Human-Computer Interface has been improved after a series of tests performed by expert and non-expert users. All the experiments were performed on UV imagery of the Stradivari violin collection held by the "Museo del Violino" in Cremona.
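
    A rough sketch of the two-step interaction just described (select a small rectangle, then highlight every pixel with the same perceived color in HSV space). The tolerances and the OpenCV-based implementation are assumptions, not the authors' tool.

```python
import cv2
import numpy as np

def select_similar_color(image_bgr, rect, hue_tol=8, sat_tol=40, val_tol=40):
    """Return a mask of every pixel whose HSV colour matches the mean colour of
    the user-selected rectangle (x, y, width, height). Tolerances are illustrative."""
    x, y, w, h = rect
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    sample = hsv[y:y + h, x:x + w].reshape(-1, 3).mean(axis=0)
    lower = np.clip(sample - [hue_tol, sat_tol, val_tol], 0, 255).astype(np.uint8)
    upper = np.clip(sample + [hue_tol, sat_tol, val_tol], 0, 255).astype(np.uint8)
    return cv2.inRange(hsv, lower, upper)   # 255 where the colour matches

# mask = select_similar_color(cv2.imread("uv_photo.png"), rect=(120, 80, 20, 20))
```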

  20. Robust semi-automatic segmentation of pulmonary subsolid nodules in chest computed tomography scans.

    PubMed

    Lassen, B C; Jacobs, C; Kuhnigk, J-M; van Ginneken, B; van Rikxoort, E M

    2015-02-07

    The malignancy of lung nodules is most often detected by analyzing changes of the nodule diameter in follow-up scans. A recent study showed that comparing the volume or the mass of a nodule over time is much more significant than comparing the diameter. Since the survival rate is higher when the disease is still in an early stage, it is important to detect the growth rate as soon as possible. However, manual segmentation of a volume is time-consuming. Whereas there are several well-evaluated methods for the segmentation of solid nodules, less work has been done on subsolid nodules, which actually show a higher malignancy rate than solid nodules. In this work we present a fast, semi-automatic method for segmentation of subsolid nodules. As minimal user interaction, the method expects a user-drawn stroke on the largest diameter of the nodule. First, a threshold-based region growing is performed based on intensity analysis of the nodule region and surrounding parenchyma. In the next step the chest wall is removed by a combination of connected component analysis and convex hull calculation. Finally, attached vessels are detached by morphological operations. The method was evaluated on all nodules of the publicly available LIDC/IDRI database that were manually segmented and rated as non-solid or part-solid by four radiologists (Dataset 1) and three radiologists (Dataset 2). For these 59 nodules the Jaccard index for the agreement of the proposed method with the manual reference segmentations was 0.52/0.50 (Dataset 1/Dataset 2), compared to an inter-observer agreement of the manual segmentations of 0.54/0.58 (Dataset 1/Dataset 2). Furthermore, the inter-observer agreement using the proposed method (i.e. different input strokes) was analyzed and gave a Jaccard index of 0.74/0.74 (Dataset 1/Dataset 2). The presented method provides satisfactory segmentation results with minimal observer effort in minimal time and can reduce the inter-observer variability for segmentation of
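
    The agreement measure quoted above, the Jaccard index between two segmentation masks, is straightforward to compute; a minimal sketch with a toy 2D example (in practice the masks are 3D nodule segmentations):

```python
import numpy as np

def jaccard_index(mask_a, mask_b):
    """Jaccard index (intersection over union) between two binary segmentation masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both masks empty: perfect (vacuous) agreement
    return np.logical_and(a, b).sum() / union

auto = np.array([[0, 1, 1], [0, 1, 1], [0, 0, 0]])
manual = np.array([[0, 1, 1], [0, 1, 0], [0, 1, 0]])
print(round(jaccard_index(auto, manual), 2))  # 0.6
```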

  1. Robust semi-automatic segmentation of pulmonary subsolid nodules in chest computed tomography scans

    NASA Astrophysics Data System (ADS)

    Lassen, B. C.; Jacobs, C.; Kuhnigk, J.-M.; van Ginneken, B.; van Rikxoort, E. M.

    2015-02-01

    The malignancy of lung nodules is most often detected by analyzing changes of the nodule diameter in follow-up scans. A recent study showed that comparing the volume or the mass of a nodule over time is much more significant than comparing the diameter. Since the survival rate is higher when the disease is still in an early stage, it is important to detect the growth rate as soon as possible. However, manual segmentation of a volume is time-consuming. Whereas there are several well-evaluated methods for the segmentation of solid nodules, less work has been done on subsolid nodules, which actually show a higher malignancy rate than solid nodules. In this work we present a fast, semi-automatic method for segmentation of subsolid nodules. As minimal user interaction, the method expects a user-drawn stroke on the largest diameter of the nodule. First, a threshold-based region growing is performed based on intensity analysis of the nodule region and surrounding parenchyma. In the next step the chest wall is removed by a combination of connected component analysis and convex hull calculation. Finally, attached vessels are detached by morphological operations. The method was evaluated on all nodules of the publicly available LIDC/IDRI database that were manually segmented and rated as non-solid or part-solid by four radiologists (Dataset 1) and three radiologists (Dataset 2). For these 59 nodules the Jaccard index for the agreement of the proposed method with the manual reference segmentations was 0.52/0.50 (Dataset 1/Dataset 2), compared to an inter-observer agreement of the manual segmentations of 0.54/0.58 (Dataset 1/Dataset 2). Furthermore, the inter-observer agreement using the proposed method (i.e. different input strokes) was analyzed and gave a Jaccard index of 0.74/0.74 (Dataset 1/Dataset 2). The presented method provides satisfactory segmentation results with minimal observer effort in minimal time and can reduce the inter-observer variability for segmentation of

  2. Building a semi-automatic ontology learning and construction system for geosciences

    NASA Astrophysics Data System (ADS)

    Babaie, H. A.; Sunderraman, R.; Zhu, Y.

    2013-12-01

    We are developing an ontology learning and construction framework that allows continuous, semi-automatic knowledge extraction, verification, validation, and maintenance by a potentially very large group of collaborating domain experts in any geosciences field. The system brings geoscientists from the sidelines to the center stage of ontology building, allowing them to collaboratively construct and enrich new ontologies, and to merge, align, and integrate existing ontologies and tools. These constantly evolving ontologies can more effectively address the community's interests, purposes, tools, and change. The goal is to minimize the cost and time of building ontologies, and to maximize the quality, usability, and adoption of ontologies by the community. Our system will be a domain-independent ontology learning framework that applies natural language processing, allowing users to enter their ontology in a semi-structured form, and a combined Semantic Web and Social Web approach that enables direct participation of geoscientists who have no skill in the design and development of their domain ontologies. A controlled natural language (CNL) interface and an integrated authoring and editing tool automatically convert syntactically correct CNL text into formal OWL constructs. The WebProtege-based system will allow a potentially large group of geoscientists, from multiple domains, to crowd-source and participate in the structuring of their knowledge model by sharing their knowledge through critiquing, testing, verifying, adopting, and updating of the concept models (ontologies). We will use cloud storage for all data and knowledge base components of the system, such as users, domain ontologies, discussion forums, and semantic wikis that can be accessed and queried by geoscientists in each domain. We will use NoSQL databases such as MongoDB as a service in the cloud environment. MongoDB uses the lightweight JSON format, which makes it convenient and easy to build Web applications using

  3. A semi-automatic measurement system based on digital image analysis for the application to the single fiber fragmentation test

    NASA Astrophysics Data System (ADS)

    Blobel, Swen; Thielsch, Karin; Ulbricht, Volker

    2013-04-01

    The computational prediction of the effective macroscopic material behavior of fiber reinforced composites is a goal of research to exploit the potential of these materials. Besides the mechanical characteristics of the material components, an extensive knowledge of the mechanical interaction between these components is necessary in order to set up suitable models of the local material structure. For example, an experimental investigation of the micromechanical damage behavior of simplified composite specimens can help to understand the mechanisms which cause matrix and interface damage in the vicinity of a fiber fracture. To realize an appropriate experimental setup, a novel semi-automatic measurement system based on the analysis of digital images using photoelasticity and image correlation was developed. Applied to specimens with a birefringent matrix material, it is able to provide global and local information on the damage evolution and the stress and strain state at the same time. The image acquisition is accomplished using a long-distance microscopic optic with an effective resolution of two micrometers per pixel. While the system is moved along the domain of interest of the specimen, the acquired images are assembled online and used to interpret optically extracted information in combination with global force-displacement curves provided by the load frame. The illumination of the specimen with circularly polarized light and the projection of the transmitted light through different configurations of polarizer and quarter-wave plates enable the synchronous capture of four images at the quadrants of a four-megapixel image sensor. The fifth image is decoupled from the same optical path and is projected onto a second camera chip, to get a non-polarized image of the same scene at the same time. The benefit of this optical setup is the opportunity to extract a wide range of information locally, without influence on the progress of the experiment. The four images

  4. MOLGENIS/connect: a system for semi-automatic integration of heterogeneous phenotype data with applications in biobanks

    PubMed Central

    Pang, Chao; van Enckevort, David; de Haan, Mark; Kelpin, Fleur; Jetten, Jonathan; Hendriksen, Dennis; de Boer, Tommy; Charbon, Bart; Winder, Erwin; van der Velde, K. Joeri; Doiron, Dany; Fortier, Isabel; Hillege, Hans

    2016-01-01

    Motivation: While the size and number of biobanks, patient registries and other data collections are increasing, biomedical researchers still often need to pool data for statistical power, a task that requires time-intensive retrospective integration. Results: To address this challenge, we developed MOLGENIS/connect, a semi-automatic system to find, match and pool data from different sources. The system shortlists relevant source attributes from thousands of candidates using ontology-based query expansion to overcome variations in terminology. Then it generates algorithms that transform source attributes to a common target DataSchema. These include unit conversion, categorical value matching and complex conversion patterns (e.g. calculation of BMI). In comparison to human experts, MOLGENIS/connect was able to auto-generate 27% of the algorithms perfectly, with an additional 46% needing only minor editing, representing a reduction in the human effort and expertise needed to pool data. Availability and Implementation: Source code, binaries and documentation are available as open-source under LGPLv3 from http://github.com/molgenis/molgenis and www.molgenis.org/connect. Contact: m.a.swertz@rug.nl Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27153686
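
    The generated transformation algorithms mentioned above include unit conversions, categorical value matching and derived values such as BMI. Below is a minimal Python stand-in; MOLGENIS/connect generates its own mapping scripts, so the names and code lists here are purely illustrative.

```python
def bmi_transform(weight_kg, height_cm):
    """One of the 'complex conversion patterns' mentioned above: derive BMI
    for a target DataSchema from two source attributes with different units."""
    height_m = height_cm / 100.0          # unit conversion step
    return weight_kg / (height_m ** 2)

def sex_category_transform(source_value):
    """Categorical value matching: map heterogeneous source codes to a target code."""
    mapping = {"m": 1, "male": 1, "f": 2, "female": 2}   # illustrative code lists
    return mapping.get(str(source_value).strip().lower())

print(round(bmi_transform(weight_kg=70, height_cm=175), 1))  # 22.9
print(sex_category_transform("Female"))                       # 2
```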

  5. Semi-automatic laboratory goniospectrometer system for performing multi-angular reflectance and polarization measurements for natural surfaces.

    PubMed

    Sun, Z Q; Wu, Z F; Zhao, Y S

    2014-01-01

    In this paper, the design and operation of the Northeast Normal University Laboratory Goniospectrometer System for performing multi-angular reflected and polarized measurements under controlled illumination conditions is described. A semi-automatic arm, carried on a rotating circular ring, enables the acquisition of a large number of measurements of the surface Bidirectional Reflectance Factor (BRF) over the full hemisphere. In addition, a set of polarizing optics enables linear polarization measurements over the spectrum from 350 nm to 2300 nm. Because of the stable measurement conditions in the laboratory, the BRF and linear polarization have average uncertainties of 1% and less than 5%, respectively, depending on the sample properties. The polarimetric accuracy of the instrument is below 0.01 in terms of the absolute value of the degree of linear polarization, which was established by measuring a Spectralon plane. This paper also presents the reflectance and polarization of snow, soil, sand, and ice measured during 2010-2013 in order to illustrate the system's stability and accuracy. These measurement results are useful for understanding the scattering properties of natural surfaces on Earth.
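
    The degree of linear polarization reported above is commonly estimated from intensities measured behind a linear polarizer at four orientations; the sketch below uses the standard Stokes-vector estimate, which may differ from the instrument's actual data-reduction procedure.

```python
import numpy as np

def degree_of_linear_polarization(i0, i45, i90, i135):
    """Degree of linear polarization from intensities measured behind a linear
    polarizer at 0, 45, 90 and 135 degrees (standard Stokes-vector estimate)."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    return np.sqrt(s1**2 + s2**2) / s0

# A Spectralon-like reference should give a DoLP close to zero.
print(round(degree_of_linear_polarization(1.00, 1.01, 0.99, 1.00), 3))
```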

  6. Semi-automatic laboratory goniospectrometer system for performing multi-angular reflectance and polarization measurements for natural surfaces

    NASA Astrophysics Data System (ADS)

    Sun, Z. Q.; Wu, Z. F.; Zhao, Y. S.

    2014-01-01

    In this paper, the design and operation of the Northeast Normal University Laboratory Goniospectrometer System for performing multi-angular reflected and polarized measurements under controlled illumination conditions is described. A semi-automatic arm, carried on a rotating circular ring, enables the acquisition of a large number of measurements of the surface Bidirectional Reflectance Factor (BRF) over the full hemisphere. In addition, a set of polarizing optics enables linear polarization measurements over the spectrum from 350 nm to 2300 nm. Because of the stable measurement conditions in the laboratory, the BRF and linear polarization have average uncertainties of 1% and less than 5%, respectively, depending on the sample properties. The polarimetric accuracy of the instrument is below 0.01 in terms of the absolute value of the degree of linear polarization, which was established by measuring a Spectralon plane. This paper also presents the reflectance and polarization of snow, soil, sand, and ice measured during 2010-2013 in order to illustrate the system's stability and accuracy. These measurement results are useful for understanding the scattering properties of natural surfaces on Earth.

  7. Comparing semi-automatic systems for recruitment of patients to clinical trials.

    PubMed

    Cuggia, Marc; Besana, Paolo; Glasspool, David

    2011-06-01

    (i) To review contributions and limitations of decision support systems for automatic recruitment of patients to clinical trials (Clinical Trial Recruitment Support Systems, CTRSS). (ii) To characterize the important features of this domain, the main classes of approach that have been used, and their advantages and disadvantages. (iii) To assess the effectiveness and potential of such systems in improving trial recruitment rates. A systematic MESH keyword-based search of Pubmed, Embase, and Scholar Google for relevant CTRSS publications from January 1st 1998 to August 31st 2009 yielded 73 references, from which 33 relevant papers describing 28 distinct studies were chosen for review, based on their report of a novel decision support system for trial recruitment which reused already available patient data. The reviewed papers were classified using a modified version of an existing taxonomy for clinical decision support systems, using 10 axes relevant to the trial recruitment domain. It proved possible and useful to characterize CTRSS on a relatively small number of dimensions and a number of clear trends emerge from the study. Only nine papers reported a useful evaluation of the effectiveness of the system in terms of trial pre-inclusion or enrolment rate. While all the systems reviewed re-use structured and coded patient data none attempts the more difficult task of using unstructured patient notes to pre-screen for trial inclusion. Few studies address acceptance of systems by clinicians, or integration into clinical workflow, and there is little evidence of use of interoperability standards. System design, scope, and assessment methodology vary significantly between papers, making it difficult to establish the impact of different approaches on recruitment rate. It is clear, however, that the pre-screening phase of trial recruitment is the most effective part of the process to address with CTRSS, that clinical workflow integration and clinician acceptance are

  8. Easing semantically enriched information retrieval-An interactive semi-automatic annotation system for medical documents.

    PubMed

    Gschwandtner, Theresia; Kaiser, Katharina; Martini, Patrick; Miksch, Silvia

    2010-06-01

    Mapping medical concepts from a terminology system to the concepts in the narrative text of a medical document is necessary to provide semantically accurate information for further processing steps. The MetaMap Transfer (MMTx) program is a semantic annotation system that generates a rough mapping of concepts from the Unified Medical Language System (UMLS) Metathesaurus to free medical text, but this mapping still contains erroneous and ambiguous bits of information. Since manually correcting the mapping is an extremely cumbersome and time-consuming task, we have developed the MapFace editor. The editor provides a convenient way of navigating the annotated information gained from the MMTx output, and enables users to correct this information on both a conceptual and a syntactical level, and thus it greatly facilitates the handling of the MMTx program. Additionally, the editor provides enhanced visualization features to support the correct interpretation of medical concepts within the text. We paid special attention to ensure that the MapFace editor is an intuitive and convenient tool to work with. Therefore, we recently conducted a usability study in order to create a well-founded background serving as a starting point for further improvement of the editor's usability.

  9. Easing semantically enriched information retrieval—An interactive semi-automatic annotation system for medical documents

    PubMed Central

    Gschwandtner, Theresia; Kaiser, Katharina; Martini, Patrick; Miksch, Silvia

    2010-01-01

    Mapping medical concepts from a terminology system to the concepts in the narrative text of a medical document is necessary to provide semantically accurate information for further processing steps. The MetaMap Transfer (MMTx) program is a semantic annotation system that generates a rough mapping of concepts from the Unified Medical Language System (UMLS) Metathesaurus to free medical text, but this mapping still contains erroneous and ambiguous bits of information. Since manually correcting the mapping is an extremely cumbersome and time-consuming task, we have developed the MapFace editor. The editor provides a convenient way of navigating the annotated information gained from the MMTx output, and enables users to correct this information on both a conceptual and a syntactical level, and thus it greatly facilitates the handling of the MMTx program. Additionally, the editor provides enhanced visualization features to support the correct interpretation of medical concepts within the text. We paid special attention to ensure that the MapFace editor is an intuitive and convenient tool to work with. Therefore, we recently conducted a usability study in order to create a well founded background serving as a starting point for further improvement of the editor’s usability. PMID:20582249

  10. Graphical user interface (GUIDE) and semi-automatic system for the acquisition of anaglyphs

    NASA Astrophysics Data System (ADS)

    Canchola, Marco A.; Arízaga, Juan A.; Cortés, Obed; Tecpanecatl, Eduardo; Cantero, Jose M.

    2013-09-01

    Diverse educational experiences have shown that children accept ideas related to science more readily than adults do. That fact, together with their great curiosity, makes scientific outreach efforts aimed at children likely to succeed. Moreover, 3D digital imaging has become a topic of growing importance in various areas, mainly entertainment, film and video games, but also in fields such as medical practice, where it is crucial for disease detection. This article presents a system model for 3D images for educational purposes that allows students of various grade levels, from school to college, to have a first approach to image processing, explaining the use of filters for stereoscopic images that give the brain the impression of depth. The system is based on two elements: hardware centered on an Arduino board, and software based on Matlab. The paper presents the design and construction of each of the elements, information on the images obtained and, finally, how users can interact with the device.
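
    The stereoscopic filtering the article explains, combining a left and a right view into a red-cyan anaglyph so the brain perceives depth, can be sketched in a few lines. The paper's processing is done in Matlab, so this Python version is only an illustrative stand-in.

```python
import numpy as np

def red_cyan_anaglyph(left_rgb, right_rgb):
    """Build a red-cyan anaglyph: red channel from the left image, green and
    blue channels from the right image, viewed through red-cyan glasses."""
    anaglyph = np.empty_like(left_rgb)
    anaglyph[..., 0] = left_rgb[..., 0]      # red   <- left view
    anaglyph[..., 1] = right_rgb[..., 1]     # green <- right view
    anaglyph[..., 2] = right_rgb[..., 2]     # blue  <- right view
    return anaglyph

# left = plt.imread("left.png"); right = plt.imread("right.png")
# plt.imshow(red_cyan_anaglyph(left, right))
```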

  11. Semi-automatic measurement of left ventricular function on dual source computed tomography using five different software tools in comparison with magnetic resonance imaging.

    PubMed

    de Jonge, G J; van der Vleuten, P A; Overbosch, J; Lubbers, D D; Jansen-van der Weide, M C; Zijlstra, F; van Ooijen, P M A; Oudkerk, M

    2011-12-01

    To compare left ventricular (LV) function assessment using five different software tools on the same dual source computed tomography (DSCT) datasets with the results of MRI. Twenty-six patients undergoing cardiac contrast-enhanced DSCT were included (20 men, mean age 59±12 years). Reconstructions were made at every 10% of the RR-interval. Function analysis was performed with five different, commercially available workstations. In all software tools, semi-automatic LV function measurements were performed, with manual corrections if necessary. Within 0-22 days, all 26 patients were scanned on a 1.5 T MRI system. Bland-Altman analysis was performed to calculate limits of agreement between DSCT and MRI. Pearson's correlation coefficient was calculated to assess the correlation between the different DSCT software tools and MRI. Repeated measurements were performed to determine intraobserver and interobserver variability. For all five DSCT workstations, mean LV functional parameters correlated well with measurements on MRI. Bland-Altman analysis of the comparison of DSCT and MRI showed acceptable limits of agreement. The best correlation and limits of agreement were obtained by DSCT software tools with software algorithms comparable to the MRI software. The five different DSCT software tools we examined yield interchangeable LV functional parameter results compared to those routinely analysed by MRI. The best correlation and the narrowest limits of agreement were found when the same software algorithm was used for both DSCT and MRI examinations; therefore our advice for clinical practice is to always evaluate images with the same type of post-processing tools in follow-up. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
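
    The Bland-Altman analysis used above reduces to a bias and 95% limits of agreement over the paired differences; a minimal sketch with illustrative ejection-fraction values follows.

```python
import numpy as np

def bland_altman_limits(measure_ct, measure_mri):
    """Bias and 95% limits of agreement between two methods measuring the
    same quantity (here, e.g., ejection fraction from DSCT and from MRI)."""
    ct = np.asarray(measure_ct, dtype=float)
    mri = np.asarray(measure_mri, dtype=float)
    diff = ct - mri
    bias = diff.mean()
    spread = 1.96 * diff.std(ddof=1)
    return bias, bias - spread, bias + spread

ef_ct = [58, 62, 55, 71, 49, 66]    # illustrative ejection fractions (%)
ef_mri = [60, 61, 57, 69, 52, 64]
print(bland_altman_limits(ef_ct, ef_mri))
```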

  12. A three-dimensional quantitative analysis of restenosis parameters after balloon angioplasty: comparison between semi-automatic computer-assisted planimetry and stereology.

    PubMed

    Salu, Koen J; Knaapen, Michiel W M; Bosmans, Johan M; Vrints, Chris J; Bult, Hidde

    2002-01-01

    Semi-automatic computer-assisted planimetry is often used for the quantification of restenosis parameters after balloon angioplasty although it is a time-consuming method. Moreover, slicing the artery to enable analysis of two-dimensional (2-D) images leads to a loss of information since the vessel structure is three-dimensional (3-D). Cavalieri's principle uses systematic random sampling allowing 3-D quantification. This study compares the accuracy and efficiency of planimetry versus point-counting measurements on restenosis parameters after balloon angioplasty and investigates the use of Cavalieri's principle for 3-D volume quantification. Bland and Altman plots showed good agreement between planimetry and point counting for the 2-D and 3-D quantification of lumen, internal elastic lamina (IEL) and external elastic lamina (EEL), with a slightly smaller agreement for intima and media. Mean values and induced coefficients of variation were similar for both methods for all parameters. Point counting induced a 6% error in its 3-D quantification, which is negligible in view of the biological variation (>90%) among animals. However, point counting was 3 times faster compared to planimetry, improving its efficiency. This study shows that combining Cavalieri's principle with point counting is a precise and efficient method for the 3-D quantification of restenosis parameters after balloon angioplasty.
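
    Cavalieri's principle combined with point counting, as used above, estimates a volume as section spacing times the area associated with one grid point times the total point count; a minimal sketch with illustrative numbers:

```python
def cavalieri_volume(points_per_section, section_spacing_mm, area_per_point_mm2):
    """Cavalieri/point-counting volume estimate: V = t * a_p * sum(P_i),
    where t is the distance between systematically sampled sections, a_p the
    area associated with one grid point and P_i the points counted per section."""
    return section_spacing_mm * area_per_point_mm2 * sum(points_per_section)

# Illustrative numbers: 5 sections, 0.4 mm apart, 0.01 mm^2 per grid point.
print(cavalieri_volume([12, 18, 22, 17, 9], section_spacing_mm=0.4,
                       area_per_point_mm2=0.01))  # volume in mm^3
```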

  13. Semi-automatic Segmentation for Prostate Interventions

    PubMed Central

    Mahdavi, S. Sara; Chng, Nick; Spadinger, Ingrid; Morris, William J.; Salcudean, Septimiu E.

    2011-01-01

    In this paper we report and characterize a semi-automatic prostate segmentation method for prostate brachytherapy. Based on anatomical evidence and requirements of the treatment procedure, a warped and tapered ellipsoid was found suitable as the a priori 3D shape of the prostate. By transforming the acquired endorectal transverse images of the prostate into ellipses, the shape fitting problem was cast into a convex problem which can be solved efficiently. The average whole gland error between volumes created from manual and semi-automatic contours from 21 patients was 6.63±0.9%. For use in brachytherapy treatment planning, the resulting contours were modified, if deemed necessary, by radiation oncologists prior to treatment. The average whole gland volume error between the volumes computed from semi-automatic contours and those computed from modified contours, from 40 patients, was 5.82±4.15%. The amount of bias in the physicians’ delineations when given an initial semi-automatic contour was measured by comparing the volume error between 10 prostate volumes computed from manual contours with those of modified contours. This error was found to be 7.25±0.39% for the whole gland. Automatic contouring reduced subjectivity, as evidenced by a decrease in segmentation inter- and intra-observer variability from 4.65% and 5.95% for manual segmentation to 3.04% and 3.48% for semi-automatic segmentation, respectively. We characterized the performance of the method relative to the reference obtained from manual segmentation by using a novel approach that divides the prostate region into nine sectors. We analyzed each sector independently as the requirements for segmentation accuracy depend on which region of the prostate is considered. The measured segmentation time is 14±1 seconds with an additional 32±14 seconds for initialization. By assuming 1–3 minutes for modification of the contours, if necessary, a total segmentation time of less than 4 minutes is required

  14. StandFood: Standardization of Foods Using a Semi-Automatic System for Classifying and Describing Foods According to FoodEx2.

    PubMed

    Eftimov, Tome; Korošec, Peter; Koroušić Seljak, Barbara

    2017-05-26

    The European Food Safety Authority has developed a standardized food classification and description system called FoodEx2. It uses facets to describe food properties and aspects from various perspectives, making it easier to compare food consumption data from different sources and perform more detailed data analyses. However, both food composition data and food consumption data, which need to be linked, are lacking in FoodEx2 because the process of classification and description has to be performed manually, a process that is laborious and requires good knowledge of the system and also good knowledge of food (composition, processing, marketing, etc.). In this paper, we introduce a semi-automatic system for classifying and describing foods according to FoodEx2, which consists of three parts. The first involves a machine learning approach and classifies foods into four FoodEx2 categories, with two for single foods: raw (r) and derivatives (d), and two for composite foods: simple (s) and aggregated (c). The second uses a natural language processing approach and probability theory to describe foods. The third combines the results from the first and the second part by defining post-processing rules in order to improve the result of the classification part. We tested the system using a set of food items (from Slovenia) manually coded according to FoodEx2. The new semi-automatic system obtained an accuracy of 89% for the classification part and 79% for the description part, or an overall result of 79% for the whole system.
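
    The classification part described above maps a food item to one of the four FoodEx2 categories with machine learning; below is a minimal stand-in using a TF-IDF text classifier. The toy training data and the choice of classifier are assumptions rather than the StandFood implementation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data standing in for manually FoodEx2-coded items; the real
# system is trained on a much larger coded food list.
foods = ["raw apple", "apple juice", "vegetable soup with pasta", "wheat flour",
         "mixed fruit salad", "raw carrot", "carrot puree", "lasagne with meat"]
labels = ["r", "d", "c", "d", "s", "r", "d", "c"]   # raw / derivative / simple / aggregated

classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                           LogisticRegression(max_iter=1000))
classifier.fit(foods, labels)
print(classifier.predict(["apple puree", "raw pear"]))
```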

  15. Investigating Helmet Promotion for Cyclists: Results from a Randomised Study with Observation of Behaviour, Using a Semi-Automatic Video System

    PubMed Central

    Constant, Aymery; Messiah, Antoine; Felonneau, Marie-Line; Lagarde, Emmanuel

    2012-01-01

    Introduction Half of fatal injuries among bicyclists are head injuries. While helmet use is likely to provide protection, their use often remains rare. We assessed the influence of strategies for promotion of helmet use with direct observation of behaviour by a semi-automatic video system. Methods We performed a single-centre randomised controlled study, with 4 balanced randomisation groups. Participants were non-helmet users, aged 18–75 years, recruited at a loan facility in the city of Bordeaux, France. After completing a questionnaire investigating their attitudes towards road safety and helmet use, participants were randomly assigned to three groups with the provision of “helmet only”, “helmet and information” or “information only”, and to a fourth control group. Bikes were labelled with a colour code designed to enable observation of helmet use by participants while cycling, using a 7-spot semi-automatic video system located in the city. A total of 1557 participants were included in the study. Results Between October 15th 2009 and September 28th 2010, 2621 cyclists' movements, made by 587 participants, were captured by the video system. Participants seen at least once with a helmet amounted to 6.6% of all observed participants, with higher rates in the two groups that received a helmet at baseline. The likelihood of observed helmet use was significantly increased among participants of the “helmet only” group (OR = 7.73 [2.09–28.5]) and this impact faded within six months following the intervention. No effect of information delivery was found. Conclusion Providing a helmet may be of value, but will not be sufficient to achieve high rates of helmet wearing among adult cyclists. Integrated and repeated prevention programmes will be needed, including free provision of helmets, but also information on the protective effect of helmets and strategies to increase peer and parental pressure. PMID:22355384

  16. Investigating helmet promotion for cyclists: results from a randomised study with observation of behaviour, using a semi-automatic video system.

    PubMed

    Constant, Aymery; Messiah, Antoine; Felonneau, Marie-Line; Lagarde, Emmanuel

    2012-01-01

    Half of fatal injuries among bicyclists are head injuries. While helmets are likely to provide protection, their use often remains rare. We assessed the influence of strategies for promotion of helmet use with direct observation of behaviour by a semi-automatic video system. We performed a single-centre randomised controlled study, with 4 balanced randomisation groups. Participants were non-helmet users, aged 18-75 years, recruited at a loan facility in the city of Bordeaux, France. After completing a questionnaire investigating their attitudes towards road safety and helmet use, participants were randomly assigned to three groups with the provision of "helmet only", "helmet and information" or "information only", and to a fourth control group. Bikes were labelled with a colour code designed to enable observation of helmet use by participants while cycling, using a 7-spot semi-automatic video system located in the city. A total of 1557 participants were included in the study. Between October 15th 2009 and September 28th 2010, 2621 cyclists' movements, made by 587 participants, were captured by the video system. Participants seen at least once with a helmet amounted to 6.6% of all observed participants, with higher rates in the two groups that received a helmet at baseline. The likelihood of observed helmet use was significantly increased among participants of the "helmet only" group (OR = 7.73 [2.09-28.5]) and this impact faded within six months following the intervention. No effect of information delivery was found. Providing a helmet may be of value, but will not be sufficient to achieve high rates of helmet wearing among adult cyclists. Integrated and repeated prevention programmes will be needed, including free provision of helmets, but also information on the protective effect of helmets and strategies to increase peer and parental pressure.

  17. A method for semi-automatic segmentation and evaluation of intracranial aneurysms in bone-subtraction computed tomography angiography (BSCTA) images

    NASA Astrophysics Data System (ADS)

    Krämer, Susanne; Ditt, Hendrik; Biermann, Christina; Lell, Michael; Keller, Jörg

    2009-02-01

    The rupture of an intracranial aneurysm has dramatic consequences for the patient. Hence, early detection of unruptured aneurysms is of paramount importance. Bone-subtraction computed tomography angiography (BSCTA) has proven to be a powerful tool for the detection of aneurysms, in particular those located close to the skull base. Most aneurysms, though, are chance findings in BSCTA scans performed for other reasons. Therefore, it is highly desirable to have techniques available that operate on standard BSCTA scans and assist radiologists and surgeons in the evaluation of intracranial aneurysms. In this paper, we present a semi-automatic method for segmentation and assessment of intracranial aneurysms. The only user interaction required is placement of a marker into the vascular malformation. Termination ensues automatically as soon as the segmentation reaches the vessels which feed the aneurysm. The algorithm is derived from an adaptive region-growing which employs a growth gradient as criterion for termination. Based on this segmentation, values of high clinical and prognostic significance, such as volume, minimum and maximum diameter as well as surface of the aneurysm, are calculated automatically. The segmentation itself as well as the calculated diameters are visualised. Further segmentation of the adjoining vessels provides the means for visualisation of the topographical situation of vascular structures associated with the aneurysm. A stereolithographic mesh (STL) can be derived from the surface of the segmented volume. STL together with parameters like the resiliency of vascular wall tissue provide for an accurate wall model of the aneurysm and its associated vascular structures. Consequently, the haemodynamic situation in the aneurysm itself and close to it can be assessed by flow modelling. Significant values of haemodynamics such as pressure onto the vascular wall, wall shear stress or pathlines of the blood flow can be computed. Additionally, a dynamic flow model can be
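
    As a rough illustration of the kind of adaptive region-growing with a growth-based stopping rule described above (the paper's exact growth-gradient criterion is not reproduced; the intensity tolerance and the leak-detection heuristic below are assumptions):

      # Illustrative sketch: grow from a user-placed seed over similar voxels and
      # stop when the per-iteration growth suddenly explodes, i.e. when the region
      # is assumed to have leaked into the feeding vessels.
      import numpy as np
      from collections import deque

      def region_grow(volume, seed, tol=100, growth_jump=5.0):
          mask = np.zeros(volume.shape, dtype=bool)
          mask[seed] = True
          frontier = deque([seed])
          prev_added = 1
          while frontier:
              added = []
              for _ in range(len(frontier)):
                  z, y, x = frontier.popleft()
                  for dz, dy, dx in [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]:
                      p = (z + dz, y + dy, x + dx)
                      if all(0 <= p[i] < volume.shape[i] for i in range(3)) and not mask[p] \
                              and abs(float(volume[p]) - float(volume[seed])) < tol:
                          mask[p] = True
                          added.append(p)
              if len(added) > growth_jump * prev_added:
                  break  # growth gradient too steep: stop before flooding the vessels
              frontier.extend(added)
              prev_added = max(len(added), 1)
          return mask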

  18. The National Shipbuilding Research Program. Semi-Automatic Pipe Handling System and Fabrication Facility. Phase II - Implementation

    DTIC Science & Technology

    1983-03-01

    order to obtain proper welding results, the use of machine cutting is desirable. The various cutting machines required to process alloy mix of pipe ...burning has gone into the selection of the type of automatic welding equipment needed to process the mix of pipe going through the system. For welding ...straight pipe, rolling devices have been supplied incorporating automatic loading and unloading mechanisms controlled by pushbuttons. Automated welding

  19. Semi-automatic aortic aneurysm analysis

    NASA Astrophysics Data System (ADS)

    Bodur, Osman; Grady, Leo; Stillman, Arthur; Setser, Randolph; Funka-Lea, Gareth; O'Donnell, Thomas

    2007-03-01

    Aortic aneurysms are the 13th leading cause of death in the United States. In standard clinical practice, assessing the progression of disease in the aorta, as well as the risk of aneurysm rupture, is based on measurements of aortic diameter. We propose an accurate and fast method for automatically segmenting the aortic vessel border on CTA acquisitions, allowing the calculation of aortic diameters and leaving clinicians more time for their evaluations. While segmentation of aortic lumen is straightforward in CTA, segmentation of the outer vessel wall (epithelial layer) in a diseased aorta is difficult; furthermore, no clinical tool currently exists to perform this task. The difficulties are due to the similarities in intensity of surrounding tissue (and thrombus due to lack of contrast agent uptake), as well as the complications from bright calcium deposits. Our overall method makes use of a centerline for the purpose of resampling the image volume into slices orthogonal to the vessel path. This centerline is computed semi-automatically via a distance transform. The difficult task of automatically segmenting the aortic border on the orthogonal slices is performed via a novel variation of the isoperimetric algorithm which incorporates circular constraints (priors). Our method is embodied in a prototype which allows the loading and registration of two datasets simultaneously, facilitating longitudinal comparisons. Both the centerline and border segmentation algorithms were evaluated on four patients, each with two volumes acquired 6 months to 1.5 years apart, for a total of eight datasets. Results showed good agreement with clinicians' findings.
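
    A small sketch of a distance-transform-based centerline in the spirit of the semi-automatic centerline step above (the start and end points and the lumen mask are assumed to be given, e.g. by user clicks, and the paper's actual algorithm may differ):

      # Illustrative sketch: keep the path far from the vessel wall by making
      # voxels near the wall expensive and routing a minimum-cost path.
      import numpy as np
      from scipy.ndimage import distance_transform_edt
      from skimage.graph import route_through_array

      def centerline(lumen_mask, start, end):
          """lumen_mask: boolean 3D mask of the aortic lumen; start/end: voxel indices."""
          dist = distance_transform_edt(lumen_mask)   # distance to the vessel wall
          cost = 1.0 / (dist + 1e-3)                  # cheap far from the wall, expensive near it
          cost[~lumen_mask] = 1e6                     # strongly discourage leaving the lumen
          path, _ = route_through_array(cost, start, end, fully_connected=True)
          return np.array(path)                       # ordered voxel indices along the vessel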

  20. The National Shipbuilding Research Program. Proceedings of the REAPS Technical Symposium. Paper No. 6: Semi-Automatic Pipe Production in a Small Shipyard

    DTIC Science & Technology

    1979-09-01

    largest labour intensive process in ship construction - that of pipework. Recent changes in pipework processing range from the fully automatic...choices it can make regarding its pipework: - a) A fully automatic system supported by comprehensive computer programs b) A semi-automatic system

  1. A Semi-Automatic Variability Search

    NASA Astrophysics Data System (ADS)

    Maciejewski, G.; Niedzielski, A.

    Technical features of the Semi-Automatic Variability Search (SAVS) operating at the Astronomical Observatory of the Nicolaus Copernicus University and the results of the first year of observations are presented. The user-friendly software developed for reduction of acquired CCD images and detection of new variable stars is also described.

  2. Upgrade of a semi-automatic flow injection analysis system to a fully automatic one by means of a resident program

    PubMed Central

    Prodromidis, M. I.; Tsibiris, A. B.

    1995-01-01

    The program and the arrangement for a versatile, computer-controlled flow injection analysis system is described. A resident program (which can be run simultaneously and complementary to any other program) controls (on/off, speed, direction) a pump and a pneumatic valve (emptying and filling position). The system was designed to be simple and flexible for both research and routine work. PMID:18925039

  3. A web-based remote collaborative system for visualization and assessment of semi-automatic diagnosis of liver cancer from CT images.

    PubMed

    Branzan Albu, Alexandra; Laurendeau, Denis; Gurtner, Marco; Martel, Cedric

    2005-01-01

    We propose a web-based collaborative CAD system allowing for the remote communication and data exchange between radiologists and researchers in computer vision-based software engineering. The proposed web-based interface is implemented in the Java Advanced Imaging Application Programming Interface. The different modules of the interface allow for 3D and 2D data visualization, as well as for the parametric adjustment of 3D reconstruction process. The proposed web-based CAD system was tested in a pilot study involving a limited number of liver cancer cases. The successful system validation in the feasibility stage will lead to an extended clinical study on CT and MR image databases.

  4. Semi-automatic object geometry estimation for image personalization

    NASA Astrophysics Data System (ADS)

    Ding, Hengzhou; Bala, Raja; Fan, Zhigang; Eschbach, Reiner; Bouman, Charles A.; Allebach, Jan P.

    2010-01-01

    Digital printing brings about a host of benefits, one of which is the ability to create short runs of variable, customized content. One form of customization that is receiving much attention lately is in photofinishing applications, whereby personalized calendars, greeting cards, and photo books are created by inserting text strings into images. It is particularly interesting to estimate the underlying geometry of the surface and incorporate the text into the image content in an intelligent and natural way. Current solutions either allow fixed text insertion schemes into preprocessed images, or provide manual text insertion tools that are time consuming and aimed only at the high-end graphic designer. It would thus be desirable to provide some level of automation in the image personalization process. We propose a semi-automatic image personalization workflow which includes two scenarios: text insertion and text replacement. In both scenarios, the underlying surfaces are assumed to be planar. A 3-D pinhole camera model is used for rendering text, whose parameters are estimated by analyzing existing structures in the image. Techniques in image processing and computer vision such as the Hough transform, the bilateral filter, and connected component analysis are combined, along with necessary user inputs. In particular, the semi-automatic workflow is implemented as an image personalization tool, which is presented in our companion paper.1 Experimental results including personalized images for both scenarios are shown, which demonstrate the effectiveness of our algorithms.
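
    A brief sketch of one of the building blocks named above, detecting dominant straight edges with the Hough transform as cues for the planar surface geometry (OpenCV calls; the file name and all parameter values are assumptions, not the authors' settings):

      # Illustrative sketch: bilateral smoothing, Canny edges, then probabilistic
      # Hough line detection; the detected segments would constrain the pinhole
      # camera parameters used to render text onto the planar surface.
      import cv2
      import numpy as np

      img = cv2.imread("scene.jpg")  # hypothetical input image
      gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
      gray = cv2.bilateralFilter(gray, 9, 75, 75)  # edge-preserving smoothing
      edges = cv2.Canny(gray, 50, 150)
      lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                              minLineLength=60, maxLineGap=10)
      for x1, y1, x2, y2 in (lines.reshape(-1, 4) if lines is not None else []):
          cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 2)  # visualise candidate structure lines
      cv2.imwrite("structure_lines.jpg", img)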

  5. Neuromantic – from Semi-Manual to Semi-Automatic Reconstruction of Neuron Morphology

    PubMed Central

    Myatt, Darren R.; Hadlington, Tye; Ascoli, Giorgio A.; Nasuto, Slawomir J.

    2012-01-01

    The ability to create accurate geometric models of neuronal morphology is important for understanding the role of shape in information processing. Despite a significant amount of research on automating neuron reconstructions from image stacks obtained via microscopy, in practice most data are still collected manually. This paper describes Neuromantic, an open source system for three-dimensional digital tracing of neurites. Neuromantic reconstructions are comparable in quality to those of existing commercial and freeware systems while balancing speed and accuracy of manual reconstruction. The combination of semi-automatic tracing, intuitive editing, and the ability to visualize large image stacks on standard computing platforms provides a versatile tool that can help address the reconstruction availability bottleneck. Practical considerations for reducing the computational time and space requirements of the extended algorithm are also discussed. PMID:22438842

  6. Semi-automatic bowel wall thickness measurements on MR enterography in patients with Crohn's disease.

    PubMed

    Naziroglu, Robiel E; Puylaert, Carl A J; Tielbeek, Jeroen A W; Makanyanga, Jesica; Menys, Alex; Ponsioen, Cyriel Y; Hatzakis, Haralambos; Taylor, Stuart A; Stoker, Jaap; van Vliet, Lucas J; Vos, Frans M

    2017-06-01

    The proposed method facilitates reproducible delineation of regions with active Crohn's disease. The semi-automatic thickness measurement sustains significantly improved interobserver agreement. Advances in knowledge: Automation of bowel wall thickness measurements strongly increases reproducibility of these measurements, which are commonly used in MRI scoring systems of Crohn's disease activity.

  7. Semi-Automatic Digital Landform Mapping

    NASA Astrophysics Data System (ADS)

    Schneider, Martin; Klein, Reinhard

    In this paper a framework for landform mapping on digital aerial photos and elevation models is presented. The developed mapping tools are integrated in a real-time terrain visualization engine in order to improve the visual recovery and identification of objects. Moreover, semi-automatic image segmentation techniques are built into the mapping tools to make object specification faster and easier without reducing accuracy. Thus, the high level cognitive task of object identification is left to the user whereas the segmentation algorithm performs the low level task of capturing the fine details of the object boundary. In addition to that, the user is able to supply additional photos of regions of interest and to match them with the textured DEM. The matched photos do not only drastically increase the visual information content of the data set but also contribute to the mapping process. Using this additional information precise landform mapping becomes even possible at steep slopes although they are only insufficiently represented in aerial imagery. As proof of concept we mapped several geomorphological structures in a high alpine valley.

  8. A Semi-Automatic "Gauss" Program.

    ERIC Educational Resources Information Center

    Ehrlich, Amos

    1988-01-01

    The suggested program is designed for mathematics teachers. It concerns the solution of systems of linear equations, but it does not complete the job on its own. The user is the solver while the computer is merely a computing assistant. (MNS)

  9. A Semi-Automatic "Gauss" Program.

    ERIC Educational Resources Information Center

    Ehrlich, Amos

    1988-01-01

    The suggested program is designed for mathematics teachers. It concerns the solution of systems of linear equations, but it does not complete the job on its own. The user is the solver while the computer is merely a computing assistant. (MNS)

  10. United States Coast Guard (USCG) SSAMPS (Standard Semi-Automatic Message Processing System) Upgrade Network Studies Functional Analysis and Cost Report

    DTIC Science & Technology

    1988-07-01

    Configuration Control of Security Related and DSSCS Software (7) Standard Operational Procedures for Site Development of LDMX/NAVCOMPARS SCP's... AUG 1984 DSSCS Defense Special Security Communications System (Note 1) DAMA Demand Assigned Multiple Access (Note 1) DON Defense Data Network (Note 1)... DSSCS COMNAVTELCOM NAVTASC/NAVTELSYSIC COMNAVDAC FOTACS COMNAVTELCOM NAVTASC/NAVTELSYSIC IMC LDMX COMNAVTELCOM NAVTASC/NAVTELSYSIC COMNAVDAC MPOS

  11. Semi-automatic knee cartilage segmentation

    NASA Astrophysics Data System (ADS)

    Dam, Erik B.; Folkesson, Jenny; Pettersen, Paola C.; Christiansen, Claus

    2006-03-01

    Osteo-Arthritis (OA) is a very common age-related cause of pain and reduced range of motion. A central effect of OA is wear-down of the articular cartilage that otherwise ensures smooth joint motion. Quantification of the cartilage breakdown is central in monitoring disease progression and therefore cartilage segmentation is required. Recent advances allow automatic cartilage segmentation with high accuracy in most cases. However, the automatic methods still fail in some problematic cases. For clinical studies, even if a few failing cases will be averaged out in the overall results, this reduces the mean accuracy and precision and thereby necessitates larger/longer studies. Since the severe OA cases are often most problematic for the automatic methods, there is even a risk that the quantification will introduce a bias in the results. Therefore, interactive inspection and correction of these problematic cases is desirable. For diagnosis on individuals, this is even more crucial since the diagnosis will otherwise simply fail. We introduce and evaluate a semi-automatic cartilage segmentation method combining an automatic pre-segmentation with an interactive step that allows inspection and correction. The automatic step consists of voxel classification based on supervised learning. The interactive step combines a watershed transformation of the original scan with the posterior probability map from the classification step at sub-voxel precision. We evaluate the method for the task of segmenting the tibial cartilage sheet from low-field magnetic resonance imaging (MRI) of knees. The evaluation shows that the combined method allows accurate and highly reproducible correction of the segmentation of even the worst cases in approximately ten minutes of interaction.
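
    The interactive step described above can be pictured with a short sketch: a marker-based watershed run on the classifier's posterior probability map, with the markers standing in for the user's corrections (skimage is used here for illustration; the sub-voxel details of the paper's method are omitted):

      # Illustrative sketch: combine a voxel classifier's posterior probability
      # map with user-placed seeds via a marker-based watershed.
      import numpy as np
      from skimage.segmentation import watershed

      def correct_segmentation(posterior, cartilage_markers, background_markers):
          """posterior: P(cartilage) per voxel in [0, 1];
          cartilage_markers / background_markers: boolean arrays of user-placed seeds."""
          markers = np.zeros(posterior.shape, dtype=np.int32)
          markers[background_markers] = 1  # label 1 = background seeds
          markers[cartilage_markers] = 2   # label 2 = cartilage seeds
          # Flood the landscape 1 - posterior so basins follow high-probability cartilage.
          labels = watershed(1.0 - posterior, markers)
          return labels == 2               # corrected cartilage mask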

  12. Application of a semi-automatic cartilage segmentation method for biomechanical modeling of the knee joint.

    PubMed

    Liukkonen, Mimmi K; Mononen, Mika E; Tanska, Petri; Saarakkala, Simo; Nieminen, Miika T; Korhonen, Rami K

    2017-09-12

    Manual segmentation of articular cartilage from knee joint 3D magnetic resonance images (MRI) is a time consuming and laborious task. Thus, automatic methods are needed for faster and reproducible segmentations. In the present study, we developed a semi-automatic segmentation method based on radial intensity profiles to generate 3D geometries of knee joint cartilage which were then used in computational biomechanical models of the knee joint. Six healthy volunteers were imaged with a 3T MRI device and their knee cartilages were segmented both manually and semi-automatically. The values of cartilage thicknesses and volumes produced by these two methods were compared. Furthermore, the influences of possible geometrical differences on cartilage stresses and strains in the knee were evaluated with finite element modeling. The semi-automatic segmentation and 3D geometry construction of one knee joint (menisci, femoral and tibial cartilages) was approximately two times faster than with manual segmentation. Differences in cartilage thicknesses, volumes, contact pressures, stresses, and strains between segmentation methods in femoral and tibial cartilage were mostly insignificant (p > 0.05) and random, i.e. there were no systematic differences between the methods. In conclusion, the devised semi-automatic segmentation method is a quick and accurate way to determine cartilage geometries; it may become a valuable tool for biomechanical modeling applications with large patient groups.

  13. A semi-automatic annotation tool for cooking video

    NASA Astrophysics Data System (ADS)

    Bianco, Simone; Ciocca, Gianluigi; Napoletano, Paolo; Schettini, Raimondo; Margherita, Roberto; Marini, Gianluca; Gianforme, Giorgio; Pantaleo, Giuseppe

    2013-03-01

    In order to create a cooking assistant application to guide the users in the preparation of the dishes relevant to their profile diets and food preferences, it is necessary to accurately annotate the video recipes, identifying and tracking the foods of the cook. These videos present particular annotation challenges such as frequent occlusions, food appearance changes, etc. Manually annotating the videos is a time-consuming, tedious and error-prone task. Fully automatic tools that integrate computer vision algorithms to extract and identify the elements of interest are not error free, and false positive and false negative detections need to be corrected in a post-processing stage. We present an interactive, semi-automatic tool for the annotation of cooking videos that integrates computer vision techniques under the supervision of the user. The annotation accuracy is increased with respect to completely automatic tools and the human effort is reduced with respect to completely manual ones. The performance and usability of the proposed tool are evaluated on the basis of the time and effort required to annotate the same video sequences.

  14. Application of a Novel Semi-Automatic Technique for Determining the Bilateral Symmetry Plane of the Facial Skeleton of Normal Adult Males.

    PubMed

    Roumeliotis, Grayson; Willing, Ryan; Neuert, Mark; Ahluwalia, Romy; Jenkyn, Thomas; Yazdani, Arjang

    2015-09-01

    The accurate assessment of symmetry in the craniofacial skeleton is important for cosmetic and reconstructive craniofacial surgery. Although there have been several published attempts to develop an accurate system for determining the correct plane of symmetry, all are inaccurate and time consuming. Here, the authors applied a novel semi-automatic method for the calculation of craniofacial symmetry, based on principal component analysis and iterative corrective point computation, to a large sample of normal adult male facial computerized tomography scans obtained clinically (n = 32). The authors hypothesized that, when one side of the face was compared to the other, this method would generate planes of symmetry resulting in less error than a plane defined by cephalometric landmarks. When a three-dimensional model of one side of the face was reflected across the semi-automatic plane of symmetry there was less error than when reflected across the cephalometric plane. The semi-automatic plane was also more accurate when the locations of bilateral cephalometric landmarks (eg, frontozygomatic sutures) were compared across the face. The authors conclude that this method allows for accurate and fast measurements of craniofacial symmetry. This has important implications for studying the development of the facial skeleton, and clinical application for reconstruction.
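
    As an illustration of the principal-component part of the approach (the iterative corrective-point computation is not reproduced, and treating each principal axis as a candidate plane normal is an assumption), a symmetry plane for a facial point cloud could be estimated as follows:

      # Illustrative sketch: pick, among the principal axes of the point cloud,
      # the plane normal whose mirror image best matches the original points.
      import numpy as np
      from scipy.spatial import cKDTree

      def symmetry_plane(points):
          """points: (N, 3) coordinates sampled on the facial skeleton.
          Returns (centroid, normal) of the best candidate bilateral symmetry plane."""
          centroid = points.mean(axis=0)
          centered = points - centroid
          _, _, vt = np.linalg.svd(centered, full_matrices=False)
          tree = cKDTree(points)
          best = None
          for normal in vt:  # try each principal axis as the plane normal
              mirrored = points - 2.0 * np.outer(centered @ normal, normal)
              err = tree.query(mirrored)[0].mean()  # how well the mirror matches the original
              if best is None or err < best[0]:
                  best = (err, normal)
          return centroid, best[1]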

  15. Semi-automatic recognition of marine debris on beaches

    NASA Astrophysics Data System (ADS)

    Ge, Zhenpeng; Shi, Huahong; Mei, Xuefei; Dai, Zhijun; Li, Daoji

    2016-05-01

    An increasing amount of anthropogenic marine debris is pervading the earth’s environmental systems, resulting in an enormous threat to living organisms. Additionally, the large amount of marine debris around the world has been investigated mostly through tedious manual methods. Therefore, we propose the use of a new technique, light detection and ranging (LIDAR), for the semi-automatic recognition of marine debris on a beach because of its substantially more efficient role in comparison with other more laborious methods. Our results revealed that LIDAR should be used for the classification of marine debris into plastic, paper, cloth and metal. Additionally, we reconstructed a 3-dimensional model of different types of debris on a beach with a high validity of debris revivification using LIDAR-based individual separation. These findings demonstrate that the availability of this new technique enables detailed observations to be made of debris on a large beach that was previously not possible. It is strongly suggested that LIDAR could be implemented as an appropriate monitoring tool for marine debris by global researchers and governments.

  16. Semi-automatic development of Payload Operations Control Center software

    NASA Technical Reports Server (NTRS)

    Ballin, Sidney

    1988-01-01

    This report summarizes the current status of CTA's investigation of methods and tools for automating the software development process in NASA Goddard Space Flight Center, Code 500. The emphasis in this effort has been on methods and tools in support of software reuse. The most recent phase of the effort has been a domain analysis of Payload Operations Control Center (POCC) software. This report summarizes the results of the domain analysis, and proposes an approach to semi-automatic development of POCC Application Processor (AP) software based on these results. The domain analysis enabled us to abstract, from specific systems, the typical components of a POCC AP. We were also able to identify patterns in the way one AP might be different from another. These two perspectives (aspects that tend to change from AP to AP, and aspects that tend to remain the same) suggest an overall approach to the reuse of POCC AP software. We found that different parts of an AP require different development technologies. We propose a hybrid approach that combines constructive and generative technologies. Constructive methods emphasize the assembly of pre-defined reusable components. Generative methods provide for automated generation of software from specifications in a very-high-level language (VHLL).

  17. Semi-automatic recognition of marine debris on beaches

    PubMed Central

    Ge, Zhenpeng; Shi, Huahong; Mei, Xuefei; Dai, Zhijun; Li, Daoji

    2016-01-01

    An increasing amount of anthropogenic marine debris is pervading the earth’s environmental systems, resulting in an enormous threat to living organisms. Additionally, the large amount of marine debris around the world has been investigated mostly through tedious manual methods. Therefore, we propose the use of a new technique, light detection and ranging (LIDAR), for the semi-automatic recognition of marine debris on a beach because of its substantially more efficient role in comparison with other more laborious methods. Our results revealed that LIDAR should be used for the classification of marine debris into plastic, paper, cloth and metal. Additionally, we reconstructed a 3-dimensional model of different types of debris on a beach with a high validity of debris revivification using LIDAR-based individual separation. These findings demonstrate that the availability of this new technique enables detailed observations to be made of debris on a large beach that was previously not possible. It is strongly suggested that LIDAR could be implemented as an appropriate monitoring tool for marine debris by global researchers and governments. PMID:27156433

  18. Semi-automatic recognition of marine debris on beaches.

    PubMed

    Ge, Zhenpeng; Shi, Huahong; Mei, Xuefei; Dai, Zhijun; Li, Daoji

    2016-05-09

    An increasing amount of anthropogenic marine debris is pervading the earth's environmental systems, resulting in an enormous threat to living organisms. Additionally, the large amount of marine debris around the world has been investigated mostly through tedious manual methods. Therefore, we propose the use of a new technique, light detection and ranging (LIDAR), for the semi-automatic recognition of marine debris on a beach because of its substantially more efficient role in comparison with other more laborious methods. Our results revealed that LIDAR should be used for the classification of marine debris into plastic, paper, cloth and metal. Additionally, we reconstructed a 3-dimensional model of different types of debris on a beach with a high validity of debris revivification using LIDAR-based individual separation. These findings demonstrate that the availability of this new technique enables detailed observations to be made of debris on a large beach that was previously not possible. It is strongly suggested that LIDAR could be implemented as an appropriate monitoring tool for marine debris by global researchers and governments.

  19. Comparison between manual and semi-automatic segmentation of nasal cavity and paranasal sinuses from CT images.

    PubMed

    Tingelhoff, K; Moral, A I; Kunkel, M E; Rilk, M; Wagner, I; Eichhorn, K G; Wahl, F M; Bootz, F

    2007-01-01

    Segmentation of medical image data has become increasingly important in recent years. The results are used for diagnosis, surgical planning or workspace definition of robot-assisted systems. The purpose of this paper is to find out whether manual or semi-automatic segmentation is adequate for the ENT surgical workflow or whether fully automatic segmentation of paranasal sinuses and nasal cavity is needed. We present a comparison of manual and semi-automatic segmentation of paranasal sinuses and the nasal cavity. Manual segmentation is performed by custom software whereas semi-automatic segmentation is realized by a commercial product (Amira). For this study we used a CT dataset of the paranasal sinuses which consists of 98 transversal slices, each 1.0 mm thick, with a resolution of 512 x 512 pixels. For the analysis of both segmentation procedures we used volume, extension (width, length and height), segmentation time and 3D-reconstruction. The segmentation time was reduced from 960 minutes with manual to 215 minutes with semi-automatic segmentation. We found the highest variances when segmenting the nasal cavity. For the paranasal sinuses, manual and semi-automatic volume differences are not significant. Depending on the required segmentation accuracy, both approaches deliver useful results and could be used, e.g., for robot-assisted systems. Nevertheless, neither procedure is useful for the everyday surgical workflow, because both take too much time. Fully automatic and reproducible segmentation algorithms are needed for segmentation of paranasal sinuses and nasal cavity.

  20. A dorsolateral prefrontal cortex semi-automatic segmenter

    NASA Astrophysics Data System (ADS)

    Al-Hakim, Ramsey; Fallon, James; Nain, Delphine; Melonakos, John; Tannenbaum, Allen

    2006-03-01

    Structural, functional, and clinical studies in schizophrenia have, for several decades, consistently implicated dysfunction of the prefrontal cortex in the etiology of the disease. Functional and structural imaging studies, combined with clinical, psychometric, and genetic analyses in schizophrenia have confirmed the key roles played by the prefrontal cortex and closely linked "prefrontal system" structures such as the striatum, amygdala, mediodorsal thalamus, substantia nigra-ventral tegmental area, and anterior cingulate cortices. The nodal structure of the prefrontal system circuit is the dorsal lateral prefrontal cortex (DLPFC), or Brodmann area 46, which also appears to be the most commonly studied and cited brain area with respect to schizophrenia. 1, 2, 3, 4 In 1986, Weinberger et al. tied cerebral blood flow in the DLPFC to schizophrenia.1 In 2001, Perlstein et al. demonstrated that DLPFC activation is essential for working memory tasks commonly deficient in schizophrenia. 2 More recently, groups have linked morphological changes due to gene deletion and increased DLPFC glutamate concentration to schizophrenia. 3, 4 Despite the experimental and clinical focus on the DLPFC in structural and functional imaging, the variability of the location of this area, differences in opinion on exactly what constitutes DLPFC, and inherent difficulties in segmenting this highly convoluted cortical region have contributed to a lack of widely used standards for manual or semi-automated segmentation programs. Given these implications, we developed a semi-automatic tool to segment the DLPFC from brain MRI scans in a reproducible way to conduct further morphological and statistical studies. The segmenter is based on expert neuroanatomist rules (Fallon-Kindermann rules), inspired by cytoarchitectonic data and reconstructions presented by Rajkowska and Goldman-Rakic. 5 It is semi-automated to provide essential user interactivity. We present our results and provide details on

  1. Semi-automatic simulation model generation of virtual dynamic networks for production flow planning

    NASA Astrophysics Data System (ADS)

    Krenczyk, D.; Skolud, B.; Olender, M.

    2016-08-01

    Computer modelling, simulation and visualization of production flow make it possible to increase the efficiency of the production planning process in dynamic manufacturing networks. The use of a semi-automatic model generation concept, based on a parametric approach supporting production planning processes, is presented. The presented approach allows simulation and visualization to be used for the verification of production plans and of alternative topologies of manufacturing network configurations, as well as for the automatic generation of a series of production flow scenarios. Computational examples using the Enterprise Dynamics simulation software, comprising the steps of production planning and control for a manufacturing network, are also presented.

  2. Semi-Automatic Determination of Rockfall Trajectories

    PubMed Central

    Volkwein, Axel; Klette, Johannes

    2014-01-01

    Determining rockfall trajectories in the field is essential for calibrating and validating rockfall simulation software. This contribution presents an in situ device and a complementary Local Positioning System (LPS) that allow the determination of parts of the trajectory. An assembly of sensors (herein called rockfall sensor) is installed in the falling block, recording the 3D accelerations and rotational velocities. The LPS automatically calculates the position of the block along the slope over time based on Wi-Fi signals emitted from the rockfall sensor. The velocity of the block over time is determined through post-processing. The setup of the rockfall sensor is presented, followed by proposed calibration and validation procedures. The performance of the LPS is evaluated by means of different experiments. The results allow for a quality analysis of both the obtained field data and the usability of the rockfall sensor for future/further applications in the field. PMID:25268916

  3. Linking IT-Based Semi-Automatic Marking of Student Mathematics Responses and Meaningful Feedback to Pedagogical Objectives

    ERIC Educational Resources Information Center

    Wong, Khoon Yoong; Oh, Kwang-Shin; Ng, Qiu Ting Yvonne; Cheong, Jim Siew Kuan

    2012-01-01

    The purposes of an online system to auto-mark students' responses to mathematics test items are to expedite the marking process, to enhance consistency in marking and to alleviate teacher assessment workload. We propose that a semi-automatic marking and customizable feedback system better serves pedagogical objectives than a fully automatic one.…

  4. Linking IT-Based Semi-Automatic Marking of Student Mathematics Responses and Meaningful Feedback to Pedagogical Objectives

    ERIC Educational Resources Information Center

    Wong, Khoon Yoong; Oh, Kwang-Shin; Ng, Qiu Ting Yvonne; Cheong, Jim Siew Kuan

    2012-01-01

    The purposes of an online system to auto-mark students' responses to mathematics test items are to expedite the marking process, to enhance consistency in marking and to alleviate teacher assessment workload. We propose that a semi-automatic marking and customizable feedback system better serves pedagogical objectives than a fully automatic one.…

  5. User Interaction in Semi-Automatic Segmentation of Organs at Risk: a Case Study in Radiotherapy.

    PubMed

    Ramkumar, Anjana; Dolz, Jose; Kirisli, Hortense A; Adebahr, Sonja; Schimek-Jasch, Tanja; Nestle, Ursula; Massoptier, Laurent; Varga, Edit; Stappers, Pieter Jan; Niessen, Wiro J; Song, Yu

    2016-04-01

    Accurate segmentation of organs at risk is an important step in radiotherapy planning. Because manual segmentation is a tedious procedure and prone to inter- and intra-observer variability, there is a growing interest in automated segmentation methods. However, automatic methods frequently fail to provide satisfactory results, and post-processing corrections are often needed. Semi-automatic segmentation methods are designed to overcome these problems by combining physicians' expertise and computers' potential. This study evaluates two semi-automatic segmentation methods with different types of user interactions, named the "strokes" and the "contour", to provide insights into the role and impact of human-computer interaction. Two physicians participated in the experiment. In total, 42 case studies were carried out on five different types of organs at risk. For each case study, both the human-computer interaction process and the quality of the segmentation results were measured subjectively and objectively. Furthermore, different measures of the process and the results were correlated. A total of 36 quantifiable and ten non-quantifiable correlations were identified for each type of interaction. Among those pairs of measures, 20 of the contour method and 22 of the strokes method were strongly or moderately correlated, either directly or inversely. Based on those correlated measures, it is concluded that: (1) in the design of semi-automatic segmentation methods, user interactions need to be less cognitively challenging; (2) based on the observed workflows and preferences of physicians, there is a need for flexibility in the interface design; (3) the correlated measures provide insights that can be used in improving user interaction design.

  6. Implementation of a microcontroller-based semi-automatic coagulator.

    PubMed

    Chan, K; Kirumira, A; Elkateeb, A

    2001-01-01

    The coagulator is an instrument used in hospitals to detect clot formation as a function of time. Generally, these coagulators are very expensive and therefore not affordable for doctors' offices and small clinics. The objective of this project is to design and implement a low-cost semi-automatic coagulator (SAC) prototype. The SAC is capable of assaying up to 12 samples and can perform the following tests: prothrombin time (PT), activated partial thromboplastin time (APTT), and PT/APTT combination. The prototype has been tested successfully.

  7. Semi-automatic pusher machine leveler bar control and method

    SciTech Connect

    Berenato, J.W. III; Raivel, E.L. Jr.; Strepelis, J.J.

    1985-11-26

    A leveler bar for a coke oven is controlled in its travel toward and away from the coke side of the oven by semi-automatic means: during a "cycle" mode the leveler bar moves through half strokes, and during the "finish" mode it performs four "full" strokes, where a full stroke can mean a 3/4 distance of the full travel across the oven. Electrical circuitry which modifies or adds to prior-art circuitry to accomplish this leveler bar movement is included herein.

  8. Semi-automatic method for routine evaluation of fibrinolytic components

    PubMed Central

    Collen, D.; Tytgat, G.; Verstraete, M.

    1968-01-01

    A semi-automatic method for the routine evaluation of fibrinolytic activity is described. The principle is based upon graphic recording by a multichannel voltmeter of tension drops over a potentiometer, caused by variations in the influence of light upon a light-dependent resistance, resulting from modifications in the composition of the fibrin fibres by lysis. The method is applied to the assessment of certain fibrinolytic factors with widespread fibrinolytic endpoints, and the results are compared with simultaneously obtained visual data on the plasmin assay, the plasminogen assay, and on the euglobulin clot lysis time. Images PMID:4237924

  9. CT evaluation prior to transapical aortic valve replacement: semi-automatic versus manual image segmentation.

    PubMed

    Foldyna, Borek; Jungert, Camelia; Luecke, Christian; von Aspern, Konstantin; Boehmer-Lasthaus, Sonja; Rueth, Eva Maria; Grothoff, Matthias; Nitzsche, Stefan; Gutberlet, Matthias; Mohr, Friedrich Wilhelm; Lehmkuhl, Lukas

    2015-08-01

    To compare the performance of semi-automatic versus manual segmentation for ECG-triggered cardiovascular computed tomography (CT) examinations prior to transcatheter aortic valve replacement (TAVR), with focus on the speed and precision of experienced versus inexperienced observers. The preoperative ECG-triggered CT data of 30 consecutive patients who were scheduled for TAVR were included. All datasets were separately evaluated by two radiologists with 1 and 5 years of experience (novice and expert, respectively) in cardiovascular CT using an evaluation software program with or without a semi-automatic TAVR workflow. The time expended for data loading and all segmentation steps required for the implantation planning were assessed. Inter-software as well as inter-observer reliability analysis was performed. The CT datasets were successfully evaluated, with mean duration between 520.4 ± 117.6 s and 693.2 ± 159.5 s. The three most time-consuming steps were the 3D volume rendering, the measurement of aorta diameter and the sizing of the aortic annulus. Using semi-automatic segmentation, a novice could evaluate CT data approximately 12.3% faster than with manual segmentation, and an expert could evaluate CT data approximately 10.3% faster [mean differences of 85.4 ± 83.8 s (p < 0.001) and 59.8 ± 101 s (p < 0.001), respectively]. The inter-software reliability for a novice was slightly lower than for an expert; however, the reliability for a novice and expert was excellent (ICC 0.92, 95% CI 0.75-0.97/ICC 0.96, 95% CI 0.91-0.98). Automatic aortic annulus detection failed in two patients (6.7%). The study revealed excellent inter-software and inter-observer reliability, with a mean ICC of 0.95. TAVR evaluation can be accomplished significantly faster with semi-automatic rather than with manual segmentation, with comparable exactness, showing a benefit for experienced and inexperienced observers.

  10. Semi-automatic learning of simple diagnostic scores utilizing complexity measures.

    PubMed

    Atzmueller, Martin; Baumeister, Joachim; Puppe, Frank

    2006-05-01

    Knowledge acquisition and maintenance in medical domains with a large application domain ontology is a difficult task. To reduce knowledge elicitation costs, semi-automatic learning methods can be used to support the domain specialists. They are usually not only interested in the accuracy of the learned knowledge: the understandability and interpretability of the learned models is of prime importance as well. Simple models are therefore often more favorable than complex ones. We propose diagnostic scores as a promising approach for the representation of simple diagnostic knowledge, and present a method for inductive learning of diagnostic scores. It can be incrementally refined by including background knowledge. We present complexity measures for determining the complexity of the learned scores. We give an evaluation of the presented approach using a case base from the fielded system SonoConsult. We further discuss that the user can easily balance between accuracy and complexity of the learned knowledge by applying the presented measures. We argue that semi-automatic learning methods can support the domain specialist efficiently when building (diagnostic) knowledge systems from scratch. The presented complexity measures allow for an intuitive assessment of the learned patterns.
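
    A toy sketch of the knowledge representation discussed above, a simple additive diagnostic score over binary findings (the paper's inductive learning method and complexity measures are not reproduced; assigning small integer points from a prevalence difference is an assumption):

      # Illustrative sketch: each finding contributes a few integer points; the
      # diagnosis is suggested when the total exceeds a threshold.
      import numpy as np

      def learn_score(findings, diagnosis, max_points=3):
          """findings: (n_cases, n_findings) binary array; diagnosis: (n_cases,) binary array."""
          weights = []
          for j in range(findings.shape[1]):
              # Association of finding j with the diagnosis (difference of prevalences).
              assoc = findings[diagnosis == 1, j].mean() - findings[diagnosis == 0, j].mean()
              weights.append(int(round(assoc * max_points)))  # small integer points keep the score readable
          return np.array(weights)

      def apply_score(weights, case, threshold):
          return int(case @ weights >= threshold)  # suggest the diagnosis if the points exceed the threshold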

  11. Semi-automatic photo clustering with distance metric learning

    NASA Astrophysics Data System (ADS)

    Ji, Dinghuang; Wang, Meng; Tian, Qi; Hua, Xian-Sheng

    2010-07-01

    Photo clustering has been widely explored in many applications such as album management. However, automatic clustering can hardly achieve satisfactory performance due to the wide variety of photo content. This paper proposes a semi-automatic photo clustering scheme that attempts to improve clustering performance through user interaction. Users can adjust the results of automatic clustering, and a set of constraints among photos is generated accordingly. A distance metric is then learned from these constraints and clustering is re-run with this metric. We conduct experiments on different photo albums, and the experimental results demonstrate that our approach improves automatic photo clustering results and, by exploiting distance metric learning, outperforms a purely manual adjustment approach.
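
    A compact sketch of the metric-learning idea above: must-link and cannot-link pairs derived from the user's adjustments drive a simple diagonal (per-feature-weight) metric, which then replaces the plain Euclidean distance when clustering is re-run. The gradient update below is a stand-in, not the paper's learner:

      # Illustrative sketch: learn per-dimension weights from user constraints.
      import numpy as np

      def learn_diagonal_metric(X, must_link, cannot_link, lr=0.05, epochs=200, margin=1.0):
          """X: (n_photos, n_features); must_link / cannot_link: lists of index pairs from user feedback."""
          w = np.ones(X.shape[1])  # per-dimension weights of a diagonal metric
          for _ in range(epochs):
              for i, j in must_link:      # pull photos the user grouped together closer
                  w -= lr * (X[i] - X[j]) ** 2
              for i, j in cannot_link:    # push separated photos apart, up to a margin
                  if ((X[i] - X[j]) ** 2 * w).sum() < margin:
                      w += lr * (X[i] - X[j]) ** 2
              w = np.clip(w, 1e-6, None)  # keep the metric positive
          return w

      def weighted_distance(x, y, w):
          return np.sqrt(((x - y) ** 2 * w).sum())  # distance used when re-clustering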

  12. Semi-automatic parcellation of the corpus striatum

    NASA Astrophysics Data System (ADS)

    Al-Hakim, Ramsey; Nain, Delphine; Levitt, James; Shenton, Martha; Tannenbaum, Allen

    2007-03-01

    The striatum is the input component of the basal ganglia from the cerebral cortex. It includes the caudate, putamen, and nucleus accumbens. Thus, the striatum is an important component in limbic frontal-subcortical circuitry and is believed to be relevant both for reward-guided behaviors and for the expression of psychosis. The dorsal striatum is composed of the caudate and putamen, both of which are further subdivided into pre- and post-commissural components. The ventral striatum (VS) is primarily composed of the nucleus accumbens. The striatum can be functionally divided into three broad regions: 1) a limbic; 2) a cognitive and 3) a sensorimotor region. The approximate corresponding anatomic subregions for these 3 functional regions are: 1) the VS; 2) the pre/post-commissural caudate and the pre-commissural putamen and 3) the post-commissural putamen. We believe assessing these subregions, separately, in disorders with limbic and cognitive impairment such as schizophrenia may yield more informative group differences in comparison with normal controls than prior parcellation strategies of the striatum such as assessing the caudate and putamen. The manual parcellation of the striatum into these subregions is currently defined using certain landmark points and geometric rules. Since identification of these areas is important to clinical research, a reliable and fast parcellation technique is required. Currently, only full manual parcellation using editing software is available; however, this technique is extremely time intensive. Previous work has shown successful application of heuristic rules in a semi-automatic platform.1 We present here a semi-automatic algorithm which implements the rules currently used for manual parcellation of the striatum, but requires minimal user input and significantly reduces the time required for parcellation.

  13. Pulmonary lobar volumetry using novel volumetric computer-aided diagnosis and computed tomography

    PubMed Central

    Iwano, Shingo; Kitano, Mariko; Matsuo, Keiji; Kawakami, Kenichi; Koike, Wataru; Kishimoto, Mariko; Inoue, Tsutomu; Li, Yuanzhong; Naganawa, Shinji

    2013-01-01

    OBJECTIVES To compare the accuracy of pulmonary lobar volumetry using the conventional number of segments method and novel volumetric computer-aided diagnosis using 3D computed tomography images. METHODS We acquired 50 consecutive preoperative 3D computed tomography examinations for lung tumours reconstructed at 1-mm slice thicknesses. We calculated the lobar volume and the emphysematous lobar volume < −950 HU of each lobe using (i) the slice-by-slice method (reference standard), (ii) number of segments method, and (iii) semi-automatic and (iv) automatic computer-aided diagnosis. We determined Pearson correlation coefficients between the reference standard and the three other methods for lobar volumes and emphysematous lobar volumes. We also compared the relative errors among the three measurement methods. RESULTS Both semi-automatic and automatic computer-aided diagnosis results were more strongly correlated with the reference standard than the number of segments method. The correlation coefficients for automatic computer-aided diagnosis were slightly lower than those for semi-automatic computer-aided diagnosis because there was one outlier among 50 cases (2%) in the right upper lobe and two outliers among 50 cases (4%) in the other lobes. The number of segments method relative error was significantly greater than those for semi-automatic and automatic computer-aided diagnosis (P < 0.001). The computational time for automatic computer-aided diagnosis was 1/2 to 2/3 than that of semi-automatic computer-aided diagnosis. CONCLUSIONS A novel lobar volumetry computer-aided diagnosis system could more precisely measure lobar volumes than the conventional number of segments method. Because semi-automatic computer-aided diagnosis and automatic computer-aided diagnosis were complementary, in clinical use, it would be more practical to first measure volumes by automatic computer-aided diagnosis, and then use semi-automatic measurements if automatic computer
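
    The basic quantities being compared above can be written down directly; a short sketch, assuming a CT volume in Hounsfield units, a per-lobe mask produced by any of the methods, and the voxel spacing (array names are hypothetical):

      # Illustrative sketch: lobar volume and emphysematous volume below -950 HU.
      import numpy as np

      def lobar_volumes(ct_hu, lobe_mask, spacing_mm):
          """ct_hu: CT volume in HU; lobe_mask: boolean mask of one lobe; spacing_mm: (z, y, x) in mm."""
          voxel_ml = np.prod(spacing_mm) / 1000.0  # mm^3 per voxel converted to millilitres
          lobe_volume = lobe_mask.sum() * voxel_ml
          emphysema_volume = np.logical_and(lobe_mask, ct_hu < -950).sum() * voxel_ml
          return lobe_volume, emphysema_volume     # both in mL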

  14. Follow-up of multicentric HCC according to the mRECIST criteria: role of 320-Row CT with semi-automatic 3D analysis software for evaluating the response to systemic therapy

    PubMed Central

    TELEGRAFO, M.; DILORENZO, G.; DI GIOVANNI, G.; CORNACCHIA, I.; STABILE IANORA, A.A.; ANGELELLI, G.; MOSCHETTA, M.

    2016-01-01

    Aim To evaluate the role of 320-detector row computed tomography (MDCT) with 3D analysis software in the follow-up of patients affected by multicentric hepatocellular carcinoma (HCC) treated with systemic therapy, by using the modified response evaluation criteria in solid tumors (mRECIST). Patients and methods 38 patients affected by multicentric HCC underwent MDCT. All exams were performed before and after intravenous injection of iodinated contrast material using a 320-detector row CT device. CT images were analyzed by two radiologists using multi-planar reconstructions (MPR) in order to assess the response to systemic therapy according to mRECIST criteria: complete response (CR), partial response (PR), progressive disease (PD), stable disease (SD). 30 days later, the same two radiologists evaluated target lesion response to systemic therapy according to mRECIST criteria by using 3D analysis software. The difference between the two systems in assessing HCC response to therapy was assessed by analysis of variance (ANOVA). Interobserver agreement between the two radiologists using MPR images and 3D analysis software was calculated with Cohen's Kappa test. Results PR occurred in 10/38 cases (26%), PD in 6/38 (16%), SD in 22/38 (58%). The ANOVA test showed no statistically significant difference between the two systems for assessing target lesion response to therapy (p > 0.05). Inter-observer agreement (k) was 0.62 for MPR image measurements and 0.86 for 3D analysis measurements, respectively. Conclusions 3D analysis software provides a semiautomatic system for assessing target lesion response to therapy according to mRECIST criteria in patients affected by multifocal HCC treated with systemic therapy. The reliability of 3D analysis software makes it useful in clinical practice. PMID:28098056
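
    For reference, the target-lesion response categories used above can be sketched from the sum of diameters of viable (enhancing) target lesions; the 30%/20% cut-offs are the standard (m)RECIST thresholds, while the handling of new lesions and non-target disease is omitted here:

      # Illustrative sketch of (m)RECIST target-lesion response classification.
      def mrecist_target_response(baseline_sum_mm, followup_sum_mm, nadir_sum_mm=None):
          """Classify target-lesion response from sums of viable-tumour diameters (mm)."""
          nadir = nadir_sum_mm if nadir_sum_mm is not None else baseline_sum_mm
          if followup_sum_mm == 0:
              return "CR"  # disappearance of intratumoural enhancement in all target lesions
          if followup_sum_mm <= 0.7 * baseline_sum_mm:
              return "PR"  # at least 30% decrease versus baseline
          if followup_sum_mm >= 1.2 * nadir:
              return "PD"  # at least 20% increase versus the smallest recorded sum
          return "SD"

      print(mrecist_target_response(baseline_sum_mm=50, followup_sum_mm=30))  # -> "PR"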

  15. Semi-automatic 10/20 Identification Method for MRI-Free Probe Placement in Transcranial Brain Mapping Techniques.

    PubMed

    Xiao, Xiang; Zhu, Hao; Liu, Wei-Jie; Yu, Xiao-Ting; Duan, Lian; Li, Zheng; Zhu, Chao-Zhe

    2017-01-01

    The International 10/20 system is an important head-surface-based positioning system for transcranial brain mapping techniques, e.g., fNIRS and TMS. As guidance for probe placement, the 10/20 system permits both proper ROI coverage and spatial consistency among multiple subjects and experiments in a MRI-free context. However, the traditional manual approach to the identification of 10/20 landmarks faces problems in reliability and time cost. In this study, we propose a semi-automatic method to address these problems. First, a novel head surface reconstruction algorithm reconstructs head geometry from a set of points uniformly and sparsely sampled on the subject's head. Second, virtual 10/20 landmarks are determined on the reconstructed head surface in computational space. Finally, a visually-guided real-time navigation system guides the experimenter to each of the identified 10/20 landmarks on the physical head of the subject. Compared with the traditional manual approach, our proposed method provides a significant improvement both in reliability and time cost and thus could contribute to improving both the effectiveness and efficiency of 10/20-guided MRI-free probe placement.

  16. Semi-automatic 10/20 Identification Method for MRI-Free Probe Placement in Transcranial Brain Mapping Techniques

    PubMed Central

    Xiao, Xiang; Zhu, Hao; Liu, Wei-Jie; Yu, Xiao-Ting; Duan, Lian; Li, Zheng; Zhu, Chao-Zhe

    2017-01-01

    The International 10/20 system is an important head-surface-based positioning system for transcranial brain mapping techniques, e.g., fNIRS and TMS. As guidance for probe placement, the 10/20 system permits both proper ROI coverage and spatial consistency among multiple subjects and experiments in a MRI-free context. However, the traditional manual approach to the identification of 10/20 landmarks faces problems in reliability and time cost. In this study, we propose a semi-automatic method to address these problems. First, a novel head surface reconstruction algorithm reconstructs head geometry from a set of points uniformly and sparsely sampled on the subject's head. Second, virtual 10/20 landmarks are determined on the reconstructed head surface in computational space. Finally, a visually-guided real-time navigation system guides the experimenter to each of the identified 10/20 landmarks on the physical head of the subject. Compared with the traditional manual approach, our proposed method provides a significant improvement both in reliability and time cost and thus could contribute to improving both the effectiveness and efficiency of 10/20-guided MRI-free probe placement. PMID:28190997

  17. Quantitative analysis of the patellofemoral motion pattern using semi-automatic processing of 4D CT data.

    PubMed

    Forsberg, Daniel; Lindblom, Maria; Quick, Petter; Gauffin, Håkan

    2016-09-01

    To present a semi-automatic method with minimal user interaction for quantitative analysis of the patellofemoral motion pattern. 4D CT data capturing the patellofemoral motion pattern of a continuous flexion and extension were collected for five patients prone to patellar luxation both pre- and post-surgically. For the proposed method, an observer would place landmarks in a single 3D volume, which are then automatically propagated to the other volumes in the time sequence. From the landmarks in each volume, the measures patellar displacement, patellar tilt and angle between femur and tibia were computed. Evaluation of the observer variability showed the proposed semi-automatic method to be favorable over a fully manual counterpart, with an observer variability of approximately 1.5° for the angle between femur and tibia, 1.5 mm for the patellar displacement, and 4.0°-5.0° for the patellar tilt. The proposed method showed that surgery reduced the patellar displacement and tilt at maximum extension by approximately 10-15 mm and 15°-20° for three patients, but with less evident differences for two of the patients. A semi-automatic method suitable for quantification of the patellofemoral motion pattern as captured by 4D CT data has been presented. Its observer variability is on par with that of other methods but with the distinct advantage of supporting continuous motions during the image acquisition.
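
    A small sketch of how the reported measures can be derived from propagated 3D landmarks, with the femorotibial angle taken as the angle between two axis vectors and the patellar displacement as a projection onto a medio-lateral axis (the landmark names and definitions below are assumptions, not the study's):

      # Illustrative sketch: compute the femorotibial angle and a patellar
      # displacement from hypothetical landmark coordinates of one time frame.
      import numpy as np

      def angle_deg(u, v):
          cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
          return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

      def knee_measures(lm):
          """lm: dict mapping landmark names to 3D coordinates for one time frame."""
          femur_axis = lm["femur_proximal"] - lm["femur_distal"]
          tibia_axis = lm["tibia_proximal"] - lm["tibia_distal"]
          flexion_angle = angle_deg(femur_axis, tibia_axis)  # angle between femur and tibia
          # Patellar displacement: patella centre projected onto the medio-lateral axis.
          ml_axis = lm["lateral_epicondyle"] - lm["medial_epicondyle"]
          ml_axis = ml_axis / np.linalg.norm(ml_axis)
          displacement = float((lm["patella_center"] - lm["trochlear_groove"]) @ ml_axis)
          return flexion_angle, displacement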

  18. Human Identification Using Automatic and Semi-Automatically Detected Facial Marks.

    PubMed

    Srinivas, Nisha; Flynn, Patrick J; Vorder Bruegge, Richard W

    2016-01-01

    Continuing advancements in the field of digital cameras and surveillance imaging devices have led law enforcement and intelligence agencies to use analysis of images and videos for the investigation and prosecution of crime. When determining identity from photographic evidence, forensic analysts perform comparison of visible facial features manually, which is inefficient. In this study, we address research efforts to use facial marks as biometric signatures to distinguish between individuals. We propose two systems to assist forensic analysts during photographic comparison: an improved multiscale facial mark system in which facial marks are detected automatically, and a semi-automatic facial mark system that integrates human knowledge within the improved multiscale facial mark system. Experiments employ a high-resolution time-elapsed dataset acquired at the University of Notre Dame between 2009 and 2011. The results indicate that the geometric distributions of facial mark patterns can be used to distinguish between individuals. © 2015 American Academy of Forensic Sciences.

  19. Semi-automatic conversion of BioProp semantic annotation to PASBio annotation.

    PubMed

    Tsai, Richard Tzong-Han; Dai, Hong-Jie; Huang, Chi-Hsin; Hsu, Wen-Lian

    2008-12-12

    Semantic role labeling (SRL) is an important text analysis technique. In SRL, sentences are represented by one or more predicate-argument structures (PAS). Each PAS is composed of a predicate (verb) and several arguments (noun phrases, adverbial phrases, etc.) with different semantic roles, including main arguments (agent or patient) as well as adjunct arguments (time, manner, or location). PropBank is the most widely used PAS corpus and annotation format in the newswire domain. In the biomedical field, however, more detailed and restrictive PAS annotation formats such as PASBio are popular. Unfortunately, due to the lack of an annotated PASBio corpus, no publicly available machine-learning (ML) based SRL systems based on PASBio have been developed. In previous work, we constructed a biomedical corpus based on the PropBank standard called BioProp, on which we developed an ML-based SRL system, BIOSMILE. In this paper, we aim to build a system to convert BIOSMILE's BioProp annotation output to PASBio annotation. Our system consists of BIOSMILE in combination with a BioProp-PASBio rule-based converter, and an additional semi-automatic rule generator. Our first experiment evaluated our rule-based converter's performance independently of BIOSMILE's performance. The converter achieved an F-score of 85.29%. The second experiment evaluated the combined system (BIOSMILE + rule-based converter). The system achieved an F-score of 69.08% for PASBio's 29 verbs. Our approach allows PAS conversion between BioProp and PASBio annotation using BIOSMILE alongside our newly developed semi-automatic rule generator and rule-based converter. Our system can match the performance of other state-of-the-art domain-specific ML-based SRL systems and can be easily customized for PASBio application development.

  20. Semi-automatic conversion of BioProp semantic annotation to PASBio annotation

    PubMed Central

    Tsai, Richard Tzong-Han; Dai, Hong-Jie; Huang, Chi-Hsin; Hsu, Wen-Lian

    2008-01-01

    Background Semantic role labeling (SRL) is an important text analysis technique. In SRL, sentences are represented by one or more predicate-argument structures (PAS). Each PAS is composed of a predicate (verb) and several arguments (noun phrases, adverbial phrases, etc.) with different semantic roles, including main arguments (agent or patient) as well as adjunct arguments (time, manner, or location). PropBank is the most widely used PAS corpus and annotation format in the newswire domain. In the biomedical field, however, more detailed and restrictive PAS annotation formats such as PASBio are popular. Unfortunately, due to the lack of an annotated PASBio corpus, no publicly available machine-learning (ML) based SRL systems based on PASBio have been developed. In previous work, we constructed a biomedical corpus based on the PropBank standard called BioProp, on which we developed an ML-based SRL system, BIOSMILE. In this paper, we aim to build a system to convert BIOSMILE's BioProp annotation output to PASBio annotation. Our system consists of BIOSMILE in combination with a BioProp-PASBio rule-based converter, and an additional semi-automatic rule generator. Results Our first experiment evaluated our rule-based converter's performance independently of BIOSMILE's performance. The converter achieved an F-score of 85.29%. The second experiment evaluated the combined system (BIOSMILE + rule-based converter). The system achieved an F-score of 69.08% for PASBio's 29 verbs. Conclusion Our approach allows PAS conversion between BioProp and PASBio annotation using BIOSMILE alongside our newly developed semi-automatic rule generator and rule-based converter. Our system can match the performance of other state-of-the-art domain-specific ML-based SRL systems and can be easily customized for PASBio application development. PMID:19091017

  1. Semi-automatic determination of lead in whole blood

    PubMed Central

    Delves, H. T.; Vinter, P.

    1966-01-01

    The procedure developed by Browett and Moss (1964) for the semi-automatic determination of the lead content of urine has been adapted for the determination of lead in blood. Determinations are normally carried out in duplicate on 2.0 ml. samples of whole blood and the minimum sample size is 0.5 ml. The organic substances present in blood are destroyed by a manual wet-oxidation procedure and the lead is determined colorimetrically as lead dithizonate using a Technicon AutoAnalyzer. The lower limit of detection, expressed as three times the standard deviation of the blank value, is 5 μg. Pb/100 ml. blood. The standard deviation of the method in the upper range of normal blood lead level of 30 μg. Pb/100 ml. blood (Moncrieff, Koumides, Clayton, Patrick, Renwick, and Roberts, 1964), is ± 3 μg. Pb/100 ml. blood. Ten samples per hour may be estimated in duplicate. PMID:5919367

  2. Sherlock: A Semi-automatic Framework for Quiz Generation Using a Hybrid Semantic Similarity Measure.

    PubMed

    Lin, Chenghua; Liu, Dong; Pang, Wei; Wang, Zhe

    In this paper, we present a semi-automatic system (Sherlock) for quiz generation using linked data and textual descriptions of RDF resources. Sherlock is distinguished from existing quiz generation systems by its generic framework for domain-independent quiz generation as well as by its ability to control the difficulty level of the generated quizzes. Difficulty scaling is non-trivial, and it is fundamentally related to cognitive science. We approach the problem from a new angle by perceiving the level of knowledge difficulty as a similarity measure problem and propose a novel hybrid semantic similarity measure using linked data. Extensive experiments show that the proposed semantic similarity measure outperforms four strong baselines with more than 47% gain in clustering accuracy. In addition, the human quiz test showed that model accuracy is strongly correlated with pairwise quiz similarity.

  3. a New Approach for the Semi-Automatic Texture Generation of the Buildings Facades, from Terrestrial Laser Scanner Data

    NASA Astrophysics Data System (ADS)

    Oniga, E.

    2012-07-01

    The result of terrestrial laser scanning is an impressive number of spatial points, each of them characterized by its position (the X, Y and Z co-ordinates), by the value of the laser reflectance and by its real color, expressed as RGB (Red, Green, Blue) values. The color code for each LIDAR point is taken from the georeferenced digital images acquired with a high-resolution panoramic camera incorporated in the scanner system. In this article I propose a new algorithm for semi-automatic texture generation, using the color information, i.e. the RGB values of every point acquired by terrestrial laser scanning technology, and the 3D surfaces defining the building facades, generated with the Leica Cyclone software. In the first step, the operator defines the limiting value, i.e. the minimum distance between a point and the closest surface. The second step consists in calculating the distances, i.e. the perpendiculars drawn from each point to the closest surface. In the third step, each point with known 3D coordinates is associated with a surface, depending on the limiting value. The fourth step consists in computing the Voronoi diagram for the points that belong to a surface. In the final step, the RGB color code of each point is automatically associated with the corresponding polygon of the Voronoi diagram. The advantage of this algorithm is that a photorealistic 3D model of the building can be obtained in a semi-automatic manner.
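
    As a rough illustration of steps two through five, the sketch below assigns scanned points to a single facade plane, builds their Voronoi diagram and attaches an RGB value to each finite cell; the plane definition, the limiting value and the points themselves are made-up placeholders, and the axis-aligned projection into the plane is a simplifying assumption rather than part of the published algorithm.

    ```python
    import numpy as np
    from scipy.spatial import Voronoi

    # Hypothetical inputs: scanned points with RGB, and one facade plane (point + unit normal).
    points = np.random.rand(500, 3) * [10.0, 10.0, 0.2]       # X, Y, Z in metres
    colors = np.random.randint(0, 256, size=(500, 3))         # RGB per point
    plane_point, plane_normal = np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])
    limit = 0.05                                               # operator-defined limiting value (m)

    # Steps 2-3: perpendicular distance of every point to the facade plane,
    # keeping only points closer than the limiting value.
    dist = np.abs((points - plane_point) @ plane_normal)
    on_facade = dist < limit
    facade_pts, facade_rgb = points[on_facade], colors[on_facade]

    # Project the retained points into the facade plane (here simply dropping Z,
    # assuming an axis-aligned facade) and compute their Voronoi diagram (step 4).
    uv = facade_pts[:, :2]
    vor = Voronoi(uv)

    # Step 5: each finite Voronoi region inherits the RGB value of its generating point.
    for point_idx, region_idx in enumerate(vor.point_region):
        region = vor.regions[region_idx]
        if region and -1 not in region:                        # finite cell only
            polygon = vor.vertices[region]
            rgb = facade_rgb[point_idx]
            # polygon + rgb would be written to the textured 3D model here
    ```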

  4. A semi-automatic web based tool for the selection of research projects reviewers.

    PubMed

    Pupella, Valeria; Monteverde, Maria Eugenia; Lombardo, Claudio; Belardelli, Filippo; Giacomini, Mauro

    2014-01-01

    The correct evaluation of research proposals remains problematic today, and in many cases grants and fellowships are subject to this type of assessment. A web-based semi-automatic tool to help in the selection of reviewers was developed. The core of the proposed system is the matching of the MeSH Descriptors of the publications submitted by the reviewers (for their accreditation) with the Descriptors linked to the selected research keywords. Moreover, a citation-related index was calculated and used to discard unsuitable reviewers. This tool was used as a support in a website for the evaluation of candidates applying for a fellowship in the oncology field.
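
    The descriptor-matching idea can be illustrated with a small sketch; the Jaccard overlap score and the h-index cut-off below are illustrative stand-ins for the paper's actual matching rule and citation-related index, and the descriptor codes and reviewer names are hypothetical.

    ```python
    # Jaccard overlap between a reviewer's MeSH descriptors and the proposal's descriptors.
    def reviewer_score(reviewer_descriptors, proposal_descriptors):
        r, p = set(reviewer_descriptors), set(proposal_descriptors)
        return len(r & p) / len(r | p) if (r | p) else 0.0

    reviewers = {
        "rev_A": {"mesh": {"D009369", "D000970"}, "h_index": 22},   # Neoplasms, Antineoplastic Agents
        "rev_B": {"mesh": {"D003920", "D007333"}, "h_index": 31},   # Diabetes Mellitus, Insulin Resistance
    }
    proposal_mesh = {"D009369", "D000970", "D016543"}

    MIN_H_INDEX = 10   # citation-related threshold used to discard unsuitable reviewers
    ranked = sorted(
        ((name, reviewer_score(info["mesh"], proposal_mesh))
         for name, info in reviewers.items() if info["h_index"] >= MIN_H_INDEX),
        key=lambda x: x[1], reverse=True,
    )
    print(ranked)   # e.g. [('rev_A', 0.67), ('rev_B', 0.0)]
    ```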

  5. Semi-automatic development of optimized surgical simulator with surgical manuals.

    PubMed

    Kuroda, Yoshihiro; Takemura, Tadamasa; Kume, Naoto; Okamoto, Kazuya; Hori, Kenta; Nakao, Megumi; Kuroda, Tomohiro; Yoshihara, Hiroyuki

    2007-01-01

    Recently, simulation platforms and libraries have been provided by several research groups. However, developing a VR-based surgical simulator takes much effort, not only for implementing simulation modules but also for setting up the surgical environment and choosing simulation modules. A surgical manual describes the knowledge of the manipulations in a surgical procedure. In this study, language processing is used to extract the anatomical objects and surgical manipulations in a scene from the surgical manual. In addition, benchmarking and level-of-detail (LOD) control of the simulation modules optimize the simulation. We propose a framework for the semi-automatic development of an optimized simulator from surgical manuals. In the framework, SVM-based machine learning is adopted to extract the surgical information, which is stored in an XML file. Simulation programs are then created from the XML file using a simulation library under different system configurations.

  6. Quantitative evaluation of background parenchymal enhancement (BPE) on breast MRI. A feasibility study with a semi-automatic and automatic software compared to observer-based scores

    PubMed Central

    Bignotti, Bianca; Tagliafico, Giulio; Tosto, Simona; Signori, Alessio; Calabrese, Massimo

    2015-01-01

    Objective: To evaluate quantitative measurements of background parenchymal enhancement (BPE) on breast MRI and compare them with observer-based scores. Methods: BPE of 48 patients (mean age: 48 years; age range: 36–66 years) referred to 3.0-T breast MRI between 2012 and 2014 was evaluated independently and blindly to each other by two radiologists. BPE was estimated qualitatively with the standard Breast Imaging Reporting and Data System (BI-RADS) scale and quantitatively with a semi-automatic and an automatic software interface. To assess intrareader agreement, MRIs were re-read after a 4-month interval by the same two readers. The Pearson correlation coefficient (r) and the Bland–Altman method were used to compare the methods used to estimate BPE. p-value <0.05 was considered significant. Results: The mean value of BPE with the semi-automatic software evaluated by each reader was 14% (range: 2–79%) for Reader 1 and 16% (range: 1–61%) for Reader 2 (p > 0.05). Mean values of BPE percentages for the automatic software were 17.5 ± 13.1 (p > 0.05 vs semi-automatic). The automatic software was unable to produce BPE values for 2 of 48 (4%) patients. With BI-RADS, interreader and intrareader values were κ = 0.70 [95% confidence interval (CI) 0.49–0.91] and κ = 0.69 (95% CI 0.46–0.93), respectively. With semi-automated software, interreader and intrareader values were κ = 0.81 (95% CI 0.59–0.99) and κ = 0.85 (95% CI 0.43–0.99), respectively. BI-RADS scores correlated with the automatic (r = 0.55, p < 0.001) and semi-automatic scores (r = 0.60, p < 0.001). Automatic scores correlated with the semi-automatic scores (r = 0.77, p < 0.001). The mean percentage difference between automatic and semi-automatic scores was 3.5% (95% CI 1.5–5.2). Conclusion: BPE quantitative evaluation is feasible with both semi-automatic and automatic software and correlates with radiologists' estimation. Advances in

  7. Quantitative evaluation of background parenchymal enhancement (BPE) on breast MRI. A feasibility study with a semi-automatic and automatic software compared to observer-based scores.

    PubMed

    Tagliafico, Alberto; Bignotti, Bianca; Tagliafico, Giulio; Tosto, Simona; Signori, Alessio; Calabrese, Massimo

    2015-01-01

    To evaluate quantitative measurements of background parenchymal enhancement (BPE) on breast MRI and compare them with observer-based scores. BPE of 48 patients (mean age: 48 years; age range: 36-66 years) referred to 3.0-T breast MRI between 2012 and 2014 was evaluated independently by two radiologists, each blinded to the other's readings. BPE was estimated qualitatively with the standard Breast Imaging Reporting and Data System (BI-RADS) scale and quantitatively with a semi-automatic and an automatic software interface. To assess intrareader agreement, MRIs were re-read after a 4-month interval by the same two readers. The Pearson correlation coefficient (r) and the Bland-Altman method were used to compare the methods used to estimate BPE. p-value <0.05 was considered significant. The mean value of BPE with the semi-automatic software evaluated by each reader was 14% (range: 2-79%) for Reader 1 and 16% (range: 1-61%) for Reader 2 (p > 0.05). Mean values of BPE percentages for the automatic software were 17.5 ± 13.1 (p > 0.05 vs semi-automatic). The automatic software was unable to produce BPE values for 2 of 48 (4%) patients. With BI-RADS, interreader and intrareader values were κ = 0.70 [95% confidence interval (CI) 0.49-0.91] and κ = 0.69 (95% CI 0.46-0.93), respectively. With semi-automated software, interreader and intrareader values were κ = 0.81 (95% CI 0.59-0.99) and κ = 0.85 (95% CI 0.43-0.99), respectively. BI-RADS scores correlated with the automatic (r = 0.55, p < 0.001) and semi-automatic scores (r = 0.60, p < 0.001). Automatic scores correlated with the semi-automatic scores (r = 0.77, p < 0.001). The mean percentage difference between automatic and semi-automatic scores was 3.5% (95% CI 1.5-5.2). BPE quantitative evaluation is feasible with both semi-automatic and automatic software and correlates with radiologists' estimation. Computerized BPE quantitative evaluation is feasible with both semi-automatic
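
    For readers unfamiliar with the statistics used, the following is a minimal sketch of the Pearson correlation and the Bland-Altman limits of agreement applied to paired BPE estimates; the numbers are synthetic and are not data from the study.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical paired BPE estimates (%) from the semi-automatic and automatic software.
    semi_auto = np.array([12.0, 18.5, 7.3, 25.1, 14.8, 31.0, 9.9, 22.4])
    auto      = np.array([14.2, 20.1, 9.0, 27.9, 16.5, 35.2, 11.1, 24.0])

    r, p_value = stats.pearsonr(semi_auto, auto)

    # Bland-Altman statistics: bias and 95% limits of agreement of the paired differences.
    diff = auto - semi_auto
    bias = diff.mean()
    loa_low, loa_high = bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1)

    print(f"Pearson r = {r:.2f} (p = {p_value:.3g})")
    print(f"bias = {bias:.1f}%, 95% limits of agreement = [{loa_low:.1f}%, {loa_high:.1f}%]")
    ```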

  8. Does semi-automatic bone-fragment segmentation improve the reproducibility of the Letournel acetabular fracture classification?

    PubMed

    Boudissa, M; Orfeuvre, B; Chabanas, M; Tonetti, J

    2017-09-01

    The Letournel classification of acetabular fracture shows poor reproducibility in inexperienced observers, despite the introduction of 3D imaging. We therefore developed a method of semi-automatic segmentation based on CT data. The present prospective study aimed to assess: (1) whether semi-automatic bone-fragment segmentation increased the rate of correct classification; (2) if so, in which fracture types; and (3) feasibility using the open-source itksnap 3.0 software package without incurring extra cost for users. Semi-automatic segmentation of acetabular fractures significantly increases the rate of correct classification by orthopedic surgery residents. Twelve orthopedic surgery residents classified 23 acetabular fractures. Six used conventional 3D reconstructions provided by the center's radiology department (conventional group) and 6 others used reconstructions obtained by semi-automatic segmentation using the open-source itksnap 3.0 software package (segmentation group). Bone fragments were identified by specific colors. Correct classification rates were compared between groups using the chi-square test. Assessment was repeated 2 weeks later, to determine intra-observer reproducibility. Correct classification rates were significantly higher in the "segmentation" group: 114/138 (83%) versus 71/138 (52%); P<0.0001. The difference was greater for simple (36/36 (100%) versus 17/36 (47%); P<0.0001) than complex fractures (79/102 (77%) versus 54/102 (53%); P=0.0004). Mean segmentation time per fracture was 27 ± 3 min [range, 21-35 min]. The segmentation group showed excellent intra-observer correlation coefficients, overall (ICC=0.88), and for simple (ICC=0.92) and complex fractures (ICC=0.84). Semi-automatic segmentation, identifying the various bone fragments, was effective in increasing the rate of correct acetabular fracture classification on the Letournel system by orthopedic surgery residents. It may be considered for routine use in education and training. Level of evidence: III.

  9. An integrated radiation physics computer code system.

    NASA Technical Reports Server (NTRS)

    Steyn, J. J.; Harris, D. W.

    1972-01-01

    An integrated computer code system for the semi-automatic and rapid analysis of experimental and analytic problems in gamma photon and fast neutron radiation physics is presented. Such problems as the design of optimum radiation shields and radioisotope power source configurations may be studied. The system codes allow for the unfolding of complex neutron and gamma photon experimental spectra. Monte Carlo and analytic techniques are used for the theoretical prediction of radiation transport. The system includes a multichannel pulse-height analyzer scintillation and semiconductor spectrometer coupled to an on-line digital computer with appropriate peripheral equipment. The system is geometry generalized as well as self-contained with respect to material nuclear cross sections and the determination of the spectrometer response functions. Input data may be either analytic or experimental.

  11. Semi-automatic system for ultrasonic measurement of texture

    DOEpatents

    Thompson, R. Bruce; Wormley, Samuel J.

    1991-09-17

    A means and method are provided for the non-destructive and efficient ultrasonic measurement of texture. Texture characteristics are derived by transmitting ultrasound energy into the material, measuring the time it takes to be received by ultrasound receiving means, and calculating the velocity of the ultrasound energy from the timed measurements. Texture characteristics can then be derived from the velocity calculations. One or more sets of ultrasound transmitters and receivers are utilized to derive velocity measurements in different angular orientations through the material and in different ultrasound modes. An ultrasound transmitter is utilized to direct ultrasound energy to the material and one or more ultrasound receivers are utilized to receive the same. The receivers are at a predetermined fixed distance from the transmitter. A control means is utilized to control transmission of the ultrasound, and a processing means derives timing, calculation of velocity and derivation of texture characteristics.
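
    A minimal sketch of the timing-to-velocity step described above is given below; the transducer spacing, times of flight and the use of velocity anisotropy as a texture indicator are illustrative assumptions, not figures or definitions from the patent.

    ```python
    # Convert time-of-flight measurements at several orientations into velocities.
    transducer_spacing_mm = 50.0                       # fixed transmitter-receiver distance
    time_of_flight_us = {                              # arrival times at several orientations
        "0_deg": 15.6,
        "45_deg": 15.9,
        "90_deg": 16.3,
    }

    velocities = {angle: transducer_spacing_mm / t     # mm/us == km/s
                  for angle, t in time_of_flight_us.items()}

    # The spread of velocity between orientations serves here as the texture indicator.
    anisotropy = max(velocities.values()) - min(velocities.values())
    print(velocities, f"velocity anisotropy: {anisotropy:.3f} km/s")
    ```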

  12. Semi-automatic system for ultrasonic measurement of texture

    DOEpatents

    Thompson, R.B.; Wormley, S.J.

    1991-09-17

    A means and method are disclosed for ultrasonic measurement of texture nondestructively and efficiently. Texture characteristics are derived by transmitting ultrasound energy into the material, measuring the time it takes to be received by ultrasound receiving means, and calculating the velocity of the ultrasound energy from the timed measurements. Texture characteristics can then be derived from the velocity calculations. One or more sets of ultrasound transmitters and receivers are utilized to derive velocity measurements in different angular orientations through the material and in different ultrasound modes. An ultrasound transmitter is utilized to direct ultrasound energy to the material and one or more ultrasound receivers are utilized to receive the same. The receivers are at a predetermined fixed distance from the transmitter. A control means is utilized to control transmission of the ultrasound, and a processing means derives timing, calculation of velocity and derivation of texture characteristics. 5 figures.

  13. Evaluation of Semi-automatic Segmentation Methods for Persistent Ground Glass Nodules on Thin-Section CT Scans

    PubMed Central

    Kim, Young Jae; Lee, Seung Hyun; Park, Chang Min

    2016-01-01

    Objectives: This work was a comparative study that aimed to find a proper method for accurately segmenting persistent ground glass nodules (GGN) in thin-section computed tomography (CT) images after detecting them. Methods: To do this, we first applied five types of semi-automatic segmentation methods (i.e., level-set-based active contour model, localized region-based active contour model, seeded region growing, K-means clustering, and fuzzy C-means clustering) to preprocessed GGN images. Then, to measure similarity, we calculated the Dice coefficient between the area segmented by each semi-automatic method and the area segmented manually by two radiologists. Results: Comparison experiments were performed using 40 persistent GGNs. In our experiment, the mean Dice coefficient between each semi-automatic segmentation method and the manually segmented area was 0.808 for the level-set-based active contour model, 0.8001 for the localized region-based active contour model, 0.629 for seeded region growing, 0.7953 for K-means clustering, and 0.7999 for fuzzy C-means clustering. Conclusions: The level-set-based active contour model algorithm showed the best performance, being most similar to the result of manual segmentation by two radiologists. It was also the most efficient at differentiating the nodule from the normal parenchyma. Effective segmentation methods will be essential for the development of computer-aided diagnosis systems for more accurate early diagnosis and prognosis of lung cancer in thin-section CT images. PMID:27895963
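
    The similarity metric used throughout this comparison is the Dice coefficient; a minimal sketch of its computation on two binary masks follows, with synthetic masks rather than study data.

    ```python
    import numpy as np

    def dice_coefficient(seg_a: np.ndarray, seg_b: np.ndarray) -> float:
        """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|)."""
        a, b = seg_a.astype(bool), seg_b.astype(bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

    # Hypothetical masks: a semi-automatic result versus a manual reference.
    manual = np.zeros((64, 64), dtype=bool)
    manual[20:40, 20:40] = True
    semi_auto = np.zeros((64, 64), dtype=bool)
    semi_auto[22:42, 21:41] = True

    print(f"Dice = {dice_coefficient(semi_auto, manual):.3f}")
    ```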

  14. [PIV: a computer-aided portal image verification system].

    PubMed

    Fu, Weihua; Zhang, Hongzhi; Wu, Jing

    2002-12-01

    Portal image verification (PIV) is one of the key actions in the QA procedure for sophisticated, accurate radiotherapy. The purpose of this study was to develop PIV software as a tool for improving the accuracy and visualization of portal field verification and for computing field placement errors. PIV was developed in the Visual C++ integrated environment under the Windows 95 operating system. It improves visualization by providing tools for image processing and multi-mode image display. Semi-automatic registration methods make verification more accurate than the view-box method. It can provide useful quantitative errors for regular fields. PIV is flexible and accurate. It is an effective tool for portal field verification.

  15. Semi-automatic mapping of cultural heritage from airborne laser scanning using deep learning

    NASA Astrophysics Data System (ADS)

    Due Trier, Øivind; Salberg, Arnt-Børre; Holger Pilø, Lars; Tonning, Christer; Marius Johansen, Hans; Aarsten, Dagrun

    2016-04-01

    This paper proposes to use deep learning to improve semi-automatic mapping of cultural heritage from airborne laser scanning (ALS) data. Automatic detection methods, based on traditional pattern recognition, have been applied in a number of cultural heritage mapping projects in Norway for the past five years. Automatic detection of pits and heaps has been combined with visual interpretation of the ALS data for the mapping of deer hunting systems, iron production sites, grave mounds and charcoal kilns. However, the performance of the automatic detection methods varies substantially between ALS datasets. For the mapping of deer hunting systems on flat gravel and sand sediment deposits, the automatic detection results were almost perfect. However, some false detections appeared in the terrain outside of the sediment deposits. These could be explained by other pit-like landscape features, like parts of river courses, spaces between boulders, and modern terrain modifications. However, these were easy to spot during visual interpretation, and the number of missed individual pitfall traps was still low. For the mapping of grave mounds, the automatic method produced a large number of false detections, reducing the usefulness of the semi-automatic approach. The mound structure is a very common natural terrain feature, and the grave mounds are less distinct in shape than the pitfall traps. Still, applying automatic mound detection on an entire municipality did lead to a new discovery of an Iron Age grave field with more than 15 individual mounds. Automatic mound detection also proved to be useful for a detailed re-mapping of Norway's largest Iron Age graveyard, which contains almost 1000 individual graves. Combined pit and mound detection has been applied to the mapping of more than 1000 charcoal kilns that were used by an ironworks 350-200 years ago. The majority of charcoal kilns were indirectly detected as either pits on the circumference, a central mound, or both.

  16. Scalable Semi-Automatic Annotation for Multi-Camera Person Tracking.

    PubMed

    Niño-Castañeda, Jorge; Frías-Velázquez, Andrés; Bo, Nyan Bo; Slembrouck, Maarten; Guan, Junzhi; Debard, Glen; Vanrumste, Bart; Tuytelaars, Tinne; Philips, Wilfried

    2016-05-01

    This paper proposes a generic methodology for the semi-automatic generation of reliable position annotations for evaluating multi-camera people-trackers on large video data sets. Most of the annotation data are automatically computed, by estimating a consensus tracking result from multiple existing trackers and people detectors and classifying it as either reliable or not. A small subset of the data, composed of tracks with insufficient reliability, is verified by a human using a simple binary decision task, a process faster than marking the correct person position. The proposed framework is generic and can handle additional trackers. We present results on a data set of approximately 6 h captured by 4 cameras, featuring a person in a holiday flat, performing activities such as walking, cooking, eating, cleaning, and watching TV. When aiming for a tracking accuracy of 60 cm, 80% of all video frames are automatically annotated. The annotations for the remaining 20% of the frames were added after human verification of an automatically selected subset of data. This involved approximately 2.4 h of manual labor. According to a subsequent comprehensive visual inspection to judge the annotation procedure, we found 99% of the automatically annotated frames to be correct. We provide guidelines on how to apply the proposed methodology to new data sets. We also provide an exploratory study for the multi-target case, applied on the existing and new benchmark video sequences.

  17. Semi-automatic delineation using weighted CT-MRI registered images for radiotherapy of nasopharyngeal cancer

    SciTech Connect

    Fitton, I.; Cornelissen, S. A. P.; Duppen, J. C.; Rasch, C. R. N.; Herk, M. van; Steenbakkers, R. J. H. M.; Peeters, S. T. H.; Hoebers, F. J. P.; Kaanders, J. H. A. M.; Nowak, P. J. C. M.

    2011-08-15

    Purpose: To develop a delineation tool that refines physician-drawn contours of the gross tumor volume (GTV) in nasopharynx cancer, using combined pixel value information from x-ray computed tomography (CT) and magnetic resonance imaging (MRI) during delineation. Methods: Operator-guided delineation assisted by a so-called "snake" algorithm was applied on weighted CT-MRI registered images. The physician delineates a rough tumor contour that is continuously adjusted by the snake algorithm using the underlying image characteristics. The algorithm was evaluated on five nasopharyngeal cancer patients. Different linear weightings of CT and MRI were tested as input for the snake algorithm and compared with respect to contrast and tumor-to-noise ratio (TNR). The semi-automatic delineation was compared with manual contouring by seven experienced radiation oncologists. Results: A good compromise between TNR and contrast was obtained by weighting CT twice as strongly as MRI. The new algorithm did not notably reduce interobserver variability; it did, however, reduce the average delineation time by 6 min per case. Conclusions: The authors developed a user-driven tool for delineation and correction based on a snake algorithm and registered, weighted CT and MR images. The algorithm adds morphological information from CT during the delineation on MRI and accelerates the delineation task.
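
    A minimal sketch of the weighted CT-MRI combination followed by a snake refinement is given below; scikit-image's active_contour is used here as a stand-in for the authors' snake implementation, and the image arrays and contour initialisation are placeholders rather than patient data.

    ```python
    import numpy as np
    from skimage.filters import gaussian
    from skimage.segmentation import active_contour

    ct = np.random.rand(256, 256)      # registered CT slice (normalised)
    mri = np.random.rand(256, 256)     # registered MRI slice (normalised)
    combined = (2.0 * ct + mri) / 3.0  # CT weighted twice as strongly as MRI

    # Rough initial contour around the suspected GTV (circle of radius 40 px).
    theta = np.linspace(0, 2 * np.pi, 200)
    init = np.column_stack([128 + 40 * np.sin(theta), 128 + 40 * np.cos(theta)])

    # The snake is attracted by image features of the smoothed, weighted image.
    refined = active_contour(gaussian(combined, sigma=2, preserve_range=True),
                             init, alpha=0.015, beta=10, gamma=0.001)
    print(refined.shape)   # (200, 2) refined contour vertices
    ```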

  18. Scalable Semi-Automatic Annotation for Multi-Camera Person Tracking.

    PubMed

    Nino, Jorge; Frias-Velazquez, Andres; Bo Bo, Nyan; Slembrouck, Maarten; Guan, Junzhi; Debard, Glen; Vanrumste, Bart; Tuytelaars, Tinne; Philips, Wilfried

    2016-03-29

    This paper proposes a generic methodology for semi-automatic generation of reliable position annotations for evaluating multi-camera people-trackers on large video datasets. Most of the annotation data is computed automatically, by estimating a consensus tracking result from multiple existing trackers and people detectors and classifying it as either reliable or not. A small subset of the data, composed of tracks with insufficient reliability, is verified by a human using a simple binary decision task, a process faster than marking the correct person position. The proposed framework is generic and can handle additional trackers. We present results on a dataset of approximately 6 hours captured by 4 cameras, featuring a person in a holiday flat, performing activities such as walking, cooking, eating, cleaning, and watching TV. When aiming for a tracking accuracy of 60 cm, 80% of all video frames are automatically annotated. The annotations for the remaining 20% of the frames were added after human verification of an automatically selected subset of data. This involved about 2.4 hours of manual labour. According to a subsequent comprehensive visual inspection to judge the annotation procedure, we found 99% of the automatically annotated frames to be correct. We provide guidelines on how to apply the proposed methodology to new datasets. We also provide an exploratory study for the multi-target case, applied on existing and new benchmark video sequences.

  19. Automatic ultrasound image enhancement for 2D semi-automatic breast-lesion segmentation

    NASA Astrophysics Data System (ADS)

    Lu, Kongkuo; Hall, Christopher S.

    2014-03-01

    Breast cancer is the fastest growing cancer, accounting for 29% of new cases in 2012, and the second leading cause of cancer death among women in the United States and worldwide. Ultrasound (US) has been used as an indispensable tool for breast cancer detection/diagnosis and treatment. In computer-aided assistance, lesion segmentation is a preliminary but vital step, but the task is quite challenging in US images, due to imaging artifacts that complicate detection and measurement of the suspect lesions. The lesions usually present with poor boundary features and vary significantly in size, shape, and intensity distribution between cases. Automatic methods are highly application-dependent, while manual tracing methods are extremely time-consuming and have a great deal of intra- and inter-observer variability. Semi-automatic approaches are designed to counterbalance the advantages and drawbacks of the automatic and manual methods. However, considerable user interaction might be necessary to ensure reasonable segmentation for a wide range of lesions. This work proposes an automatic enhancement approach to improve the boundary searching ability of the live wire method to reduce necessary user interaction while keeping the segmentation performance. Based on the results of segmentation of 50 2D breast lesions in US images, less user interaction is required to achieve desired accuracy, i.e. < 80%, when auto-enhancement is applied for live-wire segmentation.

  20. Semi-Automatic Volumetric Segmentation of the Upper Airways in Patients with Pierre Robin Sequence

    PubMed Central

    Salerno, Sergio; Gagliardo, Cesare; Vitabile, Salvatore; Militello, Carmelo; La Tona, Giuseppe; Giuffrè, Mario; Lo Casto, Antonio; Midiri, Massimo

    2014-01-01

    Summary: Pierre Robin malformation is a rare craniofacial dysmorphism whose pathogenesis is multifactorial. Although there is some agreement on non-invasive treatment in less severe cases, the debate is still open for cases with severe respiratory impairment. We present a novel semi-automatic diagnostic tool for calculating upper airway volume, in order to eventually guide surgery in patients with Pierre Robin Sequence (PRS). Multidetector CT datasets of two patients and two controls were tested to assess the proposed method for ROI segmentation, upper airway volume computation and three-dimensional reconstructions. The experimental results show an irregular pattern and a severely reduced cross-sectional area (CSA) with a mean value of 8.3808 mm² in patients with PRS and a mean CSA value of 33.7692 mm² in controls (a ΔCSA of about -75%). Moreover, the similarity indexes and sensitivity/specificity values obtained showed a good segmentation performance. In particular, mean values of the Jaccard and Dice similarity indexes were 91.69% and 94.07%, respectively, while the mean values of specificity and sensitivity were 96.69% and 98.03%, respectively. The proposed tool represents an easy way to perform a quantitative analysis of airway volume and useful 3D reconstructions. PMID:25196625

  1. Comparison of manual and semi-automatic measuring techniques in MSCT scans of patients with lymphoma: a multicentre study.

    PubMed

    Höink, A J; Weßling, J; Koch, R; Schülke, C; Kohlhase, N; Wassenaar, L; Mesters, R M; D'Anastasi, M; Fabel, M; Wulff, A; Pinto dos Santos, D; Kießling, A; Graser, A; Dicken, V; Karpitschka, M; Bornemann, L; Heindel, W; Buerke, B

    2014-11-01

    Multicentre evaluation of the precision of semi-automatic 2D/3D measurements in comparison to manual, linear measurements of lymph nodes regarding their inter-observer variability in multi-slice CT (MSCT) of patients with lymphoma. MSCT data of 63 patients were interpreted before and after chemotherapy by one/two radiologists in five university hospitals. In 307 lymph nodes, short (SAD)/long (LAD) axis diameter and WHO area were determined manually and semi-automatically. Volume was solely calculated semi-automatically. To determine the precision of the individual parameters, a mean was calculated for every lymph node/parameter. Deviation of the measured parameters from this mean was evaluated separately. Statistical analysis entailed intraclass correlation coefficients (ICC) and Kruskal-Wallis tests. Median relative deviations of semi-automatic parameters were smaller than deviations of manually assessed parameters, e.g. semi-automatic SAD 5.3% vs. manual 6.5%. Median variations among different study sites were smaller if the measurement was conducted semi-automatically, e.g. manual LAD 5.7/4.2% vs. semi-automatic 3.4/3.4%. Semi-automatic volumetry was superior to the other parameters (2.8%). Semi-automatic determination of different lymph node parameters is (compared to manually assessed parameters) associated with a slightly greater precision and a marginally lower inter-observer variability. These results are of importance with regard to the increasing mobility of patients among different medical centres and to the quality management of multicentre trials. • In a multicentre setting, semi-automatic measurements are more accurate than manual assessments. • Lymph node volumetry outperforms all other semi-automatically and manually performed measurements. • Use of semi-automatic lymph node analyses can reduce the inter-observer variability.

  2. Automatic vs semi-automatic global cardiac function assessment using 64-row CT

    PubMed Central

    Greupner, J; Zimmermann, E; Hamm, B; Dewey, M

    2012-01-01

    Objective: Global cardiac function assessment using multidetector CT (MDCT) is time-consuming. Therefore we sought to compare an automatic software tool with an established semi-automatic method. Methods: A total of 36 patients underwent CT with 64×0.5 mm detector collimation, and global left ventricular function was subsequently assessed by two independent blinded readers using both an automatic region-growing-based software tool (with and without manual adjustment) and an established semi-automatic software tool. We also analysed automatic motion mapping to identify end-systole. Results: The time needed for assessment using the semi-automatic approach (12:12±6:19 min) was reduced by 75–85% with the automatic software tool (unadjusted: 01:34±0:29 min; adjusted: 02:53±1:19 min; both p<0.001). There was good correlation (r=0.89; p<0.001) for the ejection fraction (EF) between the adjusted automatic (58.6±14.9%) and the semi-automatic (58.0±15.3%) approaches. The manually adjusted automatic approach also led to significantly smaller limits of agreement than the unadjusted automatic approach for end-diastolic volume (±36.4 ml vs ±58.5 ml, p>0.05). Using motion mapping to automatically identify end-systole reduced analysis time by 95% compared with the semi-automatic approach, but showed inferior precision for EF and end-systolic volume. Conclusion: Automatic function assessment using MDCT with manual adjustment shows good agreement with an established semi-automatic approach, while reducing the analysis time by 75% to less than 3 min. This suggests that automatic CT function assessment with manual correction may be used for fast, comfortable and reliable evaluation of global left ventricular function. PMID:22045953
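
    For reference, the ejection fraction compared above is derived from the segmented end-diastolic and end-systolic left-ventricular volumes; the short example below uses illustrative volumes, not patient data from the study.

    ```python
    def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
        """EF (%) from end-diastolic and end-systolic left-ventricular volumes."""
        return 100.0 * (edv_ml - esv_ml) / edv_ml

    edv, esv = 140.0, 58.0             # ml, e.g. from the segmented end-diastolic/systolic phases
    stroke_volume = edv - esv          # ml
    print(f"SV = {stroke_volume:.0f} ml, EF = {ejection_fraction(edv, esv):.1f}%")
    ```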

  3. Breast Contrast Enhanced MR Imaging: Semi-Automatic Detection of Vascular Map and Predominant Feeding Vessel.

    PubMed

    Petrillo, Antonella; Fusco, Roberta; Filice, Salvatore; Granata, Vincenza; Catalano, Orlando; Vallone, Paolo; Di Bonito, Maurizio; D'Aiuto, Massimiliano; Rinaldo, Massimo; Capasso, Immacolata; Sansone, Mario

    2016-01-01

    To obtain a breast vascular map and to assess the correlation between the predominant feeding vessel and tumor location with a semi-automatic method compared to conventional radiologic reading. 148 malignant and 75 benign breast lesions were included. All patients underwent bilateral MR imaging. Written informed consent was obtained from the patients before MRI. The local ethics committee granted approval for this study. Semi-automatic breast vascular map and predominant vessel detection were performed on MRI for each patient. Semi-automatic detection (depending on a grey-level threshold manually chosen by the radiologist) was compared with the results of two expert radiologists; inter-observer variability and the reliability of the semi-automatic approach were assessed. Anatomic analysis of breast lesions revealed that 20% of patients had masses in the internal half, 50% in the external half and 30% in the subareolar/central area. As regards the 44 tumors in the internal half, based on radiologic consensus, 40 demonstrated a predominant feeding vessel (61% were supplied by internal thoracic vessels, 14% by lateral thoracic vessels, 16% by both thoracic vessels and 9% had no predominant feeding vessel-p<0.01), based on semi-automatic detection, 38 tumors demonstrated a predominant feeding vessel (66% were supplied by internal thoracic vessels, 11% by lateral thoracic vessels, 9% by both thoracic vessels and 14% had no predominant feeding vessel-p<0.01). As regards the 111 tumors in the external half, based on radiologic consensus, 91 demonstrated a predominant feeding vessel (25% were supplied by internal thoracic vessels, 39% by lateral thoracic vessels, 18% by both thoracic vessels and 18% had no predominant feeding vessel-p<0.01), based on semi-automatic detection, 94 demonstrated a predominant feeding vessel (27% were supplied by internal thoracic vessels, 45% by lateral thoracic vessels, 4% by both thoracic vessels and 24% had no predominant feeding vessel-p<0.01). An excellent agreement between two

  4. Breast Contrast Enhanced MR Imaging: Semi-Automatic Detection of Vascular Map and Predominant Feeding Vessel

    PubMed Central

    Petrillo, Antonella; Fusco, Roberta; Filice, Salvatore; Granata, Vincenza; Catalano, Orlando; Vallone, Paolo; Di Bonito, Maurizio; D’Aiuto, Massimiliano; Rinaldo, Massimo; Capasso, Immacolata; Sansone, Mario

    2016-01-01

    Purpose To obtain breast vascular map and to assess correlation between predominant feeding vessel and tumor location with a semi-automatic method compared to conventional radiologic reading. Methods 148 malignant and 75 benign breast lesions were included. All patients underwent bilateral MR imaging. Written informed consent was obtained from the patients before MRI. The local ethics committee granted approval for this study. Semi-automatic breast vascular map and predominant vessel detection was performed on MRI, for each patient. Semi-automatic detection (depending on grey levels threshold manually chosen by radiologist) was compared with results of two expert radiologists; inter-observer variability and reliability of semi-automatic approach were assessed. Results Anatomic analysis of breast lesions revealed that 20% of patients had masses in internal half, 50% in external half and the 30% in subareolar/central area. As regards the 44 tumors in internal half, based on radiologic consensus, 40 demonstrated a predominant feeding vessel (61% were supplied by internal thoracic vessels, 14% by lateral thoracic vessels, 16% by both thoracic vessels and 9% had no predominant feeding vessel—p<0.01), based on semi-automatic detection, 38 tumors demonstrated a predominant feeding vessel (66% were supplied by internal thoracic vessels, 11% by lateral thoracic vessels, 9% by both thoracic vessels and 14% had no predominant feeding vessel—p<0.01). As regards the 111 tumors in external half, based on radiologic consensus, 91 demonstrated a predominant feeding vessel (25% were supplied by internal thoracic vessels, 39% by lateral thoracic vessels, 18% by both thoracic vessels and 18% had no predominant feeding vessel—p<0.01), based on semi-automatic detection, 94 demonstrated a predominant feeding vessel (27% were supplied by internal thoracic vessels, 45% by lateral thoracic vessels, 4% by both thoracic vessels and 24% had no predominant feeding vessel—p<0.01). An

  5. Semi-automatic construction of reference standards for evaluation of image registration.

    PubMed

    Murphy, K; van Ginneken, B; Klein, S; Staring, M; de Hoop, B J; Viergever, M A; Pluim, J P W

    2011-02-01

    Quantitative evaluation of image registration algorithms is a difficult and under-addressed issue due to the lack of a reference standard in most registration problems. In this work a method is presented whereby detailed reference standard data may be constructed in an efficient semi-automatic fashion. A well-distributed set of n landmarks is detected fully automatically in one scan of a pair to be registered. Using a custom-designed interface, observers define corresponding anatomic locations in the second scan for a specified subset of s of these landmarks. The remaining n-s landmarks are matched fully automatically by a thin-plate-spline based system using the s manual landmark correspondences to model the relationship between the scans. The method is applied to 47 pairs of temporal thoracic CT scans, three pairs of brain MR scans and five thoracic CT datasets with synthetic deformations. Interobserver differences are used to demonstrate the accuracy of the matched points. The utility of the reference standard data as a tool in evaluating registration is shown by the comparison of six sets of registration results on the 47 pairs of thoracic CT data.
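
    A minimal sketch of the landmark-propagation step is given below; SciPy's RBFInterpolator with a thin-plate-spline kernel stands in for the authors' own thin-plate-spline system, and the landmark coordinates and the purely translational "deformation" are toy assumptions rather than data from the scans used in the paper.

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(0)
    landmarks_scan1 = rng.uniform(0, 100, size=(50, 3))          # n landmarks in scan 1 (mm)
    true_shift = np.array([3.0, -2.0, 1.5])                      # toy "deformation" between scans

    s = 15                                                       # manually matched subset
    manual_scan1 = landmarks_scan1[:s]
    manual_scan2 = manual_scan1 + true_shift                     # observer-defined correspondences

    # Fit a thin-plate-spline mapping from scan-1 to scan-2 coordinates on the s manual
    # pairs, then apply it to predict the remaining n - s landmark positions in scan 2.
    tps = RBFInterpolator(manual_scan1, manual_scan2, kernel="thin_plate_spline")
    predicted_scan2 = tps(landmarks_scan1[s:])

    # For this toy (purely translational) case the prediction error is essentially zero.
    print(np.abs(predicted_scan2 - (landmarks_scan1[s:] + true_shift)).max())
    ```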

  6. Towards semi-automatic rock mass discontinuity orientation and set analysis from 3D point clouds

    NASA Astrophysics Data System (ADS)

    Guo, Jiateng; Liu, Shanjun; Zhang, Peina; Wu, Lixin; Zhou, Wenhui; Yu, Yinan

    2017-06-01

    Obtaining accurate information on rock mass discontinuities for deformation analysis and the evaluation of rock mass stability is important. Obtaining measurements for high and steep zones with the traditional compass method is difficult. Photogrammetry, three-dimensional (3D) laser scanning and other remote sensing methods have gradually become mainstream methods. In this study, a method that is based on a 3D point cloud is proposed to semi-automatically extract rock mass structural plane information. The original data are pre-treated prior to segmentation by removing outlier points. The next step is to segment the point cloud into different point subsets. Various parameters, such as the normal vector, dip direction and dip angle, can be calculated for each point subset after obtaining the equation of the best-fit plane for that subset. A cluster analysis (a point subset that satisfies some conditions and thus forms a cluster) is performed based on the normal vectors by introducing the firefly algorithm (FA) and the fuzzy c-means (FCM) algorithm. Finally, clusters that belong to the same discontinuity sets are merged and coloured for visualization purposes. A prototype system is developed based on this method to extract the points of the rock discontinuity from a 3D point cloud. A comparison with existing software shows that this method is feasible. This method can provide a reference for rock mechanics, 3D geological modelling and other related fields.
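
    A minimal sketch of the per-subset orientation step is given below: the best-fit plane of a point subset is obtained by SVD and its normal converted to dip direction and dip angle (z up and y north are assumed conventions); the FA/FCM clustering of normals is not reproduced here, and the coordinates are made up.

    ```python
    import numpy as np

    def plane_orientation(points: np.ndarray):
        """Dip direction and dip angle (degrees) of the best-fit plane of a point subset."""
        centered = points - points.mean(axis=0)
        # Normal of the best-fit plane = right singular vector of the smallest singular value.
        normal = np.linalg.svd(centered, full_matrices=False)[2][-1]
        if normal[2] < 0:                                        # make the normal point upwards
            normal = -normal
        dip = np.degrees(np.arccos(normal[2]))                   # angle from the horizontal plane
        dip_direction = np.degrees(np.arctan2(normal[0], normal[1])) % 360.0
        return dip_direction, dip

    subset = np.array([[0, 0, 0], [1, 0, 0.5], [0, 1, 0.0], [1, 1, 0.5]], dtype=float)
    print(plane_orientation(subset))   # plane dipping ~27 deg towards the west (dip direction 270)
    ```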

  7. Subjective Evaluation of a Semi-Automatic Optical See-Through Head-Mounted Display Calibration Technique.

    PubMed

    Moser, Kenneth; Itoh, Yuta; Oshima, Kohei; Swan, J Edward; Klinker, Gudrun; Sandor, Christian

    2015-04-01

    With the growing availability of optical see-through (OST) head-mounted displays (HMDs), there is a pressing need for robust, uncomplicated, and automatic calibration methods suited for non-expert users. This work presents the results of a user study which both objectively and subjectively examines registration accuracy produced by three OST HMD calibration methods: (1) SPAAM, (2) Degraded SPAAM, and (3) Recycled INDICA, a recently developed semi-automatic calibration method. Accuracy metrics used for evaluation include subject-provided quality values and error between perceived and absolute registration coordinates. Our results show that all three calibration methods produce very accurate registration in the horizontal direction but cause subjects to perceive the distance of virtual objects as closer than intended. Surprisingly, the semi-automatic calibration method produced more accurate registration vertically and in perceived object distance overall. User-assessed quality values were also the highest for Recycled INDICA, particularly when objects were shown at distance. The results of this study confirm that Recycled INDICA is capable of producing equal or superior on-screen registration compared to common OST HMD calibration methods. We also identify a potential hazard in using reprojection error as a quantitative analysis technique to predict registration accuracy. We conclude by discussing the further need to examine INDICA calibration in binocular HMD systems, and the possibility of creating a closed-loop continuous calibration method for OST Augmented Reality.

  8. Semi-Automatic Post-Processing for Improved Usability of Electure Podcasts

    ERIC Educational Resources Information Center

    Hurst, Wolfgang; Welte, Martina

    2009-01-01

    Purpose: Playing back recorded lectures on handheld devices offers interesting perspectives for learning, but suffers from small screen sizes. The purpose of this paper is to propose several semi-automatic post-processing steps in order to improve usability by providing a better readability and additional navigation functionality.…

  9. Semi-automatic breast ultrasound image segmentation based on mean shift and graph cuts.

    PubMed

    Zhou, Zhuhuang; Wu, Weiwei; Wu, Shuicai; Tsui, Po-Hsiang; Lin, Chung-Chih; Zhang, Ling; Wang, Tianfu

    2014-10-01

    Computerized tumor segmentation on breast ultrasound (BUS) images remains a challenging task. In this paper, we proposed a new method for semi-automatic tumor segmentation on BUS images using Gaussian filtering, histogram equalization, mean shift, and graph cuts. The only interaction required was to select two diagonal points to determine a region of interest (ROI) on an input image. The ROI image was shrunken by a factor of 2 using bicubic interpolation to reduce computation time. The shrunken image was smoothed by a Gaussian filter and then contrast-enhanced by histogram equalization. Next, the enhanced image was filtered by pyramid mean shift to improve homogeneity. The object and background seeds for graph cuts were automatically generated on the filtered image. Using these seeds, the filtered image was then segmented by graph cuts into a binary image containing the object and background. Finally, the binary image was expanded by a factor of 2 using bicubic interpolation, and the expanded image was processed by morphological opening and closing to refine the tumor contour. The method was implemented with OpenCV 2.4.3 and Visual Studio 2010 and tested for 38 BUS images with benign tumors and 31 BUS images with malignant tumors from different ultrasound scanners. Experimental results showed that our method had a true positive rate (TP) of 91.7%, a false positive (FP) rate of 11.9%, and a similarity (SI) rate of 85.6%. The mean run time on Intel Core 2.66 GHz CPU and 4 GB RAM was 0.49 ± 0.36 s. The experimental results indicate that the proposed method may be useful in BUS image segmentation. © The Author(s) 2014.

  10. Development and evaluation of a semi-automatic technique for determining the bilateral symmetry plane of the facial skeleton.

    PubMed

    Willing, Ryan T; Roumeliotis, Grayson; Jenkyn, Thomas R; Yazdani, Arjang

    2013-12-01

    During reconstructive surgery of the face, one side may be used as a template for the other, exploiting assumed bilateral facial symmetry. The best method to calculate this plane, however, is debated. A new semi-automatic technique for calculating the symmetry plane of the facial skeleton is presented here that uses surface models reconstructed from computed tomography image data in conjunction with principal component analysis and an iterative closest point alignment method. This new technique was found to provide more accurate symmetry planes than traditional methods when applied to a set of 7 human craniofacial skeleton specimens, and showed little vulnerability to missing model data, usually deviating less than 1.5° and 2 mm from the intact model symmetry plane when 30 mm radius voids were present. This new technique will be used for subsequent studies measuring symmetry of the facial skeleton for different patient populations.

  11. Semi-automatic construction of the Chinese-English MeSH using Web-based term translation method.

    PubMed

    Lu, Wen-Hsiang; Lin, Shih-Jui; Chan, Yi-Che; Chen, Kuan-Hsi

    2005-01-01

    Due to the language barrier, non-English users are unable to retrieve the most up-to-date medical information from the U.S. authoritative medical websites, such as PubMed and MedlinePlus. A few cross-language medical information retrieval (CLMIR) systems have been utilizing MeSH (Medical Subject Headings) with a multilingual thesaurus to bridge the gap. Unfortunately, MeSH has not yet been translated into traditional Chinese. We proposed a semi-automatic approach to constructing a Chinese-English MeSH based on Web-based term translation. The system provides knowledge engineers with candidate terms mined from anchor texts and search-result pages. The result is encouraging. Currently, more than 19,000 Chinese-English MeSH entries have been compiled. This thesaurus will be used in Chinese-English CLMIR in the future.

  12. Semi-automatic classification of textures in thoracic CT scans

    NASA Astrophysics Data System (ADS)

    Kockelkorn, Thessa T. J. P.; de Jong, Pim A.; Schaefer-Prokop, Cornelia M.; Wittenberg, Rianne; Tiehuis, Audrey M.; Gietema, Hester A.; Grutters, Jan C.; Viergever, Max A.; van Ginneken, Bram

    2016-08-01

    The textural patterns in the lung parenchyma, as visible on computed tomography (CT) scans, are essential to make a correct diagnosis in interstitial lung disease. We developed one automatic and two interactive protocols for classification of normal and seven types of abnormal lung textures. Lungs were segmented and subdivided into volumes of interest (VOIs) with homogeneous texture using a clustering approach. In the automatic protocol, VOIs were classified automatically by an extra-trees classifier that was trained using annotations of VOIs from other CT scans. In the interactive protocols, an observer iteratively trained an extra-trees classifier to distinguish the different textures, by correcting mistakes the classifier makes in a slice-by-slice manner. The difference between the two interactive methods was whether or not training data from previously annotated scans was used in classification of the first slice. The protocols were compared in terms of the percentages of VOIs that observers needed to relabel. Validation experiments were carried out using software that simulated observer behavior. In the automatic classification protocol, observers needed to relabel on average 58% of the VOIs. During interactive annotation without the use of previous training data, the average percentage of relabeled VOIs decreased from 64% for the first slice to 13% for the second half of the scan. Overall, 21% of the VOIs were relabeled. When previous training data was available, the average overall percentage of VOIs requiring relabeling was 20%, decreasing from 56% in the first slice to 13% in the second half of the scan.
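
    A minimal sketch of the interactive protocol is given below, using scikit-learn's ExtraTreesClassifier; the VOI features and texture labels are synthetic placeholders, and the "observer" is simulated by simply substituting the true labels, in the spirit of the simulated-observer validation experiments described above.

    ```python
    import numpy as np
    from sklearn.ensemble import ExtraTreesClassifier

    rng = np.random.default_rng(42)
    n_voi, n_feat, n_classes = 600, 20, 8          # 1 normal + 7 abnormal texture classes
    features = rng.normal(size=(n_voi, n_feat))
    true_labels = rng.integers(0, n_classes, size=n_voi)

    train_X, train_y = features[:100], true_labels[:100]    # e.g. previously annotated scans
    clf = ExtraTreesClassifier(n_estimators=100, random_state=0).fit(train_X, train_y)

    for start in range(100, n_voi, 100):                    # iterate "slice by slice"
        batch = slice(start, start + 100)
        predicted = clf.predict(features[batch])
        corrected = true_labels[batch]                      # observer relabels the mistakes
        relabel_rate = np.mean(predicted != corrected)
        print(f"VOIs {start}-{start + 99}: {relabel_rate:.0%} relabelled")
        train_X = np.vstack([train_X, features[batch]])     # feed corrections back as training data
        train_y = np.concatenate([train_y, corrected])
        clf = ExtraTreesClassifier(n_estimators=100, random_state=0).fit(train_X, train_y)
    ```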

  13. Semi-automatic handling of meteorological ground measurements using WeatherProg: prospects and practical implications

    NASA Astrophysics Data System (ADS)

    Langella, Giuliano; Basile, Angelo; Bonfante, Antonello; De Mascellis, Roberto; Manna, Piero; Terribile, Fabio

    2016-04-01

    WeatherProg is a computer program for the semi-automatic handling of data measured at ground stations within a climatic network. The program performs a set of tasks ranging from gathering raw point-based sensor measurements to the production of digital climatic maps. Originally the program was developed as the baseline asynchronous engine for weather records management within the SOILCONSWEB Project (LIFE08 ENV/IT/000408), in which daily and hourly data were used to run water balance in the soil-plant-atmosphere continuum or pest simulation models. WeatherProg can be configured to automatically perform the following main operations: 1) data retrieval; 2) data decoding and ingestion into a database (e.g. SQL based); 3) data checking to recognize missing and anomalous values (using a set of differently combined checks including logical, climatological, spatial, temporal and persistence checks); 4) infilling of data flagged as missing or anomalous (deterministic or statistical methods); 5) spatial interpolation based on alternative/comparative methods such as inverse distance weighting, iterative regression kriging, and a weighted least squares regression (based on physiography), using an approach similar to PRISM; 6) data ingestion into a geodatabase (e.g. PostgreSQL+PostGIS or rasdaman). There is an increasing demand for digital climatic maps both for research and development (there is a gap between the majority of scientific modelling approaches, which require digital climate maps, and the gauged measurements) and for practical applications (e.g. the need to improve the management of weather records, which in turn improves the support provided to farmers). The demand is particularly burdensome considering the requirement to handle climatic data at the daily (e.g. in soil hydrological modelling) or even at the hourly time step (e.g. risk modelling in phytopathology). The key advantage of WeatherProg is the ability to perform all the required operations and
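
    Of the interpolation options listed above, inverse distance weighting is the simplest to sketch; the station coordinates and temperatures below are made up, and this is not WeatherProg's actual implementation.

    ```python
    import numpy as np

    def idw(xy_stations, values, xy_targets, power=2.0, eps=1e-10):
        """Inverse-distance-weighted estimate of `values` at `xy_targets`."""
        d = np.linalg.norm(xy_targets[:, None, :] - xy_stations[None, :, :], axis=2)
        w = 1.0 / (d + eps) ** power
        return (w * values).sum(axis=1) / w.sum(axis=1)

    stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])   # km
    temps = np.array([12.4, 13.1, 11.8, 12.9])                                   # daily mean °C
    grid = np.array([[5.0, 5.0], [1.0, 9.0]])                                    # target grid cells
    print(idw(stations, temps, grid))                                            # ≈ [12.55 11.85]
    ```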

  14. The C5 Unit: a semi-automatic cell culture device suitable for experiments under microgravity.

    PubMed

    Vens, C; Kump, B; Münstermann, B; Heinlein, U A

    1996-06-27

    This paper presents data on a novel, semi-automatic cell culturing device called 'C5 Unit' (connectable circuit cell culture chamber) which was developed and adapted to the quality and size criteria set by the characteristics of the ESA Biorack. The suitability of the hardware for culturing cells under microgravity conditions was demonstrated by successful culture of primary mouse cells from neonatal cerebellum and testis aboard the Space Shuttle during the IML-2 mission.

  15. Codesign Environment for Computer Vision Hw/Sw Systems

    NASA Astrophysics Data System (ADS)

    Toledo, Ana; Cuenca, Sergio; Suardíaz, Juan

    2006-10-01

    In this paper we present a novel codesign environment which is conceived especially for computer vision hybrid systems. This setting is based on Mathworks Simulink and Xilinx System Generator tools and comprises the following: an incremental codesign flow, diverse libraries of virtual components with three levels of description (high level, hardware and software), semi-automatic tools to help in the partitioning of the system, and a methodology for building new library components. The use of high-level libraries allows for the development of systems without the need for exhaustive knowledge of the actual architecture or special skills in hardware description languages. This enables a non-traumatic incorporation of reconfigurable technologies into image processing systems, which are generally developed by engineers who are not closely acquainted with hardware design disciplines.

  16. Computer systems

    NASA Technical Reports Server (NTRS)

    Olsen, Lola

    1992-01-01

    In addition to the discussions, Ocean Climate Data Workshop hosts gave participants an opportunity to hear about, see, and test for themselves some of the latest computer tools now available for those studying climate change and the oceans. Six speakers described computer systems and their functions. The introductory talks were followed by demonstrations to small groups of participants and some opportunities for participants to get hands-on experience. After this familiarization period, attendees were invited to return during the course of the Workshop and have one-on-one discussions and further hands-on experience with these systems. Brief summaries or abstracts of introductory presentations are addressed.

  17. Efficient Semi-Automatic 3D Segmentation for Neuron Tracing in Electron Microscopy Images

    PubMed Central

    Jones, Cory; Liu, Ting; Cohan, Nathaniel Wood; Ellisman, Mark; Tasdizen, Tolga

    2015-01-01

    Background: In the area of connectomics, there is a significant gap between the time required for data acquisition and dense reconstruction of the neural processes contained in the same dataset. Automatic methods are able to eliminate this timing gap, but the state-of-the-art accuracy so far is insufficient for use without user corrections. If completed naively, this process of correction can be tedious and time consuming. New Method: We present a new semi-automatic method that can be used to perform 3D segmentation of neurites in EM image stacks. It utilizes an automatic method that creates a hierarchical structure for recommended merges of superpixels. The user is then guided through each predicted region to quickly identify errors and establish correct links. Results: We tested our method on three datasets with both novice and expert users. Accuracy and timing were compared with published automatic, semi-automatic, and manual results. Comparison with Existing Methods: Post-automatic correction methods have also been used in [1] and [2]. These methods do not provide navigation or suggestions in the manner we present. Other semi-automatic methods require user input prior to the automatic segmentation such as [3] and [4] and are inherently different than our method. Conclusion: Using this method on the three datasets, novice users achieved accuracy exceeding state-of-the-art automatic results, and expert users achieved accuracy on par with full manual labeling but with a 70% time improvement when compared with other examples in publication. PMID:25769273

  18. Efficient semi-automatic 3D segmentation for neuron tracing in electron microscopy images.

    PubMed

    Jones, Cory; Liu, Ting; Cohan, Nathaniel Wood; Ellisman, Mark; Tasdizen, Tolga

    2015-05-15

    In the area of connectomics, there is a significant gap between the time required for data acquisition and dense reconstruction of the neural processes contained in the same dataset. Automatic methods are able to eliminate this timing gap, but the state-of-the-art accuracy so far is insufficient for use without user corrections. If completed naively, this process of correction can be tedious and time consuming. We present a new semi-automatic method that can be used to perform 3D segmentation of neurites in EM image stacks. It utilizes an automatic method that creates a hierarchical structure for recommended merges of superpixels. The user is then guided through each predicted region to quickly identify errors and establish correct links. We tested our method on three datasets with both novice and expert users. Accuracy and timing were compared with published automatic, semi-automatic, and manual results. Post-automatic correction methods have also been used in Mishchenko et al. (2010) and Haehn et al. (2014). These methods do not provide navigation or suggestions in the manner we present. Other semi-automatic methods require user input prior to the automatic segmentation such as Jeong et al. (2009) and Cardona et al. (2010) and are inherently different than our method. Using this method on the three datasets, novice users achieved accuracy exceeding state-of-the-art automatic results, and expert users achieved accuracy on par with full manual labeling but with a 70% time improvement when compared with other examples in publication. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. Semi-automatic detection of skin malformations by analysis of spectral images

    NASA Astrophysics Data System (ADS)

    Rubins, U.; Spigulis, J.; Valeine, L.; Berzina, A.

    2013-06-01

    A multi-spectral imaging technique to reveal skin malformations is described in this work. Four spectral images taken under polarized monochromatic LED illumination (450 nm, 545 nm, 660 nm and 940 nm) and polarized white LED light, imaged by a CMOS sensor through a cross-oriented polarizing filter, were analyzed to calculate chromophore maps. An algorithm based on skin color analysis and user-defined threshold selection allows skin areas with a predefined chromophore concentration to be highlighted semi-automatically. Preliminary results of clinical tests are presented.

  20. [Computer assisted radiological diagnostics of arthritic joint alterations].

    PubMed

    Kainberger, F; Langs, G; Peloschek, P; Schlager, T; Schüller-Weidekamm, C; Valentinitsch, A

    2006-12-01

    Computer assisted diagnosis (CAD) schemes are currently used in the field of musculoskeletal diseases to quantitatively assess vertebral fractures, joint space narrowing, and erosion. Most systems work semi-automatically, i.e. they are operator dependent in the selection of anatomical landmarks. Fully automatic programs are currently under development. Some CAD products have already been successfully used in clinical trials.

  1. Extraction of electronic health record data in a hospital setting: comparison of automatic and semi-automatic methods using anti-TNF therapy as model.

    PubMed

    Cars, Thomas; Wettermark, Björn; Malmström, Rickard E; Ekeving, Gunnar; Vikström, Bo; Bergman, Ulf; Neovius, Martin; Ringertz, Bo; Gustafsson, Lars L

    2013-06-01

    There is limited experience with, and few methods for, extraction of drug therapy data from electronic health records (EHR) in the hospital setting. We have therefore developed and evaluated the completeness and consistency of an automatic versus a semi-automatic extraction procedure applied to the prescribing and administration of the TNF inhibitor infliximab, using a hospital EHR system at Karolinska University Hospital, Sweden. Using two different extraction methods (automatic and semi-automatic), all administered infusions of infliximab between 2007 and 2010 were extracted from a database linked to the EHR system. Extracted data included encrypted personal identity number (PIN), date of birth, sex, time of prescription/administration, healthcare units, prescribed/administered dose and time of admission/discharge. The primary diagnosis (ICD-10) for the treatment with infliximab was extracted by linking infliximab infusions to their corresponding treatment episode. A total of 13,590 infusions of infliximab were administered during the period of 2007 to 2010. Of those, 13,531 (99.6%) could be linked to a corresponding treatment episode, and a primary diagnosis was extracted for 13,530 infusions. Information on encrypted PIN, date of birth, time of prescription/administration, time of admission/discharge and healthcare unit was complete. Information about sex was missing in one patient only. Calculable information about dosage was extracted for 13,300 (98.3%) of all linked infusions. This methodological study showed the potential to extract drug therapy data in a hospital setting. The semi-automatic procedure produced an almost complete pattern of demographics, diagnoses and dosages for the treatment with infliximab. © 2013 Nordic Pharmacological Society. Published by John Wiley & Sons Ltd.

  2. Myocardial tissue characterization in echocardiography with videodensitometry: evaluation of a new semi-automatic software applied on a population of hypertensive patients.

    PubMed

    Lasserre, R; Gosse, P; Mansour, S

    2003-12-01

    The interactions between ultrasound and cardiac muscle can be exploited to characterize abnormalities of myocardial structure in echocardiography. Two different methods permit an objective assessment of myocardial tissue characterization: analysis of the radiofrequency signal and videodensitometry. We conducted a videodensitometric study using a new practical semi-automatic software package applied to the digital signal to evaluate gray level cyclic variations of the myocardial walls. The aim was to determine parameters differentiating healthy and hypertrophic myocardium in hypertensive patients. Echocardiographic examinations were performed on 30 hypertensives vs 30 healthy controls using second harmonic imaging. Dynamic 2D sequences were recorded in digital form and transferred to a computer. A region of interest (ROI) was selected on the interventricular septum (IVS) and the software automatically analyzed its systolo-diastolic displacements. ROI echo intensity and its cyclic variations were computed. Values were normalized with blood backscatter. The hypertensives had a smaller amplitude of gray level cyclic variation than did the controls (22+/-6 vs 27+/-11; P=0.02), and this parameter was correlated in multivariate analysis with left ventricle fractional shortening (P=0.032) and diastolic pressure (P=0.014). The magnitude of gray level cyclic variation of the IVS can be studied easily with this new semi-automatic software; it is altered in hypertensives and correlated with parameters of systolic function.
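    The quantity at the core of this analysis, the amplitude of the gray-level cyclic variation of an ROI normalized by blood backscatter, can be sketched in a few lines of NumPy. This is only an illustration under stated assumptions: the per-frame ROI means and blood-pool means are given, and normalization by subtraction is one possible convention; the paper's tracking and normalization details are not reproduced.

    ```python
    import numpy as np

    def cyclic_variation_amplitude(roi_means, blood_means):
        """Amplitude of the gray-level cyclic variation of a myocardial ROI.

        roi_means:   per-frame mean gray level inside the septal ROI over one cycle.
        blood_means: per-frame mean gray level of a blood-pool reference region.
        Normalization is done here by subtracting the blood value (one convention).
        """
        normalized = np.asarray(roi_means, dtype=float) - np.asarray(blood_means, dtype=float)
        return float(normalized.max() - normalized.min())
    ```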

  3. Semi-automatic segmentation for 3D motion analysis of the tongue with dynamic MRI.

    PubMed

    Lee, Junghoon; Woo, Jonghye; Xing, Fangxu; Murano, Emi Z; Stone, Maureen; Prince, Jerry L

    2014-12-01

    Dynamic MRI has been widely used to track the motion of the tongue and measure its internal deformation during speech and swallowing. Accurate segmentation of the tongue is a prerequisite step to define the target boundary and constrain the tracking to tissue points within the tongue. Segmentation of 2D slices or 3D volumes is challenging because of the large number of slices and time frames involved in the segmentation, as well as the incorporation of numerous local deformations that occur throughout the tongue during motion. In this paper, we propose a semi-automatic approach to segment 3D dynamic MRI of the tongue. The algorithm steps include seeding a few slices at one time frame, propagating seeds to the same slices at different time frames using deformable registration, and random walker segmentation based on these seed positions. This method was validated on the tongue of five normal subjects carrying out the same speech task with multi-slice 2D dynamic cine-MR images obtained at three orthogonal orientations and 26 time frames. The resulting semi-automatic segmentations of a total of 130 volumes showed an average dice similarity coefficient (DSC) score of 0.92 with less segmented volume variability between time frames than in manual segmentations.
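    The last step of the pipeline, random walker segmentation from propagated seeds, can be sketched with scikit-image. The deformable registration that propagates the seeds is assumed to have been done already, and the array names and parameter values are illustrative, not those of the authors' implementation:

    ```python
    import numpy as np
    from skimage.segmentation import random_walker

    def segment_tongue_frame(volume, seeds):
        """Random walker segmentation of one time frame.

        volume: 3D array built from the stacked 2D cine-MRI slices of one frame.
        seeds:  3D int array of the same shape; 0 = unlabeled voxel,
                1 = tongue seed, 2 = background seed (seeds propagated by
                deformable registration from the manually seeded frame).
        Returns a boolean tongue mask.
        """
        labels = random_walker(volume, seeds, beta=130)
        return labels == 1
    ```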

  4. Semi-automatic segmentation for 3D motion analysis of the tongue with dynamic MRI

    PubMed Central

    Lee, Junghoon; Woo, Jonghye; Xing, Fangxu; Murano, Emi Z.; Stone, Maureen; Prince, Jerry L.

    2014-01-01

    Dynamic MRI has been widely used to track the motion of the tongue and measure its internal deformation during speech and swallowing. Accurate segmentation of the tongue is a prerequisite step to define the target boundary and constrain the tracking to tissue points within the tongue. Segmentation of 2D slices or 3D volumes is challenging because of the large number of slices and time frames involved in the segmentation, as well as the incorporation of numerous local deformations that occur throughout the tongue during motion. In this paper, we propose a semi-automatic approach to segment 3D dynamic MRI of the tongue. The algorithm steps include seeding a few slices at one time frame, propagating seeds to the same slices at different time frames using deformable registration, and random walker segmentation based on these seed positions. This method was validated on the tongue of five normal subjects carrying out the same speech task with multi-slice 2D dynamic cine-MR images obtained at three orthogonal orientations and 26 time frames. The resulting semi-automatic segmentations of a total of 130 volumes showed an average dice similarity coefficient (DSC) score of 0.92 with less segmented volume variability between time frames than in manual segmentations. PMID:25155697

  5. Semi-automatic matching of OCT and IVUS images for image fusion

    NASA Astrophysics Data System (ADS)

    Pauly, Olivier; Unal, Gozde; Slabaugh, Greg; Carlier, Stephane; Fang, Tong

    2008-03-01

    Medical imaging is essential in the diagnosis of atherosclerosis. In this paper, we propose the semi-automatic matching of two promising and complementary intravascular imaging techniques, Intravascular Ultrasound (IVUS) and Optical Coherence Tomography (OCT), with the ultimate goal of producing hybrid images with increased diagnostic value for assessing arterial health. If no ECG gating has been performed on the IVUS and OCT pullbacks, there is typically an anatomical shuffle (displacement in time and space) in the image sequences due to the catheter motion in the artery during the cardiac cycle, and thus it is not possible to perform a 3D registration. Therefore, the goal of our work is to detect semi-automatically the corresponding images in both modalities as a preprocessing step for the fusion. Our method is based on the characterization of the lumen shape by a set of Gabor Jets features. We also introduce different correction terms based on the approximate position of the slice in the artery. We then train different support vector machines based on these features to recognize these correspondences. Experimental results demonstrate the usefulness of our approach, which achieves up to 95% matching accuracy for our data.
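    A hedged sketch of the general idea follows: describe each cross-sectional image with Gabor filter responses and train a support vector machine to decide whether an IVUS/OCT pair shows the same cross-section. The filter frequencies, feature layout and pairing scheme are illustrative assumptions, not the authors' exact Gabor-jet construction or correction terms.

    ```python
    import numpy as np
    from skimage.filters import gabor
    from sklearn.svm import SVC

    def gabor_features(img, frequencies=(0.1, 0.2, 0.3), n_orientations=4):
        """Mean Gabor magnitude response for a small bank of frequencies/orientations."""
        feats = []
        for f in frequencies:
            for k in range(n_orientations):
                real, imag = gabor(img, frequency=f, theta=k * np.pi / n_orientations)
                feats.append(np.hypot(real, imag).mean())
        return np.array(feats)

    def train_correspondence_classifier(ivus_imgs, oct_imgs, labels):
        """labels[i] = 1 if (ivus_imgs[i], oct_imgs[i]) show the same cross-section."""
        X = [np.concatenate([gabor_features(a), gabor_features(b)])
             for a, b in zip(ivus_imgs, oct_imgs)]
        clf = SVC(kernel='rbf', C=1.0)
        clf.fit(X, labels)
        return clf
    ```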

  6. Manual and semi-automatic registration vs retrospective ECG gating for correction of cardiac motion

    NASA Astrophysics Data System (ADS)

    So, Aaron; Adam, Vincent; Acharya, Kishor; Pan, Tin-Su; Lee, Ting-Yim

    2003-05-01

    A manual and a semi-automatic image registration method were compared with retrospective ECG (rECG) gating to correct for cardiac motion in myocardial perfusion (MBF) measurement. Five beagles were used in 11 experiments. For each experiment a 30 s cine CT scan of the heart was acquired after contrast injection. For the manual method, a reference end-diastole (ED) image was selected from the first cardiac cycle. ED images in subsequent cardiac cycles were manually selected to match the shape of the reference ED image. For each cardiac cycle in the semi-automatic method, the image with the maximum area and the most similar shape to the selected image of the previous cardiac cycle was chosen as the ED image. MBFs were calculated from the images registered by the three methods and compared. The average differences between MBFmanual and MBFsemi-auto, and between MBFmanual and MBFrECG, in the lateral free wall of the LV were 3.6 and 3.4 ml/min/100g, respectively. The corresponding standard deviations from the mean were 9.1 and 28.3 ml/min/100g, respectively. We concluded from these preliminary results that image registration methods were better than rECG gating for correcting for cardiac motion, which should facilitate more precise measurement of MBF.

  7. A semi-automatic method for developing an anthropomorphic numerical model of dielectric anatomy by MRI

    NASA Astrophysics Data System (ADS)

    Mazzurana, M.; Sandrini, L.; Vaccari, A.; Malacarne, C.; Cristoforetti, L.; Pontalti, R.

    2003-10-01

    Complex permittivity values have a dominant role in the overall consideration of interaction between radiofrequency electromagnetic fields and living matter, and in related applications such as electromagnetic dosimetry. There are still some concerns about the accuracy of published data and about their variability due to the heterogeneous nature of biological tissues. The aim of this study is to provide an alternative semi-automatic method by which numerical dielectric human models for dosimetric studies can be obtained. Magnetic resonance imaging (MRI) tomography was used to acquire images. A new technique was employed to correct nonuniformities in the images and frequency-dependent transfer functions to correlate image intensity with complex permittivity were used. The proposed method provides frequency-dependent models in which permittivity and conductivity vary with continuity—even in the same tissue—reflecting the intrinsic realistic spatial dispersion of such parameters. The human model is tested with an FDTD (finite difference time domain) algorithm at different frequencies; the results of layer-averaged and whole-body-averaged SAR (specific absorption rate) are compared with published work, and reasonable agreement has been found. Due to the short time needed to obtain a whole body model, this semi-automatic method may be suitable for efficient study of various conditions that can determine large differences in the SAR distribution, such as body shape, posture, fat-to-muscle ratio, height and weight.

  8. Semi-automatic 3D lung nodule segmentation in CT using dynamic programming

    NASA Astrophysics Data System (ADS)

    Sargent, Dustin; Park, Sun Young

    2017-02-01

    We present a method for semi-automatic segmentation of lung nodules in chest CT that can be extended to general lesion segmentation in multiple modalities. Most semi-automatic algorithms for lesion segmentation or similar tasks use region-growing or edge-based contour finding methods such as level-set. However, lung nodules and other lesions are often connected to surrounding tissues, which makes these algorithms prone to growing the nodule boundary into the surrounding tissue. To solve this problem, we apply a 3D extension of the 2D edge linking method with dynamic programming to find a closed surface in a spherical representation of the nodule ROI. The algorithm requires a user to draw a maximal diameter across the nodule in the slice in which the nodule cross section is the largest. We report the lesion volume estimation accuracy of our algorithm on the FDA lung phantom dataset, and the RECIST diameter estimation accuracy on the lung nodule dataset from the SPIE 2016 lung nodule classification challenge. The phantom results in particular demonstrate that our algorithm has the potential to mitigate the disparity in measurements performed by different radiologists on the same lesions, which could improve the accuracy of disease progression tracking.

  9. Semi-automatic image analysis methodology for the segmentation of bubbles and drops in complex dispersions occurring in bioreactors

    NASA Astrophysics Data System (ADS)

    Taboada, B.; Vega-Alvarado, L.; Córdova-Aguilar, M. S.; Galindo, E.; Corkidi, G.

    2006-09-01

    Characterization of multiphase systems occurring in fermentation processes is a time-consuming and tedious process when manual methods are used. This work describes a new semi-automatic methodology for the on-line assessment of diameters of oil drops and air bubbles occurring in a complex simulated fermentation broth. High-quality digital images were obtained from the interior of a mechanically stirred tank. These images were pre-processed to find segments of edges belonging to the objects of interest. The contours of air bubbles and oil drops were then reconstructed using an improved Hough transform algorithm which was tested in two, three and four-phase simulated fermentation model systems. The results were compared against those obtained manually by a trained observer, showing no significant statistical differences. The method was able to reduce the total processing time for the measurements of bubbles and drops in different systems by 21-50% and the manual intervention time for the segmentation procedure by 80-100%.
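    Because bubbles and oil drops are roughly circular in cross-section, the contour-reconstruction step can be illustrated with a circular Hough transform from scikit-image. The radius range and peak count below are illustrative assumptions; the paper's improved Hough variant is not reproduced.

    ```python
    import numpy as np
    from skimage.feature import canny
    from skimage.transform import hough_circle, hough_circle_peaks

    def detect_bubbles_and_drops(gray_image, radii=np.arange(5, 60, 2), max_objects=30):
        """Return (cx, cy, r) triples for approximately circular objects."""
        edges = canny(gray_image, sigma=2.0)            # edge segments of the objects
        hspace = hough_circle(edges, radii)             # one accumulator per radius
        _, cx, cy, r = hough_circle_peaks(hspace, radii, total_num_peaks=max_objects)
        return list(zip(cx, cy, r))
    ```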

  10. A semi-automatic framework of measuring pulmonary arterial metrics at anatomic airway locations using CT imaging

    NASA Astrophysics Data System (ADS)

    Jin, Dakai; Guo, Junfeng; Dougherty, Timothy M.; Iyer, Krishna S.; Hoffman, Eric A.; Saha, Punam K.

    2016-03-01

    Pulmonary vascular dysfunction has been implicated in smoking-related susceptibility to emphysema. With the growing interest in characterizing arterial morphology for early evaluation of the vascular role in pulmonary diseases, there is an increasing need for the standardization of a framework for arterial morphological assessment at airway segmental levels. In this paper, we present an effective and robust semi-automatic framework to segment pulmonary arteries at different anatomic airway branches and measure their cross-sectional area (CSA). The method starts with user-specified endpoints of a target arterial segment through a custom-built graphical user interface. It then automatically detects the centerline joining the endpoints, determines the local structure orientation and computes the CSA along the centerline after filtering out the adjacent pulmonary structures, such as veins or airway walls. Several new techniques are presented, including a collision-impact based cost function for centerline detection, radial sample-line based CSA computation, and outlier analysis of radial distance to subtract adjacent neighboring structures in the CSA measurement. The method was applied to repeat-scan pulmonary multirow detector CT (MDCT) images from ten healthy subjects (age: 21-48 Yrs, mean: 28.5 Yrs; 7 female) at functional residual capacity (FRC). The reproducibility of computed arterial CSA from four airway segmental regions in middle and lower lobes was analyzed. The overall repeat-scan intra-class correlation (ICC) of the computed CSA from all four airway regions in ten subjects was 96%, with the maximum ICC found at the LB10 and RB4 regions.

  11. A semi-automatic framework of measuring pulmonary arterial metrics at anatomic airway locations using CT imaging

    PubMed Central

    Jin, Dakai; Guo, Junfeng; Dougherty, Timothy M.; Iyer, Krishna S.; Hoffman, Eric A.; Saha, Punam K.

    2017-01-01

    Pulmonary vascular dysfunction has been implicated in smoking-related susceptibility to emphysema. With the growing interest in characterizing arterial morphology for early evaluation of the vascular role in pulmonary diseases, there is an increasing need for the standardization of a framework for arterial morphological assessment at airway segmental levels. In this paper, we present an effective and robust semi-automatic framework to segment pulmonary arteries at different anatomic airway branches and measure their cross-sectional area (CSA). The method starts with user-specified endpoints of a target arterial segment through a custom-built graphical user interface. It then automatically detects the centerline joining the endpoints, determines the local structure orientation and computes the CSA along the centerline after filtering out the adjacent pulmonary structures, such as veins or airway walls. Several new techniques are presented, including a collision-impact based cost function for centerline detection, radial sample-line based CSA computation, and outlier analysis of radial distance to subtract adjacent neighboring structures in the CSA measurement. The method was applied to repeat-scan pulmonary multirow detector CT (MDCT) images from ten healthy subjects (age: 21–48 Yrs, mean: 28.5 Yrs; 7 female) at functional residual capacity (FRC). The reproducibility of computed arterial CSA from four airway segmental regions in middle and lower lobes was analyzed. The overall repeat-scan intra-class correlation (ICC) of the computed CSA from all four airway regions in ten subjects was 96%, with the maximum ICC found at the LB10 and RB4 regions. PMID:28250572

  12. Semi-automatic measuring of arteriovenous relation as a possible silent brain infarction risk index in hypertensive patients.

    PubMed

    Vázquez Dorrego, X M; Manresa Domínguez, J M; Heras Tebar, A; Forés, R; Girona Marcé, A; Alzamora Sas, M T; Delgado Martínez, P; Riba-Llena, I; Ugarte Anduaga, J; Beristain Iraola, A; Barandiaran Martirena, I; Ruiz Bilbao, S M; Torán Monserrat, P

    2016-11-01

    To evaluate the usefulness of a semi-automatic system for measuring the arteriovenous relation (RAV) from retinographic images of hypertensive patients in assessing their cardiovascular risk and detecting silent brain ischemia (ICS). Semi-automatic measurements of arterial and venous width were performed with the aid of Imedos software, together with conventional fundus examination, on retinal images belonging to the 976 hypertensive patients enrolled in the Investigating Silent Strokes in Hypertensives: a magnetic resonance imaging study (ISSYS) cohort. All patients underwent cranial magnetic resonance imaging (MRI) to assess the presence or absence of silent brain infarct. Retinal images of 768 patients were studied. Among the clinical findings observed, an association with ICS was only detected in patients with microaneurysms (OR: 2.50; 95% CI: 1.05-5.98) or an altered RAV (<0.666) (OR: 4.22; 95% CI: 2.56-6.96). In multivariate logistic regression analysis adjusted for age and sex, only an altered RAV remained a risk factor (OR: 3.70; 95% CI: 2.21-6.18). The results show that semi-automatic analysis of the retinal vasculature from retinal images has the potential to serve as an important vascular risk marker in the hypertensive population. Copyright © 2016 Sociedad Española de Oftalmología. Published by Elsevier España, S.L.U. All rights reserved.

  13. Procedure for the semi-automatic detection of gastro-oesophageal reflux patterns in intraluminal impedance measurements in infants.

    PubMed

    Trachterna, M; Wenzl, T G; Silny, J; Rau, G; Heimann, G

    1999-04-01

    The diagnosis of gastro-oesophageal reflux (GOR) is of great interest for paediatric gastroenterologists. pH monitoring is the commonly used procedure for GOR diagnosis, but a large proportion of postprandial GOR is missed due to the mostly non-acidic gastric contents in infants. The multiple intraluminal impedance technique is based on the recording of the impedance changes during bolus transport inside the oesophagus. It is the first method which allows the pH-independent, long-term registration of GOR. The use of the impedance technology in clinical practice has been limited so far by the time-consuming, visual evaluation of the impedance traces. The new approach of a semi-automatic analysis of the impedance measurements allows the automated detection of reflux patterns. It is based on event marking and an optimised feature description of the impedance traces combined with a fuzzy system for pattern recognition. The classifier is developed and tested on 50 investigations in infants. Compared to the comprehensive, multiple visual evaluation, the achieved precision is 75% sensitivity and a 48% positive predictive value. In comparison to a single visual evaluation, the analysis of the automatically proposed patterns corresponds to a 96% reduction of the evaluation time with no loss of precision. Thus the applicability of the impedance technology is enhanced significantly. A combined measurement of pH and impedance gives evidence about the occurrence of GOR, its pH and the acidic exposure of the oesophagus.

  14. Semia: semi-automatic interactive graphic editing tool to annotate ambulatory ECG records.

    PubMed

    Dorn, Roman; Jager, Franc

    2004-09-01

    We designed and developed a special-purpose semi-automatic interactive graphic editing tool (Semia) to annotate transient ischaemic ST segment episodes and other non-ischaemic ST segment events in 24 h ambulatory electrocardiogram (ECG) records. The tool allows representation and viewing of the data, interaction with the data globally and locally at different resolutions, examining data at any point, manual adjustment of heart-beat fiducial points, and manual and automatic editing of annotations. Efficient and fast display of ambulatory ECG signal waveforms, display of diagnostic and morphology feature-vector time-series, dynamic interface controls, and automated procedures to help annotate made the tool efficient, user friendly and usable. Human expert annotators used the Semia tool to successfully annotate the Long-Term ST database (LTST DB), a result of a multinational effort. The tool supported paperless editing of annotations at geographically dispersed sites. We present the design, characteristic "look and feel", functionality, and development of the Semia annotation tool.

  15. Semi-automatic detection of linear archaeological traces from orthorectified aerial images

    NASA Astrophysics Data System (ADS)

    Figorito, Benedetto; Tarantino, Eufemia

    2014-02-01

    This paper presents a semi-automatic approach for archaeological traces detection from aerial images. The method developed was based on the multiphase active contour model (ACM). The image was segmented into three competing regions to improve the visibility of buried remains showing in the image as crop marks (i.e. centuriations, agricultural allocations, ancient roads, etc.). An initial determination of relevant traces can be quickly carried out by the operator by sketching straight lines close to the traces. Subsequently, tuning parameters (i.e. eccentricity, orientation, minimum area and distance from input line) are used to remove non-target objects and parameterize the detected traces. The algorithm and graphical user interface for this method were developed in a MATLAB environment and tested on high resolution orthorectified aerial images. A qualitative analysis of the method was lastly performed by comparing the traces extracted with ancient traces verified by archaeologists.

  16. Semi automatic indexing of PostScript files using Medical Text Indexer in medical education.

    PubMed

    Mollah, Shamim Ara; Cimino, Christopher

    2007-10-11

    At Albert Einstein College of Medicine a large part of the online lecture material consists of PostScript files. As the collection grows it becomes essential to create a digital library that provides easy access to relevant sections of the lecture material and is full-text indexed; to create this index it is necessary to extract all the text from the document files that constitute the originals of the lectures. In this study we present a semi-automatic indexing method using a robust technique for extracting text from PostScript files and the National Library of Medicine's Medical Text Indexer (MTI) program for indexing the text. This model can be applied to other medical schools for indexing purposes.

  17. Semi-automatic surface and volume mesh generation for subject-specific biomedical geometries.

    PubMed

    Sazonov, Igor; Nithiarasu, Perumal

    2012-01-01

    An overview of surface and volume mesh generation techniques for creating valid meshes to carry out biomedical flow simulations is provided. The methods presented are designed for robust numerical modelling of biofluid flow through subject-specific geometries. The applications of interest are haemodynamics in blood vessels and air flow in the upper human respiratory tract. The methods described are designed to minimize distortion to a given domain boundary. They are also designed to generate a triangular surface mesh first and then a volume mesh (tetrahedra) with high-quality surface and volume elements. For blood flow applications, a simple procedure to generate a boundary layer mesh is also described. The methods described here are semi-automatic in nature because the geometries are complex, and automation of the procedures may be possible if high-quality scans are used.

  18. Semi-automatic identification photo generation with facial pose and illumination normalization

    NASA Astrophysics Data System (ADS)

    Jiang, Bo; Liu, Sijiang; Wu, Song

    2016-07-01

    An identification photo is a category of facial image with strict requirements on image quality, such as size, illumination, user expression, dress, etc. Traditionally, these photos are taken in professional studios. With the rapid popularity of mobile devices, how to conveniently take an identification photo at any time and anywhere with such devices is an interesting problem. In this paper, we propose a novel semi-automatic identification photo generation approach. Given a user image, facial pose and expression are first normalized to meet the basic requirements. To correct uneven lighting conditions in the photo, a facial illumination normalization approach is adopted to further improve image quality. Finally, the foreground user is extracted and re-targeted to a specific photo size. In addition, the background can be changed as required. Preliminary experimental results show that the proposed method is efficient and effective in identification photo generation compared to manual tuning with commercial software.

  19. Semi-Automatic Normalization of Multitemporal Remote Images Based on Vegetative Pseudo-Invariant Features

    PubMed Central

    Garcia-Torres, Luis; Caballero-Novella, Juan J.; Gómez-Candón, David; De-Castro, Ana Isabel

    2014-01-01

    A procedure to achieve the semi-automatic relative image normalization of multitemporal remote images of an agricultural scene called ARIN was developed using the following procedures: 1) defining the same parcel of selected vegetative pseudo-invariant features (VPIFs) in each multitemporal image; 2) extracting data concerning the VPIF spectral bands from each image; 3) calculating the correction factors (CFs) for each image band to fit each image band to the average value of the image series; and 4) obtaining the normalized images by linear transformation of each original image band through the corresponding CF. ARIN software was developed to semi-automatically perform the ARIN procedure. We have validated ARIN using seven GeoEye-1 satellite images taken over the same location in Southern Spain from early April to October 2010 at an interval of approximately 3 to 4 weeks. The following three VPIFs were chosen: citrus orchards (CIT), olive orchards (OLI) and poplar groves (POP). In the ARIN-normalized images, the range, standard deviation (s. d.) and root mean square error (RMSE) of the spectral bands and vegetation indices were considerably reduced compared to the original images, regardless of the VPIF or the combination of VPIFs selected for normalization, which demonstrates the method’s efficacy. The correlation coefficients between the CFs among VPIFs for any spectral band (and all bands overall) were calculated to be at least 0.85 and were significant at P = 0.95, indicating that the normalization procedure was comparably performed regardless of the VPIF chosen. ARIN method was designed only for agricultural and forestry landscapes where VPIFs can be identified. PMID:24604031
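    Steps 3) and 4) of the ARIN procedure reduce to a per-band linear rescaling. A minimal NumPy sketch follows; the array layout (`vpif_means[i, b]` holding the mean VPIF value of image i, band b) is an illustrative assumption, not the layout of the ARIN software:

    ```python
    import numpy as np

    def arin_normalize(images, vpif_means):
        """ARIN-style relative normalization of a multitemporal image series.

        images:     array of shape (n_images, n_bands, rows, cols).
        vpif_means: array of shape (n_images, n_bands) with the mean value of
                    the VPIF parcels in each image and band.
        """
        series_mean = vpif_means.mean(axis=0)    # average of the image series
        cf = series_mean[None, :] / vpif_means   # correction factor per image/band
        return images * cf[:, :, None, None]     # linear transformation of each band
    ```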

  20. Semi-automatic normalization of multitemporal remote images based on vegetative pseudo-invariant features.

    PubMed

    Garcia-Torres, Luis; Caballero-Novella, Juan J; Gómez-Candón, David; De-Castro, Ana Isabel

    2014-01-01

    A procedure to achieve the semi-automatic relative image normalization of multitemporal remote images of an agricultural scene called ARIN was developed using the following procedures: 1) defining the same parcel of selected vegetative pseudo-invariant features (VPIFs) in each multitemporal image; 2) extracting data concerning the VPIF spectral bands from each image; 3) calculating the correction factors (CFs) for each image band to fit each image band to the average value of the image series; and 4) obtaining the normalized images by linear transformation of each original image band through the corresponding CF. ARIN software was developed to semi-automatically perform the ARIN procedure. We have validated ARIN using seven GeoEye-1 satellite images taken over the same location in Southern Spain from early April to October 2010 at an interval of approximately 3 to 4 weeks. The following three VPIFs were chosen: citrus orchards (CIT), olive orchards (OLI) and poplar groves (POP). In the ARIN-normalized images, the range, standard deviation (s. d.) and root mean square error (RMSE) of the spectral bands and vegetation indices were considerably reduced compared to the original images, regardless of the VPIF or the combination of VPIFs selected for normalization, which demonstrates the method's efficacy. The correlation coefficients between the CFs among VPIFs for any spectral band (and all bands overall) were calculated to be at least 0.85 and were significant at P = 0.95, indicating that the normalization procedure was comparably performed regardless of the VPIF chosen. ARIN method was designed only for agricultural and forestry landscapes where VPIFs can be identified.

  1. Semi-automatic Deformable Registration of Prostate MR Images to Pathological Slices

    PubMed Central

    Mazaheri, Yousef; Bokacheva, Louisa; Kroon, Dirk-Jan; Akin, Oguz; Hricak, Hedvig; Chamudot, Daniel; Fine, Samson; Koutcher, Jason A.

    2010-01-01

    Purpose To present a semi-automatic deformable registration algorithm for co-registering T2-weighted (T2w) images of the prostate with whole-mount pathological sections of prostatectomy specimens. Materials and Methods Twenty-four patients underwent 1.5-T endorectal MR imaging before radical prostatectomy with whole-mount step-section pathologic analysis of surgical specimens. For each patient, the T2w imaging containing the largest area of tumor was manually matched with the corresponding pathologic slice. The prostate was co-registered using a free form deformation (FFD) algorithm based on B-splines. Registration quality was assessed through differences between prostate diameters measured in right-left (RL) and anteroposterior (AP) directions on T2w images and pathologic slices and calculation of the Dice similarity coefficient, D, for the whole prostate (WP), the peripheral zone (PZ) and the transition zone (TZ). Results The mean differences in diameters measured on pathology and MR imaging in the RL direction and the AP direction were 0.49 cm and -0.63 cm, respectively, before registration and 0.10 cm and -0.11 cm, respectively, after registration. The mean D values for the WP, PZ and TZ, were 0.76, 0.65, and 0.77, respectively, before registration and increased to 0.91, 0.76, and 0.85, respectively, after registration. The improvements in D were significant for all three tissues (P < 0.001 for all). Conclusion The proposed semi-automatic method enabled successful co-registration of anatomical prostate MR images to pathologic slices. PMID:21031521
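    The registration-quality metric used above, the Dice similarity coefficient D = 2|A ∩ B| / (|A| + |B|), is straightforward to compute from binary masks of the whole prostate, peripheral zone or transition zone. A minimal sketch (mask names are illustrative):

    ```python
    import numpy as np

    def dice_coefficient(mask_a, mask_b):
        """Dice similarity coefficient between two binary masks (e.g. WP, PZ or TZ)."""
        a = np.asarray(mask_a, dtype=bool)
        b = np.asarray(mask_b, dtype=bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
    ```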

  2. Gray-Matter Volume Estimate Score: A Novel Semi-Automatic Method Measuring Early Ischemic Change on CT

    PubMed Central

    Song, Dongbeom; Lee, Kijeong; Kim, Eun Hye; Kim, Young Dae; Lee, Hye Sun; Kim, Jinkwon; Song, Tae-Jin; Ahn, Sung Soo; Nam, Hyo Suk; Heo, Ji Hoe

    2016-01-01

    Background and Purpose We developed a novel method named Gray-matter Volume Estimate Score (GRAVES), measuring early ischemic changes on Computed Tomography (CT) semi-automatically by computer software. This study aimed to compare GRAVES and Alberta Stroke Program Early CT Score (ASPECTS) with regards to outcome prediction and inter-rater agreement. Methods This was a retrospective cohort study. Among consecutive patients with ischemic stroke in the anterior circulation who received intra-arterial therapy (IAT), those with a readable pretreatment CT were included. Two stroke neurologists independently measured both the GRAVES and ASPECTS. GRAVES was defined as the percentage of estimated hypodense lesion in the gray matter of the ipsilateral hemisphere. Spearman correlation analysis, receiver operating characteristic (ROC) comparison test, and intra-class correlation coefficient (ICC) comparison tests were performed between GRAVES and ASPECTS. Results Ninety-four subjects (age: 68.7±10.3; male: 54 [54.9%]) were enrolled. The mean GRAVES was 9.0±8.9 and the median ASPECTS was 8 (interquartile range, 6-9). Correlation between ASPECTS and GRAVES was good (Spearman’s rank correlation coefficient, 0.642; P<0.001). ROC comparison analysis showed that the predictive value of GRAVES for favorable outcome was not significantly different from that of ASPECTS (area under curve, 0.765 vs. 0.717; P=0.308). ICC comparison analysis revealed that inter-rater agreement of GRAVES was significantly better than that of ASPECTS (0.978 vs. 0.895; P<0.001). Conclusions GRAVES had a good correlation with ASPECTS. GRAVES was as good as ASPECTS in predicting a favorable clinical outcome, but was better than ASPECTS regarding inter-rater agreement. GRAVES may be used to predict the outcome of IAT. PMID:26467197

  3. 10 CFR Appendix J to Subpart B of... - Uniform Test Method for Measuring the Energy Consumption of Automatic and Semi-Automatic Clothes...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    Appendix J to Subpart B of Part 430—Uniform Test Method for Measuring the Energy Consumption of Automatic and Semi-Automatic Clothes Washers (10 CFR 430, Subpt. B, App. J). The provisions of this appendix J shall apply...

  4. Semi-automatic detection of Gd-DTPA-saline filled capsules for colonic transit time assessment in MRI

    NASA Astrophysics Data System (ADS)

    Harrer, Christian; Kirchhoff, Sonja; Keil, Andreas; Kirchhoff, Chlodwig; Mussack, Thomas; Lienemann, Andreas; Reiser, Maximilian; Navab, Nassir

    2008-03-01

    Functional gastrointestinal disorders result in a significant number of consultations in primary care facilities. Chronic constipation and diarrhea are regarded as two of the most common diseases, affecting between 2% and 27% of the population in western countries [1-3]. Defecatory disorders are most commonly due to dysfunction of the pelvic floor or the anal sphincter. Although an exact differentiation of these pathologies is essential for adequate therapy, diagnosis is still based only on clinical evaluation [1]. Regarding quantification of constipation, only the ingestion of radio-opaque markers or radioactive isotopes and the consecutive assessment of colonic transit time using X-ray or scintigraphy, respectively, has been feasible in clinical settings [4-8]. However, these approaches have several drawbacks, such as involving rather inconvenient, time-consuming examinations and exposing the patient to ionizing radiation. Therefore, conventional assessment of colonic transit time has not been widely used. Most recently a new technique for the assessment of colonic transit time using MRI and MR-contrast-media-filled capsules has been introduced [9]. However, due to numerous examination dates per patient and corresponding datasets with many images, the evaluation of the image data is relatively time-consuming. The aim of our study was to develop a computer tool to facilitate the detection of the capsules in MRI datasets and thus to shorten the evaluation time. We present a semi-automatic tool which provides intensity-, size- [10] and shape-based [11,12] detection of ingested Gd-DTPA-saline filled capsules. After an automatic pre-classification, radiologists may easily correct the results using the application-specific user interface, thereby decreasing the evaluation time significantly.

  5. A hybrid semi-automatic method for liver segmentation based on level-set methods using multiple seed points.

    PubMed

    Yang, Xiaopeng; Yu, Hee Chul; Choi, Younggeun; Lee, Wonsup; Wang, Baojian; Yang, Jaedo; Hwang, Hongpil; Kim, Ji Hyun; Song, Jisoo; Cho, Baik Hwan; You, Heecheon

    2014-01-01

    The present study developed a hybrid semi-automatic method to extract the liver from abdominal computerized tomography (CT) images. The proposed hybrid method consists of a customized fast-marching level-set method for detection of an optimal initial liver region from multiple seed points selected by the user and a threshold-based level-set method for extraction of the actual liver region based on the initial liver region. The performance of the hybrid method was compared with that of the 2D region growing method implemented in OsiriX using abdominal CT datasets of 15 patients. The hybrid method showed a significantly higher accuracy in liver extraction (similarity index, SI = 97.6 ± 0.5%; false positive error, FPE = 2.2 ± 0.7%; false negative error, FNE = 2.5 ± 0.8%; average symmetric surface distance, ASD = 1.4 ± 0.5 mm) than the 2D region growing method (SI = 94.0 ± 1.9%; FPE = 5.3 ± 1.1%; FNE = 6.5 ± 3.7%; ASD = 6.7 ± 3.8 mm). The total liver extraction time per CT dataset of the hybrid method (77 ± 10 s) is significantly less than that of the 2D region growing method (575 ± 136 s). The interaction time per CT dataset between the user and the computer for the hybrid method (28 ± 4 s) is significantly shorter than that for the 2D region growing method (484 ± 126 s). The proposed hybrid method was found preferable for liver segmentation in preoperative virtual liver surgery planning.

  6. LTRsift: a graphical user interface for semi-automatic classification and postprocessing of de novo detected LTR retrotransposons

    PubMed Central

    2012-01-01

    Background Long terminal repeat (LTR) retrotransposons are a class of eukaryotic mobile elements characterized by a distinctive sequence similarity-based structure. Hence they are well suited for computational identification. Current software allows for a comprehensive genome-wide de novo detection of such elements. The obvious next step is the classification of newly detected candidates resulting in (super-)families. Such a de novo classification approach based on sequence-based clustering of transposon features has been proposed before, resulting in a preliminary assignment of candidates to families as a basis for subsequent manual refinement. However, such a classification workflow is typically split across a heterogeneous set of glue scripts and generic software (for example, spreadsheets), making it tedious for a human expert to inspect, curate and export the putative families produced by the workflow. Results We have developed LTRsift, an interactive graphical software tool for semi-automatic postprocessing of de novo predicted LTR retrotransposon annotations. Its user-friendly interface offers customizable filtering and classification functionality, displaying the putative candidate groups, their members and their internal structure in a hierarchical fashion. To ease manual work, it also supports graphical user interface-driven reassignment, splitting and further annotation of candidates. Export of grouped candidate sets in standard formats is possible. In two case studies, we demonstrate how LTRsift can be employed in the context of a genome-wide LTR retrotransposon survey effort. Conclusions LTRsift is a useful and convenient tool for semi-automated classification of newly detected LTR retrotransposons based on their internal features. Its efficient implementation allows for convenient and seamless filtering and classification in an integrated environment. Developed for life scientists, it is helpful in postprocessing and refining the output of software

  7. Evaluation of ventricular dysfunction using semi-automatic longitudinal strain analysis of four-chamber cine MR imaging.

    PubMed

    Kawakubo, Masateru; Nagao, Michinobu; Kumazawa, Seiji; Yamasaki, Yuzo; Chishaki, Akiko S; Nakamura, Yasuhiko; Honda, Hiroshi; Morishita, Junji

    2016-02-01

    The aim of this study was to evaluate ventricular dysfunction using longitudinal strain analysis in 4-chamber (4CH) cine MR imaging, and to investigate the agreement between the semi-automatic and manual measurements in the analysis. Fifty-two consecutive patients with ischemic or non-ischemic cardiomyopathy, or repaired tetralogy of Fallot, who underwent cardiac MR examination incorporating cine MR imaging were retrospectively enrolled. The LV and RV longitudinal strain values were obtained both semi-automatically and manually. Receiver operating characteristic (ROC) analysis was performed to determine the optimal cutoff of the minimum longitudinal strain value for the detection of patients with cardiac dysfunction. The correlations between manual and semi-automatic measurements for the LV and RV walls were analyzed by Pearson coefficient analysis. ROC analysis demonstrated that the optimal cut-off of the minimum longitudinal strain value (εL_min) diagnosed LV and RV dysfunction with high accuracy (LV εL_min = -7.8 %: area under the curve, 0.89; sensitivity, 83 %; specificity, 91 %; RV εL_min = -15.7 %: area under the curve, 0.82; sensitivity, 92 %; specificity, 68 %). Excellent correlations between manual and semi-automatic measurements for the LV and RV free wall were observed (LV, r = 0.97, p < 0.01; RV, r = 0.79, p < 0.01). Our semi-automatic longitudinal strain analysis in 4CH cine MR imaging can evaluate LV and RV dysfunction with simple and easy measurements. The strain analysis could have extensive application in cardiac imaging for various clinical cases.

  8. Response Evaluation of Malignant Liver Lesions After TACE/SIRT: Comparison of Manual and Semi-Automatic Measurement of Different Response Criteria in Multislice CT.

    PubMed

    Höink, Anna Janina; Schülke, Christoph; Koch, Raphael; Löhnert, Annika; Kammerer, Sara; Fortkamp, Rasmus; Heindel, Walter; Buerke, Boris

    2017-08-23

    Purpose: To compare measurement precision and interobserver variability in the evaluation of hepatocellular carcinoma (HCC) and liver metastases in MSCT before and after transarterial local ablative therapies. Materials and Methods: Retrospective study of 72 patients with malignant liver lesions (42 metastases; 30 HCCs) before and after therapy (43 SIRT procedures; 29 TACE procedures). Established (LAD; SAD; WHO) and vitality-based parameters (mRECIST; mLAD; mSAD; EASL) were assessed manually and semi-automatically by two readers. The relative interobserver difference (RID) and intraclass correlation coefficient (ICC) were calculated. Results: The median RID for vitality-based parameters was lower from semi-automatic than from manual measurement of mLAD (manual 12.5 %; semi-automatic 3.4 %), mSAD (manual 12.7 %; semi-automatic 5.7 %) and EASL (manual 10.4 %; semi-automatic 1.8 %). The difference in established parameters was not statistically noticeable (p > 0.05). The ICCs of LAD (manual 0.984; semi-automatic 0.982), SAD (manual 0.975; semi-automatic 0.958) and WHO (manual 0.984; semi-automatic 0.978) are high, both in manual and semi-automatic measurements. The ICCs of manual measurements of mLAD (0.897), mSAD (0.844) and EASL (0.875) are lower. This decrease cannot be found in semi-automatic measurements of mLAD (0.997), mSAD (0.992) and EASL (0.998). Conclusion: Vitality-based tumor measurements of HCC and metastases after transarterial local therapies should be performed semi-automatically due to greater measurement precision, thus increasing the reproducibility and in turn the reliability of therapeutic decisions. Key points: · Liver lesion measurements according to EASL and mRECIST are more precise when performed semi-automatically. · The higher reproducibility may facilitate a more reliable classification of therapy response. · Measurements according to RECIST and WHO offer equivalent precision semi-automatically

  9. Validation of simplified dosimetry approaches in ⁸⁹Zr-PET/CT: the use of manual versus semi-automatic delineation methods to estimate organ absorbed doses.

    PubMed

    Makris, N E; van Velden, F H P; Huisman, M C; Menke, C W; Lammertsma, A A; Boellaard, R

    2014-10-01

    Increasing interest in immuno-positron emission tomography (PET) studies requires development of dosimetry methods which will provide accurate estimations of organ absorbed doses. The purpose of this study is to develop and validate simplified dosimetry approaches for (89)Zirconium-PET (Zr-PET)/computed tomography (CT) studies. Five patients with advanced colorectal cancer received 37.1 ± 0.9 MBq (89)Zr-cetuximab within 2 h after administration of a therapeutic dose of 500 mg m(-2) cetuximab. PET/CT scans were obtained 1, 24, 48, 94, and 144 h post injection. Volumes of interest (VOIs) were manually delineated in lungs, liver, spleen, and kidneys for all scans, providing a reference VOI set. Simplified manual VOIs were drawn independently on CT scans using larger voxel sizes. The transformation of VOIs based on rigid and/or nonrigid registrations of the first CT scan (CT1) onto all successive CT scans was also investigated. The transformation matrix obtained from each registration was applied to the manual VOIs of CT₁ to obtain VOIs for the successive scans. Dice similarity coefficient (DSC) and Hausdorff distance were used to assess the performance of the registrations. Organ total activity, organ absorbed dose, and effective dose were calculated for all methods. Semi-automatic delineation based on nonrigid registration showed excellent agreement for lungs and liver (DSC: 0.90 ± 0.04; 0.81 ± 0.06) and good agreement for spleen and kidneys (DSC: 0.71 ± 0.07; 0.66 ± 0.08). Hausdorff distance ranged from 13 to 16 mm depending on the organ. Simplified manual delineation methods, in liver and lungs, performed similarly to semi-automatic delineation methods. For kidneys and spleen, however, poorer accuracy in total activity and absorbed dose was observed, as the voxel size increased. Organ absorbed dose and total activity based on nonrigid registration were within 10%. The effective dose was within ±3% for all VOI delineation methods. A fast, semi-automatic, and

  10. Semi-automatic attenuation of cochlear implant artifacts for the evaluation of late auditory evoked potentials.

    PubMed

    Viola, Filipa Campos; De Vos, Maarten; Hine, Jemma; Sandmann, Pascale; Bleeck, Stefan; Eyles, Julie; Debener, Stefan

    2012-02-01

    Electrical artifacts caused by the cochlear implant (CI) contaminate electroencephalographic (EEG) recordings from implanted individuals and corrupt auditory evoked potentials (AEPs). Independent component analysis (ICA) is efficient in attenuating the electrical CI artifact and AEPs can be successfully reconstructed. However the manual selection of CI artifact related independent components (ICs) obtained with ICA is unsatisfactory, since it contains expert-choices and is time consuming. We developed a new procedure to evaluate temporal and topographical properties of ICs and semi-automatically select those components representing electrical CI artifact. The CI Artifact Correction (CIAC) algorithm was tested on EEG data from two different studies. The first consists of published datasets from 18 CI users listening to environmental sounds. Compared to the manual IC selection performed by an expert the sensitivity of CIAC was 91.7% and the specificity 92.3%. After CIAC-based attenuation of CI artifacts, a high correlation between age and N1-P2 peak-to-peak amplitude was observed in the AEPs, replicating previously reported findings and further confirming the algorithm's validity. In the second study AEPs in response to pure tone and white noise stimuli from 12 CI users that had also participated in the other study were evaluated. CI artifacts were attenuated based on the IC selection performed semi-automatically by CIAC and manually by one expert. Again, a correlation between N1 amplitude and age was found. Moreover, a high test-retest reliability for AEP N1 amplitudes and latencies suggested that CIAC-based attenuation reliably preserves plausible individual response characteristics. We conclude that CIAC enables the objective and efficient attenuation of the CI artifact in EEG recordings, as it provided a reasonable reconstruction of individual AEPs. The systematic pattern of individual differences in N1 amplitudes and latencies observed with different stimuli at
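    The core operation of ICA-based artifact attenuation, removing selected independent components and back-projecting the rest, can be sketched with scikit-learn's FastICA. The CIAC selection heuristics themselves are not reproduced here; the component indices are assumed to come from such a selection step, and all names are illustrative.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    def attenuate_artifact_components(eeg, artifact_ics):
        """Remove selected independent components from multichannel EEG.

        eeg:          array of shape (n_samples, n_channels).
        artifact_ics: indices of the components judged to represent CI artifact.
        """
        ica = FastICA(n_components=eeg.shape[1], random_state=0, max_iter=1000)
        sources = ica.fit_transform(eeg)        # (n_samples, n_components)
        sources[:, artifact_ics] = 0.0          # zero out the artifact components
        return ica.inverse_transform(sources)   # back-project the cleaned signal
    ```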

  11. Semi-Automatic Building Models and FAÇADE Texture Mapping from Mobile Phone Images

    NASA Astrophysics Data System (ADS)

    Jeong, J.; Kim, T.

    2016-06-01

    Research on 3D urban modelling has been actively carried out for a long time. Recently, the need for 3D urban modelling has increased rapidly due to improved geo-web services and popularized smart devices. Current 3D urban models, such as those provided by Google Earth, use aerial photos for 3D urban modelling, but there are some limitations: immediate updates when buildings change are difficult, many buildings lack a 3D model and texture, and large resources for maintenance and updating are inevitable. To resolve these limitations, we propose a method for semi-automatic building modelling and façade texture mapping from mobile phone images and analyze the resulting models against actual measurements. Our method consists of a camera geometry estimation step, an image matching step, and a façade mapping step. Models generated with this method were compared with actual measurements of real buildings by comparing the ratios of corresponding edge lengths. Results showed a 5.8% average error in the length ratios. With this method, we could generate a simple building model with fine façade textures without expensive dedicated tools and datasets.

  12. Towards Semi-Automatic Artifact Rejection for the Improvement of Alzheimer's Disease Screening from EEG Signals.

    PubMed

    Solé-Casals, Jordi; Vialatte, François-Benoît

    2015-07-23

    A large number of studies have analyzed measurable changes that Alzheimer's disease causes on electroencephalography (EEG). Despite being easily reproducible, those markers have limited sensitivity, which reduces the interest of EEG as a screening tool for this pathology. This is in large part due to the poor signal-to-noise ratio of EEG signals: EEG recordings are usually corrupted by spurious extra-cerebral artifacts, which substantially degrade signal quality. We investigate the possibility of automatically cleaning a database of EEG recordings taken from patients suffering from Alzheimer's disease and healthy age-matched controls. We present an investigation of commonly used markers of EEG artifacts: kurtosis, sample entropy, zero-crossing rate and fractal dimension. We investigate the reliability of these markers by comparison with human labeling of sources. Our results show significant differences for the sample entropy marker. We present a strategy for semi-automatic cleaning based on blind source separation, which may improve the specificity of Alzheimer's screening using EEG signals.
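
    A minimal sketch of how three of the artifact markers named above (kurtosis, zero-crossing rate, and a naive sample entropy) might be computed per source time course; it is not the authors' pipeline, and the parameter choices are assumptions.

```python
# Hedged sketch: per-source artifact markers for EEG signals.
import numpy as np
from scipy.stats import kurtosis

def zero_crossing_rate(x):
    """Fraction of consecutive samples whose signs differ."""
    return np.mean(np.abs(np.diff(np.sign(x))) > 0)

def sample_entropy(x, m=2, r_factor=0.2):
    """Naive O(N^2) sample entropy with tolerance r = r_factor * std(x)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)

    def pair_count(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        dist = np.max(np.abs(templates[:, None] - templates[None, :]), axis=2)
        return (dist <= r).sum() - len(templates)  # exclude self-matches

    b, a = pair_count(m), pair_count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def artifact_markers(source):
    """Return (kurtosis, zero-crossing rate, sample entropy) for one source."""
    return kurtosis(source), zero_crossing_rate(source), sample_entropy(source)
```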

  13. Semi-automatic image personalization tool for variable text insertion and replacement

    NASA Astrophysics Data System (ADS)

    Ding, Hengzhou; Bala, Raja; Fan, Zhigang; Eschbach, Reiner; Bouman, Charles A.; Allebach, Jan P.

    2010-02-01

    Image personalization is a widely used technique in personalized marketing [1], in which a vendor attempts to promote new products or retain customers by sending marketing collateral that is tailored to the customers' demographics, needs, and interests. With current solutions of which we are aware, such as XMPie [2], DirectSmile [3], and AlphaPicture [4], in order to produce this tailored marketing collateral, image templates need to be created manually by graphic designers, involving complex grid manipulation and detailed geometric adjustments. As a matter of fact, the image template design is highly manual, skill-demanding and costly, and essentially the bottleneck for image personalization. We present a semi-automatic image personalization tool for designing image templates. Two scenarios are considered: text insertion and text replacement, with the text replacement option not offered in current solutions. The graphical user interface (GUI) of the tool is described in detail. Unlike current solutions, the tool renders the text in 3-D, which allows easy adjustment of the text. In particular, the tool has been implemented in Java, which introduces flexible deployment and eliminates the need for any special software or know-how on the part of the end user.

  14. Conceptual design of semi-automatic wheelbarrow to overcome ergonomics problems among palm oil plantation workers

    NASA Astrophysics Data System (ADS)

    Nawik, N. S. M.; Deros, B. M.; Rahman, M. N. A.; Sukadarin, E. H.; Nordin, N.; Tamrin, S. B. M.; Bakar, S. A.; Norzan, M. L.

    2015-12-01

    An ergonomics problem is one of the main issues faced by palm oil plantation workers, especially during harvesting and collecting of fresh fruit bunches (FFB). The intensive manual handling and labor activities involved have been associated with a high prevalence of musculoskeletal disorders (MSDs) among palm oil plantation workers. New and safe technology for machines and equipment in palm oil plantations is very important to help workers reduce risks and injuries while working. The aim of this research is to improve the design of a wheelbarrow so that it is suitable for workers and small oil palm plantations. The wheelbarrow design was drawn using CATIA ergonomic features, and the ergonomics assessment was performed by comparison with the existing wheelbarrow design. The conceptual design was developed based on the problems reported by workers. From this analysis, the result is a concept design for an ergonomic semi-automatic wheelbarrow that is safe and suitable for palm oil plantation workers.

  15. Semi-automatic characterization and simulation of VCSEL devices for high speed VSR communications

    NASA Astrophysics Data System (ADS)

    Pellevrault, S.; Toffano, Z.; Destrez, A.; Pez, M.; Quentel, F.

    2006-04-01

    Very short range (VSR) high bit rate optical fiber communications are an emerging market dedicated to local area networks, digital displays or board-to-board interconnects within real-time computers. In this technology, a very fast way to exchange data with high noise immunity and low cost is needed. Optical multimode graded-index fibers are used here because they have electrical noise immunity and are easier to handle than monomode fibers. 850 nm VCSELs are used in VSR communications because of their low cost, direct on-wafer tests, and the possibility of manufacturing VCSEL arrays very easily compared to classical optical transceivers using edge-emitting laser diodes. Although much research has been carried out on temperature modeling of VCSEL emitters, few studies have been devoted to characterizations over a very broad range of temperatures. Nowadays, VCSEL VSR communications tend to be used in severe environments such as space, avionics and military equipment. Therefore, a simple way to characterize VCSEL emitters over a broad range of temperature is required. In this paper, we propose a complete characterization of the emitter part of 2.5 Gb/s opto-electrical transceiver modules operating from -40°C to +120°C using 850 nm VCSELs. Our method uses simple and semi-automatic measurements of a given set of chosen device parameters in order to make fast and efficient simulations.

  16. A simple semi-automatic approach for land cover classification from multispectral remote sensing imagery.

    PubMed

    Jiang, Dong; Huang, Yaohuan; Zhuang, Dafang; Zhu, Yunqiang; Xu, Xinliang; Ren, Hongyan

    2012-01-01

    Land cover data represent a fundamental data source for various types of scientific research. The classification of land cover based on satellite data is a challenging task, and an efficient classification method is needed. In this study, an automatic scheme is proposed for the classification of land use using multispectral remote sensing images based on change detection and a semi-supervised classifier. The satellite image can be automatically classified using only the prior land cover map and existing images; therefore, human involvement is reduced to a minimum, ensuring the operability of the method. The method was tested in the Qingpu District of Shanghai, China. Using Environment Satellite 1 (HJ-1) images from 2009 with 30 m spatial resolution, the areas were classified into five main types of land cover based on previous land cover data and spectral features. The results agreed well with validation land cover maps, with a Kappa value of 0.79 and statistical area biases of less than 6%. This study proposed a simple semi-automatic approach for land cover classification that uses prior maps and achieves satisfactory accuracy, integrating the accuracy of visual interpretation with the performance of automatic classification methods. The method can be used for land cover mapping in areas lacking ground reference information or for conveniently identifying regions of rapid land cover change (such as rapid urbanization).
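
    The Kappa statistic quoted above can be derived from a confusion matrix between the classified map and reference labels. The sketch below is a generic illustration, not the authors' code; the integer-coded label maps and class count are assumptions.

```python
# Hedged sketch: Cohen's kappa between a classified land cover map and a
# reference map, both given as integer-coded class labels of equal shape.
import numpy as np

def cohens_kappa(reference, predicted, n_classes):
    ref = np.ravel(reference).astype(int)
    pred = np.ravel(predicted).astype(int)
    confusion = np.zeros((n_classes, n_classes))
    np.add.at(confusion, (ref, pred), 1)          # row = reference, col = predicted
    total = confusion.sum()
    p_observed = np.trace(confusion) / total
    p_expected = confusion.sum(axis=1) @ confusion.sum(axis=0) / total ** 2
    return (p_observed - p_expected) / (1.0 - p_expected)
```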

  17. Semi-automatic mapping for identifying complex geobodies in seismic images

    NASA Astrophysics Data System (ADS)

    Domínguez-C, Raymundo; Romero-Salcedo, Manuel; Velasquillo-Martínez, Luis G.; Shemeretov, Leonid

    2017-03-01

    Seismic images are composed of positive and negative seismic wave traces with different amplitudes (Robein 2010 Seismic Imaging: A Review of the Techniques, their Principles, Merits and Limitations (Houten: EAGE)). The association of these amplitudes together with a color palette forms complex visual patterns. The color intensity of such patterns is directly related to impedance contrasts: the higher the contrast, the higher the color intensity. Generally speaking, low impedance contrasts are depicted with low tone colors, creating zones with different patterns whose features are not evident for a 3D automated mapping option available on commercial software. In this work, a workflow for a semi-automatic mapping of seismic images focused on those areas with low-intensity colored zones that may be associated with geobodies of petroleum interest is proposed. The CIE L*A*B* color space was used to perform the seismic image processing, which helped find small but significant differences between pixel tones. This process generated binary masks that bound color regions to low-intensity colors. The three-dimensional-mask projection allowed the construction of 3D structures for such zones (geobodies). The proposed method was applied to a set of digital images from a seismic cube and tested on four representative study cases. The obtained results are encouraging because interesting geobodies are obtained with a minimum of information.
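
    The colour-space step described above, converting the seismic image to CIE L*A*B* and thresholding low-intensity tones into a binary mask, could look like the following sketch with scikit-image; the lightness threshold is an illustrative assumption, not a value from the paper.

```python
# Hedged sketch: binary mask of low-lightness pixels in CIE L*a*b* space.
import numpy as np
from skimage import color

def low_intensity_mask(rgb_image, lightness_max=35.0):
    """rgb_image: float RGB array in [0, 1], shape (H, W, 3)."""
    lab = color.rgb2lab(rgb_image)       # L* channel ranges over [0, 100]
    return lab[..., 0] <= lightness_max  # True where the colour is 'low intensity'
```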

  18. A Simple Semi-Automatic Approach for Land Cover Classification from Multispectral Remote Sensing Imagery

    PubMed Central

    Jiang, Dong; Huang, Yaohuan; Zhuang, Dafang; Zhu, Yunqiang; Xu, Xinliang; Ren, Hongyan

    2012-01-01

    Land cover data represent a fundamental data source for various types of scientific research. The classification of land cover based on satellite data is a challenging task, and an efficient classification method is needed. In this study, an automatic scheme is proposed for the classification of land use using multispectral remote sensing images based on change detection and a semi-supervised classifier. The satellite image can be automatically classified using only the prior land cover map and existing images; therefore, human involvement is reduced to a minimum, ensuring the operability of the method. The method was tested in the Qingpu District of Shanghai, China. Using Environment Satellite 1 (HJ-1) images from 2009 with 30 m spatial resolution, the areas were classified into five main types of land cover based on previous land cover data and spectral features. The results agreed well with validation land cover maps, with a Kappa value of 0.79 and statistical area biases of less than 6%. This study proposed a simple semi-automatic approach for land cover classification that uses prior maps and achieves satisfactory accuracy, integrating the accuracy of visual interpretation with the performance of automatic classification methods. The method can be used for land cover mapping in areas lacking ground reference information or for conveniently identifying regions of rapid land cover change (such as rapid urbanization). PMID:23049886

  19. Fabrication of glass micropipettes: a semi-automatic approach for trimming the pipette tip.

    PubMed

    Engström, K G; Meiselman, H J

    1992-01-01

    Micropipettes as research instruments are well established in cell biology, including blood rheology. However, the experimental results are, to some extent, dependent on the quality of the pipette itself; it is usually critical to have the desired pipette internal diameter and a perpendicular tip. Pipette fabrication is a two-step procedure involving: a) the pulling of the pipette from a glass capillary; b) the trimming of the pipette tip. A common method to trim and fracture the pipette tip is the use of a melted glass bead on a heated tungsten wire. Previous devices using this method were often associated with problems because the heated wire varied in length with temperature. As a result, the bead together with the attached pipette tip moved markedly and thus hampered the possibility to obtain a perpendicularly cut pipette tip. An improved design, based on the same principle with a melted glass bead, is thus suggested; it eliminates the problem with a moving glass bead and, in addition, allows semi-automatic pipette trimming by utilizing the heat-induced elongation/retraction of the heated wire to fracture the tip without requiring manual assistance. Furthermore, a simple pipette storing technique is suggested, based on standard laboratory utensils, in order to more easily handle fragile pipettes without risk of breakage.

  20. Semi-automatic mapping of linear-trending bedforms using 'Self-Organizing Maps' algorithm

    NASA Astrophysics Data System (ADS)

    Foroutan, M.; Zimbelman, J. R.

    2017-09-01

    Increased application of high resolution spatial data, such as high resolution satellite or Unmanned Aerial Vehicle (UAV) images from Earth, as well as High Resolution Imaging Science Experiment (HiRISE) images from Mars, makes it necessary to develop automated techniques capable of extracting detailed geomorphologic elements from such large data sets. Model validation by repeated images in environmental management studies, such as those of climate-related change, as well as increasing access to high-resolution satellite images, underlines the demand for detailed automatic image-processing techniques in remote sensing. This study presents a methodology based on an unsupervised Artificial Neural Network (ANN) algorithm, known as Self Organizing Maps (SOM), to achieve the semi-automatic extraction of linear features with small footprints on satellite images. SOM is based on competitive learning and is efficient for handling huge data sets. We applied the SOM algorithm to high resolution satellite images of Earth and Mars (Quickbird, Worldview and HiRISE) in order to facilitate and speed up image analysis while improving the accuracy of the results. About 98% overall accuracy and a 0.001 quantization error in the recognition of small linear-trending bedforms demonstrate a promising framework.
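
    The competitive-learning step of a Self Organizing Map is compact enough to sketch in plain NumPy. The code below is purely illustrative of how SOM training on pixel or feature vectors proceeds; it is not the study's implementation, and the grid size, learning rate, and neighbourhood parameters are assumptions.

```python
# Hedged sketch: minimal SOM training loop (NumPy only).
import numpy as np

def train_som(data, grid_shape=(10, 10), n_iter=1000, lr0=0.5, sigma0=3.0, seed=0):
    """data: (n_samples, n_features) array; returns the trained weight grid."""
    rng = np.random.default_rng(seed)
    rows, cols = grid_shape
    weights = rng.random((rows, cols, data.shape[1]))
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
    for t in range(n_iter):
        lr = lr0 * np.exp(-t / n_iter)          # decaying learning rate
        sigma = sigma0 * np.exp(-t / n_iter)    # shrinking neighbourhood
        x = data[rng.integers(len(data))]       # random training sample
        dists = np.linalg.norm(weights - x, axis=2)
        bmu = np.unravel_index(np.argmin(dists), dists.shape)  # best matching unit
        h = np.exp(-np.sum((grid - np.array(bmu)) ** 2, axis=2) / (2 * sigma ** 2))
        weights += lr * h[..., None] * (x - weights)
    return weights
```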

  1. A semi-automatic method for extracting thin line structures in images as rooted tree network

    SciTech Connect

    Brazzini, Jacopo; Dillard, Scott; Soille, Pierre

    2010-01-01

    This paper addresses the problem of semi-automatic extraction of line networks in digital images - e.g., road or hydrographic networks in satellite images, or blood vessels in medical images. For that purpose, we improve a generic method derived from morphological and hydrological concepts, consisting of minimum cost path estimation and flow simulation. While this approach fully exploits the local contrast and shape of the network, as well as its arborescent nature, we further incorporate local directional information about the structures in the image. Namely, an appropriate anisotropic metric is designed by using both the characteristic features of the target network and the eigen-decomposition of the gradient structure tensor of the image. The geodesic propagation from a given seed under this metric is then combined with hydrological operators for overland flow simulation to extract the line network. The algorithm is demonstrated for the extraction of blood vessels in a retina image and of a river network in a satellite image.
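
    The anisotropic metric described above relies on the eigen-decomposition of the gradient structure tensor. The sketch below computes, per pixel, a dominant orientation and a coherence measure from that tensor using SciPy; it illustrates only this ingredient, not the authors' metric, and the smoothing scale is an assumption.

```python
# Hedged sketch: orientation and coherence from the 2D gradient structure tensor.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def structure_tensor_orientation(image, sigma=2.0):
    gx, gy = sobel(image, axis=1), sobel(image, axis=0)
    jxx = gaussian_filter(gx * gx, sigma)
    jxy = gaussian_filter(gx * gy, sigma)
    jyy = gaussian_filter(gy * gy, sigma)
    # closed-form eigenvalues of the 2x2 tensor [[jxx, jxy], [jxy, jyy]]
    trace, det = jxx + jyy, jxx * jyy - jxy ** 2
    disc = np.sqrt(np.maximum((trace / 2) ** 2 - det, 0.0))
    l1, l2 = trace / 2 + disc, trace / 2 - disc
    orientation = 0.5 * np.arctan2(2 * jxy, jxx - jyy)   # radians
    coherence = (l1 - l2) / (l1 + l2 + 1e-12)            # 0 = isotropic, 1 = line-like
    return orientation, coherence
```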

  2. Semi-automatic border detection method for left ventricular volume estimation in 4D ultrasound data

    NASA Astrophysics Data System (ADS)

    van Stralen, Marijn; Bosch, Johan G.; Voormolen, Marco M.; van Burken, Gerard; Krenning, Boudewijn J.; van Geuns, Robert Jan M.; Angelie, Emmanuelle; van der Geest, Rob J.; Lancee, Charles T.; de Jong, Nico; Reiber, Johan H. C.

    2005-04-01

    We propose a semi-automatic endocardial border detection method for LV volume estimation in 3D time series of cardiac ultrasound data. It is based on pattern matching and dynamic programming techniques and operates on 2D slices of the 4D data, requiring minimal user interaction. We evaluated the method on data acquired with the Fast Rotating Ultrasound (FRU) transducer: a linear phased array transducer rotated at high speed around its image axis, generating high quality 2D images of the heart. We automatically select a subset of 2D images at typically 10 rotation angles and 16 cardiac phases. From four manually drawn contours, a 4D shape model and a 4D edge pattern model are derived. For the selected images, contour shapes and edge patterns are estimated using the models. Pattern matching and dynamic programming are applied to detect the contours automatically. The method allows easy corrections of the detected 2D contours, to iteratively achieve more accurate models and improved detections. An evaluation of this method on FRU data against MRI was done for full cycle LV volumes in 10 patients. Good correlations were found against MRI volumes (r=0.94, y=0.72x + 30.3, difference of 9.6 +/- 17.4 ml (Av +/- SD)) and a low interobserver variability for US (r=0.94, y=1.11x - 16.8, difference of 1.4 +/- 14.2 ml). On average only 2.8 corrections per patient were needed (in a total of 160 images). Although the method shows good correlations with MRI without corrections, applying these corrections can yield significant improvements.

  3. Colon wall motility: comparison of novel quantitative semi-automatic measurements using cine MRI.

    PubMed

    Hoad, C L; Menys, A; Garsed, K; Marciani, L; Hamy, V; Murray, K; Costigan, C; Atkinson, D; Major, G; Spiller, R C; Taylor, S A; Gowland, P A

    2016-03-01

    Recently, cine magnetic resonance imaging (MRI) has shown promise for visualizing movement of the colonic wall, although assessment of data has been subjective and observer dependent. This study aimed to develop an objective and semi-automatic imaging metric of ascending colonic wall movement, using image registration techniques. Cine balanced turbo field echo MRI images of ascending colonic motility were acquired over 2 min from 23 healthy volunteers (HVs) at baseline and following two different macrogol stimulus drinks (11 HVs drank 1 L and 12 HVs drank 2 L). Motility metrics derived from large scale geometric and small scale pixel movement parameters following image registration were developed using the post ingestion data and compared to observer grading of wall motion. Inter and intra-observer variability in the highest correlating metric was assessed using Bland-Altman analysis calculated from two separate observations on a subset of data. All the metrics tested showed significant correlation with the observer rating scores. Line analysis (LA) produced the highest correlation coefficient of 0.74 (95% CI: 0.55-0.86), p < 0.001 (Spearman Rho). Bland-Altman analysis of the inter- and intra-observer variability for the LA metric, showed almost zero bias and small limits of agreement between observations (-0.039 to 0.052 intra-observer and -0.051 to 0.054 inter-observer, range of measurement 0-0.353). The LA index of colonic motility derived from cine MRI registered data provides a quick, accurate and non-invasive method to detect wall motion within the ascending colon following a colonic stimulus in the form of a macrogol drink. © 2015 John Wiley & Sons Ltd.
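
    The Bland-Altman analysis used above for inter- and intra-observer variability reduces to a bias and 95% limits of agreement of the paired differences. A minimal sketch follows, with assumed variable names; it is not the authors' analysis code.

```python
# Hedged sketch: Bland-Altman bias and 95% limits of agreement.
import numpy as np

def bland_altman_limits(obs1, obs2):
    """obs1, obs2: paired measurements of the same metric from two observations."""
    diff = np.asarray(obs1, float) - np.asarray(obs2, float)
    bias = diff.mean()
    spread = 1.96 * diff.std(ddof=1)
    return bias, bias - spread, bias + spread   # bias, lower LoA, upper LoA
```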

  4. Topographic Effects on Magnetic Data and Their Influence on Semi-Automatic Interpretation Methods: a Cross-Correlation Approach

    NASA Astrophysics Data System (ADS)

    Ugalde, H.; Morris, W. A.

    2009-05-01

    Most of the recent advances in magnetic surveying have focused on achieving higher levels of instrument sensitivity and/or better definition of the morphology of the magnetic field through the use of measured magnetic field gradients. Semi-automatic interpretation routines are usually applied to the acquired magnetic data under the assumption that the observed magnetic dataset provides an unbiased representation of the magnetic mineral variations in the surface and subsurface geology. However, topographic effects on magnetic data are normally neglected. A common misconception is that magnetic data acquired on (or transformed to) a surface that is parallel to the ground have no topographic effects. That is normally true when the observed magnetic anomalies are greater than 5000 nT and the topography is relatively flat; however, topographic variations greater than 100 m can induce magnetic anomalies in the ±100 nT range. In this kind of situation, any interpretation routine applied to the data will be biased by topography and will therefore fail to capture the true nature of the subsurface geology. This work presents a cross-correlation analysis between topography and the measured magnetic data, as a guideline to determine the areas where a magnetic terrain correction needs to be applied prior to any subsequent modeling/interpretation routine. Calculations done on synthetic data show that in the case of vertical total magnetization (induced plus remanent fields), there is a maximum correlation between topography and the observed magnetic field in those areas with the largest topographic changes. The topographic magnetic effect is then removed in the selected areas by computing the magnetic signature of a 3D body of uniform magnetic susceptibility, limited by a digital elevation model of the area (top) and a flat surface at the bottom. Comparative results of using standard total magnetic intensity data and topography-corrected data for Euler Deconvolution
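
    One straightforward way to realise the cross-correlation guideline described above is a sliding-window Pearson correlation between the digital elevation model and the magnetic grid, flagging windows of high absolute correlation for terrain correction. The sketch below is illustrative only; the window size and the grid names are assumptions.

```python
# Hedged sketch: local Pearson correlation between a DEM and a magnetic grid.
import numpy as np
from scipy.ndimage import uniform_filter

def local_correlation(topography, magnetics, window=31):
    """Both inputs are 2D arrays on the same grid; returns a correlation map."""
    mean = lambda a: uniform_filter(a, window)
    t_m, m_m = mean(topography), mean(magnetics)
    cov = mean(topography * magnetics) - t_m * m_m
    var_t = mean(topography ** 2) - t_m ** 2
    var_m = mean(magnetics ** 2) - m_m ** 2
    return cov / np.sqrt(np.maximum(var_t * var_m, 1e-12))
```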

  5. Validation of semi-automatic scoring of dicentric chromosomes after simulation of three different irradiation scenarios.

    PubMed

    Romm, H; Ainsbury, E; Barnard, S; Barrios, L; Barquinero, J F; Beinke, C; Deperas, M; Gregoire, E; Koivistoinen, A; Lindholm, C; Moquet, J; Oestreicher, U; Puig, R; Rothkamm, K; Sommer, S; Thierens, H; Vandersickel, V; Vral, A; Wojcik, A

    2014-06-01

    Large scale radiological emergencies require high throughput techniques of biological dosimetry for population triage in order to identify individuals indicated for medical treatment. The dicentric assay is the "gold standard" technique for the performance of biological dosimetry, but it is very time consuming and needs well trained scorers. To increase the throughput of blood samples, semi-automation of dicentric scoring was investigated in the framework of the MULTIBIODOSE EU FP7 project, and dose effect curves were established in six biodosimetry laboratories. To validate these dose effect curves, blood samples from 33 healthy donors (>10 donors/scenario) were irradiated in vitro with ⁶⁰Co gamma rays simulating three different exposure scenarios: acute whole body, partial body, and protracted exposure, with three different doses for each scenario. All the blood samples were irradiated at Ghent University, Belgium, and then shipped blind coded to the participating laboratories. The blood samples were set up by each lab using their own standard protocols, and metaphase slides were prepared to validate the calibration curves established by semi-automatic dicentric scoring. In order to achieve this, 300 metaphases per sample were captured, and the doses were estimated using the newly formed dose effect curves. After acute uniform exposure, all laboratories were able to distinguish between 0 Gy, 0.5 Gy, 2.0, and 4.0 Gy (p < 0.001), and, in most cases, the dose estimates were within a range of ± 0.5 Gy of the given dose. After protracted exposure, all laboratories were able to distinguish between 1.0 Gy, 2.0 Gy, and 4.0 Gy (p < 0.001), and here also a large number of the dose estimates were within ± 0.5 Gy of the irradiation dose. After simulated partial body exposure, all laboratories were able to distinguish between 2.0 Gy, 4.0 Gy, and 6.0 Gy (p < 0.001). Overdispersion of the dicentric distribution enabled the detection of the partial body samples; however
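
    Dicentric dose-effect calibration curves of the kind validated above are commonly fitted with a linear-quadratic model, Y = C + alpha*D + beta*D^2, which can then be inverted to estimate the absorbed dose from an observed dicentric yield. The sketch below uses placeholder calibration numbers, not the MULTIBIODOSE data, and is only meant to show the fitting and inversion steps.

```python
# Hedged sketch: linear-quadratic fit of a dicentric dose-effect curve and
# dose estimation by inverting the fitted curve. All numbers are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def linear_quadratic(dose, c, alpha, beta):
    return c + alpha * dose + beta * dose ** 2

doses = np.array([0.0, 0.5, 1.0, 2.0, 4.0])          # Gy (illustrative)
yields = np.array([0.001, 0.03, 0.09, 0.30, 1.00])   # dicentrics per cell (illustrative)
(c, alpha, beta), _ = curve_fit(linear_quadratic, doses, yields, p0=(0.001, 0.02, 0.06))

y_obs = 0.2   # observed yield in a test sample (illustrative)
dose_est = (-alpha + np.sqrt(alpha ** 2 + 4 * beta * (y_obs - c))) / (2 * beta)
```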

  6. Computer System for Unattended Control of Negative Ion Source

    SciTech Connect

    Zubarev, P. V.; Khilchenko, A. D.; Kvashnin, A. N.; Moiseev, D. V.; Puriga, E. A.; Sanin, A. L.; Savkin, V. Ya.

    2011-09-26

    The computer system for control of a cw surface-plasma source of negative ions is described. The system provides automatic handling of source parameters according to a specified scenario. It includes automatic source start-up and long-term operation, with switching and control of the power supply blocks, setting and reading of source parameters such as hydrogen feed, cesium seed and electrode temperatures, and checking of protection and interlock elements such as vacuum degradation, absence of cooling water, etc. A semi-automatic control mode is also available, in which the order of the steps and the magnitudes of the parameters included in the scenario are corrected in situ by the operator. The control system includes a main controller and a set of peripheral local controllers. Command execution is carried out by the main controller. Each peripheral controller is driven by a stand-alone program stored in its ROM. The control system is operated from a PC via Ethernet. The PC and controllers are connected by fiber optic lines, which provide high-voltage insulation and stable system operation despite the breakdowns and electromagnetic noise of the cross-field discharge. The PC program for data setting and information display was developed in LabVIEW.

  7. Comparison of hands-off time during CPR with manual and semi-automatic defibrillation in a manikin model.

    PubMed

    Pytte, Morten; Pedersen, Tor E; Ottem, Jan; Rokvam, Anne Siri; Sunde, Kjetil

    2007-04-01

    Rhythm analysis with current semi-automatic external defibrillators (AEDs) requires mandatory interruptions of chest compressions that may compromise the outcome after cardiopulmonary resuscitation (CPR). We hypothesised that interruptions would be shorter when the defibrillator was operated in manual mode by trained and certified ambulance personnel. Sixteen pairs of ambulance personnel operated the defibrillator (Lifepak® 12) in both semi-automatic (AED) and manual (MED) mode in a randomised, cross-over manikin CPR study, following the ERC 2000 Guidelines. Median time from last chest compression to shock delivery (with interquartile range) was 17s (13, 18) versus 11s (6, 15) (mean difference (95% CI) 6s (2, 10), p=0.004). Similarly, median time from shock delivery to resumed chest compressions was 25s (22, 26) versus 8s (7, 12) (median difference 13s, p=0.001) in the AED and MED groups, respectively. While sensitivity for identifying ventricular fibrillation (VF) in both modes and specificity in the AED mode were 100%, specificity was 89% in manual mode. Thus, some unwarranted shocks resulting in hands-off time (time without chest compressions) were given in manual mode. However, the mean hands-off ratio (time without chest compressions divided by total resuscitation time) was still lower in manual mode, 0.2 (0.1, 0.3) versus 0.3 (0.28, 0.32), mean difference 0.10 (0.05, 0.15), p=0.001. Paramedics performed CPR with less hands-off time before and after shocks on a manikin with manual compared to semi-automatic defibrillation following the 2000 Guidelines. However, 12% of the shocks given manually were inappropriate.

  8. Semi-automatic synthesis, antiproliferative activity and DNA-binding properties of new netropsin and bis-netropsin analogues.

    PubMed

    Szerszenowicz, Jakub; Drozdowska, Danuta

    2014-07-31

    A general route for the semi-automatic synthesis of some new potential minor groove binders was established. Six four-membered sub-libraries of new netropsin and bis-netropsin analogues have been synthesized using a Syncore Reactor. The structures of all the new substances prepared in this investigation were fully characterized by NMR ((1)H, (13)C), HPLC and LC-MS. The antiproliferative activity of the obtained compounds was tested on MCF-7 breast cancer cells. The ethidium displacement assay using pBR322 confirmed the DNA-binding properties of the new analogues of netropsin and bis-netropsin.

  9. Localization accuracy from automatic and semi-automatic rigid registration of locally-advanced lung cancer targets during image-guided radiation therapy

    SciTech Connect

    Robertson, Scott P.; Weiss, Elisabeth; Hugo, Geoffrey D.

    2012-01-15

    Purpose: To evaluate localization accuracy resulting from rigid registration of locally-advanced lung cancer targets using fully automatic and semi-automatic protocols for image-guided radiation therapy. Methods: Seventeen lung cancer patients, fourteen also presenting with involved lymph nodes, received computed tomography (CT) scans once per week throughout treatment under active breathing control. A physician contoured both lung and lymph node targets for all weekly scans. Various automatic and semi-automatic rigid registration techniques were then performed for both individual and simultaneous alignments of the primary gross tumor volume (GTV_P) and involved lymph nodes (GTV_LN) to simulate the localization process in image-guided radiation therapy. Techniques included "standard" (direct registration of weekly images to a planning CT), "seeded" (manual prealignment of targets to guide standard registration), "transitive-based" (alignment of pretreatment and planning CTs through one or more intermediate images), and "rereferenced" (designation of a new reference image for registration). Localization error (LE) was assessed as the residual centroid and border distances between targets from planning and weekly CTs after registration. Results: Initial bony alignment resulted in centroid LE of 7.3 ± 5.4 mm and 5.4 ± 3.4 mm for the GTV_P and GTV_LN, respectively. Compared to bony alignment, transitive-based and seeded registrations significantly reduced GTV_P centroid LE to 4.7 ± 3.7 mm (p = 0.011) and 4.3 ± 2.5 mm (p < 1 x 10^-3), respectively, but the smallest GTV_P LE of 2.4 ± 2.1 mm was provided by rereferenced registration (p < 1 x 10^-6). Standard registration significantly reduced GTV_LN centroid LE to 3.2 ± 2.5 mm (p < 1 x 10^-3) compared to bony alignment, with little additional gain offered by the other registration techniques. For simultaneous target alignment, centroid LE as low

  10. Computer Center: CIBE Systems.

    ERIC Educational Resources Information Center

    Crovello, Theodore J.

    1982-01-01

    Differentiates between computer systems and Computers in Biological Education (CIBE) systems (computer system intended for use in biological education). Describes several CIBE stand alone systems: single-user microcomputer; single-user microcomputer/video-disc; multiuser microcomputers; multiuser maxicomputer; and local and long distance computer…

  11. Computer Center: CIBE Systems.

    ERIC Educational Resources Information Center

    Crovello, Theodore J.

    1982-01-01

    Differentiates between computer systems and Computers in Biological Education (CIBE) systems (computer system intended for use in biological education). Describes several CIBE stand alone systems: single-user microcomputer; single-user microcomputer/video-disc; multiuser microcomputers; multiuser maxicomputer; and local and long distance computer…

  12. A semi-automatic 2D-to-3D video conversion with adaptive key-frame selection

    NASA Astrophysics Data System (ADS)

    Ju, Kuanyu; Xiong, Hongkai

    2014-11-01

    To compensate for the deficit of 3D content, 2D to 3D video conversion (2D-to-3D) has recently attracted more attention from both industrial and academic communities. Semi-automatic 2D-to-3D conversion, which estimates the depth of non-key-frames from key-frames, is more desirable owing to its advantage of balancing labor cost and 3D effects. The location of key-frames plays a role in the quality of depth propagation. This paper proposes a semi-automatic 2D-to-3D scheme with adaptive key-frame selection to keep temporal continuity reliable and reduce the depth propagation errors caused by occlusion. Potential key-frames are localized in terms of clustered color variation and motion intensity. The distance between key-frames is also taken into account to keep the accumulated propagation errors under control and guarantee minimal user interaction. Once their depth maps are aligned with user interaction, the non-key-frame depth maps are automatically propagated by shifted bilateral filtering. Considering that the depth of objects may change due to object motion or camera zoom in/out effects, a bi-directional depth propagation scheme is adopted, in which a non-key frame is interpolated from two adjacent key frames. The experimental results show that the proposed scheme performs better than existing 2D-to-3D schemes with a fixed key-frame interval.

  13. Semi-Automatic Contacting System ’GR-1’ in Marine Radios,

    DTIC Science & Technology

    1982-10-26

    ...duction of GR-2 without the prior installation of GR-1 - a significant improvement of the conditions of radio communications in the mobile marine service ...

  14. NLP techniques associated with the OpenGALEN ontology for semi-automatic textual extraction of medical knowledge: abstracting and mapping equivalent linguistic and logical constructs.

    PubMed

    do Amaral, M B; Roberts, A; Rector, A L

    2000-01-01

    This research project presents methodological and theoretical issues related to the inter-relationship between linguistic and conceptual semantics, analysing the results obtained by the application of an NLP parser to a set of radiology reports. Our objective is to define a technique for associating linguistic methods with domain-specific ontologies for semi-automatic extraction of intermediate representation (IR) information formats and medical ontological knowledge from clinical texts. We have applied the Edinburgh LTG natural language parser to 2810 clinical narratives describing radiology procedures. In a second step, we have used medical expertise and ontology formalism for identification of semantic structures and abstraction of IR schemas related to the processed texts. These IR schemas are an association of linguistic and conceptual knowledge, based on their semantic contents. This methodology aims to contribute to the elaboration of models relating linguistic and logical constructs based on empirical data analysis. Advances in this field might lead to the development of computational techniques for automatic enrichment of medical ontologies from real clinical environments, using descriptive knowledge implicit in large text corpora sources.

  15. Contour propagation in MRI-guided radiotherapy treatment of cervical cancer: the accuracy of rigid, non-rigid and semi-automatic registrations

    NASA Astrophysics Data System (ADS)

    van der Put, R. W.; Kerkhof, E. M.; Raaymakers, B. W.; Jürgenliemk-Schulz, I. M.; Lagendijk, J. J. W.

    2009-12-01

    External beam radiation treatment for patients with cervical cancer is hindered by the relatively large motion of the target volume. A hybrid MRI-accelerator system makes it possible to acquire online MR images during treatment in order to correct for motion and deformation. To fully benefit from such a system, online delineation of the target volumes is necessary. The aim of this study is to investigate the accuracy of rigid, non-rigid and semi-automatic registrations of MR images for interfractional contour propagation in patients with cervical cancer. Registration using mutual information was performed on both bony anatomy and soft tissue. A B-spline transform was used for the non-rigid method. Semi-automatic registration was implemented with a point set registration algorithm on a small set of manual landmarks. Online registration was simulated by application of each method to four weekly MRI scans for each of 33 cervical cancer patients. Evaluation was performed by distance analysis with respect to manual delineations. The results show that soft-tissue registration significantly (P < 0.001) improves the accuracy of contour propagation compared to registration based on bony anatomy. A combination of user-assisted and non-rigid registration provides the best results with a median error of 3.2 mm (1.4-9.9 mm) compared to 5.9 mm (1.7-19.7 mm) with bone registration (P < 0.001) and 3.4 mm (1.3-19.1 mm) with non-rigid registration (P = 0.01). In a clinical setting, the benefit may be further increased when outliers can be removed by visual inspection of the online images. We conclude that for external beam radiation treatment of cervical cancer, online MRI imaging will allow target localization based on soft tissue visualization, which provides a significantly higher accuracy than localization based on bony anatomy. The use of limited user input to guide the registration increases overall accuracy. Additional non-rigid registration further reduces the propagation

  16. Semi-automatic assessment of pediatric hydronephrosis severity in 3D ultrasound

    NASA Astrophysics Data System (ADS)

    Cerrolaza, Juan J.; Otero, Hansel; Yao, Peter; Biggs, Elijah; Mansoor, Awais; Ardon, Roberto; Jago, James; Peters, Craig A.; Linguraru, Marius George

    2016-03-01

    Hydronephrosis is the most common abnormal finding in pediatric urology. Thanks to its non-ionizing nature, ultrasound (US) imaging is the preferred diagnostic modality for the evaluation of the kidney and the urinary tract. However, due to the lack of correlation of US with renal function, further invasive and/or ionizing studies might be required (e.g., diuretic renograms). This paper presents a computer-aided diagnosis (CAD) tool for the accurate and objective assessment of pediatric hydronephrosis based on morphological analysis of the kidney from 3DUS scans. The integration of specific segmentation tools in the system allows the relevant renal structures to be delineated from 3DUS scans of the patients with minimal user interaction, and 90 anatomical features to be computed automatically. Using the washout half time (T1/2) as an indicator of renal obstruction, an optimal subset of predictive features is selected to differentiate, with maximum sensitivity, those severe cases where further attention is required (e.g., in the form of diuretic renograms) from the non-critical ones. The performance of this new 3DUS-based CAD system is studied for two clinically relevant T1/2 thresholds, 20 and 30 min. Using a dataset of 20 hydronephrotic cases, pilot experiments show how the system outperforms previous 2D implementations by successfully identifying all the critical cases (100% sensitivity) and detecting up to 100% and 67% of the non-critical ones for T1/2 thresholds of 20 and 30 min, respectively.

  17. Semi-automatic central-chest lymph-node definition from 3D MDCT images

    NASA Astrophysics Data System (ADS)

    Lu, Kongkuo; Higgins, William E.

    2010-03-01

    Central-chest lymph nodes play a vital role in lung-cancer staging. The three-dimensional (3D) definition of lymph nodes from multidetector computed-tomography (MDCT) images, however, remains an open problem. This is because of the limitations in the MDCT imaging of soft-tissue structures and the complicated phenomena that influence the appearance of a lymph node in an MDCT image. In the past, we have made significant efforts toward developing (1) live-wire-based segmentation methods for defining 2D and 3D chest structures and (2) a computer-based system for automatic definition and interactive visualization of the Mountain central-chest lymph-node stations. Based on these works, we propose new single-click and single-section live-wire methods for segmenting central-chest lymph nodes. The single-click live wire only requires the user to select an object pixel on one 2D MDCT section and is designed for typical lymph nodes. The single-section live wire requires the user to process one selected 2D section using standard 2D live wire, but it is more robust. We applied these methods to the segmentation of 20 lymph nodes from two human MDCT chest scans (10 per scan) drawn from our ground-truth database. The single-click live wire segmented 75% of the selected nodes successfully and reproducibly, while the success rate for the single-section live wire was 85%. We are able to segment the remaining nodes, using our previously derived (but more interaction intense) 2D live-wire method incorporated in our lymph-node analysis system. Both proposed methods are reliable and applicable to a wide range of pulmonary lymph nodes.

  18. Conversation analysis at work: detection of conflict in competitive discussions through semi-automatic turn-organization analysis.

    PubMed

    Pesarin, Anna; Cristani, Marco; Murino, Vittorio; Vinciarelli, Alessandro

    2012-10-01

    This study proposes a semi-automatic approach aimed at detecting conflict in conversations. The approach is based on statistical techniques capable of identifying turn-organization regularities associated with conflict. The only manual step of the process is the segmentation of the conversations into turns (time intervals during which only one person talks) and overlapping speech segments (time intervals during which several persons talk at the same time). The rest of the process takes place automatically and the results show that conflictual exchanges can be detected with Precision and Recall around 70% (the experiments have been performed over 6 h of political debates). The approach brings two main benefits: the first is the possibility of analyzing potentially large amounts of conversational data with a limited effort, the second is that the model parameters provide indications on what turn-regularities are most likely to account for the presence of conflict.

  19. Comparison Of Semi-Automatic And Automatic Slick Detection Algorithms For Jiyeh Power Station Oil Spill, Lebanon

    NASA Astrophysics Data System (ADS)

    Osmanoglu, B.; Ozkan, C.; Sunar, F.

    2013-10-01

    After air strikes on July 14 and 15, 2006, the Jiyeh Power Station started leaking oil into the eastern Mediterranean Sea. The power station is located about 30 km south of Beirut, and the slick covered about 170 km of coastline, threatening the neighboring countries Turkey and Cyprus. Due to the ongoing conflict between Israel and Lebanon, cleaning efforts could not start immediately, resulting in 12 000 to 15 000 tons of fuel oil leaking into the sea. In this paper we compare results from automatic and semi-automatic slick detection algorithms. The automatic detection method combines the probabilities calculated for each pixel from each image to obtain a joint probability, minimizing the adverse effects of the atmosphere on oil spill detection. The method can readily utilize X-, C- and L-band data where available. Furthermore, wind and wave speed observations can be used for a more accurate analysis. For this study, we utilize Envisat ASAR ScanSAR data. A probability map is generated based on the radar backscatter, the effect of wind and a dampening value. The semi-automatic algorithm is based on supervised classification. As a classifier, an Artificial Neural Network Multilayer Perceptron (ANN MLP) is used since it is more flexible and efficient than a conventional maximum likelihood classifier for multisource and multi-temporal data. The learning algorithm for the ANN MLP is Levenberg-Marquardt (LM). Training and test data for supervised classification are composed from the textural information created from the SAR images. This approach is semi-automatic because tuning the classifier parameters and composing the training data require human interaction. We point out the similarities and differences between the two methods and their results, as well as underline their advantages and disadvantages. Due to the lack of ground truth data, we compare the obtained results to each other, as well as to other published oil slick area assessments.
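
    As a rough illustration of the supervised-classification branch described above, the sketch below trains a multilayer perceptron on synthetic per-pixel texture features with scikit-learn. Note that scikit-learn's MLPClassifier does not offer the Levenberg-Marquardt solver used in the paper, and the feature matrix and labels here are synthetic placeholders, not SAR-derived data.

```python
# Hedged sketch: MLP classification of slick vs. sea pixels on placeholder features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((2000, 8))                                            # fake texture features
y = (X[:, 0] + 0.1 * rng.standard_normal(2000) > 0.5).astype(int)   # fake labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```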

  20. Quantitative evaluation of six graph based semi-automatic liver tumor segmentation techniques using multiple sets of reference segmentation

    NASA Astrophysics Data System (ADS)

    Su, Zihua; Deng, Xiang; Chefd'hotel, Christophe; Grady, Leo; Fei, Jun; Zheng, Dong; Chen, Ning; Xu, Xiaodong

    2011-03-01

    Graph based semi-automatic tumor segmentation techniques have demonstrated great potential in efficiently measuring tumor size from CT images. Comprehensive and quantitative validation is essential to ensure the efficacy of graph based tumor segmentation techniques in clinical applications. In this paper, we present a quantitative validation study of six graph based 3D semi-automatic tumor segmentation techniques using multiple sets of expert segmentation. The six segmentation techniques are Random Walk (RW), Watershed based Random Walk (WRW), LazySnapping (LS), GraphCut (GHC), GrabCut (GBC), and GrowCut (GWC) algorithms. The validation was conducted using clinical CT data of 29 liver tumors and four sets of expert segmentation. The performance of the six algorithms was evaluated using accuracy and reproducibility. The accuracy was quantified using the Normalized Probabilistic Rand Index (NPRI), which takes into account the variation of multiple expert segmentations. The reproducibility was evaluated by the change of the NPRI across 10 different sets of user initializations. Our results from the accuracy test demonstrated that RW (0.63) showed the highest NPRI value, compared to WRW (0.61), GWC (0.60), GHC (0.58), LS (0.57), and GBC (0.27). The results from the reproducibility test indicated that GBC is more sensitive to user initialization than the other five algorithms. Compared to previous tumor segmentation validation studies using one set of reference segmentation, our evaluation methods use multiple sets of expert segmentation to address the inter- or intra-rater variability issue in ground truth annotation, and provide a quantitative assessment for comparing different segmentation algorithms.
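
    A simpler relative of the NPRI used above is the adjusted Rand index between a candidate segmentation and each expert reference, averaged over the references. The sketch below illustrates that idea with scikit-learn; it is not an NPRI implementation, and the label-map arguments are assumptions.

```python
# Hedged sketch: average adjusted Rand index against multiple expert references.
import numpy as np
from sklearn.metrics import adjusted_rand_score

def segmentation_agreement(seg_a, seg_b):
    """Adjusted Rand index between two labelled segmentations of equal shape."""
    return adjusted_rand_score(np.ravel(seg_a), np.ravel(seg_b))

def mean_agreement(candidate, references):
    """Average agreement of one candidate segmentation against several experts."""
    return float(np.mean([segmentation_agreement(candidate, ref) for ref in references]))
```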

  1. Pauses in chest compression and inappropriate shocks: a comparison of manual and semi-automatic defibrillation attempts.

    PubMed

    Kramer-Johansen, Jo; Edelson, Dana P; Abella, Benjamin S; Becker, Lance B; Wik, Lars; Steen, Petter Andreas

    2007-05-01

    Semi-automatic defibrillation requires pauses in chest compressions during ECG analysis and charging, and prolonged pre-shock compression pauses reduce the chance of a return of spontaneous circulation (ROSC). We hypothesised that pauses are shorter for manual defibrillation by trained rescuers, but with an increased number of inappropriate shocks given for a non-VF/VT rhythm. From a prospective study of CPR quality during in- and out-of-hospital cardiac arrest, the durations of pre-shock, inter-shock, and post-shock pauses were compared with the Mann-Whitney U-test between manual and AED mode with the same defibrillator, and proportions of inappropriate shocks were compared with Chi-squared tests. A total of 635 manual and 530 semi-automatic shocks were studied. The number of shocks per episode was similar for the two groups. All pauses measured in seconds (s) were shorter for manual use (P<0.0001); median (25, 75 percentiles); 15 (11, 21) versus 22 (18, 28) pre-shock, 13 (9, 20) versus 23 (22, 26) inter-shock, and 9 (6, 18) versus 20 (11, 31) post-shock, but 163 (26%) manual shocks were inappropriate compared with 30 (6%) AED shocks, odds ratio (OR) 5.7 (95% CI; 3.8-8.7). A total of 150 (78%) of the inappropriate shocks were delivered for organised rhythms. The proportion of inappropriate manual shocks was higher for resident physicians in-hospital than paramedics out-of-hospital; 77/228 (34%) versus 86/407 (21%), OR 1.9 (1.3-2.7). Manual defibrillation resulted in shorter pauses in chest compressions, but a higher frequency of inappropriate shocks. A higher formal level of education did not prevent inappropriate shocks. Trial registration: http://www.clinicaltrials.gov/ (NCT00138996 and NCT00228293).

  2. Volumetric glioma quantification: comparison of manual and semi-automatic tumor segmentation for the quantification of tumor growth.

    PubMed

    Odland, Audun; Server, Andres; Saxhaug, Cathrine; Breivik, Birger; Groote, Rasmus; Vardal, Jonas; Larsson, Christopher; Bjørnerud, Atle

    2015-11-01

    Volumetric magnetic resonance imaging (MRI) is now widely available and routinely used in the evaluation of high-grade gliomas (HGGs). Ideally, volumetric measurements should be included in this evaluation. However, manual tumor segmentation is time-consuming and suffers from inter-observer variability. Thus, tools for semi-automatic tumor segmentation are needed. To present a semi-automatic method (SAM) for segmentation of HGGs and to compare this method with manual segmentation performed by experts. The inter-observer variability among experts manually segmenting HGGs using volumetric MRIs was also examined. Twenty patients with HGGs were included. All patients underwent surgical resection prior to inclusion. Each patient underwent several MRI examinations during and after adjuvant chemoradiation therapy. Three experts performed manual segmentation. The results of tumor segmentation by the experts and by the SAM were compared using Dice coefficients and kappa statistics. A relatively close agreement was seen among two of the experts and the SAM, while the third expert disagreed considerably with the other experts and the SAM. An important reason for this disagreement was a different interpretation of contrast enhancement as either surgically-induced or glioma-induced. The time required for manual tumor segmentation was an average of 16 min per scan. Editing of the tumor masks produced by the SAM required an average of less than 2 min per sample. Manual segmentation of HGG is very time-consuming and using the SAM could increase the efficiency of this process. However, the accuracy of the SAM ultimately depends on the expert doing the editing. Our study confirmed a considerable inter-observer variability among experts defining tumor volume from volumetric MRIs. © The Foundation Acta Radiologica 2014.

  3. Semi-Automatic Removal of Foreground Stars from Images of Galaxies

    NASA Astrophysics Data System (ADS)

    Frei, Zsolt

    1996-07-01

    A new procedure, designed to remove foreground stars from galaxy profiles, is presented here. Although several programs exist for stellar and faint object photometry, none of them treat star removal from the images very carefully. I present my attempt to develop such a system, and briefly compare the performance of my software to one of the well-known stellar photometry packages, DAOPhot (Stetson 1987). The major steps in my procedure are: (1) automatic construction of an empirical 2D point spread function from well separated stars that are situated off the galaxy; (2) automatic identification of those peaks that are likely to be foreground stars, scaling the PSF and removing these stars, and patching residuals (in the automatically determined smallest possible area where residuals are truly significant); and (3) cosmetic fixing of remaining degradations in the image. The algorithm and software presented here are significantly better for automatic removal of foreground stars from images of galaxies than DAOPhot or similar packages, since: (a) the most suitable stars are selected automatically from the image for the PSF fit; (b) after star removal, an intelligent and automatic procedure removes any possible residuals; (c) an unlimited number of images can be cleaned in one run without any user interaction whatsoever. (SECTION: Computing and Data Analysis)

  4. ALMA correlator computer systems

    NASA Astrophysics Data System (ADS)

    Pisano, Jim; Amestica, Rodrigo; Perez, Jesus

    2004-09-01

    We present a design for the computer systems which control, configure, and monitor the Atacama Large Millimeter Array (ALMA) correlator and process its output. Two distinct computer systems implement this functionality: a rack- mounted PC controls and monitors the correlator, and a cluster of 17 PCs process the correlator output into raw spectral results. The correlator computer systems interface to other ALMA computers via gigabit Ethernet networks utilizing CORBA and raw socket connections. ALMA Common Software provides the software infrastructure for this distributed computer environment. The control computer interfaces to the correlator via multiple CAN busses and the data processing computer cluster interfaces to the correlator via sixteen dedicated high speed data ports. An independent array-wide hardware timing bus connects to the computer systems and the correlator hardware ensuring synchronous behavior and imposing hard deadlines on the control and data processor computers. An aggregate correlator output of 1 gigabyte per second with 16 millisecond periods and computational data rates of approximately 1 billion floating point operations per second define other hard deadlines for the data processing computer cluster.

  5. Computer controlled antenna system

    NASA Technical Reports Server (NTRS)

    Raumann, N. A.

    1972-01-01

    The application of small computers using digital techniques for operating the servo and control system of large antennas is discussed. The advantages of the system are described. The techniques were evaluated with a forty foot antenna and the Sigma V computer. Programs have been completed which drive the antenna directly without the need for a servo amplifier, antenna position programmer or a scan generator.

  6. Roads Centre-Axis Extraction in Airborne SAR Images: AN Approach Based on Active Contour Model with the Use of Semi-Automatic Seeding

    NASA Astrophysics Data System (ADS)

    Lotte, R. G.; Sant'Anna, S. J. S.; Almeida, C. M.

    2013-05-01

    Research works dealing with computational methods for road extraction have increased considerably in the last two decades. This procedure is usually performed on optical or microwave sensor (radar) imagery. Radar images offer advantages over optical ones, for they allow the acquisition of scenes regardless of atmospheric and illumination conditions, besides the possibility of surveying regions where the terrain is hidden by the vegetation canopy, among others. The cartographic mapping based on these images is often accomplished manually, requiring considerable time and effort from the human interpreter. Maps for detecting new roads or updating the existing road network are among the most important cartographic products to date. There are currently many studies involving the extraction of roads by means of automatic or semi-automatic approaches. Each of them presents different solutions for different problems, making this task a scientific issue that is still open. One of the preliminary steps for road extraction can be the seeding of points belonging to roads, which can be done using different methods with diverse levels of automation. The identified seed points are interpolated to form the initial road network, and are then used as input for the extraction method proper. The present work introduces an innovative hybrid method for the extraction of road centre-axes in a synthetic aperture radar (SAR) airborne image. Initially, candidate points are fully automatically seeded using Self-Organizing Maps (SOM), followed by a pruning process based on specific metrics. The centre-axes are then detected by an open-curve active contour model (snakes). The obtained results were evaluated as to their quality with respect to completeness, correctness and redundancy.
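
    The final snakes step described above can be approximated with scikit-image's open active contour, initialised from the interpolated seed points. The parameter values below are assumptions, not the paper's settings, and `seed_points` stands in for whatever the SOM-based seeding produces.

```python
# Hedged sketch: refining an initial road centre-line with an open active contour.
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

def refine_road_axis(image, seed_points):
    """image: 2D array; seed_points: (N, 2) array of (row, col) initial points."""
    smoothed = gaussian(image, sigma=2, preserve_range=True)
    return active_contour(smoothed, seed_points,
                          boundary_condition="fixed",   # open curve with fixed ends
                          alpha=0.01, beta=1.0, gamma=0.01)
```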

  7. Computer Vision Assisted Virtual Reality Calibration

    NASA Technical Reports Server (NTRS)

    Kim, W.

    1999-01-01

    A computer vision assisted semi-automatic virtual reality (VR) calibration technology has been developed that can accurately match a virtual environment of graphically simulated three-dimensional (3-D) models to the video images of the real task environment.

  8. Computer Vision Assisted Virtual Reality Calibration

    NASA Technical Reports Server (NTRS)

    Kim, W.

    1999-01-01

    A computer vision assisted semi-automatic virtual reality (VR) calibration technology has been developed that can accurately match a virtual environment of graphically simulated three-dimensional (3-D) models to the video images of the real task environment.

  9. 3D dento-maxillary osteolytic lesion and active contour segmentation pilot study in CBCT: semi-automatic vs manual methods.

    PubMed

    Vallaeys, K; Kacem, A; Legoux, H; Le Tenier, M; Hamitouche, C; Arbab-Chirani, R

    2015-01-01

    This study was designed to evaluate the reliability of a semi-automatic segmentation tool for dento-maxillary osteolytic image analysis compared with manually defined segmentation in CBCT scans. Five CBCT scans were selected from patients for whom periapical radiolucency images were available. All images were obtained using a ProMax® 3D Mid Planmeca (Planmeca Oy, Helsinki, Finland) and were acquired with 200-μm voxel size. Two clinicians performed the manual segmentations. Four operators applied three different semi-automatic procedures. The volumes of the lesions were measured. An analysis of dispersion was made for each procedure and each case. An ANOVA was used to evaluate the operator effect. Non-paired t-tests were used to compare semi-automatic procedures with the manual procedure. Statistical significance was set at α = 0.01. The coefficients of variation for the manual procedure were 2.5-3.5% on average. There was no statistical difference between the two operators. The results of manual procedures can be used as a reference. For the semi-automatic procedures, the dispersion around the mean can be elevated depending on the operator and case. ANOVA revealed significant differences between the operators for the three techniques according to cases. Region-based segmentation was only comparable with the manual procedure for delineating a circumscribed osteolytic dento-maxillary lesion. The semi-automatic segmentations tested are interesting but are limited to complex surface structures. A methodology that combines the strengths of both methods could be of interest and should be tested. The improvement in the image analysis that is possible through the segmentation procedure and CBCT image quality could be of value.
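
    The dispersion and operator-effect analysis described above can be reproduced in outline with standard statistics routines; the sketch below uses illustrative lesion volumes (not the study data) to compute a per-operator coefficient of variation and a one-way ANOVA for an operator effect.

```python
import numpy as np
from scipy import stats

# Hypothetical lesion-volume measurements (mm^3) for one case:
# four operators, three repetitions each, for a single semi-automatic procedure.
volumes = {
    "op1": [102.0, 104.5, 101.2],
    "op2": [ 98.7, 100.1,  99.4],
    "op3": [110.3, 108.9, 111.0],
    "op4": [103.5, 102.8, 104.1],
}

# Coefficient of variation (dispersion around the mean) per operator.
for op, v in volumes.items():
    v = np.asarray(v)
    cv = v.std(ddof=1) / v.mean() * 100.0
    print(f"{op}: mean = {v.mean():.1f} mm^3, CV = {cv:.2f} %")

# One-way ANOVA for an operator effect (alpha = 0.01 as in the study).
f_stat, p_value = stats.f_oneway(*volumes.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}",
      "-> significant operator effect" if p_value < 0.01 else "-> no significant effect")
```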

  10. Semi-automatic registration of 3D orthodontics models from photographs

    NASA Astrophysics Data System (ADS)

    Destrez, Raphaël; Treuillet, Sylvie; Lucas, Yves; Albouy-Kissi, Benjamin

    2013-03-01

    In orthodontics, a common practice used to diagnose and plan treatment is the dental cast. After digitization by a CT scanner or a laser scanner, the resulting 3D surface models can feed orthodontic numerical tools for computer-aided diagnosis and treatment planning. One of the critical pre-processing steps is the 3D registration of the dental arches to obtain the occlusion of these numerical models. For this task, we propose a vision-based method to automatically compute the registration from photographs of the patient's mouth. From a set of matched singular points between two photos and the dental 3D models, the rigid transformation that brings the mandible into contact with the maxilla can be computed by minimizing the reprojection errors. In a previous study, we established the feasibility of this visual registration approach with a manual selection of singular points. This paper addresses the issue of automatic point detection. Based on a priori knowledge, histogram thresholding and edge detection are used to extract specific points in the 2D images. Concurrently, curvature information is used to detect the corresponding 3D points. To improve the quality of the final registration, we also introduce a combined optimization of the projection matrix with the 2D/3D point positions. These new developments are evaluated on real data by considering the reprojection errors and the deviation angles after registration with respect to the manual reference occlusion realized by a specialist.
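
    The registration step amounts to estimating a rigid transform of the mandible that minimizes the 2D reprojection error of matched 3D points. The sketch below is a simplified illustration of that structure, assuming a single view, a simple pinhole camera and synthetic data; the variable names and intrinsics are not taken from the paper.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(points_3d, K):
    """Pinhole projection of Nx3 camera-frame points with intrinsics K."""
    p = points_3d @ K.T
    return p[:, :2] / p[:, 2:3]

def residuals(params, pts_mandible, pts_2d, K):
    """Reprojection residuals for a rigid transform (rotation vector + translation)."""
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:]
    transformed = pts_mandible @ R.T + t
    return (project(transformed, K) - pts_2d).ravel()

# Illustrative data: intrinsics, 3D points on the mandible model, and their
# matched 2D detections in one photograph (several views are used in practice).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
rng = np.random.default_rng(0)
pts_mandible = rng.uniform(-20, 20, size=(12, 3)) + np.array([0.0, 0.0, 200.0])
true_R = Rotation.from_euler("xyz", [2, -3, 1], degrees=True)
true_t = np.array([1.5, -0.8, 4.0])
pts_2d = project(pts_mandible @ true_R.as_matrix().T + true_t, K)

# Solve for the rigid transform that minimizes the reprojection error.
sol = least_squares(residuals, np.zeros(6), args=(pts_mandible, pts_2d, K))
print("estimated rotation (deg):",
      Rotation.from_rotvec(sol.x[:3]).as_euler("xyz", degrees=True))
print("estimated translation   :", sol.x[3:])
```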

  11. Semi-automatic mapping of fault rocks on a Digital Outcrop Model, Gole Larghe Fault Zone (Southern Alps, Italy)

    NASA Astrophysics Data System (ADS)

    Vho, Alice; Bistacchi, Andrea

    2015-04-01

    A quantitative analysis of fault-rock distribution is of paramount importance for studies of fault zone architecture, fault and earthquake mechanics, and fluid circulation along faults at depth. Here we present a semi-automatic workflow for fault-rock mapping on a Digital Outcrop Model (DOM). This workflow has been developed on a real case study: the strike-slip Gole Larghe Fault Zone (GLFZ). It consists of a fault zone exhumed from ca. 10 km depth, hosted in granitoid rocks of the Adamello batholith (Italian Southern Alps). Individual seismogenic slip surfaces generally show green cataclasites (cemented by the precipitation of epidote and K-feldspar from hydrothermal fluids) and more or less well preserved pseudotachylytes (black when well preserved, greenish to white when altered). First, a digital model of the outcrop is reconstructed with photogrammetric techniques, using a large number of high-resolution digital photographs processed with the VisualSFM software. By using high-resolution photographs the DOM can reach a much higher resolution than with LIDAR surveys, up to 0.2 mm/pixel. Then, image processing is performed to map the fault-rock distribution with the ImageJ-Fiji package. Green cataclasites and epidote/K-feldspar veins can be quite easily separated from the host rock (tonalite) using spectral analysis. In particular, band ratios and principal component analysis have been tested successfully. The mapping of black pseudotachylyte veins is trickier, because the differences between the pseudotachylyte and biotite spectral signatures are not appreciable. For this reason we have tested different morphological processing tools aimed at identifying (and subtracting) the tiny biotite grains. We propose a solution based on binary images involving a combination of size and circularity thresholds. Comparing the results with manually segmented images, we noticed that major problems occur only when pseudotachylyte veins are very thin and discontinuous. After
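
    A minimal sketch of two of the steps named above, assuming a generic RGB image tile of the DOM; the band choice, thresholds and parameter values are illustrative, not those used in the study. A green/red band ratio isolates cataclasite-like pixels, and a size/circularity criterion on connected components removes small rounded grains.

```python
import numpy as np
from skimage import measure

def classify_fault_rocks(rgb, ratio_thresh=1.2, min_size=50, max_circularity=0.6):
    """Band-ratio segmentation followed by a size/circularity filter.

    rgb : float array (H, W, 3) with values in [0, 1].
    Returns a boolean mask of candidate fault-rock pixels.
    """
    green, red = rgb[..., 1], rgb[..., 0]
    ratio = green / (red + 1e-6)                 # simple band ratio
    mask = ratio > ratio_thresh                  # spectral threshold

    labels = measure.label(mask)
    keep = np.zeros_like(mask, dtype=bool)
    for region in measure.regionprops(labels):
        if region.area < min_size:
            continue                              # drop tiny grains
        circularity = 4.0 * np.pi * region.area / (region.perimeter ** 2 + 1e-6)
        if circularity <= max_circularity:        # keep elongated (vein-like) shapes
            keep[labels == region.label] = True
    return keep

# Usage with a synthetic image (replace with a DOM ortho-image tile in practice).
img = np.random.rand(128, 128, 3)
mask = classify_fault_rocks(img)
print("candidate fault-rock pixels:", int(mask.sum()))
```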

  12. A Semi-Automatic Alignment Method for Math Educational Standards Using the MP (Materialization Pattern) Model

    ERIC Educational Resources Information Center

    Choi, Namyoun

    2010-01-01

    Educational standards alignment, which matches similar or equivalent concepts of educational standards, is a necessary task for educational resource discovery and retrieval. Automated or semi-automated alignment systems for educational standards have been recently available. However, existing systems frequently result in inconsistency in…

  13. A Semi-Automatic Alignment Method for Math Educational Standards Using the MP (Materialization Pattern) Model

    ERIC Educational Resources Information Center

    Choi, Namyoun

    2010-01-01

    Educational standards alignment, which matches similar or equivalent concepts of educational standards, is a necessary task for educational resource discovery and retrieval. Automated or semi-automated alignment systems for educational standards have been recently available. However, existing systems frequently result in inconsistency in…

  14. Semi-automatic Synthesis of Security Policies by Invariant-Guided Abduction

    NASA Astrophysics Data System (ADS)

    Hurlin, Clément; Kirchner, Hélène

    We present a specification approach of secured systems as transition systems and security policies as constraints that guard the transitions. In this context, security properties are expressed as invariants. Then we propose an abduction algorithm to generate possible security policies for a given transition-based system. Because abduction is guided by invariants, the generated security policies enforce security properties specified by these invariants. In this framework we are able to tune abduction in two ways in order to: (i) filter out bad security policies and (ii) generate additional possible security policies. Invariant-guided abduction helps designing policies and thus allows using formal methods much earlier in the process of building secured systems. This approach is illustrated on role-based access control systems.
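
    To make the setting concrete, the following toy example (not the authors' algorithm or notation) models a transition system whose transitions are guarded by a security-policy predicate and checks that a simple role-based invariant holds in every reachable state; abduction would, conversely, search for the weakest guards that preserve such an invariant.

```python
# Toy role-based access control example: states are (user_role, document_open).
ROLES = ["guest", "staff", "admin"]

def policy(role, action):
    """Security policy (transition guard): only staff and admin may open the document."""
    return not (action == "open" and role == "guest")

def step(state, action):
    role, is_open = state
    if action == "open":
        return (role, True)
    if action == "close":
        return (role, False)
    return state

def invariant(state):
    """Security property: a guest never has the document open."""
    role, is_open = state
    return not (role == "guest" and is_open)

# Explore all states reachable from "document closed" under the guarded transitions
# and check that the invariant holds everywhere.
reachable = {(role, False) for role in ROLES}
frontier = set(reachable)
while frontier:
    new_states = set()
    for state in frontier:
        for action in ("open", "close"):
            if policy(state[0], action):        # transition fires only if the policy allows it
                nxt = step(state, action)
                if nxt not in reachable:
                    new_states.add(nxt)
    reachable |= new_states
    frontier = new_states

print("invariant holds on all reachable states:", all(invariant(s) for s in reachable))
```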

  15. Comparison of Semi Automatic DTM from Image Matching with DTM from LIDAR

    NASA Astrophysics Data System (ADS)

    Rahmayudi, Aji; Rizaldy, Aldino

    2016-06-01

    Nowadays, LIDAR-derived DTMs are used extensively for generating contour lines in topographic maps. This method is far superior to traditional stereomodel compilation from aerial images, which requires a large amount of human operator effort and is very time consuming. With the improvement of computer vision and digital image processing, it is now possible to generate a point-cloud DSM from aerial images using image matching algorithms. It is also possible to classify the point-cloud DSM into a DTM using the same technique as LIDAR classification, producing a DTM comparable to the LIDAR DTM. This research studies the accuracy difference between the two DTMs and the resulting DTM quality under several different conditions, including urban and forest areas and flat and mountainous terrain, as well as the processing time for mass production of topographic maps. The statistics show that both methods are able to produce topographic maps at 1:5.000 scale.
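
    A sketch of the accuracy comparison, assuming the two DTMs have been resampled to a common grid (the arrays here are synthetic): elevation differences are summarized by mean error, standard deviation and RMSE, the figures typically reported when judging map-scale suitability.

```python
import numpy as np

def dtm_difference_stats(dtm_a, dtm_b, nodata=-9999.0):
    """Elevation-difference statistics between two co-registered DTM grids."""
    valid = (dtm_a != nodata) & (dtm_b != nodata)
    diff = (dtm_a - dtm_b)[valid]
    return {
        "mean_error": float(diff.mean()),
        "std":        float(diff.std(ddof=1)),
        "rmse":       float(np.sqrt(np.mean(diff ** 2))),
        "n_cells":    int(valid.sum()),
    }

# Illustrative use with synthetic grids standing in for the image-matching DTM and LIDAR DTM.
rng = np.random.default_rng(1)
lidar_dtm = rng.normal(100.0, 5.0, size=(500, 500))
image_dtm = lidar_dtm + rng.normal(0.1, 0.3, size=lidar_dtm.shape)  # small bias + noise
print(dtm_difference_stats(image_dtm, lidar_dtm))
```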

  16. Semi-automatic extraction of lineaments from remote sensing data and the derivation of groundwater flow-paths

    NASA Astrophysics Data System (ADS)

    Mallast, U.; Gloaguen, R.; Geyer, S.; Rödiger, T.; Siebert, C.

    2011-01-01

    We describe a semi-automatic method to objectively and reproducibly extract lineaments based on the global one arc-second ASTER GDEM. The combined method of linear filtering and object-based classification ensures a high degree of accuracy, resulting in a lineament map. Subsequently, lineaments are differentiated into geological and morphological lineaments to assign a probable origin and hence a hydro-geological significance. In the western catchment area of the Dead Sea (Israel) the orientation and location of the differentiated lineaments are compared to characteristics of known structural features. The authors demonstrate that a strong correlation between lineaments and structural features exists, influenced either by the Syrian Arc paleostress field, by the Dead Sea stress field, or by both. Subsequently, we analyse the distances between lineaments and wells, thereby creating an assessment criterion concerning the hydraulic significance of detected lineaments. From this analysis the authors suggest that the statistical analysis of lineaments allows a delineation of flow-paths and thus provides significant information for groundwater analysis. We validate the flow-path delineation by comparison with existing groundwater model results based on well data.

  17. Derivation of groundwater flow-paths based on semi-automatic extraction of lineaments from remote sensing data

    NASA Astrophysics Data System (ADS)

    Mallast, U.; Gloaguen, R.; Geyer, S.; Rödiger, T.; Siebert, C.

    2011-08-01

    In this paper we present a semi-automatic method to infer groundwater flow-paths based on the extraction of lineaments from digital elevation models. This method is especially adequate in remote and inaccessible areas where in-situ data are scarce. The combined method of linear filtering and object-based classification provides a lineament map with a high degree of accuracy. Subsequently, lineaments are differentiated into geological and morphological lineaments using auxiliary information and finally evaluated in terms of hydro-geological significance. Using the example of the western catchment of the Dead Sea (Israel/Palestine), the orientation and location of the differentiated lineaments are compared to characteristics of known structural features. We demonstrate that a strong correlation between lineaments and structural features exists. Using Euclidean distances between lineaments and wells provides an assessment criterion to evaluate the hydraulic significance of detected lineaments. Based on this analysis, we suggest that the statistical analysis of lineaments allows a delineation of flow-paths and thus significant information on groundwater movements. To validate the flow-paths we compare them to existing results of groundwater models that are based on well data.
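
    The hydraulic-significance criterion reduces to measuring Euclidean distances between well locations and lineament polylines. A minimal sketch under that assumption is shown below; the coordinates are made up and the function names are illustrative.

```python
import numpy as np

def point_to_segment(p, a, b):
    """Euclidean distance from point p to the segment a-b (projected 2D coordinates)."""
    p, a, b = map(np.asarray, (p, a, b))
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return float(np.linalg.norm(p - (a + t * ab)))

def well_to_lineament_distance(well, lineament):
    """Minimum distance from a well to a lineament given as a list of vertices."""
    return min(point_to_segment(well, lineament[i], lineament[i + 1])
               for i in range(len(lineament) - 1))

# Illustrative lineament (polyline) and wells in projected metres.
lineament = [(0.0, 0.0), (500.0, 200.0), (1200.0, 350.0)]
wells = {"well_A": (300.0, 400.0), "well_B": (1000.0, 250.0)}

for name, xy in wells.items():
    print(f"{name}: {well_to_lineament_distance(xy, lineament):.1f} m to lineament")
```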

  18. Towards Semi-Automatic Artifact Rejection for the Improvement of Alzheimer’s Disease Screening from EEG Signals

    PubMed Central

    Solé-Casals, Jordi; Vialatte, François-Benoît

    2015-01-01

    A large number of studies have analyzed measurable changes that Alzheimer’s disease causes on electroencephalography (EEG). Despite being easily reproducible, those markers have limited sensitivity, which reduces the interest of EEG as a screening tool for this pathology. This is in large part due to the poor signal-to-noise ratio of EEG signals: EEG recordings are indeed usually corrupted by spurious extra-cerebral artifacts. These artifacts are responsible for a considerable degradation of the signal quality. We investigate the possibility of automatically cleaning a database of EEG recordings taken from patients suffering from Alzheimer’s disease and healthy age-matched controls. We present here an investigation of commonly used markers of EEG artifacts: kurtosis, sample entropy, zero-crossing rate and fractal dimension. We investigate the reliability of the markers, by comparison with human labeling of sources. Our results show significant differences with the sample entropy marker. We present a strategy for semi-automatic cleaning based on blind source separation, which may improve the specificity of Alzheimer screening using EEG signals. PMID:26213933
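
    Two of the artifact markers mentioned, kurtosis and zero-crossing rate, are straightforward to compute per source or channel. The sketch below uses synthetic signals (not the study data) to show how these markers separate an artifact-like trace from ongoing EEG-like activity before comparison with human labels.

```python
import numpy as np
from scipy.stats import kurtosis

def zero_crossing_rate(x):
    """Fraction of consecutive samples whose signs differ."""
    x = np.asarray(x)
    return float(np.mean(np.signbit(x[:-1]) != np.signbit(x[1:])))

def artifact_markers(x):
    """Kurtosis and zero-crossing rate for one EEG source/channel."""
    return {"kurtosis": float(kurtosis(x)), "zcr": zero_crossing_rate(x)}

# Synthetic examples: ongoing EEG-like noise vs. a spiky, artifact-like source.
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / 256)                       # 10 s at 256 Hz
clean = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
artifact = clean.copy()
artifact[::400] += 20.0                             # sparse high-amplitude spikes

print("clean   :", artifact_markers(clean))
print("artifact:", artifact_markers(artifact))
```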

  19. Semi-automatic ground truth generation using unsupervised clustering and limited manual labeling: Application to handwritten character recognition

    PubMed Central

    Vajda, Szilárd; Rangoni, Yves; Cecotti, Hubert

    2015-01-01

    For training supervised classifiers to recognize different patterns, large data collections with accurate labels are necessary. In this paper, we propose a generic, semi-automatic labeling technique for large handwritten character collections. In order to speed up the creation of a large scale ground truth, the method combines unsupervised clustering and minimal expert knowledge. To exploit the potential discriminant complementarities across features, each character is projected into five different feature spaces. After clustering the images in each feature space, the human expert labels the cluster centers. Each data point inherits the label of its cluster’s center. A majority (or unanimity) vote decides the label of each character image. The amount of human involvement (labeling) is strictly controlled by the number of clusters produced by the chosen clustering approach. To test the efficiency of the proposed approach, we have compared and evaluated three state-of-the-art clustering methods (k-means, self-organizing maps, and growing neural gas) on the MNIST digit data set and a Lampung Indonesian character data set, respectively. Considering a k-nn classifier, we show that labeling manually only 1.3% (MNIST) and 3.2% (Lampung) of the training data provides the same range of performance as a completely labeled data set would. PMID:25870463
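
    The labeling scheme can be sketched with scikit-learn on a single feature space; the study uses five feature spaces and a vote across them, and the digits data here merely stand in for the MNIST and Lampung collections.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits

# Stand-in data set; the paper uses MNIST and a Lampung character collection.
X, y_true = load_digits(return_X_y=True)

# Unsupervised clustering; only the cluster centres will be labelled by the "expert".
n_clusters = 50
km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)

# Simulate the expert: label the sample closest to each centre with its true class.
labels_of_clusters = np.empty(n_clusters, dtype=int)
for c in range(n_clusters):
    members = np.where(km.labels_ == c)[0]
    closest = members[np.argmin(np.linalg.norm(X[members] - km.cluster_centers_[c], axis=1))]
    labels_of_clusters[c] = y_true[closest]          # one manual label per cluster

# Every sample inherits the label of its cluster centre.
y_propagated = labels_of_clusters[km.labels_]
manual_fraction = n_clusters / len(X)
print(f"manually labelled: {manual_fraction:.1%} of the data, "
      f"propagated-label accuracy: {(y_propagated == y_true).mean():.1%}")
```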

  20. a Semi-Automatic Rule Set Building Method for Urban Land Cover Classification Based on Machine Learning and Human Knowledge

    NASA Astrophysics Data System (ADS)

    Gu, H. Y.; Li, H. T.; Liu, Z. Y.; Shao, C. Y.

    2017-09-01

    The classification rule set, which comprises features and decision rules, is important for land cover classification. The selection of features and decision rules is usually based on an iterative trial-and-error approach, often utilized in GEOBIA; however, this is time-consuming and has poor versatility. This study puts forward a rule set building method for land cover classification based on human knowledge and machine learning. Machine learning is used to build rule sets effectively, which overcomes the iterative trial-and-error approach. Human knowledge is used to address the shortcoming of existing machine learning methods, namely the insufficient use of prior knowledge, and to improve the versatility of the rule sets. A two-step workflow is introduced: first, an initial rule set is built based on Random Forest and a CART decision tree. Second, the initial rule set is analyzed and validated using human knowledge, with a statistical confidence interval used to determine its thresholds. The test site is located in Potsdam City. We utilised the TOP, DSM and ground truth data. The results show that the method can determine a rule set for land cover classification semi-automatically, and that there are static features for different land cover classes.
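
    A minimal sketch of the first step, using synthetic per-segment features in place of the Potsdam TOP/DSM attributes: a CART decision tree is fitted and its decision rules are exported for subsequent expert review, with a simple confidence interval suggesting a more robust threshold. Feature names and the toy label rule are assumptions made only for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic object features standing in for GEOBIA attributes (NDVI, height, brightness).
rng = np.random.default_rng(0)
n = 600
ndvi   = rng.uniform(-0.2, 0.9, n)
height = rng.uniform(0.0, 25.0, n)
bright = rng.uniform(0.0, 1.0, n)
X = np.column_stack([ndvi, height, bright])
# Toy label rule: trees = high NDVI and tall; buildings = tall and low NDVI; else ground.
y = np.where((ndvi > 0.4) & (height > 3), "tree",
    np.where((ndvi <= 0.4) & (height > 3), "building", "ground"))

# Step 1: build an initial rule set with a CART decision tree.
cart = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(cart, feature_names=["NDVI", "nDSM_height", "brightness"]))

# Step 2 (expert review, sketched): a confidence interval on a class feature suggests
# a more robust threshold than the single split value learned from one sample.
tree_heights = height[y == "tree"]
lo, hi = np.percentile(tree_heights, [2.5, 97.5])
print(f"95% interval of tree height: {lo:.1f}-{hi:.1f} m -> candidate rule threshold")
```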

  1. From Laser Scanning to Finite Element Analysis of Complex Buildings by Using a Semi-Automatic Procedure.

    PubMed

    Castellazzi, Giovanni; D'Altri, Antonio Maria; Bitelli, Gabriele; Selvaggi, Ilenia; Lambertini, Alessandro

    2015-07-28

    In this paper, a new semi-automatic procedure to transform three-dimensional point clouds of complex objects into three-dimensional finite element models is presented and validated. The procedure conceives of the point cloud as a stacking of point sections. The complexity of the clouds is arbitrary, since the procedure is designed for terrestrial laser scanner surveys applied to buildings with irregular geometry, such as historical buildings. The procedure aims at solving the problems connected to the generation of finite element models of these complex structures by constructing a finely discretized geometry in a reduced amount of time, ready to be used for structural analysis. If the starting clouds represent the inner and outer surfaces of the structure, the resulting finite element model will accurately capture the whole three-dimensional structure, producing a complex solid made of voxel elements. A comparison analysis with a CAD-based model is carried out on a historical building damaged by a seismic event. The results indicate that the proposed procedure is effective and obtains comparable models in a shorter time, with an increased level of automation.
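
    The core geometric step, turning a point cloud into occupied voxel elements, can be sketched as follows with synthetic points and a uniform voxel size; the actual procedure additionally handles point sections and inner/outer surfaces, which are not reproduced here.

```python
import numpy as np

def voxelize(points, voxel_size):
    """Return the integer indices of voxels occupied by at least one point."""
    origin = points.min(axis=0)
    idx = np.floor((points - origin) / voxel_size).astype(int)
    return np.unique(idx, axis=0), origin

# Synthetic point cloud standing in for a laser-scan survey of a wall (metres).
rng = np.random.default_rng(0)
wall = np.column_stack([rng.uniform(0, 5, 20000),       # x along the wall
                        rng.uniform(0, 0.4, 20000),     # y: wall thickness
                        rng.uniform(0, 3, 20000)])      # z: height

voxels, origin = voxelize(wall, voxel_size=0.1)
print(f"{len(voxels)} occupied voxels of 0.1 m; "
      "each can be exported as a hexahedral finite element.")
```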

  2. Rapid Semi-Automatic Segmentation of the Spinal Cord from Magnetic Resonance Images: Application in Multiple Sclerosis

    PubMed Central

    Horsfield, Mark A.; Sala, Stefania; Neema, Mohit; Absinta, Martina; Bakshi, Anshika; Sormani, Maria Pia; Rocca, Maria A.; Bakshi, Rohit; Filippi, Massimo

    2010-01-01

    A new semi-automatic method for segmenting the spinal cord from MR images is presented. The method is based on an active surface (AS) model of the cord surface, with intrinsic smoothness constraints. The model is initialized by the user marking the approximate cord center-line on a few representative slices, and the compact surface parametrization results in a rapid segmentation, taking on the order of one minute. Using 3-D acquired T1-weighted images of the cervical spine from human controls and patients with multiple sclerosis, the intra- and inter-observer reproducibilities were evaluated, and compared favorably with an existing cord segmentation method. While the AS method overestimated the cord area by approximately 14% compared to manual outlining, correlations between cord cross-sectional area and clinical disability scores confirmed the relevance of the new method in measuring cord atrophy in multiple sclerosis. Segmentation of the cord from 2-D multi-slice T2-weighted images is also demonstrated over the cervical and thoracic region. Since the cord center-line is an intrinsic parameter extracted as part of the segmentation process, the image can be resampled such that the center-line forms one coordinate axis of a new image, allowing simple visualization of the cord structure and pathology; this could find wider application in standard radiological practice. PMID:20060481

  3. Semi-automatic verification of cropland and grassland using very high resolution mono-temporal satellite images

    NASA Astrophysics Data System (ADS)

    Helmholz, Petra; Rottensteiner, Franz; Heipke, Christian

    2014-11-01

    Many public and private decisions rely on geospatial information stored in a GIS database. For good decision making this information has to be complete, consistent, accurate and up-to-date. In this paper we introduce a new approach for the semi-automatic verification of a specific part of a possibly outdated GIS database, namely cropland and grassland objects, using mono-temporal very high resolution (VHR) multispectral satellite images. The approach consists of two steps: first, a supervised pixel-based classification based on a Markov Random Field is employed to extract image regions which contain agricultural areas (without distinction between cropland and grassland), and these regions are intersected with boundaries of the agricultural objects from the GIS database. Subsequently, GIS objects labelled as cropland or grassland in the database and showing agricultural areas in the image are subdivided into different homogeneous regions by means of image segmentation, followed by a classification of these segments into either cropland or grassland using a Support Vector Machine. The classification results of all segments belonging to one GIS object are finally merged and compared with the GIS database label. The developed approach was tested on a number of images. The evaluation shows that errors in the GIS database can be significantly reduced while also speeding up the whole verification task when compared to a manual process.

  4. A semi-automatic framework for highway extraction and vehicle detection based on a geometric deformable model

    NASA Astrophysics Data System (ADS)

    Niu, Xutong

    Road extraction and vehicle detection are two of the most important steps of traffic flow analysis from multi-frame aerial photographs. The traditional way of deriving traffic flow trajectories relies on manual vehicle counting from a sequence of aerial photographs. It is tedious and time-consuming work. To improve this process, this research presents a new semi-automatic framework for highway extraction and vehicle detection from aerial photographs. The basis of the new framework is a geometric deformable model. This model refers to the minimization of an objective function that connects the optimization problem with the propagation of regular curves. Utilizing an implicit representation of the two-dimensional curve, the implementation of this model is capable of dealing with topological changes during the curve deformation process, and the output is independent of the position of the initial curves. A seed point propagation framework is designed and implemented. This framework incorporates highway extraction, tracking, and linking into one procedure. Manually selected seed points can be automatically propagated throughout a whole highway network. During the process, road center points are also extracted, which introduces a search direction for solving possible blocking problems. This new framework has been successfully applied to highway network extraction and vehicle detection from a large orthophoto mosaic. In this research, vehicles on the extracted highway network were detected with an 83% success rate.

  5. Semi-automatic segmentation of vertebral bodies in volumetric MR images using a statistical shape+pose model

    NASA Astrophysics Data System (ADS)

    Suzani, Amin; Rasoulian, Abtin; Fels, Sidney; Rohling, Robert N.; Abolmaesumi, Purang

    2014-03-01

    Segmentation of vertebral structures in magnetic resonance (MR) images is challenging because of poor contrast between bone surfaces and surrounding soft tissue. This paper describes a semi-automatic method for segmenting vertebral bodies in multi-slice MR images. In order to achieve a fast and reliable segmentation, the method takes advantage of the correlation between shape and pose of different vertebrae in the same patient by using a statistical multi-vertebrae anatomical shape+pose model. Given a set of MR images of the spine, we initially reduce the intensity inhomogeneity in the images by using an intensity-correction algorithm. Then a 3D anisotropic diffusion filter smooths the images. Afterwards, we extract edges from a relatively small region of the pre-processed image with a simple user interaction. Subsequently, an iterative Expectation Maximization technique is used to register the statistical multi-vertebrae anatomical model to the extracted edge points in order to achieve a fast and reliable segmentation for lumbar vertebral bodies. We evaluate our method in terms of speed and accuracy by applying it to volumetric MR images of the spine acquired from nine patients. Quantitative and visual results demonstrate that the method is promising for segmentation of vertebral bodies in volumetric MR images.

  6. New semi-automatic method for reaction product charge and mass identification in heavy-ion collisions at Fermi energies

    NASA Astrophysics Data System (ADS)

    Gruyer, D.; Bonnet, E.; Chbihi, A.; Frankland, J. D.; Barlini, S.; Borderie, B.; Bougault, R.; Dueñas, J. A.; Galichet, E.; Kordyasz, A.; Kozik, T.; Le Neindre, N.; Lopez, O.; Pârlog, M.; Pastore, G.; Piantelli, S.; Valdré, S.; Verde, G.; Vient, E.

    2017-03-01

    This article presents a new semi-automatic method for charge and mass identification of charged nuclear fragments using either ΔE - E correlations between measured energy losses in two successive detectors or correlations between charge signal amplitude and rise time in a single silicon detector, derived from digital pulse shape analysis techniques. In both cases different nuclear species (defined by their atomic number Z and mass number A) can be visually identified from such correlations if they are presented as a two-dimensional histogram ('identification matrix'), in which case correlations for different species populate different ridge lines ('identification lines') in the matrix. The proposed algorithm is based on the identification matrix's properties and uses as little information as possible on the global form of the identification lines, making it applicable to a large variety of matrices. Particular attention has been paid to the implementation in a suitable graphical environment, so that only two mouse-clicks are required from the user to calculate all initialization parameters. Example applications to recent data from both INDRA and FAZIA telescopes are presented.

  7. Comparison of acute and chronic traumatic brain injury using semi-automatic multimodal segmentation of MR volumes.

    PubMed

    Irimia, Andrei; Chambers, Micah C; Alger, Jeffry R; Filippou, Maria; Prastawa, Marcel W; Wang, Bo; Hovda, David A; Gerig, Guido; Toga, Arthur W; Kikinis, Ron; Vespa, Paul M; Van Horn, John D

    2011-11-01

    Although neuroimaging is essential for prompt and proper management of traumatic brain injury (TBI), there is a regrettable and acute lack of robust methods for the visualization and assessment of TBI pathophysiology, especially for the purpose of improving clinical outcome metrics. Until now, the application of automatic segmentation algorithms to TBI in a clinical setting has remained an elusive goal because existing methods have, for the most part, been insufficiently robust to faithfully capture TBI-related changes in brain anatomy. This article introduces and illustrates the combined use of multimodal TBI segmentation and time point comparison using 3D Slicer, a widely-used software environment whose TBI data processing solutions are openly available. For three representative TBI cases, semi-automatic tissue classification and 3D model generation are performed to enable intra-patient time point comparison of TBI using multimodal volumetrics and clinical atrophy measures. Identification and quantitative assessment of extra- and intra-cortical bleeding, lesions, edema, and diffuse axonal injury are demonstrated. The proposed tools allow cross-correlation of multimodal metrics from structural imaging (e.g., structural volume, atrophy measurements) with clinical outcome variables and other potential factors predictive of recovery. In addition, the workflows described are suitable for TBI clinical practice and patient monitoring, particularly for assessing damage extent and for the measurement of neuroanatomical change over time. With knowledge of general location, extent, and degree of change, such metrics can be associated with clinical measures and subsequently used to suggest viable treatment options.

  8. Semi-automatic ground truth generation using unsupervised clustering and limited manual labeling: Application to handwritten character recognition.

    PubMed

    Vajda, Szilárd; Rangoni, Yves; Cecotti, Hubert

    2015-06-01

    For training supervised classifiers to recognize different patterns, large data collections with accurate labels are necessary. In this paper, we propose a generic, semi-automatic labeling technique for large handwritten character collections. In order to speed up the creation of a large scale ground truth, the method combines unsupervised clustering and minimal expert knowledge. To exploit the potential discriminant complementarities across features, each character is projected into five different feature spaces. After clustering the images in each feature space, the human expert labels the cluster centers. Each data point inherits the label of its cluster's center. A majority (or unanimity) vote decides the label of each character image. The amount of human involvement (labeling) is strictly controlled by the number of clusters produced by the chosen clustering approach. To test the efficiency of the proposed approach, we have compared and evaluated three state-of-the-art clustering methods (k-means, self-organizing maps, and growing neural gas) on the MNIST digit data set and a Lampung Indonesian character data set, respectively. Considering a k-nn classifier, we show that labeling manually only 1.3% (MNIST) and 3.2% (Lampung) of the training data provides the same range of performance as a completely labeled data set would.

  9. Semi-Automatic Modelling of Building Façades with Shape Grammars Using Historic Building Information Modelling

    NASA Astrophysics Data System (ADS)

    Dore, C.; Murphy, M.

    2013-02-01

    This paper outlines a new approach for generating digital heritage models from laser scan or photogrammetric data using Historic Building Information Modelling (HBIM). HBIM is a plug-in for Building Information Modelling (BIM) software that uses parametric library objects and procedural modelling techniques to automate the modelling stage. The HBIM process involves a reverse engineering solution whereby parametric interactive objects representing architectural elements are mapped onto laser scan or photogrammetric survey data. A library of parametric architectural objects has been designed from historic manuscripts and architectural pattern books. These parametric objects were built using an embedded programming language within the ArchiCAD BIM software called Geometric Description Language (GDL). Procedural modelling techniques have been implemented with the same language to create a parametric building façade which automatically combines library objects based on architectural rules and proportions. Different configurations of the façade are controlled by user parameter adjustment. The automatically positioned elements of the façade can be subsequently refined using graphical editing while overlaying the model with orthographic imagery. Along with this semi-automatic method for generating façade models, manual plotting of library objects can also be used to generate a BIM model from survey data. After the 3D model has been completed conservation documents such as plans, sections, elevations and 3D views can be automatically generated for conservation projects.

  10. From Laser Scanning to Finite Element Analysis of Complex Buildings by Using a Semi-Automatic Procedure

    PubMed Central

    Castellazzi, Giovanni; D’Altri, Antonio Maria; Bitelli, Gabriele; Selvaggi, Ilenia; Lambertini, Alessandro

    2015-01-01

    In this paper, a new semi-automatic procedure to transform three-dimensional point clouds of complex objects into three-dimensional finite element models is presented and validated. The procedure conceives of the point cloud as a stacking of point sections. The complexity of the clouds is arbitrary, since the procedure is designed for terrestrial laser scanner surveys applied to buildings with irregular geometry, such as historical buildings. The procedure aims at solving the problems connected to the generation of finite element models of these complex structures by constructing a finely discretized geometry in a reduced amount of time, ready to be used for structural analysis. If the starting clouds represent the inner and outer surfaces of the structure, the resulting finite element model will accurately capture the whole three-dimensional structure, producing a complex solid made of voxel elements. A comparison analysis with a CAD-based model is carried out on a historical building damaged by a seismic event. The results indicate that the proposed procedure is effective and obtains comparable models in a shorter time, with an increased level of automation. PMID:26225978

  11. Semi-automatic selection of summary statistics for ABC model choice.

    PubMed

    Prangle, Dennis; Fearnhead, Paul; Cox, Murray P; Biggs, Patrick J; French, Nigel P

    2014-02-01

    A central statistical goal is to choose between alternative explanatory models of data. In many modern applications, such as population genetics, it is not possible to apply standard methods based on evaluating the likelihood functions of the models, as these are numerically intractable. Approximate Bayesian computation (ABC) is a commonly used alternative for such situations. ABC simulates data x for many parameter values under each model, which are compared to the observed data x_obs. More weight is placed on models under which S(x) is close to S(x_obs), where S maps data to a vector of summary statistics. Previous work has shown the choice of S is crucial to the efficiency and accuracy of ABC. This paper provides a method to select good summary statistics for model choice. It uses a preliminary step, simulating many x values from all models and fitting regressions to this with the model as response. The resulting model weight estimators are used as S in an ABC analysis. Theoretical results are given to justify this as approximating low dimensional sufficient statistics. A substantive application is presented: choosing between competing coalescent models of demographic growth for Campylobacter jejuni in New Zealand using multi-locus sequence typing data.
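
    A minimal ABC rejection sketch for a toy one-parameter model, using the sample mean as the summary statistic S; the paper's contribution, the regression-based construction of S for model choice, is not reproduced here, and all numerical values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Observed data and its summary statistic S(x_obs).
x_obs = rng.normal(loc=2.0, scale=1.0, size=50)
s_obs = x_obs.mean()

# ABC rejection for the mean of a Normal(theta, 1) model with a flat prior on [-5, 5].
n_sim, epsilon = 100_000, 0.05
theta = rng.uniform(-5, 5, n_sim)                     # draws from the prior
s_sim = rng.normal(theta, 1.0 / np.sqrt(50))          # S(x) simulated directly (sample mean)
accepted = theta[np.abs(s_sim - s_obs) < epsilon]     # keep draws with S(x) close to S(x_obs)

print(f"accepted {accepted.size} draws; "
      f"posterior mean ~ {accepted.mean():.3f} (true value 2.0)")
```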

  12. A Semi-Automatic Approach to Construct Vietnamese Ontology from Online Text

    ERIC Educational Resources Information Center

    Nguyen, Bao-An; Yang, Don-Lin

    2012-01-01

    An ontology is an effective formal representation of knowledge used commonly in artificial intelligence, semantic web, software engineering, and information retrieval. In open and distance learning, ontologies are used as knowledge bases for e-learning supplements, educational recommenders, and question answering systems that support students with…

  13. Semi-automatic registration of multi-source satellite imagery with varying geometric resolutions

    NASA Astrophysics Data System (ADS)

    Al-Ruzouq, Rami

    Image registration concerns the problem of how to combine data and information from multiple sensors in order to achieve improved accuracy and better inferences about the environment than could be attained through the use of a single sensor. Registration of imagery from multiple sources is essential for a variety of applications in remote sensing, medical diagnosis, computer vision, and pattern recognition. In general, an image registration methodology must deal with four issues. First, a decision has to be made regarding the choice of primitives for the registration procedure. The second issue concerns establishing the registration transformation function that mathematically relates the images to be registered. Then, a similarity measure should be devised to ensure the correspondence of conjugate primitives. Finally, a matching strategy has to be designed and implemented as a controlling framework that utilizes the primitives, the similarity measure, and the transformation function to solve the registration problem. The Modified Iterated Hough Transform (MIHT) is used as the matching strategy for automatically deriving an estimate of the parameters involved in the transformation function as well as the correspondence between conjugate primitives. The MIHT procedure follows an optimal sequence for parameter estimation. This sequence takes into account the contribution of linear features with different orientations at various locations within the imagery towards the estimation of the transformation parameters in question. Accurate co-registration of multi-sensor datasets captured at different times is a prerequisite step for a reliable change detection procedure. Once the registration problem has been solved, the suggested methodology proceeds by detecting changes between the registered images. Derived edges from the registered images are used as the basis for change detection. Edges are utilized because they are invariant regardless of possible radiometric differences

  14. Reflective random indexing for semi-automatic indexing of the biomedical literature.

    PubMed

    Vasuki, Vidya; Cohen, Trevor

    2010-10-01

    The rapid growth of biomedical literature is evident in the increasing size of the MEDLINE research database. Medical Subject Headings (MeSH), a controlled set of keywords, are used to index all the citations contained in the database to facilitate search and retrieval. This volume of citations calls for efficient tools to assist indexers at the US National Library of Medicine (NLM). Currently, the Medical Text Indexer (MTI) system provides assistance by recommending MeSH terms based on the title and abstract of an article using a combination of distributional and vocabulary-based methods. In this paper, we evaluate a novel approach toward indexer assistance by using nearest neighbor classification in combination with Reflective Random Indexing (RRI), a scalable alternative to the established methods of distributional semantics. On a test set provided by the NLM, our approach significantly outperforms the MTI system, suggesting that the RRI approach would make a useful addition to the current methodologies.

  15. A conceptual study of automatic and semi-automatic quality assurance techniques for ground image processing

    NASA Technical Reports Server (NTRS)

    1983-01-01

    This report summarizes the results of a study conducted by Engineering and Economics Research (EER), Inc. under NASA Contract Number NAS5-27513. The study involved the development of preliminary concepts for automatic and semiautomatic quality assurance (QA) techniques for ground image processing. A distinction is made between quality assessment and the more comprehensive quality assurance which includes decision making and system feedback control in response to quality assessment.

  16. Semi-automatic building extraction in informal settlements from high-resolution satellite imagery

    NASA Astrophysics Data System (ADS)

    Mayunga, Selassie David

    The extraction of man-made features from digital remotely sensed images is considered an important step underpinning the management of human settlements in any country. Man-made features, and buildings in particular, are required for a variety of applications such as urban planning, the creation of geographical information system (GIS) databases and urban city models. Traditional man-made feature extraction methods are very expensive in terms of equipment, are labour intensive, need well-trained personnel and cannot cope with changing environments, particularly in dense urban settlement areas. This research presents an approach for extracting buildings in dense informal settlement areas using high-resolution satellite imagery. The proposed system uses a novel strategy of extracting a building by measuring a single point at the approximate centre of the building. The fine measurement of the building outlines is then effected using a modified snake model. The original snake model on which this framework is based incorporates an external constraint energy term which is tailored to preserving the convergence properties of the snake model; its use on unstructured objects would negatively affect their actual shapes. The external constraint energy term was removed from the original snake model formulation, thereby giving the ability to cope with the high variability of building shapes in informal settlement areas. The proposed building extraction system was tested on two areas with different situations. The first area was Tungi in Dar es Salaam, Tanzania, where three sites were tested. This area is characterized by informal settlements, which are formed illegally within the city boundaries. The second area was Oromocto in New Brunswick, Canada, where two sites were tested. The Oromocto area is mostly flat and the buildings are constructed using similar materials. Qualitative and quantitative measures were employed to evaluate the accuracy of the results as well as the performance

  17. Terrain-driven unstructured mesh development through semi-automatic vertical feature extraction

    NASA Astrophysics Data System (ADS)

    Bilskie, Matthew V.; Coggin, David; Hagen, Scott C.; Medeiros, Stephen C.

    2015-12-01

    validation techniques are necessary for state-of-the-art flood inundation models. In addition, the semi-automated, unstructured mesh generation process presented herein increases the overall accuracy of simulated storm surge across the floodplain without reliance on hand digitization or sacrificing computational cost.

  18. Trusted Computer Systems - Glossary

    DTIC Science & Technology

    1981-03-01

    Trusted Computer Systems - Glossary. George A. Huff, March 1981. ... its use for the simultaneous processing of multiple levels of classified or sensitive information. This glossary was prepared for distribution at the Third Computer Security Initiative Seminar held at the National Bureau of Standards, November 18-20, 1980. Emphasis is on terms which relate to the

  19. Computational Systems Biology

    SciTech Connect

    McDermott, Jason E.; Samudrala, Ram; Bumgarner, Roger E.; Montogomery, Kristina; Ireton, Renee

    2009-05-01

    Computational systems biology is the term that we use to describe computational methods to identify, infer, model, and store relationships between the molecules, pathways, and cells (“systems”) involved in a living organism. Based on this definition, the field of computational systems biology has been in existence for some time. However, the recent confluence of high throughput methodology for biological data gathering, genome-scale sequencing and computational processing power has driven a reinvention and expansion of this field. The expansions include not only modeling of small metabolic {Ishii, 2004 #1129; Ekins, 2006 #1601; Lafaye, 2005 #1744} and signaling systems {Stevenson-Paulik, 2006 #1742; Lafaye, 2005 #1744} but also modeling of the relationships between biological components in very large systems, including whole cells and organisms {Ideker, 2001 #1124; Pe'er, 2001 #1172; Pilpel, 2001 #393; Ideker, 2002 #327; Kelley, 2003 #1117; Shannon, 2003 #1116; Ideker, 2004 #1111} {Schadt, 2003 #475; Schadt, 2006 #1661} {McDermott, 2002 #878; McDermott, 2005 #1271}. Generally these models provide a general overview of one or more aspects of these systems and leave the determination of details to experimentalists focused on smaller subsystems. The promise of such approaches is that they will elucidate patterns, relationships and general features that are not evident from examining specific components or subsystems. These predictions are either interesting in and of themselves (for example, the identification of an evolutionary pattern), or are interesting and valuable to researchers working on a particular problem (for example, highlighting a previously unknown functional pathway). Two events have occurred to bring the field of computational systems biology to the forefront. One is the advent of high throughput methods that have generated large amounts of information about particular systems in the form of genetic studies, gene expression analyses (both protein and

  20. Field comparison of manual and semi-automatic methods for the measurement of total gaseous mercury in ambient air and assessment of equivalence.

    PubMed

    Brown, Richard J C; Kumar, Yarshini; Brown, Andrew S; Dexter, Matthew A; Corns, Warren T

    2012-02-01

    The manual and semi-automatic methods for the measurement of total gaseous mercury in ambient air have been compared in a field trial for the first time. The comparison results have shown that whilst the expected random scatter is present, there was no significant systematic bias between the two methods, whose operational differences have also been outlined and analysed in this work. Furthermore it has been observed that because variation in instrument sensitivity is largely random in nature there is little effect on the results of the comparison if the period between instrument calibrations is altered. When the manual and semi-automatic methods are compared according to guidelines produced by the European Commission the results presented here, taken together with other supporting evidence, strongly suggest that the two methods are equivalent.

  1. Semi-automatic calibration technique using six inertial frames of reference

    NASA Astrophysics Data System (ADS)

    Lai, Alan; James, Daniel A.; Hayes, Jason P.; Harvey, Erol C.

    2004-03-01

    A triaxial accelerometer calibration technique that evades the problems of the conventional calibration method of aligning with gravity is proposed in this paper. It is based on the principle that the vector sum of acceleration from three sensing axes should be equal to the gravity vector. The method requires the accelerometer to be oriented and stationary in 6 different ways to solve for the 3 scale factors and 3 offsets. The Newton-Raphson method was employed to solve the non-linear equations in order to obtain the scale factors and offsets. The iterative process was fast, with an average of 5 iterations required to solve the system of equations. The accuracy of the derived scale factors and offsets was determined by using them to calculate the gravity vector magnitude with the triaxial accelerometer measuring gravity. The triaxial accelerometer was used to measure gravity 264 times to determine the accuracy of the 44 acceptable sets of scale factors and offsets derived from the calibrations (gravity was assumed to equal 9.8000 m s⁻² during the calibration). It was found that the best calibration calculated the gravity vector magnitude to 9.8156 ± 0.4294 m s⁻². This equates to a maximum of 4.5% error in terms of a constant acceleration measurement. Because of the principle behind this method, it has the disadvantage that noise/error in only one axis will cause an inaccurate determination of all the scale factors and offsets.
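
    The calibration principle, that the calibrated vector magnitude must equal g in every static orientation, can be sketched as a six-orientation least-squares problem. The readings below are synthetic, and scipy's solver is used for brevity in place of the paper's Newton-Raphson iteration.

```python
import numpy as np
from scipy.optimize import least_squares

G = 9.8  # assumed gravity magnitude during calibration (m/s^2)

# Synthetic "true" sensor model and six roughly axis-aligned static orientations.
rng = np.random.default_rng(0)
true_scale, true_offset = np.array([1.02, 0.97, 1.05]), np.array([0.15, -0.08, 0.20])
directions = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                       [0, -1, 0], [0, 0, 1], [0, 0, -1]], dtype=float)
directions += rng.normal(0, 0.05, directions.shape)              # imperfect placement
directions /= np.linalg.norm(directions, axis=1, keepdims=True)
raw = G * directions / true_scale + true_offset                   # raw (uncalibrated) readings

def residuals(params, raw):
    """For each orientation, |scale * (raw - offset)| should equal G."""
    scale, offset = params[:3], params[3:]
    mags = np.linalg.norm(scale * (raw - offset), axis=1)
    return mags - G

sol = least_squares(residuals, x0=np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0]), args=(raw,))
print("estimated scale  :", np.round(sol.x[:3], 3))
print("estimated offset :", np.round(sol.x[3:], 3))
```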

  2. Computer network defense system

    DOEpatents

    Urias, Vincent; Stout, William M. S.; Loverro, Caleb

    2017-08-22

    A method and apparatus for protecting virtual machines. A computer system creates a copy of a group of the virtual machines in an operating network in a deception network to form a group of cloned virtual machines in the deception network when the group of the virtual machines is accessed by an adversary. The computer system creates an emulation of components from the operating network in the deception network. The components are accessible by the group of the cloned virtual machines as if the group of the cloned virtual machines was in the operating network. The computer system moves network connections for the group of the virtual machines in the operating network used by the adversary from the group of the virtual machines in the operating network to the group of the cloned virtual machines, enabling protecting the group of the virtual machines from actions performed by the adversary.

  3. Validation of a semi-automatic co-registration of MRI scans in patients with brain tumors during treatment follow-up.

    PubMed

    van der Hoorn, Anouk; Yan, Jiun-Lin; Larkin, Timothy J; Boonzaier, Natalie R; Matys, Tomasz; Price, Stephen J

    2016-07-01

    There is an expanding research interest in high-grade gliomas because of their significant population burden and poor survival despite the extensive standard multimodal treatment. One of the obstacles is the lack of individualized monitoring of tumor characteristics and treatment response before, during and after treatment. We have developed a two-stage semi-automatic method to co-register MRI scans at different time points before and after surgical and adjuvant treatment of high-grade gliomas. This two-stage co-registration includes a linear co-registration of the semi-automatically derived mask of the preoperative contrast-enhancing area or postoperative resection cavity, brain contour and ventricles between different time points. The resulting transformation matrix was then applied in a non-linear manner to co-register conventional contrast-enhanced T1-weighted images. Targeted registration errors were calculated and compared with linear and non-linear co-registered images. Targeted registration errors were smaller for the semi-automatic non-linear co-registration compared with both the non-linear and linear co-registered images. This was further visualized using a three-dimensional structural similarity method. The semi-automatic non-linear co-registration allowed for optimal correction of the variable brain shift at different time points as evaluated by the minimal targeted registration error. This proposed method allows for the accurate evaluation of the treatment response, essential for the growing research area of brain tumor imaging and treatment response evaluation in large sets of patients. Copyright © 2016 John Wiley & Sons, Ltd.

  4. Semi-automatic carotid intraplaque hemorrhage detection and quantification on Magnetization-Prepared Rapid Acquisition Gradient-Echo (MP-RAGE) with optimized threshold selection.

    PubMed

    Liu, Jin; Balu, Niranjan; Hippe, Daniel S; Ferguson, Marina S; Martinez-Malo, Vanesa; DeMarco, J Kevin; Zhu, David C; Ota, Hideki; Sun, Jie; Xu, Dongxiang; Kerwin, William S; Hatsukami, Thomas S; Yuan, Chun

    2016-07-16

    Intraplaque hemorrhage (IPH) is associated with atherosclerosis progression and subsequent cardiovascular events. We sought to develop a semi-automatic method with an optimized threshold for carotid IPH detection and quantification on MP-RAGE images using matched histology as the gold standard. Fourteen patients scheduled for carotid endarterectomy underwent 3D MP-RAGE cardiovascular magnetic resonance (CMR) preoperatively. Presence and area of IPH were recorded using histology. Presence and area of IPH were also recorded on CMR based on intensity thresholding using three references for intensity normalization: the sternocleidomastoid muscle (SCM), the adjacent muscle and the automatically generated local median value. The optimized intensity thresholds were obtained by maximizing the Youden's index for IPH detection. Using leave-one-out cross validation, the sensitivity and specificity for IPH detection based on our proposed semi-automatic method and the agreement with histology on IPH area quantification were evaluated. The optimized intensity thresholds for IPH detection were 1.0 times the SCM intensity, 1.6 times the adjacent muscle intensity and 2.2 times the median intensity. Using the semi-automatic method with the optimized intensity threshold, the following IPH detection and quantification performance was obtained: sensitivities up to 59, 68 and 80 %; specificities up to 85, 74 and 79 %; Pearson's correlation coefficients (IPH area measurement) up to 0.76, 0.93 and 0.90, respectively, using SCM, the adjacent muscle and the local median value for intensity normalization, after heavily calcified and small IPH were excluded. A semi-automatic method with good performance on IPH detection and quantification can be obtained in MP-RAGE CMR, using an optimized intensity threshold relative to the adjacent muscle. The automatically generated reference of local median value provides comparable performance and may be particularly useful for developing automatic
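
    The threshold optimization described above can be sketched with an ROC curve, selecting the normalized intensity threshold that maximizes Youden's J = sensitivity + specificity - 1. The normalized intensities below are synthetic stand-ins for the MP-RAGE voxel data, not values from the study.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Synthetic normalized intensities (voxel intensity / adjacent-muscle intensity):
# IPH voxels tend to be brighter than non-IPH plaque tissue.
rng = np.random.default_rng(0)
non_iph = rng.normal(1.1, 0.25, 800)
iph     = rng.normal(1.9, 0.35, 200)
scores  = np.concatenate([non_iph, iph])
labels  = np.concatenate([np.zeros(non_iph.size), np.ones(iph.size)])

# Youden's index J = sensitivity + specificity - 1, maximized over thresholds.
fpr, tpr, thresholds = roc_curve(labels, scores)
j = tpr - fpr
best = np.argmax(j)
print(f"optimal threshold ~ {thresholds[best]:.2f} x adjacent-muscle intensity "
      f"(sens = {tpr[best]:.2f}, spec = {1 - fpr[best]:.2f})")
```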

  5. A novel semi-automatic segmentation method for volumetric assessment of the colon based on magnetic resonance imaging.

    PubMed

    Sandberg, Thomas Holm; Nilsson, Matias; Poulsen, Jakob Lykke; Gram, Mikkel; Frøkjær, Jens Brøndum; Østergaard, Lasse Riis; Drewes, Asbjørn Mohr

    2015-10-01

    To develop a novel semi-automatic segmentation method for quantification of the colon from magnetic resonance imaging (MRI). Fourteen abdominal T2-weighted and dual-echo Dixon-type water-only MRI scans were obtained from four healthy subjects. Regions of interest containing the colon were outlined manually on the T2-weighted images. Segmentation of the colon and feces was obtained using k-means clustering and image registration. Regional colonic and fecal volumes were obtained. Inter-observer agreement between two observers was assessed using the Dice similarity coefficient as measure of overlap. Colonic segmentations showed wide variation in volume and morphology between subjects. Colon volumes of the four healthy subjects for both observers were (median [interquartile range]) ascending colon 200 mL [169.5-260], transverse 200.5 mL [113.5-242.5], descending 148 mL [121.5-178.5], sigmoid-rectum 277 mL [192-345], and total 819 mL [687-898.5]. Overlap agreement for the total colon segmentation between the two observers was high with a Dice similarity coefficient of 0.91 [0.84-0.94]. The colon volume to feces volume ratio was on average 0.7. Regional colon volumes were comparable to previous findings using fully manual segmentation. The method showed good agreement between observers and may be used in future studies of gastrointestinal disorders to assess colon and fecal volume and colon morphology. Novel insight into morphology and quantitative assessment of the colon using this method may provide new biomarkers for constipation and abdominal pain compared to radiography which suffers from poor reliability.
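
    The inter-observer overlap measure used above, the Dice similarity coefficient, is simple to compute from two binary segmentation masks; a minimal sketch with synthetic masks follows.

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient = 2|A ∩ B| / (|A| + |B|) for boolean masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Two slightly different segmentations of the same synthetic region.
obs1 = np.zeros((64, 64, 64), dtype=bool)
obs1[20:44, 20:44, 20:44] = True
obs2 = np.zeros_like(obs1)
obs2[22:46, 20:44, 20:44] = True          # shifted by two voxels along one axis

print(f"Dice = {dice_coefficient(obs1, obs2):.2f}")
```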

  6. SplitRacer - a semi-automatic tool for the analysis and interpretation of teleseismic shear-wave splitting

    NASA Astrophysics Data System (ADS)

    Reiss, Miriam Christina; Rümpker, Georg

    2017-04-01

    We present a semi-automatic, graphical user interface tool for the analysis and interpretation of teleseismic shear-wave splitting in MATLAB. Shear wave splitting analysis is a standard tool to infer seismic anisotropy, which is often interpreted as due to lattice-preferred orientation of e.g. mantle minerals or shape-preferred orientation caused by cracks or alternating layers in the lithosphere, and hence provides a direct link to the earth's kinematic processes. The increasing number of permanent stations and temporary experiments result in comprehensive studies of seismic anisotropy world-wide. Their successive comparison with a growing number of global models of mantle flow further advances our understanding of the earth's interior. However, increasingly large data sets pose the inevitable question as to how to process them. Well-established routines and programs are accurate but often slow and impractical for analyzing a large amount of data. Additionally, shear wave splitting results are seldom evaluated using the same quality criteria, which complicates a straightforward comparison. SplitRacer consists of several processing steps: i) download of data via FDSNWS, ii) direct reading of miniSEED files and an initial screening and categorizing of XKS waveforms using a pre-set SNR threshold, iii) analysis of the particle motion of selected phases and successive correction of the sensor misalignment based on the long axis of the particle motion, iv) splitting analysis of selected events: seismograms are first rotated into radial and transverse components, then the energy-minimization method is applied, which provides the polarization and delay time of the phase. To estimate errors, the analysis is done for different randomly chosen time windows. v) joint-splitting analysis of all events for one station, where the energy content of all phases is inverted simultaneously. This decreases the influence of noise and increases the robustness of the measurement

  7. Semi-automatic software increases CT measurement accuracy but not response classification of colorectal liver metastases after chemotherapy.

    PubMed

    van Kessel, Charlotte S; van Leeuwen, Maarten S; Witteveen, Petronella O; Kwee, Thomas C; Verkooijen, Helena M; van Hillegersberg, Richard

    2012-10-01

    This study evaluates intra- and interobserver variability of automatic diameter and volume measurements of colorectal liver metastases (CRLM) before and after chemotherapy and its influence on response classification. Pre- and post-chemotherapy CT scans of 33 patients with 138 CRLM were evaluated. Two observers measured all metastases three times on pre- and post-chemotherapy CT scans, using three different techniques: manual diameter (MD), automatic diameter (AD) and automatic volume (AV). RECIST 1.0 criteria were used to define response classification. For each technique, we assessed intra- and interobserver reliability by determining the intraclass correlation coefficient (α-level 0.05). Intra-observer agreement was estimated by the variance coefficient (%). For inter-observer agreement, the relative measurement error (%) was calculated using Bland-Altman analysis. In addition, we compared agreement in response classification by calculating kappa-scores (κ) and estimating proportions of discordance between methods (%). Intra-observer variability was 6.05%, 4.28% and 12.72% for MD, AD and AV, respectively. Inter-observer variability was 4.23%, 2.02% and 14.86% for MD, AD and AV, respectively. Chemotherapy marginally affected these estimates. Agreement in response classification did not improve using AD or AV (MD κ=0.653, AD κ=0.548, AV κ=0.548) and substantial discordance between observers was observed with all three methods (MD 17.8%, AD 22.2%, AV 22.2%). Semi-automatic software allows repeatable and reproducible diameter and volume measurements of CRLM, but does not reduce variability in response classification. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
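
    The κ statistic used above to quantify agreement in response classes can be computed as follows. This is a generic Cohen's kappa sketch with invented RECIST labels, not the study's data or code.

```python
import numpy as np

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for agreement between two observers' categorical ratings."""
    a = np.asarray(labels_a)
    b = np.asarray(labels_b)
    categories = np.union1d(a, b)
    p_observed = np.mean(a == b)
    # Chance agreement from the two observers' marginal class frequencies
    p_expected = sum(np.mean(a == c) * np.mean(b == c) for c in categories)
    return (p_observed - p_expected) / (1.0 - p_expected)

# Hypothetical response classes (partial response, stable disease, progression)
observer_1 = ["PR", "SD", "SD", "PD", "PR", "SD", "PR", "SD"]
observer_2 = ["PR", "SD", "PR", "PD", "PR", "SD", "SD", "SD"]
print(round(cohens_kappa(observer_1, observer_2), 3))
```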

  8. Computer Systems Technician.

    ERIC Educational Resources Information Center

    Ohio State Univ., Columbus. Center on Education and Training for Employment.

    This document contains 17 units to consider for use in a tech prep competency profile for the occupation of computer systems technician. All the units listed will not necessarily apply to every situation or tech prep consortium, nor will all the competencies within each unit be appropriate. Several units appear within each specific occupation and…

  9. Computer Systems Technician.

    ERIC Educational Resources Information Center

    Ohio State Univ., Columbus. Center on Education and Training for Employment.

    This document contains 17 units to consider for use in a tech prep competency profile for the occupation of computer systems technician. All the units listed will not necessarily apply to every situation or tech prep consortium, nor will all the competencies within each unit be appropriate. Several units appear within each specific occupation and…

  10. Computer Vision Systems

    NASA Astrophysics Data System (ADS)

    Gunasekaran, Sundaram

    Food quality is of paramount consideration for all consumers, and its importance is perhaps second only to food safety. By some definitions, food safety is also incorporated into the broad categorization of food quality. Hence, the need for careful and accurate evaluation of food quality is at the forefront of research and development in both academia and industry. Among the many available methods for food quality evaluation, computer vision has proven to be the most powerful, especially for nondestructively extracting and quantifying many features that have direct relevance to food quality assessment and control. Furthermore, computer vision systems serve to rapidly evaluate the most readily observable food quality attributes - external characteristics such as color, shape, size, surface texture, etc. In addition, it is now possible, using advanced computer vision technologies, to “see” inside a food product and/or package to examine important quality attributes ordinarily unavailable to human evaluators. With rapid advances in electronic hardware and other associated imaging technologies, the cost-effectiveness and speed of computer vision systems have greatly improved and many practical systems are already in place in the food industry.

  11. Computational Systems Chemical Biology

    PubMed Central

    Oprea, Tudor I.; May, Elebeoba E.; Leitão, Andrei; Tropsha, Alexander

    2013-01-01

    There is a critical need for improving the level of chemistry awareness in systems biology. The data and information related to modulation of genes and proteins by small molecules continue to accumulate at the same time as simulation tools in systems biology and whole body physiologically-based pharmacokinetics (PBPK) continue to evolve. We called this emerging area at the interface between chemical biology and systems biology systems chemical biology, SCB (Oprea et al., 2007). The overarching goal of computational SCB is to develop tools for integrated chemical-biological data acquisition, filtering and processing, by taking into account relevant information related to interactions between proteins and small molecules, possible metabolic transformations of small molecules, as well as associated information related to genes, networks, small molecules and, where applicable, mutants and variants of those proteins. There is yet an unmet need to develop an integrated in silico pharmacology / systems biology continuum that embeds drug-target-clinical outcome (DTCO) triplets, a capability that is vital to the future of chemical biology, pharmacology and systems biology. Through the development of the SCB approach, scientists will be able to start addressing, in an integrated simulation environment, questions that make the best use of our ever-growing chemical and biological data repositories at the system-wide level. This chapter reviews some of the major research concepts and describes key components that constitute the emerging area of computational systems chemical biology. PMID:20838980

  12. Computational systems chemical biology.

    PubMed

    Oprea, Tudor I; May, Elebeoba E; Leitão, Andrei; Tropsha, Alexander

    2011-01-01

    There is a critical need for improving the level of chemistry awareness in systems biology. The data and information related to modulation of genes and proteins by small molecules continue to accumulate at the same time as simulation tools in systems biology and whole body physiologically based pharmacokinetics (PBPK) continue to evolve. We called this emerging area at the interface between chemical biology and systems biology "systems chemical biology" (SCB) (Nat Chem Biol 3: 447-450, 2007). The overarching goal of computational SCB is to develop tools for integrated chemical-biological data acquisition, filtering and processing, by taking into account relevant information related to interactions between proteins and small molecules, possible metabolic transformations of small molecules, as well as associated information related to genes, networks, small molecules, and, where applicable, mutants and variants of those proteins. There is yet an unmet need to develop an integrated in silico pharmacology/systems biology continuum that embeds drug-target-clinical outcome (DTCO) triplets, a capability that is vital to the future of chemical biology, pharmacology, and systems biology. Through the development of the SCB approach, scientists will be able to start addressing, in an integrated simulation environment, questions that make the best use of our ever-growing chemical and biological data repositories at the system-wide level. This chapter reviews some of the major research concepts and describes key components that constitute the emerging area of computational systems chemical biology.

  13. Technical computing system evaluations

    SciTech Connect

    Shaw, B.R.

    1987-05-01

    The acquisition of technical computing hardware and software is an extremely personal process. Although most commercial system configurations have one of several general organizations, individual requirements of the purchaser can have a large impact on successful implementation even though differences between products may seem small. To assure adequate evaluation and appropriate system selection, it is absolutely essential to establish written goals, create a real benchmark data set and testing procedure, and finally test and evaluate the system using the purchaser's technical staff, not the vendor's. BHP P(A) (formerly Monsanto Oil Company) was given the opportunity to acquire a technical computing system that would meet the needs of the geoscience community, provide future growth avenues, and maintain corporate hardware and software standards of stability and reliability. The system acquisition team consisted of a staff geologist, geophysicist, and manager of information systems. The eight-month evaluation allowed the development of procedures to personalize and evaluate BHP's needs as well as the vendor's products. The goal-driven benchmark process has become the standard procedure for system additions and expansions as well as product acceptance evaluations.

  14. Semi-automatic measures of activity in selected south polar regions of Mars using morphological image analysis

    NASA Astrophysics Data System (ADS)

    Aye, Klaus-Michael; Portyankina, Ganna; Pommerol, Antoine; Thomas, Nicolas

    We present results of these semi-automatically determined seasonal fan count evolutions for the Inca City, Ithaca and Manhattan ROIs, and compare these evolutionary patterns with each other and with surface reflectance evolutions from both HiRISE and CRISM for the same locations. References: Aye, K.-M. et al. (2010), LPSC 2010, 2707; Hansen, C. et al. (2010), Icarus, 205, Issue 1, p. 283-295; Kieffer, H.H. (2007), JGR 112; Portyankina, G. et al. (2010), Icarus, 205, Issue 1, p. 311-320; Thomas, N. et al. (2009), Vol. 4, EPSC2009-478.

  15. Systemization of Secure Computation

    DTIC Science & Technology

    2015-11-01

    previously solved problems in garbled circuits. If the generator does not use the correct values, then it reduces to the problem of creating an incorrect... ("database-as-a-service" application paradigm), but it also creates privacy risks. To mitigate these risks, database-management systems can use

  16. Computer memory management system

    DOEpatents

    Kirk, III, Whitson John

    2002-01-01

    A computer memory management system utilizing a memory structure system of "intelligent" pointers in which information related to the use status of the memory structure is designed into the pointer. Through this pointer system, the present invention provides essentially automatic memory management (often referred to as garbage collection) by allowing relationships between objects to have definite memory management behavior by use of a coding protocol which describes when relationships should be maintained and when the relationships should be broken. In one aspect, the present invention allows automatic breaking of strong links to facilitate object garbage collection, coupled with relationship adjectives which define deletion of associated objects. In another aspect, the present invention includes simple-to-use infinite undo/redo functionality in that it has the capability, through a simple function call, to undo all of the changes made to a data model since the previous 'valid state' was noted.

  17. Semi-automatic cone beam CT segmentation of in vivo pre-clinical subcutaneous tumours provides an efficient non-invasive alternative for tumour volume measurements.

    PubMed

    Brodin, N P; Tang, J; Skalina, K; Quinn, T J; Basu, I; Guha, C; Tomé, W A

    2015-06-01

    To evaluate the feasibility and accuracy of using cone beam CT (CBCT) scans obtained in radiation studies using the small-animal radiation research platform to perform semi-automatic tumour segmentation of pre-clinical tumour volumes. Volume measurements were evaluated for different anatomical tumour sites, the flank, thigh and dorsum of the hind foot, for a variety of tumour cell lines. The estimated tumour volumes from CBCT and manual calliper measurements using different volume equations were compared with the "gold standard", measured by weighing the tumours following euthanasia and tumour resection. The correlation between tumour volumes estimated with the different methods, compared with the gold standard, was estimated by the Spearman's rank correlation coefficient, root-mean-square deviation and the coefficient of determination. The semi-automatic CBCT volume segmentation performed favourably compared with manual calliper measures for flank tumours ≤2 cm(3) and thigh tumours ≤1 cm(3). For tumours >2 cm(3) or foot tumours, the CBCT method was not able to accurately segment the tumour volumes and manual calliper measures were superior. We demonstrated that tumour volumes of flank and thigh tumours, obtained as a part of radiation studies using image-guided small-animal irradiators, can be estimated more efficiently and accurately using semi-automatic segmentation from CBCT scans. This is the first study evaluating tumour volume assessment of pre-clinical subcutaneous tumours in different anatomical sites using on-board CBCT imaging. We also compared the accuracy of the CBCT method to manual calliper measures, using various volume calculation equations.
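
    The agreement statistics named above (Spearman's rank correlation, root-mean-square deviation and coefficient of determination) can be computed as in the sketch below. The tumour volumes are invented, and the coefficient of determination is taken here against the identity line, which may differ from the regression-based definition used in the study.

```python
import numpy as np
from scipy.stats import spearmanr

def agreement_metrics(estimated, gold_standard):
    """Spearman rho, RMSD and coefficient of determination of estimates
    against a gold standard (here: agreement with the identity line)."""
    est = np.asarray(estimated, dtype=float)
    ref = np.asarray(gold_standard, dtype=float)
    rho, _ = spearmanr(est, ref)
    rmsd = float(np.sqrt(np.mean((est - ref) ** 2)))
    r2 = 1.0 - np.sum((ref - est) ** 2) / np.sum((ref - ref.mean()) ** 2)
    return rho, rmsd, r2

# Illustrative tumour volumes (cm^3): CBCT estimates vs. resection gold standard
cbct_volumes = [0.42, 0.80, 1.10, 1.60, 0.55, 1.95]
resected     = [0.40, 0.85, 1.00, 1.70, 0.50, 2.05]
print(agreement_metrics(cbct_volumes, resected))
```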

  18. Semi-automatic characterization of fractured rock masses using 3D point clouds: discontinuity orientation, spacing and SMR geomechanical classification

    NASA Astrophysics Data System (ADS)

    Riquelme, Adrian; Tomas, Roberto; Abellan, Antonio; Cano, Miguel; Jaboyedoff, Michel

    2015-04-01

    Investigation of fractured rock masses for different geological applications (e.g. fractured reservoir exploitation, rock slope instability, rock engineering, etc.) requires a deep geometric understanding of the discontinuity sets affecting rock exposures. Recent advances in 3D data acquisition using photogrammetric and/or LiDAR techniques currently allow a quick and accurate characterization of rock mass discontinuities. This contribution presents a methodology for: (a) use of 3D point clouds for the identification and analysis of planar surfaces outcropping in a rocky slope; (b) calculation of the spacing between different discontinuity sets; (c) semi-automatic calculation of the parameters that play a capital role in the Slope Mass Rating geomechanical classification. As for part (a) (discontinuity orientation), our proposal identifies and defines the algebraic equations of the different discontinuity sets of the rock slope surface by applying an analysis based on a coplanarity test of neighbouring points. Additionally, the procedure finds principal orientations by Kernel Density Estimation and identifies clusters (Riquelme et al., 2014). As a result of this analysis, each point is classified with a discontinuity set and with an outcrop plane (cluster). Regarding part (b) (discontinuity spacing), our proposal utilises the previously classified point cloud to investigate how different outcropping planes are linked in space. Discontinuity spacing is calculated for each pair of linked clusters within the same discontinuity set, and then spacing values are analysed by calculating their statistics. Finally, as for part (c), the previous results are used to calculate parameters F1, F2 and F3 of the Slope Mass Rating geomechanical classification. This analysis is carried out for each discontinuity set using the respective orientations extracted in part (a). The open access tool SMRTool (Riquelme et al., 2014) is then used to calculate F1 to F3 correction
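
    The neighbourhood coplanarity test in part (a) can be illustrated with a principal component analysis of each point's local neighbourhood: if the smallest eigenvalue of the neighbourhood covariance is much smaller than the others, the neighbourhood is nearly planar and its normal is the corresponding eigenvector. The sketch below only illustrates that idea (brute-force neighbour search, arbitrary threshold); it is not the method of Riquelme et al. (2014).

```python
import numpy as np

def coplanarity_test(points, k=20, flatness_threshold=0.02):
    """Per-point unit normals and a flag marking points whose k nearest
    neighbours are nearly coplanar (smallest PCA eigenvalue small relative
    to the total). Brute-force neighbour search; illustrative only."""
    pts = np.asarray(points, dtype=float)
    normals = np.zeros_like(pts)
    planar = np.zeros(len(pts), dtype=bool)
    for i, p in enumerate(pts):
        dist = np.linalg.norm(pts - p, axis=1)
        nbrs = pts[np.argsort(dist)[1:k + 1]]
        centred = nbrs - nbrs.mean(axis=0)
        eigval, eigvec = np.linalg.eigh(np.cov(centred.T))
        normals[i] = eigvec[:, 0]            # eigenvector of smallest eigenvalue
        planar[i] = eigval[0] / eigval.sum() < flatness_threshold
    return normals, planar

# Synthetic check: noisy points sampled from a dipping plane are flagged planar
rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 1.0, size=(200, 2))
z = 0.3 * xy[:, 0] - 0.5 * xy[:, 1] + 0.002 * rng.standard_normal(200)
normals, planar = coplanarity_test(np.column_stack([xy, z]))
print(round(float(planar.mean()), 2))
```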

  19. Semi-automatic mapping of fault rocks on a Digital Outcrop Model, Gole Larghe Fault Zone (Southern Alps, Italy)

    NASA Astrophysics Data System (ADS)

    Mittempergher, Silvia; Vho, Alice; Bistacchi, Andrea

    2016-04-01

    A quantitative analysis of fault-rock distribution in outcrops of exhumed fault zones is of fundamental importance for studies of fault zone architecture, fault and earthquake mechanics, and fluid circulation. We present a semi-automatic workflow for fault-rock mapping on a Digital Outcrop Model (DOM), developed on the Gole Larghe Fault Zone (GLFZ), a well exposed strike-slip fault in the Adamello batholith (Italian Southern Alps). The GLFZ has been exhumed from ca. 8-10 km depth, and consists of hundreds of individual seismogenic slip surfaces lined by green cataclasites (crushed wall rocks cemented by hydrothermal epidote and K-feldspar) and black pseudotachylytes (solidified frictional melts, considered a marker for seismic slip). A digital model of selected outcrop exposures was reconstructed with photogrammetric techniques, using a large number of high resolution digital photographs processed with VisualSFM software. The resulting DOM has a resolution up to 0.2 mm/pixel. Most of the outcrop was imaged using images each covering a 1 × 1 m² area, while selected structural features, such as sidewall ripouts or stepovers, were covered with higher-resolution images covering 30 × 40 cm² areas. Image processing algorithms were preliminarily tested using the ImageJ-Fiji package, then a workflow in Matlab was developed to process a large collection of images sequentially. Particularly in the detailed 30 × 40 cm images, cataclasites and hydrothermal veins were successfully identified using spectral analysis in RGB and HSV color spaces. This allows mapping the network of cataclasites and veins, which provided the pathway for hydrothermal fluid circulation, and also the volume of mineralization, since we are able to measure the thickness of cataclasites and veins on the outcrop surface. The spectral signature of pseudotachylyte veins is indistinguishable from that of biotite grains in the wall rock (tonalite), so we tested morphological analysis tools to discriminate
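
    A very reduced illustration of the RGB/HSV spectral classification step is given below: greenish (epidote-rich) pixels are flagged with a hue/saturation/value window. The thresholds and the synthetic image are invented for the example and would need tuning on real DOM imagery; the Matlab workflow described above is not reproduced here.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv

def map_green_pixels(rgb_image, hue_range=(0.20, 0.45), min_sat=0.25, min_val=0.15):
    """Binary map of green-looking pixels in an RGB image (floats in [0, 1]),
    obtained by thresholding in HSV space; thresholds are illustrative."""
    hsv = rgb_to_hsv(np.clip(rgb_image, 0.0, 1.0))
    hue, sat, val = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    return (hue >= hue_range[0]) & (hue <= hue_range[1]) & (sat >= min_sat) & (val >= min_val)

# Tiny synthetic test: one green patch on a grey background
img = np.full((8, 8, 3), 0.5)
img[2:5, 2:5] = [0.2, 0.6, 0.25]        # green-ish "cataclasite" pixels
print(int(map_green_pixels(img).sum()), "pixels flagged")
```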

  20. Comparison of a semi-automatic annotation tool and a natural language processing application for the generation of clinical statement entries

    PubMed Central

    Lin, Ching-Heng; Wu, Nai-Yuan; Lai, Wei-Shao; Liou, Der-Ming

    2015-01-01

    Background and objective: Electronic medical records with encoded entries should enhance the semantic interoperability of document exchange. However, it remains a challenge to encode the narrative concept and to transform the coded concepts into a standard entry-level document. This study aimed to use a novel approach for the generation of entry-level interoperable clinical documents. Methods: Using HL7 clinical document architecture (CDA) as the example, we developed three pipelines to generate entry-level CDA documents. The first approach was a semi-automatic annotation pipeline (SAAP), the second was a natural language processing (NLP) pipeline, and the third merged the above two pipelines. We randomly selected 50 test documents from the i2b2 corpora to evaluate the performance of the three pipelines. Results: The 50 randomly selected test documents contained 9365 words, including 588 Observation terms and 123 Procedure terms. For the Observation terms, the merged pipeline had a significantly higher F-measure than the NLP pipeline (0.89 vs 0.80, p<0.0001), but a similar F-measure to that of the SAAP (0.89 vs 0.87). For the Procedure terms, the F-measure was not significantly different among the three pipelines. Conclusions: The combination of a semi-automatic annotation approach and the NLP application seems to be a solution for generating entry-level interoperable clinical documents. PMID:25332357
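
    The F-measure used to compare the pipelines is the usual harmonic mean of precision and recall over extracted terms. The counts in the sketch below are hypothetical and merely chosen to land near the reported 0.89.

```python
def precision_recall_f1(true_positives, false_positives, false_negatives):
    """Precision, recall and F1-score from term-level extraction counts."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    f1 = 2.0 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts for Observation terms extracted by one pipeline
print(precision_recall_f1(true_positives=520, false_positives=60, false_negatives=68))
```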

  1. An integrated strategy for the rapid extraction and screening of phosphatidylcholines and lysophosphatidylcholines using semi-automatic solid phase extraction and data processing technology.

    PubMed

    Zhang, Zhenzhu; Zhang, Yani; Yin, Jia; Li, Yubo

    2016-08-26

    This study attempts to establish a comprehensive strategy for the rapid extraction and screening of phosphatidylcholines (PCs) and lysophosphatidylcholines (LysoPCs) in biological samples using semi-automatic solid phase extraction (SPE) and data processing technology based on ultra-performance liquid chromatography-quadrupole-time of flight-mass spectrometry (UPLC-Q-TOF-MS). First, the Ostro sample preparation method (i.e., semi-automatic SPE) was compared with the Bligh-Dyer method in terms of substance coverage, reproducibility and sample preparation time. Meanwhile, the screening method for PCs and LysoPCs was built through mass range screening, mass defect filtering and diagnostic fragments filtering. Then, the Ostro sample preparation method and the aforementioned screening method were combined under optimal conditions to establish a rapid extraction and screening platform. Finally, this developed method was validated and applied to the preparation and data analysis of tissue samples. Through a systematic evaluation, this developed method was shown to provide reliable and high-throughput experimental results and was suitable for the preparation and analysis of tissue samples. Our method provides a novel strategy for the rapid extraction and analysis of functional phospholipids. In addition, this study will promote further study of phospholipids in disease research.

  2. Multi-parametric (ADC/PWI/T2-w) image fusion approach for accurate semi-automatic segmentation of tumorous regions in glioblastoma multiforme.

    PubMed

    Fathi Kazerooni, Anahita; Mohseni, Meysam; Rezaei, Sahar; Bakhshandehpour, Gholamreza; Saligheh Rad, Hamidreza

    2015-02-01

    Glioblastoma multiforme (GBM) brain tumor is heterogeneous in nature, so its quantification depends on how to accurately segment different parts of the tumor, i.e. viable tumor, edema and necrosis. This procedure becomes more effective when metabolic and functional information, provided by physiological magnetic resonance (MR) imaging modalities, like diffusion-weighted-imaging (DWI) and perfusion-weighted-imaging (PWI), is incorporated with the anatomical magnetic resonance imaging (MRI). In this preliminary tumor quantification work, the idea is to characterize different regions of GBM tumors in an MRI-based semi-automatic multi-parametric approach to achieve more accurate characterization of pathogenic regions. For this purpose, three MR sequences, namely T2-weighted imaging (anatomical MR imaging), PWI and DWI of thirteen GBM patients, were acquired. To enhance the delineation of the boundaries of each pathogenic region (peri-tumoral edema, viable tumor and necrosis), the spatial fuzzy C-means algorithm is combined with the region growing method. The results show that exploiting the multi-parametric approach along with the proposed semi-automatic segmentation method can differentiate various tumorous regions with over 80 % sensitivity, specificity and dice score. The proposed MRI-based multi-parametric segmentation approach has the potential to accurately segment tumorous regions, leading to an efficient design of the pre-surgical treatment planning.

  3. A semi-automatic microextraction in packed sorbent, using a digitally controlled syringe, combined with ultra-high pressure liquid chromatography as a new and ultra-fast approach for the determination of prenylflavonoids in beers.

    PubMed

    Gonçalves, João L; Alves, Vera L; Rodrigues, Fátima P; Figueira, José A; Câmara, José S

    2013-08-23

    In this work, a highly selective and sensitive analytical procedure based on the semi-automatic microextraction by packed sorbents (MEPS) technique, using a new digitally controlled syringe (eVol®) combined with ultra-high pressure liquid chromatography (UHPLC), is proposed to determine the prenylated chalcone derived from the hop (Humulus lupulus L.), xanthohumol (XN), and its isomeric flavonone isoxanthohumol (IXN) in beers. Extraction and UHPLC parameters were accurately optimized to achieve the highest recoveries and to enhance the analytical characteristics of the method. Important parameters affecting MEPS performance, namely the type of sorbent material (C2, C8, C18, SIL, and M1), elution solvent system, number of extraction cycles (extract-discard), sample volume, elution volume, and sample pH, were evaluated. The optimal experimental conditions involve loading 500 μL of sample through a C18 sorbent in a MEPS syringe placed in the semi-automatic eVol® syringe, followed by elution using 250 μL of acetonitrile (ACN) over 10 extraction cycles (about 5 min for the entire sample preparation step). The obtained extract is directly analyzed in the UHPLC system using a binary mobile phase composed of aqueous 0.1% formic acid (eluent A) and ACN (eluent B) in the gradient elution mode (10 min total analysis). Under optimized conditions, good results were obtained in terms of linearity within the established concentration range, with correlation coefficient (R) values higher than 0.986 and a residual deviation for each calibration point below 12%. The limit of detection (LOD) and limit of quantification (LOQ) obtained were 0.4 ng mL(-1) and 1.0 ng mL(-1) for IXN, and 0.9 ng mL(-1) and 3.0 ng mL(-1) for XN, respectively. Precision was lower than 4.6% for IXN and 8.4% for XN. Typical recoveries ranged between 67.1% and 99.3% for IXN and between 74.2% and 99.9% for XN, with relative standard deviations (%RSD) no larger than 8%. The applicability of the proposed analytical
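
    LOD and LOQ figures like those above are typically derived from a linear calibration curve. The sketch below uses the common 3.3·σ/slope and 10·σ/slope estimates; whether the authors used exactly this rule is not stated in the record, and the calibration points are invented.

```python
import numpy as np

def lod_loq_from_calibration(concentrations, responses):
    """LOD and LOQ estimated from a linear calibration curve using the
    common 3.3*sigma/slope and 10*sigma/slope rules."""
    x = np.asarray(concentrations, dtype=float)
    y = np.asarray(responses, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    sigma = (y - (slope * x + intercept)).std(ddof=2)   # residual std deviation
    return 3.3 * sigma / slope, 10.0 * sigma / slope

# Illustrative calibration points (concentration in ng/mL vs. peak area)
conc = [1, 5, 10, 25, 50, 100]
area = [110, 530, 1050, 2600, 5150, 10200]
lod, loq = lod_loq_from_calibration(conc, area)
print(f"LOD ~ {lod:.2f} ng/mL, LOQ ~ {loq:.2f} ng/mL")
```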

  4. System monitors discrete computer inputs

    NASA Technical Reports Server (NTRS)

    Burns, J. J.

    1966-01-01

    Computer system monitors inputs from checkout devices. The comparing, addressing, and controlling functions are performed in the I/O unit. This leaves the computer main frame free to handle memory, access priority, and interrupt instructions.

  5. Fusion of dynamic contrast-enhanced magnetic resonance mammography at 3.0T with X-ray mammograms: pilot study evaluation using dedicated semi-automatic registration software.

    PubMed

    Dietzel, Matthias; Hopp, Torsten; Ruiter, Nicole; Zoubi, Ramy; Runnebaum, Ingo B; Kaiser, Werner A; Baltzer, Pascal A T

    2011-08-01

    To evaluate the semi-automatic image registration accuracy of X-ray-mammography (XR-M) with high-resolution high-field (3.0T) MR-mammography (MR-M) in an initial pilot study. MR-M was acquired on a high-field clinical scanner at 3.0T (T1-weighted 3D VIBE ± Gd). XR-M was obtained with state-of-the-art full-field digital systems. Seven patients with clearly delineable mass lesions >10 mm both in XR-M and MR-M were enrolled (exclusion criteria: previous breast surgery; surgical intervention between XR-M and MR-M). XR-M and MR-M were matched using a dedicated image-registration algorithm allowing semi-automatic non-linear deformation of MR-M based on finite-element modeling. To identify registration errors (RE), a virtual craniocaudal 2D mammogram was calculated by the software from MR-M (with and w/o Gadodiamide/Gd) and matched with the corresponding XR-M. To quantify REs, the geometric centers of the lesions in the virtual vs. the conventional mammogram were subtracted. The robustness of registration was quantified by registration of XR-Ms to both MR-Ms with and without Gadodiamide. Image registration was performed successfully for all patients. Overall RE was 8.2 mm (1 min after Gd; confidence interval/CI: 2.0-14.4 mm, standard deviation/SD: 6.7 mm) vs. 8.9 mm (no Gd; CI: 4.0-13.9 mm, SD: 5.4 mm). The mean difference between pre- vs. post-contrast was 0.7 mm (SD: 1.9 mm). Image registration of high-field 3.0T MR-mammography with X-ray-mammography is feasible. For this study, applying a high-resolution protocol at 3.0T, the registration was robust and the overall registration error was sufficient for clinical application. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
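
    The registration error defined above (difference of lesion centres between the virtual and the conventional mammogram) reduces to a Euclidean distance per lesion, summarized by its mean and spread. The coordinates below are invented; the finite-element registration itself is of course not reproduced.

```python
import numpy as np

def registration_errors_mm(centers_virtual, centers_xray):
    """Per-lesion registration error in mm (Euclidean distance between the
    lesion centre in the virtual mammogram and in the X-ray mammogram),
    plus the mean and sample standard deviation over lesions."""
    a = np.asarray(centers_virtual, dtype=float)
    b = np.asarray(centers_xray, dtype=float)
    errors = np.linalg.norm(a - b, axis=1)
    return errors, float(errors.mean()), float(errors.std(ddof=1))

# Illustrative 2D lesion centres in mm (three patients)
virtual_centres = [(42.0, 61.5), (55.2, 30.1), (70.4, 48.8)]
xray_centres    = [(47.5, 58.0), (60.0, 36.5), (65.1, 44.0)]
print(registration_errors_mm(virtual_centres, xray_centres))
```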

  6. Threats to Computer Systems

    DTIC Science & Technology

    1973-03-01

    subjects and objects of attacks contribute to the uniqueness of computer-related crime. For example, as the cashless, checkless society approaches... advancing computer technology and security methods, and proliferation of computers in bringing about the paperless society. The universal use of... organizations do to society. Jerry Schneider, one of the known perpetrators, said that he was motivated to perform his acts to make money, for the

  7. Central nervous system and computation.

    PubMed

    Guidolin, Diego; Albertin, Giovanna; Guescini, Michele; Fuxe, Kjell; Agnati, Luigi F

    2011-12-01

    Computational systems are useful in neuroscience in many ways. For instance, they may be used to construct maps of brain structure and activation, or to describe brain processes mathematically. Furthermore, they inspired a powerful theory of brain function, in which the brain is viewed as a system characterized by intrinsic computational activities or as a "computational information processor." Although many neuroscientists believe that neural systems really perform computations, some are more cautious about computationalism or reject it. Thus, does the brain really compute? Answering this question requires getting clear on a definition of computation that is able to draw a line between physical systems that compute and systems that do not, so that we can discern on which side of the line the brain (or parts of it) could fall. In order to shed some light on the role of computational processes in brain function, available neurobiological data will be summarized from the standpoint of a recently proposed taxonomy of notions of computation, with the aim of identifying which brain processes can be considered computational. The emerging picture shows the brain as a very peculiar system, in which genuine computational features act in concert with noncomputational dynamical processes, leading to continuous self-organization and remodeling under the action of external stimuli from the environment and from the rest of the organism.

  8. GPU computing for systems biology.

    PubMed

    Dematté, Lorenzo; Prandi, Davide

    2010-05-01

    The development of detailed, coherent, models of complex biological systems is recognized as a key requirement for integrating the increasing amount of experimental data. In addition, in-silico simulation of bio-chemical models provides an easy way to test different experimental conditions, helping in the discovery of the dynamics that regulate biological systems. However, the computational power required by these simulations often exceeds that available on common desktop computers and thus expensive high performance computing solutions are required. An emerging alternative is represented by general-purpose scientific computing on graphics processing units (GPGPU), which offers the power of a small computer cluster at a cost of approximately $400. Computing with a GPU requires the development of specific algorithms, since the programming paradigm substantially differs from traditional CPU-based computing. In this paper, we review some recent efforts in exploiting the processing power of GPUs for the simulation of biological systems.

  9. Semi-automatic determination of the Azores Current axis using satellite altimetry: Application to the study of the current variability during 1995-2006

    NASA Astrophysics Data System (ADS)

    Lázaro, C.; Juliano, M. F.; Fernandes, M. J.

    2013-06-01

    Satellite altimetry has been widely used to study the variability of the ocean currents such as the Azores Current (AzC) in the North Atlantic. Most analyses are performed over the region that encloses the current, thus being somehow affected by other oceanographic signals, e.g., eddies. In this study, a new approach for extracting the axis of a zonal current solely based on satellite altimetry is presented. This is a semi-automatic procedure that searches for the maximum values of the gradient of absolute dynamic topography (ADT), using the geostrophic velocity as auxiliary information. The advantage of this approach is to allow the analyses to be performed over a buffer centered on the current axis instead of using a wider region. It is here applied to the AzC for the period June 1995-October 2006.
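
    The core of the axis-extraction step described above, finding where the meridional gradient of absolute dynamic topography is strongest at each longitude, can be sketched as below. The synthetic ADT field is invented, and the auxiliary geostrophic-velocity check used by the authors is omitted.

```python
import numpy as np

def current_axis_latitudes(adt, lats):
    """Latitude of the strongest meridional ADT gradient in each longitude
    column; adt has shape (n_lat, n_lon), lats has shape (n_lat,)."""
    d_adt = np.gradient(adt, lats, axis=0)           # meridional gradient of ADT
    strongest = np.nanargmax(np.abs(d_adt), axis=0)  # per-column maximum
    return lats[strongest]

# Synthetic example: a zonal jet (ADT step) centred near 34 N
lats = np.linspace(30.0, 40.0, 101)
lons = np.linspace(-35.0, -20.0, 61)
adt = -0.3 * np.tanh((lats[:, None] - 34.0) / 0.8) + 0.0 * lons[None, :]
print(current_axis_latitudes(adt, lats)[:5])
```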

  10. Computer Security Systems Enable Access.

    ERIC Educational Resources Information Center

    Riggen, Gary

    1989-01-01

    A good security system enables access and protects information from damage or tampering, but the most important aspects of a security system aren't technical. A security procedures manual addresses the human element of computer security. (MLW)

  11. Robot, computer problem solving system

    NASA Technical Reports Server (NTRS)

    Becker, J. D.; Merriam, E. W.

    1973-01-01

    The TENEX computer system, the ARPA network, and computer language design technology were applied to support the complex system programs. By combining the pragmatic and theoretical aspects of robot development, an approach is created which is grounded in realism, but which also has at its disposal the power that comes from looking at complex problems from an abstract analytical point of view.

  12. Computer System Maintenance and Enhancements

    DTIC Science & Technology

    1989-02-23

    Modular Computer Systems; Monitor - Monitor Computer; MVS - IBM's Multiple Virtual Operating System; PCAL - Pressure CALibration; PLC - Programmable Logic Controller; PLC1 - Programmable Logic Controller #1; PLC2 - Programmable Logic Controller #2; POTX - Propulsion Technology; Preston - Analog to digital signal converter

  13. AutoTag and AutoSnap: Standardized, semi-automatic capture of regions of interest from whole slide images

    PubMed Central

    Marien, Koen M.; Andries, Luc; De Schepper, Stefanie; Kockx, Mark M.; De Meyer, Guido R.Y.

    2015-01-01

    Tumor angiogenesis is measured by counting microvessels in tissue sections at high power magnification as a potential prognostic or predictive biomarker. Until now, regions of interest (ROIs) were selected by manual operations within a tumor by using a systematic uniform random sampling (SURS) approach. Although SURS is the most reliable sampling method, it implies a high workload. However, SURS can be semi-automated and in this way contribute to the development of a validated quantification method for microvessel counting in the clinical setting. Here, we report a method to use semi-automated SURS for microvessel counting: • Whole slide imaging with Pannoramic SCAN (3DHISTECH) • Computer-assisted sampling in Pannoramic Viewer (3DHISTECH) extended by two self-written AutoHotkey applications (AutoTag and AutoSnap) • The use of digital grids in Photoshop® and Bridge® (Adobe Systems). This rapid procedure allows traceability essential for high throughput protein analysis of immunohistochemically stained tissue. PMID:26150998
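
    Systematic uniform random sampling, as used above, places sampling fields on a regular grid whose origin receives a single random offset, so every part of the tumour has the same probability of being sampled. The sketch below only illustrates that grid logic; the field size, region size and units are arbitrary and unrelated to the slide scanner workflow.

```python
import random

def surs_positions(region_w, region_h, field_w, field_h, seed=None):
    """Top-left positions of sampling fields for systematic uniform random
    sampling: a regular grid shifted by one random offset per region."""
    rng = random.Random(seed)
    x0 = rng.uniform(0.0, field_w)
    y0 = rng.uniform(0.0, field_h)
    positions = []
    y = y0
    while y + field_h <= region_h:
        x = x0
        while x + field_w <= region_w:
            positions.append((round(x, 3), round(y, 3)))
            x += field_w
        y += field_h
    return positions

# Illustrative 10 x 8 (arbitrary units) region sampled with 2 x 2 fields
fields = surs_positions(10.0, 8.0, 2.0, 2.0, seed=1)
print(len(fields), fields[:3])
```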

  14. The UCLA MEDLARS computer system.

    PubMed

    Garvis, F J

    1966-01-01

    Under a subcontract with UCLA the Planning Research Corporation has changed the MEDLARS system to make it possible to use the IBM 7094/7040 direct-couple computer instead of the Honeywell 800 for demand searches. The major tasks were the rewriting of the programs in COBOL and copying of the stored information on the narrower tapes that IBM computers require. (In the future NLM will copy the tapes for IBM computer users.) The differences in the software required by the two computers are noted. Major and costly revisions would be needed to adapt the large MEDLARS system to the smaller IBM 1401 and 1410 computers. In general, MEDLARS is transferrable to other computers of the IBM 7000 class, the new IBM 360, and those of like size, such as the CDC 1604 or UNIVAC 1108, although additional changes are necessary. Potential future improvements are suggested.

  15. NASA's computed tomography system

    NASA Astrophysics Data System (ADS)

    Engel, H. Peter

    1989-03-01

    The computerized industrial tomographic analyzer (CITA) is designed to examine the internal structure and material integrity of a wide variety of aerospace-related objects, particularly in the NASA space program. The nondestructive examination is performed by producing a two-dimensional picture of a selected slice through an object. The penetrating sources that yield data for reconstructing the slice picture are radioactive cobalt or a high-power X-ray tube. A series of pictures and computed tomograms are presented which illustrate a few of the applications the CITA has been used for since its August 1986 initial service at the Kennedy Space Center.

  16. Computer Automated Ultrasonic Inspection System

    DTIC Science & Technology

    1985-02-06

    Microcomputer; CRT - Cathode Ray Tube; SBC - Single Board Computer ... Standard ultrasonic inspection techniques used in industry ... The heart of the bridge control microcomputer is an Intel single board computer using a high-speed 8085 HA-2 microprocessor chip ... subsystems (bridge, bridge drive electronics, bridge control microcomputer, ultrasonic unit, and master computer system), development of bridge control and

  17. A web-based computer aided system for liver surgery planning: initial implementation on RayPlus

    NASA Astrophysics Data System (ADS)

    Luo, Ming; Yuan, Rong; Sun, Zhi; Li, Tianhong; Xie, Qingguo

    2016-03-01

    At present, computer aided systems for liver surgery design and risk evaluation are widely used in clinical practice all over the world. However, most systems are local applications that run on high-performance workstations, and the images have to be processed offline. Compared with local applications, a web-based system is accessible anywhere, regardless of relative processing power or operating system. RayPlus (http://rayplus.life.hust.edu.cn), a B/S platform for medical image processing, was developed to give a jump start on web-based medical image processing. In this paper, we implement a computer aided system for liver surgery planning on the architecture of RayPlus. The system consists of a series of processing steps applied to CT images, including filtering, segmentation, visualization and analysis. Each processing step is packaged into an executable program and runs on the server side. CT images in DICOM format are processed step by step, leading to interactive modeling in the browser with zero installation and server-side computing. The system allows users to semi-automatically segment the liver, intrahepatic vessels and tumor from the pre-processed images. Then, surface and volume models are built to analyze the vessel structure and the relative position between adjacent organs. The results show that the initial implementation satisfactorily meets its first-order objectives and provides an accurate 3D delineation of the liver anatomy. Vessel labeling and resection simulation are planned to be added in the future. The system is available on the Internet at the link mentioned above, and an open username for testing is offered.

  18. Students "Hacking" School Computer Systems

    ERIC Educational Resources Information Center

    Stover, Del

    2005-01-01

    This article deals with students hacking school computer systems. School districts are getting tough with students "hacking" into school computers to change grades, poke through files, or just pit their high-tech skills against district security. Dozens of students have been prosecuted recently under state laws on identity theft and unauthorized…

  19. Students "Hacking" School Computer Systems

    ERIC Educational Resources Information Center

    Stover, Del

    2005-01-01

    This article deals with students hacking school computer systems. School districts are getting tough with students "hacking" into school computers to change grades, poke through files, or just pit their high-tech skills against district security. Dozens of students have been prosecuted recently under state laws on identity theft and unauthorized…

  20. Computer controlled antenna system

    NASA Technical Reports Server (NTRS)

    Raumann, N. A.

    1972-01-01

    Digital techniques are discussed for application to the servo and control systems of large antennas. The tracking loop for an antenna at a STADAN tracking site is illustrated. The augmentation mode is also considered.

  1. User computer system pilot project

    SciTech Connect

    Eimutis, E.C.

    1989-09-06

    The User Computer System (UCS) is a general purpose unclassified, nonproduction system for Mound users. The UCS pilot project was successfully completed, and the system currently has more than 250 users. Over 100 tables were installed on the UCS for use by subscribers, including tables containing data on employees, budgets, and purchasing. In addition, a UCS training course was developed and implemented.

  2. Robot computer problem solving system

    NASA Technical Reports Server (NTRS)

    Becker, J. D.; Merriam, E. W.

    1974-01-01

    The conceptual, experimental, and practical aspects of the development of a robot computer problem solving system were investigated. The distinctive characteristics were formulated of the approach taken in relation to various studies of cognition and robotics. Vehicle and eye control systems were structured, and the information to be generated by the visual system is defined.

  3. Operating systems. [of computers

    NASA Technical Reports Server (NTRS)

    Denning, P. J.; Brown, R. L.

    1984-01-01

    A computer operating system creates a hierarchy of levels of abstraction, so that at a given level all details concerning lower levels can be ignored. This hierarchical structure separates functions according to their complexity, characteristic time scale, and level of abstraction. The lowest levels include the system's hardware; concepts associated explicitly with the coordination of multiple tasks appear at intermediate levels, which conduct 'primitive processes'. Software semaphore is the mechanism controlling primitive processes that must be synchronized. At higher levels lie, in rising order, the access to the secondary storage devices of a particular machine, a 'virtual memory' scheme for managing the main and secondary memories, communication between processes by way of a mechanism called a 'pipe', access to external input and output devices, and a hierarchy of directories cataloguing the hardware and software objects to which access must be controlled.

  4. Operating systems. [of computers

    NASA Technical Reports Server (NTRS)

    Denning, P. J.; Brown, R. L.

    1984-01-01

    A computer operating system creates a hierarchy of levels of abstraction, so that at a given level all details concerning lower levels can be ignored. This hierarchical structure separates functions according to their complexity, characteristic time scale, and level of abstraction. The lowest levels include the system's hardware; concepts associated explicitly with the coordination of multiple tasks appear at intermediate levels, which conduct 'primitive processes'. Software semaphore is the mechanism controlling primitive processes that must be synchronized. At higher levels lie, in rising order, the access to the secondary storage devices of a particular machine, a 'virtual memory' scheme for managing the main and secondary memories, communication between processes by way of a mechanism called a 'pipe', access to external input and output devices, and a hierarchy of directories cataloguing the hardware and software objects to which access must be controlled.

  5. In vivo analysis of hippocampal subfield atrophy in mild cognitive impairment via semi-automatic segmentation of T2-weighted MRI

    PubMed Central

    Pluta, John; Yushkevich, Paul; Das, Sandhitsu; Wolk, David

    2012-01-01

    The measurement of hippocampal volumes using MRI is a useful in-vivo biomarker for detection and monitoring of early Alzheimer’s Disease (AD), including during the amnestic Mild Cognitive Impairment (a-MCI) stage. The pathology underlying AD has regionally selective effects within the hippocampus. As such, we predict that hippocampal subfields are more sensitive in discriminating prodromal AD (i.e., a-MCI) from cognitively normal controls than whole hippocampal volumes, and attempt to demonstrate this using a semi-automatic method that can accurately segment hippocampal subfields. High-resolution coronal-oblique T2-weighted images of the hippocampal formation were acquired in 45 subjects (28 controls and 17 a-MCI (mean age: 69.5 ± 9.2; 70.2 ± 7.6)). CA1, CA2, CA3, and CA4/DG subfields, along with head and tail regions, were segmented using an automatic algorithm. CA1 and CA4/DG segmentations were manually edited. Whole hippocampal volumes were obtained from the subjects’ T1-weighted anatomical images. Automatic segmentation produced significant group differences in the following subfields: CA1 (left: p=0.001, right: p=0.038), CA4/DG (left: p=0.002, right: p=0.043), head (left: p=0.018, right: p=0.002), and tail (left: p=0.019). After manual correction, differences were increased in CA1 (left: p<0.001, right: p=0.002), and reduced in CA4/DG (left: p=0.029, right: p=0.221). Whole hippocampal volumes significantly differed bilaterally (left: p=0.028, right: p=0.009). This pattern of atrophy in a-MCI is consistent with the topography of AD pathology observed in postmortem studies, and corrected left CA1 provided stronger discrimination than whole hippocampal volume (p=0.03). These results suggest that semi-automatic segmentation of hippocampal subfields is efficient and may provide additional sensitivity beyond whole hippocampal volumes. PMID:22504319

  6. Revised adage graphics computer system

    NASA Technical Reports Server (NTRS)

    Tulppo, J. S.

    1980-01-01

    Bootstrap loader and mode-control options for Adage Graphics Computer System Significantly simplify operations procedures. Normal load and control functions are performed quickly and easily from control console. Operating characteristics of revised system include greatly increased speed, convenience, and reliability.

  7. Robot computer problem solving system

    NASA Technical Reports Server (NTRS)

    Becker, J. D.

    1972-01-01

    Continuing research is reported in a program aimed at the development of a robot computer problem solving system. The motivation and results are described of a theoretical investigation concerning the general properties of behavioral systems. Some of the important issues which a general theory of behavioral organization should encompass are outlined and discussed.

  8. Mission operations computing systems evolution

    NASA Technical Reports Server (NTRS)

    Kurzhals, P. R.

    1981-01-01

    As part of its preparation for the operational Shuttle era, the Goddard Space Flight Center (GSFC) is currently replacing most of the mission operations computing complexes that have supported near-earth space missions since the late 1960's. Major associated systems include the Metric Data Facility (MDF) which preprocesses, stores, and forwards all near-earth satellite tracking data; the Orbit Computation System (OCS) which determines related production orbit and attitude information; the Flight Dynamics System (FDS) which formulates spacecraft attitude and orbit maneuvers; and the Command Management System (CMS) which handles mission planning, scheduling, and command generation and integration. Management issues and experiences for the resultant replacement process are driven by a wide range of possible future mission requirements, flight-critical system aspects, complex internal system interfaces, extensive existing applications software, and phasing to optimize systems evolution.

  9. Mission operations computing systems evolution

    NASA Technical Reports Server (NTRS)

    Kurzhals, P. R.

    1981-01-01

    As part of its preparation for the operational Shuttle era, the Goddard Space Flight Center (GSFC) is currently replacing most of the mission operations computing complexes that have supported near-earth space missions since the late 1960's. Major associated systems include the Metric Data Facility (MDF) which preprocesses, stores, and forwards all near-earth satellite tracking data; the Orbit Computation System (OCS) which determines related production orbit and attitude information; the Flight Dynamics System (FDS) which formulates spacecraft attitude and orbit maneuvers; and the Command Management System (CMS) which handles mission planning, scheduling, and command generation and integration. Management issues and experiences for the resultant replacement process are driven by a wide range of possible future mission requirements, flight-critical system aspects, complex internal system interfaces, extensive existing applications software, and phasing to optimize systems evolution.

  10. Computational capabilities of physical systems.

    PubMed

    Wolpert, David H

    2002-01-01

    In this paper strong limits on the accuracy of real-world physical computation are established. To derive these results a non-Turing machine formulation of physical computation is used. First it is proven that there cannot be a physical computer C to which one can pose any and all computational tasks concerning the physical universe. Next it is proven that no physical computer C can correctly carry out every computational task in the subset of such tasks that could potentially be posed to C. This means in particular that there cannot be a physical computer that can be assured of correctly "processing information faster than the universe does." Because this result holds independent of how or if the computer is physically coupled to the rest of the universe, it also means that there cannot exist an infallible, general-purpose observation apparatus, nor an infallible, general-purpose control apparatus. These results do not rely on systems that are infinite, and/or nonclassical, and/or obey chaotic dynamics. They also hold even if one could use an infinitely fast, infinitely dense computer, with computational powers greater than that of a Turing machine (TM). After deriving these results analogs of the TM Halting theorem are derived for the novel kind of computer considered in this paper, as are results concerning the (im)possibility of certain kinds of error-correcting codes. In addition, an analog of algorithmic information complexity, "prediction complexity," is elaborated. A task-independent bound is derived on how much the prediction complexity of a computational task can differ for two different reference universal physical computers used to solve that task. This is analogous to the "encoding" bound governing how much the algorithm information complexity of a TM calculation can differ for two reference universal TMs. It is proven that either the Hamiltonian of our universe proscribes a certain type of computation, or prediction complexity is unique (unlike

  11. Semi-automatic segmentation and modeling of the cervical spinal cord for volume quantification in multiple sclerosis patients from magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Sonkova, Pavlina; Evangelou, Iordanis E.; Gallo, Antonio; Cantor, Fredric K.; Ohayon, Joan; McFarland, Henry F.; Bagnato, Francesca

    2008-03-01

    Spinal cord (SC) tissue loss is known to occur in some patients with multiple sclerosis (MS), resulting in SC atrophy. Currently, no measurement tools exist to determine the magnitude of SC atrophy from Magnetic Resonance Images (MRI). We have developed and implemented a novel semi-automatic method for quantifying the cervical SC volume (CSCV) from MRI based on level sets. The image dataset consisted of SC MRI exams obtained at 1.5 Tesla from 12 MS patients (10 relapsing-remitting and 2 secondary progressive) and 12 age- and gender-matched healthy volunteers (HVs). 3D high resolution image data were acquired using an IR-FSPGR sequence in the sagittal plane. The mid-sagittal slice (MSS) was automatically located based on the entropy calculation for each of the consecutive sagittal slices. The image data were then pre-processed by 3D anisotropic diffusion filtering for noise reduction and edge enhancement before segmentation with a level set formulation which did not require re-initialization. The developed method was tested against manual segmentation (considered ground truth) and intra-observer and inter-observer variability were evaluated.
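
    The slice-wise entropy criterion used to locate the mid-sagittal slice can be sketched as below. The record does not state exactly how the entropy values are turned into a slice choice, so picking the maximum-entropy slice here is just one plausible assumption, and the histogram bin count is arbitrary.

```python
import numpy as np

def slice_entropies(volume, n_bins=64):
    """Shannon entropy (bits) of the grey-level histogram of each sagittal
    slice; `volume` is a 3-D array ordered (sagittal, rows, cols)."""
    entropies = []
    for sl in volume:
        hist, _ = np.histogram(sl, bins=n_bins)
        p = hist / hist.sum()
        p = p[p > 0]
        entropies.append(float(-np.sum(p * np.log2(p))))
    return np.array(entropies)

# Tiny synthetic volume: the middle slice has the richest intensity content,
# so the maximum-entropy rule picks index 2.
volume = np.zeros((5, 32, 32))
volume[2] = np.random.default_rng(0).normal(size=(32, 32))
print(int(np.argmax(slice_entropies(volume))))
```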

  12. A new generic method for the semi-automatic extraction of river and road networks in low and mid-resolution satellite images

    SciTech Connect

    Grazzini, Jacopo; Dillard, Scott; Soille, Pierre

    2010-10-21

    This paper addresses the problem of semi-automatic extraction of road or hydrographic networks in satellite images. For that purpose, we propose an approach combining concepts arising from mathematical morphology and hydrology. The method exploits both geometrical and topological characteristics of rivers/roads and their tributaries in order to reconstruct the complete networks. It assumes that the images satisfy the following two general assumptions, which are the minimum conditions for a road/river network to be identifiable and are usually verified in low- to mid-resolution satellite images: (i) visual constraint: most pixels composing the network have a similar spectral signature that is distinguishable from most of the surrounding areas; (ii) geometric constraint: a line is a region that is relatively long and narrow, compared with other objects in the image. While this approach fully exploits local (roads/rivers are modeled as elongated regions with a smooth spectral signature in the image and a maximum width) and global (they are structured like a tree) characteristics of the networks, further directional information about the image structures is incorporated. Namely, an appropriate anisotropic metric is designed by using both the characteristic features of the target network and the eigen-decomposition of the gradient structure tensor of the image. Subsequently, the geodesic propagation from a given network seed under this metric is combined with hydrological operators for overland flow simulation to extract the paths that contain the most line evidence and identify them with the target network.
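
    The gradient structure tensor mentioned above summarizes local orientation: smoothing the outer product of image gradients yields, per pixel, two eigenvalues and a dominant direction. The 2-D sketch below only illustrates that building block (with an arbitrary smoothing scale and a synthetic diagonal line); the anisotropic metric and geodesic propagation of the paper are not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gradient_structure_tensor(image, sigma=2.0):
    """Eigenvalues and dominant orientation (radians) of the 2-D gradient
    structure tensor, smoothed with a Gaussian of scale sigma."""
    img = np.asarray(image, dtype=float)
    gy, gx = np.gradient(img)
    jxx = gaussian_filter(gx * gx, sigma)
    jxy = gaussian_filter(gx * gy, sigma)
    jyy = gaussian_filter(gy * gy, sigma)
    # Closed-form eigen-decomposition of the symmetric 2x2 tensor
    root = np.sqrt((jxx - jyy) ** 2 + 4.0 * jxy ** 2)
    lam1 = 0.5 * (jxx + jyy + root)      # energy across the structure
    lam2 = 0.5 * (jxx + jyy - root)      # energy along the structure
    theta = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)   # dominant gradient direction
    return lam1, lam2, theta

# Synthetic diagonal line: the dominant gradient direction is about -45 degrees,
# i.e. perpendicular to the line itself.
yy, xx = np.mgrid[0:64, 0:64]
img = np.exp(-((yy - xx) ** 2) / 8.0)
_, _, theta = gradient_structure_tensor(img)
print(round(float(np.degrees(theta[32, 32])), 1))
```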

  13. Morphological criteria of feminine upper eyelashes, quantified by a new semi-automatized image analysis: Application to the objective assessment of mascaras.

    PubMed

    Shaiek, A; Flament, F; François, G; Vicic, M; Cointereau-Chardron, S; Curval, E; Canevet-Zaida, S; Coubard, O; Idelcaid, Y

    2017-09-24

    The wide diversity of feminine eyelashes in shape, length, and curvature makes them a complex domain that remains to be quantified in vivo, together with the changes brought by the application of mascaras, which are visually assessed by women themselves or by make-up experts. Dedicated software was developed to semi-automatically extract and quantify, from digital images (frontal and lateral pictures), the major parameters of the eyelashes of Mexican and Caucasian women and to record the changes brought by the application of various mascaras and their brushes, whether self-applied or professionally applied. The diversity of feminine eyelashes appears to be a major influencing factor in the application of mascaras and their related results. Eight marketed mascaras and their respective brushes were tested, and their quantitative profiles, in terms of coverage, morphology, or curvature, were assessed. Standard applications by trained aestheticians led to higher and more homogeneous deposits of mascara, as compared to those resulting from self-applications. The developed software appears to be a valuable tool for both quantifying the major characteristics of eyelashes and assessing the make-up results brought by mascaras and their associated brushes. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  14. Semi-automatic delineation of the spino-laminar junction curve on lateral x-ray radiographs of the cervical spine

    NASA Astrophysics Data System (ADS)

    Narang, Benjamin; Phillips, Michael; Knapp, Karen; Appelboam, Andy; Reuben, Adam; Slabaugh, Greg

    2015-03-01

    Assessment of the cervical spine using x-ray radiography is an important task when providing emergency room care to trauma patients suspected of a cervical spine injury. In routine clinical practice, a physician will inspect the alignment of the cervical spine vertebrae by mentally tracing three alignment curves along the anterior and posterior sides of the cervical vertebral bodies, as well as one along the spinolaminar junction. In this paper, we propose an algorithm to semi-automatically delineate the spinolaminar junction curve, given a single reference point and the corners of each vertebral body. From the reference point, our method extracts a region of interest, and performs template matching using normalized cross-correlation to find matching regions along the spinolaminar junction. Matching points are then fit to a third-order spline, producing an interpolating curve. Experimental results are promising, on average producing a modified Hausdorff distance of 1.8 mm, validated on a dataset consisting of 29 patients including those with degenerative change, retrolisthesis, and fracture.
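
    The two main ingredients described above, normalized cross-correlation matching and a smooth interpolating curve through the matched points, can be sketched as follows. This is not the authors' implementation: the search strategy is simplified to a vertical scan per column, and a cubic spline stands in for the third-order spline fit.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def ncc(patch, template):
    """Normalized cross-correlation between two equally sized patches."""
    a = patch - patch.mean()
    b = template - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_match_row(image, template, col, row_candidates):
    """Row (top of the patch) at which the template correlates best with the
    image patch starting at (row, col)."""
    th, tw = template.shape
    scores = []
    for r in row_candidates:
        patch = image[r:r + th, col:col + tw]
        scores.append(ncc(patch, template) if patch.shape == template.shape else -np.inf)
    return row_candidates[int(np.argmax(scores))]

def fit_curve(match_cols, match_rows):
    """Smooth interpolating curve through the matched points (cubic spline
    standing in for the third-order spline described in the text)."""
    order = np.argsort(match_cols)
    return CubicSpline(np.asarray(match_cols, dtype=float)[order],
                       np.asarray(match_rows, dtype=float)[order])

# Usage sketch (hypothetical arrays):
# rows = [best_match_row(img, tpl, c, range(50, 200)) for c in cols]
# curve = fit_curve(cols, rows)
```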

  15. A Semi-Automatic Method to Extract Canal Pathways in 3D Micro-CT Images of Octocorals

    PubMed Central

    Morales Pinzón, Alfredo; Orkisz, Maciej; Rodríguez Useche, Catalina María; Torres González, Juan Sebastián; Teillaud, Stanislas; Sánchez, Juan Armando; Hernández Hoyos, Marcela

    2014-01-01

    The long-term goal of our study is to understand the internal organization of the octocoral stem canals, as well as their physiological and functional role in the growth of the colonies, and finally to assess the influence of climatic changes on this species. Here we focus on imaging tools, namely acquisition and processing of three-dimensional high-resolution images, with emphasis on automated extraction of canal pathways. Our aim was to evaluate the feasibility of the whole process, to point out and solve – if possible – technical problems related to the specimen conditioning, to determine the best acquisition parameters and to develop necessary image-processing algorithms. The pathways extracted are expected to facilitate the structural analysis of the colonies, namely to help observing the distribution, formation and number of canals along the colony. Five volumetric images of Muricea muricata specimens were successfully acquired by X-ray computed tomography with spatial resolution ranging from 4.5 to 25 micrometers. The success mainly depended on specimen immobilization. More than of the canals were successfully detected and tracked by the image-processing method developed. Thus obtained three-dimensional representation of the canal network was generated for the first time without the need of histological or other destructive methods. Several canal patterns were observed. Although most of them were simple, i.e. only followed the main branch or “turned” into a secondary branch, many others bifurcated or fused. A majority of bifurcations were observed at branching points. However, some canals appeared and/or ended anywhere along a branch. At the tip of a branch, all canals fused into a unique chamber. Three-dimensional high-resolution tomographic imaging gives a non-destructive insight to the coral ultrastructure and helps understanding the organization of the canal network. Advanced image-processing techniques greatly reduce human observer's effort and

  16. A semi-automatic method to extract canal pathways in 3D micro-CT images of Octocorals.

    PubMed

    Morales Pinzón, Alfredo; Orkisz, Maciej; Rodríguez Useche, Catalina María; Torres González, Juan Sebastián; Teillaud, Stanislas; Sánchez, Juan Armando; Hernández Hoyos, Marcela

    2014-01-01

    The long-term goal of our study is to understand the internal organization of the octocoral stem canals, as well as their physiological and functional role in the growth of the colonies, and finally to assess the influence of climatic changes on this species. Here we focus on imaging tools, namely acquisition and processing of three-dimensional high-resolution images, with emphasis on automated extraction of canal pathways. Our aim was to evaluate the feasibility of the whole process, to point out and solve - if possible - technical problems related to the specimen conditioning, to determine the best acquisition parameters and to develop necessary image-processing algorithms. The pathways extracted are expected to facilitate the structural analysis of the colonies, namely to help observing the distribution, formation and number of canals along the colony. Five volumetric images of Muricea muricata specimens were successfully acquired by X-ray computed tomography with spatial resolution ranging from 4.5 to 25 micrometers. The success mainly depended on specimen immobilization. More than [Formula: see text] of the canals were successfully detected and tracked by the image-processing method developed. Thus obtained three-dimensional representation of the canal network was generated for the first time without the need of histological or other destructive methods. Several canal patterns were observed. Although most of them were simple, i.e. only followed the main branch or "turned" into a secondary branch, many others bifurcated or fused. A majority of bifurcations were observed at branching points. However, some canals appeared and/or ended anywhere along a branch. At the tip of a branch, all canals fused into a unique chamber. Three-dimensional high-resolution tomographic imaging gives a non-destructive insight to the coral ultrastructure and helps understanding the organization of the canal network. Advanced image-processing techniques greatly reduce human observer

  17. A program for semi-automatic sequential resonance assignments in protein 1H nuclear magnetic resonance spectra

    NASA Astrophysics Data System (ADS)

    Billeter, M.; Basus, V. J.; Kuntz, I. D.

    A new approach to the sequential resonance assignment of protein 1H NMR spectra based on a computer program is presented. Two main underlying concepts were used in the design of this program. First, it considers at any time all possible assignments that are consistent with the currently available data. If new information is added then assignments that have become inconsistent are eliminated. Second, the process of the assignment is split into formal steps that follow strictly from the available data and steps that involve the interpretation of ambiguous NMR data. The first kind of step is safe in the sense that it never leads to false assignments provided that the input does not contain any error; these steps are executed automatically by the program when the input files are read and whenever new data have been entered interactively. The second kind of step is left to the user: An interactive dialog provides detailed information on the current situation of the assignment and indicates what kind of new data would be most promising for further assignment. The user then provides new data to the program and restarts the automatic part which will attempt to draw logical conclusions from the joint use of the new data and the earlier available information and will eliminate assignments that have become inconsistent. Results of test problems using simulated NMR data for proteins consisting of up to 99 residues as well as the application of the program to obtain the complete assignment of α-bungarotoxin, a 74-residue snake neurotoxin, are reported.
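
    The "safe" elimination step described above can be illustrated with a toy constraint-propagation sketch: every spin system keeps the set of sequence positions still consistent with the data, and new data only ever shrinks those sets. The data structures and constraint types here are invented for illustration and are far simpler than the actual program.

```python
# Toy illustration of the "safe" steps: every spin system keeps the set of
# sequence positions still consistent with the data; new data only prunes.
# Data structures and constraint types are hypothetical.
def init_candidates(n_spin_systems, n_residues):
    return {s: set(range(n_residues)) for s in range(n_spin_systems)}

def apply_type_constraint(cands, spin_system, allowed_positions):
    """New data (e.g. an amino-acid type) restricts one spin system."""
    cands[spin_system] &= set(allowed_positions)

def apply_sequential_constraint(cands, s_i, s_j):
    """Spin system s_j must directly follow s_i in the sequence."""
    cands[s_j] &= {p + 1 for p in cands[s_i]}
    cands[s_i] &= {p - 1 for p in cands[s_j]}

def assigned(cands):
    """Assignments that have become unique; empty sets indicate an input error."""
    return {s: next(iter(p)) for s, p in cands.items() if len(p) == 1}
```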

  18. Data mining support systems

    NASA Astrophysics Data System (ADS)

    Zhao, Yinliang; Yao, JingTao; Yao, Yiyu

    2004-04-01

    The main stream of research in data mining (or knowledge discovery in databases) focuses on algorithms and automatic or semi-automatic processes for discovering knowledge hidden in data. In this paper, we adopt a more general and goal oriented view of data mining. Data mining is regarded as a field of study covering the theories, methodologies, techniques, and activities with the goal of discovering new and useful knowledge. One of its objectives is to design and implement data mining systems. A miner solves problems of data mining manually, or semi-automatically by using such systems. However, there is a lack of studies on how to assist a miner in solving data mining problems. From the experiences and lessons of decision support systems, we introduce the concept of data mining support systems (DMSS). We draw an analogy between the field of decision-making and the field of data mining, and between the role of a manager and the role of a data miner. A DMSS is an active and highly interactive computer system that assists data mining activities. The needs and the basic features of DMSS are discussed.

  19. View planning and mesh refinement effects on a semi-automatic three-dimensional photorealistic texture mapping procedure

    NASA Astrophysics Data System (ADS)

    Shih, Chihhsiong; Yang, Yuanfan

    2012-02-01

    A novel three-dimensional (3-D) photorealistic texturing process is presented that applies a view-planning and view-sequencing algorithm to a coarse 3-D model to determine a set of best viewing angles for capturing images of the individual real-world objects/buildings. The best sequence of views generates sets of visible edges in each view to serve as a guide for camera field shots, by either manual adjustment or equipment alignment. Each best view tries to cover as many object/building surfaces as possible in one shot, leading to a smaller total number of shots for a complete model reconstruction requiring texturing with photo-realistic effects. The direct linear transformation (DLT) method is used for reprojection of 3-D model vertices onto a two-dimensional (2-D) image plane for the actual texture mapping. Given this method, the actual camera orientations do not have to be unique and can be set arbitrarily without heavy and expensive positioning equipment. We also present results of a study on texture-mapping precision as a function of the level of visible mesh subdivision. In addition, the selection of control points for the DLT method used for reprojection of 3-D model vertices onto 2-D textured images is investigated for its effect on mapping precision. By using DLT and perspective projection theory on coarse-model feature points, this technique allows accurate 3-D texture mapping of refined model meshes of real-world buildings. The novel integration flow of this research not only greatly reduces the human labor and intensive equipment requirements of traditional methods, but also generates a more appealing photo-realistic appearance of reconstructed models, which is useful in many multimedia applications. The roles of view planning (VP) are multifold. VP can (1) reduce the repetitive texture-mapping computation load, (2) present a set of visible model wireframe edges that can serve as a guide for images with sharp edges and
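
    The 11-parameter DLT used for reprojection can be written down compactly: each control point contributes two linear equations in the parameters L1..L11, which are solved by least squares and then used to project arbitrary 3-D vertices. The sketch below is a generic DLT implementation, not the authors' code; control-point selection and mesh refinement are outside its scope.

```python
# Generic 11-parameter DLT: estimate L1..L11 from >= 6 control points with
# known 3-D object and 2-D image coordinates, then reproject other vertices.
import numpy as np

def dlt_calibrate(xyz, uv):
    """xyz: (N, 3) object points, uv: (N, 2) image points, N >= 6."""
    A, b = [], []
    for (X, Y, Z), (u, v) in zip(xyz, uv):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z]); b.append(u)
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z]); b.append(v)
    L, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    return L                                   # parameters L1..L11

def dlt_project(L, xyz):
    """Project (N, 3) model vertices to (N, 2) image coordinates for texture lookup."""
    X, Y, Z = np.asarray(xyz, float).T
    den = L[8] * X + L[9] * Y + L[10] * Z + 1.0
    u = (L[0] * X + L[1] * Y + L[2] * Z + L[3]) / den
    v = (L[4] * X + L[5] * Y + L[6] * Z + L[7]) / den
    return np.stack([u, v], axis=-1)
```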

  20. SYRIT Computer School Systems Report.

    ERIC Educational Resources Information Center

    Maldonado, Carmen

    The 1991-92 and 1993-94 audits for SYRIT Computer School Systems revealed noncompliance with applicable laws and regulations in certifying students for Tuition Assistance Program (TAP) awards. SYRIT was overpaid $2,817,394 because school officials incorrectly certified student eligibility. The audit also discovered that students graduated and were…

  1. Computer-Assisted Placement System.

    ERIC Educational Resources Information Center

    Nordlund, Willis J.

    The detailed study deals with the two basic types of computer-assisted placement mechanisms now operating as components of the United States Employment Service (USTES). Job banks receive job orders, organize, edit, and display; job-matching systems perform similar functions but in addition attempt to screen and match jobs and job applicants. The…

  2. Policy Information System Computer Program.

    ERIC Educational Resources Information Center

    Hamlin, Roger E.; And Others

    The concepts and methodologies outlined in "A Policy Information System for Vocational Education" are presented in a simple computer format in this booklet. It also contains a sample output representing 5-year projections of various planning needs for vocational education. Computerized figures in the eight areas corresponding to those in the…

  4. Robot computer problem solving system

    NASA Technical Reports Server (NTRS)

    Becker, J. D.; Merriam, E. W.

    1974-01-01

    The conceptual, experimental, and practical phases of developing a robot computer problem solving system are outlined. Robot intelligence, conversion of the programming language SAIL to run under the TENEX monitor, and the use of the network to run several cooperating jobs at different sites are discussed.

  5. A semi-automatic image-based close range 3D modeling pipeline using a multi-camera configuration.

    PubMed

    Rau, Jiann-Yeou; Yeh, Po-Chia

    2012-01-01

    The generation of photo-realistic 3D models is an important task for digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and a multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. Imaged objects include a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. It demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to a large quantity of antiques stored in a museum.

  6. A Semi-Automatic Image-Based Close Range 3D Modeling Pipeline Using a Multi-Camera Configuration

    PubMed Central

    Rau, Jiann-Yeou; Yeh, Po-Chia

    2012-01-01

    The generation of photo-realistic 3D models is an important task for digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and a multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. Imaged objects include a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. It demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to a large quantity of antiques stored in a museum. PMID:23112656

  7. Robot, computer problem solving system

    NASA Technical Reports Server (NTRS)

    Becker, J. D.

    1972-01-01

    The development of a computer problem solving system is reported that considers physical problems faced by an artificial robot moving around in a complex environment. Fundamental interaction constraints with a real environment are simulated for the robot by visual scan and creation of an internal environmental model. The programming system used in constructing the problem solving system for the simulated robot and its simulated world environment is outlined together with the task that the system is capable of performing. A very general framework for understanding the relationship between an observed behavior and an adequate description of that behavior is included.

  8. Construction, implementation and testing of an image identification system using computer vision methods for fruit flies with economic importance (Diptera: Tephritidae).

    PubMed

    Wang, Jiang-Ning; Chen, Xiao-Lin; Hou, Xin-Wen; Zhou, Li-Bing; Zhu, Chao-Dong; Ji, Li-Qiang

    2017-07-01

    Many species of Tephritidae are damaging to fruit, which might negatively impact international fruit trade. Automatic or semi-automatic identification of fruit flies is greatly needed for diagnosing causes of damage and for quarantine protocols for economically relevant insects. A fruit fly image identification system named AFIS1.0 has been developed using 74 species belonging to six genera, which include the majority of pests in the Tephritidae. The system combines automated image identification and manual verification, balancing operability and accuracy. AFIS1.0 integrates image analysis and an expert system into a content-based image retrieval framework. In the automatic identification module, AFIS1.0 returns candidate identification results. Users can then make a manual selection by comparing unidentified images with a subset of images corresponding to the automatic identification results. The system uses Gabor surface features in automated identification and yielded an overall classification success rate of 87% at the species level in an Independent Multi-part Image Automatic Identification Test. The system is useful for users with or without specific expertise on Tephritidae in the task of rapid and effective identification of fruit flies, and it brings the application of computer vision technology to fruit fly recognition much closer to production level. © 2016 Society of Chemical Industry.
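
    A rough sketch of the "Gabor features plus content-based retrieval" idea follows. The filter-bank parameters, the mean/std feature summary, and the Euclidean nearest-neighbour ranking are assumptions for illustration; AFIS1.0's actual Gabor surface features and matching scheme are not described in this abstract.

```python
# Sketch: Gabor-filter texture features and nearest-neighbour retrieval of
# candidate species for manual verification. Parameters are illustrative only.
import numpy as np
from skimage.filters import gabor

def gabor_features(gray, frequencies=(0.1, 0.2, 0.3), n_orient=4):
    """Mean/std of Gabor magnitude responses over a small filter bank."""
    feats = []
    for f in frequencies:
        for theta in np.linspace(0.0, np.pi, n_orient, endpoint=False):
            real, imag = gabor(gray, frequency=f, theta=theta)
            mag = np.hypot(real, imag)
            feats += [mag.mean(), mag.std()]
    return np.array(feats)

def retrieve(query_feat, gallery_feats, gallery_labels, k=5):
    """Return the k most similar gallery species (Euclidean distance)."""
    dists = np.linalg.norm(gallery_feats - query_feat, axis=1)
    return [gallery_labels[i] for i in np.argsort(dists)[:k]]
```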

  9. Production optimization of 99Mo/99mTc zirconium molybdate gel generators at a semi-automatic device: DISIGEG.

    PubMed

    Monroy-Guzman, F; Rivero Gutiérrez, T; López Malpica, I Z; Hernández Cortes, S; Rojas Nava, P; Vazquez Maldonado, J C; Vazquez, A

    2012-01-01

    DISIGEG is a synthesis installation for zirconium (99)Mo-molybdate gels for (99)Mo/(99m)Tc generator production, which has been designed, built and installed at the ININ. The device consists of a synthesis reactor and five systems controlled via keyboard: (1) raw material access, (2) chemical air stirring, (3) gel drying by air and infrared heating, (4) moisture removal and (5) gel extraction. DISIGEG operation is described, and the effects of the drying conditions of zirconium (99)Mo-molybdate gels on (99)Mo/(99m)Tc generator performance were evaluated, as well as some physical-chemical properties of these gels. The results reveal that the temperature, time and air flow applied during the drying process directly affect zirconium (99)Mo-molybdate gel generator performance. All gels prepared have a similar chemical structure, probably constituted by a three-dimensional network based on zirconium pentagonal bipyramids and molybdenum octahedra. Basic structural variations cause a change in gel porosity and permeability, favouring or inhibiting (99m)TcO(4)(-) diffusion into the matrix. The (99m)TcO(4)(-) eluates produced by (99)Mo/(99m)Tc zirconium (99)Mo-molybdate gel generators prepared in DISIGEG, air dried at 80°C for 5 h and using an air flow of 90mm, satisfied all Pharmacopoeia regulations: a (99m)Tc yield between 70-75%, (99)Mo breakthrough less than 3×10(-3)%, radiochemical purities of about 97%, and sterile, pyrogen-free eluates with a pH of 6.

  10. Semi-automatic digital image impact assessments of Maize Lethal Necrosis (MLN) at the leaf, whole plant and plot levels

    NASA Astrophysics Data System (ADS)

    Kefauver, S. C.; Vergara-Diaz, O.; El-Haddad, G.; Das, B.; Suresh, L. M.; Cairns, J.; Araus, J. L.

    2016-12-01

    Maize is the top staple crop for low-income populations in Sub-Saharan Africa and is currently suffering from the appearance of new diseases, which, together with increased abiotic stresses from climate change, are challenging the very sustainability of African societies. Current constraints in field phenotyping remain a major bottleneck for future breeding advances, but RGB-based High-Throughput Phenotyping Platforms (HTPPs) have demonstrated promise for rapidly developing both disease-resistant and weather-resilient crops. RGB HTPPs have proven cost-effective in studies assessing the effects of abiotic stresses, but have yet to be fully exploited to phenotype disease resistance. RGB image quantification using different alternate color space transforms, including BreedPix indices, was implemented as a FIJI plug-in (http://fiji.sc/Fiji; http://github.com/george-haddad/CIMMYT). For validation, Maize Lethal Necrosis (MLN) visual impact assessments on a scale from 1 to 5 were scored by the resident CIMMYT plant pathologist, with 1 being MLN resistant (healthy plants with no visual symptoms) and 5 being totally susceptible (entirely necrotic with no green tissue). Individual RGB vegetation indices outperformed NDVI (Normalized Difference Vegetation Index), with correlation values up to 0.72, compared with 0.56 for NDVI. Specifically, Hue, Green Area (GA), and the Normalized Green Red Difference Index (NGRDI) consistently outperformed NDVI in estimating MLN disease severity. In multivariate linear and various decision tree models, Necrosis Area (NA) and Chlorosis Area (CA), calculated similarly to GA and GGA from BreedPix, also contributed significantly to estimating MLN impact scores. Measurements from UAS (Unmanned Aerial Systems), proximal field photography of plants and plots, and flatbed scans of individual leaves produced similar results, demonstrating the robustness of these cost-effective RGB indices. Furthermore, the application of the indices using
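
    The RGB indices named above are simple per-image statistics. The sketch below computes NGRDI, mean Hue, and a Green Area fraction from an RGB image; the hue window used for GA is an assumption, not necessarily the BreedPix implementation.

```python
# Sketch of per-image RGB indices: NGRDI, mean Hue and a Green Area fraction.
# The 60-180 degree hue window for GA is an assumed approximation.
import numpy as np
from skimage.color import rgb2hsv

def rgb_indices(rgb):
    """rgb: float image in [0, 1] with shape (H, W, 3)."""
    r, g = rgb[..., 0], rgb[..., 1]
    ngrdi = (g - r) / (g + r + 1e-6)           # Normalized Green Red Difference Index
    hue_deg = rgb2hsv(rgb)[..., 0] * 360.0     # hue in degrees
    green_area = float(np.mean((hue_deg > 60) & (hue_deg < 180)))  # greenish-pixel fraction
    return {"NGRDI": float(ngrdi.mean()), "Hue": float(hue_deg.mean()), "GA": green_area}
```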

  11. Computer control system of TRISTAN

    NASA Astrophysics Data System (ADS)

    Akiyama, A.; Ishii, K.; Kadokura, E.; Katoh, T.; Kikutani, E.; Kimura, Y.; Komada, I.; Kudo, K.; Kurokawa, S.; Oide, K.; Takeda, S.; Uchino, K.

    The 8 GeV accumulation ring and the 30 GeV × 30 GeV main ring of TRISTAN, an accelerator-storage ring complex at KEK, are controlled by a single computer system. About twenty minicomputers (Hitachi HIDIC 80-E's) are linked to each other by optical fiber cables to form an N-to-N token-passing ring network with a 10 Mbps transmission speed. The software system is based on the NODAL interpreter developed at the CERN SPS. The KEK version of NODAL uses a compiler-interpreter method to increase its execution speed. In addition, a multi-computer file system, a screen editor, and dynamic linkage of data modules and functions are characteristic features of KEK NODAL.

  12. Computational Aeroacoustic Analysis System Development

    NASA Technical Reports Server (NTRS)

    Hadid, A.; Lin, W.; Ascoli, E.; Barson, S.; Sindir, M.

    2001-01-01

    Many industrial and commercial products operate in a dynamic flow environment and the aerodynamically generated noise has become a very important factor in the design of these products. In light of the importance in characterizing this dynamic environment, Rocketdyne has initiated a multiyear effort to develop an advanced general-purpose Computational Aeroacoustic Analysis System (CAAS) to address these issues. This system will provide a high fidelity predictive capability for aeroacoustic design and analysis. The numerical platform is able to provide high temporal and spatial accuracy that is required for aeroacoustic calculations through the development of a high order spectral element numerical algorithm. The analysis system is integrated with well-established CAE tools, such as a graphical user interface (GUI) through PATRAN, to provide cost-effective access to all of the necessary tools. These include preprocessing (geometry import, grid generation and boundary condition specification), code set up (problem specification, user parameter definition, etc.), and postprocessing. The purpose of the present paper is to assess the feasibility of such a system and to demonstrate the efficiency and accuracy of the numerical algorithm through numerical examples. Computations of vortex shedding noise were carried out in the context of a two-dimensional low Mach number turbulent flow past a square cylinder. The computational aeroacoustic approach that is used in CAAS relies on coupling a base flow solver to the acoustic solver throughout a computational cycle. The unsteady fluid motion, which is responsible for both the generation and propagation of acoustic waves, is calculated using a high order flow solver. The results of the flow field are then passed to the acoustic solver through an interpolator to map the field values into the acoustic grid. The acoustic field, which is governed by the linearized Euler equations, is then calculated using the flow results computed

  13. Robot computer problem solving system

    NASA Technical Reports Server (NTRS)

    Merriam, E. W.; Becker, J. D.

    1973-01-01

    A robot computer problem solving system which represents a robot exploration vehicle in a simulated Mars environment is described. The model exhibits changes and improvements made on a previously designed robot in a city environment. The Martian environment is modeled in Cartesian coordinates; objects are scattered about a plane; arbitrary restrictions on the robot's vision have been removed; and the robot's path contains arbitrary curves. New environmental features, particularly the visual occlusion of objects by other objects, were added to the model. Two different algorithms were developed for computing occlusion. Movement and vision capabilities of the robot were established in the Mars environment, using LISP/FORTRAN interface for computational efficiency. The graphical display program was redesigned to reflect the change to the Mars-like environment.

  14. Computer access security code system

    NASA Technical Reports Server (NTRS)

    Collins, Earl R., Jr. (Inventor)

    1990-01-01

    A security code system for controlling access to computer and computer-controlled entry situations comprises a plurality of subsets of alpha-numeric characters disposed in random order in matrices of at least two dimensions forming theoretical rectangles, cubes, etc., such that when access is desired, at least one pair of previously unused character subsets not found in the same row or column of the matrix is chosen at random and transmitted by the computer. The proper response to gain access is transmittal of the subsets which complete the rectangle, and/or a parallelepiped whose opposite corners were defined by the first group of code. Once used, subsets are not used again, in order to defeat unauthorized access by eavesdropping and the like.
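
    A toy sketch of the challenge/response scheme follows: the host transmits two previously unused subsets at opposite corners of a rectangle in the matrix, and access is granted only for the two subsets that complete that rectangle. The matrix contents and bookkeeping here are invented for illustration, and a real implementation would need a cryptographically sound source of randomness.

```python
# Toy challenge/response over a 2-D matrix of character subsets.
# Matrix contents are placeholders; a real system would use a secure RNG.
import random

MATRIX = [[f"S{r}{c}" for c in range(5)] for r in range(4)]   # placeholder subsets

def challenge(used):
    """Pick two unused subsets that are not in the same row or column."""
    cells = [(r, c) for r in range(4) for c in range(5) if (r, c) not in used]
    while True:
        (r1, c1), (r2, c2) = random.sample(cells, 2)
        if r1 != r2 and c1 != c2:
            used.update({(r1, c1), (r2, c2)})
            return (r1, c1), (r2, c2)

def valid_response(corner_a, corner_b, reply):
    """Reply must contain exactly the two subsets completing the rectangle."""
    (r1, c1), (r2, c2) = corner_a, corner_b
    return set(reply) == {MATRIX[r1][c2], MATRIX[r2][c1]}
```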

  15. Stratified object-based image analysis of high-res laser altimetry data for semi-automatic geomorphological mapping in an alpine area

    NASA Astrophysics Data System (ADS)

    Anders, Niels S.; Seijmonsbergen, Arie C.; Bouten, Willem

    2010-05-01

    Classic geomorphological mapping is gradually being replaced by (semi-)automated techniques to rapidly obtain geomorphological information in remote, steep and/or forested areas. To ensure a high accuracy of these semi-automated maps, there is a need to optimize automated mapping procedures. Within this context, we present a novel approach to semi-automatically map alpine geomorphology using a stratified object-based image analysis approach, in contrast to traditional object-based image analysis. We used a 1 m 'Light Detection And Ranging' (LiDAR) Digital Terrain Model (DTM) from a mountainous area in Vorarlberg (western Austria). From the DTM, we calculated various terrain derivatives which served as input for segmentation of the DTM and object-based classification. We assessed the segmentation results by comparing the generated image objects with a reference dataset. In this way, we optimized image segmentation parameters which were used for classifying karst, glacial, fluvial and denudational landforms. To evaluate our approach, the classification results were compared with results from traditional object-based image analysis. Our results show that landform-specific segmentation parameters are needed to extract and classify alpine landforms in a step-wise manner, producing a geomorphological map with higher accuracy than maps resulting from traditional object-based image analysis. We conclude that the stratified object-based image analysis of high-resolution laser altimetry data substantially improves classification results in the study area. Using this approach, geomorphological maps can be produced more accurately and efficiently than before in difficult-to-access alpine areas. A further step may be the development of specific landform segmentation/classification signatures which can be transferred and applied in other mountain regions.

  16. Estimating ice albedo from fine debris cover quantified by a semi-automatic method: the case study of Forni Glacier, Italian Alps

    NASA Astrophysics Data System (ADS)

    Azzoni, Roberto Sergio; Senese, Antonella; Zerboni, Andrea; Maugeri, Maurizio; Smiraglia, Claudio; Diolaiuti, Guglielmina Adele

    2016-03-01

    In spite of the abundant literature focusing on fine debris deposition over glacier accumulation areas, less attention has been paid to the glacier melting surface. Accordingly, we proposed a novel method based on semi-automatic image analysis to estimate ice albedo from fine debris coverage (d). Our procedure was tested on the surface of a wide Alpine valley glacier (the Forni Glacier, Italy) in the summers of 2011, 2012 and 2013, acquiring parallel data sets of in situ measurements of ice albedo and high-resolution surface images. Analysis of 51 images yielded d values ranging from 0.01 to 0.63, and albedo was found to vary from 0.06 to 0.32. The estimated d values are in a linear relation with the natural logarithm of the measured ice albedo (R = -0.84). The robustness of our approach in evaluating d was analyzed through five sensitivity tests, and we found that it is largely replicable. On the Forni Glacier, we also quantified a mean debris coverage rate (Cr) equal to 6 g m-2 per day during the ablation season of 2013, thus supporting previous studies that describe ongoing darkening phenomena at the surface of Alpine debris-free glaciers. In addition to debris coverage, we also considered the impact of water (both from melt and rainfall) as a factor that tunes albedo: meltwater occurs during the central hours of the day, decreasing the albedo due to its lower reflectivity, whereas rainfall causes a subsequent mean daily albedo increase slightly higher than 20 %, although it is short-lasting (from 1 to 4 days).
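
    The reported relation, a linear dependence of d on the natural logarithm of albedo, can be turned into a simple predictive fit. The sketch below fits ln(albedo) = a·d + b by least squares and returns an albedo estimator; the coefficients come from the user's own measurements, not from the paper.

```python
# Fit ln(albedo) = a*d + b by least squares and return an albedo estimator.
# Coefficients are fitted to the user's measurements, not taken from the paper.
import numpy as np

def fit_albedo_model(d, albedo):
    a, b = np.polyfit(np.asarray(d, float), np.log(np.asarray(albedo, float)), 1)
    return lambda d_new: np.exp(a * np.asarray(d_new, float) + b)
```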

  17. In vivo semi-automatic segmentation of multicontrast cardiovascular magnetic resonance for prospective cohort studies on plaque tissue composition: initial experience.

    PubMed

    Yoneyama, Taku; Sun, Jie; Hippe, Daniel S; Balu, Niranjan; Xu, Dongxiang; Kerwin, William S; Hatsukami, Thomas S; Yuan, Chun

    2016-01-01

    Automatic in vivo segmentation of multicontrast (multisequence) carotid magnetic resonance for plaque composition has been proposed as a substitute for manual review to save time and reduce inter-reader variability in large-scale or multicenter studies. Using serial images from a prospective longitudinal study, we sought to compare a semi-automatic approach with expert human reading in analyzing carotid atherosclerosis progression. Baseline and 6-month follow-up multicontrast carotid images from 59 asymptomatic subjects with 16-79% carotid stenosis were reviewed both by trained radiologists with 2-4 years of specialized experience in carotid plaque characterization with MRI and by a previously reported automatic atherosclerotic plaque segmentation algorithm, referred to as morphology-enhanced probabilistic plaque segmentation (MEPPS). Agreement on measurements from individual time points, as well as on compositional changes, was assessed using the intraclass correlation coefficient (ICC). There was good agreement between manual and MEPPS reviews on individual time points for calcification (CA) (area: ICC, 0.85-0.91; volume: ICC, 0.92-0.95) and lipid-rich necrotic core (LRNC) (area: ICC, 0.78-0.82; volume: ICC, 0.84-0.86). For compositional changes, agreement was good for CA volume change (ICC, 0.78) and moderate for LRNC volume change (ICC, 0.49). Factors associated with LRNC progression as detected by MEPPS review included intraplaque hemorrhage (positive association) and reduction in low-density lipoprotein cholesterol (negative association), which were consistent with previous findings from manual review. The automatic classifier for plaque composition produced results similar to those of expert manual review in a prospective serial MRI study of carotid atherosclerosis progression. Such automatic classification tools may be beneficial in large-scale multicenter studies by reducing image analysis time and avoiding bias between human reviewers.
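
    Agreement above is reported as intraclass correlation coefficients. The abstract does not state which ICC form was used, so the sketch below shows one common choice, ICC(2,1) (two-way random effects, absolute agreement, single measurement), for a subjects-by-readers matrix of measurements.

```python
# ICC(2,1): two-way random effects, absolute agreement, single measurement
# (Shrout & Fleiss). The specific ICC form used in the study is not stated.
import numpy as np

def icc_2_1(x):
    """x: (n_subjects, k_readers) array of measurements."""
    x = np.asarray(x, float)
    n, k = x.shape
    grand = x.mean()
    msr = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)   # between subjects
    msc = n * np.sum((x.mean(axis=0) - grand) ** 2) / (k - 1)   # between readers
    sse = np.sum((x - grand) ** 2) - msr * (n - 1) - msc * (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```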

  18. An investigation into the factors that influence toolmark identifications on ammunition discharged from semi-automatic pistols recovered from car fires.

    PubMed

    Collender, Mark A; Doherty, Kevin A J; Stanton, Kenneth T

    2017-01-01

    Following a shooting incident where a vehicle is used to convey the culprits to and from the scene, both the getaway car and the firearm are often deliberately burned in an attempt to destroy any forensic evidence which may subsequently be recovered. Here we investigate the factors that influence the ability to make toolmark identifications on ammunition discharged from pistols recovered from such car fires. This work was carried out by conducting a number of controlled furnace tests in conjunction with real car fire tests in which three 9 mm semi-automatic pistols were burned. Comparisons between pre-burn and post-burn test-fired ammunition discharged from these pistols were then performed to establish whether identifications were still possible. The surfaces of the furnace-heated samples and car fire samples were examined following heating/burning to establish what factors had influenced their surface morphology. The primary influence on the surfaces of the furnace-heated and car fire samples was the formation of oxide layers. The car fire samples were altered to a greater extent than the furnace-heated samples. Identifications were still possible between pre- and post-burn discharged cartridge cases, but this was not the case for the discharged bullets. It is suggested that the reason for this is a difference between the discharge-generated toolmarks impressed onto the base of cartridge cases and those striated along the surfaces of bullets. It was also found that the temperatures recorded in the front foot wells were considerably lower than those recorded on top of the rear seats during the car fires. These factors should be assessed by forensic firearms examiners when performing casework involving pistols recovered from car fires. Copyright © 2016 The Chartered Society of Forensic Sciences. Published by Elsevier Ireland Ltd. All rights reserved.

  19. Resources Required for Semi-Automatic Volumetric Measurements in Metastatic Chordoma: Is Potentially Improved Tumor Burden Assessment Worth the Time Burden?

    PubMed

    Fenerty, Kathleen E; Patronas, Nicholas J; Heery, Christopher R; Gulley, James L; Folio, Les R

    2016-06-01

    The Response Evaluation Criteria in Solid Tumors (RECIST) is the current standard for assessing therapy response in patients with malignant solid tumors; however, volumetric assessments are thought to be more representative of actual tumor size and hence superior in predicting patient outcomes. We segmented all primary and metastatic lesions in 21 chordoma patients for comparison to RECIST. Primary tumors were segmented on MR and validated by a neuroradiologist. Metastatic lesions were segmented on CT and validated by a general radiologist. We estimated times for a research assistant to segment all primary and metastatic chordoma lesions using semi-automated volumetric segmentation tools available within our PACS (v12.0, Carestream, Rochester, NY), as well as time required for radiologists to validate the segmentations. We also report success rates of semi-automatic segmentation in metastatic lesions on CT and time required to export data. Furthermore, we discuss the feasibility of volumetric segmentation workflow in research and clinical settings. The research assistant spent approximately 65 h segmenting 435 lesions in 21 patients. This resulted in 1349 total segmentations (average 2.89 min per lesion) and over 13,000 data points. Combined time for the neuroradiologist and general radiologist to validate segmentations was 45.7 min per patient. Exportation time for all patients totaled only 6 h, providing time-saving opportunities for data managers and oncologists. Perhaps cost-neutral resource reallocation can help acquire volumes paralleling our example workflow. Our results will provide researchers with benchmark resources required for volumetric assessments within PACS and help prepare institutions for future volumetric assessment criteria.

  20. Visualizing Parallel Computer System Performance

    NASA Technical Reports Server (NTRS)

    Malony, Allen D.; Reed, Daniel A.

    1988-01-01

    Parallel computer systems are among the most complex of man's creations, making satisfactory performance characterization difficult. Despite this complexity, there are strong, indeed almost irresistible, incentives to quantify parallel system performance using a single metric. The fallacy lies in succumbing to such temptations. A complete performance characterization requires not only an analysis of the system's constituent levels, it also requires both static and dynamic characterizations. Static or average behavior analysis may mask transients that dramatically alter system performance. Although the human visual system is remarkably adept at interpreting and identifying anomalies in false color data, the importance of dynamic, visual scientific data presentation has only recently been recognized. Large, complex parallel systems pose equally vexing performance interpretation problems. Data from hardware and software performance monitors must be presented in ways that emphasize important events while eliding irrelevant details. Design approaches and tools for performance visualization are the subject of this paper.

  1. Computer-aided system design

    NASA Technical Reports Server (NTRS)

    Walker, Carrie K.

    1991-01-01

    A technique has been developed for combining features of a systems architecture design and assessment tool and a software development tool. This technique reduces simulation development time and expands simulation detail. The Architecture Design and Assessment System (ADAS), developed at the Research Triangle Institute, is a set of computer-assisted engineering tools for the design and analysis of computer systems. The ADAS system is based on directed graph concepts and supports the synthesis and analysis of software algorithms mapped to candidate hardware implementations. Greater simulation detail is provided by the ADAS functional simulator. With the functional simulator, programs written in either Ada or C can be used to provide a detailed description of graph nodes. A Computer-Aided Software Engineering tool developed at the Charles Stark Draper Laboratory (CSDL CASE) automatically generates Ada or C code from engineering block diagram specifications designed with an interactive graphical interface. A technique to use the tools together has been developed, which further automates the design process.

  2. On evaluating parallel computer systems

    NASA Technical Reports Server (NTRS)

    Adams, George B., III; Brown, Robert L.; Denning, Peter J.

    1985-01-01

    A workshop was held in an attempt to program real problems on the MIT Static Data Flow Machine. Most of the architecture of the machine was specified but some parts were incomplete. The main purpose for the workshop was to explore principles for the evaluation of computer systems employing new architectures. Principles explored were: (1) evaluation must be an integral, ongoing part of a project to develop a computer of radically new architecture; (2) the evaluation should seek to measure the usability of the system as well as its performance; (3) users from the application domains must be an integral part of the evaluation process; and (4) evaluation results should be fed back into the design process. It is concluded that the general organizational principles are achievable in practice from this workshop.

  3. The CESR computer control system

    NASA Astrophysics Data System (ADS)

    Helmke, R. G.; Rice, D. H.; Strohman, C.

    1986-06-01

    The control system for the Cornell Electron Storage Ring (CESR) has functioned satisfactorily since its implementation in 1979. Key characteristics are fast tuning response, almost exclusive use of FORTRAN as a programming language, and efficient coordinated ramping of CESR guide field elements. This original system has not, however, been able to keep pace with the increasing complexity of operation of CESR associated with performance upgrades. Limitations in address space, expandability, access to data system-wide, and program development impediments have prompted the undertaking of a major upgrade. The system under development accommodates up to 8 VAX computers for all applications programs. The database and communications semaphores reside in a shared multi-ported memory, and each hardware interface bus is controlled by a dedicated 32 bit micro-processor in a VME based system.

  4. Semi-automatic delimitation of volcanic edifice boundaries: Validation and application to the cinder cones of the Tancitaro-Nueva Italia region (Michoacán-Guanajuato Volcanic Field, Mexico)

    NASA Astrophysics Data System (ADS)

    Di Traglia, Federico; Morelli, Stefano; Casagli, Nicola; Garduño Monroy, Victor Hugo

    2014-08-01

    The shape and size of monogenetic volcanoes are the result of complex evolutions involving the interaction of eruptive activity, structural setting and degradational processes. Morphological studies of cinder cones aim to evaluate volcanic hazard on Earth and to decipher the origins of various structures on extraterrestrial planets. Efforts have so far been dedicated to characterizing cinder cone morphology in a systematic and comparable manner. However, manual delimitation is time-consuming and influenced by user subjectivity, while automatic boundary delimitation of volcanic terrains can be affected by irregular topography. In this work, the semi-automatic delimitation of volcanic edifice boundaries proposed by Grosse et al. (2009) for stratovolcanoes was tested for the first time on monogenetic cinder cones. The method, based on the integration of DEM-derived slope and curvature maps, is applied here to the Tancitaro-Nueva Italia region of the Michoacán-Guanajuato Volcanic Field (Mexico), where 309 Plio-Quaternary cinder cones are located. The semi-automatic extraction identified 137 of the 309 cinder cones of the Tancitaro-Nueva Italia region recognized by manual extraction, corresponding to 44.3% of the total number of cinder cones. Analysis of vent alignments allowed us to identify NE-SW vent alignments and cone elongations, consistent with a NE-SW σmax and a NW-SE σmin. By constructing a vent intensity map, based on computing the number of vents within a radius r centred on each vent of the data set and choosing r = 5 km, four vent intensity maxima were derived: one positioned NW of Volcano Tancitaro, one to the NE, one to the S, and another vent cluster located at the SE boundary of the studied area. The spacing of the cluster centroids (24 km) can be related to the thickness of the crust (9-10 km) overlying the magma reservoir.
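
    The vent intensity map described above, i.e. the number of vents within a radius r = 5 km of each vent, is straightforward to compute from projected coordinates. The sketch below uses a k-d tree for the neighbour counts; the use of a metric (x, y) coordinate system is an assumption.

```python
# Count, for each vent, the neighbouring vents within r = 5 km.
# Coordinates are assumed to be projected (x, y) positions in metres.
import numpy as np
from scipy.spatial import cKDTree

def vent_intensity(xy, radius_m=5000.0):
    """xy: (N, 2) vent coordinates; returns the neighbour count per vent."""
    tree = cKDTree(xy)
    counts = [len(tree.query_ball_point(p, r=radius_m)) - 1 for p in xy]
    return np.asarray(counts)        # -1 excludes the vent itself
```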

  5. Millimeter wave transmissometer computer system

    SciTech Connect

    Wiberg, J.D.; Widener, K.B.

    1990-04-01

    A millimeter wave transmissometer has been designed and built by the Pacific Northwest Laboratory in Richland, Washington for the US Army at the Dugway Proving Grounds in Dugway, Utah. This real-time data acquisition and control system is used to test and characterize battlefield obscurants according to the transmittance of electromagnetic radiation at millimeter wavelengths. It is an advanced five-frequency instrumentation radar system consisting of a transceiver van and a receiver van deployed at opposite sides of a test grid. The transceiver computer system is a successful integration of a Digital Equipment Corporation (DEC) VAX 8350, multiple VME bus systems with Motorola M68020 processors (one for each radar frequency), an IEEE-488 instrumentation bus, and an Aptec IOC-24 I/O computer. The software development platforms are the VAX 8350 and an IBM PC/AT. A variety of compilers, cross-assemblers, microcode assemblers, and linkers were employed to facilitate development of the system software. Transmittance measurements from each radar are taken forty times per second under control of a VME-based M68020.

  6. Automated validation of a computer operating system

    NASA Technical Reports Server (NTRS)

    Dervage, M. M.; Milberg, B. A.

    1970-01-01

    Programs apply selected input/output loads to complex computer operating system and measure performance of that system under such loads. Technique lends itself to checkout of computer software designed to monitor automated complex industrial systems.

  7. CAESY - COMPUTER AIDED ENGINEERING SYSTEM

    NASA Technical Reports Server (NTRS)

    Wette, M. R.

    1994-01-01

    Many developers of software and algorithms for control system design have recognized that current tools have limits in both flexibility and efficiency. Many forces drive the development of new tools including the desire to make complex system modeling design and analysis easier and the need for quicker turnaround time in analysis and design. Other considerations include the desire to make use of advanced computer architectures to help in control system design, adopt new methodologies in control, and integrate design processes (e.g., structure, control, optics). CAESY was developed to provide a means to evaluate methods for dealing with user needs in computer-aided control system design. It is an interpreter for performing engineering calculations and incorporates features of both Ada and MATLAB. It is designed to be reasonably flexible and powerful. CAESY includes internally defined functions and procedures, as well as user defined ones. Support for matrix calculations is provided in the same manner as MATLAB. However, the development of CAESY is a research project, and while it provides some features which are not found in commercially sold tools, it does not exhibit the robustness that many commercially developed tools provide. CAESY is written in C-language for use on Sun4 series computers running SunOS 4.1.1 and later. The program is designed to optionally use the LAPACK math library. The LAPACK math routines are available through anonymous ftp from research.att.com. CAESY requires 4Mb of RAM for execution. The standard distribution medium is a .25 inch streaming magnetic tape cartridge (QIC-24) in UNIX tar format. CAESY was developed in 1993 and is a copyrighted work with all copyright vested in NASA.

  9. Automated Computer Access Request System

    NASA Technical Reports Server (NTRS)

    Snook, Bryan E.

    2010-01-01

    The Automated Computer Access Request (AutoCAR) system is a Web-based account provisioning application that replaces the time-consuming paper-based computer-access request process at Johnson Space Center (JSC). AutoCAR combines rules-based and role-based functionality in one application to provide a centralized system that is easily and widely accessible. The system features a work-flow engine that facilitates request routing, a user registration directory containing contact information and user metadata, an access request submission and tracking process, and a system administrator account management component. This provides full, end-to-end disposition approval chain accountability from the moment a request is submitted. By blending both rules-based and role-based functionality, AutoCAR has the flexibility to route requests based on a user's nationality, JSC affiliation status, and other export-control requirements, while ensuring a user's request is addressed by either a primary or backup approver. All user accounts that are tracked in AutoCAR are recorded and mapped to the native operating system schema on the target platform where user accounts reside. This allows for future extensibility for supporting creation, deletion, and account management directly on the target platforms by way of AutoCAR. The system's directory-based lookup and day-to-day change analysis of directory information determines personnel moves, deletions, and additions, and automatically notifies a user via e-mail to revalidate his/her account access as a result of such changes. AutoCAR is a Microsoft classic active server page (ASP) application hosted on a Microsoft Internet Information Server (IIS).

  10. Computer Security for the Computer Systems Manager.

    DTIC Science & Technology

    1982-12-01

    [Fragmentary scanned abstract; recoverable content: table-of-contents entries (7.3 CPU Usage by Time of Day; 7.4 Interactive Terminals in a Worldwide Network; 7.5 Data…) and partial text noting that the example illustrates the initial efforts of the federal government to establish a large distributed system of data bases, that the relevant quantities are determined in a more or less subjective manner, and that decomposing threats into threat categories is the first step a manager may take.]

  11. Research on computer systems benchmarking

    NASA Technical Reports Server (NTRS)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance. The performance impact of optimization in the context of our methodology for CPU performance characterization was based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the afore-mentioned accomplishments are more specifically summarized in this report, as well as those smaller in magnitude supported by this grant.

  12. Semi-automatic methods for landslide features and channel network extraction in a complex mountainous terrain: new opportunities but also challenges from high resolution topography

    NASA Astrophysics Data System (ADS)

    Tarolli, Paolo; Sofia, Giulia; Pirotti, Francesco; Dalla Fontana, Giancarlo

    2010-05-01

    In recent years, remotely sensed technologies such as airborne and terrestrial laser scanning have improved the detail of analysis, providing high-resolution, high-quality topographic data over large areas better than other technologies. A new generation of high resolution (~1 m) Digital Terrain Models (DTMs) is now available for different landscapes. These data call for the development of a new generation of methodologies for objective extraction of geomorphic features, such as channel heads, channel networks, bank geometry, landslide scars, service roads, etc. The most important benefit of a high resolution DTM is the detailed recognition of surface features. It is possible to recognize in detail divergent-convex landforms, associated with the dominance of hillslope processes, and convergent-concave landforms, associated with fluvial-dominated erosion. In this work, we test the performance of new methodologies for objective extraction of geomorphic features related to landsliding and channelized processes, in order to provide a semi-automatic method for channel network and landslide feature recognition in a complex mountainous terrain. The methodologies are based on the detection of thresholds derived by statistical analysis of the variability of surface curvature. We considered a study area located in the eastern Italian Alps where a high-quality set of LiDAR data is available and where channel heads, the related channel network, and landslides have been mapped in the field by DGPS. In the analysis we derived 1 m DTMs from bare ground LiDAR points, and we used different smoothing factors for the curvature calculation in order to select the most suitable curvature maps for the recognition of the selected features. Our analyses suggest that: (i) the scale for curvature calculations has to be a function of the scale of the features to be detected; (ii) rougher curvature maps are not optimal as they do not explore a sufficient range at which features occur, while smoother
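
    The core idea, thresholds derived from the statistical variability of surface curvature, can be sketched as follows. The Laplacian-of-Gaussian curvature proxy, the sign convention, and the m·σ threshold are assumptions for illustration; the paper's actual curvature definition and smoothing factors differ.

```python
# Flag strongly convergent cells: smooth the DTM, take a curvature proxy
# (Laplacian) and threshold at m standard deviations above the mean.
# Smoothing scale, sign convention and m are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def convergent_cells(dtm, smooth_sigma=5.0, m=1.0):
    curv = laplace(gaussian_filter(dtm, smooth_sigma))
    threshold = curv.mean() + m * curv.std()
    return curv > threshold          # boolean map of candidate convergent cells
```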

  13. A simple and unsupervised semi-automatic workflow to detect shallow landslides in Alpine areas based on VHR remote sensing data

    NASA Astrophysics Data System (ADS)

    Amato, Gabriele; Eisank, Clemens; Albrecht, Florian

    2017-04-01

    Landslide detection from Earth observation imagery is important preliminary work for landslide mapping, landslide inventories and landslide hazard assessment. In this context, the object-based image analysis (OBIA) concept has been increasingly used over the last decade. Within the framework of the Land@Slide project (Earth observation based landslide mapping: from methodological developments to automated web-based information delivery), a simple, unsupervised, semi-automatic and object-based approach for the detection of shallow landslides has been developed and implemented in the InterIMAGE open-source software. The method was applied to an Alpine case study in western Austria, exploiting spectral information from pansharpened 4-band WorldView-2 satellite imagery (0.5 m spatial resolution) in combination with digital elevation models. First, we divided the image into sub-images, i.e. tiles, and then applied the workflow to each of them without changing the parameters. The workflow was implemented as a top-down approach: at the image tile level, an over-classification of the potential landslide area was produced; the over-estimated area was re-segmented and re-classified in several processing cycles until most false-positive objects had been eliminated. In every step, a segmentation based on the Baatz algorithm generates polygon "candidates" for landslides. At the same time, the average values of the normalized difference vegetation index (NDVI) and brightness are calculated for these polygons; these values are then used as thresholds to perform an object selection in order to improve the quality of the classification results. Empirically determined values of slope and roughness are also used in the selection process. Results for each tile were merged to obtain the landslide map for the test area. For final validation, the landslide map was compared to a geological map and a supervised landslide classification in order to estimate its accuracy
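
    The per-object selection step described above, in which segmented polygons are kept or discarded by comparing their mean NDVI, brightness, slope and roughness against thresholds, can be sketched as follows. Field names and threshold values below are hypothetical, and the segmentation itself (e.g. a Baatz-type algorithm) is assumed to have been done elsewhere.

```python
# Minimal sketch of the per-object selection step; thresholds are hypothetical.
from dataclasses import dataclass
from typing import List

@dataclass
class SegmentStats:
    """Mean attribute values of one segmented polygon ('candidate')."""
    mean_ndvi: float        # vegetation index, low on fresh landslide scars
    mean_brightness: float  # bare soil/rock tends to be bright
    mean_slope: float       # degrees
    mean_roughness: float   # local surface roughness

def select_landslide_candidates(segments: List[SegmentStats],
                                ndvi_max=0.3, brightness_min=400.0,
                                slope_min=15.0, roughness_min=0.05):
    """Keep only objects whose statistics satisfy all (assumed) thresholds."""
    kept = []
    for seg in segments:
        if (seg.mean_ndvi < ndvi_max
                and seg.mean_brightness > brightness_min
                and seg.mean_slope > slope_min
                and seg.mean_roughness > roughness_min):
            kept.append(seg)
    return kept

if __name__ == "__main__":
    candidates = [SegmentStats(0.12, 520.0, 28.0, 0.09),
                  SegmentStats(0.65, 310.0, 10.0, 0.02)]
    print(len(select_landslide_candidates(candidates)))  # -> 1
```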

  14. When does a physical system compute?

    PubMed

    Horsman, Clare; Stepney, Susan; Wagner, Rob C; Kendon, Viv

    2014-09-08

    Computing is a high-level process of a physical system. Recent interest in non-standard computing systems, including quantum and biological computers, has brought this physical basis of computing to the forefront. There has been, however, no consensus on how to tell if a given physical system is acting as a computer or not, leading to confusion over novel computational devices, and even claims that every physical event is a computation. In this paper, we introduce a formal framework that can be used to determine whether a physical system is performing a computation. We demonstrate how the abstract computational level interacts with the physical device level, in comparison with the use of mathematical models in experimental science. This powerful formulation allows a precise description of experiments, technology, computation and simulation, giving our central conclusion: physical computing is the use of a physical system to predict the outcome of an abstract evolution. We give conditions for computing, illustrated using a range of non-standard computing scenarios. The framework also covers broader computing contexts, where there is no obvious human computer user. We introduce the notion of a 'computational entity', and its critical role in defining when computing is taking place in physical systems.

  15. When does a physical system compute?

    PubMed Central

    Horsman, Clare; Stepney, Susan; Wagner, Rob C.; Kendon, Viv

    2014-01-01

    Computing is a high-level process of a physical system. Recent interest in non-standard computing systems, including quantum and biological computers, has brought this physical basis of computing to the forefront. There has been, however, no consensus on how to tell if a given physical system is acting as a computer or not, leading to confusion over novel computational devices, and even claims that every physical event is a computation. In this paper, we introduce a formal framework that can be used to determine whether a physical system is performing a computation. We demonstrate how the abstract computational level interacts with the physical device level, in comparison with the use of mathematical models in experimental science. This powerful formulation allows a precise description of experiments, technology, computation and simulation, giving our central conclusion: physical computing is the use of a physical system to predict the outcome of an abstract evolution. We give conditions for computing, illustrated using a range of non-standard computing scenarios. The framework also covers broader computing contexts, where there is no obvious human computer user. We introduce the notion of a ‘computational entity’, and its critical role in defining when computing is taking place in physical systems. PMID:25197245

  16. A remote assessment system with a vision robot and wearable sensors.

    PubMed

    Zhang, Tong; Wang, Jue; Ren, Yumiao; Li, Jianjun

    2004-01-01

    This paper describes a remote rehabilitation assessment system under ongoing research that has a six-degree-of-freedom, dual-camera ("double-eye") vision robot to capture visual information and a group of wearable sensors to acquire biomechanical signals. A server computer is fixed on the robot to provide services to the robot's controller and all the sensors. The robot is connected to the Internet by a wireless channel, as are the sensors to the robot. Rehabilitation professionals can semi-automatically conduct an assessment program via the Internet. The preliminary results show that the smart device, comprising the robot and the sensors, can improve the quality of remote assessment and reduce the complexity of operation at a distance.

  17. Computer systems and software engineering

    NASA Technical Reports Server (NTRS)

    Mckay, Charles W.

    1988-01-01

    The High Technologies Laboratory (HTL) was established in the fall of 1982 at the University of Houston Clear Lake. Research conducted at the High Tech Lab is focused upon computer systems and software engineering. There is a strong emphasis on the interrelationship of these areas of technology and the United States' space program. In Jan. of 1987, NASA Headquarters announced the formation of its first research center dedicated to software engineering. Operated by the High Tech Lab, the Software Engineering Research Center (SERC) was formed at the University of Houston Clear Lake. The High Tech Lab/Software Engineering Research Center promotes cooperative research among government, industry, and academia to advance the edge-of-knowledge and the state-of-the-practice in key topics of computer systems and software engineering which are critical to NASA. The center also recommends appropriate actions, guidelines, standards, and policies to NASA in matters pertinent to the center's research. Results of the research conducted at the High Tech Lab/Software Engineering Research Center have given direction to many decisions made by NASA concerning the Space Station Program.

  18. Deadlock Detection in Distributed Computing Systems.

    DTIC Science & Technology

    1982-06-01

    With the advent of distributed computing systems, the problem of deadlock, which has been essentially solved for centralized computing systems, has reappeared. Existing centralized deadlock detection techniques are either too expensive or they do not work correctly in distributed computing systems. Additionally, although fault-tolerance is usually listed as an advantage of distributed computing systems, little has been done to analyze ...

  19. On Deadlock Detection in Distributed Computing Systems.

    DTIC Science & Technology

    1983-04-01

    With the advent of distributed computing systems, the problem of deadlock, which has been essentially solved for centralized computing systems, has reappeared. Existing centralized deadlock detection techniques are either too expensive or they do not work correctly in distributed computing systems ...

  20. Impact of new computing systems on finite element computations

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Storassili, O. O.; Fulton, R. E.

    1983-01-01

    Recent advances in computer technology that are likely to impact finite element computations are reviewed. The characteristics of supersystems, highly parallel systems, and small systems (mini and microcomputers) are summarized. The interrelations of numerical algorithms and software with parallel architectures are discussed. A scenario is presented for a future hardware/software environment and finite element systems. A number of research areas which have high potential for improving the effectiveness of finite element analysis in the new environment are identified.

  1. Transient Faults in Computer Systems

    NASA Technical Reports Server (NTRS)

    Masson, Gerald M.

    1993-01-01

    A powerful technique particularly appropriate for the detection of errors caused by transient faults in computer systems was developed. The technique can be implemented in either software or hardware; the research conducted thus far primarily considered software implementations. The error detection technique developed has the distinct advantage of having provably complete coverage of all errors caused by transient faults that affect the output produced by the execution of a program. In other words, the technique does not have to be tuned to a particular error model to enhance error coverage. Also, the correctness of the technique can be formally verified. The technique uses time and software redundancy. The foundation for an effective, low-overhead, software-based certification trail approach to real-time error detection resulting from transient fault phenomena was developed.
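
    As an illustration of the certification-trail idea (time and software redundancy with a cheaply checkable second execution), the classic textbook example uses sorting: the primary execution emits the sorted output together with a trail (the permutation it applied), and an independent checker verifies the answer using that trail. The sketch below is a generic illustration, not the system developed under this grant.

```python
# Generic certification-trail sketch using sorting; not the grant's implementation.
def sort_with_trail(data):
    """Primary execution: return sorted output plus a trail (source indices)."""
    trail = sorted(range(len(data)), key=lambda i: data[i])
    output = [data[i] for i in trail]
    return output, trail

def check_with_trail(data, output, trail):
    """Checker: verify that output is ordered and is the permutation of data
    described by the trail; any transient corruption makes this fail."""
    n = len(data)
    if len(output) != n or sorted(trail) != list(range(n)):
        return False                       # trail is not a valid permutation
    if any(output[i] > output[i + 1] for i in range(n - 1)):
        return False                       # output not in order
    return all(output[k] == data[trail[k]] for k in range(n))

if __name__ == "__main__":
    xs = [5, 3, 8, 1]
    out, tr = sort_with_trail(xs)
    print(check_with_trail(xs, out, tr))   # True unless a fault corrupted a value
```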

  2. System for Computer Automated Typesetting (SCAT) of Computer Authored Texts.

    ERIC Educational Resources Information Center

    Keeler, F. Laurence

    This description of the System for Computer Automated Typesetting (SCAT), an automated system for typesetting text and inserting special graphic symbols in programmed instructional materials created by the computer aided authoring system AUTHOR, provides an outline of the design architecture of the system and an overview including the component…

  3. Integrated Computer System of Management in Logistics

    NASA Astrophysics Data System (ADS)

    Chwesiuk, Krzysztof

    2011-06-01

    This paper aims at presenting a concept of an integrated computer system of management in logistics, particularly in supply and distribution chains. Consequently, the paper includes the basic idea of the concept of computer-based management in logistics and components of the system, such as CAM and CIM systems in production processes, and management systems for storage, materials flow, and for managing transport, forwarding and logistics companies. The platform which integrates computer-aided management systems is that of electronic data interchange.

  4. Digital optical computers at the optoelectronic computing systems center

    NASA Technical Reports Server (NTRS)

    Jordan, Harry F.

    1991-01-01

    The Digital Optical Computing Program within the National Science Foundation Engineering Research Center for Opto-electronic Computing Systems has as its specific goal research on optical computing architectures suitable for use at the highest possible speeds. The program can be targeted toward exploiting the time domain because other programs in the Center are pursuing research on parallel optical systems, exploiting optical interconnection and optical devices and materials. Using a general purpose computing architecture as the focus, we are developing design techniques, tools and architecture for operation at the speed of light limit. Experimental work is being done with the somewhat low speed components currently available but with architectures which will scale up in speed as faster devices are developed. The design algorithms and tools developed for a general purpose, stored program computer are being applied to other systems such as optimally controlled optical communication networks.

  5. The Remote Computer Control (RCC) system

    NASA Technical Reports Server (NTRS)

    Holmes, W.

    1980-01-01

    A system to remotely control job flow on a host computer from any touchtone telephone is briefly described. Using this system, a computer programmer can submit jobs to a host computer from any touchtone telephone. In addition, the system can be instructed by the user to call back when a job is finished. Because of this system, every touchtone telephone becomes a conversant computer peripheral. This system, known as the Remote Computer Control (RCC) system, utilizes touchtone input, touchtone output, voice input, and voice output. The RCC system is microprocessor based and currently uses the INTEL 80/30 microcomputer. Using the RCC system, a user can submit, cancel, and check the status of jobs on a host computer. The RCC system peripherals consist of a CRT for operator control, a printer for logging all activity, mass storage for the storage of user parameters, and a PROM card for program storage.
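
    A minimal sketch of the kind of touchtone command dispatch such a system implies is shown below. The digit codes, terminator convention, and job-control actions are invented for illustration; the actual RCC command set is not described in the abstract.

```python
# Hypothetical DTMF command dispatcher in the spirit of the RCC system.
# Digit codes and actions are invented for illustration only.
def submit_job(job_id):  return f"job {job_id} submitted"
def cancel_job(job_id):  return f"job {job_id} cancelled"
def job_status(job_id):  return f"job {job_id} status requested"

COMMANDS = {
    "1": submit_job,   # e.g. caller presses 1, then a job number, then '#'
    "2": cancel_job,
    "3": job_status,
}

def handle_touchtone(sequence: str) -> str:
    """Parse a DTMF sequence like '1 42 #' -> command digit + job id + terminator."""
    digits = sequence.replace(" ", "")
    if not digits.endswith("#") or len(digits) < 3:
        return "invalid sequence"
    cmd, job_id = digits[0], digits[1:-1]
    action = COMMANDS.get(cmd)
    return action(job_id) if action else "unknown command"

if __name__ == "__main__":
    print(handle_touchtone("1 42 #"))   # -> job 42 submitted
```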

  6. Computer-Assisted Education System for Psychopharmacology.

    ERIC Educational Resources Information Center

    McDougall, William Donald

    An approach to the use of computer assisted instruction (CAI) for teaching psychopharmacology is presented. A project is described in which, using the TUTOR programing language on the PLATO IV computer system, several computer programs were developed to demonstrate the concepts of aminergic transmitters in the central nervous system. Response…

  7. Universal blind quantum computation for hybrid system

    NASA Astrophysics Data System (ADS)

    Huang, He-Liang; Bao, Wan-Su; Li, Tan; Li, Feng-Guang; Fu, Xiang-Qun; Zhang, Shuo; Zhang, Hai-Long; Wang, Xiang

    2017-08-01

    As progress on building quantum computers continues, first-generation practical quantum computers will become available to ordinary users in a cloud style similar to today's IBM Quantum Experience. Clients can remotely access the quantum servers using some simple devices. In such a situation, it is of prime importance to keep the client's information secure. Blind quantum computation protocols enable a client with limited quantum technology to delegate her quantum computation to a quantum server without leaking any privacy. To date, blind quantum computation has been considered only for an individual quantum system. However, a practical universal quantum computer is likely to be a hybrid system. Here, we take the first step toward constructing a framework of blind quantum computation for the hybrid system, which provides a more feasible way for scalable blind quantum computation.

  8. Computer Programs For Automated Welding System

    NASA Technical Reports Server (NTRS)

    Agapakis, John E.

    1993-01-01

    Computer programs developed for use in controlling automated welding system described in MFS-28578. Together with control computer, computer input and output devices and control sensors and actuators, provide flexible capability for planning and implementation of schemes for automated welding of specific workpieces. Developed according to macro- and task-level programming schemes, which increases productivity and consistency by reducing amount of "teaching" of system by technician. System provides for three-dimensional mathematical modeling of workpieces, work cells, robots, and positioners.

  9. Specification of Computer Systems by Objectives.

    ERIC Educational Resources Information Center

    Eltoft, Douglas

    1989-01-01

    Discusses the evolution of mainframe and personal computers, and presents a case study of a network developed at the University of Iowa called the Iowa Computer-Aided Engineering Network (ICAEN) that combines Macintosh personal computers with Apollo workstations. Functional objectives are stressed as the best measure of system performance. (LRW)

  10. Monochromator Stabilization System at SPring-8

    SciTech Connect

    Kudo, Togo; Tanida, Hajime; Inoue, Shinobu; Hirono, Toko; Furukakwa, Yukito; Suzuki, Motohiro

    2007-01-19

    A monochromator stabilization system with a semi-automatic tuning procedure has been developed. The system comprises an X-ray beam position/intensity monitor, a control electronics unit, a computer program that operates on a personal computer or workstation, a piezo translator attached to the first crystal of a double crystal monochromator, and a phase-sensitive detector as an optional component. The system suppressed the fluctuations of the photon beam intensity and the beam position by ~0.1% and ~1 µm, respectively, at the sample locations in the beamlines of SPring-8 with a frequency of less than 10 Hz. The system with the phase-sensitive detector holds the peak of the rocking curve of the double crystal monochromator, which is effective in reducing the time required to perform the energy scan measurement.
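
    The peak-holding behaviour with the phase-sensitive detector can be illustrated by a generic extremum-seeking loop: dither the piezo voltage, estimate the local slope of the rocking curve from the intensity response, and step toward the peak. The sketch below is schematic, with invented signal shape and gains; it is not the SPring-8 control code.

```python
# Schematic extremum-seeking (peak-holding) loop; curve shape and gains invented.
import math

def rocking_curve(v):
    """Stand-in for the measured intensity vs. piezo voltage (peak near v = 0.7)."""
    return math.exp(-((v - 0.7) ** 2) / 0.5)

def hold_peak(v0=0.0, dither=0.01, gain=0.2, steps=200):
    v = v0
    for _ in range(steps):
        # Estimate the slope dI/dv from a +/- dither (the role played by the
        # phase-sensitive detection in the real system).
        i_plus = rocking_curve(v + dither)
        i_minus = rocking_curve(v - dither)
        slope = (i_plus - i_minus) / (2 * dither)
        v += gain * slope          # climb toward the maximum of the rocking curve
    return v

if __name__ == "__main__":
    print(round(hold_peak(), 3))   # converges near 0.7
```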

  11. Reliability models for dataflow computer systems

    NASA Technical Reports Server (NTRS)

    Kavi, K. M.; Buckles, B. P.

    1985-01-01

    The demands for concurrent operation within a computer system and the representation of parallelism in programming languages have yielded a new form of program representation known as data flow (DENN 74, DENN 75, TREL 82a). A new model based on data flow principles for parallel computations and parallel computer systems is presented. Necessary conditions for liveness and deadlock freeness in data flow graphs are derived. The data flow graph is used as a model to represent asynchronous concurrent computer architectures including data flow computers.

  12. Task allocation in a distributed computing system

    NASA Technical Reports Server (NTRS)

    Seward, Walter D.

    1987-01-01

    A conceptual framework is examined for task allocation in distributed systems. Application and computing system parameters critical to task allocation decision processes are discussed. Task allocation techniques are addressed which focus on achieving a balance in the load distribution among the system's processors. Equalization of computing load among the processing elements is the goal. Examples of system performance are presented for specific applications. Both static and dynamic allocation of tasks are considered and system performance is evaluated using different task allocation methodologies.
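
    A minimal sketch of static allocation aimed at equalizing load is shown below, using a greedy longest-processing-time heuristic. This is a generic illustration of load balancing, not the specific allocation techniques evaluated in the report.

```python
# Generic greedy (longest-processing-time) static task allocation; illustration only.
import heapq

def allocate_tasks(task_costs, num_processors):
    """Assign each task to the currently least-loaded processor.

    task_costs     : list of estimated computation costs, one per task
    num_processors : number of processing elements
    returns        : (per-processor load, per-processor task lists)
    """
    heap = [(0.0, p) for p in range(num_processors)]   # (load, processor id)
    heapq.heapify(heap)
    assignment = [[] for _ in range(num_processors)]
    loads = [0.0] * num_processors
    # Placing the longest tasks first reduces the final load imbalance.
    for task, cost in sorted(enumerate(task_costs), key=lambda t: -t[1]):
        load, p = heapq.heappop(heap)
        assignment[p].append(task)
        loads[p] = load + cost
        heapq.heappush(heap, (loads[p], p))
    return loads, assignment

if __name__ == "__main__":
    loads, assignment = allocate_tasks([7, 3, 3, 2, 2, 2, 1], 3)
    print(loads)   # roughly balanced, e.g. [7.0, 7.0, 6.0]
```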

  13. System Level Applications of Adaptive Computing (SLAAC)

    DTIC Science & Technology

    2003-11-01

    AFRL-IF-RS-TR-2003-255, Final Technical Report, November 2003; reporting period July 1997 – May 2003. Excerpts: "... importance are described further in Sandia’s final report." "6.2 Technical Approach: The CDI algorithm requires significantly more computation power and ..."

  14. Computer Microvision for Microelectromechanical Systems (MEMS)

    DTIC Science & Technology

    2003-11-01

    AFRL-IF-RS-TR-2003-270, Final Technical Report, November 2003; reporting period May 1997 – June 2003. Author: Dennis M. Freeman. Excerpt: "... developed a patented multi-beam interferometric method for imaging MEMS, launched a collaborative Computer Microvision Remote Test Facility using DARPA’s ..."

  15. Computer-Controlled, Motorized Positioning System

    NASA Technical Reports Server (NTRS)

    Vargas-Aburto, Carlos; Liff, Dale R.

    1994-01-01

    Computer-controlled, motorized positioning system developed for use in robotic manipulation of samples in custom-built secondary-ion mass spectrometry (SIMS) system. Positions sample repeatably and accurately, even during analysis, in three orthogonal linear coordinates and one angular coordinate, under manual local control, microprocessor-based local control, or remote control by computer via general-purpose interface bus (GPIB).

  16. Advanced Hybrid Computer Systems. Software Technology.

    DTIC Science & Technology

    This software technology final report evaluates advances made in Advanced Hybrid Computer System software technology. The report describes what automatic patching software is available, as well as which analog/hybrid programming languages would be most feasible for the Advanced Hybrid Computer ... compiler software. The problem of how software would interface with the hybrid system is also presented.

  17. Computer-Controlled, Motorized Positioning System

    NASA Technical Reports Server (NTRS)

    Vargas-Aburto, Carlos; Liff, Dale R.

    1994-01-01

    Computer-controlled, motorized positioning system developed for use in robotic manipulation of samples in custom-built secondary-ion mass spectrometry (SIMS) system. Positions sample repeatably and accurately, even during analysis, in three orthogonal linear coordinates and one angular coordinate, under manual local control, microprocessor-based local control, or remote control by computer via general-purpose interface bus (GPIB).

  18. Computer Literacy in a Distance Education System

    ERIC Educational Resources Information Center

    Farajollahi, Mehran; Zandi, Bahman; Sarmadi, Mohamadreza; Keshavarz, Mohsen

    2015-01-01

    In a Distance Education (DE) system, students must be equipped with seven skills of computer (ICDL) usage. This paper aims at investigating the effect of a DE system on the computer literacy of Master of Arts students at Tehran University. The design of this study is quasi-experimental. Pre-test and post-test were used in both control and…

  19. Computer Literacy in a Distance Education System

    ERIC Educational Resources Information Center

    Farajollahi, Mehran; Zandi, Bahman; Sarmadi, Mohamadreza; Keshavarz, Mohsen

    2015-01-01

    In a Distance Education (DE) system, students must be equipped with seven skills of computer (ICDL) usage. This paper aims at investigating the effect of a DE system on the computer literacy of Master of Arts students at Tehran University. The design of this study is quasi-experimental. Pre-test and post-test were used in both control and…

  20. Automating the segmentation of medical images for the production of voxel tomographic computational models.

    PubMed

    Caon, M; Mohyla, J

    2001-12-01

    Radiation dosimetry for the diagnostic medical imaging procedures performed on humans requires anatomically accurate, computational models. These may be constructed from medical images as voxel-based tomographic models. However, they are time consuming to produce and as a consequence, there are few available. This paper discusses the emergence of semi-automatic segmentation techniques and describes an application (iRAD) written in Microsoft Visual Basic that allows the bitmap of a medical image to be segmented interactively and semi-automatically while displayed in Microsoft Excel. iRAD will decrease the time required to construct voxel models.
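
    The interactive, semi-automatic segmentation style described (an operator indicates a region and the software extends it within a tolerance) can be sketched generically as seed-based region growing. The seed-and-tolerance scheme below is an assumption for illustration, not iRAD's actual algorithm, which is not specified in the abstract.

```python
# Generic seed-based region growing for bitmap segmentation; illustration only.
from collections import deque
import numpy as np

def grow_region(image, seed, tolerance=30):
    """Return a boolean mask of pixels connected to `seed` whose grey value
    differs from the seed value by at most `tolerance`."""
    h, w = image.shape
    seed_val = int(image[seed])
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < h and 0 <= cc < w and not mask[rr, cc]
                    and abs(int(image[rr, cc]) - seed_val) <= tolerance):
                mask[rr, cc] = True
                queue.append((rr, cc))
    return mask

if __name__ == "__main__":
    img = np.zeros((50, 50), dtype=np.uint8)
    img[10:30, 10:30] = 200                  # bright "organ" region
    organ = grow_region(img, seed=(15, 15))
    print(organ.sum())                       # -> 400 pixels segmented
```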

  1. Automated Fuel Element Closure Welding System

    SciTech Connect

    Wahlquist, D.R.

    1993-03-01

    The Automated Fuel Element Closure Welding System is a robotic device that will load and weld top end plugs onto nuclear fuel elements in a highly radioactive and inert gas environment. The system was developed at Argonne National Laboratory-West as part of the Fuel Cycle Demonstration. The welding system performs four main functions: it (1) injects a small amount of a xenon/krypton gas mixture into specific fuel elements, (2) loads tiny end plugs into the tops of fuel element jackets, (3) welds the end plugs to the element jackets, and (4) performs a dimensional inspection of the pre- and post-welded fuel elements. The system components are modular to facilitate remote replacement of failed parts. The entire system can be operated remotely in manual, semi-automatic, or fully automatic modes using a computer control system. The welding system is currently undergoing software testing and functional checkout.

  2. Automated Fuel Element Closure Welding System

    SciTech Connect

    Wahlquist, D.R.

    1993-01-01

    The Automated Fuel Element Closure Welding System is a robotic device that will load and weld top end plugs onto nuclear fuel elements in a highly radioactive and inert gas environment. The system was developed at Argonne National Laboratory-West as part of the Fuel Cycle Demonstration. The welding system performs four main functions: it (1) injects a small amount of a xenon/krypton gas mixture into specific fuel elements, (2) loads tiny end plugs into the tops of fuel element jackets, (3) welds the end plugs to the element jackets, and (4) performs a dimensional inspection of the pre- and post-welded fuel elements. The system components are modular to facilitate remote replacement of failed parts. The entire system can be operated remotely in manual, semi-automatic, or fully automatic modes using a computer control system. The welding system is currently undergoing software testing and functional checkout.

  3. Biomolecular computing systems: principles, progress and potential.

    PubMed

    Benenson, Yaakov

    2012-06-12

    The task of information processing, or computation, can be performed by natural and man-made 'devices'. Man-made computers are made from silicon chips, whereas natural 'computers', such as the brain, use cells and molecules. Computation also occurs on a much smaller scale in regulatory and signalling pathways in individual cells and even within single biomolecules. Indeed, much of what we recognize as life results from the remarkable capacity of biological building blocks to compute in highly sophisticated ways. Rational design and engineering of biological computing systems can greatly enhance our ability to study and to control biological systems. Potential applications include tissue engineering and regeneration and medical treatments. This Review introduces key concepts and discusses recent progress that has been made in biomolecular computing.

  4. Computers as Augmentative Communication Systems.

    ERIC Educational Resources Information Center

    Vanderheiden, Gregg C.

    The paper describes concepts and principles resulting in successful applications of computer technology to the needs of the disabled. The first part describes what a microcomputer is and is not, emphasizing the microcomputer as a machine that simply carries out instructions, the role of programming, and the use of prepared application programs.…

  5. Partitioning of regular computation on multiprocessor systems

    NASA Technical Reports Server (NTRS)

    Lee, Fung Fung

    1988-01-01

    Problem partitioning of regular computation over two dimensional meshes on multiprocessor systems is examined. The regular computation model considered involves repetitive evaluation of values at each mesh point with local communication. The computational workload and the communication pattern are the same at each mesh point. The regular computation model arises in numerical solutions of partial differential equations and simulations of cellular automata. Given a communication pattern, a systematic way to generate a family of partitions is presented. The influence of various partitioning schemes on performance is compared on the basis of computation to communication ratio.
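
    The computation-to-communication comparison described in the abstract can be made concrete for two common partitions of an N×N mesh with nearest-neighbour (five-point) communication: square blocks exchange data across four edges, row strips across two. The functions below use the standard surface-to-volume estimates, assuming one unit of work per mesh point per iteration; they illustrate the comparison, not the paper's specific partition family.

```python
# Computation-to-communication ratio for two standard partitions of an N x N mesh
# with nearest-neighbour communication; one work unit per mesh point assumed.
import math

def block_partition_ratio(n, p):
    """Square blocks of (n/sqrt(p)) x (n/sqrt(p)) points: comm ~ 4 * n / sqrt(p)."""
    side = n / math.sqrt(p)
    computation = side * side
    communication = 4 * side
    return computation / communication

def strip_partition_ratio(n, p):
    """Row strips of n/p rows: comm ~ 2 * n boundary points per processor."""
    computation = (n / p) * n
    communication = 2 * n
    return computation / communication

if __name__ == "__main__":
    n, p = 1024, 64
    print(round(block_partition_ratio(n, p), 1))  # 32.0 -> better locality
    print(round(strip_partition_ratio(n, p), 1))  # 8.0
```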

  6. Partitioning of regular computation on multiprocessor systems

    NASA Technical Reports Server (NTRS)

    Lee, Fung F.

    1990-01-01

    Problem partitioning of regular computation over two dimensional meshes on multiprocessor systems is examined. The regular computation model considered involves repetitive evaluation of values at each mesh point with local communication. The computational workload and the communication pattern are the same at each mesh point. The regular computation model arises in numerical solutions of partial differential equations and simulations of cellular automata. Given a communication pattern, a systematic way to generate a family of partitions is presented. The influence of various partitioning schemes on performance is compared on the basis of computation to communication ratio.

  7. Partitioning of regular computation on multiprocessor systems

    SciTech Connect

    Lee, F. (Computer Systems Lab.)

    1990-07-01

    Problem partitioning of regular computation over two-dimensional meshes on multiprocessor systems is examined. The regular computation model considered involves repetitive evaluation of values at each mesh point with local communication. The computational workload and the communication pattern are the same at each mesh point. The regular computation model arises in numerical solutions of partial differential equations and simulations of cellular automata. Given a communication pattern, a systematic way to generate a family of partitions is presented. The influence of various partitioning schemes on performance is compared on the basis of computation to communication ratio.

  8. Autonomic Computing for Spacecraft Ground Systems

    NASA Technical Reports Server (NTRS)

    Li, Zhenping; Savkli, Cetin; Jones, Lori

    2007-01-01

    Autonomic computing for spacecraft ground systems increases the system reliability and reduces the cost of spacecraft operations and software maintenance. In this paper, we present an autonomic computing solution for spacecraft ground systems at NASA Goddard Space Flight Center (GSFC), which consists of an open standard for a message oriented architecture referred to as the GMSEC architecture (Goddard Mission Services Evolution Center), and an autonomic computing tool, the Criteria Action Table (CAT). This solution has been used in many upgraded ground systems for NASA's missions, and provides a framework for developing solutions with higher autonomic maturity.

  9. Laser Imaging Systems For Computer Vision

    NASA Astrophysics Data System (ADS)

    Vlad, Ionel V.; Ionescu-Pallas, Nicholas; Popa, Dragos; Apostol, Ileana; Vlad, Adriana; Capatina, V.

    1989-05-01

    Computer vision is becoming an essential feature of high-level artificial intelligence. Laser imaging systems act as a special kind of image preprocessor/converter, extending the access of computer "intelligence" to inspection, analysis and decision-making in new "worlds": nanometric, three-dimensional (3D), ultrafast, hostile to humans, etc. Considering that the heart of the problem is the matching of optical methods and computer software, some of the most promising interferometric, projection and diffraction systems are reviewed, with discussion of our present results and of their potential for precise 3D computer vision.

  10. Computer Bits: The Ideal Computer System for Your Center.

    ERIC Educational Resources Information Center

    Brown, Dennis; Neugebauer, Roger

    1986-01-01

    Reviews five computer systems that can address the needs of a child care center: (1) Sperry PC IT with Bernoulli Box, (2) Compaq DeskPro 286, (3) Macintosh Plus, (4) Epson Equity II, and (5) Leading Edge Model "D." (HOD)

  11. Computer Bits: The Ideal Computer System for Your Center.

    ERIC Educational Resources Information Center

    Brown, Dennis; Neugebauer, Roger

    1986-01-01

    Reviews five computer systems that can address the needs of a child care center: (1) Sperry PC IT with Bernoulli Box, (2) Compaq DeskPro 286, (3) Macintosh Plus, (4) Epson Equity II, and (5) Leading Edge Model "D." (HOD)

  12. MTA Computer Based Evaluation System.

    ERIC Educational Resources Information Center

    Brenner, Lisa P.; And Others

    The MTA PLATO-based evaluation system, which has been implemented by a consortium of schools of medical technology, is designed to be general-purpose, modular, data-driven, and interactive, and to accommodate other national and local item banks. The system provides a comprehensive interactive item-banking system in conjunction with online student…

  13. MTA Computer Based Evaluation System.

    ERIC Educational Resources Information Center

    Brenner, Lisa P.; And Others

    The MTA PLATO-based evaluation system, which has been implemented by a consortium of schools of medical technology, is designed to be general-purpose, modular, data-driven, and interactive, and to accommodate other national and local item banks. The system provides a comprehensive interactive item-banking system in conjunction with online student…

  14. Computer-Based Medical System

    NASA Technical Reports Server (NTRS)

    1998-01-01

    SYMED, Inc., developed a unique electronic medical records and information management system. The S2000 Medical Interactive Care System (MICS) incorporates both a comprehensive and interactive medical care support capability and an extensive array of digital medical reference materials in either text or high resolution graphic form. The system was designed, in cooperation with NASA, to improve the effectiveness and efficiency of physician practices. The S2000 is an MS (Microsoft) Windows-based software product which combines electronic forms, medical documents, and records management, and features a comprehensive medical information system for medical diagnostic support and treatment. SYMED, Inc. offers access to its medical systems to all companies seeking competitive advantages.

  15. Micro-computer system aids in scheduling

    SciTech Connect

    Anker, R.

    1986-11-01

    MLP (Mainline Pipeline Ltd.) is a petroleum pipe line system operated by Esso since 1973 on behalf of itself and three other oil companies. To assist in pre- and post-startup planning of the revised pipe line network, Scicon developed a computer-based system based on a Scicon program, SCI-CLOPS. This pipe line simulator applies to a wide range of pipe line networks. By using a computer-based system, Esso plans operations far more effectively than with manual methods.

  16. ESPC Computational Efficiency of Earth System Models

    DTIC Science & Technology

    2014-09-30

    Approved for public release; distribution is unlimited. ESPC Computational Efficiency of Earth System Models; reporting period 2014. Excerpt: "... optimization in this system." Figure 1 – Plot showing seconds per forecast day wallclock time for a T639L64 (~21 km at the equator) NAVGEM ...

  17. Advanced Computed-Tomography Inspection System

    NASA Technical Reports Server (NTRS)

    Harris, Lowell D.; Gupta, Nand K.; Smith, Charles R.; Bernardi, Richard T.; Moore, John F.; Hediger, Lisa

    1993-01-01

    Advanced Computed Tomography Inspection System (ACTIS) is computed-tomography x-ray apparatus revealing internal structures of objects in wide range of sizes and materials. Three x-ray sources and adjustable scan geometry give system unprecedented versatility. Gantry contains translation and rotation mechanisms scanning x-ray beam through object inspected. Distance between source and detector towers varied to suit object. System used in such diverse applications as development of new materials, refinement of manufacturing processes, and inspection of components.

  18. Computer Jet-Engine-Monitoring System

    NASA Technical Reports Server (NTRS)

    Disbrow, James D.; Duke, Eugene L.; Ray, Ronald J.

    1992-01-01

    "Intelligent Computer Assistant for Engine Monitoring" (ICAEM), computer-based monitoring system intended to distill and display data on conditions of operation of two turbofan engines of F-18, is in preliminary state of development. System reduces burden on propulsion engineer by providing single display of summary information on statuses of engines and alerting engineer to anomalous conditions. Effective use of prior engine-monitoring system requires continuous attention to multiple displays.

  19. Computer Jet-Engine-Monitoring System

    NASA Technical Reports Server (NTRS)

    Disbrow, James D.; Duke, Eugene L.; Ray, Ronald J.

    1992-01-01

    "Intelligent Computer Assistant for Engine Monitoring" (ICAEM), computer-based monitoring system intended to distill and display data on conditions of operation of two turbofan engines of F-18, is in preliminary state of development. System reduces burden on propulsion engineer by providing single display of summary information on statuses of engines and alerting engineer to anomalous conditions. Effective use of prior engine-monitoring system requires continuous attention to multiple displays.

  20. Design and Implementation of Instructional Computer Systems.

    ERIC Educational Resources Information Center

    Graczyk, Sandra L.

    1989-01-01

    Presents an input-process-output (IPO) model that can facilitate the design and implementation of instructional micro and minicomputer systems in school districts. A national survey of school districts with outstanding computer systems is described, a systems approach to develop the model is explained, and evaluation of the system is discussed.…

  1. Selecting and Implementing the Right Computer System.

    ERIC Educational Resources Information Center

    Evancoe, Donna Clark

    1985-01-01

    Steps that should be followed in choosing and implementing an administrative computer system are discussed. Three stages are involved: institutional assessment, system selection, and implementation. The first step is to define the current status of the data processing systems and the management information systems at the institutions. Future…

  2. SUMC fault tolerant computer system

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The results of the trade studies are presented. These trades cover: establishing the basic configuration, establishing the CPU/memory configuration, establishing an approach to crosstrapping interfaces, defining the requirements of the redundancy management unit (RMU), establishing a spare plane switching strategy for the fault-tolerant memory (FTM), and identifying the most cost effective way of extending the memory addressing capability beyond the 64 K-bytes (K=1024) of SUMC-II B. The results of the design are compiled in Contract End Item (CEI) Specification for the NASA Standard Spacecraft Computer II (NSSC-II), IBM 7934507. The implementation of the FTM and the memory address expansion is also described.

  3. Computation of Weapons Systems Effectiveness

    DTIC Science & Technology

    2013-09-01

    [Figure excerpt: compute adjusted REP/DEP and CEP in range and deflection; obtain ballistic partials from a zero-drag trajectory program; error terms include σx-harp angle, σx-slant range, and σVx-aircraft.] The last method is to take the harp angle of the weapon as the impact angle, to cater for the scenario where the weapon flies directly to the target upon weapon release, as laser guidance is available throughout its flight. The harp angle is the line-of-sight (LOS) angle between the aircraft and ...

  4. Aviation Safety Modeling and Simulation (ASMM) Propulsion Fleet Modeling: A Tool for Semi-Automatic Construction of CORBA-based Applications from Legacy Fortran Programs

    NASA Technical Reports Server (NTRS)

    Sang, Janche

    2003-01-01

    Within NASA's Aviation Safety Program, NASA GRC participates in the Modeling and Simulation Project called ASMM. NASA GRC's focus is to characterize propulsion system performance from a fleet management and maintenance perspective, by modeling and, through simulation, predicting the characteristics of two classes of commercial engines (CFM56 and GE90). In prior years, the High Performance Computing and Communication (HPCC) program funded NASA Glenn to develop large-scale, detailed simulations for the analysis and design of aircraft engines, called the Numerical Propulsion System Simulation (NPSS). Three major aspects of this modeling (the integration of different engine components, the coupling of multiple disciplines, and engine component zooming at the appropriate level of fidelity) require relatively tight coupling of different analysis codes. Most of these codes in aerodynamics and solid mechanics are written in Fortran. Refitting these legacy Fortran codes with distributed objects can increase their reusability. Aviation Safety's modeling and simulation use in characterizing fleet management has similar needs. The modeling and simulation of these propulsion systems use existing Fortran and C codes that are instrumental in determining the performance of the fleet. The research centers on building a CORBA-based development environment for programmers to easily wrap and couple legacy Fortran codes. This environment consists of a C++ wrapper library to hide the details of CORBA and an efficient remote variable scheme to facilitate data exchange between the client and the server model. Additionally, a Web Service model should also be constructed for evaluation of this technology's use over the next two to three years.

  5. Aviation Safety Modeling and Simulation (ASMM) Propulsion Fleet Modeling: A Tool for Semi-Automatic Construction of CORBA-based Applications from Legacy Fortran Programs

    NASA Technical Reports Server (NTRS)

    Sang, Janche

    2003-01-01

    Within NASA's Aviation Safety Program, NASA GRC participates in the Modeling and Simulation Project called ASMM. NASA GRC's focus is to characterize propulsion system performance from a fleet management and maintenance perspective, by modeling and, through simulation, predicting the characteristics of two classes of commercial engines (CFM56 and GE90). In prior years, the High Performance Computing and Communication (HPCC) program funded NASA Glenn to develop large-scale, detailed simulations for the analysis and design of aircraft engines, called the Numerical Propulsion System Simulation (NPSS). Three major aspects of this modeling (the integration of different engine components, the coupling of multiple disciplines, and engine component zooming at the appropriate level of fidelity) require relatively tight coupling of different analysis codes. Most of these codes in aerodynamics and solid mechanics are written in Fortran. Refitting these legacy Fortran codes with distributed objects can increase their reusability. Aviation Safety's modeling and simulation use in characterizing fleet management has similar needs. The modeling and simulation of these propulsion systems use existing Fortran and C codes that are instrumental in determining the performance of the fleet. The research centers on building a CORBA-based development environment for programmers to easily wrap and couple legacy Fortran codes. This environment consists of a C++ wrapper library to hide the details of CORBA and an efficient remote variable scheme to facilitate data exchange between the client and the server model. Additionally, a Web Service model should also be constructed for evaluation of this technology's use over the next two to three years.

  6. Understanding the computing system domain of advanced computing with microcomputers

    SciTech Connect

    Hake, K.A.

    1990-01-01

    Accepting the challenge by the Executive Office of the President, Office of Science and Technology Policy for research to keep pace with technology, the author surveys the knowledge domain of advanced microcomputers. The paper provides a general background for social scientists in technology traditionally relegated to computer science and engineering. The concept of systems integration serves as a framework of understanding for the various elements of the knowledge domain of advanced microcomputing. The systems integration framework is viewed as a series of interrelated building blocks composed of the domain elements. These elements are: the processor platform, operating system, display technology, mass storage, application software, and human-computer interface. References come from recent articles in popular magazines and journals to help emphasize the easy access of this information, its appropriate technical level for the social scientist, and its transient currency. 78 refs., 3 figs.

  7. Computer simulation of breathing systems for divers

    SciTech Connect

    Sexton, P.G.; Nuckols, M.L.

    1983-02-01

    A powerful new tool for the analysis and design of underwater breathing gas systems is being developed. A versatile computer simulator is described which makes possible the modular "construction" of any conceivable breathing gas system from computer memory-resident components. The analysis of a typical breathing gas system is demonstrated using this simulation technique, and the effects of system modifications on performance of the breathing system are shown. This modeling technique will ultimately serve as the foundation for a proposed breathing system simulator under development by the Navy. The marriage of this computer modeling technique with an interactive graphics system will provide the designer with an efficient, cost-effective tool for the development of new and improved diving systems.

  8. Method and system for benchmarking computers

    DOEpatents

    Gustafson, John L.

    1993-09-14

    A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.
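
    The central idea of the patented method (fix the time budget, then measure how far through a scalable set of tasks each machine gets) can be sketched as follows. The workload and growth schedule below are placeholders for illustration, not the patented task set.

```python
# Sketch of fixed-time (scalable-work) benchmarking: run ever larger problem
# sizes until the time budget expires and report the largest size completed.
# The workload below is a placeholder, not the patented task set.
import time

def workload(n):
    """Placeholder scalable task: its result becomes more refined as n grows."""
    return sum(i * i for i in range(n))

def fixed_time_benchmark(budget_seconds=1.0, start_n=1000, growth=2):
    n, finished = start_n, 0
    deadline = time.perf_counter() + budget_seconds
    while True:
        workload(n)
        if time.perf_counter() > deadline:
            break                 # this size did not fit within the budget
        finished = n              # largest size completed within the budget
        n *= growth
    return finished

if __name__ == "__main__":
    # The returned size acts as the benchmark rating: a faster machine
    # progresses further through the scalable task set in the same interval.
    print("largest problem size completed:", fixed_time_benchmark())
```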

  9. Refurbishment program of HANARO control computer system

    SciTech Connect

    Kim, H. K.; Choe, Y. S.; Lee, M. W.; Doo, S. K.; Jung, H. S.

    2012-07-01

    HANARO, an open-tank-in-pool type research reactor with 30 MW thermal power, achieved its first criticality in 1995. The programmable controller system MLC (Multi Loop Controller) manufactured by MOORE has been used to control and regulate HANARO since 1995. We made a plan to replace the control computer because the system supplier no longer provided technical support, and thus no spare parts were available. Aged and obsolete equipment and the shortage of spare parts could have caused great problems. The first consideration of replacing the control computer dates back to 2007. The supplier no longer produced MLC components, so the system could no longer be guaranteed. We established the upgrade and refurbishment program in 2009 so as to keep HANARO up to date in terms of safety. We designed the new control computer system that would replace MLC. The new computer system is HCCS (HANARO Control Computer System). The refurbishing activity is in progress and will finish in 2013. The goal of the refurbishment program is a functional replacement of the reactor control system, in consideration of suitable interfaces, installation and commissioning without a special outage, and no change to the well-proven operation philosophy. HCCS is a DCS (Discrete Control System) using PLCs manufactured by RTP. To enhance reliability, we adopt a triple-processor system, a double I/O system, and a hot-swapping function. This paper describes the refurbishment program of the HANARO control system, including the design requirements of HCCS. (authors)
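
    The reliability benefit of a triple-processor arrangement such as the one mentioned above typically comes from majority voting across the redundant channels. The two-out-of-three voter below is a generic illustration of that principle, not the HCCS logic.

```python
# Generic 2-out-of-3 majority voter, the usual basis of a triple-processor
# (triple modular redundancy) arrangement; illustration only, not HCCS code.
from collections import Counter

def vote(readings):
    """Return (value, healthy) for three redundant channel readings.

    If at least two channels agree, their value wins and the system is healthy;
    otherwise fall back to the first channel and flag the disagreement.
    """
    value, count = Counter(readings).most_common(1)[0]
    if count >= 2:
        return value, True
    return readings[0], False

if __name__ == "__main__":
    print(vote([1, 1, 1]))   # (1, True)   all channels agree
    print(vote([1, 7, 1]))   # (1, True)   one faulty channel outvoted
    print(vote([1, 7, 9]))   # (1, False)  no majority -> raise an alarm
```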

  10. Computer Reconstruction of Plant Growth and Chlorophyll Fluorescence Emission in Three Spatial Dimensions

    PubMed Central

    Bellasio, Chandra; Olejníčková, Julie; Tesař, Radek; Šebela, David; Nedbal, Ladislav

    2012-01-01

    Plant leaves grow and change their orientation, as well as their emission of chlorophyll fluorescence, over time. All these dynamic plant properties can be semi-automatically monitored by a 3D imaging system that generates plant models by the method of coded light illumination, fluorescence imaging and computer 3D reconstruction. Here, we describe the essentials of the method, as well as the system hardware. We show that the technique can reconstruct, with a high fidelity, the leaf size, the leaf angle and the plant height. The method fails with wilted plants when leaves overlap, obscuring their true area. This effect, naturally, also interferes when the method is applied to measure plant growth under water stress. The method is, however, very potent in capturing the plant dynamics under mild stress and without stress. The 3D reconstruction is also highly effective in correcting geometrical factors that distort measurements of chlorophyll fluorescence emission of naturally positioned plant leaves. PMID:22368511

  11. Computer reconstruction of plant growth and chlorophyll fluorescence emission in three spatial dimensions.

    PubMed

    Bellasio, Chandra; Olejníčková, Julie; Tesař, Radek; Sebela, David; Nedbal, Ladislav

    2012-01-01

    Plant leaves grow and change their orientation, as well as their emission of chlorophyll fluorescence, over time. All these dynamic plant properties can be semi-automatically monitored by a 3D imaging system that generates plant models by the method of coded light illumination, fluorescence imaging and computer 3D reconstruction. Here, we describe the essentials of the method, as well as the system hardware. We show that the technique can reconstruct, with a high fidelity, the leaf size, the leaf angle and the plant height. The method fails with wilted plants when leaves overlap, obscuring their true area. This effect, naturally, also interferes when the method is applied to measure plant growth under water stress. The method is, however, very potent in capturing the plant dynamics under mild stress and without stress. The 3D reconstruction is also highly effective in correcting geometrical factors that distort measurements of chlorophyll fluorescence emission of naturally positioned plant leaves.

  12. Computer-aided dispatching system design specification

    SciTech Connect

    Briggs, M.G.

    1997-12-16

    This document defines the performance requirements for a graphic display dispatching system to support Hanford Patrol Operations Center. This document reflects the as-built requirements for the system that was delivered by GTE Northwest, Inc. This system provided a commercial off-the-shelf computer-aided dispatching system and alarm monitoring system currently in operations at the Hanford Patrol Operations Center, Building 2721E. This system also provides alarm back-up capability for the Plutonium Finishing Plant (PFP).

  13. Data security in medical computer systems.

    PubMed

    White, R

    1986-10-01

    A computer is secure if it works reliably and if problems that do arise can be corrected easily. The steps that can be taken to ensure hardware, software, procedural, physical, and legal security are outlined. Most computer systems are vulnerable because their operators do not have sufficient procedural safeguards in place.

  14. Design of a modular digital computer system

    NASA Technical Reports Server (NTRS)

    1973-01-01

    A design tradeoff study is reported for a modular spaceborne computer system that is responsive to many mission types and phases. The computer uses redundancy to maximize reliability, and multiprocessing to maximize processing capacity. Fault detection and recovery features provide optimal reliability.

  15. A Hierarchical Architecture for Computer Mail Systems,

    DTIC Science & Technology

    1981-05-01

    Keywords: Electronic Mail; Computer Message Systems; Computer Mail. Abstract (excerpt): In this paper we present an ...

  16. Design of a modular digital computer system

    NASA Technical Reports Server (NTRS)

    1980-01-01

    A Central Control Element (CCE) module which controls the Automatically Reconfigurable Modular System (ARMS) and allows both redundant processing and multi-computing in the same computer with real time mode switching, is discussed. The same hardware is used for either reliability enhancement, speed enhancement, or for a combination of both.

  17. Automatic system for computer program documentation

    NASA Technical Reports Server (NTRS)

    Simmons, D. B.; Elliott, R. W.; Arseven, S.; Colunga, D.

    1972-01-01

    Work done on a project to design an automatic system of computer program documentation aids is reported; a study was made to determine what existing programs could be used effectively to document computer programs. Results of the study are included in the form of an extensive bibliography and working papers on appropriate operating systems, text editors, program editors, data structures, standards, decision tables, flowchart systems, and proprietary documentation aids. The preliminary design for an automated documentation system is also included. An actual program has been documented in detail to demonstrate the types of output that can be produced by the proposed system.

  18. Computational representation of biological systems

    SciTech Connect

    Frazier, Zach; McDermott, Jason E.; Guerquin, Michal; Samudrala, Ram

    2009-04-20

    Integration of large and diverse biological data sets is a daunting problem facing systems biology researchers. Exploring the complex issues of data validation, integration, and representation, we present a systematic approach for the management and analysis of large biological data sets based on data warehouses. Our system has been implemented in the Bioverse, a framework combining diverse protein information from a variety of knowledge areas such as molecular interactions, pathway localization, protein structure, and protein function.

  19. Computer-Assisted Instruction Authoring Systems

    ERIC Educational Resources Information Center

    Dean, Peter M.

    1978-01-01

    Authoring systems are defined as tools used by an educator to translate intents and purposes from his head into a computer program. Alternate ways of preparing code are examined and charts of these coding formats are displayed. (Author/RAO)

  20. Computer automation for feedback system design

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Mathematical techniques and explanations of various steps used by an automated computer program to design feedback systems are summarized. Special attention was given to refining the automatic evaluation of suboptimal loop transmission and the translation of time-domain to frequency-domain specifications.

  1. Computed Tomography of the Musculoskeletal System.

    PubMed

    Ballegeer, Elizabeth A

    2016-05-01

    Computed tomography (CT) has specific uses in veterinary species' appendicular musculoskeletal system. Parameters for acquisition of images, interpretation limitations, as well as published information regarding its use in small animals, are reviewed.

  2. Integration of scheduling and discrete event simulation systems to improve production flow planning

    NASA Astrophysics Data System (ADS)

    Krenczyk, D.; Paprocka, I.; Kempa, W. M.; Grabowik, C.; Kalinowski, K.

    2016-08-01

    The increased availability of data and of computer-aided technologies such as MRP I/II, ERP and MES systems allows producers to be more adaptive to market dynamics and to improve production scheduling. Integration of production scheduling with computer modelling, simulation and visualization systems can be useful in the analysis of production system constraints related to the efficiency of manufacturing systems. An integration methodology based on a semi-automatic model generation method is proposed for eliminating problems associated with the complexity of the model and the labour-intensive, time-consuming process of simulation model creation. Data mapping and data transformation techniques for the proposed method have been applied. This approach is illustrated through examples of a practical implementation of the proposed method using the KbRS scheduling system and the Enterprise Dynamics simulation system.
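
    The data mapping and transformation step, in which scheduling output is transformed into a simulation-model description, can be sketched generically as below. The record fields and the target structure are assumptions for illustration; they are not the KbRS or Enterprise Dynamics data formats.

```python
# Generic sketch of mapping scheduling output into a simulation-model structure.
# Field names and the target structure are assumptions, not the KbRS or
# Enterprise Dynamics formats.
from typing import Dict, List

def map_schedule_to_model(schedule: List[Dict]) -> Dict:
    """Group scheduled operations by resource so a simulator can replay them."""
    model = {"resources": {}, "jobs": sorted({op["job"] for op in schedule})}
    for op in schedule:
        resource = model["resources"].setdefault(op["machine"], [])
        resource.append({"job": op["job"],
                         "start": op["start"],
                         "duration": op["end"] - op["start"]})
    for ops in model["resources"].values():
        ops.sort(key=lambda o: o["start"])   # the simulation replays in time order
    return model

if __name__ == "__main__":
    schedule = [{"job": "J1", "machine": "M1", "start": 0, "end": 5},
                {"job": "J2", "machine": "M1", "start": 5, "end": 9},
                {"job": "J1", "machine": "M2", "start": 5, "end": 8}]
    print(map_schedule_to_model(schedule))
```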

  3. A Management System for Computer Performance Evaluation.

    DTIC Science & Technology

    1981-12-01

    large unused capacity indicates a potential cost performance improvement (i.e. the potential to perform more within current costs or reduce costs ...necessary to bring the performance of the computer system in line with operational goals (Ref. 18:7). The General Accounting Office estimates that the...tasks in attempting to improve the efficiency and effectiveness of their computer systems. Cost began to play an important role in the life of a

  4. Computer Systems and Services in Hospitals—1979

    PubMed Central

    Veazie, Stephen M.

    1979-01-01

    Starting at the end of 1978 and continuing through the first six months of 1979, the American Hospital Association (AHA) collected information on computer systems and services used in/by hospitals. The information has been compiled into the most comprehensive data base of hospital computer systems and services in existence today. Summaries of the findings of this project will be presented in this paper.

  5. Computer Systems for Distributed and Distance Learning.

    ERIC Educational Resources Information Center

    Anderson, M.; Jackson, David

    2000-01-01

    Discussion of network-based learning focuses on a survey of computer systems for distributed and distance learning. Both Web-based systems and non-Web-based systems are reviewed in order to highlight some of the major trends of past projects and to suggest ways in which progress may be made in the future. (Contains 92 references.) (Author/LRW)

  6. Computer-Aided dispatching system design specification

    SciTech Connect

    Briggs, M.G.

    1996-05-03

    This document defines the performance requirements for a graphic display dispatching system to support Hanford Patrol emergency response. This system is defined as a commercial off-the-shelf computer dispatching system providing both text and graphical display information while interfacing with the diverse reporting systems within the Hanford Facility. This system also provides expansion capabilities to integrate Hanford Fire and the Occurrence Notification Center and provides back-up capabilities for the Plutonium Processing Facility.

  7. OCCUPATIONS IN ELECTRONIC COMPUTING SYSTEMS.

    ERIC Educational Resources Information Center

    Bureau of Employment Security (DOL), Washington, DC.

    OCCUPATIONAL INFORMATION FOR USE IN THE PLACEMENT AND COUNSELING SERVICES OF THE AFFILIATED STATE EMPLOYMENT SERVICES IS PRESENTED IN THIS BROCHURE, ESSENTIALLY AN UPDATING OF "OCCUPATIONS IN ELECTRONIC DATA-PROCESSING SYSTEMS," PUBLISHED IN 1959. JOB ANALYSES PROVIDED THE PRIMARY SOURCE OF DATA, BUT ADDITIONAL INFORMATION AND DATA WERE OBTAINED…

  8. Selecting a Library Computer System.

    ERIC Educational Resources Information Center

    Schwarz, Philip

    1985-01-01

    Overview of activities and issues that face library administrators considering automation emphasizes acquisition of turnkey systems. Discussion covers reasons for automating, options available, major marketplace issues, keys to a successful automation project, the Request for Information, the Request for Proposal, evaluation techniques, the…

  9. Computational systems biology for aging research.

    PubMed

    Mc Auley, Mark T; Mooney, Kathleen M

    2015-01-01

    Computational modelling is a key component of systems biology and integrates with the other techniques discussed thus far in this book by utilizing a myriad of data that are being generated to quantitatively represent and simulate biological systems. This chapter will describe what computational modelling involves, the rationale for using it, and the appropriateness of modelling for investigating the aging process. How a model is assembled and the different theoretical frameworks that can be used to build a model are also discussed. In addition, the chapter will describe several models which demonstrate the effectiveness of each computational approach for investigating the constituents of a healthy aging trajectory. Specifically, a number of models will be showcased which focus on the complex age-related disorders associated with unhealthy aging. To conclude, we discuss the future applications of computational systems modelling to aging research. 2015 S. Karger AG, Basel.

  10. Semi-automated CCTV surveillance: the effects of system confidence, system accuracy and task complexity on operator vigilance, reliance and workload.

    PubMed

    Dadashi, N; Stedmon, A W; Pridmore, T P

    2013-09-01

    Recent advances in computer vision technology have led to the development of various automatic surveillance systems; however, their effectiveness is adversely affected by many factors and they are not completely reliable. This study investigated the potential of a semi-automated surveillance system to reduce CCTV operator workload in both detection and tracking activities. A further focus of interest was the degree of user reliance on the automated system. A simulated prototype was developed which mimicked an automated system that provided different levels of system confidence information. Dependent variable measures were taken for secondary task performance, reliance and subjective workload. When the automatic component of a semi-automatic CCTV surveillance system provided reliable system confidence information to operators, workload significantly decreased and spare mental capacity significantly increased. Providing feedback about system confidence and accuracy appears to be one important way of making the status of the automated component of the surveillance system more 'visible' to users and hence more effective to use.

  11. Airborne Advanced Reconfigurable Computer System (ARCS)

    NASA Technical Reports Server (NTRS)

    Bjurman, B. E.; Jenkins, G. M.; Masreliez, C. J.; Mcclellan, K. L.; Templeman, J. E.

    1976-01-01

    A digital computer subsystem fault-tolerant concept was defined, and the potential benefits and costs of such a subsystem were assessed when used as the central element of a new transport's flight control system. The derived advanced reconfigurable computer system (ARCS) is a triple-redundant computer subsystem that automatically reconfigures, under multiple fault conditions, from triplex to duplex to simplex operation, with redundancy recovery if the fault condition is transient. The study included criteria development covering factors at the aircraft's operation level that would influence the design of a fault-tolerant system for commercial airline use. A new reliability analysis tool was developed for evaluating redundant, fault-tolerant system availability and survivability; and a stringent digital system software design methodology was used to achieve design/implementation visibility.
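
    As a rough, hypothetical illustration of the triplex-to-duplex-to-simplex reconfiguration idea (not the actual ARCS algorithm, whose details are in the report), the Python sketch below votes on redundant channel outputs, counts consecutive disagreements per channel, demotes a persistently faulty channel, and lets a transient fault clear itself.

        # Hedged sketch of majority voting with channel demotion; illustrative only,
        # not the actual ARCS reconfiguration logic.
        from collections import Counter

        class RedundantChannels:
            def __init__(self, names=("A", "B", "C"), fault_limit=3):
                self.active = list(names)            # triplex -> duplex -> simplex
                self.fault_counts = {n: 0 for n in names}
                self.fault_limit = fault_limit

            def vote(self, outputs):
                """outputs: dict mapping channel name -> value for the active channels."""
                values = [outputs[n] for n in self.active]
                majority, _ = Counter(values).most_common(1)[0]
                for n in self.active:
                    if outputs[n] != majority:
                        self.fault_counts[n] += 1    # disagreement noted
                    else:
                        self.fault_counts[n] = 0     # transient fault recovers
                # Demote channels that disagreed too many times in a row; never drop below one.
                self.active = [n for n in self.active
                               if self.fault_counts[n] < self.fault_limit] or self.active[:1]
                return majority

        tmr = RedundantChannels()
        print(tmr.vote({"A": 1, "B": 1, "C": 0}))    # majority value 1; channel C flagged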

  12. Optimizing System Compute and Bandwidth Density for Deployed HPEC Applications

    DTIC Science & Technology

    2007-11-02

    Optimizing System Compute and Bandwidth Density for Deployed HPEC Applications. Randy Banton (Director, Defense Electronics Engineering) and Richard Jaenicke, Mercury Computer Systems, Inc.

  13. Archival-System Computer Program

    NASA Technical Reports Server (NTRS)

    Scott, Peter; Carvajal, Samuel

    1988-01-01

    Files stored with various degrees of relative performance. ARCHIVE system provides permanent storage area for files to which infrequent access is required. Routines designed to provide simple mechanism by which users store and retrieve files. User treats ARCHIVE as interface to "black box" where files are stored. There are five ARCHIVE user commands, though ARCHIVE employs standard VMS directives and VAX BACKUP utility program. Special care taken to provide security needed to insure integrity of files over period of years. ARCHIVE written in DEC VAX DCL.

  14. Radiator design system computer programs

    NASA Technical Reports Server (NTRS)

    Wiggins, C. L.; Oren, J. A.; Dietz, J. B.

    1971-01-01

    Minimum weight space radiator subsystems which can operate over heat load ranges wider than the capabilities of current subsystems are investigated according to projected trends of future long duration space vehicles. Special consideration is given to maximum heat rejection requirements of the low temperature radiators needed for environmental control systems. The set of radiator design programs that have resulted from this investigation are presented in order to provide the analyst with a capability to generate optimum weight radiator panels or sets of panels from practical design considerations, including transient performance. Modifications are also provided for existing programs to improve capability and user convenience.

  15. Open systems for plant process computers

    SciTech Connect

    Norris, D.L.; Pate, R.L.

    1995-03-01

    Arizona Public Service (APS) Company recently upgraded the Emergency Response Facility (ERF) computer at the Palo Verde Nuclear Generating Stations (PVNGS). The project was initiated to provide the ability to record and display plant data for later analysis of plant events and operational problems (one of the great oversights at nearly every nuclear plant constructed) and to resolve a commitment to correct performance problems on the display side of the system. A major forming objective for the project was to lay a foundation with ample capability and flexibility to provide solutions for future real-time data needs at the plants. The Halliburton NUS Corporation's Idaho Center (NUS) was selected to develop the system. Because of the constant changes occurring in the computer hardware and software industry, NUS designed and implemented a distributed Open Systems solution based on the UNIX Operating System. This Open System is highly portable across a variety of computer architectures and operating systems and is based on NUS' R*TIME®, a mature software system successfully operating in 14 nuclear plants and over 80 fossil plants. Along with R*TIME, NUS developed two Man-Machine Interface (MMI) versions: R*TIME/WIN, a Microsoft Windows application designed for INTEL-based personal computers operating either Microsoft's Windows 3.1 or Windows NT operating systems; and R*TIME/X, based on the standard X Window System utilizing the Motif Window Manager.

  16. Lewis hybrid computing system, users manual

    NASA Technical Reports Server (NTRS)

    Bruton, W. M.; Cwynar, D. S.

    1979-01-01

    The Lewis Research Center's Hybrid Simulation Lab contains a collection of analog, digital, and hybrid (combined analog and digital) computing equipment suitable for the dynamic simulation and analysis of complex systems. This report is intended as a guide to users of these computing systems. The report describes the available equipment and outlines procedures for its use. Particular attention is given to the operation of the PACER 100 digital processor. System software to accomplish the usual digital tasks such as compiling, editing, etc. and Lewis-developed special purpose software are described.

  17. SIMON Host Computer System requirements and recommendations

    SciTech Connect

    Harpring, L.J.

    1990-11-29

    Development Service Order #90025 requested recommendations for computer hardware, operating systems, and software development utilities based on current and future SIMON software requirements. Since SIMON's main objective is to be dispatched on missions by an operator with little computer experience, 'user friendly' hardware and software interfaces are required. Other design criteria include: a fluid software development environment, and hardware and operating systems with minimal maintenance requirements. Also, the hardware should be expandable; extra processor boards should be easily integrated into the existing system. And finally, the use of well established standards for hardware and software should be implemented where practical.

  18. SIMON Host Computer System requirements and recommendations

    SciTech Connect

    Harpring, L.J.

    1990-11-29

    Development Service Order #90025 requested recommendations for computer hardware, operating systems, and software development utilities based on current and future SIMON software requirements. Since SIMON's main objective is to be dispatched on missions by an operator with little computer experience, 'user friendly' hardware and software interfaces are required. Other design criteria include: a fluid software development environment, and hardware and operating systems with minimal maintenance requirements. Also, the hardware should be expandable; extra processor boards should be easily integrated into the existing system. And finally, the use of well established standards for hardware and software should be implemented where practical.

  19. Telemetry Computer System at Wallops Flight Center

    NASA Technical Reports Server (NTRS)

    Bell, H.; Strock, J.

    1980-01-01

    This paper describes the Telemetry Computer System in operation at NASA's Wallops Flight Center for real-time or off-line processing, storage, and display of telemetry data from rockets and aircraft. The system accepts one or two PCM data streams and one FM multiplex, converting each type of data into computer format and merging time-of-day information. A data compressor merges the active streams, and removes redundant data if desired. Dual minicomputers process data for display, while storing information on computer tape for further processing. Real-time displays are located at the station, at the rocket launch control center, and in the aircraft control tower. The system is set up and run by standard telemetry software under control of engineers and technicians. Expansion capability is built into the system to take care of possible future requirements.

  20. Telemetry Computer System at Wallops Flight Center

    NASA Technical Reports Server (NTRS)

    Bell, H.; Strock, J.

    1980-01-01

    This paper describes the Telemetry Computer System in operation at NASA's Wallops Flight Center for real-time or off-line processing, storage, and display of telemetry data from rockets and aircraft. The system accepts one or two PCM data streams and one FM multiplex, converting each type of data into computer format and merging time-of-day information. A data compressor merges the active streams, and removes redundant data if desired. Dual minicomputers process data for display, while storing information on computer tape for further processing. Real-time displays are located at the station, at the rocket launch control center, and in the aircraft control tower. The system is set up and run by standard telemetry software under control of engineers and technicians. Expansion capability is built into the system to take care of possible future requirements.

  1. Honeywell Modular Automation System Computer Software Documentation

    SciTech Connect

    CUNNINGHAM, L.T.

    1999-09-27

    This document provides a Computer Software Documentation for a new Honeywell Modular Automation System (MAS) being installed in the Plutonium Finishing Plant (PFP). This system will be used to control new thermal stabilization furnaces in HA-211 and vertical denitration calciner in HC-230C-2.

  2. Terrace Layout Using a Computer Assisted System

    USDA-ARS?s Scientific Manuscript database

    Development of a web-based terrace design tool based on the MOTERR program is presented, along with representative layouts for conventional and parallel terrace systems. Using digital elevation maps and geographic information systems (GIS), this tool utilizes personal computers to rapidly construct ...

  3. A New Computer-Based Examination System.

    ERIC Educational Resources Information Center

    Los Arcos, J. M.; Vano, E.

    1978-01-01

    Describes a computer-managed instructional system used to formulate, print, and evaluate true-false questions for testing purposes. The design of the system and its application in medical and nuclear engineering courses in two Spanish institutions of higher learning are detailed. (RAO)

  4. Computation and design of autonomous intelligent systems

    NASA Astrophysics Data System (ADS)

    Fry, Robert L.

    2008-04-01

    This paper describes a theory of intelligent systems and its reduction to engineering practice. The theory is based on a broader theory of computation wherein information and control are defined within the subjective frame of a system. At its most primitive level, the theory describes what it computationally means to both ask and answer questions which, like traditional logic, are also Boolean. The logic of questions describes the subjective rules of computation that are objective in the sense that all the described systems operate according to its principles. Therefore, all systems are autonomous by construct. These systems include thermodynamic, communication, and intelligent systems. Although interesting, the important practical consequence is that the engineering framework for intelligent systems can borrow efficient constructs and methodologies from both thermodynamics and information theory. Thermodynamics provides the Carnot cycle which describes intelligence dynamics when operating in the refrigeration mode. It also provides the principle of maximum entropy. Information theory has recently provided the important concept of dual-matching useful for the design of efficient intelligent systems. The reverse engineered model of computation by pyramidal neurons agrees well with biology and offers a simple and powerful exemplar of basic engineering concepts.

  5. Constructing Stylish Characters on Computer Graphics Systems.

    ERIC Educational Resources Information Center

    Goldman, Gary S.

    1980-01-01

    Computer graphics systems typically produce a single, machine-like character font. At most, these systems enable the user to (1) alter the aspect ratio (height-to-width ratio) of the characters, (2) specify a transformation matrix to slant the characters, and (3) define a virtual pen table to change the lineweight of the plotted characters.…

  6. Data Integration in Computer Distributed Systems

    NASA Astrophysics Data System (ADS)

    Kwiecień, Błażej

    In this article the author analyzes the problem of data integration in distributed computer systems. Exchange of information between the different levels of the integrated enterprise process pyramid is fundamental to efficient enterprise operation. Communication and data exchange between levels are not always uniform because of the need to use different network protocols, communication media, system response times, etc.

  7. Theoretical kinetic computations in complex reacting systems

    NASA Technical Reports Server (NTRS)

    Bittker, David A.

    1986-01-01

    NASA Lewis' studies of complex reacting systems at high temperature are discussed. The changes which occur are the result of many different chemical reactions occurring at the same time. Both an experimental and a theoretical approach are needed to fully understand what happens in these systems. The latter approach is discussed. The differential equations which describe the chemical and thermodynamic changes are given. Their solution by numerical techniques using a detailed chemical mechanism is described. Several different comparisons of computed results with experimental measurements are also given. These include the computation of (1) species concentration profiles in batch and flow reactions, (2) rocket performance in nozzle expansions, and (3) pressure versus time profiles in hydrocarbon ignition processes. The examples illustrate the use of detailed kinetic computations to elucidate a chemical mechanism and to compute practical quantities such as rocket performance, ignition delay times, and ignition lengths in flow processes.
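
    A minimal sketch of this kind of numerical integration is shown below: a toy two-step mechanism A -> B -> C with assumed rate constants, solved with SciPy. It illustrates only the general approach of integrating the species concentration equations; it is not the Lewis code or a real mechanism.

        # Hedged sketch: integrate species concentrations for a toy mechanism
        # A -> B -> C with assumed rate constants (not an actual NASA Lewis mechanism).
        import numpy as np
        from scipy.integrate import solve_ivp

        k1, k2 = 1.0, 0.3            # assumed rate constants (1/s)

        def rates(t, y):
            a, b, c = y
            da = -k1 * a             # A consumed by step 1
            db = k1 * a - k2 * b     # B produced by step 1, consumed by step 2
            dc = k2 * b              # C produced by step 2
            return [da, db, dc]

        sol = solve_ivp(rates, (0.0, 10.0), [1.0, 0.0, 0.0], dense_output=True)
        t = np.linspace(0.0, 10.0, 5)
        print(np.round(sol.sol(t).T, 4))   # concentration profiles of A, B, C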

  8. Distributed-Computer System Optimizes SRB Joints

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.; Young, Katherine C.; Barthelemy, Jean-Francois M.

    1991-01-01

    Initial calculations of redesign of joint on solid rocket booster (SRB) that failed during Space Shuttle tragedy showed redesign increased weight. Optimization techniques applied to determine whether weight could be reduced while keeping joint closed and limiting stresses. Analysis system developed by use of existing software coupling structural analysis with optimization computations. Software designed executable on network of computer workstations. Took advantage of parallelism offered by finite-difference technique of computing gradients to enable several workstations to contribute simultaneously to solution of problem. Key features, effective use of redundancies in hardware and flexible software, enabling optimization to proceed with minimal delay and decreased overall time to completion.
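
    The parallel finite-difference idea can be sketched as follows; the objective function is a toy stand-in for the structural analysis, and a local process pool stands in for the network of workstations used in the study.

        # Hedged sketch: each finite-difference gradient component is an independent
        # evaluation, so the components can be computed in parallel (a local process
        # pool stands in for a network of workstations).
        import numpy as np
        from multiprocessing import Pool

        def objective(x):
            # Toy stand-in for a structural analysis returning, e.g., joint weight.
            return float(np.sum((x - 1.0) ** 2))

        def fd_component(args):
            x, i, h = args
            xp = x.copy()
            xp[i] += h
            return (objective(xp) - objective(x)) / h   # forward difference

        def fd_gradient(x, h=1e-6, workers=4):
            with Pool(workers) as pool:
                return np.array(pool.map(fd_component, [(x, i, h) for i in range(len(x))]))

        if __name__ == "__main__":
            print(fd_gradient(np.zeros(3)))   # approximately [-2, -2, -2]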

  9. System balance analysis for vector computers

    NASA Technical Reports Server (NTRS)

    Knight, J. C.; Poole, W. G., Jr.; Voight, R. G.

    1975-01-01

    The availability of vector processors capable of sustaining computing rates of 10 to the 8th power arithmetic results per second raised the question of whether peripheral storage devices representing current technology can keep such processors supplied with data. By examining the solution of a large banded linear system on these computers, it was found that even under ideal conditions, the processors will frequently be waiting for problem data.

  10. Decentralized Resource Management in Distributed Computer Systems.

    DTIC Science & Technology

    1982-02-01

    Archons project, which is performing research in the science and engineering of what we term distributed computers. By this we mean a computer... The report's topics include: Classification of Synchronization Techniques; Access Synchronization; Coordinating Synchronization; Meta-synchronization; Access Synchronization Techniques; Access Synchronization in Shared Memory Computer Systems; and Concepts and Issues in Distributed...

  11. Computer surety: computer system inspection guidance. [Contains glossary

    SciTech Connect

    Not Available

    1981-07-01

    This document discusses computer surety in NRC-licensed nuclear facilities from the perspective of physical protection inspectors. It gives background information and a glossary of computer terms, along with threats and computer vulnerabilities, methods used to harden computer elements, and computer audit controls.

  12. Fault tolerant hypercube computer system architecture

    NASA Technical Reports Server (NTRS)

    Madan, Herb S. (Inventor); Chow, Edward (Inventor)

    1989-01-01

    A fault-tolerant multiprocessor computer system of the hypercube type comprising a hierarchy of computers of like kind which can be functionally substituted for one another as necessary is disclosed. Communication between the working nodes is via one communications network while communications between the working nodes and watch dog nodes and load balancing nodes higher in the structure is via another communications network separate from the first. A typical branch of the hierarchy reporting to a master node or host computer comprises, a plurality of first computing nodes; a first network of message conducting paths for interconnecting the first computing nodes as a hypercube. The first network provides a path for message transfer between the first computing nodes; a first watch dog node; and a second network of message connecting paths for connecting the first computing nodes to the first watch dog node independent from the first network, the second network provides an independent path for test message and reconfiguration affecting transfers between the first computing nodes and the first switch watch dog node. There is additionally, a plurality of second computing nodes; a third network of message conducting paths for interconnecting the second computing nodes as a hypercube. The third network provides a path for message transfer between the second computing nodes; a fourth network of message conducting paths for connecting the second computing nodes to the first watch dog node independent from the third network. The fourth network provides an independent path for test message and reconfiguration affecting transfers between the second computing nodes and the first watch dog node; and a first multiplexer disposed between the first watch dog node and the second and fourth networks for allowing the first watch dog node to selectively communicate with individual ones of the computing nodes through the second and fourth networks; as well as, a second watch dog node
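
    One property worth noting about the hypercube interconnection described above is that node addresses differing in exactly one bit are directly linked, so a message can be routed by correcting one differing address bit per hop. The short sketch below illustrates only that addressing property; it is not drawn from the patent.

        # Hedged sketch of hypercube addressing: neighbours differ in one address bit,
        # and dimension-order routing flips one differing bit per hop.
        def neighbors(node, dim):
            """All nodes directly connected to `node` in a dim-dimensional hypercube."""
            return [node ^ (1 << i) for i in range(dim)]

        def route(src, dst, dim):
            """Node sequence from src to dst, correcting one address bit per hop."""
            path, cur = [src], src
            for i in range(dim):
                if (cur ^ dst) & (1 << i):
                    cur ^= 1 << i
                    path.append(cur)
            return path

        print(neighbors(0b000, 3))      # [1, 2, 4]
        print(route(0b000, 0b101, 3))   # [0, 1, 5]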

  13. Computer-Aided dispatching system design specification

    SciTech Connect

    Briggs, M.G.

    1996-09-27

    This document defines the performance requirements for a graphic display dispatching system to support Hanford Patrol emergency response. This document outlines the negotiated requirements as agreed to by GTE Northwest during technical contract discussions. This system defines a commercial off-the-shelf computer dispatching system providing both text and graphic display information while interfacing with diverse alarm reporting systems within the Hanford Site. This system provides expansion capability to integrate Hanford Fire and the Occurrence Notification Center. The system also provides back-up capability for the Plutonium Processing Facility (PFP).

  14. Monitoring SLAC High Performance UNIX Computing Systems

    SciTech Connect

    Lettsome, Annette K.; /Bethune-Cookman Coll. /SLAC

    2005-12-15

    Knowledge of the effectiveness and efficiency of computers is important when working with high performance systems. The monitoring of such systems is advantageous in order to foresee possible misfortunes or system failures. Ganglia is a software system designed for high performance computing systems to retrieve specific monitoring information. An alternative storage facility for Ganglia's collected data is needed since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process followed in the creation and implementation of the MySQL database for use by Ganglia. Comparisons between data storage by both databases are made using gnuplot and Ganglia's real-time graphical user interface.
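
    The script-driven MySQL store itself is not reproduced here, but the following hedged sketch shows the general pattern of inserting host metrics into a relational table. The table name, columns, credentials, and sample values are hypothetical placeholders rather than the schema used in the SLAC/Ganglia work, and the snippet assumes the mysql-connector-python package and a reachable MySQL server.

        # Hedged sketch: insert monitoring samples into a MySQL table.
        # Schema, column names and credentials are hypothetical placeholders.
        import mysql.connector

        def store_samples(samples):
            conn = mysql.connector.connect(host="localhost", user="monitor",
                                           password="secret", database="metrics")
            cur = conn.cursor()
            cur.executemany(
                "INSERT INTO host_metrics (host, metric, value, ts) VALUES (%s, %s, %s, %s)",
                samples,
            )
            conn.commit()
            cur.close()
            conn.close()

        store_samples([("node01", "load_one", 0.42, "2005-12-15 12:00:00")])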

  15. Building safe computer-controlled systems.

    PubMed

    Leveson, N G

    1984-10-01

    Software safety becomes an issue when life-critical systems are built with computers as important components. In order to make these systems safe, software developers have concentrated on making them ultrareliable. Unfortunately, this will not necessarily make them safe. This paper discusses why reliability enhancement techniques are not adequate to ensure safety and describes what needs to be done to protect life and property in these systems.

  16. Computer Resources Handbook for Flight Critical Systems.

    DTIC Science & Technology

    1985-01-01

    in avionic systems are suspected of being due to software. In a study of software reliability for digital flight controls conducted by SoHaR for the...aircraft and flight crew -- the use of computers in flight critical applications. Special reliability and fault tolerance (RAFT) techniques are being used...tolerance in flight critical systems. Conventional reliability techniques and analysis and reliability improvement techniques at the system level are

  17. Low Power Computing in Distributed Systems

    DTIC Science & Technology

    2006-04-01

    IEEE Communications Magazine, Volume 40, Issue 8, pp. 102-114, Aug. 2002. [3] E. R. Post and M. Orth, “Smart Fabric, or Wearable Computing,” Proc...www.cse.psu.edu/~mdl/software.htm [20] http://carlsberg.mit.edu/JouleTrack/ [21] M. Srivastava, A. Chandrakasan, R. Brodersen, “Predictive system shutdown...Dynamic Load Balancing in Distributed Systems,” IEEE International Conference on Systems, Man and Cybernetics, pp. 3795-3799, 1995. [27] A. Talukder

  18. Model for personal computer system selection.

    PubMed

    Blide, L

    1987-12-01

    Successful computer software and hardware selection is best accomplished by following an organized approach such as the one described in this article. The first step is to decide what you want to be able to do with the computer. Secondly, select software that is user friendly, well documented, bug free, and that does what you want done. Next, you select the computer, printer and other needed equipment from the group of machines on which the software will run. Key factors here are reliability and compatibility with other microcomputers in your facility. Lastly, you select a reliable vendor who will provide good, dependable service in a reasonable time. The ability to correctly select computer software and hardware is a key skill needed by medical record professionals today and in the future. Professionals can make quality computer decisions by selecting software and systems that are compatible with other computers in their facility, allow for future networking, ease of use, and adaptability for expansion as new applications are identified. The key to success is to not only provide for your present needs, but to be prepared for future rapid expansion and change in your computer usage as technology and your skills grow.

  19. Achieving reuse of computable guideline systems.

    PubMed

    Johnson, P; Tu, S; Jones, N

    2001-01-01

    We describe an architecture for reusing computable guidelines and the programs used to interpret them across varied legacy clinical systems. Developed for the PRODIGY 3 project, our architecture aims to support interactive, point of care use of guidelines in primary care. Legacy medical record systems in UK primary care are diverse, using different terminologies, different data models, and varying user-interface philosophies. However, our goal is to provide common guideline knowledge bases and system components, while achieving full integration with the host medical record system, and a user interface tailored to that system. In conjunction with system suppliers, we identified areas of standardization required to achieve this goal. Firstly, standardized interfaces were created for mediation with the legacy system medical record and for act management. Secondly, a standard interface was developed for communication with the User Interface for guideline interaction. Thirdly, a terminology mapping knowledge base and system component was provided. Lastly, we developed a numeric unit conversion knowledge base and system component. The standardization of this architecture was achieved by close collaboration with existing vendors of Primary Care computing systems in the UK. The work has been verified by two suppliers successfully building and deploying systems with User Interfaces which mirror their normal look and feel, communicating fully with existing medical records, while using identical Guideline Interpreter components and knowledge bases. Encouragingly further experiments in other areas of clinical decision support have not required extension of our interfaces.

  20. Computational stability analysis of dynamical systems

    NASA Astrophysics Data System (ADS)

    Nikishkov, Yuri Gennadievich

    2000-10-01

    Due to increased available computer power, the analysis of nonlinear flexible multi-body systems, fixed-wing aircraft and rotary-wing vehicles is relying on increasingly complex, large scale models. An important aspect of the dynamic response of flexible multi-body systems is the potential presence of instabilities. Stability analysis is typically performed on simplified models with the smallest number of degrees of freedom required to capture the physical phenomena that cause the instability. The system stability boundaries are then evaluated using the characteristic exponent method or Floquet theory for systems with constant or periodic coefficients, respectively. As the number of degrees of freedom used to represent the system increases, these methods become increasingly cumbersome, and quickly unmanageable. In this work, a novel approach is proposed, the Implicit Floquet Analysis, which evaluates the largest eigenvalues of the transition matrix using the Arnoldi algorithm, without the explicit computation of this matrix. This method is far more computationally efficient than the classical approach and is ideally suited for systems involving a large number of degrees of freedom. The proposed approach is conveniently implemented as a postprocessing step to any existing simulation tool. The application of the method to a geometrically nonlinear multi-body dynamics code is presented. This work also focuses on the implementation of trimming algorithms and the development of tools for the graphical representation of numerical simulations and stability information for multi-body systems.
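
    The central computational idea, obtaining the dominant eigenvalues of the transition (monodromy) matrix without ever forming it, can be sketched with SciPy's Arnoldi-based eigensolver: the matrix is exposed only through its action on a vector, i.e. through propagating a perturbation over one period. The toy linear system below stands in for a real time-marching simulation and is not the author's implementation.

        # Hedged sketch of an implicit Floquet-style analysis: the monodromy matrix is
        # never formed; only its action on a vector (propagation of a perturbation over
        # one period) is supplied to the Arnoldi eigensolver.
        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.sparse.linalg import LinearOperator, eigs

        n, period = 50, 1.0
        A = np.diag(np.linspace(-2.0, -0.1, n))   # toy linearized system matrix
        A[0, 1] = 5.0                             # a weak coupling term

        def propagate_one_period(v):
            # Stand-in for marching the linearized equations over one period;
            # a real multi-body code would call its time simulation here.
            sol = solve_ivp(lambda t, y: A @ y, (0.0, period), v, rtol=1e-8, atol=1e-10)
            return sol.y[:, -1]

        monodromy = LinearOperator((n, n), matvec=propagate_one_period)
        vals = eigs(monodromy, k=3, which="LM", return_eigenvectors=False)
        print(np.sort(np.abs(vals))[::-1])        # dominant Floquet multipliers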

  1. Scalable Evolutionary Computation for Efficient Information Extraction from Remote Sensed Imagery

    NASA Astrophysics Data System (ADS)

    Almutairi, L. M.; Shetty, S.; Momm, H. G.

    2014-11-01

    Evolutionary computation, in the form of genetic programming, is used to aid information extraction process from high-resolution satellite imagery in a semi-automatic fashion. Distributing and parallelizing the task of evaluating all candidate solutions during the evolutionary process could significantly reduce the inherent computational cost of evolving solutions that are composed of multichannel large images. In this study, we present the design and implementation of a system that leverages cloud-computing technology to expedite supervised solution development in a centralized evolutionary framework. The system uses the MapReduce programming model to implement a distributed version of the existing framework in a cloud-computing platform. The proposed system has two major subsystems; (i) data preparation: the generation of random spectral indices; and (ii) distributed processing: the distributed implementation of genetic programming, which is used to spectrally distinguish the features of interest from the remaining image background in the cloud computing environment in order to improve scalability. The proposed system reduces response time by leveraging the vast computational and storage resources in a cloud computing environment. The results demonstrate that distributing the candidate solutions reduces the execution time by 91.58%. These findings indicate that such technology could be applied to more complex problems that involve a larger population size and number of generations.
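
    The distributed-processing subsystem rests on a simple observation: fitness evaluation of each candidate solution is independent, so the population can be scattered in a map step and the scores gathered in a reduce step. The sketch below mimics that pattern with a local process pool and a toy fitness function; it is not the MapReduce/cloud implementation described in the abstract.

        # Hedged sketch of the map/reduce pattern for genetic-programming fitness
        # evaluation: candidates are scored independently (map) and the best one is
        # kept (reduce). A local process pool stands in for the cloud cluster.
        from functools import reduce
        from multiprocessing import Pool

        def fitness(candidate):
            # Toy stand-in for scoring a candidate spectral-index expression
            # against training pixels.
            return sum(candidate) % 97

        def score(candidate):
            return candidate, fitness(candidate)

        def best(a, b):
            return a if a[1] >= b[1] else b

        def evaluate_population(population, workers=4):
            with Pool(workers) as pool:
                scored = pool.map(score, population)   # map step: independent evaluations
            return reduce(best, scored)                # reduce step: keep the fittest

        if __name__ == "__main__":
            population = [(i, i * 3, i % 7) for i in range(100)]
            print(evaluate_population(population))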

  2. Some Unexpected Results Using Computer Algebra Systems.

    ERIC Educational Resources Information Center

    Alonso, Felix; Garcia, Alfonsa; Garcia, Francisco; Hoya, Sara; Rodriguez, Gerardo; de la Villa, Agustin

    2001-01-01

    Shows how teachers can often use unexpected outputs from Computer Algebra Systems (CAS) to reinforce concepts and to show students the importance of thinking about how they use the software and reflecting on their results. Presents different examples where DERIVE, MAPLE, or Mathematica does not work as expected and suggests how to use them as a…

  3. A universal computer control system for motors

    NASA Technical Reports Server (NTRS)

    Szakaly, Zoltan F. (Inventor)

    1991-01-01

    A control system for a multi-motor system such as a space telerobot, having a remote computational node and a local computational node interconnected with one another by a high speed data link is described. A Universal Computer Control System (UCCS) for the telerobot is located at each node. Each node is provided with a multibus computer system which is characterized by a plurality of processors with all processors being connected to a common bus, and including at least one command processor. The command processor communicates over the bus with a plurality of joint controller cards. A plurality of direct current torque motors, of the type used in telerobot joints and telerobot hand-held controllers, are connected to the controller cards and respond to digital control signals from the command processor. Essential motor operating parameters are sensed by analog sensing circuits and the sensed analog signals are converted to digital signals for storage at the controller cards where such signals can be read during an address read/write cycle of the command processor.

  4. A universal computer control system for motors

    NASA Astrophysics Data System (ADS)

    Szakaly, Zoltan F.

    1991-09-01

    A control system for a multi-motor system such as a space telerobot, having a remote computational node and a local computational node interconnected with one another by a high speed data link is described. A Universal Computer Control System (UCCS) for the telerobot is located at each node. Each node is provided with a multibus computer system which is characterized by a plurality of processors with all processors being connected to a common bus, and including at least one command processor. The command processor communicates over the bus with a plurality of joint controller cards. A plurality of direct current torque motors, of the type used in telerobot joints and telerobot hand-held controllers, are connected to the controller cards and respond to digital control signals from the command processor. Essential motor operating parameters are sensed by analog sensing circuits and the sensed analog signals are converted to digital signals for storage at the controller cards where such signals can be read during an address read/write cycle of the command processor.

  5. Logical Access Control Mechanisms in Computer Systems.

    ERIC Educational Resources Information Center

    Hsiao, David K.

    The subject of access control mechanisms in computer systems is concerned with effective means to protect the anonymity of private information on the one hand, and to regulate the access to shareable information on the other hand. Effective means for access control may be considered on three levels: memory, process and logical. This report is a…

  6. A System for Generating Instructional Computer Graphics.

    ERIC Educational Resources Information Center

    Nygard, Kendall E.; Ranganathan, Babusankar

    1983-01-01

    Description of the Tektronix-Based Interactive Graphics System for Instruction (TIGSI), which was developed for generating graphics displays in computer-assisted instruction materials, discusses several applications (e.g., reinforcing learning of concepts, principles, rules, and problem-solving techniques) and presents advantages of the TIGSI…

  7. Computer Systems for Teaching Complex Concepts.

    ERIC Educational Resources Information Center

    Feurzeig, Wallace

    Four programming systems--Mentor, Stringcomp, Simon, and Logo--were designed and implemented as integral parts of research into the various ways computers may be used for teaching problem-solving concepts and skills. Various instructional contexts, among them medicine, mathematics, physics, and basic problem-solving for elementary school children,…

  8. Computer system failure: planning disaster recovery.

    PubMed

    Poker, A M

    1996-07-01

    A disaster recovery plan (DRP) defines the scope of restoration, establishes responsibilities and lists specific actions to be taken after the disaster. Although not actually involved in a DRP or computer systems, nurse managers must understand the steps involved and identify and communicate nursing's requirements.

  9. Workshop on Computing and Intelligent Systems.

    DTIC Science & Technology

    1994-09-01

    This grant provided block international travel funds to help enable researchers to attend an international workshop on "Computing and Intelligent ... Systems" held in Bangalore, India from December 20-23, 1993. The conference venue was the Indian Institute of Science, Bangalore, a Premier Research

  10. Final Report Computational Analysis of Dynamical Systems

    SciTech Connect

    Guckenheimer, John

    2012-05-08

    This is the final report for DOE Grant DE-FG02-93ER25164, initiated in 1993. This grant supported research of John Guckenheimer on computational analysis of dynamical systems. During that period, seventeen individuals received PhD degrees under the supervision of Guckenheimer and over fifty publications related to the grant were produced. This document contains copies of these publications.

  11. Computer Based Instructional Systems--1985-1995.

    ERIC Educational Resources Information Center

    Micheli, Gene S.; And Others

    This report discusses developments in computer based instruction (CBI) and presents initiatives for the improvement of Navy instructional management in the 1985 to 1995 time frame. The state of the art in instructional management and delivery is assessed, projections for the capabilities for instructional management and delivery systems during…

  12. Privacy and Security in Computer Systems.

    ERIC Educational Resources Information Center

    Liu, Yung-Ying

    Materials in the Library of Congress (LC) concerned with the topic of privacy and security in computer systems are listed in this "LC Science Tracer Bullet." The guide includes a total of 59 sources: (1) an introductory source; (2) relevant LC subject headings; (3) basic and additional texts; (4) handbooks, encyclopedias, and…

  13. Lumber Grading With A Computer Vision System

    Treesearch

    Richard W. Conners; Tai-Hoon Cho; Philip A. Araman

    1989-01-01

    Over the past few years significant progress has been made in developing a computer vision system for locating and identifying defects on surfaced hardwood lumber. Unfortunately, until September of 1988 little research had gone into developing methods for analyzing rough lumber. This task is arguably more complex than the analysis of surfaced lumber. The prime...

  14. A rule based computer aided design system

    NASA Technical Reports Server (NTRS)

    Premack, T.

    1986-01-01

    A Computer Aided Design (CAD) system is presented which supports the iterative process of design, the dimensional continuity between mating parts, and the hierarchical structure of the parts in their assembled configuration. Prolog, an interactive logic programming language, is used to represent and interpret the data base. The solid geometry representing the parts is defined in parameterized form using the swept volume method. The system is demonstrated with a design of a spring piston.

  15. Computer system SANC: its development and applications

    NASA Astrophysics Data System (ADS)

    Arbuzov, A.; Bardin, D.; Bondarenko, S.; Christova, P.; Kalinovskaya, L.; Sadykov, R.; Sapronov, A.; Riemann, T.

    2016-10-01

    The SANC system is used for systematic calculations of various processes within the Standard Model in the one-loop approximation. QED, electroweak, and QCD corrections are computed to a number of processes being of interest for modern and future high-energy experiments. Several applications for the LHC physics program are presented. Development of the system and the general problems and perspectives for future improvement of the theoretical precision are discussed.

  16. Integrative Genomics and Computational Systems Medicine

    SciTech Connect

    McDermott, Jason E.; Huang, Yufei; Zhang, Bing; Xu, Hua; Zhao, Zhongming

    2014-01-01

    The exponential growth in generation of large amounts of genomic data from biological samples has driven the emerging field of systems medicine. This field is promising because it improves our understanding of disease processes at the systems level. However, the field is still at an early stage. There exists a great need for novel computational methods and approaches to effectively utilize and integrate various omics data.

  17. Space systems computer-aided design technology

    NASA Technical Reports Server (NTRS)

    Garrett, L. B.

    1984-01-01

    The interactive Design and Evaluation of Advanced Spacecraft (IDEAS) system is described, together with planned capability increases in the IDEAS system. The system's disciplines consist of interactive graphics and interactive computing. A single user at an interactive terminal can create, design, analyze, and conduct parametric studies of earth-orbiting satellites, which represents a timely and cost-effective method during the conceptual design phase where various missions and spacecraft options require evaluation. Spacecraft concepts evaluated include microwave radiometer satellites, communication satellite systems, solar-powered lasers, power platforms, and orbiting space stations.

  18. Adaptive Fuzzy Systems in Computational Intelligence

    NASA Technical Reports Server (NTRS)

    Berenji, Hamid R.

    1996-01-01

    In recent years, the interest in computational intelligence techniques, which currently include neural networks, fuzzy systems, and evolutionary programming, has grown significantly and a number of their applications have been developed in government and industry. In the future, an essential element in these systems will be fuzzy systems that can learn from experience by using neural networks to refine their performance. The GARIC architecture, introduced earlier, is an example of a fuzzy reinforcement learning system which has been applied in several control domains such as cart-pole balancing, simulation of Space Shuttle orbital operations, and tether control. A number of examples from GARIC's applications in these domains will be demonstrated.

  19. Computer-aided protective system (CAPS)

    SciTech Connect

    Squire, R.K.

    1988-01-01

    A method of improving the security of materials in transit is described. The system provides a continuously monitored position location system for the transport vehicle, an internal computer-based geographic delimiter that makes continuous comparisons of actual positions with the preplanned routing and schedule, and a tamper detection/reaction system. The position comparison is utilized to institute preprogrammed reactive measures if the carrier is taken off course or schedule, penetrated, or otherwise interfered with. The geographic locater could be an independent internal platform or an external signal-dependent system utilizing GPS, Loran or a similar source of geographic information; a small (micro) computer could provide adequate memory and computational capacity; ensuring the integrity of the system indicates the need for a tamper-proof container and built-in intrusion sensors. A variant of the system could provide real-time transmission of the vehicle position and condition to a central control point; such transmission could be encrypted to preclude spoofing.

  20. Cluster Computing for Embedded/Real-Time Systems

    NASA Technical Reports Server (NTRS)

    Katz, D.; Kepner, J.

    1999-01-01

    Embedded and real-time systems, like other computing systems, seek to maximize computing power for a given price, and thus can significantly benefit from the advancing capabilities of cluster computing.

  1. Fingertips detection for human computer interaction system

    NASA Astrophysics Data System (ADS)

    Alam, Md. Jahangir; Nasierding, Gulisong; Sajjanhar, Atul; Chowdhury, Morshed

    2014-01-01

    Fingertips of the human hand play an important role in hand-based interaction with computers. Identification of fingertip positions in hand images is vital for developing a human computer interaction system. This paper proposes a novel method for detecting fingertips in a hand image by analyzing the geometrical and structural information of the fingers. The research is divided into three parts: first, the hand image is segmented to detect the hand; second, invariant features (curvature zero-crossing points) are extracted from the boundary of the hand; third, the fingertips are detected. Experimental results show that the proposed approach is promising.
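
    A hedged sketch of the curvature zero-crossing idea follows: it estimates signed curvature along an already-extracted, ordered boundary contour and reports where the sign changes. The smoothing and the synthetic test contour are illustrative choices, not the paper's exact formulation.

        # Hedged sketch: estimate signed curvature along a closed contour and report
        # sign changes (zero-crossings). The contour is assumed to be already extracted
        # from the segmented hand image.
        import numpy as np

        def curvature_zero_crossings(contour, smooth=5):
            """contour: (N, 2) array of boundary points ordered along the outline."""
            pts = contour.astype(float)
            if smooth > 1:               # light smoothing with a moving average
                kernel = np.ones(smooth) / smooth
                pts = np.column_stack([np.convolve(pts[:, i], kernel, mode="same")
                                       for i in range(2)])
            dx, dy = np.gradient(pts[:, 0]), np.gradient(pts[:, 1])
            ddx, ddy = np.gradient(dx), np.gradient(dy)
            kappa = (dx * ddy - dy * ddx) / np.maximum((dx**2 + dy**2) ** 1.5, 1e-9)
            sign = np.sign(kappa)
            return np.where(sign[:-1] * sign[1:] < 0)[0]   # indices where curvature changes sign

        theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
        wavy = np.column_stack([(10 + 3 * np.cos(5 * theta)) * np.cos(theta),
                                (10 + 3 * np.cos(5 * theta)) * np.sin(theta)])
        print(curvature_zero_crossings(wavy)[:10])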

  2. Embedded systems for supporting computer accessibility.

    PubMed

    Mulfari, Davide; Celesti, Antonio; Fazio, Maria; Villari, Massimo; Puliafito, Antonio

    2015-01-01

    Nowadays, customized AT software solutions allow their users to interact with various kinds of computer systems. Such tools are generally available on personal devices (e.g., smartphones, laptops and so on) commonly used by a person with a disability. In this paper, we investigate a way of using the aforementioned AT equipment in order to access many different devices without assistive preferences. The solution takes advantage of open source hardware and its core component consists of an affordable Linux embedded system: it grabs data coming from the assistive software, which runs on the user's personal device, then, after processing, it generates native keyboard and mouse HID commands for the target computing device controlled by the end user. This process supports any operating system available on the target machine and it requires no specialized software installation; therefore the user with a disability can rely on a single assistive tool to control a wide range of computing platforms, including conventional computers and many kinds of mobile devices, which receive input commands through the USB HID protocol.
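
    As a rough illustration of the final step, emitting native keyboard input over the USB HID protocol, the sketch below writes an 8-byte boot-keyboard report through the Linux USB gadget interface. It assumes a board already configured as a HID gadget exposing /dev/hidg0 (for example via configfs) and is not the authors' software.

        # Hedged sketch: emit a USB HID boot-keyboard report from a Linux board that is
        # already configured as a USB HID gadget exposing /dev/hidg0.
        import time

        KEY_A = 0x04                       # HID usage ID for the letter 'a'

        def send_key(usage_id, device="/dev/hidg0"):
            press = bytes([0, 0, usage_id, 0, 0, 0, 0, 0])   # modifiers, reserved, 6 key slots
            release = bytes(8)                               # all zeros = no keys pressed
            with open(device, "wb", buffering=0) as hid:
                hid.write(press)
                time.sleep(0.01)
                hid.write(release)

        send_key(KEY_A)                    # types the letter 'a' on the host computer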

  3. Autonomous Systems, Robotics, and Computing Systems Capability Roadmap: NRC Dialogue

    NASA Technical Reports Server (NTRS)

    Zornetzer, Steve; Gage, Douglas

    2005-01-01

    Contents include the following: Introduction. Process, Mission Drivers, Deliverables, and Interfaces. Autonomy. Crew-Centered and Remote Operations. Integrated Systems Health Management. Autonomous Vehicle Control. Autonomous Process Control. Robotics. Robotics for Solar System Exploration. Robotics for Lunar and Planetary Habitation. Robotics for In-Space Operations. Computing Systems. Conclusion.

  4. Network visualization system for computational chemistry.

    PubMed

    Kozhin, Mikhail; Yanov, Ilya; Leszczynski, Jerzy

    2003-10-01

    Network Visualization System for Computational Chemistry (NVSCC) is a molecular graphics program designed for the visualization of molecular assemblies. NVSCC accepts the output files from the most popular ab initio quantum chemical programs, GAUSSIAN and GAMESS, and provides visualization of molecular structures based on atomic coordinates. The main differences between NVSCC and other programs are: Network support due to built-in FTP and telnet clients, which allows for the processing of output from and the sending of input to different computer systems and operating systems. The possibility of working with output files in real time mode. The possibility of animation from an output file during all steps of optimization. The quick processing of huge volumes of data. The development of custom interfaces. Copyright 2003 Wiley Periodicals, Inc.

  5. National Ignition Facility integrated computer control system

    NASA Astrophysics Data System (ADS)

    Van Arsdall, Paul J.; Bettenhausen, R. C.; Holloway, Frederick W.; Saroyan, R. A.; Woodruff, J. P.

    1999-07-01

    The NIF design team is developing the Integrated Computer Control System (ICCS), which is based on an object-oriented software framework applicable to event-driven control systems. The framework provides an open, extensible architecture that is sufficiently abstract to construct future mission-critical control systems. The ICCS will become operational when the first 8 out of 192 beams are activated in mid 2000. The ICCS consists of 300 front-end processors attached to 60,000 control points coordinated by a supervisory system. Computers running either Solaris or VxWorks are networked over a hybrid configuration of switched fast Ethernet and asynchronous transfer mode (ATM). ATM carries digital motion video from sensors to operator consoles. Supervisory software is constructed by extending the reusable framework components for each specific application. The framework incorporates services for database persistence, system configuration, graphical user interface, status monitoring, event logging, scripting language, alert management, and access control. More than twenty collaborating software applications are derived from the common framework. The framework is interoperable among different kinds of computers and functions as a plug-in software bus by leveraging a common object request brokering architecture (CORBA). CORBA transparently distributes the software objects across the network. Because of the pivotal role played, CORBA was tested to ensure adequate performance.

  6. National Ignition Facility integrated computer control system

    SciTech Connect

    Van Arsdall, P.J., LLNL

    1998-06-01

    The NIF design team is developing the Integrated Computer Control System (ICCS), which is based on an object-oriented software framework applicable to event-driven control systems. The framework provides an open, extensible architecture that is sufficiently abstract to construct future mission-critical control systems. The ICCS will become operational when the first 8 out of 192 beams are activated in mid 2000. The ICCS consists of 300 front-end processors attached to 60,000 control points coordinated by a supervisory system. Computers running either Solaris or VxWorks are networked over a hybrid configuration of switched fast Ethernet and asynchronous transfer mode (ATM). ATM carries digital motion video from sensors to operator consoles. Supervisory software is constructed by extending the reusable framework components for each specific application. The framework incorporates services for database persistence, system configuration, graphical user interface, status monitoring, event logging, scripting language, alert management, and access control. More than twenty collaborating software applications are derived from the common framework. The framework is interoperable among different kinds of computers and functions as a plug-in software bus by leveraging a common object request brokering architecture (CORBA). CORBA transparently distributes the software objects across the network. Because of the pivotal role played, CORBA was tested to ensure adequate performance.

  7. Thermodynamics of Computational Copying in Biochemical Systems

    NASA Astrophysics Data System (ADS)

    Ouldridge, Thomas E.; Govern, Christopher C.; ten Wolde, Pieter Rein

    2017-04-01

    Living cells use readout molecules to record the state of receptor proteins, similar to measurements or copies in typical computational devices. But is this analogy rigorous? Can cells be optimally efficient, and if not, why? We show that, as in computation, a canonical biochemical readout network generates correlations; extracting no work from these correlations sets a lower bound on dissipation. For general input, the biochemical network cannot reach this bound, even with arbitrarily slow reactions or weak thermodynamic driving. It faces an accuracy-dissipation trade-off that is qualitatively distinct from and worse than implied by the bound, and more complex steady-state copy processes cannot perform better. Nonetheless, the cost remains close to the thermodynamic bound unless accuracy is extremely high. Additionally, we show that biomolecular reactions could be used in thermodynamically optimal devices under exogenous manipulation of chemical fuels, suggesting an experimental system for testing computational thermodynamics.

  8. Computer-controlled radiation monitoring system

    SciTech Connect

    Homann, S.G.

    1994-09-27

    A computer-controlled radiation monitoring system was designed and installed at the Lawrence Livermore National Laboratory's Multiuser Tandem Laboratory (10 MV tandem accelerator from High Voltage Engineering Corporation). The system continuously monitors the photon and neutron radiation environment associated with the facility and automatically suspends accelerator operation if preset radiation levels are exceeded. The system has provided reliable real-time radiation monitoring over the past five years and has been a valuable tool for maintaining personnel exposure as low as reasonably achievable.

  9. Standards and ontologies in computational systems biology.

    PubMed

    Sauro, Herbert M; Bergmann, Frank T

    2008-01-01

    With the growing importance of computational models in systems biology there has been much interest in recent years to develop standard model interchange languages that permit biologists to easily exchange models between different software tools. In the present chapter two chief model exchange standards, SBML (Systems Biology Markup Language) and CellML are described. In addition, other related features including visual layout initiatives, ontologies and best practices for model annotation are discussed. Software tools such as developer libraries and basic editing tools are also introduced, together with a discussion on the future of modelling languages and visualization tools in systems biology.

  10. Analog system for computing sparse codes

    DOEpatents

    Rozell, Christopher John; Johnson, Don Herrick; Baraniuk, Richard Gordon; Olshausen, Bruno A.; Ortman, Robert Lowell

    2010-08-24

    A parallel dynamical system for computing sparse representations of data, i.e., where the data can be fully represented in terms of a small number of non-zero code elements, and for reconstructing compressively sensed images. The system is based on the principles of thresholding and local competition, and solves a family of sparse approximation problems corresponding to various sparsity metrics. The system utilizes Locally Competitive Algorithms (LCAs), in which nodes in a population continually compete with neighboring units using (usually one-way) lateral inhibition to calculate coefficients representing an input in an overcomplete dictionary.
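
    A minimal numpy sketch of an LCA of the kind described, with leaky-integrator nodes, a soft-threshold activation, and lateral inhibition through the dictionary Gram matrix. The parameter values and the threshold function are illustrative choices, not those of the patented analog system:

```python
import numpy as np

def soft_threshold(u, lam):
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lca_sparse_code(Phi, s, lam=0.1, tau=10.0, n_steps=500, dt=1.0):
    """Locally Competitive Algorithm: nodes compete via lateral inhibition."""
    b = Phi.T @ s                              # feed-forward drive
    G = Phi.T @ Phi - np.eye(Phi.shape[1])     # inhibition between overlapping atoms
    u = np.zeros(Phi.shape[1])                 # internal node states
    for _ in range(n_steps):
        a = soft_threshold(u, lam)             # thresholded (sparse) outputs
        u += (dt / tau) * (b - u - G @ a)      # leaky integration with competition
    return soft_threshold(u, lam)

rng = np.random.default_rng(0)
Phi = rng.normal(size=(64, 128))
Phi /= np.linalg.norm(Phi, axis=0)             # unit-norm overcomplete dictionary
s = Phi[:, :3] @ np.array([1.0, -0.5, 0.8])    # signal built from 3 atoms
print(np.count_nonzero(lca_sparse_code(Phi, s)), "nonzero coefficients")
```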

  11. Transient upset models in computer systems

    NASA Technical Reports Server (NTRS)

    Mason, G. M.

    1983-01-01

    Essential factors for the design of transient upset monitors for computers are discussed. The upset is a system level event that is software dependent. It can occur in the program flow, the opcode set, the opcode address domain, the read address domain, and the write address domain. Most upsets are in the program flow. It is shown that simple, external monitors functioning transparently relative to the system operations can be built if a detailed accounting is made of the characteristics of the faults that can happen. Sample applications are provided for different states of the Z-80 and 8085 based system.

  12. Computing Lyapunov exponents of switching systems

    NASA Astrophysics Data System (ADS)

    Guglielmi, Nicola; Protasov, Vladimir

    2016-06-01

    We discuss a new approach for constructing polytope Lyapunov functions for continuous-time linear switching systems. The method we propose makes it possible to decide the uniform stability of a switching system and to compute the Lyapunov exponent with arbitrary precision. The method relies on the discretization of the system and provides - for any given discretization stepsize - a lower and an upper bound for the Lyapunov exponent. The efficiency of the new method is illustrated by numerical examples. For a more extensive discussion we refer the reader to [8].
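
    For intuition only, the quantity being bounded can be approximated by brute force: discretize with a dwell time h and take the fastest growth rate over all switching sequences of a fixed depth. This naive enumeration is not the polytope method of the paper and gives no certified bounds; the matrices and parameters below are illustrative:

```python
import itertools
import numpy as np
from scipy.linalg import expm

def lyapunov_estimate(A_list, h=0.1, depth=8):
    """Crude estimate of the maximal Lyapunov exponent of x' = A_sigma(t) x."""
    props = [expm(A * h) for A in A_list]          # one-step propagators per mode
    n = A_list[0].shape[0]
    best = -np.inf
    for seq in itertools.product(range(len(props)), repeat=depth):
        M = np.eye(n)
        for i in seq:
            M = props[i] @ M                       # propagate through the sequence
        best = max(best, np.log(np.linalg.norm(M, 2)) / (depth * h))
    return best

A1 = np.array([[0.0, 1.0], [-2.0, -0.5]])
A2 = np.array([[-0.3, 2.0], [-1.0, -0.3]])
print(lyapunov_estimate([A1, A2]))
```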

  13. Focus stacking: Comparing commercial top-end set-ups with a semi-automatic low budget approach. A possible solution for mass digitization of type specimens

    PubMed Central

    Brecko, Jonathan; Mathys, Aurore; Dekoninck, Wouter; Leponce, Maurice; VandenSpiegel, Didier; Semal, Patrick

    2014-01-01

    Abstract In this manuscript we present a focus stacking system, composed of commercial photographic equipment. The system is inexpensive compared to high-end commercial focus stacking solutions. We tested this system and compared the results with several different software packages (CombineZP, Auto-Montage, Helicon Focus and Zerene Stacker). We tested our final stacked picture with a picture obtained from two high-end focus stacking solutions: a Leica MZ16A with DFC500 and a Leica Z6APO with DFC290. Zerene Stacker and Helicon Focus both provided satisfactory results. However, Zerene Stacker gives the user more possibilities in terms of control of the software, batch processing and retouching. The outcome of the test on high-end solutions demonstrates that our approach performs better in several ways. The resolution of the tested extended focus pictures is much higher than those from the Leica systems. The flash lighting inside the Ikea closet creates an evenly illuminated picture, without struggling with filters, diffusers, etc. The largest benefit is the price of the set-up which is approximately € 3,000, which is 8 and 10 times less than the LeicaZ6APO and LeicaMZ16A set-up respectively. Overall, this enables institutions to purchase multiple solutions or to start digitising the type collection on a large scale even with a small budget. PMID:25589866

  14. Focus stacking: Comparing commercial top-end set-ups with a semi-automatic low budget approach. A possible solution for mass digitization of type specimens.

    PubMed

    Brecko, Jonathan; Mathys, Aurore; Dekoninck, Wouter; Leponce, Maurice; VandenSpiegel, Didier; Semal, Patrick

    2014-01-01

    In this manuscript we present a focus stacking system, composed of commercial photographic equipment. The system is inexpensive compared to high-end commercial focus stacking solutions. We tested this system and compared the results with several different software packages (CombineZP, Auto-Montage, Helicon Focus and Zerene Stacker). We tested our final stacked picture with a picture obtained from two high-end focus stacking solutions: a Leica MZ16A with DFC500 and a Leica Z6APO with DFC290. Zerene Stacker and Helicon Focus both provided satisfactory results. However, Zerene Stacker gives the user more possibilities in terms of control of the software, batch processing and retouching. The outcome of the test on high-end solutions demonstrates that our approach performs better in several ways. The resolution of the tested extended focus pictures is much higher than those from the Leica systems. The flash lighting inside the Ikea closet creates an evenly illuminated picture, without struggling with filters, diffusers, etc. The largest benefit is the price of the set-up which is approximately € 3,000, which is 8 and 10 times less than the LeicaZ6APO and LeicaMZ16A set-up respectively. Overall, this enables institutions to purchase multiple solutions or to start digitising the type collection on a large scale even with a small budget.

  15. Advanced high-performance computer system architectures

    NASA Astrophysics Data System (ADS)

    Vinogradov, V. I.

    2007-02-01

    The convergence of computer systems and communication technologies is driving a move to switched high-performance modular system architectures based on high-speed switched interconnections. Multi-core processors are becoming a more promising route to high-performance systems, and traditional parallel bus system architectures (VME/VXI, cPCI/PXI) are moving to new, higher-speed serial switched interconnections. The fundamentals of this system architecture development are a compact modular component strategy, low-power processors, new serial high-speed interface chips on the board, and high-speed switched fabrics for SAN architectures. An overview of advanced modular concepts and new international standards for developing high-performance embedded and compact modular systems for real-time applications is given.

  16. Computer aided system engineering for space construction

    NASA Technical Reports Server (NTRS)

    Racheli, Ugo

    1989-01-01

    This viewgraph presentation covers the following topics. Construction activities envisioned for the assembly of large platforms in space (as well as interplanetary spacecraft and bases on extraterrestrial surfaces) require computational tools that exceed the capability of conventional construction management programs. The Center for Space Construction is investigating the requirements for new computational tools and, at the same time, suggesting the expansion of graduate and undergraduate curricula to include proficiency in Computer Aided Engineering (CAE) through design courses and individual or team projects in advanced space systems design. In the center's research, special emphasis is placed on problems of constructability and of the interruptability of planned activity sequences to be carried out by crews operating under hostile environmental conditions. The departure point for the planned work is the acquisition of the MCAE I-DEAS software, developed by the Structural Dynamics Research Corporation (SDRC), and its expansion to the level of capability denoted by the acronym IDEAS**2 currently used for configuration maintenance on Space Station Freedom. In addition to improving proficiency in the use of I-DEAS and IDEAS**2, it is contemplated that new software modules will be developed to expand the architecture of IDEAS**2. Such modules will deal with those analyses that require the integration of a space platform's configuration with a breakdown of planned construction activities and with a failure modes analysis to support computer aided system engineering (CASE) applied to space construction.

  17. Human systems dynamics: Toward a computational model

    NASA Astrophysics Data System (ADS)

    Eoyang, Glenda H.

    2012-09-01

    A robust and reliable computational model of complex human systems dynamics could support advancements in theory and practice for social systems at all levels, from intrapersonal experience to global politics and economics. Models of human interactions have evolved from traditional, Newtonian systems assumptions, which served a variety of practical and theoretical needs of the past. Another class of models has been inspired and informed by models and methods from nonlinear dynamics, chaos, and complexity science. None of the existing models, however, is able to represent the open, high dimension, and nonlinear self-organizing dynamics of social systems. An effective model will represent interactions at multiple levels to generate emergent patterns of social and political life of individuals and groups. Existing models and modeling methods are considered and assessed against characteristic pattern-forming processes in observed and experienced phenomena of human systems. A conceptual model, CDE Model, based on the conditions for self-organizing in human systems, is explored as an alternative to existing models and methods. While the new model overcomes the limitations of previous models, it also provides an explanatory base and foundation for prospective analysis to inform real-time meaning making and action taking in response to complex conditions in the real world. An invitation is extended to readers to engage in developing a computational model that incorporates the assumptions, meta-variables, and relationships of this open, high dimension, and nonlinear conceptual model of the complex dynamics of human systems.

  18. Applicability of computational systems biology in toxicology.

    PubMed

    Kongsbak, Kristine; Hadrup, Niels; Audouze, Karine; Vinggaard, Anne Marie

    2014-07-01

    Systems biology as a research field has emerged within the last few decades. Systems biology, often defined as the antithesis of the reductionist approach, integrates information about individual components of a biological system. In integrative systems biology, large data sets from various sources and databases are used to model and predict effects of chemicals on, for instance, human health. In toxicology, computational systems biology enables identification of important pathways and molecules from large data sets; tasks that can be extremely laborious when performed by a classical literature search. However, computational systems biology offers more advantages than providing a high-throughput literature search; it may form the basis for establishment of hypotheses on potential links between environmental chemicals and human diseases, which would be very difficult to establish experimentally. This is possible due to the existence of comprehensive databases containing information on networks of human protein-protein interactions and protein-disease associations. Experimentally determined targets of the specific chemical of interest can be fed into these networks to obtain additional information that can be used to establish hypotheses on links between the chemical and human diseases. Such information can also be applied for designing more intelligent animal/cell experiments that can test the established hypotheses. Here, we describe how and why to apply an integrative systems biology method in the hypothesis-generating phase of toxicological research.

  19. From three-dimensional long-term tectonic numerical models to synthetic structural data: semi-automatic extraction of instantaneous & finite strain quantities

    NASA Astrophysics Data System (ADS)

    Duclaux, Guillaume; May, Dave

    2017-04-01

    compute individual ellipsoids' parameters (orientation, shape, etc.) and represent the finite deformation for any region of interest in a Flinn diagram. In addition, we can use the finite strain ellipsoids to estimate the prevailing foliation and/or lineation directions anywhere in the model. These two methods are applied to measure the instantaneous and finite deformation patterns within an oblique rift zone undergoing constant extension in the absence of surface processes.
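
    The finite strain ellipsoid itself is straightforward to extract once a deformation gradient F is known for a region: the eigenvectors and square roots of the eigenvalues of the left Cauchy-Green tensor B = F F^T give the ellipsoid axes, from which Flinn-diagram ratios follow. The sketch below is a generic post-processing step under that assumption, not the authors' extraction pipeline:

```python
import numpy as np

def strain_ellipsoid(F):
    """Principal stretches and axes from a deformation gradient F (3x3)."""
    B = F @ F.T                                   # left Cauchy-Green tensor
    eigval, eigvec = np.linalg.eigh(B)            # ascending eigenvalues
    stretches = np.sqrt(eigval)[::-1]             # reorder so X >= Y >= Z
    axes = eigvec[:, ::-1]
    r_xy = stretches[0] / stretches[1]            # Flinn diagram ordinate
    r_yz = stretches[1] / stretches[2]            # Flinn diagram abscissa
    return stretches, axes, r_xy, r_yz

# Simple shear combined with flattening, as an illustrative F
F = np.array([[1.2, 0.5, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.83]])
stretches, axes, r_xy, r_yz = strain_ellipsoid(F)
print("stretches:", stretches, "Flinn ratios:", r_xy, r_yz)
```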

  20. Interactive computer-enhanced remote viewing system

    SciTech Connect

    Tourtellott, J.A.; Wagner, J.F.

    1995-10-01

    Remediation activities such as decontamination and decommissioning (D&D) typically involve materials and activities hazardous to humans. Robots are an attractive way to conduct such remediation, but for efficiency they need a good three-dimensional (3-D) computer model of the task space where they are to function. This model can be created from engineering plans and architectural drawings and from empirical data gathered by various sensors at the site. The model is used to plan robotic tasks and verify that selected paths are clear of obstacles. This report describes the development of an Interactive Computer-Enhanced Remote Viewing System (ICERVS), a software system to provide a reliable geometric description of a robotic task space, and enable robotic remediation to be conducted more effectively and more economically.

  1. Thermoelectric property measurements with computer controlled systems

    NASA Technical Reports Server (NTRS)

    Chmielewski, A. B.; Wood, C.

    1984-01-01

    A joint JPL-NASA program to develop an automated system to measure the thermoelectric properties of newly developed materials is described. Consideration is given to the difficulties created by signal drift in measurements of Hall voltage and the Large Delta T Seebeck coefficient. The benefits of a computerized system were examined with respect to error reduction and time savings for human operators. It is shown that the time required to measure Hall voltage can be reduced by a factor of 10 when a computer is used to fit a curve to the ratio of the measured signal and its standard deviation. The accuracy of measurements of the Large Delta T Seebeck coefficient and thermal diffusivity was also enhanced by the use of computers.

  2. Checkpoint triggering in a computer system

    SciTech Connect

    Cher, Chen-Yong

    2016-09-06

    According to an aspect, a method for triggering creation of a checkpoint in a computer system includes executing a task in a processing node of the computer system and determining whether it is time to read a monitor associated with a metric of the task. The monitor is read to determine a value of the metric based on determining that it is time to read the monitor. A threshold for triggering creation of the checkpoint is determined based on the value of the metric. Based on determining that the value of the metric has crossed the threshold, the checkpoint including state data of the task is created to enable restarting execution of the task upon a restart operation.
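
    A toy rendering of the claimed flow (run the task, periodically read a monitor, and create a checkpoint when the metric crosses a threshold). The metric, the read interval, and the pickle-based persistence are illustrative stand-ins, not details from the patent:

```python
import pickle
import random

def fault_risk_metric():
    # Hypothetical monitor value, e.g. a corrected-error rate per interval.
    return random.random()

def write_checkpoint(state, path="checkpoint.pkl"):
    with open(path, "wb") as f:
        pickle.dump(state, f)          # persist task state for a later restart

def run_task(n_steps=1000, read_every=50, threshold=0.95):
    state = {"step": 0, "accum": 0.0}
    for step in range(n_steps):
        state["step"] = step
        state["accum"] += step * 1e-3  # stand-in for the task's real work
        if step % read_every == 0 and fault_risk_metric() > threshold:
            write_checkpoint(state)    # metric crossed: create a checkpoint
    return state

run_task()
```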

  4. CAVES - Computer-Aided Vehicle Embarkation System.

    DTIC Science & Technology

    1985-06-01

    A computer-aided vehicle embarkation system (CAVES) is developed to assist embarkation personnel in loading vehicles on board a ship. CAVES provides the Embarkation Officer the flexibility and portability needed to make real-time decisions about vehicle loading.

  5. Personal Computer and Workstation Operating Systems Tutorial

    DTIC Science & Technology

    1994-03-01

    Since 1971 Intel Corporation has released eight major CPU designs, each an improvement over the previous design. Intel and its competitor Motorola have both...processes. The minimum CPU requirement for running the UNIX operating system is the Intel 80386 or the Motorola MC68030 [Ref. 10:p. 205]. Memory...the Motorola 68000 series for their computers. Varieties of UNIX can be found on both Intel and Motorola CPU platforms. Apple's A/UX is a Motorola-based

  6. Physical Optics Based Computational Imaging Systems

    NASA Astrophysics Data System (ADS)

    Olivas, Stephen Joseph

    There is an ongoing demand on behalf of the consumer, medical and military industries to make lighter weight, higher resolution, wider field-of-view and extended depth-of-focus cameras. This leads to design trade-offs between performance and cost, be it size, weight, power, or expense. This has brought attention to finding new ways to extend the design space while adhering to cost constraints. Extending the functionality of an imager in order to achieve extraordinary performance is a common theme of computational imaging, a field of study which uses additional hardware along with tailored algorithms to formulate and solve inverse problems in imaging. This dissertation details four specific systems within this emerging field: a Fiber Bundle Relayed Imaging System, an Extended Depth-of-Focus Imaging System, a Platform Motion Blur Image Restoration System, and a Compressive Imaging System. The Fiber Bundle Relayed Imaging System is part of a larger project, where the work presented in this thesis was to use image processing techniques to mitigate problems inherent to fiber bundle image relay and then, form high-resolution wide field-of-view panoramas captured from multiple sensors within a custom state-of-the-art imager. The Extended Depth-of-Focus System goals were to characterize the angular and depth dependence of the PSF of a focal swept imager in order to increase the acceptably focused imaged scene depth. The goal of the Platform Motion Blur Image Restoration System was to build a system that can capture a high signal-to-noise ratio (SNR), long-exposure image which is inherently blurred while at the same time capturing motion data using additional optical sensors in order to deblur the degraded images. Lastly, the objective of the Compressive Imager was to design and build a system functionally similar to the Single Pixel Camera and use it to test new sampling methods for image generation and to characterize it against a traditional camera. These computational
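
    For the motion-blur restoration problem mentioned above, a common frequency-domain baseline is Wiener deconvolution with a known (or sensor-estimated) blur kernel. The sketch below uses a constant noise-to-signal ratio and a synthetic streak PSF; it is a generic baseline, not the dissertation's sensor-aided method:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=0.01):
    """Wiener filter with a constant noise-to-signal ratio nsr."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + nsr) * G
    return np.real(np.fft.ifft2(F_hat))

rng = np.random.default_rng(1)
img = rng.random((128, 128))
psf = np.zeros((128, 128))
psf[0, :9] = 1.0 / 9.0                              # 9-pixel horizontal streak
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))
restored = wiener_deconvolve(blurred + 0.01 * rng.normal(size=img.shape), psf)
```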

  7. System/360 Computer Assisted Network Scheduling (CANS) System

    NASA Technical Reports Server (NTRS)

    Brewer, A. C.

    1972-01-01

    Computer assisted scheduling techniques that produce conflict-free and efficient schedules have been developed and implemented to meet needs of the Manned Space Flight Network. CANS system provides effective management of resources in complex scheduling environment. System is automated resource scheduling, controlling, planning, information storage and retrieval tool.

  8. View southeast of computer controlled energy monitoring system. System replaced ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    View southeast of computer controlled energy monitoring system. System replaced strip chart recorders and other instruments under the direct observation of the load dispatcher. - Thirtieth Street Station, Load Dispatch Center, Thirtieth & Market Streets, Railroad Station, Amtrak (formerly Pennsylvania Railroad Station), Philadelphia, Philadelphia County, PA

  9. Tools for Embedded Computing Systems Software

    NASA Technical Reports Server (NTRS)

    1978-01-01

    A workshop was held to assess the state of tools for embedded systems software and to determine directions for tool development. A synopsis of the talk and the key figures of each workshop presentation, together with chairmen summaries, are presented. The presentations covered four major areas: (1) tools and the software environment (development and testing); (2) tools and software requirements, design, and specification; (3) tools and language processors; and (4) tools and verification and validation (analysis and testing). The utility and contribution of existing tools and research results for the development and testing of embedded computing systems software are described and assessed.

  10. Some queuing network models of computer systems

    NASA Technical Reports Server (NTRS)

    Herndon, E. S.

    1980-01-01

    Queuing network models of a computer system operating with a single workload type are presented. Program algorithms are adapted for use on the Texas Instruments SR-52 programmable calculator. By slightly altering the algorithm to process the G and H matrices row by row instead of column by column, six devices and an unlimited job/terminal population could be handled on the SR-52. Techniques are also introduced for handling a simple load dependent server and for studying interactive systems with fixed multiprogramming limits.
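
    The SR-52 program in the abstract used a convolution-style algorithm over the G and H matrices; a closely related textbook alternative for the same single-workload closed network is exact Mean Value Analysis, sketched below with illustrative service demands:

```python
def mva_closed_network(demands, think_time, n_users):
    """Exact MVA for a single-class closed queueing network.
    demands[k] is the total service demand (seconds) at queueing centre k."""
    q = [0.0] * len(demands)                    # mean queue length per centre
    throughput = 0.0
    for n in range(1, n_users + 1):
        # Arrival theorem: an arriving customer sees the mean queue of n-1 customers.
        r = [d * (1.0 + q_k) for d, q_k in zip(demands, q)]
        throughput = n / (think_time + sum(r))  # response-time law
        q = [throughput * r_k for r_k in r]     # Little's law per centre
    return throughput, q

# e.g. a CPU and one disk, 20 terminals with 5 s think time (illustrative values)
print(mva_closed_network([0.04, 0.03], think_time=5.0, n_users=20))
```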

  11. Honeywell Modular Automation System Computer Software Documentation

    SciTech Connect

    STUBBS, A.M.

    2000-12-04

    The purpose of this Computer Software Document (CSWD) is to provide configuration control of the Honeywell Modular Automation System (MAS) in use at the Plutonium Finishing Plant (PFP). This CSWD describes hardware and PFP-developed software for control of stabilization furnaces. The Honeywell software can generate configuration reports for the developed control software. These reports are described in the following section and are attached as addenda. This plan applies to the PFP Engineering Manager, Thermal Stabilization Cognizant Engineers, and the Shift Technical Advisors responsible for the Honeywell MAS software/hardware and administration of the Honeywell System.

  12. Standards and Ontologies in Computational Systems Biology

    PubMed Central

    Sauro, Herbert M.; Bergmann, Frank

    2009-01-01

    With the growing importance of computational models in systems biology there has been much interest in recent years to develop standard model interchange languages that permit biologists to easily exchange models between different software tools. In this chapter two chief model exchange standards, SBML and CellML are described. In addition, other related features including visual layout initiatives, ontologies and best practices for model annotation are discussed. Software tools such as developer libraries and basic editing tools are also introduced together with a discussion on the future of modeling languages and visualization tools in systems biology. PMID:18793134

  13. Computer simulations of learning in neural systems.

    PubMed

    Salu, Y

    1983-04-01

    Recent experiments have shown that, in some cases, strengths of synaptic ties are being modified in learning. However, it is not known what the rules that control those modifications are, especially what determines which synapses will be modified and which will remain unchanged during a learning episode. Two postulated rules that may solve that problem are introduced. To check their effectiveness, the rules are tested in many computer models that simulate learning in neural systems. The simulations demonstrate that, theoretically, the two postulated rules are effective in organizing the synaptic changes. If they are found to also exist in biological systems, these postulated rules may be an important element in the learning process.
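
    The abstract does not reproduce the paper's two postulated rules; as a generic illustration of the kind of selective synaptic update being simulated, a thresholded Hebbian-style step that modifies only jointly active synapses might look like this:

```python
import numpy as np

def selective_hebbian_step(weights, pre, post, eta=0.01, theta=0.5):
    """Strengthen only synapses whose pre- and post-synaptic activities both
    exceed a threshold; all other synapses remain unchanged."""
    eligible = np.outer(post > theta, pre > theta)     # synapses allowed to change
    return weights + eta * eligible * np.outer(post, pre)

rng = np.random.default_rng(5)
W = rng.normal(0.0, 0.1, (4, 8))        # 8 presynaptic, 4 postsynaptic units
W = selective_hebbian_step(W, pre=rng.random(8), post=rng.random(4))
```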

  14. Semi-automatic 2D-to-3D conversion of human-centered videos enhanced by age and gender estimation

    NASA Astrophysics Data System (ADS)

    Fard, Mani B.; Bayazit, Ulug

    2014-01-01

    In this work, we propose a feasible 3D video generation method to enable high-quality visual perception using a monocular uncalibrated camera. Anthropometric distances between standard facial landmarks are approximated based on the person's age and gender. These measurements are used in a 2-stage approach to facilitate the construction of binocular stereo images. Specifically, one view of the background is registered in the initial stage of video shooting. It is followed by an automatically guided displacement of the camera toward its secondary position. At the secondary position, real-time capturing is started and the foreground (viewed person) region is extracted for each frame. After an accurate parallax estimation, the extracted foreground is placed in front of the background image that was captured at the initial position. The constructed full view from the initial position, combined with the view from the secondary (current) position, thus forms the complete binocular pair during real-time video shooting. The subjective evaluation results indicate competent depth-perception quality for the proposed system.

  15. Visual Turing test for computer vision systems

    PubMed Central

    Geman, Donald; Geman, Stuart; Hallonquist, Neil; Younes, Laurent

    2015-01-01

    Today, computer vision systems are tested by their accuracy in detecting and localizing instances of objects. As an alternative, and motivated by the ability of humans to provide far richer descriptions and even tell a story about an image, we construct a “visual Turing test”: an operator-assisted device that produces a stochastic sequence of binary questions from a given test image. The query engine proposes a question; the operator either provides the correct answer or rejects the question as ambiguous; the engine proposes the next question (“just-in-time truthing”). The test is then administered to the computer-vision system, one question at a time. After the system’s answer is recorded, the system is provided the correct answer and the next question. Parsing is trivial and deterministic; the system being tested requires no natural language processing. The query engine employs statistical constraints, learned from a training set, to produce questions with essentially unpredictable answers—the answer to a question, given the history of questions and their correct answers, is nearly equally likely to be positive or negative. In this sense, the test is only about vision. The system is designed to produce streams of questions that follow natural story lines, from the instantiation of a unique object, through an exploration of its properties, and on to its relationships with other uniquely instantiated objects. PMID:25755262
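
    The key mechanism is selecting the next question so that its answer, given the history, is nearly unpredictable. A small counting-based sketch of that selection over a set of annotated training images is given below; the data structures are hypothetical simplifications of the paper's query engine:

```python
def next_question(candidates, history, annotated_images):
    """Return the unasked question whose estimated P(yes | history) is closest
    to 0.5 over training images consistent with all answers so far."""
    consistent = [img for img in annotated_images
                  if all(img.get(q) == a for q, a in history)]
    asked = {q for q, _ in history}
    best, best_gap = None, 1.0
    for q in candidates:
        if q in asked or not consistent:
            continue
        p_yes = sum(1 for img in consistent if img.get(q)) / len(consistent)
        if abs(p_yes - 0.5) < best_gap:
            best, best_gap = q, abs(p_yes - 0.5)
    return best

images = [{"person": True, "car": False}, {"person": True, "car": True},
          {"person": False, "car": True}, {"person": False, "car": False}]
print(next_question(["person", "car"], [("person", True)], images))
```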

  16. Software simulator for multiple computer simulation system

    NASA Technical Reports Server (NTRS)

    Ogrady, E. P.

    1983-01-01

    A description is given of the structure and use of a computer program that simulates the operation of a parallel processor simulation system. The program is part of an investigation to determine algorithms that are suitable for simulating continous systems on a parallel processor configuration. The simulator is designed to accurately simulate the problem-solving phase of a simulation study. Care has been taken to ensure the integrity and correctness of data exchanges and to correctly sequence periods of computation and periods of data exchange. It is pointed out that the functions performed during a problem-setup phase or a reset phase are not simulated. In particular, there is no attempt to simulate the downloading process that loads object code into the local, transfer, and mapping memories of processing elements or the memories of the run control processor and the system control processor. The main program of the simulator carries out some problem-setup functions of the system control processor in that it requests the user to enter values for simulation system parameters and problem parameters. The method by which these values are transferred to the other processors, however, is not simulated.

  17. TU-A-9A-06: Semi-Automatic Segmentation of Skin Cancer in High-Frequency Ultrasound Images: Initial Comparison with Histology

    SciTech Connect

    Gao, Y; Li, X; Fishman, K; Yang, X; Liu, T

    2014-06-15

    Purpose: In skin-cancer radiotherapy, the assessment of skin lesions is challenging, particularly because important features such as depth and width are hard to determine. The aim of this study is to develop an interactive segmentation method to delineate the tumor boundary in high-frequency ultrasound images and to correlate the segmentation results with the histopathological tumor dimensions. Methods: We analyzed 6 patients who presented a total of 10 skin lesions involving the face, scalp, and hand. The patients' various skin lesions were scanned using a high-frequency ultrasound system (Episcan, LONGPORT, INC., PA, U.S.A.), with a 30-MHz single-element transducer. The lateral resolution was 14.6 micron and the axial resolution was 3.85 micron for the ultrasound image. Semiautomatic image segmentation was performed to extract the cancer region, using a robust statistics driven active contour algorithm. The corresponding histology images were also obtained after tumor resection and served as the reference standards in this study. Results: Eight out of the 10 lesions were successfully segmented. The ultrasound tumor delineation correlates well with the histology assessment in all measurements, such as depth, size, and shape. The depths measured by ultrasound differ by an average of 9.3% from those in the histology images. The remaining 2 cases suffered from a mismatch between the pathology and ultrasound images. Conclusion: High-frequency ultrasound is a noninvasive, accurate and easily accessible modality for imaging skin cancer. Our segmentation method, combined with high-frequency ultrasound technology, provides a promising tool to estimate the extent of the tumor to guide the radiotherapy procedure and monitor treatment response.
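
    A minimal two-region segmentation loop in the spirit of region-based active contours (alternating mean updates and pixel reassignment, with a morphological opening as a crude smoothness term) is sketched below on synthetic data. It is only an analogy for the robust-statistics driven active contour used in the study:

```python
import numpy as np
from scipy import ndimage

def two_region_contour(img, n_iter=50, seed=None):
    """Chan-Vese-flavoured two-region segmentation of a grayscale image."""
    if seed is None:
        seed = tuple(s // 2 for s in img.shape)
    mask = np.zeros(img.shape, bool)
    mask[seed] = True
    mask = ndimage.binary_dilation(mask, iterations=5)        # initial region
    for _ in range(n_iter):
        c_in, c_out = img[mask].mean(), img[~mask].mean()     # region means
        mask = (img - c_in) ** 2 < (img - c_out) ** 2         # reassign pixels
        mask = ndimage.binary_opening(mask, iterations=1)     # regularise edges
    return mask

rng = np.random.default_rng(2)
img = rng.normal(0.2, 0.05, (128, 128))
img[40:90, 50:100] += 0.5                                     # bright "lesion"
segmentation = two_region_contour(img)
```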

  18. Computer-Assisted Photo Interpretation System

    NASA Astrophysics Data System (ADS)

    Niedzwiadek, Harry A.

    1981-11-01

    A computer-assisted photo interpretation research (CAPIR) system has been developed at the U.S. Army Engineer Topographic Laboratories (ETL), Fort Belvoir, Virginia. The system is based around the APPS-IV analytical plotter, a photogrammetric restitution device that was designed and developed by Autometric specifically for interactive, computerized data collection activities involving high-resolution, stereo aerial photographs. The APPS-IV is ideally suited for feature analysis and feature extraction, the primary functions of a photo interpreter. The APPS-IV is interfaced with a minicomputer and a geographic information system called AUTOGIS. The AUTOGIS software provides the tools required to collect or update digital data using an APPS-IV, construct and maintain a geographic data base, and analyze or display the contents of the data base. Although the CAPIR system is fully functional at this time, considerable enhancements are planned for the future.

  19. A computer system for geosynchronous satellite navigation

    NASA Technical Reports Server (NTRS)

    Koch, D. W.

    1980-01-01

    A computer system specifically designed to estimate and predict Geostationary Operational Environmental Satellite (GOES-4) navigation parameters using Earth imagery is described. The estimates are needed for spacecraft maneuvers, while predictions provide the capability for near real-time image registration. System software is composed of four functional subsystems: (1) data base management; (2) image processing; (3) navigation; and (4) output. Hardware consists of a host minicomputer, a cathode ray tube terminal, a graphics/video display unit, and associated input/output peripherals. System validity is established through the processing of actual imagery obtained by sensors on board the Synchronous Meteorological Satellite (SMS-2). Results indicate the system is capable of operationally providing both accurate GOES-4 navigation estimates and images with a potential registration accuracy of several picture elements (pixels).

  20. Computer systems for automatic earthquake detection

    USGS Publications Warehouse

    Stewart, S.W.

    1974-01-01

    U.S. Geological Survey seismologists in Menlo Park, California, are utilizing the speed, reliability, and efficiency of minicomputers to monitor seismograph stations and to automatically detect earthquakes. An earthquake detection computer system, believed to be the only one of its kind in operation, automatically reports about 90 percent of all local earthquakes recorded by a network of over 100 central California seismograph stations. The system also monitors the stations for signs of malfunction or abnormal operation. Before the automatic system was put in operation, all of the earthquakes recorded had to be detected by manually searching the records, a time-consuming process. With the automatic detection system, the stations are efficiently monitored continuously.
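
    Automatic event detection on continuous seismograms is classically done with a short-term/long-term average (STA/LTA) trigger. The abstract does not say which algorithm the Menlo Park system used, so the sketch below is a generic textbook detector with illustrative window lengths:

```python
import numpy as np

def sta_lta_trigger(trace, fs, sta_win=1.0, lta_win=30.0, threshold=4.0):
    """Return sample indices where the STA/LTA ratio exceeds the threshold."""
    energy = trace.astype(float) ** 2
    sta = np.convolve(energy, np.ones(int(sta_win * fs)) / (sta_win * fs), "same")
    lta = np.convolve(energy, np.ones(int(lta_win * fs)) / (lta_win * fs), "same")
    return np.flatnonzero(sta / np.maximum(lta, 1e-12) > threshold)

fs = 100.0
t = np.arange(0, 60, 1 / fs)
trace = np.random.default_rng(3).normal(0, 1, t.size)
trace[3000:3400] += 8 * np.sin(2 * np.pi * 5 * t[3000:3400])   # synthetic event
print(sta_lta_trigger(trace, fs)[:5])
```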

  1. Computational systems biology in cancer brain metastasis.

    PubMed

    Peng, Huiming; Tan, Hua; Zhao, Weiling; Jin, Guangxu; Sharma, Sambad; Xing, Fei; Watabe, Kounosuke; Zhou, Xiaobo

    2016-01-01

    Brain metastases occur in 20-40% of patients with advanced malignancies. A better understanding of the mechanism of this disease will help us to identify novel therapeutic strategies. In this review, we will discuss the systems biology approaches used in this area, including bioinformatics and mathematical modeling. Bioinformatics has been used for identifying the molecular mechanisms driving brain metastasis, and mathematical modeling methods for analyzing the dynamics of a system and predicting optimal therapeutic strategies. We will illustrate the strategies, procedures, and computational techniques used for studying systems biology in cancer brain metastases. We will give examples of how to use a systems biology approach to analyze a complex disease. Some of the approaches used to identify relevant networks, pathways, and possibly biomarkers in metastasis will be reviewed in detail. Finally, certain challenges and possible future directions in this area will also be discussed.

  2. Computation in Dynamically Bounded Asymmetric Systems

    PubMed Central

    Rutishauser, Ueli; Slotine, Jean-Jacques; Douglas, Rodney

    2015-01-01

    Previous explanations of computations performed by recurrent networks have focused on symmetrically connected saturating neurons and their convergence toward attractors. Here we analyze the behavior of asymmetrically connected networks of linear threshold neurons, whose positive response is unbounded. We show that, for a wide range of parameters, this asymmetry brings interesting and computationally useful dynamical properties. When driven by input, the network explores potential solutions through highly unstable 'expansion' dynamics. This expansion is steered and constrained by negative divergence of the dynamics, which ensures that the dimensionality of the solution space continues to reduce until an acceptable solution manifold is reached. Then the system contracts stably on this manifold towards its final solution trajectory. The unstable positive feedback and cross inhibition that underlie expansion and divergence are common motifs in molecular and neuronal networks. Therefore we propose that very simple organizational constraints that combine these motifs can lead to spontaneous computation and so to the spontaneous modification of entropy that is characteristic of living systems. PMID:25617645

  3. Computer Data-Entry System Facilitates Proofreading

    NASA Technical Reports Server (NTRS)

    Woo, John, Jr.; Woo, Daniel N.

    1992-01-01

    Visual optical-electronic display for encoding and measurement (VODEM) is system of computer data-entry and display equipment and associated software. Designed to reduce significantly rate of errors in text or other data entered manually or by optical character-recognition equipment and eases task of proofreading those data. Accuracy increased and stress reduced. Provides head-on display including two texts or sets of data to be compared. Developed in large-screen and small-screen version differing mainly in display equipment. Large-screen version includes cathode-ray-tube video display; small-screen version includes smaller liquid-crystal display mounted on computer-controlled x-y drive.

  4. Systems analysis of the space shuttle. [communication systems, computer systems, and power distribution

    NASA Technical Reports Server (NTRS)

    Schilling, D. L.; Oh, S. J.; Thau, F.

    1975-01-01

    Developments in communications systems, computer systems, and power distribution systems for the space shuttle are described. The use of high speed delta modulation for bit rate compression in the transmission of television signals is discussed. Simultaneous Multiprocessor Organization, an approach to computer organization, is presented. Methods of computer simulation and automatic malfunction detection for the shuttle power distribution system are also described.
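
    Delta modulation of the kind mentioned here for television bit-rate compression reduces each sample to a single sign bit against a running staircase estimate. A toy 1-bit encoder/decoder pair, with an arbitrary step size, is shown below:

```python
import math

def delta_modulate(samples, step=0.05):
    bits, estimate = [], 0.0
    for x in samples:
        bit = 1 if x >= estimate else 0        # transmit only the error sign
        estimate += step if bit else -step     # staircase approximation
        bits.append(bit)
    return bits

def delta_demodulate(bits, step=0.05):
    out, estimate = [], 0.0
    for bit in bits:
        estimate += step if bit else -step     # integrate the received bits
        out.append(estimate)
    return out

signal = [math.sin(2 * math.pi * k / 64) for k in range(256)]
recovered = delta_demodulate(delta_modulate(signal))
```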

  5. Computer-based Guideline Implementation Systems

    PubMed Central

    Shiffman, Richard N.; Liaw, Yischon; Brandt, Cynthia A.; Corb, Geoffrey J.

    1999-01-01

    In this systematic review, the authors analyze the functionality provided by recent computer-based guideline implementation systems and characterize the effectiveness of the systems. Twenty-five studies published between 1992 and January 1998 were identified. Articles were included if the authors indicated an intent to implement guideline recommendations for clinicians and if the effectiveness of the system was evaluated. Provision of eight information management services and effects on guideline adherence, documentation, user satisfaction, and patient outcome were noted. All systems provided patient-specific recommendations. In 19, recommendations were available concurrently with care. Explanation services were described for nine systems. Nine systems allowed interactive documentation, and 17 produced paper-based output. Communication services were present most often in systems integrated with electronic medical records. Registration, calculation, and aggregation services were infrequently reported. There were 10 controlled trials (9 randomized) and 10 time-series correlational studies. Guideline adherence improved in 14 of 18 systems in which it was measured. Documentation improved in 4 of 4 studies. PMID:10094063

  6. Computer-based anesthesiology paging system.

    PubMed

    Abenstein, John P; Allan, Jonathan A; Ferguson, Jennifer A; Deick, Steven D; Rose, Steven H; Narr, Bradly J

    2003-07-01

    For more than a century, Mayo Clinic has used various communication strategies to optimize the efficiency of physicians. Anesthesiology has used colored wooden tabs, colored lights, and, most recently, a distributed video paging system (VPS) that was near the end of its useful life. A computer-based anesthesiology paging system (CAPS) was developed to replace the VPS. The CAPS uses a hands-off paradigm with ubiquitous displays to inform the practice where personnel are needed. The system consists of a dedicated Ethernet network connecting redundant central servers, terminal servers, programmable keypads, and light-emitting diode displays. Commercially available hardware and software tools minimized development and maintenance costs. The CAPS was installed in >200 anesthetizing and support locations. Downtime for the CAPS averaged 0.144 min/day, as compared with 24.2 min/day for the VPS. During installation, neither system was available and the department used beepers for communications. With a beeper, the median response time of an anesthesiologist to a page from a beeper was 2.78 min, and with the CAPS 1.57 min; this difference was statistically significant (P = 0.021, t(67) = 2.36). We conclude that the CAPS is a reliable and efficient paging system that may contribute to the efficiency of the practice. Mayo Clinic installed a computer-based anesthesiology paging system (CAPS) to inform operating suite personnel when assistance is needed in procedure and recovery areas. The CAPS is more reliable than the system it replaced. Anesthesiologists arrive at a patient's bedside faster when they are paged with the CAPS than with a beeper.

  7. Visual computing model for immune system and medical system.

    PubMed

    Gong, Tao; Cao, Xinxue; Xiong, Qin

    2015-01-01

    The natural immune system is an intelligent, self-organizing and adaptive system with a variety of immune cells that employ different types of immune mechanisms. The mutual cooperation between the immune cells shows the intelligence of this immune system, and modeling it is of significant interest in medical science and engineering. In order to build a model of this immune system that is easier to understand through visualization than a traditional mathematical model, a visual computing model of the immune system was proposed in this paper and used to design a medical system incorporating the immune system. Several visual simulations of the immune system were run to test the visual effect. The experimental results of the simulations show that the visual modeling approach can provide a more effective way of analyzing this immune system than traditional mathematical equations alone.

  8. Epilepsy analytic system with cloud computing.

    PubMed

    Shen, Chia-Ping; Zhou, Weizhi; Lin, Feng-Seng; Sung, Hsiao-Ya; Lam, Yan-Yu; Chen, Wei; Lin, Jeng-Wei; Pan, Ming-Kai; Chiu, Ming-Jang; Lai, Feipei

    2013-01-01

    Biomedical data analytic systems have played an important role in clinical diagnosis for several decades. Analyzing these big data to provide decision support for physicians is now an emerging research area. This paper presents a parallelized web-based tool with a cloud computing service architecture to analyze epilepsy. Several modern analytic functions, namely wavelet transform, genetic algorithm (GA), and support vector machine (SVM), are cascaded in the system. To demonstrate the effectiveness of the system, it has been verified with two kinds of electroencephalography (EEG) data: short-term EEG and long-term EEG. The results reveal that our approach achieves a total classification accuracy higher than 90%. In addition, the entire training is accelerated about 4.66 times, and prediction time also meets real-time requirements.
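
    A compact sketch of the wavelet-feature-plus-SVM part of such a pipeline (the GA feature-selection stage and the cloud parallelization are omitted) on synthetic stand-in epochs, assuming the pywt and scikit-learn packages:

```python
import numpy as np
import pywt
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def wavelet_features(epoch, wavelet="db4", level=4):
    """Energy of each wavelet sub-band as a compact feature vector."""
    return np.array([np.sum(c ** 2) for c in pywt.wavedec(epoch, wavelet, level=level)])

rng = np.random.default_rng(4)
background = rng.normal(0, 1, (50, 512))                       # synthetic EEG epochs
seizure = rng.normal(0, 1, (50, 512)) + 3 * np.sin(2 * np.pi * 3 * np.arange(512) / 256)
X = np.array([wavelet_features(e) for e in np.vstack([background, seizure])])
y = np.array([0] * 50 + [1] * 50)
print("cross-validated accuracy:", cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean())
```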

  9. Implementing a modular system of computer codes

    SciTech Connect

    Vondy, D.R.; Fowler, T.B.

    1983-07-01

    A modular computation system has been developed for nuclear reactor core analysis. The codes can be applied repeatedly in blocks without extensive user input data, as needed for reactor history calculations. The primary control options over the calculational paths and task assignments within the codes are blocked separately from other instructions, admitting ready access via user input instructions or directions from automated procedures and promoting flexible and diverse applications at minimum application cost. Data interfacing is done under formal specifications with data files manipulated by an informed manager. This report emphasizes the system aspects and the development of useful capability, and is intended to be informative and useful to anyone developing a modular code system of comparable sophistication. Overall, this report summarizes in a general way the many factors and difficulties that are faced in making reactor core calculations, based on the experience of the authors. It provides the background against which work on HTGR reactor physics is being carried out.

  10. Multiscale Computational Models of Complex Biological Systems

    PubMed Central

    Walpole, Joseph; Papin, Jason A.; Peirce, Shayn M.

    2014-01-01

    Integration of data across spatial, temporal, and functional scales is a primary focus of biomedical engineering efforts. The advent of powerful computing platforms, coupled with quantitative data from high-throughput experimental platforms, has allowed multiscale modeling to expand as a means to more comprehensively investigate biological phenomena in experimentally relevant ways. This review aims to highlight recently published multiscale models of biological systems while using their successes to propose the best practices for future model development. We demonstrate that coupling continuous and discrete systems best captures biological information across spatial scales by selecting modeling techniques that are suited to the task. Further, we suggest how to best leverage these multiscale models to gain insight into biological systems using quantitative, biomedical engineering methods to analyze data in non-intuitive ways. These topics are discussed with a focus on the future of the field, the current challenges encountered, and opportunities yet to be realized. PMID:23642247

  11. Semi-automatic analysis of fire debris

    PubMed

    Touron; Malaquin; Gardebas; Nicolai

    2000-05-08

    Automated analysis of fire residues requires a strategy that can deal with the wide variety of criminalistic samples received. Because the concentration of accelerant in a sample is unknown and the range of flammable products is wide, full attention from the analyst is required. Primary detection with a photoionisation detector resolves the first problem by determining the right method to use: either the less sensitive classical head-space determination or absorption on an active charcoal tube, a method better suited to low concentrations. The latter method is suitable for automatic thermal desorption (ATD400), avoiding any risk of cross-contamination. A PONA column (50 m x 0.2 mm i.d.) allows the separation of volatile hydrocarbons from C(1) to C(15) and the updating of a database. A specific second column is used for heavy hydrocarbons. Heavy products (C(13) to C(40)) were extracted from residues using a very small amount of pentane, concentrated to 1 ml at 50 degrees C and then placed on an automatic carousel. Comparison of flammables with reference chromatograms provided the expected identification, supported by mass spectrometry where necessary. This analytical strategy belongs to the IRCGN quality program, resulting in the analysis of 1500 samples per year by two technicians.

  12. Semi-Automatic Assembly of Learning Resources

    ERIC Educational Resources Information Center

    Verbert, K.; Ochoa, X.; Derntl, M.; Wolpers, M.; Pardo, A.; Duval, E.

    2012-01-01

    Technology Enhanced Learning is a research field that has matured considerably over the last decade. Many technical solutions to support design, authoring and use of learning activities and resources have been developed. The first datasets that reflect the tracking of actual use of these tools in real-life settings are beginning to become…

  13. RASCAL: A Rudimentary Adaptive System for Computer-Aided Learning.

    ERIC Educational Resources Information Center

    Stewart, John Christopher

    Both the background of computer-assisted instruction (CAI) systems in general and the requirements of a computer-aided learning system which would be a reasonable assistant to a teacher are discussed. RASCAL (Rudimentary Adaptive System for Computer-Aided Learning) is a first attempt at defining a CAI system which would individualize the learning…

  14. 10 CFR 35.457 - Therapy-related computer systems.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 1 2012-01-01 2012-01-01 false Therapy-related computer systems. 35.457 Section 35.457... Therapy-related computer systems. The licensee shall perform acceptance testing on the treatment planning system of therapy-related computer systems in accordance with published protocols accepted by nationally...

  15. 10 CFR 35.457 - Therapy-related computer systems.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 1 2013-01-01 2013-01-01 false Therapy-related computer systems. 35.457 Section 35.457... Therapy-related computer systems. The licensee shall perform acceptance testing on the treatment planning system of therapy-related computer systems in accordance with published protocols accepted by nationally...

  16. 10 CFR 35.457 - Therapy-related computer systems.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 1 2014-01-01 2014-01-01 false Therapy-related computer systems. 35.457 Section 35.457... Therapy-related computer systems. The licensee shall perform acceptance testing on the treatment planning system of therapy-related computer systems in accordance with published protocols accepted by nationally...

  17. 10 CFR 35.457 - Therapy-related computer systems.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 1 2010-01-01 2010-01-01 false Therapy-related computer systems. 35.457 Section 35.457... Therapy-related computer systems. The licensee shall perform acceptance testing on the treatment planning system of therapy-related computer systems in accordance with published protocols accepted by nationally...

  18. 10 CFR 35.457 - Therapy-related computer systems.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 1 2011-01-01 2011-01-01 false Therapy-related computer systems. 35.457 Section 35.457... Therapy-related computer systems. The licensee shall perform acceptance testing on the treatment planning system of therapy-related computer systems in accordance with published protocols accepted by nationally...

  19. Computer Assisted Reference Locator (CARL) System: An Overview.

    ERIC Educational Resources Information Center

    Sands, William A.

    The Computer Assisted Reference Locator (CARL) is a computer-based information retrieval system which uses coordinate indexing. Objectives established in designing the system are: (1) simplicity of reference query and retrieval; (2) ease of system maintenance; and (3) adaptability for alternative computer systems. The source documents input into…

  20. Computer system for a hospital microbiology laboratory.

    PubMed

    Delorme, J; Cournoyer, G

    1980-07-01

    An online computer system has been developed for a university hospital laboratory in microbiology that processes more than 125,000 specimens yearly. The system performs activities such as the printing of reports, fiscal and administrative tasks, quality control of data and techniques, epidemiologic assistance, germ identification, and teaching and research in the different subspecialties of microbiology. Features of interest are smooth sequential transmission of clinical microbiologic test results from the laboratory to medical records, instantaneous display of all results for as long as 16 months, and updating of patient status, room number, and attending physician before the printing of reports. All data stored in the computer file can be retrieved by any data item or combination thereof. The reports are normally produced in the laboratory area by a teleprinter or by batch at night in case of mechanical failure of the terminal. If the system breaks down, the manually completed request forms can be sent to medical records. Programs were written in COBOL and ASSEMBLY languages.