Use of Severity of Illness Indexes for Assessing Health Care Provider Performance
1985-07-01
Exclusive reliance on manual selection of the inpatient records would have proven difficult and time-consuming. Appendix B contains correspondence...presence of mortalities or complications following surgery. Results obtained from the two-fold PASBA screen and manual selection process represent a...information provided include the last four numbers of each patient’s social security number, register number, procedure and occurrence found. A manual
Automatic recognition of lactating sow behaviors through depth image processing
USDA-ARS?s Scientific Manuscript database
Manual observation and classification of animal behaviors is laborious, time-consuming, and limited in its ability to process large amounts of data. A computer vision-based system was developed that automatically recognizes sow behaviors (lying, sitting, standing, kneeling, feeding, drinking, and shiftin...
NASA Astrophysics Data System (ADS)
Yussup, N.; Ibrahim, M. M.; Rahman, N. A. A.; Mokhtar, M.; Salim, N. A. A.; Soh@Shaari, S. C.; Azman, A.; Lombigit, L.; Azman, A.; Omar, S. A.
2018-01-01
Most of the procedures in the neutron activation analysis (NAA) process, which has been established at the Malaysian Nuclear Agency (Nuclear Malaysia) since the 1980s, are performed manually. These manual procedures, carried out by the NAA laboratory personnel, are time-consuming and inefficient, especially for the sample counting and measurement process. The sample needs to be changed and the measurement software set up for every one-hour counting period, and both steps are performed manually for every sample. Hence, an automatic sample changer system (ASC) consisting of hardware and software was developed to automate the counting of up to 30 samples consecutively. This paper describes the ASC control software for the NAA process, which is designed and developed to control the ASC hardware and call the GammaVision software for sample measurement. The software is developed using the National Instruments LabVIEW development package.
FRAME (Force Review Automation Environment): MATLAB-based AFM data processor.
Partola, Kostyantyn R; Lykotrafitis, George
2016-05-03
Data processing of force-displacement curves generated by atomic force microscopes (AFMs) for elastic moduli and unbinding event measurements is very time-consuming and susceptible to user error or bias. There is an evident need for consistent, dependable, and easy-to-use AFM data processing software. We have developed an open-source software application, the force review automation environment (or FRAME), that provides users with an intuitive graphical user interface, automated data processing, and tools for expediting manual processing. We did not observe a significant difference between manually processed and automatically processed results from the same data sets. Copyright © 2016 Elsevier Ltd. All rights reserved.
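As a rough illustration of the kind of force-curve processing FRAME automates (FRAME itself is a MATLAB application), the Python sketch below fits a Hertzian contact model to a synthetic force-indentation curve to estimate an elastic modulus; the tip radius, Poisson ratio, and data are assumptions for the example, not FRAME's actual implementation.

```python
# Illustrative sketch (not FRAME itself): fit a Hertz model
# F = (4/3) * E_eff * sqrt(R) * d^1.5 to a force-indentation curve.
import numpy as np
from scipy.optimize import curve_fit

R = 10e-9        # assumed spherical tip radius [m]
nu = 0.5         # assumed Poisson ratio of the sample

def hertz(d, E):
    """Force [N] for indentation depth d [m] and Young's modulus E [Pa]."""
    E_eff = E / (1.0 - nu**2)
    return (4.0 / 3.0) * E_eff * np.sqrt(R) * np.clip(d, 0, None) ** 1.5

# synthetic force-indentation data standing in for an AFM approach curve
depth = np.linspace(0, 200e-9, 200)
force = hertz(depth, 5e3) + np.random.normal(0, 20e-12, depth.size)

E_fit, _ = curve_fit(hertz, depth, force, p0=[1e3])
print(f"estimated Young's modulus: {E_fit[0] / 1e3:.2f} kPa")
```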
NASA Astrophysics Data System (ADS)
Taboada, B.; Vega-Alvarado, L.; Córdova-Aguilar, M. S.; Galindo, E.; Corkidi, G.
2006-09-01
Characterization of multiphase systems occurring in fermentation processes is a time-consuming and tedious process when manual methods are used. This work describes a new semi-automatic methodology for the on-line assessment of diameters of oil drops and air bubbles occurring in a complex simulated fermentation broth. High-quality digital images were obtained from the interior of a mechanically stirred tank. These images were pre-processed to find segments of edges belonging to the objects of interest. The contours of air bubbles and oil drops were then reconstructed using an improved Hough transform algorithm which was tested in two, three and four-phase simulated fermentation model systems. The results were compared against those obtained manually by a trained observer, showing no significant statistical differences. The method was able to reduce the total processing time for the measurements of bubbles and drops in different systems by 21-50% and the manual intervention time for the segmentation procedure by 80-100%.
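The paper uses an improved Hough transform; as a minimal stand-in, the sketch below detects circular objects in a synthetic image with the stock circular Hough transform from scikit-image. The image, radii range, and peak count are assumptions for illustration.

```python
# Minimal illustration of circle detection with a circular Hough transform
# (the paper uses an improved variant; this is the stock scikit-image version).
import numpy as np
from skimage.draw import disk
from skimage.feature import canny
from skimage.transform import hough_circle, hough_circle_peaks

# synthetic image with two "bubbles"
img = np.zeros((200, 200))
for center, radius in [((60, 70), 20), ((140, 120), 30)]:
    rr, cc = disk(center, radius)
    img[rr, cc] = 1.0

edges = canny(img, sigma=2.0)                      # edge segments of the objects
radii = np.arange(10, 40, 2)
accumulator = hough_circle(edges, radii)           # vote for candidate circles
_, cx, cy, found_r = hough_circle_peaks(accumulator, radii, total_num_peaks=2)

for x, y, r in zip(cx, cy, found_r):
    print(f"detected circle at row {y}, col {x}, radius {r} px")
```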
Automated Interactive Simulation Model (AISIM) VAX Version 5.0 Training Manual.
1987-05-29
action, activity, decision, etc. that consumes time. The entity is automatically created by the system when an ACTION Primitive is placed. 1.3.2.4 The...MODELED SYSTEM 1.3.2.1 The Process Entity. A Process is used to represent the operations, decisions, actions or activities that can be decomposed and...is associated with the Action entity described below, is included in Process definitions to indicate the time a certain Action (or process, decision
Consumer Education Reference Manual.
ERIC Educational Resources Information Center
Tennessee Univ., Knoxville. State Agency for Title I.
This manual contains information for consumer education, which is defined as the process of imparting to an individual the skills, concepts, knowledge, and insights required to help each person evolve his or her own values, evaluate alternative choices in the marketplace, manage personal resources effectively, and obtain the best buys for his or…
NASA Astrophysics Data System (ADS)
Yussup, N.; Rahman, N. A. A.; Ibrahim, M. M.; Mokhtar, M.; Salim, N. A. A.; Soh@Shaari, S. C.; Azman, A.
2017-01-01
The Neutron Activation Analysis (NAA) process has been established at the Malaysian Nuclear Agency (Nuclear Malaysia) since the 1980s. Most of the established procedures, especially from sample registration to sample analysis, are performed manually. These manual procedures carried out by the NAA laboratory personnel are time-consuming and inefficient. Hence, software to support system automation was developed to provide an effective method to replace redundant manual data entries and produce a faster sample analysis and calculation process. This paper describes the design and development of automation software for the NAA process, which consists of three sub-programs: sample registration; hardware control and data acquisition; and sample analysis. The data flow and connections between the sub-programs are explained. The software is developed using the National Instruments LabVIEW development package.
ERIC Educational Resources Information Center
New York State Education Dept., Albany.
This manual provides teachers with lesson plans in consumer education. Each lesson contains background material offering the teacher specific information on the subject of the lesson, development of understandings, student worksheets, and discussion questions to encourage student involvement. The ten lesson plans are--Buying on time, Retail…
Open-source hardware is a low-cost alternative for scientific instrumentation and research
USDA-ARS?s Scientific Manuscript database
Scientific research requires the collection of data in order to study, monitor, analyze, describe, or understand a particular process or event. Data collection efforts are often a compromise: manual measurements can be time-consuming and labor-intensive, resulting in data being collected at a low f...
Predicting Stored Grain Insect Population Densities Using an Electronic Probe Trap
USDA-ARS?s Scientific Manuscript database
Manual sampling of insects in stored grain is a laborious and time consuming process. Automation of grain sampling should help to increase the adoption of stored-grain integrated pest management. A new commercial electronic grain probe trap (OPI Insector™) has recently been marketed. We field tested...
2011-01-01
Background: Reprocessing of endoscopes generally requires labour-intensive manual cleaning followed by high-level disinfection in an automated endoscope reprocessor (AER). EVOTECH Endoscope Cleaner and Reprocessor (ECR) is approved for fully automated cleaning and disinfection whereas AERs require manual cleaning prior to the high-level disinfection procedure. The purpose of this economic evaluation was to determine the cost-efficiency of the ECR versus AER methods of endoscopy reprocessing in an actual practice setting. Methods: A time and motion study was conducted at a Canadian hospital to collect data on the personnel resources and consumable supplies costs associated with the use of EVOTECH ECR versus manual cleaning followed by AER with Medivators DSD-201. Reprocessing of all endoscopes was observed and timed for both reprocessor types over three days. Laboratory staff members were interviewed regarding the consumption and cost of all disposable supplies and equipment. Exact Wilcoxon rank sum test was used for assessing differences in total cycle reprocessing time. Results: Endoscope reprocessing was significantly shorter with the ECR than with manual cleaning followed by AER. The differences in median time were 12.46 minutes per colonoscope (p < 0.0001), 6.31 minutes per gastroscope (p < 0.0001), and 5.66 minutes per bronchoscope (p = 0.0040). Almost 2 hours of direct labour time was saved daily with the ECR. The total per cycle cost of consumables and labour for maintenance was slightly higher for EVOTECH ECR versus manual cleaning followed by AER ($8.91 versus $8.31, respectively). Including the cost of direct labour time consumed in reprocessing scopes, the per cycle and annual costs of using the EVOTECH ECR was less than the cost of manual cleaning followed by AER disinfection ($11.50 versus $11.88). Conclusions: The EVOTECH ECR was more efficient and less costly to use for the reprocessing of endoscopes than manual cleaning followed by AER disinfection. Although the cost of consumable supplies required to reprocess endoscopes with EVOTECH ECR was slightly higher, the value of the labour time saved with EVOTECH ECR more than offset the additional consumables cost. The increased efficiency with EVOTECH ECR could lead to even further cost-savings by shifting endoscopy laboratory personnel responsibilities but further study is required. PMID:21967345
Forte, Lindy; Shum, Cynthia
2011-10-03
Reprocessing of endoscopes generally requires labour-intensive manual cleaning followed by high-level disinfection in an automated endoscope reprocessor (AER). EVOTECH Endoscope Cleaner and Reprocessor (ECR) is approved for fully automated cleaning and disinfection whereas AERs require manual cleaning prior to the high-level disinfection procedure. The purpose of this economic evaluation was to determine the cost-efficiency of the ECR versus AER methods of endoscopy reprocessing in an actual practice setting. A time and motion study was conducted at a Canadian hospital to collect data on the personnel resources and consumable supplies costs associated with the use of EVOTECH ECR versus manual cleaning followed by AER with Medivators DSD-201. Reprocessing of all endoscopes was observed and timed for both reprocessor types over three days. Laboratory staff members were interviewed regarding the consumption and cost of all disposable supplies and equipment. Exact Wilcoxon rank sum test was used for assessing differences in total cycle reprocessing time. Endoscope reprocessing was significantly shorter with the ECR than with manual cleaning followed by AER. The differences in median time were 12.46 minutes per colonoscope (p < 0.0001), 6.31 minutes per gastroscope (p < 0.0001), and 5.66 minutes per bronchoscope (p = 0.0040). Almost 2 hours of direct labour time was saved daily with the ECR. The total per cycle cost of consumables and labour for maintenance was slightly higher for EVOTECH ECR versus manual cleaning followed by AER ($8.91 versus $8.31, respectively). Including the cost of direct labour time consumed in reprocessing scopes, the per cycle and annual costs of using the EVOTECH ECR was less than the cost of manual cleaning followed by AER disinfection ($11.50 versus $11.88). The EVOTECH ECR was more efficient and less costly to use for the reprocessing of endoscopes than manual cleaning followed by AER disinfection. Although the cost of consumable supplies required to reprocess endoscopes with EVOTECH ECR was slightly higher, the value of the labour time saved with EVOTECH ECR more than offset the additional consumables cost. The increased efficiency with EVOTECH ECR could lead to even further cost-savings by shifting endoscopy laboratory personnel responsibilities but further study is required.
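The study compared total cycle reprocessing times with an exact Wilcoxon rank sum test; the sketch below shows how such a comparison might look in Python on made-up timing data (SciPy's ranksums uses a normal approximation rather than the exact test).

```python
# Sketch of a rank-sum comparison of per-cycle reprocessing times (minutes).
# The numbers are invented; SciPy's ranksums uses a normal approximation,
# not the exact test reported in the study.
import numpy as np
from scipy.stats import ranksums

ecr_times = np.array([18.2, 19.0, 17.5, 18.8, 19.4, 18.1])          # hypothetical ECR cycles
manual_aer_times = np.array([30.5, 29.8, 31.2, 30.1, 32.0, 29.5])   # hypothetical manual + AER cycles

stat, p_value = ranksums(ecr_times, manual_aer_times)
print(f"rank-sum statistic = {stat:.2f}, p = {p_value:.4f}")
print(f"difference in medians = {np.median(manual_aer_times) - np.median(ecr_times):.2f} min")
```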
Odland, Audun; Server, Andres; Saxhaug, Cathrine; Breivik, Birger; Groote, Rasmus; Vardal, Jonas; Larsson, Christopher; Bjørnerud, Atle
2015-11-01
Volumetric magnetic resonance imaging (MRI) is now widely available and routinely used in the evaluation of high-grade gliomas (HGGs). Ideally, volumetric measurements should be included in this evaluation. However, manual tumor segmentation is time-consuming and suffers from inter-observer variability. Thus, tools for semi-automatic tumor segmentation are needed. The aims of this study were to present a semi-automatic method (SAM) for segmentation of HGGs and to compare this method with manual segmentation performed by experts. The inter-observer variability among experts manually segmenting HGGs using volumetric MRIs was also examined. Twenty patients with HGGs were included. All patients underwent surgical resection prior to inclusion. Each patient underwent several MRI examinations during and after adjuvant chemoradiation therapy. Three experts performed manual segmentation. The results of tumor segmentation by the experts and by the SAM were compared using Dice coefficients and kappa statistics. A relatively close agreement was seen between two of the experts and the SAM, while the third expert disagreed considerably with the other experts and the SAM. An important reason for this disagreement was a different interpretation of contrast enhancement as either surgically-induced or glioma-induced. The time required for manual tumor segmentation was an average of 16 min per scan. Editing of the tumor masks produced by the SAM required an average of less than 2 min per sample. Manual segmentation of HGG is very time-consuming, and using the SAM could increase the efficiency of this process. However, the accuracy of the SAM ultimately depends on the expert doing the editing. Our study confirmed considerable inter-observer variability among experts defining tumor volume from volumetric MRIs. © The Foundation Acta Radiologica 2014.
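A minimal sketch of the agreement metrics mentioned above (Dice coefficient and kappa) computed between two toy binary tumor masks; the masks are invented and the code is illustrative rather than the study's actual pipeline.

```python
# Dice overlap and Cohen's kappa between two binary segmentation masks
# (1 = tumor, 0 = background). Toy masks stand in for expert and SAM results.
import numpy as np
from sklearn.metrics import cohen_kappa_score

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())

expert = np.zeros((64, 64), dtype=bool); expert[20:40, 20:40] = True
sam    = np.zeros((64, 64), dtype=bool); sam[22:42, 21:41] = True

print(f"Dice  = {dice(expert, sam):.3f}")
print(f"kappa = {cohen_kappa_score(expert.ravel(), sam.ravel()):.3f}")
```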
Stormo, Svein K; Ernstsen, Arild; Nilsen, Heidi; Heia, Karsten; Sivertsen, Agnar H; Elvevoll, Edel
2004-07-01
The objective of this study was to contribute to the development of technology that will be able to replace manual operations in processing of fish fillets. Removal of parasites, black lining, remnants of skin, and bloodstains are costly and time-consuming operations to the fish processing industry. The presence of parasites in fish products tends to spoil consumers' appetites. Recent reports questioning the safety of eating cod infected with parasites might lower consumer acceptance of seafood. Presently, parasites are detected and removed manually. An average efficiency of about 75% under commercial conditions has been reported. In this study, we focused on biochemical differences between cod muscle and the prevalent anisakine nematode species (Anisakis simplex and Pseudoterranova decipiens) infecting Atlantic cod (Gadus morhua). Using reversed phase high-performance liquid chromatography equipped with a photodiode array detector, substances absorbing in the range 300 to 600 nm were identified in extracts from parasite material. These substances were not detected in extracts from cod tissue. Significant biochemical differences between cod muscle and parasite material have thus been demonstrated.
Autonomous characterization of plastic-bonded explosives
NASA Astrophysics Data System (ADS)
Linder, Kim Dalton; DeRego, Paul; Gomez, Antonio; Baumgart, Chris
2006-08-01
Plastic-Bonded Explosives (PBXs) are a newer generation of explosive compositions developed at Los Alamos National Laboratory (LANL). Understanding the micromechanical behavior of these materials is critical. The size of the crystal particles and porosity within the PBX influences their shock sensitivity. Current methods to characterize the prominent structural characteristics include manual examination by scientists and attempts to use commercially available image processing packages. Both methods are time consuming and tedious. LANL personnel, recognizing this as a manually intensive process, have worked with the Kansas City Plant / Kirtland Operations to develop a system which utilizes image processing and pattern recognition techniques to characterize PBX material. System hardware consists of a CCD camera, zoom lens, two-dimensional, motorized stage, and coaxial, cross-polarized light. System integration of this hardware with the custom software is at the core of the machine vision system. Fundamental processing steps involve capturing images from the PBX specimen, and extraction of void, crystal, and binder regions. For crystal extraction, a Quadtree decomposition segmentation technique is employed. Benefits of this system include: (1) reduction of the overall characterization time; (2) a process which is quantifiable and repeatable; (3) utilization of personnel for intelligent review rather than manual processing; and (4) significantly enhanced characterization accuracy.
Automated Signal Processing Applied to Volatile-Based Inspection of Greenhouse Crops
Jansen, Roel; Hofstee, Jan Willem; Bouwmeester, Harro; van Henten, Eldert
2010-01-01
Gas chromatograph–mass spectrometers (GC-MS) have been used and shown utility for volatile-based inspection of greenhouse crops. However, a widely recognized difficulty associated with GC-MS application is the large and complex data generated by this instrument. As a consequence, experienced analysts are often required to process this data in order to determine the concentrations of the volatile organic compounds (VOCs) of interest. Manual processing is time-consuming, labour intensive and may be subject to errors due to fatigue. The objective of this study was to assess whether or not GC-MS data can also be automatically processed in order to determine the concentrations of crop health associated VOCs in a greenhouse. An experimental dataset that consisted of twelve data files was processed both manually and automatically to address this question. Manual processing was based on simple peak integration while the automatic processing relied on the algorithms implemented in the MetAlign™ software package. The results of automatic processing of the experimental dataset resulted in concentrations similar to that after manual processing. These results demonstrate that GC-MS data can be automatically processed in order to accurately determine the concentrations of crop health associated VOCs in a greenhouse. When processing GC-MS data automatically, noise reduction, alignment, baseline correction and normalisation are required. PMID:22163594
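The study relied on MetAlign for automatic processing; as a generic illustration of the basic steps involved (baseline correction, peak detection, peak integration), here is a small NumPy/SciPy sketch on a synthetic chromatogram. All signal parameters are assumptions.

```python
# Toy chromatogram: crude baseline correction, peak detection, and peak-area
# integration. A simple stand-in for the MetAlign pipeline described above.
import numpy as np
from scipy.signal import find_peaks

t = np.linspace(0, 10, 2000)                           # retention time [min]
drift = 0.05 * t                                       # slow baseline drift
voc = (np.exp(-((t - 3.0) / 0.05) ** 2) +              # VOC peak 1
       0.6 * np.exp(-((t - 6.5) / 0.05) ** 2))         # VOC peak 2
trace = voc + drift + np.random.normal(0, 0.01, t.size)

corrected = trace - np.polyval(np.polyfit(t, trace, 1), t)   # remove linear baseline
found, props = find_peaks(corrected, height=0.2, width=5)
dt = t[1] - t[0]
for i, p in enumerate(found):
    left, right = int(props["left_ips"][i]), int(props["right_ips"][i])
    area = corrected[left:right].sum() * dt            # area over the detected width
    print(f"peak at {t[p]:.2f} min, area = {area:.4f}")
```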
Measurement of thermally ablated lesions in sonoelastographic images using level set methods
NASA Astrophysics Data System (ADS)
Castaneda, Benjamin; Tamez-Pena, Jose Gerardo; Zhang, Man; Hoyt, Kenneth; Bylund, Kevin; Christensen, Jared; Saad, Wael; Strang, John; Rubens, Deborah J.; Parker, Kevin J.
2008-03-01
The capability of sonoelastography to detect lesions based on elasticity contrast can be applied to monitor the creation of thermally ablated lesions. Currently, segmentation of lesions depicted in sonoelastographic images is performed manually, which can be a time-consuming process prone to significant intra- and inter-observer variability. This work presents a semi-automated segmentation algorithm for sonoelastographic data. The user starts by planting a seed in the perceived center of the lesion. Fast marching methods use this information to create an initial estimate of the lesion. Subsequently, level set methods refine its final shape by attaching the segmented contour to edges in the image while maintaining smoothness. The algorithm is applied to in vivo sonoelastographic images from twenty-five thermally ablated lesions created in porcine livers. The estimated area is compared to results from manual segmentation and gross pathology images. Results show that the algorithm outperforms manual segmentation in accuracy and inter- and intra-observer variability. The processing time per image is significantly reduced.
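The paper combines fast marching with level set refinement; as a loose stand-in, the sketch below grows a seeded morphological geodesic active contour (scikit-image) from a user-style seed point in a synthetic image. The image, seed, and parameter values are assumptions, not the authors' implementation.

```python
# Seeded active-contour segmentation of a dark lesion in a synthetic image,
# loosely mirroring the seed -> level-set refinement described above.
import numpy as np
from skimage.draw import disk
from skimage.segmentation import (morphological_geodesic_active_contour,
                                  inverse_gaussian_gradient)

# synthetic frame: stiff (dark) lesion on a brighter background
image = np.full((128, 128), 0.8) + np.random.normal(0, 0.02, (128, 128))
rr, cc = disk((64, 64), 18)
image[rr, cc] = 0.2

gimage = inverse_gaussian_gradient(image)          # edge-stopping map (near 0 at edges)

init = np.zeros_like(image)                        # user-planted seed at lesion centre
rr, cc = disk((64, 64), 4)
init[rr, cc] = 1

mask = morphological_geodesic_active_contour(gimage, 150, init_level_set=init,
                                             smoothing=2, balloon=1, threshold=0.7)
print(f"segmented lesion area: {mask.sum()} px")
```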
ERIC Educational Resources Information Center
Patterson, Olga
2012-01-01
Domain adaptation of natural language processing systems is challenging because it requires human expertise. While manual effort is effective in creating a high quality knowledge base, it is expensive and time consuming. Clinical text adds another layer of complexity to the task due to privacy and confidentiality restrictions that hinder the…
ERIC Educational Resources Information Center
Vlas, Radu Eduard
2012-01-01
Open source projects do have requirements; they are, however, mostly informal, text descriptions found in requests, forums, and other correspondence. Understanding such requirements provides insight into the nature of open source projects. Unfortunately, manual analysis of natural language requirements is time-consuming, and for large projects,…
Product Manuals: A Consumer Perspective.
ERIC Educational Resources Information Center
Showers, Linda S.; And Others
1993-01-01
Qualitative analysis of insights from consumer focus groups on product manual usage reveals consumer perceptions and preferences regarding manual and safety message format. Results can be used to improve manual design and content. (JOW)
Properties of induced seismicity at the geothermal reservoir Insheim, Germany
NASA Astrophysics Data System (ADS)
Olbert, Kai; Küperkoch, Ludger; Thomas, Meier
2017-04-01
Within the framework of the German MAGS2 project, the processing of induced events at the geothermal power plant Insheim, Germany, has been reassessed and evaluated. The power plant is located close to the western rim of the Upper Rhine Graben in a region with a strongly heterogeneous subsurface. Therefore, the location of seismic events, particularly the depth estimation, is challenging. The seismic network, consisting of up to 50 stations, has an aperture of approximately 15 km around the power plant. Consequently, manual processing is time-consuming. Using a waveform similarity detection algorithm, the existing dataset from 2012 to 2016 has been reprocessed to complete the catalog of induced seismic events. Based on waveform similarity, clusters of similar events have been detected. Automated P- and S-arrival time determination using an improved multi-component autoregressive prediction algorithm yields approximately 14,000 P- and S-arrivals for 758 events. Using a dataset of manual picks as reference, the automated picking algorithm has been optimized, resulting in a standard deviation of the residuals between automated and manual picks of about 0.02 s. The automated locations show uncertainties comparable to locations of the manual reference dataset. 90% of the automated relocations fall within the error ellipsoid of the manual locations. The remaining locations are either badly resolved due to low numbers of picks or so well resolved that the automatic location is outside the error ellipsoid although located close to the manual location. The developed automated processing scheme proved to be a useful tool to supplement real-time monitoring. The event clusters are located at small patches of faults known from reflection seismic studies. The clusters are observed close to both the injection and the production wells.
Guimaraes, Carolina V; Grzeszczuk, Robert; Bisset, George S; Donnelly, Lane F
2018-03-01
When implementing or monitoring department-sanctioned standardized radiology reports, feedback about individual faculty performance has been shown to be a useful driver of faculty compliance. Most commonly, these data are derived from manual audit, which can be both time-consuming and subject to sampling error. The purpose of this study was to evaluate whether a software program using natural language processing and machine learning could audit radiologist compliance with the use of standardized reports as accurately as manual audits. Radiology reports from a 1-month period were loaded into such a software program, and faculty compliance with use of standardized reports was calculated. For that same period, manual audits were performed (25 reports audited for each of 42 faculty members). The mean compliance rates calculated by automated auditing were then compared with the confidence interval of the mean rate by manual audit. The mean compliance rate for use of standardized reports as determined by manual audit was 91.2%, with a confidence interval between 89.3% and 92.8%. The mean compliance rate calculated by automated auditing was 92.0%, within that confidence interval. This study shows that, by use of natural language processing and machine learning algorithms, an automated analysis can define whether reports are compliant with use of standardized report templates and language as accurately as manual audits. This may avoid significant labor costs related to conducting the manual auditing process. Copyright © 2017 American College of Radiology. Published by Elsevier Inc. All rights reserved.
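The vendor's model is not described in the abstract; as a generic sketch of how a text classifier could flag non-compliant reports, here is a small scikit-learn pipeline trained on a handful of invented example reports and labels.

```python
# Generic sketch of auditing report compliance with a text classifier.
# The example reports and labels are invented; the product's actual model is proprietary.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reports = [
    "EXAM: CT abdomen. FINDINGS: ... IMPRESSION: No acute abnormality.",
    "EXAM: MR brain. FINDINGS: ... IMPRESSION: Stable postoperative changes.",
    "Dictated free text without the standard section headings.",
    "Brief note, no structured findings or impression given.",
]
compliant = [1, 1, 0, 0]   # 1 = follows the standardized template, 0 = does not

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(reports, compliant)

new_report = "EXAM: CT chest. FINDINGS: ... IMPRESSION: No pulmonary embolism."
print("compliant" if model.predict([new_report])[0] else "non-compliant")
```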
Design and evaluation of a service oriented architecture for paperless ICU tarification.
Steurbaut, Kristof; Colpaert, Kirsten; Van Hoecke, Sofie; Steurbaut, Sabrina; Danneels, Chris; Decruyenaere, Johan; De Turck, Filip
2012-06-01
The computerization of Intensive Care Units provides an overwhelming amount of electronic data for both medical and financial analysis. However, the current tarification, which is the process to tick and count patients' procedures, is still a repetitive, time-consuming process on paper. Nurses and secretaries keep track manually of the patients' medical procedures. This paper describes the design methodology and implementation of automated tarification services. In this study we investigate if the tarification can be modeled in service oriented architecture as a composition of interacting services. Services are responsible for data collection, automatic assignment of records to physicians and application of rules. Performance is evaluated in terms of execution time, cost evaluation and return on investment based on tracking of real procedures. The services provide high flexibility in terms of maintenance, integration and rules support. It is shown that services offer a more accurate, less time-consuming and cost-effective tarification.
A Fully Automated Approach to Spike Sorting.
Chung, Jason E; Magland, Jeremy F; Barnett, Alex H; Tolosa, Vanessa M; Tooker, Angela C; Lee, Kye Y; Shah, Kedar G; Felix, Sarah H; Frank, Loren M; Greengard, Leslie F
2017-09-13
Understanding the detailed dynamics of neuronal networks will require the simultaneous measurement of spike trains from hundreds of neurons (or more). Currently, approaches to extracting spike times and labels from raw data are time consuming, lack standardization, and involve manual intervention, making it difficult to maintain data provenance and assess the quality of scientific results. Here, we describe an automated clustering approach and associated software package that addresses these problems and provides novel cluster quality metrics. We show that our approach has accuracy comparable to or exceeding that achieved using manual or semi-manual techniques with desktop central processing unit (CPU) runtimes faster than acquisition time for up to hundreds of electrodes. Moreover, a single choice of parameters in the algorithm is effective for a variety of electrode geometries and across multiple brain regions. This algorithm has the potential to enable reproducible and automated spike sorting of larger scale recordings than is currently possible. Copyright © 2017 Elsevier Inc. All rights reserved.
Automatic mouse ultrasound detector (A-MUD): A new tool for processing rodent vocalizations.
Zala, Sarah M; Reitschmidt, Doris; Noll, Anton; Balazs, Peter; Penn, Dustin J
2017-01-01
House mice (Mus musculus) emit complex ultrasonic vocalizations (USVs) during social and sexual interactions, which have features similar to bird song (i.e., they are composed of several different types of syllables, uttered in succession over time to form a pattern of sequences). Manually processing complex vocalization data is time-consuming and potentially subjective, and therefore, we developed an algorithm that automatically detects mouse ultrasonic vocalizations (Automatic Mouse Ultrasound Detector or A-MUD). A-MUD is a script that runs on STx acoustic software (S_TOOLS-STx version 4.2.2), which is free for scientific use. This algorithm improved the efficiency of processing USV files, as it was 4-12 times faster than manual segmentation, depending upon the size of the file. We evaluated A-MUD error rates using manually segmented sound files as a 'gold standard' reference, and compared them to a commercially available program. A-MUD had lower error rates than the commercial software, as it detected significantly more correct positives, and fewer false positives and false negatives. The errors generated by A-MUD were mainly false negatives, rather than false positives. This study is the first to systematically compare error rates for automatic ultrasonic vocalization detection methods, and A-MUD and subsequent versions will be made available for the scientific community.
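A-MUD itself is an STx script; as a rough illustration of the general idea of band-limited energy detection of USVs (not A-MUD's algorithm), the sketch below flags spectrogram frames whose energy in an assumed 30-110 kHz band exceeds a threshold. The sampling rate, band, and threshold are assumptions.

```python
# Rough illustration of ultrasonic-vocalization detection by band energy.
import numpy as np
from scipy.signal import spectrogram

fs = 250_000                                    # assumed sampling rate [Hz]
t = np.arange(0, 2.0, 1 / fs)
audio = np.random.normal(0, 0.01, t.size)       # background noise
chirp_idx = (t > 0.5) & (t < 0.55)              # synthetic 70 kHz syllable
audio[chirp_idx] += 0.2 * np.sin(2 * np.pi * 70_000 * t[chirp_idx])

f, times, Sxx = spectrogram(audio, fs=fs, nperseg=512, noverlap=256)
band = (f >= 30_000) & (f <= 110_000)           # assumed mouse USV band
band_energy = Sxx[band].sum(axis=0)

threshold = band_energy.mean() + 3 * band_energy.std()
detected = times[band_energy > threshold]
if detected.size:
    print(f"USV detected between {detected.min():.3f} s and {detected.max():.3f} s")
```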
Hemorrhage Detection and Segmentation in Traumatic Pelvic Injuries
Davuluri, Pavani; Wu, Jie; Tang, Yang; Cockrell, Charles H.; Ward, Kevin R.; Najarian, Kayvan; Hargraves, Rosalyn H.
2012-01-01
Automated hemorrhage detection and segmentation in traumatic pelvic injuries is vital for fast and accurate treatment decision making. Hemorrhage is the main cause of death in these patients within the first 24 hours after injury. It is very time-consuming for physicians to analyze all Computed Tomography (CT) images manually. As time is crucial in emergency medicine, analyzing medical images manually delays the decision-making process. Automated hemorrhage detection and segmentation can significantly help physicians to analyze these images and make fast and accurate decisions. Hemorrhage segmentation is a crucial step in the accurate diagnosis and treatment decision-making process. This paper presents a novel rule-based hemorrhage segmentation technique that utilizes pelvic anatomical information to segment hemorrhage accurately. An evaluation measure is used to quantify the accuracy of hemorrhage segmentation. The results show that the proposed method is able to segment hemorrhage very well, and the results are promising. PMID:22919433
[Medical image elastic registration smoothed by unconstrained optimized thin-plate spline].
Zhang, Yu; Li, Shuxiang; Chen, Wufan; Liu, Zhexing
2003-12-01
Elastic registration of medical images is an important subject in medical image processing. Previous work has concentrated on selecting corresponding landmarks manually and then using thin-plate spline interpolation to obtain the elastic transformation. However, landmark extraction is always prone to error, which influences the registration results, and localizing the landmarks manually is also difficult and time-consuming. We applied optimization theory to improve the thin-plate spline interpolation and, based on it, used an automatic method to extract the landmarks. Combining these two steps, we propose an automatic, accurate and robust registration method that has produced satisfactory registration results.
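A minimal sketch of thin-plate-spline interpolation of a deformation from corresponding landmarks, using SciPy's RBFInterpolator (SciPy >= 1.7); the landmark coordinates are invented and this is not the authors' optimized formulation.

```python
# Thin-plate-spline interpolation of a 2D deformation from corresponding landmarks.
# Landmark coordinates are invented; requires SciPy >= 1.7 for RBFInterpolator.
import numpy as np
from scipy.interpolate import RBFInterpolator

src = np.array([[10, 10], [10, 90], [90, 10], [90, 90], [50, 50]], dtype=float)  # source landmarks
dst = np.array([[12, 11], [ 9, 92], [88, 12], [91, 88], [53, 49]], dtype=float)  # target landmarks

tps = RBFInterpolator(src, dst, kernel='thin_plate_spline')

# warp an arbitrary grid of points from the source image into the target space
ys, xs = np.mgrid[0:100:10, 0:100:10]
pts = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
warped = tps(pts)
print(warped[:3])
```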
NASA Astrophysics Data System (ADS)
Huber, Matthew S.; Ferriãre, Ludovic; Losiak, Anna; Koeberl, Christian
2011-09-01
Planar deformation features (PDFs) in quartz, one of the most commonly used diagnostic indicators of shock metamorphism, are planes of amorphous material that follow crystallographic orientations, and can thus be distinguished from non-shock-induced fractures in quartz. The process of indexing data for PDFs from universal-stage measurements has traditionally been performed using a manual graphical method, a time-consuming process in which errors can easily be introduced. A mathematical method and computer algorithm, which we call the Automated Numerical Index Executor (ANIE) program for indexing PDFs, was produced, and is presented here. The ANIE program is more accurate and faster than the manual graphical determination of Miller-Bravais indices, as it allows control of the exact error used in the calculation and removal of human error from the process.
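ANIE's exact algorithm is described in the cited paper; as a minimal illustration of the underlying idea (matching the measured angle between a PDF pole and the quartz c-axis against known crystallographic angles), here is a small NumPy sketch with an abbreviated angle table. The measured orientations and the 5 degree tolerance are assumptions for the example.

```python
# Minimal illustration of indexing a PDF orientation: compare the measured angle
# between the PDF pole and the quartz c-axis with known crystallographic angles.
# The measured poles below are invented; the angle table is abbreviated.
import numpy as np

# angle between pole and c-axis [deg] for common PDF orientations in quartz
KNOWN_ANGLES = {
    "c (0001)": 0.0, "omega {10-13}": 23.0, "pi {10-12}": 32.4,
    "r/z {10-11}": 51.8, "m {10-10}": 90.0,
}
TOLERANCE = 5.0   # degrees (assumed matching tolerance)

def unit(azimuth_deg, dip_deg):
    """Unit vector from universal-stage azimuth and dip (degrees)."""
    az, dip = np.radians(azimuth_deg), np.radians(dip_deg)
    return np.array([np.cos(dip) * np.sin(az), np.cos(dip) * np.cos(az), np.sin(dip)])

c_axis = unit(40.0, 60.0)          # measured c-axis of the quartz grain (made up)
pdf_pole = unit(55.0, 32.0)        # measured pole to one PDF set (made up)

angle = np.degrees(np.arccos(np.clip(np.dot(c_axis, pdf_pole), -1.0, 1.0)))
matches = [name for name, a in KNOWN_ANGLES.items() if abs(angle - a) <= TOLERANCE]
print(f"measured angle = {angle:.1f} deg -> candidate index: {matches or 'unindexed'}")
```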
NASA Astrophysics Data System (ADS)
Rahman, Nur Aira Abd; Yussup, Nolida; Salim, Nazaratul Ashifa Bt. Abdullah; Ibrahim, Maslina Bt. Mohd; Mokhtar, Mukhlis B.; Soh@Shaari, Syirrazie Bin Che; Azman, Azraf B.; Ismail, Nadiah Binti
2015-04-01
Neutron Activation Analysis (NAA) has been established at Nuclear Malaysia since the 1980s. Most of the established procedures, including sample registration, are performed manually. Samples are recorded manually in a logbook and given an ID number; all samples, standards, SRMs and blanks are then recorded on the irradiation vial and on several forms prior to irradiation. These manual procedures carried out by the NAA laboratory personnel are time-consuming and inefficient. Sample registration software was developed as part of the IAEA/CRP project 'Development of Process Automation in the Neutron Activation Analysis (NAA) Facility in Malaysia Nuclear Agency (RC17399)'. The objective of the project is to create PC-based data-entry software for the sample preparation stage. This is an effective method to replace the redundant manual data entries that otherwise need to be completed by laboratory personnel. The software automatically generates a sample code for each sample in a batch, creates printable registration forms for administration purposes, and stores selected parameters that are passed to the sample analysis program. The software is developed using National Instruments LabVIEW 8.6.
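The registration software itself is written in LabVIEW; a minimal Python sketch of the batch sample-code generation it describes is given below. The code format and the CSV registration form are assumptions for illustration only.

```python
# Minimal sketch of batch sample-code generation for an NAA run.
# The code format "NAA-<year>-<batch>-<nnn>" is an assumption, not the actual scheme.
import csv
from datetime import date

def register_batch(batch_no: int, samples: list[str]) -> list[dict]:
    year = date.today().year
    records = []
    for i, name in enumerate(samples, start=1):
        records.append({
            "sample_code": f"NAA-{year}-{batch_no:03d}-{i:03d}",
            "description": name,
        })
    return records

batch = register_batch(7, ["soil A", "soil B", "SRM 2711a", "blank"])
with open("registration_form.csv", "w", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=["sample_code", "description"])
    writer.writeheader()
    writer.writerows(batch)
print(*[r["sample_code"] for r in batch], sep="\n")
```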
Baldewijns, Greet; Luca, Stijn; Nagels, William; Vanrumste, Bart; Croonenborghs, Tom
2015-01-01
It has been shown that gait speed and transfer times are good measures of functional ability in the elderly. However, data currently acquired by systems that measure either gait speed or transfer times in the homes of elderly people require manual review by healthcare workers, and this reviewing process is time-consuming. To alleviate this burden, this paper proposes the use of statistical process control (SPC) methods to automatically detect both positive and negative changes in transfer times. Three SPC techniques known for their ability to detect small shifts in the data (tabular CUSUM, standardized CUSUM and EWMA) are evaluated on simulated transfer times. This analysis shows that EWMA is the best-suited method, with a detection accuracy of 82% and an average detection time of 9.64 days.
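A minimal sketch of an EWMA control chart applied to simulated daily transfer times; the smoothing constant, control-limit width, and data are illustrative assumptions rather than the paper's exact configuration.

```python
# EWMA control chart on simulated daily transfer times (seconds).
# lambda, L and the data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
baseline = rng.normal(10.0, 1.0, 60)            # 60 days of stable transfer times
decline = rng.normal(11.5, 1.0, 30)             # slower transfers: possible functional decline
x = np.concatenate([baseline, decline])

lam, L = 0.2, 3.0                               # EWMA weight and control-limit width
mu0, sigma = baseline.mean(), baseline.std()

z = mu0
for day, xi in enumerate(x, start=1):
    z = lam * xi + (1 - lam) * z
    limit = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * day)))
    if abs(z - mu0) > limit:
        print(f"shift detected on day {day} (EWMA = {z:.2f} s)")
        break
```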
Kaur, Indreshpal; Zulovich, Jane M; Gonzalez, Marissa; McGee, Kara M; Ponweera, Nirmali; Thandi, Daljit; Alvarez, Enrique F; Annandale, Kathy; Flagge, Frank; Rezvani, Katayoun; Shpall, Elizabeth
2017-03-01
Umbilical cord blood (CB) is being used as a source of hematopoietic stem cells (HSCs) and immune cells to treat many disorders. Because these cells are present in low numbers in CB, investigators have developed strategies to expand HSCs and other immune cells such as natural killer (NK) cells. The initial step in this process is to enrich mononuclear cells (MNCs) while depleting unwanted cells. The manual method of MNC enrichment is routinely used by many centers; however, it is an open system, time-consuming and operator-dependent. For clinical manufacturing, it is important to have a closed system to avoid microbial contamination. In this study, we optimized an automated, closed system (Sepax) for enriching MNCs from cryopreserved CB units. Using Sepax, we observed higher recovery of total nucleated cells (TNCs), CD34+ cells, NK cells and monocytes when compared to manual enrichment, despite similar TNC and CD34+ viability with the two methods. Even though the depletion of red blood cells, granulocytes and platelets was superior using the manual method, significantly higher CFU-GM counts were obtained in MNCs enriched using Sepax compared to the manual method. This is likely related to the fact that the automated Sepax significantly shortened the processing time (Sepax: 74-175 minutes versus manual method: 180-290 minutes). The use of DNAse and MgCl2 during the Sepax thaw and wash procedure prevents clumping of cells and loss of viability, resulting in improved post-thaw cell recovery. We optimized enrichment of MNCs from cryopreserved CB products in a closed system using the Sepax, which is a walk-away, automated processing system. Copyright © 2017 International Society for Cellular Therapy. Published by Elsevier Inc. All rights reserved.
Sigoillot, Frederic D; Huckins, Jeremy F; Li, Fuhai; Zhou, Xiaobo; Wong, Stephen T C; King, Randall W
2011-01-01
Automated time-lapse microscopy can visualize proliferation of large numbers of individual cells, enabling accurate measurement of the frequency of cell division and the duration of interphase and mitosis. However, extraction of quantitative information by manual inspection of time-lapse movies is too time-consuming to be useful for analysis of large experiments. Here we present an automated time-series approach that can measure changes in the duration of mitosis and interphase in individual cells expressing fluorescent histone 2B. The approach requires analysis of only 2 features, nuclear area and average intensity. Compared to supervised learning approaches, this method reduces processing time and does not require generation of training data sets. We demonstrate that this method is as sensitive as manual analysis in identifying small changes in interphase or mitotic duration induced by drug or siRNA treatment. This approach should facilitate automated analysis of high-throughput time-lapse data sets to identify small molecules or gene products that influence timing of cell division.
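A hedged sketch of the two-feature idea described above: flag frames as mitotic when the segmented nucleus becomes small and bright relative to its interphase baseline. The thresholds, imaging interval, and measurements are invented.

```python
# Sketch of the two-feature rule: a nucleus in mitosis appears smaller and brighter
# (condensed chromatin) than in interphase. Thresholds and the time series are invented.
import numpy as np

# per-frame measurements for one cell: nuclear area [px] and mean H2B intensity [a.u.]
area      = np.array([400, 405, 398, 210, 190, 200, 395, 402, 399], dtype=float)
intensity = np.array([1.0, 1.0, 1.1, 1.9, 2.1, 2.0, 1.0, 1.0, 1.0])

base_area, base_int = np.median(area), np.median(intensity)
mitotic = (area < 0.7 * base_area) & (intensity > 1.5 * base_int)

frame_minutes = 5                                    # assumed imaging interval
print("mitotic frames:", np.flatnonzero(mitotic))
print(f"estimated mitotic duration: {mitotic.sum() * frame_minutes} min")
```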
Torres, Viviana; Cerda, Mauricio; Knaup, Petra; Löpprich, Martin
2016-01-01
An important part of the electronic information available in a Hospital Information System (HIS) has the potential to be automatically exported to Electronic Data Capture (EDC) platforms to improve clinical research. This automation has the advantage of reducing manual data transcription, a time-consuming and error-prone process. However, quantitative evaluations of the process of exporting data from a HIS to an EDC system have not been reported extensively, in particular in comparison with manual transcription. In this work, an assessment of the quality of an automatic export process focused on laboratory data from a HIS is presented. Quality of the laboratory data was assessed for two types of processes: (1) a manual process of data transcription, and (2) an automatic process of data transference. The automatic transference was implemented as an Extract, Transform and Load (ETL) process. A comparison was then carried out between the manual and automatic data collection methods. The criteria used to measure data quality were correctness and completeness. The manual process had a general error rate of 2.6% to 7.1%, with the lowest error rate obtained when data fields without a clear definition were removed from the analysis (p < 0.001). For the automatic process, the general error rate was 1.9% to 12.1%, with the lowest error rate obtained when excluding information missing in the HIS but transcribed to the EDC from other physical sources. The automatic ETL process can be used to collect laboratory data for clinical research, provided that data in the HIS, as well as physical documentation not included in the HIS, are identified beforehand and follow a standardized data collection protocol.
Joelsson, Daniel; Moravec, Phil; Troutman, Matthew; Pigeon, Joseph; DePhillips, Pete
2008-08-20
Transferring manual ELISAs to automated platforms requires optimizing the assays for each particular robotic platform. These optimization experiments are often time consuming and difficult to perform using a traditional one-factor-at-a-time strategy. In this manuscript we describe the development of an automated process using statistical design of experiments (DOE) to quickly optimize immunoassays for precision and robustness on the Tecan EVO liquid handler. By using fractional factorials and a split-plot design, five incubation time variables and four reagent concentration variables can be optimized in a short period of time.
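As an illustration of the design-of-experiments machinery referred to above, the sketch below generates a 2^(5-1) fractional factorial for five factors by aliasing the fifth factor with the four-way interaction (E = ABCD); the factor names are placeholders, not the assay's actual variables.

```python
# Generate a 2^(5-1) fractional factorial design: full factorial in A-D,
# with E aliased to the four-way interaction (E = ABCD). Factor names are placeholders.
import itertools
import numpy as np

full = np.array(list(itertools.product([-1, 1], repeat=4)))   # 16 runs in A, B, C, D
e = full.prod(axis=1, keepdims=True)                          # defining relation E = ABCD
design = np.hstack([full, e])

factors = ["coat_time", "block_time", "sample_time", "detect_time", "wash_cycles"]
print(" ".join(f"{f:>12}" for f in factors))
for run in design:
    print(" ".join(f"{int(v):>12}" for v in run))
```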
Automated in vivo 3D high-definition optical coherence tomography skin analysis system.
Ai Ping Yow; Jun Cheng; Annan Li; Srivastava, Ruchir; Jiang Liu; Wong, Damon Wing Kee; Hong Liang Tey
2016-08-01
The in vivo assessment and visualization of skin structures can be performed through the use of high-definition optical coherence tomography (HD-OCT) imaging. However, the manual assessment of such images can be exhausting and time-consuming. In this paper, we present an analysis system to automatically identify and quantify skin characteristics such as the topography of the skin surface and the thickness of the epidermis in HD-OCT images. Comparison of this system with manual clinical measurements demonstrated its potential for automatic, objective skin analysis and disease diagnosis. To our knowledge, this is the first report of an automated system to process and analyse HD-OCT skin images.
TRAFIC: fiber tract classification using deep learning
NASA Astrophysics Data System (ADS)
Ngattai Lam, Prince D.; Belhomme, Gaetan; Ferrall, Jessica; Patterson, Billie; Styner, Martin; Prieto, Juan C.
2018-03-01
We present TRAFIC, a fully automated tool for the labeling and classification of brain fiber tracts. TRAFIC classifies new fibers using a neural network trained on shape features computed from previously traced and manually corrected fiber tracts. It is independent of a DTI atlas, as it is applied to already-traced fibers. This work is motivated by medical applications where the process of extracting fibers from a DTI atlas, or classifying fibers manually, is time-consuming and requires knowledge of brain anatomy. With this new approach we were able to classify traced fiber tracts, obtaining encouraging results. In this report we will present in detail the methods used and the results achieved with our approach.
Duke, Jon D.; Friedlin, Jeff
2010-01-01
Evaluating medications for potential adverse events is a time-consuming process, typically involving manual lookup of information by physicians. This process can be expedited by CDS systems that support dynamic retrieval and filtering of adverse drug events (ADEs), but such systems require a source of semantically coded ADE data. We created a two-component system that addresses this need. First we created a natural language processing application which extracts adverse events from Structured Product Labels and generates a standardized ADE knowledge base. We then built a decision support service that consumes a Continuity of Care Document and returns a list of patient-specific ADEs. Our database currently contains 534,125 ADEs from 5602 product labels. An NLP evaluation of 9529 ADEs showed recall of 93% and precision of 95%. On a trial set of 30 CCDs, the system provided adverse event data for 88% of drugs and returned these results in an average of 620 ms. PMID:21346964
ERIC Educational Resources Information Center
Antonelli, Sharon
These three instruction manuals are designed as aids for faculty and staff teaching consumer education, nutrition, and parenting. They include resources for teaching limited English speaking students. A 17-page Vocational English as a Second Language (VESL) annotated bibliography precedes the instruction manuals. Each manual consists of 18 units.…
24 CFR 3282.207 - Manufactured home consumer manual requirements.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 24 Housing and Urban Development 5 2014-04-01 2014-04-01 false Manufactured home consumer manual... HOUSING AND URBAN DEVELOPMENT MANUFACTURED HOME PROCEDURAL AND ENFORCEMENT REGULATIONS Manufacturer Inspection and Certification Requirements § 3282.207 Manufactured home consumer manual requirements. (a) The...
24 CFR 3282.207 - Manufactured home consumer manual requirements.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 24 Housing and Urban Development 5 2012-04-01 2012-04-01 false Manufactured home consumer manual... HOUSING AND URBAN DEVELOPMENT MANUFACTURED HOME PROCEDURAL AND ENFORCEMENT REGULATIONS Manufacturer Inspection and Certification Requirements § 3282.207 Manufactured home consumer manual requirements. (a) The...
24 CFR 3282.207 - Manufactured home consumer manual requirements.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 24 Housing and Urban Development 5 2013-04-01 2013-04-01 false Manufactured home consumer manual... HOUSING AND URBAN DEVELOPMENT MANUFACTURED HOME PROCEDURAL AND ENFORCEMENT REGULATIONS Manufacturer Inspection and Certification Requirements § 3282.207 Manufactured home consumer manual requirements. (a) The...
24 CFR 3282.207 - Manufactured home consumer manual requirements.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 24 Housing and Urban Development 5 2011-04-01 2011-04-01 false Manufactured home consumer manual... HOUSING AND URBAN DEVELOPMENT MANUFACTURED HOME PROCEDURAL AND ENFORCEMENT REGULATIONS Manufacturer Inspection and Certification Requirements § 3282.207 Manufactured home consumer manual requirements. (a) The...
Computational Methods for Analyzing Health News Coverage
ERIC Educational Resources Information Center
McFarlane, Delano J.
2011-01-01
Researchers that investigate the media's coverage of health have historically relied on keyword searches to retrieve relevant health news coverage, and manual content analysis methods to categorize and score health news text. These methods are problematic. Manual content analysis methods are labor intensive, time consuming, and inherently…
Niklasson, Markus; Ahlner, Alexandra; Andresen, Cecilia; Marsh, Joseph A; Lundström, Patrik
2015-01-01
The process of resonance assignment is fundamental to most NMR studies of protein structure and dynamics. Unfortunately, the manual assignment of residues is tedious and time-consuming, and can represent a significant bottleneck for further characterization. Furthermore, while automated approaches have been developed, they are often limited in their accuracy, particularly for larger proteins. Here, we address this by introducing the software COMPASS, which, by combining automated resonance assignment with manual intervention, is able to achieve accuracy approaching that from manual assignments at greatly accelerated speeds. Moreover, by including the option to compensate for isotope shift effects in deuterated proteins, COMPASS is far more accurate for larger proteins than existing automated methods. COMPASS is an open-source project licensed under GNU General Public License and is available for download from http://www.liu.se/forskning/foass/tidigare-foass/patrik-lundstrom/software?l=en. Source code and binaries for Linux, Mac OS X and Microsoft Windows are available.
24 CFR 3286.7 - Consumer information.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 24 Housing and Urban Development 5 2011-04-01 2011-04-01 false Consumer information. 3286.7... Requirements § 3286.7 Consumer information. (a) Manufacturer's consumer manual. In each consumer manual... manufactured home, the retailer must provide the purchaser or lessee with a consumer disclosure. This...
Knowledge-driven information mining in remote-sensing image archives
NASA Astrophysics Data System (ADS)
Datcu, M.; Seidel, K.; D'Elia, S.; Marchetti, P. G.
2002-05-01
Users in all domains require information or information-related services that are focused, concise, reliable, low cost and timely, and which are provided in forms and formats compatible with the user's own activities. In the current Earth Observation (EO) scenario, the archiving centres generally offer only data, images and other "low level" products. Users' needs are only partially satisfied by a number of, usually small, value-adding companies applying time-consuming (mostly manual) and expensive processes that rely on the knowledge of experts to extract information from those data or images.
Development of an indexed integrated neuroradiology reports for teaching file creation
NASA Astrophysics Data System (ADS)
Tameem, Hussain Z.; Morioka, Craig; Bennett, David; El-Saden, Suzie; Sinha, Usha; Taira, Ricky; Bui, Alex; Kangarloo, Hooshang
2007-03-01
The decrease in reimbursement rates for radiology procedures has placed even more pressure on radiology departments to increase their clinical productivity. Clinical faculty have less time for teaching residents, but with the advent and prevalence of an electronic environment that includes PACS, RIS, and HIS, there is an opportunity to create electronic teaching files for fellows, residents, and medical students. These teaching files are created by experienced clinicians, who select the most appropriate radiographic images and the clinical information relevant to each patient. Important cases are selected based on the difficulty in determining the diagnosis or the manifestation of rare diseases. This manual process of teaching file creation is time-consuming and may not be practical under the pressure of increased demands on the radiologist. It is the goal of this research to automate the process of teaching file creation by manually selecting key images and automatically extracting key sections from clinical reports and laboratory results. The text report is then processed for indexing to two standard nomenclatures, UMLS and RadLex. Interesting teaching files can then be queried based on specific anatomy and findings within the clinical reports.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 24 Housing and Urban Development 5 2010-04-01 2010-04-01 false Manufactured home procedural and enforcement regulations and consumer manual requirements. 3280.3 Section 3280.3 Housing and Urban Development... consumer manual requirements. A manufacturer must comply with the requirements of this part 3280, part 3282...
Hunter, Gail; Burns, Laurie; Bone, Brian; Mintel, Thomas; Jimenez, Eduardo
2012-01-01
This paper summarizes the results of a longitudinal usability research study of a specially engineered sonic powered toothbrush with unique sensing and control technologies. The usability test was conducted with fourteen (14) consumers from the St. Louis, MO, USA area who use manual toothbrushes. The study consisted of consumers using the specially engineered sonic powered toothbrush with unique sensing and control technologies for three weeks. During the study, users participated in four toothbrush trials during weekly visits to the research facility. These trials were videotaped and were analyzed regarding brushing time, behavior, and technique. In addition, the users were required to use the toothbrush twice a day for their at-home brushing. The toothbrush had a positive impact on consumers' tooth brushing behavior. Users spent more time brushing their teeth with this toothbrush as compared to their manual toothbrush. In addition, users spent more time keeping the sonic toothbrush in the recommended angle during use. Finally, users perceived their teeth to be cleaner when using the specially engineered sonic powered toothbrush with unique sensing and control technologies. The specially engineered sonic powered toothbrush with unique sensing and control technologies left a positive impression on the users. The users perceived the toothbrush to clean their teeth better than a manual toothbrush.
Optimizing the 3D-reconstruction technique for serial block-face scanning electron microscopy.
Wernitznig, Stefan; Sele, Mariella; Urschler, Martin; Zankel, Armin; Pölt, Peter; Rind, F Claire; Leitinger, Gerd
2016-05-01
Elucidating the anatomy of neuronal circuits and localizing the synaptic connections between neurons can give us important insights into how neuronal circuits work. We are using serial block-face scanning electron microscopy (SBEM) to investigate the anatomy of a collision detection circuit including the Lobula Giant Movement Detector (LGMD) neuron in the locust, Locusta migratoria. For this, thousands of serial electron micrographs are produced that allow us to trace the neuronal branching pattern. The reconstruction of neurons was previously done manually by drawing the outlines of each cell in each image separately, an approach that was very time-consuming and troublesome. To make the process more efficient, new interactive software was developed. It uses the contrast between the neuron under investigation and its surroundings for semi-automatic segmentation. For segmentation, the user sets starting regions manually and the algorithm automatically selects a volume within the neuron until the edges corresponding to the neuronal outline are reached. Internally, the algorithm optimizes a 3D active contour segmentation model formulated as a cost function that takes the SEM image edges into account. This reduced the reconstruction time while staying close to the manual reference segmentation result. Our algorithm allows a fast segmentation process and, unlike previous methods, it does not require image training or extended computing capacity. Our semi-automatic segmentation algorithm led to a dramatic reduction in processing time for the 3D reconstruction of identified neurons. Copyright © 2016 Elsevier B.V. All rights reserved.
Computer-Assisted Automated Scoring of Polysomnograms Using the Somnolyzer System.
Punjabi, Naresh M; Shifa, Naima; Dorffner, Georg; Patil, Susheel; Pien, Grace; Aurora, Rashmi N
2015-10-01
Manual scoring of polysomnograms is a time-consuming and tedious process. To expedite the scoring of polysomnograms, several computerized algorithms for automated scoring have been developed. The overarching goal of this study was to determine the validity of the Somnolyzer system, an automated system for scoring polysomnograms. The analysis sample comprised 97 sleep studies. Each polysomnogram was manually scored by certified technologists from four sleep laboratories and concurrently subjected to automated scoring by the Somnolyzer system. Agreement between manual and automated scoring was examined. Sleep staging and scoring of disordered breathing events were conducted using the 2007 American Academy of Sleep Medicine criteria. The study was set in clinical sleep laboratories. A high degree of agreement was noted between manual and automated scoring of the apnea-hypopnea index (AHI). The average correlation between the manually scored AHI across the four clinical sites was 0.92 (95% confidence interval: 0.90-0.93). Similarly, the average correlation between the manual and Somnolyzer-scored AHI values was 0.93 (95% confidence interval: 0.91-0.96). Thus, the interscorer correlation between the manually scored results was no different from that derived from manual and automated scoring. Substantial concordance in the arousal index, total sleep time, and sleep efficiency between manual and automated scoring was also observed. In contrast, differences were noted between manually and automatically scored percentages of sleep stages N1, N2, and N3. Automated analysis of polysomnograms using the Somnolyzer system provides results that are comparable to manual scoring for commonly used metrics in sleep medicine. Although differences exist between manual versus automated scoring for specific sleep stages, the level of agreement between manual and automated scoring is not significantly different than that between any two human scorers. In light of the burden associated with manual scoring, automated scoring platforms provide a viable complement of tools in the diagnostic armamentarium of sleep medicine. © 2015 Associated Professional Sleep Societies, LLC.
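As an illustration of the type of agreement analysis summarized above (not the study's actual statistical code), a minimal Python sketch comparing manually and automatically scored AHI values, one value per sleep study, could look like this; the input values are placeholders.

import numpy as np
from scipy.stats import pearsonr

def ahi_agreement(manual_ahi, automated_ahi):
    """Correlation and mean bias between manually and automatically scored
    apnea-hypopnea index (AHI) values, one value per sleep study."""
    manual_ahi = np.asarray(manual_ahi, dtype=float)
    automated_ahi = np.asarray(automated_ahi, dtype=float)
    r, p = pearsonr(manual_ahi, automated_ahi)
    bias = float(np.mean(automated_ahi - manual_ahi))
    return {"pearson_r": r, "p_value": p, "mean_bias": bias}

print(ahi_agreement([5.1, 14.8, 30.2, 62.0], [5.6, 13.9, 31.0, 60.5]))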
NASA Astrophysics Data System (ADS)
Chiu, L.; Vongsaard, J.; El-Ghazawi, T.; Weinman, J.; Yang, R.; Kafatos, M.
Due to the poor temporal sampling by satellites, data gaps exist in satellite-derived time series of precipitation. This poses a challenge for assimilating rainfall data into forecast models. To yield a continuous time series, the classic image processing technique of digital image morphing has been used. However, the digital morphing technique was applied manually, which is time consuming. In order to avoid human intervention in the process, an automatic procedure for image morphing is needed for real-time operations. For this purpose, the Genetic Algorithm Based Image Registration Automatic Morphing (GRAM) model was developed and tested in this paper. Specifically, the automatic morphing technique was integrated with a Genetic Algorithm and the Feature Based Image Metamorphosis technique to fill in data gaps between satellite coverage. The technique was tested using NOWRAD data, which are generated from the network of NEXRAD radars. Time series of NOWRAD data from storm Floyd, which occurred over the US eastern region on September 16, 1999, at 00:00, 01:00, 02:00, 03:00, and 04:00 am were used. The GRAM technique was applied to data collected at 00:00 and 04:00 am. These images were also manually morphed. Images at 01:00, 02:00 and 03:00 am were interpolated using the GRAM and manual morphing approaches and compared with the original NOWRAD rain rates. The results show that the GRAM technique outperforms manual morphing: the correlation coefficients between the original images and those generated using manual morphing are 0.905, 0.900, and 0.905 for the images at 01:00, 02:00, and 03:00 am, while the corresponding correlation coefficients for the GRAM technique are 0.946, 0.911, and 0.913, respectively. Index terms: Remote Sensing, Image Registration, Hydrology, Genetic Algorithm, Morphing, NEXRAD
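For context, the simplest way to fill an hourly gap between two radar images is pixelwise linear interpolation in time; the GRAM model replaces this naive cross-dissolve with feature-based metamorphosis whose correspondences are optimized by a genetic algorithm. A minimal Python sketch of the baseline (illustrative only) is shown below.

import numpy as np

def linear_morph(img_t0, img_t1, n_between=3):
    """Pixelwise cross-dissolve between two rain-rate images, producing
    `n_between` evenly spaced intermediate frames (e.g. 01:00-03:00 between
    00:00 and 04:00 observations)."""
    img_t0 = np.asarray(img_t0, dtype=float)
    img_t1 = np.asarray(img_t1, dtype=float)
    frames = []
    for k in range(1, n_between + 1):
        t = k / (n_between + 1)
        frames.append((1.0 - t) * img_t0 + t * img_t1)
    return frames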
NASA Astrophysics Data System (ADS)
Yong, Yan Ling; Tan, Li Kuo; McLaughlin, Robert A.; Chee, Kok Han; Liew, Yih Miin
2017-12-01
Intravascular optical coherence tomography (OCT) is an optical imaging modality commonly used in the assessment of coronary artery diseases during percutaneous coronary intervention. Manual segmentation to assess luminal stenosis from OCT pullback scans is challenging and time consuming. We propose a linear-regression convolutional neural network to automatically perform vessel lumen segmentation, parameterized in terms of radial distances from the catheter centroid in polar space. Benchmarked against gold-standard manual segmentation, our proposed algorithm achieves average locational accuracy of the vessel wall of 22 microns, and 0.985 and 0.970 in Dice coefficient and Jaccard similarity index, respectively. The average absolute error of luminal area estimation is 1.38%. The processing rate is 40.6 ms per image, suggesting the potential to be incorporated into a clinical workflow and to provide quantitative assessment of vessel lumen in an intraoperative time frame.
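The abstract does not give the network architecture, but the general idea of regressing one lumen radius per angular bin from a polar-transformed OCT frame can be sketched as follows; this is a hypothetical PyTorch layout with arbitrary layer sizes, not the authors' model.

import torch
import torch.nn as nn

class PolarLumenRegressor(nn.Module):
    """Predict one radial lumen distance per angular bin of a polar OCT frame."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.pool = nn.AdaptiveAvgPool2d((None, 1))   # keep angular bins, collapse radial axis
        self.head = nn.Conv2d(32, 1, 1)

    def forward(self, x):                 # x: (B, 1, angles, radial_samples)
        f = self.pool(self.features(x))   # (B, 32, angles, 1)
        r = self.head(f)                  # (B, 1, angles, 1)
        return r.squeeze(-1).squeeze(1)   # (B, angles) predicted radius per angle

model = PolarLumenRegressor()
frame = torch.randn(2, 1, 360, 256)       # two synthetic polar frames
target = torch.rand(2, 360) * 256         # "manual" lumen radii in pixels
loss = nn.MSELoss()(model(frame), target)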
SolTrack: an automatic video processing software for in situ interface tracking.
Griesser, S; Pierer, R; Reid, M; Dippenaar, R
2012-10-01
High-Resolution in situ observation of solidification experiments has become a powerful technique to improve the fundamental understanding of solidification processes of metals and alloys. In the present study, high-temperature laser-scanning confocal microscopy (HTLSCM) was utilized to observe and capture in situ solidification and phase transformations of alloys for subsequent post processing and analysis. Until now, this analysis has been very time consuming as frame-by-frame manual evaluation of propagating interfaces was used to determine the interface velocities. SolTrack has been developed using the commercial software package MATLAB and is designed to automatically detect, locate and track propagating interfaces during solidification and phase transformations as well as to calculate interfacial velocities. Different solidification phenomena have been recorded to demonstrate a wider spectrum of applications of this software. A validation, through comparison with manual evaluation, is included where the accuracy is shown to be very high. © 2012 The Authors Journal of Microscopy © 2012 Royal Microscopical Society.
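As a rough illustration of automated interface tracking (not the SolTrack MATLAB implementation), one can locate the interface in each frame as the row of strongest intensity gradient and differentiate its position over time; the scale and frame-interval parameters below are placeholders.

import numpy as np

def track_interface(frames, um_per_px=1.0, dt_s=1.0):
    """frames: (T, H, W) grayscale stack. The interface in each frame is taken
    as the row of strongest vertical intensity gradient, averaged over columns;
    the velocity is the time derivative of that position."""
    positions = []
    for frame in np.asarray(frames, dtype=float):
        grad = np.abs(np.diff(frame, axis=0))     # (H-1, W) vertical gradient
        rows = grad.argmax(axis=0)                # interface row in each column
        positions.append(rows.mean())
    positions = np.asarray(positions) * um_per_px
    velocity = np.gradient(positions, dt_s)       # interfacial velocity, um/s
    return positions, velocity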
Roelofs, Erik; Persoon, Lucas; Nijsten, Sebastiaan; Wiessler, Wolfgang; Dekker, André; Lambin, Philippe
2016-01-01
Introduction: Collecting trial data in a medical environment is at present mostly performed manually and is therefore time-consuming, prone to errors and often incomplete given the complex data considered. Faster and more accurate methods are needed to improve data quality and to shorten data collection times where information is often scattered over multiple data sources. The purpose of this study is to investigate the possible benefit of modern data warehouse technology in the radiation oncology field. Material and methods: In this study, a Computer Aided Theragnostics (CAT) data warehouse combined with automated tools for feature extraction was benchmarked against the regular manual data-collection processes. Two sets of clinical parameters were compiled for non-small cell lung cancer (NSCLC) and rectal cancer, using 27 patients per disease. Data collection times and inconsistencies were compared between the manual and the automated extraction method. Results: The average time per case to collect the NSCLC data manually was 10.4 ± 2.1 min and 4.3 ± 1.1 min when using the automated method (p < 0.001). For rectal cancer, these times were 13.5 ± 4.1 and 6.8 ± 2.4 min, respectively (p < 0.001). In 3.2% of the data collected for NSCLC and 5.3% for rectal cancer, there was a discrepancy between the manual and automated method. Conclusions: Aggregating multiple data sources in a data warehouse combined with tools for extraction of relevant parameters is beneficial for data collection times and offers the ability to improve data quality. The initial investments in digitizing the data are expected to be compensated due to the flexibility of the data analysis. Furthermore, successive investigations can easily select trial candidates and extract new parameters from the existing databases. PMID:23394741
Automated Essay Grading using Machine Learning Algorithm
NASA Astrophysics Data System (ADS)
Ramalingam, V. V.; Pandian, A.; Chetry, Prateek; Nigam, Himanshu
2018-04-01
Essays are paramount for assessing academic excellence, along with linking different ideas and the ability to recall, but they are notably time consuming when assessed manually. Manual grading takes a significant amount of the evaluator's time and is hence an expensive process. Automated grading, if proven effective, will not only reduce the time for assessment but, by comparing it with human scores, will also make the scores realistic. The project aims to develop an automated essay assessment system by using machine learning techniques to classify a corpus of textual entities into a small number of discrete categories corresponding to possible grades. Linear regression will be utilized for training the model, along with various other classification and clustering techniques. We intend to train classifiers on the training set, run them over the downloaded dataset, and then measure performance on our dataset by comparing the obtained values with the dataset values. We have implemented our model using Java.
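A minimal sketch of the described approach, using TF-IDF features and linear regression in scikit-learn rather than the authors' Java implementation, might look like this; the corpus variables are placeholders.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

def train_grader(essays, scores):
    """essays: list of raw essay strings; scores: list of human grades.
    Returns a fitted pipeline that maps new essays to predicted grades."""
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=2, stop_words="english"),
        LinearRegression(),
    )
    model.fit(essays, scores)
    return model

# grader = train_grader(training_essays, training_scores)   # hypothetical corpus
# predicted = grader.predict(new_essays)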
Consumer Decisions. Student Manual.
ERIC Educational Resources Information Center
Florida State Dept. of Education, Tallahassee. Div. of Vocational Education.
This student manual covers five areas relating to consumer decisions. Titles of the five sections are Consumer Law, Consumer Decision Making, Buying a Car, Convenience Foods, and Books for Preschool Children. Each section may contain some or all of these materials: list of objectives, informative sections, questions on the information and answers,…
Validating the Use of Deep Learning Neural Networks for Correction of Large Hydrometric Datasets
NASA Astrophysics Data System (ADS)
Frazier, N.; Ogden, F. L.; Regina, J. A.; Cheng, Y.
2017-12-01
Collection and validation of Earth systems data can be time consuming and labor intensive. In particular, high resolution hydrometric data, including rainfall and streamflow measurements, are difficult to obtain due to a multitude of complicating factors. Measurement equipment is subject to clogs, environmental disturbances, and sensor drift. Manual intervention is typically required to identify, correct, and validate these data. Weirs can become clogged and the pressure transducer may float or drift over time. We typically employ a graphical tool called Time Series Editor to manually remove clogs and sensor drift from the data. However, this process is highly subjective and requires hydrological expertise. Two different people may produce two different data sets. To use these data for scientific discovery and model validation, a more consistent method is needed to process this field data. Deep learning neural networks have proved to be excellent mechanisms for recognizing patterns in data. We explore the use of Recurrent Neural Networks (RNN) to capture the patterns in the data over time using various gating mechanisms (LSTM and GRU), network architectures, and hyper-parameters to build an automated data correction model. We also explore the amount of manually corrected training data required to train the network to reasonable accuracy. The benefits of this approach are that the time to process a data set is significantly reduced, and the results are 100% reproducible after training is complete. Additionally, we train the RNN and calibrate a physically-based hydrological model against the same portion of data. Both the RNN and the model are applied to the remaining data using a split-sample methodology. Performance of the machine learning model is evaluated for plausibility by comparing it with the output of the hydrological model, and this analysis identifies potential periods where additional investigation is warranted.
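A bare-bones sketch of the sequence-to-sequence idea described above, using a GRU in PyTorch (the study compared LSTM and GRU gating; the hyper-parameters and synthetic data here are arbitrary), could look like this.

import torch
import torch.nn as nn

class StageCorrector(nn.Module):
    """Maps a raw sensor time series to a corrected series (drift/clog removal),
    trained against manually corrected records."""
    def __init__(self, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(input_size=1, hidden_size=hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):          # x: (batch, time, 1) raw readings
        h, _ = self.rnn(x)
        return self.head(h)        # (batch, time, 1) corrected readings

model = StageCorrector()
raw = torch.randn(4, 500, 1)                  # four synthetic 500-step series
corrected = model(raw)
loss = nn.MSELoss()(corrected, raw)           # in practice: target = manually corrected series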
Image based automatic water meter reader
NASA Astrophysics Data System (ADS)
Jawas, N.; Indrianto
2018-01-01
A water meter is used as a tool to calculate water consumption. The device works by utilizing water flow and shows the calculation result on a mechanical digit counter. In everyday use, an operator manually checks the digit counter periodically and logs the number shown by the water meter to track water consumption. This manual operation is time consuming and prone to human error. Therefore, in this paper we propose an automatic water meter digit reader based on digital images. The digit sequence is detected by utilizing contour information from the water meter front panel. Then an OCR method is used to recognize each digit character. The digit sequence detection is an important part of the overall process, as it determines the success of the whole system. The results are promising, especially for sequence detection.
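A simplified sketch of the digit-region detection step, assuming OpenCV 4.x and illustrative size and aspect-ratio thresholds (the OCR stage is omitted), is shown below.

import cv2

def find_digit_boxes(panel_bgr):
    """Locate candidate digit regions on the meter's front panel and return
    their bounding boxes sorted left to right; each crop would then be passed
    to an OCR step."""
    gray = cv2.cvtColor(panel_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)
    _, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if h > 10 and 0.3 < w / float(h) < 0.9:   # rough digit-like shape
            boxes.append((x, y, w, h))
    return sorted(boxes, key=lambda b: b[0])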
Validation of Interdisciplinary Cooperative Education Manual.
ERIC Educational Resources Information Center
Stone, Sheila D.
A field test examined the validity of the "Interdisciplinary Cooperative Education Curriculum Manual." (Among those topics covered in the manual are the following: vocational student organizations, leadership, civic responsibility, health and safety, human relations, communications, resource management, consumer skills, consumer law,…
Statistical evaluation of manual segmentation of a diffuse low-grade glioma MRI dataset.
Ben Abdallah, Meriem; Blonski, Marie; Wantz-Mezieres, Sophie; Gaudeau, Yann; Taillandier, Luc; Moureaux, Jean-Marie
2016-08-01
Software-based manual segmentation is critical to the supervision of diffuse low-grade glioma patients and to the choice of optimal treatment. However, because manual segmentation is time-consuming, it is difficult to include it in the clinical routine. An alternative to circumvent the time cost of manual segmentation could be to share the task among different practitioners, provided it can be reproduced. The goal of our work is to assess the reproducibility of manual segmentation of diffuse low-grade gliomas on MRI scans, with regard to the practitioner, their experience and their field of expertise. A panel of 13 experts manually segmented 12 diffuse low-grade glioma clinical MRI datasets using the OSIRIX software. A statistical analysis gave promising results, as the practitioner factor, the medical specialty and the years of experience seem to have no significant impact on the average values of the tumor volume variable.
NASA Astrophysics Data System (ADS)
Agrawal, Ritu; Sharma, Manisha; Singh, Bikesh Kumar
2018-04-01
Manual segmentation and analysis of lesions in medical images is time consuming and subject to human error. Automated segmentation has thus gained significant attention in recent years. This article presents a hybrid approach for brain lesion segmentation in different imaging modalities by combining a median filter, k-means clustering, Sobel edge detection and morphological operations. The median filter is an essential pre-processing step used to remove impulsive noise from the acquired brain images, and is followed by k-means segmentation, Sobel edge detection and morphological processing. The performance of the proposed automated system is tested on standard datasets using performance measures such as segmentation accuracy and execution time. The proposed method achieves a high accuracy of 94% when compared with manual delineation performed by an expert radiologist. Furthermore, statistical significance testing between lesions segmented by the automated approach and by expert delineation, using ANOVA and the correlation coefficient, achieved high values of 0.986 and 1, respectively. The experimental results obtained are discussed in light of some recently reported studies.
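A condensed sketch of the hybrid pipeline (median filter, k-means clustering, Sobel edges and morphology) on a single 8-bit slice, with illustrative parameter choices rather than the authors' settings, is given below.

import numpy as np
import cv2

def segment_lesion(slice_u8, k=3):
    """slice_u8: 8-bit grayscale brain slice. Median filtering removes impulsive
    noise, k-means separates intensity classes (the brightest cluster is assumed
    to be the lesion), morphology cleans the mask, and a Sobel edge map is
    returned for boundary refinement as described in the pipeline."""
    den = cv2.medianBlur(slice_u8, 5)
    data = den.reshape(-1, 1).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(data, k, None, criteria, 5, cv2.KMEANS_PP_CENTERS)
    lesion_cluster = int(np.argmax(centers))
    mask = (labels.reshape(den.shape) == lesion_cluster).astype(np.uint8) * 255
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)     # remove speckles
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)    # fill small holes
    edges = np.hypot(cv2.Sobel(den, cv2.CV_64F, 1, 0), cv2.Sobel(den, cv2.CV_64F, 0, 1))
    return mask, edges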
Strategies for risk assessment and control in welding: challenges for developing countries.
Hewitt, P J
2001-06-01
Metal arc welding ranges from primitive (manual) to increasingly complex automated welding processes. Welding occupies 1% of the labour force in some industrialised countries, and growing knowledge of the health risks, together with the need for improved assessment strategies and controls, has been identified by the International Institute of Welding (IIW), ILO, WHO and other authoritative bodies. Challenges for developing countries need to be addressed. For small-scale production and repair work, predominantly manual metal arc welding on mild steel, the focus in developing economies has correctly been on control of obvious physical hazards and acute health effects. Development introduces more sophisticated processes and hazards. Workpieces of stainless steel and consumables with chromium, nickel and manganese constituents are used with increasingly complex semi-manual or automated systems involving a variety of fluxes or gases. Uncritical adoption of new welding technologies by developing countries potentiates future health problems. Control should be integral at the design stage, otherwise substantive detriments and later costs can ensue. Developing countries need particular guidance on selection of optimised welding consumables and processes to minimise such detriments. The roles of the IIW and the MFRU are described. Applications of occupational hygiene principles of prevention and control of welding fume at source by process modification are presented.
Context-Sensitive Spelling Correction of Consumer-Generated Content on Health Care.
Zhou, Xiaofang; Zheng, An; Yin, Jiaheng; Chen, Rudan; Zhao, Xianyang; Xu, Wei; Cheng, Wenqing; Xia, Tian; Lin, Simon
2015-07-31
Consumer-generated content, such as postings on social media websites, can serve as an ideal source of information for studying health care from a consumer's perspective. However, consumer-generated content on health care topics often contains spelling errors, which, if not corrected, will be obstacles for downstream computer-based text analysis. In this study, we proposed a framework with a spelling correction system designed for consumer-generated content and a novel ontology-based evaluation system which was used to efficiently assess the correction quality. Additionally, we emphasized the importance of context sensitivity in the correction process, and demonstrated why correction methods designed for electronic medical records (EMRs) failed to perform well with consumer-generated content. First, we developed our spelling correction system based on Google Spell Checker. The system processed postings acquired from MedHelp, a biomedical bulletin board system (BBS), and saved misspelled words (eg, sertaline) and corresponding corrected words (eg, sertraline) into two separate sets. Second, to reduce the number of words needing manual examination in the evaluation process, we respectively matched the words in the two sets with terms in two biomedical ontologies: RxNorm and Systematized Nomenclature of Medicine -- Clinical Terms (SNOMED CT). The ratio of words which could be matched and appropriately corrected was used to evaluate the correction system's overall performance. Third, we categorized the misspelled words according to the types of spelling errors. Finally, we calculated the ratio of abbreviations in the postings, which remarkably differed between EMRs and consumer-generated content and could largely influence the overall performance of spelling checkers. An uncorrected word together with the corresponding corrected word was called a spelling pair, and the two words in the spelling pair were its members. In our study, there were 271 spelling pairs detected, among which 58 (21.4%) pairs had one or two members matched in the selected ontologies. The ratio of appropriate correction in the 271 overall spelling errors was 85.2% (231/271). The corresponding ratio in the 58 spelling pairs was 86% (50/58), close to the overall ratio. We also found that linguistic errors accounted for 31.4% (85/271) of all errors detected, and only 0.98% (210/21,358) of words in the postings were abbreviations, which was much lower than the ratio in the EMRs (33.6%). We conclude that our system can accurately correct spelling errors in consumer-generated content. Context sensitivity is indispensable in the correction process. Additionally, it can be confirmed that consumer-generated content differs from EMRs in that consumers seldom use abbreviations. Also, the evaluation method, taking advantage of biomedical ontology, can effectively estimate the accuracy of the correction system and reduce manual examination time.
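A toy sketch of the ontology-based evaluation step described above (the ontology term set and gold corrections are placeholders; real use would load RxNorm and SNOMED CT term lists):

def evaluate_pairs(spelling_pairs, ontology_terms, gold_corrections):
    """spelling_pairs: list of (misspelled, corrected) tuples; ontology_terms:
    set of lower-cased ontology terms; gold_corrections: dict of expert
    corrections. Returns the matched ratio and the appropriate-correction ratio."""
    matched = [(m, c) for m, c in spelling_pairs
               if m.lower() in ontology_terms or c.lower() in ontology_terms]
    appropriate = sum(1 for m, c in spelling_pairs if gold_corrections.get(m) == c)
    n = len(spelling_pairs)
    return {"matched_ratio": len(matched) / n, "appropriate_ratio": appropriate / n}

print(evaluate_pairs([("sertaline", "sertraline")],
                     {"sertraline"},
                     {"sertaline": "sertraline"}))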
Automated identification of cone photoreceptors in adaptive optics retinal images.
Li, Kaccie Y; Roorda, Austin
2007-05-01
In making noninvasive measurements of the human cone mosaic, the task of labeling each individual cone is unavoidable. Manual labeling is a time-consuming process, setting the motivation for the development of an automated method. An automated algorithm for labeling cones in adaptive optics (AO) retinal images is implemented and tested on real data. The optical fiber properties of cones aided the design of the algorithm. Out of 2153 manually labeled cones from six different images, the automated method correctly identified 94.1% of them. The agreement between the automated and the manual labeling methods varied from 92.7% to 96.2% across the six images. Results between the two methods disagreed for 1.2% to 9.1% of the cones. Voronoi analysis of large montages of AO retinal images confirmed the general hexagonal-packing structure of retinal cones as well as the general cone density variability across portions of the retina. The consistency of our measurements demonstrates the reliability and practicality of having an automated solution to this problem.
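Conceptually, cones appear as bright, quasi-regularly spaced spots, so a minimal automated labeling baseline (not the published algorithm, which also exploits the optical fiber properties of cones) is local-maximum detection after smoothing; the spacing parameter below is illustrative.

import numpy as np
from skimage.feature import peak_local_max
from skimage.filters import gaussian

def label_cones(ao_image, min_spacing_px=4):
    """Detect cone centres as local intensity maxima after light smoothing;
    `min_spacing_px` approximates the expected centre-to-centre cone distance."""
    smoothed = gaussian(np.asarray(ao_image, dtype=float), sigma=1)
    return peak_local_max(smoothed, min_distance=min_spacing_px)   # (N, 2) row/col coordinates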
Chriskos, Panteleimon; Frantzidis, Christos A; Gkivogkli, Polyxeni T; Bamidis, Panagiotis D; Kourtidou-Papadeli, Chrysoula
2018-01-01
Sleep staging, the process of assigning labels to epochs of sleep according to the stage to which they belong, is an arduous, time-consuming and error-prone process, as the initial recordings are quite often polluted by noise from different sources. To properly analyze such data and extract clinical knowledge, noise components must be removed or alleviated. In this paper a pre-processing and subsequent sleep staging pipeline for the sleep analysis of electroencephalographic signals is described. Two novel methods of functional connectivity estimation (Synchronization Likelihood/SL and Relative Wavelet Entropy/RWE) are comparatively investigated for automatic sleep staging through manually pre-processed electroencephalographic recordings. A multi-step process that renders signals suitable for further analysis is initially described. Then, two methods that rely on extracting synchronization features from electroencephalographic recordings to achieve computerized sleep staging are proposed, based on bivariate features which provide a functional overview of the brain network, in contrast to most proposed methods that rely on extracting univariate time and frequency features. Annotation of sleep epochs is achieved through the presented feature extraction methods by training classifiers, which are in turn able to accurately classify new epochs. Analysis of data from sleep experiments in a randomized, controlled bed-rest study, organized by the European Space Agency and conducted in the "ENVIHAB" facility of the Institute of Aerospace Medicine at the German Aerospace Center (DLR) in Cologne, Germany, attains high accuracy rates, over 90%, based on ground truth that resulted from manual sleep staging by two experienced sleep experts. Therefore, it can be concluded that the above feature extraction methods are suitable for semi-automatic sleep staging.
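One common formulation of relative wavelet entropy between two channels, sketched here with PyWavelets under assumed parameters (the wavelet family, decomposition level and the use of a Kullback-Leibler-style divergence are illustrative, not necessarily the authors' exact definition):

import numpy as np
import pywt

def relative_wavelet_entropy(x, y, wavelet="db4", level=5, eps=1e-12):
    """Compare the relative wavelet-energy distributions of two EEG channels
    (decomposition levels act as frequency bands) with a KL-style divergence."""
    def rel_energy(signal):
        coeffs = pywt.wavedec(np.asarray(signal, dtype=float), wavelet, level=level)
        energy = np.array([np.sum(c ** 2) for c in coeffs]) + eps
        return energy / energy.sum()
    p, q = rel_energy(x), rel_energy(y)
    return float(np.sum(p * np.log(p / q)))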
Concrete Crack Identification Using a UAV Incorporating Hybrid Image Processing.
Kim, Hyunjun; Lee, Junhwa; Ahn, Eunjong; Cho, Soojin; Shin, Myoungsu; Sim, Sung-Han
2017-09-07
Crack assessment is an essential process in the maintenance of concrete structures. In general, concrete cracks are inspected by manual visual observation of the surface, which is intrinsically subjective as it depends on the experience of inspectors. Further, it is time-consuming, expensive, and often unsafe when inaccessible structural members are to be assessed. Unmanned aerial vehicle (UAV) technologies combined with digital image processing have recently been applied to crack assessment to overcome the drawbacks of manual visual inspection. However, identification of crack information in terms of width and length has not been fully explored in the UAV-based applications, because of the absence of distance measurement and tailored image processing. This paper presents a crack identification strategy that combines hybrid image processing with UAV technology. Equipped with a camera, an ultrasonic displacement sensor, and a WiFi module, the system provides the image of cracks and the associated working distance from a target structure on demand. The obtained information is subsequently processed by hybrid image binarization to estimate the crack width accurately while minimizing the loss of the crack length information. The proposed system has been shown to successfully measure cracks thicker than 0.1 mm, with a maximum length estimation error of 7.3%.
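The conversion from pixel-based crack width to physical width hinges on the measured working distance; under a simple pinhole-camera assumption (the paper's actual calibration may differ, and all parameter values below are hypothetical), the calculation is:

def crack_width_mm(width_px, working_distance_mm, focal_length_mm, pixel_pitch_mm):
    """Ground sampling distance = working distance * pixel pitch / focal length."""
    gsd_mm_per_px = working_distance_mm * pixel_pitch_mm / focal_length_mm
    return width_px * gsd_mm_per_px

# e.g. a 3-pixel-wide crack at 0.5 m, 16 mm lens, 1.55 um pixels -> about 0.15 mm
print(crack_width_mm(3, 500.0, 16.0, 0.00155))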
Wei, L; Chen, H; Zhou, Y S; Sun, Y C; Pan, S X
2017-02-18
To compare the technician fabrication time and clinical working time of custom trays fabricated using two different methods, three-dimensional printed custom trays and conventional custom trays, and to demonstrate the feasibility of computer-aided design/computer-aided manufacturing (CAD/CAM) custom trays in clinical use from the perspective of clinical time cost. Twenty edentulous patients were recruited into this prospective, single-blind, randomized self-controlled clinical trial. Two custom trays were fabricated for each participant: one was fabricated using the functional suitable denture (FSD) system through a CAD/CAM process, and the other was manually fabricated using conventional methods. Final impressions were then taken using both custom trays, and these impressions were used to fabricate complete dentures. The technician production time of the custom trays and the clinical working time of taking the final impression were recorded. The average times spent on fabricating the three-dimensional printed custom trays using the FSD system and on fabricating the conventional custom trays manually were (28.6±2.9) min and (31.1±5.7) min, respectively. The average times spent on making the final impression with the three-dimensional printed custom trays using the FSD system and with the conventionally fabricated custom trays were (23.4±11.5) min and (25.4±13.0) min, respectively. There was a significant difference in the technician fabrication time and the clinical working time between the three-dimensional printed custom trays using the FSD system and the conventionally fabricated custom trays (P<0.05). The average times spent on fabricating three-dimensional printed custom trays using the FSD system and on making the final impression with these trays are less than those for the conventional custom trays fabricated manually, which shows that the FSD three-dimensional printed custom tray is less time-consuming in both the clinical and the laboratory process than the conventional custom tray. In addition, when custom trays are manufactured by three-dimensional printing, there is no need to pour a preliminary cast after taking the primary impression; therefore, impression material and model material can be saved. For complete denture restoration, manufacturing custom trays using the FSD system is worth popularizing.
Marinova, Mariela; Artusi, Carlo; Brugnolo, Laura; Antonelli, Giorgia; Zaninotto, Martina; Plebani, Mario
2013-11-01
Although, due to its high specificity and sensitivity, LC-MS/MS is an efficient technique for the routine determination of immunosuppressants in whole blood, it involves time-consuming manual sample preparation. The aim of the present study was therefore to develop an automated sample-preparation protocol for the quantification of sirolimus, everolimus and tacrolimus by LC-MS/MS using a liquid handling platform. Six-level commercially available blood calibrators were used for assay development, while four quality control materials and three blood samples from patients under immunosuppressant treatment were employed for the evaluation of imprecision. Barcode reading, sample re-suspension, transfer of whole blood samples into 96-well plates, addition of internal standard solution, mixing, and protein precipitation were performed with a liquid handling platform. After plate filtration, the deproteinised supernatants were submitted for SPE on-line. The only manual steps in the entire process were de-capping of the tubes, and transfer of the well plates to the HPLC autosampler. Calibration curves were linear throughout the selected ranges. The imprecision and accuracy data for all analytes were highly satisfactory. The agreement between the results obtained with manual and those obtained with automated sample preparation was optimal (n=390, r=0.96). In daily routine (100 patient samples) the typical overall total turnaround time was less than 6h. Our findings indicate that the proposed analytical system is suitable for routine analysis, since it is straightforward and precise. Furthermore, it incurs less manual workload and less risk of error in the quantification of whole blood immunosuppressant concentrations than conventional methods. © 2013.
A quality score for coronary artery tree extraction results
NASA Astrophysics Data System (ADS)
Cao, Qing; Broersen, Alexander; Kitslaar, Pieter H.; Lelieveldt, Boudewijn P. F.; Dijkstra, Jouke
2018-02-01
Coronary artery trees (CATs) are often extracted to aid the fully automatic analysis of coronary artery disease on coronary computed tomography angiography (CCTA) images. Automatically extracted CATs often miss some arteries or include wrong extractions which require manual corrections before performing successive steps. For analyzing a large number of datasets, a manual quality check of the extraction results is time-consuming. This paper presents a method to automatically calculate quality scores for extracted CATs in terms of clinical significance of the extracted arteries and the completeness of the extracted CAT. Both right dominant (RD) and left dominant (LD) anatomical statistical models are generated and exploited in developing the quality score. To automatically determine which model should be used, a dominance type detection method is also designed. Experiments are performed on the automatically extracted and manually refined CATs from 42 datasets to evaluate the proposed quality score. In 39 (92.9%) cases, the proposed method is able to measure the quality of the manually refined CATs with higher scores than the automatically extracted CATs. In a 100-point scale system, the average scores for automatically and manually refined CATs are 82.0 (+/-15.8) and 88.9 (+/-5.4) respectively. The proposed quality score will assist the automatic processing of the CAT extractions for large cohorts which contain both RD and LD cases. To the best of our knowledge, this is the first time that a general quality score for an extracted CAT is presented.
Pakhomov, Serguei Vs; Shah, Nilay D; Hanson, Penny; Balasubramaniam, Saranya C; Smith, Steven A
2010-01-01
Low-dose aspirin reduces cardiovascular risk; however, monitoring over-the-counter medication use relies on the time-consuming and costly manual review of medical records. Our objective is to validate natural language processing (NLP) of the electronic medical record (EMR) for extracting medication exposure and contraindication information. The text of EMRs for 499 patients with type 2 diabetes was searched using NLP for evidence of aspirin use and its contraindications. The results were compared to a standardised manual records review. Of the 499 patients, 351 (70%) were using aspirin and 148 (30%) were not, according to manual review. NLP correctly identified 346 of the 351 aspirin-positive and 134 of the 148 aspirin-negative patients, indicating a sensitivity of 99% (95% CI 97-100) and specificity of 91% (95% CI 88-97). Of the 148 aspirin-negative patients, 66 (45%) had contraindications and 82 (55%) did not, according to manual review. NLP search for contraindications correctly identified 61 of the 66 patients with contraindications and 58 of the 82 patients without, yielding a sensitivity of 92% (95% CI 84-97) and a specificity of 71% (95% CI 60-80). NLP of the EMR is accurate in ascertaining documented aspirin use and could potentially be used for epidemiological research as a source of cardiovascular risk factor information.
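For illustration only, a crude keyword-plus-negation heuristic and the accompanying sensitivity/specificity computation can be sketched as follows; this is not the validated NLP system described in the abstract, and the regular expressions are assumptions.

import re

ASPIRIN = re.compile(r"\b(aspirin|asa\b|acetylsalicylic acid)", re.I)
NEGATION = re.compile(r"\b(no|not|denies|without|discontinued|stopped)\b", re.I)

def aspirin_documented(note, window=40):
    """Flag a note as aspirin-positive if a mention occurs without a negation
    cue in the preceding `window` characters."""
    for m in ASPIRIN.finditer(note):
        if not NEGATION.search(note[max(0, m.start() - window):m.start()]):
            return True
    return False

def sensitivity_specificity(predicted, truth):
    tp = sum(p and t for p, t in zip(predicted, truth))
    tn = sum((not p) and (not t) for p, t in zip(predicted, truth))
    fp = sum(p and (not t) for p, t in zip(predicted, truth))
    fn = sum((not p) and t for p, t in zip(predicted, truth))
    return tp / max(tp + fn, 1), tn / max(tn + fp, 1)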
Use of automated rendezvous trajectory planning to improve spacecraft operations efficiency
NASA Technical Reports Server (NTRS)
Mulder, Tom A.
1991-01-01
The current planning process for space shuttle rendezvous with a second Earth-orbiting vehicle is time consuming and costly. It is a labor-intensive, manual process performed pre-mission with the aid of specialized maneuver processing tools. Real-time execution of a rendezvous plan must closely follow a predicted trajectory, and targeted solutions leading up to the terminal phase are computed on the ground. Despite over 25 years of Gemini, Apollo, Skylab, and shuttle vehicle-to-vehicle rendezvous missions flown to date, rendezvous in Earth orbit still requires careful monitoring and cannot be taken for granted. For example, a significant trajectory offset was experienced during terminal phase rendezvous of the STS-32 Long Duration Exposure Facility retrieval mission. Several improvements can be introduced to the present rendezvous planning process to reduce costs, produce more fuel-efficient profiles, and increase the probability of mission success.
Consumer Education. Information Supplements for Physically Disabled Students. Teacher's Guide.
ERIC Educational Resources Information Center
Tipsord, Barbara; And Others
This manual contains supplementary information for use by instructors who teach consumer education and resources management to physically handicapped students in regular classes. It is subdivided according to typical consumer education topics and handicapping conditions. Addressed in the individual sections of the manual are the following topics:…
The Assertive Consumer: Credit and Warranties. Student's Manual.
ERIC Educational Resources Information Center
Clark, Barbara; And Others
This student manual contains materials to be used in a workshop designed to train members of action organizations (consumer, community, educational) in techniques such as role playing, modeling, and developing strong communication skills for assertively securing legal consumer rights in the areas of credits and warranties and to prepare the…
NASA Astrophysics Data System (ADS)
Ansari, Muhammad Ahsan; Zai, Sammer; Moon, Young Shik
2017-01-01
Manual analysis of the bulk data generated by computed tomography angiography (CTA) is time consuming, and interpretation of such data requires the prior knowledge and expertise of the radiologist. Therefore, an automatic method that can isolate the coronary arteries from a given CTA dataset is required. We present an automatic yet effective segmentation method to delineate the coronary arteries from a three-dimensional CTA data cloud. Instead of a region growing process, which is usually time consuming and prone to leakages, the method is based on optimal thresholding, which is applied globally to the Hessian-based vesselness measure but in a localized way (slice by slice), to track the coronaries carefully to their distal ends. Moreover, to make the process automatic, we detect the aorta using the Hough transform technique. The proposed segmentation method is independent of the starting point used to initiate its process and is fast in the sense that the coronary arteries are obtained without any preprocessing or postprocessing steps. We used 12 real clinical datasets to show the efficiency and accuracy of the presented method. Experimental results reveal that the proposed method achieves 95% average accuracy.
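A slice-by-slice sketch of combining a Hessian-based vesselness filter with per-slice thresholding (using Frangi vesselness and Otsu thresholds as illustrative stand-ins; scikit-image 0.15+ assumed, and the Hough-transform aorta detection is omitted):

import numpy as np
from skimage.filters import frangi, threshold_otsu

def coronary_mask(cta_volume):
    """Compute Frangi vesselness on each axial slice and threshold it per slice,
    echoing the localized (slice-by-slice) thresholding idea; the sigma range is
    illustrative."""
    volume = np.asarray(cta_volume, dtype=float)
    mask = np.zeros(volume.shape, dtype=bool)
    for z in range(volume.shape[0]):
        vesselness = frangi(volume[z], sigmas=range(1, 6))
        if vesselness.max() > 0:
            mask[z] = vesselness > threshold_otsu(vesselness)
    return mask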
Plexiform neurofibroma tissue classification
NASA Astrophysics Data System (ADS)
Weizman, L.; Hoch, L.; Ben Sira, L.; Joskowicz, L.; Pratt, L.; Constantini, S.; Ben Bashat, D.
2011-03-01
Plexiform Neurofibroma (PN) is a major complication of NeuroFibromatosis-1 (NF1), a common genetic disease that involves the nervous system. PNs are peripheral nerve sheath tumors extending along the length of the nerve in various parts of the body. Treatment decision is based on tumor volume assessment using MRI, which is currently time consuming and error prone, with limited semi-automatic segmentation support. We present in this paper a new method for the segmentation and tumor mass quantification of PN from STIR MRI scans. The method starts with a user-based delineation of the tumor area in a single slice and automatically detects the PN lesions in the entire image based on the tumor connectivity. Experimental results on seven datasets yield a mean volume overlap difference of 25% as compared to manual segmentation by an expert radiologist, with a mean computation and interaction time of 12 minutes vs. over an hour for manual annotation. Since the user interaction in the segmentation process is minimal, our method has the potential to successfully become part of the clinical workflow.
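A minimal sketch of connectivity-based propagation from a single-slice user delineation, with an illustrative intensity tolerance rather than the published criterion:

import numpy as np
from scipy import ndimage

def propagate_pn(stir_volume, roi_mask, slice_index, tol=1.5):
    """Derive an intensity range from the user-delineated ROI on one slice,
    threshold the whole volume, and keep only the 3D-connected components that
    touch the delineated region."""
    volume = np.asarray(stir_volume, dtype=float)
    vals = volume[slice_index][roi_mask]
    lo, hi = vals.mean() - tol * vals.std(), vals.mean() + tol * vals.std()
    labels, _ = ndimage.label((volume >= lo) & (volume <= hi))
    seeds = np.unique(labels[slice_index][roi_mask])
    seeds = seeds[seeds != 0]
    return np.isin(labels, seeds)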
VIPER: a web application for rapid expert review of variant calls.
Wöste, Marius; Dugas, Martin
2018-06-01
With the rapid development in next-generation sequencing, cost and time requirements for genomic sequencing are decreasing, enabling applications in many areas such as cancer research. Many tools have been developed to analyze genomic variation ranging from single nucleotide variants to whole chromosomal aberrations. As sequencing throughput increases, the number of variants called by such tools also grows. The manual inspection of such calls that is often employed is thus becoming a time-consuming procedure. We developed the Variant InsPector and Expert Rating tool (VIPER) to speed up this process by integrating the Integrative Genomics Viewer into a web application. Analysts can then quickly iterate through variants, apply filters and make decisions based on the generated images and variant metadata. VIPER was successfully employed in analyses with manual inspection of more than 10 000 calls. VIPER is implemented in Java and Javascript and is freely available at https://github.com/MarWoes/viper. marius.woeste@uni-muenster.de. Supplementary data are available at Bioinformatics online.
Overview of Movement Disorders
A Consumer Education Self-Help Manual for Displaced Homemaker Service Providers.
ERIC Educational Resources Information Center
Williams, Herma; Thompson, Patricia
This manual is designed to allow service providers at displaced homemaker centers to update and refresh their knowledge and information of consumer concepts and to initiate and implement some consumer education services designed to meet the needs of displaced homemakers. Material is divided into five parts. Part 1 focuses on financial management…
Chen, C; Li, H; Zhou, X; Wong, S T C
2008-05-01
Image-based, high throughput genome-wide RNA interference (RNAi) experiments are increasingly carried out to facilitate the understanding of gene functions in intricate biological processes. Automated screening of such experiments generates a large number of images with great variations in image quality, which makes manual analysis unreasonably time-consuming. Therefore, effective techniques for automatic image analysis are urgently needed, in which segmentation is one of the most important steps. This paper proposes a fully automatic method for cell segmentation in genome-wide RNAi screening images. The method consists of two steps: nuclei and cytoplasm segmentation. Nuclei are extracted and labelled to initialize the cytoplasm segmentation. Since the quality of RNAi images is rather poor, a novel scale-adaptive steerable filter is designed to enhance the images in order to extract the long and thin protrusions of spiky cells. Then, a constraint-factor GCBAC method and morphological algorithms are combined into an integrated method to segment tightly clustered cells. Compared with the results obtained using the seeded watershed and with the ground truth, that is, manual labelling by experts in RNAi screening data, our method achieves higher accuracy. Compared with active contour methods, our method consumes much less time. These positive results indicate that the proposed method can be applied in automatic image analysis of multi-channel image screening data.
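For reference, the seeded-watershed baseline that the method is compared against can be sketched in a few lines with scikit-image; the foreground threshold is illustrative.

import numpy as np
from scipy import ndimage
from skimage.filters import sobel
from skimage.segmentation import watershed

def cells_from_nuclei(cyto_channel, nuclei_mask):
    """Labelled nuclei act as seeds and cell boundaries are sought on the
    gradient of the cytoplasm channel, restricted to a rough foreground mask."""
    markers, _ = ndimage.label(nuclei_mask)                      # one label per nucleus
    cyto = np.asarray(cyto_channel, dtype=float)
    gradient = sobel(cyto)
    foreground = cyto > np.percentile(cyto, 40)                  # rough foreground estimate
    return watershed(gradient, markers, mask=foreground)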
Hsieh, Anne M-Y; Polyakova, Olena; Fu, Guodong; Chazen, Ronald S; MacMillan, Christina; Witterick, Ian J; Ralhan, Ranju; Walfish, Paul G
2018-04-13
Recognition of noninvasive follicular thyroid neoplasms with papillary-like nuclear features (NIFTP), distinguishing them from the invasive malignant encapsulated follicular variant of papillary thyroid carcinoma (EFVPTC), can prevent overtreatment of NIFTP patients. We and others have previously reported that programmed death-ligand 1 (PD-L1) is a useful biomarker in thyroid tumors; however, all reports to date have relied on manual scoring, which is time consuming as well as subject to individual bias. Consequently, we developed a digital image analysis (DIA) protocol for cytoplasmic and membranous stain quantitation (ThyApp) and evaluated three tumor sampling methods [systematic uniform random sampling, hotspot nucleus, and hotspot nucleus/3,3'-diaminobenzidine (DAB)]. A patient cohort of 153 cases consisting of 48 NIFTP, 44 EFVPTC, 26 benign nodules and 35 encapsulated follicular lesions/neoplasms with lymphocytic thyroiditis (LT) was studied. ThyApp quantitation of PD-L1 expression revealed a significant difference between invasive EFVPTC and NIFTP, but none between NIFTP and benign nodules. ThyApp integrated with the hotspot nucleus tumor sampling method was demonstrated to be the most clinically relevant, consumed the least processing time, and eliminated interobserver variance. In conclusion, the fully automatic DIA algorithm developed using a histomorphological approach objectively quantitated PD-L1 expression in encapsulated thyroid neoplasms and outperformed manual scoring in reproducibility and efficiency.
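A rough sketch of DAB-based stain quantitation via colour deconvolution (an assumed stand-in for the ThyApp scoring logic, with an illustrative threshold rather than its calibration):

import numpy as np
from skimage.color import rgb2hed

def dab_positive_fraction(rgb_tile, dab_threshold=0.02):
    """Unmix the stains with colour deconvolution and report the fraction of
    pixels whose DAB channel exceeds a threshold."""
    hed = rgb2hed(rgb_tile)        # channels: hematoxylin, eosin, DAB
    return float((hed[..., 2] > dab_threshold).mean())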
Designed tools for analysis of lithography patterns and nanostructures
NASA Astrophysics Data System (ADS)
Dervillé, Alexandre; Baderot, Julien; Bernard, Guilhem; Foucher, Johann; Grönqvist, Hanna; Labrosse, Aurélien; Martinez, Sergio; Zimmermann, Yann
2017-03-01
We introduce a set of designed tools for the analysis of lithography patterns and nano structures. The classical metrological analysis of these objects has the drawbacks of being time consuming, requiring manual tuning and lacking robustness and user friendliness. With the goal of improving the current situation, we propose new image processing tools at different levels: semi automatic, automatic and machine-learning enhanced tools. The complete set of tools has been integrated into a software platform designed to transform the lab into a virtual fab. The underlying idea is to master nano processes at the research and development level by accelerating the access to knowledge and hence speed up the implementation in product lines.
Abnormal Position and Presentation of the Fetus
Syrinx of the Spinal Cord and Brain Stem
... View The Professional Version For doctors and medical students Consumer Version Merck Manual Consumer Version × MERCK MANUAL - ... View The Professional Version For doctors and medical students Home Medical Topics Blood Disorders Bone, Joint, and ...
Vogeser, Michael; Spöhrer, Ute
2006-01-01
Liquid chromatography tandem-mass spectrometry (LC-MS/MS) is an efficient technology for routine determination of immunosuppressants in whole blood; however, time-consuming manual sample preparation remains a significant limitation of this technique. Using a commercially available robotic pipetting system (Tecan Freedom EVO), we developed an automated sample-preparation protocol for quantification of tacrolimus in whole blood by LC-MS/MS. Barcode reading, sample resuspension, transfer of whole blood aliquots into a deep-well plate, addition of internal standard solution, mixing, and protein precipitation by addition of an organic solvent are performed by the robotic system. After centrifugation of the plate, the deproteinized supernatants are submitted to on-line solid phase extraction, using column switching prior to LC-MS/MS analysis. The only manual actions within the entire process are decapping of the tubes and transfer of the deep-well plate from the robotic system to a centrifuge and finally to the HPLC autosampler. Whole blood pools were used to assess the reproducibility of the entire analytical system for measuring tacrolimus concentrations. A total coefficient of variation of 1.7% was found for the entire automated analytical process (n=40; mean tacrolimus concentration, 5.3 microg/L). Close agreement between tacrolimus results obtained after manual and automated sample preparation was observed. The analytical system described here, comprising automated protein precipitation, on-line solid phase extraction and LC-MS/MS analysis, is convenient and precise, and minimizes hands-on time and the risk of mistakes in the quantification of whole blood immunosuppressant concentrations compared to conventional methods.
Clinical evaluation of atlas and deep learning based automatic contouring for lung cancer.
Lustberg, Tim; van Soest, Johan; Gooding, Mark; Peressutti, Devis; Aljabar, Paul; van der Stoep, Judith; van Elmpt, Wouter; Dekker, Andre
2018-02-01
Contouring of organs at risk (OARs) is an important but time-consuming part of radiotherapy treatment planning. The aim of this study was to investigate whether institutionally created, software-generated contours will save time if used as a starting point for manual OAR contouring for lung cancer patients. Twenty CT scans of stage I-III NSCLC patients were used to compare user-adjusted contours, initialized from atlas-based and deep learning contours, against manual delineation. The lungs, esophagus, spinal cord, heart and mediastinum were contoured for this study. The time to perform the manual tasks was recorded. With a median time of 20 min for manual contouring, the total median time saved was 7.8 min when using atlas-based contouring and 10 min for deep learning contouring. Both atlas-based and deep learning adjustment times were significantly lower than the manual contouring time for all OARs, except for the left lung and esophagus with atlas-based contouring. User adjustment of software-generated contours is a viable strategy to reduce the contouring time of OARs for lung radiotherapy while conforming to local clinical standards. In addition, deep learning contouring shows promising results compared to existing solutions. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Thomason, Deborah J., Ed.
This 4-H manual provides instructions and materials for a consumer education activity. It contains a wide range of activities and learning opportunities for a hypothetical buying situation with several choices or alternatives provided. The manual is designed to teach the participant how to rank the choices and develop oral reasons for that…
Microscopic image analysis for reticulocyte based on watershed algorithm
NASA Astrophysics Data System (ADS)
Wang, J. Q.; Liu, G. F.; Liu, J. G.; Wang, G.
2007-12-01
We present a watershed-based algorithm for the analysis of light microscopic images of reticulocytes (RETs), to be used in an automated RET recognition system for peripheral blood. The original images, obtained by micrography, are segmented by a modified watershed algorithm and recognized in terms of gray entropy and connected-region area. In the watershed step, judgment conditions are controlled according to the character of the image, and the segmentation is refined by morphological subtraction. The algorithm was simulated with MATLAB software. Automated and manual scoring give similar results, with good correlation (r=0.956) between the two methods over 50 RET images. The results indicate that the algorithm is comparable to conventional manual scoring for peripheral blood RETs, and is superior in objectivity. The algorithm avoids time-consuming calculations such as ultra-erosion and region growth, which consequently speeds up the computation.
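A hedged sketch of the recognition step only (the paper's implementation is in MATLAB; this Python version is illustrative): each segmented region is characterized by its connected-region area and gray-level entropy, and regions whose entropy exceeds a threshold are flagged as reticulocyte candidates. The area and entropy thresholds below are placeholders, not values from the paper.

```python
# Sketch only: area + gray-entropy screening of watershed-labelled regions.
import numpy as np
from skimage import measure

def gray_entropy(pixels, bins=64):
    hist, _ = np.histogram(pixels, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))

def reticulocyte_candidates(label_img, gray_img, min_area=50, entropy_thresh=3.0):
    ret_labels = []
    for region in measure.regionprops(label_img, intensity_image=gray_img):
        if region.area < min_area:                      # reject debris by connected-region area
            continue
        ent = gray_entropy(region.intensity_image[region.image])
        if ent > entropy_thresh:                        # textured interior suggests residual RNA
            ret_labels.append(region.label)
    return ret_labels
```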
NASA Astrophysics Data System (ADS)
Kerekes, Ryan A.; Gleason, Shaun S.; Trivedi, Niraj; Solecki, David J.
2010-03-01
Segmentation, tracking, and tracing of neurons in video imagery are important steps in many neuronal migration studies and can be inaccurate and time-consuming when performed manually. In this paper, we present an automated method for tracing the leading and trailing processes of migrating neurons in time-lapse image stacks acquired with a confocal fluorescence microscope. In our approach, we first locate and track the soma of the cell of interest by smoothing each frame and tracking the local maxima through the sequence. We then trace the leading process in each frame by starting at the center of the soma and stepping repeatedly in the most likely direction of the leading process. This direction is found at each step by examining second derivatives of fluorescent intensity along curves of constant radius around the current point. Tracing terminates after a fixed number of steps or when fluorescent intensity drops below a fixed threshold. We evolve the resulting trace to form an improved trace that more closely follows the approximate centerline of the leading process. We apply a similar algorithm to the trailing process of the cell by starting the trace in the opposite direction. We demonstrate our algorithm on two time-lapse confocal video sequences of migrating cerebellar granule neurons (CGNs). We show that the automated traces closely approximate ground truth traces to within 1 or 2 pixels on average. Additionally, we compute line intensity profiles of fluorescence along the automated traces and quantitatively demonstrate their similarity to manually generated profiles in terms of fluorescence peak locations.
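A simplified sketch of the directional stepping idea described above, not the authors' implementation: intensities are sampled on a circle of fixed radius around the current point, and the next step is taken toward the angle where the angular second derivative is most negative (a local intensity ridge). Bilinear interpolation, the soma-tracking stage, and the stopping rules are omitted; all numeric parameters are assumptions.

```python
# Sketch only: one step of ridge-following around the current trace point.
import numpy as np

def next_point(image, y, x, radius=5.0, n_angles=72, prev_angle=None):
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    ys = np.clip(np.round(y + radius * np.sin(angles)).astype(int), 0, image.shape[0] - 1)
    xs = np.clip(np.round(x + radius * np.cos(angles)).astype(int), 0, image.shape[1] - 1)
    profile = image[ys, xs].astype(float)

    # Discrete second derivative along the circle (periodic boundary).
    d2 = np.roll(profile, -1) - 2.0 * profile + np.roll(profile, 1)

    if prev_angle is not None:
        # Discourage doubling back by masking angles near the reverse direction.
        back = (angles - (prev_angle + np.pi)) % (2.0 * np.pi)
        d2[np.minimum(back, 2.0 * np.pi - back) < np.pi / 3] = np.inf

    theta = angles[np.argmin(d2)]          # most ridge-like direction
    return y + radius * np.sin(theta), x + radius * np.cos(theta), theta
```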
An Automated Circulation System for a Small Technical Library.
ERIC Educational Resources Information Center
Culnan, Mary J.
The traditional manually controlled circulation records of the Burroughs Corporation Library in Goleta, California, presented problems of inaccuracies, time-consuming searches, and lack of use statistics. An automated system with the capacity to do file maintenance and statistical record-keeping was implemented on a Burroughs B1700 computer.…
Uav-Based Automatic Tree Growth Measurement for Biomass Estimation
NASA Astrophysics Data System (ADS)
Karpina, M.; Jarząbek-Rychard, M.; Tymków, P.; Borkowski, A.
2016-06-01
Manual in-situ measurements of geometric tree parameters for biomass volume estimation are time-consuming and economically ineffective. Photogrammetric techniques can be deployed to automate the measurement procedure. The purpose of the presented work is automatic tree growth estimation based on Unmanned Aerial Vehicle (UAV) imagery. The experiment was conducted in an agricultural test field with Scots pine canopies. The data were collected using a Leica Aibotix X6V2 platform equipped with a Nikon D800 camera. Reference geometric parameters of selected sample plants were measured manually each week, and the in-situ measurements were correlated with the UAV data acquisition. The correlation aimed at investigating optimal flight conditions and parameter settings for image acquisition. The collected images are processed in a state-of-the-art tool, resulting in the generation of dense 3D point clouds. An algorithm was developed to estimate geometric tree parameters from the 3D points. Stem positions and tree tops are identified automatically in a cross section, followed by the calculation of tree heights. The automatically derived height values are compared to the reference measurements performed manually, which allows the automatic growth estimation process to be evaluated. The accuracy achieved using UAV photogrammetry for tree height estimation is about 5 cm.
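A minimal sketch of the height-estimation step under stated assumptions (not the authors' algorithm): given the 3D points belonging to one plant's footprint, the ground level is taken as a low z-percentile and the tree top as a high one, so the height is their difference. Column layout (x, y, z in metres) and the percentile values are assumptions.

```python
# Sketch only: robust tree height from a per-plant subset of a dense point cloud.
import numpy as np

def tree_height(points_xyz, ground_pct=2.0, top_pct=99.5):
    z = np.asarray(points_xyz, dtype=float)[:, 2]
    ground = np.percentile(z, ground_pct)   # robust ground estimate
    top = np.percentile(z, top_pct)         # robust tree-top estimate
    return top - ground

# Example: a synthetic block of points roughly 2 m tall
pts = np.column_stack([np.random.rand(1000), np.random.rand(1000), np.random.rand(1000) * 2.0])
print(round(tree_height(pts), 2))
```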
Concrete Crack Identification Using a UAV Incorporating Hybrid Image Processing
Lee, Junhwa; Ahn, Eunjong; Cho, Soojin; Shin, Myoungsu
2017-01-01
Crack assessment is an essential process in the maintenance of concrete structures. In general, concrete cracks are inspected by manual visual observation of the surface, which is intrinsically subjective as it depends on the experience of inspectors. Further, it is time-consuming, expensive, and often unsafe when inaccessible structural members are to be assessed. Unmanned aerial vehicle (UAV) technologies combined with digital image processing have recently been applied to crack assessment to overcome the drawbacks of manual visual inspection. However, identification of crack information in terms of width and length has not been fully explored in UAV-based applications, because of the absence of distance measurement and tailored image processing. This paper presents a crack identification strategy that combines hybrid image processing with UAV technology. Equipped with a camera, an ultrasonic displacement sensor, and a WiFi module, the system provides the image of cracks and the associated working distance from a target structure on demand. The obtained information is subsequently processed by hybrid image binarization to estimate the crack width accurately while minimizing the loss of crack length information. The proposed system has been shown to successfully measure cracks thicker than 0.1 mm with a maximum length estimation error of 7.3%. PMID:28880254
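A hedged sketch of the scaling step only, assuming a simple pinhole-camera model rather than the paper's calibration: the ultrasonic sensor supplies the working distance, and the ground sampling distance per pixel follows from the focal length and sensor pixel pitch. All numbers are illustrative.

```python
# Sketch only: convert a crack width measured in pixels to millimetres.
def crack_width_mm(width_px, working_distance_mm, focal_length_mm, pixel_pitch_mm):
    # mm on the target surface covered by one pixel (pinhole approximation)
    gsd_mm = working_distance_mm * pixel_pitch_mm / focal_length_mm
    return width_px * gsd_mm

# e.g. a 3-pixel-wide crack at 1.5 m working distance, 16 mm lens, 4.8 um pixels
print(crack_width_mm(3, 1500.0, 16.0, 0.0048))  # ~1.35 mm
```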
ERIC Educational Resources Information Center
Allegheny Intermediate Unit, Pittsburgh, PA.
Designed for grades K-4, this manual contains suggested teaching strategies for infusing consumer education into the academic areas of art, language arts, mathematics, science/health, and social studies. Each of the twenty to thirty learning activities provided for each of the academic areas is based on competencies related to one of four…
DOT National Transportation Integrated Search
2014-07-01
Pavement condition surveys are carried out periodically to gather information on pavement distresses that will guide decision-making for maintenance and preservation. Traditional methods involve manual pavement inspections, which are time-consuming ...
High-Throughput Platform for Synthesis of Melamine-Formaldehyde Microcapsules.
Çakir, Seda; Bauters, Erwin; Rivero, Guadalupe; Parasote, Tom; Paul, Johan; Du Prez, Filip E
2017-07-10
The synthesis of microcapsules via in situ polymerization is a labor-intensive and time-consuming process, in which many composition and process factors affect microcapsule formation and morphology. Herein, we report a novel combinatorial technique for the preparation of melamine-formaldehyde microcapsules, using a custom-made and automated high-throughput platform (HTP). After performing validation experiments to ensure the accuracy and reproducibility of the novel platform, a design-of-experiments study was performed. The influence of different encapsulation parameters was investigated, such as the effect of the surfactant (type and concentration) and the core/shell ratio. As a result, this HTP platform is suitable for the synthesis of different types of microcapsules in an automated and controlled way, allowing different reaction parameters to be screened in a shorter time compared to manual synthesis techniques.
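A small illustrative sketch of how such a factorial screen could be enumerated for an automated platform; the factor names and levels below are assumptions, not the paper's experimental design.

```python
# Sketch only: enumerate a full-factorial run list for an HTP screening campaign.
from itertools import product

surfactants = ["SDS", "PVA", "none"]          # assumed surfactant types
concentrations_wt_pct = [0.5, 1.0, 2.0]       # assumed concentrations
core_shell_ratios = [1.0, 2.0, 3.0]           # assumed core/shell ratios

runs = [
    {"surfactant": s, "conc_wt_pct": c, "core_shell": r}
    for s, c, r in product(surfactants, concentrations_wt_pct, core_shell_ratios)
]
print(len(runs))  # 27 runs for a 3x3x3 full factorial
```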
Maxwell, Susan K.
2010-01-01
Satellite imagery and aerial photography represent a vast resource to significantly enhance environmental mapping and modeling applications for use in understanding spatio-temporal relationships between environment and health. Deriving boundaries of land cover objects, such as trees, buildings, and crop fields, from image data has traditionally been performed manually using a very time consuming process of hand digitizing. Boundary detection algorithms are increasingly being applied using object-based image analysis (OBIA) technology to automate the process. The purpose of this paper is to present an overview and demonstrate the application of OBIA for delineating land cover features at multiple scales using a high resolution aerial photograph (1 m) and a medium resolution Landsat image (30 m) time series in the context of a pesticide spray drift exposure application. PMID:21135917
NASA Technical Reports Server (NTRS)
Jones, Erick C.; Richards, Casey; Herstein, Kelli; Franca, Rodrigo; Yagoda, Evan L.; Vasquez, Reuben
2008-01-01
Current inventory management techniques for consumables and supplies aboard space vehicles are burdensome and time consuming. Inventory of food, clothing, and supplies is taken periodically by manually scanning the barcodes on each item. Reading barcodes is inaccurate, and the excessive amount of time it takes the astronauts to perform this function would be better spent doing scientific experiments. Therefore, there is a need for an alternative method of inventory control by NASA astronauts. Radio Frequency Identification (RFID) is an automatic data capture technology that has the potential to create a more effective and user-friendly inventory management system (IMS). In this paper we introduce a Design for Six Sigma Research (DFSS-R) methodology that allows for reliability testing of RFID systems. The research methodology uses a modified sequential design of experiments process to test and evaluate the quality of commercially available RFID technology. The results from the experimentation are compared to the requirements provided by NASA to evaluate the feasibility of using passive Generation 2 RFID technology to improve inventory control aboard crew exploration vehicles.
Electronics manufacturing and assembly in Japan
NASA Technical Reports Server (NTRS)
Kukowski, John A.; Boulton, William R.
1995-01-01
In the consumer electronics industry, precision processing technology is the basis for enhancing product functions and for minimizing components and end products. Throughout Japan, manufacturing technology is seen as critical to the production and assembly of advanced products. While its population has increased less than 30 percent over twenty-five years, Japan's gross national product has increased thirtyfold; this growth has resulted in large part from rapid replacement of manual operations with innovative, high-speed, large-scale, continuously running, complex machines that process a growing number of miniaturized components. The JTEC panel found that introduction of next-generation electronics products in Japan goes hand-in-hand with introduction of new and improved production equipment. In the panel's judgment, Japan's advanced process technologies and equipment development and its highly automated factories are crucial elements of its domination of the consumer electronics marketplace - and Japan's expertise in manufacturing consumer electronics products gives it potentially unapproachable process expertise in all electronics markets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, X; Li, S; Zheng, D
Purpose: Linac commissioning is a time-consuming and labor-intensive process, and streamlining it is highly desirable. In particular, manual measurement of output factors for a variety of field sizes and energies greatly hinders commissioning efficiency. In this study, automated measurement of output factors was demonstrated as 'one-click' using the data logging of an electrometer. Methods: Beams to be measured were created in the recording and verifying (R&V) system and configured for continuous delivery. An electrometer with an automatic data logging feature enabled continuous data collection for all fields without human intervention. The electrometer saved data into a spreadsheet every 0.5 seconds. A Matlab program was developed to analyze the spreadsheet data to monitor and check the data quality. Results: For each photon energy, output factors were measured for five configurations, including an open field and four wedges. Each configuration includes 72 field sizes, ranging from 4×4 to 20×30 cm². Using automation, it took 50 minutes to complete the measurement of 72 field sizes, in contrast to 80 minutes when using the manual approach. The automation avoided the need for redundant Linac status checks between fields, as in the manual approach. In fact, the only limiting factor in such automation is Linac overheating. The data collection beams in the R&V system are reusable, and the simplified process is less error-prone. In addition, our Matlab program extracted the output factors faithfully from the data logging, and the discrepancy between the automatic and manual measurements is within ±0.3%. For two separate automated measurements 30 days apart, a consistency check shows a discrepancy within ±1% for 6MV photons with a 60 degree wedge. Conclusion: Automated output factor measurements can save time by 40% compared with the conventional manual approach. This work laid the ground for further improvement toward the automation of Linac commissioning.
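A hedged sketch of the log-analysis idea (the authors used a Matlab program and a specific spreadsheet format; this Python version only illustrates the principle under assumed data): readings sampled every 0.5 s are split into beam-on segments, the charge per segment is summed, and output factors are taken relative to a chosen reference field. The threshold and reference index are assumptions, and the log is assumed to start and end with the beam off.

```python
# Sketch only: extract per-field output factors from a continuous electrometer log.
import numpy as np

def output_factors(readings, threshold=0.01, reference_index=0):
    readings = np.asarray(readings, dtype=float)
    beam_on = readings > threshold

    # Beam-on/off transitions split the log into one segment per field.
    d = np.diff(beam_on.astype(int))
    starts = np.flatnonzero(d == 1) + 1
    stops = np.flatnonzero(d == -1) + 1

    # Integrated charge per field, normalized to the reference field.
    sums = np.array([readings[a:b].sum() for a, b in zip(starts, stops)])
    return sums / sums[reference_index]
```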
Natural Language Processing Methods and Systems for Biomedical Ontology Learning
Liu, Kaihong; Hogan, William R.; Crowley, Rebecca S.
2010-01-01
While the biomedical informatics community widely acknowledges the utility of domain ontologies, there remain many barriers to their effective use. One important requirement of domain ontologies is that they must achieve a high degree of coverage of the domain concepts and concept relationships. However, the development of these ontologies is typically a manual, time-consuming, and often error-prone process. Limited resources result in missing concepts and relationships as well as difficulty in updating the ontology as knowledge changes. Methodologies developed in the fields of natural language processing, information extraction, information retrieval and machine learning provide techniques for automating the enrichment of an ontology from free-text documents. In this article, we review existing methodologies and developed systems, and discuss how existing methods can benefit the development of biomedical ontologies. PMID:20647054
Cottenden, Jennielee; Filter, Emily R; Cottreau, Jon; Moore, David; Bullock, Martin; Huang, Weei-Yuarn; Arnason, Thomas
2018-03-01
Pathologists routinely assess Ki67 immunohistochemistry to grade gastrointestinal and pancreatic neuroendocrine tumors. Unfortunately, manual counts of the Ki67 index are very time consuming, and eyeball estimation has been criticized as unreliable. Manual Ki67 counts performed by cytotechnologists could potentially save pathologist time and improve accuracy. The objective was to assess the concordance between manual Ki67 index counts performed by cytotechnologists versus eyeball estimates and manual Ki67 counts by pathologists. One Ki67 immunohistochemical stain was retrieved from each of 18 archived gastrointestinal or pancreatic neuroendocrine tumor resections. We compared pathologists' Ki67 eyeball estimates on glass slides and printed color images with manual counts performed by 3 cytotechnologists and gold standard manual Ki67 index counts by 3 pathologists. Tumor grade agreement between pathologist image eyeball estimate and gold standard pathologist manual count was fair (κ = 0.31; 95% CI, 0.030-0.60). In 9 of 20 cases (45%), the mean pathologist eyeball estimate was 1 grade higher than the mean pathologist manual count. There was almost perfect agreement in classifying tumor grade between the mean cytotechnologist manual count and the mean pathologist manual count (κ = 0.910; 95% CI, 0.697-1.00). In 20 cases, there was only 1 grade disagreement between the 2 methods. Eyeball estimation by pathologists required less than 1 minute, whereas manual counts by pathologists required a mean of 17 minutes per case. Eyeball estimation of the Ki67 index has a high rate of tumor grade misclassification compared with manual counting. Cytotechnologist manual counts are accurate and save pathologist time.
Vorberg, Ellen; Fleischer, Heidi; Junginger, Steffen; Liu, Hui; Stoll, Norbert; Thurow, Kerstin
2016-10-01
Life science areas require specific sample pretreatment to increase the concentration of the analytes and/or to convert the analytes into an appropriate form for the detection and separation systems. Various workstations are commercially available, allowing for automated biological sample pretreatment. Nevertheless, due to the required temperature, pressure, and volume conditions in typical element- and structure-specific measurements, these automated platforms are not suitable for such analytical processes. Thus, the purpose of the presented investigation was the design, realization, and evaluation of an automated system ensuring high-precision sample preparation for a variety of analytical measurements. The developed system has to enable system adaptation and high performance flexibility. Furthermore, the system has to be capable of dealing with the wide range of required vessels simultaneously, allowing for less costly and less time-consuming process steps. The system's functionality has been confirmed in various validation sequences. Using element-specific measurements, the automated system was up to 25% more precise than the manual procedure, and as precise as the manual procedure using structure-specific measurements. © 2015 Society for Laboratory Automation and Screening.
Kuich, P. Henning J. L.; Hoffmann, Nils; Kempa, Stefan
2015-01-01
A current bottleneck in GC–MS metabolomics is the processing of raw machine data into a final data matrix that contains the quantities of identified metabolites in each sample. While there are many bioinformatics tools available to aid the initial steps of the process, their use requires both significant technical expertise and a subsequent manual validation of identifications and alignments if high data quality is desired. The manual validation is tedious and time consuming, becoming prohibitively so as sample numbers increase. We have, therefore, developed Maui-VIA, a solution based on a visual interface that allows experts and non-experts to simultaneously and quickly process, inspect, and correct large numbers of GC–MS samples. It allows for the visual inspection of identifications and alignments, facilitating a unique and, due to its visualization and keyboard shortcuts, very fast interaction with the data. Therefore, Maui-VIA fills an important niche by (1) providing functionality that optimizes the component of data processing that is currently most labor intensive, to save time, and (2) lowering the threshold of expertise required to process GC–MS data. Maui-VIA projects are initiated with baseline-corrected raw data, peak lists, and a database of metabolite spectra and retention indices used for identification. It provides functionality for retention index calculation, a targeted library search, a visual annotation, alignment, and correction interface, and metabolite quantification, as well as the export of the final data matrix. The high quality of data produced by Maui-VIA is illustrated by its comparison to data attained manually by an expert using vendor software on a previously published dataset concerning the response of Chlamydomonas reinhardtii to salt stress. In conclusion, Maui-VIA provides the opportunity for fast, confident, and high-quality processing and validation of large numbers of GC–MS samples by non-experts. PMID:25654076
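A minimal sketch of a linear (temperature-programmed) retention-index calculation, one of the steps Maui-VIA automates; the n-alkane retention times below are assumed example values, not data from the paper, and Maui-VIA's exact formula is not claimed here.

```python
# Sketch only: linear retention index from a bracketing n-alkane ladder.
from bisect import bisect_right

# retention times (min) of the n-alkane ladder, keyed by carbon number (assumed values)
alkanes = {10: 5.2, 11: 6.1, 12: 7.0, 13: 7.8, 14: 8.7}

def retention_index(rt):
    carbons = sorted(alkanes)
    times = [alkanes[c] for c in carbons]
    i = bisect_right(times, rt) - 1          # alkane eluting just before rt
    i = min(max(i, 0), len(carbons) - 2)     # clamp to a valid bracketing pair
    c, t0, t1 = carbons[i], times[i], times[i + 1]
    return 100.0 * (c + (rt - t0) / (t1 - t0))

print(round(retention_index(6.55), 1))  # -> 1150.0
```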
A semi-automatic annotation tool for cooking video
NASA Astrophysics Data System (ADS)
Bianco, Simone; Ciocca, Gianluigi; Napoletano, Paolo; Schettini, Raimondo; Margherita, Roberto; Marini, Gianluca; Gianforme, Giorgio; Pantaleo, Giuseppe
2013-03-01
In order to create a cooking assistant application that guides users in the preparation of dishes relevant to their profile diets and food preferences, it is necessary to accurately annotate the video recipes, identifying and tracking the foods handled by the cook. These videos present particular annotation challenges such as frequent occlusions, food appearance changes, etc. Manually annotating the videos is a time-consuming, tedious and error-prone task. Fully automatic tools that integrate computer vision algorithms to extract and identify the elements of interest are not error free, and false positive and false negative detections need to be corrected in a post-processing stage. We present an interactive, semi-automatic tool for the annotation of cooking videos that integrates computer vision techniques under the supervision of the user. The annotation accuracy is increased with respect to completely automatic tools and the human effort is reduced with respect to completely manual ones. The performance and usability of the proposed tool are evaluated on the basis of the time and effort required to annotate the same video sequences.
Mechanized Polishing of Optical Rod and Fiber Ends
NASA Technical Reports Server (NTRS)
Gum, J. S.
1987-01-01
Workpiece holder for standard grinding and polishing machine makes it easier to produce optical finish and shape on end of metal or glass rod or bundle of optical fibers. Previously, glass parts were lapped and polished manually, a time-consuming procedure calling for considerable skill.
An open source automatic quality assurance (OSAQA) tool for the ACR MRI phantom.
Sun, Jidi; Barnes, Michael; Dowling, Jason; Menk, Fred; Stanwell, Peter; Greer, Peter B
2015-03-01
Routine quality assurance (QA) is necessary and essential to ensure MR scanner performance. This includes geometric distortion, slice positioning and thickness accuracy, high contrast spatial resolution, intensity uniformity, ghosting artefact and low contrast object detectability. However, this manual process can be very time consuming. This paper describes the development and validation of an open source tool to automate the MR QA process, which aims to increase physicist efficiency, and improve the consistency of QA results by reducing human error. The OSAQA software was developed in Matlab and the source code is available for download from http://jidisun.wix.com/osaqa-project/. During program execution QA results are logged for immediate review and are also exported to a spreadsheet for long-term machine performance reporting. For the automatic contrast QA test, a user specific contrast evaluation was designed to improve accuracy for individuals on different display monitors. American College of Radiology QA images were acquired over a period of 2 months to compare manual QA and the results from the proposed OSAQA software. OSAQA was found to significantly reduce the QA time from approximately 45 to 2 min. Both the manual and OSAQA results were found to agree with regard to the recommended criteria and the differences were insignificant compared to the criteria. The intensity homogeneity filter is necessary to obtain an image with acceptable quality and at the same time keeps the high contrast spatial resolution within the recommended criterion. The OSAQA tool has been validated on scanners with different field strengths and manufacturers. A number of suggestions have been made to improve both the phantom design and QA protocol in the future.
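A simplified sketch of one of the ACR-style checks such a tool automates (percent integral uniformity), not the OSAQA implementation itself: the measurement ROI handling is reduced to percentiles inside an assumed central circular ROI.

```python
# Sketch only: percent integral uniformity (PIU) of a uniformity slice.
import numpy as np

def percent_integral_uniformity(slice_img, roi_radius_frac=0.4):
    ny, nx = slice_img.shape
    y, x = np.ogrid[:ny, :nx]
    roi = (y - ny / 2) ** 2 + (x - nx / 2) ** 2 <= (roi_radius_frac * min(ny, nx)) ** 2
    vals = slice_img[roi].astype(float)
    lo, hi = np.percentile(vals, [1, 99])    # robust stand-ins for the low/high signal ROIs
    return 100.0 * (1.0 - (hi - lo) / (hi + lo))
```

A PIU close to 100% indicates a uniform phantom image; the acceptance threshold depends on field strength and the local QA protocol.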
SWARM : a scientific workflow for supporting Bayesian approaches to improve metabolic models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, X.; Stevens, R.; Mathematics and Computer Science
2008-01-01
With the exponential growth of complete genome sequences, the analysis of these sequences is becoming a powerful approach to building genome-scale metabolic models. These models can be used to study individual molecular components and their relationships, and eventually to study cells as systems. However, constructing genome-scale metabolic models manually is time-consuming and labor-intensive. As a consequence of this manual model-building process, far fewer genome-scale metabolic models are available than the hundreds of genome sequences. To tackle this problem, we designed SWARM, a scientific workflow that can be utilized to improve genome-scale metabolic models in a high-throughput fashion. SWARM deals with a range of issues including the integration of data across distributed resources, data format conversions, data update, and data provenance. Put altogether, SWARM streamlines the whole modeling process, which includes extracting data from various resources, deriving training datasets to train a set of predictors and applying Bayesian techniques to assemble the predictors, inferring on the ensemble of predictors to insert missing data, and eventually improving draft metabolic networks automatically. By enhancing metabolic model construction, SWARM enables scientists to generate many genome-scale metabolic models within a short period of time and with less effort.
Rapid performance modeling and parameter regression of geodynamic models
NASA Astrophysics Data System (ADS)
Brown, J.; Duplyakin, D.
2016-12-01
Geodynamic models run in a parallel environment have many parameters with complicated effects on performance and on scientifically relevant functionals. Manually choosing an efficient machine configuration and mapping out the parameter space requires a great deal of expert knowledge and time-consuming experiments. We propose an active learning technique based on Gaussian Process Regression to automatically select experiments that map out the performance landscape with respect to scientific and machine parameters. The resulting performance model is then used to select optimal experiments for improving the accuracy of a reduced order model per unit of computational cost. We present the framework and evaluate its quality and capability using popular lithospheric dynamics models.
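A minimal sketch of the active-learning loop described above, using a toy objective and a random candidate grid rather than the authors' framework: a Gaussian process is fit to the experiments performed so far, and the next experiment is requested where the predictive uncertainty is largest.

```python
# Sketch only: uncertainty-driven selection of the next performance experiment.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def runtime(params):                      # placeholder for running the geodynamic model
    return np.sin(3 * params[0]) + 0.5 * params[1] ** 2

candidates = np.random.rand(200, 2)       # candidate (machine, model) parameter settings
X = candidates[:5].copy()                 # a few initial experiments
y = np.array([runtime(p) for p in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(10):
    gp.fit(X, y)
    _, std = gp.predict(candidates, return_std=True)
    nxt = candidates[np.argmax(std)]      # most uncertain configuration
    X = np.vstack([X, nxt])
    y = np.append(y, runtime(nxt))
```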
Interior Reconstruction Using the 3d Hough Transform
NASA Astrophysics Data System (ADS)
Dumitru, R.-C.; Borrmann, D.; Nüchter, A.
2013-02-01
Laser scanners are often used to create accurate 3D models of buildings for civil engineering purposes, but the process of manually vectorizing a 3D point cloud is time-consuming and error-prone (Adan and Huber, 2011). Therefore, the need arises to characterize and quantify complex environments in an automatic fashion, which poses challenges for data analysis. This paper presents a system for 3D modeling by detecting planes in 3D point clouds, based on which the scene is reconstructed at a high architectural level by automatically removing clutter and foreground data. The implemented software detects openings, such as windows and doors, and completes the 3D model by inpainting.
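A compact sketch of 3D Hough plane voting in its coarse, unoptimized textbook form (the paper's accumulator design is more refined): each point votes, for every discretized normal direction, for the distance rho = p . n, and the accumulator cell with the most votes yields the dominant plane. Grid resolutions are assumptions.

```python
# Sketch only: dominant-plane detection in a point cloud via a 3D Hough accumulator.
import numpy as np

def hough_dominant_plane(points, n_theta=30, n_phi=60, rho_step=0.05):
    thetas = np.linspace(0.0, np.pi, n_theta)
    phis = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    tt, pp = np.meshgrid(thetas, phis, indexing="ij")
    normals = np.stack([np.sin(tt) * np.cos(pp),
                        np.sin(tt) * np.sin(pp),
                        np.cos(tt)], axis=-1).reshape(-1, 3)   # all candidate normals

    rho = points @ normals.T                       # signed distance of each point per normal
    idx = np.round(rho / rho_step).astype(int)
    offset = idx.min()
    idx -= offset                                   # shift so bins are non-negative

    acc = np.zeros((normals.shape[0], idx.max() + 1), dtype=int)
    for j in range(normals.shape[0]):               # one vote histogram per normal direction
        acc[j] = np.bincount(idx[:, j], minlength=acc.shape[1])

    j, r = np.unravel_index(np.argmax(acc), acc.shape)
    return normals[j], (r + offset) * rho_step      # plane normal and distance to origin
```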
OpenCFU, a new free and open-source software to count cell colonies and other circular objects.
Geissmann, Quentin
2013-01-01
Counting circular objects such as cell colonies is an important source of information for biologists. Although this task is often time-consuming and subjective, it is still predominantly performed manually. The aim of the present work is to provide a new tool to enumerate circular objects from digital pictures and video streams. Here, I demonstrate that the created program, OpenCFU, is very robust, accurate and fast. In addition, it provides control over the processing parameters and is implemented in an intuitive and modern interface. OpenCFU is a cross-platform and open-source software freely available at http://opencfu.sourceforge.net.
Consumer-Oriented Laboratory Activities: A Manual for Secondary Science Students.
ERIC Educational Resources Information Center
Anderson, Jacqueline; McDuffie, Thomas E., Jr.
This document provides a laboratory manual for use by secondary level students in performing consumer-oriented laboratory experiments. Each experiment includes an introductory question outlining the purpose of the investigation, a detailed discussion, detailed procedures, questions to be answered upon completing the experiment, and information for…
Grab a coffee: your aerial images are already analyzed
NASA Astrophysics Data System (ADS)
Garetto, Anthony; Rademacher, Thomas; Schulz, Kristian
2015-07-01
For over two decades the AIM™ platform has been utilized in mask shops as the standard for actinic review of photomask sites in order to perform defect disposition and repair review. Throughout this time the measurement throughput of the systems has been improved to keep pace with the requirements of a manufacturing environment; however, the analysis of the captured sites has seen little improvement and has remained a manual process. This manual analysis of aerial images is time-consuming, subject to error and unreliability, and contributes to holding up turn-around time (TAT) and slowing process flow in a manufacturing environment. AutoAnalysis, the first application available for the FAVOR® platform, offers a solution to these problems by providing fully automated data transfer and analysis of AIM™ aerial images. The data is automatically output in a customizable format that can be tailored to internal needs and the requests of customers. Savings in operator time arise because the analysis no longer needs to be performed manually. Reliability is improved as human error is eliminated, ensuring that the most defective region is always and consistently captured. Finally, the TAT is shortened and process flow for the back end of the line is improved, as the analysis is fast and runs in parallel to the measurements. In this paper the concept and approach of AutoAnalysis are presented, as well as an update on the status of the project, and the benefits arising from the automation and the customizable approach of the solution are shown.
ERIC Educational Resources Information Center
Matheson, Jennifer L.
2007-01-01
Transcribing interview data is a time-consuming task that most qualitative researchers dislike. Transcribing is even more difficult for people with physical limitations because traditional transcribing requires manual dexterity and the ability to sit at a computer for long stretches of time. Researchers have begun to explore using an automated…
Advanced Simulation in Undergraduate Pilot Training: Visual Display Development
1975-12-01
properties of each member were calculated manually and were inserted by means of punched cards, thus it was relatively easy (but time consuming) to... investigations leading to the decision to employ an all-glass approach which consisted of a two-part glass funnel produced by Corning Glass Works... consuming. After complete sets of materials were selected they had to be cemented into a final assembly. This had to be done in two operations because of the
An inexpensive open-source ultrasonic sensing system for monitoring fluid levels
USDA-ARS?s Scientific Manuscript database
Fluid levels are measured in a variety of agricultural applications, and are often measured manually, which can be time-consuming and labor-intensive. Rapid advances in electronic technologies have made a variety of inexpensive sensing, monitoring, and control capabilities available. A monitoring ...
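An illustrative sketch of the basic time-of-flight idea behind ultrasonic level sensing, under assumed tank geometry and a generic echo-time reading (not the system described in this record): the sensor at the top of the tank reports the round-trip echo time, from which the air gap and hence the fluid level follow.

```python
# Sketch only: fluid level from an ultrasonic echo round-trip time.
SPEED_OF_SOUND_M_S = 343.0   # in air at roughly 20 C

def fluid_level_m(echo_round_trip_s, tank_depth_m):
    air_gap_m = SPEED_OF_SOUND_M_S * echo_round_trip_s / 2.0
    return max(tank_depth_m - air_gap_m, 0.0)

print(round(fluid_level_m(0.0035, 1.2), 3))   # ~0.6 m of fluid in a 1.2 m tank
```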
Development of an automated pre-sampling plan for construction projects : final report.
DOT National Transportation Integrated Search
1983-03-01
The development of an automated pre-sampling plan was undertaken to free the district construction personnel from the cumbersome and time-consuming task of preparing such plans manually. A computer program was written and linked to a data file which ...
Molecular mapping of two environmentally sensitive male-sterile mutants in soybean
USDA-ARS?s Scientific Manuscript database
In soybean [Glycine max (L.) Merr.], manual cross-pollination to produce large quantities of hybrid seed is difficult and time consuming. Identification of an environmentally stable male-sterility system could make hybrid seed production commercially valuable. In soybean, two environmentally sensi...
Kodama, Naomi; Kimura, Toshifumi; Yonemura, Seiichiro; Kaneda, Satoshi; Ohashi, Mizue; Ikeno, Hidetoshi
2014-01-01
Earthworms are important soil macrofauna inhabiting almost all ecosystems. Their biomass is large, and their burrowing and ingestion of soils alter soil physicochemical properties. Because of their large biomass, earthworms are regarded as an indicator of "soil health". However, primarily because of the difficulties in quantifying their behavior, the extent of their impact on soil material flow dynamics and soil health is poorly understood. Image data, with the aid of image processing tools, are powerful for quantifying the movements of objects, but image data sets are often very large and time-consuming to analyze, especially when continuously recorded and manually processed. We aimed to develop a system to quantify earthworm movement from video recordings. Our newly developed program successfully tracked the two-dimensional positions of three separate parts of the earthworm and simultaneously output the change in its body length. From the output data, we calculated the velocity of the earthworm's movement. Our program processed the image data three times faster than the manual tracking system. To date, there are no existing systems to quantify earthworm activity from continuously recorded image data. The system developed in this study will reduce input time by a factor of three compared with manual data entry and will reduce errors involved in quantifying large data sets. Furthermore, it will provide more reliable measured values, although the program is still a prototype that needs further testing and improvement. Combined with other techniques, such as measuring metabolic gas emissions from earthworm bodies, this program could provide continuous observations of earthworm behavior in response to environmental variables under laboratory conditions. In the future, this standardized method will be applied to other animals, and the quantified earthworm movement will be incorporated into models of soil material flow dynamics or of behavior in response to chemical substances present in the soil.
NASA Astrophysics Data System (ADS)
Weerts, A.; Wood, A. W.; Clark, M. P.; Carney, S.; Day, G. N.; Lemans, M.; Sumihar, J.; Newman, A. J.
2014-12-01
In the US, the forecasting approach used by the NWS River Forecast Centers and other regional organizations, such as the Bonneville Power Administration (BPA) or the Tennessee Valley Authority (TVA), has traditionally involved manual model input and state modifications made by forecasters in real time. This process is time consuming and requires expert knowledge and experience. The benefits of automated data assimilation (DA) as a strategy for avoiding manual modification approaches have been demonstrated in research studies (e.g., Seo et al., 2009). This study explores the use of various ensemble DA algorithms within the operational platform used by TVA. The final goal is to identify a DA algorithm that will guide the manual modification process used by TVA forecasters and realize considerable time gains within the forecast process, without loss of quality or even with enhanced quality. We evaluate the usability of various popular DA algorithms that have been applied on a limited basis in operational hydrology. To this end, Delft-FEWS was wrapped (via piwebservice) in OpenDA to enable execution of FEWS workflows (and the chained models within these workflows, including SACSMA, UNITHG and LAGK) in a DA framework. Within OpenDA, several filter methods are available; we considered four algorithms: the particle filter (RRF), the Ensemble Kalman Filter, and the Asynchronous Ensemble Kalman and Particle filters. Retrospective simulation results for one location and one algorithm (AEnKF) are illustrated in Figure 1, and the initial results are promising. We will present verification results for these methods (and possibly more) for a variety of sub-basins in the Tennessee River basin, and will offer recommendations for guided DA based on our results. Reference: Seo, D.-J., L. Cajina, R. Corby and T. Howieson, 2009: Automatic State Updating for Operational Streamflow Forecasting via Variational Data Assimilation, Journal of Hydrology, 367, 255-275. Figure 1. Retrospectively simulated streamflow for the headwater basin above Powell River at Jonesville (red: observed flow; blue: simulated flow without DA; black: simulated flow with DA).
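A bare-bones sketch of one stochastic Ensemble Kalman Filter analysis step in its generic textbook form, not the OpenDA implementation: the state ensemble is updated toward perturbed observations using covariances estimated from the ensemble itself. Shapes and the scalar observation-error variance are assumptions.

```python
# Sketch only: stochastic EnKF analysis step for a state ensemble X (n_state x n_members).
import numpy as np

def enkf_update(X, y, H, r, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    n_state, n_members = X.shape
    A = X - X.mean(axis=1, keepdims=True)            # ensemble anomalies
    HX = H @ X
    HA = HX - HX.mean(axis=1, keepdims=True)

    P_hh = HA @ HA.T / (n_members - 1) + r * np.eye(H.shape[0])
    P_xh = A @ HA.T / (n_members - 1)
    K = P_xh @ np.linalg.inv(P_hh)                   # Kalman gain

    Y = y[:, None] + np.sqrt(r) * rng.standard_normal((H.shape[0], n_members))
    return X + K @ (Y - HX)                          # updated (analysis) ensemble
```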
Automated Tracking of Cell Migration with Rapid Data Analysis.
DuChez, Brian J
2017-09-01
Cell migration is essential for many biological processes including development, wound healing, and metastasis. However, studying cell migration often requires the time-consuming and labor-intensive task of manually tracking cells. To accelerate the task of obtaining coordinate positions of migrating cells, we have developed a graphical user interface (GUI) capable of automating the tracking of fluorescently labeled nuclei. This GUI provides an intuitive user interface that makes automated tracking accessible to researchers with no image-processing experience or familiarity with particle-tracking approaches. Using this GUI, users can interactively determine a minimum of four parameters to identify fluorescently labeled cells and automate acquisition of cell trajectories. Additional features allow for batch processing of numerous time-lapse images, curation of unwanted tracks, and subsequent statistical analysis of tracked cells. Statistical outputs allow users to evaluate migratory phenotypes, including cell speed, distance, displacement, and persistence, as well as measures of directional movement, such as forward migration index (FMI) and angular displacement. © 2017 by John Wiley & Sons, Inc. Copyright © 2017 John Wiley & Sons, Inc.
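A small sketch of the trajectory statistics named above, computed from an (n_frames x 2) array of tracked x/y positions; these are generic definitions (persistence as displacement over path length, FMI as axis-aligned displacement over path length), and the GUI's exact formulas and units are not claimed here. Positions are assumed to be in micrometres with a fixed frame interval dt_min.

```python
# Sketch only: speed, distance, displacement, persistence and FMI from one cell track.
import numpy as np

def migration_stats(xy, dt_min=10.0):
    xy = np.asarray(xy, dtype=float)
    steps = np.diff(xy, axis=0)
    step_len = np.linalg.norm(steps, axis=1)
    path_len = step_len.sum()
    displacement = np.linalg.norm(xy[-1] - xy[0])
    return {
        "speed_um_per_min": path_len / (dt_min * len(steps)),
        "distance_um": path_len,
        "displacement_um": displacement,
        "persistence": displacement / path_len if path_len else 0.0,
        "fmi_x": (xy[-1, 0] - xy[0, 0]) / path_len if path_len else 0.0,  # toward +x
    }
```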
Automated structure refinement of macromolecular assemblies from cryo-EM maps using Rosetta.
Wang, Ray Yu-Ruei; Song, Yifan; Barad, Benjamin A; Cheng, Yifan; Fraser, James S; DiMaio, Frank
2016-09-26
Cryo-EM has revealed the structures of many challenging yet exciting macromolecular assemblies at near-atomic resolution (3-4.5Å), providing biological phenomena with molecular descriptions. However, at these resolutions, accurately positioning individual atoms remains challenging and error-prone. Manually refining thousands of amino acids - typical in a macromolecular assembly - is tedious and time-consuming. We present an automated method that can improve the atomic details in models that are manually built in near-atomic-resolution cryo-EM maps. Applying the method to three systems recently solved by cryo-EM, we are able to improve model geometry while maintaining the fit-to-density. Backbone placement errors are automatically detected and corrected, and the refinement shows a large radius of convergence. The results demonstrate that the method is amenable to structures with symmetry, of very large size, and containing RNA as well as covalently bound ligands. The method should streamline the cryo-EM structure determination process, providing accurate and unbiased atomic structure interpretation of such maps.
Tsang, Hamilton C; Garcia, Adam; Scott, Robert; Lancaster, David; Geary, Dianne; Nguyen, Anh-Thu; Shankar, Raina; Buchanan, Leslie; Pham, Tho D
2018-05-16
The ordering process at Stanford Health Care involved twice-daily shipments predicated upon current stock levels from the blood center to the hospital transfusion service. Manual census determination is time consuming and error prone. We aimed to enhance inventory management by developing an informatics platform to streamline the ordering process and reallocate staff productivity. The general inventory accounts for more than 50 product categories based on characteristics including component, blood type, irradiation status, and cytomegalovirus serology status. Over a 5-month calibration period, inventory levels were determined algorithmically and electronically. An in-house software program was created to determine inventory levels, optimize the electronic ordering process, and reduce labor time. A 3-month pilot period was implemented using this program. This system showed noninferiority while saving labor time. The average weekly transfused:stocked ratios for cryoprecipitate, plasma, and red blood cells, respectively, were 1.03, 1.21, and 1.48 before the pilot period, compared with 0.88, 1.17, and 1.40 during (p = 0.28). There were 27 (before) and 31 (during) average STAT units ordered per week (p = 0.86). The number of monthly wasted products due to expiration was 226 (before) and 196 (during) units, respectively (p = 0.28). An estimated 7 hours per week of technologist time was reallocated to other tasks. An in-house electronic ordering system can enhance information fidelity, reallocate and optimize valuable staff productivity, and further standardize ordering. This system showed noninferiority to the labor-intensive manual system while freeing up over 360 hours of staff time per year. © 2018 AABB.
Ni-MH battery electrodes made by a dry powder process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ye, Z.; Sakai, T.; Noreus, D.
1995-12-01
A dry powder roller pressing process, once developed for making both of the electrodes in low cost Ni-Cd consumer batteries, has been utilized to make electrodes for Ni-MH batteries. The process was evaluated by manually making a series of sub-C type cells that were characterized with respect to specific capacity, cycle life, and self-discharge. The performance was comparable in several respects with that of cells made by more complex Ni-foam technologies.
40 CFR Appendix Ix to Part 266 - Methods Manual for Compliance With the BIF Regulations
Code of Federal Regulations, 2013 CFR
2013-07-01
.... However, it is more time-consuming and is recommended only if the basic approach fails to meet the risk... used to evaluate the acceptability of the CEMS at the time of its installation or whenever specified in regulations or permits. The procedures are not designed to evaluate CEMS performance over an extended period...
40 CFR Appendix Ix to Part 266 - Methods Manual for Compliance With the BIF Regulations
Code of Federal Regulations, 2014 CFR
2014-07-01
.... However, it is more time-consuming and is recommended only if the basic approach fails to meet the risk... used to evaluate the acceptability of the CEMS at the time of its installation or whenever specified in regulations or permits. The procedures are not designed to evaluate CEMS performance over an extended period...
40 CFR Appendix Ix to Part 266 - Methods Manual for Compliance With the BIF Regulations
Code of Federal Regulations, 2012 CFR
2012-07-01
.... However, it is more time-consuming and is recommended only if the basic approach fails to meet the risk... used to evaluate the acceptability of the CEMS at the time of its installation or whenever specified in regulations or permits. The procedures are not designed to evaluate CEMS performance over an extended period...
Bhattacharya, Pratik; Van Stavern, Renee; Madhavan, Ramesh
2010-12-01
Use of resident case logs has been considered by the Residency Review Committee for Neurology of the Accreditation Council for Graduate Medical Education (ACGME). This study explores the effectiveness of a data-mining program for creating resident logs and compares the results to a manual data-entry system. Other potential applications of data mining to enhancing resident education are also explored. Patient notes dictated by residents were extracted from the Hospital Information System and analyzed using an unstructured-data mining program. History, examination, and ICD codes were obtained over a 30-day period and compared to the existing manual case logs. The automated method extracted all resident dictations with the dates of encounter and transcription. The automated data-miner processed information from all 19 residents, while only 4 residents logged manually. The manual method identified only broad categories of diseases; the major categories were stroke or vascular disorder 53 (27.6%), epilepsy 28 (14.7%), and pain syndromes 26 (13.5%). In the automated method, epilepsy 114 (21.1%), cerebral atherosclerosis 114 (21.1%), and headache 105 (19.4%) were the most frequent primary diagnoses, and headache 89 (16.5%), seizures 94 (17.4%), and low back pain 47 (9%) were the most common chief complaints. More detailed patient information, such as tobacco use 227 (42%), alcohol use 205 (38%), and drug use 38 (7%), was extracted by the data-mining method. Manual case logs are time-consuming, provide limited information, and may be unpopular with residents. Data mining is a time-effective tool that may aid in the assessment of resident experience and the ACGME core competencies, or in resident clinical research. More study of this method in larger numbers of residency programs is needed.
Skalec, Tomasz; Górecka-Dolny, Agnieszka; Zieliński, Stanisław; Gibek, Mirosław; Stróżecki, Łukasz; Kübler, Andrzej
2017-01-01
The automatic control module of end-tidal volatile agents (EtC) was designed to reduce the consumption of anaesthetic gases, increase the stability of general anaesthesia and reduce the need for adjustments in the settings of the anaesthesia machine. The aim of this study was to verify these hypotheses. The course of general anaesthesia with the use of the EtC module was analysed for haemodynamic stability, depth of anaesthesia, end-expiratory concentration of anaesthetic, number of ventilator key presses, fentanyl supply, consumption of volatile agents and anaesthesia and operation times. These data were compared with the data obtained during general anaesthesia controlled manually and were processed with statistical tests. Seventy-four patients underwent general anaesthesia for scheduled operations. Group AUTO-ET (n = 35) was anaesthetized with EtC, and group MANUAL-ET (n = 39) was controlled manually. Both populations presented similar anaesthesia stability. No differences were noted in the time of anaesthesia, saturation up to MAC 1.0 or awakening. Data revealed no differences in mean EtAA or the fentanyl dose. The AUTO-ET group exhibited fewer key presses per minute, 0.0603 min⁻¹, whereas the MANUAL-ET exhibited a value of 0.0842 min⁻¹; P = 0.001. The automatic group consumed more anaesthetic and oxygen per minute (sevoflurane 0.1171 mL min⁻¹, IQR: 0.0503; oxygen 1.8286 mL min⁻¹, IQR: 1.3751) than MANUAL-ET (sevoflurane 0.0824 mL min⁻¹, IQR: 0.0305; oxygen 1.288 mL min⁻¹, IQR: 0.6517) (P = 0.0028 and P = 0.0171, respectively). Both methods are equally stable and safe for patients. The consumption of volatile agents was significantly increased in the AUTO-ET group. EtC considerably reduces the number of key presses.
NASA Astrophysics Data System (ADS)
Forsberg, Daniel; Lundström, Claes; Andersson, Mats; Vavruch, Ludvig; Tropp, Hans; Knutsson, Hans
2013-03-01
Reliable measurements of spinal deformities in idiopathic scoliosis are vital, since they are used for assessing the degree of scoliosis, deciding upon treatment and monitoring the progression of the disease. However, commonly used two-dimensional methods (e.g. the Cobb angle) do not fully capture the three-dimensional deformity at hand in scoliosis, of which axial vertebral rotation (AVR) is considered to be of great importance. There are manual methods for measuring the AVR, but they are often time-consuming and associated with high intra- and inter-observer variability. In this paper, we present a fully automatic method for estimating the AVR in images from computed tomography. The proposed method is evaluated on four scoliotic patients with 17 vertebrae each and compared with manual measurements performed by three observers using the standard method by Aaro-Dahlborn. The comparison shows that the difference in measured AVR between automatic and manual measurements is on the same level as the inter-observer difference. This is further supported by a high intraclass correlation coefficient (0.971-0.979), obtained when comparing the automatic measurements with the manual measurements of each observer. Hence, the provided results and the computational performance, requiring only approximately 10 to 15 s for processing an entire volume, demonstrate the potential clinical value of the proposed method.
A GIS-based hedonic price model for agricultural land
NASA Astrophysics Data System (ADS)
Demetriou, Demetris
2015-06-01
Land consolidation is a very effective land management planning approach that aims towards rural/agricultural sustainable development. Land reallocation, which involves land tenure restructuring, is the most important, complex and time-consuming component of land consolidation. Land reallocation relies on land valuation, since its fundamental principle provides that after consolidation, each landowner shall be granted a property of an aggregate value that is approximately the same as the value of the property owned prior to consolidation. Therefore, land value is the crucial factor for the land reallocation process and hence for the success and acceptance of the final land consolidation plan. Land valuation is a process of assigning values to all parcels (and their contents) and it is usually carried out by an ad-hoc committee. However, the process faces several problems: it is time consuming and hence costly, and its outcomes may be inconsistent because it is carried out manually and empirically, without systematic analytical tools and in particular without spatial analysis tools and statistical/mathematical techniques. A solution to these problems can be the employment of mass appraisal land valuation methods using automated valuation models (AVM) based on international standards. In this context, this paper presents a spatially based linear hedonic price model which has been developed and tested in a case study land consolidation area in Cyprus. Results showed that the AVM is capable of producing land values that are acceptable in terms of accuracy and reliability, and of reducing the time, and hence cost, required by around 80%.
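A toy ordinary-least-squares hedonic regression illustrating the core of such a model; the attributes, data and coefficients are invented for illustration, and the paper's model is spatially enriched within a GIS rather than this bare form.

```python
# Sketch only: parcel value regressed on a few (assumed) hedonic attributes.
import numpy as np

# columns: area (decares), road access (0/1), distance to village centre (km)
X = np.array([[2.0, 1, 0.5],
              [1.2, 0, 2.0],
              [3.5, 1, 1.0],
              [0.8, 0, 3.2],
              [2.7, 1, 0.8]])
y = np.array([26000, 9000, 39000, 5000, 31000])       # observed parcel values (EUR)

X1 = np.column_stack([np.ones(len(X)), X])            # add intercept
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
predicted = X1 @ beta                                  # hedonic value estimate per parcel
print(np.round(beta, 1))
```

Once estimated on sold or expert-valued parcels, the fitted coefficients can be applied to every parcel in the consolidation area to produce consistent values for the reallocation step.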
Chen, Wenjin; Wong, Chung; Vosburgh, Evan; Levine, Arnold J; Foran, David J; Xu, Eugenia Y
2014-07-08
The increasing number of applications of three-dimensional (3D) tumor spheroids as an in vitro model for drug discovery requires their adaptation to large-scale screening formats in every step of a drug screen, including large-scale image analysis. Currently there is no ready-to-use and free image analysis software to meet this large-scale format. Most existing methods involve manually drawing the length and width of the imaged 3D spheroids, which is a tedious and time-consuming process. This study presents a high-throughput image analysis software application, SpheroidSizer, which measures the major and minor axial lengths of the imaged 3D tumor spheroids automatically and accurately, calculates the volume of each individual 3D tumor spheroid, and then outputs the results in two spreadsheet formats for easy manipulation in the subsequent data analysis. The main advantage of this software is its powerful image analysis application adapted for large numbers of images, providing a high-throughput computation and quality-control workflow. The estimated time to process 1,000 images is about 15 min on a minimally configured laptop, or around 1 min on a multi-core performance workstation. The graphical user interface (GUI) is also designed for easy quality control, and users can manually override the computer results. The key method used in this software is adapted from the active contour algorithm, also known as Snakes, which is especially suitable for images with the uneven illumination and noisy background that often plague automated image processing in high-throughput screens. The complementary "Manual Initialize" and "Hand Draw" tools give SpheroidSizer the flexibility to deal with various types of spheroids and images of diverse quality. This high-throughput image analysis software remarkably reduces labor and speeds up the analysis process. Implementing this software is beneficial for 3D tumor spheroids to become a routine in vitro model for drug screens in industry and academia.
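The measurement step described above (major/minor axial lengths and a derived volume) can be sketched as follows. This is not SpheroidSizer's MATLAB code: a simple Otsu threshold stands in for the active-contour segmentation, and V = (π/6)·L·W² is one common spheroid approximation that may differ from the formula the authors use.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def spheroid_axes_and_volume(gray_image, microns_per_pixel=1.0):
    """Return (major axis, minor axis, approximate volume) of the largest object."""
    mask = gray_image > threshold_otsu(gray_image)           # stand-in for Snakes
    largest = max(regionprops(label(mask)), key=lambda r: r.area)
    L = largest.major_axis_length * microns_per_pixel
    W = largest.minor_axis_length * microns_per_pixel
    volume = (np.pi / 6.0) * L * W ** 2                      # prolate-spheroid estimate
    return L, W, volume
```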
Laser-assisted selection and passaging of human pluripotent stem cell colonies.
Terstegge, Stefanie; Rath, Barbara H; Laufenberg, Iris; Limbach, Nina; Buchstaller, Andrea; Schütze, Karin; Brüstle, Oliver
2009-09-10
The derivation of somatic cell products from human embryonic stem cells (hESCs) requires a highly standardized production process with sufficient throughput. To date, the most common technique for hESC passaging is the manual dissection of colonies, which is a gentle, but laborious and time-consuming process and is consequently inappropriate for standardized maintenance of hESC. Here, we present a laser-based technique for the contact-free dissection and isolation of living hESCs (laser microdissection and pressure catapulting, LMPC). Following LMPC treatment, 80.6+/-8.7% of the cells remained viable as compared to 88.6+/-1.7% of manually dissected hESCs. Furthermore, there was no significant difference in the expression of pluripotency-associated markers when compared to the control. Flow cytometry revealed that 83.8+/-4.1% of hESCs isolated by LMPC expressed the surface marker Tra-1-60 (control: 83.9+/-3.6%). In vitro differentiation potential of LMPC treated hESCs as determined by embryoid body formation and multi-germlayer formation was not impaired. Moreover, we could not detect any overt karyotype alterations as a result of the LMPC process. Our data demonstrate the feasibility of standardized laser-based passaging of hESC cultures. This technology should facilitate both colony selection and maintenance culture of pluripotent stem cells.
UAS-based automatic bird count of a common gull colony
NASA Astrophysics Data System (ADS)
Grenzdörffer, G. J.
2013-08-01
The standard procedure to count birds is a manual one. However, a manual bird count is a time-consuming and cumbersome process, requiring several people going from nest to nest counting the birds and the clutches. High-resolution imagery generated with a UAS (Unmanned Aircraft System) offers an interesting alternative. Experiences and results of UAS surveys for automatic bird counts over the last two years are presented for the bird reserve island of Langenwerder. For 2011, 1568 birds (±5%) were detected on the image mosaic, based on multispectral image classification and GIS-based post-processing. Building on the experiences of 2011, the automatic bird count of 2012 became more efficient and more accurate. For 2012, 1938 birds were counted with an accuracy of approximately ±3%. Additionally, a separation of breeding and non-breeding birds was performed under the assumption that standing birds cast a visible shadow. The final section of the paper is devoted to the analysis of the 3D point cloud, which was used to determine the height of the vegetation and the extent and depth of closed sinks, which are unsuitable for breeding birds.
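A minimal sketch of the counting idea, assuming bright gulls on a darker background: threshold one band and count bird-sized connected components. The actual study used supervised multispectral classification and GIS post-processing; the threshold and size limits below are placeholders.

```python
from skimage.measure import label, regionprops

def count_birds(band, threshold=0.8, min_area=20, max_area=400):
    """band: 2-D reflectance array scaled to [0, 1]; area limits are in pixels."""
    mask = band > threshold                       # bright, bird-like pixels
    blobs = regionprops(label(mask))
    return sum(1 for b in blobs if min_area <= b.area <= max_area)
```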
Artificial neuron-glia networks learning approach based on cooperative coevolution.
Mesejo, Pablo; Ibáñez, Oscar; Fernández-Blanco, Enrique; Cedrón, Francisco; Pazos, Alejandro; Porto-Pazos, Ana B
2015-06-01
Artificial Neuron-Glia Networks (ANGNs) are a novel bio-inspired machine learning approach. They extend classical Artificial Neural Networks (ANNs) by incorporating recent findings and suppositions about the way information is processed by neural and astrocytic networks in the most evolved living organisms. Although ANGNs are not a consolidated method, their performance against the traditional approach, i.e. without artificial astrocytes, was already demonstrated on classification problems. However, the corresponding learning algorithms developed so far strongly depend on a set of glial parameters which are manually tuned for each specific problem. As a consequence, preliminary experimental tests have to be done in order to determine an adequate set of values, making such manual parameter configuration time-consuming, error-prone, biased and problem-dependent. Thus, in this paper, we propose a novel learning approach for ANGNs that fully automates the learning process and gives the possibility of testing any kind of reasonable parameter configuration for each specific problem. This new learning algorithm, based on coevolutionary genetic algorithms, is able to properly learn all the ANGN parameters. Its performance is tested on five classification problems, achieving significantly better results than previous ANGNs and results competitive with ANN approaches.
Efficient Semi-Automatic 3D Segmentation for Neuron Tracing in Electron Microscopy Images
Jones, Cory; Liu, Ting; Cohan, Nathaniel Wood; Ellisman, Mark; Tasdizen, Tolga
2015-01-01
Background: In the area of connectomics, there is a significant gap between the time required for data acquisition and dense reconstruction of the neural processes contained in the same dataset. Automatic methods are able to eliminate this timing gap, but the state-of-the-art accuracy so far is insufficient for use without user corrections. If completed naively, this process of correction can be tedious and time consuming. New Method: We present a new semi-automatic method that can be used to perform 3D segmentation of neurites in EM image stacks. It utilizes an automatic method that creates a hierarchical structure for recommended merges of superpixels. The user is then guided through each predicted region to quickly identify errors and establish correct links. Results: We tested our method on three datasets with both novice and expert users. Accuracy and timing were compared with published automatic, semi-automatic, and manual results. Comparison with Existing Methods: Post-automatic correction methods have also been used in [1] and [2]. These methods do not provide navigation or suggestions in the manner we present. Other semi-automatic methods require user input prior to the automatic segmentation, such as [3] and [4], and are inherently different from our method. Conclusion: Using this method on the three datasets, novice users achieved accuracy exceeding state-of-the-art automatic results, and expert users achieved accuracy on par with full manual labeling but with a 70% time improvement when compared with other examples in publication. PMID:25769273
OpenCFU, a New Free and Open-Source Software to Count Cell Colonies and Other Circular Objects
Geissmann, Quentin
2013-01-01
Counting circular objects such as cell colonies is an important source of information for biologists. Although this task is often time-consuming and subjective, it is still predominantly performed manually. The aim of the present work is to provide a new tool to enumerate circular objects from digital pictures and video streams. Here, I demonstrate that the created program, OpenCFU, is very robust, accurate and fast. In addition, it provides control over the processing parameters and is implemented in an intuitive and modern interface. OpenCFU is a cross-platform and open-source software freely available at http://opencfu.sourceforge.net. PMID:23457446
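OpenCFU's own pipeline is more elaborate than this, but the general task of enumerating roughly circular colonies can be illustrated with a circular Hough transform, as in the hedged OpenCV sketch below; the radius limits and Hough parameters are placeholders.

```python
import cv2

def count_colonies(image_path, min_r=5, max_r=40):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    gray = cv2.medianBlur(gray, 5)                          # suppress speckle noise
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=2 * min_r,
                               param1=100, param2=30,
                               minRadius=min_r, maxRadius=max_r)
    return 0 if circles is None else circles.shape[1]       # number of detected circles
```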
Camilo, Cesar M; Lima, Gustavo M A; Maluf, Fernando V; Guido, Rafael V C; Polikarpov, Igor
2016-01-01
Following the burgeoning of genomic and transcriptomic sequencing data, biochemical and molecular biology groups worldwide are implementing high-throughput cloning and mutagenesis facilities in order to obtain a large number of soluble proteins for structural and functional characterization. Since manual primer design can be a time-consuming and error-generating step, particularly when working with hundreds of targets, automation of the primer design process becomes highly desirable. HTP-OligoDesigner was created to provide the scientific community with a simple and intuitive online primer design tool for both laboratory-scale and high-throughput projects of sequence-independent gene cloning and site-directed mutagenesis, as well as a Tm calculator for quick queries.
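The abstract does not specify which melting-temperature formula the Tm calculator uses; the sketch below implements two standard textbook approximations (the Wallace rule for short oligos and a GC/length rule for longer ones) purely as an illustration.

```python
def melting_temperature(seq: str) -> float:
    """Approximate primer Tm in degrees Celsius."""
    seq = seq.upper()
    a, t = seq.count("A"), seq.count("T")
    g, c = seq.count("G"), seq.count("C")
    n = len(seq)
    if n < 14:                                   # Wallace rule for short oligos
        return 2 * (a + t) + 4 * (g + c)
    return 64.9 + 41.0 * (g + c - 16.4) / n      # GC/length approximation

print(melting_temperature("ATGGCTAGCTAGGTACCTAG"))
```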
Computer systems for automatic earthquake detection
Stewart, S.W.
1974-01-01
U.S. Geological Survey seismologists in Menlo Park, California, are utilizing the speed, reliability, and efficiency of minicomputers to monitor seismograph stations and to automatically detect earthquakes. An earthquake detection computer system, believed to be the only one of its kind in operation, automatically reports about 90 percent of all local earthquakes recorded by a network of over 100 central California seismograph stations. The system also monitors the stations for signs of malfunction or abnormal operation. Before the automatic system was put in operation, all of the earthquakes recorded had to be detected by manually searching the records, a time-consuming process. With the automatic detection system, the stations are monitored continuously and efficiently.
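The 1974 abstract does not describe the trigger algorithm; a classic way such systems flag earthquake arrivals is the STA/LTA (short-term average over long-term average) ratio, sketched below as an illustration rather than the USGS system's actual method.

```python
import numpy as np

def sta_lta(trace, sta_len, lta_len):
    """Return the STA/LTA ratio of signal energy; window lengths are in samples."""
    energy = trace.astype(float) ** 2
    csum = np.cumsum(energy)
    sta = (csum[sta_len:] - csum[:-sta_len]) / sta_len     # short-term averages
    lta = (csum[lta_len:] - csum[:-lta_len]) / lta_len     # long-term averages
    ratio = np.zeros_like(energy)
    # Align both windows so they end at the same sample before dividing
    ratio[lta_len:] = sta[lta_len - sta_len:] / np.maximum(lta, 1e-12)
    return ratio   # a detection is typically declared where the ratio exceeds ~3-5
```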
Jian, Junming; Xiong, Fei; Xia, Wei; Zhang, Rui; Gu, Jinhui; Wu, Xiaodong; Meng, Xiaochun; Gao, Xin
2018-06-01
Segmentation of colorectal tumors is the basis of preoperative prediction, staging, and therapeutic response evaluation. Due to the blurred boundary between lesions and normal colorectal tissue, it is hard to achieve accurate segmentation. Routine manual or semi-manual segmentation methods are extremely tedious, time-consuming, and highly operator-dependent. A segmentation method for colorectal tumors in the framework of fully convolutional networks (FCNs) was presented. Normalization was applied to reduce the differences among images. Borrowing from transfer learning, VGG-16 was employed to extract features from the normalized images. Five side-output blocks were attached to the last convolutional layer of each block of VGG-16 along the network; these side-output blocks mine multiscale features and produce corresponding predictions. Finally, all of the predictions from the side-output blocks were fused to determine the final boundaries of the tumors. A quantitative comparison against 2772 manual segmentations of colorectal tumors in T2-weighted magnetic resonance images shows that the average Dice similarity coefficient, positive predictive value, specificity, and sensitivity were 83.56%, 82.67%, 96.75%, and 87.85%, respectively, with a Hammoude distance of 0.2694 and a Hausdorff distance of 8.20. The proposed method is superior to U-net in colorectal tumor segmentation (P < 0.05). There is no difference between cross-entropy loss and Dice-based loss in colorectal tumor segmentation (P > 0.05). The results indicate that the introduction of FCNs contributed to the accurate segmentation of colorectal tumors. This method has the potential to replace the present time-consuming and nonreproducible manual segmentation method.
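For reference, the overlap metrics quoted above (Dice similarity coefficient, sensitivity, specificity) can be computed from binary masks as in this generic sketch; it is not the authors' evaluation code.

```python
import numpy as np

def overlap_metrics(pred, truth):
    """pred, truth: binary segmentation masks of equal shape."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    dice = 2 * tp / (2 * tp + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return dice, sensitivity, specificity
```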
Automatic Assessment of Acquisition and Transmission Losses in Indian Remote Sensing Satellite Data
NASA Astrophysics Data System (ADS)
Roy, D.; Purna Kumari, B.; Manju Sarma, M.; Aparna, N.; Gopal Krishna, B.
2016-06-01
The quality of remote sensing data is an important parameter that defines the extent of its usability in various applications. The data from remote sensing satellites are received as raw data frames at the ground station. These data may be corrupted by losses caused by interference during data transmission, by data acquisition problems and by sensor anomalies. It is therefore important to assess the quality of the raw data before product generation, for early anomaly detection, faster corrective actions and minimization of product rejection. Manual screening of raw images is a time-consuming process and not very accurate. In this paper, an automated process for the identification and quantification of losses in raw data, such as pixel drop-out, line loss and data loss due to sensor anomalies, is discussed. Quality assessment of raw scenes based on these losses is also explained. This process is introduced in the data pre-processing stage and gives crucial data quality information to users at the time of browsing data for product ordering. It has also improved the product generation workflow by enabling faster and more accurate quality estimation.
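A hedged sketch of the kind of checks described above: flagging fully lost lines and counting isolated pixel drop-outs in a raw frame. The operational system's actual criteria and thresholds are not given in the abstract; a drop-out value of zero is assumed here.

```python
import numpy as np

def assess_raw_frame(frame, dropout_value=0):
    """frame: 2-D array of raw detector counts."""
    dropped = frame == dropout_value
    lost_lines = np.where(dropped.all(axis=1))[0]                     # whole rows lost
    isolated = int(dropped.sum()) - lost_lines.size * frame.shape[1]  # remaining drop-outs
    return {"lost_lines": lost_lines.tolist(),
            "isolated_pixel_dropouts": isolated,
            "loss_fraction": float(dropped.mean())}
```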
Cai, Jinhai; Okamoto, Mamoru; Atieno, Judith; Sutton, Tim; Li, Yongle; Miklavcic, Stanley J.
2016-01-01
Leaf senescence, an indicator of plant age and ill health, is an important phenotypic trait for the assessment of a plant’s response to stress. Manual inspection of senescence, however, is time consuming, inaccurate and subjective. In this paper we propose an objective evaluation of plant senescence by color image analysis for use in a high throughput plant phenotyping pipeline. As high throughput phenotyping platforms are designed to capture whole-of-plant features, camera lenses and camera settings are inappropriate for the capture of fine detail. Specifically, plant colors in images may not represent true plant colors, leading to errors in senescence estimation. Our algorithm features a color distortion correction and image restoration step prior to a senescence analysis. We apply our algorithm to two time series of images of wheat and chickpea plants to quantify the onset and progression of senescence. We compare our results with senescence scores resulting from manual inspection. We demonstrate that our procedure is able to process images in an automated way for an accurate estimation of plant senescence even from color distorted and blurred images obtained under high throughput conditions. PMID:27348807
DeepPicker: A deep learning approach for fully automated particle picking in cryo-EM.
Wang, Feng; Gong, Huichao; Liu, Gaochao; Li, Meijing; Yan, Chuangye; Xia, Tian; Li, Xueming; Zeng, Jianyang
2016-09-01
Particle picking is a time-consuming step in single-particle analysis and often requires significant interventions from users, which has become a bottleneck for future automated electron cryo-microscopy (cryo-EM). Here we report a deep learning framework, called DeepPicker, to address this problem and fill the current gaps toward a fully automated cryo-EM pipeline. DeepPicker employs a novel cross-molecule training strategy to capture common features of particles from previously-analyzed micrographs, and thus does not require any human intervention during particle picking. Tests on the recently-published cryo-EM data of three complexes have demonstrated that our deep learning based scheme can successfully accomplish the human-level particle picking process and identify a sufficient number of particles that are comparable to those picked manually by human experts. These results indicate that DeepPicker can provide a practically useful tool to significantly reduce the time and manual effort spent in single-particle analysis and thus greatly facilitate high-resolution cryo-EM structure determination. DeepPicker is released as an open-source program, which can be downloaded from https://github.com/nejyeah/DeepPicker-python. Copyright © 2016 Elsevier Inc. All rights reserved.
A semi-automated tool for treatment plan-quality evaluation and clinical trial quality assurance
NASA Astrophysics Data System (ADS)
Wang, Jiazhou; Chen, Wenzhou; Studenski, Matthew; Cui, Yunfeng; Lee, Andrew J.; Xiao, Ying
2013-07-01
The goal of this work is to develop a plan-quality evaluation program for clinical routine and multi-institutional clinical trials so that the overall evaluation efficiency is improved. In multi-institutional clinical trials, evaluating plan quality is a time-consuming and labor-intensive process. In this note, we present a semi-automated plan-quality evaluation program which combines MIMVista, Java/MATLAB, and extensible markup language (XML). More specifically, MIMVista is used for data visualization; Java and its powerful function library are implemented for calculating dosimetry parameters; and XML is applied to improve the clarity of the index definitions. The accuracy and the efficiency of the program were evaluated by comparing the results of the program with the manually recorded results in two RTOG trials. A slight difference of about 0.2% in volume or 0.6 Gy in dose between the semi-automated program and manual recording was observed. According to the criteria of the indices, there are minimal differences between the two methods. The evaluation time is reduced from 10-20 min to 2 min by applying the semi-automated plan-quality evaluation program.
Dixit, Sudeepa; Fox, Mark; Pal, Anupam
2014-01-01
Magnetic resonance imaging (MRI) has advantages for the assessment of gastrointestinal structures and functions; however, processing MRI data is time consuming and this has limited uptake to a few specialist centers. This study introduces a semiautomatic image processing system for rapid analysis of gastrointestinal MRI. For assessment of simpler regions of interest (ROI) such as the stomach, the system generates virtual images along arbitrary planes that intersect the ROI edges in the original images. This generates seed points that are joined automatically to form contours on each adjacent two-dimensional image and reconstructed in three dimensions (3D). An alternative thresholding approach is available for rapid assessment of complex structures like the small intestine. For assessment of dynamic gastrointestinal function, such as gastric accommodation and emptying, the initial 3D reconstruction is used as a reference to process adjacent image stacks automatically. This generates four-dimensional (4D) reconstructions of dynamic volume change over time. Compared with manual processing, this semiautomatic system reduced the user input required to analyze an MRI gastric emptying study (estimated 100 vs. 10,000 mouse clicks). This analysis was not subject to the variation in volume measurements seen between three human observers. In conclusion, the image processing platform presented processed large volumes of MRI data, such as those produced by gastric accommodation and emptying studies, with minimal user input. 3D and 4D reconstructions of the stomach and, potentially, other gastrointestinal organs are produced faster and more accurately than with manual methods. This system will facilitate the application of MRI in gastrointestinal research and clinical practice. PMID:25540229
Automated reticle inspection data analysis for wafer fabs
NASA Astrophysics Data System (ADS)
Summers, Derek; Chen, Gong; Reese, Bryan; Hutchinson, Trent; Liesching, Marcus; Ying, Hai; Dover, Russell
2008-10-01
To minimize potential wafer yield loss due to mask defects, most wafer fabs implement some form of reticle inspection system to monitor photomask quality in high-volume wafer manufacturing environments. Traditionally, experienced operators review reticle defects found by an inspection tool and then manually classify each defect as 'pass, warn, or fail' based on its size and location. However, in the event reticle defects are suspected of causing repeating wafer defects on a completed wafer, potential defects on all associated reticles must be manually searched on a layer-by-layer basis in an effort to identify the reticle responsible for the wafer yield loss. This 'problem reticle' search process is a very tedious and time-consuming task and may cause extended manufacturing line-down situations. Often times, Process Engineers and other team members need to manually investigate several reticle inspection reports to determine if yield loss can be tied to a specific layer. Because of the very nature of this detailed work, calculation errors may occur resulting in an incorrect root cause analysis effort. These delays waste valuable resources that could be spent working on other more productive activities. This paper examines an automated software solution for converting KLA-Tencor reticle inspection defect maps into a format compatible with KLA-Tencor's Klarity Defect™ data analysis database. The objective is to use the graphical charting capabilities of Klarity Defect to reveal a clearer understanding of defect trends for individual reticle layers or entire mask sets. Automated analysis features include reticle defect count trend analysis and potentially stacking reticle defect maps for signature analysis against wafer inspection defect data. Other possible benefits include optimizing reticle inspection sample plans in an effort to support "lean manufacturing" initiatives for wafer fabs.
Vrooman, Henri A; Cocosco, Chris A; van der Lijn, Fedde; Stokking, Rik; Ikram, M Arfan; Vernooij, Meike W; Breteler, Monique M B; Niessen, Wiro J
2007-08-01
Conventional k-Nearest-Neighbor (kNN) classification, which has been successfully applied to classify brain tissue in MR data, requires training on manually labeled subjects. This manual labeling is a laborious and time-consuming procedure. In this work, a new fully automated brain tissue classification procedure is presented, in which kNN training is automated. This is achieved by non-rigidly registering the MR data with a tissue probability atlas to automatically select training samples, followed by a post-processing step to keep the most reliable samples. The accuracy of the new method was compared to rigid registration-based training and to conventional kNN-based segmentation using training on manually labeled subjects for segmenting gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) in 12 data sets. Furthermore, for all classification methods, the performance was assessed when varying the free parameters. Finally, the robustness of the fully automated procedure was evaluated on 59 subjects. The automated training method using non-rigid registration with a tissue probability atlas was significantly more accurate than rigid registration. For both automated training using non-rigid registration and for the manually trained kNN classifier, the difference with the manual labeling by observers was not significantly larger than inter-observer variability for all tissue types. From the robustness study, it was clear that, given an appropriate brain atlas and optimal parameters, our new fully automated, non-rigid registration-based method gives accurate and robust segmentation results. A similarity index was used for comparison with manually trained kNN. The similarity indices were 0.93, 0.92 and 0.92, for CSF, GM and WM, respectively. It can be concluded that our fully automated method using non-rigid registration may replace manual segmentation, and thus that automated brain tissue segmentation without laborious manual training is feasible.
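The classification step itself reduces to standard kNN once training samples have been selected through the registered atlas; a compact sketch with scikit-learn is shown below. Feature construction, the non-rigid registration and the sample-pruning post-processing are outside the sketch, and the input arrays are assumptions.

```python
from sklearn.neighbors import KNeighborsClassifier

def classify_brain_voxels(train_features, train_labels, test_features, k=15):
    """train_labels: e.g. 0=CSF, 1=GM, 2=WM, chosen via the registered atlas."""
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(train_features, train_labels)       # atlas-selected training samples
    return knn.predict(test_features)           # tissue label per remaining voxel
```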
İnce, Fatma Demet; Ellidağ, Hamit Yaşar; Koseoğlu, Mehmet; Şimşek, Neşe; Yalçın, Hülya; Zengin, Mustafa Osman
2016-08-01
Urinalysis is one of the most commonly performed tests in the clinical laboratory. However, manual microscopic sediment examination is labor-intensive, time-consuming, and lacks standardization in high-volume laboratories. In this study, the concordance between manual microscopic examination and two different automatic urine sediment analyzers was evaluated. 209 urine samples were analyzed by the Iris iQ200 ELITE (Iris Diagnostics, USA) and Dirui FUS-200 (DIRUI Industrial Co., China) automatic urine sediment analyzers and by manual microscopic examination. The degree of concordance (kappa coefficient) and the rates within the same grading were evaluated. For erythrocytes, leukocytes, epithelial cells, bacteria, crystals and yeasts, the degree of concordance between the two instruments was better than the degree of concordance between the manual microscopic method and the individual devices. There was no concordance among the methods for casts. The results from the automated analyzers for erythrocytes, leukocytes and epithelial cells were similar to the results of microscopic examination. However, in order to avoid any error or uncertainty, some images (particularly dysmorphic cells, bacteria, yeasts, casts and crystals) have to be analyzed by manual microscopic examination by trained staff. Therefore, the software programs used in automatic urine sediment analyzers need further development to recognize urinary formed elements more accurately. Automated systems are important in terms of time saving and standardization.
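Concordance of the kind reported above is commonly summarized with Cohen's kappa; the generic sketch below uses hypothetical per-sample gradings and scikit-learn, and is not the study's analysis script.

```python
from sklearn.metrics import cohen_kappa_score

manual   = ["neg", "1+", "2+", "neg", "3+", "1+", "neg", "2+"]   # hypothetical gradings
analyzer = ["neg", "1+", "1+", "neg", "3+", "1+", "1+", "2+"]    # hypothetical gradings

print(f"Cohen's kappa = {cohen_kappa_score(manual, analyzer):.2f}")
```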
NASA Technical Reports Server (NTRS)
Liu, Yili; Wickens, Christopher D.
1987-01-01
This paper reports on the first experiment of a series studying the effect of task structure and difficulty demand on time-sharing performance and workload in both automated and corresponding manual systems. The experimental task involves manual control time-shared with spatial and verbal decision tasks of two levels of difficulty and two modes of response (voice or manual). The results provide strong evidence that tasks and processes competing for common processing resources are time-shared less effectively and incur higher workload than tasks competing for separate resources. Subjective measures and the structure of multiple resources are used in conjunction to predict dual-task performance. The evidence comes from both single-task and dual-task performance.
Integration of autopatching with automated pipette and cell detection in vitro
Wu (吴秋雨), Qiuyu; Kolb, Ilya; Callahan, Brendan M.; Su, Zhaolun; Stoy, William; Kodandaramaiah, Suhasa B.; Neve, Rachael; Zeng, Hongkui; Boyden, Edward S.; Forest, Craig R.
2016-01-01
Patch clamp is the main technique for measuring electrical properties of individual cells. Since its discovery in 1976 by Neher and Sakmann, patch clamp has been instrumental in broadening our understanding of the fundamental properties of ion channels and synapses in neurons. The conventional patch-clamp method requires manual, precise positioning of a glass micropipette against the cell membrane of a visually identified target neuron. Subsequently, a tight “gigaseal” connection between the pipette and the cell membrane is established, and suction is applied to establish the whole cell patch configuration to perform electrophysiological recordings. This procedure is repeated manually for each individual cell, making it labor intensive and time consuming. In this article we describe the development of a new automatic patch-clamp system for brain slices, which integrates all steps of the patch-clamp process: image acquisition through a microscope, computer vision-based identification of a patch pipette and fluorescently labeled neurons, micromanipulator control, and automated patching. We validated our system in brain slices from wild-type and transgenic mice expressing channelrhodopsin 2 under the Thy1 promoter (line 18) or injected with a herpes simplex virus-expressing archaerhodopsin, ArchT. Our computer vision-based algorithm makes the fluorescent cell detection and targeting user independent. Compared with manual patching, our system is superior in both success rate and average trial duration. It provides more reliable trial-to-trial control of the patching process and improves reproducibility of experiments. PMID:27385800
Griffis, Joseph C; Allendorfer, Jane B; Szaflarski, Jerzy P
2016-01-15
Manual lesion delineation by an expert is the standard for lesion identification in MRI scans, but it is time-consuming and can introduce subjective bias. Alternative methods often require multi-modal MRI data, user interaction, scans from a control population, and/or arbitrary statistical thresholding. We present an approach for automatically identifying stroke lesions in individual T1-weighted MRI scans using naïve Bayes classification. Probabilistic tissue segmentation and image algebra were used to create feature maps encoding information about missing and abnormal tissue. Leave-one-case-out training and cross-validation was used to obtain out-of-sample predictions for each of 30 cases with left hemisphere stroke lesions. Our method correctly predicted lesion locations for 30/30 un-trained cases. Post-processing with smoothing (8mm FWHM) and cluster-extent thresholding (100 voxels) was found to improve performance. Quantitative evaluations of post-processed out-of-sample predictions on 30 cases revealed high spatial overlap (mean Dice similarity coefficient=0.66) and volume agreement (mean percent volume difference=28.91; Pearson's r=0.97) with manual lesion delineations. Our automated approach agrees with manual tracing. It provides an alternative to automated methods that require multi-modal MRI data, additional control scans, or user interaction to achieve optimal performance. Our fully trained classifier has applications in neuroimaging and clinical contexts. Copyright © 2015 Elsevier B.V. All rights reserved.
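The leave-one-case-out naive Bayes workflow described above can be schematized as below, with per-case feature vectors standing in for the voxelwise feature maps; it illustrates the validation scheme only, not the authors' feature engineering or post-processing.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import LeaveOneOut

def loo_predictions(features, labels):
    """features: NumPy array (n_cases, n_features); labels: NumPy array of targets."""
    preds = np.empty(len(labels))
    for train_idx, test_idx in LeaveOneOut().split(features):
        model = GaussianNB().fit(features[train_idx], labels[train_idx])
        preds[test_idx] = model.predict(features[test_idx])
    return preds   # out-of-sample prediction for every case
```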
USDA-ARS?s Scientific Manuscript database
The use of distributed parameter models to address water resource management problems has increased in recent years. Calibration is necessary to reduce the uncertainties associated with model input parameters. Manual calibration of a distributed parameter model is a very time consuming effort. There...
BC4GO: a full-text corpus for the BioCreative IV GO Task
USDA-ARS?s Scientific Manuscript database
Gene function curation via Gene Ontology (GO) annotation is a common task among Model Organism Database (MOD) groups. Due to its manual nature, this task is time-consuming and labor-intensive, and thus considered one of the bottlenecks in literature curation. There have been many previous attempts a...
Manual leak detection and repair (LDAR) programs are currently implemented on a regular basis at refinery sites to limit fugitive emissions of volatile organic compounds (VOC). However, LDAR surveys can be time-consuming and are not always cost-effective. Fence line monitoring of...
"PolyCAFe"--Automatic Support for the Polyphonic Analysis of CSCL Chats
ERIC Educational Resources Information Center
Trausan-Matu, Stefan; Dascalu, Mihai; Rebedea, Traian
2014-01-01
Chat conversations and other types of online communication environments are widely used within CSCL educational scenarios. However, there is a lack of theoretical and methodological background for the analysis of collaboration. Manual assessing of non-moderated chat discussions is difficult and time-consuming, having as a consequence that learning…
Computer-Aided Diagnosis of Acute Lymphoblastic Leukaemia
2018-01-01
Leukaemia is a form of blood cancer which affects the white blood cells and damages the bone marrow. Usually, a complete blood count (CBC) and bone marrow aspiration are used to diagnose acute lymphoblastic leukaemia. It can be a fatal disease if not diagnosed at an early stage. In practice, manual microscopic evaluation of stained sample slides is used for the diagnosis of leukaemia, but manual diagnostic methods are time-consuming, less accurate, and prone to errors due to various human factors such as stress and fatigue. Therefore, different automated systems have been proposed to overcome the shortcomings of the manual diagnostic methods. In the recent past, several computer-aided leukaemia diagnosis methods have been presented. These automated systems are fast, reliable, and accurate compared to manual diagnosis methods. This paper presents a review of computer-aided diagnosis systems with respect to their methodologies, which include enhancement, segmentation, feature extraction, classification, and accuracy. PMID:29681996
Vision based tunnel inspection using non-rigid registration
NASA Astrophysics Data System (ADS)
Badshah, Amir; Ullah, Shan; Shahzad, Danish
2015-04-01
Growing numbers of long tunnels across the globe have increased the need for tunnel safety measurements and inspections. To avoid serious damage, tunnel inspection at regular intervals is highly recommended so that any deformations or cracks are found in time. While complying with the stringent safety and tunnel accessibility standards, conventional geodetic surveying using civil engineering techniques and other manual and mechanical methods is time consuming and disrupts routine operation. An automatic tunnel inspection based on image processing techniques using non-rigid registration has been proposed. Many other image processing methods are used for image registration purposes. Most of them operate on images in the spatial domain, for example finding edges and corners with the Harris detector. These methods are quite time consuming and fail for blurred or noisy images; because they use image features directly, they are collectively known as feature-based correlation. The other approach is featureless correlation, in which the images are converted into the frequency domain and then correlated with each other. The shift estimated in the frequency domain is the same as in the spatial domain, but the processing is much faster than in the spatial domain. In the proposed method, a modified normalized phase correlation is used to find any shift between two images. As pre-processing, the tunnel images, i.e. the reference and the template, are divided into small patches, and the corresponding patches are registered by the proposed modified normalized phase correlation. Applying the proposed algorithm yields the pixel movement between the images, and these pixel shifts are then converted into measuring units such as mm or cm. After the complete process, any shift of the tunnel at the inspected points is located.
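The core idea, registering patches by normalized phase correlation, can be sketched as follows: the peak of the inverse FFT of the normalized cross-power spectrum gives the integer pixel shift between a reference patch and a template patch. The paper's specific modification to the normalization and its patch-subdivision logic are not reproduced.

```python
import numpy as np

def phase_correlation_shift(reference, template):
    """Return the (dy, dx) integer shift of template relative to reference."""
    F1, F2 = np.fft.fft2(reference), np.fft.fft2(template)
    cross_power = F1 * np.conj(F2)
    cross_power /= np.maximum(np.abs(cross_power), 1e-12)   # keep phase information only
    corr = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the patch size into negative displacements
    if dy > reference.shape[0] // 2:
        dy -= reference.shape[0]
    if dx > reference.shape[1] // 2:
        dx -= reference.shape[1]
    return dy, dx
```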
Towards Automatic Classification of Wikipedia Content
NASA Astrophysics Data System (ADS)
Szymański, Julian
Wikipedia, the Free Encyclopedia, encounters the problem of properly classifying new articles every day. The process of assigning articles to categories is performed manually and is a time-consuming task. It requires knowledge of the Wikipedia structure that is beyond typical editor competence, which leads to human mistakes such as omitted or wrong assignments of articles to categories. The article presents the application of an SVM classifier for automatic classification of documents from The Free Encyclopedia. The classifier has been tested using two text representations: inter-document connections (hyperlinks) and word content. The results of the experiments, evaluated on hand-crafted data, show that the Wikipedia classification process can be partially automated. The proposed approach can be used for building a decision support system which suggests to editors the best categories that fit new content entered into Wikipedia.
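A generic sketch of the word-content variant (TF-IDF features fed to a linear SVM) is shown below; the articles and category names are toy placeholders, and the hyperlink-based representation is not illustrated.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

train_texts = ["glacier ice sheet melt", "parliament election vote", "enzyme protein fold"]
train_cats  = ["Geography", "Politics", "Biology"]             # toy categories

classifier = make_pipeline(TfidfVectorizer(), LinearSVC())
classifier.fit(train_texts, train_cats)
print(classifier.predict(["protein structure article"]))       # likely 'Biology' here
```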
Gabbert, Dominik D; Entenmann, Andreas; Jerosch-Herold, Michael; Frettlöh, Felicitas; Hart, Christopher; Voges, Inga; Pham, Minh; Andrade, Ana; Pardun, Eileen; Wegner, P; Hansen, Traudel; Kramer, Hans-Heiner; Rickers, Carsten
2013-12-01
The determination of right ventricular volumes and function is of increasing interest for the postoperative care of patients with congenital heart defects. The presentation of volumetry data in terms of volume-time curves allows a comprehensive functional assessment. By using manual contour tracing, the generation of volume-time curves is exceedingly time-consuming. This study describes a fast and precise method for determining volume-time curves for the right ventricle and for the right ventricular outflow tract. The method applies contour detection and includes a feature for identifying the right ventricular outflow tract volume. The segregation of the outflow tract is performed by four-dimensional curved smooth boundary surfaces defined by prespecified anatomical landmarks. The comparison with manual contour tracing demonstrates that the method is accurate and improves the precision of the measurement. Compared to manual contour tracing the bias is <0.1% ± 4.1% (right ventricle) and -2.6% ± 20.0% (right ventricular outflow tract). The standard deviations of inter- and intraobserver variabilities for determining the volume of the right ventricular outflow tract are reduced to less than half the values of manual contour tracing. The time consumption per patient is reduced from 341 ± 80 min (right ventricle) and 56 ± 11 min (right ventricular outflow tract) using manual contour tracing to 46 ± 9 min for a combined analysis of right ventricle and right ventricular outflow tract. The analysis of volume-time curves for the right ventricle and its outflow tract discloses new evaluation methods in clinical routine and science. Copyright © 2013 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Mizukami, Masato; Makihara, Mitsuhiro
2013-07-01
Conventionally, in intelligent buildings in a metropolitan area network and in small-scale facilities in the optical access network, optical connectors are joined manually using an optical connection board and a patch panel. In this manual connection approach, mistakes occur due to discrepancies between the actual physical settings of the connections and their management because these processes are independent. Moreover, manual cross-connection is time-consuming and expensive because maintenance personnel must be dispatched to remote places to correct mistakes. We have developed a fiber-handling robot and optical connection mechanisms for automatic cross-connection of multiple optical connectors, which are the key elements of automatic optical fiber cross-connect equipment. We evaluate the performance of the equipment, such as its optical characteristics and environmental specifications. We also devise new optical connection mechanisms that enable the automated optical fiber cross-connect module to handle and connect angled physical contact (APC) optical connector plugs. We evaluate the performance of the equipment, such as its optical characteristics. The evaluation results confirm that the automated optical fiber cross-connect equipment can connect APC connectors with low loss and high return loss, indicating that the automated optical fiber cross-connect equipment is suitable for practical use in intelligent buildings and optical access networks.
A Laboratory Manual for Stepwise Cerebral White Matter Fiber Dissection.
Koutsarnakis, Christos; Liakos, Faidon; Kalyvas, Aristotelis V; Sakas, Damianos E; Stranjalis, George
2015-08-01
White matter fiber dissection is an important method in acquiring a thorough neuroanatomic knowledge for surgical practice. Previous studies have definitely improved our understanding of intrinsic brain anatomy and emphasized on the significance of this technique in modern neurosurgery. However, current literature lacks a complete and concentrated laboratory guide about the entire dissection procedure. Hence, our primary objective is to introduce a detailed laboratory manual for cerebral white matter dissection by highlighting consecutive dissection steps, and to stress important technical comments facilitating this complex procedure. Twenty adult, formalin-fixed cerebral hemispheres were included in the study. Ten specimens were dissected in the lateromedial and 10 in the mediolateral direction, respectively, using the fiber dissection technique and the microscope. Eleven and 8 consecutive and distinctive dissection steps are recommended for the lateromedial and mediolateral dissection procedures, respectively. Photographs highlighting various anatomic landmarks accompany every step. Technical recommendations, facilitating the dissection process, are also indicated. The fiber dissection technique, although complex and time consuming, offers a three-dimensional knowledge of intrinsic brain anatomy and architecture, thus improving both the quality of microneurosurgery and the patient's standard of care. The present anatomic study provides a thorough dissection manual to those who study brain anatomy using this technique. Copyright © 2015 Elsevier Inc. All rights reserved.
Operations research methods improve chemotherapy patient appointment scheduling.
Santibáñez, Pablo; Aristizabal, Ruben; Puterman, Martin L; Chow, Vincent S; Huang, Wenhai; Kollmannsberger, Christian; Nordin, Travis; Runzer, Nancy; Tyldesley, Scott
2012-12-01
Clinical complexity, scheduling restrictions, and outdated manual booking processes resulted in frequent clerical rework, long waitlists for treatment, and late appointment notification for patients at a chemotherapy clinic in a large cancer center in British Columbia, Canada. A 17-month study was conducted to address booking, scheduling and workload issues and to develop, implement, and evaluate solutions. A review of scheduling practices included process observation and mapping, analysis of historical appointment data, creation of a new performance metric (final appointment notification lead time), and a baseline patient satisfaction survey. Process improvement involved discrete event simulation to evaluate alternative booking practice scenarios, development of an optimization-based scheduling tool to improve scheduling efficiency, and change management for implementation of process changes. Results were evaluated through analysis of appointment data, a follow-up patient survey, and staff surveys. Process review revealed a two-stage scheduling process. Long waitlists and late notification resulted from an inflexible first-stage process. The second-stage process was time consuming and tedious. After a revised, more flexible first-stage process and an automated second-stage process were implemented, the median percentage of appointments exceeding the final appointment notification lead time target of one week was reduced by 57% and median waitlist size decreased by 83%. Patient surveys confirmed increased satisfaction while staff feedback reported reduced stress levels. Significant operational improvements can be achieved through process redesign combined with operations research methods.
An Automatic Phase-Change Detection Technique for Colloidal Hard Sphere Suspensions
NASA Technical Reports Server (NTRS)
McDowell, Mark; Gray, Elizabeth; Rogers, Richard B.
2005-01-01
Colloidal suspensions of monodisperse spheres are used as physical models of thermodynamic phase transitions and as precursors to photonic band gap materials. However, current image analysis techniques are not able to distinguish between densely packed phases within conventional microscope images, which are mainly characterized by degrees of randomness or order with similar grayscale value properties. Current techniques for identifying the phase boundaries involve manually identifying the phase transitions, which is very tedious and time consuming. We have developed an intelligent machine vision technique that automatically identifies colloidal phase boundaries. The algorithm utilizes intelligent image processing techniques that accurately identify and track phase changes vertically or horizontally for a sequence of colloidal hard sphere suspension images. This technique is readily adaptable to any imaging application where regions of interest are distinguished from the background by differing patterns of motion over time.
Path generation algorithm for UML graphic modeling of aerospace test software
NASA Astrophysics Data System (ADS)
Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Chen, Chao
2018-03-01
Traditionally, aerospace software test engineers rely on their own work experience and on communication with software development personnel to describe the software under test and to write test cases manually, which is time-consuming, inefficient and prone to gaps. Using the high-reliability model-based testing (MBT) tools developed by our company, a single modeling pass can automatically generate the test case documents, which is efficient and accurate. Describing a process accurately with a UML model depends on the paths that can be reached, yet existing path generation algorithms are either too simple, unable to combine branch paths with loops into complete paths, or too cumbersome, generating overly complicated arrangements of paths that are meaningless and superfluous for aerospace software testing. Drawing on our experience with aerospace software, we have developed a tailored path generation algorithm for UML graphic models of aerospace test software.
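The abstract does not publish the algorithm itself; the sketch below only illustrates the general idea of enumerating paths through a UML activity-style graph while bounding how often each edge (and hence each loop) may be traversed, so that loops yield a finite, meaningful set of paths.

```python
def generate_paths(graph, start, end, max_edge_visits=1):
    """graph: dict mapping node -> list of successor nodes."""
    paths = []

    def dfs(node, path, edge_count):
        if node == end:
            paths.append(path + [node])
            return
        for nxt in graph.get(node, []):
            edge = (node, nxt)
            if edge_count.get(edge, 0) < max_edge_visits:    # bound loop traversals
                edge_count[edge] = edge_count.get(edge, 0) + 1
                dfs(nxt, path + [node], edge_count)
                edge_count[edge] -= 1

    dfs(start, [], {})
    return paths

# A branch plus a loop edge C -> B yields the straight paths and one loop pass:
g = {"A": ["B"], "B": ["C", "D"], "C": ["B", "D"], "D": []}
print(generate_paths(g, "A", "D"))
```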
ERIC Educational Resources Information Center
Blackburn, Mary Ellen; Hall, Doris N.
Materials are provided for a consumer education activity designed to help teenagers make knowledgeable, rational decisions when purchasing goods and services. A student manual describes how the activity--a consumer judging contest--works. Information is provided on how consumers make decisions. Topics include: needs versus wants; sources of…
Duo, Jia; Dong, Huijin; DeSilva, Binodh; Zhang, Yan J
2013-07-01
Sample dilution and reagent pipetting are time-consuming steps in ligand-binding assays (LBAs). Traditional automation-assisted LBAs use assay-specific scripts that require labor-intensive script writing and user training. Five major script modules were developed on Tecan Freedom EVO liquid handling software to facilitate the automated sample preparation and LBA procedure: sample dilution, sample minimum required dilution, standard/QC minimum required dilution, standard/QC/sample addition, and reagent addition. The modular design of the automation scripts allowed users to assemble an automated assay with minimal script modification. The application of the template was demonstrated in three LBAs to support discovery biotherapeutic programs. The results demonstrated that the modular scripts provided flexibility in adapting to various LBA formats and significant time savings in script writing and scientist training. Data generated by the automated process were comparable to those from the manual process, while bioanalytical productivity was significantly improved using the modular robotic scripts.
Semi-automated XML markup of biosystematic legacy literature with the GoldenGATE editor.
Sautter, Guido; Böhm, Klemens; Agosti, Donat
2007-01-01
Today, digitization of legacy literature is a big issue. This also applies to the domain of biosystematics, where this process has just started. Digitized biosystematics literature requires a very precise and fine grained markup in order to be useful for detailed search, data linkage and mining. However, manual markup on sentence level and below is cumbersome and time consuming. In this paper, we present and evaluate the GoldenGATE editor, which is designed for the special needs of marking up OCR output with XML. It is built in order to support the user in this process as far as possible: Its functionality ranges from easy, intuitive tagging through markup conversion to dynamic binding of configurable plug-ins provided by third parties. Our evaluation shows that marking up an OCR document using GoldenGATE is three to four times faster than with an off-the-shelf XML editor like XML-Spy. Using domain-specific NLP-based plug-ins, these numbers are even higher.
Hybrid Clustering And Boundary Value Refinement for Tumor Segmentation using Brain MRI
NASA Astrophysics Data System (ADS)
Gupta, Anjali; Pahuja, Gunjan
2017-08-01
The method of brain tumor segmentation is the separation of the tumor area from brain Magnetic Resonance (MR) images. A number of methods already exist for segmenting brain tumors efficiently; however, it is a tedious task to identify the brain tumor in MR images. The segmentation process extracts the different tumor tissues, such as active tumor, necrosis and edema, from the normal brain tissues, such as gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF). According to the survey study, brain tumors are most often detected easily from brain MR images using a region-based approach, but the required level of accuracy and the classification of abnormalities are not predictable. The segmentation of a brain tumor consists of many stages, and manually segmenting the tumor from brain MR images is very time consuming, so manual segmentation poses many challenges. In this research paper, our main goal is to present a hybrid clustering approach, consisting of Fuzzy C-Means clustering (for accurate tumor detection) and the level set method (for handling complex shapes), for detecting the exact shape of the tumor in minimal computational time. Using this approach, we observe that for a certain set of images the tumor is detected in 0.9412 s, which is far less than a recent existing algorithm, i.e. hybrid clustering with Fuzzy C-Means and K-Means clustering.
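For the clustering half of the hybrid approach, a compact NumPy fuzzy C-means on voxel intensities might look like the sketch below (the level-set refinement stage is not shown, and treating intensities as 1-D features is a simplifying assumption).

```python
import numpy as np

def fuzzy_c_means(intensities, n_clusters=4, m=2.0, n_iter=100, eps=1e-5):
    """intensities: 1-D array of voxel values; returns cluster centers and memberships."""
    x = intensities.reshape(-1, 1).astype(float)
    rng = np.random.default_rng(0)
    u = rng.random((x.shape[0], n_clusters))
    u /= u.sum(axis=1, keepdims=True)                        # initial fuzzy memberships
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)[:, None]       # weighted cluster centers
        dist = np.abs(x - centers.T) + 1e-12                 # (n_voxels, n_clusters)
        new_u = dist ** (-2.0 / (m - 1.0))                   # standard FCM membership update
        new_u /= new_u.sum(axis=1, keepdims=True)
        converged = np.abs(new_u - u).max() < eps
        u = new_u
        if converged:
            break
    return centers.ravel(), u
```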
Consumer's Choice: An Interdisciplinary Approach to Consumer Education. Developed for Grades K-4.
ERIC Educational Resources Information Center
Allegheny Intermediate Unit, Pittsburgh, PA.
This manual suggests teaching strategies for integrating consumer education into art, language arts, mathematics, science/health, and social studies in grades K-4. The guide lists consumer education competencies, interdisciplinary structures for consumer education, and provides a chart which relates competencies to page numbers in the guide.…
Patel, Darshan C; Lyu, Yaqi Fara; Gandarilla, Jorge; Doherty, Steve
2018-04-03
In-process sampling and analysis is an important aspect of monitoring kinetic profiles and impurity formation or rejection, both in development and during commercial manufacturing. In pharmaceutical process development, the technology of choice for a substantial portion of this analysis is high-performance liquid chromatography (HPLC). Traditionally, the sample extraction and preparation for reaction characterization have been performed manually. This can be time consuming, laborious, and impractical for long processes. Depending on the complexity of the sample preparation, there can be variability introduced by different analysts, and in some cases, the integrity of the sample can be compromised during handling. While there are commercial instruments available for on-line monitoring with HPLC, they lack capabilities in many key areas. Some do not provide integration of the sampling and analysis, while others afford limited flexibility in sample preparation. The current offerings provide a limited number of unit operations available for sample processing and no option for workflow customizability. This work describes development of a microfluidic automated program (MAP) which fully automates the sample extraction, manipulation, and on-line LC analysis. The flexible system is controlled using an intuitive Microsoft Excel based user interface. The autonomous system is capable of unattended reaction monitoring that allows flexible unit operations and workflow customization to enable complex operations and on-line sample preparation. The automated system is shown to offer advantages over manual approaches in key areas while providing consistent and reproducible in-process data. Copyright © 2017 Elsevier B.V. All rights reserved.
The effect of baking treatments on E9018-B3 manual metal arc welding consumables
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fazackerley, W.; Gee, R.
For the comparison and assessment of steel welding consumables, standard tests involving small model welds are widely used to determine diffusible hydrogen contents. The lowest scale normally quoted is less than 5 ml/100 g deposited metal (e.g., BS5135:1984 Scale D). However, due to industry's demands for lower hydrogen levels for critical applications, it is now proposed to sub-divide this scale at around 2-3 ml/100 g. This has led to further development by consumable manufacturers in order to meet the new specification. Traditionally, reductions in potential hydrogen levels in manual metal arc welding consumables have been achieved by improved flux formulations and silicate binder systems. However, there is little published work on the effect of electrode baking treatments. A development program has been employed to study the effect of baking treatments on E9018-B3 type manual metal arc welding consumables. This type of welding consumable is used extensively in the initial fabrication and in the repair and maintenance of power generation plant, where significant risk of HAZ hydrogen cracking exists. These treatments have been assessed using standard tests for weld metal hydrogen content and weld metal composition.
Daisne, Jean-François; Blumhofer, Andreas
2013-06-26
Intensity modulated radiotherapy for head and neck cancer necessitates accurate definition of organs at risk (OAR) and clinical target volumes (CTV). This crucial step is time consuming and prone to inter- and intra-observer variations. Automatic segmentation by atlas deformable registration may help to reduce time and variations. We aim to test a new commercial atlas algorithm for automatic segmentation of OAR and CTV in both ideal and clinical conditions. The updated Brainlab automatic head and neck atlas segmentation was tested on 20 patients: 10 cN0-stages (ideal population) and 10 unselected N-stages (clinical population). Following manual delineation of OAR and CTV, automatic segmentation of the same set of structures was performed and afterwards manually corrected. Dice Similarity Coefficient (DSC), Average Surface Distance (ASD) and Maximal Surface Distance (MSD) were calculated for "manual to automatic" and "manual to corrected" volume comparisons. In both groups, automatic segmentation saved about 40% of the corresponding manual segmentation time. This effect was more pronounced for OAR than for CTV. Editing the automatically obtained contours significantly improved DSC, ASD and MSD. Large distortions of normal anatomy or lack of iodine contrast were the limiting factors. The updated Brainlab atlas-based automatic segmentation tool for head and neck cancer patients is time-saving but still necessitates review and corrections by an expert.
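As a concrete illustration of the overlap metric cited above, the sketch below computes a Dice Similarity Coefficient between two binary segmentation masks with NumPy. The toy masks and array names are hypothetical and not drawn from the Brainlab study.

```python
import numpy as np

def dice_similarity(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for two boolean masks of equal shape."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy example: "manual" vs. "automatic" delineation on a single 2D slice.
manual = np.zeros((100, 100), dtype=bool)
manual[30:70, 30:70] = True
automatic = np.zeros_like(manual)
automatic[35:75, 32:72] = True
print(f"DSC = {dice_similarity(manual, automatic):.3f}")
```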
Simulations in the Consumer Economics Classroom. Consumer Education Training Module.
ERIC Educational Resources Information Center
Kachaturoff, Grace
This inservice manual provides guidelines to help elementary, secondary, and adult education teachers select, use, and design simulation experiences for consumer education. Four example simulations provide students with opportunities to develop decision-making skills as consumers. Simulations may be used as an introductory, developmental, or…
ALOG user's manual: A Guide to using the spreadsheet-based artificial log generator
Matthew F. Winn; Philip A. Araman; Randolph H. Wynne
2012-01-01
Computer programs that simulate log sawing can be valuable training tools for sawyers, as well as a means of testing different sawing patterns. Most available simulation programs rely on diagrammed-log databases, which can be very costly and time consuming to develop. Artificial Log Generator (ALOG) is a user-friendly Microsoft® Excel®...
Correlates and Predictors of Binge Eating among Native American Women
ERIC Educational Resources Information Center
Clark, Julie Dorton; Winterowd, Carrie
2012-01-01
Obesity and being overweight, as determined by body mass index (BMI), each continues to be of concern for many Native American/American Indians (NA/AI). According to the "Diagnostic and Statistical Manual of Mental Disorders," binge eating is excessive eating or consuming large quantities of food over a short period of time and has been associated…
USDA-ARS?s Scientific Manuscript database
Background: Dietary intake assessment with diet records (DR) is a standard research and practice tool in nutrition. Manual entry and analysis of DR is time-consuming and expensive. New electronic tools for diet entry by clients and research participants may reduce the cost and effort of nutrient int...
Egger, Jan; Kappus, Christoph; Freisleben, Bernd; Nimsky, Christopher
2012-08-01
In this contribution, a medical software system for volumetric analysis of different cerebral pathologies in magnetic resonance imaging (MRI) data is presented. The software system is based on a semi-automatic segmentation algorithm and helps to overcome the time-consuming process of volume determination during monitoring of a patient. After imaging, the parameter settings, including a seed point, are set up in the system and an automatic segmentation is performed by a novel graph-based approach. Manually reviewing the result leads to reseeding, adding seed points or an automatic surface mesh generation. The mesh is saved for monitoring the patient and for comparisons with follow-up scans. Based on the mesh, the system performs a voxelization and volume calculation, which leads to diagnosis and therefore further treatment decisions. The overall system has been tested with different cerebral pathologies (glioblastoma multiforme, pituitary adenomas and cerebral aneurysms) and evaluated against manual expert segmentations using the Dice Similarity Coefficient (DSC). Additionally, intra-physician segmentations have been performed to provide a quality measure for the presented system.
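A minimal sketch of the voxelization-to-volume step described above, assuming a binary mask and a known voxel spacing; the mask and spacing values are invented for illustration.

```python
import numpy as np

def mask_volume_ml(mask: np.ndarray, spacing_mm=(1.0, 1.0, 1.0)) -> float:
    """Volume of a binary 3D mask in millilitres (1 ml = 1000 mm^3)."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.astype(bool).sum() * voxel_mm3 / 1000.0

# Hypothetical voxelized lesion: a 20 x 20 x 20 block at 0.5 mm isotropic spacing.
mask = np.zeros((128, 128, 128), dtype=bool)
mask[50:70, 50:70, 50:70] = True
print(f"volume = {mask_volume_ml(mask, spacing_mm=(0.5, 0.5, 0.5)):.2f} ml")
```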
Optimal reinforcement of training datasets in semi-supervised landmark-based segmentation
NASA Astrophysics Data System (ADS)
Ibragimov, Bulat; Likar, Boštjan; Pernuš, Franjo; Vrtovec, Tomaž
2015-03-01
During the last couple of decades, the development of computerized image segmentation shifted from unsupervised to supervised methods, which made segmentation results more accurate and robust. However, the main disadvantage of supervised segmentation is a need for manual image annotation that is time-consuming and subject to human error. To reduce the need for manual annotation, we propose a novel learning approach for training dataset reinforcement in the area of landmark-based segmentation, where newly detected landmarks are optimally combined with reference landmarks from the training dataset, thereby enriching the training process. The approach is formulated as a nonlinear optimization problem, where the solution is a vector of weighting factors that measures how reliable the detected landmarks are. The detected landmarks that are found to be more reliable are included in the training procedure with higher weighting factors, whereas the detected landmarks that are found to be less reliable are included with lower weighting factors. The approach is integrated into the landmark-based game-theoretic segmentation framework and validated against the problem of lung field segmentation from chest radiographs.
MASQOT: a method for cDNA microarray spot quality control
Bylesjö, Max; Eriksson, Daniel; Sjödin, Andreas; Sjöström, Michael; Jansson, Stefan; Antti, Henrik; Trygg, Johan
2005-01-01
Background cDNA microarray technology has emerged as a major player in the parallel detection of biomolecules, but still suffers from fundamental technical problems. Identifying and removing unreliable data is crucial to prevent the risk of receiving illusive analysis results. Visual assessment of spot quality is still a common procedure, despite the time-consuming work of manually inspecting spots in the range of hundreds of thousands or more. Results A novel methodology for cDNA microarray spot quality control is outlined. Multivariate discriminant analysis was used to assess spot quality based on existing and novel descriptors. The presented methodology displays high reproducibility and was found superior in identifying unreliable data compared to other evaluated methodologies. Conclusion The proposed methodology for cDNA microarray spot quality control generates non-discrete values of spot quality which can be utilized as weights in subsequent analysis procedures as well as to discard spots of undesired quality using the suggested threshold values. The MASQOT approach provides a consistent assessment of spot quality and can be considered an alternative to the labor-intensive manual quality assessment process. PMID:16223442
Text Classification for Organizational Researchers
Kobayashi, Vladimer B.; Mol, Stefan T.; Berkers, Hannah A.; Kismihók, Gábor; Den Hartog, Deanne N.
2017-01-01
Organizations are increasingly interested in classifying texts or parts thereof into categories, as this enables more effective use of their information. Manual procedures for text classification work well for up to a few hundred documents. However, when the number of documents is larger, manual procedures become laborious, time-consuming, and potentially unreliable. Techniques from text mining facilitate the automatic assignment of text strings to categories, making classification expedient, fast, and reliable, which creates potential for its application in organizational research. The purpose of this article is to familiarize organizational researchers with text mining techniques from machine learning and statistics. We describe the text classification process in several roughly sequential steps, namely training data preparation, preprocessing, transformation, application of classification techniques, and validation, and provide concrete recommendations at each step. To help researchers develop their own text classifiers, the R code associated with each step is presented in a tutorial. The tutorial draws from our own work on job vacancy mining. We end the article by discussing how researchers can validate a text classification model and the associated output. PMID:29881249
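The article's tutorial presents R code; purely to illustrate the same sequence of steps (preprocessing, transformation to a document-term representation, training, and validation), here is a hedged scikit-learn sketch in Python. The example documents and labels are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# Invented toy corpus: job-vacancy snippets labelled with a crude category.
texts = [
    "python developer needed for data pipeline work",
    "nurse required for night shifts in hospital ward",
    "backend engineer with java and sql experience",
    "registered nurse for intensive care unit",
]
labels = ["tech", "health", "tech", "health"]

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.5, random_state=0, stratify=labels)

# Transformation + classifier in one pipeline; TF-IDF handles tokenisation.
clf = Pipeline([
    ("tfidf", TfidfVectorizer(lowercase=True, stop_words="english")),
    ("model", LogisticRegression(max_iter=1000)),
])
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```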
Foadi, James; Aller, Pierre; Alguel, Yilmaz; Cameron, Alex; Axford, Danny; Owen, Robin L; Armour, Wes; Waterman, David G; Iwata, So; Evans, Gwyndaf
2013-08-01
The availability of intense microbeam macromolecular crystallography beamlines at third-generation synchrotron sources has enabled data collection and structure solution from microcrystals of <10 µm in size. The increased likelihood of severe radiation damage where microcrystals or particularly sensitive crystals are used forces crystallographers to acquire large numbers of data sets from many crystals of the same protein structure. The associated analysis and merging of multi-crystal data is currently a manual and time-consuming step. Here, a computer program, BLEND, that has been written to assist with and automate many of the steps in this process is described. It is demonstrated how BLEND has successfully been used in the solution of a novel membrane protein.
An Automated Blur Detection Method for Histological Whole Slide Imaging
Moles Lopez, Xavier; D'Andrea, Etienne; Barbot, Paul; Bridoux, Anne-Sophie; Rorive, Sandrine; Salmon, Isabelle; Debeir, Olivier; Decaestecker, Christine
2013-01-01
Whole slide scanners are novel devices that enable high-resolution imaging of an entire histological slide. Furthermore, the imaging is achieved in only a few minutes, which enables image rendering of large-scale studies involving multiple immunohistochemistry biomarkers. Although whole slide imaging has improved considerably, locally poor focusing causes blurred regions of the image. These artifacts may strongly affect the quality of subsequent analyses, making a slide review process mandatory. This tedious and time-consuming task requires the scanner operator to carefully assess the virtual slide and to manually select new focus points. We propose a statistical learning method that provides early image quality feedback and automatically identifies regions of the image that require additional focus points. PMID:24349343
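The paper trains a statistical classifier on dedicated quality features; as a much simpler stand-in for the idea of flagging blurred tiles, the sketch below scores tiles of a grayscale slide image by the variance of a 3x3 Laplacian response. The tile size and threshold are arbitrary assumptions.

```python
import numpy as np

def laplacian(img: np.ndarray) -> np.ndarray:
    """3x3 Laplacian of a 2D grayscale image, with edge-replicated borders."""
    padded = np.pad(img.astype(float), 1, mode="edge")
    return (padded[:-2, 1:-1] + padded[2:, 1:-1] +
            padded[1:-1, :-2] + padded[1:-1, 2:] - 4.0 * img)

def blurred_tiles(img: np.ndarray, tile=256, threshold=50.0):
    """Return (row, col) indices of tiles whose Laplacian variance falls below threshold."""
    flagged = []
    for r in range(0, img.shape[0] - tile + 1, tile):
        for c in range(0, img.shape[1] - tile + 1, tile):
            patch = img[r:r + tile, c:c + tile].astype(float)
            if laplacian(patch).var() < threshold:
                flagged.append((r // tile, c // tile))
    return flagged
```

Tiles flagged this way could be proposed as locations for additional focus points, which is the role the paper's classifier plays.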
Serra, M; Pereiro, I; Yamada, A; Viovy, J-L; Descroix, S; Ferraro, D
2017-02-14
The sealing of microfluidic devices remains a complex and time-consuming process requiring specific equipment and protocols: a universal method is thus highly desirable. We propose here the use of a commercially available sealing tape as a robust, versatile, reversible solution, compatible with cell and molecular biology protocols, and requiring only the application of manually achievable pressures. The performance of the seal was tested with regards to the most commonly used chip materials. For most materials, the bonding resisted 5 bars at room temperature and 1 bar at 95 °C. This method should find numerous uses, ranging from fast prototyping in the laboratory to implementation in low technology environments or industrial production.
NASA Astrophysics Data System (ADS)
Gao, M.; Li, J.
2018-04-01
Geometric correction is an important preprocessing step in the application of GF4 PMS imagery. Geometric correction based on the manual selection of geometric control points is time-consuming and laborious. The more common method, based on a reference image, is automatic image registration. This method involves several steps and parameters. For the multi-spectral sensor GF4 PMS, it is necessary to identify the best combination of parameters and steps. This study mainly focuses on the following issues: whether Rational Polynomial Coefficient (RPC) correction is necessary before automatic registration, which band should serve as the base band for automatic registration, and how the GF4 PMS spatial resolution should be configured.
Accelerating root system phenotyping of seedlings through a computer-assisted processing pipeline.
Dupuy, Lionel X; Wright, Gladys; Thompson, Jacqueline A; Taylor, Anna; Dekeyser, Sebastien; White, Christopher P; Thomas, William T B; Nightingale, Mark; Hammond, John P; Graham, Neil S; Thomas, Catherine L; Broadley, Martin R; White, Philip J
2017-01-01
There are numerous systems and techniques to measure the growth of plant roots. However, phenotyping large numbers of plant roots for breeding and genetic analyses remains challenging. One major difficulty is to achieve high throughput and resolution at a reasonable cost per plant sample. Here we describe a cost-effective root phenotyping pipeline, on which we perform time and accuracy benchmarking to identify bottlenecks in such pipelines and strategies for their acceleration. Our root phenotyping pipeline was assembled with custom software and low cost material and equipment. Results show that sample preparation and handling of samples during screening are the most time consuming task in root phenotyping. Algorithms can be used to speed up the extraction of root traits from image data, but when applied to large numbers of images, there is a trade-off between time of processing the data and errors contained in the database. Scaling-up root phenotyping to large numbers of genotypes will require not only automation of sample preparation and sample handling, but also efficient algorithms for error detection for more reliable replacement of manual interventions.
Rubus: A compiler for seamless and extensible parallelism.
Adnan, Muhammad; Aslam, Faisal; Nawaz, Zubair; Sarwar, Syed Mansoor
2017-01-01
Nowadays, a typical processor may have multiple processing cores on a single chip. Furthermore, a special purpose processing unit called the Graphic Processing Unit (GPU), originally designed for 2D/3D games, is now available for general purpose use in computers and mobile devices. However, traditional programming languages, which were designed to work with machines having single-core CPUs, cannot utilize the parallelism available on multi-core processors efficiently. Therefore, to exploit the extraordinary processing power of multi-core processors, researchers are working on new tools and techniques to facilitate parallel programming. To this end, languages like CUDA and OpenCL have been introduced, which can be used to write code with parallelism. The main shortcoming of these languages is that the programmer needs to specify all the complex details manually in order to parallelize the code across multiple cores. Therefore, the code written in these languages is difficult to understand, debug and maintain. Furthermore, parallelizing legacy code can require rewriting a significant portion of it in CUDA or OpenCL, which can consume significant time and resources. Thus, the amount of parallelism achieved is proportional to the skills of the programmer and the time spent in code optimizations. This paper proposes a new open source compiler, Rubus, to achieve seamless parallelism. The Rubus compiler relieves the programmer from manually specifying the low-level details. It analyses and transforms a sequential program into a parallel program automatically, without any user intervention. This achieves massive speedup and better utilization of the underlying hardware without a programmer's expertise in parallel programming. For five different benchmarks, an average speedup of 34.54 times has been achieved by Rubus as compared to Java on a basic GPU having only 96 cores, while for a matrix multiplication benchmark an average execution speedup of 84 times has been achieved on the same GPU. Moreover, Rubus achieves this performance without drastically increasing the memory footprint of a program.
Grimsley, Jasmine M S; Gadziola, Marie A; Wenstrup, Jeffrey J
2012-01-01
Mouse pups vocalize at high rates when they are cold or isolated from the nest. The proportions of each syllable type produced carry information about disease state and are being used as behavioral markers for the internal state of animals. Manual classifications of these vocalizations identified 10 syllable types based on their spectro-temporal features. However, manual classification of mouse syllables is time consuming and vulnerable to experimenter bias. This study uses an automated cluster analysis to identify acoustically distinct syllable types produced by CBA/CaJ mouse pups, and then compares the results to prior manual classification methods. The cluster analysis identified two syllable types, based on their frequency bands, that have continuous frequency-time structure, and two syllable types featuring abrupt frequency transitions. Although cluster analysis computed fewer syllable types than manual classification, the clusters represented well the probability distributions of the acoustic features within syllables. These probability distributions indicate that some of the manually classified syllable types are not statistically distinct. The characteristics of the four classified clusters were used to generate a Microsoft Excel-based mouse syllable classifier that rapidly categorizes syllables, with over a 90% match, into the syllable types determined by cluster analysis.
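As a sketch of the clustering idea, not the authors' exact procedure, the code below groups syllables described by a few spectro-temporal features using k-means; the feature columns and values are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical feature table: one row per syllable.
# Columns: duration (ms), start frequency (kHz), frequency sweep (kHz).
rng = np.random.default_rng(0)
features = np.vstack([
    rng.normal([30, 70, 5], [5, 3, 1], size=(50, 3)),    # short, high, small sweep
    rng.normal([80, 55, 25], [10, 4, 4], size=(50, 3)),  # long, lower, large sweep
])

X = StandardScaler().fit_transform(features)
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
print(np.bincount(kmeans.labels_))  # number of syllables assigned to each cluster
```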
ISPATOM: A Generic Real-Time Data Processing Tool Without Programming
NASA Technical Reports Server (NTRS)
Dershowitz, Adam
2007-01-01
Information Sharing Protocol Advanced Tool of Math (ISPATOM) is an application program allowing for the streamlined generation of comps, which subscribe to streams of incoming telemetry data, perform any necessary computations on the data, then send the data to other programs for display and/or further processing in NASA mission control centers. Heretofore, the development of comps was difficult, expensive, and time-consuming: each comp was custom written manually, in a low-level computing language, by a programmer attempting to follow requirements of flight controllers. ISPATOM enables a flight controller who is not a programmer to write a comp by simply typing in one or more equation(s) at a command line or retrieving the equation(s) from a text file. ISPATOM then subscribes to the necessary input data, performs all of the necessary computations, and sends out the results. It sends out new results whenever the input data change. The use of equations in ISPATOM is no more difficult than entering equations in a spreadsheet. The time involved in developing a comp is thus limited to the time taken to decide on the necessary equations. Thus, ISPATOM is a real-time dynamic calculator.
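A hedged sketch of the general idea of an equation-driven comp: an expression is compiled once and re-evaluated whenever one of its input telemetry symbols is updated. The expression syntax, symbol names, and update mechanism are assumptions for illustration, not ISPATOM's actual interface.

```python
import ast

class Comp:
    """Re-evaluates a user-supplied arithmetic expression when its inputs change."""

    def __init__(self, expression: str):
        tree = ast.parse(expression, mode="eval")
        self.code = compile(tree, "<comp>", "eval")
        # every bare name in the expression is treated as a telemetry symbol
        self.inputs = {node.id for node in ast.walk(tree) if isinstance(node, ast.Name)}
        self.values = {}

    def update(self, symbol: str, value: float):
        """Feed one telemetry sample; return the new result once all inputs are known."""
        if symbol in self.inputs:
            self.values[symbol] = value
        if self.inputs <= self.values.keys():
            return eval(self.code, {"__builtins__": {}}, dict(self.values))
        return None

# Hypothetical comp combining two telemetry symbols into a derived quantity.
comp = Comp("(cabin_temp_a + cabin_temp_b) / 2.0")
comp.update("cabin_temp_a", 21.4)
print(comp.update("cabin_temp_b", 22.0))  # -> 21.7
```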
High‐resolution trench photomosaics from image‐based modeling: Workflow and error analysis
Reitman, Nadine G.; Bennett, Scott E. K.; Gold, Ryan D.; Briggs, Richard; Duross, Christopher
2015-01-01
Photomosaics are commonly used to construct maps of paleoseismic trench exposures, but the conventional process of manually using image‐editing software is time consuming and produces undesirable artifacts and distortions. Herein, we document and evaluate the application of image‐based modeling (IBM) for creating photomosaics and 3D models of paleoseismic trench exposures, illustrated with a case‐study trench across the Wasatch fault in Alpine, Utah. Our results include a structure‐from‐motion workflow for the semiautomated creation of seamless, high‐resolution photomosaics designed for rapid implementation in a field setting. Compared with conventional manual methods, the IBM photomosaic method provides a more accurate, continuous, and detailed record of paleoseismic trench exposures in approximately half the processing time and 15%–20% of the user input time. Our error analysis quantifies the effect of the number and spatial distribution of control points on model accuracy. For this case study, an ∼87 m2 exposure of a benched trench photographed at viewing distances of 1.5–7 m yields a model with <2 cm root mean square error (rmse) with as few as six control points. Rmse decreases as more control points are implemented, but the gains in accuracy are minimal beyond 12 control points. Spreading control points throughout the target area helps to minimize error. We propose that 3D digital models and corresponding photomosaics should be standard practice in paleoseismic exposure archiving. The error analysis serves as a guide for future investigations that seek balance between speed and accuracy during photomosaic and 3D model construction.
Nebbad-Lechani, Biba; Emirian, Aurélie; Maillebuau, Fabienne; Mahjoub, Nadia; Fihman, Vincent; Legrand, Patrick; Decousser, Jean-Winoc
2013-12-01
The microbiological diagnosis of respiratory tract infections requires serial manual dilutions of the clinical specimen before agar plate inoculation, disrupting the workflow in bacteriology clinical laboratories. Automated plating instrument systems have been designed to increase the speed, reproducibility and safety of this inoculating step; nevertheless, data concerning respiratory specimens are lacking. We tested a specific procedure that uses the Previ Isola® (bioMérieux, Craponne, France) to inoculate with broncho-pulmonary specimens (BPS). A total of 350 BPS from a university-affiliated hospital were managed in parallel using the manual reference and the automated methods (expectoration: 75; broncho-alveolar lavage: 68; tracheal aspiration: 17; protected distal sample: 190). A specific enumeration reading grid, a pre-liquefaction step and a fluidity test, performed before the inoculation, were designed for the automated method. The qualitative (i.e., the number of specimens yielding a bacterial count greater than the clinical threshold) and quantitative (i.e., the discrepancy within a 0.5 log value) concordances were 100% and 98.2%, respectively. The slimmest subgroup of expectorations could not be managed by the automated method (8%, 6/75). The technical time and cost savings (i.e., number of consumed plates) reached 50%. Additional studies are required for specific populations, such as cystic fibrosis specimens and associated bacterial variants. An automated decapper should be implemented to increase the biosafety of the process. The PREVI Isola® adapted procedure is a time- and cost-saving method for broncho-pulmonary specimen processing. © 2013.
NASA Astrophysics Data System (ADS)
Świąder, Andrzej
2014-12-01
Digital Terrain Models (DTMs) produced from stereoscopic, submeter-resolution High Resolution Imaging Science Experiment (HiRISE) imagery provide a solid basis for all morphometric analyses of the surface of Mars. Because more effective use of DTMs is hindered by complicated and time-consuming manual processing, the Ames Stereo Pipeline, an automated workflow developed by the Ames Intelligent Robotics Group (NASA), constitutes a good alternative. Four DTMs, covering the global dichotomy boundary between the southern highlands and northern lowlands along the line of the presumed Arabia shoreline, were produced and analysed. One of them included landforms that are likely indicative of an oceanic basin that extended across the lowland northern hemisphere of Mars in the geological past. The high-resolution DTMs obtained were also used for landscape visualisation.
Terminology model discovery using natural language processing and visualization techniques.
Zhou, Li; Tao, Ying; Cimino, James J; Chen, Elizabeth S; Liu, Hongfang; Lussier, Yves A; Hripcsak, George; Friedman, Carol
2006-12-01
Medical terminologies are important for unambiguous encoding and exchange of clinical information. The traditional manual method of developing terminology models is time-consuming and limited in the number of phrases that a human developer can examine. In this paper, we present an automated method for developing medical terminology models based on natural language processing (NLP) and information visualization techniques. Surgical pathology reports were selected as the testing corpus for developing a pathology procedure terminology model. The use of a general NLP processor for the medical domain, MedLEE, provides an automated method for acquiring semantic structures from a free text corpus and sheds light on a new high-throughput method of medical terminology model development. The use of an information visualization technique supports the summarization and visualization of the large quantity of semantic structures generated from medical documents. We believe that a general method based on NLP and information visualization will facilitate the modeling of medical terminologies.
Historical review of die drool phenomenon during plastics extrusion
NASA Astrophysics Data System (ADS)
Musil, Jan; Zatloukal, Martin
2013-04-01
The die drool phenomenon is defined as the unwanted spontaneous accumulation of extruded polymer melt on the open faces of an extrusion die during the extrusion process. The accumulated material builds up at the die exit and frequently or continually sticks onto the extruded product, damaging it. When die drool appears, the extrusion process must be shut down and the die exit manually cleaned, which costs time and money. Although die drool is a complex phenomenon and its formation mechanism is not yet fully understood, a variety of proposed explanations of its formation mechanism, as well as many approaches to its elimination, can be found in the open literature. Our review presents, in historical order, breakthrough works in the field of die drool research, shows many ways to suppress it, introduces methods for its quantitative evaluation and composition analysis, and summarizes theories of the die drool formation mechanism which can be helpful for extrusion experts.
A method for the automated processing and analysis of images of ULVWF-platelet strings.
Reeve, Scott R; Abbitt, Katherine B; Cruise, Thomas D; Hose, D Rodney; Lawford, Patricia V
2013-01-01
We present a method for identifying and analysing unusually large von Willebrand factor (ULVWF)-platelet strings in noisy low-quality images. The method requires relatively inexpensive, non-specialist equipment and allows multiple users to be employed in the capture of images. Images are subsequently enhanced and analysed, using custom-written software to perform the processing tasks. The formation and properties of ULVWF-platelet strings released in in vitro flow-based assays have recently become a popular research area. Endothelial cells are incorporated into a flow chamber, chemically stimulated to induce ULVWF release and perfused with isolated platelets which are able to bind to the ULVWF to form strings. The numbers and lengths of the strings released are related to characteristics of the flow. ULVWF-platelet strings are routinely identified by eye from video recordings captured during experiments and analysed manually using basic NIH image software to determine the number of strings and their lengths. This is a laborious, time-consuming task and a single experiment, often consisting of data from four to six dishes of endothelial cells, can take 2 or more days to analyse. The method described here allows data such as the number and length of strings, the number of platelets per string and the distance between platelets to be extracted. The software reduces analysis time, and more importantly removes user subjectivity, producing highly reproducible results with an error of less than 2% when compared with detailed manual analysis.
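This is not the authors' software, but a minimal sketch of the kind of measurement it automates: label connected bright regions in a binarised frame and report a length proxy for each string. The threshold and pixel scale are hypothetical.

```python
import numpy as np
from scipy import ndimage

def string_lengths(frame: np.ndarray, threshold: float, um_per_px: float = 1.0):
    """Return approximate lengths (micrometres) of bright string-like objects."""
    binary = frame > threshold
    labels, n = ndimage.label(binary)
    lengths = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        # crude length proxy: diagonal of the object's bounding box
        length_px = np.hypot(ys.max() - ys.min(), xs.max() - xs.min())
        lengths.append(length_px * um_per_px)
    return lengths

# Hypothetical frame containing two synthetic "strings".
frame = np.zeros((100, 100))
frame[20, 10:60] = 255    # horizontal string, ~50 px long
frame[40:80, 70] = 255    # vertical string, ~40 px long
print(string_lengths(frame, threshold=128, um_per_px=0.5))
```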
Temporally rendered automatic cloud extraction (TRACE) system
NASA Astrophysics Data System (ADS)
Bodrero, Dennis M.; Yale, James G.; Davis, Roger E.; Rollins, John M.
1999-10-01
Smoke/obscurant testing requires that 2D cloud extent be extracted from visible and thermal imagery. These data are used alone or in combination with 2D data from other aspects to make 3D calculations of cloud properties, including dimensions, volume, centroid, travel, and uniformity. Determining cloud extent from imagery has historically been a time-consuming manual process. To reduce time and cost associated with smoke/obscurant data processing, automated methods to extract cloud extent from imagery were investigated. The TRACE system described in this paper was developed and implemented at U.S. Army Dugway Proving Ground, UT by the Science and Technology Corporation--Acuity Imaging Incorporated team with Small Business Innovation Research funding. TRACE uses dynamic background subtraction and 3D fast Fourier transform as primary methods to discriminate the smoke/obscurant cloud from the background. TRACE has been designed to run on a PC-based platform using Windows. The PC-Windows environment was chosen for portability, to give TRACE the maximum flexibility in terms of its interaction with peripheral hardware devices such as video capture boards, removable media drives, network cards, and digital video interfaces. Video for Windows provides all of the necessary tools for the development of the video capture utility in TRACE and allows for interchangeability of video capture boards without any software changes. TRACE is designed to take advantage of future upgrades in all aspects of its component hardware. A comparison of cloud extent determined by TRACE with manual method is included in this paper.
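A simplified sketch of the dynamic background subtraction idea named above (not the TRACE implementation): maintain a running-average background and mark pixels that deviate from it as cloud. The learning rate and threshold are arbitrary assumptions.

```python
import numpy as np

class BackgroundSubtractor:
    """Exponential running-average background model for grayscale frames."""

    def __init__(self, alpha: float = 0.05, threshold: float = 25.0):
        self.alpha = alpha          # background update rate
        self.threshold = threshold  # intensity difference considered "cloud"
        self.background = None

    def apply(self, frame: np.ndarray) -> np.ndarray:
        frame = frame.astype(float)
        if self.background is None:
            self.background = frame.copy()
        mask = np.abs(frame - self.background) > self.threshold
        # update the background only where no cloud was detected
        self.background = np.where(
            mask, self.background,
            (1 - self.alpha) * self.background + self.alpha * frame)
        return mask

# Hypothetical use: feed successive frames and accumulate the 2D cloud extent.
subtractor = BackgroundSubtractor()
frames = [np.full((120, 160), 80.0), np.full((120, 160), 80.0)]
frames[1][40:70, 50:110] += 60.0  # a synthetic cloud appears in the second frame
extent = subtractor.apply(frames[0]) | subtractor.apply(frames[1])
print(extent.sum(), "cloud pixels")
```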
Assessment of cluster yield components by image analysis.
Diago, Maria P; Tardaguila, Javier; Aleixos, Nuria; Millan, Borja; Prats-Montalban, Jose M; Cubero, Sergio; Blasco, Jose
2015-04-01
Berry weight, berry number and cluster weight are key parameters for yield estimation for the wine and table-grape industries. Current yield prediction methods are destructive, labour-demanding and time-consuming. In this work, a new methodology based on image analysis was developed to determine cluster yield components in a fast and inexpensive way. Clusters of seven different red varieties of grapevine (Vitis vinifera L.) were photographed under laboratory conditions and their cluster yield components manually determined after image acquisition. Two algorithms based on the Canny and the logarithmic image processing approaches were tested to find the contours of the berries in the images prior to berry detection performed by means of the Hough Transform. Results were obtained in two ways: by analysing either a single image of the cluster or using four images per cluster from different orientations. The best results (R² between 69% and 95% in berry detection and between 65% and 97% in cluster weight estimation) were achieved using four images and the Canny algorithm. The capability of the image-analysis-based model to predict berry weight was 84%. The new and low-cost methodology presented here enabled the assessment of cluster yield components, saving time and providing inexpensive information in comparison with current manual methods. © 2014 Society of Chemical Industry.
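The study couples an edge-detection step with the Hough Transform for berry detection; below is a hedged OpenCV sketch of that combination. The parameter values and the image path are placeholders, not those used in the paper.

```python
import cv2
import numpy as np

# Placeholder path; any photograph of a grape cluster on a plain background would do.
image = cv2.imread("cluster.jpg")
if image is None:
    raise FileNotFoundError("cluster.jpg is a placeholder path for this sketch")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)  # suppress specular noise before edge detection

# HoughCircles runs a Canny-style edge step internally (param1 is its upper threshold).
circles = cv2.HoughCircles(
    gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=15,
    param1=100, param2=30, minRadius=8, maxRadius=40)

if circles is not None:
    berries = np.round(circles[0]).astype(int)  # each row: (x, y, radius)
    print(f"detected {len(berries)} candidate berries, "
          f"mean radius {berries[:, 2].mean():.1f} px")
```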
A Novel Method for Automation of 3D Hydro Break Line Generation from LiDAR Data Using MATLAB
NASA Astrophysics Data System (ADS)
Toscano, G. J.; Gopalam, U.; Devarajan, V.
2013-08-01
Water body detection is necessary to generate hydro break lines, which are in turn useful in creating deliverables such as TINs, contours, DEMs from LiDAR data. Hydro flattening follows the detection and delineation of water bodies (lakes, rivers, ponds, reservoirs, streams etc.) with hydro break lines. Manual hydro break line generation is time consuming and expensive. Accuracy and processing time depend on the number of vertices marked for delineation of break lines. Automation with minimal human intervention is desired for this operation. This paper proposes using a novel histogram analysis of LiDAR elevation data and LiDAR intensity data to automatically detect water bodies. Detection of water bodies using elevation information was verified by checking against LiDAR intensity data since the spectral reflectance of water bodies is very small compared with that of land and vegetation in near infra-red wavelength range. Detection of water bodies using LiDAR intensity data was also verified by checking against LiDAR elevation data. False detections were removed using morphological operations and 3D break lines were generated. Finally, a comparison of automatically generated break lines with their semi-automated/manual counterparts was performed to assess the accuracy of the proposed method and the results were discussed.
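A simplified sketch of the detection idea (not the paper's MATLAB implementation): water returns tend to combine low near-infrared intensity with a flat, low elevation, so thresholding both rasters yields a first water mask. The thresholds and the synthetic rasters are assumptions.

```python
import numpy as np

def water_mask(elevation: np.ndarray, intensity: np.ndarray,
               intensity_thresh: float = 20.0, elev_tolerance: float = 0.15):
    """Flag cells that are weakly reflective and close to a common low elevation."""
    low_intensity = intensity < intensity_thresh
    if not low_intensity.any():
        return np.zeros_like(low_intensity, dtype=bool)
    # candidate water level: a low percentile of the elevations of dark cells
    water_level = np.percentile(elevation[low_intensity], 10)
    flat_and_low = np.abs(elevation - water_level) < elev_tolerance
    return low_intensity & flat_and_low

# Hypothetical 1 m rasters: a shallow basin filled with low-intensity returns.
elev = np.fromfunction(lambda r, c: 4e-5 * ((r - 50) ** 2 + (c - 50) ** 2), (100, 100))
inten = np.where(elev < 0.05, 10.0, 60.0)
print(water_mask(elev, inten).sum(), "water cells")
```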
A Hands-on Approach to the Teaching of Consumer Affairs.
ERIC Educational Resources Information Center
de Ruyter, Ko; Widdows, Richard
1992-01-01
In a course titled Computerized Consumer Responses and Information Systems, Purdue University students operate a consumer hotline for their school. They must promote its existence, answer calls, develop reports, produce training manuals, and set parameters for the computer system. (SK)
Unified Software Solution for Efficient SPR Data Analysis in Drug Research
Dahl, Göran; Steigele, Stephan; Hillertz, Per; Tigerström, Anna; Egnéus, Anders; Mehrle, Alexander; Ginkel, Martin; Edfeldt, Fredrik; Holdgate, Geoff; O’Connell, Nichole; Kappler, Bernd; Brodte, Annette; Rawlins, Philip B.; Davies, Gareth; Westberg, Eva-Lotta; Folmer, Rutger H. A.; Heyse, Stephan
2016-01-01
Surface plasmon resonance (SPR) is a powerful method for obtaining detailed molecular interaction parameters. Modern instrumentation with its increased throughput has enabled routine screening by SPR in hit-to-lead and lead optimization programs, and SPR has become a mainstream drug discovery technology. However, the processing and reporting of SPR data in drug discovery are typically performed manually, which is both time-consuming and tedious. Here, we present the workflow concept, design and experiences with a software module relying on a single, browser-based software platform for the processing, analysis, and reporting of SPR data. The efficiency of this concept lies in the immediate availability of end results: data are processed and analyzed upon loading the raw data file, allowing the user to immediately quality control the results. Once completed, the user can automatically report those results to data repositories for corporate access and quickly generate printed reports or documents. The software module has resulted in a very efficient and effective workflow through saved time and improved quality control. We discuss these benefits and show how this process defines a new benchmark in the drug discovery industry for the handling, interpretation, visualization, and sharing of SPR data. PMID:27789754
Data Processing and Quality Evaluation of a Boat-Based Mobile Laser Scanning System
Vaaja, Matti; Kukko, Antero; Kaartinen, Harri; Kurkela, Matti; Kasvi, Elina; Flener, Claude; Hyyppä, Hannu; Hyyppä, Juha; Järvelä, Juha; Alho, Petteri
2013-01-01
Mobile mapping systems (MMSs) are used for mapping topographic and urban features which are difficult and time consuming to measure with other instruments. The benefits of MMSs include efficient data collection and versatile usability. This paper investigates the data processing steps and quality of a boat-based mobile mapping system (BoMMS) data for generating terrain and vegetation points in a river environment. Our aim in data processing was to filter noise points, detect shorelines as well as points below water surface and conduct ground point classification. Previous studies of BoMMS have investigated elevation accuracies and usability in detection of fluvial erosion and deposition areas. The new findings concerning BoMMS data are that the improved data processing approach allows for identification of multipath reflections and shoreline delineation. We demonstrate the possibility to measure bathymetry data in shallow (0–1 m) and clear water. Furthermore, we evaluate for the first time the accuracy of the BoMMS ground points classification compared to manually classified data. We also demonstrate the spatial variations of the ground point density and assess elevation and vertical accuracies of the BoMMS data. PMID:24048340
Sally Ride EarthKAM - Automated Image Geo-Referencing Using Google Earth Web Plug-In
NASA Technical Reports Server (NTRS)
Andres, Paul M.; Lazar, Dennis K.; Thames, Robert Q.
2013-01-01
Sally Ride EarthKAM is an educational program funded by NASA that aims to provide the public the ability to picture Earth from the perspective of the International Space Station (ISS). A computer-controlled camera is mounted on the ISS in a nadir-pointing window; however, timing limitations in the system cause inaccurate positional metadata. Manually correcting images within an orbit allows the positional metadata to be improved using mathematical regressions. The manual correction process is time-consuming and thus, unfeasible for a large number of images. The standard Google Earth program allows for the importing of KML (keyhole markup language) files that previously were created. These KML file-based overlays could then be manually manipulated as image overlays, saved, and then uploaded to the project server where they are parsed and the metadata in the database is updated. The new interface eliminates the need to save, download, open, re-save, and upload the KML files. Everything is processed on the Web, and all manipulations go directly into the database. Administrators also have the control to discard any single correction that was made and validate a correction. This program streamlines a process that previously required several critical steps and was probably too complex for the average user to complete successfully. The new process is theoretically simple enough for members of the public to make use of and contribute to the success of the Sally Ride EarthKAM project. Using the Google Earth Web plug-in, EarthKAM images, and associated metadata, this software allows users to interactively manipulate an EarthKAM image overlay, and update and improve the associated metadata. The Web interface uses the Google Earth JavaScript API along with PHP-PostgreSQL to present the user the same interface capabilities without leaving the Web. The simpler graphical user interface will allow the public to participate directly and meaningfully with EarthKAM. The use of similar techniques is being investigated to place ground-based observations in a Google Mars environment, allowing the MSL (Mars Science Laboratory) Science Team a means to visualize the rover and its environment.
Consumers' use of written product information.
Wiese, Bettina S; Sauer, Jürgen; Rüttinger, Bruno
2004-09-15
Two studies were conducted to investigate the predictive role of person-specific, product-specific, and situation-specific influences on the use of instruction manuals in the field of electrical consumer products. In a laboratory study, 42 participants were observed while putting a vacuum cleaner into operation. Situational primes (i.e., receiving a verbal cue that the packaging contains an instruction manual) increased the probability of the user manual being read. Additional verbal information that the manual contains information on energy-saving behaviours was especially motivating for persons with high environmental concern. Self-report data, collected on a wide range of products, suggest that product complexity is the best predictor of instruction manual use. In a second study with 30 participants, different positions of product labels were compared, i.e. placing the information on the packaging or directly onto the product. Information placed directly onto the product had a significantly higher influence on participants' actual behaviour than providing the same information on the packaging.
Wong, M S; Cheng, J C Y; Wong, M W; So, S F
2005-04-01
A study was conducted to compare the CAD/CAM method with the conventional manual method in the fabrication of spinal orthoses for patients with adolescent idiopathic scoliosis. Ten subjects were recruited for this study. Efficiency analyses of the two methods were performed from the cast filling/digitization process to completion of cast/image rectification. The dimensional changes of the casts/models rectified by the two cast rectification methods were also investigated. The results demonstrated that the CAD/CAM method was faster than the conventional manual method in the studied processes. The mean rectification time of the CAD/CAM method was shorter than that of the conventional manual method by 108.3 min (63.5%). This indicated that the CAD/CAM method took about one third of the time of the conventional manual method to finish cast rectification. In the comparison of cast/image dimensional differences between the conventional manual method and the CAD/CAM method, five major dimensions in each of the five rectified regions, namely the axilla, thoracic, lumbar, abdominal and pelvic regions, were involved. There were no significant dimensional differences (p < 0.05) in 19 out of the 25 studied dimensions. This study demonstrated that the CAD/CAM system could save time in the rectification process and offer a relatively high resemblance in cast rectification as compared with the conventional manual method.
Validation of automatic segmentation of ribs for NTCP modeling.
Stam, Barbara; Peulen, Heike; Rossi, Maddalena M G; Belderbos, José S A; Sonke, Jan-Jakob
2016-03-01
Determination of a dose-effect relation for rib fractures in a large patient group has been limited by the time consuming manual delineation of ribs. Automatic segmentation could facilitate such an analysis. We determine the accuracy of automatic rib segmentation in the context of normal tissue complication probability modeling (NTCP). Forty-one patients with stage I/II non-small cell lung cancer treated with SBRT to 54 Gy in 3 fractions were selected. Using the 4DCT derived mid-ventilation planning CT, all ribs were manually contoured and automatically segmented. Accuracy of segmentation was assessed using volumetric, shape and dosimetric measures. Manual and automatic dosimetric parameters Dx and EUD were tested for equivalence using the Two One-Sided T-test (TOST), and assessed for agreement using Bland-Altman analysis. NTCP models based on manual and automatic segmentation were compared. Automatic segmentation was comparable with the manual delineation in radial direction, but larger near the costal cartilage and vertebrae. Manual and automatic Dx and EUD were significantly equivalent. The Bland-Altman analysis showed good agreement. The two NTCP models were very similar. Automatic rib segmentation was significantly equivalent to manual delineation and can be used for NTCP modeling in a large patient group. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
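As an illustration of the agreement analysis named above, the sketch computes the Bland-Altman bias and limits of agreement plus a simple paired two one-sided t-test (TOST) for equivalence; the dosimetric values and the equivalence margin are invented.

```python
import numpy as np
from scipy import stats

def bland_altman(a: np.ndarray, b: np.ndarray):
    """Bias and 95% limits of agreement between two paired measurement series."""
    diff = a - b
    bias, sd = diff.mean(), diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

def tost_paired(a: np.ndarray, b: np.ndarray, margin: float) -> float:
    """Two one-sided t-tests: p-value for equivalence of paired means within +/- margin."""
    diff = a - b
    se = diff.std(ddof=1) / np.sqrt(len(diff))
    df = len(diff) - 1
    t_lower = (diff.mean() + margin) / se   # H0: mean difference <= -margin
    t_upper = (diff.mean() - margin) / se   # H0: mean difference >= +margin
    p_lower = 1 - stats.t.cdf(t_lower, df)
    p_upper = stats.t.cdf(t_upper, df)
    return max(p_lower, p_upper)            # equivalence if this is below alpha

# Invented paired EUD values (Gy) from manual and automatic rib contours.
rng = np.random.default_rng(1)
manual_eud = rng.normal(25.0, 5.0, size=41)
auto_eud = manual_eud + rng.normal(0.1, 0.5, size=41)
print(bland_altman(manual_eud, auto_eud))
print("TOST p =", round(tost_paired(manual_eud, auto_eud, margin=1.0), 4))
```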
Bogdanov, Anita; Endrész, Valeria; Urbán, Szabolcs; Lantos, Ildikó; Deák, Judit; Burián, Katalin; Önder, Kamil; Ayaydin, Ferhan; Balázs, Péter
2014-01-01
Chlamydiae are obligate intracellular bacteria that propagate in the inclusion, a specific niche inside the host cell. The standard method for counting chlamydiae is immunofluorescent staining and manual counting of chlamydial inclusions. High- or medium-throughput estimation of the reduction in chlamydial inclusions should be the basis of testing antichlamydial compounds and other drugs that positively or negatively influence chlamydial growth, yet low-throughput manual counting is the common approach. To overcome the time-consuming and subjective manual counting, we developed an automatic inclusion-counting system based on a commercially available DNA chip scanner. Fluorescently labeled inclusions are detected by the scanner, and the image is processed by ChlamyCount, a custom plug-in of the ImageJ software environment. ChlamyCount was able to measure the inclusion counts over a 1-log-unit dynamic range with a high correlation to the theoretical counts. ChlamyCount was capable of accurately determining the MICs of the novel antimicrobial compound PCC00213 and the already known antichlamydial antibiotics moxifloxacin and tetracycline. ChlamyCount was also able to measure the chlamydial growth-altering effect of drugs that influence host-bacterium interaction, such as gamma interferon, DEAE-dextran, and cycloheximide. ChlamyCount is an easily adaptable system for testing antichlamydial antimicrobials and other compounds that influence Chlamydia-host interactions. PMID:24189259
Bell, Michael J; Gillespie, Colin S; Swan, Daniel; Lord, Phillip
2012-09-15
Annotations are a key feature of many biological databases, used to convey our knowledge of a sequence to the reader. Ideally, annotations are curated manually, however manual curation is costly, time consuming and requires expert knowledge and training. Given these issues and the exponential increase of data, many databases implement automated annotation pipelines in an attempt to avoid un-annotated entries. Both manual and automated annotations vary in quality between databases and annotators, making assessment of annotation reliability problematic for users. The community lacks a generic measure for determining annotation quality and correctness, which we look at addressing within this article. Specifically we investigate word reuse within bulk textual annotations and relate this to Zipf's Principle of Least Effort. We use the UniProt Knowledgebase (UniProtKB) as a case study to demonstrate this approach since it allows us to compare annotation change, both over time and between automated and manually curated annotations. By applying power-law distributions to word reuse in annotation, we show clear trends in UniProtKB over time, which are consistent with existing studies of quality on free text English. Further, we show a clear distinction between manual and automated analysis and investigate cohorts of protein records as they mature. These results suggest that this approach holds distinct promise as a mechanism for judging annotation quality. Source code is available at the authors website: http://homepages.cs.ncl.ac.uk/m.j.bell1/annotation. phillip.lord@newcastle.ac.uk.
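A minimal sketch of the kind of measurement discussed above: build a rank-frequency distribution of words across annotation texts and estimate the power-law slope on a log-log scale. The example annotations are invented and the least-squares fit is a simplification of the paper's approach.

```python
import re
from collections import Counter

import numpy as np

annotations = [
    "Catalyzes the hydrolysis of ATP coupled with the transport of ions",
    "Putative membrane protein; function unknown",
    "Catalyzes the transfer of a phosphate group to the substrate",
]

# Tokenise, count, and rank words by frequency.
words = re.findall(r"[a-z]+", " ".join(annotations).lower())
counts = np.array(sorted(Counter(words).values(), reverse=True), dtype=float)
ranks = np.arange(1, len(counts) + 1, dtype=float)

# Slope of log(frequency) vs. log(rank); a slope near -1 is the classic Zipf regime.
slope, intercept = np.polyfit(np.log(ranks), np.log(counts), 1)
print(f"estimated power-law exponent: {slope:.2f}")
```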
ERIC Educational Resources Information Center
Allegheny Intermediate Unit, Pittsburgh, PA.
This manual identifies activities and resources for infusing consumer education into English, social studies, science, mathematics, and home economics courses in grades five through eight. The activities are intended to help students recognize their rights and responsibilities as consumers in our society and make intelligent decisions in light of…
Pesticide Devices: A Guide for Consumers
This guide for consumers explains key facts about pesticide devices and how they differ from registered pesticide products. Device producers or registrants should see our Pesticide Registration Manual, Chapter 13 for information.
Faita, Francesco; Gemignani, Vincenzo; Bianchini, Elisabetta; Giannarelli, Chiara; Ghiadoni, Lorenzo; Demi, Marcello
2008-09-01
The purpose of this report is to describe an automatic real-time system for evaluation of the carotid intima-media thickness (CIMT) characterized by 3 main features: minimal interobserver and intraobserver variability, real-time capabilities, and great robustness against noise. One hundred fifty carotid B-mode ultrasound images were used to validate the system. Two skilled operators were involved in the analysis. Agreement with the gold standard, defined as the mean of 2 manual measurements of a skilled operator, and the interobserver and intraobserver variability were quantitatively evaluated by regression analysis and Bland-Altman statistics. The automatic measure of the CIMT showed a mean bias +/- SD of 0.001 +/- 0.035 mm toward the manual measurement. The intraobserver variability, evaluated with Bland-Altman plots, showed a bias that was not significantly different from 0, whereas the SD of the differences was greater in the manual analysis (0.038 mm) than in the automatic analysis (0.006 mm). For interobserver variability, the automatic measurement had a bias that was not significantly different from 0, with a satisfactory SD of the differences (0.01 mm), whereas in the manual measurement, a little bias was present (0.012 mm), and the SD of the differences was noticeably greater (0.044 mm). The CIMT has been accepted as a noninvasive marker of early vascular alteration. At present, the manual approach is largely used to estimate CIMT values. However, that method is highly operator dependent and time-consuming. For these reasons, we developed a new system for the CIMT measurement that conjugates precision with real-time analysis, thus providing considerable advantages in clinical practice.
Magellan Project: Evolving enhanced operations efficiency to maximize science value
NASA Technical Reports Server (NTRS)
Cheuvront, Allan R.; Neuman, James C.; Mckinney, J. Franklin
1994-01-01
Magellan has been one of NASA's most successful spacecraft, returning more science data than all other planetary spacecraft combined. The Magellan Spacecraft Team (SCT) has maximized the science return with innovative operational techniques to overcome anomalies and to perform activities for which the spacecraft was not designed. Commanding the spacecraft was originally time-consuming because the standard development process was built around manual tasks. The Program understood that reducing mission operations costs was essential for an extended mission. Management created an environment which encouraged automation of routine tasks, allowing staff reduction while maximizing the science data returned. Data analysis and trending, command preparation, and command reviews are some of the tasks that were automated. The SCT has accommodated personnel reductions by improving operations efficiency while returning the maximum science data possible.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doak, Justin E.; Ingram, Joe; Johnson, Josh
2016-01-06
In the cyber security operations of a typical organization, data from multiple sources are monitored, and when certain conditions in the data are met, an alert is generated in an alert management system. Analysts inspect these alerts to decide if any deserve promotion to an event requiring further scrutiny. This triage process is manual, time-consuming, and detracts from the in-depth investigation of events. We have created a software system that uses supervised machine learning to automatically prioritize these alerts. In particular we utilize active learning to make efficient use of the pool of unlabeled alerts, thereby improving the performance of our ranking models over passive learning. We have demonstrated the effectiveness of our system on a large, real-world dataset of cyber security alerts.
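A hedged sketch of pool-based active learning with uncertainty sampling, the general strategy described above rather than the authors' system: train on the labelled alerts, score the unlabelled pool, and ask an analyst to label the alert the model is least certain about. The feature vectors and labels are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Synthetic alert features (e.g., source counts, severities); 1 = worth promoting.
X_pool = rng.normal(size=(500, 8))
y_true = (X_pool[:, 0] + 0.5 * X_pool[:, 1] + rng.normal(scale=0.3, size=500) > 0).astype(int)

# Seed the labelled set with a few analyst-labelled alerts from both classes.
labeled = [int(i) for i in np.concatenate(
    [np.where(y_true == 0)[0][:5], np.where(y_true == 1)[0][:5]])]
unlabeled = [i for i in range(500) if i not in labeled]

for _ in range(20):  # each round asks the analyst for one more label
    model = LogisticRegression(max_iter=1000).fit(X_pool[labeled], y_true[labeled])
    proba = model.predict_proba(X_pool[unlabeled])[:, 1]
    most_uncertain = unlabeled[int(np.argmin(np.abs(proba - 0.5)))]
    labeled.append(most_uncertain)        # analyst label is simulated by y_true
    unlabeled.remove(most_uncertain)

print("accuracy on remaining pool:",
      round(model.score(X_pool[unlabeled], y_true[unlabeled]), 3))
```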
The current role of on-line extraction approaches in clinical and forensic toxicology.
Mueller, Daniel M
2014-08-01
In today's clinical and forensic toxicological laboratories, automation is of interest because of its ability to optimize processes, to reduce manual workload and handling errors, and to minimize exposure to potentially infectious samples. Extraction is usually the most time-consuming step; therefore, automation of this step is reasonable. Currently, from the field of clinical and forensic toxicology, methods using the following on-line extraction techniques have been published: on-line solid-phase extraction, turbulent flow chromatography, solid-phase microextraction, microextraction by packed sorbent, single-drop microextraction and on-line desorption of dried blood spots. Most of these published methods are either single-analyte or multicomponent procedures; methods intended for systematic toxicological analysis are relatively scarce. However, the use of on-line extraction will certainly increase in the near future.
Parmar, Chintan; Blezek, Daniel; Estepar, Raul San Jose; Pieper, Steve; Kim, John; Aerts, Hugo J. W. L.
2017-01-01
Purpose Accurate segmentation of lung nodules is crucial in the development of imaging biomarkers for predicting malignancy of the nodules. Manual segmentation is time consuming and affected by inter-observer variability. We evaluated the robustness and accuracy of a publicly available semiautomatic segmentation algorithm that is implemented in the 3D Slicer Chest Imaging Platform (CIP) and compared it with the performance of manual segmentation. Methods CT images of 354 manually segmented nodules were downloaded from the LIDC database. Four radiologists performed the manual segmentation and assessed various nodule characteristics. The semiautomatic CIP segmentation was initialized using the centroid of the manual segmentations, thereby generating four contours for each nodule. The robustness of both segmentation methods was assessed using the region of uncertainty (δ) and the Dice similarity index (DSI). The robustness of the segmentation methods was compared using the Wilcoxon signed-rank test (p_Wilcoxon < 0.05). The Dice similarity index between the manual and CIP segmentations (DSI_Agree) was computed to estimate the accuracy of the semiautomatic contours. Results The median computational time of the CIP segmentation was 10 s. The median CIP and manually segmented volumes were 477 ml and 309 ml, respectively. CIP segmentations were significantly more robust than manual segmentations (median δ_CIP = 14 ml, median DSI_CIP = 99% vs. median δ_manual = 222 ml, median DSI_manual = 82%) with p_Wilcoxon ≈ 10^-16. The agreement between CIP and manual segmentations had a median DSI_Agree of 60%. While 13% (47/354) of the nodules did not require any manual adjustment, minor to substantial manual adjustments were needed for 87% (305/354) of the nodules. CIP segmentations were observed to perform poorly (median DSI_Agree ≈ 50%) for non-/sub-solid nodules with subtle appearances and poorly defined boundaries. Conclusion Semi-automatic CIP segmentation can potentially reduce the physician workload for 13% of nodules owing to its computational efficiency and superior stability compared to manual segmentation. Although manual adjustment is needed for many cases, CIP segmentation provides a preliminary contour for physicians as a starting point. PMID:28594880
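The two statistics used above, the Dice similarity index between a pair of masks and the Wilcoxon signed-rank comparison of paired robustness scores, can be sketched as follows. This is a generic illustration assuming NumPy and SciPy, with toy masks and scores rather than the study's evaluation code.

```python
import numpy as np
from scipy.stats import wilcoxon

def dice(mask_a, mask_b):
    """Dice similarity index between two binary segmentation masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

rng = np.random.default_rng(1)
manual = rng.random((64, 64)) > 0.6                      # toy "manual" mask
semi_auto = np.logical_or(manual, rng.random((64, 64)) > 0.95)
print("DSI:", round(dice(manual, semi_auto), 3))

# Paired per-nodule scores from two methods can be compared with the
# Wilcoxon signed-rank test, as in the study design (toy numbers here).
dsi_manual = rng.uniform(0.70, 0.90, size=30)
dsi_cip = np.clip(dsi_manual + rng.uniform(0.00, 0.15, size=30), 0, 1)
stat, p = wilcoxon(dsi_cip, dsi_manual)
print("Wilcoxon signed-rank p-value:", p)
```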
GASP- General Aviation Synthesis Program. Volume 7: Economics
NASA Technical Reports Server (NTRS)
1978-01-01
The economic analysis includes: manufacturing costs; labor costs; parts costs; operating costs; markups and consumer price. A user's manual for a computer program to calculate the final consumer price is included.
Detection of artery interfaces: a real-time system and its clinical applications
NASA Astrophysics Data System (ADS)
Faita, Francesco; Gemignani, Vincenzo; Bianchini, Elisabetta; Giannarelli, Chiara; Ghiadoni, Lorenzo; Demi, Marcello
2008-03-01
Analyzing the artery mechanics is a crucial issue because of its close relationship with several cardiovascular risk factors, such as hypertension and diabetes. Moreover, most of the work can be carried out by analyzing image sequences obtained with ultrasound, that is, with a non-invasive technique that allows real-time visualization of the observed structures. For this reason, an accurate temporal localization of the main vessel interfaces becomes a central task, for which the manual approach should be avoided since it is rather unreliable and time consuming. Real-time automatic systems are advantageously used to locate the arterial interfaces. The automatic measurement reduces the inter/intra-observer variability with respect to the manual measurement, which unavoidably depends on the experience of the operator. The real-time visual feedback, moreover, guides physicians when looking for the best position of the ultrasound probe, thus increasing the global robustness of the system. The automatic system we developed is a stand-alone video processing system that acquires the analog video signal from the ultrasound equipment, performs all the measurements and shows the results in real time. The localization algorithm of the artery tunics is based on a new mathematical operator (the first order absolute moment) and on a pattern recognition approach. Various clinical applications have been developed on board and validated through comparison with gold-standard techniques: the assessment of intima-media thickness, arterial distension, flow-mediated dilation and pulse wave velocity. In this paper, the results obtained in clinical trials are presented.
NASA Astrophysics Data System (ADS)
Shen, Chien-wen
2009-01-01
In TFT-LCD manufacturing, steps such as visual inspection of panel surface defects still rely heavily on manual operations. As the manual inspection time in TFT-LCD manufacturing can range from 4 hours to 1 day, reliable time forecasting is important for production planning, scheduling and customer response. This study proposes a practical and easy-to-implement prediction model based on Bayesian networks for time estimation of manually operated procedures in TFT-LCD manufacturing. Given the lack of prior knowledge about manual operation time, the necessary path condition and expectation-maximization algorithms are used for structural learning and estimation of conditional probability distributions, respectively. This study also applied Bayesian inference to evaluate the relationships between explanatory variables and manual operation time. In empirical applications of the proposed forecasting model, the Bayesian network approach demonstrates its practicability and predictive reliability.
Automatic detection of cardiac cycle and measurement of the mitral annulus diameter in 4D TEE images
NASA Astrophysics Data System (ADS)
Graser, Bastian; Hien, Maximilian; Rauch, Helmut; Meinzer, Hans-Peter; Heimann, Tobias
2012-02-01
Mitral regurgitation is a widespread problem. For successful surgical treatment, quantification of the mitral annulus, especially its diameter, is essential. Time-resolved 3D transesophageal echocardiography (TEE) is suitable for this task. Yet manual measurement in four dimensions is extremely time consuming, which underscores the need for automatic quantification methods. The method we propose is capable of automatically detecting the cardiac cycle (systole or diastole) for each time step and measuring the mitral annulus diameter. This is done using total variation noise filtering, the graph cut segmentation algorithm and morphological operators. The evaluation used expert measurements on 4D TEE data of 13 patients. The cardiac cycle was detected correctly in 78% of all images, and the mitral annulus diameter was measured with an average error of 3.08 mm. Its fully automatic processing makes the method easy to use in the clinical workflow, and it provides the surgeon with helpful information.
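The processing chain named above (total-variation noise filtering followed by segmentation and morphological cleanup) can be illustrated with a small sketch. This is not the authors' graph-cut implementation; it assumes scikit-image, uses a simple threshold in place of the graph cut, and runs on a synthetic frame standing in for TEE data.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle
from skimage.morphology import binary_closing, disk
from skimage.measure import label, regionprops

# Synthetic noisy frame containing a bright elliptical structure
rng = np.random.default_rng(2)
yy, xx = np.mgrid[0:128, 0:128]
frame = ((xx - 64) ** 2 / 40 ** 2 + (yy - 64) ** 2 / 25 ** 2 < 1).astype(float)
frame += rng.normal(scale=0.4, size=frame.shape)

smooth = denoise_tv_chambolle(frame, weight=0.2)      # total-variation denoising
mask = binary_closing(smooth > 0.5, disk(3))          # threshold + morphological cleanup

# Crude diameter estimate from the largest connected region
largest = max(regionprops(label(mask)), key=lambda r: r.area)
print("major axis length (pixels):", round(largest.major_axis_length, 1))
```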
ERIC Educational Resources Information Center
Hooshyar, Danial; Yousefi, Moslem; Lim, Heuiseok
2018-01-01
Automated content generation for educational games has become an emerging research problem, as manual authoring is often time consuming and costly. In this article, we present a procedural content generation framework that intends to produce educational game content from the viewpoint of both designer and user. This framework generates content by…
Measuring Diameters Of Large Vessels
NASA Technical Reports Server (NTRS)
Currie, James R.; Kissel, Ralph R.; Oliver, Charles E.; Smith, Earnest C.; Redmon, John W., Sr.; Wallace, Charles C.; Swanson, Charles P.
1990-01-01
Computerized apparatus produces accurate results quickly. Apparatus measures diameter of tank or other large cylindrical vessel, without prior knowledge of exact location of cylindrical axis. Produces plot of inner circumference, estimate of true center of vessel, data on radius, diameter of best-fit circle, and negative and positive deviations of radius from circle at closely spaced points on circumference. Eliminates need for time-consuming and error-prone manual measurements.
Generic and Automated Runtime Program Repair
2012-09-01
Software bugs are ubiquitous, and fixing them remains a difficult, time-consuming, and manual…
Consumer Frauds and Deceptions: A Learning Module.
ERIC Educational Resources Information Center
Waddell, Fred E.; And Others
This manual is designed to assist helping professionals responsible for developing consumer education programs for older adults on the topic of consumer fraud and deception. In a modular presentation format, the materials address the following areas of concern: (1) types of frauds and deceptions such as money schemes, mail order fraud,…
ERIC Educational Resources Information Center
Office of Consumer Affairs, Washington, DC.
This handbook is intended to help consumers exercise their rights in the marketplace in three ways. It shows how to communicate more effectively with manufacturers, retailers, and service providers; it is a self-help manual for resolving individual consumer complaints; and it lists helpful sources of assistance. The handbook has two sections. Part…
COST EVALUATION OF AUTOMATED AND MANUAL POST-CONSUMER PLASTIC BOTTLE SORTING SYSTEMS
This project evaluates, on the basis of performance and cost, two Automated BottleSort® sorting systems for post-consumer commingled plastic containers developed by Magnetic Separation Systems. This study compares the costs to sort mixed bales of post-consumer plastic at these t...
Stein, Dan J; Phillips, Katharine A
2013-05-17
The revision of the Diagnostic and Statistical Manual of Mental Disorders (DSM) provides a useful opportunity to revisit debates about the nature of psychiatric classification. An important debate concerns the involvement of mental health consumers in revisions of the classification. One perspective argues that psychiatric classification is a scientific process undertaken by scientific experts and that including consumers in the revision process is merely pandering to political correctness. A contrasting perspective is that psychiatric classification is a process driven by a range of different values and that the involvement of patients and patient advocates would enhance this process. Here we draw on our experiences with input from the public during the deliberations of the Obsessive Compulsive-Spectrum Disorders subworkgroup of DSM-5, to help make the argument that psychiatric classification does require reasoned debate on a range of different facts and values, and that it is appropriate for scientist experts to review their nosological recommendations in the light of rigorous consideration of patient experience and feedback.
Model-based setup assistant for progressive tools
NASA Astrophysics Data System (ADS)
Springer, Robert; Gräler, Manuel; Homberg, Werner; Henke, Christian; Trächtler, Ansgar
2018-05-01
In the field of production systems, globalization and technological progress lead to increasing requirements regarding part quality, delivery time and costs. Hence, today's production is challenged much more than a few years ago: it has to be very flexible and produce small batch sizes economically to satisfy consumers' demands and avoid unnecessary stock. Furthermore, a trend towards increasing functional integration continues to lead to an ongoing miniaturization of sheet metal components. In the electric connectivity industry, for example, miniaturized connectors are manufactured by progressive tools, which are usually used for very large batches. These tools are installed in mechanical presses and then set up by a technician, who has to manually adjust a wide range of punch-bending operations. Disturbances like material thickness, temperatures, lubrication or tool wear complicate the setup procedure. Given the increasing demand for production flexibility, this time-consuming process has to be carried out more and more often. In this paper, a model-based setup assistant is proposed as a solution and is applied, as an example, in combination with a progressive tool. First, progressive tools and, more specifically, their setup process are described, and the associated challenges are pointed out. Based on this, a systematic process to set up the machines is introduced. Subsequently, the process is investigated with an FE analysis regarding the effects of the disturbances. In the next step, design of experiments is used to systematically develop a regression model of the system's behaviour. This model is integrated within an optimization in order to calculate optimal machine parameters and the subsequent adjustment of the progressive tool required by the disturbances. Finally, the assistant is tested in a production environment and the results are discussed.
Establishing a gold standard for manual cough counting: video versus digital audio recordings
Smith, Jaclyn A; Earis, John E; Woodcock, Ashley A
2006-01-01
Background Manual cough counting is time-consuming and laborious; however it is the standard to which automated cough monitoring devices must be compared. We have compared manual cough counting from video recordings with manual cough counting from digital audio recordings. Methods We studied 8 patients with chronic cough, overnight in laboratory conditions (diagnoses were 5 asthma, 1 rhinitis, 1 gastro-oesophageal reflux disease and 1 idiopathic cough). Coughs were recorded simultaneously using a video camera with infrared lighting and digital sound recording. The numbers of coughs in each 8 hour recording were counted manually, by a trained observer, in real time from the video recordings and using audio-editing software from the digital sound recordings. Results The median cough frequency was 17.8 (IQR 5.9–28.7) cough sounds per hour in the video recordings and 17.7 (6.0–29.4) coughs per hour in the digital sound recordings. There was excellent agreement between the video and digital audio cough rates; mean difference of -0.3 coughs per hour (SD ± 0.6), 95% limits of agreement -1.5 to +0.9 coughs per hour. Video recordings had poorer sound quality even in controlled conditions and can only be analysed in real time (8 hours per recording). Digital sound recordings required 2–4 hours of analysis per recording. Conclusion Manual counting of cough sounds from digital audio recordings has excellent agreement with simultaneous video recordings in laboratory conditions. We suggest that ambulatory digital audio recording is therefore ideal for validating future cough monitoring devices, as this can be performed in the patient's own environment. PMID:16887019
Mirion--a software package for automatic processing of mass spectrometric images.
Paschke, C; Leisner, A; Hester, A; Maass, K; Guenther, S; Bouschen, W; Spengler, B
2013-08-01
Mass spectrometric imaging (MSI) techniques are of growing interest for the Life Sciences. In recent years, the development of new instruments employing ion sources that are tailored for spatial scanning allowed the acquisition of large data sets. A subsequent data processing, however, is still a bottleneck in the analytical process, as a manual data interpretation is impossible within a reasonable time frame. The transformation of mass spectrometric data into spatial distribution images of detected compounds turned out to be the most appropriate method to visualize the results of such scans, as humans are able to interpret images faster and easier than plain numbers. Image generation, thus, is a time-consuming and complex yet very efficient task. The free software package "Mirion," presented in this paper, allows the handling and analysis of data sets acquired by mass spectrometry imaging. Mirion can be used for image processing of MSI data obtained from many different sources, as it uses the HUPO-PSI-based standard data format imzML, which is implemented in the proprietary software of most of the mass spectrometer companies. Different graphical representations of the recorded data are available. Furthermore, automatic calculation and overlay of mass spectrometric images promotes direct comparison of different analytes for data evaluation. The program also includes tools for image processing and image analysis.
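To make the image-generation step concrete: an MSI ion image is built by summing, for every pixel, the intensities within a small m/z window around a target mass. The sketch below is a generic, hedged illustration with a tiny synthetic dataset; in practice an imzML file would be read with a parser library rather than constructed in memory as done here.

```python
import numpy as np

def ion_image(spectra, coords, shape, target_mz, tol=0.05):
    """Sum intensities within +/- tol of target_mz for each pixel to form an ion image."""
    img = np.zeros(shape)
    for (x, y), (mz, intensity) in zip(coords, spectra):
        window = np.abs(mz - target_mz) <= tol
        img[y, x] = intensity[window].sum()
    return img

# Tiny synthetic 4x4 dataset: each pixel has its own m/z axis and intensities
rng = np.random.default_rng(3)
coords = [(x, y) for y in range(4) for x in range(4)]
spectra = [(np.sort(rng.uniform(100, 1000, 200)), rng.random(200)) for _ in coords]

img = ion_image(spectra, coords, (4, 4), target_mz=500.0, tol=5.0)
print(img.round(2))
```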
Calcium (Ca2+) waves data calibration and analysis using image processing techniques
2013-01-01
Background Calcium (Ca2+) propagates within tissues serving as an important information carrier. In particular, cilia beat frequency in oviduct cells is partially regulated by Ca2+ changes. Thus, measuring the calcium density and characterizing the traveling wave plays a key role in understanding biological phenomena. However, current methods to measure propagation velocities and other wave characteristics involve several manual or time-consuming procedures. This limits the amount of information that can be extracted, and the statistical quality of the analysis. Results Our work provides a framework based on image processing procedures that enables a fast, automatic and robust characterization of data from two-filter fluorescence Ca2+ experiments. We calculate the mean velocity of the wave-front, and use theoretical models to extract meaningful parameters like wave amplitude, decay rate and time of excitation. Conclusions Measurements done by different operators showed a high degree of reproducibility. This framework is also extended to single-filter fluorescence experiments, allowing higher sampling rates and thus increased accuracy in velocity measurements. PMID:23679062
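A mean wave-front velocity of the kind computed above can be estimated by locating the front in each frame and fitting its position against time. The sketch below is a simplified, hypothetical version of that idea (NumPy assumed, synthetic image stack); the real framework additionally fits theoretical wave models.

```python
import numpy as np

def wavefront_velocity(stack, threshold, px_um=1.0, dt_s=0.1):
    """Estimate mean wave-front velocity (um/s) from a (time, y, x) image stack."""
    fronts = []
    for t, frame in enumerate(stack):
        cols = np.where((frame > threshold).any(axis=0))[0]
        if cols.size:
            fronts.append((t, cols.max()))          # furthest column reached so far
    times, pos = np.array(fronts, dtype=float).T
    slope = np.polyfit(times, pos, 1)[0]            # pixels per frame
    return slope * px_um / dt_s

# Synthetic wave advancing 2 pixels per frame across a 20 x 50 field
stack = np.zeros((10, 20, 50))
for t in range(10):
    stack[t, :, : 2 * t + 1] = 1.0

print("velocity:", wavefront_velocity(stack, 0.5, px_um=2.0, dt_s=0.05), "um/s")
```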
Improving sensor data analysis through diverse data source integration
NASA Astrophysics Data System (ADS)
Casper, Jennifer; Albuquerque, Ronald; Hyland, Jeremy; Leveille, Peter; Hu, Jing; Cheung, Eddy; Mauer, Dan; Couture, Ronald; Lai, Barry
2009-05-01
Daily sensor data volumes are increasing from gigabytes to multiple terabytes. The manpower and resources needed to analyze the increasing amount of data are not growing at the same rate. Current volumes of diverse data, both live streaming and historical, are not fully analyzed. Analysts are mostly left to analyze the individual data sources manually. This is both time consuming and mentally exhausting. Expanding data collections only exacerbate this problem. Improved data management techniques and analysis methods are required to process the increasing volumes of historical and live streaming data sources simultaneously. Improved techniques are needed to reduce an analyst's decision response time and to enable more intelligent and immediate situation awareness. This paper describes the Sensor Data and Analysis Framework (SDAF) system built to provide analysts with the ability to pose integrated queries on diverse live and historical data sources, and to plug in needed algorithms for upstream processing and filtering. The SDAF system was inspired by input and feedback from field analysts and experts. This paper presents SDAF's capabilities, implementation, and the reasoning behind implementation decisions. Finally, lessons learned from preliminary tests and deployments are captured for future work.
EMG Processing Based Measures of Fatigue Assessment during Manual Lifting.
Shair, E F; Ahmad, S A; Marhaban, M H; Mohd Tamrin, S B; Abdullah, A R
2017-01-01
Manual lifting is one of the common practices used in the industries to transport or move objects to a desired place. Nowadays, even though mechanized equipment is widely available, manual lifting is still considered an essential way to perform material handling tasks. Improper lifting strategies may contribute to musculoskeletal disorders (MSDs), where overexertion contributes as the highest factor. To overcome this problem, the electromyography (EMG) signal is used to monitor the workers' muscle condition and to find the maximum lifting load, lifting height and number of repetitions that the workers are able to handle before experiencing fatigue, so as to avoid overexertion. Past researchers have introduced several EMG processing techniques and different EMG features that represent fatigue indices in the time, frequency, and time-frequency domains. The impact of EMG processing-based measures in fatigue assessment during manual lifting is reviewed in this paper. It is believed that this paper will greatly benefit researchers who need a bird's-eye view of the biosignal processing techniques currently available, thus helping them determine the best possible techniques for lifting applications.
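A widely used frequency-domain fatigue index of the kind reviewed here is the median frequency of the EMG power spectrum, which shifts downward as a muscle fatigues. The sketch below computes it with a Welch periodogram; it assumes NumPy/SciPy and uses synthetic signals, so it illustrates the measure rather than any specific method from the review.

```python
import numpy as np
from scipy.signal import welch

def median_frequency(emg, fs):
    """Median frequency of the EMG power spectrum, a common fatigue index."""
    f, pxx = welch(emg, fs=fs, nperseg=1024)
    cum = np.cumsum(pxx)
    return f[np.searchsorted(cum, cum[-1] / 2.0)]

# Synthetic EMG-like signals: fatigue typically shifts power toward lower frequencies
fs = 1000.0
t = np.arange(0, 5, 1 / fs)
rng = np.random.default_rng(4)
fresh = np.sin(2 * np.pi * 120 * t) + 0.5 * rng.normal(size=t.size)
fatigued = np.sin(2 * np.pi * 60 * t) + 0.5 * rng.normal(size=t.size)

print("MDF fresh   :", round(median_frequency(fresh, fs), 1), "Hz")
print("MDF fatigued:", round(median_frequency(fatigued, fs), 1), "Hz")
```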
Fatigue Life Assessment of 65Si7 Leaf Springs: A Comparative Study
Arora, Vinkel Kumar; Bhushan, Gian; Aggarwal, M. L.
2014-01-01
Experimental fatigue life prediction of leaf springs is a time-consuming process, and engineers working in the field continually face the challenge of formulating alternative methods of fatigue life assessment. The work presented in this paper provides alternative methods for fatigue life assessment of leaf springs. A 65Si7 light commercial vehicle leaf spring is chosen for this study. The experimental fatigue life and load rate are determined on a full-scale leaf spring testing machine. Four alternative methods of fatigue life assessment are presented. In the first, the SAE spring design manual approach is used to establish the fatigue test stroke, and the fatigue life is predicted from the intersection of the maximum and initial stress. The second is a graphical method based on the modified Goodman criterion. In the third, code is written in FORTRAN for fatigue life assessment based on an analytical technique. The fourth uses computer-aided engineering tools: the CAD model of the leaf spring is prepared in SolidWorks and analyzed using ANSYS, and suitable contact and meshing elements are proposed. The method that provides a fatigue life closest to the experimental value while consuming the least time is recommended. PMID:27379327
System design for 3D wound imaging using low-cost mobile devices
NASA Astrophysics Data System (ADS)
Sirazitdinova, Ekaterina; Deserno, Thomas M.
2017-03-01
The state-of-the-art method of wound assessment is a manual, imprecise and time-consuming procedure. Performed by clinicians, it has limited reproducibility and accuracy, large time consumption and high costs. Novel technologies such as laser scanning microscopy, multi-photon microscopy, optical coherence tomography and hyper-spectral imaging, as well as devices relying on structured light sensors, make accurate wound assessment possible. However, such methods have limitations due to high costs and may lack portability and availability. In this paper, we present a low-cost wound assessment system and architecture for fast and accurate cutaneous wound assessment using inexpensive consumer smartphone devices. Computer vision techniques are applied either on the device or the server to reconstruct wounds in 3D as dense models, which are generated from images taken with the built-in single camera of a smartphone device. The system architecture includes imaging (smartphone), processing (smartphone or PACS) and storage (PACS) devices. It supports tracking over time by alignment of 3D models, color correction using a reference color card placed into the scene and automatic segmentation of wound regions. Using our system, we are able to detect and document quantitative characteristics of chronic wounds, including size, depth, volume and rate of healing, as well as qualitative characteristics such as color, presence of necrosis and type of involved tissue.
Knowledge Support and Automation for Performance Analysis with PerfExplorer 2.0
Huck, Kevin A.; Malony, Allen D.; Shende, Sameer; ...
2008-01-01
The integration of scalable performance analysis in parallel development tools is difficult. The potential size of data sets and the need to compare results from multiple experiments present a challenge to manage and process the information. Simply to characterize the performance of parallel applications running on potentially hundreds of thousands of processor cores requires new scalable analysis techniques. Furthermore, many exploratory analysis processes are repeatable and could be automated, but are now implemented as manual procedures. In this paper, we will discuss the current version of PerfExplorer, a performance analysis framework which provides dimension reduction, clustering and correlation analysis of individual trials of large dimensions, and can perform relative performance analysis between multiple application executions. PerfExplorer analysis processes can be captured in the form of Python scripts, automating what would otherwise be time-consuming tasks. We will give examples of large-scale analysis results, and discuss the future development of the framework, including the encoding and processing of expert performance rules, and the increasing use of performance metadata.
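Since PerfExplorer workflows are scripted in Python, a dimension-reduction-plus-clustering step of the kind mentioned above can be outlined generically. The sketch below uses scikit-learn on synthetic per-thread counter data; it is not PerfExplorer's API, just an illustration of the analysis pattern.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Synthetic per-thread performance metrics (rows: threads, columns: counters)
rng = np.random.default_rng(5)
metrics = np.vstack([
    rng.normal(loc=[10.0, 1.0, 0.2], scale=0.5, size=(64, 3)),  # compute-bound group
    rng.normal(loc=[4.0, 6.0, 1.5], scale=0.5, size=(64, 3)),   # communication-bound group
])

# Dimension reduction followed by clustering of the threads
reduced = PCA(n_components=2).fit_transform(metrics)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(reduced)
print("cluster sizes:", np.bincount(labels))
```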
Asou, Hiroya; Imada, N; Sato, T
2010-06-20
In coronary MR angiography (CMRA), cardiac motion worsens image quality. To improve image quality, detection of cardiac motion, especially of individual coronary motion, is very important. Usually, scan delay and duration are determined manually by the operator. We developed a new evaluation method to calculate the static time of an individual coronary artery. First, coronary cine MRI was acquired at the level of about 3 cm below the aortic valve (80 images/R-R). The chronological change of the signal in each pixel of the images was evaluated with a Fourier transformation. Noise reduction with subtraction and extraction processes was then applied. To extract structures with greater motion, such as the coronary arteries, morphological filtering and labeling were added. Using these image processing steps, individual coronary motion was extracted and the individual coronary static time was calculated automatically. We compared the ordinary manual method and the new automated method in 10 healthy volunteers. The coronary static time calculated with our method was shorter than that of the ordinary manual method, and the scan time became about 10% longer than that of the ordinary method. Image quality was improved with our method. Our automated detection method for coronary static time based on chronological Fourier transformation has the potential to improve CMRA image quality with simple processing.
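The core idea, a temporal Fourier transform applied pixel by pixel so that strongly moving structures stand out, can be sketched in a few lines. The example below is a hypothetical NumPy illustration on a synthetic cine stack, not the authors' processing chain (which adds subtraction, morphological filtering and labeling steps).

```python
import numpy as np

def motion_map(cine):
    """Per-pixel temporal FFT magnitude (DC removed): high values mark moving structures."""
    spec = np.abs(np.fft.rfft(cine, axis=0))   # FFT along the time axis
    return spec[1:].sum(axis=0)                # drop the static (DC) component

# Synthetic cine stack (80 frames) with one oscillating and one static bright pixel
frames = np.zeros((80, 32, 32))
t = np.arange(80)
frames[:, 16, 16] = 1 + np.sin(2 * np.pi * t / 20)   # "coronary-like" moving pixel
frames[:, 8, 8] = 1.0                                # static pixel

mm = motion_map(frames)
print("moving pixel score:", round(mm[16, 16], 1), "| static pixel score:", round(mm[8, 8], 1))
```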
Hao, Jie; Astle, William; De Iorio, Maria; Ebbels, Timothy M D
2012-08-01
Nuclear Magnetic Resonance (NMR) spectra are widely used in metabolomics to obtain metabolite profiles in complex biological mixtures. Common methods used to assign and estimate concentrations of metabolites involve either an expert manual peak fitting or extra pre-processing steps, such as peak alignment and binning. Peak fitting is very time consuming and is subject to human error. Conversely, alignment and binning can introduce artefacts and limit immediate biological interpretation of models. We present the Bayesian automated metabolite analyser for NMR spectra (BATMAN), an R package that deconvolutes peaks from one-dimensional NMR spectra, automatically assigns them to specific metabolites from a target list and obtains concentration estimates. The Bayesian model incorporates information on characteristic peak patterns of metabolites and is able to account for shifts in the position of peaks commonly seen in NMR spectra of biological samples. It applies a Markov chain Monte Carlo algorithm to sample from a joint posterior distribution of the model parameters and obtains concentration estimates with reduced error compared with conventional numerical integration and comparable to manual deconvolution by experienced spectroscopists. http://www1.imperial.ac.uk/medicine/people/t.ebbels/ t.ebbels@imperial.ac.uk.
Automatic graphene transfer system for improved material quality and efficiency
Boscá, Alberto; Pedrós, Jorge; Martínez, Javier; Palacios, Tomás; Calle, Fernando
2016-01-01
In most applications based on chemical vapor deposition (CVD) graphene, the transfer from the growth to the target substrate is a critical step for the final device performance. Manual procedures are time consuming and depend on handling skills, whereas existing automatic roll-to-roll methods work well for flexible substrates but tend to induce mechanical damage in rigid ones. A new system that automatically transfers CVD graphene to an arbitrary target substrate has been developed. The process is based on the all-fluidic manipulation of the graphene to avoid mechanical damage, strain and contamination, and on the combination of capillary action and electrostatic repulsion between the graphene and its container to ensure a centered sample on top of the target substrate. The improved carrier mobility and yield of the automatically transferred graphene, as compared to that of manually transferred material, are demonstrated by the optical and electrical characterization of field-effect transistors fabricated on both materials. In particular, 70% higher mobility values, a 30% decrease in unintentional doping and a 10% strain reduction are achieved. The system has been developed for lab-scale transfer and proved to be scalable for industrial applications. PMID:26860260
Study on Consumer Opposition to Exporting Recyclable Wastes
NASA Astrophysics Data System (ADS)
Suzuki, Yoshiyuki; Koizumi, Kunishige; Zhou, Weisheng
Trans-boundary trade of recyclable wastes such as waste copper from Japan to China has increased rapidly because of resource demands driven by economic growth. These wastes are recycled at high rates thanks to China's manual recycling process, carried out by many low-wage migrant workers from rural districts. China benefits by supplying jobs to many migrant workers and obtaining cheap resources. However, Japanese consumers may have some opposition to exporting end-of-pipe home appliance wastes to foreign countries. From a path analysis of a questionnaire given to Japanese consumers, it became clear that their reluctance stems from anxiety about illegal dumping, the labor environment in the importing country and the destruction of the ecosystem. A conjoint analysis shows that willingness to pay the recycling fee decreases by 1,625 yen (equal to 34% of the current recycling fee of 4,630 yen) when choosing global recycling as opposed to domestic recycling, suggesting that consumers would rather recycle domestically than globally.
NASA Astrophysics Data System (ADS)
Barlow, Steven J.
1986-09-01
The Air Force needs a better method of designing new and retrofit heating, ventilating and air conditioning (HVAC) control systems. Air Force engineers currently use manual design/predict/verify procedures taught at the Air Force Institute of Technology, School of Civil Engineering, HVAC Control Systems course. These existing manual procedures are iterative and time-consuming. The objectives of this research were to: (1) Locate and, if necessary, modify an existing computer-based method for designing and analyzing HVAC control systems that is compatible with the HVAC Control Systems manual procedures, or (2) Develop a new computer-based method of designing and analyzing HVAC control systems that is compatible with the existing manual procedures. Five existing computer packages were investigated in accordance with the first objective: MODSIM (for modular simulation), HVACSIM (for HVAC simulation), TRNSYS (for transient system simulation), BLAST (for building load and system thermodynamics) and Elite Building Energy Analysis Program. None were found to be compatible or adaptable to the existing manual procedures, and consequently, a prototype of a new computer method was developed in accordance with the second research objective.
Three-dimensional murine airway segmentation in micro-CT images
NASA Astrophysics Data System (ADS)
Shi, Lijun; Thiesse, Jacqueline; McLennan, Geoffrey; Hoffman, Eric A.; Reinhardt, Joseph M.
2007-03-01
Thoracic imaging for small animals has emerged as an important tool for monitoring pulmonary disease progression and therapy response in genetically engineered animals. Micro-CT is becoming the standard thoracic imaging modality in small animal imaging because it can produce high-resolution images of the lung parenchyma, vasculature, and airways. Segmentation, measurement, and visualization of the airway tree is an important step in pulmonary image analysis. However, manual analysis of the airway tree in micro-CT images can be extremely time-consuming since a typical dataset is usually on the order of several gigabytes in size. Automated and semi-automated tools for micro-CT airway analysis are desirable. In this paper, we propose an automatic airway segmentation method for in vivo micro-CT images of the murine lung and validate our method by comparing the automatic results to manual tracing. Our method is based primarily on grayscale morphology. The results show good visual matches between manually segmented and automatically segmented trees. The average true positive volume fraction compared to manual analysis is 91.61%. The overall runtime for the automatic method is on the order of 30 minutes per volume compared to several hours to a few days for manual analysis.
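As a toy illustration of the grayscale-morphology idea behind the segmentation (dark, air-filled lumens highlighted by the residue of a grayscale closing), the sketch below runs on a synthetic 2D slice. It assumes SciPy and is not the authors' full 3D algorithm.

```python
import numpy as np
from scipy import ndimage as ndi

# Synthetic slice: bright parenchyma-like background with a dark, airway-like band
rng = np.random.default_rng(6)
slice_ = rng.normal(loc=0.7, scale=0.05, size=(128, 128))
slice_[60:68, :] = 0.1

# A grayscale closing fills dark structures smaller than the structuring element,
# so the closing residue (black top-hat) highlights candidate airway lumens.
closed = ndi.grey_closing(slice_, size=(15, 15))
residue = closed - slice_
airway_mask = residue > 0.4        # threshold chosen for this toy example

labeled, n = ndi.label(airway_mask)
print("candidate airway components:", n)
```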
Huang, Chih-Sheng; Yang, Wen-Yu; Chuang, Chun-Hsiang; Wang, Yu-Kai
2018-01-01
Electroencephalogram (EEG) signals are usually contaminated with various artifacts, such as signal associated with muscle activity, eye movement, and body motion, which have a noncerebral origin. The amplitude of such artifacts is larger than that of the electrical activity of the brain, so they mask the cortical signals of interest, resulting in biased analysis and interpretation. Several blind source separation methods have been developed to remove artifacts from the EEG recordings. However, the iterative process for measuring separation within multichannel recordings is computationally intractable. Moreover, manually excluding the artifact components requires a time-consuming offline process. This work proposes a real-time artifact removal algorithm that is based on canonical correlation analysis (CCA), feature extraction, and the Gaussian mixture model (GMM) to improve the quality of EEG signals. The CCA was used to decompose EEG signals into components followed by feature extraction to extract representative features and GMM to cluster these features into groups to recognize and remove artifacts. The feasibility of the proposed algorithm was demonstrated by effectively removing artifacts caused by blinks, head/body movement, and chewing from EEG recordings while preserving the temporal and spectral characteristics of the signals that are important to cognitive research. PMID:29599950
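One common way to realize the CCA step described above is "BSS-CCA": canonical correlation between the recording and a one-sample-delayed copy of itself yields components ordered by temporal autocorrelation, and low-autocorrelation components are treated as muscle-like artifacts. The sketch below follows that generic recipe with scikit-learn and synthetic data; the 0.8 threshold and the simple remixing step are assumptions for illustration, not the authors' feature-extraction and GMM pipeline.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(7)
n, ch = 2000, 6
brain = np.cumsum(rng.normal(size=(n, ch)), axis=0) * 0.01           # slow, autocorrelated activity
muscle = rng.normal(size=(n, 1)) * np.array([[0, 0, 0, 0, 0, 3.0]])  # broadband artifact on one channel
eeg = brain + muscle

# BSS-CCA: canonical correlation between the signal and a one-sample-delayed copy
X, Y = eeg[:-1], eeg[1:]
cca = CCA(n_components=ch).fit(X, Y)
sources = cca.transform(X)                                # canonical components of X

autocorr = np.array([np.corrcoef(s[:-1], s[1:])[0, 1] for s in sources.T])
artifact = autocorr < 0.8                                 # heuristic threshold (assumption)

# Remix after zeroing the artifact components (least-squares mixing matrix)
mixing, *_ = np.linalg.lstsq(sources, X - X.mean(0), rcond=None)
cleaned = (sources * ~artifact) @ mixing + X.mean(0)
print("components flagged as artifact:", int(artifact.sum()))
```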
Facts About Drug Abuse: Trainer's Manual.
ERIC Educational Resources Information Center
Link, William E.; And Others
Following an introductory survey of the course, this modular drug abuse trainer's manual contains all course-specified materials. These materials are: the course goals and objectives; time/activity sheets; trainer guidelines, process notes, and exercise instructions; detailed lectures and supplementary information. The time/activity sheets contain…
ERIC Educational Resources Information Center
Allegheny Intermediate Unit, Pittsburgh, PA.
This manual contains activities and resources for infusing consumer education into English, business, mathematics, social studies, science, and home economics courses in grades nine through 12. The activities are intended to help students become more knowledgeable and efficient in managing their personal and collective economic affairs. The…
Consumer's Resource Handbook. 1988 Edition.
ERIC Educational Resources Information Center
Office of Consumer Affairs, Washington, DC.
This handbook is intended to help consumers exercise their rights in the marketplace in three ways: (1) it shows how to communicate more effectively with manufacturers, retailers, and service providers; (2) it is a self-help manual for resolving in dividual consumer complaints; and (3) it lists helpful sources of assistance. The handbook has two…
P-TRAP: a Panicle TRAit Phenotyping tool.
A L-Tam, Faroq; Adam, Helene; Anjos, António dos; Lorieux, Mathias; Larmande, Pierre; Ghesquière, Alain; Jouannic, Stefan; Shahbazkia, Hamid Reza
2013-08-29
In crops, inflorescence complexity and the shape and size of the seed are among the most important characters that influence yield. For example, rice panicles vary considerably in the number and order of branches, elongation of the axis, and the shape and size of the seed. Manual low-throughput phenotyping methods are time consuming, and the results are unreliable. However, high-throughput image analysis of the qualitative and quantitative traits of rice panicles is essential for understanding the diversity of the panicle as well as for breeding programs. This paper presents P-TRAP software (Panicle TRAit Phenotyping), a free open source application for high-throughput measurements of panicle architecture and seed-related traits. The software is written in Java and can be used with different platforms (the user-friendly Graphical User Interface (GUI) uses Netbeans Platform 7.3). The application offers three main tools: a tool for the analysis of panicle structure, a spikelet/grain counting tool, and a tool for the analysis of seed shape. The three tools can be used independently or simultaneously for analysis of the same image. Results are then reported in the Extensible Markup Language (XML) and Comma Separated Values (CSV) file formats. Images of rice panicles were used to evaluate the efficiency and robustness of the software. Compared to data obtained by manual processing, P-TRAP produced reliable results in a much shorter time. In addition, manual processing is not repeatable because dry panicles are vulnerable to damage. The software is very useful, practical and collects much more data than human operators. P-TRAP is a new open source software that automatically recognizes the structure of a panicle and the seeds on the panicle in numeric images. The software processes and quantifies several traits related to panicle structure, detects and counts the grains, and measures their shape parameters. In short, P-TRAP offers both efficient results and a user-friendly environment for experiments. The experimental results showed very good accuracy compared to field operator, expert verification and well-known academic methods.
Systematic tracking, visualizing, and interpreting of consumer feedback for drinking water quality.
Dietrich, Andrea M; Phetxumphou, Katherine; Gallagher, Daniel L
2014-12-01
Consumer feedback and complaints provide utilities with useful data about consumer perceptions of aesthetic water quality in the distribution system. This research provides a systematic approach to interpret consumer complaint water quality data provided by four water utilities that recorded consumer complaints, but did not routinely process the data. The utilities tended to write down a myriad of descriptors that were too numerous or contained a variety of spellings so that electronic "harvesting" was not possible and much manual labor was required to categorize the complaints into major areas, such as those suggested by the Drinking Water Taste and Odor Wheel or existing check-sheets. When the consumer complaint data were categorized and visualized using spider (or radar) and run-time plots, major taste, odor, and appearance patterns emerged that clarified the issue and could provide guidance to the utility on the nature and extent of the problem. A caveat is that while humans readily identify visual issues with the water, such as color, cloudiness, or rust, describing specific tastes and odors in drinking water is acknowledged to be much more difficult for humans to achieve without training. This was demonstrated with two utility groups and a group of consumers identifying the odors of orange, 2-methylisoborneol, and dimethyl trisulfide. All three groups readily and succinctly identified the familiar orange odor. The two utility groups were much more able to identify the musty odor of 2-methylisoborneol, which was likely familiar to them from their work with raw and finished water. Dimethyl trisulfide, a garlic-onion odor associated with sulfur compounds in drinking water, was the least familiar to all three groups, although the laboratory staff did best. These results indicate that utility personnel should be tolerant of consumers who can assuredly say the water is different, but cannot describe the problem. They also indicate that a taste-and-odor (T&O) program at a utility would benefit from identification of aesthetic issues in water. Copyright © 2014 Elsevier Ltd. All rights reserved.
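A spider (radar) plot of categorized complaint counts, of the kind used above to make patterns visible, can be produced with a few lines of matplotlib. The category names and counts below are hypothetical placeholders, not utility data.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical monthly complaint counts per aesthetic category
categories = ["earthy/musty", "chlorinous", "metallic", "rusty color", "cloudy", "sulfur"]
counts = [12, 30, 8, 22, 15, 5]

# Close the polygon by repeating the first value
angles = np.linspace(0, 2 * np.pi, len(categories), endpoint=False).tolist()
values = counts + counts[:1]
angles += angles[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles, values, linewidth=2)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(categories)
ax.set_title("Consumer complaints by aesthetic category")
plt.savefig("complaint_radar.png")   # or plt.show()
```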
Schlaeger, Sarah; Freitag, Friedemann; Klupp, Elisabeth; Dieckmeyer, Michael; Weidlich, Dominik; Inhuber, Stephanie; Deschauer, Marcus; Schoser, Benedikt; Bublitz, Sarah; Montagnese, Federica; Zimmer, Claus; Rummeny, Ernst J; Karampinos, Dimitrios C; Kirschke, Jan S; Baum, Thomas
2018-01-01
Magnetic resonance imaging (MRI) can non-invasively assess muscle anatomy, exercise effects and pathologies with different underlying causes such as neuromuscular diseases (NMD). Quantitative MRI including fat fraction mapping using chemical shift encoding-based water-fat MRI has emerged for reliable determination of muscle volume and fat composition. The data analysis of water-fat images requires segmentation of the different muscles which has been mainly performed manually in the past and is a very time consuming process, currently limiting the clinical applicability. An automatization of the segmentation process would lead to a more time-efficient analysis. In the present work, the manually segmented thigh magnetic resonance imaging database MyoSegmenTUM is presented. It hosts water-fat MR images of both thighs of 15 healthy subjects and 4 patients with NMD with a voxel size of 3.2x2x4 mm3 with the corresponding segmentation masks for four functional muscle groups: quadriceps femoris, sartorius, gracilis, hamstrings. The database is freely accessible online at https://osf.io/svwa7/?view_only=c2c980c17b3a40fca35d088a3cdd83e2. The database is mainly meant as ground truth which can be used as training and test dataset for automatic muscle segmentation algorithms. The segmentation allows extraction of muscle cross sectional area (CSA) and volume. Proton density fat fraction (PDFF) of the defined muscle groups from the corresponding images and quadriceps muscle strength measurements/neurological muscle strength rating can be used for benchmarking purposes.
Fully Automatic Speech-Based Analysis of the Semantic Verbal Fluency Task.
König, Alexandra; Linz, Nicklas; Tröger, Johannes; Wolters, Maria; Alexandersson, Jan; Robert, Phillipe
2018-06-08
Semantic verbal fluency (SVF) tests are routinely used in screening for mild cognitive impairment (MCI). In this task, participants name as many items as possible of a semantic category under a time constraint. Clinicians measure task performance manually by summing the number of correct words and errors. More fine-grained variables add valuable information to clinical assessment, but are time-consuming to derive. Therefore, the aim of this study is to investigate whether automatic analysis of the SVF could provide these metrics as accurately as manual analysis and thus support qualitative screening of neurocognitive impairment. SVF data were collected from 95 older people with MCI (n = 47), Alzheimer's or related dementias (ADRD; n = 24), and healthy controls (HC; n = 24). All data were annotated manually and automatically with clusters and switches. The obtained metrics were validated using a classifier to distinguish HC, MCI, and ADRD. Automatically extracted clusters and switches were highly correlated (r = 0.9) with manually established values, and performed as well on the classification task separating HC from persons with ADRD (area under curve [AUC] = 0.939) and MCI (AUC = 0.758). The results show that it is possible to automate fine-grained analyses of SVF data for the assessment of cognitive decline. © 2018 S. Karger AG, Basel.
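The fine-grained SVF metrics mentioned above, cluster sizes and switches, can be computed automatically once each word is mapped to a semantic subcategory. The sketch below is a minimal illustration; the subcategory lookup table and word list are invented for the example and stand in for the study's linguistic resources.

```python
# Hypothetical subcategory lookup for an "animals" fluency task (illustrative only)
SUBCATEGORY = {
    "dog": "pets", "cat": "pets", "hamster": "pets",
    "lion": "savanna", "zebra": "savanna", "giraffe": "savanna",
    "salmon": "fish", "trout": "fish",
}

def clusters_and_switches(words):
    """Count semantic clusters (runs of the same subcategory) and switches between them."""
    runs = []
    for w in words:
        cat = SUBCATEGORY.get(w.lower())
        if not runs or runs[-1][0] != cat:
            runs.append([cat, 1])      # start a new cluster
        else:
            runs[-1][1] += 1           # extend the current cluster
    cluster_sizes = [n for _, n in runs]
    return cluster_sizes, len(runs) - 1

words = ["dog", "cat", "lion", "zebra", "giraffe", "salmon", "hamster"]
sizes, switches = clusters_and_switches(words)
print("cluster sizes:", sizes, "| switches:", switches)
```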
Assessing consumption of bioactive micro-particles by filter-feeding Asian carp
Jensen, Nathan R.; Amberg, Jon J.; Luoma, James A.; Walleser, Liza R.; Gaikowski, Mark P.
2012-01-01
Silver carp Hypophthalmichthys molitrix (SVC) and bighead carp H. nobilis (BHC) have impacted waters in the US since their escape. Current chemical controls for aquatic nuisance species are non-selective. Development of a bioactive micro-particle that exploits filter-feeding habits of SVC or BHC could result in a new control tool. It is not fully understood if SVC or BHC will consume bioactive micro-particles. Two discrete trials were performed to: 1) evaluate if SVC and BHC consume the candidate micro-particle formulation; 2) determine what size they consume; 3) establish methods to evaluate consumption of filter-feeders for future experiments. Both SVC and BHC were exposed to small (50-100 μm) and large (150-200 μm) micro-particles in two 24-h trials. Particles in water were counted electronically and manually (microscopy). Particles on gill rakers were counted manually and intestinal tracts inspected for the presence of micro-particles. In Trial 1, both manual and electronic count data confirmed reductions of both size particles; SVC appeared to remove more small particles than large; more BHC consumed particles; SVC had fewer overall particles in their gill rakers than BHC. In Trial 2, electronic counts confirmed reductions of both size particles; both SVC and BHC consumed particles, yet more SVC consumed micro-particles compared to BHC. Of the fish that ate micro-particles, SVC consumed more than BHC. It is recommended to use multiple metrics to assess consumption of candidate micro-particles by filter-feeders when attempting to distinguish differential particle consumption. This study has implications for developing micro-particles for species-specific delivery of bioactive controls to help fisheries, provides some methods for further experiments with bioactive micro-particles, and may also have applications in aquaculture.
Automation Improves Schedule Quality and Increases Scheduling Efficiency for Residents.
Perelstein, Elizabeth; Rose, Ariella; Hong, Young-Chae; Cohn, Amy; Long, Micah T
2016-02-01
Medical resident scheduling is difficult due to multiple rules, competing educational goals, and ever-evolving graduate medical education requirements. Despite this, schedules are typically created manually, consuming hours of work, producing schedules of varying quality, and yielding negative consequences for resident morale and learning. To determine whether computerized decision support can improve the construction of residency schedules, saving time and improving schedule quality. The Optimized Residency Scheduling Assistant was designed by a team from the University of Michigan Department of Industrial and Operations Engineering. It was implemented in the C.S. Mott Children's Hospital Pediatric Emergency Department in the 2012-2013 academic year. The 4 metrics of schedule quality that were compared between the 2010-2011 and 2012-2013 academic years were the incidence of challenging shift transitions, the incidence of shifts following continuity clinics, the total shift inequity, and the night shift inequity. All scheduling rules were successfully incorporated. Average schedule creation time fell from 22 to 28 hours to 4 to 6 hours per month, and 3 of 4 metrics of schedule quality significantly improved. For the implementation year, the incidence of challenging shift transitions decreased from 83 to 14 (P < .01); the incidence of postclinic shifts decreased from 72 to 32 (P < .01); and the SD of night shifts dropped by 55.6% (P < .01). This automated shift scheduling system improves the current manual scheduling process, reducing time spent and improving schedule quality. Embracing such automated tools can benefit residency programs with shift-based scheduling needs.
Gross, Brooks A.; Walsh, Christine M.; Turakhia, Apurva A.; Booth, Victoria; Mashour, George; Poe, Gina R.
2009-01-01
Manual state scoring of physiological recordings in sleep studies is time-consuming, resulting in a data backlog, research delays and increased personnel costs. We developed MATLAB-based software to automate scoring of sleep/waking states in rats, potentially extendable to other animals, from a variety of recording systems. The software contains two programs, Sleep Scorer and Auto-Scorer, for manual and automated scoring. Auto-Scorer is a logic-based program that displays power spectral densities of an electromyographic signal and σ, δ, and θ frequency bands of an electroencephalographic signal, along with the δ/θ ratio and σ ×θ, for every epoch. The user defines thresholds from the training file state definitions which the Auto-Scorer uses with logic to discriminate the state of every epoch in the file. Auto-Scorer was evaluated by comparing its output to manually scored files from 6 rats under 2 experimental conditions by 3 users. Each user generated a training file, set thresholds, and autoscored the 12 files into 4 states (waking, non-REM, transition-to-REM, and REM sleep) in ¼ the time required to manually score the file. Overall performance comparisons between Auto-Scorer and manual scoring resulted in a mean agreement of 80.24 +/− 7.87%, comparable to the average agreement among 3 manual scorers (83.03 +/− 4.00%). There was no significant difference between user-user and user-Auto-Scorer agreement ratios. These results support the use of our open-source Auto-Scorer, coupled with user review, to rapidly and accurately score sleep/waking states from rat recordings. PMID:19615408
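The threshold-and-logic idea described above (EMG power plus EEG band features such as the δ/θ ratio and σ×θ deciding the state of each epoch) can be sketched as a small rule function. The rules and threshold values below are illustrative assumptions, not the actual Auto-Scorer logic.

```python
def score_epoch(emg_power, delta, theta, sigma, thr):
    """Classify one epoch into waking / NREM / transition-to-REM / REM using simple
    threshold logic on EMG power and EEG band features (illustrative rules only)."""
    if emg_power > thr["emg"]:
        return "waking"
    if delta / theta > thr["delta_theta"]:
        return "nrem"
    if sigma * theta > thr["sigma_theta"]:
        return "transition"
    return "rem"

thr = {"emg": 1.0, "delta_theta": 1.5, "sigma_theta": 0.8}   # thresholds from a training file
epochs = [
    dict(emg_power=2.3, delta=1.0, theta=0.8, sigma=0.4),    # high EMG -> waking
    dict(emg_power=0.3, delta=2.4, theta=1.0, sigma=0.5),    # high delta/theta -> NREM
    dict(emg_power=0.2, delta=1.0, theta=1.2, sigma=0.9),    # high sigma*theta -> transition
    dict(emg_power=0.1, delta=0.6, theta=1.4, sigma=0.3),    # low EMG, theta-dominated -> REM
]
print([score_epoch(**e, thr=thr) for e in epochs])
```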
Surendranath, V; Albrecht, V; Hayhurst, J D; Schöne, B; Robinson, J; Marsh, S G E; Schmidt, A H; Lange, V
2017-07-01
Recent years have seen a rapid increase in the discovery of novel allelic variants of the human leukocyte antigen (HLA) genes. Commonly, only the exons encoding the peptide binding domains of novel HLA alleles are submitted. As a result, the IPD-IMGT/HLA Database lacks sequence information outside those regions for the majority of known alleles. This has implications for the application of the new sequencing technologies, which deliver sequence data often covering the complete gene. As these technologies simplify the characterization of the complete gene regions, it is desirable for novel alleles to be submitted as full-length sequences to the database. However, the manual annotation of full-length alleles and the generation of specific formats required by the sequence repositories is prone to error and time consuming. We have developed TypeLoader to address both these facets. With only the full-length sequence as a starting point, Typeloader performs automatic sequence annotation and subsequently handles all steps involved in preparing the specific formats for submission with very little manual intervention. TypeLoader is routinely used at the DKMS Life Science Lab and has aided in the successful submission of more than 900 novel HLA alleles as full-length sequences to the European Nucleotide Archive repository and the IPD-IMGT/HLA Database with a 95% reduction in the time spent on annotation and submission when compared with handling these processes manually. TypeLoader is implemented as a web application and can be easily installed and used on a standalone Linux desktop system or within a Linux client/server architecture. TypeLoader is downloadable from http://www.github.com/DKMS-LSL/typeloader. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Improved 3D live-wire method with application to 3D CT chest image analysis
NASA Astrophysics Data System (ADS)
Lu, Kongkuo; Higgins, William E.
2006-03-01
The definition of regions of interest (ROIs), such as suspect cancer nodules or lymph nodes in 3D CT chest images, is often difficult because of the complexity of the phenomena that give rise to them. Manual slice tracing has been used widely for years for such problems, because it is easy to implement and guaranteed to work. But the manual method is extremely time-consuming, especially for high-resolution 3D images, which may have hundreds of slices, and it is subject to operator biases. Numerous automated image-segmentation methods have been proposed, but they are generally strongly application dependent, and even the "most robust" methods have difficulty in defining complex anatomical ROIs. To address this problem, the semi-automatic interactive paradigm referred to as "live wire" segmentation has been proposed by researchers. In live-wire segmentation, the human operator interactively defines an ROI's boundary guided by an active automated method which suggests what to define. This process in general is far faster, more reproducible and accurate than manual tracing, while, at the same time, permitting the definition of complex ROIs having ill-defined boundaries. We propose a 2D live-wire method employing an improved cost function over previous work. In addition, we define a new 3D live-wire formulation that enables rapid definition of 3D ROIs. The method only requires the human operator to consider a few slices in general. Experimental results indicate that the new 2D and 3D live-wire approaches are efficient, allow for high reproducibility, and are reliable for 2D and 3D object segmentation.
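At its core, a 2D live-wire tool computes a minimum-cost path between the operator's seed point and the current cursor position over a per-pixel cost image derived from edge strength. The sketch below implements that core with a plain Dijkstra search on a toy cost image; it is a generic illustration, not the improved cost function proposed in the paper.

```python
import heapq
import numpy as np

def live_wire_path(cost, seed, target):
    """Minimum-cost path from seed to target on a per-pixel cost image (Dijkstra);
    the boundary 'snaps' to low-cost (high-gradient) pixels."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[seed] = 0.0
    heap = [(0.0, seed)]
    while heap:
        d, (y, x) = heapq.heappop(heap)
        if (y, x) == target:
            break
        if d > dist[y, x]:
            continue
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if (dy or dx) and 0 <= ny < h and 0 <= nx < w:
                    nd = d + cost[ny, nx] * (1.414 if dy and dx else 1.0)
                    if nd < dist[ny, nx]:
                        dist[ny, nx] = nd
                        prev[(ny, nx)] = (y, x)
                        heapq.heappush(heap, (nd, (ny, nx)))
    # Backtrack from target to seed
    path, node = [target], target
    while node != seed:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Toy cost image: low cost along a diagonal "edge", high cost elsewhere
cost = np.ones((20, 20))
for i in range(20):
    cost[i, i] = 0.05
print(len(live_wire_path(cost, (0, 0), (19, 19))), "pixels on the extracted boundary")
```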
Mathieu, Renaud; Aryal, Jagannath; Chong, Albert K
2007-11-20
Effective assessment of biodiversity in cities requires detailed vegetation maps. To date, most remote sensing of urban vegetation has focused on thematically coarse landcover products. Detailed habitat maps are created by manual interpretation of aerial photographs, but this is time consuming and costly at large scale. To address this issue, we tested the effectiveness of object-based classifications that use automated image segmentation to extract meaningful ground features from imagery. We applied these techniques to very high resolution multispectral Ikonos images to produce vegetation community maps in Dunedin City, New Zealand. An Ikonos image was orthorectified and a multi-scale segmentation algorithm used to produce a hierarchical network of image objects. The upper level included four coarse strata: industrial/commercial (commercial buildings), residential (houses and backyard private gardens), vegetation (vegetation patches larger than 0.8/1 ha), and water. We focused on the vegetation stratum that was segmented at a more detailed level to extract and classify fifteen classes of vegetation communities. The first classification yielded a moderate overall classification accuracy (64%, κ = 0.52), which led us to consider a simplified classification with ten vegetation classes. The overall classification accuracy from the simplified classification was 77% with a κ value close to the excellent range (κ = 0.74). These results compared favourably with similar studies in other environments. We conclude that this approach does not provide maps as detailed as those produced by manually interpreting aerial photographs, but it can still extract ecologically significant classes. It is an efficient way to generate accurate and detailed maps in significantly shorter time. The final map accuracy could be improved by integrating segmentation, automated and manual classification in the mapping process, especially when considering important vegetation classes with limited spectral contrast.
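The accuracy figures reported above (overall accuracy and the kappa coefficient) come from a confusion matrix between reference and classified labels. The short sketch below computes both for a toy three-class matrix; the counts are invented for illustration.

```python
import numpy as np

def accuracy_and_kappa(confusion):
    """Overall accuracy and Cohen's kappa from a (reference x classified) confusion matrix."""
    cm = np.asarray(confusion, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                                  # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2  # chance agreement
    return po, (po - pe) / (1 - pe)

# Toy 3-class confusion matrix (e.g. three vegetation classes; illustrative counts)
cm = [[50, 5, 5],
      [4, 40, 6],
      [6, 8, 36]]
acc, kappa = accuracy_and_kappa(cm)
print(f"overall accuracy = {acc:.2f}, kappa = {kappa:.2f}")
```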
iElectrodes: A Comprehensive Open-Source Toolbox for Depth and Subdural Grid Electrode Localization.
Blenkmann, Alejandro O; Phillips, Holly N; Princich, Juan P; Rowe, James B; Bekinschtein, Tristan A; Muravchik, Carlos H; Kochen, Silvia
2017-01-01
The localization of intracranial electrodes is a fundamental step in the analysis of invasive electroencephalography (EEG) recordings in research and clinical practice. The conclusions reached from the analysis of these recordings rely on the accuracy of electrode localization in relationship to brain anatomy. However, currently available techniques for localizing electrodes from magnetic resonance (MR) and/or computerized tomography (CT) images are time consuming and/or limited to particular electrode types or shapes. Here we present iElectrodes, an open-source toolbox that provides robust and accurate semi-automatic localization of both subdural grids and depth electrodes. Using pre- and post-implantation images, the method takes 2-3 min to localize the coordinates in each electrode array and automatically number the electrodes. The proposed pre-processing pipeline allows one to work in a normalized space and to automatically obtain anatomical labels of the localized electrodes without neuroimaging experts. We validated the method with data from 22 patients implanted with a total of 1,242 electrodes. We show that localization distances were within 0.56 mm of those achieved by experienced manual evaluators. iElectrodes provided additional advantages in terms of robustness (even with severe perioperative cerebral distortions), speed (less than half the operator time compared to expert manual localization), simplicity, utility across multiple electrode types (surface and depth electrodes) and all brain regions.
NASA Astrophysics Data System (ADS)
Krappe, Sebastian; Wittenberg, Thomas; Haferlach, Torsten; Münzenmayer, Christian
2016-03-01
The morphological differentiation of bone marrow is fundamental for the diagnosis of leukemia. Currently, the counting and classification of the different types of bone marrow cells is done manually using bright field microscopy. This is a time-consuming, subjective, tedious and error-prone process. Furthermore, repeated examinations of a slide may yield intra- and inter-observer variances. For that reason, a computer-assisted diagnosis system for bone marrow differentiation is being pursued. In this work we focus (a) on a new method for the separation of nucleus and plasma parts and (b) on a knowledge-based hierarchical tree classifier for the differentiation of bone marrow cells into 16 different classes. Classification trees are easily interpretable and understandable and provide a classification together with an explanation. Using classification trees, expert knowledge (i.e. knowledge about similar classes and cell lines in the tree model of hematopoiesis) is integrated into the structure of the tree. The proposed segmentation method is evaluated with more than 10,000 manually segmented cells. For the evaluation of the proposed hierarchical classifier, more than 140,000 automatically segmented bone marrow cells are used. Future automated solutions for the morphological analysis of bone marrow smears could apply such an approach for the pre-classification of bone marrow cells and thereby shorten the examination time.
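The authors' knowledge-based hierarchical tree integrates expert knowledge of hematopoiesis; the minimal sketch below only illustrates how an interpretable tree classifier over per-cell features could pre-classify segmented cells. The feature names and training files are hypothetical placeholders, not the published feature set.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical per-cell features (columns of X): nucleus area, nucleus/plasma
# area ratio, mean nucleus intensity, plasma texture contrast.
X_train = np.load("cell_features_train.npy")     # shape (n_cells, 4), placeholder file
y_train = np.load("cell_classes_train.npy")      # integer labels for the 16 classes

tree = DecisionTreeClassifier(max_depth=8, class_weight="balanced", random_state=0)
tree.fit(X_train, y_train)

# Trees remain interpretable: the learned rules can be printed and checked
# against hematological expert knowledge before use in pre-classification.
print(export_text(tree, feature_names=[
    "nucleus_area", "nucleus_plasma_ratio", "nucleus_intensity", "plasma_contrast"]))
```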
Interactive approach to segment organs at risk in radiotherapy treatment planning
NASA Astrophysics Data System (ADS)
Dolz, Jose; Kirisli, Hortense A.; Viard, Romain; Massoptier, Laurent
2014-03-01
Accurate delineation of organs at risk (OAR) is required for radiation treatment planning (RTP), but it is a very time consuming and tedious task. The clinical use of image guided radiation therapy (IGRT) is becoming increasingly popular, which increases the need for (semi-)automatic methods to delineate the OAR. In this work, an interactive segmentation approach to delineate OAR is proposed and validated. The method is based on the combination of the watershed transformation, which groups small areas of similar intensities into homogeneous labels, and a graph cuts approach, which uses these labels to create the graph. Segmentation information can be added in any view - axial, sagittal or coronal - making the interaction with the algorithm easy and fast. This information is then propagated within the whole volume, providing a spatially coherent result. Manual delineations made by experts of 6 OAR - lungs, kidneys, liver, spleen, heart and aorta - over a set of 9 computed tomography (CT) scans were used as the reference standard to validate the proposed approach. With a maximum of 4 interactions, a Dice similarity coefficient (DSC) higher than 0.87 was obtained, which demonstrates that only a few interactions are required to achieve results similar to those obtained manually. The integration of this method in the RTP process may save a considerable amount of time and reduce the annotation complexity.
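As a sketch of only the watershed stage of such an interactive pipeline (the graph-cut propagation step is omitted), the following snippet grows organ and background labels on a single CT slice from operator scribbles using scikit-image; the scribble interface and parameters are assumptions, not the published implementation.

```python
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed

def label_from_scribbles(ct_slice, fg_scribble, bg_scribble):
    """Marker-controlled watershed on one CT slice.

    fg_scribble / bg_scribble are boolean masks of operator strokes drawn
    inside and outside the organ; the watershed floods the gradient image
    from these markers and groups similar intensities into coherent labels.
    """
    gradient = sobel(ct_slice.astype(float))
    markers = np.zeros(ct_slice.shape, dtype=np.int32)
    markers[bg_scribble] = 1          # label 1: background
    markers[fg_scribble] = 2          # label 2: organ of interest
    labels = watershed(gradient, markers)
    return labels == 2                # binary organ mask for this slice
```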
Wiesmann, Veit; Bergler, Matthias; Palmisano, Ralf; Prinzen, Martin; Franz, Daniela; Wittenberg, Thomas
2017-03-18
Manual assessment and evaluation of fluorescent micrograph cell experiments is time-consuming and tedious. Automated segmentation pipelines can ensure efficient and reproducible evaluation and analysis with constant high quality for all images of an experiment. Such cell segmentation approaches are usually validated and rated in comparison to manually annotated micrographs. Nevertheless, manual annotations are prone to errors and display inter- and intra-observer variability, which influences the validation results of automated cell segmentation pipelines. We present a new approach to simulate fluorescent cell micrographs that provides an objective ground truth for the validation of cell segmentation methods. The cell simulation was evaluated twofold: (1) an expert observer study shows that the proposed approach generates realistic fluorescent cell micrograph simulations; (2) an automated segmentation pipeline applied to the simulated fluorescent cell micrographs reproduces the segmentation performance obtained by the same pipeline on real fluorescent cell micrographs. The proposed simulation approach produces realistic fluorescent cell micrographs with corresponding ground truth. The simulated data is suited to evaluating image segmentation pipelines more efficiently and reproducibly than is possible with manually annotated real micrographs.
Lin, Steve; Turgulov, Anuar; Taher, Ahmed; Buick, Jason E; Byers, Adam; Drennan, Ian R; Hu, Samantha; J Morrison, Laurie
2016-10-01
Research and quality assurance on cardiopulmonary resuscitation (CPR) process measures has traditionally been limited to the first 5 minutes of resuscitation because of the significant costs in time, resources, and personnel of manual data abstraction. CPR performance may change over time during prolonged resuscitations, which represents a significant knowledge gap. Moreover, the CPR process measure output of currently available commercial software is difficult to analyze. The objective was to develop and validate a software program to help automate the abstraction and transfer of CPR process measures data from electronic defibrillators for complete episodes of cardiac arrest resuscitation. We developed a software program to facilitate and help automate CPR data abstraction and transfer from electronic defibrillators for entire resuscitation episodes. Using an intermediary Extensible Markup Language (XML) export file, the automated software transfers CPR process measures data (electrocardiogram [ECG] number, CPR start time, number of ventilations, number of chest compressions, compression rate per minute, compression depth per minute, compression fraction, and end-tidal CO2 per minute). We performed an internal validation of the software program on 50 randomly selected cardiac arrest cases with resuscitation durations between 15 and 60 minutes. CPR process measures were manually abstracted and transferred independently by two trained data abstractors and by the automated software program, followed by manual interpretation of raw ECG tracings, treatment interventions, and patient events. Error rates and the time needed for data abstraction, transfer, and interpretation were measured for both manual and automated methods and compared against an additional independent reviewer. A total of 9,826 data points were each abstracted by the two abstractors and by the software program. Manual data abstraction resulted in a total of six errors (0.06%) compared to zero errors by the software program. The mean ± SD time per case for manual data abstraction was 20.3 ± 2.7 minutes compared to 5.3 ± 1.4 minutes using the software program (p = 0.003). We developed and validated an automated software program that efficiently abstracts and transfers CPR process measures data from electronic defibrillators for complete cardiac arrest episodes. This software will enable future cardiac arrest studies and quality assurance programs to evaluate the impact of CPR process measures during prolonged resuscitations. © 2016 by the Society for Academic Emergency Medicine.
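The defibrillator export schema is not described in the abstract, so the sketch below only illustrates the general approach of flattening an intermediary XML export into an analyzable per-minute table; every element and attribute name used here is a hypothetical placeholder.

```python
import csv
import xml.etree.ElementTree as ET

def export_minutes_to_csv(xml_path, csv_path):
    """Flatten a per-minute CPR export into a CSV table.

    The element and attribute names (<Minute>, Compressions, DepthMM, ...)
    are placeholders for whatever the defibrillator's XML export provides.
    """
    root = ET.parse(xml_path).getroot()
    with open(csv_path, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["minute", "compressions", "rate", "depth_mm",
                         "ventilations", "compression_fraction", "etco2"])
        for minute in root.iter("Minute"):
            writer.writerow([
                minute.get("index"),
                minute.findtext("Compressions"),
                minute.findtext("Rate"),
                minute.findtext("DepthMM"),
                minute.findtext("Ventilations"),
                minute.findtext("CompressionFraction"),
                minute.findtext("EtCO2"),
            ])
```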
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heeswijk, Miriam M. van; Department of Surgery, Maastricht University Medical Centre, Maastricht; Lambregts, Doenja M.J., E-mail: d.lambregts@nki.nl
Purpose: Diffusion-weighted imaging (DWI) tumor volumetry is promising for rectal cancer response assessment, but an important drawback is that manual per-slice tumor delineation can be highly time consuming. This study investigated whether manual DWI-volumetry can be reproduced using a (semi)automated segmentation approach. Methods and Materials: Seventy-nine patients underwent magnetic resonance imaging (MRI) that included DWI (highest b value [b1000 or b1100]) before and after chemoradiation therapy (CRT). Tumor volumes were assessed on b1000 (or b1100) DWI before and after CRT by means of (1) automated segmentation (by 2 inexperienced readers), (2) semiautomated segmentation (manual adjustment of the volumes obtained by method 1 by 2 radiologists), and (3) manual segmentation (by 2 radiologists); this last assessment served as the reference standard. Intraclass correlation coefficients (ICC) and Dice similarity indices (DSI) were calculated to evaluate agreement between different methods and observers. Measurement times (from a radiologist's perspective) were recorded for each method. Results: Tumor volumes were not significantly different among the 3 methods, either before or after CRT (P=.08 to .92). ICCs compared to manual segmentation were 0.80 to 0.91 and 0.53 to 0.66 before and after CRT, respectively, for the automated segmentation and 0.91 to 0.97 and 0.61 to 0.75, respectively, for the semiautomated method. Interobserver agreement (ICC) pre and post CRT was 0.82 and 0.59 for automated segmentation, 0.91 and 0.73 for semiautomated segmentation, and 0.91 and 0.75 for manual segmentation, respectively. Mean DSI between the automated and semiautomated method were 0.83 and 0.58 pre-CRT and post-CRT, respectively; DSI between the automated and manual segmentation were 0.68 and 0.42 and 0.70 and 0.41 between the semiautomated and manual segmentation, respectively. Median measurement time for the radiologists was 0 seconds (pre- and post-CRT) for the automated method, 41 to 69 seconds (pre-CRT) and 60 to 67 seconds (post-CRT) for the semiautomated method, and 180 to 296 seconds (pre-CRT) and 84 to 91 seconds (post-CRT) for the manual method. Conclusions: DWI volumetry using a semiautomated segmentation approach is promising and a potentially time-saving alternative to manual tumor delineation, particularly for primary tumor volumetry. Once further optimized, it could be a helpful tool for tumor response assessment in rectal cancer.
van Heeswijk, Miriam M; Lambregts, Doenja M J; van Griethuysen, Joost J M; Oei, Stanley; Rao, Sheng-Xiang; de Graaff, Carla A M; Vliegen, Roy F A; Beets, Geerard L; Papanikolaou, Nikos; Beets-Tan, Regina G H
2016-03-15
Diffusion-weighted imaging (DWI) tumor volumetry is promising for rectal cancer response assessment, but an important drawback is that manual per-slice tumor delineation can be highly time consuming. This study investigated whether manual DWI-volumetry can be reproduced using a (semi)automated segmentation approach. Seventy-nine patients underwent magnetic resonance imaging (MRI) that included DWI (highest b value [b1000 or b1100]) before and after chemoradiation therapy (CRT). Tumor volumes were assessed on b1000 (or b1100) DWI before and after CRT by means of (1) automated segmentation (by 2 inexperienced readers), (2) semiautomated segmentation (manual adjustment of the volumes obtained by method 1 by 2 radiologists), and (3) manual segmentation (by 2 radiologists); this last assessment served as the reference standard. Intraclass correlation coefficients (ICC) and Dice similarity indices (DSI) were calculated to evaluate agreement between different methods and observers. Measurement times (from a radiologist's perspective) were recorded for each method. Tumor volumes were not significantly different among the 3 methods, either before or after CRT (P=.08 to .92). ICCs compared to manual segmentation were 0.80 to 0.91 and 0.53 to 0.66 before and after CRT, respectively, for the automated segmentation and 0.91 to 0.97 and 0.61 to 0.75, respectively, for the semiautomated method. Interobserver agreement (ICC) pre and post CRT was 0.82 and 0.59 for automated segmentation, 0.91 and 0.73 for semiautomated segmentation, and 0.91 and 0.75 for manual segmentation, respectively. Mean DSI between the automated and semiautomated method were 0.83 and 0.58 pre-CRT and post-CRT, respectively; DSI between the automated and manual segmentation were 0.68 and 0.42 and 0.70 and 0.41 between the semiautomated and manual segmentation, respectively. Median measurement time for the radiologists was 0 seconds (pre- and post-CRT) for the automated method, 41 to 69 seconds (pre-CRT) and 60 to 67 seconds (post-CRT) for the semiautomated method, and 180 to 296 seconds (pre-CRT) and 84 to 91 seconds (post-CRT) for the manual method. DWI volumetry using a semiautomated segmentation approach is promising and a potentially time-saving alternative to manual tumor delineation, particularly for primary tumor volumetry. Once further optimized, it could be a helpful tool for tumor response assessment in rectal cancer. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Collins, J.; Riegler, G.; Schrader, H.; Tinz, M.
2015-04-01
The Geo-intelligence division of Airbus Defence and Space and the German Aerospace Center (DLR) have partnered to produce the first fully global, high-accuracy Digital Surface Model (DSM) using SAR data from the twin satellite constellation: TerraSAR-X and TanDEM-X. The DLR is responsible for the processing and distribution of the TanDEM-X elevation model for the world's scientific community, while Airbus DS is responsible for the commercial production and distribution of the data, under the brand name WorldDEM. For the provision of a consumer-ready product, Airbus DS undertakes several steps to reduce the effect of radar-specific artifacts in the WorldDEM data. These artifacts can be divided into two categories: terrain and hydrological. Airbus DS has developed proprietary software and processes to detect and correct these artifacts in the most efficient manner. Some processes are fully automatic, while others require manual or semi-automatic control by operators.
Cordelières, Fabrice P; Petit, Valérie; Kumasaka, Mayuko; Debeir, Olivier; Letort, Véronique; Gallagher, Stuart J; Larue, Lionel
2013-01-01
Cell migration is a key biological process with a role in both physiological and pathological conditions. Locomotion of cells during embryonic development is essential for their correct positioning in the organism; immune cells have to migrate and circulate in response to injury. Failure of cells to migrate, or an inappropriate acquisition of migratory capacities, can result in severe defects such as altered pigmentation, skull and limb abnormalities during development, and defective wound repair, immunosuppression or tumor dissemination. The ability to accurately analyze and quantify cell migration is important for our understanding of development, homeostasis and disease. In vitro cell tracking experiments, using primary or established cell cultures, are often used to study migration, as cells can quickly and easily be genetically or chemically manipulated. Images of the cells are acquired at regular time intervals over several hours using microscopes equipped with a CCD camera. The locations (x,y,t) of each cell on the recorded sequence of frames then need to be tracked. Manual computer-assisted tracking is the traditional method for analyzing the migratory behavior of cells; however, this processing is extremely tedious and time-consuming, and most existing tracking algorithms require experience in programming languages that are unfamiliar to most biologists. We therefore developed iTrack4U, an automated cell tracking program written in Java that uses a mean-shift algorithm and ImageJ as a library. iTrack4U is user-friendly software. Compared to manual tracking, it saves a considerable amount of time in generating and analyzing the variables characterizing cell migration, since they are computed automatically by iTrack4U. Another major advantage of iTrack4U is standardization and the absence of inter-experimenter differences. Finally, iTrack4U is adapted for phase contrast and fluorescent cells.
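iTrack4U itself is written in Java on top of ImageJ; as a hedged illustration of the underlying mean-shift idea only, the following OpenCV/Python sketch follows one cell through a grayscale time-lapse stack. Using the raw intensity histogram of the initial window as the tracking model is a simplifying assumption, not the published implementation.

```python
import cv2
import numpy as np

def track_cell(frames, init_window):
    """Follow one cell through a time-lapse sequence with mean shift.

    frames: iterable of 8-bit grayscale images; init_window: (x, y, w, h)
    box around the cell in the first frame. Returns the window centre (x, y)
    for every subsequent frame.
    """
    frames = iter(frames)
    first = next(frames)
    x, y, w, h = init_window
    roi = first[y:y + h, x:x + w]
    roi_hist = cv2.calcHist([roi], [0], None, [64], [0, 256])
    cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
    term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)

    window, centres = init_window, []
    for frame in frames:
        backproj = cv2.calcBackProject([frame], [0], roi_hist, [0, 256], 1)
        _, window = cv2.meanShift(backproj, window, term_crit)
        x, y, w, h = window
        centres.append((x + w / 2.0, y + h / 2.0))
    return centres
```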
GMP-conformant on-site manufacturing of a CD133+ stem cell product for cardiovascular regeneration.
Skorska, Anna; Müller, Paula; Gaebel, Ralf; Große, Jana; Lemcke, Heiko; Lux, Cornelia A; Bastian, Manuela; Hausburg, Frauke; Zarniko, Nicole; Bubritzki, Sandra; Ruch, Ulrike; Tiedemann, Gudrun; David, Robert; Steinhoff, Gustav
2017-02-10
CD133+ stem cells represent a promising subpopulation for innovative cell-based therapies in cardiovascular regeneration. Several clinical trials have shown remarkable beneficial effects following their intramyocardial transplantation. Yet, the purification of CD133+ stem cells is typically performed in centralized clean room facilities using semi-automatic manufacturing processes based on magnetic cell sorting (MACS®). However, this requires time-consuming and cost-intensive logistics. CD133+ stem cells were purified from patient-derived sternal bone marrow using the recently developed automatic CliniMACS Prodigy® BM-133 System (Prodigy). The entire manufacturing process, as well as the subsequent quality control of the final cell product (CP), was realized on-site and in compliance with EU guidelines for Good Manufacturing Practice. The biological activity of automatically isolated CD133+ cells was evaluated and compared to manually isolated CD133+ cells via functional assays as well as immunofluorescence microscopy. In addition, the regenerative potential of purified stem cells was assessed 3 weeks after transplantation in immunodeficient mice which had been subjected to experimental myocardial infarction. We established for the first time an on-site manufacturing procedure for stem CPs intended for the treatment of ischemic heart diseases using an automated system. On average, 0.88 × 10⁶ viable CD133+ cells with a mean log10 depletion of 3.23 ± 0.19 of non-target cells were isolated. Furthermore, we demonstrated that these automatically isolated cells bear proliferation and differentiation capacities comparable to manually isolated cells in vitro. Moreover, the automatically generated CP shows equal cardiac regeneration potential in vivo. Our results indicate that the Prodigy is a powerful system for automatic manufacturing of a CD133+ CP within a few hours. Compared to conventional manufacturing processes, future clinical application of this system offers multiple benefits including stable CP quality and on-site purification under reduced clean room requirements. This will allow time savings, reduced logistics and diminished costs.
Increasingly automated procedure acquisition in dynamic systems
NASA Technical Reports Server (NTRS)
Mathe, Nathalie; Kedar, Smadar
1992-01-01
Procedures are widely used by operators for controlling complex dynamic systems. Currently, most development of such procedures is done manually, consuming a large amount of paper, time, and manpower in the process. While automated knowledge acquisition is an active field of research, not much attention has been paid to the problem of computer-assisted acquisition and refinement of complex procedures for dynamic systems. This paper presents the Procedure Acquisition for Reactive Control Assistant (PARC), which is designed to assist users in more systematically and automatically encoding and refining complex procedures. PARC is able to elicit knowledge interactively from the user during operation of the dynamic system. We categorize procedure refinement into two stages: diagnosis (diagnose the failure and choose a repair) and repair (plan and perform the repair). The basic approach taken in PARC is to assist the user in all steps of this process by providing increased levels of assistance with layered tools. We illustrate the operation of PARC in refining procedures for the control of a robot arm.
Build-up Approach to Updating the Mock Quiet Spike(TradeMark) Beam Model
NASA Technical Reports Server (NTRS)
Herrera, Claudia Y.; Pak, Chan-gi
2007-01-01
A crucial part of aircraft design is ensuring that the required margin for flutter is satisfied. A trustworthy flutter analysis, which begins with an accurate dynamics model, is necessary for this task. Traditionally, a model was updated manually by fine tuning specific stiffness parameters until the analytical results matched test data. This is a time-consuming iterative process. NASA Dryden Flight Research Center has developed a mode matching code to execute this process in a more efficient manner. Recently, this code was implemented in the F-15B/Quiet Spike(TradeMark) (Gulfstream Aerospace Corporation, Savannah, Georgia) model update. A build-up approach requiring several ground vibration test configurations and a series of model updates was implemented in order to determine the connection stiffness between aircraft and test article. The mode matching code successfully updated various models for the F-15B/Quiet Spike(TradeMark) project to within 1 percent error in frequency, and the modal assurance criteria values ranged from 88.51 to 99.42 percent.
A novel inspection system for cosmetic defects
NASA Astrophysics Data System (ADS)
Hazra, S.; Roy, R.; Williams, D.; Aylmore, R.; Hollingdale, D.
2013-12-01
The appearance of automotive skin panels creates desirability for a product and differentiates it from the competition. Because of the importance of skin panels, considerable care is taken to minimize defects such as the 'hollow' defect that occurs around door-handle depressions. However, the inspection process is manual, subjective and time-consuming. This paper describes the development of an objective inspection scheme for the 'hollow' defect. In this inspection process, the geometry of a panel is captured using a structured lighting system. The geometry data is subsequently analyzed by a purpose-built wavelet-based algorithm to identify the location of any defects that may be present and to estimate their perceived severity without user intervention. This paper describes and critically evaluates the behavior of this physically-based algorithm on ideal and real geometries and compares its results to an actual audit. The results show that the algorithm is capable of objectively locating and classifying 'hollow' defects in actual panels.
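The purpose-built algorithm is not detailed in the abstract; the following sketch only illustrates the general principle of using 2-D wavelet detail coefficients to flag local shape deviations in a scanned height map. The wavelet choice and the robust threshold are illustrative assumptions.

```python
import numpy as np
import pywt

def hollow_defect_map(height_map, wavelet="db4", k=4.0):
    """Flag local shape deviations in a scanned panel height map.

    A single-level 2-D wavelet transform separates the smooth panel shape
    (approximation) from local deviations (detail coefficients); cells whose
    detail energy exceeds k robust standard deviations are marked as suspect.
    The returned map is at half the resolution of the input scan.
    """
    _, (cH, cV, cD) = pywt.dwt2(height_map.astype(float), wavelet)
    energy = cH ** 2 + cV ** 2 + cD ** 2
    med = np.median(energy)
    mad = np.median(np.abs(energy - med)) + 1e-12   # robust spread estimate
    return energy > med + k * 1.4826 * mad          # boolean defect map
```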
Bleiwas, Donald I.
2011-01-01
To produce materials from mine to market it is necessary to overcome obstacles that include the force of gravity, the strength of molecular bonds, and technological inefficiencies. These challenges are met by the application of energy to accomplish the work that includes the direct use of electricity, fossil fuel, and manual labor. The tables and analyses presented in this study contain estimates of electricity consumption for the mining and processing of ores, concentrates, intermediate products, and industrial and refined metallic commodities on a kilowatt-hour per unit basis, primarily the metric ton or troy ounce. Data contained in tables pertaining to specific currently operating facilities are static, as the amount of electricity consumed to process or produce a unit of material changes over time for a great number of reasons. Estimates were developed from diverse sources that included feasibility studies, company-produced annual and sustainability reports, conference proceedings, discussions with government and industry experts, journal articles, reference texts, and studies by nongovernmental organizations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, H; Tan, J; Kavanaugh, J
Purpose: Radiotherapy (RT) contours delineated either manually or semiautomatically require verification before clinical usage. Manual evaluation is very time consuming. A new integrated software tool using supervised pattern contour recognition was thus developed to facilitate this process. Methods: The contouring tool was developed using an object-oriented programming language, C#, and application programming interfaces, e.g., the Visualization Toolkit (VTK). The C# language served as the tool design basis. The Accord.Net scientific computing libraries were utilized for the required statistical data processing and pattern recognition, while the VTK was used to build and render 3-D mesh models from critical RT structures in real time with 360° visualization. Principal component analysis (PCA) was used for system self-updating of geometry variations of normal structures based on physician-approved RT contours as a training dataset. The in-house supervised PCA-based contour recognition method was used to automatically evaluate contour normality/abnormality. The function for reporting the contour evaluation results was implemented using C# and Windows Form Designer. Results: The software input was RT simulation images and RT structures from commercial clinical treatment planning systems. Several abilities were demonstrated: automatic assessment of RT contours, file loading/saving of various modality medical images and RT contours, and generation/visualization of 3-D images and anatomical models. Moreover, it supported the 360° rendering of the RT structures in a multi-slice view, which allows physicians to visually check and edit abnormally contoured structures. Conclusion: This new software integrates the supervised learning framework with image processing and graphical visualization modules for RT contour verification. This tool has great potential for facilitating treatment planning with the assistance of an automatic contour evaluation module, avoiding unnecessary manual verification by physicians/dosimetrists. In addition, its nature as a compact and stand-alone tool allows for future extensibility to include additional functions for physicians' clinical needs.
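As a minimal sketch of the supervised-PCA normality check described above (not the in-house C#/VTK implementation), the snippet below fits a PCA model to physician-approved contour feature vectors and flags contours with unusually high reconstruction error; reducing each contour to a fixed-length feature vector (e.g., resampled radial distances) is a hypothetical choice.

```python
import numpy as np
from sklearn.decomposition import PCA

class ContourChecker:
    """Flag contours whose shape deviates from physician-approved training data.

    Each contour is assumed to be reduced to a fixed-length feature vector;
    that reduction step is a placeholder, not the published implementation.
    """

    def __init__(self, n_components=10, threshold_sd=3.0):
        self.pca = PCA(n_components=n_components)
        self.threshold_sd = threshold_sd

    def fit(self, approved_features):
        self.pca.fit(approved_features)
        recon = self.pca.inverse_transform(self.pca.transform(approved_features))
        errors = np.linalg.norm(approved_features - recon, axis=1)
        self.cutoff_ = errors.mean() + self.threshold_sd * errors.std()
        return self

    def is_abnormal(self, features):
        recon = self.pca.inverse_transform(self.pca.transform(features))
        errors = np.linalg.norm(features - recon, axis=1)
        return errors > self.cutoff_       # True = contour needs manual review
```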
Lüddemann, Tobias; Egger, Jan
2016-04-01
Among all types of cancer, gynecological malignancies belong to the fourth most frequent type of cancer among women. In addition to chemotherapy and external beam radiation, brachytherapy is the standard procedure for the treatment of these malignancies. In the process of treatment planning, localization of the tumor as the target volume and of adjacent organs at risk by segmentation is crucial to accomplish an optimal radiation distribution to the tumor while simultaneously preserving healthy tissue. Segmentation is performed manually and represents a time-consuming task in clinical daily routine. This study focuses on the segmentation of the rectum/sigmoid colon as an organ-at-risk in gynecological brachytherapy. The proposed segmentation method uses an interactive, graph-based segmentation scheme with a user-defined template. The scheme creates a directed two-dimensional graph, followed by the minimal cost closed set computation on the graph, resulting in an outlining of the rectum. The graph's outline is dynamically adapted to the last calculated cut. Evaluation was performed by comparing manual segmentations of the rectum/sigmoid colon to results achieved with the proposed method. The comparison of the algorithmic to manual result yielded a Dice similarity coefficient value of [Formula: see text], in comparison to [Formula: see text] for the comparison of two manual segmentations by the same physician. Utilizing the proposed methodology resulted in a median time of [Formula: see text], compared to 300 s needed for pure manual segmentation.
Interactive and scale invariant segmentation of the rectum/sigmoid via user-defined templates
NASA Astrophysics Data System (ADS)
Lüddemann, Tobias; Egger, Jan
2016-03-01
Among all types of cancer, gynecological malignancies belong to the fourth most frequent type of cancer among women. Besides chemotherapy and external beam radiation, brachytherapy is the standard procedure for the treatment of these malignancies. In the process of treatment planning, localization of the tumor as the target volume and of adjacent organs at risk by segmentation is crucial to accomplish an optimal radiation distribution to the tumor while simultaneously preserving healthy tissue. Segmentation is performed manually and represents a time-consuming task in clinical daily routine. This study focuses on the segmentation of the rectum/sigmoid colon as an organ-at-risk in gynecological brachytherapy. The proposed segmentation method uses an interactive, graph-based segmentation scheme with a user-defined template. The scheme creates a directed two-dimensional graph, followed by the minimal cost closed set computation on the graph, resulting in an outlining of the rectum. The graph's outline is dynamically adapted to the last calculated cut. Evaluation was performed by comparing manual segmentations of the rectum/sigmoid colon to results achieved with the proposed method. The comparison of algorithmic to manual results yielded a Dice similarity coefficient of 83.85 ± 4.08%, compared to 83.97 ± 8.08% for the comparison of two manual segmentations by the same physician. Utilizing the proposed methodology resulted in a median time of 128 seconds per dataset, compared to 300 seconds needed for pure manual segmentation.
AnimalFinder: A semi-automated system for animal detection in time-lapse camera trap images
Price Tack, Jennifer L.; West, Brian S.; McGowan, Conor P.; Ditchkoff, Stephen S.; Reeves, Stanley J.; Keever, Allison; Grand, James B.
2017-01-01
Although the use of camera traps in wildlife management is well established, technologies to automate image processing have been much slower in development, despite their potential to drastically reduce the personnel time and cost required to review photos. We developed AnimalFinder in MATLAB® to identify animal presence in time-lapse camera trap images by comparing individual photos to all images contained within the subset of images (i.e. photos from the same survey and site), with some manual processing required to remove false positives and collect other relevant data (species, sex, etc.). We tested AnimalFinder on a set of camera trap images and compared the presence/absence results with manual-only review for white-tailed deer (Odocoileus virginianus), wild pigs (Sus scrofa), and raccoons (Procyon lotor). We compared abundance estimates, model rankings, and coefficient estimates of detection and abundance for white-tailed deer using N-mixture models. AnimalFinder performance varied depending on a threshold value that affects program sensitivity to frequently occurring pixels in a series of images. Higher threshold values led to fewer false negatives (missed deer images) but increased manual processing time; even at the highest threshold value, the program reduced the images requiring manual review by ~40% and correctly identified >90% of deer, raccoon, and wild pig images. Estimates of white-tailed deer were similar between AnimalFinder and the manual-only method (~1–2 deer difference, depending on the model), as were model rankings and coefficient estimates. Our results show that the program significantly reduced data processing time and may increase the efficiency of camera trapping surveys.
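AnimalFinder's MATLAB implementation is not reproduced here; the sketch below only illustrates the general idea of screening a per-site image series against its own pixel-wise median background, with a sensitivity threshold playing a role loosely analogous to the one discussed above. The threshold values are illustrative.

```python
import numpy as np

def candidate_animal_frames(stack, threshold=25, min_pixels=500):
    """Screen a time-lapse image stack for frames that differ from the scene.

    stack: array of shape (n_frames, rows, cols), grayscale. The per-pixel
    median over the series approximates the empty scene; frames with enough
    strongly deviating pixels are kept for manual review.
    """
    background = np.median(stack, axis=0)
    keep = []
    for i, frame in enumerate(stack):
        changed = np.abs(frame.astype(float) - background) > threshold
        if changed.sum() >= min_pixels:
            keep.append(i)              # candidate frame containing an animal
    return keep
```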
Optimization Methods for Spiking Neurons and Networks
Russell, Alexander; Orchard, Garrick; Dong, Yi; Mihalaş, Ştefan; Niebur, Ernst; Tapson, Jonathan; Etienne-Cummings, Ralph
2011-01-01
Spiking neurons and spiking neural circuits are finding uses in a multitude of tasks such as robotic locomotion control, neuroprosthetics, visual sensory processing, and audition. The desired neural output is achieved through the use of complex neuron models, or by combining multiple simple neurons into a network. In either case, a means for configuring the neuron or neural circuit is required. Manual manipulation of parameters is both time consuming and non-intuitive due to the nonlinear relationship between parameters and the neuron’s output. The complexity rises even further as the neurons are networked and the systems often become mathematically intractable. In large circuits, the desired behavior and timing of action potential trains may be known but the timing of the individual action potentials is unknown and unimportant, whereas in single neuron systems the timing of individual action potentials is critical. In this paper, we automate the process of finding parameters. To configure a single neuron we derive a maximum likelihood method for configuring a neuron model, specifically the Mihalas–Niebur Neuron. Similarly, to configure neural circuits, we show how we use genetic algorithms (GAs) to configure parameters for a network of simple integrate and fire with adaptation neurons. The GA approach is demonstrated both in software simulation and hardware implementation on a reconfigurable custom very large scale integration chip. PMID:20959265
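As a toy illustration of the GA route (not the paper's Mihalas-Niebur model or its maximum likelihood derivation), the sketch below tunes two parameters of a leaky integrate-and-fire neuron so that its firing rate matches a target; the population size, mutation scale, and parameter ranges are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def lif_spike_count(tau_m, v_thresh, i_in=1.5, dt=1e-3, t_max=1.0):
    """Spike count of a leaky integrate-and-fire neuron with constant drive."""
    v, spikes = 0.0, 0
    for _ in range(int(t_max / dt)):
        v += dt * (-v + i_in) / tau_m
        if v >= v_thresh:
            v, spikes = 0.0, spikes + 1
    return spikes

def fitness(params, target_rate=40):
    tau_m, v_thresh = params
    return -abs(lif_spike_count(tau_m, v_thresh) - target_rate)

# Minimal GA: truncation selection plus Gaussian mutation, clipped to bounds.
low, high = np.array([0.005, 0.5]), np.array([0.1, 1.4])
pop = rng.uniform(low, high, size=(40, 2))
for generation in range(50):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)][-10:]                 # keep the 10 best
    children = parents[rng.integers(0, 10, size=30)] + rng.normal(0, 0.01, (30, 2))
    pop = np.clip(np.vstack([parents, children]), low, high)

best = pop[np.argmax([fitness(p) for p in pop])]
print("tau_m = %.4f s, v_thresh = %.3f" % tuple(best))
```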
Teacher Resource Manual for Civics.
ERIC Educational Resources Information Center
Smith, Melinda R., Ed.
The learning activities in this resource manual supplement three commonly taught units in the secondary civics curriculum: law, government, and consumer economics. The activities were chosen to meet objectives of the New Mexico Basic Skills Plan. Although geared toward ninth-grade-level students, the activities can generally be adapted for…
EMG Processing Based Measures of Fatigue Assessment during Manual Lifting
Marhaban, M. H.; Abdullah, A. R.
2017-01-01
Manual lifting is one of the common practices used in industry to transport or move objects to a desired place. Nowadays, even though mechanized equipment is widely available, manual lifting is still considered an essential way to perform material handling tasks. Improper lifting strategies may contribute to musculoskeletal disorders (MSDs), of which overexertion is the largest contributing factor. To overcome this problem, the electromyography (EMG) signal is used to monitor workers' muscle condition and to find the maximum lifting load, lifting height and number of repetitions that workers are able to handle before experiencing fatigue, so as to avoid overexertion. Past researchers have introduced several EMG processing techniques and different EMG features that represent fatigue indices in the time, frequency, and time-frequency domains. The impact of EMG processing based measures in fatigue assessment during manual lifting is reviewed in this paper. This review will benefit researchers who need a bird's eye view of the biosignal processing techniques currently available, helping them determine the best possible techniques for lifting applications. PMID:28303251
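A classical frequency-domain fatigue index of the kind surveyed in such reviews is the decline of the EMG median frequency over successive windows; the sketch below computes it from a Welch spectrum. The sampling rate and window length are illustrative assumptions, not values from the reviewed studies.

```python
import numpy as np
from scipy.signal import welch

def median_frequency_trend(emg, fs=1000, window_s=1.0):
    """Median frequency (MDF) of an EMG signal in consecutive windows.

    A downward MDF trend over successive lifting cycles is a commonly used
    frequency-domain indicator of localized muscle fatigue.
    """
    win = int(window_s * fs)
    mdf = []
    for start in range(0, len(emg) - win + 1, win):
        f, pxx = welch(emg[start:start + win], fs=fs, nperseg=min(win, 256))
        cumulative = np.cumsum(pxx)
        mdf.append(f[np.searchsorted(cumulative, cumulative[-1] / 2.0)])
    return np.array(mdf)   # one MDF per window; a negative slope suggests fatigue
```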
Applications Of Digital Image Acquisition In Anthropometry
NASA Astrophysics Data System (ADS)
Woolford, Barbara; Lewis, James L.
1981-10-01
Anthropometric data on reach and mobility have traditionally been collected by time consuming and relatively inaccurate manual methods. Three dimensional digital image acquisition promises to radically increase the speed and ease of data collection and analysis. A three-camera video anthropometric system for collecting position, velocity, and force data in real time is under development for the Anthropometric Measurement Laboratory at NASA's Johnson Space Center. The use of a prototype of this system for collecting data on reach capabilities and on lateral stability is described. Two extensions of this system are planned.
LOGAM (Logistic Analysis Model). Volume 2. Users Manual.
1982-08-01
as opposed to simulation models which represent a system’s behavior as a function of time. These latter classes of models are often complex. They...includes the cost of ammunition and missiles consumed by the system being costed during unit training. Excluded is the cost of ammunition consumed during...data. The results obtained from sensitivity testing may be used to construct graphs which display the behavior of the maintenance concept over the range
Towards Automatic Image Segmentation Using Optimised Region Growing Technique
NASA Astrophysics Data System (ADS)
Alazab, Mamoun; Islam, Mofakharul; Venkatraman, Sitalakshmi
Image analysis is being adopted extensively in many applications such as digital forensics, medical treatment and industrial inspection, primarily for diagnostic purposes. Hence, there is growing interest among researchers in developing new segmentation techniques to aid the diagnosis process. Manual segmentation of images is labour intensive, extremely time consuming and prone to human error, and hence an automated real-time technique is warranted in such applications. There is no universally applicable automated segmentation technique that will work for all images, as image segmentation is complex and unique to each application domain. Hence, to fill this gap, this paper presents an efficient segmentation algorithm that can segment a digital image of interest into a more meaningful arrangement of regions and objects. Our algorithm combines a region growing approach with optimised elimination of false boundaries to arrive at more meaningful segments automatically. We demonstrate this using X-ray teeth images that were taken for real-life dental diagnosis.
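The paper's optimised boundary-elimination step is not reproduced here; the sketch below shows only a basic seeded region-growing pass over 4-connected neighbours, with a simple running-mean homogeneity test as an assumed criterion.

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tolerance=10):
    """Grow a region from a seed pixel over 4-connected neighbours.

    A neighbour joins the region while its intensity stays within `tolerance`
    of the running region mean; false-boundary elimination is not included.
    """
    rows, cols = image.shape
    region = np.zeros((rows, cols), dtype=bool)
    region[seed] = True
    total, count = float(image[seed]), 1
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and not region[nr, nc]:
                if abs(float(image[nr, nc]) - total / count) <= tolerance:
                    region[nr, nc] = True
                    total += float(image[nr, nc])
                    count += 1
                    queue.append((nr, nc))
    return region
```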
An Automated Classification Technique for Detecting Defects in Battery Cells
NASA Technical Reports Server (NTRS)
McDowell, Mark; Gray, Elizabeth
2006-01-01
Battery cell defect classification is primarily done manually by a human conducting a visual inspection to determine if the battery cell is acceptable for a particular use or device. Human visual inspection is a time consuming task when compared to an inspection process conducted by a machine vision system. Human inspection is also subject to human error and fatigue over time. We present a machine vision technique that can be used to automatically identify defective sections of battery cells via a morphological feature-based classifier using an adaptive two-dimensional fast Fourier transformation technique. The initial area of interest is automatically classified as either an anode or cathode cell view as well as classified as an acceptable or a defective battery cell. Each battery cell is labeled and cataloged for comparison and analysis. The result is the implementation of an automated machine vision technique that provides a highly repeatable and reproducible method of identifying and quantifying defects in battery cells.
A multiparametric assay for quantitative nerve regeneration evaluation.
Weyn, B; van Remoortere, M; Nuydens, R; Meert, T; van de Wouwer, G
2005-08-01
We introduce an assay for the semi-automated quantification of nerve regeneration by image analysis. Digital images of histological sections of regenerated nerves are recorded using an automated inverted microscope and merged into high-resolution mosaic images representing the entire nerve. These are analysed by a dedicated image-processing package that computes nerve-specific features (e.g. nerve area, fibre count, myelinated area) and fibre-specific features (area, perimeter, myelin sheet thickness). The assay's performance and correlation of the automatically computed data with visually obtained data are determined on a set of 140 semithin sections from the distal part of a rat tibial nerve from four different experimental treatment groups (control, sham, sutured, cut) taken at seven different time points after surgery. Results show a high correlation between the manually and automatically derived data, and a high discriminative power towards treatment. Extra value is added by the large feature set. In conclusion, the assay is fast and offers data that currently can be obtained only by a combination of laborious and time-consuming tests.
Jones, Jason J; Chu, Jeffrey; Graham, Jacob; Zaluski, Serge; Rocha, Guillermo
2016-01-01
The aim of this study was to evaluate the operational impact of using preloaded intraocular lens (IOL) delivery systems compared with manually loaded IOL delivery processes during routine cataract surgeries. Time and motion data, staff and surgery schedules, and cost accounting reports were collected across three sites located in the US, France, and Canada. Time and motion data were collected for manually loaded IOL processes and preloaded IOL delivery systems over four surgery days. Staff and surgery schedules and cost accounting reports were collected during the 2 months prior and after introduction of the preloaded IOL delivery system. The study included a total of 154 routine cataract surgeries across all three sites. Of these, 77 surgeries were performed using a preloaded IOL delivery system, and the remaining 77 surgeries were performed using a manual IOL delivery process. Across all three sites, use of the preloaded IOL delivery system significantly decreased mean total case time by 6.2%-12.0% (P<0.001 for data from Canada and the US and P<0.05 for data from France). Use of the preloaded delivery system also decreased surgeon lens time, surgeon delays, and eliminated lens touches during IOL preparation. Compared to a manual IOL delivery process, use of a preloaded IOL delivery system for cataract surgery reduced total case time, total surgeon lens time, surgeon delays, and eliminated IOL touches. The time savings provided by the preloaded IOL delivery system provide an opportunity for sites to improve routine cataract surgery throughput without impacting surgeon or staff capacity.
Broyer, Patrick; Perrot, Nadine; Rostaing, Hervé; Blaze, Jérome; Pinston, Frederic; Gervasi, Gaspard; Charles, Marie-Hélène; Dachaud, Fabien; Dachaud, Jacques; Moulin, Frederic; Cordier, Sylvain; Dauwalder, Olivier; Meugnier, Hélène; Vandenesch, Francois
2018-01-01
Sepsis is the leading cause of death among patients in intensive care units (ICUs) requiring an early diagnosis to introduce efficient therapeutic intervention. Rapid identification (ID) of a causative pathogen is key to guide directed antimicrobial selection and was recently shown to reduce hospitalization length in ICUs. Direct processing of positive blood cultures by MALDI-TOF MS technology is one of the several currently available tools used to generate rapid microbial ID. However, all recently published protocols are still manual and time consuming, requiring dedicated technician availability and specific strategies for batch processing. We present here a new prototype instrument for automated preparation of Vitek® MS slides directly from positive blood culture broth based on an "all-in-one" extraction strip. This bench top instrument was evaluated on 111 and 22 organisms processed using artificially inoculated blood culture bottles in the BacT/ALERT® 3D (SA/SN blood culture bottles) or the BacT/ALERT Virtuo™ system (FA/FN Plus bottles), respectively. Overall, this new preparation station provided reliable and accurate Vitek MS species-level identification of 87% (Gram-negative bacteria = 85%, Gram-positive bacteria = 88%, and yeast = 100%) when used with BacT/ALERT® 3D and of 84% (Gram-negative bacteria = 86%, Gram-positive bacteria = 86%, and yeast = 75%) with Virtuo® instruments, respectively. The prototype was then evaluated in a clinical microbiology laboratory on 102 clinical blood culture bottles and compared to routine laboratory ID procedures. Overall, the correlation of ID on monomicrobial bottles was 83% (Gram-negative bacteria = 89%, Gram-positive bacteria = 79%, and yeast = 78%), demonstrating roughly equivalent performance between manual and automatized extraction methods. This prototype instrument exhibited a high level of performance regardless of bottle type or BacT/ALERT system. Furthermore, blood culture workflow could potentially be improved by converting direct ID of positive blood cultures from a batch-based to real-time and "on-demand" process.
MeSH Now: automatic MeSH indexing at PubMed scale via learning to rank.
Mao, Yuqing; Lu, Zhiyong
2017-04-17
MeSH indexing is the task of assigning relevant MeSH terms based on a manual reading of scholarly publications by human indexers. The task is highly important for improving literature retrieval and many other scientific investigations in biomedical research. Unfortunately, given its manual nature, the process of MeSH indexing is both time-consuming (new articles are often not indexed until 2 or 3 months after publication) and costly (approximately ten dollars per article). In response, automatic indexing by computers has been previously proposed and attempted but remains challenging. In order to advance the state of the art in automatic MeSH indexing, a community-wide shared task called BioASQ was recently organized. We propose MeSH Now, an integrated approach that first uses multiple strategies to generate a combined list of candidate MeSH terms for a target article. Through a novel learning-to-rank framework, MeSH Now then ranks the list of candidate terms based on their relevance to the target article. Finally, MeSH Now selects the highest-ranked MeSH terms via a post-processing module. We assessed MeSH Now on two separate benchmarking datasets using traditional precision, recall and F1-score metrics. In both evaluations, MeSH Now consistently achieved over 0.60 in F1-score, ranging from 0.610 to 0.612. Furthermore, additional experiments show that MeSH Now can be optimized by parallel computing in order to process MEDLINE documents on a large scale. We conclude that MeSH Now is a robust approach with state-of-the-art performance for automatic MeSH indexing and that it is capable of processing PubMed-scale documents within a reasonable time frame. http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/Demo/MeSHNow/
Transportation Consumer Education for Adults: Mini-Units and Learning Activities.
ERIC Educational Resources Information Center
Finn, Peter; And Others
One of a series of eleven curriculum manuals which cover the four transportation topics of public transportation, transportation and the environment, transportation safety, and bicycles for elementary, secondary, and adult levels, this manual covers all four topics at the adult level. Materials in four chapters comprising seventeen mini-units…
ERIC Educational Resources Information Center
Kazimirski, J.; And Others
The second in a series of programmed books, "Creating a Market" is published by the International Labour Office as a manual for persons studying marketing. This manual was designed to meet the needs of the labor organization's technical cooperation programs and is primarily concerned with consumer goods industries. Using a fill-in-the-blanks and…
DOS Design/Application Tools System/Segment Specification. Volume 3
1990-09-01
consume the same information to obtain that information without "manual" translation by people. Solving the information management problem effectively...and consumes even more information than centralized development. Distributed systems cannot be developed successfully by experiment without...human intervention because all tools consume input from and produce output to the same repository. New tools are easily absorbed into the environment
Data Processing of LAPAN-A3 Thermal Imager
NASA Astrophysics Data System (ADS)
Hartono, R.; Hakim, P. R.; Syafrudin, AH
2018-04-01
As an experimental microsatellite, the LAPAN-A3/IPB satellite carries an experimental thermal imager, a micro-bolometer, to observe earth surface temperature for horizon observation. The imager data is transmitted from the satellite to the ground station by S-band analog video transmission and then processed by the ground station into a sequence of 8-bit enhanced and contrasted images. Data processing of the LAPAN-A3/IPB thermal imager is more difficult than for a visual digital camera, especially for mosaicking and classification purposes. This research describes a simple mosaic and classification process for the LAPAN-A3/IPB thermal imager based on several videos produced by the imager. The results show that stitching in Adobe Photoshop produces excellent results but can only process a small area, while a manual approach in the ImageJ software can produce good results but requires a lot of work and is time consuming. The mosaic process using image cross-correlation in Matlab offers an alternative solution, which can process a significantly bigger area in a significantly shorter processing time; however, the quality produced is not as good as the mosaic images of the other two methods. The simple classification process that has been carried out shows that the thermal images can distinguish three classes of objects, i.e. clouds, sea, and land surface. However, the algorithm fails to classify other objects, which might be caused by distortions in the images. All of these results can be used as a reference for the development of the thermal imager on the LAPAN-A4 satellite.
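The cross-correlation mosaicking in the study was done in Matlab; as a hedged sketch of that step only, the snippet below estimates the translation between two overlapping frames by phase correlation in numpy. The sign convention of the returned offset depends on which frame is treated as the reference.

```python
import numpy as np

def estimate_shift(frame_a, frame_b):
    """Estimate the (row, col) translation between two overlapping frames.

    Phase correlation: the peak of the inverse FFT of the normalised
    cross-power spectrum marks the relative displacement, which can then be
    used to place frame_b next to frame_a in the mosaic.
    """
    a = frame_a.astype(float) - frame_a.mean()
    b = frame_b.astype(float) - frame_b.mean()
    cross_power = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    cross_power /= np.abs(cross_power) + 1e-12
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    shape = np.array(corr.shape)
    shift = np.array(peak)
    shift[shift > shape // 2] -= shape[shift > shape // 2]   # wrap to signed offsets
    return tuple(int(s) for s in shift)
```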
Parallel optimization of signal detection in active magnetospheric signal injection experiments
NASA Astrophysics Data System (ADS)
Gowanlock, Michael; Li, Justin D.; Rude, Cody M.; Pankratius, Victor
2018-05-01
Signal detection and extraction requires substantial manual parameter tuning at different stages in the processing pipeline. Time-series data depends on domain-specific signal properties, necessitating unique parameter selection for a given problem. The large potential search space makes this parameter selection process time-consuming and subject to variability. We introduce a technique to search and prune such parameter search spaces in parallel and select parameters for time series filters using breadth- and depth-first search strategies to increase the likelihood of detecting signals of interest in the field of magnetospheric physics. We focus on studying geomagnetic activity in the extremely and very low frequency ranges (ELF/VLF) using ELF/VLF transmissions from Siple Station, Antarctica, received at Québec, Canada. Our technique successfully detects amplified transmissions and achieves substantial speedup performance gains as compared to an exhaustive parameter search. We present examples where our algorithmic approach reduces the search from hundreds of seconds down to less than 1 s, with a ranked signal detection in the top 99th percentile, thus making it valuable for real-time monitoring. We also present empirical performance models quantifying the trade-off between the quality of signal recovered and the algorithm response time required for signal extraction. In the future, improved signal extraction in scenarios like the Siple experiment will enable better real-time diagnostics of conditions of the Earth's magnetosphere for monitoring space weather activity.
Wagner, Maximilian E H; Gellrich, Nils-Claudius; Friese, Karl-Ingo; Becker, Matthias; Wolter, Franz-Erich; Lichtenstein, Juergen T; Stoetzer, Marcus; Rana, Majeed; Essig, Harald
2016-01-01
Objective determination of the orbital volume is important in the diagnostic process and in evaluating the efficacy of medical and/or surgical treatment of orbital diseases. Tools designed to measure orbital volume with computed tomography (CT) often cannot be used with cone beam CT (CBCT) because of inferior tissue representation, although CBCT has the benefit of greater availability and lower patient radiation exposure. Therefore, a model-based segmentation technique is presented as a new method for measuring orbital volume and compared to alternative techniques. Both eyes from thirty subjects with no known orbital pathology who had undergone CBCT as a part of routine care were evaluated (n = 60 eyes). Orbital volume was measured with manual, atlas-based, and model-based segmentation methods. Volume measurements, volume determination time, and usability were compared between the three methods. Differences in means were tested for statistical significance using two-tailed Student's t tests. Neither atlas-based (26.63 ± 3.15 mm(3)) nor model-based (26.87 ± 2.99 mm(3)) measurements were significantly different from manual volume measurements (26.65 ± 4.0 mm(3)). However, the time required to determine orbital volume was significantly longer for manual measurements (10.24 ± 1.21 min) than for atlas-based (6.96 ± 2.62 min, p < 0.001) or model-based (5.73 ± 1.12 min, p < 0.001) measurements. All three orbital volume measurement methods examined can accurately measure orbital volume, although atlas-based and model-based methods seem to be more user-friendly and less time-consuming. The new model-based technique achieves fully automated segmentation results, whereas all atlas-based segmentations at least required manipulations to the anterior closing. Additionally, model-based segmentation can provide reliable orbital volume measurements when CT image quality is poor.
NASA Astrophysics Data System (ADS)
Chávez, G. Moreno; Sarocchi, D.; Santana, E. Arce; Borselli, L.
2015-12-01
The study of grain size distribution is fundamental for understanding sedimentological environments. Through these analyses, clast erosion, transport and deposition processes can be interpreted and modeled. However, grain size distribution analysis can be difficult in some outcrops due to the number, physical size and complexity of the arrangement of clasts and matrix. Despite various technological advances, it is almost impossible to get the full grain size distribution (blocks to sand grain size) with a single method or instrument of analysis. For this reason development in this area continues to be fundamental. In recent years, various methods of particle size analysis by automatic image processing have been developed, due to their potential advantages over classical ones: speed and detailed information content (virtually for each analyzed particle). In this framework, we have developed a novel algorithm and software for grain size distribution analysis, based on color image segmentation using an entropy-controlled quadratic Markov measure field algorithm and the Rosiwal method for counting intersections between clasts and linear transects in the images. We tested the novel algorithm on different sedimentary deposit types from 14 varieties of sedimentological environments. The results of the new algorithm were compared with grain counts performed manually by experts using the same Rosiwal method. The new algorithm has the same accuracy as a classical manual count process, but the application of this innovative methodology is much easier and dramatically less time-consuming. After recording field outcrop images, the new software can significantly increase the productivity of clast deposit analysis.
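The segmentation step (entropy-controlled quadratic Markov measure fields) is not reproduced here; the sketch below only illustrates Rosiwal-style intercept counting on an already-labeled image, with hypothetical label conventions (0 = matrix, positive integers = individual clasts).

```python
import numpy as np

def rosiwal_intercepts(label_img, n_transects=10):
    """Collect clast intercept lengths along evenly spaced horizontal transects.

    `label_img` is assumed to be a 2D integer array where 0 marks matrix and
    positive integers identify individual clasts (output of some upstream
    segmentation step, not reproduced here).
    """
    rows = np.linspace(0, label_img.shape[0] - 1, n_transects).astype(int)
    lengths = []
    for r in rows:
        line = label_img[r]
        # Split the transect into runs of constant label, keep clast runs only.
        change = np.flatnonzero(np.diff(line)) + 1
        for run in np.split(line, change):
            if run[0] > 0:
                lengths.append(run.size)  # intercept length in pixels
    return np.array(lengths)

# Toy example: two "clasts" on a background of zeros.
img = np.zeros((100, 100), dtype=int)
img[20:40, 10:50] = 1
img[60:90, 30:80] = 2
print(rosiwal_intercepts(img, n_transects=5))
```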
NASA Astrophysics Data System (ADS)
Bilitza, Dieter; Huang, Xueqin; Reinisch, Bodo W.; Benson, Robert F.; Hills, H. Kent; Schar, William B.
2004-02-01
The United States/Canadian ISIS-1 and ISIS-2 satellites collected several million topside ionograms in the 1960s and 1970s with a multinational network of ground stations that provided good global coverage. However, processing of these ionograms into electron density profiles required time-consuming manual scaling of the traces from the analog ionograms, and as a result, only a few percent of the ionograms had been processed into electron density profiles. In recent years an effort began to digitize the analog recordings to prepare the ionograms for computerized analysis. As of November 2002, approximately 390,000 ISIS-1 and ISIS-2 digital topside-sounder ionograms have been produced. The Topside Ionogram Scaler With True Height Algorithm (TOPIST) program was developed for the automated scaling of the echo traces and for the inversion of these traces into topside electron density profiles. The program is based on the techniques that have been successfully applied in the analysis of ground-based Digisonde ionograms. The TOPIST software also includes an "editing option" for manual scaling of the more difficult ionograms, which could not be scaled during the automated TOPIST run. TOPIST is now successfully scaling ˜60% of the ISIS ionograms, and the electron density profiles are available through the online archive of the National Space Science Data Center at ftp://nssdcftp.gsfc.nasa.gov/spacecraft_data/isis/topside_sounder. This data restoration effort is producing a unique global database of topside electron densities over more than one solar cycle, which will be of particular importance for improvements of topside ionosphere models, especially the International Reference Ionosphere.
Classifying seismic waveforms from scratch: a case study in the alpine environment
NASA Astrophysics Data System (ADS)
Hammer, C.; Ohrnberger, M.; Fäh, D.
2013-01-01
Nowadays, an increasing amount of seismic data is collected by daily observatory routines. The basic step for successfully analyzing those data is the correct detection of various event types. However, visual scanning of the data is a time-consuming task. Applying standard techniques for detection like the STA/LTA trigger still requires manual control for classification. Here, we present a useful alternative. The incoming data stream is scanned automatically for events of interest. A stochastic classifier, a hidden Markov model, is learned for each class of interest, enabling the recognition of highly variable waveforms. In contrast to other automatic techniques such as neural networks or support vector machines, the algorithm allows classification to start from scratch as soon as interesting events are identified. Neither the tedious process of collecting training samples nor a time-consuming configuration of the classifier is required. An approach originally introduced for the volcanic task force action allows classifier properties to be learned from a single waveform example and a few hours of background recording. Besides reducing the required workload, this also enables the detection of very rare events. The latter feature in particular is a milestone for the use of seismic devices in alpine warning systems. Furthermore, the system offers the opportunity to flag new signal classes that have not been defined before. We demonstrate the application of the classification system using a data set from the Swiss Seismological Survey, achieving very high recognition rates. In detail, we document all refinements of the classifier, providing a step-by-step guide for the fast set-up of a well-working classification system.
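The abstract does not name an implementation; the following sketch uses the hmmlearn package (an assumption) to illustrate the idea of learning one HMM from a single event example and one from background recording, then classifying segments by log-likelihood. Feature extraction and the observatory-specific refinements are omitted.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_from_single_example(event_features, background_features, n_states=4):
    """Learn one HMM from a single event feature sequence and one from a
    background recording; feature extraction is assumed to happen upstream."""
    event_model = GaussianHMM(n_components=n_states).fit(event_features)
    background_model = GaussianHMM(n_components=n_states).fit(background_features)
    return event_model, background_model

def classify(segment, event_model, background_model):
    """Classify a feature segment by comparing log-likelihoods."""
    return "event" if event_model.score(segment) > background_model.score(segment) else "noise"

# Synthetic illustration with 2-D feature frames.
rng = np.random.default_rng(1)
event = rng.normal(1.0, 0.3, size=(200, 2))
noise = rng.normal(0.0, 0.3, size=(2000, 2))
ev_m, bg_m = train_from_single_example(event, noise)
print(classify(rng.normal(1.0, 0.3, size=(50, 2)), ev_m, bg_m))
```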
Strategies for dereplication of natural compounds using high-resolution tandem mass spectrometry.
Kind, Tobias; Fiehn, Oliver
2017-09-01
Complete structural elucidation of natural products is commonly performed by nuclear magnetic resonance spectroscopy (NMR), but annotating compounds to most likely structures using high-resolution tandem mass spectrometry is a faster and feasible first step. The CASMI contest 2016 (Critical Assessment of Small Molecule Identification) provided spectra of eighteen compounds for the best manual structure identification in the natural products category. High resolution precursor and tandem mass spectra (MS/MS) were available to characterize the compounds. We used the Seven Golden Rules, Sirius2 and MS-FINDER software for determination of molecular formulas, and then we queried the formulas in different natural product databases including DNP, UNPD, ChemSpider and REAXYS to obtain molecular structures. We used different in-silico fragmentation tools including CFM-ID, CSI:FingerID and MS-FINDER to rank these compounds. Additional neutral losses and product ion peaks were manually investigated. This manual and time-consuming approach allowed for the correct dereplication of thirteen of the eighteen natural products.
A Method for Automated Detection of Usability Problems from Client User Interface Events
Saadawi, Gilan M.; Legowski, Elizabeth; Medvedeva, Olga; Chavan, Girish; Crowley, Rebecca S.
2005-01-01
Think-aloud usability (TAU) analysis provides extremely useful data but is very time-consuming and expensive to perform because of the extensive manual video analysis that is required. We describe a simple method for automated detection of usability problems from client user interface events for a developing medical intelligent tutoring system. The method incorporates (1) an agent-based method for communication that funnels all interface events and system responses to a centralized database, (2) a simple schema for representing interface events and higher order subgoals, and (3) an algorithm that reproduces the criteria used for manual coding of usability problems. A correction factor was empirically determined to account for the slower task performance of users when thinking aloud. We tested the validity of the method by simultaneously identifying usability problems using TAU and computing them from stored interface event data using the proposed algorithm. All usability problems that did not rely on verbal utterances were detectable with the proposed method. PMID:16779121
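The actual coding criteria and the empirically derived correction factor are not given in the abstract; the sketch below only shows the general shape of such a rule-based detector over logged subgoal attempts, with hypothetical field names, thresholds and correction value.

```python
from dataclasses import dataclass

@dataclass
class SubgoalAttempt:
    subgoal: str
    duration_s: float   # time spent on the subgoal, from the event log
    errors: int         # interface-level error events logged

# Correction factor for slower performance while thinking aloud;
# the value 1.4 is purely illustrative, not the empirically derived one.
THINK_ALOUD_CORRECTION = 1.4

def flag_usability_problems(attempts, time_limit_s=60.0, error_limit=3,
                            thinking_aloud=False):
    """Flag subgoals whose duration or error count exceeds coding thresholds,
    mirroring (in simplified form) manual usability-problem coding criteria."""
    limit = time_limit_s * (THINK_ALOUD_CORRECTION if thinking_aloud else 1.0)
    return [a.subgoal for a in attempts
            if a.duration_s > limit or a.errors >= error_limit]

log = [SubgoalAttempt("identify-diagnosis", 95.0, 1),
       SubgoalAttempt("attach-evidence", 30.0, 4),
       SubgoalAttempt("submit-case", 20.0, 0)]
print(flag_usability_problems(log))  # ['identify-diagnosis', 'attach-evidence']
```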
Dynamic CT myocardial perfusion imaging: performance of 3D semi-automated evaluation software.
Ebersberger, Ullrich; Marcus, Roy P; Schoepf, U Joseph; Lo, Gladys G; Wang, Yining; Blanke, Philipp; Geyer, Lucas L; Gray, J Cranston; McQuiston, Andrew D; Cho, Young Jun; Scheuering, Michael; Canstein, Christian; Nikolaou, Konstantin; Hoffmann, Ellen; Bamberg, Fabian
2014-01-01
To evaluate the performance of three-dimensional semi-automated evaluation software for the assessment of myocardial blood flow (MBF) and blood volume (MBV) at dynamic myocardial perfusion computed tomography (CT). Volume-based software relying on marginal space learning and probabilistic boosting tree-based contour fitting was applied to CT myocardial perfusion imaging data of 37 subjects. In addition, all image data were analysed manually and both approaches were compared with SPECT findings. Study endpoints included time of analysis and conventional measures of diagnostic accuracy. Of 592 analysable segments, 42 showed perfusion defects on SPECT. Average analysis times for the manual and software-based approaches were 49.1 ± 11.2 and 16.5 ± 3.7 min respectively (P < 0.01). There was strong agreement between the two measures of interest (MBF, ICC = 0.91, and MBV, ICC = 0.88, both P < 0.01), and there was no significant difference in diagnostic accuracy between the manual and software-based approaches for either MBF or MBV (all comparisons P > 0.05). Three-dimensional semi-automated evaluation of dynamic myocardial perfusion CT data provides similar measures and diagnostic accuracy to manual evaluation, albeit with substantially reduced analysis times. This capability may aid the integration of this test into clinical workflows. • Myocardial perfusion CT is attractive for comprehensive coronary heart disease assessment. • Traditional image analysis methods are cumbersome and time-consuming. • Automated 3D perfusion software shortens analysis times. • Automated 3D perfusion software increases standardisation of myocardial perfusion CT. • Automated, standardised analysis fosters myocardial perfusion CT integration into clinical practice.
Public online information about tinnitus: A cross-sectional study of YouTube videos.
Basch, Corey H; Yin, Jingjing; Kollia, Betty; Adedokun, Adeyemi; Trusty, Stephanie; Yeboah, Felicia; Fung, Isaac Chun-Hai
2018-01-01
To examine the information about tinnitus contained in different video sources on YouTube. The 100 most widely viewed tinnitus videos were manually coded. Firstly, we identified the sources of upload: consumer, professional, television-based clip, and internet-based clip. Secondly, the videos were analyzed to ascertain what pertinent information they contained from a current National Institute on Deafness and Other Communication Disorders fact sheet. Of the videos, 42 were consumer-generated, 33 from media, and 25 from professionals. Collectively, the 100 videos were viewed almost 9 million times. The odds of mentioning "objective tinnitus" in professional videos were 9.58 times those from media sources [odds ratio (OR) = 9.58; 95% confidence interval (CI): 1.94, 47.42; P = 0.01], whereas these odds in consumer videos were 51% of media-generated videos (OR = 0.51; 95% CI: 0.20, 1.29; P = 0.16). The odds that the purpose of a video was to sell a product or service were nearly the same for both consumer and professional videos. Consumer videos were found to be 4.33 times as likely to carry a theme about an individual's own experience with tinnitus (OR = 4.33; 95% CI: 1.62, 11.63; P = 0.004) as media videos. Of the top 100 viewed videos on tinnitus, most were uploaded by consumers, sharing individuals' experiences. Actions are needed to make scientific medical information more prominently available and accessible on YouTube and other social media.
Wallner, Jürgen; Hochegger, Kerstin; Chen, Xiaojun; Mischak, Irene; Reinbacher, Knut; Pau, Mauro; Zrnc, Tomislav; Schwenzer-Zimmerer, Katja; Zemann, Wolfgang; Schmalstieg, Dieter; Egger, Jan
2018-01-01
Computer assisted technologies based on algorithmic software segmentation are an increasing topic of interest in complex surgical cases. However, due to functional instability, time-consuming software processes, limited personnel resources or license-based financial costs, many segmentation processes are outsourced from clinical centers to third parties and industry. Therefore, the aim of this trial was to assess the practical feasibility of an easily available, functionally stable and license-free segmentation approach for use in clinical practice. In this retrospective, randomized, controlled trial, the accuracy and agreement of the open-source segmentation algorithm GrowCut were assessed through comparison with the manually generated ground truth of the same anatomy using 10 CT lower-jaw data sets from the clinical routine. Assessment parameters were the segmentation time, the volume, the voxel number, the Dice Score and the Hausdorff distance. Overall semi-automatic GrowCut segmentation times were about one minute. Mean Dice Score values of over 85% and Hausdorff distances below 33.5 voxels could be achieved between the algorithmic GrowCut-based segmentations and the manually generated ground truth schemes. Differences between the assessment parameters were not statistically significant at the p < 0.05 level, and correlation coefficients were close to one (r > 0.94) for all comparisons made between the two groups. Functionally stable and time-saving segmentations with high accuracy and high positive correlation could be performed with the presented interactive open-source approach. In the cranio-maxillofacial complex, the method could represent an algorithmic alternative for image-based segmentation in clinical practice, e.g. for surgical treatment planning or visualization of postoperative results, and offers several advantages. Because of its open-source basis, the method could be further developed by other groups or specialists. Systematic comparisons with other segmentation approaches or with larger data sets are areas of future work.
Semi-automating the manual literature search for systematic reviews increases efficiency.
Chapman, Andrea L; Morgan, Laura C; Gartlehner, Gerald
2010-03-01
To minimise retrieval bias, manual literature searches are a key part of the search process of any systematic review. Considering the need to have accurate information, valid results of the manual literature search are essential to ensure scientific standards; likewise, efficient approaches that minimise the amount of personnel time required to conduct a manual literature search are of great interest. The objective of this project was to determine the validity and efficiency of a new manual search method that utilises the Scopus database. We used the traditional manual search approach as the gold standard to determine the validity and efficiency of the proposed Scopus method. Outcome measures included completeness of article detection and personnel time involved. Using both methods independently, we compared the results based on accuracy of the results (validity) and on the time spent conducting the search (efficiency). Regarding accuracy, the Scopus method identified the same studies as the traditional approach, indicating its validity. In terms of efficiency, using Scopus led to a time saving of 62.5% compared with the traditional approach (3 h versus 8 h). The Scopus method can significantly improve the efficiency of manual searches and thus of systematic reviews.
An automated workflow for patient-specific quality control of contour propagation
NASA Astrophysics Data System (ADS)
Beasley, William J.; McWilliam, Alan; Slevin, Nicholas J.; Mackay, Ranald I.; van Herk, Marcel
2016-12-01
Contour propagation is an essential component of adaptive radiotherapy, but current contour propagation algorithms are not yet sufficiently accurate to be used without manual supervision. Manual review of propagated contours is time-consuming, making routine implementation of real-time adaptive radiotherapy unrealistic. Automated methods of monitoring the performance of contour propagation algorithms are therefore required. We have developed an automated workflow for patient-specific quality control of contour propagation and validated it on a cohort of head and neck patients, on which parotids were outlined by two observers. Two types of error were simulated—mislabelling of contours and introducing noise in the scans before propagation. The ability of the workflow to correctly predict the occurrence of errors was tested, taking both sets of observer contours as ground truth, using receiver operator characteristic analysis. The area under the curve was 0.90 and 0.85 for the observers, indicating good ability to predict the occurrence of errors. This tool could potentially be used to identify propagated contours that are likely to be incorrect, acting as a flag for manual review of these contours. This would make contour propagation more efficient, facilitating the routine implementation of adaptive radiotherapy.
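The following sketch illustrates the receiver operator characteristic analysis described above with scikit-learn, using synthetic error labels and a hypothetical contour-quality score rather than the study's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(2)

# Synthetic per-patient data: 1 = a propagation error was simulated,
# score = some automated contour-quality metric (higher = more suspicious).
error_present = rng.integers(0, 2, size=200)
score = (error_present * rng.normal(0.7, 0.2, 200)
         + (1 - error_present) * rng.normal(0.3, 0.2, 200))

auc = roc_auc_score(error_present, score)
fpr, tpr, thresholds = roc_curve(error_present, score)

# Pick the threshold closest to the top-left corner as a flagging cut-off.
best = np.argmin(np.hypot(fpr, 1 - tpr))
print(f"AUC = {auc:.2f}, flag contours with score > {thresholds[best]:.2f}")
```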
Semi-automatic assessment of skin capillary density: proof of principle and validation.
Gronenschild, E H B M; Muris, D M J; Schram, M T; Karaca, U; Stehouwer, C D A; Houben, A J H M
2013-11-01
Skin capillary density and recruitment have been proven to be relevant measures of microvascular function. Unfortunately, the assessment of skin capillary density from movie files is very time-consuming, since this is done manually. This impedes the use of this technique in large-scale studies. We aimed to develop a (semi-) automated assessment of skin capillary density. CapiAna (Capillary Analysis) is a newly developed semi-automatic image analysis application. The technique involves four steps: 1) movement correction, 2) selection of the frame range and positioning of the region of interest (ROI), 3) automatic detection of capillaries, and 4) manual correction of detected capillaries. To gain insight into the performance of the technique, skin capillary density was measured in twenty participants (ten women; mean age 56.2 [42-72] years). To investigate the agreement between CapiAna and the classic manual counting procedure, we used weighted Deming regression and Bland-Altman analyses. In addition, intra- and inter-observer coefficients of variation (CVs), and differences in analysis time were assessed. We found a good agreement between CapiAna and the classic manual method, with a Pearson's correlation coefficient (r) of 0.95 (P < 0.001) and a Deming regression coefficient of 1.01 (95%CI: 0.91; 1.10). In addition, we found no significant differences between the two methods, with an intercept of the Deming regression of 1.75 (-6.04; 9.54), while the Bland-Altman analysis showed a mean difference (bias) of 2.0 (-13.5; 18.4) capillaries/mm². The intra- and inter-observer CVs of CapiAna were 2.5% and 5.6% respectively, while for the classic manual counting procedure these were 3.2% and 7.2%, respectively. Finally, the analysis time for CapiAna ranged between 25 and 35 min versus 80 and 95 min for the manual counting procedure. We have developed a semi-automatic image analysis application (CapiAna) for the assessment of skin capillary density, which agrees well with the classic manual counting procedure, is time-saving, and has a better reproducibility as compared to the classic manual counting procedure. As a result, the use of skin capillaroscopy is feasible in large-scale studies, which importantly extends the possibilities to perform microcirculation research in humans. © 2013.
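As an illustration of the agreement analysis, the sketch below computes Bland-Altman bias and limits of agreement between two sets of capillary densities; the weighted Deming regression step is omitted and all numbers are synthetic.

```python
import numpy as np

def bland_altman(a, b):
    """Return bias and 95% limits of agreement between two methods."""
    diff = np.asarray(a) - np.asarray(b)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)

rng = np.random.default_rng(3)
manual = rng.normal(60, 10, size=20)            # capillaries/mm², synthetic
capiana = manual + rng.normal(2, 8, size=20)    # semi-automatic, synthetic

bias, limits = bland_altman(capiana, manual)
print(f"bias = {bias:.1f} capillaries/mm², LoA = ({limits[0]:.1f}, {limits[1]:.1f})")
```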
Optimizing process and equipment efficiency using integrated methods
NASA Astrophysics Data System (ADS)
D'Elia, Michael J.; Alfonso, Ted F.
1996-09-01
The semiconductor manufacturing industry is continually riding the edge of technology as it tries to push toward higher design limits. Mature fabs must cut operating costs while increasing productivity to remain profitable and cannot justify large capital expenditures to improve productivity. Thus, they must push current tool production capabilities to cut manufacturing costs and remain viable. Working to continuously improve mature production methods requires innovation. Furthermore, testing and successful implementation of these ideas in modern production environments require both supporting technical data and commitment from those working with the process daily. At AMD, natural work groups (NWGs) composed of operators, technicians, engineers, and supervisors collaborate to foster innovative thinking and secure commitment. Recently, an AMD NWG improved equipment cycle time on the Genus tungsten silicide (WSi) deposition system. The team used total productive manufacturing (TPM) to identify areas for process improvement. Improved in-line equipment monitoring was achieved by constructing a real-time overall equipment effectiveness (OEE) calculator that tracked equipment down, idle, qualification, and production times. In-line monitoring results indicated that qualification time associated with slow Inspex turn-around time and machine downtime associated with manual cleans contributed greatly to reduced availability. Qualification time was reduced by 75% by implementing a new Inspex monitor pre-staging technique. Downtime associated with manual cleans was reduced by implementing an in-situ plasma etch back to extend the time between manual cleans. A designed experiment was used to optimize the process. Time between 18-hour manual cleans was improved from every 250 cycles to every 1500 cycles. Moreover, defect density realized a 3× improvement. Overall, the team achieved a 35% increase in tool availability. This paper details the above strategies and accomplishments.
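The team's OEE calculator is not described in detail; the sketch below is a minimal time-bucket availability tracker over the four logged states, with illustrative bucket names. Full OEE would additionally multiply availability by performance and quality factors.

```python
def equipment_availability(times):
    """Compute availability and time-bucket fractions from logged hours.

    `times` maps state -> hours, with states such as 'production', 'idle',
    'qualification' and 'down' (bucket names are illustrative assumptions).
    """
    total = sum(times.values())
    available = total - times.get("down", 0.0)
    return {
        "availability": available / total,
        **{state: hours / total for state, hours in times.items()},
    }

# One week of logged hours (synthetic).
week = {"production": 110.0, "idle": 20.0, "qualification": 18.0, "down": 20.0}
print(equipment_availability(week))
```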
3-Dimensional Root Cause Diagnosis via Co-analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Ziming; Lan, Zhiling; Yu, Li
2012-01-01
With the growth of system size and complexity, reliability has become a major concern for large-scale systems. Upon the occurrence of failure, system administrators typically trace the events in Reliability, Availability, and Serviceability (RAS) logs for root cause diagnosis. However, the RAS log only contains limited diagnosis information. Moreover, the manual processing is time-consuming, error-prone, and not scalable. To address the problem, in this paper we present an automated root cause diagnosis mechanism for large-scale HPC systems. Our mechanism examines multiple logs to provide a 3-D fine-grained root cause analysis. Here, 3-D means that our analysis will pinpoint the failure layer, the time, and the location of the event that causes the problem. We evaluate our mechanism by means of real logs collected from a production IBM Blue Gene/P system at Oak Ridge National Laboratory. It successfully identifies failure layer information for 219 failures during a 23-month period. Furthermore, it effectively identifies the triggering events with time and location information, even when the triggering events occur hundreds of hours before the resulting failures.
The Container Tree Nursery Manual: Volume 7, Seedling processing, storage, and outplanting
Thomas D. Landis; R. Kasten Dumroese; Diane L. Haase
2010-01-01
This manual is based on the best current knowledge of container nursery management and should be used as a general reference. Recommendations were made using the best information available at the time and are, therefore, subject to revision as more knowledge becomes available. Much of the information in this manual was primarily developed from information on growing...
[Constructing a database that can input record of use and product-specific information].
Kawai, Satoru; Satoh, Kenichi; Yamamoto, Hideo
2012-01-01
In Japan, patients were generally infected with viral hepatitis C through the administration of a specific fibrinogen injection. However, it has been difficult to identify patients who were infected as a result of the injections due to the lack of medical records. It is still not common practice at many medical facilities to maintain detailed information, because manual record keeping is extremely time-consuming and subject to human error. For these reasons, the regulator required medical device manufacturers and pharmaceutical companies to attach a bar code called "GS1-128", effective March 28, 2008. Based on this new process, we conceived of constructing a new database whose records can be entered by bar code scanning to ensure data integrity. When the efficacy of this new data collection process was examined in terms of time efficiency and data accuracy, "GS1-128" proved to significantly reduce both time and record-keeping mistakes. Not only did patients become easily identifiable by lot number and serial number when immediate care was required, but "GS1-128" also enhanced the ability to pinpoint manufacturing errors in the event that problems or side effects are reported. These data can be shared with and utilized by the entire medical industry and will help improve products and record keeping. I believe this new process is extremely important.
Automated Synthesis of Architecture of Avionic Systems
NASA Technical Reports Server (NTRS)
Chau, Savio; Xu, Joseph; Dang, Van; Lu, James F.
2006-01-01
The Architecture Synthesis Tool (AST) is software that automatically synthesizes software and hardware architectures of avionic systems. The AST is expected to be most helpful during initial formulation of an avionic-system design, when system requirements change frequently and manual modification of architecture is time-consuming and susceptible to error. The AST comprises two parts: (1) an architecture generator, which utilizes a genetic algorithm to create a multitude of architectures; and (2) a functionality evaluator, which analyzes the architectures for viability, rejecting most of the non-viable ones. The functionality evaluator generates and uses a viability tree, a hierarchy representing functions and the components that perform them, such that the system as a whole performs system-level functions representing the requirements for the system as specified by a user. Architectures that survive the functionality evaluator are further evaluated by the selection process of the genetic algorithm. Architectures found to be most promising to satisfy the user's requirements and to perform optimally are selected as parents to the next generation of architectures. The foregoing process is iterated as many times as the user desires. The final output is one or a few viable architectures that satisfy the user's requirements.
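The AST's genome encoding and viability tree are not given in the abstract; the following is a generic genetic-algorithm skeleton in which a stand-in viability check filters candidate architectures. Component counts, required functions and the toy fitness objective are chosen purely for illustration.

```python
import random

N_COMPONENTS = 12          # size of the candidate-component pool (illustrative)
REQUIRED = {0, 3, 7}       # components whose functions the system must cover (illustrative)

def viable(genome):
    """Stand-in for the functionality evaluator: reject architectures that
    do not include the components covering required system-level functions."""
    return REQUIRED.issubset({i for i, bit in enumerate(genome) if bit})

def fitness(genome):
    # Prefer viable architectures that use few components (a toy objective).
    return (sum(genome) and viable(genome)) * (N_COMPONENTS - sum(genome))

def evolve(pop_size=40, generations=50, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(N_COMPONENTS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_COMPONENTS)   # single-point crossover
            child = a[:cut] + b[cut:]
            children.append([1 - g if random.random() < p_mut else g for g in child])
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())
```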
Automatic and quantitative measurement of collagen gel contraction using model-guided segmentation
NASA Astrophysics Data System (ADS)
Chen, Hsin-Chen; Yang, Tai-Hua; Thoreson, Andrew R.; Zhao, Chunfeng; Amadio, Peter C.; Sun, Yung-Nien; Su, Fong-Chin; An, Kai-Nan
2013-08-01
Quantitative measurement of collagen gel contraction plays a critical role in the field of tissue engineering because it provides spatial-temporal assessment (e.g., changes of gel area and diameter during the contraction process) reflecting the cell behavior and tissue material properties. So far the assessment of collagen gels relies on manual segmentation, which is time-consuming and suffers from serious intra- and inter-observer variability. In this study, we propose an automatic method combining various image processing techniques to resolve these problems. The proposed method first detects the maximal feasible contraction range of circular references (e.g., culture dish) and avoids the interference of irrelevant objects in the given image. Then, a three-step color conversion strategy is applied to normalize and enhance the contrast between the gel and background. We subsequently introduce a deformable circular model which utilizes regional intensity contrast and circular shape constraint to locate the gel boundary. An adaptive weighting scheme was employed to coordinate the model behavior, so that the proposed system can overcome variations of gel boundary appearances at different contraction stages. Two measurements of collagen gels (i.e., area and diameter) can readily be obtained based on the segmentation results. Experimental results, including 120 gel images for accuracy validation, showed high agreement between the proposed method and manual segmentation with an average dice similarity coefficient larger than 0.95. The results also demonstrated obvious improvement in gel contours obtained by the proposed method over two popular, generic segmentation methods.
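The deformable circular model itself is not reproduced here; the sketch below only shows the Dice similarity coefficient used above to quantify agreement between an automatic and a manual gel mask, on toy data.

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())

# Toy gel masks: automatic segmentation vs. manual outline.
auto = np.zeros((100, 100), dtype=bool)
auto[20:80, 20:80] = True
manual = np.zeros((100, 100), dtype=bool)
manual[22:82, 18:78] = True
print(f"DSC = {dice_coefficient(auto, manual):.3f}")
```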
Automated extraction of subdural electrode grid from post-implant MRI scans for epilepsy surgery
NASA Astrophysics Data System (ADS)
Pozdin, Maksym A.; Skrinjar, Oskar
2005-04-01
This paper presents an automated algorithm for extraction of Subdural Electrode Grid (SEG) from post-implant MRI scans for epilepsy surgery. Post-implant MRI scans are corrupted by the image artifacts caused by implanted electrodes. The artifacts appear as dark spherical voids, and given that the cerebrospinal fluid is also dark in T1-weighted MRI scans, it is a difficult and time-consuming task to manually locate the SEG position relative to brain structures of interest. The proposed algorithm reliably and accurately extracts the SEG from post-implant MRI scans, i.e. finds its shape and position relative to brain structures of interest. The algorithm was validated against manually determined electrode locations, and the average error was 1.6 mm for the three tested subjects.
Self-service for software development projects and HPC activities
NASA Astrophysics Data System (ADS)
Husejko, M.; Høimyr, N.; Gonzalez, A.; Koloventzos, G.; Asbury, D.; Trzcinska, A.; Agtzidis, I.; Botrel, G.; Otto, J.
2014-05-01
This contribution describes how CERN has implemented several essential tools for agile software development processes, ranging from version control (Git) to issue tracking (Jira) and documentation (Wikis). Running such services in a large organisation like CERN requires many administrative actions both by users and service providers, such as creating software projects, managing access rights, users and groups, and performing tool-specific customisation. Dealing with these requests manually would be a time-consuming task. Another area of our CERN computing services that has required dedicated manual support has been clusters for specific user communities with special needs. Our aim is to move all our services to a layered approach, with server infrastructure running on the internal cloud computing infrastructure at CERN. This contribution illustrates how we plan to optimise the management of our services by means of an end-user-facing platform acting as a portal into all the related services for software projects, inspired by popular portals for open-source developments such as Sourceforge, GitHub and others. Furthermore, the contribution will discuss recent activities with tests and evaluations of High Performance Computing (HPC) applications on different hardware and software stacks, and plans to offer a dynamically scalable HPC service at CERN, based on affordable hardware.
Farris, Dominic James; Lichtwark, Glen A
2016-05-01
Dynamic measurements of human muscle fascicle length from sequences of B-mode ultrasound images have become increasingly prevalent in biomedical research. Manual digitisation of these images is time consuming and algorithms for automating the process have been developed. Here we present a freely available software implementation of a previously validated algorithm for semi-automated tracking of muscle fascicle length in dynamic ultrasound image recordings, "UltraTrack". UltraTrack implements an affine extension to an optic flow algorithm to track movement of the muscle fascicle end-points throughout dynamically recorded sequences of images. The underlying algorithm has been previously described and its reliability tested, but here we present the software implementation with features for: tracking multiple fascicles in multiple muscles simultaneously; correcting temporal drift in measurements; manually adjusting tracking results; saving and re-loading of tracking results and loading a range of file formats. Two example runs of the software are presented detailing the tracking of fascicles from several lower limb muscles during a squatting and walking activity. We have presented a software implementation of a validated fascicle-tracking algorithm and made the source code and standalone versions freely available for download. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
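UltraTrack itself is MATLAB-based and uses an affine extension to an optic flow algorithm; the sketch below is only a simplified Python/OpenCV analogue of the core idea, tracking two fascicle end-points with pyramidal Lucas-Kanade flow, without the affine extension, drift correction or file handling described above.

```python
import numpy as np
import cv2

def track_endpoints(frames, endpoints):
    """Track fascicle end-points through a list of grayscale frames.

    `endpoints` is an (N, 2) array of (x, y) positions in the first frame;
    returns per-frame fascicle lengths in pixels.
    """
    pts = np.asarray(endpoints, dtype=np.float32).reshape(-1, 1, 2)
    lengths = [np.linalg.norm(pts[0, 0] - pts[1, 0])]
    prev = frames[0]
    for frame in frames[1:]:
        pts, status, _ = cv2.calcOpticalFlowPyrLK(prev, frame, pts, None)
        lengths.append(np.linalg.norm(pts[0, 0] - pts[1, 0]))
        prev = frame
    return np.array(lengths)

# Synthetic illustration: two bright blobs drifting to the right.
frames = []
for shift in range(10):
    img = np.zeros((200, 200), dtype=np.uint8)
    cv2.circle(img, (50 + shift, 100), 8, 255, -1)
    cv2.circle(img, (150 + shift, 100), 8, 255, -1)
    frames.append(img)
print(track_endpoints(frames, [(50, 100), (150, 100)]))
```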
NASA Astrophysics Data System (ADS)
Irshad, Humayun; Oh, Eun-Yeong; Schmolze, Daniel; Quintana, Liza M.; Collins, Laura; Tamimi, Rulla M.; Beck, Andrew H.
2017-02-01
The assessment of protein expression in immunohistochemistry (IHC) images provides important diagnostic, prognostic and predictive information for guiding cancer diagnosis and therapy. Manual scoring of IHC images represents a logistical challenge, as the process is labor intensive and time-consuming. Over the last decade, computational methods have been developed to enable the application of quantitative methods for the analysis and interpretation of protein expression in IHC images. These methods have not yet replaced manual scoring for the assessment of IHC in the majority of diagnostic laboratories and in many large-scale research studies. An alternative approach is crowdsourcing the quantification of IHC images to an undefined crowd. The aim of this study is to quantify IHC images for labeling of ER status with two different crowdsourcing approaches, image-labeling and nuclei-labeling, and compare their performance with automated methods. Crowdsourcing-derived scores obtained greater concordance with the pathologist interpretations for both image-labeling and nuclei-labeling tasks (83% and 87%), as compared to the pathologist concordance achieved by the automated method (81%) on 5,338 TMA images from 1,853 breast cancer patients. This analysis shows that crowdsourcing the scoring of protein expression in IHC images is a promising new approach for large-scale cancer molecular pathology studies.
A template-based approach for responsibility management in executable business processes
NASA Astrophysics Data System (ADS)
Cabanillas, Cristina; Resinas, Manuel; Ruiz-Cortés, Antonio
2018-05-01
Process-oriented organisations need to manage the different types of responsibilities their employees may have w.r.t. the activities involved in their business processes. Although several approaches provide support for responsibility modelling, in current Business Process Management Systems (BPMS) the only responsibility considered at runtime is the one related to performing the work required for activity completion. Others like accountability or consultation must be implemented by manually adding activities in the executable process model, which is time-consuming and error-prone. In this paper, we address this limitation by enabling current BPMS to execute processes in which people with different responsibilities interact to complete the activities. We introduce a metamodel based on Responsibility Assignment Matrices (RAM) to model the responsibility assignment for each activity, and a flexible template-based mechanism that automatically transforms such information into BPMN elements, which can be interpreted and executed by a BPMS. Thus, our approach does not enforce any specific behaviour for the different responsibilities but new templates can be modelled to specify the interaction that best suits the activity requirements. Furthermore, libraries of templates can be created and reused in different processes. We provide a reference implementation and build a library of templates for a well-known set of responsibilities.
Mining Genotype-Phenotype Associations from Public Knowledge Sources via Semantic Web Querying.
Kiefer, Richard C; Freimuth, Robert R; Chute, Christopher G; Pathak, Jyotishman
2013-01-01
Gene Wiki Plus (GWP) and the Online Mendelian Inheritance in Man (OMIM) are publicly available resources for sharing information about disease-gene and gene-SNP associations in humans. While immensely useful to the scientific community, both resources are manually curated, thereby making the data entry and publication process time-consuming and, to some degree, error-prone. To this end, this study investigates Semantic Web technologies to validate existing and potentially discover new genotype-phenotype associations in GWP and OMIM. In particular, we demonstrate the applicability of SPARQL queries for identifying associations not explicitly stated for commonly occurring chronic diseases in GWP and OMIM, and report our preliminary findings for coverage, completeness, and validity of the associations. Our results highlight the benefits of Semantic Web querying technology to validate existing disease-gene associations as well as identify novel associations, although further evaluation and analysis are required before such information can be applied and used effectively.
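The study's endpoints, graphs and queries are not given in the abstract; the sketch below only illustrates the shape of such a SPARQL query using the SPARQLWrapper library, with a placeholder endpoint URL and made-up predicate URIs.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Endpoint URL and predicate names are placeholders, not the ones used in
# the study; they only illustrate the shape of a disease-gene query.
endpoint = SPARQLWrapper("http://example.org/sparql")
endpoint.setQuery("""
    PREFIX ex: <http://example.org/ontology#>
    SELECT ?gene ?disease
    WHERE {
        ?gene    ex:associatedWith ?disease .
        ?disease ex:label "Type 2 diabetes mellitus" .
    }
    LIMIT 25
""")
endpoint.setReturnFormat(JSON)

# Requires a live SPARQL endpoint to actually return results.
results = endpoint.query().convert()
for row in results["results"]["bindings"]:
    print(row["gene"]["value"], "->", row["disease"]["value"])
```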
Classification of Mobile Laser Scanning Point Clouds from Height Features
NASA Astrophysics Data System (ADS)
Zheng, M.; Lemmens, M.; van Oosterom, P.
2017-09-01
The demand for 3D maps of cities and road networks is steadily growing and mobile laser scanning (MLS) systems are often the preferred geo-data acquisition method for capturing such scenes. Because MLS systems are mounted on cars or vans they can acquire billions of points of road scenes within a few hours of survey. Manual processing of point clouds is labour intensive and thus time consuming and expensive. Hence, the need for rapid and automated methods for 3D mapping of dense point clouds is growing exponentially. The last five years the research on automated 3D mapping of MLS data has tremendously intensified. In this paper, we present our work on automated classification of MLS point clouds. In the present stage of the research we exploited three features - two height components and one reflectance value, and achieved an overall accuracy of 73 %, which is really encouraging for further refining our approach.
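The abstract names the three features (two height components and one reflectance value) but not the classifier; the sketch below is an illustrative supervised pipeline on synthetic feature vectors using a random forest, which is an assumption rather than the method used in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for a labeled MLS point cloud: columns are
# [height above ground, height above lowest point in cell, reflectance].
rng = np.random.default_rng(4)
n = 5000
features = rng.random((n, 3)) * [30.0, 30.0, 1.0]
labels = rng.integers(0, 4, size=n)   # e.g. ground, building, vegetation, pole

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"overall accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")
```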
Computer-aided implant design for the restoration of cranial defects.
Chen, Xiaojun; Xu, Lu; Li, Xing; Egger, Jan
2017-06-23
Patient-specific cranial implants are important and necessary in the surgery of cranial defect restoration. However, traditional methods of manual design of cranial implants are complicated and time-consuming. Our purpose was to develop novel software, named EasyCrania, to design cranial implants conveniently and efficiently. The process can be divided into five steps: mirroring the model, clipping the surface, surface fitting, generation of the initial implant and generation of the final implant. The main concept of our method is to use the geometry information of the mirrored model as the base to generate the final implant. The comparative studies demonstrated that EasyCrania can improve the efficiency of cranial implant design significantly. The intra- and inter-rater reliabilities of the software were stable, at 87.07 ± 1.6% and 87.73 ± 1.4%, respectively.
Bone suppression in CT angiography data by region-based multiresolution segmentation
NASA Astrophysics Data System (ADS)
Blaffert, Thomas; Wiemker, Rafael; Lin, Zhong Min
2003-05-01
Multi slice CT (MSCT) scanners have the advantage of high and isotropic image resolution, which broadens the range of examinations for CT angiography (CTA). A very important method to present the large amount of high-resolution 3D data is visualization by maximum intensity projections (MIP). A problem with MIP projections in angiography is that bones often hide the vessels of interest, especially the skull and vertebral column. Software tools for a manual selection of bone regions and their suppression in the MIP are available, but processing is time-consuming and tedious. A highly computer-assisted or even fully automated suppression of bones would considerably speed up the examination and probably increase the number of examined cases. In this paper we investigate the suppression (or removal) of bone regions in 3D CT data sets for vascular examinations of the head with a visualization of the carotids and the circle of Willis.
Flowers, Natalie L
2010-01-01
CodeSlinger is a desktop application that was developed to aid medical professionals in the intertranslation, exploration, and use of biomedical coding schemes. The application was designed to provide a highly intuitive, easy-to-use interface that simplifies a complex business problem: a set of time-consuming, laborious tasks that were regularly performed by a group of medical professionals involving manually searching coding books, searching the Internet, and checking documentation references. A workplace observation session with a target user revealed the details of the current process and a clear understanding of the business goals of the target user group. These goals drove the design of the application's interface, which centers on searches for medical conditions and displays the codes found in the application's database that represent those conditions. The interface also allows the exploration of complex conceptual relationships across multiple coding schemes.
Abusam, A; Keesman, K J; van Straten, G; Spanjers, H; Meinema, K
2001-01-01
When applied to large simulation models, the process of parameter estimation is also called calibration. Calibration of complex non-linear systems, such as activated sludge plants, is often not an easy task. On the one hand, manual calibration of such complex systems is usually time-consuming, and its results are often not reproducible. On the other hand, conventional automatic calibration methods are not always straightforward and often hampered by local minima problems. In this paper a new straightforward and automatic procedure, which is based on the response surface method (RSM) for selecting the best identifiable parameters, is proposed. In RSM, the process response (output) is related to the levels of the input variables in terms of a first- or second-order regression model. Usually, RSM is used to relate measured process output quantities to process conditions. However, in this paper RSM is used for selecting the dominant parameters, by evaluating parameters sensitivity in a predefined region. Good results obtained in calibration of ASM No. 1 for N-removal in a full-scale oxidation ditch proved that the proposed procedure is successful and reliable.
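A minimal illustration of the response-surface idea: fit a second-order regression to model outputs sampled over a predefined parameter region and rank terms by coefficient magnitude as a crude sensitivity screen. The stand-in model, parameter names and ranges below are invented for illustration and are not ASM No. 1.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

def model_output(params):
    """Stand-in for running the simulation model; returns a scalar response."""
    mu_a, k_nh, b_a = params.T
    return 5.0 + 3.0 * mu_a - 0.2 * k_nh + 0.05 * b_a + 0.5 * mu_a * k_nh

# Sample the predefined parameter region (ranges are illustrative).
rng = np.random.default_rng(5)
samples = rng.uniform([0.3, 0.5, 0.05], [1.0, 1.5, 0.15], size=(200, 3))
response = model_output(samples)

# Second-order response surface: y ~ linear + interaction + quadratic terms.
poly = PolynomialFeatures(degree=2, include_bias=False)
X = poly.fit_transform(samples)
fit = LinearRegression().fit(X, response)

# Rank terms by absolute coefficient as a crude sensitivity measure.
names = poly.get_feature_names_out(["mu_a", "k_nh", "b_a"])
for name, coef in sorted(zip(names, fit.coef_), key=lambda t: -abs(t[1]))[:5]:
    print(f"{name:12s} {coef:+.3f}")
```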
Automated Tracing of Horizontal Neuron Processes During Retinal Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kerekes, Ryan A; Martins, Rodrigo; Dyer, Michael A
2011-01-01
In the developing mammalian retina, horizontal neurons undergo a dramatic reorganization of their processes shortly after they migrate to their appropriate laminar position. This is an important process because it is now understood that the apical processes are important for establishing the regular mosaic of horizontal cells in the retina and proper reorganization during lamination is required for synaptogenesis with photoreceptors and bipolar neurons. However, this process is difficult to study because the analysis of horizontal neuron anatomy is labor intensive and time-consuming. In this paper, we present a computational method for automatically tracing the three-dimensional (3-D) dendritic structure of horizontal retinal neurons in two-photon laser scanning microscope (TPLSM) imagery. Our method is based on 3-D skeletonization and is thus able to preserve the complex structure of the dendritic arbor of these cells. We demonstrate the effectiveness of our approach by comparing our tracing results against two sets of semi-automated traces over a set of 10 horizontal neurons ranging in age from P1 to P5. We observe an average agreement level of 81% between our automated trace and the manual traces. This automated method will serve as an important starting point for further refinement and optimization.
Scalable and Interactive Segmentation and Visualization of Neural Processes in EM Datasets
Jeong, Won-Ki; Beyer, Johanna; Hadwiger, Markus; Vazquez, Amelio; Pfister, Hanspeter; Whitaker, Ross T.
2011-01-01
Recent advances in scanning technology provide high resolution EM (Electron Microscopy) datasets that allow neuroscientists to reconstruct complex neural connections in a nervous system. However, due to the enormous size and complexity of the resulting data, segmentation and visualization of neural processes in EM data is usually a difficult and very time-consuming task. In this paper, we present NeuroTrace, a novel EM volume segmentation and visualization system that consists of two parts: a semi-automatic multiphase level set segmentation with 3D tracking for reconstruction of neural processes, and a specialized volume rendering approach for visualization of EM volumes. It employs view-dependent on-demand filtering and evaluation of a local histogram edge metric, as well as on-the-fly interpolation and ray-casting of implicit surfaces for segmented neural structures. Both methods are implemented on the GPU for interactive performance. NeuroTrace is designed to be scalable to large datasets and data-parallel hardware architectures. A comparison of NeuroTrace with a commonly used manual EM segmentation tool shows that our interactive workflow is faster and easier to use for the reconstruction of complex neural processes. PMID:19834227
Spatial Statistics for Tumor Cell Counting and Classification
NASA Astrophysics Data System (ADS)
Wirjadi, Oliver; Kim, Yoo-Jin; Breuel, Thomas
To count and classify cells in histological sections is a standard task in histology. One example is the grading of meningiomas, benign tumors of the meninges, which requires to assess the fraction of proliferating cells in an image. As this process is very time consuming when performed manually, automation is required. To address such problems, we propose a novel application of Markov point process methods in computer vision, leading to algorithms for computing the locations of circular objects in images. In contrast to previous algorithms using such spatial statistics methods in image analysis, the present one is fully trainable. This is achieved by combining point process methods with statistical classifiers. Using simulated data, the method proposed in this paper will be shown to be more accurate and more robust to noise than standard image processing methods. On the publicly available SIMCEP benchmark for cell image analysis algorithms, the cell count performance of the present paper is significantly more accurate than results published elsewhere, especially when cells form dense clusters. Furthermore, the proposed system performs as well as a state-of-the-art algorithm for the computer-aided histological grading of meningiomas when combined with a simple k-nearest neighbor classifier for identifying proliferating cells.
Stereophotogrammetry in studies of riparian vegetation dynamics
NASA Astrophysics Data System (ADS)
Hortobagyi, Borbala; Vautier, Franck; Corenblit, Dov; Steiger, Johannes
2014-05-01
Riparian vegetation responds to hydrogeomorphic disturbances and also controls sediment deposition and erosion. Spatio-temporal riparian vegetation dynamics within fluvial corridors have been quantified in many studies using aerial photographs and GIS. However, this approach does not allow the consideration of woody vegetation growth rates (i.e. vertical dimension) which are fundamental when studying feedbacks between the processes of fluvial landform construction and vegetation establishment and succession. We built 3D photogrammetric models of vegetation height based on aerial argentic and digital photographs from sites of the Allier and Garonne Rivers (France). The models were realized at two different spatial scales and with two different methods. The "large" scale corresponds to the reach of the river corridor on the Allier river (photograph taken in 2009) and the "small" scale to river bars of the Allier (photographs taken in 2002, 2009) and Garonne Rivers (photographs taken in 2000, 2002, 2006 and 2010). At the corridor scale, we generated vegetation height models using an automatic procedure. This method is fast but can only be used with digital photographs. At the bar scale, we constructed the models manually using a 3D visualization on the screen. This technique showed good results for digital and also argentic photographs but is very time-consuming. A diachronic study was performed in order to investigate vegetation succession by distinguishing three different classes according to the vegetation height: herbs (<1 m), shrubs (1-4 m) or trees (>4 m). Both methods, i.e. automatic and manual, were employed to study the evolution of the three vegetation classes and the recruitment of new vegetation patches. A comparison was conducted between the vegetation height given by models (automatic and manual) and the vegetation height measured in the field. The manually produced models (small scale) were of a precision of 0.5-1 m, allowing the quantification of woody vegetation growth rates. Thus, our results show that the manual method we developed is accurate to quantify vegetation growth rates at small scales, whereas the less accurate automatic method is appropriate to study vegetation succession at the corridor scale. Both methods are complementary and will contribute to a further exploration of the mutual relationships between hydrogeomorphic processes, topography and vegetation dynamics within alluvial systems, adding the quantification of the vertical dimension of riparian vegetation to their spatio-temporal characteristics.
Automatic detection and decoding of honey bee waggle dances.
Wario, Fernando; Wild, Benjamin; Rojas, Raúl; Landgraf, Tim
2017-01-01
The waggle dance is one of the most popular examples of animal communication. Forager bees direct their nestmates to profitable resources via a complex motor display. Essentially, the dance encodes the polar coordinates to the resource in the field. Unemployed foragers follow the dancer's movements and then search for the advertised spots in the field. Throughout the last decades, biologists have employed different techniques to measure key characteristics of the waggle dance and decode the information it conveys. Early techniques involved the use of protractors and stopwatches to measure the dance orientation and duration directly from the observation hive. Recent approaches employ digital video recordings and manual measurements on screen. However, manual approaches are very time-consuming. Most studies, therefore, regard only small numbers of animals in short periods of time. We have developed a system capable of automatically detecting, decoding and mapping communication dances in real-time. In this paper, we describe our recording setup, the image processing steps performed for dance detection and decoding and an algorithm to map dances to the field. The proposed system performs with a detection accuracy of 90.07%. The decoded waggle orientation has an average error of -2.92° (± 7.37°), well within the range of human error. To evaluate and exemplify the system's performance, a group of bees was trained to an artificial feeder, and all dances in the colony were automatically detected, decoded and mapped. The system presented here is the first of this kind made publicly available, including source code and hardware specifications. We hope this will foster quantitative analyses of the honey bee waggle dance.
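The mapping step can be illustrated with the classic decoding rule: the waggle-run angle relative to vertical gives the compass bearing relative to the sun's azimuth, and the run duration scales with distance to the resource. The calibration constant below is illustrative, not the value used by the authors.

```python
import math

METERS_PER_SECOND_OF_WAGGLE = 750.0   # distance calibration constant, illustrative only

def dance_to_field(waggle_angle_deg, waggle_duration_s, sun_azimuth_deg,
                   hive_xy=(0.0, 0.0)):
    """Map a decoded dance (angle w.r.t. vertical, waggle-run duration) to a
    field location, using the classic angle-to-sun / duration-to-distance rule."""
    bearing = math.radians(sun_azimuth_deg + waggle_angle_deg)   # compass bearing
    distance = METERS_PER_SECOND_OF_WAGGLE * waggle_duration_s
    x = hive_xy[0] + distance * math.sin(bearing)   # east offset in metres
    y = hive_xy[1] + distance * math.cos(bearing)   # north offset in metres
    return x, y

# A dance 30° clockwise from vertical lasting 0.8 s, with sun azimuth 135°.
print(dance_to_field(30.0, 0.8, 135.0))
```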
Lüddemann, Tobias; Egger, Jan
2016-01-01
Among all types of cancer, gynecological malignancies are the fourth most frequent type of cancer among women. In addition to chemotherapy and external beam radiation, brachytherapy is the standard procedure for the treatment of these malignancies. In the course of treatment planning, localization of the tumor as the target volume and of adjacent organs at risk by segmentation is crucial to accomplish an optimal radiation distribution to the tumor while simultaneously preserving healthy tissue. Segmentation is performed manually and represents a time-consuming task in clinical daily routine. This study focuses on the segmentation of the rectum/sigmoid colon as an organ-at-risk in gynecological brachytherapy. The proposed segmentation method uses an interactive, graph-based segmentation scheme with a user-defined template. The scheme creates a directed two-dimensional graph, followed by the minimal cost closed set computation on the graph, resulting in an outlining of the rectum. The graph’s outline is dynamically adapted to the last calculated cut. Evaluation was performed by comparing manual segmentations of the rectum/sigmoid colon to results achieved with the proposed method. The comparison of the algorithmic to the manual result yielded a Dice similarity coefficient of 83.85 ± 4.08%, compared to 83.97 ± 8.08% for the comparison of two manual segmentations by the same physician. Utilizing the proposed methodology resulted in a median time of 128 s/dataset, compared to 300 s needed for pure manual segmentation. PMID:27403448
California State Library: Processing Center Design and Specifications. Volume III, Coding Manual.
ERIC Educational Resources Information Center
Sherman, Don; Shoffner, Ralph M.
As part of the report on the California State Library Processing Center design and specifications, this volume is a coding manual for the conversion of catalog card data to a machine-readable form. The form is compatible with the national MARC system, while at the same time it contains provisions for problems peculiar to the local situation. This…
Automatic detection of larynx cancer from contrast-enhanced magnetic resonance images
NASA Astrophysics Data System (ADS)
Doshi, Trushali; Soraghan, John; Grose, Derek; MacKenzie, Kenneth; Petropoulakis, Lykourgos
2015-03-01
Detection of larynx cancer from medical imaging is important for the quantification and for the definition of target volumes in radiotherapy treatment planning (RTP). Magnetic resonance imaging (MRI) is being increasingly used in RTP due to its high resolution and excellent soft tissue contrast. Manually detecting larynx cancer from sequential MRI is time-consuming and subjective. The large diversity of cancers in terms of geometry and non-distinct boundaries, combined with the presence of normal anatomical regions close to the cancer regions, necessitates the development of automatic and robust algorithms for this task. A new automatic algorithm for the detection of larynx cancer from 2D gadolinium-enhanced T1-weighted (T1+Gd) MRI to assist clinicians in RTP is presented. The algorithm employs edge detection using spatial neighborhood information of pixels and incorporates this information in a fuzzy c-means clustering process to robustly separate different tissue types. Furthermore, it utilizes information on the expected cancer location for labeling cancer regions. Comparison of this automatic detection system with manual clinical detection on real T1+Gd axial MRI slices of 2 patients (24 MRI slices) with visible larynx cancer yields an average Dice similarity coefficient of 0.78 ± 0.04 and an average root mean square error of 1.82 ± 0.28 mm. Preliminary results show that this fully automatic system can assist clinicians in RTP by obtaining quantifiable, non-subjective and repeatable detection results in a time-efficient and unbiased fashion.
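The spatial-neighborhood weighting and expected-location labeling described above are not reproduced here; the sketch below is only a plain fuzzy c-means clustering of pixel intensities, to show the core clustering step on synthetic data.

```python
import numpy as np

def fuzzy_c_means(x, n_clusters=3, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means on a 1-D array of pixel intensities.

    Returns (centers, memberships); memberships has shape (len(x), n_clusters).
    """
    rng = np.random.default_rng(seed)
    u = rng.random((x.size, n_clusters))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        um = u ** m
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
        dist = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = 1.0 / (dist ** (2.0 / (m - 1.0)))
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

# Synthetic "MRI slice" intensities drawn from three tissue classes.
rng = np.random.default_rng(6)
pixels = np.concatenate([rng.normal(mu, 5, 500) for mu in (40, 110, 190)])
centers, memberships = fuzzy_c_means(pixels, n_clusters=3)
print(np.sort(centers))
```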
Ughi, Giovanni J; Adriaenssens, Tom; Desmet, Walter; D’hooge, Jan
2012-01-01
Intravascular optical coherence tomography (IV-OCT) is an imaging modality that can be used for the assessment of intracoronary stents. Recent publications have pointed out that 3D visualizations have potential advantages compared with conventional 2D representations. However, 3D imaging still requires a time-consuming manual procedure not suitable for on-line application during coronary interventions. We propose an algorithm for rapid and fully automatic 3D visualization of IV-OCT pullbacks. IV-OCT images are first processed to segment the different structures; this also allows automatic pullback calibration. Then, according to the segmentation results, different structures are depicted in different colors to visualize the vessel wall, the stent, and the guide-wire in detail. Final 3D rendering results are obtained through a commercial 3D DICOM viewer. Manual analysis was used as ground truth for the validation of the segmentation algorithms. A correlation value of 0.99 and good limits of agreement (Bland-Altman statistics) were found over 250 images randomly extracted from 25 in vivo pullbacks. Moreover, 3D renderings were compared with angiography, with pictures of deployed stents made available by the manufacturers, and with conventional 2D imaging, corroborating the visualization results. The computation time for visualizing an entire dataset was approximately 74 s. The proposed method allows on-line use of 3D IV-OCT during percutaneous coronary interventions, potentially allowing treatment optimization. PMID:23243578
GISentinel: a software platform for automatic ulcer detection on capsule endoscopy videos
NASA Astrophysics Data System (ADS)
Yi, Steven; Jiao, Heng; Meng, Fan; Leighton, Jonathon A.; Shabana, Pasha; Rentz, Lauri
2014-03-01
In this paper, we present a novel and clinically valuable software platform for automatic ulcer detection in the gastrointestinal (GI) tract from capsule endoscopy (CE) videos. Typical CE videos take about 8 hours and have to be reviewed manually by physicians to detect and locate diseases such as ulcers and bleeding. The process is time consuming, and the lengthy manual review makes missed findings likely. Working with our collaborators, we focused on developing a software platform called GISentinel, which fully automates GI tract ulcer detection and classification. The software includes three parts: frequency-based Log-Gabor filter extraction of regions of interest (ROI); a feature selection and validation method (e.g., illumination-invariant features, color-independent features, and symmetrical texture features); and cascade SVM classification for handling "ulcer vs. non-ulcer" cases. In our experiments, the software gave decent results. Frame-wise, the ulcer detection rate is 69.65% (319/458); instance-wise, the ulcer detection rate is 82.35% (28/34), and the false alarm rate is 16.43% (34/207). This work is part of our 2D/3D-based GI tract disease detection software platform, whose final goal is to detect and classify major GI tract diseases, such as bleeding, ulcers, and polyps, from CE videos. This paper mainly describes the automatic ulcer detection functional module.
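The frequency-based ROI extraction mentioned above relies on Log-Gabor filtering. The following sketch builds the standard radial Log-Gabor transfer function and applies it to a single image at one scale, ignoring orientation selectivity; the parameter values are assumptions and the published pipeline is considerably more elaborate.

    import numpy as np

    def log_gabor_filter(shape, f0=0.1, sigma_on_f=0.55):
        """Radial Log-Gabor transfer function G(f) = exp(-(log(f/f0))^2 / (2 log(sigma)^2))."""
        rows, cols = shape
        fy = np.fft.fftfreq(rows)[:, None]
        fx = np.fft.fftfreq(cols)[None, :]
        radius = np.sqrt(fx ** 2 + fy ** 2)
        radius[0, 0] = 1.0                       # avoid log(0) at the DC term
        g = np.exp(-(np.log(radius / f0) ** 2) / (2.0 * np.log(sigma_on_f) ** 2))
        g[0, 0] = 0.0                            # zero DC response
        return g

    def apply_log_gabor(image, f0=0.1, sigma_on_f=0.55):
        """Filter an image in the frequency domain and return the response magnitude."""
        g = log_gabor_filter(image.shape, f0, sigma_on_f)
        response = np.fft.ifft2(np.fft.fft2(image) * g)
        return np.abs(response)

    frame = np.random.rand(256, 256)             # stand-in for a capsule endoscopy frame
    roi_map = apply_log_gabor(frame)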
Reis, Felipe; Machín, Leandro; Rosenthal, Amauri; Deliza, Rosires; Ares, Gastón
2016-12-01
People do not usually process all the available information on packages when making their food choices and rely on heuristics for their decisions, particularly when time is limited. However, most consumer studies encourage participants to invest considerable time in making their choices. Imposing a time constraint in consumer studies may therefore increase their ecological validity. In this context, the aim of the present work was to evaluate the influence of a time constraint on consumer evaluation of pomegranate/orange juice bottles using a rating-based conjoint task. A consumer study with 100 participants was carried out, in which they evaluated 16 pomegranate/orange fruit juice bottles differing in bottle design, front-of-pack nutritional information, nutrition claim, and processing claim, and rated their intention to purchase. Half of the participants evaluated the bottle images without a time constraint and the other half had a time constraint of 3 s for evaluating each image. Eye movements were recorded during the evaluation. Results showed that the time constraint did not largely modify the way in which consumers visually processed the bottle images when evaluating intention to purchase. Regardless of the experimental condition (with or without time constraint), they tended to evaluate the same product characteristics and to give them the same relative importance, although a trend towards a more superficial evaluation that skipped complex information was observed under the time constraint. Regarding the influence of product characteristics on intention to purchase, bottle design was the variable with the largest relative importance in both conditions, overriding the influence of nutritional or processing characteristics, which stresses the importance of graphic design in shaping consumer perception. Copyright © 2016 Elsevier Ltd. All rights reserved.
High-throughput measurement of rice tillers using a conveyor equipped with x-ray computed tomography
NASA Astrophysics Data System (ADS)
Yang, Wanneng; Xu, Xiaochun; Duan, Lingfeng; Luo, Qingming; Chen, Shangbin; Zeng, Shaoqun; Liu, Qian
2011-02-01
Tillering is one of the most important agronomic traits because the number of shoots per plant determines panicle number, a key component of grain yield. The conventional method of counting tillers is still manual. Under mass-measurement conditions, accuracy and efficiency gradually degrade as experienced staff become fatigued; manual measurement, including counting and recording, is therefore not only time consuming but also lacks objectivity. To automate this process, we developed a high-throughput facility, dubbed the high-throughput system for measuring automatically rice tillers (H-SMART), for measuring rice tillers based on a conventional x-ray computed tomography (CT) system and an industrial conveyor. Each pot-grown rice plant was delivered into the CT system for scanning via the conveyor. A filtered back-projection algorithm was used to reconstruct the transverse section image of the rice culms. The number of tillers was then automatically extracted by image segmentation. To evaluate the accuracy of this system, three batches of rice at different growth stages (tillering, heading, or filling) were tested, yielding mean absolute errors of 0.22, 0.36, and 0.36, respectively. Subsequently, the complete machine was used under industry conditions to estimate its efficiency, which was 4320 pots per continuous 24 h workday. Thus, the H-SMART can determine the number of tillers of pot-grown rice plants, providing three advantages over the manual tillering method: absence of human disturbance, automation, and high throughput. This facility expands the application of agricultural photonics in plant phenomics.
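The counting step described above amounts to segmenting the culm cross-sections in a reconstructed CT slice and counting the connected regions. A hedged SciPy sketch of that idea follows; the threshold and minimum region size are illustrative assumptions, not values from the paper.

    import numpy as np
    from scipy import ndimage

    def count_tillers(ct_slice, threshold=0.5, min_area=20):
        """Count culm cross-sections in a reconstructed transverse CT slice."""
        binary = ct_slice > threshold                       # separate culms from background
        labels, n = ndimage.label(binary)                   # 4-connected labeling by default
        if n == 0:
            return 0
        sizes = ndimage.sum(binary, labels, index=np.arange(1, n + 1))
        return int(np.count_nonzero(sizes >= min_area))     # discard small noise regions

    # Synthetic slice with three circular culm cross-sections
    slice_img = np.zeros((128, 128))
    yy, xx = np.ogrid[:128, :128]
    for cx, cy in [(30, 30), (60, 80), (90, 40)]:
        slice_img[(yy - cy) ** 2 + (xx - cx) ** 2 < 36] = 1.0
    print(count_tillers(slice_img))                         # -> 3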
Teachers Environmental Resource Unit: Consumer Resources Idea Manual.
ERIC Educational Resources Information Center
Bemiss, Clair W.
The Consumer Resources Environteam has developed this idea handbook as part of the Broad Spectrum Environmental Education Program in Brevard County, Florida. Interest had been displayed by local civic groups, fraternal clubs, and private organizations in identifying environmental improvement projects that could be undertaken by individual groups.…
Manual Therapy Practices of Sobadores in North Carolina
Graham, Alan; Sandberg, Joanne C.; Quandt, Sara A.; Mora, Dana C.
2016-01-01
Abstract Objectives: This analysis provides a description of the manual-therapy elements of sobadores practicing in North Carolina, using videotapes of patient treatment sessions. Design: Three sobadores allowed the video recording of eight patient treatment sessions (one each for two sobadores, six for the third sobador). Each of the recordings was reviewed by an experienced chiropractor who recorded the frequencies of seven defined manual-therapy elements: (1) treatment time; (2) patient position on treatment surface; (3) patient body part contacted by the sobador; (4) sobador examination methods; (5) primary treatment processes; (6) sobador body part area referencing patient; and (7) adjunctive treatment processes. Results: The range of treatment time of 9–30 min was similar to the treatment spectra that combine techniques used by conventional massage and manipulative practitioners. The patient positions on the treatment surface were not extraordinary, given the wide variety of treatment processes used, and indicated the sobadores treat patients in multiple positions. The patient body part contacted by the sobadores indicated that they were treating each of the major parts of the musculoskeletal system. Basic palpation dominated the sobadores' examination methods. The sobadores' primary treatment processes included significant variety, but rubbing was the dominant practice. The hands were the sobador body area that most often made contact with the patient. They all used lubricants. Conclusions: Sobadores' methods are similar to those of other manual-therapy practitioners. Additional study of video-recorded sobador practices is needed. Video-recorded practice of other traditional and conventional manual therapies for comparative analysis will help delineate the specific similarities and differences among the manual therapies. PMID:27400120
Engineer Modeling Study. Volume II. Users Manual.
1982-09-01
Distribution Center, Digital Equipment Corporation, 1980). The following paragraphs briefly describe each of the major input sections...abbreviation 3. A sequence number for post-processing 4. Clock time 5. Order number pointer (six digits) 6. Job number pointer (six digits) 7. Unit number...KIT) Users Manual (Boeing Computer Services, Inc., 1977). VAX/VMS Users Manual, Volume 3A (Software Distribution Center, Digital Equipment
Cost Analysis Sources and Documents Data Base Reference Manual (Update)
1989-06-01
M: Reference Manual; PRICE H: Training Course Workbook. 11. Use in Cost Analysis. Important source of cost estimates for electronic and mechanical...Nature of Data. Contains many microeconomic time series by month or quarter. 5. Level of Detail. Very detailed. 6. Normalization Processes Required...Reference Manual. Moorestown, N.J.: GE Corporation, September 1986. 64. PRICE Training Course Workbook. Moorestown, N.J.: GE Corporation, February 1986
Illumina Unamplified Indexed Library Construction: An Automated Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hack, Christopher A.; Sczyrba, Alexander; Cheng, Jan-Fang
Manual library construction is a limiting factor in Illumina sequencing. Constructing libraries by hand is costly, time-consuming, low-throughput, and ergonomically hazardous, and constructing multiple libraries introduces risk of library failure due to pipetting errors. The ability to construct multiple libraries simultaneously in automated fashion represents significant cost and time savings. Here we present a strategy to construct up to 96 unamplified indexed libraries using Illumina TruSeq reagents and a Biomek FX robotic platform. We also present data to indicate that this library construction method has little or no risk of cross-contamination between samples.
Image quality comparisons of X-Omat RP, L and B films.
Van Dis, M L; Beck, F M
1991-08-01
The Eastman Kodak Company has recently developed a new film, X-Omat B (XB), designed to be interchangeable with X-Omat RP (XRP) film. The manufacturer claims the new film can be manually developed in half the time of other X-Omat films while automatic processing is unchanged. Three X-Omat film types were processed manually or automatically and the image qualities were evaluated. The XRP film had greater contrast than the XB and X-Omat L (XL) films when manually processed, and the XL film showed less contrast than the XB and XRP films when processed automatically. There was no difference in the subjective evaluation of the various film types and processing methods, and the XB film could be interchanged with XRP film in a simulated clinical situation.
Boers, A M; Marquering, H A; Jochem, J J; Besselink, N J; Berkhemer, O A; van der Lugt, A; Beenen, L F; Majoie, C B
2013-08-01
Cerebral infarct volume (CIV) as observed in follow-up CT is an important radiologic outcome measure of the effectiveness of treatment in patients with acute ischemic stroke. However, manual measurement of CIV is time-consuming and operator-dependent. The purpose of this study was to develop and evaluate a robust automated measurement of the CIV. The CIV in early follow-up CT images of 34 consecutive patients with acute ischemic stroke was segmented with an automated intensity-based region-growing algorithm, which includes partial volume effect correction near the skull, midline determination, and ventricle and hemorrhage exclusion. Two observers manually delineated the CIV. Interobserver variability of the manual assessments and the accuracy of the automated method were evaluated by using the Pearson correlation, Bland-Altman analysis, and Dice coefficients. Accuracy was defined as the correlation with the manual assessment as the reference standard. The Pearson correlation for the automated method compared with the reference standard was similar to the manual correlation (R = 0.98). The accuracy of the automated method was excellent, with a mean difference of 0.5 mL and limits of agreement of -38.0 to 39.1 mL, which were more consistent than the interobserver variability of the 2 observers (-40.9 to 44.1 mL). However, the Dice coefficients were higher for the manual delineation. The automated method showed strong correlation and accuracy with respect to the manual reference measurement. This approach has the potential to become the standard for assessing infarct volume as a secondary outcome measure for evaluating the effectiveness of treatment.
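The automated measurement above is built around intensity-based region growing. The following generic sketch grows a region from a seed pixel within an intensity window using 4-connectivity; the published method's partial volume correction, midline determination, and ventricle/hemorrhage exclusion are not reproduced, and the intensity window and voxel size in the example are assumptions.

    from collections import deque
    import numpy as np

    def region_grow(image, seed, low, high):
        """Grow a region from a seed pixel, accepting 4-neighbors whose intensity lies in [low, high]."""
        visited = np.zeros(image.shape, dtype=bool)
        region = np.zeros(image.shape, dtype=bool)
        queue = deque([seed])
        visited[seed] = True
        while queue:
            y, x = queue.popleft()
            if low <= image[y, x] <= high:
                region[y, x] = True
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < image.shape[0] and 0 <= nx < image.shape[1] and not visited[ny, nx]:
                        visited[ny, nx] = True
                        queue.append((ny, nx))
        return region

    # Example: grow a hypodense region around a seed in a synthetic CT slice (HU-like values)
    ct = np.full((100, 100), 35.0)
    ct[40:70, 30:60] = 20.0                                # synthetic infarct (lower attenuation)
    mask = region_grow(ct, seed=(50, 45), low=15.0, high=28.0)
    volume_ml = mask.sum() * 0.5 * 0.5 * 5.0 / 1000.0      # assumed 0.5 x 0.5 x 5 mm voxels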
Teaching Consumer Education and Financial Planning: A Manual for School and Classroom Use.
ERIC Educational Resources Information Center
Council for Family Financial Education, Silver Spring, MD.
This manual, designed for both the teacher and curriculum planner, is organized around six major themes: planning, buying, borrowing, protecting, investing, and sharing. These major areas are used in place of more conventional topical categories because the emphasis here is upon each individual's behavior. A chart is provided that shows the…
A semi-automated technique for labeling and counting of apoptosing retinal cells
2014-01-01
Background: Retinal ganglion cell (RGC) loss is one of the earliest and most important cellular changes in glaucoma. The DARC (Detection of Apoptosing Retinal Cells) technology enables in vivo real-time non-invasive imaging of single apoptosing retinal cells in animal models of glaucoma and Alzheimer’s disease. To date, apoptosing RGCs imaged using DARC have been counted manually. This is time-consuming, labour-intensive, vulnerable to bias, and has considerable inter- and intra-operator variability. Results: A semi-automated algorithm was developed which enabled automated identification of apoptosing RGCs labeled with fluorescent Annexin-5 on DARC images. Automated analysis included a pre-processing stage involving local-luminance and local-contrast “gain control”, a “blob analysis” step to differentiate between cells, vessels and noise, and a method to exclude non-cell structures using specific combined ‘size’ and ‘aspect’ ratio criteria. Apoptosing retinal cells were counted by 3 masked operators, generating ‘gold-standard’ mean manual cell counts, and were also counted using the newly developed automated algorithm. Comparison between automated cell counts and the mean manual cell counts on 66 DARC images showed significant correlation between the two methods (Pearson’s correlation coefficient 0.978 (p < 0.001), R squared = 0.956). The intraclass correlation coefficient was 0.986 (95% CI 0.977-0.991, p < 0.001), and Cronbach’s alpha measure of consistency was 0.986, confirming excellent correlation and consistency. No significant difference (p = 0.922, 95% CI: −5.53 to 6.10) was detected between the cell counts of the two methods. Conclusions: The novel automated algorithm enabled accurate quantification of apoptosing RGCs that is highly comparable to manual counting, and appears to minimise operator bias whilst being both fast and reproducible. This may prove to be a valuable method of quantifying apoptosing retinal cells, with particular relevance to translation in the clinic, where a Phase I clinical trial of DARC in glaucoma patients is due to start shortly. PMID:24902592
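The "blob analysis" and combined size/aspect-ratio exclusion described above can be illustrated with scikit-image region properties. The thresholds below are placeholders chosen for the example, not values from the published algorithm.

    import numpy as np
    from skimage import measure

    def count_labeled_cells(binary_image, min_area=10, max_area=200, max_aspect=2.5):
        """Count blobs that satisfy combined size and aspect-ratio criteria."""
        labels = measure.label(binary_image)
        kept = 0
        for region in measure.regionprops(labels):
            if not (min_area <= region.area <= max_area):
                continue                                    # too small (noise) or too large
            minor = max(region.minor_axis_length, 1e-6)
            if region.major_axis_length / minor > max_aspect:
                continue                                    # elongated structures such as vessels
            kept += 1
        return kept

    img = np.zeros((200, 200), dtype=bool)
    img[50:58, 50:58] = True                                # a cell-like blob
    img[100:103, 20:120] = True                             # a vessel-like streak (excluded)
    print(count_labeled_cells(img))                         # -> 1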
Fan, Jung-Wei; Lussier, Yves A
2017-01-01
Dietary supplements remain a relatively underexplored source for drug repurposing. A systematic approach to soliciting responses from a large consumer population is desirable to speed up innovation. We tested a workflow that mines unexpected benefits of dietary supplements from massive numbers of consumer reviews. A (non-exhaustive) list of regular expressions was used to screen over 2 million reviews of health and personal care products. The matched reviews were manually analyzed, and one supplement-disease pair was linked to biological databases to enrich the hypothesized association. The regular expressions found 169 candidate reviews, of which 45.6% described unexpected benefits of certain dietary supplements. The manual analysis showed some of the supplement-disease associations to be novel or in agreement with evidence published later in the literature. The hypothesis enrichment was able to identify meaningful functional similarity between the supplement and the disease. The results demonstrated the value of the workflow in identifying candidates for supplement repurposing.
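The screening step above uses regular expressions over review text. A minimal sketch of that idea follows; the patterns shown are invented examples of "unexpected benefit" language and are not the study's actual (non-exhaustive) list.

    import re

    # Illustrative patterns only; the study's real pattern list was larger and different.
    PATTERNS = [
        re.compile(r"\b(unexpected(ly)?|surpris(ed|ing(ly)?))\b.*\b(helped|improved|cured|relieved)\b", re.I),
        re.compile(r"\bside benefit\b", re.I),
        re.compile(r"\bdidn'?t expect\b.*\b(help|improve|relief)\b", re.I),
    ]

    def screen_reviews(reviews):
        """Return reviews whose text matches any candidate 'unexpected benefit' pattern."""
        return [text for text in reviews if any(p.search(text) for p in PATTERNS)]

    reviews = [
        "I bought this fish oil for my heart, but unexpectedly it improved my dry eyes.",
        "Tastes fine, arrived on time.",
    ]
    print(screen_reviews(reviews))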
Ice nucleating agents allow embryo freezing without manual seeding.
Teixeira, Magda; Buff, Samuel; Desnos, Hugo; Loiseau, Céline; Bruyère, Pierre; Joly, Thierry; Commin, Loris
2017-12-01
Embryo slow-freezing protocols include a nucleation induction step called manual seeding. This step is time consuming, manipulator dependent, and hard to standardize. It requires access to the samples, which is not always possible within the configuration of systems such as differential scanning calorimeters or cryomicroscopes. Ice nucleation can be induced by other methods, e.g., by the use of ice nucleating agents. Snomax is a commercial preparation of inactivated proteins extracted from Pseudomonas syringae. The aim of our study was to investigate whether Snomax can be an alternative to manual seeding in the slow freezing of mouse embryos. The influence of Snomax on the pH and osmolality of the freezing medium was evaluated. In vitro development (blastocyst formation and hatching rates) of fresh embryos exposed to Snomax and of embryos cryopreserved with and without Snomax was assessed. The mitochondrial activity of frozen-thawed blastocysts was assessed by JC-1 fluorescent staining. Snomax did not alter the physicochemical properties of the freezing medium and did not affect the development of fresh embryos. After cryopreservation, the substitution of manual seeding by the ice nucleating agent (INA) Snomax did not affect embryo development or embryo mitochondrial activity. In conclusion, Snomax appears to be an effective ice nucleating agent for the slow freezing of mouse embryos. Snomax can also be a valuable alternative to manual seeding in research protocols in which manual seeding cannot be performed (i.e., differential scanning calorimetry and cryomicroscopy). Copyright © 2017 Elsevier Inc. All rights reserved.
A Virtual Reality Visualization Tool for Neuron Tracing
Usher, Will; Klacansky, Pavol; Federer, Frederick; Bremer, Peer-Timo; Knoll, Aaron; Angelucci, Alessandra; Pascucci, Valerio
2017-01-01
Tracing neurons in large-scale microscopy data is crucial to establishing a wiring diagram of the brain, which is needed to understand how neural circuits in the brain process information and generate behavior. Automatic techniques often fail for large and complex datasets, and connectomics researchers may spend weeks or months manually tracing neurons using 2D image stacks. We present a design study of a new virtual reality (VR) system, developed in collaboration with trained neuroanatomists, to trace neurons in microscope scans of the visual cortex of primates. We hypothesize that using consumer-grade VR technology to interact with neurons directly in 3D will help neuroscientists better resolve complex cases and enable them to trace neurons faster and with less physical and mental strain. We discuss both the design process and technical challenges in developing an interactive system to navigate and manipulate terabyte-sized image volumes in VR. Using a number of different datasets, we demonstrate that, compared to widely used commercial software, consumer-grade VR presents a promising alternative for scientists. PMID:28866520
A Virtual Reality Visualization Tool for Neuron Tracing.
Usher, Will; Klacansky, Pavol; Federer, Frederick; Bremer, Peer-Timo; Knoll, Aaron; Yarch, Jeff; Angelucci, Alessandra; Pascucci, Valerio
2018-01-01
Tracing neurons in large-scale microscopy data is crucial to establishing a wiring diagram of the brain, which is needed to understand how neural circuits in the brain process information and generate behavior. Automatic techniques often fail for large and complex datasets, and connectomics researchers may spend weeks or months manually tracing neurons using 2D image stacks. We present a design study of a new virtual reality (VR) system, developed in collaboration with trained neuroanatomists, to trace neurons in microscope scans of the visual cortex of primates. We hypothesize that using consumer-grade VR technology to interact with neurons directly in 3D will help neuroscientists better resolve complex cases and enable them to trace neurons faster and with less physical and mental strain. We discuss both the design process and technical challenges in developing an interactive system to navigate and manipulate terabyte-sized image volumes in VR. Using a number of different datasets, we demonstrate that, compared to widely used commercial software, consumer-grade VR presents a promising alternative for scientists.
Qu, Zhenhong; Ghorbani, Rhonda P; Li, Hongyan; Hunter, Robert L; Hannah, Christina D
2007-03-01
Gross examination, encompassing description, dissection, and sampling, is a complex task and an essential component of surgical pathology. Because of the complexity of the task, standardized protocols to guide the gross examination often become a bulky manual that is difficult to use. This problem is further compounded by the high specimen volume and biohazardous nature of the task. As a result, such a manual is often underused, leading to errors that are potentially harmful and time consuming to correct, a common chronic problem affecting many pathology laboratories. To combat this problem, we have developed a simple method that incorporates the complex text and graphic information of a typical procedure manual yet allows easy access to any intended instructive information in the manual. The method uses the Object-Linking-and-Embedding function of Microsoft Word (Microsoft, Redmond, WA) to establish hyperlinks among different contents, and then uses touch-screen technology to facilitate navigation through the manual on a computer screen installed at the cutting bench, with no need for a physical keyboard or mouse. It takes less than 4 seconds to reach any intended information in the manual by 3 to 4 touches on the screen. A 3-year follow-up study shows that this method has increased use of the manual and has improved the quality of gross examination. The method is simple and can be easily tailored to different formats of instructive information, allowing flexible organization, easy access, and quick navigation. Increased compliance with instructive information reduces errors at the grossing bench and improves work efficiency.
Novis, David A; Walsh, Molly; Wilkinson, David; St Louis, Mary; Ben-Ezra, Jonathon
2006-05-01
Automated laboratory hematology analyzers are capable of performing differential counts on peripheral blood smears with greater precision and more accurate detection of distributional and morphologic abnormalities than manual examination of blood smears. Manual determinations of blood morphology and leukocyte differential counts are time-consuming, expensive, and may not always be necessary. The frequency with which hematology laboratory workers perform manual reviews despite the availability of the labor-saving features of automated analyzers is unknown. The objectives were to determine the normative rates at which manual peripheral blood smear reviews are performed in clinical laboratories, to examine laboratory practices associated with higher or lower manual review rates, and to measure the effects of manual smear review on the efficiency of generating complete blood count (CBC) determinations. From each of 3 traditional shifts per day, participants were asked to select serially 10 automated CBC specimens and to indicate whether manual scans and/or reviews with complete differential counts were performed on blood smears prepared from those specimens. Sampling continued until a total of 60 peripheral smears had been reviewed manually. For each specimen on which a manual review was performed, participants indicated the patient's age, hemoglobin value, white blood cell count, platelet count, and the primary reason why the manual review was performed. Participants also submitted data concerning their institutions' demographic profiles and their laboratories' staffing, work volume, and practices regarding CBC determinations. The rates of manual reviews and estimates of efficiency in performing CBC determinations were obtained from these data. A total of 263 hospitals and independent laboratories, predominantly located in the United States, participated in the College of American Pathologists Q-Probes Program. There were 95,141 CBC determinations examined in this study; participants reviewed 15,423 (16.2%) peripheral blood smears manually. In the median institution (50th percentile), manual reviews of peripheral smears were performed on 26.7% of specimens. Manual differential count review rates were inversely associated with the magnitude of the platelet counts that laboratory policy required to trigger smear reviews and with the efficiency of generating CBC reports. Lower manual differential count review rates were associated with laboratory policies that allowed manual reviews solely on the basis of abnormal automated red cell parameters and that precluded repeat manual reviews within designated time intervals. The manual scan rate increased with the number of hospital beds. In more than one third (35.7%) of the peripheral smears reviewed manually, participants reported learning additional information beyond what was available on the automated hematology analyzer printouts alone. By adopting certain laboratory practices, it may be possible to reduce the rates of manual reviews of peripheral blood smears and increase the efficiency of generating CBC results.
Romero, Peggy; Miller, Ted; Garakani, Arman
2009-12-01
Current methods to assess neurodegeneration in dorsal root ganglion cultures as a model for neurodegenerative diseases are imprecise and time-consuming. Here we describe two new methods to quantify neuroprotection in these cultures. The neurite quality index (NQI) builds upon earlier manual methods, incorporating additional morphological events to increase sensitivity for the detection of early degeneration events. Neurosight is a machine vision-based method that recapitulates many of the strengths of the NQI while enabling high-throughput screening applications with decreased costs.
Easing The Calculation Of Bolt-Circle Coordinates
NASA Technical Reports Server (NTRS)
Burley, Richard K.
1995-01-01
Bolt Circle Calculation (BOLT-CALC) computer program used to reduce significant time consumed in manually computing trigonometry of rectangular Cartesian coordinates of holes in bolt circle as shown on blueprint or drawing. Eliminates risk of computational errors, particularly in cases involving many holes or in cases in which coordinates expressed to many significant digits. Program assists in many practical situations arising in machine shops. Written in BASIC. Also successfully compiled and implemented by use of Microsoft's QuickBasic v4.0.
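The underlying trigonometry is straightforward: hole i of an n-hole bolt circle of radius r centered at (xc, yc) lies at angle theta_i = theta_0 + 2*pi*i/n. The short Python sketch below illustrates the calculation; BOLT-CALC itself was written in BASIC, so this is not the program's code.

    import math

    def bolt_circle_coordinates(xc, yc, radius, n_holes, start_angle_deg=0.0):
        """Return (x, y) coordinates of equally spaced holes on a bolt circle."""
        coords = []
        for i in range(n_holes):
            theta = math.radians(start_angle_deg) + 2.0 * math.pi * i / n_holes
            coords.append((xc + radius * math.cos(theta), yc + radius * math.sin(theta)))
        return coords

    # Example: 6 holes on a 100 mm diameter bolt circle centered at the origin, first hole at 15 degrees
    for x, y in bolt_circle_coordinates(0.0, 0.0, 50.0, 6, 15.0):
        print(f"{x:10.4f} {y:10.4f}")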
Wang, Caixia; Chen, Yuanyuan; Yang, Feng; Ren, Jie; Yu, Xin; Wang, Jiani; Sun, Siyu
2016-08-01
The present study aimed to assess the efficacy of computer-based endoscope cleaning and disinfection using a hospital management information system (HMIS). A total of 2,674 gastroscopes were eligible for inclusion in this study. For disinfection management, the gastroscopes were randomly divided into 2 groups: a gastroscope disinfection HMIS (GD-HMIS) group and a manual group. In the GD-HMIS group, an integrated circuit (IC) card chip was installed to monitor and record endoscope cleaning and disinfection automatically and in real time, whereas endoscope cleaning and disinfection in the manual group was recorded manually. The overall disinfection progress for both groups was recorded, and the total operational time was calculated. For the GD-HMIS group, endoscope disinfection HMIS software was successfully developed. The time to complete a single session of cleaning and disinfecting a gastroscope was 15.6 minutes (range, 14.3-17.2 minutes) for the GD-HMIS group and 21.3 minutes (range, 20.2-23.9 minutes) for the manual group. Failure to record information, such as the identification number of the endoscope, occasionally occurred in the manual group, which affected the accuracy and reliability of manual recording. Computer-based gastroscope cleaning and disinfection using a hospital management information system can monitor the process of gastroscope cleaning and disinfection in real time and improve accuracy and reliability, thereby ensuring the quality of gastroscope cleaning and disinfection. Copyright © 2016 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
A prototype of an automated high resolution InSAR volcano-monitoring system in the MED-SUV project
NASA Astrophysics Data System (ADS)
Chowdhury, Tanvir A.; Minet, Christian; Fritz, Thomas
2016-04-01
Volcanic processes, which produce a variety of geological and hydrological hazards, are difficult to predict and capable of triggering natural disasters on regional to global scales. It is therefore important to monitor volcanoes continuously and with a high spatial and temporal sampling rate. The monitoring of active volcanoes requires reliable measurement of surface deformation before, during and after volcanic activity, and it supports better understanding and modelling of the involved geophysical processes. Space-borne synthetic aperture radar (SAR) interferometry (InSAR), persistent scatterer interferometry (PSI) and the small baseline subset algorithm (SBAS) provide powerful tools for observing eruptive activity and measuring surface changes with millimetre accuracy. All of these techniques, together with deformation time series extraction, address the challenges by exploiting medium to large SAR image stacks. The process of selecting, ordering, downloading, storing, logging, extracting and preparing the data for processing is very time consuming and has to be done manually for every single data stack. In many cases it is even an iterative process which has to be done regularly and continuously. Data processing therefore becomes slow, which causes significant delays in data delivery. The SAR Satellite based High Resolution Data Acquisition System, which will be developed at DLR, will automate these time-consuming tasks and enable an operational volcano monitoring system. Every 24 hours the system searches for newly acquired scenes over the volcanoes, keeps track of the data orders, logs the status, and downloads the provided data via FTP transfer, including e-mail alerts. Furthermore, the system will deliver specified reports and maps to a database for review and use by specialists. User interaction will be minimized and iterative manual processes will be avoided entirely. In this presentation, a prototype of the SAR Satellite based High Resolution Data Acquisition System, which is developed and operated by DLR, is described in detail. The workflow of the developed system is described, which allows a meaningful contribution of SAR to monitoring volcanic eruptive activity. A more robust and efficient InSAR data processing chain in the IWAP processor is introduced in the framework of a remote sensing task of the MED-SUV project. An application of the developed prototype system to historic eruptions of Mount Etna and Piton de la Fournaise is presented in the last part of the presentation.
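The acquisition workflow described above (periodic search, order tracking, FTP download, e-mail alert) can be sketched as a simple polling loop. Everything below is a hypothetical illustration of the idea only: the host names, directory layout, and login details are placeholders and bear no relation to the DLR system.

    import ftplib
    import smtplib
    import time
    from email.message import EmailMessage

    def fetch_new_scenes(host, remote_dir, known_files, local_dir="."):
        """Download files from an FTP directory that have not been seen before (hypothetical layout)."""
        new_files = []
        with ftplib.FTP(host) as ftp:
            ftp.login()                                   # anonymous login; a real system uses credentials
            ftp.cwd(remote_dir)
            for name in ftp.nlst():
                if name in known_files:
                    continue
                with open(f"{local_dir}/{name}", "wb") as fh:
                    ftp.retrbinary(f"RETR {name}", fh.write)
                new_files.append(name)
        return new_files

    def send_alert(new_files, sender, recipient, smtp_host):
        msg = EmailMessage()
        msg["Subject"] = f"{len(new_files)} new SAR scene(s) downloaded"
        msg["From"], msg["To"] = sender, recipient
        msg.set_content("\n".join(new_files))
        with smtplib.SMTP(smtp_host) as smtp:
            smtp.send_message(msg)

    known = set()
    while True:                                            # daemon-style loop: check once per day
        scenes = fetch_new_scenes("ftp.example.org", "/volcano/etna", known)
        if scenes:
            known.update(scenes)
            send_alert(scenes, "monitor@example.org", "analyst@example.org", "localhost")
        time.sleep(24 * 3600)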
Design of on-line system for measuring and tracking time of assembly
NASA Astrophysics Data System (ADS)
Senderská, Katarína; Mareš, Albert; Evin, Emil
2016-04-01
Manual assembly performed at assembly workstations still has a unique place in many kinds of production. To increase the productivity and quality of manual assembly it is necessary to analyse existing workplaces and find ways to improve and streamline the work done there. The article deals with the design of a model for on-line analysis of a manual assembly process. The proposed model is based on the use of sensors, or a so-called button-box, and on software for recording and evaluating the data. Based on the obtained data it is then possible to evaluate the time characteristics of the assembly process, as well as to find sources of delays and mistakes and take appropriate action to correct them.
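The data model described above reduces to logging a timestamp for each button-box event and deriving cycle times from consecutive events. A hedged sketch of that bookkeeping follows; the button names and the in-memory log are illustrative assumptions.

    import time

    class AssemblyLogger:
        """Record button-box events and derive per-operation times for a manual assembly station."""
        def __init__(self):
            self.events = []                       # list of (timestamp, button) tuples

        def press(self, button):
            self.events.append((time.monotonic(), button))

        def cycle_times(self):
            """Time between consecutive 'done' presses, i.e. per assembled unit."""
            done = [t for t, b in self.events if b == "done"]
            return [t2 - t1 for t1, t2 in zip(done, done[1:])]

    log = AssemblyLogger()
    log.press("start"); time.sleep(0.1); log.press("done")
    time.sleep(0.15); log.press("done")
    print([round(c, 2) for c in log.cycle_times()])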
Implementation of NFC technology for industrial applications: case flexible production
NASA Astrophysics Data System (ADS)
Sallinen, Mikko; Strömmer, Esko; Ylisaukko-oja, Arto
2007-09-01
Near field communication (NFC) technology enables flexible short-range communication. It has a large number of envisaged applications in the consumer, welfare, and industrial sectors. Compared with other short-range communication technologies such as Bluetooth or Wibree, it provides advantages that we introduce in this paper. We present an example of applying NFC technology to an industrial application in which simple tasks can be automated and the industrial assembly process can be improved radically by replacing manual paperwork and increasing the traceability of products during production.
NASA Technical Reports Server (NTRS)
Call, Jared A.; Kwok, John H.; Fisher, Forest W.
2013-01-01
This innovation is a tool used to verify and validate spacecraft sequences at the predicted events file (PEF) level for the GRAIL (Gravity Recovery and Interior Laboratory, see http://www.nasa.gov/mission_pages/grail/main/index.html) mission as part of the Multi-Mission Planning and Sequencing Team (MPST) operations process, to reduce the possibility of errors. The tool is used to catch any sequence-related errors or issues immediately after seqgen modeling to streamline downstream processes. The script verifies and validates seqgen modeling for the GRAIL MPST process. A PEF is provided as input, and dozens of checks are performed on it to verify and validate the command products, including command content, command ordering, flight-rule violations, modeling boundary consistency, resource limits, and ground commanding consistency. By performing as many checks as early in the process as possible, grl_pef_check streamlines the MPST task of generating GRAIL command and modeled products on an aggressive schedule. By enumerating each check being performed, and clearly stating the criteria and assumptions made at each step, grl_pef_check can be used as a manual checklist as well as an automated tool. This helper script was written with a focus on giving users the information they need to evaluate a sequence quickly and efficiently, while keeping them informed and active in the overall sequencing process. grl_pef_check verifies and validates the modeling and sequence content before any more effort is invested in the build. There are dozens of items in the modeling run that need to be checked, which is a time-consuming and error-prone task, and no other software exists that provides this functionality. Compared with a manual process, this script reduces human error and saves considerable man-hours by automating and streamlining the mission planning and sequencing task for the GRAIL mission.
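As a hedged illustration of the kind of checks grl_pef_check performs, the sketch below validates command ordering and one invented flight rule (a minimum separation between two commands) on a hypothetical, simplified list of (time, command) records; the real PEF format and GRAIL rule set are not described in this abstract and are not reproduced here.

    from datetime import datetime, timedelta

    # Hypothetical, simplified "predicted events" records: (UTC time string, command name)
    PEF = [
        ("2012-03-01T00:00:00", "PWR_ON_INSTRUMENT"),
        ("2012-03-01T00:05:00", "START_RECORDING"),
        ("2012-03-01T00:06:00", "SLEW_TO_TARGET"),
    ]

    # Hypothetical flight rule: these two commands must be separated by at least 10 minutes.
    MIN_SEPARATION = {("PWR_ON_INSTRUMENT", "SLEW_TO_TARGET"): timedelta(minutes=10)}

    def check_pef(records):
        issues = []
        parsed = [(datetime.fromisoformat(t), cmd) for t, cmd in records]
        # Check 1: events must be in non-decreasing time order.
        for (t1, c1), (t2, c2) in zip(parsed, parsed[1:]):
            if t2 < t1:
                issues.append(f"Ordering violation: {c2} at {t2} precedes {c1} at {t1}")
        # Check 2: minimum separation between specific command pairs.
        for (a, b), gap in MIN_SEPARATION.items():
            times_a = [t for t, c in parsed if c == a]
            times_b = [t for t, c in parsed if c == b]
            for ta in times_a:
                for tb in times_b:
                    if ta <= tb < ta + gap:
                        issues.append(f"Flight-rule violation: {b} only {tb - ta} after {a}")
        return issues

    for issue in check_pef(PEF):
        print(issue)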
Peng, Sean X; Cousineau, Martin; Juzwin, Stephen J; Ritchie, David M
2006-01-01
A novel 96-well screen filter plate (patent pending) has been invented to eliminate a time-consuming and labor-intensive step in preparation of in vivo study samples--to remove blood or plasma clots. These clots plug the pipet tips during a manual or automated sample-transfer step causing inaccurate pipetting or total pipetting failure. Traditionally, these blood and plasma clots are removed by picking them out manually one by one from each sample tube before any sample transfer can be made. This has significantly slowed the sample preparation process and has become a bottleneck for automated high-throughput sample preparation using robotic liquid handlers. Our novel screen filter plate was developed to solve this problem. The 96-well screen filter plate consists of 96 stainless steel wire-mesh screen tubes connected to the 96 openings of a top plate so that the screen filter plate can be readily inserted into a 96-well sample storage plate. Upon insertion, the blood and plasma clots are excluded from entering the screen tube while clear sample solutions flow freely into it. In this way, sample transfer can be easily completed by either manual or automated pipetting methods. In this report, three structurally diverse compounds were selected to evaluate and validate the use of the screen filter plate. The plasma samples of these compounds were transferred and processed in the presence and absence of the screen filter plate and then analyzed by LC-MS/MS methods. Our results showed a good agreement between the samples prepared with and without the screen filter plate, demonstrating the utility and efficiency of this novel device for preparation of blood and plasma samples. The device is simple, easy to use, and reusable. It can be employed for sample preparation of other biological fluids that contain floating particulates or aggregates.
Sustaining Employment: Social Skills at Work.
ERIC Educational Resources Information Center
Jonikas, Jessica A.; And Others
This manual is intended for use by persons with psychiatric disabilities who are employed in the community but need to improve their social skills to maintain their employment. It is designed to be taught to mental health consumers by mental health consumers. Each session outline includes objectives; a list of materials needed; and exercises that…
Hebert, Courtney; Flaherty, Jennifer; Smyer, Justin; Ding, Jing; Mangino, Julie E
2018-03-01
Surveillance is an important tool for infection control; however, this task can be time-consuming and take away from infection prevention activities. With the increasing availability of comprehensive electronic health records, there is an opportunity to automate these surveillance activities. The objective of this article is to describe the implementation of an electronic algorithm for ventilator-associated events (VAEs) at a large academic medical center. This article reports on a 6-month manual validation of a dashboard for VAEs. We developed a computerized algorithm for automatically detecting VAEs and compared the output of this algorithm with the traditional, manual method of VAE surveillance. Manual surveillance by the infection preventionists identified 13 possible and 11 probable ventilator-associated pneumonias (VAPs), and the VAE dashboard identified 16 possible and 13 probable VAPs. The dashboard had 100% sensitivity and 100% accuracy when compared with manual surveillance for possible and probable VAP. We report on the successfully implemented VAE dashboard. The workflow of the infection preventionists was simplified after implementation of the dashboard, with subjective time savings reported. Implementing a computerized dashboard for VAE surveillance at a medical center with a comprehensive electronic health record is feasible; however, it required significant initial and ongoing work by data analysts and infection preventionists. Copyright © 2018 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
Svetnik, Vladimir; Ma, Junshui; Soper, Keith A.; Doran, Scott; Renger, John J.; Deacon, Steve; Koblan, Ken S.
2007-01-01
Objective: To evaluate the performance of 2 automated systems, Morpheus and Somnolyzer24X7, with various levels of human review/editing, in scoring polysomnographic (PSG) recordings from a clinical trial using zolpidem in a model of transient insomnia. Methods: 164 all-night PSG recordings from 82 subjects, collected during 2 nights of sleep, one under placebo and one under zolpidem (10 mg) treatment, were used. For each recording, 6 different methods were used to provide sleep stage scores based on Rechtschaffen & Kales criteria: 1) full manual scoring, 2) automated scoring by Morpheus, 3) automated scoring by Somnolyzer24X7, 4) automated scoring by Morpheus with full manual review, 5) automated scoring by Morpheus with partial manual review, 6) automated scoring by Somnolyzer24X7 with partial manual review. Ten traditional clinical efficacy measures of sleep initiation, maintenance, and architecture were calculated. Results: Pair-wise epoch-by-epoch agreements between fully automated and manual scores were in the range of intersite manual scoring agreements reported in the literature (70%-72%). Pair-wise epoch-by-epoch agreements between manually reviewed automated scores were higher (73%-76%). The direction and statistical significance of treatment effect sizes using traditional efficacy endpoints were essentially the same whichever method was used. As the degree of manual review increased, the magnitude of the effect size approached that estimated with fully manual scoring. Conclusion: Automated or semi-automated PSG sleep scoring offers a valuable alternative to manual scoring, which is costly, time consuming, and variable within and between sites, especially in large multicenter clinical trials. Reduction in scoring variability may also reduce the sample size of a clinical trial. Citation: Svetnik V; Ma J; Soper KA; Doran S; Renger JJ; Deacon S; Koblan KS. Evaluation of automated and semi-automated scoring of polysomnographic recordings from a clinical trial using zolpidem in the treatment of insomnia. SLEEP 2007;30(11):1562-1574. PMID:18041489
Megaw, R; Rane-Malcolm, T; Brannan, S; Smith, R; Sanders, R
2011-11-01
To determine current knowledge and opinion on revalidation, and methods of cataract surgery audit in Scotland and to outline the current and future possibilities for electronic cataract surgery audit. In 2010 we conducted a prospective, cross-sectional, Scottish-wide survey on revalidation knowledge and opinion, and cataract audit practice among all senior NHS ophthalmologists. Results were anonymised and recorded manually for analysis. In all, 61% of the ophthalmologists surveyed took part. Only 33% felt ready to take part in revalidation, whereas 76% felt they did not have adequate information about the process. Also, 71% did not feel revalidation would improve patient care, but 85% agreed that cataract surgery audit is essential for ophthalmic practice. In addition, 91% audit their cataract outcomes; 52% do so continuously. Further, 63% audit their subspecialist surgical results. Only 25% audit their cataract surgery practice electronically, and only 12% collect clinical data using a hospital PAS system. Funding and system incompatibility were the main reasons cited for the lack of electronic audit setup. Currently, eight separate hospital IT patient administration systems are used across 14 health boards in Scotland. Revalidation is set to commence in 2012. The Royal College of Ophthalmologists will use cataract outcome audit as a tool to ensure surgical competency for the process. Retrospective manual auditing of cataract outcome is time consuming, and can be avoided with an electronic system. Scottish ophthalmologists view revalidation with scepticism and appear to have inadequate knowledge of the process. However, they strongly agree with the concept of cataract surgery audit. The existing and future electronic applications that may support surgical audit are commercial electronic records, web-based applications, centrally funded software applications, and robust NHS connections between community and hospital.
The Extraction of Terrace in the Loess Plateau Based on radial method
NASA Astrophysics Data System (ADS)
Liu, W.; Li, F.
2016-12-01
Terraces on the Loess Plateau are a typical artificial landform and an important soil and water conservation measure; locating and automatically extracting them would simplify land use investigation. Existing methods of terrace extraction mainly comprise visual interpretation and automatic extraction. The manual method is used in land use investigation, but it is time-consuming and laborious. Researchers have put forward several automatic extraction methods. For example, the Fourier transform method can recognize terraces and find their accurate position from the frequency-domain image, but it is strongly affected by linear objects oriented in the same direction as the terraces. Texture analysis is simple and widely applied in image processing, but it cannot recognize terrace edges. Object-oriented classification is a newer image classification method, but when it is applied to terrace extraction, fragmented polygons are the most serious problem and their geological meaning is difficult to interpret. To locate the terraces, we use high-resolution remote sensing images to extract and analyze the gray values of the pixels that each radial passes through. The recognition process is as follows: first, the approximate positions of peak points are determined from DEM data analysis or by manual selection; second, radials are cast in all directions from each peak point as the center; finally, the gray values of the pixels along each radial are extracted and their variation characteristics are analyzed to decide whether a terrace is present.
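The core operation of the radial method, sampling gray values along a ray cast from a candidate peak point and testing the profile for terrace-like oscillation, can be sketched as follows; the ray length, angular convention, and the crude oscillation test are illustrative assumptions.

    import numpy as np

    def sample_radial(image, center, angle_deg, length):
        """Nearest-neighbor gray values along a ray from `center` at the given angle."""
        cy, cx = center
        t = np.arange(length, dtype=float)
        ys = np.clip(np.round(cy + t * np.sin(np.radians(angle_deg))).astype(int), 0, image.shape[0] - 1)
        xs = np.clip(np.round(cx + t * np.cos(np.radians(angle_deg))).astype(int), 0, image.shape[1] - 1)
        return image[ys, xs]

    def looks_terraced(profile, min_oscillations=4):
        """Crude presence test: count sign changes of the profile gradient."""
        grad = np.diff(profile.astype(float))
        sign_changes = np.count_nonzero(np.diff(np.sign(grad)) != 0)
        return sign_changes >= min_oscillations

    # Striped synthetic patch standing in for a terraced slope around a peak point
    img = np.tile(np.sin(np.linspace(0, 8 * np.pi, 200)), (200, 1)) * 50 + 128
    profile = sample_radial(img, center=(100, 100), angle_deg=0, length=80)
    print(looks_terraced(profile))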
Hazes, Bart
2014-02-28
Protein-coding DNA sequences and their corresponding amino acid sequences are routinely used to study relationships between sequence, structure, function, and evolution. The rapidly growing size of sequence databases increases the power of such comparative analyses but it makes it more challenging to prepare high quality sequence data sets with control over redundancy, quality, completeness, formatting, and labeling. Software tools for some individual steps in this process exist but manual intervention remains a common and time consuming necessity. CDSbank is a database that stores both the protein-coding DNA sequence (CDS) and amino acid sequence for each protein annotated in Genbank. CDSbank also stores Genbank feature annotation, a flag to indicate incomplete 5' and 3' ends, full taxonomic data, and a heuristic to rank the scientific interest of each species. This rich information allows fully automated data set preparation with a level of sophistication that aims to meet or exceed manual processing. Defaults ensure ease of use for typical scenarios while allowing great flexibility when needed. Access is via a free web server at http://hazeslab.med.ualberta.ca/CDSbank/. CDSbank presents a user-friendly web server to download, filter, format, and name large sequence data sets. Common usage scenarios can be accessed via pre-programmed default choices, while optional sections give full control over the processing pipeline. Particular strengths are: extract protein-coding DNA sequences just as easily as amino acid sequences, full access to taxonomy for labeling and filtering, awareness of incomplete sequences, and the ability to take one protein sequence and extract all synonymous CDS or identical protein sequences in other species. Finally, CDSbank can also create labeled property files to, for instance, annotate or re-label phylogenetic trees.
Developing an orientation program.
Edwards, K
1999-01-01
When the local area experienced tremendous growth and change, the radiology department at Maury Hospital in Columbia, Tennessee looked seriously at its orientation process in preparation for hiring additional personnel. It was an appropriate time for the department to review its orientation process and to develop a manual to serve as both a tool for supervisors and an ongoing reference for new employees. To gather information for the manual, supervisors were asked to identify information they considered vital for new employees to know concerning the daily operations of the department, its policies and procedures, the organizational structure of the hospital, and hospital and departmental computer systems. That information became the basis of the orientation manual, and provided an introduction to the hospital and radiology department; the structure of the organization; an overview of the radiology department; personnel information; operating procedures and computer systems; and various policies and procedures. With the manual complete, the radiology department concentrated on an orientation process that would meet the needs of supervisors who said they had trouble remembering the many details necessary to teach new employees. A pre-orientation checklist was developed, which contained the many details supervisors must handle between the time an employee is hired and arrives for work. The next step was the creation of a checklist for use by the supervisor during a new employee's first week on the job. A final step in the hospital's orientation program is to have each new employee evaluate the entire orientation process. That information is then used to update and revise the manual.
Automated Text Markup for Information Retrieval from an Electronic Textbook of Infectious Disease
Berrios, Daniel C.; Kehler, Andrew; Kim, David K.; Yu, Victor L.; Fagan, Lawrence M.
1998-01-01
The information needs of practicing clinicians frequently require textbook or journal searches. Making these sources available in electronic form improves the speed of these searches, but precision (i.e., the fraction of relevant to total documents retrieved) remains low. Improving the traditional keyword search by transforming search terms into canonical concepts does not improve search precision greatly. Kim et al. have designed and built a prototype system (MYCIN II) for computer-based information retrieval from a forthcoming electronic textbook of infectious disease. The system requires manual indexing by experts in the form of complex text markup. However, this mark-up process is time consuming (about 3 person-hours to generate, review, and transcribe the index for each of 218 chapters). We have designed and implemented a system to semiautomate the markup process. The system, information extraction for semiautomated indexing of documents (ISAID), uses query models and existing information-extraction tools to provide support for any user, including the author of the source material, to mark up tertiary information sources quickly and accurately.
Automating expert role to determine design concept in Kansei Engineering
NASA Astrophysics Data System (ADS)
Lokman, Anitawati Mohd; Haron, Mohammad Bakri Che; Abidin, Siti Zaleha Zainal; Khalid, Noor Elaiza Abd
2016-02-01
Affect has become imperative in product quality. In the field of affective design, Kansei Engineering (KE) has been recognized as a technology that enables discovery of consumers' emotions and the formulation of guidelines for designing products that win consumers in a competitive market. Although it is a powerful technology, there is no rule of thumb for its analysis and interpretation process: KE expertise is required to determine sets of related Kansei and the significant concept of emotion. Many research endeavors are hampered by the limited number of available and accessible KE experts. This work simulates the role of experts with the Natphoric algorithm, providing a sound solution to the complexity and flexibility of KE. The algorithm is designed to learn the process using training datasets taken from previous KE research works. A framework for automated KE is then designed to enable the development of an automated KE system. A comparative analysis was performed to determine the feasibility of the developed prototype for automating the process. The results show that the significant Kansei determined by manual KE implementation and by the automated process are highly similar. KE research advocates will benefit from this system for automatically determining significant design concepts.
Hautvast, Gilion L T F; Salton, Carol J; Chuang, Michael L; Breeuwer, Marcel; O'Donnell, Christopher J; Manning, Warren J
2012-05-01
Quantitative analysis of short-axis functional cardiac magnetic resonance images can be performed using automatic contour detection methods. The resulting myocardial contours must be reviewed and possibly corrected, which can be time-consuming, particularly when performed across all cardiac phases. We quantified the impact of manual contour corrections on both analysis time and quantitative measurements obtained from left ventricular short-axis cine images acquired from 1555 participants of the Framingham Heart Study Offspring cohort using computer-aided contour detection methods. The total analysis time for a single case was 7.6 ± 1.7 min for an average of 221 ± 36 myocardial contours per participant. This included 4.8 ± 1.6 min for manual contour correction of 2% of all automatically detected endocardial contours and 8% of all automatically detected epicardial contours. However, the impact of these corrections on global left ventricular parameters was limited, introducing differences of 0.4 ± 4.1 mL for end-diastolic volume, -0.3 ± 2.9 mL for end-systolic volume, 0.7 ± 3.1 mL for stroke volume, and 0.3 ± 1.8% for ejection fraction. We conclude that left ventricular functional parameters can be obtained under 5 min from short-axis functional cardiac magnetic resonance images using automatic contour detection methods. Manual correction more than doubles analysis time, with minimal impact on left ventricular volumes and ejection fraction. Copyright © 2011 Wiley Periodicals, Inc.
Automated analysis of cell migration and nuclear envelope rupture in confined environments.
Elacqua, Joshua J; McGregor, Alexandra L; Lammerding, Jan
2018-01-01
Recent in vitro and in vivo studies have highlighted the importance of the cell nucleus in governing migration through confined environments. Microfluidic devices that mimic the narrow interstitial spaces of tissues have emerged as important tools to study cellular dynamics during confined migration, including the consequences of nuclear deformation and nuclear envelope rupture. However, while image acquisition can be automated on motorized microscopes, the analysis of the corresponding time-lapse sequences for nuclear transit through the pores and events such as nuclear envelope rupture currently requires manual analysis. In addition to being highly time-consuming, such manual analysis is susceptible to person-to-person variability. Studies that compare large numbers of cell types and conditions therefore require automated image analysis to achieve sufficiently high throughput. Here, we present an automated image analysis program to register microfluidic constrictions and perform image segmentation to detect individual cell nuclei. The MATLAB program tracks nuclear migration over time and records constriction-transit events, transit times, transit success rates, and nuclear envelope rupture. Such automation reduces the time required to analyze migration experiments from weeks to hours, and removes the variability that arises from different human analysts. Comparison with manual analysis confirmed that both constriction transit and nuclear envelope rupture were detected correctly and reliably, and the automated analysis results closely matched a manual analysis gold standard. Applying the program to specific biological examples, we demonstrate its ability to detect differences in nuclear transit time between cells with different levels of the nuclear envelope proteins lamin A/C, which govern nuclear deformability, and to detect an increase in nuclear envelope rupture duration in cells in which CHMP7, a protein involved in nuclear envelope repair, had been depleted. The program thus presents a versatile tool for the study of confined migration and its effect on the cell nucleus.
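The published MATLAB program is not reproduced here, but the following Python sketch illustrates the general idea of per-frame nucleus detection followed by greedy nearest-centroid linking and constriction-transit detection. The fluorescence threshold, maximum linking step, and the assumption that the constriction lies at a known x-coordinate are illustrative simplifications.

```python
import numpy as np
from scipy import ndimage as ndi

def detect_nuclei(frame, threshold):
    """Label nuclei in one fluorescence frame and return their centroids as (x, y)."""
    mask = ndi.binary_opening(frame > threshold, structure=np.ones((3, 3)))
    labels, n = ndi.label(mask)
    if n == 0:
        return np.empty((0, 2))
    # center_of_mass returns (row, col); flip to (x, y)
    return np.array(ndi.center_of_mass(mask, labels, range(1, n + 1)))[:, ::-1]

def track_transits(frames, threshold, constriction_x, max_step=15.0):
    """Greedy nearest-centroid tracking; report tracks that cross the constriction line."""
    tracks = []                                   # each track: list of (frame_index, x, y)
    for t, frame in enumerate(frames):
        for x, y in detect_nuclei(frame, threshold):
            # link to the closest track last seen in the previous frame (first come, first served)
            best, best_d = None, max_step
            for tr in tracks:
                ft, px, py = tr[-1]
                d = np.hypot(x - px, y - py)
                if t - ft == 1 and d < best_d:
                    best, best_d = tr, d
            if best is not None:
                best.append((t, x, y))
            else:
                tracks.append([(t, x, y)])
    transits = []
    for tr in tracks:
        xs = [p[1] for p in tr]
        if min(xs) < constriction_x < max(xs):    # nucleus passed the constriction line
            transits.append((tr[0][0], tr[-1][0]))  # (entry frame, exit frame)
    return transits
```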
Processing electronic photos of Mercury produced by ground based observation
NASA Astrophysics Data System (ADS)
Ksanfomality, Leonid
New images of Mercury have been obtained by processing ground based observations that were carried out using the short exposure technique. The disk of the planet usually extends from 6 to 7 arc seconds, with a linear image size in the focal plane of the telescope of about 0.3-0.5 mm on average. Processing the initial millisecond electronic photos of the planet is very labour-intensive. Some features of the processing of initial millisecond electronic photos by methods of correlation stacking were considered in (Ksanfomality et al., 2005; Ksanfomality and Sprague, 2007). The method uses manual selection of good photos, including a so-called pilot-file, the search for which usually must be done manually. The pilot-file is the frame judged most successful by the operator, and it determines the final result of the stacking. Changing pilot-files multiplies the labor of processing many times over. The processing programs analyze the contents of a sample frame, find any details in it, and search for the recurrence of these almost imperceptible details in thousands of other electronic pictures being stacked. While the form and position of a pilot-file can still be estimated from experience, judging whether barely distinct details in it are real lies somewhere between imaging and imagination. In 2006-07 some programs for automatic processing were created. Unfortunately, the efficiency of all automatic programs is not as good as that of manual selection. Together with the selection, some other known methods are used. The point spread function (PSF) is described by a known mathematical function that decreases smoothly from its center. Usually the width of this function is taken at a level of 0.7 or 0.5 of the maximum. If many thousands of initial electronic pictures are acquired, it is possible during their processing to take advantage of the known statistics of random variables and to choose the width of the function at a level of, say, 0.9 of the maximum. Then the resolution of the image improves appreciably. An essential element of the processing is the mathematical unsharp mask. But this is a double-edged instrument. The result depends on the choice of the size of the mask. If the size is too small, all low spatial frequencies are lost and the image becomes uniformly grey; conversely, if the size of the unsharp mask is too large, all fine details disappear. In some cases the compromise in the selection of the unsharp-mask parameters becomes critical.
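The following sketch is not the authors' pipeline, but it illustrates the generic steps discussed above: rank frames by a sharpness metric, align the best frames to a reference (pilot) frame by phase correlation, average them, and apply an unsharp mask whose width controls the trade-off between coarse structure and fine detail. The selection fraction, mask sigma, and sharpening amount are placeholder parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift as nd_shift

def sharpness(frame):
    """Simple gradient-energy sharpness score used to rank frames."""
    gy, gx = np.gradient(frame.astype(float))
    return float(np.mean(gx ** 2 + gy ** 2))

def phase_shift(reference, frame):
    """Integer-pixel shift that aligns `frame` onto `reference`, by phase correlation."""
    cross = np.fft.ifft2(np.fft.fft2(reference) * np.conj(np.fft.fft2(frame)))
    peak = np.unravel_index(np.argmax(np.abs(cross)), cross.shape)
    # wrap shifts into the range [-N/2, N/2)
    return [p - s if p > s // 2 else p for p, s in zip(peak, cross.shape)]

def stack_and_sharpen(frames, keep_fraction=0.1, mask_sigma=3.0, amount=1.5):
    frames = sorted(frames, key=sharpness, reverse=True)
    best = frames[: max(1, int(len(frames) * keep_fraction))]
    ref = best[0].astype(float)                      # plays the role of the pilot frame
    stacked = np.mean([nd_shift(f.astype(float), phase_shift(ref, f)) for f in best], axis=0)
    blurred = gaussian_filter(stacked, mask_sigma)   # the "unsharp mask"; sigma sets the mask size
    return stacked + amount * (stacked - blurred)    # too small a sigma flattens coarse structure
```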
Detection of Glaucoma Using Image Processing Techniques: A Critique.
Kumar, B Naveen; Chauhan, R P; Dahiya, Nidhi
2018-01-01
The primary objective of this article is to present a summary of different types of image processing methods employed for the detection of glaucoma, a serious eye disease. Glaucoma affects the optic nerve in which retinal ganglion cells become dead, and this leads to loss of vision. The principal cause is the increase in intraocular pressure, which occurs in open-angle and angle-closure glaucoma, the two major types affecting the optic nerve. In the early stages of glaucoma, no perceptible symptoms appear. As the disease progresses, vision starts to become hazy, leading to blindness. Therefore, early detection of glaucoma is needed for prevention. Manual analysis of ophthalmic images is fairly time-consuming and accuracy depends on the expertise of the professionals. Automatic analysis of retinal images is an important tool. Automation aids in the detection, diagnosis, and prevention of risks associated with the disease. Fundus images obtained from a fundus camera have been used for the analysis. Requisite pre-processing techniques have been applied to the image and, depending upon the technique, various classifiers have been used to detect glaucoma. The techniques mentioned in the present review have certain advantages and disadvantages. Based on this study, one can determine which technique provides an optimum result.
Automated Derivation of Complex System Constraints from User Requirements
NASA Technical Reports Server (NTRS)
Foshee, Mark; Murey, Kim; Marsh, Angela
2010-01-01
The Payload Operations Integration Center (POIC) located at the Marshall Space Flight Center has the responsibility of integrating US payload science requirements for the International Space Station (ISS). All payload operations must request ISS system resources so that the resource usage will be included in the ISS on-board execution timelines. The scheduling of resources and building of the timeline is performed using the Consolidated Planning System (CPS). The ISS resources are quite complex due to the large number of components that must be accounted for. The planners at the POIC simplify the process for Payload Developers (PDs) by providing the PDs with an application that has the basic functionality PDs need, as well as a list of simplified resources, in the User Requirements Collection (URC) application. The planners maintain a mapping of the URC resources to the CPS resources. Manually converting PDs' science requirements from the simplified representation to the more complex CPS representation is a time-consuming and tedious process. The goal is to provide a software solution that allows the planners to build a mapping of the complex CPS constraints to the basic URC constraints and automatically convert PDs' requirements into system requirements during export to CPS.
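A table-driven conversion of the kind described can be sketched as follows; the resource names, units, and mapping below are purely hypothetical stand-ins for the planner-maintained URC-to-CPS mapping.

```python
# Hypothetical mapping maintained by the planners: one simplified URC resource
# expands to one or more detailed CPS constraint specifications.
URC_TO_CPS = {
    "crew_time": [("USOC_CREW", "minutes")],
    "power_28v": [("EPS_BUS_A", "watts"), ("EPS_BUS_B", "watts")],
    "downlink":  [("KU_BAND_RATE", "kbps")],
}

def export_to_cps(urc_requirements):
    """Expand simplified PD requirements into CPS-style constraints.

    urc_requirements: list of (urc_resource, amount) tuples entered by a PD.
    Returns a list of (cps_resource, amount, unit) tuples for the timeline system.
    """
    cps_constraints = []
    for resource, amount in urc_requirements:
        try:
            expansions = URC_TO_CPS[resource]
        except KeyError:
            raise ValueError(f"No CPS mapping defined for URC resource '{resource}'")
        for cps_resource, unit in expansions:
            cps_constraints.append((cps_resource, amount, unit))
    return cps_constraints

print(export_to_cps([("crew_time", 90), ("power_28v", 150)]))
```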
Communication and knowledge sharing in human-robot interaction and learning from demonstration.
Koenig, Nathan; Takayama, Leila; Matarić, Maja
2010-01-01
Inexpensive personal robots will soon become available to a large portion of the population. Currently, most consumer robots are relatively simple single-purpose machines or toys. In order to be cost effective and thus widely accepted, robots will need to be able to accomplish a wide range of tasks in diverse conditions. Learning these tasks from demonstrations offers a convenient mechanism to customize and train a robot by transferring task related knowledge from a user to a robot. This avoids the time-consuming and complex process of manual programming. The way in which the user interacts with a robot during a demonstration plays a vital role in terms of how effectively and accurately the user is able to provide a demonstration. Teaching through demonstrations is a social activity, one that requires bidirectional communication between a teacher and a student. The work described in this paper studies how the user's visual observation of the robot and the robot's auditory cues affect the user's ability to teach the robot in a social setting. Results show that auditory cues provide important knowledge about the robot's internal state, while visual observation of a robot can hinder an instructor due to incorrect mental models of the robot and distractions from the robot's movements. Copyright © 2010. Published by Elsevier Ltd.
2011-01-01
Background Global positioning systems (GPS) are increasingly being used in health research to determine the location of study participants. Combining GPS data with data collected via travel/activity diaries allows researchers to assess where people travel in conjunction with data about trip purpose and accompaniment. However, linking GPS and diary data is problematic and to date the only method has been to match the two datasets manually, which is time consuming and unlikely to be practical for larger data sets. This paper assesses the feasibility of a new sequence alignment method of linking GPS and travel diary data in comparison with the manual matching method. Methods GPS and travel diary data obtained from a study of children's independent mobility were linked using sequence alignment algorithms to test the proof of concept. Travel diaries were assessed for quality by counting the number of errors and inconsistencies in each participant's set of diaries. The success of the sequence alignment method was compared for higher versus lower quality travel diaries, and for accompanied versus unaccompanied trips. Time taken and percentage of trips matched were compared for the sequence alignment method and the manual method. Results The sequence alignment method matched 61.9% of all trips. Higher quality travel diaries were associated with higher match rates in both the sequence alignment and manual matching methods. The sequence alignment method performed almost as well as the manual method and was an order of magnitude faster. However, the sequence alignment method was less successful at fully matching trips and at matching unaccompanied trips. Conclusions Sequence alignment is a promising method of linking GPS and travel diary data in large population datasets, especially if limitations in the trip detection algorithm are addressed. PMID:22142322
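The paper's algorithm is not reproduced here, but the following minimal Needleman-Wunsch-style sketch shows how a GPS-detected trip sequence can be globally aligned against a diary trip sequence when each trip is reduced to a start time; the match tolerance and gap penalty are illustrative choices.

```python
def align_trips(gps_starts, diary_starts, tol=15, gap=-1.0):
    """Global (Needleman-Wunsch style) alignment of two trip sequences.

    Trips are represented by start times in minutes since midnight; two trips
    "match" when their start times differ by at most `tol` minutes.
    Returns a list of (gps_index, diary_index) pairs; None marks an unmatched trip.
    """
    def score(a, b):
        return 1.0 if abs(a - b) <= tol else -1.0

    n, m = len(gps_starts), len(diary_starts)
    F = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = i * gap
    for j in range(1, m + 1):
        F[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            F[i][j] = max(F[i - 1][j - 1] + score(gps_starts[i - 1], diary_starts[j - 1]),
                          F[i - 1][j] + gap,
                          F[i][j - 1] + gap)
    # traceback of the optimal alignment
    pairs, i, j = [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and F[i][j] == F[i - 1][j - 1] + score(gps_starts[i - 1], diary_starts[j - 1]):
            pairs.append((i - 1, j - 1)); i, j = i - 1, j - 1
        elif i > 0 and F[i][j] == F[i - 1][j] + gap:
            pairs.append((i - 1, None)); i -= 1
        else:
            pairs.append((None, j - 1)); j -= 1
    return pairs[::-1]

print(align_trips([485, 530, 900], [480, 905, 1020]))
```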
Cannesson, Maxime; Tanabe, Masaki; Suffoletto, Matthew S; McNamara, Dennis M; Madan, Shobhit; Lacomis, Joan M; Gorcsan, John
2007-01-16
We sought to test the hypothesis that a novel 2-dimensional echocardiographic image analysis system using artificial intelligence-learned pattern recognition can rapidly and reproducibly calculate ejection fraction (EF). Echocardiographic EF by manual tracing is time consuming, and visual assessment is inherently subjective. We studied 218 patients (72 female), including 165 with abnormal left ventricular (LV) function. Auto EF incorporated a database trained on >10,000 human EF tracings to automatically locate and track the LV endocardium from routine grayscale digital cineloops and calculate EF in 15 s. Auto EF results were independently compared with manually traced biplane Simpson's rule, visual EF, and magnetic resonance imaging (MRI) in a subset. Auto EF was possible in 200 (92%) of consecutive patients, of which 77% were completely automated and 23% required manual editing. Auto EF correlated well with manual EF (r = 0.98; 6% limits of agreement) and required less time per patient (48 +/- 26 s vs. 102 +/- 21 s; p < 0.01). Auto EF correlated well with visual EF by expert readers (r = 0.96; p < 0.001), but interobserver variability was greater (3.4 +/- 2.9% vs. 9.8 +/- 5.7%, respectively; p < 0.001). Visual EF was less accurate by novice readers (r = 0.82; 19% limits of agreement) and improved with trainee-operated Auto EF (r = 0.96; 7% limits of agreement). Auto EF also correlated with MRI EF (n = 21) (r = 0.95; 12% limits of agreement), but underestimated absolute volumes (r = 0.95; bias of -36 +/- 27 ml overall). Auto EF can automatically calculate EF similarly to results by manual biplane Simpson's rule and MRI, with less variability than visual EF, and has clinical potential.
Dozza, Marco; González, Nieves Pañeda
2013-11-01
New trends in research on traffic accidents include Naturalistic Driving Studies (NDS). NDS are based on large-scale collection of driver, vehicle, and environment data in the real world. NDS data sets have proven to be extremely valuable for the analysis of safety-critical events such as crashes and near crashes. However, finding safety-critical events in NDS data is often difficult and time consuming. Safety-critical events are currently identified using kinematic triggers, for instance searching for decelerations below a certain threshold signifying harsh braking. Due to the low sensitivity and specificity of this filtering procedure, manual review of video data is currently necessary to decide whether the events identified by the triggers are actually safety critical. Such a reviewing procedure is based on subjective decisions, is expensive and time consuming, and is often tedious for the analysts. Furthermore, since NDS data are growing exponentially over time, this reviewing procedure may no longer be viable in the very near future. This study tested the hypothesis that automatic processing of driver video information could increase the correct classification of safety-critical events from kinematic triggers in naturalistic driving data. Review of about 400 video sequences recorded from the events, collected by 100 Volvo cars in the euroFOT project, suggested that the driver's individual reaction may be the key to recognizing safety-critical events. In fact, whether an event is safety critical or not often depends on the individual driver. A few algorithms able to automatically classify driver reaction from video data have been compared. The results presented in this paper show that the state-of-the-art subjective review procedures for identifying safety-critical events from NDS can benefit from automated objective video processing. In addition, this paper discusses the major challenges in making such video analysis viable for future NDS and new potential applications for NDS video processing. As new NDS such as SHRP2 are now providing the equivalent of five years of one-vehicle data each day, the development of new methods, such as the one proposed in this paper, seems necessary to guarantee that these data can actually be analysed. Copyright © 2013 Elsevier Ltd. All rights reserved.
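A kinematic trigger of the type described (harsh braking identified as deceleration beyond a threshold) can be sketched as follows; the sampling rate, threshold, and minimum duration are placeholder values, not those used in euroFOT.

```python
import numpy as np

def harsh_braking_events(accel_ms2, fs_hz=10.0, threshold=-4.0, min_duration_s=0.3):
    """Flag candidate safety-critical events from a longitudinal acceleration trace.

    accel_ms2: 1-D array of longitudinal acceleration (m/s^2, negative = braking),
    sampled at fs_hz. An event is a run of samples below `threshold` lasting at
    least `min_duration_s`. Returns a list of (start_time_s, end_time_s) tuples.
    """
    below = np.concatenate(([False], np.asarray(accel_ms2) < threshold, [False]))
    edges = np.flatnonzero(np.diff(below.astype(int)))
    events = []
    for start, end in zip(edges[::2], edges[1::2]):   # start inclusive, end exclusive
        if (end - start) / fs_hz >= min_duration_s:
            events.append((float(start / fs_hz), float(end / fs_hz)))
    return events

# Toy trace at 10 Hz: 1 s cruising, 0.5 s of hard braking at -6 m/s^2, then cruising.
trace = np.r_[np.zeros(10), np.full(5, -6.0), np.zeros(10)]
print(harsh_braking_events(trace))   # one event from t=1.0 s to t=1.5 s
```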
An automatic vision-based malaria diagnosis system.
Vink, J P; Laubscher, M; Vlutters, R; Silamut, K; Maude, R J; Hasan, M U; DE Haan, G
2013-06-01
Malaria is a worldwide health problem with 225 million infections each year. A fast and easy-to-use method, with high performance is required to differentiate malaria from non-malarial fevers. Manual examination of blood smears is currently the gold standard, but it is time-consuming, labour-intensive, requires skilled microscopists and the sensitivity of the method depends heavily on the skills of the microscopist. We propose an easy-to-use, quantitative cartridge-scanner system for vision-based malaria diagnosis, focusing on low malaria parasite densities. We have used special finger-prick cartridges filled with acridine orange to obtain a thin blood film and a dedicated scanner to image the cartridge. Using supervised learning, we have built a Plasmodium falciparum detector. A two-step approach was used to first segment potentially interesting areas, which are then analysed in more detail. The performance of the detector was validated using 5,420 manually annotated parasite images from malaria parasite culture in medium, as well as using 40 cartridges of 11,780 images containing healthy blood. From finger prick to result, the prototype cartridge-scanner system gave a quantitative diagnosis in 16 min, of which only 1 min required manual interaction of basic operations. It does not require a wet lab or a skilled operator and provides parasite images for manual review and quality control. In healthy samples, the image analysis part of the system achieved an overall specificity of 99.999978% at the level of (infected) red blood cells, resulting in at most seven false positives per microlitre. Furthermore, the system showed a sensitivity of 75% at the cell level, enabling the detection of low parasite densities in a fast and easy-to-use manner. A field trial in Chittagong (Bangladesh) indicated that future work should primarily focus on improving the filling process of the cartridge and the focus control part of the scanner. © 2013 The Authors Journal of Microscopy © 2013 Royal Microscopical Society.
User-guided segmentation for volumetric retinal optical coherence tomography images.
Yin, Xin; Chao, Jennifer R; Wang, Ruikang K
2014-08-01
Despite the existence of automatic segmentation techniques, trained graders still rely on manual segmentation to provide retinal layers and features from clinical optical coherence tomography (OCT) images for accurate measurements. To bridge the gap between this time-consuming need of manual segmentation and currently available automatic segmentation techniques, this paper proposes a user-guided segmentation method to perform the segmentation of retinal layers and features in OCT images. With this method, by interactively navigating three-dimensional (3-D) OCT images, the user first manually defines user-defined (or sketched) lines at regions where the retinal layers appear very irregular for which the automatic segmentation method often fails to provide satisfactory results. The algorithm is then guided by these sketched lines to trace the entire 3-D retinal layer and anatomical features by the use of novel layer and edge detectors that are based on robust likelihood estimation. The layer and edge boundaries are finally obtained to achieve segmentation. Segmentation of retinal layers in mouse and human OCT images demonstrates the reliability and efficiency of the proposed user-guided segmentation method.
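The authors' layer and edge detectors are not reproduced here, but the following sketch conveys the guidance idea on a single B-scan: a minimum-cost path is traced across columns, and an optional user-sketched line pulls the path toward the intended layer in irregular regions. The cost definition, smoothness limit, and sketch weight are illustrative assumptions.

```python
import numpy as np

def trace_boundary(bscan, sketch_rows=None, sketch_weight=0.05, max_jump=2):
    """Trace one layer boundary across a B-scan by dynamic programming.

    bscan: 2-D array (rows = depth, cols = A-scans).
    sketch_rows: optional per-column row positions sketched by the user; where given,
    the path is pulled toward the sketch, guiding the tracer in irregular regions.
    Returns an array of row indices, one per column.
    """
    grad = np.abs(np.diff(bscan.astype(float), axis=0))          # vertical edge strength
    cost = -grad                                                  # favour strong edges
    if sketch_rows is not None:
        rows = np.arange(cost.shape[0])[:, None]
        cost = cost + sketch_weight * np.abs(rows - np.asarray(sketch_rows)[None, :])
    n_rows, n_cols = cost.shape
    acc = cost.copy()
    back = np.zeros((n_rows, n_cols), dtype=int)
    for c in range(1, n_cols):
        for r in range(n_rows):
            lo, hi = max(0, r - max_jump), min(n_rows, r + max_jump + 1)
            prev = acc[lo:hi, c - 1]
            k = int(np.argmin(prev))
            acc[r, c] += prev[k]
            back[r, c] = lo + k
    # backtrack the minimum-cost path
    path = np.empty(n_cols, dtype=int)
    path[-1] = int(np.argmin(acc[:, -1]))
    for c in range(n_cols - 1, 0, -1):
        path[c - 1] = back[path[c], c]
    return path
```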
NASA Astrophysics Data System (ADS)
Zhou, Wanmeng; Wang, Hua; Tang, Guojin; Guo, Shuai
2016-09-01
The time-consuming experimental method for handling qualities assessment cannot meet the increasingly fast design requirements of manned space flight. As a tool for aircraft handling qualities research, the model-predictive-control structured inverse simulation (MPC-IS) has potential applications in the aerospace field to guide astronauts' operations and evaluate handling qualities more effectively. Therefore, this paper establishes MPC-IS for manually controlled rendezvous and docking (RVD) and proposes a novel artificial neural network inverse simulation system (ANN-IS) to further decrease the computational cost. The novel system was obtained by replacing the inverse model of MPC-IS with the artificial neural network. The optimal neural network was trained by the genetic Levenberg-Marquardt algorithm and finally determined by the Levenberg-Marquardt algorithm. In order to validate MPC-IS and ANN-IS, manually controlled RVD experiments were carried out on the simulator. The comparisons between simulation results and experimental data demonstrated the validity of the two systems and the high computational efficiency of ANN-IS.
Technology for planning and scheduling under complex constraints
NASA Astrophysics Data System (ADS)
Alguire, Karen M.; Pedro Gomes, Carla O.
1997-02-01
Within the context of law enforcement, several problems fall into the category of planning and scheduling under constraints. Examples include resource and personnel scheduling, and court scheduling. In the case of court scheduling, a schedule must be generated considering available resources, e.g., court rooms and personnel. Additionally, there are constraints on individual court cases, e.g., temporal and spatial, and between different cases, e.g., precedence. Finally, there are overall objectives that the schedule should satisfy such as timely processing of cases and optimal use of court facilities. Manually generating a schedule that satisfies all of the constraints is a very time consuming task. As the number of court cases and constraints increases, this becomes increasingly harder to handle without the assistance of automatic scheduling techniques. This paper describes artificial intelligence (AI) technology that has been used to develop several high performance scheduling applications including a military transportation scheduler, a military in-theater airlift scheduler, and a nuclear power plant outage scheduler. We discuss possible law enforcement applications where we feel the same technology could provide long-term benefits to law enforcement agencies and their operations personnel.
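As a toy illustration of scheduling under the kinds of constraints mentioned (room availability and precedence between cases), the following sketch uses simple chronological backtracking; it is not the AI technology described in the paper, and the case names, rooms, and slots are hypothetical.

```python
from itertools import product

def schedule(cases, rooms, slots, precedences):
    """Assign each case a (room, slot) such that no room is double-booked and every
    (earlier, later) pair in `precedences` gets earlier slot < later slot.
    Simple chronological backtracking; returns a dict or None if infeasible."""
    assignment = {}

    def consistent(case, room, slot):
        for booked_case, (r, s) in assignment.items():
            if (r, s) == (room, slot):
                return False                      # room already booked at that slot
        for earlier, later in precedences:
            if case == later and earlier in assignment and assignment[earlier][1] >= slot:
                return False
            if case == earlier and later in assignment and slot >= assignment[later][1]:
                return False
        return True

    def backtrack(i):
        if i == len(cases):
            return True
        case = cases[i]
        for room, slot in product(rooms, slots):
            if consistent(case, room, slot):
                assignment[case] = (room, slot)
                if backtrack(i + 1):
                    return True
                del assignment[case]
        return False

    return assignment if backtrack(0) else None

cases = ["arraignment_A", "trial_A", "hearing_B"]
print(schedule(cases, rooms=["court_1"], slots=[1, 2, 3],
               precedences=[("arraignment_A", "trial_A")]))
```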
Ti, Lian Kah; Ang, Sophia Bee Leng; Saw, Sharon; Sethi, Sunil Kumar; Yip, James W L
2012-08-01
Timely reporting and acknowledgement are crucial steps in critical laboratory results (CLR) management. The authors previously showed that an automated pathway incorporating short messaging system (SMS) texts, auto-escalation, and manual telephone back-up improved the rate and speed of physician acknowledgement compared with manual telephone calling alone. This study investigated if it also improved the rate and speed of physician intervention to CLR and whether utilising the manual back-up affected intervention rates. Data from seven audits between November 2007 and January 2011 were analysed. These audits were carried out to assess the robustness of CLR reporting process in the authors' institution. Comparisons were made in the rate and speed of acknowledgement and intervention between the audits performed before and after automation. Using the automation audits, the authors compared intervention data between communication with SMS only and when manual intervention was required. 1680 CLR were reported during the audit periods. Automation improved the rate (100% vs 84.2%; p<0.001) and speed (median 12 min vs 23 min; p<0.001) of CLR acknowledgement. It also improved the rate (93.7% vs 84.0%, p<0.001) and speed (median 21 min vs 109 min; p<0.001) of CLR intervention. From the automation audits, the use of SMS only did not improve physician intervention rates. The automated communication pathway improved physician intervention rate and time in tandem with improved acknowledgement rate and time when compared with manual telephone calling. The use of manual intervention to augment automation did not adversely affect physician intervention rate, implying that an end-to-end pathway was more important than automation alone.
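The institution's pathway is described only at a high level, so the following sketch merely illustrates an SMS-first, auto-escalating notification loop with a manual telephone back-up; the messaging and acknowledgement functions are stubs to be replaced by real integrations, and the timeout is a placeholder.

```python
import time

def notify_critical_result(result, contacts, ack_received, send_sms,
                           ack_timeout_s=600, phone_backup=None):
    """Escalating notification for a critical laboratory result (CLR).

    contacts: ordered list of physicians to try by SMS.
    send_sms(contact, text): stub that delivers the message.
    ack_received(contact): stub returning True once the contact has acknowledged.
    phone_backup(result): manual telephone fallback, called if nobody acknowledges.
    Returns the contact who acknowledged, or None if the manual back-up was used.
    """
    for contact in contacts:
        send_sms(contact, f"CRITICAL RESULT: {result}")
        deadline = time.monotonic() + ack_timeout_s
        while time.monotonic() < deadline:
            if ack_received(contact):
                return contact          # acknowledged; stop escalating
            time.sleep(5)
        # no acknowledgement within the window: escalate to the next contact
    if phone_backup is not None:
        phone_backup(result)            # manual telephone call as the final back-up
    return None
```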
Gap-free segmentation of vascular networks with automatic image processing pipeline.
Hsu, Chih-Yang; Ghaffari, Mahsa; Alaraj, Ali; Flannery, Michael; Zhou, Xiaohong Joe; Linninger, Andreas
2017-03-01
Current image processing techniques capture large vessels reliably but often fail to preserve connectivity in bifurcations and small vessels. Imaging artifacts and noise can create gaps and discontinuities of intensity that hinder segmentation of vascular trees. However, topological analysis of vascular trees requires proper connectivity without gaps, loops or dangling segments. Proper tree connectivity is also important for high-quality rendering of surface meshes for scientific visualization or 3D printing. We present a fully automated vessel enhancement pipeline with automated parameter settings for vessel enhancement of tree-like structures from customary imaging sources, including 3D rotational angiography, magnetic resonance angiography, magnetic resonance venography, and computed tomography angiography. The output of the filter pipeline is a vessel-enhanced image which is ideal for generating anatomically consistent network representations of the cerebral angioarchitecture for further topological or statistical analysis. The filter pipeline combined with computational modeling can potentially improve computer-aided diagnosis of cerebrovascular diseases by delivering biometrics and anatomy of the vasculature. It may serve as the first step in fully automatic epidemiological analysis of large clinical datasets. The automatic analysis would enable rigorous statistical comparison of biometrics in subject-specific vascular trees. The robust and accurate image segmentation using a validated filter pipeline would also eliminate the operator dependency that has been observed in manual segmentation. Moreover, manual segmentation is time prohibitive given that vascular trees have thousands of segments and bifurcations, so that interactive segmentation consumes excessive human resources. Subject-specific trees are a first step toward patient-specific hemodynamic simulations for assessing treatment outcomes. Copyright © 2017 Elsevier Ltd. All rights reserved.
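The published filter pipeline is not reproduced here, but a minimal vessel-enhancement sketch in the same spirit can be built from standard components: multiscale vesselness filtering, hysteresis thresholding, retention of the largest connected component (to suppress dangling fragments), and skeletonization. The sigma range and thresholds below are illustrative, and the example assumes a 2-D slice with bright vessels on a dark background.

```python
import numpy as np
from skimage import filters, measure, morphology

def vessel_mask(image, sigmas=range(1, 6), low=0.02, high=0.10):
    """Vesselness filtering followed by hysteresis thresholding and cleanup."""
    vesselness = filters.frangi(image.astype(float), sigmas=sigmas, black_ridges=False)
    mask = filters.apply_hysteresis_threshold(vesselness, low, high)
    labels = measure.label(mask)
    if labels.max() > 0:
        sizes = np.bincount(labels.ravel())
        sizes[0] = 0
        mask = labels == np.argmax(sizes)          # keep the largest connected tree
    return mask, morphology.skeletonize(mask)      # mask plus a 1-pixel centerline
```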
Mazzaferri, Javier; Larrivée, Bruno; Cakir, Bertan; Sapieha, Przemyslaw; Costantino, Santiago
2018-03-02
Preclinical studies of vascular retinal diseases rely on the assessment of developmental dystrophies in the oxygen induced retinopathy rodent model. The quantification of vessel tufts and avascular regions is typically computed manually from flat mounted retinas imaged using fluorescent probes that highlight the vascular network. Such manual measurements are time-consuming and hampered by user variability and bias, thus a rapid and objective method is needed. Here, we introduce a machine learning approach to segment and characterize vascular tufts, delineate the whole vasculature network, and identify and analyze avascular regions. Our quantitative retinal vascular assessment (QuRVA) technique uses a simple machine learning method and morphological analysis to provide reliable computations of vascular density and pathological vascular tuft regions, devoid of user intervention within seconds. We demonstrate the high degree of error and variability of manual segmentations, and designed, coded, and implemented a set of algorithms to perform this task in a fully automated manner. We benchmark and validate the results of our analysis pipeline using the consensus of several manually curated segmentations using commonly used computer tools. The source code of our implementation is released under version 3 of the GNU General Public License ( https://www.mathworks.com/matlabcentral/fileexchange/65699-javimazzaf-qurva ).
Automated and unsupervised detection of malarial parasites in microscopic images.
Purwar, Yashasvi; Shah, Sirish L; Clarke, Gwen; Almugairi, Areej; Muehlenbachs, Atis
2011-12-13
Malaria is a serious infectious disease. According to the World Health Organization, it is responsible for nearly one million deaths each year. There are various techniques to diagnose malaria of which manual microscopy is considered to be the gold standard. However due to the number of steps required in manual assessment, this diagnostic method is time consuming (leading to late diagnosis) and prone to human error (leading to erroneous diagnosis), even in experienced hands. The focus of this study is to develop a robust, unsupervised and sensitive malaria screening technique with low material cost and one that has an advantage over other techniques in that it minimizes human reliance and is, therefore, more consistent in applying diagnostic criteria. A method based on digital image processing of Giemsa-stained thin smear image is developed to facilitate the diagnostic process. The diagnosis procedure is divided into two parts; enumeration and identification. The image-based method presented here is designed to automate the process of enumeration and identification; with the main advantage being its ability to carry out the diagnosis in an unsupervised manner and yet have high sensitivity and thus reducing cases of false negatives. The image based method is tested over more than 500 images from two independent laboratories. The aim is to distinguish between positive and negative cases of malaria using thin smear blood slide images. Due to the unsupervised nature of method it requires minimal human intervention thus speeding up the whole process of diagnosis. Overall sensitivity to capture cases of malaria is 100% and specificity ranges from 50-88% for all species of malaria parasites. Image based screening method will speed up the whole process of diagnosis and is more advantageous over laboratory procedures that are prone to errors and where pathological expertise is minimal. Further this method provides a consistent and robust way of generating the parasite clearance curves.
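The published method is not reproduced here; the following heavily simplified sketch only illustrates the enumeration/identification split on a Giemsa-stained thin-smear image: cells are segmented by Otsu thresholding, and a cell is flagged when a noticeable fraction of its area is occupied by very dark (stained) pixels. All thresholds are illustrative and untuned.

```python
import numpy as np
from skimage import color, filters, measure, morphology

def screen_thin_smear(rgb_image, min_cell_area=200):
    """Enumerate cells and flag candidate parasitized cells in a thin-smear image.

    Returns (total_cells, flagged_cells). Thresholds are illustrative only.
    """
    gray = color.rgb2gray(rgb_image)
    cells = gray < filters.threshold_otsu(gray)             # cells darker than background
    cells = morphology.remove_small_objects(cells, min_cell_area)
    labels = measure.label(cells)
    dark = gray < np.percentile(gray[cells], 5) if cells.any() else np.zeros_like(cells)
    total = flagged = 0
    for region in measure.regionprops(labels):
        total += 1
        rr, cc = region.coords[:, 0], region.coords[:, 1]
        if dark[rr, cc].mean() > 0.02:   # stained chromatin occupies part of the cell
            flagged += 1
    return total, flagged
```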
A cost analysis comparing xeroradiography to film technics for intraoral radiography.
Gratt, B M; Sickles, E A
1986-01-01
In the United States during 1978, $730 million was spent on dental radiographic services. Currently there are three alternatives for the processing of intraoral radiographs: manual wet-tanks, automatic film units, or xeroradiography. It was the intent of this study to determine which processing system is the most economical. Cost estimates were based on a usage rate of 750 patient images per month and included a calculation of the average cost per radiograph over a five-year period. Capital costs included initial processing equipment and site preparation. Operational costs included labor, supplies, utilities, darkroom rental, and breakdown costs. Clinical time trials were employed to measure examination times. Maintenance logs were employed to assess labor costs. Indirect costs of training were estimated. Results indicated that xeroradiography was the most cost effective ($0.81 per image) compared with either automatic film processing ($1.14 per image) or manual processing ($1.35 per image). Variations in projected costs indicated that if a dental practice performs primarily complete-mouth surveys, exposes fewer than 120 radiographs per month, and pays less than $6.50 per hour in wages, then manual (wet-tank) processing is the most economical method for producing intraoral radiographs.
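The study's cost model amortizes capital over five years at 750 images per month and adds per-image operating costs; the sketch below reproduces that arithmetic. The capital and monthly figures are hypothetical splits chosen only so the outputs land near the per-image costs reported above; they are not the study's actual inputs.

```python
def cost_per_image(capital_cost, monthly_operating_cost, images_per_month=750, years=5):
    """Average cost per radiograph: amortized capital plus operating cost per image."""
    total_images = images_per_month * 12 * years
    total_cost = capital_cost + monthly_operating_cost * 12 * years
    return total_cost / total_images

# Illustrative capital/monthly splits only (not the 1986 study's figures).
for name, capital, monthly in [("manual wet-tank", 1500, 987.50),
                               ("automatic film", 10000, 690.00),
                               ("xeroradiography", 20000, 275.00)]:
    print(f"{name:16s} ${cost_per_image(capital, monthly):.2f} per image")
```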
Evaluation of an automatic segmentation algorithm for definition of head and neck organs at risk.
Thomson, David; Boylan, Chris; Liptrot, Tom; Aitkenhead, Adam; Lee, Lip; Yap, Beng; Sykes, Andrew; Rowbottom, Carl; Slevin, Nicholas
2014-08-03
The accurate definition of organs at risk (OARs) is required to fully exploit the benefits of intensity-modulated radiotherapy (IMRT) for head and neck cancer. However, manual delineation is time-consuming and there is considerable inter-observer variability. This is pertinent as function-sparing and adaptive IMRT have increased the number and frequency of delineation of OARs. We evaluated the accuracy and potential time-saving of Smart Probabilistic Image Contouring Engine (SPICE) automatic segmentation to define OARs for salivary-, swallowing- and cochlea-sparing IMRT. Five clinicians recorded the time to delineate five organs at risk (parotid glands, submandibular glands, larynx, pharyngeal constrictor muscles and cochleae) for each of 10 CT scans. SPICE was then used to define these structures. The acceptability of SPICE contours was initially determined by visual inspection and the total time to modify them recorded per scan. The Simultaneous Truth and Performance Level Estimation (STAPLE) algorithm created a reference standard from all clinician contours. Clinician, SPICE and modified contours were compared against STAPLE by the Dice similarity coefficient (DSC) and mean/maximum distance to agreement (DTA). For all investigated structures, SPICE contours were less accurate than manual contours. However, for the parotid/submandibular glands they were acceptable (median DSC 0.79/0.80; mean DTA 1.5/0.6 mm; maximum DTA 14.8/5.7 mm). Modified SPICE contours were also less accurate than manual contours. The utilisation of SPICE did not result in time savings or improved efficiency. Improvements in the accuracy of automatic segmentation for head and neck OARs would be worthwhile and are required before its routine clinical implementation.
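The two agreement metrics used above can be computed from binary masks as in the following sketch (Dice similarity coefficient, plus a one-directional mean/maximum surface distance to agreement); isotropic pixel spacing is assumed and the toy masks are illustrative.

```python
import numpy as np
from scipy import ndimage as ndi

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def distance_to_agreement(a, b, spacing_mm=1.0):
    """Mean and maximum distance (mm) from the border of `a` to the border of `b`.

    One-directional for brevity; a symmetric DTA would also measure b-to-a.
    """
    border = lambda m: m.astype(bool) ^ ndi.binary_erosion(m.astype(bool))
    dist_to_b = ndi.distance_transform_edt(~border(b)) * spacing_mm
    d = dist_to_b[border(a)]
    return float(d.mean()), float(d.max())

# Toy example: two slightly offset squares.
a = np.zeros((60, 60), bool); a[10:40, 10:40] = True
b = np.zeros((60, 60), bool); b[12:42, 12:42] = True
print(f"DSC={dice(a, b):.2f}  DTA mean/max={distance_to_agreement(a, b)}")
```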
SensePath: Understanding the Sensemaking Process Through Analytic Provenance.
Nguyen, Phong H; Xu, Kai; Wheat, Ashley; Wong, B L William; Attfield, Simon; Fields, Bob
2016-01-01
Sensemaking is described as the process of comprehension, finding meaning and gaining insight from information, producing new knowledge and informing further action. Understanding the sensemaking process allows building effective visual analytics tools to make sense of large and complex datasets. Currently, it is often a manual and time-consuming undertaking to comprehend this: researchers collect observation data, transcribe screen capture videos and think-aloud recordings, identify recurring patterns, and eventually abstract the sensemaking process into a general model. In this paper, we propose a general approach to facilitate such a qualitative analysis process, and introduce a prototype, SensePath, to demonstrate the application of this approach with a focus on browser-based online sensemaking. The approach is based on a study of a number of qualitative research sessions including observations of users performing sensemaking tasks and post hoc analyses to uncover their sensemaking processes. Based on the study results and a follow-up participatory design session with HCI researchers, we decided to focus on the transcription and coding stages of thematic analysis. SensePath automatically captures user's sensemaking actions, i.e., analytic provenance, and provides multi-linked views to support their further analysis. A number of other requirements elicited from the design session are also implemented in SensePath, such as easy integration with existing qualitative analysis workflow and non-intrusive for participants. The tool was used by an experienced HCI researcher to analyze two sensemaking sessions. The researcher found the tool intuitive and considerably reduced analysis time, allowing better understanding of the sensemaking process.
Automatic detection and decoding of honey bee waggle dances
Wild, Benjamin; Rojas, Raúl; Landgraf, Tim
2017-01-01
The waggle dance is one of the most popular examples of animal communication. Forager bees direct their nestmates to profitable resources via a complex motor display. Essentially, the dance encodes the polar coordinates to the resource in the field. Unemployed foragers follow the dancer’s movements and then search for the advertised spots in the field. Throughout the last decades, biologists have employed different techniques to measure key characteristics of the waggle dance and decode the information it conveys. Early techniques involved the use of protractors and stopwatches to measure the dance orientation and duration directly from the observation hive. Recent approaches employ digital video recordings and manual measurements on screen. However, manual approaches are very time-consuming. Most studies, therefore, regard only small numbers of animals in short periods of time. We have developed a system capable of automatically detecting, decoding and mapping communication dances in real-time. In this paper, we describe our recording setup, the image processing steps performed for dance detection and decoding and an algorithm to map dances to the field. The proposed system performs with a detection accuracy of 90.07%. The decoded waggle orientation has an average error of -2.92° (± 7.37°), well within the range of human error. To evaluate and exemplify the system’s performance, a group of bees was trained to an artificial feeder, and all dances in the colony were automatically detected, decoded and mapped. The system presented here is the first of this kind made publicly available, including source code and hardware specifications. We hope this will foster quantitative analyses of the honey bee waggle dance. PMID:29236712
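The detection and decoding pipeline itself is not reproduced here, but the final mapping step can be sketched as follows: the waggle-run angle relative to vertical is added to the solar azimuth to give a field bearing, the run duration is converted to distance with a calibration factor, and the target is offset from the hive with a flat-earth approximation. The duration-to-distance calibration is colony- and study-specific; the value below is only a placeholder.

```python
import math

def decode_dance(waggle_angle_deg, waggle_duration_s, solar_azimuth_deg,
                 hive_lat, hive_lon, metres_per_second=1000.0):
    """Map a decoded waggle run to an approximate field location.

    waggle_angle_deg: run orientation relative to vertical on the comb (clockwise).
    solar_azimuth_deg: sun azimuth at dance time (degrees from north, clockwise).
    metres_per_second: placeholder calibration of distance per second of waggle duration.
    """
    bearing = math.radians((solar_azimuth_deg + waggle_angle_deg) % 360.0)
    distance_m = metres_per_second * waggle_duration_s
    dnorth, deast = distance_m * math.cos(bearing), distance_m * math.sin(bearing)
    # local flat-earth offset (adequate for foraging ranges of a few kilometres)
    lat = hive_lat + math.degrees(dnorth / 6371000.0)
    lon = hive_lon + math.degrees(deast / (6371000.0 * math.cos(math.radians(hive_lat))))
    return lat, lon, distance_m

print(decode_dance(30.0, 0.8, 180.0, hive_lat=52.45, hive_lon=13.30))
```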
Fully automatic registration and segmentation of first-pass myocardial perfusion MR image sequences.
Gupta, Vikas; Hendriks, Emile A; Milles, Julien; van der Geest, Rob J; Jerosch-Herold, Michael; Reiber, Johan H C; Lelieveldt, Boudewijn P F
2010-11-01
Derivation of diagnostically relevant parameters from first-pass myocardial perfusion magnetic resonance images involves the tedious and time-consuming manual segmentation of the myocardium in a large number of images. To reduce the manual interaction and expedite the perfusion analysis, we propose an automatic registration and segmentation method for the derivation of perfusion linked parameters. A complete automation was accomplished by first registering misaligned images using a method based on independent component analysis, and then using the registered data to automatically segment the myocardium with active appearance models. We used 18 perfusion studies (100 images per study) for validation in which the automatically obtained (AO) contours were compared with expert drawn contours on the basis of point-to-curve error, Dice index, and relative perfusion upslope in the myocardium. Visual inspection revealed successful segmentation in 15 out of 18 studies. Comparison of the AO contours with expert drawn contours yielded 2.23 ± 0.53 mm and 0.91 ± 0.02 as point-to-curve error and Dice index, respectively. The average difference between manually and automatically obtained relative upslope parameters was found to be statistically insignificant (P = .37). Moreover, the analysis time per slice was reduced from 20 minutes (manual) to 1.5 minutes (automatic). We proposed an automatic method that significantly reduced the time required for analysis of first-pass cardiac magnetic resonance perfusion images. The robustness and accuracy of the proposed method were demonstrated by the high spatial correspondence and statistically insignificant difference in perfusion parameters, when AO contours were compared with expert drawn contours. Copyright © 2010 AUR. Published by Elsevier Inc. All rights reserved.
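The relative upslope parameter referred to above can be sketched as a linear fit over a sliding window of the signal-intensity curve, normalized by the blood-pool upslope; the window length and the toy curves are illustrative, not the study's implementation.

```python
import numpy as np

def upslope(signal, t=None, win=5):
    """Maximum upslope of a signal-intensity curve, from linear fits over a sliding window."""
    signal = np.asarray(signal, float)
    t = np.arange(len(signal), dtype=float) if t is None else np.asarray(t, float)
    best = 0.0
    for i in range(len(signal) - win):
        slope = np.polyfit(t[i:i + win], signal[i:i + win], 1)[0]
        best = max(best, slope)
    return best

def relative_upslope(myocardial_curve, blood_pool_curve):
    """Myocardial upslope normalized by the LV blood-pool upslope."""
    return upslope(myocardial_curve) / upslope(blood_pool_curve)

# Toy curves: the blood pool enhances earlier and faster than the myocardium.
t = np.arange(60.0)
blood = 100 / (1 + np.exp(-(t - 15) / 2.0))
myo = 40 / (1 + np.exp(-(t - 25) / 4.0))
print(f"relative upslope = {relative_upslope(myo, blood):.2f}")
```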
Factors affecting dry-cured ham consumer acceptability.
Morales, R; Guerrero, L; Aguiar, A P S; Guàrdia, M D; Gou, P
2013-11-01
The objectives of the present study were (1) to compare the relative importance of price, processing time, texture and intramuscular fat in purchase intention of dry-cured ham through conjoint analysis, (2) to evaluate the effect of dry-cured ham appearance on consumer expectations, and (3) to describe the consumer sensory preferences of dry-cured ham using external preference mapping. Texture and processing time influenced the consumer preferences in conjoint analysis. Red colour intensity, colour uniformity, external fat and white film presence/absence influenced consumer expectations. The consumer disliked hams with bitter and metallic flavour and with excessive saltiness and piquantness. Differences between expected and experienced acceptability were found, which indicates that the visual preference of consumers does not allow them to select a dry-cured ham that satisfies their sensory preferences of flavour and texture. Copyright © 2013 Elsevier Ltd. All rights reserved.
Autoverification process improvement by Six Sigma approach: Clinical chemistry & immunoassay.
Randell, Edward W; Short, Garry; Lee, Natasha; Beresford, Allison; Spencer, Margaret; Kennell, Marina; Moores, Zoë; Parry, David
2018-05-01
This study examines the effectiveness of a project to enhance an autoverification (AV) system through application of Six Sigma (DMAIC) process improvement strategies. Similar AV systems set up at three sites underwent examination and modification to produce improved systems while monitoring the proportion of samples autoverified, the time required for manual review and verification, sample processing time, and the characteristics of tests not autoverified. This information was used to identify areas for improvement and monitor the impact of changes. Use of reference range based criteria had the greatest impact on the proportion of tests autoverified. To improve the AV process, reference range based criteria were replaced with extreme value limits based on a 99.5% test result interval, delta check criteria were broadened, and new specimen consistency rules were implemented. Decision guidance tools were also developed to assist staff using the AV system. The mean proportion of tests and samples autoverified improved from <62% for samples and <80% for tests to >90% for samples and >95% for tests across all three sites. The new AV system significantly decreased turn-around time and total sample review time (to about a third); however, time spent on manual review of held samples almost tripled. There was no evidence of compromise to the quality of the testing process, and <1% of samples held for exceeding delta check or extreme limits required corrective action. The Six Sigma (DMAIC) process improvement methodology was successfully applied to AV systems, resulting in an increase in overall test and sample AV to >90%, improved turn-around time, and reduced time for manual verification, with no obvious compromise to quality or error detection. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
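The rule types described (extreme-value limits in place of reference-range checks, plus delta checks against the previous result) can be sketched as a simple decision function; the analyte names and limits below are illustrative placeholders, not the institution's rules.

```python
def autoverify(test, value, previous=None, extreme_limits=None, delta_limits=None):
    """Decide whether a result can be autoverified or must be held for manual review.

    extreme_limits: {test: (low, high)} covering ~99.5% of results (illustrative values below).
    delta_limits:   {test: max_allowed_change} versus the patient's previous result.
    Returns (verified: bool, reason: str).
    """
    extreme_limits = extreme_limits or {"sodium_mmol_L": (115, 160), "potassium_mmol_L": (2.0, 7.0)}
    delta_limits = delta_limits or {"sodium_mmol_L": 15, "potassium_mmol_L": 1.5}

    low, high = extreme_limits[test]
    if not (low <= value <= high):
        return False, "extreme value limit exceeded - hold for manual verification"
    if previous is not None and abs(value - previous) > delta_limits[test]:
        return False, "delta check failed - hold for manual verification"
    return True, "autoverified"

print(autoverify("potassium_mmol_L", 6.5, previous=4.2))
```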
SciBox, an end-to-end automated science planning and commanding system
NASA Astrophysics Data System (ADS)
Choo, Teck H.; Murchie, Scott L.; Bedini, Peter D.; Steele, R. Josh; Skura, Joseph P.; Nguyen, Lillian; Nair, Hari; Lucks, Michael; Berman, Alice F.; McGovern, James A.; Turner, F. Scott
2014-01-01
SciBox is a new technology for planning and commanding science operations for Earth-orbital and planetary space missions. It has been incrementally developed since 2001 and demonstrated on several spaceflight projects. The technology has matured to the point that it is now being used to plan and command all orbital science operations for the MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) mission to Mercury. SciBox encompasses the derivation of observing sequences from science objectives, the scheduling of those sequences, the generation of spacecraft and instrument commands, and the validation of those commands prior to uploading to the spacecraft. Although the process is automated, science and observing requirements are incorporated at each step by a series of rules and parameters to optimize observing opportunities, which are tested and validated through simulation and review. Except for limited special operations and tests, there is no manual scheduling of observations or construction of command sequences. SciBox reduces the lead time for operations planning by shortening the time-consuming coordination process, reduces cost by automating the labor-intensive processes of human-in-the-loop adjudication of observing priorities, reduces operations risk by systematically checking constraints, and maximizes science return by fully evaluating the trade space of observing opportunities to meet MESSENGER science priorities within spacecraft recorder, downlink, scheduling, and orbital-geometry constraints.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The final report for the project is presented in five volumes. This volume is the Programmer's Manual. It covers: a system overview, attractiveness component of gravity model, trip-distribution component of gravity model, economic-effects model, and the consumer-surplus model. The project sought to determine the impact of Outer Continental Shelf development on recreation and tourism.
Computed tomography-based volumetric tool for standardized measurement of the maxillary sinus
Giacomini, Guilherme; Pavan, Ana Luiza Menegatti; Altemani, João Mauricio Carrasco; Duarte, Sergio Barbosa; Fortaleza, Carlos Magno Castelo Branco; Miranda, José Ricardo de Arruda
2018-01-01
Volume measurements of maxillary sinus may be useful to identify diseases affecting paranasal sinuses. However, literature shows a lack of consensus in studies measuring the volume. This may be attributable to different computed tomography data acquisition techniques, segmentation methods, focuses of investigation, among other reasons. Furthermore, methods for volumetrically quantifying the maxillary sinus are commonly manual or semiautomated, which require substantial user expertise and are time-consuming. The purpose of the present study was to develop an automated tool for quantifying the total and air-free volume of the maxillary sinus based on computed tomography images. The quantification tool seeks to standardize maxillary sinus volume measurements, thus allowing better comparisons and determinations of factors that influence maxillary sinus size. The automated tool utilized image processing techniques (watershed, threshold, and morphological operators). The maxillary sinus volume was quantified in 30 patients. To evaluate the accuracy of the automated tool, the results were compared with manual segmentation that was performed by an experienced radiologist using a standard procedure. The mean percent differences between the automated and manual methods were 7.19% ± 5.83% and 6.93% ± 4.29% for total and air-free maxillary sinus volume, respectively. Linear regression and Bland-Altman statistics showed good agreement and low dispersion between both methods. The present automated tool for maxillary sinus volume assessment was rapid, reliable, robust, accurate, and reproducible and may be applied in clinical practice. The tool may be used to standardize measurements of maxillary volume. Such standardization is extremely important for allowing comparisons between studies, providing a better understanding of the role of the maxillary sinus, and determining the factors that influence maxillary sinus size under normal and pathological conditions. PMID:29304130
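The Bland-Altman statistics mentioned above reduce to a bias and 95% limits of agreement over paired measurements, as in the following sketch with hypothetical volumes.

```python
import numpy as np

def bland_altman(auto, manual):
    """Bias and 95% limits of agreement between two measurement methods."""
    auto, manual = np.asarray(auto, float), np.asarray(manual, float)
    diff = auto - manual
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired maxillary sinus volumes (cm^3) from the two methods.
auto_vol   = [15.2, 18.1, 12.9, 20.4, 16.8]
manual_vol = [14.7, 18.9, 12.3, 19.8, 17.5]
bias, (lo, hi) = bland_altman(auto_vol, manual_vol)
print(f"bias={bias:+.2f} cm^3, 95% limits of agreement [{lo:.2f}, {hi:.2f}]")
```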
NASA Astrophysics Data System (ADS)
Nanayakkara, Nuwan D.; Samarabandu, Jagath; Fenster, Aaron
2006-04-01
Estimation of prostate location and volume is essential in determining a dose plan for ultrasound-guided brachytherapy, a common prostate cancer treatment. However, manual segmentation is difficult, time consuming and prone to variability. In this paper, we present a semi-automatic discrete dynamic contour (DDC) model based image segmentation algorithm, which effectively combines a multi-resolution model refinement procedure together with the domain knowledge of the image class. The segmentation begins on a low-resolution image by defining a closed DDC model by the user. This contour model is then deformed progressively towards higher resolution images. We use a combination of a domain knowledge based fuzzy inference system (FIS) and a set of adaptive region based operators to enhance the edges of interest and to govern the model refinement using a DDC model. The automatic vertex relocation process, embedded into the algorithm, relocates deviated contour points back onto the actual prostate boundary, eliminating the need of user interaction after initialization. The accuracy of the prostate boundary produced by the proposed algorithm was evaluated by comparing it with a manually outlined contour by an expert observer. We used this algorithm to segment the prostate boundary in 114 2D transrectal ultrasound (TRUS) images of six patients scheduled for brachytherapy. The mean distance between the contours produced by the proposed algorithm and the manual outlines was 2.70 ± 0.51 pixels (0.54 ± 0.10 mm). We also showed that the algorithm is insensitive to variations of the initial model and parameter values, thus increasing the accuracy and reproducibility of the resulting boundaries in the presence of noise and artefacts.
Shahidi, Shoaleh; Bahrampour, Ehsan; Soltanimehr, Elham; Zamani, Ali; Oshagh, Morteza; Moattari, Marzieh; Mehdizadeh, Alireza
2014-09-16
Two-dimensional projection radiographs have been traditionally considered the modality of choice for cephalometric analysis. To overcome the shortcomings of two-dimensional images, three-dimensional computed tomography (CT) has been used to evaluate craniofacial structures. However, manual landmark detection depends on medical expertise, and the process is time-consuming. The present study was designed to produce software capable of automated localization of craniofacial landmarks on cone beam (CB) CT images based on image registration and to evaluate its accuracy. The software was designed using MATLAB programming language. The technique was a combination of feature-based (principal axes registration) and voxel similarity-based methods for image registration. A total of 8 CBCT images were selected as our reference images for creating a head atlas. Then, 20 CBCT images were randomly selected as the test images for evaluating the method. Three experts twice located 14 landmarks in all 28 CBCT images during two examinations set 6 weeks apart. The differences in the distances of coordinates of each landmark on each image between manual and automated detection methods were calculated and reported as mean errors. The combined intraclass correlation coefficient for intraobserver reliability was 0.89 and for interobserver reliability 0.87 (95% confidence interval, 0.82 to 0.93). The mean errors of all 14 landmarks were <4 mm. Additionally, 63.57% of landmarks had a mean error of <3 mm compared with manual detection (gold standard method). The accuracy of our approach for automated localization of craniofacial landmarks, which was based on combining feature-based and voxel similarity-based methods for image registration, was acceptable. Nevertheless we recommend repetition of this study using other techniques, such as intensity-based methods.
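The feature-based step named above, principal axes registration, can be sketched as matching centroids and covariance eigenvectors of two point sets; the sign and ordering ambiguities of eigenvectors are only crudely handled here, and the toy data are illustrative.

```python
import numpy as np

def principal_axes(points):
    """Centroid and covariance eigenvectors (columns, sorted by decreasing eigenvalue)."""
    centroid = points.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov((points - centroid).T))
    evecs = evecs[:, np.argsort(evals)[::-1]]
    # fix each axis' sign so that its largest-magnitude component is positive
    signs = np.sign(evecs[np.argmax(np.abs(evecs), axis=0), np.arange(evecs.shape[1])])
    return centroid, evecs * signs

def principal_axes_transform(moving_points, fixed_points):
    """Rigid transform (R, t) mapping the moving set's principal axes onto the fixed set's."""
    c_m, A_m = principal_axes(moving_points)
    c_f, A_f = principal_axes(fixed_points)
    R = A_f @ A_m.T
    t = c_f - R @ c_m
    return R, t

# Toy check: recover a known rotation and translation of a random point cloud.
rng = np.random.default_rng(0)
fixed = rng.normal(size=(500, 3)) * [5.0, 2.0, 1.0]
angle = np.radians(20.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
moving = fixed @ R_true.T + [10.0, -4.0, 2.0]
R, t = principal_axes_transform(moving, fixed)
print("aligned:", np.allclose(moving @ R.T + t, fixed, atol=1e-6))
```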
An Investigation of Automatic Change Detection for Topographic Map Updating
NASA Astrophysics Data System (ADS)
Duncan, P.; Smit, J.
2012-08-01
Changes to the landscape are constantly occurring, and it is essential for geospatial and mapping organisations that these changes are regularly detected and captured so that map databases can be updated to reflect the current status of the landscape. The Chief Directorate of National Geospatial Information (CD: NGI), South Africa's national mapping agency, currently relies on manual methods of detecting and capturing these changes. These manual methods are time consuming and labour intensive, and rely on the skills and interpretation of the operator. It is therefore necessary to move towards more automated methods in the production process at CD: NGI. The aim of this research is to investigate a methodology for automatic or semi-automatic change detection for the purpose of updating topographic databases. The method investigated detects changes through image classification as well as spatial analysis, and is focused on urban landscapes. The major data inputs into this study are high-resolution aerial imagery and existing topographic vector data. Initial results indicate that traditional pixel-based image classification approaches are unsatisfactory for large-scale land-use mapping and that object-oriented approaches hold more promise. Even with object-oriented image classification, however, generalization of techniques on a broad scale has produced inconsistent results. A solution may lie in a hybrid of pixel-based and object-oriented techniques.
Zakeri, Fahimeh Sadat; Setarehdan, Seyed Kamaledin; Norouzi, Somayye
2017-10-01
Segmentation of the arterial wall boundaries from intravascular ultrasound images is an important image processing task for quantifying arterial wall characteristics such as shape, area, thickness and eccentricity. Since manual segmentation of these boundaries is a laborious and time-consuming procedure, many researchers have attempted to develop (semi-)automatic segmentation techniques as a powerful tool for educational and clinical purposes, but as yet there is no clinically approved method on the market. This paper presents a deterministic-statistical strategy for automatic media-adventitia border detection by a fourfold algorithm. First, a smoothed initial contour is extracted based on classification in the sparse representation framework, combined with the dynamic directional convolution vector field. Next, an active contour model is utilized to propagate the initial contour toward the borders of interest. Finally, the extracted contour is refined in the leakage, side-branch opening and calcification regions based on the image texture patterns. The performance of the proposed algorithm is evaluated by comparing the results to borders manually traced by an expert on 312 different IVUS images obtained from four different patients. The statistical analysis of the results demonstrates the efficiency of the proposed method in media-adventitia border detection, with sufficient consistency in the leakage and calcification regions. Copyright © 2017 Elsevier Ltd. All rights reserved.
Automatic segmentation and supervised learning-based selection of nuclei in cancer tissue images.
Nandy, Kaustav; Gudla, Prabhakar R; Amundsen, Ryan; Meaburn, Karen J; Misteli, Tom; Lockett, Stephen J
2012-09-01
Analysis of preferential localization of certain genes within the cell nuclei is emerging as a new technique for the diagnosis of breast cancer. Quantitation requires accurate segmentation of 100-200 cell nuclei in each tissue section to draw a statistically significant result. Thus, for large-scale analysis, manual processing is too time consuming and subjective. Fortuitously, acquired images generally contain many more nuclei than are needed for analysis. Therefore, we developed an integrated workflow that selects, following automatic segmentation, a subpopulation of accurately delineated nuclei for positioning of fluorescence in situ hybridization-labeled genes of interest. Segmentation was performed by a multistage watershed-based algorithm and screening by an artificial neural network-based pattern recognition engine. The performance of the workflow was quantified in terms of the fraction of automatically selected nuclei that were visually confirmed as well segmented and by the boundary accuracy of the well-segmented nuclei relative to a 2D dynamic programming-based reference segmentation method. Application of the method was demonstrated for discriminating normal and cancerous breast tissue sections based on the differential positioning of the HES5 gene. Automatic results agreed with manual analysis in 11 out of 14 cancers, all four normal cases, and all five noncancerous breast disease cases, thus showing the accuracy and robustness of the proposed approach. Published 2012 Wiley Periodicals, Inc.
Martínez, Fabio; Romero, Eduardo; Dréan, Gaël; Simon, Antoine; Haigron, Pascal; De Crevoisier, Renaud; Acosta, Oscar
2014-01-01
Accurate segmentation of the prostate and organs at risk in computed tomography (CT) images is a crucial step for radiotherapy (RT) planning. Manual segmentation, as performed nowadays, is a time consuming process and prone to errors due to the high intra- and inter-expert variability. This paper introduces a new automatic method for prostate, rectum and bladder segmentation in planning CT using a geometrical shape model under a Bayesian framework. A set of prior organ shapes are first built by applying Principal Component Analysis (PCA) to a population of manually delineated CT images. Then, for a given individual, the most similar shape is obtained by mapping a set of multi-scale edge observations to the space of organs with a customized likelihood function. Finally, the selected shape is locally deformed to adjust the edges of each organ. Experiments were performed with real data from a population of 116 patients treated for prostate cancer. The data set was split into training and test groups, with 30 and 86 patients, respectively. Results show that the method produces competitive segmentations w.r.t. standard methods (averaged Dice = 0.91 for prostate, 0.94 for bladder, 0.89 for rectum) and outperforms the majority-vote multi-atlas approaches (using rigid registration, free-form deformation (FFD) and the demons algorithm). PMID:24594798
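The PCA shape prior amounts to a point-distribution model: aligned training shapes are decomposed into a mean shape plus a few principal variation modes, and candidate shapes are synthesised by weighting those modes. A toy sketch under that assumption (the array names are hypothetical):

```python
# Toy point-distribution model: each training shape is a flattened vector of
# corresponding (x, y, z) surface points after alignment.
import numpy as np
from sklearn.decomposition import PCA

def build_shape_model(training_shapes, n_modes=5):
    # training_shapes: hypothetical (n_shapes, n_points * 3) array.
    pca = PCA(n_components=n_modes)
    pca.fit(training_shapes)
    return pca

def synthesize_shape(pca, mode_weights):
    # New shape = mean shape + weighted sum of the principal variation modes.
    return pca.mean_ + mode_weights @ pca.components_
```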
Towards Automated Large-Scale 3D Phenotyping of Vineyards under Field Conditions
Rose, Johann Christian; Kicherer, Anna; Wieland, Markus; Klingbeil, Lasse; Töpfer, Reinhard; Kuhlmann, Heiner
2016-01-01
In viticulture, phenotypic data are traditionally collected directly in the field via visual and manual means by an experienced person. This approach is time consuming, subjective and prone to human errors. In recent years, research therefore has focused strongly on developing automated and non-invasive sensor-based methods to increase data acquisition speed, enhance measurement accuracy and objectivity and to reduce labor costs. While many 2D methods based on image processing have been proposed for field phenotyping, only a few 3D solutions are found in the literature. A track-driven vehicle consisting of a camera system, a real-time-kinematic GPS system for positioning, as well as hardware for vehicle control, image storage and acquisition is used to visually capture a whole vine row canopy with georeferenced RGB images. In the first post-processing step, these images were used within a multi-view-stereo software to reconstruct a textured 3D point cloud of the whole grapevine row. A classification algorithm is then used in the second step to automatically classify the raw point cloud data into the semantic plant components, grape bunches and canopy. In the third step, phenotypic data for the semantic objects is gathered using the classification results obtaining the quantity of grape bunches, berries and the berry diameter. PMID:27983669
3D acquisition and modeling for flint artefacts analysis
NASA Astrophysics Data System (ADS)
Loriot, B.; Fougerolle, Y.; Sestier, C.; Seulin, R.
2007-07-01
In this paper, we are interested in accurate acquisition and modeling of flint artefacts. Archaeologists need accurate geometry measurements to refine their understanding of the flint artefact manufacturing process. Current techniques require several operations. First, a copy of a flint artefact is reproduced. The copy is then sliced. A picture is taken for each slice. Eventually, geometric information is manually determined from the pictures. Such a technique is very time consuming, and the processing applied to the original, as well as the reproduced object, induces several measurement errors (prototyping approximations, slicing, image acquisition, and measurement). By using 3D scanners, we significantly reduce the number of operations related to data acquisition and completely suppress the prototyping step to obtain an accurate 3D model. The 3D models are segmented into sliced parts that are then analyzed. Each slice is then automatically fitted by a mathematical representation. Such a representation offers several interesting properties: geometric features can be characterized (e.g. shapes, curvature, sharp edges, etc.), and the shape of the original piece of stone can be extrapolated. The contributions of this paper are an acquisition technique using 3D scanners that strongly reduces human intervention, acquisition time and measurement errors, and the representation of flint artefacts as mathematical 2D sections that enable accurate analysis.
Bouchoucha, Mongia; Akrout, Mouna; Bellali, Hédia; Bouchoucha, Rim; Tarhouni, Fadwa; Mansour, Abderraouf Ben; Zouari, Béchir
2016-01-01
Background Estimation of food portion sizes has always been a challenge in dietary studies on free-living individuals. The aim of this work was to develop and validate a food photography manual to improve the accuracy of the estimated size of consumed food portions. Methods A manual was compiled from digital photos of foods commonly consumed by the Tunisian population. The food was cooked and weighed before taking digital photographs of three portion sizes. The manual was validated by comparing the method of 24-hour recall (using photos) to the reference method [food weighing (FW)]. In both the methods, the comparison focused on food intake amounts as well as nutritional issues. Validity was assessed by Bland–Altman limits of agreement. In total, 31 male and female volunteers aged 9–89 participated in the study. Results We focused on eight food categories and compared their estimated amounts (using the 24-hour recall method) to those actually consumed (using FW). Animal products and sweets were underestimated, whereas pasta, bread, vegetables, fruits, and dairy products were overestimated. However, the difference between the two methods is not statistically significant except for pasta (p<0.05) and dairy products (p<0.05). The coefficient of correlation between the two methods is highly significant, ranging from 0.876 for pasta to 0.989 for dairy products. Nutrient intake calculated for both methods showed insignificant differences except for fat (p<0.001) and dietary fiber (p<0.05). A highly significant correlation was observed between the two methods for all micronutrients. The test agreement highlights the lack of difference between the two methods. Conclusion The difference between the 24-hour recall method using digital photos and the weighing method is acceptable. Our findings indicate that the food photography manual can be a useful tool for quantifying food portion sizes in epidemiological dietary surveys. PMID:27585631
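For reference, the Bland–Altman limits of agreement used in this validation are the mean difference between the two methods plus or minus 1.96 standard deviations of the differences; a minimal sketch, assuming two hypothetical arrays of portion-size estimates:

```python
# Bland-Altman limits of agreement for two paired measurement methods;
# input arrays are hypothetical portion-size estimates (photo vs. weighed).
import numpy as np

def bland_altman_limits(photo_estimates, weighed_values):
    diff = np.asarray(photo_estimates) - np.asarray(weighed_values)
    bias = diff.mean()
    spread = 1.96 * diff.std(ddof=1)
    return bias, bias - spread, bias + spread   # bias, lower and upper limit
```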
Automatic evidence retrieval for systematic reviews.
Choong, Miew Keen; Galgani, Filippo; Dunn, Adam G; Tsafnat, Guy
2014-10-01
Snowballing involves recursively pursuing relevant references cited in the retrieved literature and adding them to the search results. Snowballing is an alternative approach to discover additional evidence that was not retrieved through conventional search. Snowballing's effectiveness makes it best practice in systematic reviews despite being time-consuming and tedious. Our goal was to evaluate an automatic method for citation snowballing's capacity to identify and retrieve the full text and/or abstracts of cited articles. Using 20 review articles that contained 949 citations to journal or conference articles, we manually searched Microsoft Academic Search (MAS) and identified 78.0% (740/949) of the cited articles that were present in the database. We compared the performance of the automatic citation snowballing method against the results of this manual search, measuring precision, recall, and F1 score. The automatic method was able to correctly identify 633 (as proportion of included citations: recall=66.7%, F1 score=79.3%; as proportion of citations in MAS: recall=85.5%, F1 score=91.2%) of citations with high precision (97.7%), and retrieved the full text or abstract for 490 (recall=82.9%, precision=92.1%, F1 score=87.3%) of the 633 correctly retrieved citations. The proposed method for automatic citation snowballing is accurate and is capable of obtaining the full texts or abstracts for a substantial proportion of the scholarly citations in review articles. By automating the process of citation snowballing, it may be possible to reduce the time and effort of common evidence surveillance tasks such as keeping trial registries up to date and conducting systematic reviews.
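For reference, the retrieval metrics quoted above follow the usual definitions; the snippet below recomputes the "as proportion of included citations" figures from the raw counts, with the retrieved-set size inferred from the reported precision (so treat it as approximate).

```python
# Precision, recall and F1 from raw counts; not the authors' evaluation code.
def precision_recall_f1(true_positives, retrieved, relevant):
    precision = true_positives / retrieved
    recall = true_positives / relevant
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Paper's citation counts: 633 correct out of 949 cited references; the
# retrieved count is inferred from precision = 97.7%, hence approximate.
print(precision_recall_f1(633, round(633 / 0.977), 949))
```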
NASA Astrophysics Data System (ADS)
Leavens, Claudia; Vik, Torbjørn; Schulz, Heinrich; Allaire, Stéphane; Kim, John; Dawson, Laura; O'Sullivan, Brian; Breen, Stephen; Jaffray, David; Pekar, Vladimir
2008-03-01
Manual contouring of target volumes and organs at risk in radiation therapy is extremely time-consuming, in particular for treating the head-and-neck area, where a single patient treatment plan can take several hours to contour. As radiation treatment delivery moves towards adaptive treatment, the need for more efficient segmentation techniques will increase. We are developing a method for automatic model-based segmentation of the head and neck. This process can be broken down into three main steps: i) automatic landmark identification in the image dataset of interest, ii) automatic landmark-based initialization of deformable surface models to the patient image dataset, and iii) adaptation of the deformable models to the patient-specific anatomical boundaries of interest. In this paper, we focus on the validation of the first step of this method, quantifying the results of our automatic landmark identification method. We use an image atlas formed by applying thin-plate spline (TPS) interpolation to ten atlas datasets, using 27 manually identified landmarks in each atlas/training dataset. The principal variation modes returned by principal component analysis (PCA) of the landmark positions were used by an automatic registration algorithm, which sought the corresponding landmarks in the clinical dataset of interest using a controlled random search algorithm. Applying a run time of 60 seconds to the random search, a root mean square (rms) distance to the ground-truth landmark position of 9.5 +/- 0.6 mm was calculated for the identified landmarks. Automatic segmentation of the brain, mandible and brain stem, using the detected landmarks, is demonstrated.
Integrated piezoelectric actuators in deep drawing tools to reduce the try-out
NASA Astrophysics Data System (ADS)
Neugebauer, Reimund; Mainda, Patrick; Kerschner, Matthias; Drossel, Welf-Guntram; Roscher, Hans-Jürgen
2011-05-01
Tool making is a very time consuming and expensive operation because many iteration loops are used to manually adjust tool components during the try-out process. As a result, the try-out of deep drawing tools accounts for about 30% of the total costs. This is the reason why an active deep drawing tool was developed at the Fraunhofer Institute for Machine Tools and Forming Technology IWU in cooperation with Audi and Volkswagen to reduce the costs and production rates. The main difference between the active and conventional deep drawing tools is the use of piezoelectric actuators to control the forming process. The active tool idea, which is the main subject of this research, will be presented as well as the findings of experiments with the custom-built deep drawing tool. This experimental tool was designed according to production requirements and has been equipped with piezoelectric actuators that allow active pressure distribution on the sheet metal flange. The piezoelectric elements deployed are similar to those being used in piezo injector systems for modern diesel engines. In order to achieve the required force, the actuators are combined in a cluster that is embedded in the die of the deep drawing tool. One main objective of this work, i.e. reducing the time-consuming try-out period, has been achieved with the experimental tool, which means that the actuators were used to set a static pressure distribution between the blankholder and the die. We will present the findings of our analysis and the advantages of the active system over a conventional deep drawing tool. In addition to changing the static pressure distribution, the piezoelectric actuators can also be used to generate a dynamic pressure distribution during the forming process. As a result the active tool has the potential to expand the forming constraints, making it possible to manage forming restrictions caused by lightweight materials in the future.
Process control and recovery in the Link Monitor and Control Operator Assistant
NASA Technical Reports Server (NTRS)
Lee, Lorrine; Hill, Randall W., Jr.
1993-01-01
This paper describes our approach to providing process control and recovery functions in the Link Monitor and Control Operator Assistant (LMCOA). The focus of the LMCOA is to provide semi-automated monitor and control to support station operations in the Deep Space Network. The LMCOA will be demonstrated with precalibration operations for Very Long Baseline Interferometry on a 70-meter antenna. Precalibration, the task of setting up the equipment to support a communications link with a spacecraft, is a manual, time consuming and error-prone process. One problem with the current system is that it does not provide explicit feedback about the effects of control actions. The LMCOA uses a Temporal Dependency Network (TDN) to represent an end-to-end sequence of operational procedures and a Situation Manager (SM) module to provide process control, diagnosis, and recovery functions. The TDN is a directed network representing precedence, parallelism, precondition, and postcondition constraints. The SM maintains an internal model of the expected and actual states of the subsystems in order to determine if each control action executed successfully and to provide feedback to the user. The LMCOA is implemented on a NeXT workstation using Objective C, Interface Builder and the C Language Integrated Production System.
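A much-simplified illustration of executing a precedence network with pre- and postcondition checks is sketched below; the step names and condition functions are invented for the example and do not reflect the actual LMCOA precalibration sequence.

```python
# Toy execution of a precedence network with pre/postcondition checks,
# loosely inspired by the TDN/Situation Manager idea; step names are invented.
from graphlib import TopologicalSorter

def run_sequence(steps, dependencies, state):
    """steps: name -> (precondition, action, postcondition) callables on state."""
    for name in TopologicalSorter(dependencies).static_order():
        pre, action, post = steps[name]
        if not pre(state):
            raise RuntimeError(f"precondition failed before {name}")
        action(state)
        if not post(state):
            # Feedback to the operator: the control action did not take effect.
            raise RuntimeError(f"postcondition failed after {name}")

# Hypothetical two-step fragment with a precedence constraint.
steps = {
    "configure_receiver": (lambda s: True,
                           lambda s: s.update(receiver="locked"),
                           lambda s: s.get("receiver") == "locked"),
    "point_antenna": (lambda s: s.get("receiver") == "locked",
                      lambda s: s.update(antenna="on_point"),
                      lambda s: s.get("antenna") == "on_point"),
}
run_sequence(steps, {"point_antenna": {"configure_receiver"}}, {})
```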
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2015-07-01
Air sealing of building enclosures is a difficult and time-consuming process. Current methods in new construction require laborers to physically locate small and sometimes large holes in multiple assemblies and then manually seal each of them. The innovation demonstrated under this research study was the automated air sealing and compartmentalization of buildings through the use of an aerosolized sealant, developed by the Western Cooling Efficiency Center at the University of California Davis. CARB sought to demonstrate this new technology application in a multifamily building in Queens, NY. The effectiveness of the sealing process was evaluated by three methods: air leakage testing of the overall apartment before and after sealing, point-source testing of individual leaks, and pressure measurements in the walls of the target apartment during sealing. Aerosolized sealing was successful by several measures in this study. Many individual leaks that are labor-intensive to address separately were well sealed by the aerosol particles. In addition, many diffuse leaks that are difficult to identify and treat were also sealed. The aerosol-based sealing process resulted in an average reduction of 71% in air leakage across three apartments and an average apartment airtightness of 0.08 CFM50/SF of enclosure area.
Comparison on Human Resource Requirement between Manual and Automated Dispensing Systems.
Noparatayaporn, Prapaporn; Sakulbumrungsil, Rungpetch; Thaweethamcharoen, Tanita; Sangseenil, Wunwisa
2017-05-01
This study was conducted to compare human resource requirement among manual, automated, and modified automated dispensing systems. Data were collected from the pharmacy department at the 2100-bed university hospital (Siriraj Hospital, Bangkok, Thailand). Data regarding the duration of the medication distribution process were collected by using self-reported forms for 1 month. The data on the automated dispensing machine (ADM) system were obtained from 1 piloted inpatient ward, whereas those on the manual system were the average of other wards. Data on dispensing, returned unused medication, and stock management processes under the traditional manual system and the ADM system were from actual activities, whereas the modified ADM system was modeled. The full-time equivalent (FTE) of each model was estimated for comparison. The result showed that the manual system required 46.84 FTEs of pharmacists and 132.66 FTEs of pharmacy technicians. By adding pharmacist roles on screening and verification under the ADM system, the ADM system required 117.61 FTEs of pharmacists. Replacing counting and filling medication functions by ADM has decreased the number of pharmacy technicians to 55.38 FTEs. After the modified ADM system canceled the return unused medication process, FTEs requirement for pharmacists and pharmacy technicians decreased to 69.78 and 51.90 FTEs, respectively. The ADM system decreased the workload of pharmacy technicians, whereas it required more time from pharmacists. However, the increased workload of pharmacists was associated with more comprehensive patient care functions, which resulted from the redesigned work process. Copyright © 2017. Published by Elsevier Inc.
A Time-Cost Management System for use in Educational Planning.
ERIC Educational Resources Information Center
McIsaac, Donald N., Jr.; and Others
Prepared specifically for the Denver Public Schools, this manual nevertheless provides some of the basic understanding required for the proper execution of educational planning based upon PERT/CPM techniques. The theory of PERT/CPM and the fundamental processes involved therein are elucidated in the first part of the manual while the operating…
Automatic map generalisation from research to production
NASA Astrophysics Data System (ADS)
Nyberg, Rose; Johansson, Mikael; Zhang, Yang
2018-05-01
The manual work of map generalisation is known to be a complex and time consuming task. With the development of technology and societies, the demands for more flexible map products with higher quality are growing. The Swedish mapping, cadastral and land registration authority Lantmäteriet has manual production lines for databases in five different scales, 1 : 10 000 (SE10), 1 : 50 000 (SE50), 1 : 100 000 (SE100), 1 : 250 000 (SE250) and 1 : 1 million (SE1M). To streamline this work, Lantmäteriet started a project to automatically generalise geographic information. The planned timespan for the project is 2015-2022. Below, the project background is described together with the methods for the automatic generalisation. The paper is completed with a description of the results and conclusions.
Automatic analysis of quantitative NMR data of pharmaceutical compound libraries.
Liu, Xuejun; Kolpak, Michael X; Wu, Jiejun; Leo, Gregory C
2012-08-07
In drug discovery, chemical library compounds are usually dissolved in DMSO at a certain concentration and then distributed to biologists for target screening. Quantitative (1)H NMR (qNMR) is the preferred method for the determination of the actual concentrations of compounds because the relative single proton peak areas of two chemical species represent the relative molar concentrations of the two compounds, that is, the compound of interest and a calibrant. Thus, an analyte concentration can be determined using a calibration compound at a known concentration. One particularly time-consuming step in the qNMR analysis of compound libraries is the manual integration of peaks. This report presents an automated method for performing this task without prior knowledge of compound structures, using an external calibration spectrum. The script for automated integration is fast and adaptable to large-scale data sets, eliminating the need for manual integration in ~80% of the cases.
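The underlying qNMR relation is simple: single-proton peak areas (area divided by proton count) of analyte and calibrant are in the ratio of their molar concentrations. A small sketch with invented numbers, not the reported integration script:

```python
# Analyte concentration from single-proton-normalized peak areas; the
# example values are hypothetical.
def qnmr_concentration(area_analyte, n_protons_analyte,
                       area_cal, n_protons_cal, conc_cal_mM):
    # Normalized (single-proton) areas are proportional to molar concentration.
    return conc_cal_mM * (area_analyte / n_protons_analyte) / (area_cal / n_protons_cal)

# e.g. a 3-proton analyte peak vs. a 9-proton calibrant peak at 5 mM
print(qnmr_concentration(12.0, 3, 27.0, 9, 5.0))   # ~6.67 mM
```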
User's manual for computer program BASEPLOT
Sanders, Curtis L.
2002-01-01
The checking and reviewing of daily records of streamflow within the U.S. Geological Survey is traditionally accomplished by hand-plotting and mentally collating tables of data. The process is time consuming, difficult to standardize, and subject to errors in computation, data entry, and logic. In addition, the presentation of flow data on the internet requires more timely and accurate computation of daily flow records. BASEPLOT was developed for checking and review of primary streamflow records within the U.S. Geological Survey. Use of BASEPLOT enables users to (1) provide efficiencies during the record checking and review process, (2) improve quality control, (3) achieve uniformity of checking and review techniques of simple stage-discharge relations, and (4) provide a tool for teaching streamflow computation techniques. The BASEPLOT program produces tables of quality control checks and produces plots of rating curves and discharge measurements; variable shift (V-shift) diagrams; and V-shifts converted to stage-discharge plots, using data stored in the U.S. Geological Survey Automatic Data Processing System database. In addition, the program plots unit-value hydrographs that show unit-value stages, shifts, and datum corrections; input shifts, datum corrections, and effective dates; discharge measurements; effective dates for rating tables; and numeric quality control checks. Checklist/tutorial forms are provided for reviewers to ensure completeness of review and standardize the review process. The program was written for the U.S. Geological Survey SUN computer using the Statistical Analysis System (SAS) software produced by SAS Institute, Incorporated.
Sensing roughness and polish direction
NASA Astrophysics Data System (ADS)
Jakobsen, M. L.; Olesen, A. S.; Larsen, H. E.; Stubager, J.; Hanson, S. G.; Pedersen, T. F.; Pedersen, H. C.
2016-04-01
As a part of the work carried out on a project supported by the Danish council for technology and innovation, we have investigated the option of smoothing standard CNC machined surfaces. In the process of constructing optical prototypes involving custom-designed optics, the development cost and time consumption can account for a relatively large share of a research budget. Machining the optical surfaces directly is expensive and time consuming. Alternatively, a more standardized and cheaper machining method can be used, but then the object needs to be manually polished. During the polishing process the operator needs information about the RMS-value of the surface roughness and the current direction of the scratches introduced by the polishing process. The RMS-value indicates to the operator how far the surface is from the final finish, and the scratch orientation is often specified by the customer in order to avoid complications during the casting process. In this work we present a method for measuring the RMS-values of the surface roughness while simultaneously determining the polishing direction. We are mainly interested in RMS-values in the range from 0 - 100 nm, which corresponds to the finish categories of A1, A2 and A3. Based on simple intensity measurements we estimate the RMS-value of the surface roughness, and by using a sectioned annular photo-detector to collect the scattered light we can determine the direction of polishing and distinguish light scattered from random structures from light scattered from scratches.
A sustainable development of a city electrical grid via a non-contractual Demand-Side Management
NASA Astrophysics Data System (ADS)
Samoylenko, Vladislav O.; Pazderin, Andrew V.
2017-06-01
The increasing energy consumption of large cities, together with the extremely high density of urban electrical loads, creates the need for alternative approaches to city grid development. The ongoing introduction of energy accounting tariffs with rates differentiated by market conditions and changing over short time horizons makes it possible to use them as the financial incentive basis of Demand-Side Management (DSM). Modern high-technology energy metering and accounting systems, with a large number of functions and consumer feedback, are well suited as a means of DSM. Existing Smart Metering (SM) billing systems usually provide general information about the consumption curve, bills and comparative data, but not advanced statistics on the correspondence between financial and electrical parameters. Also, consumer feedback is usually not fully used. Efforts to combine the market principle, Smart Metering and consumer feedback for active non-contractual load control are therefore essential. The paper presents a rating-based, multi-purpose system of statistics and algorithms for estimating DSM efficiency that is useful to both consumers and energy companies. The estimation is performed by SM data processing systems. The system is aimed at load peak shaving and load curve smoothing and is focused primarily on retail market support. It contributes to energy efficiency and to improvement of the distribution process through manual management or automated interaction with Smart Appliances.
Seghier, Mohamed L; Kolanko, Magdalena A; Leff, Alexander P; Jäger, Hans R; Gregoire, Simone M; Werring, David J
2011-03-23
Cerebral microbleeds, visible on gradient-recalled echo (GRE) T2* MRI, have generated increasing interest as an imaging marker of small vessel diseases, with relevance for intracerebral bleeding risk or brain dysfunction. Manual rating methods have limited reliability and are time-consuming. We developed a new method for microbleed detection using automated segmentation (MIDAS) and compared it with a validated visual rating system. In thirty consecutive stroke service patients, standard GRE T2* images were acquired and manually rated for microbleeds by a trained observer. After spatially normalizing each patient's GRE T2* images into a standard stereotaxic space, the automated microbleed detection algorithm (MIDAS) identified cerebral microbleeds by explicitly incorporating an "extra" tissue class for abnormal voxels within a unified segmentation-normalization model. The agreement between manual and automated methods was assessed using the intraclass correlation coefficient (ICC) and Kappa statistic. We found that MIDAS had generally moderate to good agreement with the manual reference method for the presence of lobar microbleeds (Kappa = 0.43, improved to 0.65 after manual exclusion of obvious artefacts). Agreement for the number of microbleeds was very good for lobar regions: (ICC = 0.71, improved to ICC = 0.87). MIDAS successfully detected all patients with multiple (≥2) lobar microbleeds. MIDAS can identify microbleeds on standard MR datasets, and with an additional rapid editing step shows good agreement with a validated visual rating system. MIDAS may be useful in screening for multiple lobar microbleeds.
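Agreement statistics of the kind reported, kappa for microbleed presence and an intraclass correlation for counts, can be computed as in the sketch below; scikit-learn provides Cohen's kappa, and a plain Pearson correlation is used here only as a simple stand-in for the ICC. The rating arrays are hypothetical.

```python
# Agreement between manual and automated microbleed ratings (toy data).
import numpy as np
from sklearn.metrics import cohen_kappa_score

manual_presence = np.array([1, 0, 1, 1, 0, 0, 1, 0])    # visual rating
midas_presence  = np.array([1, 0, 1, 0, 0, 0, 1, 1])    # automated rating
print("kappa:", cohen_kappa_score(manual_presence, midas_presence))

manual_counts = np.array([3, 0, 5, 2, 0, 0, 4, 1])
midas_counts  = np.array([2, 0, 5, 1, 0, 1, 4, 2])
# Pearson correlation as a simple stand-in for the ICC reported in the paper.
print("r:", np.corrcoef(manual_counts, midas_counts)[0, 1])
```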
Genetics Home Reference: nonsyndromic congenital nail disorder 10
... Nails MalaCards: nail disorder, nonsyndromic congenital, 10 Merck Manual Consumer Version: Deformities, Dystrophies, and Discoloration of the Nails Orphanet: Autosomal recessive nail dysplasia Patient Support ...
ERIC Educational Resources Information Center
Forgue, Raymond E.; And Others
Five topic areas in consumer education are provided in this manual developed for use by financial counselors in conducting ten one-hour educational sessions for adult groups. The session titles are the following: (1) The Internal Money World--The Individual, The Family, and Money; (2) Effective Money Management; (3) Effective Credit Management;…
Methods for Managing Stress in the Workplace: Coping Effectively on the Job.
ERIC Educational Resources Information Center
Casey, Anita; And Others
This manual is intended for use by persons with psychiatric disabilities who are employed in the community but need help in coping with daily stressors at work. It is designed to be taught to mental health consumers by mental health consumers. Each session includes a review of the previous session; objectives; a list of materials needed; and…
Seed sprout production: Consumables and a foundation for higher plant growth in space
NASA Technical Reports Server (NTRS)
Day, Michelle; Thomas, Terri; Johnson, Steve; Luttges, Marvin
1990-01-01
Seed sprouts can be produced as a source of fresh vegetable materials and as higher plant seedlings in space. Sprout production was undertaken to evaluate the mass accumulations possible, the technologies needed, and the reliability of the overall process. Baseline experiments corroborated the utility of sprout production protocols for a variety of seed types. The automated delivery of saturated humidity effectively supplants labor-intensive manual soaking techniques. Automated humidification also lends itself to modest centrifugal sprout growth environments. A small amount of ultraviolet radiation effectively suppressed bacterial and fungal contamination, and the sprouts were suitable for consumption.
Michel, J.; Hsiao, A.; Fenick, A.
2014-01-01
Summary Background Transitioning between Electronic Medical Records (EMR) can result in patient data being stranded in legacy systems with subsequent failure to provide appropriate patient care. Manual chart abstraction is labor-intensive, error-prone, and difficult to institute for immunizations on a systems level in a timely fashion. Objectives We sought to transfer immunization data from two of our health system’s soon-to-be-replaced EMRs to the future EMR using a single process instead of separate interfaces for each facility. Methods We used scripted data entry, a process where a computer automates manual data entry, to insert data into the future EMR. Using the Centers for Disease Control and Prevention’s CVX immunization codes we developed a bridge between immunization identifiers within our system’s EMRs. We performed a two-step process evaluation of the data transfer using automated data comparison and manual chart review. Results We completed the data migration from two facilities in 16.8 hours with no data loss or corruption. We successfully populated the future EMR with 99.16% of our legacy immunization data – 500,906 records – just prior to our EMR transition date. A subset of immunizations, first recognized during clinical care, had not originally been extracted from the legacy systems. Once identified, this data – 1,695 records – was migrated using the same process with minimal additional effort. Conclusions Scripted data entry for immunizations is more accurate than published estimates for manual data entry and we completed our data transfer in 1.2% of the total time we predicted for manual data entry. Performing this process before EMR conversion helped identify obstacles to data migration. Drawing upon this work, we will reuse this process for other healthcare facilities in our health system as they transition to the future EMR. PMID:24734139
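The CVX bridge described above boils down to mapping each legacy system's local immunization identifiers onto the shared CVX code set before loading the target EMR; a toy sketch with invented identifiers:

```python
# Toy CVX bridging tables; all local identifiers below are made up for
# illustration and do not come from the study's EMRs.
LEGACY_A_TO_CVX = {"FLU-STD": "141", "MMR-II": "03"}
LEGACY_B_TO_CVX = {"INFLUENZA_SEASONAL": "141", "MMR": "03"}

def to_cvx(source_system, local_code):
    table = {"A": LEGACY_A_TO_CVX, "B": LEGACY_B_TO_CVX}[source_system]
    return table[local_code]          # a KeyError flags an unmapped vaccine

# Both legacy records resolve to the same shared CVX code.
assert to_cvx("A", "FLU-STD") == to_cvx("B", "INFLUENZA_SEASONAL")
```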
Guo, Haihong; Li, Jiao; Dai, Tao
2015-01-01
This study built a classification schema of consumer health questions consisting of 48 quaternary categories and 35 annotation rules. Using this schema, we manually classified 2,000 questions randomly selected from nearly 100 thousand hypertension-related messages posted by consumers on a Chinese health website to analyze the information needs of health consumers. The results showed that questions in the categories of treatment, diagnosis, healthy lifestyle, management, epidemiology, and choice of health provider accounted for 48.1%, 23.8%, 11.9%, 5.2%, 9.0%, and 1.9% of questions, respectively. The comparison of the questions asked by consumers and physicians showed that their health information needs were significantly different (P<0.0001).
Wallner, Jürgen; Hochegger, Kerstin; Chen, Xiaojun; Mischak, Irene; Reinbacher, Knut; Pau, Mauro; Zrnc, Tomislav; Schwenzer-Zimmerer, Katja; Zemann, Wolfgang; Schmalstieg, Dieter
2018-01-01
Introduction Computer-assisted technologies based on algorithmic software segmentation are a topic of increasing interest in complex surgical cases. However, due to functional instability, time-consuming software processes, personnel resources or license-based financial costs, many segmentation processes are outsourced from clinical centers to third parties and industry. Therefore, the aim of this trial was to assess the practical feasibility of an easily available, functionally stable and license-free segmentation approach for use in clinical practice. Material and methods In this retrospective, randomized, controlled trial the accuracy and accordance of the open-source-based segmentation algorithm GrowCut were assessed through comparison to the manually generated ground truth of the same anatomy using 10 CT lower jaw data-sets from the clinical routine. Assessment parameters were the segmentation time, the volume, the voxel number, the Dice Score and the Hausdorff distance. Results Overall semi-automatic GrowCut segmentation times were about one minute. Mean Dice Score values of over 85% and Hausdorff distances below 33.5 voxels were achieved between the algorithmic GrowCut-based segmentations and the manually generated ground truth schemes. Statistical differences between the assessment parameters were not significant (p<0.05) and correlation coefficients were close to one (r > 0.94) for all comparisons made between the two groups. Discussion Completely functionally stable and time-saving segmentations with high accuracy and high positive correlation could be performed with the presented interactive open-source-based approach. In the cranio-maxillofacial complex, the method could represent an algorithmic alternative to image-based segmentation in clinical practice, e.g. for surgical treatment planning or visualization of postoperative results, and offers several advantages. Due to its open-source basis, the method could be further developed by other groups or specialists. Systematic comparisons to other segmentation approaches or with larger amounts of data are areas of future work. PMID:29746490
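The two comparison metrics used in that evaluation, the Dice score and the Hausdorff distance, can be computed for binary label volumes as in the generic sketch below (SciPy's directed Hausdorff operates on the voxel coordinate sets).

```python
# Dice score and symmetric Hausdorff distance (in voxels) for two binary
# label volumes; a generic sketch, not the study's evaluation code.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_score(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff_voxels(a, b):
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])
```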
Chaffin, Mark; Bard, David; Bigfoot, Dolores Subia; Maher, Erin J
2012-08-01
In a statewide implementation, the manualized SafeCare home-based model was effective in reducing child welfare recidivism and producing high client satisfaction. Concerns about the effectiveness and acceptability of structured, manualized models with American Indians have been raised in the literature, but have rarely been directly tested. This study tests recidivism reduction equivalency and acceptability among American Indian parents. A subpopulation of 354 American Indian parents was drawn from a larger trial that compared services with versus without modules of the SafeCare model. Outcomes were 6-year recidivism, pre/post/follow-up measures of depression and child abuse potential, and posttreatment consumer ratings of working alliance, service satisfaction, and cultural competency. Recidivism reduction among American Indian parents was found to be equivalent for cases falling within customary SafeCare inclusion criteria. When extended to cases outside customary inclusion boundaries, there was no apparent recidivism advantage or disadvantage. Contrary to concerns, SafeCare had higher consumer ratings of cultural competency, working alliance, service quality, and service benefit. Findings support using SafeCare with American Indians parents who meet customary SafeCare inclusion criteria. Findings do not support concerns in the literature that a manualized, structured, evidence-based model might be less effective or culturally unacceptable for American Indians.
Automatic Processing of Current Affairs Queries
ERIC Educational Resources Information Center
Salton, G.
1973-01-01
The SMART system is used for the analysis, search and retrieval of news stories appearing in "Time" magazine. A comparison is made between the automatic text processing methods incorporated into the SMART system and a manual search using the classified index to "Time." (14 references) (Author)
Genetics Home Reference: Fabry disease
... Sheet (PDF) Disease InfoSearch: Fabry Disease Emory University School of Medicine (PDF) International Center for Fabry Disease, Mount Sinai School of Medicine MalaCards: fabry disease Merck Manual Consumer ...
Nijs, Jo; Van Houdenhove, Boudewijn
2009-02-01
During the past decade, scientific research has provided new insight into the development from an acute, localised musculoskeletal disorder towards chronic widespread pain/fibromyalgia (FM). Chronic widespread pain/FM is characterised by sensitisation of central pain pathways. An in-depth review of basic and clinical research was performed to design a theoretical framework for manual therapy in these patients. It is explained that manual therapy might be able to influence the process of chronicity in three different ways. (I) In order to prevent chronicity in (sub)acute musculoskeletal disorders, it seems crucial to limit the time course of afferent stimulation of peripheral nociceptors. (II) In the case of chronic widespread pain and established sensitisation of central pain pathways, relatively minor injuries/trauma at any location are likely to sustain the process of central sensitisation and should be treated appropriately with manual therapy accounting for the decreased sensory threshold. Inappropriate pain beliefs should be addressed and exercise interventions should account for the process of central sensitisation. (III) However, manual therapists ignoring the processes involved in the development and maintenance of chronic widespread pain/FM may cause more harm than benefit to the patient by triggering or sustaining central sensitisation.
Mining Genotype-Phenotype Associations from Public Knowledge Sources via Semantic Web Querying
Kiefer, Richard C.; Freimuth, Robert R.; Chute, Christopher G; Pathak, Jyotishman
Gene Wiki Plus (GeneWiki+) and the Online Mendelian Inheritance in Man (OMIM) are publicly available resources for sharing information about disease-gene and gene-SNP associations in humans. While immensely useful to the scientific community, both resources are manually curated, thereby making the data entry and publication process time-consuming, and to some degree, error-prone. To this end, this study investigates Semantic Web technologies to validate existing and potentially discover new genotype-phenotype associations in GWP and OMIM. In particular, we demonstrate the applicability of SPARQL queries for identifying associations not explicitly stated for commonly occurring chronic diseases in GWP and OMIM, and report our preliminary findings for coverage, completeness, and validity of the associations. Our results highlight the benefits of Semantic Web querying technology to validate existing disease-gene associations as well as identify novel associations although further evaluation and analysis is required before such information can be applied and used effectively. PMID:24303249
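To make the querying approach concrete, the sketch below issues a toy SPARQL query through SPARQLWrapper. The endpoint URL, prefix and predicates are placeholders invented for illustration; they are not the actual GeneWiki+/OMIM vocabulary used in the study.

```python
# Toy SPARQL retrieval of gene-disease pairs; endpoint and vocabulary are
# placeholders, not the resources queried in the study.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://example.org/sparql")          # hypothetical endpoint
sparql.setQuery("""
    PREFIX ex: <http://example.org/ns#>
    SELECT ?gene ?disease
    WHERE {
        ?gene    ex:associatedWith ?disease .                # placeholder predicate
        ?disease ex:label          "type 2 diabetes mellitus" .
    }
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["gene"]["value"])
```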
Automated protein NMR structure determination using wavelet de-noised NOESY spectra.
Dancea, Felician; Günther, Ulrich
2005-11-01
A major time-consuming step of protein NMR structure determination is the generation of reliable NOESY cross peak lists which usually requires a significant amount of manual interaction. Here we present a new algorithm for automated peak picking involving wavelet de-noised NOESY spectra in a process where the identification of peaks is coupled to automated structure determination. The core of this method is the generation of incremental peak lists by applying different wavelet de-noising procedures which yield peak lists of a different noise content. In combination with additional filters which probe the consistency of the peak lists, good convergence of the NOESY-based automated structure determination could be achieved. These algorithms were implemented in the context of the ARIA software for automated NOE assignment and structure determination and were validated for a polysulfide-sulfur transferase protein of known structure. The procedures presented here should be commonly applicable for efficient protein NMR structure determination and automated NMR peak picking.
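A generic wavelet de-noising step of the kind described can be sketched with PyWavelets; the wavelet choice and threshold rule below are common defaults, not necessarily those used in the ARIA extension.

```python
# Generic soft-threshold wavelet de-noising of a 1-D trace with PyWavelets.
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Noise level estimated from the finest-scale detail coefficients,
    # then applied as a universal threshold.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)
```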
Shin, Y S; Yeom, Y K; Hwang, H
1993-02-01
This paper describes the development of a claim review and payment model utilizing diagnosis related groups (DRGs) for the fee-for-service-based payment system of the Korean health insurance. The present review process, which examines all claims manually on a case-by-case basis, has been considered to be inefficient, costly, and time-consuming. Differences in case mix among hospitals are controlled in the proposed model using the Korean DRGs. They were developed by modifying the US-DRG system. An empirical test of the model indicated that it can enhance the efficiency as well as the credibility and objectivity of the claim review. Furthermore, it is expected that it can contribute effectively to medical cost containment and to optimal practice patterns of hospitals by establishing a useful mechanism for monitoring the performance of hospitals. However, the performance of this model needs to be upgraded by refining the Korean DRGs, which play a key role in the model.
An automatic and effective parameter optimization method for model tuning
NASA Astrophysics Data System (ADS)
Zhang, T.; Li, L.; Lin, Y.; Xue, W.; Xie, F.; Xu, H.; Huang, X.
2015-05-01
Physical parameterizations in General Circulation Models (GCMs), having various uncertain parameters, greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to comprehensive objective evaluation metrics. In contrast to traditional optimization methods, two extra steps, one determining parameter sensitivity and the other choosing the optimum initial values of the sensitive parameters, are introduced before the downhill simplex method to reduce the computational cost and improve the tuning performance. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding the unavoidable comprehensive parameter tuning during the model development stage.
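The downhill simplex step corresponds to the Nelder-Mead method available in SciPy; the sketch below is a generic illustration in which the objective function is a stand-in for the paper's comprehensive evaluation metrics and the starting point stands in for the optimum initial values chosen in step two.

```python
# Generic downhill simplex (Nelder-Mead) tuning of a few sensitive parameters.
import numpy as np
from scipy.optimize import minimize

def model_skill(x):
    # Placeholder objective: lower is better (e.g. an aggregated model error);
    # the "true" parameter values here are invented for the demo.
    return np.sum((x - np.array([0.3, 1.5, 0.8])) ** 2)

x0 = np.array([0.5, 1.0, 1.0])        # optimum initial values from step two
result = minimize(model_skill, x0, method="Nelder-Mead",
                  options={"xatol": 1e-3, "fatol": 1e-3})
print(result.x)
```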
NASA Astrophysics Data System (ADS)
Liu, Xi; Zhou, Mei; Qiu, Song; Sun, Li; Liu, Hongying; Li, Qingli; Wang, Yiting
2017-12-01
Red blood cell counting, as a routine examination, plays an important role in medical diagnoses. Although automated hematology analyzers are widely used, manual microscopic examination by a hematologist or pathologist is still unavoidable, which is time-consuming and error-prone. This paper proposes a fully automatic red blood cell counting method which is based on microscopic hyperspectral imaging of blood smears and combines spatial and spectral information to achieve high precision. The acquired hyperspectral image data of the blood smear in the visible and near-infrared spectral range are firstly preprocessed, and then a quadratic blind linear unmixing algorithm is used to get endmember abundance images. Based on mathematical morphological operation and an adaptive Otsu's method, a binarization process is performed on the abundance images. Finally, the connected component labeling algorithm with magnification-based parameter setting is applied to automatically select the binary images of red blood cell cytoplasm. Experimental results show that the proposed method can perform well and has potential for clinical applications.
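The thresholding and counting tail of such a pipeline can be sketched on a single abundance image with scikit-image; the area limits below are placeholders standing in for the magnification-based parameter setting mentioned above.

```python
# Otsu binarization plus connected component counting on one abundance image;
# min_area/max_area are placeholder, magnification-dependent settings.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops
from skimage.morphology import remove_small_objects

def count_cells(abundance_image, min_area=50, max_area=5000):
    binary = abundance_image > threshold_otsu(abundance_image)
    binary = remove_small_objects(binary, min_size=min_area)
    labels = label(binary)                      # connected component labeling
    regions = [r for r in regionprops(labels) if r.area <= max_area]
    return len(regions)
```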
NASA Technical Reports Server (NTRS)
Ouandji, Cynthia; Wang, Jonathan; Arismendi, Dillon; Lee, Alonzo; Blaich, Justin; Gentry, Diana
2017-01-01
At its core, the field of microbial experimental evolution seeks to elucidate the natural laws governing the history of microbial life by understanding its underlying driving mechanisms. However, observing evolution in nature is complex, as environmental conditions are difficult to control. Laboratory-based experiments for observing population evolution provide more control, but manually culturing and studying multiple generations of microorganisms can be time consuming, labor intensive, and prone to inconsistency. We have constructed a prototype, closed system device that automates the process of directed evolution experiments in microorganisms. It is compatible with any liquid microbial culture, including polycultures and field samples, provides flow control and adjustable agitation, continuously monitors optical density (OD), and can dynamically control environmental pressures such as ultraviolet-C (UV-C) radiation and temperature. Here, the results of the prototype are compared to iterative exposure and survival assays conducted using a traditional hood, UV-C lamp, and shutter system.
Held, Christian; Wenzel, Jens; Webel, Rike; Marschall, Manfred; Lang, Roland; Palmisano, Ralf; Wittenberg, Thomas
2011-01-01
In order to improve reproducibility and objectivity of fluorescence microscopy-based experiments and to enable the evaluation of large datasets, flexible segmentation methods are required which are able to adapt to different stainings and cell types. This adaptation is usually achieved by manually adjusting the segmentation methods' parameters, which is time consuming and challenging for biologists with no knowledge of image processing. To avoid this, parameters of the presented methods automatically adapt to user-generated ground truth to determine the best method and the optimal parameter setup. These settings can then be used for segmentation of the remaining images. As robust segmentation methods form the core of such a system, the currently used watershed-transform-based segmentation routine is replaced by a fast-marching level-set-based segmentation routine which incorporates knowledge of the cell nuclei. Our evaluations reveal that incorporation of multimodal information improves segmentation quality for the presented fluorescent datasets.
Automated detection of lung nodules with three-dimensional convolutional neural networks
NASA Astrophysics Data System (ADS)
Pérez, Gustavo; Arbeláez, Pablo
2017-11-01
Lung cancer is the cancer type with the highest mortality rate worldwide. It has been shown that early detection with computed tomography (CT) scans can reduce deaths caused by this disease. Manual detection of cancer nodules is costly and time-consuming. We present a general framework for the detection of nodules in lung CT images. Our method consists of the pre-processing of a patient's CT with filtering and lung extraction from the entire volume using a previously calculated mask for each patient. From the extracted lungs, we perform a candidate generation stage using morphological operations, followed by the training of a three-dimensional convolutional neural network for feature representation and classification of extracted candidates for false positive reduction. We perform experiments on the publicly available LIDC-IDRI dataset. Our candidate extraction approach is effective at producing precise candidates, with a recall of 99.6%. In addition, the false positive reduction stage successfully classifies candidates and increases precision by a factor of 7.000.
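As an illustration of the classifier type described (not the authors' architecture), a deliberately small 3-D CNN that scores fixed-size candidate cubes might look as follows; the cube size and layer widths are arbitrary choices for the sketch.

```python
# Minimal 3-D CNN candidate classifier: input is a cube cropped around each
# candidate, output is a nodule / non-nodule score pair.
import torch
import torch.nn as nn

class CandidateNet3D(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8 * 8, 2)   # for 32^3 input cubes

    def forward(self, x):                 # x: (batch, 1, 32, 32, 32)
        x = self.features(x)
        return self.classifier(x.flatten(1))

scores = CandidateNet3D()(torch.randn(4, 1, 32, 32, 32))   # -> (4, 2) logits
```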
Peralta, Emmanuel; Vargas, Héctor; Hermosilla, Gabriel
2018-01-01
Proximity sensors are broadly used in mobile robots for obstacle detection. The traditional calibration process for this kind of sensor can be a time-consuming task because it is usually done by identification in a manual and repetitive way. The resulting obstacle detection models are usually nonlinear functions that can differ for each proximity sensor attached to the robot. In addition, the model is highly dependent on the type of sensor (e.g., ultrasonic or infrared), on changes in light intensity, and on the properties of the obstacle such as shape, colour, and surface texture, among others. That is why in some situations it could be useful to gather all the measurements provided by different kinds of sensors in order to build a single model that estimates the distances to the obstacles around the robot. This paper presents a novel approach to obtaining an obstacle detection model based on the fusion of sensor data and automatic calibration using artificial neural networks. PMID:29495338
Computer aided manual validation of mass spectrometry-based proteomic data.
Curran, Timothy G; Bryson, Bryan D; Reigelhaupt, Michael; Johnson, Hannah; White, Forest M
2013-06-15
Advances in mass spectrometry-based proteomic technologies have increased the speed of analysis and the depth provided by a single analysis. Computational tools to evaluate the accuracy of peptide identifications from these high-throughput analyses have not kept pace with technological advances; currently the most common quality evaluation methods are based on statistical analysis of the likelihood of false positive identifications in large-scale data sets. While helpful, these calculations do not consider the accuracy of each identification, thus creating a precarious situation for biologists relying on the data to inform experimental design. Manual validation is the gold standard approach to confirm accuracy of database identifications, but is extremely time-intensive. To palliate the increasing time required to manually validate large proteomic datasets, we provide computer aided manual validation software (CAMV) to expedite the process. Relevant spectra are collected, catalogued, and pre-labeled, allowing users to efficiently judge the quality of each identification and summarize applicable quantitative information. CAMV significantly reduces the burden associated with manual validation and will hopefully encourage broader adoption of manual validation in mass spectrometry-based proteomics. Copyright © 2013 Elsevier Inc. All rights reserved.
Lee, Jeongjin; Kim, Kyoung Won; Kim, So Yeon; Kim, Bohyoung; Lee, So Jung; Kim, Hyoung Jung; Lee, Jong Seok; Lee, Moon Gyu; Song, Gi-Won; Hwang, Shin; Lee, Sung-Gyu
2014-09-01
To assess the feasibility of semiautomated MR volumetry using gadoxetic acid-enhanced MRI at the hepatobiliary phase compared with manual CT volumetry. Forty potential live liver donor candidates who underwent MR and CT on the same day were included in our study. Semiautomated MR volumetry was performed using gadoxetic acid-enhanced MRI at the hepatobiliary phase. We performed quadratic MR image division to correct for bias field inhomogeneity. With manual CT volumetry as the reference standard, we calculated the average volume measurement error of the semiautomated MR volumetry. We also calculated the mean number and duration of manual edits, the edited volume, and the total processing time. The average volume measurement error of the semiautomated MR volumetry was 2.35% ± 1.22%. The average values of the number of edits, operation time of manual editing, edited volume, and total processing time for the semiautomated MR volumetry were 1.9 ± 0.6, 8.1 ± 2.7 s, 12.4 ± 8.8 mL, and 11.7 ± 2.9 s, respectively. Semiautomated liver MR volumetry using hepatobiliary phase gadoxetic acid-enhanced MRI with quadratic MR image division is a reliable, easy, and fast tool to measure liver volume in potential living liver donors. Copyright © 2013 Wiley Periodicals, Inc.
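Editor's note: the abstract does not detail the "quadratic MR image division". One plausible reading, sketched below under that assumption, fits a second-order polynomial surface to a hepatobiliary-phase slice and divides the image by it to flatten low-frequency intensity bias; the function name and slice-wise formulation are hypothetical.

    import numpy as np

    def quadratic_bias_correct(slice2d, mask):
        """Fit I(x, y) ~ a + bx + cy + dx^2 + exy + fy^2 inside the mask and divide by the fit."""
        ny, nx = slice2d.shape
        y, x = np.mgrid[0:ny, 0:nx].astype(float)
        A = np.stack([np.ones_like(x), x, y, x ** 2, x * y, y ** 2], axis=-1)
        coeffs, *_ = np.linalg.lstsq(A[mask], slice2d[mask], rcond=None)
        bias = A @ coeffs
        bias = np.clip(bias, np.finfo(float).eps, None)  # guard against division by zero
        return slice2d / bias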
An analysis of the process and results of manual geocode correction
McDonald, Yolanda J.; Schwind, Michael; Goldberg, Daniel W.; Lampley, Amanda; Wheeler, Cosette M.
2018-01-01
Geocoding is the science and process of assigning geographical coordinates (i.e. latitude, longitude) to a postal address. The quality of the geocode can vary dramatically depending on several variables, including incorrect input address data, missing address components, and spelling mistakes. A dataset with a considerable number of geocoding inaccuracies can potentially result in an imprecise analysis and invalid conclusions. There has been little quantitative analysis of the amount of effort (i.e. time) to perform geocoding correction, and how such correction could improve geocode quality type. This study used a low-cost and easy to implement method to improve geocode quality type of an input database (i.e. addresses to be matched) through the processes of manual geocode intervention, and it assessed the amount of effort to manually correct inaccurate geocodes, reported the resulting match rate improvement between the original and the corrected geocodes, and documented the corresponding spatial shift by geocode quality type resulting from the corrections. Findings demonstrated that manual intervention of geocoding resulted in a 90% improvement of geocode quality type, took 42 hours to process, and the spatial shift ranged from 0.02 to 151,368 m. This study provides evidence to inform research teams considering the application of manual geocoding intervention that it is a low-cost and relatively easy process to execute. PMID:28555477
Lymph node segmentation on CT images by a shape model guided deformable surface method
NASA Astrophysics Data System (ADS)
Maleike, Daniel; Fabel, Michael; Tetzlaff, Ralf; von Tengg-Kobligk, Hendrik; Heimann, Tobias; Meinzer, Hans-Peter; Wolf, Ivo
2008-03-01
With many tumor entities, quantitative assessment of lymph node growth over time is important to make therapy choices or to evaluate new therapies. The clinical standard is to document diameters on transversal slices, which is not the best measure for a volume. We present a new algorithm to segment (metastatic) lymph nodes and evaluate the algorithm with 29 lymph nodes in clinical CT images. The algorithm is based on a deformable surface search, which uses statistical shape models to restrict free deformation. To model lymph nodes, we construct an ellipsoid shape model, which strives for a surface with strong gradients and user-defined gray values. The algorithm is integrated into an application, which also allows interactive correction of the segmentation results. The evaluation shows that the algorithm gives good results in the majority of cases and is comparable to time-consuming manual segmentation. The median volume error was 10.1% of the reference volume before and 6.1% after manual correction. Integrated into an application, it is possible to perform lymph node volumetry for a whole patient within the 10 to 15 minutes time limit imposed by clinical routine.
Thomas, Marianna S; Newman, David; Leinhard, Olof Dahlqvist; Kasmai, Bahman; Greenwood, Richard; Malcolm, Paul N; Karlsson, Anette; Rosander, Johannes; Borga, Magnus; Toms, Andoni P
2014-09-01
To measure the test-retest reproducibility of an automated system for quantifying whole body and compartmental muscle volumes using wide bore 3 T MRI. Thirty volunteers stratified by body mass index underwent whole body 3 T MRI, two-point Dixon sequences, on two separate occasions. Water-fat separation was performed, with automated segmentation of whole body, torso, upper and lower leg volumes, and manually segmented lower leg muscle volumes. Mean automated total body muscle volume was 19.32 L (SD 9.1) and 19.28 L (SD 9.12) for the first and second acquisitions (intraclass correlation coefficient (ICC) = 1.0, 95% limits of agreement -0.32 to 0.2 L). ICCs for all automated test-retest muscle volumes were almost perfect (0.99-1.0) with 95% limits of agreement 1.8-6.6% of mean volume. Automated muscle volume measurements correlate closely with manual quantification (right lower leg: manual 1.68 L (2SD 0.6) compared to automated 1.64 L (2SD 0.6); left lower leg: manual 1.69 L (2SD 0.64) compared to automated 1.63 L (SD 0.61); correlation coefficients for automated and manual segmentation were 0.94-0.96). Fully automated whole body and compartmental muscle volume quantification can be achieved rapidly on a 3 T wide bore system with very low margins of error, excellent test-retest reliability and excellent correlation to manual segmentation in the lower leg. Sarcopaenia is an important reversible complication of a number of diseases. Manual quantification of muscle volume is time-consuming and expensive. Muscles can be imaged using in- and out-of-phase MRI. Automated atlas-based segmentation can identify muscle groups. Automated muscle volume segmentation is reproducible and can replace manual measurements.
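Editor's note: as an aside on the test-retest statistics quoted above, the short sketch below computes a Pearson correlation and Bland-Altman 95% limits of agreement for paired volume measurements; the example arrays are placeholders, not the study data.

    import numpy as np

    def test_retest_agreement(vol_first, vol_second):
        """Correlation and Bland-Altman 95% limits of agreement for paired measurements."""
        v1, v2 = np.asarray(vol_first, float), np.asarray(vol_second, float)
        diff = v2 - v1
        r = np.corrcoef(v1, v2)[0, 1]
        # Limits of agreement: mean difference +/- 1.96 SD of the differences.
        loa = (diff.mean() - 1.96 * diff.std(ddof=1),
               diff.mean() + 1.96 * diff.std(ddof=1))
        return r, loa

    r, loa = test_retest_agreement([19.1, 12.4, 25.0], [19.0, 12.6, 24.8])
    print(r, loa)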
NASA Astrophysics Data System (ADS)
Patel, Ajay; van de Leemput, Sil C.; Prokop, Mathias; van Ginneken, Bram; Manniesing, Rashindra
2017-03-01
Segmentation of anatomical structures is fundamental in the development of computer aided diagnosis systems for cerebral pathologies. Manual annotations are laborious, time consuming and subject to human error and observer variability. Accurate quantification of cerebrospinal fluid (CSF) can be employed as a morphometric measure for diagnosis and patient outcome prediction. However, segmenting CSF in non-contrast CT images is complicated by low soft tissue contrast and image noise. In this paper we propose a state-of-the-art method using a multi-scale three-dimensional (3D) fully convolutional neural network (CNN) to automatically segment all CSF within the cranial cavity. The method is trained on a small dataset comprised of four manually annotated cerebral CT images. Quantitative evaluation of a separate test dataset of four images shows a mean Dice similarity coefficient of 0.87 +/- 0.01 and mean absolute volume difference of 4.77 +/- 2.70 %. The average prediction time was 68 seconds. Our method allows for fast and fully automated 3D segmentation of cerebral CSF in non-contrast CT, and shows promising results despite a limited amount of training data.
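Editor's note: the network above is not reproduced here. The sketch below shows a minimal multi-scale 3D fully convolutional network in PyTorch with two parallel branches (one dilated to enlarge the receptive field) whose features are fused before a voxel-wise two-class prediction; the channel counts, layer layout and choice of PyTorch are assumptions for illustration only.

    import torch
    import torch.nn as nn

    class MultiScale3DCNN(nn.Module):
        """Two parallel 3D conv branches at different effective scales, fused by a 1x1x1 conv."""
        def __init__(self, in_ch=1, n_classes=2):
            super().__init__()
            self.fine = nn.Sequential(
                nn.Conv3d(in_ch, 16, 3, padding=1), nn.ReLU(),
                nn.Conv3d(16, 16, 3, padding=1), nn.ReLU())
            self.coarse = nn.Sequential(  # dilation widens the receptive field
                nn.Conv3d(in_ch, 16, 3, padding=2, dilation=2), nn.ReLU(),
                nn.Conv3d(16, 16, 3, padding=2, dilation=2), nn.ReLU())
            self.head = nn.Conv3d(32, n_classes, 1)

        def forward(self, x):
            return self.head(torch.cat([self.fine(x), self.coarse(x)], dim=1))

    logits = MultiScale3DCNN()(torch.randn(1, 1, 32, 64, 64))  # -> (1, 2, 32, 64, 64)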
van Pelt, Roy; Nguyen, Huy; ter Haar Romeny, Bart; Vilanova, Anna
2012-03-01
Quantitative analysis of vascular blood flow, acquired by phase-contrast MRI, requires accurate segmentation of the vessel lumen. In clinical practice, 2D-cine velocity-encoded slices are inspected, and the lumen is segmented manually. However, segmentation of time-resolved volumetric blood-flow measurements is a tedious and time-consuming task requiring automation. Automated segmentation of large thoracic arteries, based solely on the 3D-cine phase-contrast MRI (PC-MRI) blood-flow data, was done. An active surface model, which is fast and topologically stable, was used. The active surface model requires an initial surface, approximating the desired segmentation. A method to generate this surface was developed based on a voxel-wise temporal maximum of blood-flow velocities. The active surface model balances forces, based on the surface structure and image features derived from the blood-flow data. The segmentation results were validated using volunteer studies, including time-resolved 3D and 2D blood-flow data. The segmented surface was intersected with a velocity-encoded PC-MRI slice, resulting in a cross-sectional contour of the lumen. These cross-sections were compared to reference contours that were manually delineated on high-resolution 2D-cine slices. The automated approach closely approximates the manual blood-flow segmentations, with error distances on the order of the voxel size. The initial surface provides a close approximation of the desired luminal geometry. This improves the convergence time of the active surface and facilitates parametrization. An active surface approach for vessel lumen segmentation was developed, suitable for quantitative analysis of 3D-cine PC-MRI blood-flow data. As opposed to prior thresholding and level-set approaches, the active surface model is topologically stable. A method to generate an initial approximate surface was developed, and various features that influence the segmentation model were evaluated. The active surface segmentation results were shown to closely approximate manual segmentations.
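Editor's note: to illustrate the initial-surface construction described above, the sketch below collapses a 4D velocity-encoded dataset to a voxel-wise temporal maximum of speed and thresholds it into a rough luminal seed mask; the array layout and threshold value are assumptions, not the authors' code.

    import numpy as np

    def temporal_max_speed_mask(vel, threshold):
        """vel: array (t, 3, z, y, x) of velocity components; returns a binary lumen seed mask."""
        speed = np.linalg.norm(vel, axis=1)   # (t, z, y, x) speed magnitude per time frame
        tmax = speed.max(axis=0)              # voxel-wise temporal maximum of speed
        return tmax > threshold               # rough approximation of the lumen

    mask = temporal_max_speed_mask(np.random.rand(20, 3, 16, 64, 64), threshold=0.6)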
Computer measurement of particle sizes in electron microscope images
NASA Technical Reports Server (NTRS)
Hall, E. L.; Thompson, W. B.; Varsi, G.; Gauldin, R.
1976-01-01
Computer image processing techniques have been applied to particle counting and sizing in electron microscope images. Distributions of particle sizes were computed for several images and compared to manually computed distributions. The results of these experiments indicate that automatic particle counting within a reasonable error and computer processing time is feasible. The significance of the results is that the tedious task of manually counting a large number of particles can be eliminated while still providing the scientist with accurate results.
Liukkonen, Mimmi K; Mononen, Mika E; Tanska, Petri; Saarakkala, Simo; Nieminen, Miika T; Korhonen, Rami K
2017-10-01
Manual segmentation of articular cartilage from knee joint 3D magnetic resonance images (MRI) is a time consuming and laborious task. Thus, automatic methods are needed for faster and reproducible segmentations. In the present study, we developed a semi-automatic segmentation method based on radial intensity profiles to generate 3D geometries of knee joint cartilage which were then used in computational biomechanical models of the knee joint. Six healthy volunteers were imaged with a 3T MRI device and their knee cartilages were segmented both manually and semi-automatically. The values of cartilage thicknesses and volumes produced by these two methods were compared. Furthermore, the influences of possible geometrical differences on cartilage stresses and strains in the knee were evaluated with finite element modeling. The semi-automatic segmentation and 3D geometry construction of one knee joint (menisci, femoral and tibial cartilages) was approximately two times faster than with manual segmentation. Differences in cartilage thicknesses, volumes, contact pressures, stresses, and strains between segmentation methods in femoral and tibial cartilage were mostly insignificant (p > 0.05) and random, i.e. there were no systematic differences between the methods. In conclusion, the devised semi-automatic segmentation method is a quick and accurate way to determine cartilage geometries; it may become a valuable tool for biomechanical modeling applications with large patient groups.
Assessing the difficulty and time cost of de-identification in clinical narratives.
Dorr, D A; Phillips, W F; Phansalkar, S; Sims, S A; Hurdle, J F
2006-01-01
To characterize the difficulty confronting investigators in removing protected health information (PHI) from cross-discipline, free-text clinical notes, an important challenge to clinical informatics research as recalibrated by the introduction of the US Health Insurance Portability and Accountability Act (HIPAA) and similar regulations. Randomized selection of clinical narratives from complete admissions written by diverse providers, reviewed using a two-tiered rater system and simple automated regular expression tools. For manual review, two independent reviewers used simple search and replace algorithms and visual scanning to find PHI as defined by HIPAA, followed by an independent second review to detect any missed PHI. Simple automated review was also performed for the "easy" PHI that are number- or date-based. From 262 notes, 2074 PHI, or 7.9 +/- 6.1 per note, were found. The average recall (or sensitivity) was 95.9% while precision was 99.6% for single reviewers. Agreement between individual reviewers was strong (ICC = 0.99), although some asymmetry in errors was seen between reviewers (p = 0.001). The automated technique had better recall (98.5%) but worse precision (88.4%) for its subset of identifiers. Manually de-identifying a note took 87.3 +/- 61 seconds on average. Manual de-identification of free-text notes is tedious and time-consuming, but even simple PHI is difficult to automatically identify with the exactitude required under HIPAA.
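Editor's note: the "simple automated regular expression tools" for number- and date-based PHI are not listed in the abstract; the sketch below is a hedged guess at what such patterns might look like. The patterns are illustrative only and far from HIPAA-complete, which is consistent with the modest precision reported above.

    import re

    PHI_PATTERNS = {
        "date": re.compile(r"\b\d{1,2}[/-]\d{1,2}[/-]\d{2,4}\b"),
        "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "mrn_like": re.compile(r"\b\d{6,10}\b"),  # deliberately broad, hence low precision
    }

    def find_simple_phi(note):
        """Return (category, match) pairs for number- and date-based PHI candidates."""
        return [(kind, m.group()) for kind, pat in PHI_PATTERNS.items()
                for m in pat.finditer(note)]

    print(find_simple_phi("Seen on 03/14/2005, MRN 00123456, call 555-123-4567."))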
Pezo Nikolić, Borka; Lovrić, Daniel; Ljubas Maček, Jana; Rešković Lukšić, Vlatka; Matasić, Richard; Šeparović Hanževački, Jadranka
2017-12-01
Some manufacturers do not provide automated intracardiac electrogram method (IEGM) systems for atrioventricular (AV) and interventricular (VV) delay optimization in cardiac resynchronization therapy (CRT). We aimed to evaluate the accuracy of the manual IEGM method in 48 patients previously implanted with Medtronic Syncra CRT. All patients underwent standard device interrogation followed by CRT optimization by the IEGM method and by echocardiography one month after implantation. The patient mean age was 60.7±11.8 years and there were 33 (68.8%) males. After CRT implantation, the left ventricular ejection fraction increased from 28.0±7.9% to 39.1±11.0% (p<0.001). Optimal aortic flow Velocity Time Integral (aVTI) was obtained when VV was set to 20-50 ms of left ventricular pre-activation. There was a strong correlation between VV values determined by echocardiography and IEGM (R=0.823, p<0.001). We found no significant difference in AV, VV and aVTI values between echocardiography and the IEGM method. However, IEGM was significantly less time-consuming than echocardiography [20 (10-28) vs. 40 (35-60) minutes, p<0.001]. The manual IEGM method may be a good alternative to echocardiography and to the automated IEGM method. The study also emphasizes the need for implementation of automated IEGM systems in as many CRT devices as possible.
Levin, David; Aladl, Usaf; Germano, Guido; Slomka, Piotr
2005-09-01
We exploit consumer graphics hardware to perform real-time processing and visualization of high-resolution, 4D cardiac data. We have implemented real-time, realistic volume rendering, interactive 4D motion segmentation of cardiac data, visualization of multi-modality cardiac data and 3D display of multiple series cardiac MRI. We show that an ATI Radeon 9700 Pro can render a 512x512x128 cardiac Computed Tomography (CT) study at 0.9 to 60 frames per second (fps) depending on rendering parameters and that 4D motion based segmentation can be performed in real-time. We conclude that real-time rendering and processing of cardiac data can be implemented on consumer graphics cards.
Reddy, J M V Raghavendra; Latha, Prasanna; Gowda, Basavana; Manvikar, Varadendra; Vijayalaxmi, D Benal; Ponangi, Kalyana Chakravarthi
2014-01-01
Background: Predictable successful endodontic therapy depends on correct diagnosis, effective cleaning, shaping and disinfection of the root canals and adequate obturation. Irrigation serves as a flush to remove debris, as a tissue solvent and as a lubricant for the canal irregularities; however, these irregularities can restrict the complete debridement of the root canal by mechanical instrumentation. Various types of hand and rotary instruments are used for the preparation of the root canal system to obtain debris-free canals. The purpose of this study was to evaluate the amount of smear layer and debris removal on canal walls following the use of manual Nickel-Titanium (NiTi) files compared with rotary ProTaper NiTi files using a Scanning Electron Microscope in two individual groups. Materials & Methods: A comparative study consisting of 50 subjects randomized into two groups – 25 subjects in Group A (manual) and 25 subjects in Group B (rotary) – was undertaken to investigate and compare the effects of smear layer and debris between manual and rotary NiTi instruments. The Chi-square test was used to assess the significance of smear layer and debris removal in the coronal, middle and apical thirds between Group A and Group B. Results: Neither the Rotary ProTaper NiTi nor the manual NiTi files used in the present study created completely clean root canals. Manual NiTi files produced significantly less smear layer and debris compared to Rotary ProTaper NiTi instruments. Rotary instruments were less time consuming when compared to manual instruments. Instrument separation was not found to be significant in either group. Conclusions: Neither system produced completely clean root canals, and manual NiTi files produced significantly less smear layer and debris compared to Rotary ProTaper instruments. How to cite the article: Reddy JM, Latha P, Gowda B, Manvikar V, Vijayalaxmi DB, Ponangi KC. Smear layer and debris removal using manual Ni-Ti files compared with rotary Protaper Ni-Ti files - An In-Vitro SEM study. J Int Oral Health 2014;6(1):89-94. PMID:24653610
NASA Astrophysics Data System (ADS)
Antonetti, Manuel; Buss, Rahel; Scherrer, Simon; Margreth, Michael; Zappa, Massimiliano
2016-07-01
The identification of landscapes with similar hydrological behaviour is useful for runoff and flood predictions in small ungauged catchments. An established method for landscape classification is based on the concept of dominant runoff process (DRP). The various DRP-mapping approaches differ with respect to the time and data required for mapping. Manual approaches based on expert knowledge are reliable but time-consuming, whereas automatic GIS-based approaches are easier to implement but rely on simplifications which restrict their application range. To what extent these simplifications are applicable in other catchments is unclear. More information is also needed on how the different complexities of automatic DRP-mapping approaches affect hydrological simulations. In this paper, three automatic approaches were used to map two catchments on the Swiss Plateau. The resulting maps were compared to reference maps obtained with manual mapping. Measures of agreement and association, a class comparison, and a deviation map were derived. The automatically derived DRP maps were used in synthetic runoff simulations with an adapted version of the PREVAH hydrological model, and the simulation results were compared with those from simulations using the reference maps. The DRP maps derived with the automatic approach with the highest complexity and data requirements were the most similar to the reference maps, while those derived with simplified approaches without original soil information differed significantly in terms of both extent and distribution of the DRPs. The runoff simulations derived from the simpler DRP maps were more uncertain due to inaccuracies in the input data and their coarse resolution, but problems were also linked with the use of topography as a proxy for the storage capacity of soils. The perception of the intensity of the DRP classes also seems to vary among the different authors, and a standardised definition of DRPs is still lacking. Furthermore, we argue that expert knowledge should be used not only for model building and constraining, but also in the landscape classification phase.
Pustina, Dorian; Coslett, H. Branch; Turkeltaub, Peter E.; Tustison, Nicholas; Schwartz, Myrna F.; Avants, Brian
2015-01-01
The gold standard for identifying stroke lesions is manual tracing, a method that is known to be observer dependent and time consuming, thus impractical for big data studies. We propose LINDA (Lesion Identification with Neighborhood Data Analysis), an automated segmentation algorithm capable of learning the relationship between existing manual segmentations and a single T1-weighted MRI. A dataset of 60 left hemispheric chronic stroke patients is used to build the method and test it with k-fold and leave-one-out procedures. With respect to manual tracings, predicted lesion maps showed a mean dice overlap of 0.696±0.16, Hausdorff distance of 17.9±9.8mm, and average displacement of 2.54±1.38mm. The manual and predicted lesion volumes correlated at r=0.961. An additional dataset of 45 patients was utilized to test LINDA with independent data, achieving high accuracy rates and confirming its cross-institutional applicability. To investigate the cost of moving from manual tracings to automated segmentation, we performed comparative lesion-to-symptom mapping (LSM) on five behavioral scores. Predicted and manual lesions produced similar neuro-cognitive maps, albeit with some discussed discrepancies. Of note, region-wise LSM was more robust to the prediction error than voxel-wise LSM. Our results show that, while several limitations exist, our current results compete with or exceed the state-of-the-art, producing consistent predictions, very low failure rates, and transferable knowledge between labs. This work also establishes a new viewpoint on evaluating automated methods not only with segmentation accuracy but also with brain-behavior relationships. LINDA is made available online with trained models from over 100 patients. PMID:26756101
Daniel, Kaemmerer; Maria, Athelogou; Amelie, Lupp; Isabell, Lenhardt; Stefan, Schulz; Luisa, Peter; Merten, Hommann; Vikas, Prasad; Gerd, Binnig; Paul, Baum Richard
2014-01-01
Background: Manual evaluation of somatostatin receptor (SSTR) immunohistochemistry (IHC) is a time-consuming and cost-intensive procedure. The aim of the study was to compare manual evaluation of SSTR subtype IHC to an automated software-based analysis, and to in-vivo imaging by SSTR-based PET/CT. Methods: We examined 25 gastroenteropancreatic neuroendocrine tumor (GEP-NET) patients and correlated their in-vivo SSTR-PET/CT data (determined by the standardized uptake values SUVmax and SUVmean) with the corresponding ex-vivo IHC data of SSTR subtype (1, 2A, 4, 5) expression. Exactly the same lesions were imaged by PET/CT, resected and analyzed by IHC in each patient. After manual evaluation, the IHC slides were digitized and automatically evaluated for SSTR expression by Definiens XD software. A virtual IHC score “BB1” was created for comparing the manual and automated analysis of SSTR expression. Results: BB1 showed a significant correlation with the corresponding conventionally determined Her2/neu score of the SSTR subtypes 2A (rs: 0.57), 4 (rs: 0.44) and 5 (rs: 0.43). BB1 of SSTR2A also significantly correlated with the SUVmax (rs: 0.41) and the SUVmean (rs: 0.50). Likewise, a significant correlation was seen between the conventionally evaluated SSTR2A status and the SUVmax (rs: 0.42) and SUVmean (rs: 0.62). Conclusion: Our data demonstrate that the evaluation of the SSTR status by automated analysis (BB1 score), using digitized histopathology slides (“virtual microscopy”), corresponds well with the SSTR2A, 4 and 5 expression as determined by conventional manual histopathology. The BB1 score also exhibited a significant association with the SSTR-PET/CT data, in accordance with the high affinity profile of the SSTR analogues used for imaging. PMID:25197368
Automated matching software for clinical trials eligibility: measuring efficiency and flexibility.
Penberthy, Lynne; Brown, Richard; Puma, Federico; Dahman, Bassam
2010-05-01
Clinical trials (CT) serve as the media that translates clinical research into standards of care. Low or slow recruitment leads to delays in delivery of new therapies to the public. Determination of eligibility in all patients is one of the most important factors to assure unbiased results from the clinical trials process and represents the first step in addressing the issue of underrepresentation and equal access to clinical trials. This is a pilot project evaluating the efficiency, flexibility, and generalizability of an automated clinical trials eligibility screening tool across 5 different clinical trials and clinical trial scenarios. There was a substantial total savings during the study period in research staff time spent in evaluating patients for eligibility, ranging from 165 h to 1329 h. There was a marked enhancement in efficiency with the automated system for all but one study in the pilot. The ratio of mean staff time required per eligible patient identified ranged from 0.8 to 19.4 for the manual versus the automated process. The results of this study demonstrate that automation offers an opportunity to reduce the burden of the manual processes required for CT eligibility screening and to assure that all patients have an opportunity to be evaluated for participation in clinical trials as appropriate. The automated process greatly reduces the time spent on eligibility screening compared with the traditional manual process by effectively transferring the load of the eligibility assessment process to the computer. Copyright (c) 2010 Elsevier Inc. All rights reserved.
Aronsky, D.; Haug, P. J.
1999-01-01
Decision support systems that integrate guidelines have become popular applications to reduce variation and deliver cost-effective care. However, adverse characteristics of decision support systems, such as additional and time-consuming data entry or manually identifying eligible patients, result in a "behavioral bottleneck" that prevents decision support systems from becoming part of the clinical routine. This paper describes the design and implementation of an integrated decision support system that explores a novel approach for bypassing the behavioral bottleneck. The real-time decision support system does not require health care providers to enter additional data and consists of a diagnostic and a management component. PMID:10566348
A feasibility study to determine if there is a market for automatic meter-reading devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hilberg, G.R.
1996-08-01
For many utilities the cost of manually reading meters is increasing due to personnel expenses and equipment costs. The current system of manual meters provides little ability for the utility to reduce costs. To reduce meter reading costs the utility must automate the manual system and reduce personnel expenses. A water utility in San Diego county was studied to calculate the cost of reading individual water meters. This would allow for the selective replacement of "high-cost" meters to quickly reduce meter-reading costs while limiting the necessary capital investments. As the "high-cost" meters are selectively replaced, a utility with a significant difference in individual meter reading costs could save three to five dollars per meter per year. This study showed that the "high-cost" meters were six times more expensive to read than the average meter. Additionally, AMR systems increase the information available to consumers and to the utility on usage patterns and problems. The challenge was to cost effectively identify the "high-cost" meters. The costs to collect these data were less than $500.
Yu, Xuefei; Lin, Liangzhuo; Shen, Jie; Chen, Zhi; Jian, Jun; Li, Bin; Xin, Sherman Xuegang
2018-01-01
The mean amplitude of glycemic excursions (MAGE) is an essential index for glycemic variability assessment and is treated as a key reference for blood glucose control in the clinic. However, the traditional "ruler and pencil" manual method for the calculation of MAGE is time-consuming and prone to error due to the huge data size, making the development of a robust computer-aided program an urgent requirement. Although several software products are available instead of manual calculation, poor agreement among them is reported. Therefore, more studies are required in this field. In this paper, we developed a mathematical algorithm based on integer nonlinear programming. Following the proposed mathematical method, an open-code computer program named MAGECAA v1.0 was developed and validated. The results of the statistical analysis indicated that the developed program was robust compared to the manual method. The agreement between the developed program and currently available popular software is satisfactory, indicating that concern about disagreement among different software products is unnecessary. The open-code programmable algorithm is an extra resource for peers interested in related methodological studies in the future.
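Editor's note: for orientation, a simplified MAGE computation is sketched below, following the common textbook formulation (find turning points of the glucose trace, then average only those excursions whose amplitude exceeds one standard deviation of the whole trace). This is not necessarily the integer nonlinear programming algorithm implemented in MAGECAA.

    import numpy as np

    def mage_simple(glucose):
        """Mean amplitude of excursions larger than 1 SD of the glucose trace."""
        g = np.asarray(glucose, dtype=float)
        sd = g.std(ddof=1)
        # Turning points: sign changes of the first difference, plus the endpoints.
        d = np.diff(g)
        turns = [0] + [i + 1 for i in range(len(d) - 1)
                       if d[i] * d[i + 1] < 0] + [len(g) - 1]
        excursions = np.abs(np.diff(g[turns]))
        valid = excursions[excursions > sd]
        return valid.mean() if valid.size else 0.0

    print(mage_simple([5.2, 7.8, 6.1, 9.4, 4.9, 5.5, 8.7, 6.0]))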
Automatic Evaluation of Collagen Fiber Directions from Polarized Light Microscopy Images.
Novak, Kamil; Polzer, Stanislav; Tichy, Michal; Bursa, Jiri
2015-08-01
Mechanical properties of the arterial wall depend largely on orientation and density of collagen fiber bundles. Several methods have been developed for observation of collagen orientation and density; the most frequently applied collagen-specific manual approach is based on polarized light (PL). However, it is very time consuming and the results are operator dependent. We have proposed a new automated method for evaluation of collagen fiber direction from two-dimensional polarized light microscopy images (2D PLM). The algorithm has been verified against artificial images and validated against manual measurements. Finally the collagen content has been estimated. The proposed algorithm was capable of estimating orientation of some 35 k points in 15 min when applied to aortic tissue and over 500 k points in 35 min for Achilles tendon. The average angular disagreement between each operator and the algorithm was -9.3±8.6° and -3.8±8.6° in the case of aortic tissue and -1.6±6.4° and 2.6±7.8° for Achilles tendon. Estimated mean collagen content was 30.3±5.8% and 94.3±2.7% for aortic media and Achilles tendon, respectively. The proposed automated approach is operator independent and several orders faster than manual measurements and therefore has the potential to replace manual measurements of collagen orientation via PLM.
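Editor's note: the paper's own algorithm is not reproduced here. As a generic illustration of automated per-pixel orientation estimation from a 2D image, the sketch below uses a gradient-based structure tensor, a different but commonly used approach, implemented with NumPy and SciPy; the window size and function name are assumptions.

    import numpy as np
    from scipy import ndimage

    def orientation_map(img, window=5):
        """Per-pixel fiber orientation (degrees) from a locally smoothed structure tensor."""
        gy, gx = np.gradient(img.astype(float))
        jxx = ndimage.uniform_filter(gx * gx, size=window)
        jyy = ndimage.uniform_filter(gy * gy, size=window)
        jxy = ndimage.uniform_filter(gx * gy, size=window)
        # Dominant gradient direction of the structure tensor...
        theta_grad = 0.5 * np.arctan2(2 * jxy, jxx - jyy)
        # ...fibers (edges) run perpendicular to it.
        return np.degrees(theta_grad) + 90.0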
Mental health recovery: A review of the peer-reviewed published literature.
Jacob, Sini; Munro, Ian; Taylor, Beverley Joan; Griffiths, Debra
The concept of mental health recovery promotes collaborative partnership among consumers, carers and service providers. However views on mental health recovery are less explored among carers and service providers. The aim of this review was to analyse contemporary literature exploring views of mental health consumers, carers and service providers in relation to their understanding of the meaning of mental health recovery and factors influencing mental health recovery. The literature review questions were: How is mental health recovery and factors influencing mental health recovery viewed by consumers, carers and service providers? What are the differences and similarities in those perceptions? How can the outcomes and recommendations inform the Australian mental health practices? A review of the literature used selected electronic databases and specific search terms and supplemented with manual searching. Twenty-six studies were selected for review which included qualitative, mixed method, and quantitative approaches and a Delphi study. The findings indicated that the concept of mental health recovery is more explored among consumers and is seldom explored among carers and service providers. The studies suggested that recovery from mental illness is a multidimensional process and the concept cannot be defined in rigid terms. In order to achieve the best possible care, the stakeholders require flexible attitudes and openness to embrace the philosophy.
Natural Language Processing As an Alternative to Manual Reporting of Colonoscopy Quality Metrics
RAJU, GOTTUMUKKALA S.; LUM, PHILLIP J.; SLACK, REBECCA; THIRUMURTHI, SELVI; LYNCH, PATRICK M.; MILLER, ETHAN; WESTON, BRIAN R.; DAVILA, MARTA L.; BHUTANI, MANOOP S.; SHAFI, MEHNAZ A.; BRESALIER, ROBERT S.; DEKOVICH, ALEXANDER A.; LEE, JEFFREY H.; GUHA, SUSHOVAN; PANDE, MALA; BLECHACZ, BORIS; RASHID, ASIF; ROUTBORT, MARK; SHUTTLESWORTH, GLADIS; MISHRA, LOPA; STROEHLEIN, JOHN R.; ROSS, WILLIAM A.
2015-01-01
BACKGROUND & AIMS The adenoma detection rate (ADR) is a quality metric tied to interval colon cancer occurrence. However, manual extraction of data to calculate and track the ADR in clinical practice is labor-intensive. To overcome this difficulty, we developed a natural language processing (NLP) method to identify patients who underwent their first screening colonoscopy and to identify adenomas and sessile serrated adenomas (SSAs). We compared the NLP-generated results with those of manual data extraction to test the accuracy of NLP, and report on colonoscopy quality metrics using NLP. METHODS Identification of screening colonoscopies using NLP was compared with that using the manual method for 12,748 patients who underwent colonoscopies from July 2010 to February 2013. Also, identification of adenomas and SSAs using NLP was compared with that using the manual method with 2259 matched patient records. Colonoscopy ADRs using these methods were generated for each physician. RESULTS NLP correctly identified 91.3% of the screening examinations, whereas the manual method identified 87.8% of them. Both the manual method and NLP correctly identified examinations of patients with adenomas and SSAs in the matched records almost perfectly. Both NLP and the manual method produced comparable ADR values for each endoscopist as well as for the group as a whole. CONCLUSIONS NLP can correctly identify screening colonoscopies, accurately identify adenomas and SSAs in a pathology database, and provide real-time quality metrics for colonoscopy. PMID:25910665
Automated transient detection in the STEREO Heliospheric Imagers.
NASA Astrophysics Data System (ADS)
Barnard, Luke; Scott, Chris; Owens, Mat; Lockwood, Mike; Tucker-Hood, Kim; Davies, Jackie
2014-05-01
Since the launch of the twin STEREO satellites, the heliospheric imagers (HI) have been used, with good results, to track transients of solar origin, such as Coronal Mass Ejections (CMEs), far out into the heliosphere. A frequently used approach is to build a "J-map", in which multiple elongation profiles along a constant position angle are stacked in time, building an image in which radially propagating transients form curved tracks. From this, the time-elongation profile of a solar transient can be manually identified. This is a time-consuming and laborious process, and the results are subjective, depending on the skill and expertise of the investigator. Therefore, it is desirable to develop an automated algorithm for the detection and tracking of the transient features observed in HI data. This is to some extent previously covered ground, as similar problems have been encountered in the analysis of coronagraph data and have led to the development of products such as CACTus. We present the results of our investigation into the automated detection of solar transients observed in J-maps formed from HI data. We use edge and line detection methods to identify transients in the J-maps, and then use kinematic models of solar transient propagation (such as the fixed-phi and harmonic mean geometric models) to estimate the transient's properties, such as speed and propagation direction, from the time-elongation profile. The effectiveness of this process is assessed by comparison of our results with a set of manually identified CMEs, extracted and analysed by the Solar Storm Watch project. Solar Storm Watch is a citizen science project in which solar transients are identified in J-maps formed from HI data and tracked multiple times by different users. This allows the calculation of a consensus time-elongation profile for each event, and therefore does not suffer from the potential subjectivity of an individual researcher tracking an event. Furthermore, we present preliminary results on the estimation of the ambient solar wind speed from the automated analysis of the HI J-maps, by tracking numerous small-scale features entrained in the ambient solar wind, which can only be tracked out to small elongations.
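Editor's note: as a worked illustration of the geometric models mentioned above, the sketch below converts a time-elongation profile to radial distance under the fixed-phi approximation (a point source propagating at a fixed angle phi from the observer-Sun line), then obtains a rough speed from a constant-speed fit. The elongation profile, angles and units are placeholder assumptions.

    import numpy as np

    def fixed_phi_distance(elongation_deg, phi_deg, d_observer_au=1.0):
        """Radial distance (AU) of a point transient under the fixed-phi approximation."""
        eps = np.radians(elongation_deg)
        phi = np.radians(phi_deg)
        return d_observer_au * np.sin(eps) / np.sin(eps + phi)

    # Constant-speed fit to a synthetic time-elongation track (illustrative numbers only).
    t_hours = np.arange(0.0, 40.0, 2.0)
    elong_deg = 3.0 + 0.25 * t_hours                 # placeholder elongation profile
    r_au = fixed_phi_distance(elong_deg, phi_deg=60.0)
    speed_au_per_hour = np.polyfit(t_hours, r_au, 1)[0]
    print(speed_au_per_hour * 1.496e8 / 3600.0, "km/s (rough estimate)")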
Alachiotis, Nikolaos; Vogiatzi, Emmanouella; Pavlidis, Pavlos; Stamatakis, Alexandros
2013-01-01
Automated DNA sequencers generate chromatograms that contain raw sequencing data. They also generate data that translates the chromatograms into molecular sequences of A, C, G, T, or N (undetermined) characters. Since chromatogram translation programs frequently introduce errors, a manual inspection of the generated sequence data is required. As sequence numbers and lengths increase, visual inspection and manual correction of chromatograms and corresponding sequences on a per-peak and per-nucleotide basis becomes an error-prone, time-consuming, and tedious process. Here, we introduce ChromatoGate (CG), an open-source software that accelerates and partially automates the inspection of chromatograms and the detection of sequencing errors for bidirectional sequencing runs. To provide users full control over the error correction process, a fully automated error correction algorithm has not been implemented. Initially, the program scans a given multiple sequence alignment (MSA) for potential sequencing errors, assuming that each polymorphic site in the alignment may be attributed to a sequencing error with a certain probability. The guided MSA assembly procedure in ChromatoGate detects chromatogram peaks of all characters in an alignment that lead to polymorphic sites, given a user-defined threshold. The threshold value represents the sensitivity of the sequencing error detection mechanism. After this pre-filtering, the user only needs to inspect a small number of peaks in every chromatogram to correct sequencing errors. Finally, we show that correcting sequencing errors is important, because population genetic and phylogenetic inferences can be misled by MSAs with uncorrected mis-calls. Our experiments indicate that estimates of population mutation rates can be affected two- to three-fold by uncorrected errors. PMID:24688709
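Editor's note: the pre-filtering step described above can be pictured with the short sketch below, which flags columns of a multiple sequence alignment containing more than one distinct informative base and therefore deserving chromatogram inspection. This is a schematic reading of the procedure, not ChromatoGate's actual code, and it ignores the user-defined sensitivity threshold.

    def polymorphic_columns(msa, ignore="N-"):
        """Return indices of alignment columns with more than one distinct informative base."""
        sites = []
        for col in range(len(msa[0])):
            bases = {seq[col] for seq in msa if seq[col] not in ignore}
            if len(bases) > 1:
                sites.append(col)
        return sites

    alignment = ["ACGTACGT",
                 "ACGTACGA",
                 "ACGTNCGT"]
    print(polymorphic_columns(alignment))  # -> [7]; column 4 is masked by the 'N'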
Stevens, Matthew P; Rudland, Elice; Garland, Suzanne M; Tabrizi, Sepehr N
2006-07-01
Roche Molecular Systems recently released two PCR-based assays, AMPLICOR and LINEAR ARRAY (LA), for the detection and genotyping, respectively, of human papillomaviruses (HPVs). The manual specimen processing method recommended for use with both assays, AmpliLute, can be time-consuming and labor-intensive and is open to potential specimen cross-contamination. We evaluated the Roche MagNA Pure LC (MP) as an alternative for specimen processing prior to use with either assay. DNA was extracted from cervical brushings, collected in PreservCyt media, by AmpliLute and MP using DNA-I and Total Nucleic Acid (TNA) kits, from 150 patients with histologically confirmed cervical abnormalities. DNA was amplified and detected by AMPLICOR and the LA HPV test. Concordances of 96.5% (139 of 144) (kappa=0.93) and 95.1% (135 of 142) (kappa=0.90) were generated by AMPLICOR when we compared DNA extracts from AmpliLute to MP DNA-I and TNA, respectively. The HPV genotype profiles were identical in 78.7 and 74.7% of samples between AmpliLute and DNA-I or TNA, respectively. To improve LA concordance, all 150 specimens were extracted by MP DNA-I protocol after the centrifugation of 1-ml PreservCyt samples. This modified approach improved HPV genotype concordance levels between AmpliLute and MP DNA-I to 88.0% (P=0.043) without affecting AMPLICOR sensitivity. Laboratories that have an automated MP extraction system would find this procedure more feasible and easier to handle than the recommended manual extraction method and could substitute such extractions for AMPLICOR and LA HPV tests once internally validated.
Beltrán, David; Muñetón-Ayala, Mercedes; de Vega, Manuel
2018-04-01
Embodiment theories claim that language meaning involves sensory-motor simulation processes in the brain. A challenge for these theories, however, is to explain how abstract words, such as negations, are processed. In this article, we test the hypothesis that understanding sentential negation (e.g., You will not cut the bread) reuses the neural circuitry of response inhibition. Participants read manual action sentences with either affirmative or negative polarity, embedded in a Stop-Signal paradigm, while their EEG was recorded. The results showed that the inhibition-related N1 and P3 components were enhanced by successful inhibition. Most important, the early N1 amplitude was also modulated by sentence polarity, producing the largest values for successful inhibitions in the context of negative sentences, whereas no polarity effect was found for failing inhibition or go trials. The estimated neural sources for N1 effects revealed activations in the right inferior frontal gyrus, a typical inhibition-related area. Also, the estimated stop-signal reaction time was larger in trials with negative sentences. These results provide strong evidence that action-related negative sentences consume neural resources of response inhibition, resulting in less efficient processing in the Stop-Signal task. Copyright © 2018 Elsevier Ltd. All rights reserved.
Benítez, José Alberto; Labra, José Emilio; Quiroga, Enedina; Martín, Vicente; García, Isaías; Marqués-Sánchez, Pilar; Benavides, Carmen
2017-01-01
There is great concern nowadays regarding alcohol consumption and drug abuse, especially in young people. By analyzing the social environment in which these adolescents are immersed, together with a series of measures of alcohol abuse risk and of personal situation and perception obtained from questionnaires such as AUDIT, FAS, KIDSCREEN, and others, it is possible to gain insight into the current situation of a given individual regarding his/her consumption behavior. Achieving this analysis, however, requires tools that can ease the processes of questionnaire creation, data gathering, curation and representation, and the later analysis and visualization of results for the user. This research presents the design and construction of a web-based platform able to facilitate each of the mentioned processes by integrating the different phases into an intuitive system with a graphical user interface that hides the complexity underlying each of the questionnaires and techniques used, presenting the results in a flexible and visual way and avoiding any manual handling of data during the process. Advantages of this approach are shown and compared to the previous situation, in which some of the tasks were accomplished by time-consuming and error-prone manipulations of data.
Hilliard, Mark; Alley, William R; McManus, Ciara A; Yu, Ying Qing; Hallinan, Sinead; Gebler, John; Rudd, Pauline M
Glycosylation is an important attribute of biopharmaceutical products to monitor from development through production. However, glycosylation analysis has traditionally been a time-consuming process with long sample preparation protocols and manual interpretation of the data. To address the challenges associated with glycan analysis, we developed a streamlined analytical solution that covers the entire process from sample preparation to data analysis. In this communication, we describe the complete analytical solution that begins with a simplified and fast N-linked glycan sample preparation protocol that can be completed in less than 1 hr. The sample preparation includes labelling with RapiFluor-MS tag to improve both fluorescence (FLR) and mass spectral (MS) sensitivities. Following HILIC-UPLC/FLR/MS analyses, the data are processed and a library search based on glucose units has been included to expedite the task of structural assignment. We then applied this total analytical solution to characterize the glycosylation of the NIST Reference Material mAb 8761. For this glycoprotein, we confidently identified 35 N-linked glycans and all three major classes, high mannose, complex, and hybrid, were present. The majority of the glycans were neutral and fucosylated; glycans featuring N-glycolylneuraminic acid and those with two galactoses connected via an α1,3-linkage were also identified.
Wallace, Nathan D; Ceguerra, Anna V; Breen, Andrew J; Ringer, Simon P
2018-06-01
Atom probe tomography is a powerful microscopy technique capable of reconstructing the 3D position and chemical identity of millions of atoms within engineering materials, at the atomic level. Crystallographic information contained within the data is particularly valuable for the purposes of reconstruction calibration and grain boundary analysis. Typically, analysing this data is a manual, time-consuming and error prone process. In many cases, the crystallographic signal is so weak that it is difficult to detect at all. In this study, a new automated signal processing methodology is demonstrated. We use the affine properties of the detector coordinate space, or the 'detector stack', as the basis for our calculations. The methodological framework and the visualisation tools are shown to be superior to the standard method of crystallographic pole visualisation directly from field evaporation images and there is no requirement for iterations between a full real-space initial tomographic reconstruction and the detector stack. The mapping approaches are demonstrated for aluminium, tungsten, magnesium and molybdenum. Implications for reconstruction calibration, accuracy of crystallographic measurements, reliability and repeatability are discussed. Copyright © 2018 Elsevier B.V. All rights reserved.
Automatic Screening for Perturbations in Boolean Networks.
Schwab, Julian D; Kestler, Hans A
2018-01-01
A common approach to address biological questions in systems biology is to simulate regulatory mechanisms using dynamic models. Among others, Boolean networks can be used to model the dynamics of regulatory processes in biology. Boolean network models allow simulating the qualitative behavior of the modeled processes. A central objective in the simulation of Boolean networks is the computation of their long-term behavior, the so-called attractors. These attractors are of special interest as they can often be linked to biologically relevant behaviors. Changing internal and external conditions can influence the long-term behavior of the Boolean network model. Perturbation of a Boolean network by stripping a component of the system or simulating a surplus of another element can lead to different attractors. Apparently, the number of possible perturbations and combinations of perturbations increases exponentially with the size of the network. Manually screening a set of possible components for combinations that have a desired effect on the long-term behavior can be very time-consuming, if not impossible. We developed a method to automatically screen for perturbations that lead to a user-specified change in the network's functioning. This method is implemented in the visual simulation framework ViSiBool utilizing satisfiability (SAT) solvers for fast exhaustive attractor search.
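Editor's note: to make the perturbation-screening idea concrete, the sketch below exhaustively enumerates attractors of a tiny synchronous Boolean network with one node clamped (knocked out or overexpressed). It is a toy illustration under assumed update rules, not the SAT-based search implemented in ViSiBool.

    from itertools import product

    # Toy 3-node network: each update rule maps the full state tuple to the node's next value.
    RULES = {
        0: lambda s: s[1] and not s[2],
        1: lambda s: s[0],
        2: lambda s: s[0] or s[1],
    }

    def step(state, clamp=None):
        nxt = tuple(int(RULES[i](state)) for i in range(len(state)))
        if clamp:                      # clamp = {node_index: forced_value}
            nxt = tuple(clamp.get(i, v) for i, v in enumerate(nxt))
        return nxt

    def attractors(n=3, clamp=None):
        found = set()
        for start in product([0, 1], repeat=n):
            state = tuple(clamp.get(i, v) for i, v in enumerate(start)) if clamp else start
            seen = []
            while state not in seen:   # iterate until a state repeats
                seen.append(state)
                state = step(state, clamp)
            cycle = seen[seen.index(state):]   # the attractor reached from this start state
            found.add(tuple(sorted(cycle)))
        return found

    print(attractors())                # attractors of the unperturbed network
    print(attractors(clamp={0: 0}))    # attractors when node 0 is knocked out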
The effects of voice and manual control mode on dual task performance
NASA Technical Reports Server (NTRS)
Wickens, C. D.; Zenyuh, J.; Culp, V.; Marshak, W.
1986-01-01
Two fundamental principles of human performance, compatibility and resource competition, are combined with two structural dichotomies in the human information processing system, manual versus voice output and left versus right cerebral hemisphere, in order to predict the optimum combination of voice and manual control with either hand for time-sharing performance of a discrete and a continuous task. Eight right-handed male subjects performed a discrete first-order tracking task, time-shared with an auditorily presented Sternberg Memory Search Task. Each task could be controlled by voice, or by the left or right hand, in all possible combinations except for a dual voice mode. When performance was analyzed in terms of a dual-task decrement from single-task control conditions, the following variables influenced time-sharing efficiency in diminishing order of magnitude: (1) the modality of control (discrete manual control of tracking was superior to discrete voice control of tracking, and the converse was true for the memory search task); (2) response competition (performance was degraded when both tasks were responded to manually); (3) hemispheric competition (performance was degraded whenever both tasks were controlled by the left hemisphere, i.e., voice or right-handed control). The results confirm the value of predictive models in voice control implementation.
Mechanized or hand operations: which is less expensive for small timber?
Robert Rummber; John Klepac
2002-01-01
Two harvesting systems, one manual post-and-rail and one small-scale cut-to-length harvester, were compared in a lodgepole pine thinning. Elemental time study data were collected, along with estimates of residual stand damage. The harvester was about as productive as a manual crew of five. For material 5" and larger, the cost for felling, processing and piling...
Creating a standardized watersheds database for the lower Rio Grande/Rio Bravo, Texas
Brown, Julie R.; Ulery, Randy L.; Parcher, Jean W.
2000-01-01
This report describes the creation of a large-scale watershed database for the lower Rio Grande/Rio Bravo Basin in Texas. The watershed database includes watersheds delineated to all 1:24,000-scale mapped stream confluences and other hydrologically significant points, selected watershed characteristics, and hydrologic derivative datasets. Computer technology allows generation of preliminary watershed boundaries in a fraction of the time needed for manual methods. This automated process reduces development time and results in quality improvements in watershed boundaries and characteristics. These data can then be compiled in a permanent database, eliminating the time-consuming step of data creation at the beginning of a project and providing a stable base dataset that can give users greater confidence when further subdividing watersheds. A standardized dataset of watershed characteristics is a valuable contribution to the understanding and management of natural resources. Vertical integration of the input datasets used to automatically generate watershed boundaries is crucial to the success of such an effort. The optimum situation would be to use the digital orthophoto quadrangles as the source of all the input datasets. While the hydrographic data from the digital line graphs can be revised to match the digital orthophoto quadrangles, hypsography data cannot be revised to match the digital orthophoto quadrangles. Revised hydrography from the digital orthophoto quadrangle should be used to create an updated digital elevation model that incorporates the stream channels as revised from the digital orthophoto quadrangle. Computer-generated, standardized watersheds that are vertically integrated with existing digital line graph hydrographic data will continue to be difficult to create until revisions can be made to existing source datasets. Until such time, manual editing will be necessary to make adjustments for man-made features and changes in the natural landscape that are not reflected in the digital elevation model data.
Troy, Declan J; Ojha, Kumari Shikha; Kerry, Joseph P; Tiwari, Brijesh K
2016-10-01
New and emerging robust technologies can play an important role in ensuring a more resilient meat value chain and satisfying consumer demands and needs. This paper outlines various novel thermal and non-thermal technologies which have shown potential for meat processing applications. A number of process analytical techniques which have shown potential for rapid, real-time assessment of meat quality are also discussed. The commercial uptake and consumer acceptance of novel technologies in meat processing have been subjects of great interest over the past decade. Consumer focus group studies have shown that consumer expectations and liking for novel technologies, applicable to meat processing applications, vary significantly. This overview also highlights the necessity for meat processors to address consumer risk-benefit perceptions, knowledge and trust in order to be commercially successful in the application of novel technologies within the meat sector. Copyright © 2016. Published by Elsevier Ltd.