NASA Technical Reports Server (NTRS)
O'Connell, R. F.; Hassig, H. J.; Radovcich, N. A.
1976-01-01
Results of a study of the development of flutter modules applicable to automated structural design of advanced aircraft configurations, such as a supersonic transport, are presented. Automated structural design is restricted to automated sizing of the elements of a given structural model. It includes a flutter optimization procedure; i.e., a procedure for arriving at a minimum-mass structure that satisfies flutter constraints. Methods of solving the flutter equation and computing the generalized aerodynamic force coefficients in the repetitive analysis environment of a flutter optimization procedure are studied, and recommended approaches are presented. Five approaches to flutter optimization are explained in detail and compared. An approach to flutter optimization incorporating some of the methods discussed is presented. Problems related to flutter optimization in a realistic design environment are discussed, and an integrated approach to the entire flutter task is presented. Recommendations for further investigations are made. Results of numerical evaluations, applying the five methods of flutter optimization to the same design task, are presented.
Automated standardization technique for an inductively-coupled plasma emission spectrometer
Garbarino, John R.; Taylor, Howard E.
1982-01-01
The manifold assembly subsystem described permits real-time computer-controlled standardization and quality control of a commercial inductively-coupled plasma atomic emission spectrometer. The manifold assembly consists of a branch-structured glass manifold, a series of microcomputer-controlled solenoid valves, and a reservoir for each standard. Automated standardization involves selective actuation of each solenoid valve that permits a specific mixed standard solution to be pumped to the nebulizer of the spectrometer. Quality control is based on the evaluation of results obtained for a mixed standard containing 17 analytes that is measured periodically with unknown samples. An inaccurate standard evaluation triggers restandardization of the instrument according to a predetermined protocol. Interaction of the computer-controlled manifold assembly hardware with the spectrometer system is outlined. The automated standardization system is compared to the manual procedure with respect to reliability, simplicity, flexibility, and efficiency. © 1982.
NMR-based automated protein structure determination.
Würz, Julia M; Kazemi, Sina; Schmidt, Elena; Bagaria, Anurag; Güntert, Peter
2017-08-15
NMR spectra analysis for protein structure determination can now in many cases be performed by automated computational methods. This overview of the computational methods for NMR protein structure analysis presents recent automated methods for signal identification in multidimensional NMR spectra, sequence-specific resonance assignment, collection of conformational restraints, and structure calculation, as implemented in the CYANA software package. These algorithms are sufficiently reliable and integrated into one software package to enable the fully automated structure determination of proteins starting from NMR spectra without manual interventions or corrections at intermediate steps, with an accuracy of 1-2 Å backbone RMSD in comparison with manually solved reference structures. Copyright © 2017 Elsevier Inc. All rights reserved.
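The 1-2 Å figure refers to a standard backbone comparison against a reference structure. Below is a minimal sketch of that check, assuming Biopython is available and the two PDB files share residue numbering; the file names are hypothetical.

```python
# Backbone RMSD between an automated and a manually solved structure.
# Assumes matching residue numbering so the atom lists align one-to-one.
from Bio.PDB import PDBParser, Superimposer

BACKBONE = {"N", "CA", "C"}

def backbone_atoms(path):
    model = PDBParser(QUIET=True).get_structure("s", path)[0]
    return [atom for atom in model.get_atoms() if atom.get_name() in BACKBONE]

ref = backbone_atoms("reference_manual.pdb")   # hypothetical file names
auto = backbone_atoms("flya_automated.pdb")

sup = Superimposer()
sup.set_atoms(ref, auto)                 # least-squares fit of automated onto reference
print(f"backbone RMSD: {sup.rms:.2f} A")  # 1-2 A expected per the abstract
```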
Nondestructive Evaluation of Hardwood Logs Using Automated Interpretation of CT Images
Daniel L. Schmoldt; Dongping Zhu; Richard W. Conners
1993-01-01
Computed tomography (CT) imaging is being used to examine the internal structure of hardwood logs. The following steps are used to automatically interpret CT images: (1) preprocessing to remove unwanted portions of the image, e.g., annual ring structure, (2) image-by-image segmentation to produce relatively homogeneous image areas, (3) volume growing to create volumes...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Joshua M.
Manufacturing tasks that are deemed too hazardous for workers require the use of automation, robotics, and/or other remote handling tools. The associated hazards may be radiological or nonradiological, and based on the characteristics of the environment and processing, a design may necessitate robotic labor, human labor, or both. Other factors, such as cost, ergonomics, maintenance, and efficiency, also affect task allocation and other design choices. Handling the tradeoffs of these factors can be complex, and lack of experience can be an issue when trying to determine whether feasible automation/robotics options exist and what they are. To address this problem, we utilize common engineering design approaches adapted for manufacturing system design in hazardous environments. We limit our scope to the conceptual and embodiment design stages, specifically a computational algorithm for concept generation and early design evaluation. In regard to concept generation, we first develop the functional model or function structure for the process, using the common 'verb-noun' format for describing function. A common language or functional basis for manufacturing was developed and utilized to formalize function descriptions and guide rules for function decomposition. Potential components for embodiment are also grouped in terms of this functional language and are stored in a database. The properties of each component are given as quantitative and qualitative criteria. Operators are also rated for task-relevant criteria, which are used to address task compatibility. Through the gathering of process requirements/constraints, construction of the component database, and development of the manufacturing basis and rule set, design knowledge is stored and available for computer use. Thus, once the higher level process functions are defined, the computer can automate the synthesis of new design concepts through alternating steps of embodiment and function structure updates/decomposition. In the process, criteria guide function allocation of components/operators and help ensure compatibility and feasibility. Through multiple function assignment options and varied function structures, multiple design concepts are created. All of the generated designs are then evaluated based on a number of relevant evaluation criteria: cost, dose, ergonomics, hazards, efficiency, etc. These criteria are computed using physical properties/parameters of each system based on the qualities an engineer would use to make evaluations. Nuclear processes such as oxide conversion and electrorefining are utilized to aid algorithm development and provide test cases for the completed program. Through our approach, we capture design knowledge related to manufacturing and other operations in hazardous environments to enable a computational program to automatically generate and evaluate system design concepts.
NASA Astrophysics Data System (ADS)
Zhou, X.; Hayashi, T.; Han, M.; Chen, H.; Hara, T.; Fujita, H.; Yokoyama, R.; Kanematsu, M.; Hoshi, H.
2009-02-01
X-ray CT images have been widely used in clinical diagnosis in recent years. A modern CT scanner can generate about 1000 CT slices to show the details of all the human organs within 30 seconds. However, CT image interpretations (viewing 500-1000 slices of CT images manually in front of a screen or films for each patient) require a lot of time and energy. Therefore, computer-aided diagnosis (CAD) systems that can support CT image interpretations are strongly anticipated. Automated recognition of the anatomical structures in CT images is a basic pre-processing step of the CAD system. The bone structure is a part of the anatomical structures and is very useful as a landmark for predicting the positions of other organs. However, automated recognition of the bone structure is still a challenging issue. This research proposes an automated scheme for segmenting the bone regions and recognizing the bone structure in noncontrast torso CT images. The proposed scheme was applied to 48 torso CT cases, and a subjective evaluation of the experimental results was carried out by an anatomical expert following the anatomical definition. The experimental results showed that the bone structure was recognized correctly in 90% of the CT cases. For quantitative evaluation, automated recognition results were compared to manual inputs of the bones of the lower limb created by an anatomical expert on 10 randomly selected CT cases. The error (maximum distance in 3D) between the recognition results and manual inputs ranged from 3 to 8 mm in different parts of the bone regions.
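The reported 3-8 mm error is a maximum 3D surface distance. A minimal sketch of such a measure, assuming both segmentations are exported as Nx3 arrays of surface-voxel coordinates in millimetres:

```python
# Symmetric maximum surface distance (Hausdorff distance) between an
# automated and a manual segmentation, each given as Nx3 point coordinates.
import numpy as np
from scipy.spatial import cKDTree

def max_surface_distance(a: np.ndarray, b: np.ndarray) -> float:
    d_ab = cKDTree(b).query(a)[0]   # nearest manual point for each automated point
    d_ba = cKDTree(a).query(b)[0]   # and vice versa
    return float(max(d_ab.max(), d_ba.max()))
```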
Automated structure determination of proteins with the SAIL-FLYA NMR method.
Takeda, Mitsuhiro; Ikeya, Teppei; Güntert, Peter; Kainosho, Masatsune
2007-01-01
The labeling of proteins with stable isotopes enhances the NMR method for the determination of 3D protein structures in solution. Stereo-array isotope labeling (SAIL) provides an optimal stereospecific and regiospecific pattern of stable isotopes that yields sharpened lines, spectral simplification without loss of information, and the ability to rapidly collect and fully automatically evaluate the structural restraints required to solve a high-quality solution structure for proteins up to twice as large as those that can be analyzed using conventional methods. Here, we describe a protocol for the preparation of SAIL proteins by cell-free methods, including the preparation of S30 extract, and their automated structure analysis using the FLYA algorithm and the program CYANA. Once efficient cell-free expression of the unlabeled or uniformly labeled target protein has been achieved, the NMR sample preparation of a SAIL protein can be accomplished in 3 d. A fully automated FLYA structure calculation can be completed in 1 d on a powerful computer system.
Specialized computer system to diagnose critical lined equipment
NASA Astrophysics Data System (ADS)
Yemelyanov, V. A.; Yemelyanova, N. Y.; Morozova, O. A.; Nedelkin, A. A.
2018-05-01
The paper presents data on the problem of diagnosing the lining condition at iron and steel works. The authors propose and describe the structure of a specialized computer system to diagnose critical lined equipment. Comparative results of diagnosing the lining condition with the basic system and with the proposed specialized computer system are presented. To automate the evaluation of the lining condition and to support decision making regarding the operation mode of the lined equipment, specialized software has been developed.
Automated generation of weld path trajectories.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sizemore, John M.; Hinman-Sweeney, Elaine Marie; Ames, Arlo Leroy
2003-06-01
AUTOmated GENeration of Control Programs for Robotic Welding of Ship Structure (AUTOGEN) is software that automates the planning and compiling of control programs for robotic welding of ship structure. The software works by evaluating computer representations of the ship design and the manufacturing plan. Based on this evaluation, AUTOGEN internally identifies and appropriately characterizes each weld. Then it constructs the robot motions necessary to accomplish the welds and determines for each the correct assignment of process control values. AUTOGEN generates these robot control programs completely without manual intervention or edits except to correct wrong or missing input data. Most ship structure assemblies are unique or at best manufactured only a few times. Accordingly, the high cost inherent in all previous methods of preparing complex control programs has made robot welding of ship structures economically unattractive to the U.S. shipbuilding industry. AUTOGEN eliminates the cost of creating robot control programs. With programming costs eliminated, capitalization of robots to weld ship structures becomes economically viable. Robot welding of ship structures will result in reduced ship costs, uniform product quality, and enhanced worker safety. Sandia National Laboratories and Northrop Grumman Ship Systems worked with the National Shipbuilding Research Program to develop a means of automated path and process generation for robotic welding. This effort resulted in the AUTOGEN program, which has successfully demonstrated automated path generation and robot control. Although the current implementation of AUTOGEN is optimized for welding applications, the path and process planning capability has applicability to a number of industrial applications, including painting, riveting, and adhesive delivery.
NASA Astrophysics Data System (ADS)
Hoang, Bui Huy; Oda, Masahiro; Jiang, Zhengang; Kitasaka, Takayuki; Misawa, Kazunari; Fujiwara, Michitaka; Mori, Kensaku
2011-03-01
This paper presents an automated anatomical labeling method for arteries extracted from contrasted 3D CT images based on multi-class AdaBoost. In abdominal surgery, understanding the vasculature related to a target organ such as the colon is very important. Therefore, the anatomical structure of blood vessels needs to be understood by computers in a system supporting abdominal surgery. Several studies have addressed automated anatomical labeling, but none has addressed automated anatomical labeling of the arteries related to the colon. The proposed method obtains a tree structure of arteries from the artery region and calculates feature values for each branch. These features are the thickness, curvature, direction, and running vectors of each branch. Then, candidate arterial names are computed by classifiers that are trained to output artery names. Finally, a global optimization process is applied to the candidate arterial names to determine the final names. The target arteries of this paper are nine lower abdominal arteries (AO, LCIA, RCIA, LEIA, REIA, SMA, IMA, LIIA, RIIA). We applied the proposed method to 14 cases of 3D abdominal contrasted CT images and evaluated the results by a leave-one-out scheme. The average precision and recall rates of the proposed method were 87.9% and 93.3%, respectively. The results of this method are applicable for anatomical name display in surgical simulation and computer-aided surgery.
Code of Federal Regulations, 2010 CFR
2010-10-01
.... (h) Automated data processing computer systems, including: (1) Planning efforts in the identification, evaluation, and selection of an automated data processing computer system solution meeting the program... existing automated data processing computer system to support Tribal IV-D program operations, and...
Code of Federal Regulations, 2013 CFR
2013-10-01
.... (h) Automated data processing computer systems, including: (1) Planning efforts in the identification, evaluation, and selection of an automated data processing computer system solution meeting the program... existing automated data processing computer system to support Tribal IV-D program operations, and...
Code of Federal Regulations, 2014 CFR
2014-10-01
.... (h) Automated data processing computer systems, including: (1) Planning efforts in the identification, evaluation, and selection of an automated data processing computer system solution meeting the program... existing automated data processing computer system to support Tribal IV-D program operations, and...
Code of Federal Regulations, 2012 CFR
2012-10-01
.... (h) Automated data processing computer systems, including: (1) Planning efforts in the identification, evaluation, and selection of an automated data processing computer system solution meeting the program... existing automated data processing computer system to support Tribal IV-D program operations, and...
Code of Federal Regulations, 2011 CFR
2011-10-01
.... (h) Automated data processing computer systems, including: (1) Planning efforts in the identification, evaluation, and selection of an automated data processing computer system solution meeting the program... existing automated data processing computer system to support Tribal IV-D program operations, and...
An anatomy of industrial robots and their controls
NASA Astrophysics Data System (ADS)
Luh, J. Y. S.
1983-02-01
The modernization of manufacturing facilities by means of automation represents an approach for increasing productivity in industry. The three existing types of automation are related to continuous process controls, the use of transfer conveyor methods, and the employment of programmable automation for the low-volume batch production of discrete parts. Industrial robots, which are defined as computer-controlled mechanical manipulators, belong to the area of programmable automation. Typically, the robots perform tasks of arc welding, paint spraying, or foundry operations. One may assign a robot to perform a variety of job assignments simply by changing the appropriate computer program. The present investigation is concerned with an evaluation of the potential of the robot on the basis of its basic structure and controls. It is found that robots function well in limited areas of industry. If the range of tasks which robots can perform is to be expanded, it is necessary to provide multiple-task sensors, or special tooling, or even automatic tooling.
Automated analysis in generic groups
NASA Astrophysics Data System (ADS)
Fagerholm, Edvard
This thesis studies automated methods for analyzing hardness assumptions in generic group models, following ideas of symbolic cryptography. We define a broad class of generic and symbolic group models for different settings, symmetric or asymmetric (leveled) k-linear groups, and prove "computational soundness" theorems for the symbolic models. Based on this result, we formulate a master theorem that relates the hardness of an assumption to solving problems in polynomial algebra. We systematically analyze these problems, identifying different classes of assumptions, and obtain decidability and undecidability results. Then, we develop automated procedures for verifying the conditions of our master theorem, and thus the validity of hardness assumptions in generic group models. The concrete outcome is an automated tool, the Generic Group Analyzer, which takes as input the statement of an assumption, and outputs either a proof of its generic hardness or an algebraic attack against the assumption. Structure-preserving signatures are signature schemes defined over bilinear groups in which messages, public keys and signatures are group elements, and the verification algorithm consists of evaluating "pairing-product equations". Recent work on structure-preserving signatures studies optimality of these schemes in terms of the number of group elements needed in the verification key and the signature, and the number of pairing-product equations in the verification algorithm. While the size of keys and signatures is crucial for many applications, another aspect of performance is the time it takes to verify a signature. The most expensive operation during verification is the computation of pairings. However, the concrete number of pairings is not captured by the number of pairing-product equations considered in earlier work. We consider the question of what is the minimal number of pairing computations needed to verify structure-preserving signatures. We build an automated tool to search for structure-preserving signatures matching a template. Through exhaustive search, we conjecture lower bounds for the number of pairings required in the Type II setting and prove our conjecture to be true. Finally, our tool exhibits examples of structure-preserving signatures matching the lower bounds, which proves tightness of our bounds, as well as improves on previously known structure-preserving signature schemes.
Automated procedures for sizing aerospace vehicle structures /SAVES/
NASA Technical Reports Server (NTRS)
Giles, G. L.; Blackburn, C. L.; Dixon, S. C.
1972-01-01
Results from a continuing effort to develop automated methods for structural design are described. A system of computer programs presently under development called SAVES is intended to automate the preliminary structural design of a complete aerospace vehicle. Each step in the automated design process of the SAVES system of programs is discussed, with emphasis placed on use of automated routines for generation of finite-element models. The versatility of these routines is demonstrated by structural models generated for a space shuttle orbiter, an advanced technology transport, and a hydrogen-fueled Mach 3 transport. Illustrative numerical results are presented for the Mach 3 transport wing.
Computational methods for structural load and resistance modeling
NASA Technical Reports Server (NTRS)
Thacker, B. H.; Millwater, H. R.; Harren, S. V.
1991-01-01
An automated capability for computing structural reliability considering uncertainties in both load and resistance variables is presented. The computations are carried out using an automated Advanced Mean Value iteration algorithm (AMV+) with performance functions involving load and resistance variables obtained by both explicit and implicit methods. A complete description of the procedures used is given, as well as several illustrative examples verified by Monte Carlo analysis. In particular, the computational methods described in the paper are shown to be quite accurate and efficient for a material nonlinear structure considering material damage as a function of several primitive random variables. The results clearly show the effectiveness of the algorithms for computing the reliability of large-scale structural systems with a maximum number of resolutions.
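For context, the Monte Carlo verification mentioned above amounts to sampling the load and resistance variables and counting failures of the performance function g = R - S. A minimal sketch with illustrative distribution parameters (not taken from the paper):

```python
# Monte Carlo estimate of failure probability P(g < 0) for g = R - S.
# The lognormal/normal parameters below are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
resistance = rng.lognormal(mean=np.log(50.0), sigma=0.10, size=n)  # capacity R
load = rng.normal(loc=30.0, scale=5.0, size=n)                     # demand S

g = resistance - load               # performance function
p_fail = np.mean(g < 0.0)
print(f"estimated failure probability: {p_fail:.2e}")
```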
Computer Assisted School Automation (CASA) in Japan.
ERIC Educational Resources Information Center
Sakamoto, Takashi; Nakanome, Naoaki
1991-01-01
This assessment of the status of computer assisted school automation (CASA) in Japan begins by describing the structure of the Japanese educational system and the roles of CASA in that system. Statistics on various aspects of computers in Japanese schools and the findings of several surveys are cited to report on the present state of educational…
The Electrolyte Genome project: A big data approach in battery materials discovery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qu, Xiaohui; Jain, Anubhav; Rajput, Nav Nidhi
2015-06-01
We present a high-throughput infrastructure for the automated calculation of molecular properties with a focus on battery electrolytes. The infrastructure is largely open-source and handles both practical aspects (input file generation, output file parsing, and information management) as well as more complex problems (structure matching, salt complex generation, and failure recovery). Using this infrastructure, we have computed the ionization potential (IP) and electron affinities (EA) of 4830 molecules relevant to battery electrolytes (encompassing almost 55,000 quantum mechanics calculations) at the B3LYP/6-31+G* level. We describe automated workflows for computing redox potential, dissociation constant, and salt-molecule binding complex structure generation. We present routines for automatic recovery from calculation errors, which brings the failure rate from 9.2% to 0.8% for the QChem DFT code. Automated algorithms to check duplication between two arbitrary molecules and structures are described. We present benchmark data on basis sets and functionals on the G2-97 test set; one finding is that an IP/EA calculation method that combines PBE geometry optimization and B3LYP energy evaluation requires less computational cost and yields nearly identical results as compared to a full B3LYP calculation, and could be suitable for the calculation of large molecules. Our data indicate that among the 8 functionals tested, XYGJ-OS and B3LYP are the two best functionals to predict IP/EA, with an RMSE of 0.12 and 0.27 eV, respectively. Application of our automated workflow to a large set of quinoxaline derivative molecules shows that functional group effect and substitution position effect can be separated for IP/EA of quinoxaline derivatives, and the most sensitive position is different for IP and EA. Published by Elsevier B.V.
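The screened IP/EA values follow directly from total-energy differences of the neutral and charged species. A minimal sketch of that arithmetic, assuming the energies come from the DFT workflow in hartree; the example numbers are illustrative:

```python
# Vertical IP and EA from total energies:
#   IP = E(cation) - E(neutral),  EA = E(neutral) - E(anion)
HARTREE_TO_EV = 27.211386

def ip_ea(e_neutral, e_cation, e_anion):
    ip = (e_cation - e_neutral) * HARTREE_TO_EV
    ea = (e_neutral - e_anion) * HARTREE_TO_EV
    return ip, ea

# Illustrative energies (hartree), not real calculation output:
print(ip_ea(e_neutral=-459.123, e_cation=-458.861, e_anion=-459.190))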
NASA Astrophysics Data System (ADS)
Wang, Bohan; Wang, Hsing-Wen; Guo, Hengchang; Anderson, Erik; Tang, Qinggong; Wu, Tongtong; Falola, Reuben; Smith, Tikina; Andrews, Peter M.; Chen, Yu
2017-12-01
Chronic kidney disease (CKD) is characterized by a progressive loss of renal function over time. Histopathological analysis of the condition of glomeruli and the proximal convolutional tubules over time can provide valuable insights into the progression of CKD. Optical coherence tomography (OCT) is a technology that can analyze the microscopic structures of a kidney in a nondestructive manner. Recently, we have shown that OCT can provide real-time imaging of kidney microstructures in vivo without administering exogenous contrast agents. A murine model of CKD induced by intravenous Adriamycin (ADR) injection is evaluated by OCT. OCT images of the rat kidneys have been captured every week up to eight weeks. Tubular diameter and hypertrophic tubule population of the kidneys at multiple time points after ADR injection have been evaluated through a fully automated computer-vision system. Results revealed that mean tubular diameter and hypertrophic tubule population increase with time in post-ADR injection period. The results suggest that OCT images of the kidney contain abundant information about kidney histopathology. Fully automated computer-aided diagnosis based on OCT has the potential for clinical evaluation of CKD conditions.
Computer automation for feedback system design
NASA Technical Reports Server (NTRS)
1975-01-01
Mathematical techniques and explanations of various steps used by an automated computer program to design feedback systems are summarized. Special attention was given to refining the automatic evaluation of suboptimal loop transmission and the translation of time-domain to frequency-domain specifications.
Tey, Wei Keat; Kuang, Ye Chow; Ooi, Melanie Po-Leen; Khoo, Joon Joon
2018-03-01
Interstitial fibrosis in renal biopsy samples is a scarring tissue structure that may be visually quantified by pathologists as an indicator of the presence and extent of chronic kidney disease. The standard method of quantification by visual evaluation presents reproducibility issues in the diagnoses due to the uncertainties in human judgement. An automated quantification system for accurately measuring the amount of interstitial fibrosis in renal biopsy images is presented as a consistent basis of comparison among pathologists. The system identifies the renal tissue structures through knowledge-based rules employing colour space transformations and structural feature extraction from the images. In particular, the renal glomerulus identification is based on a multiscale textural feature analysis and a support vector machine. The regions in the biopsy representing interstitial fibrosis are deduced through the elimination of non-interstitial fibrosis structures from the biopsy area and quantified as a percentage of the total area of the biopsy sample. The experiments conducted evaluate the system in terms of quantification accuracy, intra- and inter-observer variability in visual quantification by pathologists, and the effect introduced by the automated quantification system on the pathologists' diagnosis. A 40-image ground truth dataset has been manually prepared by consulting an experienced pathologist for the validation of the segmentation algorithms. The results from experiments involving experienced pathologists demonstrated an average error of 9 percentage points in quantification result between the automated system and the pathologists' visual evaluation. Experiments investigating the variability in pathologists, involving samples from 70 kidney patients, also proved the automated quantification error rate to be on par with the average intra-observer variability in pathologists' quantification. The accuracy of the proposed quantification system has been validated with the ground truth dataset and compared against the pathologists' quantification results. It has been shown that the correlation between different pathologists' estimation of interstitial fibrosis area has significantly improved, demonstrating the effectiveness of the quantification system as a diagnostic aid. Copyright © 2017 Elsevier B.V. All rights reserved.
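The elimination-based quantification step lends itself to a compact sketch. Assuming boolean masks from the upstream segmentation stages, fibrosis is whatever biopsy area remains after the non-fibrosis structures are removed:

```python
# Interstitial fibrosis as a percentage of total biopsy area, by eliminating
# segmented non-fibrosis structures (glomeruli, tubules, vessels) from the
# biopsy mask. Masks are assumed to be boolean numpy arrays.
import numpy as np

def fibrosis_percentage(biopsy_mask, structure_masks):
    non_fibrosis = np.zeros_like(biopsy_mask)
    for mask in structure_masks:            # glomeruli, tubules, vessels, ...
        non_fibrosis |= mask
    fibrosis = biopsy_mask & ~non_fibrosis  # elimination step from the paper
    return 100.0 * fibrosis.sum() / biopsy_mask.sum()
```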
Automated software system for checking the structure and format of ACM SIG documents
NASA Astrophysics Data System (ADS)
Mirza, Arsalan Rahman; Sah, Melike
2017-04-01
Microsoft (MS) Office Word is one of the most commonly used software tools for creating documents. MS Word 2007 and above uses XML to represent the structure of MS Word documents. Metadata about the documents are automatically created using Office Open XML (OOXML) syntax. We develop a new framework, called ADFCS (Automated Document Format Checking System), that takes advantage of the OOXML metadata in order to extract semantic information from MS Office Word documents. In particular, we develop a new ontology for Association for Computing Machinery (ACM) Special Interest Group (SIG) documents, representing the structure and format of these documents by using OWL (Web Ontology Language). Then, the metadata is extracted automatically in RDF (Resource Description Framework) according to this ontology using the developed software. Finally, we generate extensive rules in order to infer whether the documents are formatted according to ACM SIG standards. This paper introduces the ACM SIG ontology, the metadata extraction process, the inference engine, the ADFCS online user interface, the system evaluation, and user study evaluations.
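As background to the extraction step: a .docx file is a ZIP archive whose word/document.xml encodes paragraphs and their styles in OOXML. A minimal sketch of reading those facts with the Python standard library (the RDF/OWL and rule layers of ADFCS are not reproduced here; the file name is hypothetical):

```python
# Enumerate (paragraph style, text) pairs from the OOXML inside a .docx.
import zipfile
import xml.etree.ElementTree as ET

W = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

def paragraph_styles(docx_path):
    with zipfile.ZipFile(docx_path) as z:
        root = ET.fromstring(z.read("word/document.xml"))
    for p in root.iter(f"{W}p"):
        style = p.find(f"{W}pPr/{W}pStyle")
        text = "".join(t.text or "" for t in p.iter(f"{W}t"))
        yield (style.get(f"{W}val") if style is not None else "Normal", text)

for style, text in paragraph_styles("paper.docx"):   # hypothetical file
    print(style, text[:60])
```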
NASA Technical Reports Server (NTRS)
Schroeder, Lyle C.; Bailey, M. C.; Mitchell, John L.
1992-01-01
Methods for increasing the electromagnetic (EM) performance of reflectors with rough surfaces were tested and evaluated. First, one quadrant of the 15-meter hoop-column antenna was retrofitted with computer-driven and controlled motors to allow automated adjustment of the reflector surface. The surface errors, measured with metric photogrammetry, were used in a previously verified computer code to calculate control motor adjustments. With this system, a rough antenna surface (rms of approximately 0.180 inch) was corrected in two iterations to approximately the structural surface smoothness limit of 0.060 inch rms. The antenna pattern and gain improved significantly as a result of these surface adjustments. The EM performance was evaluated with a computer program for distorted reflector antennas which had been previously verified with experimental data. Next, the effects of the surface distortions were compensated for in computer simulations by superimposing excitation from an array feed to maximize antenna performance relative to an undistorted reflector. Results showed that a 61-element array could produce EM performance improvements equal to surface adjustments. When both mechanical surface adjustment and feed compensation techniques were applied, the equivalent operating frequency increased from approximately 6 to 18 GHz.
Automated flight path planning for virtual endoscopy.
Paik, D S; Beaulieu, C F; Jeffrey, R B; Rubin, G D; Napel, S
1998-05-01
In this paper, a novel technique for rapid and automatic computation of flight paths for guiding virtual endoscopic exploration of three-dimensional medical images is described. While manually planning flight paths is a tedious and time-consuming task, our algorithm is automated and fast. Our method for positioning the virtual camera is based on the medial axis transform but is much more computationally efficient. By iteratively correcting a path toward the medial axis, the necessity of evaluating simple point criteria during morphological thinning is eliminated. The virtual camera is also oriented in a stable viewing direction, avoiding sudden twists and turns. We tested our algorithm on volumetric data sets of eight colons, one aorta, and one bronchial tree. The algorithm computed the flight paths in several minutes per volume on an inexpensive workstation, with minimal computation time added for multiple paths through branching structures (10%-13% per extra path). The results of our algorithm are smooth, centralized paths that aid in the task of navigation in virtual endoscopic exploration of three-dimensional medical images.
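The path-correction idea can be sketched compactly: points move uphill on a distance-from-wall map, whose ridges approximate the medial axis. A minimal illustration, assuming a boolean lumen segmentation; the step size and iteration count are illustrative, and this is not the authors' exact algorithm:

```python
# Pull path points toward the medial axis by gradient ascent on the
# Euclidean distance transform of the segmented lumen.
import numpy as np
from scipy.ndimage import distance_transform_edt

def centralize(path_pts, lumen_mask, steps=50, step_size=0.5):
    dist = distance_transform_edt(lumen_mask)   # distance to lumen wall
    grad = np.gradient(dist)                    # one array per axis
    pts = np.asarray(path_pts, dtype=float)
    hi = np.array(lumen_mask.shape) - 1
    for _ in range(steps):
        idx = tuple(np.clip(np.round(pts).astype(int), 0, hi).T)
        g = np.stack([gi[idx] for gi in grad], axis=1)
        pts += step_size * g                    # move toward the ridge
    return pts
```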
Automated image quality assessment for chest CT scans.
Reeves, Anthony P; Xie, Yiting; Liu, Shuang
2018-02-01
Medical image quality needs to be maintained at standards sufficient for effective clinical reading. Automated computer analytic methods may be applied to medical images for quality assessment. For chest CT scans in a lung cancer screening context, an automated quality assessment method is presented that characterizes image noise and image intensity calibration. This is achieved by image measurements in three automatically segmented homogeneous regions of the scan: external air, trachea lumen air, and descending aorta blood. Profiles of CT scanner behavior are also computed. The method has been evaluated on both phantom and real low-dose chest CT scans and results show that repeatable noise and calibration measures may be realized by automated computer algorithms. Noise and calibration profiles show relevant differences between different scanners and protocols. Automated image quality assessment may be useful for quality control for lung cancer screening and may enable performance improvements to automated computer analysis methods. © 2017 American Association of Physicists in Medicine.
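A minimal sketch of the region-based measurements described above, assuming the three region masks come from the automated segmentation stage: the mean HU checks intensity calibration (external air should sit near -1000 HU, blood near +40 HU), and the standard deviation characterizes noise.

```python
# Per-region calibration (mean HU) and noise (HU standard deviation).
import numpy as np

def quality_measures(ct_volume, masks):
    # masks: {"external_air": ..., "trachea_air": ..., "aorta_blood": ...}
    report = {}
    for name, mask in masks.items():
        values = ct_volume[mask]
        report[name] = {"mean_HU": float(values.mean()),     # calibration
                        "noise_sd_HU": float(values.std())}  # noise
    return report
```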
Automation of the aircraft design process
NASA Technical Reports Server (NTRS)
Heldenfels, R. R.
1974-01-01
The increasing use of the computer to automate the aerospace product development and engineering process is examined with emphasis on structural analysis and design. Examples of systems of computer programs in aerospace and other industries are reviewed and related to the characteristics of aircraft design in its conceptual, preliminary, and detailed phases. Problems with current procedures are identified, and potential improvements from optimum utilization of integrated disciplinary computer programs by a man/computer team are indicated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matney, J; Hammers, J; Kaidar-Person, O
2016-06-15
Purpose: To compute daily dose delivered during radiotherapy, deformable registration needs to be relatively fast, automated, and accurate. The aim of this study was to evaluate the performance of commercial deformable registration software for deforming between two modalities: planning computed tomography (pCT) images acquired for treatment planning and cone beam (CB) CT images acquired prior to each fraction of prostate cancer radiotherapy. Methods: A workflow was designed using MIM Software™ that aligned and deformed pCT into daily CBCT images in two steps: (1) rigid shifts applied after daily CBCT imaging to align patient anatomy to the pCT and (2) normalized intensity-based deformable registration to account for interfractional anatomical variations. The physician-approved CTV and organ-at-risk (OAR) contours were deformed from the pCT to daily CBCT over the course of treatment. The same structures were delineated on each daily CBCT by a radiation oncologist. Dice similarity coefficient (DSC) means and standard deviations were calculated to quantify the deformable registration quality for the prostate, bladder, rectum, and femoral heads. Results: To date, contour comparisons have been analyzed for 31 daily fractions from 2 of the 10 patients in the cohort. Interim analysis shows that right and left femoral head contours demonstrate the highest agreement (DSC: 0.96±0.02) with physician contours. Additionally, deformed bladder (DSC: 0.81±0.09) and prostate (DSC: 0.80±0.07) contours have good agreement with physician-defined daily contours. Rectum contours have the highest variation (DSC: 0.66±0.10) between the deformed and physician-defined contours on daily CBCT imaging. Conclusion: For structures with relatively high-contrast boundaries on CBCT, the MIM automated deformable registration provided accurate representations of the daily contours during treatment delivery. These findings will permit subsequent investigations to automate daily dose computation from CBCT. However, improved methods need to be investigated to improve deformable results for rectum contours.
Automated aortic calcification detection in low-dose chest CT images
NASA Astrophysics Data System (ADS)
Xie, Yiting; Htwe, Yu Maw; Padgett, Jennifer; Henschke, Claudia; Yankelevitz, David; Reeves, Anthony P.
2014-03-01
The extent of aortic calcification has been shown to be a risk indicator for vascular events including cardiac events. We have developed a fully automated computer algorithm to segment and measure aortic calcification in low-dose noncontrast, non-ECG gated, chest CT scans. The algorithm first segments the aorta using a pre-computed Anatomy Label Map (ALM). Then based on the segmented aorta, aortic calcification is detected and measured in terms of the Agatston score, mass score, and volume score. The automated scores are compared with reference scores obtained from manual markings. For aorta segmentation, the aorta is modeled as a series of discrete overlapping cylinders and the aortic centerline is determined using a cylinder-tracking algorithm. Then the aortic surface location is detected using the centerline and a triangular mesh model. The segmented aorta is used as a mask for the detection of aortic calcification. For calcification detection, the image is first filtered, then an elevated threshold of 160 Hounsfield units (HU) is used within the aorta mask region to reduce the effect of noise in low-dose scans, and finally non-aortic calcification voxels (bony structures, calcification in other organs) are eliminated. The remaining candidates are considered as true aortic calcification. The computer algorithm was evaluated on 45 low-dose non-contrast CT scans. Using linear regression, the automated Agatston score is 98.42% correlated with the reference Agatston score. The automated mass and volume score is respectively 98.46% and 98.28% correlated with the reference mass and volume score.
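A minimal sketch of an Agatston-style slice score inside a precomputed aorta mask, using the elevated 160 HU detection threshold from the abstract and the standard Agatston density bands; the pixel spacing value is illustrative:

```python
# Agatston-style calcification score for one CT slice within an aorta mask:
# lesions above threshold are labeled, and each lesion's area is weighted
# by its peak HU using the standard Agatston bands.
import numpy as np
from scipy.ndimage import label

def density_weight(peak_hu):
    return 1 if peak_hu < 200 else 2 if peak_hu < 300 else 3 if peak_hu < 400 else 4

def agatston_score(slice_hu, aorta_mask, pixel_area_mm2=0.6):
    candidates = (slice_hu >= 160) & aorta_mask   # elevated low-dose threshold
    lesions, n = label(candidates)                # connected components
    score = 0.0
    for i in range(1, n + 1):
        region = lesions == i
        area = region.sum() * pixel_area_mm2
        score += area * density_weight(slice_hu[region].max())
    return score
```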
StrAuto: automation and parallelization of STRUCTURE analysis.
Chhatre, Vikram E; Emerson, Kevin J
2017-03-24
Population structure inference using the software STRUCTURE has become an integral part of population genetic studies covering a broad spectrum of taxa including humans. The ever-expanding size of genetic data sets poses computational challenges for this analysis. Although at least one tool currently implements parallel computing to reduce the computational overload of this analysis, it does not fully automate the use of replicate STRUCTURE analysis runs required for downstream inference of optimal K. There is a pressing need for a tool that can deploy population structure analysis on high performance computing clusters. We present an updated version of the popular Python program StrAuto, to streamline population structure analysis using parallel computing. StrAuto implements a pipeline that combines STRUCTURE analysis with the Evanno ΔK analysis and visualization of results using STRUCTURE HARVESTER. Using benchmarking tests, we demonstrate that StrAuto significantly reduces the computational time needed to perform iterative STRUCTURE analysis by distributing runs over two or more processors. StrAuto is the first tool to integrate STRUCTURE analysis with post-processing using a pipeline approach, in addition to implementing parallel computation - a setup ideal for deployment on computing clusters. StrAuto is distributed under the GNU GPL (General Public License) and available to download from http://strauto.popgen.org.
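The parallelization rests on the fact that replicate STRUCTURE runs for different K values are independent. A minimal sketch of farming such runs out to a process pool; the command-line flags follow STRUCTURE's documented console interface but should be checked against a local installation, and the file names are hypothetical:

```python
# Launch replicate STRUCTURE runs in parallel across K values.
import subprocess
from concurrent.futures import ProcessPoolExecutor

def run_structure(args):
    k, rep = args
    out = f"results/K{k}_rep{rep}"
    subprocess.run(["structure", "-K", str(k), "-m", "mainparams",
                    "-e", "extraparams", "-i", "data.str", "-o", out],
                   check=True)
    return out

if __name__ == "__main__":
    jobs = [(k, rep) for k in range(1, 11) for rep in range(1, 21)]  # K=1..10, 20 reps
    with ProcessPoolExecutor(max_workers=8) as pool:
        for outfile in pool.map(run_structure, jobs):
            print("finished", outfile)
```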
DOE Office of Scientific and Technical Information (OSTI.GOV)
Darby, John L.
LinguisticBelief is a Java computer code that evaluates combinations of linguistic variables using an approximate reasoning rule base. Each variable is comprised of fuzzy sets, and a rule base describes the reasoning on combinations of the variables' fuzzy sets. Uncertainty is considered and propagated through the rule base using the belief/plausibility measure. The mathematics of fuzzy sets, approximate reasoning, and belief/plausibility are complex. Without an automated tool, this complexity precludes their application to all but the simplest of problems. LinguisticBelief automates the use of these techniques, allowing complex problems to be evaluated easily. LinguisticBelief can be used free of charge on any Windows XP machine. This report documents the use and structure of the LinguisticBelief code, and the deployment package for installation on client machines.
Recent advances in automated protein design and its future challenges.
Setiawan, Dani; Brender, Jeffrey; Zhang, Yang
2018-04-25
Protein function is determined by protein structure which is in turn determined by the corresponding protein sequence. If the rules that cause a protein to adopt a particular structure are understood, it should be possible to refine or even redefine the function of a protein by working backwards from the desired structure to the sequence. Automated protein design attempts to calculate the effects of mutations computationally with the goal of more radical or complex transformations than are accessible by experimental techniques. Areas covered: The authors give a brief overview of the recent methodological advances in computer-aided protein design, showing how methodological choices affect final design and how automated protein design can be used to address problems considered beyond traditional protein engineering, including the creation of novel protein scaffolds for drug development. Also, the authors address specifically the future challenges in the development of automated protein design. Expert opinion: Automated protein design holds potential as a protein engineering technique, particularly in cases where screening by combinatorial mutagenesis is problematic. Considering solubility and immunogenicity issues, automated protein design is initially more likely to make an impact as a research tool for exploring basic biology in drug discovery than in the design of protein biologics.
Understanding and enhancing user acceptance of computer technology
NASA Technical Reports Server (NTRS)
Rouse, William B.; Morris, Nancy M.
1986-01-01
Technology-driven efforts to implement computer technology often encounter problems due to lack of acceptance or begrudging acceptance of the personnel involved. It is argued that individuals' acceptance of automation, in terms of either computerization or computer aiding, is heavily influenced by their perceptions of the impact of the automation on their discretion in performing their jobs. It is suggested that desired levels of discretion reflect needs to feel in control and achieve self-satisfaction in task performance, as well as perceptions of inadequacies of computer technology. Discussion of these factors leads to a structured set of considerations for performing front-end analysis, deciding what to automate, and implementing the resulting changes.
A Factor Graph Approach to Automated GO Annotation
Spetale, Flavio E.; Tapia, Elizabeth; Krsticevic, Flavia; Roda, Fernando; Bulacio, Pilar
2016-01-01
As volume of genomic data grows, computational methods become essential for providing a first glimpse onto gene annotations. Automated Gene Ontology (GO) annotation methods based on hierarchical ensemble classification techniques are particularly interesting when interpretability of annotation results is a main concern. In these methods, raw GO-term predictions computed by base binary classifiers are leveraged by checking the consistency of predefined GO relationships. Both formal leveraging strategies, with main focus on annotation precision, and heuristic alternatives, with main focus on scalability issues, have been described in literature. In this contribution, a factor graph approach to the hierarchical ensemble formulation of the automated GO annotation problem is presented. In this formal framework, a core factor graph is first built based on the GO structure and then enriched to take into account the noisy nature of GO-term predictions. Hence, starting from raw GO-term predictions, an iterative message passing algorithm between nodes of the factor graph is used to compute marginal probabilities of target GO-terms. Evaluations on Saccharomyces cerevisiae, Arabidopsis thaliana and Drosophila melanogaster protein sequences from the GO Molecular Function domain showed significant improvements over competing approaches, even when protein sequences were naively characterized by their physicochemical and secondary structure properties or when loose noisy annotation datasets were considered. Based on these promising results and using Arabidopsis thaliana annotation data, we extend our approach to the identification of most promising molecular function annotations for a set of proteins of unknown function in Solanum lycopersicum. PMID:26771463
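The consistency being enforced is the GO true-path rule: annotating a protein with a term implies annotation with all of that term's ancestors, so a parent's score should never fall below any child's. The max-propagation pass below is a simple heuristic stand-in that illustrates this constraint, not the paper's factor graph message-passing algorithm:

```python
# Leverage raw classifier scores so every GO term's score is at least the
# maximum score of its descendants (true-path consistency).
def leverage(raw_scores, children, term, memo=None):
    # raw_scores: {term: probability from the base binary classifier}
    # children:   {term: list of child GO terms} (DAG edges)
    memo = {} if memo is None else memo
    if term in memo:
        return memo[term]
    score = raw_scores.get(term, 0.0)
    for child in children.get(term, []):
        score = max(score, leverage(raw_scores, children, child, memo))
    memo[term] = score
    return score
```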
Towards automatic Markov reliability modeling of computer architectures
NASA Technical Reports Server (NTRS)
Liceaga, C. A.; Siewiorek, D. P.
1986-01-01
The analysis and evaluation of reliability measures using time-varying Markov models is required for Processor-Memory-Switch (PMS) structures that have competing processes such as standby redundancy and repair, or renewal processes such as transient or intermittent faults. The task of generating these models is tedious and prone to human error due to the large number of states and transitions involved in any reasonable system. Therefore model formulation is a major analysis bottleneck, and model verification is a major validation problem. The general unfamiliarity of computer architects with Markov modeling techniques further increases the necessity of automating the model formulation. This paper presents an overview of the Automated Reliability Modeling (ARM) program, under development at NASA Langley Research Center. ARM will accept as input a description of the PMS interconnection graph, the behavior of the PMS components, the fault-tolerant strategies, and the operational requirements. The output of ARM will be the reliability or availability Markov model formulated for direct use by evaluation programs. The advantages of such an approach are (a) utility to a large class of users, not necessarily expert in reliability analysis, and (b) a lower probability of human error in the computation.
Computation of Flow Through Water-Control Structures Using Program DAMFLO.2
Sanders, Curtis L.; Feaster, Toby D.
2004-01-01
As part of its mission to collect, analyze, and store streamflow data, the U.S. Geological Survey computes flow through several dam structures throughout the country. Flows are computed using hydraulic equations that describe flow through sluice and Tainter gates, crest gates, lock gates, spillways, locks, pumps, and siphons, which are calibrated using flow measurements. The program DAMFLO.2 was written to compute, tabulate, and plot flow through dam structures using data that describe the physical properties of dams and various hydraulic parameters and ratings that use time-varying data, such as lake elevations or gate openings. The program uses electronic computer files of time-varying data, such as lake elevation or gate openings, retrieved from the U.S. Geological Survey Automated Data Processing System. Computed time-varying flow data from DAMFLO.2 are output in flat files, which can be entered into the Automated Data Processing System database. All computations are made in units of feet and seconds. DAMFLO.2 uses the procedures and language developed by the SAS Institute Inc.
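As an illustration of the kind of hydraulic relation such a program calibrates, a free-flowing sluice gate in one common textbook form is Q = Cd * b * a * sqrt(2 g h), with the discharge coefficient Cd tuned against flow measurements. This sketch is generic; DAMFLO.2's actual rating equations are defined in its documentation:

```python
# Free-flowing sluice gate discharge, Q = Cd * (b * a) * sqrt(2 g h),
# in the feet-and-seconds convention noted in the abstract.
import math

G = 32.174  # gravitational acceleration, ft/s^2

def sluice_gate_flow(cd, gate_width_ft, gate_opening_ft, upstream_head_ft):
    area = gate_width_ft * gate_opening_ft
    return cd * area * math.sqrt(2.0 * G * upstream_head_ft)  # cfs

# Illustrative values only; Cd would be calibrated from measurements.
print(sluice_gate_flow(cd=0.61, gate_width_ft=10.0,
                       gate_opening_ft=2.0, upstream_head_ft=12.0))
```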
Automatic specification of reliability models for fault-tolerant computers
NASA Technical Reports Server (NTRS)
Liceaga, Carlos A.; Siewiorek, Daniel P.
1993-01-01
The calculation of reliability measures using Markov models is required for life-critical processor-memory-switch structures that have standby redundancy or that are subject to transient or intermittent faults or repair. The task of specifying these models is tedious and prone to human error because of the large number of states and transitions required in any reasonable system. Therefore, model specification is a major analysis bottleneck, and model verification is a major validation problem. The general unfamiliarity of computer architects with Markov modeling techniques further increases the necessity of automating the model specification. Automation requires a general system description language (SDL). For practicality, this SDL should also provide a high level of abstraction and be easy to learn and use. The first attempt to define and implement an SDL with those characteristics is presented. A program named Automated Reliability Modeling (ARM) was constructed as a research vehicle. The ARM program uses a graphical interface as its SDL, and it outputs a Markov reliability model specification formulated for direct use by programs that generate and evaluate the model.
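For a sense of the models such a tool specifies, consider the smallest repairable system: a two-state Markov chain with failure rate lambda and repair rate mu, whose transient solution gives availability. A minimal sketch with illustrative rates:

```python
# Transient availability of a working/failed Markov model via the matrix
# exponential: p(t) = p0 @ expm(Q t), with generator rows summing to zero.
import numpy as np
from scipy.linalg import expm

lam, mu = 1e-4, 1e-2          # failure and repair rates (per hour), illustrative
Q = np.array([[-lam,  lam],
              [  mu,  -mu]])

p0 = np.array([1.0, 0.0])     # start in the working state
for t in (10.0, 100.0, 1000.0):
    pt = p0 @ expm(Q * t)
    print(f"t={t:7.1f} h  availability={pt[0]:.6f}")
```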
1974-07-01
automated manufacturing processes and a rough technoeconomic evaluation of those concepts. Our evaluation is largely based on estimates; therefore, the...must be subjected to thorough analysis and experimental verification before they can be considered definitive. They are being published at this time...hardware and sensor technology, manufacturing engineering, automation, and economic analysis. Members of this team inspected over thirty manufacturing
Wong, Stephen; Hargreaves, Eric L; Baltuch, Gordon H; Jaggi, Jurg L; Danish, Shabbar F
2012-01-01
Microelectrode recording (MER) is necessary for precision localization of target structures such as the subthalamic nucleus during deep brain stimulation (DBS) surgery. Attempts to automate this process have produced quantitative temporal trends (feature activity vs. time) extracted from mobile MER data. Our goal was to evaluate computational methods of generating spatial profiles (feature activity vs. depth) from temporal trends that would decouple automated MER localization from the clinical procedure and enhance functional localization in DBS surgery. We evaluated two methods of interpolation (standard vs. kernel) that generated spatial profiles from temporal trends. We compared interpolated spatial profiles to true spatial profiles that were calculated with depth windows, using correlation coefficient analysis. Excellent approximation of true spatial profiles is achieved by interpolation. Kernel-interpolated spatial profiles produced superior correlation coefficient values at optimal kernel widths (r = 0.932-0.940) compared to standard interpolation (r = 0.891). The choice of kernel function and kernel width resulted in trade-offs in smoothing and resolution. Interpolation of feature activity to create spatial profiles from temporal trends is accurate and can standardize and facilitate MER functional localization of subcortical structures. The methods are computationally efficient, enhancing localization without imposing additional constraints on the MER clinical procedure during DBS surgery. Copyright © 2012 S. Karger AG, Basel.
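Kernel interpolation here is Nadaraya-Watson smoothing of feature activity onto a regular depth grid, with the kernel width setting the smoothing-resolution trade-off the abstract describes. A minimal sketch with illustrative stand-in data:

```python
# Gaussian-kernel (Nadaraya-Watson) interpolation of feature activity vs. depth.
import numpy as np

def kernel_profile(depths, activity, grid, width_mm=0.3):
    d = grid[:, None] - depths[None, :]        # pairwise depth offsets
    w = np.exp(-0.5 * (d / width_mm) ** 2)     # Gaussian kernel weights
    return (w @ activity) / w.sum(axis=1)      # weighted average per grid depth

depths = np.linspace(-10, 5, 400)              # mm relative to target (illustrative)
activity = np.random.rand(400)                 # stand-in temporal-trend feature
grid = np.linspace(-10, 5, 150)
profile = kernel_profile(depths, activity, grid)
```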
Design Through Manufacturing: The Solid Model-Finite Element Analysis Interface
NASA Technical Reports Server (NTRS)
Rubin, Carol
2002-01-01
State-of-the-art computer aided design (CAD) presently affords engineers the opportunity to create solid models of machine parts reflecting every detail of the finished product. Ideally, in the aerospace industry, these models should fulfill two very important functions: (1) provide numerical control information for automated manufacturing of precision parts, and (2) enable analysts to easily evaluate the stress levels (using finite element analysis - FEA) for all structurally significant parts used in aircraft and space vehicles. Today's state-of-the-art CAD programs perform function (1) very well, providing an excellent model for precision manufacturing. But they do not provide a straightforward and simple means of automating the translation from CAD to FEA models, especially for aircraft-type structures. Presently, the process of preparing CAD models for FEA consumes a great deal of the analyst's time.
Automated Training Evaluation (ATE). Final Report.
ERIC Educational Resources Information Center
Charles, John P.; Johnson, Robert M.
The automation of weapons system training presents the potential for significant savings in training costs in terms of manpower, time, and money. The demonstration of the technical feasibility of automated training through the application of advanced digital computer techniques and advanced training techniques is essential before the application…
PDB_REDO: automated re-refinement of X-ray structure models in the PDB.
Joosten, Robbie P; Salzemann, Jean; Bloch, Vincent; Stockinger, Heinz; Berglund, Ann-Charlott; Blanchet, Christophe; Bongcam-Rudloff, Erik; Combet, Christophe; Da Costa, Ana L; Deleage, Gilbert; Diarena, Matteo; Fabbretti, Roberto; Fettahi, Géraldine; Flegel, Volker; Gisel, Andreas; Kasam, Vinod; Kervinen, Timo; Korpelainen, Eija; Mattila, Kimmo; Pagni, Marco; Reichstadt, Matthieu; Breton, Vincent; Tickle, Ian J; Vriend, Gert
2009-06-01
Structural biology, homology modelling and rational drug design require accurate three-dimensional macromolecular coordinates. However, the coordinates in the Protein Data Bank (PDB) have not all been obtained using the latest experimental and computational methods. In this study a method is presented for automated re-refinement of existing structure models in the PDB. A large-scale benchmark with 16 807 PDB entries showed that they can be improved in terms of fit to the deposited experimental X-ray data as well as in terms of geometric quality. The re-refinement protocol uses TLS models to describe concerted atom movement. The resulting structure models are made available through the PDB_REDO databank (http://www.cmbi.ru.nl/pdb_redo/). Grid computing techniques were used to overcome the computational requirements of this endeavour.
Automated Quantification of Pneumothorax in CT
Do, Synho; Salvaggio, Kristen; Gupta, Supriya; Kalra, Mannudeep; Ali, Nabeel U.; Pien, Homer
2012-01-01
An automated, computer-aided diagnosis (CAD) algorithm for the quantification of pneumothoraces from Multidetector Computed Tomography (MDCT) images has been developed. Algorithm performance was evaluated through comparison to manual segmentation by expert radiologists. A combination of two-dimensional and three-dimensional processing techniques was incorporated to reduce required processing time by two-thirds (as compared to similar techniques). Volumetric measurements on relative pneumothorax size were obtained and the overall performance of the automated method shows an average error of just below 1%. PMID:23082091
NASA Technical Reports Server (NTRS)
Haley, D. C.; Almand, B. J.; Thomas, M. M.; Krauze, L. D.; Gremban, K. D.; Sanborn, J. C.; Kelly, J. H.; Depkovich, T. M.; Wolfe, W. J.; Nguyen, T.
1986-01-01
The purpose of the Robotic Simulation (ROBSIM) program is to provide a broad range of computer capabilities to assist in the design, verification, simulation, and study of robotic systems. ROBSIM is programmed in FORTRAN 77 and implemented on a VAX 11/750 computer using the VMS operating system. The programmer's guide describes the ROBSIM implementation and program logic flow, and the functions and structures of the different subroutines. With the manual and the in-code documentation, an experienced programmer can incorporate additional routines and modify existing ones to add desired capabilities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klopman, G.; Tu, M.
1997-09-01
It is shown that a combination of two programs, MultiCASE and META, can help assess the biodegradability of industrial organic materials in the ecosystem. MultiCASE is an artificial intelligence computer program that has been trained to identify molecular substructures believed to cause or inhibit biodegradation, and META is an expert system trained to predict the aerobic biodegradation products of organic molecules. Together, these two programs can help evaluate the fate of disposed chemicals by estimating their biodegradability and the nature of their biodegradation products under conditions that may model the environment.
NASA Technical Reports Server (NTRS)
Haley, D. C.; Almand, B. J.; Thomas, M. M.; Krauze, L. D.; Gremban, K. D.; Sanborn, J. C.; Kelly, J. H.; Depkovich, T. M.
1984-01-01
The purpose of the Robotics Simulation (ROBSIM) program is to provide a broad range of computer capabilities to assist in the design, verification, simulation, and study of robotic systems. ROBSIM is programmed in FORTRAN 77 and implemented on a VAX 11/750 computer using the VMS operating system. This programmer's guide describes the ROBSIM implementation and program logic flow, and the functions and structures of the different subroutines. With this manual and the in-code documentation, an experienced programmer can incorporate additional routines and modify existing ones to add desired capabilities.
NASA Astrophysics Data System (ADS)
Sharma, Archie; Corona, Enrique; Mitra, Sunanda; Nutter, Brian S.
2006-03-01
Early detection of structural damage to the optic nerve head (ONH) is critical in the diagnosis of glaucoma, because such glaucomatous damage precedes clinically identifiable visual loss. Early detection of glaucoma can prevent progression of the disease and consequent loss of vision. Traditional early detection techniques involve observing changes in the ONH through an ophthalmoscope. Stereo fundus photography is also routinely used to detect subtle changes in the ONH. However, clinical evaluation of stereo fundus photographs suffers from inter- and intra-subject variability. Even the Heidelberg Retina Tomograph (HRT) has not been found to be sufficiently sensitive for early detection. A semi-automated algorithm for quantitative representation of the optic disc and cup contours, which computes accumulated disparities in the disc and cup regions from stereo fundus image pairs, has already been developed using advanced digital image analysis methodologies. A 3-D visualization of the disc and cup is achieved assuming camera geometry. High correlation between computer-generated and manually segmented cup-to-disc ratios in a longitudinal study involving 159 stereo fundus image pairs has already been demonstrated. However, the clinical usefulness of the proposed technique can only be tested by a fully automated algorithm. In this paper, we present a fully automated algorithm for segmentation of optic cup and disc contours from corresponding stereo disparity information. Because this technique does not involve human intervention, it eliminates the subjective variability encountered in currently used clinical methods and provides ophthalmologists with a cost-effective and quantitative method for detecting ONH structural damage for early detection of glaucoma.
The 3D Euler solutions using automated Cartesian grid generation
NASA Technical Reports Server (NTRS)
Melton, John E.; Enomoto, Francis Y.; Berger, Marsha J.
1993-01-01
Viewgraphs on 3-dimensional Euler solutions using automated Cartesian grid generation are presented. Topics covered include: computational fluid dynamics (CFD) and the design cycle; Cartesian grid strategy; structured body fit; grid generation; prolate spheroid; and ONERA M6 wing.
Automated MAD and MIR structure solution
Terwilliger, Thomas C.; Berendzen, Joel
1999-01-01
Obtaining an electron-density map from X-ray diffraction data can be difficult and time-consuming even after the data have been collected, largely because MIR and MAD structure determinations currently require many subjective evaluations of the qualities of trial heavy-atom partial structures before a correct heavy-atom solution is obtained. A set of criteria for evaluating the quality of heavy-atom partial solutions in macromolecular crystallography have been developed. These have allowed the conversion of the crystal structure-solution process into an optimization problem and have allowed its automation. The SOLVE software has been used to solve MAD data sets with as many as 52 selenium sites in the asymmetric unit. The automated structure-solution process developed is a major step towards the fully automated structure-determination, model-building and refinement procedure which is needed for genomic scale structure determinations. PMID:10089316
Shen, Hong-Bin; Yi, Dong-Liang; Yao, Li-Xiu; Yang, Jie; Chou, Kuo-Chen
2008-10-01
In the postgenomic age, with the avalanche of protein sequences generated and relatively slow progress in determining their structures by experiment, it is important to develop automated methods to predict the structure of a protein from its sequence. Membrane proteins are a special group in the protein family that accounts for approximately 30% of all proteins; however, solved membrane protein structures represent less than 1% of known protein structures to date. Although great success has been achieved in developing computational intelligence techniques to predict secondary structures in both globular and membrane proteins, much challenging work remains. In this review article, we first summarize recent progress in automated methods for predicting protein secondary structures, especially in membrane proteins, and then suggest some future directions for this research field.
Generic and Automated Data Evaluation in Analytical Measurement.
Adam, Martin; Fleischer, Heidi; Thurow, Kerstin
2017-04-01
In recent years, automation has become increasingly important in elemental and structural chemical analysis, reducing manual operation, processing time, and human error. Automated analysis generates large numbers of data points, which in turn require fast, automated evaluation. A standardized solution that can handle preprocessed export data from different analytical devices and vendor software, without requiring programming knowledge, is therefore preferable. In modern laboratories, multiple users work on multiple personal computers with different operating systems (e.g., Windows, Macintosh, Linux), and mobile devices such as smartphones and tablets have gained growing importance. The developed software, Project Analytical Data Evaluation (ADE), is therefore implemented as a web application. To transmit the preevaluated data from the device software to the Project ADE, the exported XML report files are detected and the included data are imported into the entities database using the Data Upload software. Different calculation types of a sample within one measurement series (e.g., method validation) are identified using information tags inside the sample name. The results are presented in tables and diagrams at different information levels (general, or detailed for one analyte or sample).
Computational technique for stepwise quantitative assessment of equation correctness
NASA Astrophysics Data System (ADS)
Othman, Nuru'l Izzah; Bakar, Zainab Abu
2017-04-01
Many of the computer-aided mathematics assessment systems available today can implement stepwise correctness checking of a working scheme for solving equations. The computational technique for assessing the correctness of each response in the scheme mainly involves checking mathematical equivalence and providing qualitative feedback. This paper presents a technique, known as the Stepwise Correctness Checking and Scoring (SCCS) technique, that checks the correctness of each equation in terms of structural equivalence and provides quantitative feedback. The technique, which is based on the Multiset framework, adapts techniques from textual information retrieval involving tokenization, document modelling and similarity evaluation. The performance of the SCCS technique was tested using worked solutions to linear algebraic equations in one variable. 350 working schemes comprising 1385 responses were collected using a marking engine prototype developed on the basis of the technique. The results show that both the automated analytical scores and the automated overall scores generated by the marking engine exhibit high percent agreement, high correlation and a high degree of agreement with manual scores, with small average absolute and mixed errors.
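A hedged sketch in the spirit of the multiset-based matching named above; the character-level tokenizer and the Jaccard-style score are simplifying assumptions, not the paper's exact formulation:

```python
from collections import Counter

def tokenize(equation):
    # Split an equation string into single-character tokens, ignoring
    # whitespace (a simplification of a real equation tokenizer).
    return [ch for ch in equation if not ch.isspace()]

def multiset_similarity(student, reference):
    # Score structural equivalence as the overlap of token multisets,
    # normalised by the larger multiset (a Jaccard-style measure).
    s, r = Counter(tokenize(student)), Counter(tokenize(reference))
    overlap = sum((s & r).values())
    return overlap / max(sum(s.values()), sum(r.values()))

# One step of a working scheme for 2x + 3 = 7
print(multiset_similarity("2x = 7 - 3", "2x = 4"))   # partial credit: 0.5
print(multiset_similarity("2x = 4", "2x = 4"))       # full credit: 1.0
```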
Mantovani, Giulia; Pifferi, Massimo; Vozzi, Giovanni
2010-06-01
Patients with primary ciliary dyskinesia (PCD) have structural and/or functional alterations of cilia that imply deficits in mucociliary clearance and various respiratory pathologies. A useful indicator for this difficult diagnosis is the ciliary beat frequency (CBF), which is significantly lower in pathological cases than in physiological ones. Because CBF computation is not rapid, the aim of this study was to propose an automated method to evaluate it directly from videos of ciliated cells. The cells are taken from the inferior nasal turbinates, and videos of ciliary movement are registered and then processed by the developed software. The software consists of a feature-extraction component (written in C++) and a frequency-computation component (written in Matlab). The system was tested both on nasal-cavity samples and on software models, and the results were promising: in a few seconds it computes a frequency that agrees well with values measured by visual methods. The reliability of the computation increases with the quality of the acquisition system, and especially with the sampling frequency. It is concluded that the developed software could be a useful means for PCD diagnosis.
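A rough Python sketch of the core frequency estimate (the actual software is C++/Matlab): take the dominant FFT peak of the mean pixel-intensity signal within a plausible CBF band. The band limits and the synthetic clip are assumptions:

```python
import numpy as np

def estimate_cbf(frames, fps, fmin=3.0, fmax=30.0):
    # Estimate ciliary beat frequency (Hz) as the dominant spectral peak
    # of the mean pixel-intensity signal within a physiological band.
    sig = frames.reshape(len(frames), -1).mean(axis=1)
    sig -= sig.mean()                               # remove DC component
    spectrum = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fps)
    band = (freqs >= fmin) & (freqs <= fmax)
    return freqs[band][np.argmax(spectrum[band])]

# Synthetic clip: 10 Hz beating sampled at 120 frames/s
fps, n = 120, 512
t = np.arange(n) / fps
frames = (0.5 + 0.1 * np.sin(2 * np.pi * 10.0 * t)[:, None, None]
          + 0.01 * np.random.default_rng(1).standard_normal((n, 8, 8)))
print(f"estimated CBF: {estimate_cbf(frames, fps):.1f} Hz")   # ~10.0 Hz
```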
Evolutionary and biological metaphors for engineering design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jakiela, M.
1994-12-31
Since computing became generally available, there has been strong interest in using computers to assist and automate engineering design processes. Specifically, for design optimization and automation, nonlinear programming and artificial intelligence techniques have been extensively studied. New computational techniques, based upon the natural processes of evolution, adaptation, and learning, are showing promise because of their generality and robustness. This presentation describes the use of two such techniques, genetic algorithms and classifier systems, for a variety of engineering design problems. Structural topology optimization, meshing, and general engineering optimization are shown as example applications.
Jacob, Joseph; Bartholmai, Brian J; Rajagopalan, Srinivasan; Brun, Anne Laure; Egashira, Ryoko; Karwoski, Ronald; Kokosi, Maria; Wells, Athol U; Hansell, David M
2016-11-23
To evaluate computer-based computed tomography (CT) analysis (CALIPER) against visual CT scoring and pulmonary function tests (PFTs) when predicting mortality in patients with connective tissue disease-related interstitial lung disease (CTD-ILD), and to identify outcome differences between distinct CTD-ILD groups derived following automated stratification of CALIPER variables. A total of 203 consecutive patients with assorted CTD-ILDs had CT parenchymal patterns evaluated by CALIPER and visual CT scoring: honeycombing, reticular pattern, ground glass opacities, pulmonary vessel volume, emphysema, and traction bronchiectasis. CT scores were evaluated against pulmonary function tests: forced vital capacity, diffusing capacity for carbon monoxide, carbon monoxide transfer coefficient, and composite physiologic index for mortality analysis. Automated stratification of CALIPER-CT variables was evaluated in place of and alongside forced vital capacity and diffusing capacity for carbon monoxide in the ILD gender, age, physiology (ILD-GAP) model using receiver operating characteristic curve analysis. Cox regression analyses identified four independent predictors of mortality: patient age (P < 0.0001), smoking history (P = 0.0003), carbon monoxide transfer coefficient (P = 0.003), and pulmonary vessel volume (P < 0.0001). Automated stratification of CALIPER variables identified three morphologically distinct groups which were stronger predictors of mortality than all CT and functional indices. The stratified-CT model substituted automated stratified groups for functional indices in the ILD-GAP model and maintained model strength (area under curve (AUC) = 0.74, P < 0.0001), versus ILD-GAP (AUC = 0.72, P < 0.0001). Combining automated stratified groups with the ILD-GAP model (stratified CT-GAP model) strengthened predictions of 1- and 2-year mortality: ILD-GAP (AUC = 0.87 and 0.86, respectively); stratified CT-GAP (AUC = 0.89 and 0.88, respectively). CALIPER-derived pulmonary vessel volume is an independent predictor of mortality across all CTD-ILD patients. Furthermore, automated stratification of CALIPER CT variables represents a novel method of prognostication at least as robust as PFTs in CTD-ILD patients.
ERIC Educational Resources Information Center
Varank, Ilhan; Erkoç, M. Fatih; Büyükimdat, Meryem Köskeroglu; Aktas, Mehmet; Yeni, Sabiha; Adigüzel, Tufan; Cömert, Zafer; Esgin, Esad
2014-01-01
The purpose of this study was to investigate the effectiveness of an online automated evaluation and feedback system that assessed students' word processing assignments prepared with Microsoft Office Word. The participants of the study were 119 undergraduate teacher education students, 86 of whom were female and 32 were male, enrolled in different…
NASA Technical Reports Server (NTRS)
Warren, Andrew H.; Arelt, Joseph E.; Lalicata, Anthony L.; Rogers, Karen M.
1993-01-01
A method of efficient and automated thermal-structural processing of very large space structures is presented. The method interfaces the finite element and finite difference techniques. It also results in a pronounced reduction of the quantity of computations, computer resources and manpower required for the task, while assuring the desired accuracy of the results.
Performance evaluation of the NASA/KSC CAD/CAE and office automation LAN's
NASA Technical Reports Server (NTRS)
Zobrist, George W.
1994-01-01
This study's objective is the performance evaluation of the existing CAD/CAE (Computer Aided Design/Computer Aided Engineering) network at NASA/KSC. This evaluation also includes a similar study of the Office Automation network, since it is being planned to integrate this network into the CAD/CAE network. The Microsoft mail facility which is presently on the CAD/CAE network was monitored to determine its present usage. This performance evaluation of the various networks will aid the NASA/KSC network managers in planning for the integration of future workload requirements into the CAD/CAE network and determining the effectiveness of the planned FDDI (Fiber Distributed Data Interface) migration.
NASA Astrophysics Data System (ADS)
Khazaeli, S.; Ravandi, A. G.; Banerji, S.; Bagchi, A.
2016-04-01
Recently, data-driven models for Structural Health Monitoring (SHM) have been of great interest among many researchers. In data-driven models, the sensed data are processed to determine the structural performance and evaluate the damages of an instrumented structure without necessitating the mathematical modeling of the structure. A framework of data-driven models for online assessment of the condition of a structure has been developed here. The developed framework is intended for automated evaluation of the monitoring data and structural performance using Internet technology and resources. The main challenges in developing such a framework include: (a) utilizing the sensor measurements to estimate and localize the induced damage in a structure by means of signal processing and data mining techniques, and (b) optimizing the computing and storage resources with the aid of cloud services. The main focus in this paper is to demonstrate the efficiency of the proposed framework for real-time damage detection of a multi-story shear-building structure in two damage scenarios (change in mass and stiffness) in various locations. Several features are extracted from the sensed data by signal processing techniques and statistical methods. Machine learning algorithms are deployed to select damage-sensitive features as well as classifying the data to trace the anomaly in the response of the structure. Here, the cloud computing resources from Amazon Web Services (AWS) have been used to implement the proposed framework.
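A toy sketch of the data-driven pipeline described above: extract damage-sensitive features by signal processing, then classify with a machine learning algorithm. The specific features, the random-forest classifier and the synthetic records are illustrative assumptions, not the paper's configuration:

```python
import numpy as np
from scipy import signal
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def damage_features(acc, fs):
    # Damage-sensitive features from one acceleration record:
    # RMS level, dominant frequency, and total spectral energy.
    f, pxx = signal.welch(acc, fs=fs, nperseg=256)
    return [np.sqrt(np.mean(acc ** 2)), f[np.argmax(pxx)], np.trapz(pxx, f)]

rng = np.random.default_rng(0)
fs, n = 100.0, 1024
t = np.arange(n) / fs

def record(freq):
    # Synthetic story response; a stiffness loss lowers the natural frequency.
    return np.sin(2 * np.pi * freq * t) + 0.2 * rng.standard_normal(n)

X = [damage_features(record(f), fs) for f in [5.0] * 50 + [4.5] * 50]
y = [0] * 50 + [1] * 50                    # 0 = healthy, 1 = damaged
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(Xtr, ytr)
print("held-out accuracy:", clf.score(Xte, yte))
```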
Automated segmentation of pulmonary structures in thoracic computed tomography scans: a review
NASA Astrophysics Data System (ADS)
van Rikxoort, Eva M.; van Ginneken, Bram
2013-09-01
Computed tomography (CT) is the modality of choice for imaging the lungs in vivo. Sub-millimeter isotropic images of the lungs can be obtained within seconds, allowing the detection of small lesions and detailed analysis of disease processes. The high resolution of thoracic CT and the high prevalence of lung diseases require a high degree of automation in the analysis pipeline. The automated segmentation of pulmonary structures in thoracic CT has been an important research topic for over a decade now. This systematic review provides an overview of current literature. We discuss segmentation methods for the lungs, the pulmonary vasculature, the airways, including airway tree construction and airway wall segmentation, the fissures, the lobes and the pulmonary segments. For each topic, the current state of the art is summarized, and topics for future research are identified.
A Computational Geometry Approach to Automated Pulmonary Fissure Segmentation in CT Examinations
Pu, Jiantao; Leader, Joseph K; Zheng, Bin; Knollmann, Friedrich; Fuhrman, Carl; Sciurba, Frank C; Gur, David
2010-01-01
Identification of pulmonary fissures, which form the boundaries between the lobes in the lungs, may be useful during clinical interpretation of CT examinations to assess the presence and characterize the manifestation of several lung diseases at an early stage. Motivated by the unique nature of the surface shape of pulmonary fissures in three-dimensional space, we developed a new automated scheme using computational geometry methods to detect and segment fissures depicted on CT images. After a geometric modeling of the lung volume using the Marching Cube Algorithm, Laplacian smoothing is applied iteratively to enhance pulmonary fissures by depressing non-fissure structures while smoothing the surfaces of lung fissures. Next, an Extended Gaussian Image based procedure is used to locate the fissures in a statistical manner that approximates the fissures using a set of plane "patches." This approach has several advantages, such as independence from anatomic knowledge of the lung structure except the surface shape of fissures, limited sensitivity to other lung structures, and ease of implementation. The scheme performance was evaluated by two experienced thoracic radiologists using a set of 100 images (slices) randomly selected from 10 screening CT examinations. In this preliminary evaluation, 98.7% and 94.9% of scheme-segmented fissure voxels were within 2 mm of the fissures marked independently by the two radiologists in the testing image dataset. Using the scheme-detected fissures as reference, 89.4% and 90.1% of manually marked fissure points had a distance ≤ 2 mm to the reference, suggesting possible under-segmentation by the scheme. The case-based RMS (root-mean-square) distances ("errors") between our scheme and the radiologists ranged from 1.48±0.92 to 2.04±3.88 mm. The discrepancy in fissure detection results between the automated scheme and either radiologist is smaller in this dataset than the inter-reader variability. PMID:19272987
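A minimal sketch of the iterative Laplacian smoothing step named above; the toy chain topology stands in for a marching-cubes lung surface mesh:

```python
import numpy as np

def laplacian_smooth(vertices, neighbors, iterations=10, lam=0.5):
    # Move each vertex a fraction `lam` toward the centroid of its
    # neighbors; repeated, this depresses small spurious structures
    # while preserving large smooth surfaces such as fissure plates.
    v = vertices.astype(float).copy()
    for _ in range(iterations):
        centroids = np.array([v[nbrs].mean(axis=0) for nbrs in neighbors])
        v += lam * (centroids - v)
    return v

# Toy surface profile: a chain of vertices with one spurious spike
x = np.linspace(0.0, 1.0, 11)
z = np.zeros_like(x)
z[5] = 1.0                                   # non-fissure "bump"
verts = np.column_stack([x, z])
nbrs = [[1]] + [[i - 1, i + 1] for i in range(1, 10)] + [[9]]
print(laplacian_smooth(verts, nbrs)[:, 1].round(3))   # spike is suppressed
```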
Computer Administering of the Psychological Investigations: Set-Relational Representation
NASA Astrophysics Data System (ADS)
Yordzhev, Krasimir
Computer administering of a psychological investigation is the computer representation of the entire procedure of psychological assessment - test construction, test implementation, results evaluation, storage and maintenance of the developed database, its statistical processing, analysis and interpretation. A mathematical description of psychological assessment with the aid of personality tests is discussed in this article. Set theory and relational algebra are used in this description. A relational model of the data needed to design a computer system for automation of certain psychological assessments is given. Some finite sets, and relations on them, which are necessary for creating a personality psychological test, are described. The described model could be used to develop real software for computer administering of any psychological test, with full automation of the whole process: test construction, test implementation, result evaluation, storage of the developed database, statistical processing, analysis and interpretation. A software project for computer administering of personality psychological tests is suggested.
Pathways and Challenges to Innovation in Aerospace
NASA Technical Reports Server (NTRS)
Terrile, Richard J.
2010-01-01
This paper explores impediments to innovation in aerospace and suggests how successful pathways from other industries can be adopted to facilitate greater innovation. By its nature, space exploration would seem to be a ripe field for technical innovation. However, engineering can also be a frustratingly conservative endeavor when the realities of cost and risk are included. Impediments like the "find the fault" engineering culture, the habit of evaluating technical risk almost exclusively in terms of negative impact, the difficulty of accounting for expansive Moore's Law growth when making predictions, and the stove-piped organization of most large aerospace companies and federally funded research laboratories tend to inhibit cross-cutting technical innovation. One successful example of a multi-use, cross-cutting application that can scale with Moore's Law is the Evolutionary Computational Methods (ECM) technique developed at the Jet Propulsion Lab for automated spectral retrieval. Future innovations like computational engineering and automated design optimization can potentially redefine space exploration, but will require learning lessons from successful innovators.
NASA Technical Reports Server (NTRS)
Svalbonas, V.
1973-01-01
The user's manual for the Shell Theory Automated for Rotational Structures (STARS) 2B and 2V (buckling, vibrations) programs is presented. Several features of the program are: (1) arbitrary branching of the shell meridians, (2) arbitrary boundary conditions, (3) minimum input requirements to describe a complex, practical shell-of-revolution structure, and (4) accurate analysis capability using a minimum number of degrees of freedom.
Alhmidi, Heba; Cadnum, Jennifer L; Piedrahita, Christina T; John, Amrita R; Donskey, Curtis J
2018-04-01
Touchscreens are a potential source of pathogen transmission. In our facility, patients and visitors rarely perform hand hygiene after using interactive touchscreen computer kiosks. An automated ultraviolet-C touchscreen disinfection device was effective in reducing bacteriophage MS2, bacteriophage ϕX174, methicillin-resistant Staphylococcus aureus, and Clostridium difficile spores inoculated onto a touchscreen. In simulations, an automated ultraviolet-C touchscreen disinfection device alone or in combination with hand hygiene reduced transfer of the viruses from contaminated touchscreens to fingertips. Published by Elsevier Inc.
Automated Bilingual Circulation System Using PC Local Area Networks.
ERIC Educational Resources Information Center
Iskanderani, A. I.; Anwar, M. A.
1992-01-01
Describes a personal computer and LAN-based automated circulation system capable of handling both Arabic and Latin characters that was developed for use at King Abdullaziz University (Jeddah, Saudi Arabia). Outlines system requirements, system structure, hardware needs, and individual functional modules of the system. Numerous examples and flow…
ESTIMATION OF PHYSIOCHEMICAL PROPERTIES OF ORGANIC COMPOUNDS BY SPARC
The computer program SPARC (SPARC Performs Automated Reasoning in Chemistry) has been under development for several years to estimate physical properties and chemical reactivity parameters of organic compounds strictly from molecular structure. SPARC uses computational algorithms...
An, Gao; Hong, Li; Zhou, Xiao-Bing; Yang, Qiong; Li, Mei-Qing; Tang, Xiang-Yang
2017-03-01
We investigated and compared the functionality of two 3D visualization software packages, provided by a CT vendor and a third-party vendor, respectively. Using surgical anatomical measurement as a baseline, we evaluated the accuracy of 3D visualization and verified their utility in computer-aided anatomical analysis. The study cohort consisted of 50 adult cadavers fixed with the classical formaldehyde method. The computer-aided anatomical analysis was based on CT images (in DICOM format) acquired by helical scan with contrast enhancement, using a CT vendor-provided 3D visualization workstation (Syngo) and a third-party 3D visualization software package (Mimics) installed on a PC. Automated and semi-automated segmentations were utilized in the 3D visualization workstation and software, respectively. The functionality and efficiency of the automated and semi-automated segmentation methods were compared. Using surgical anatomical measurement as a baseline, the accuracy of 3D visualization based on automated and semi-automated segmentations was quantitatively compared. In semi-automated segmentation, the Mimics 3D visualization software outperformed the Syngo 3D visualization workstation. No significant difference was observed in anatomical data measurement between the Syngo 3D visualization workstation and the Mimics 3D visualization software (P>0.05). Both the Syngo 3D visualization workstation provided by a CT vendor and the Mimics 3D visualization software by a third-party vendor possessed the needed functionality, efficiency and accuracy for computer-aided anatomical analysis. Copyright © 2016 Elsevier GmbH. All rights reserved.
Automation to improve efficiency of field expedient injury prediction screening.
Teyhen, Deydre S; Shaffer, Scott W; Umlauf, Jon A; Akerman, Raymond J; Canada, John B; Butler, Robert J; Goffar, Stephen L; Walker, Michael J; Kiesel, Kyle B; Plisky, Phillip J
2012-07-01
Musculoskeletal injuries are a primary source of disability in the U.S. Military. Physical training and sports-related activities account for up to 90% of all injuries, and 80% of these injuries are considered overuse in nature. As a result, there is a need to develop an evidence-based musculoskeletal screen that can assist with injury prevention. The purpose of this study was to assess the capability of an automated system to improve the efficiency of field expedient tests that may help predict injury risk and provide corrective strategies for deficits identified. The field expedient tests include survey questions and measures of movement quality, balance, trunk stability, power, mobility, and foot structure and mobility. Data entry for these tests was automated using handheld computers, barcode scanning, and netbook computers. An automated algorithm for injury risk stratification and mitigation techniques was run on a server computer. Without automation support, subjects were assessed in 84.5 ± 9.1 minutes per subject compared with 66.8 ± 6.1 minutes per subject with automation and 47.1 ± 5.2 minutes per subject with automation and process improvement measures (p < 0.001). The average time to manually enter the data was 22.2 ± 7.4 minutes per subject. An additional 11.5 ± 2.5 minutes per subject was required to manually assign an intervention strategy. Automation of this injury prevention screening protocol using handheld devices and netbook computers allowed for real-time data entry and enhanced the efficiency of injury screening, risk stratification, and prescription of a risk mitigation strategy.
ESTIMATION OF PHYSICAL PROPERTIES AND CHEMICAL REACTIVITY PARAMETERS OF ORGANIC COMPOUNDS
The computer program SPARC (SPARC Performs Automated Reasoning in Chemistry) has been under development for several years to estimate physical properties and chemical reactivity parameters of organic compounds strictly from molecular structure. SPARC uses computational algorithms ...
A Computational Workflow for the Automated Generation of Models of Genetic Designs.
Misirli, Göksel; Nguyen, Tramy; McLaughlin, James Alastair; Vaidyanathan, Prashant; Jones, Timothy S; Densmore, Douglas; Myers, Chris; Wipat, Anil
2018-06-05
Computational models are essential to engineer predictable biological systems and to scale up this process for complex systems. Computational modeling often requires expert knowledge and data to build models. Clearly, manual creation of models is not scalable for large designs. Despite several automated model construction approaches, computational methodologies to bridge knowledge in design repositories and the process of creating computational models have still not been established. This paper describes a workflow for automatic generation of computational models of genetic circuits from data stored in design repositories using existing standards. This workflow leverages the software tool SBOLDesigner to build structural models that are then enriched by the Virtual Parts Repository API using Systems Biology Open Language (SBOL) data fetched from the SynBioHub design repository. The iBioSim software tool is then utilized to convert this SBOL description into a computational model encoded using the Systems Biology Markup Language (SBML). Finally, this SBML model can be simulated using a variety of methods. This workflow provides synthetic biologists with easy to use tools to create predictable biological systems, hiding away the complexity of building computational models. This approach can further be incorporated into other computational workflows for design automation.
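A schematic of the workflow's control flow only; the helper functions below are hypothetical stand-ins for SynBioHub retrieval, the Virtual Parts Repository enrichment, and the iBioSim SBOL-to-SBML conversion, not real APIs:

```python
def fetch_design(design_uri):
    # Hypothetical stand-in for retrieving an SBOL description from SynBioHub.
    return {"uri": design_uri, "parts": ["promoter", "rbs", "cds", "terminator"]}

def enrich_with_virtual_parts(design):
    # Hypothetical stand-in for the Virtual Parts Repository API: attach
    # a kinetic sub-model to each structural part of the design.
    design["models"] = {part: part + "_kinetics" for part in design["parts"]}
    return design

def to_sbml(design):
    # Hypothetical stand-in for the iBioSim conversion of the enriched
    # design to an SBML model ready for simulation.
    return ("<sbml><!-- model of %s with %d sub-models --></sbml>"
            % (design["uri"], len(design["models"])))

print(to_sbml(enrich_with_virtual_parts(fetch_design("urn:example:toggle_switch"))))
```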
The importance of employing computational resources for the automation of drug discovery.
Rosales-Hernández, Martha Cecilia; Correa-Basurto, José
2015-03-01
The application of computational tools to drug discovery helps researchers to design and evaluate new drugs swiftly and with reduced economic resources. To discover new potential drugs, computational chemistry incorporates automation for obtaining biological data such as absorption, distribution, metabolism, excretion and toxicity (ADMET), as well as drug mechanisms of action. This editorial looks at examples of these computational tools, including docking, molecular dynamics simulation, virtual screening, quantum chemistry, quantitative structure-activity relationship, principal component analysis and drug screening workflow systems. The authors then provide their perspectives on the importance of these techniques for drug discovery. Computational tools help researchers to design and discover new drugs for the treatment of several human diseases without side effects, allowing the evaluation of millions of compounds at reduced cost in both time and economic resources. The problem is that operating each program is difficult; one is required to use several programs and understand each of the properties being tested. In the future, it is possible that a single computer and software program will be capable of evaluating the complete properties (mechanisms of action and ADMET properties) of ligands. It is also possible that, after submitting one target, this software will be capable of suggesting potential compounds along with ways to synthesize them, and presenting biological models for testing.
Computer-Automated Approach for Scoring Short Essays in an Introductory Statistics Course
ERIC Educational Resources Information Center
Zimmerman, Whitney Alicia; Kang, Hyun Bin; Kim, Kyung; Gao, Mengzhao; Johnson, Glenn; Clariana, Roy; Zhang, Fan
2018-01-01
Over two semesters short essay prompts were developed for use with the Graphical Interface for Knowledge Structure (GIKS), an automated essay scoring system. Participants were students in an undergraduate-level online introductory statistics course. The GIKS compares students' writing samples with an expert's to produce keyword occurrence and…
The application of SSADM to modelling the logical structure of proteins.
Saldanha, J; Eccles, J
1991-10-01
A logical design that describes the overall structure of proteins, together with a more detailed design describing secondary and some supersecondary structures, has been constructed using the computer-aided software engineering (CASE) tool, Auto-mate. Auto-mate embodies the philosophy of the Structured Systems Analysis and Design Method (SSADM) which enables the logical design of computer systems. Our design will facilitate the building of large information systems, such as databases and knowledgebases in the field of protein structure, by the derivation of system requirements from our logical model prior to producing the final physical system. In addition, the study has highlighted the ease of employing SSADM as a formalism in which to conduct the transferral of concepts from an expert into a design for a knowledge-based system that can be implemented on a computer (the knowledge-engineering exercise). It has been demonstrated how SSADM techniques may be extended for the purpose of modelling the constituent Prolog rules. This facilitates the integration of the logical system design model with the derived knowledge-based system.
New reflective symmetry design capability in the JPL-IDEAS Structure Optimization Program
NASA Technical Reports Server (NTRS)
Strain, D.; Levy, R.
1986-01-01
The JPL-IDEAS antenna structure analysis and design optimization computer program was modified to process half structure models of symmetric structures subjected to arbitrary external static loads, synthesize the performance, and optimize the design of the full structure. Significant savings in computation time and cost (more than 50%) were achieved compared to the cost of full model computer runs. The addition of the new reflective symmetry analysis design capabilities to the IDEAS program allows processing of structure models whose size would otherwise prevent automated design optimization. The new program produced synthesized full model iterative design results identical to those of actual full model program executions at substantially reduced cost, time, and computer storage.
An Overview of Automated Scoring of Essays
ERIC Educational Resources Information Center
Dikli, Semire
2006-01-01
Automated Essay Scoring (AES) is defined as the computer technology that evaluates and scores the written prose (Shermis & Barrera, 2002; Shermis & Burstein, 2003; Shermis, Raymat, & Barrera, 2003). AES systems are mainly used to overcome time, cost, reliability, and generalizability issues in writing assessment (Bereiter, 2003; Burstein,…
Automated lettuce nutrient solution management using an array of ion-selective electrodes
USDA-ARS?s Scientific Manuscript database
Automated sensing and control of macronutrients in hydroponic solutions would allow more efficient management of nutrients for crop growth in closed systems. This paper describes the development and evaluation of a computer-controlled nutrient management system with an array of ion-selective electro...
Bindewald, Eckart; Grunewald, Calvin; Boyle, Brett; O'Connor, Mary; Shapiro, Bruce A
2008-10-01
One approach to designing RNA nanoscale structures is to use known RNA structural motifs such as junctions, kissing loops or bulges and to construct a molecular model by connecting these building blocks with helical struts. We previously developed an algorithm for detecting internal loops, junctions and kissing loops in RNA structures. Here we present algorithms for automating or assisting many of the steps that are involved in creating RNA structures from building blocks: (1) assembling building blocks into nanostructures using either a combinatorial search or constraint satisfaction; (2) optimizing RNA 3D ring structures to improve ring closure; (3) sequence optimisation; (4) creating a unique non-degenerate RNA topology descriptor. This effectively creates a computational pipeline for generating molecular models of RNA nanostructures and more specifically RNA ring structures with optimized sequences from RNA building blocks. We show several examples of how the algorithms can be utilized to generate RNA tecto-shapes.
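A toy illustration of the combinatorial-search step (1) above: choose a helix length for each strut of a closed ring so the accumulated helical twist approaches a whole number of turns. The twist value and closure criterion are crude assumptions standing in for true 3D ring-closure optimization:

```python
from itertools import product

TWIST_PER_BP = 34.3   # assumed average helical twist of RNA, degrees per bp

def ring_closure_error(lengths):
    # Distance (in degrees) of the accumulated twist from the nearest
    # whole number of turns: a crude proxy for ring-closure quality.
    total_twist = sum(lengths) * TWIST_PER_BP
    return abs((total_twist + 180) % 360 - 180)

candidates = range(6, 12)                 # allowed strut lengths in bp
best = min(product(candidates, repeat=4), key=ring_closure_error)
print(best, f"error = {ring_closure_error(best):.1f} deg")
```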
Panjikar, Santosh; Parthasarathy, Venkataraman; Lamzin, Victor S; Weiss, Manfred S; Tucker, Paul A
2005-04-01
The EMBL-Hamburg Automated Crystal Structure Determination Platform is a system that combines a number of existing macromolecular crystallographic computer programs and several decision-makers into a software pipeline for automated and efficient crystal structure determination. The pipeline can be invoked as soon as X-ray data from derivatized protein crystals have been collected and processed. It is controlled by a web-based graphical user interface for data and parameter input, and for monitoring the progress of structure determination. A large number of possible structure-solution paths are encoded in the system and the optimal path is selected by the decision-makers as the structure solution evolves. The processes have been optimized for speed so that the pipeline can be used effectively for validating the X-ray experiment at a synchrotron beamline.
A mixed optimization method for automated design of fuselage structures.
NASA Technical Reports Server (NTRS)
Sobieszczanski, J.; Loendorf, D.
1972-01-01
A procedure for automating the design of transport aircraft fuselage structures has been developed and implemented in the form of an operational program. The structure is designed in two stages. First, an overall distribution of structural material is obtained by means of optimality criteria to meet strength and displacement constraints. Subsequently, the detailed design of selected rings and panels consisting of skin and stringers is performed by mathematical optimization accounting for a set of realistic design constraints. The practicality and computer efficiency of the procedure are demonstrated on cylindrical and area-ruled large transport fuselages.
ERIC Educational Resources Information Center
Pepyne, Edward W.
This project attempts to develop, evaluate and implement methods and materials for the automated analysis of the stylistic characteristics of counselor verbal behavior and its effects on client verbal behavior within the counseling interview. To achieve this purpose, the project designed a system of computer programs, the DISCOURSE ANALYSIS…
A Flexible and Non-intrusive Approach for Computing Complex Structural Coverage Metrics
NASA Technical Reports Server (NTRS)
Whalen, Michael W.; Person, Suzette J.; Rungta, Neha; Staats, Matt; Grijincu, Daniela
2015-01-01
Software analysis tools and techniques often leverage structural code coverage information to reason about the dynamic behavior of software. Existing techniques instrument the code with the required structural obligations and then monitor the execution of the compiled code to report coverage. Instrumentation-based approaches often incur considerable runtime overhead for complex structural coverage metrics such as Modified Condition/Decision Coverage (MC/DC). Code instrumentation, in general, has to be approached with great care to ensure it does not modify the behavior of the original code. Furthermore, instrumented code cannot be used in conjunction with other analyses that reason about the structure and semantics of the code under test. In this work, we introduce a non-intrusive preprocessing approach for computing structural coverage information. It uses a static partial evaluation of the decisions in the source code and a source-to-bytecode mapping to generate the information necessary to efficiently track structural coverage metrics during execution. Our technique is flexible; the results of the preprocessing can be used by a variety of coverage-driven software analysis tasks, including automated analyses that are not possible for instrumented code. Experimental results in the context of symbolic execution show the efficiency and flexibility of our non-intrusive approach for computing code coverage information.
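A small sketch of the MC/DC obligation itself (not the authors' source-to-bytecode machinery): a condition is covered when two observed evaluations differ in that condition alone and flip the decision outcome:

```python
from itertools import combinations

def mcdc_pairs(observations):
    # Given observed evaluations of a decision as (condition tuple, outcome),
    # return the indices of conditions shown to independently affect the
    # outcome: pairs differing in exactly one condition with flipped results.
    shown = set()
    for (c1, o1), (c2, o2) in combinations(observations, 2):
        diff = [i for i in range(len(c1)) if c1[i] != c2[i]]
        if len(diff) == 1 and o1 != o2:
            shown.add(diff[0])
    return shown

# Decision under test: (a and b) or c, with conditions (a, b, c)
decision = lambda a, b, c: (a and b) or c
tests = [(True, True, False), (False, True, False),
         (True, False, False), (False, False, True), (False, False, False)]
obs = [((a, b, c), decision(a, b, c)) for a, b, c in tests]
print("conditions with demonstrated independence:", mcdc_pairs(obs))  # {0, 1, 2}
```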
NASA Astrophysics Data System (ADS)
Yu, H.; Barriga, S.; Agurto, C.; Zamora, G.; Bauman, W.; Soliz, P.
2012-03-01
Retinal vasculature is one of the most important anatomical structures in digital retinal photographs. Accurate segmentation of retinal blood vessels is an essential task in automated analysis of retinopathy. This paper presents a new and effective vessel segmentation algorithm that features computational simplicity and fast implementation. The method uses morphological pre-processing to decrease the disturbance of bright structures and lesions before vessel extraction. Next, a vessel probability map is generated by computing the eigenvalues of the second derivatives of the Gaussian-filtered image at multiple scales. Then, second-order local entropy thresholding is applied to segment the vessel map. Lastly, a rule-based decision step, which measures the geometric shape difference between vessels and lesions, is applied to reduce false positives. The algorithm is evaluated on the low-resolution DRIVE and STARE databases and on the publicly available high-resolution image database from Friedrich-Alexander University Erlangen-Nuremberg (Germany). The proposed method achieved performance comparable to state-of-the-art unsupervised vessel segmentation methods, at competitive, faster speed, on the DRIVE and STARE databases. For the high-resolution fundus image database, the proposed algorithm outperforms an existing approach in both performance and speed. This efficiency and robustness make the blood vessel segmentation method described here suitable for broad application in automated analysis of retinal images.
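A condensed sketch of the multi-scale Hessian step: eigenvalues of second derivatives of a Gaussian-filtered image yield a vessel probability map. The Frangi-style response, the bright-vessel assumption (invert a fundus image's green channel first) and the parameters are assumptions; the entropy thresholding and rule-based steps are omitted:

```python
import numpy as np
from scipy import ndimage

def vessel_map(image, scales=(1.0, 2.0, 3.0)):
    # 2D vesselness: a bright tubular structure has one strongly negative
    # Hessian eigenvalue (across the vessel) and one near zero (along it).
    out = np.zeros_like(image, dtype=float)
    for s in scales:
        Hxx = ndimage.gaussian_filter(image, s, order=(0, 2)) * s**2
        Hyy = ndimage.gaussian_filter(image, s, order=(2, 0)) * s**2
        Hxy = ndimage.gaussian_filter(image, s, order=(1, 1)) * s**2
        # Eigenvalues of [[Hxx, Hxy], [Hxy, Hyy]], ordered |l1| <= |l2|
        tmp = np.sqrt((Hxx - Hyy) ** 2 + 4 * Hxy**2)
        l1, l2 = (Hxx + Hyy - tmp) / 2, (Hxx + Hyy + tmp) / 2
        swap = np.abs(l1) > np.abs(l2)
        l1[swap], l2[swap] = l2[swap], l1[swap]
        response = np.where(l2 < 0, np.abs(l2) - np.abs(l1), 0.0)
        out = np.maximum(out, response)   # keep strongest response per pixel
    return out

rng = np.random.default_rng(0)
img = rng.normal(0, 0.05, (64, 64))
img[30:33, :] += 1.0                      # a bright horizontal "vessel"
vm = vessel_map(img)
print(vm[31, 32] > vm[10, 10])            # True: the ridge responds strongly
```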
Design Through Manufacturing: The Solid Model - Finite Element Analysis Interface
NASA Technical Reports Server (NTRS)
Rubin, Carol
2003-01-01
State-of-the-art computer aided design (CAD) presently affords engineers the opportunity to create solid models of machine parts which reflect every detail of the finished product. Ideally, these models should fulfill two very important functions: (1) they must provide numerical control information for automated manufacturing of precision parts, and (2) they must enable analysts to easily evaluate the stress levels (using finite element analysis - FEA) for all structurally significant parts used in space missions. Today's state-of-the-art CAD programs perform function (1) very well, providing an excellent model for precision manufacturing. But they do not provide a straightforward and simple means of automating the translation from CAD to FEA models, especially for aircraft-type structures. The research performed during the fellowship period investigated the transition process from the solid CAD model to the FEA stress analysis model with the final goal of creating an automatic interface between the two. During the period of the fellowship a detailed multi-year program for the development of such an interface was created. The ultimate goal of this program will be the development of a fully parameterized automatic ProE/FEA translator for parts and assemblies, with the incorporation of data base management into the solution, and ultimately including computational fluid dynamics and thermal modeling in the interface.
NASA Astrophysics Data System (ADS)
Rainieri, Carlo; Fabbrocino, Giovanni
2015-08-01
In the last few decades, large research efforts have been devoted to the development of methods for automated detection of damage and degradation phenomena at an early stage. Modal-based damage detection techniques are well-established methods, whose effectiveness for Level 1 (existence) and Level 2 (location) damage detection is demonstrated by several studies. The indirect estimation of tensile loads in cables and tie-rods is another attractive application of vibration measurements. It provides interesting opportunities for cheap and fast quality checks in the construction phase, as well as for safety evaluations and structural maintenance over the structure's lifespan. However, the lack of automated modal identification and tracking procedures has long been a relevant drawback to the extensive application of the above-mentioned techniques in engineering practice. An increasing number of field applications of modal-based structural health and performance assessment have appeared following the development of several automated output-only modal identification procedures in the last few years. Nevertheless, additional efforts are still needed to enhance the robustness of automated modal identification algorithms, control the computational effort and improve the reliability of modal parameter estimates (in particular, damping). This paper deals with an original algorithm for automated output-only modal parameter estimation. Particular emphasis is given to extensive validation of the algorithm on simulated and real datasets in view of continuous monitoring applications. The results point out that the algorithm is fairly robust and demonstrate its ability to provide accurate and precise estimates of the modal parameters, including damping ratios. As a result, it has been used to develop systems for vibration-based estimation of tensile loads in cables and tie-rods. Promising results have been achieved for non-destructive testing as well as continuous monitoring purposes; they are documented in the last sections of the paper.
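For orientation, a deliberately simple stand-in for automated output-only modal identification: peak-picking on a Welch power spectral density. The paper's algorithm is more sophisticated (and also estimates damping); this only illustrates automated natural-frequency extraction:

```python
import numpy as np
from scipy import signal

def pick_modes(acc, fs, n_modes=2):
    # Pick the n strongest PSD peaks as natural-frequency estimates;
    # the prominence threshold filters out small noise peaks.
    f, pxx = signal.welch(acc, fs=fs, nperseg=2048)
    peaks, _ = signal.find_peaks(pxx, prominence=pxx.max() * 0.05)
    strongest = peaks[np.argsort(pxx[peaks])[-n_modes:]]
    return np.sort(f[strongest])

# Ambient response of a 2-mode system (1.2 Hz and 3.4 Hz) in noise
fs, n = 50.0, 60000
t = np.arange(n) / fs
rng = np.random.default_rng(2)
acc = (np.sin(2 * np.pi * 1.2 * t + rng.uniform(0, 2 * np.pi))
       + 0.5 * np.sin(2 * np.pi * 3.4 * t + rng.uniform(0, 2 * np.pi))
       + 0.3 * rng.standard_normal(n))
print(pick_modes(acc, fs))   # ~[1.2, 3.4]
```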
Fragon: rapid high-resolution structure determination from ideal protein fragments.
Jenkins, Huw T
2018-03-01
Correctly positioning ideal protein fragments by molecular replacement presents an attractive method for obtaining preliminary phases when no template structure for molecular replacement is available. This has been exploited in several existing pipelines. This paper presents a new pipeline, named Fragon, in which fragments (ideal α-helices or β-strands) are placed using Phaser and the phases calculated from these coordinates are then improved by the density-modification methods provided by ACORN. The reliable scoring algorithm provided by ACORN identifies success. In these cases, the resulting phases are usually of sufficient quality to enable automated model building of the entire structure. Fragon was evaluated against two test sets comprising mixed α/β folds and all-β folds at resolutions between 1.0 and 1.7 Å. Success rates of 61% for the mixed α/β test set and 30% for the all-β test set were achieved. In almost 70% of successful runs, fragment placement and density modification took less than 30 min on relatively modest four-core desktop computers. In all successful runs the best set of phases enabled automated model building with ARP/wARP to complete the structure.
NASA Astrophysics Data System (ADS)
Odry, Benjamin L.; Kiraly, Atilla P.; Novak, Carol L.; Naidich, David P.; Lerallut, Jean-Francois
2006-03-01
Pulmonary diseases such as bronchiectasis, asthma, and emphysema are characterized by abnormalities in airway dimensions. Multi-slice computed tomography (MSCT) has become one of the primary means to depict these abnormalities, as the availability of high-resolution near-isotropic data makes it possible to evaluate airways at oblique angles to the scanner plane. However, currently, clinical evaluation of airways is typically limited to subjective visual inspection only: systematic evaluation of the airways to take advantage of high-resolution data has not proved practical without automation. We present an automated method to quantitatively evaluate airway lumen diameter, wall thickness and broncho-arterial ratios. In addition, our method provides 3D visualization of these values, graphically illustrating the location and extent of disease. Our algorithm begins by automatic airway segmentation to extract paths to the distal airways, and to create a map of airway diameters. Normally, airway diameters decrease as paths progress distally; failure to taper indicates abnormal dilatation. Our approach monitors airway lumen diameters along each airway path in order to detect abnormal profiles, allowing even subtle degrees of pathologic dilatation to be identified. Our method also systematically computes the broncho-arterial ratio at every terminal branch of the tree model, as a ratio above 1 indicates potentially abnormal bronchial dilatation. Finally, the airway wall thickness is computed at corresponding locations. These measurements are used to highlight abnormal branches for closer inspection, and can be summed to compute a quantitative global score for the entire airway tree, allowing reproducible longitudinal assessment of disease severity. Preliminary tests on patients diagnosed with bronchiectasis demonstrated rapid identification of lack of tapering, which also was confirmed by corresponding demonstration of elevated broncho-arterial ratios.
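A minimal sketch of the tapering check described above: along a proximal-to-distal path, the lumen diameter should not exceed the running minimum, and the broncho-arterial ratio should stay below 1. The tolerance value is an assumed parameter:

```python
import numpy as np

def flag_dilatation(lumen_mm, artery_mm=None, tol=0.10):
    # Flag abnormal segments along one airway path: failure to taper
    # (lumen exceeds the running minimum by more than `tol`) or a
    # broncho-arterial ratio above 1 where artery diameters are known.
    lumen = np.asarray(lumen_mm, dtype=float)
    running_min = np.minimum.accumulate(lumen)
    no_taper = lumen > running_min * (1.0 + tol)
    ba_abnormal = (lumen / artery_mm > 1.0) if artery_mm is not None \
                  else np.zeros_like(no_taper)
    return no_taper | ba_abnormal

# Diameters sampled proximal-to-distal; indices 3-4 dilate instead of tapering
path = [8.0, 6.5, 5.0, 6.2, 6.4, 3.0]
print(flag_dilatation(path))   # [False False False  True  True False]
```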
Automation of the CFD Process on Distributed Computing Systems
NASA Technical Reports Server (NTRS)
Tejnil, Ed; Gee, Ken; Rizk, Yehia M.
2000-01-01
A script system was developed to automate and streamline portions of the CFD process. The system was designed to facilitate the use of CFD flow solvers on supercomputer and workstation platforms within a parametric design event. Integrating solver pre- and postprocessing phases, the fully automated ADTT script system marshalled the required input data, submitted the jobs to available computational resources, and processed the resulting output data. A number of codes were incorporated into the script system, which itself was part of a larger integrated design environment software package. The IDE and scripts were used in a design event involving a wind tunnel test. This experience highlighted the need for efficient data and resource management in all parts of the CFD process. To facilitate the use of CFD methods to perform parametric design studies, the script system was developed using UNIX shell and Perl languages. The goal of the work was to minimize the user interaction required to generate the data necessary to fill a parametric design space. The scripts wrote out the required input files for the user-specified flow solver, transferred all necessary input files to the computational resource, submitted and tracked the jobs using the resource queuing structure, and retrieved and post-processed the resulting dataset. For computational resources that did not run queueing software, the script system established its own simple first-in-first-out queueing structure to manage the workload. A variety of flow solvers were incorporated in the script system, including INS2D, PMARC, TIGER and GASP. Adapting the script system to a new flow solver was made easier through the use of object-oriented programming methods. The script system was incorporated into an ADTT integrated design environment and evaluated as part of a wind tunnel experiment. The system successfully generated the data required to fill the desired parametric design space. This stressed the computational resources required to compute and store the information. The scripts were continually modified to improve the utilization of the computational resources and reduce the likelihood of data loss due to failures. An ad-hoc file server was created to manage the large amount of data being generated as part of the design event. Files were stored and retrieved as needed to create new jobs and analyze the results. Additional information is contained in the original.
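The original scripts were written in UNIX shell and Perl; a Python sketch of the simple first-in-first-out queue idea for hosts without queueing software:

```python
import subprocess
import time
from collections import deque

class FifoQueue:
    # Minimal stand-in for the script system's own first-in-first-out
    # queue: run at most `slots` jobs at once, oldest waiting job first.
    def __init__(self, slots=2):
        self.slots, self.waiting, self.running = slots, deque(), []

    def submit(self, cmd):
        self.waiting.append(cmd)

    def poll(self):
        self.running = [p for p in self.running if p.poll() is None]
        while self.waiting and len(self.running) < self.slots:
            self.running.append(subprocess.Popen(self.waiting.popleft()))

q = FifoQueue(slots=2)
for case in range(4):   # four points of a parametric design space
    # assumes a `python` executable on PATH; any solver command works here
    q.submit(["python", "-c", f"print('case {case} done')"])
while q.waiting or q.running:
    q.poll()
    time.sleep(0.2)
```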
Ogata, Y; Nishizawa, K
1995-10-01
An automated smear counting and data processing system for a life science laboratory was developed, using a notebook computer, to facilitate routine surveys and eliminate human errors. The system was composed of a personal computer, a liquid scintillation counter and a well-type NaI(Tl) scintillation counter. The radioactivity of smear samples was automatically measured by these counters. The personal computer received raw signals from the counters through an RS-232C interface. The software evaluated the surface density of each radioisotope and printed out that value, along with other items, as a report. The software was programmed in Pascal. The system was successfully applied to routine contamination surveys in our facility.
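A sketch of the surface-density evaluation using the conventional smear-survey formula; the 100 cm2 wiped area and 10% removal fraction are standard survey assumptions, not values reported in the paper:

```python
def surface_density(gross_cpm, bkg_cpm, efficiency, area_cm2=100.0,
                    removal_fraction=0.1):
    # Removable surface contamination in Bq/cm^2 from a smear count rate:
    # net counts per second, corrected for counting efficiency, the
    # fraction of activity removed by the wipe, and the wiped area.
    net_cps = (gross_cpm - bkg_cpm) / 60.0
    return net_cps / (efficiency * removal_fraction * area_cm2)

# e.g. 450 cpm gross, 30 cpm background, 35% counting efficiency
print(f"{surface_density(450, 30, 0.35):.3f} Bq/cm^2")   # ~2.000
```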
NASA Astrophysics Data System (ADS)
Theveneau, P.; Baker, R.; Barrett, R.; Beteva, A.; Bowler, M. W.; Carpentier, P.; Caserotto, H.; de Sanctis, D.; Dobias, F.; Flot, D.; Guijarro, M.; Giraud, T.; Lentini, M.; Leonard, G. A.; Mattenet, M.; McCarthy, A. A.; McSweeney, S. M.; Morawe, C.; Nanao, M.; Nurizzo, D.; Ohlsson, S.; Pernot, P.; Popov, A. N.; Round, A.; Royant, A.; Schmid, W.; Snigirev, A.; Surr, J.; Mueller-Dieckmann, C.
2013-03-01
Automation and advances in technology are the key elements in addressing the steadily increasing complexity of Macromolecular Crystallography (MX) experiments. Much of this complexity is due to the inter- and intra-crystal heterogeneity in diffraction quality often observed for crystals of multi-component macromolecular assemblies or membrane proteins. Such heterogeneity makes high-throughput sample evaluation an important and necessary tool for increasing the chances of a successful structure determination. The introduction at the ESRF of automatic sample changers in 2005 dramatically increased the number of samples that were tested for diffraction quality. This "first generation" of automation, coupled with advances in software aimed at optimising data collection strategies in MX, resulted in a three-fold increase in the number of crystal structures elucidated per year using data collected at the ESRF. In addition, sample evaluation can be further complemented using small angle scattering experiments on the newly constructed bioSAXS facility on BM29 and the micro-spectroscopy facility (ID29S). The construction of a second generation of automated facilities on the MASSIF (Massively Automated Sample Screening Integrated Facility) beam lines will build on these advances and should provide a paradigm shift in how MX experiments are carried out which will benefit the entire Structural Biology community.
Designing automation for human use: empirical studies and quantitative models.
Parasuraman, R
2000-07-01
An emerging knowledge base of human performance research can provide guidelines for designing automation that can be used effectively by human operators of complex systems. Which functions should be automated and to what extent in a given system? A model for types and levels of automation that provides a framework and an objective basis for making such choices is described. The human performance consequences of particular types and levels of automation constitute primary evaluative criteria for automation design when using the model. Four human performance areas are considered--mental workload, situation awareness, complacency and skill degradation. Secondary evaluative criteria include such factors as automation reliability, the risks of decision/action consequences and the ease of systems integration. In addition to this qualitative approach, quantitative models can inform design. Several computational and formal models of human interaction with automation that have been proposed by various researchers are reviewed. An important future research need is the integration of qualitative and quantitative approaches. Application of these models provides an objective basis for designing automation for effective human use.
Automated CPX support system preliminary design phase
NASA Technical Reports Server (NTRS)
Bordeaux, T. A.; Carson, E. T.; Hepburn, C. D.; Shinnick, F. M.
1984-01-01
The development of the Distributed Command and Control System (DCCS) is discussed. The development of an automated C2 system stimulated the development of an automated command post exercise (CPX) support system to provide a more realistic stimulus to DCCS than could be achieved with the existing manual system. An automated CPX system to support corps-level exercise was designed. The effort comprised four tasks: (1) collecting and documenting user requirements; (2) developing a preliminary system design; (3) defining a program plan; and (4) evaluating the suitability of the TRASANA FOURCE computer model.
An Automated Method for High-Definition Transcranial Direct Current Stimulation Modeling*
Huang, Yu; Su, Yuzhuo; Rorden, Christopher; Dmochowski, Jacek; Datta, Abhishek; Parra, Lucas C.
2014-01-01
Targeted transcranial stimulation with electric currents requires accurate models of the current flow from scalp electrodes to the human brain. Idiosyncratic anatomy of individual brains and heads leads to significant variability in such current flows across subjects, thus, necessitating accurate individualized head models. Here we report on an automated processing chain that computes current distributions in the head starting from a structural magnetic resonance image (MRI). The main purpose of automating this process is to reduce the substantial effort currently required for manual segmentation, electrode placement, and solving of finite element models. In doing so, several weeks of manual labor were reduced to no more than 4 hours of computation time and minimal user interaction, while current-flow results for the automated method deviated by less than 27.9% from the manual method. Key facilitating factors are the addition of three tissue types (skull, scalp and air) to a state-of-the-art automated segmentation process, morphological processing to correct small but important segmentation errors, and automated placement of small electrodes based on easily reproducible standard electrode configurations. We anticipate that such an automated processing will become an indispensable tool to individualize transcranial direct current stimulation (tDCS) therapy. PMID:23367144
Readerbench: Automated Evaluation of Collaboration Based on Cohesion and Dialogism
ERIC Educational Resources Information Center
Dascalu, Mihai; Trausan-Matu, Stefan; McNamara, Danielle S.; Dessus, Philippe
2015-01-01
As Computer-Supported Collaborative Learning (CSCL) gains a broader usage, the need for automated tools capable of supporting tutors in the time-consuming process of analyzing conversations becomes more pressing. Moreover, collaboration, which presumes the intertwining of ideas or points of view among participants, is a central element of dialogue…
Integration of XNAT/PACS, DICOM, and Research Software for Automated Multi-modal Image Analysis.
Gao, Yurui; Burns, Scott S; Lauzon, Carolyn B; Fong, Andrew E; James, Terry A; Lubar, Joel F; Thatcher, Robert W; Twillie, David A; Wirt, Michael D; Zola, Marc A; Logan, Bret W; Anderson, Adam W; Landman, Bennett A
2013-03-29
Traumatic brain injury (TBI) is an increasingly important public health concern. While there are several promising avenues of intervention, clinical assessments are relatively coarse and comparative quantitative analysis is an emerging field. Imaging data provide potentially useful information for evaluating TBI across functional, structural, and microstructural phenotypes. Integration and management of disparate data types are major obstacles. In a multi-institution collaboration, we are collecting electroencephalogy (EEG), structural MRI, diffusion tensor MRI (DTI), and single photon emission computed tomography (SPECT) from a large cohort of US Army service members exposed to mild or moderate TBI who are undergoing experimental treatment. We have constructed a robust informatics backbone for this project centered on the DICOM standard and eXtensible Neuroimaging Archive Toolkit (XNAT) server. Herein, we discuss (1) optimization of data transmission, validation and storage, (2) quality assurance and workflow management, and (3) integration of high performance computing with research software.
Integration of XNAT/PACS, DICOM, and research software for automated multi-modal image analysis
NASA Astrophysics Data System (ADS)
Gao, Yurui; Burns, Scott S.; Lauzon, Carolyn B.; Fong, Andrew E.; James, Terry A.; Lubar, Joel F.; Thatcher, Robert W.; Twillie, David A.; Wirt, Michael D.; Zola, Marc A.; Logan, Bret W.; Anderson, Adam W.; Landman, Bennett A.
2013-03-01
Traumatic brain injury (TBI) is an increasingly important public health concern. While there are several promising avenues of intervention, clinical assessments are relatively coarse and comparative quantitative analysis is an emerging field. Imaging data provide potentially useful information for evaluating TBI across functional, structural, and microstructural phenotypes. Integration and management of disparate data types are major obstacles. In a multi-institution collaboration, we are collecting electroencephalogy (EEG), structural MRI, diffusion tensor MRI (DTI), and single photon emission computed tomography (SPECT) from a large cohort of US Army service members exposed to mild or moderate TBI who are undergoing experimental treatment. We have constructed a robust informatics backbone for this project centered on the DICOM standard and eXtensible Neuroimaging Archive Toolkit (XNAT) server. Herein, we discuss (1) optimization of data transmission, validation and storage, (2) quality assurance and workflow management, and (3) integration of high performance computing with research software.
Integration of XNAT/PACS, DICOM, and Research Software for Automated Multi-modal Image Analysis
Gao, Yurui; Burns, Scott S.; Lauzon, Carolyn B.; Fong, Andrew E.; James, Terry A.; Lubar, Joel F.; Thatcher, Robert W.; Twillie, David A.; Wirt, Michael D.; Zola, Marc A.; Logan, Bret W.; Anderson, Adam W.; Landman, Bennett A.
2013-01-01
Traumatic brain injury (TBI) is an increasingly important public health concern. While there are several promising avenues of intervention, clinical assessments are relatively coarse and comparative quantitative analysis is an emerging field. Imaging data provide potentially useful information for evaluating TBI across functional, structural, and microstructural phenotypes. Integration and management of disparate data types are major obstacles. In a multi-institution collaboration, we are collecting electroencephalogy (EEG), structural MRI, diffusion tensor MRI (DTI), and single photon emission computed tomography (SPECT) from a large cohort of US Army service members exposed to mild or moderate TBI who are undergoing experimental treatment. We have constructed a robust informatics backbone for this project centered on the DICOM standard and eXtensible Neuroimaging Archive Toolkit (XNAT) server. Herein, we discuss (1) optimization of data transmission, validation and storage, (2) quality assurance and workflow management, and (3) integration of high performance computing with research software. PMID:24386548
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, David M. L.; Cho, Herman; de Jong, Wibe A.
2016-02-09
Here, the testing of theoretical models with experimental data is an integral part of the scientific method, and a logical place to search for new ways of stimulating scientific productivity. Often experiment/theory comparisons may be viewed as a workflow comprised of well-defined, rote operations distributed over several distinct computers, as exemplified by the way in which predictions from electronic structure theories are evaluated with results from spectroscopic experiments. For workflows such as this, which may be laborious and time consuming to perform manually, software that could orchestrate the operations and transfer results between computers in a seamless and automated fashion would offer major efficiency gains. Such tools also promise to alter how researchers interact with data outside their field of specialization by, e.g., making raw experimental results more accessible to theorists, and the outputs of theoretical calculations more readily comprehended by experimentalists.
ERIC Educational Resources Information Center
Bucks, Gregory Warren
2010-01-01
Computers have become an integral part of how engineers complete their work, allowing them to collect and analyze data, model potential solutions and aiding in production through automation and robotics. In addition, computers are essential elements of the products themselves, from tennis shoes to construction materials. An understanding of how…
NASA Technical Reports Server (NTRS)
Swenson, Harry N.; Zelenka, Richard E.; Dearing, Munro G.; Hardy, Gordon H.; Clark, Raymond; Davis, Tom; Amatrudo, Gary; Zirkler, Andre
1994-01-01
NASA and the U.S. Army have designed, developed, and flight evaluated a Computer Aiding for Low Altitude Helicopter Flight (CALAHF) guidance system. This system provides guidance to the pilot for near terrain covert helicopter operations. It automates the processing of precision navigation information, helicopter mission requirements, and terrain flight guidance. The automation is presented to the pilot through symbology on a helmet-mounted display. The symbology is a 'pilot-centered' design which preserves pilot flexibility and authority over the CALAHF system's automation. An extensive flight evaluation of the system has been conducted using the U.S. Army's NUH-60 STAR (Systems Testbed for Avionics Research) research helicopter. The evaluations were flown over a multiwaypoint helicopter mission in rugged mountainous terrain, at terrain clearance altitudes from 300 to 125 ft and airspeeds from 40 to 110 knots. The results of these evaluations showed that the pilots could precisely follow the automation symbology while maintaining a high degree of situational awareness.
NASA Technical Reports Server (NTRS)
Thompson, David S.; Soni, Bharat K.
2001-01-01
An integrated geometry/grid/simulation software package, ICEG2D, is being developed to automate computational fluid dynamics (CFD) simulations for single- and multi-element airfoils with ice accretions. The current version, ICEG2D (v2.0), was designed to automatically perform four primary functions: (1) generate a grid-ready surface definition based on the geometrical characteristics of the iced airfoil surface, (2) generate high-quality structured and generalized grids starting from a defined surface definition, (3) generate the input and restart files needed to run the structured grid CFD solver NPARC or the generalized grid CFD solver HYBFL2D, and (4) using the flow solutions, generate solution-adaptive grids. ICEG2D (v2.0) can be operated in either a batch mode using a script file or in an interactive mode by entering directives from a command line within a Unix shell. This report summarizes activities completed in the first two years of a three-year research and development program to address automation issues related to CFD simulations for airfoils with ice accretions. As well as describing the technology employed in the software, this document serves as a user's manual providing installation and operating instructions. An evaluation of the software is also presented.
NASA Astrophysics Data System (ADS)
Vrettaros, John; Vouros, George; Drigas, Athanasios S.
This article studies the expediency of using neural network technology and the development of back-propagation network (BPN) models for automated evaluation of the answers and progress of deaf students who possess basic knowledge of the English language and computer skills, within a virtual e-learning environment. The performance of the developed neural models is evaluated using the correlation factor between the neural networks' response values and the real data, as well as the percentage error between the neural networks' estimated values and the real data, both during the training process and afterwards with unknown data that were not used in training.
Mechanical Engineering Department engineering research: Annual report, FY 1986
DOE Office of Scientific and Technical Information (OSTI.GOV)
Denney, R.M.; Essary, K.L.; Genin, M.S.
1986-12-01
This report provides information on the five areas of research interest in LLNL's Mechanical Engineering Department. In Computer Code Development, a solid geometric modeling program is described. In Dynamic Systems and Control, structure control and structure dynamics are discussed. Fabrication technology involves machine cutting, interferometry, and automated optical component manufacturing. Materials engineering reports on composite material research and measurement of molten metal surface properties. In Nondestructive Evaluation, NMR, CAT, and ultrasound machines are applied to manufacturing processes. A model for underground collapse is developed. Finally, an alternative heat exchanger is investigated for use in a fusion power plant. Separate abstracts were prepared for each of the 13 reports in this publication. (JDH)
Impact of pharmacy automation on patient waiting time: an application of computer simulation.
Tan, Woan Shin; Chua, Siang Li; Yong, Keng Woh; Wu, Tuck Seng
2009-06-01
This paper aims to illustrate the use of computer simulation in evaluating the impact of a prototype automated dispensing system on waiting time in an outpatient pharmacy and its potential as a routine tool in pharmacy management. A discrete event simulation model was developed to investigate the impact of a prototype automated dispensing system on operational efficiency and service standards in an outpatient pharmacy. The simulation results suggest that automating the prescription-filling function using a prototype that picks and packs at 20 seconds per item will not assist the pharmacy in achieving the waiting time target of 30 minutes for all patients. Regardless of the state of automation, to meet the waiting time target, 2 additional pharmacists are needed to overcome the process bottleneck at the point of medication dispensing. However, if automated dispensing is the preferred option, the speed of the system needs to be twice as fast as the current configuration to facilitate the reduction of the 95th percentile patient waiting time to below 30 minutes. The faster processing speed will concomitantly allow the pharmacy to reduce the number of pharmacy technicians from 11 to 8. Simulation was found to be a useful and low-cost method that allows an otherwise expensive and resource-intensive evaluation of new work processes and technology to be completed within a short time.
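The style of result reported above can be reproduced with a few lines of discrete-event simulation. A toy single-stage Python sketch using a heap of server-free times (the paper's model covers the full pharmacy workflow; the arrival and service rates here are illustrative only):

```python
import heapq
import random

def simulate_counter(n_patients=1000, n_pharmacists=2,
                     mean_interarrival=2.0, mean_service=5.0, seed=1):
    """Toy dispensing-counter model; returns the 95th-percentile wait (min)."""
    random.seed(seed)
    t = 0.0
    free_at = [0.0] * n_pharmacists  # times at which each pharmacist is free
    heapq.heapify(free_at)
    waits = []
    for _ in range(n_patients):
        t += random.expovariate(1.0 / mean_interarrival)  # next arrival
        earliest = heapq.heappop(free_at)                 # soonest-free server
        start = max(t, earliest)
        waits.append(start - t)
        heapq.heappush(free_at, start + random.expovariate(1.0 / mean_service))
    waits.sort()
    return waits[int(0.95 * len(waits))]

print(f"95th percentile wait: {simulate_counter():.1f} min")
```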
The Phenix Software for Automated Determination of Macromolecular Structures
Adams, Paul D.; Afonine, Pavel V.; Bunkóczi, Gábor; Chen, Vincent B.; Echols, Nathaniel; Headd, Jeffrey J.; Hung, Li-Wei; Jain, Swati; Kapral, Gary J.; Grosse Kunstleve, Ralf W.; McCoy, Airlie J.; Moriarty, Nigel W.; Oeffner, Robert D.; Read, Randy J.; Richardson, David C.; Richardson, Jane S.; Terwilliger, Thomas C.; Zwart, Peter H.
2011-01-01
X-ray crystallography is a critical tool in the study of biological systems. It is able to provide information that has been a prerequisite to understanding the fundamentals of life. It is also a method that is central to the development of new therapeutics for human disease. Significant time and effort are required to determine and optimize many macromolecular structures because of the need for manual interpretation of complex numerical data, often using many different software packages, and the repeated use of interactive three-dimensional graphics. The Phenix software package has been developed to provide a comprehensive system for macromolecular crystallographic structure solution with an emphasis on automation. This has required the development of new algorithms that minimize or eliminate subjective input in favour of built-in expert-systems knowledge, the automation of procedures that are traditionally performed by hand, and the development of a computational framework that allows a tight integration between the algorithms. The application of automated methods is particularly appropriate in the field of structural proteomics, where high throughput is desired. Features in Phenix for the automation of experimental phasing with subsequent model building, molecular replacement, structure refinement and validation are described and examples given of running Phenix from both the command line and graphical user interface. PMID:21821126
Automated Measurement of Patient-Specific Tibial Slopes from MRI
Amerinatanzi, Amirhesam; Summers, Rodney K.; Ahmadi, Kaveh; Goel, Vijay K.; Hewett, Timothy E.; Nyman, Edward
2017-01-01
Background: Multi-planar proximal tibial slopes may be associated with increased likelihood of osteoarthritis and anterior cruciate ligament injury, due in part to their role in checking the anterior-posterior stability of the knee. Established methods suffer repeatability limitations and lack computational efficiency for intuitive clinical adoption. The aims of this study were to develop a novel automated approach and to compare the repeatability and computational efficiency of the approach against previously established methods. Methods: Tibial slope geometries were obtained via MRI and measured using an automated Matlab-based approach. Data were compared for repeatability and evaluated for computational efficiency. Results: Mean lateral tibial slope (LTS) for females (7.2°) was greater than for males (1.66°). Mean LTS in the lateral concavity zone was greater for females (7.8° for females, 4.2° for males). Mean medial tibial slope (MTS) for females was greater (9.3° vs. 4.6°). Along the medial concavity zone, female subjects demonstrated greater MTS. Conclusion: The automated method was more repeatable and computationally efficient than previously identified methods and may aid in the clinical assessment of knee injury risk and inform surgical planning and implant design efforts. PMID:28952547
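At its core the measurement is a line fit to a plateau profile. A simplified Python sketch of the geometry, assuming surface points have already been extracted from a sagittal slice (the original implementation is in Matlab; this is not the authors' code):

```python
import numpy as np

def tibial_slope_deg(ap_positions_mm, heights_mm):
    """Inclination (degrees) of a tibial plateau profile.

    Fits a line to (anterior-posterior position, surface height) points;
    the reference axis is assumed perpendicular to the tibial shaft.
    """
    slope = np.polyfit(np.asarray(ap_positions_mm, dtype=float),
                       np.asarray(heights_mm, dtype=float), 1)[0]
    return np.degrees(np.arctan(slope))
```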
RCrane: semi-automated RNA model building.
Keating, Kevin S; Pyle, Anna Marie
2012-08-01
RNA crystals typically diffract to much lower resolutions than protein crystals. This low-resolution diffraction results in unclear density maps, which cause considerable difficulties during the model-building process. These difficulties are exacerbated by the lack of computational tools for RNA modeling. Here, RCrane, a tool for the partially automated building of RNA into electron-density maps of low or intermediate resolution, is presented. This tool works within Coot, a common program for macromolecular model building. RCrane helps crystallographers to place phosphates and bases into electron density and then automatically predicts and builds the detailed all-atom structure of the traced nucleotides. RCrane then allows the crystallographer to review the newly built structure and select alternative backbone conformations where desired. This tool can also be used to automatically correct the backbone structure of previously built nucleotides. These automated corrections can fix incorrect sugar puckers, steric clashes and other structural problems.
Rudyanto, Rina D.; Kerkstra, Sjoerd; van Rikxoort, Eva M.; Fetita, Catalin; Brillet, Pierre-Yves; Lefevre, Christophe; Xue, Wenzhe; Zhu, Xiangjun; Liang, Jianming; Öksüz, İlkay; Ünay, Devrim; Kadipaşaoğlu, Kamuran; Estépar, Raúl San José; Ross, James C.; Washko, George R.; Prieto, Juan-Carlos; Hoyos, Marcela Hernández; Orkisz, Maciej; Meine, Hans; Hüllebrand, Markus; Stöcker, Christina; Mir, Fernando Lopez; Naranjo, Valery; Villanueva, Eliseo; Staring, Marius; Xiao, Changyan; Stoel, Berend C.; Fabijanska, Anna; Smistad, Erik; Elster, Anne C.; Lindseth, Frank; Foruzan, Amir Hossein; Kiros, Ryan; Popuri, Karteek; Cobzas, Dana; Jimenez-Carretero, Daniel; Santos, Andres; Ledesma-Carbayo, Maria J.; Helmberger, Michael; Urschler, Martin; Pienn, Michael; Bosboom, Dennis G.H.; Campo, Arantza; Prokop, Mathias; de Jong, Pim A.; Ortiz-de-Solorzano, Carlos; Muñoz-Barrutia, Arrate; van Ginneken, Bram
2016-01-01
The VESSEL12 (VESsel SEgmentation in the Lung) challenge objectively compares the performance of different algorithms to identify vessels in thoracic computed tomography (CT) scans. Vessel segmentation is fundamental in computer aided processing of data generated by 3D imaging modalities. As manual vessel segmentation is prohibitively time consuming, any real world application requires some form of automation. Several approaches exist for automated vessel segmentation, but judging their relative merits is difficult due to a lack of standardized evaluation. We present an annotated reference dataset containing 20 CT scans and propose nine categories to perform a comprehensive evaluation of vessel segmentation algorithms from both academia and industry. Twenty algorithms participated in the VESSEL12 challenge, held at International Symposium on Biomedical Imaging (ISBI) 2012. All results have been published at the VESSEL12 website http://vessel12.grand-challenge.org. The challenge remains ongoing and open to new participants. Our three contributions are: (1) an annotated reference dataset available online for evaluation of new algorithms; (2) a quantitative scoring system for objective comparison of algorithms; and (3) performance analysis of the strengths and weaknesses of the various vessel segmentation methods in the presence of various lung diseases. PMID:25113321
Ercan, Ertuğrul; Kırılmaz, Bahadır; Kahraman, İsmail; Bayram, Vildan; Doğan, Hüseyin
2012-11-01
Flow-mediated dilation (FMD) is used to evaluate endothelial function. Computer-assisted analysis utilizing edge detection permits continuous measurements along the vessel wall. We have developed a new fully automated software program to allow accurate and reproducible measurement. FMD was measured and analyzed in 18 coronary artery disease (CAD) patients and 17 controls, both manually and by the newly developed (computer-supported) software. The agreement between methods was assessed by Bland-Altman analysis. The mean age, body mass index and cardiovascular risk factors were higher in the CAD group. Automated FMD% measurement was 18.3±8.5 for the control subjects and 6.8±6.5 for the CAD group (p=0.0001). The intraobserver and interobserver correlations for automated measurement were high (r=0.974, r=0.981, r=0.937, r=0.918, respectively). Manual FMD% at the 60th second was correlated with automated FMD% (r=0.471, p=0.004). The new fully automated software can be used for precise measurement of FMD, with lower intra- and interobserver variability than manual assessment.
The ZOG Technology Demonstration Project: A System Evaluation of USS CARL VINSON (CVN 70)
1984-12-01
part of a larger project involving development of a wide range of computer technologies, including artificial intelligence and a long-range computer... shipboard management, aircraft management, expert systems, menu selection, man-machine interface, artificial intelligence, automation; shipboard... functions, planning, evaluation, training, hierarchical data bases. The objective of this project was to conduct an evaluation of ZOG, a general purpose
Automated segmentation of cardiac visceral fat in low-dose non-contrast chest CT images
NASA Astrophysics Data System (ADS)
Xie, Yiting; Liang, Mingzhu; Yankelevitz, David F.; Henschke, Claudia I.; Reeves, Anthony P.
2015-03-01
Cardiac visceral fat was segmented from low-dose non-contrast chest CT images using a fully automated method. Cardiac visceral fat is defined as the fatty tissues surrounding the heart region, enclosed by the lungs and posterior to the sternum. It is measured by constraining the heart region with an Anatomy Label Map that contains robust segmentations of the lungs and other major organs and estimating the fatty tissue within this region. The algorithm was evaluated on 124 low-dose and 223 standard-dose non-contrast chest CT scans from two public datasets. Based on visual inspection, 343 cases had good cardiac visceral fat segmentation. For quantitative evaluation, manual markings of cardiac visceral fat regions were made in 3 image slices for 45 low-dose scans and the Dice similarity coefficient (DSC) was computed. The automated algorithm achieved an average DSC of 0.93. Cardiac visceral fat volume (CVFV), heart region volume (HRV) and their ratio were computed for each case. The correlation between cardiac visceral fat measurement and coronary artery and aortic calcification was also evaluated. Results indicated the automated algorithm for measuring cardiac visceral fat volume may be an alternative method to the traditional manual assessment of thoracic region fat content in the assessment of cardiovascular disease risk.
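The DSC quoted above is a standard overlap measure between two binary masks. A minimal Python sketch:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    total = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / total if total else 1.0
```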
ERIC Educational Resources Information Center
1984
This 63-paper collection represents a variety of interests and areas of expertise related to technology and its impact on the educational process at all levels. Topics include automated instructional management, computer literacy, software evaluation, beginning a computer program, finding software, networking, programming, and the computer and…
D. L. Johnson; D. J. Nowak; V. A. Jouraeva
1999-01-01
Leaves from twenty-three deciduous tree species and five conifer species were collected within a limited geographic range (1 km radius) and evaluated for possible application of scanning electron microscopy and X-ray microanalysis techniques of individual particle analysis (IPA). The goal was to identify tree species with leaves suitable for the automated...
Experimental Evidence on the Effectiveness of Automated Essay Scoring in Teacher Education Cases
ERIC Educational Resources Information Center
Riedel, Eric; Dexter, Sara L.; Scharber, Cassandra; Doering, Aaron
2006-01-01
Research on computer-based writing evaluation has only recently focused on the potential for providing formative feedback rather than summative assessment. This study tests the impact of an automated essay scorer (AES) that provides formative feedback on essay drafts written as part of a series of online teacher education case studies. Seventy…
A Computational Framework for Automation of Point Defect Calculations
NASA Astrophysics Data System (ADS)
Goyal, Anuj; Gorai, Prashun; Peng, Haowei; Lany, Stephan; Stevanovic, Vladan; National Renewable Energy Laboratory, Golden, Colorado 80401 Collaboration
A complete and rigorously validated open-source Python framework to automate point defect calculations using density functional theory has been developed. The framework provides an effective and efficient method for defect structure generation, and creation of simple yet customizable workflows to analyze defect calculations. The package provides the capability to compute widely accepted correction schemes to overcome finite-size effects, including (1) potential alignment, (2) image-charge correction, and (3) band filling correction to shallow defects. Using Si, ZnO and In2O3 as test examples, we demonstrate the package capabilities and validate the methodology. We believe that a robust automated tool like this will enable the materials by design community to assess the impact of point defects on materials performance.
A computational framework for automation of point defect calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goyal, Anuj; Gorai, Prashun; Peng, Haowei
We have developed a complete and rigorously validated open-source Python framework to automate point defect calculations using density functional theory. Furthermore, the framework provides an effective and efficient method for defect structure generation, and creation of simple yet customizable workflows to analyze defect calculations. This package provides the capability to compute widely-accepted correction schemes to overcome finite-size effects, including (1) potential alignment, (2) image-charge correction, and (3) band filling correction to shallow defects. Using Si, ZnO and In2O3 as test examples, we demonstrate the package capabilities and validate the methodology.
A computational framework for automation of point defect calculations
Goyal, Anuj; Gorai, Prashun; Peng, Haowei; ...
2017-01-13
We have developed a complete and rigorously validated open-source Python framework to automate point defect calculations using density functional theory. Furthermore, the framework provides an effective and efficient method for defect structure generation, and creation of simple yet customizable workflows to analyze defect calculations. This package provides the capability to compute widely-accepted correction schemes to overcome finite-size effects, including (1) potential alignment, (2) image-charge correction, and (3) band filling correction to shallow defects. Using Si, ZnO and In2O3 as test examples, we demonstrate the package capabilities and validate the methodology.
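Of the three correction schemes listed, the image-charge term has the simplest closed form. A hedged Python sketch of the leading (point-charge) Makov-Payne contribution only; the framework computes more complete schemes:

```python
def image_charge_correction_ev(charge, madelung_const, dielectric, length_ang):
    """First-order image-charge correction E = q^2 * alpha / (2 * eps * L).

    charge in units of e, length in Angstrom; e^2/(4*pi*eps0) = 14.3996 eV*A.
    """
    E2_EV_ANG = 14.399645
    return charge ** 2 * madelung_const * E2_EV_ANG / (2.0 * dielectric * length_ang)

# q = -2 defect, simple-cubic Madelung constant 2.837, eps = 10, L = 10 A:
print(image_charge_correction_ev(-2, 2.837, 10.0, 10.0))  # ~0.82 eV
```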
Automated compound classification using a chemical ontology.
Bobach, Claudia; Böhme, Timo; Laube, Ulf; Püschel, Anett; Weber, Lutz
2012-12-29
Classification of chemical compounds into compound classes by using structure derived descriptors is a well-established method to aid the evaluation and abstraction of compound properties in chemical compound databases. MeSH and recently ChEBI are examples of chemical ontologies that provide a hierarchical classification of compounds into general compound classes of biological interest based on their structural as well as property or use features. In these ontologies, compounds have been assigned manually to their respective classes. However, with the ever increasing possibilities to extract new compounds from text documents using name-to-structure tools and considering the large number of compounds deposited in databases, automated and comprehensive chemical classification methods are needed to avoid the error-prone and time-consuming manual classification of compounds. In the present work we implement principles and methods to construct a chemical ontology of classes that shall support the automated, high-quality compound classification in chemical databases or text documents. While SMARTS expressions have already been used to define chemical structure class concepts, in the present work we have extended the expressive power of such class definitions by expanding their structure-based reasoning logic. Thus, to achieve the required precision and granularity of chemical class definitions, sets of SMARTS class definitions are connected by OR and NOT logical operators. In addition, AND logic has been implemented to allow the concomitant use of flexible atom lists and stereochemistry definitions. The resulting chemical ontology is a multi-hierarchical taxonomy of concept nodes connected by directed, transitive relationships. A proposal for a rule-based definition of chemical classes has been made that allows chemical compound classes to be defined more precisely than before. The proposed structure-based reasoning logic allows chemistry expert knowledge to be translated into a computer-interpretable form, preventing erroneous compound assignments and allowing automatic compound classification. The automated assignment of compounds in databases, compound structure files or text documents to their related ontology classes is possible through the integration with a chemical structure search engine. As an application example, the annotation of chemical structure files with a prototypic ontology is demonstrated.
Automated compound classification using a chemical ontology
2012-01-01
Background Classification of chemical compounds into compound classes by using structure derived descriptors is a well-established method to aid the evaluation and abstraction of compound properties in chemical compound databases. MeSH and recently ChEBI are examples of chemical ontologies that provide a hierarchical classification of compounds into general compound classes of biological interest based on their structural as well as property or use features. In these ontologies, compounds have been assigned manually to their respective classes. However, with the ever increasing possibilities to extract new compounds from text documents using name-to-structure tools and considering the large number of compounds deposited in databases, automated and comprehensive chemical classification methods are needed to avoid the error-prone and time-consuming manual classification of compounds. Results In the present work we implement principles and methods to construct a chemical ontology of classes that shall support the automated, high-quality compound classification in chemical databases or text documents. While SMARTS expressions have already been used to define chemical structure class concepts, in the present work we have extended the expressive power of such class definitions by expanding their structure-based reasoning logic. Thus, to achieve the required precision and granularity of chemical class definitions, sets of SMARTS class definitions are connected by OR and NOT logical operators. In addition, AND logic has been implemented to allow the concomitant use of flexible atom lists and stereochemistry definitions. The resulting chemical ontology is a multi-hierarchical taxonomy of concept nodes connected by directed, transitive relationships. Conclusions A proposal for a rule-based definition of chemical classes has been made that allows chemical compound classes to be defined more precisely than before. The proposed structure-based reasoning logic allows chemistry expert knowledge to be translated into a computer-interpretable form, preventing erroneous compound assignments and allowing automatic compound classification. The automated assignment of compounds in databases, compound structure files or text documents to their related ontology classes is possible through the integration with a chemical structure search engine. As an application example, the annotation of chemical structure files with a prototypic ontology is demonstrated. PMID:23273256
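The OR/NOT logic over SMARTS class definitions can be illustrated with an open-source structure search toolkit. A minimal Python sketch using RDKit as a stand-in for the search engine the authors integrated (the class definition is a hypothetical example, not one of their ontology classes):

```python
from rdkit import Chem

# Hypothetical class: primary aliphatic amines, excluding amides.
class_definition = {
    "any_of": ["[NX3;H2][CX4]"],      # OR: must match at least one pattern
    "none_of": ["[NX3][CX3]=[OX1]"],  # NOT: must match none of these
}

def in_class(smiles, definition):
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return False
    hit = any(mol.HasSubstructMatch(Chem.MolFromSmarts(s))
              for s in definition["any_of"])
    veto = any(mol.HasSubstructMatch(Chem.MolFromSmarts(s))
               for s in definition["none_of"])
    return hit and not veto

print(in_class("CCN", class_definition))       # True: ethylamine
print(in_class("CC(=O)NC", class_definition))  # False: an amide
```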
Concepts and algorithms for terminal-area traffic management
NASA Technical Reports Server (NTRS)
Erzberger, H.; Chapel, J. D.
1984-01-01
The nation's air-traffic-control system is the subject of an extensive modernization program, including the planned introduction of advanced automation techniques. This paper gives an overview of a concept for automating terminal-area traffic management. Four-dimensional (4D) guidance techniques, which play an essential role in the automated system, are reviewed. One technique, intended for on-board computer implementation, is based on application of optimal control theory. The second technique is a simplified approach to 4D guidance intended for ground computer implementation. It generates advisory messages to help the controller maintain scheduled landing times of aircraft not equipped with on-board 4D guidance systems. An operational system for the second technique, recently evaluated in a simulation, is also described.
Automated Illustration of Patients Instructions
Bui, Duy; Nakamura, Carlos; Bray, Bruce E.; Zeng-Treitler, Qing
2012-01-01
A picture can be a powerful communication tool. However, creating pictures to illustrate patient instructions can be a costly and time-consuming task. Building on our prior research in this area, we developed a computer application that automatically converts text to pictures using natural language processing and computer graphics techniques. After iterative testing, the automated illustration system was evaluated using 49 previously unseen cardiology discharge instructions. The completeness of the system-generated illustrations was assessed by three raters using a three-level scale. The average inter-rater agreement for text correctly represented in the pictograph was about 66 percent. Since illustration in this context is intended to enhance rather than replace text, these results support the feasibility of conducting automated illustration. PMID:23304392
ERIC Educational Resources Information Center
Ozonoff, Sally; Cook, Ian; Coon, Hilary; Dawson, Geraldine; Joseph, Robert M.; Klin, Ami; McMahon, William M.; Minshew, Nancy; Munson, Jeffrey A.
2004-01-01
Recent structural and functional imaging work, as well as neuropathology and neuropsychology studies, provide strong empirical support for the involvement of frontal cortex in autism. The Cambridge Neuropsychological Test Automated Battery (CANTAB) is a computer-administered set of neuropsychological tests developed to examine specific components…
Jasko, D J; Lein, D H; Foote, R H
1990-01-01
Two commercially available computer-automated semen analysis instruments (CellSoft Automated Semen Analyzer and HTM-2000 Motility Analyzer) were compared for their ability to report similar results based on the analysis of pre-recorded video tapes of extended, motile stallion semen. The determinations of the percentage of motile cells by these instruments were more similar than the comparisons between subjective estimates and either instrument. However, mean values obtained from the same sample may still differ by as much as 30 percentage units between instruments. Instruments varied with regard to the determinations of mean sperm curvilinear velocity and sperm concentration, but mean sperm linearity determinations were similar between the instruments. We concluded that the determinations of sperm motion characteristics by subjective estimation, the CellSoft Automated Semen Analyzer, and the HTM-2000 Motility Analyzer are often dissimilar, making direct comparisons of results difficult.
Automated design optimization of supersonic airplane wing structures under dynamic constraints
NASA Technical Reports Server (NTRS)
Fox, R. L.; Miura, H.; Rao, S. S.
1972-01-01
The problems of the preliminary and first level detail design of supersonic aircraft wings are stated as mathematical programs and solved using automated optimum design techniques. The problem is approached in two phases: the first is a simplified equivalent plate model in which the envelope, planform and structural parameters are varied to produce a design, the second is a finite element model with fixed configuration in which the material distribution is varied. Constraints include flutter, aeroelastically computed stresses and deflections, natural frequency and a variety of geometric limitations.
Kuhn, Stefan; Egert, Björn; Neumann, Steffen; Steinbeck, Christoph
2008-09-25
Current efforts in Metabolomics, such as the Human Metabolome Project, collect structures of biological metabolites as well as data for their characterisation, such as spectra for identification of substances and measurements of their concentration. Still, only a fraction of existing metabolites and their spectral fingerprints are known. Computer-Assisted Structure Elucidation (CASE) of biological metabolites will be an important tool to address this lack of knowledge. Indispensable for CASE are modules to predict spectra for hypothetical structures. This paper evaluates different statistical and machine learning methods to perform predictions of proton NMR spectra based on data from our open database NMRShiftDB. A mean absolute error of 0.18 ppm was achieved for the prediction of proton NMR shifts ranging from 0 to 11 ppm. Random forest, J48 decision tree and support vector machines achieved similar overall errors. HOSE codes, a notably simple method, achieved a comparatively good result of 0.17 ppm mean absolute error. The NMR prediction methods applied in the course of this work delivered precise predictions which can serve as a building block for Computer-Assisted Structure Elucidation of biological metabolites.
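The HOSE-code approach amounts to a table lookup keyed on canonical atom-environment descriptors. A toy Python sketch of that idea together with the mean-absolute-error metric quoted above (the descriptor strings and shift values are illustrative, not NMRShiftDB entries):

```python
import statistics

# Toy table keyed by a HOSE-like environment code -> observed 1H shifts (ppm).
shift_table = {
    "C-3;aromatic-H": [7.2, 7.3, 7.4],
    "C-4;methyl-H": [0.9, 1.0, 1.1],
}

def predict_shift(code):
    """Predict a shift as the mean of stored shifts with the same code."""
    return statistics.mean(shift_table[code])

def mean_absolute_error(predicted, observed):
    return sum(abs(p - o) for p, o in zip(predicted, observed)) / len(predicted)

pred = [predict_shift("C-3;aromatic-H"), predict_shift("C-4;methyl-H")]
obs = [7.26, 0.95]
print(f"MAE = {mean_absolute_error(pred, obs):.2f} ppm")
```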
Using computers for planning and evaluating nursing in the health care services.
Emuziene, Vilma
2009-01-01
This paper describes how nurses' attitudes toward, use of, and motivation for computer use are significantly influenced by their area of nursing/health care service. Today most nurses traditionally document patient information in a medical record using pen and paper. Most nursing administrators not currently involved with computer applications in their settings are interested in exploring whether technology could help them with the day-to-day and long-range tasks of planning and evaluating nursing services. The results of this investigation showed that the respondents (nurses), as specialists in nursing informatics, perform their activities well: they had a "positive" attitude towards computers and "good" or "average" computer skills. The nurses' overall computer attitude was influenced by age, sex, and professional qualification. Younger nurses acquire informatics skills while in nursing school and are more accepting of computer advancements. Computer knowledge differs significantly between nurses who have no computer training and those who have training and use the computer once a week or every day. In the health care services, computers and automated data systems are often used for statistical information (visit information, patient information) and billing information. In the nursing field, automated data systems are often used for statistical information, billing information, information about vaccination, patient assessment and patient classification.
Automated volumetric evaluation of stereoscopic disc photography
Xu, Juan; Ishikawa, Hiroshi; Wollstein, Gadi; Bilonick, Richard A; Kagemann, Larry; Craig, Jamie E; Mackey, David A; Hewitt, Alex W; Schuman, Joel S
2010-01-01
PURPOSE: To develop a fully automated algorithm (AP) to perform a volumetric measure of the optic disc using conventional stereoscopic optic nerve head (ONH) photographs, and to compare algorithm-produced parameters with manual photogrammetry (MP), scanning laser ophthalmoscope (SLO) and optical coherence tomography (OCT) measurements. METHODS: One hundred twenty-two stereoscopic optic disc photographs (61 subjects) were analyzed. Disc area, rim area, cup area, cup/disc area ratio, vertical cup/disc ratio, rim volume and cup volume were automatically computed by the algorithm. Latent variable measurement error models were used to assess measurement reproducibility for the four techniques. RESULTS: AP had better reproducibility for disc area and cup volume and worse reproducibility for cup/disc area ratio and vertical cup/disc ratio, when the measurements were compared to the MP, SLO and OCT methods. CONCLUSION: AP provides a useful technique for an objective quantitative assessment of 3D ONH structures. PMID:20588996
Electronic Data Interchange in Procurement
1990-04-01
contract management and order processing systems. This conversion of automated information to paper and back to automated form is not only slow and...automated purchasing computer and the contractor’s order processing computer through telephone lines, as illustrated in Figure 1-1. Computer-to-computer...into the contractor’s order processing or contract management system. This approach - converting automated information to paper and back to automated
Bagci, Ulas; Foster, Brent; Miller-Jaster, Kirsten; Luna, Brian; Dey, Bappaditya; Bishai, William R; Jonsson, Colleen B; Jain, Sanjay; Mollura, Daniel J
2013-07-23
Infectious diseases are the second leading cause of death worldwide. In order to better understand and treat them, an accurate evaluation using multi-modal imaging techniques for anatomical and functional characterizations is needed. For non-invasive imaging techniques such as computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET), there have been many engineering improvements that have significantly enhanced the resolution and contrast of the images, but there are still insufficient computational algorithms available for researchers to use when accurately quantifying imaging data from anatomical structures and functional biological processes. Since the development of such tools may potentially translate basic research into the clinic, this study focuses on the development of a quantitative and qualitative image analysis platform that provides a computational radiology perspective for pulmonary infections in small animal models. Specifically, we designed (a) a fast and robust automated and semi-automated image analysis platform and a quantification tool that can facilitate accurate diagnostic measurements of pulmonary lesions as well as volumetric measurements of anatomical structures, and incorporated (b) an image registration pipeline to our proposed framework for volumetric comparison of serial scans. This is an important investigational tool for small animal infectious disease models that can help advance researchers' understanding of infectious diseases. We tested the utility of our proposed methodology by using sequentially acquired CT and PET images of rabbit, ferret, and mouse models with respiratory infections of Mycobacterium tuberculosis (TB), H1N1 flu virus, and an aerosolized respiratory pathogen (necrotic TB) for a total of 92, 44, and 24 scans for the respective studies with half of the scans from CT and the other half from PET. Institutional Administrative Panel on Laboratory Animal Care approvals were obtained prior to conducting this research. First, the proposed computational framework registered PET and CT images to provide spatial correspondences between images. Second, the lungs from the CT scans were segmented using an interactive region growing (IRG) segmentation algorithm with mathematical morphology operations to avoid false positive (FP) uptake in PET images. Finally, we segmented significant radiotracer uptake from the PET images in lung regions determined from CT and computed metabolic volumes of the significant uptake. All segmentation processes were compared with expert radiologists' delineations (ground truths). Metabolic and gross volume of lesions were automatically computed with the segmentation processes using PET and CT images, and percentage changes in those volumes over time were calculated. Standardized uptake value (SUV) analysis from PET images was conducted as a complementary quantitative metric for disease severity assessment. Thus, severity and extent of pulmonary lesions were examined through both PET and CT images using the aforementioned quantification metrics outputted from the proposed framework. Each animal study was evaluated within the same subject class, and all steps of the proposed methodology were evaluated separately. We quantified the accuracy of the proposed algorithm with respect to the state-of-the-art segmentation algorithms.
For evaluation of the segmentation results, the Dice similarity coefficient (DSC) was used as an overlap measure and the Hausdorff distance as a shape dissimilarity measure. Significant correlations regarding the estimated lesion volumes were obtained in both CT and PET images with respect to the ground truths (R2=0.8922, p<0.01 and R2=0.8664, p<0.01, respectively). The segmentation accuracy (DSC (%)) was 93.4±4.5% for normal lung CT scans and 86.0±7.1% for pathological lung CT scans. Experiments showed excellent agreement (all above 85%) with expert evaluations for both structural and functional imaging modalities. Apart from quantitative analysis of each animal, we also qualitatively showed how metabolic volumes changed over time by examining serial PET/CT scans. Evaluation of the registration processes was based on anatomical landmark points precisely defined by expert clinicians. Average errors of 2.66, 3.93, and 2.52 mm were found in the rabbit, ferret, and mouse data (all within the resolution limits), respectively. Quantitative results obtained from the proposed methodology were visually related to the progress and severity of the pulmonary infections as verified by the participating radiologists. Moreover, we demonstrated that lesions due to the infections were metabolically active and appeared multi-focal in nature, and we observed similar patterns in the CT images as well. Consolidation and ground glass opacity were the main abnormal imaging patterns and consistently appeared in all CT images. We also found that the gross and metabolic lesion volume percentage followed the same trend as the SUV-based evaluation in the longitudinal analysis. We explored the feasibility of using PET and CT imaging modalities in three distinct small animal models for two diverse pulmonary infections. We concluded from the clinical findings, derived from the proposed computational pipeline, that PET-CT imaging is an invaluable hybrid modality for tracking pulmonary infections longitudinally in small animals and has great potential to become routinely used in clinics. Our proposed methodology showed that automated computer-aided lesion detection and quantification of pulmonary infections in small animal models is efficient and accurate compared to the clinical standard of manual and semi-automated approaches. Automated analysis of images in pre-clinical applications can increase the efficiency and quality of pre-clinical findings that ultimately inform downstream experimental design in human clinical studies; this innovation will allow researchers and clinicians to more effectively allocate study resources with respect to research demands without compromising accuracy.
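The SUV metric used above has a simple body-weight-normalized form. A hedged Python sketch (unit conventions and the 1 g ≈ 1 mL tissue-density assumption are ours; decay correction is taken as already applied):

```python
def suv_body_weight(activity_kbq_per_ml, injected_dose_mbq, body_weight_kg):
    """SUV = tissue activity concentration / (injected dose / body weight)."""
    dose_kbq = injected_dose_mbq * 1000.0
    weight_g = body_weight_kg * 1000.0   # 1 g of tissue taken as ~1 mL
    return activity_kbq_per_ml / (dose_kbq / weight_g)

# 5 kBq/mL uptake after a 10 MBq injection in a 4 kg rabbit -> SUV 2.0
print(suv_body_weight(5.0, 10.0, 4.0))
```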
Concurrent ultrasonic weld evaluation system
Hood, Donald W.; Johnson, John A.; Smartt, Herschel B.
1987-01-01
A system for concurrent, non-destructive evaluation of partially completed welds for use in conjunction with an automated welder. The system utilizes real time, automated ultrasonic inspection of a welding operation as the welds are being made by providing a transducer which follows a short distance behind the welding head. Reflected ultrasonic signals are analyzed utilizing computer based digital pattern recognition techniques to discriminate between good and flawed welds on a pass by pass basis. The system also distinguishes between types of weld flaws.
Concurrent ultrasonic weld evaluation system
Hood, D.W.; Johnson, J.A.; Smartt, H.B.
1985-09-04
A system for concurrent, non-destructive evaluation of partially completed welds for use in conjunction with an automated welder. The system utilizes real time, automated ultrasonic inspection of a welding operation as the welds are being made by providing a transducer which follows a short distance behind the welding head. Reflected ultrasonic signals are analyzed utilizing computer based digital pattern recognition techniques to discriminate between good and flawed welds on a pass by pass basis. The system also distinguishes between types of weld flaws.
Concurrent ultrasonic weld evaluation system
Hood, D.W.; Johnson, J.A.; Smartt, H.B.
1987-12-15
A system for concurrent, non-destructive evaluation of partially completed welds for use in conjunction with an automated welder is disclosed. The system utilizes real time, automated ultrasonic inspection of a welding operation as the welds are being made by providing a transducer which follows a short distance behind the welding head. Reflected ultrasonic signals are analyzed utilizing computer based digital pattern recognition techniques to discriminate between good and flawed welds on a pass by pass basis. The system also distinguishes between types of weld flaws. 5 figs.
The Design and Development of an Evaluation System for Online Instruction.
ERIC Educational Resources Information Center
Wentling, Tim L.; Johnson, Scott D.
This paper describes the conceptualization and development of an evaluation system that can be used to monitor and evaluate online instructional efforts. The evaluation system addresses concerns of both program administrators and course instructors. Computer technology is used to provide partial automation to reduce respondent burden and to…
The importance of data curation on QSAR Modeling ...
During the last few decades many QSAR models and tools have been developed at the US EPA, including the widely used EPISuite. During this period the arsenal of computational capabilities supporting cheminformatics has broadened dramatically with multiple software packages. These modern tools allow for more advanced techniques in terms of chemical structure representation and storage, as well as enabling automated data-mining and standardization approaches to examine and fix data quality issues. This presentation will investigate the impact of data curation on the reliability of QSAR models being developed within the EPA's National Center for Computational Toxicology. As part of this work we have attempted to disentangle the influence of the quality versus quantity of data based on the Syracuse PHYSPROP database partly used by the EPISuite software. We will review our automated approaches to examining key datasets related to the EPISuite data to validate across chemical structure representations (e.g., mol file and SMILES) and identifiers (chemical names and registry numbers), and approaches to standardize data into QSAR-ready formats prior to modeling procedures. Our efforts to quantify and segregate data into quality categories have allowed us to evaluate the resulting models that can be developed from these data slices and to quantify to what extent efforts developing high-quality datasets have the expected pay-off in terms of predictive performance. The most accur
Automated social skills training with audiovisual information.
Tanaka, Hiroki; Sakti, Sakriani; Neubig, Graham; Negoro, Hideki; Iwasaka, Hidemi; Nakamura, Satoshi
2016-08-01
People with social communication difficulties tend to have superior skills using computers, and as a result computer-based social skills training systems are flourishing. Social skills training, performed by human trainers, is a well-established method for acquiring appropriate skills in social interaction. Previous works have attempted to automate one or several parts of social skills training through human-computer interaction. However, while previous work on simulating social skills training considered only acoustic and linguistic features, human social skills trainers take into account visual features (e.g. facial expression, posture). In this paper, we create and evaluate a social skills training system that closes this gap by considering audiovisual features regarding the ratio of smiling, yaw, and pitch. An experimental evaluation measures the difference in effectiveness of social skills training when using audio features versus audiovisual features. Results showed that the visual features were effective in improving users' social skills.
NASA Astrophysics Data System (ADS)
Litjens, G.; Ehteshami Bejnordi, B.; Timofeeva, N.; Swadi, G.; Kovacs, I.; Hulsbergen-van de Kaa, C.; van der Laak, J.
2015-03-01
Automated detection of prostate cancer in digitized H&E whole-slide images is an important first step for computer-driven grading. Most automated grading algorithms work on preselected image patches as they are too computationally expensive to calculate on the multi-gigapixel whole-slide images. An automated multi-resolution cancer detection system could reduce the computational workload for subsequent grading and quantification in two ways: by excluding areas of definitely normal tissue within a single specimen or by excluding entire specimens which do not contain any cancer. In this work we present a multi-resolution cancer detection algorithm geared towards the latter. The algorithm methodology is as follows: at a coarse resolution the system uses superpixels, color histograms and local binary patterns in combination with a random forest classifier to assess the likelihood of cancer. The five most suspicious superpixels are identified and at a higher resolution more computationally expensive graph and gland features are added to refine classification for these superpixels. Our methods were evaluated on a data set of 204 digitized whole-slide H&E stained images of MR-guided biopsy specimens from 163 patients. A pathologist exhaustively annotated the specimens for areas containing cancer. The performance of our system was evaluated using ten-fold cross-validation, stratified according to patient. Image-based receiver operating characteristic (ROC) analysis was subsequently performed where a specimen containing cancer was considered positive and specimens without cancer negative. We obtained an area under the ROC curve of 0.96 and a specificity of 0.4 at a sensitivity of 1.0.
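A minimal sketch of the coarse-resolution stage described above, assuming scikit-image and scikit-learn as stand-ins for the authors' implementation; the feature choices (16-bin colour histogram, uniform LBP) and helper names are illustrative:

```python
import numpy as np
from skimage.segmentation import slic
from skimage.feature import local_binary_pattern
from skimage.color import rgb2gray
from sklearn.ensemble import RandomForestClassifier

def superpixel_features(rgb_image, n_segments=500):
    """One feature vector per superpixel: colour histogram + LBP histogram.
    Assumes rgb_image is a float image scaled to [0, 1]."""
    segments = slic(rgb_image, n_segments=n_segments, start_label=0)
    lbp = local_binary_pattern(rgb2gray(rgb_image), P=8, R=1, method="uniform")
    feats = []
    for label in np.unique(segments):
        mask = segments == label
        color_hist, _ = np.histogram(rgb_image[mask], bins=16, range=(0.0, 1.0))
        lbp_hist, _ = np.histogram(lbp[mask], bins=10, range=(0, 10))
        feats.append(np.concatenate([color_hist, lbp_hist]))
    return segments, np.asarray(feats, dtype=float)

# Train on labelled superpixels, rank test superpixels by cancer likelihood,
# and pass the five most suspicious ones to the fine-resolution stage:
# clf = RandomForestClassifier(n_estimators=100).fit(train_feats, train_labels)
# top5 = np.argsort(clf.predict_proba(test_feats)[:, 1])[::-1][:5]
```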
Application of Computer Simulation to Teach ATM Access to Individuals with Intellectual Disabilities
ERIC Educational Resources Information Center
Davies, Daniel K.; Stock, Steven E.; Wehmeyer, Michael L.
2003-01-01
This study investigates use of computer simulation for teaching ATM use to adults with intellectual disabilities. ATM-SIM is a computer-based trainer used for teaching individuals with intellectual disabilities how to use an automated teller machine (ATM) to access their personal bank accounts. In the pilot evaluation, a prototype system was…
Chang, Herng-Hua; Chang, Yu-Ning
2017-04-01
Bilateral filters have been substantially exploited in numerous magnetic resonance (MR) image restoration applications for decades. Due to the lack of a theoretical basis for setting the filter parameters, empirical manipulation with fixed values and noise variance-related adjustments has generally been employed. The outcome of these strategies is usually sensitive to variation in the brain structures, and not all three parameter values are optimal. This article investigates the optimal setting of the bilateral filter, from which an accelerated and automated restoration framework is developed. To reduce the computational burden of the bilateral filter, parallel computing with the graphics processing unit (GPU) architecture is first introduced. The NVIDIA Tesla K40c GPU with the compute unified device architecture (CUDA) functionality is specifically utilized to exploit thread usage and memory resources. To correlate the filter parameters with image characteristics for automation, optimal image texture features are acquired based on the sequential forward floating selection (SFFS) scheme. The selected features are then introduced into the back propagation network (BPN) model for filter parameter estimation. Finally, the k-fold cross validation method is adopted to evaluate the accuracy of the proposed filter parameter prediction framework. A wide variety of T1-weighted brain MR images with various scenarios of noise levels and anatomic structures were utilized to train and validate this new parameter decision system with CUDA-based bilateral filtering. For a common brain MR image volume of 256 × 256 × 256 pixels, the speed-up gain reached 284. Six optimal texture features were acquired and associated with the BPN to establish a "high accuracy" parameter prediction system, which achieved a mean absolute percentage error (MAPE) of 5.6%. Automatic restoration results on 2460 brain MR images received an average relative error in terms of peak signal-to-noise ratio (PSNR) of less than 0.1%. In comparison with many state-of-the-art filters, the proposed automation framework with CUDA-based bilateral filtering provided more favorable results both quantitatively and qualitatively. Possessing unique characteristics and demonstrating exceptional performance, the proposed CUDA-based bilateral filter adequately removed random noise in multifarious brain MR images for further study in neurosciences and radiological sciences. It requires no prior knowledge of the noise variance and automatically restores MR images while preserving fine details. The strategy of exploiting CUDA to accelerate the computation and incorporating texture features into the BPN to completely automate the bilateral filtering process is achievable and validated, from which the best performance is reached. © 2017 American Association of Physicists in Medicine.
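For illustration, a CPU stand-in for the framework's final step, assuming OpenCV's bilateral filter and a scikit-learn MLP in place of the CUDA kernel and BPN; the function names and the feature extraction are assumptions, not the authors' code:

```python
import cv2
import numpy as np
from sklearn.neural_network import MLPRegressor

def restore_slice(slice_u8, texture_features, param_net):
    """Predict (d, sigma_color, sigma_space) from texture features, then filter.
    slice_u8: a single 8-bit MR slice; texture_features: 1-D feature vector."""
    d, sigma_color, sigma_space = param_net.predict([texture_features])[0]
    return cv2.bilateralFilter(slice_u8, int(d), float(sigma_color), float(sigma_space))

# Training the parameter predictor (targets: three filter parameters per sample):
# param_net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000)
# param_net.fit(train_texture_features, train_optimal_params)
```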
Impacts: NIST Building and Fire Research Laboratory (technical and societal)
NASA Astrophysics Data System (ADS)
Raufaste, N. J.
1993-08-01
The Building and Fire Research Laboratory (BFRL) of the National Institute of Standards and Technology (NIST) is dedicated to the life cycle quality of constructed facilities. The report describes major effects of BFRL's program on building and fire research. Contents of the document include: structural reliability; nondestructive testing of concrete; structural failure investigations; seismic design and construction standards; rehabilitation codes and standards; alternative refrigerants research; HVAC simulation models; thermal insulation; residential equipment energy efficiency; residential plumbing standards; computer image evaluation of building materials; corrosion-protection for reinforcing steel; prediction of the service lives of building materials; quality of construction materials laboratory testing; roofing standards; simulating fires with computers; fire safety evaluation system; fire investigations; soot formation and evolution; cone calorimeter development; smoke detector standards; standard for the flammability of children's sleepwear; smoldering insulation fires; wood heating safety research; in-place testing of concrete; communication protocols for building automation and control systems; computer simulation of the properties of concrete and other porous materials; cigarette-induced furniture fires; carbon monoxide formation in enclosure fires; halon alternative fire extinguishing agents; turbulent mixing research; materials fire research; furniture flammability testing; standard for the cigarette ignition resistance of mattresses; support of navy firefighter trainer program; and using fire to clean up oil spills.
GRAMPS: An Automated Ambulatory Geriatric Record
Hammond, Kenric W.; King, Carol A.; Date, Vishvanath V.; Prather, Robert J.; Loo, Lawrence; Siddiqui, Khwaja
1988-01-01
GRAMPS (Geriatric Record and Multidisciplinary Planning System) is an interactive MUMPS system developed for VA outpatient use. It allows physicians to effectively document care in problem-oriented format with structured narrative and free text, eliminating handwritten input. We evaluated the system in a one-year controlled cohort study. When the computer was used, appointment times averaged 8.2 minutes longer (32.6 vs. 24.4 minutes) compared to control visits with the same physicians. Computer use was associated with better quality of care as measured in the management of a common problem, hypertension, as well as decreased overall costs of care. When a faster computer was installed, data entry times improved, suggesting that slower processing had accounted for a substantial portion of the observed difference in appointment lengths. The GRAMPS system was well-accepted by providers. The modular design used in GRAMPS has been extended to medical-care applications in Nursing and Mental Health.
Evaluation of verification and testing tools for FORTRAN programs
NASA Technical Reports Server (NTRS)
Smith, K. A.
1980-01-01
Two automated software verification and testing systems were developed for use in the analysis of computer programs. An evaluation of the static analyzer DAVE and the dynamic analyzer PET, which are used in the analysis of FORTRAN programs on Control Data (CDC) computers, is described. Both systems were found to be effective and complementary, and are recommended for use in testing FORTRAN programs.
Methodology for Prototyping Increased Levels of Automation for Spacecraft Rendezvous Functions
NASA Technical Reports Server (NTRS)
Hart, Jeremy J.; Valasek, John
2007-01-01
The Crew Exploration Vehicle necessitates higher levels of automation than previous NASA vehicles, due to program requirements for automation, including Automated Rendezvous and Docking. Studies of spacecraft development often point to the locus of decision-making authority between humans and computers (i.e., automation) as a prime driver for cost, safety, and mission success. Therefore, a critical component in Crew Exploration Vehicle development is the determination of the correct level of automation. To identify the appropriate levels of automation and autonomy to design into a human space flight vehicle, NASA has created the Function-specific Level of Autonomy and Automation Tool. This paper develops a methodology for prototyping increased levels of automation for spacecraft rendezvous functions. This methodology is used to evaluate, via prototyping, the accuracy of the levels of automation specified by the Function-specific Level of Autonomy and Automation Tool. Spacecraft rendezvous planning tasks are selected and then prototyped in Matlab using Fuzzy Logic techniques and existing Space Shuttle rendezvous trajectory algorithms.
NASA Technical Reports Server (NTRS)
Reichel, R. H.; Hague, D. S.; Jones, R. T.; Glatt, C. R.
1973-01-01
This computer program manual describes in two parts the automated combustor design optimization code AUTOCOM. The program code is written in the FORTRAN 4 language. The input data setup and the program outputs are described, and a sample engine case is discussed. The program structure and programming techniques are also described, along with AUTOCOM program analysis.
Model-Independent Phenotyping of C. elegans Locomotion Using Scale-Invariant Feature Transform
Koren, Yelena; Sznitman, Raphael; Arratia, Paulo E.; Carls, Christopher; Krajacic, Predrag; Brown, André E. X.; Sznitman, Josué
2015-01-01
To uncover the genetic basis of behavioral traits in the model organism C. elegans, a common strategy is to study locomotion defects in mutants. Despite efforts to introduce (semi-)automated phenotyping strategies, current methods overwhelmingly depend on worm-specific features that must be hand-crafted and as such are not generalizable for phenotyping motility in other animal models. Hence, there is an ongoing need for robust algorithms that can automatically analyze and classify motility phenotypes quantitatively. To this end, we have developed a fully-automated approach to characterize C. elegans' phenotypes that does not require the definition of nematode-specific features. Rather, we make use of the popular computer vision Scale-Invariant Feature Transform (SIFT), from which we construct histograms of commonly-observed SIFT features to represent nematode motility. We first evaluated our method on a synthetic dataset simulating a range of nematode crawling gaits. Next, we evaluated our algorithm on two distinct datasets of crawling C. elegans with mutants affecting neuromuscular structure and function. Not only is our algorithm able to detect differences between strains, but the results also capture similarities in locomotory phenotypes that lead to clustering consistent with expectations based on genetic relationships. Our proposed approach generalizes directly and should be applicable to other animal models. Such applicability holds promise for computational ethology as more groups collect high-resolution image data of animal behavior. PMID:25816290
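A hedged sketch of the bag-of-SIFT-features representation, using OpenCV and scikit-learn; the vocabulary size (k=64) and helper names are illustrative assumptions:

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

sift = cv2.SIFT_create()

def descriptors(gray_u8):
    """SIFT descriptors for one 8-bit grayscale frame (may be empty)."""
    _, desc = sift.detectAndCompute(gray_u8, None)
    return desc if desc is not None else np.empty((0, 128), np.float32)

def fit_vocabulary(all_desc, k=64):
    """Cluster pooled descriptors into k 'visual words'."""
    return KMeans(n_clusters=k, n_init=10).fit(np.vstack(all_desc))

def bow_histogram(desc, vocab):
    """Normalised histogram of visual-word occurrences: the motility signature."""
    words = vocab.predict(desc.astype(float))
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)
```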
Shahidi, Shoaleh; Bahrampour, Ehsan; Soltanimehr, Elham; Zamani, Ali; Oshagh, Morteza; Moattari, Marzieh; Mehdizadeh, Alireza
2014-09-16
Two-dimensional projection radiographs have been traditionally considered the modality of choice for cephalometric analysis. To overcome the shortcomings of two-dimensional images, three-dimensional computed tomography (CT) has been used to evaluate craniofacial structures. However, manual landmark detection depends on medical expertise, and the process is time-consuming. The present study was designed to produce software capable of automated localization of craniofacial landmarks on cone beam (CB) CT images based on image registration and to evaluate its accuracy. The software was designed using MATLAB programming language. The technique was a combination of feature-based (principal axes registration) and voxel similarity-based methods for image registration. A total of 8 CBCT images were selected as our reference images for creating a head atlas. Then, 20 CBCT images were randomly selected as the test images for evaluating the method. Three experts twice located 14 landmarks in all 28 CBCT images during two examinations set 6 weeks apart. The differences in the distances of coordinates of each landmark on each image between manual and automated detection methods were calculated and reported as mean errors. The combined intraclass correlation coefficient for intraobserver reliability was 0.89 and for interobserver reliability 0.87 (95% confidence interval, 0.82 to 0.93). The mean errors of all 14 landmarks were <4 mm. Additionally, 63.57% of landmarks had a mean error of <3 mm compared with manual detection (gold standard method). The accuracy of our approach for automated localization of craniofacial landmarks, which was based on combining feature-based and voxel similarity-based methods for image registration, was acceptable. Nevertheless we recommend repetition of this study using other techniques, such as intensity-based methods.
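A minimal numpy sketch of the feature-based component (principal axes registration): the two point sets are aligned by matching centroids and the principal axes of their covariance. The voxel-similarity refinement stage is omitted, and axis sign/ordering ambiguities would need resolving in practice:

```python
import numpy as np

def principal_axes(points):
    """Centroid and principal axes (as matrix columns) of an (n, 3) point set."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    return centroid, vt.T

def register(moving, fixed):
    """Map 'moving' points into the frame of 'fixed' via principal axes.
    Note: SVD axes have sign/order ambiguity; real pipelines disambiguate."""
    c_m, axes_m = principal_axes(moving)
    c_f, axes_f = principal_axes(fixed)
    rotation = axes_f @ axes_m.T
    return (moving - c_m) @ rotation.T + c_f
```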
NASA Astrophysics Data System (ADS)
Hasan, M.; Helal, A.; Gabr, M.
2014-12-01
In this project, we focus on providing a computer-automated platform for better assessment of the potential failures and retrofit measures of flood-protecting earth structures, e.g., dams and levees. Such structures play an important role during extreme flooding events as well as during normal operating conditions. Furthermore, they are part of other civil infrastructure such as water storage and hydropower generation. Hence, there is a clear need for accurate evaluation of stability and functionality levels during their service lifetime so that rehabilitation and maintenance costs are effectively guided. Among condition assessment approaches based on the factor of safety, the limit states (LS) approach utilizes numerical modeling to quantify the probability of potential failures. The parameters for LS numerical modeling include i) geometry and side slopes of the embankment, ii) loading conditions in terms of the rate of rise and duration of high water levels in the reservoir, and iii) cycles of rising and falling water levels simulating the effect of consecutive storms throughout the service life of the structure. Sample data regarding the correlations of these parameters are available through previous research studies. We have unified these criteria and extended the risk assessment in terms of loss of life through the implementation of a graphical user interface that automates the input parameters, divides the data into training and testing sets, and then feeds them into an Artificial Neural Network (ANN) tool through MATLAB programming. The ANN modeling allows us to predict risk values of flood-protective structures based on user feedback quickly and easily. In the future, we expect to fine-tune the software by adding extensive data on variations of parameters.
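A small scikit-learn sketch of the ANN step described above (the original used MATLAB); the feature names and the 75/25 split are assumptions:

```python
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# X: rows of embankment parameters, e.g. [side_slope, rise_rate,
#    high_water_duration, storm_cycles, ...]
# y: risk values obtained from limit-state numerical modelling
def train_risk_model(X, y):
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
    model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000)
    model.fit(X_train, y_train)
    return model, model.score(X_test, y_test)  # R^2 on the held-out set
```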
Automated selection of computed tomography display parameters using neural networks
NASA Astrophysics Data System (ADS)
Zhang, Di; Neu, Scott; Valentino, Daniel J.
2001-07-01
A collection of artificial neural networks (ANN's) was trained to identify simple anatomical structures in a set of x-ray computed tomography (CT) images. These neural networks learned to associate a point in an image with the anatomical structure containing the point by using the image pixels located on the horizontal and vertical lines that ran through the point. The neural networks were integrated into a computer software tool whose function is to select an index into a list of CT window/level values from the location of the user's mouse cursor. Based upon the anatomical structure selected by the user, the software tool automatically adjusts the image display to optimally view the structure.
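An illustrative sketch of the idea: the feature vector for a cursor position is the concatenation of the row and column pixel profiles through that point, and a classifier maps it to an index into preset window/level pairs. The presets shown are typical CT values, not those from the paper:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# (window, level) presets: soft tissue, lung, bone -- typical values, assumed
WINDOW_LEVEL_PRESETS = [(400, 40), (1500, -600), (2000, 300)]

def cursor_features(ct_slice, row, col):
    """Pixels on the horizontal and vertical lines through the cursor point."""
    return np.concatenate([ct_slice[row, :], ct_slice[:, col]]).astype(float)

# clf = MLPClassifier(hidden_layer_sizes=(64,)).fit(train_features, train_indices)
# idx = clf.predict([cursor_features(img, r, c)])[0]
# window, level = WINDOW_LEVEL_PRESETS[idx]   # display adjusts automatically
```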
NASA Astrophysics Data System (ADS)
Srivastava, Vishal; Dalal, Devjyoti; Kumar, Anuj; Prakash, Surya; Dalal, Krishna
2018-06-01
Moisture content is an important feature of fruits and vegetables. As about 80% of an apple's content is water, decreasing the moisture content degrades the quality of apples (Golden Delicious). The computational and texture features of the apples were extracted from optical coherence tomography (OCT) images. A support vector machine with a Gaussian kernel model was used to perform automated classification. To evaluate the quality of wax-coated apples during storage in vivo, our proposed method opens up the possibility of fully automated quantitative analysis based on the morphological features of apples. Our results demonstrate that the analysis of the computational and texture features of OCT images may be a good non-destructive method for assessing the quality of apples.
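A minimal sketch of the classification step, assuming precomputed feature vectors; the label encoding and hyperparameters are illustrative:

```python
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def train_apple_classifier(features, labels):
    """Gaussian-kernel SVM over OCT texture features.
    labels: e.g. 0 = acceptable quality, 1 = degraded (illustrative)."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
    return clf.fit(features, labels)
```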
A Perspective on Computational Human Performance Models as Design Tools
NASA Technical Reports Server (NTRS)
Jones, Patricia M.
2010-01-01
The design of interactive systems, including levels of automation, displays, and controls, is usually based on design guidelines and iterative empirical prototyping. A complementary approach is to use computational human performance models to evaluate designs. An integrated strategy of model-based and empirical test and evaluation activities is particularly attractive as a methodology for verification and validation of human-rated systems for commercial space. This talk will review several computational human performance modeling approaches and their applicability to design of display and control requirements.
Robot graphic simulation testbed
NASA Technical Reports Server (NTRS)
Cook, George E.; Sztipanovits, Janos; Biegl, Csaba; Karsai, Gabor; Springfield, James F.
1991-01-01
The objective of this research was twofold. First, the basic capabilities of ROBOSIM (graphical simulation system) were improved and extended by taking advantage of advanced graphic workstation technology and artificial intelligence programming techniques. Second, the scope of the graphic simulation testbed was extended to include general problems of Space Station automation. Hardware support for 3-D graphics and high processing performance make high resolution solid modeling, collision detection, and simulation of structural dynamics computationally feasible. The Space Station is a complex system with many interacting subsystems. Design and testing of automation concepts demand modeling of the affected processes, their interactions, and that of the proposed control systems. The automation testbed was designed to facilitate studies in Space Station automation concepts.
Automated Program Recognition by Graph Parsing
1992-07-01
The recognition of standard computational structures (cliches) in a program can help an experienced programmer understand the program.
Development of a plan for automating integrated circuit processing
NASA Technical Reports Server (NTRS)
1971-01-01
The operations analysis and equipment evaluations pertinent to the design of an automated production facility capable of manufacturing beam-lead CMOS integrated circuits are reported. The overall plan shows approximate cost of major equipment, production rate and performance capability, flexibility, and special maintenance requirements. Direct computer control is compared with supervisory-mode operations. The plan is limited to wafer processing operations from the starting wafer to the finished beam-lead die after separation etching. The work already accomplished in implementing various automation schemes, and the type of equipment which can be found for instant automation are described. The plan is general, so that small shops or large production units can perhaps benefit. Examples of major types of automated processing machines are shown to illustrate the general concepts of automated wafer processing.
Employment Opportunities for the Handicapped in Programmable Automation.
ERIC Educational Resources Information Center
Swift, Richard; Leneway, Robert
A Computer Integrated Manufacturing System may make it possible for severely disabled people to custom design, machine, and manufacture either wood or metal parts. Programmable automation merges computer aided design, computer aided manufacturing, computer aided engineering, and computer integrated manufacturing systems with automated production…
Peak picking multidimensional NMR spectra with the contour geometry based algorithm CYPICK.
Würz, Julia M; Güntert, Peter
2017-01-01
The automated identification of signals in multidimensional NMR spectra is a challenging task, complicated by signal overlap, noise, and spectral artifacts, for which no universally accepted method is available. Here, we present a new peak picking algorithm, CYPICK, that follows, as far as possible, the manual approach taken by a spectroscopist who analyzes peak patterns in contour plots of the spectrum, but is fully automated. Human visual inspection is replaced by the evaluation of geometric criteria applied to contour lines, such as local extremality, approximate circularity (after appropriate scaling of the spectrum axes), and convexity. The performance of CYPICK was evaluated for a variety of spectra from different proteins by systematic comparison with peak lists obtained by other, manual or automated, peak picking methods, as well as by analyzing the results of automated chemical shift assignment and structure calculation based on input peak lists from CYPICK. The results show that CYPICK yielded peak lists that compare in most cases favorably to those obtained by other automated peak pickers with respect to the criteria of finding a maximal number of real signals, a minimal number of artifact peaks, and maximal correctness of the chemical shift assignments and the three-dimensional structure obtained by fully automated assignment and structure calculation.
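A hedged sketch in the spirit of the geometric tests named above (local extremality omitted), not CYPICK's actual implementation; the thresholds are illustrative:

```python
import numpy as np
from skimage.measure import find_contours
from scipy.spatial import ConvexHull

def is_peak_like(contour, circularity_min=0.7, convexity_min=0.9):
    """Test one contour line (an (n, 2) array) for peak-like geometry."""
    if not np.allclose(contour[0], contour[-1]):   # closed contours only
        return False
    hull = ConvexHull(contour)
    hull_area = hull.volume                        # in 2-D, .volume is the area
    perimeter = np.linalg.norm(np.diff(contour, axis=0), axis=1).sum()
    circularity = 4 * np.pi * hull_area / perimeter**2   # 1.0 for a circle
    x, y = contour[:, 0], contour[:, 1]            # shoelace area of the contour
    contour_area = 0.5 * abs(np.sum(x[:-1] * y[1:] - x[1:] * y[:-1]))
    convexity = contour_area / hull_area           # 1.0 if the contour is convex
    return circularity >= circularity_min and convexity >= convexity_min

# for level in contour_levels:                     # scan contour plots of the spectrum
#     for c in find_contours(spectrum, level):
#         if is_peak_like(c): ...                  # candidate peak at this level
```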
Ballanger, Bénédicte; Tremblay, Léon; Sgambato-Faure, Véronique; Beaudoin-Gobert, Maude; Lavenne, Franck; Le Bars, Didier; Costes, Nicolas
2013-08-15
MRI templates and digital atlases are needed for automated and reproducible quantitative analysis of non-human primate PET studies. Segmenting brain images via multiple atlases outperforms single-atlas labelling in humans. We present a set of atlases manually delineated on brain MRI scans of the monkey Macaca fascicularis. We use this multi-atlas dataset to evaluate two automated methods in terms of accuracy, robustness and reliability in segmenting brain structures on MRI and extracting regional PET measures. Twelve individual Macaca fascicularis high-resolution 3DT1 MR images were acquired. Four individual atlases were created by manually drawing 42 anatomical structures, including cortical and sub-cortical structures, white matter regions, and ventricles. To create the MRI template, we first chose one MRI to define a reference space, and then performed a two-step iterative procedure: affine registration of individual MRIs to the reference MRI, followed by averaging of the twelve resampled MRIs. Automated segmentation in native space was obtained in two ways: 1) Maximum probability atlases were created by decision fusion of two to four individual atlases in the reference space, and transformed back into the individual native space (MAXPROB). 2) One to four individual atlases were registered directly to the individual native space, and combined by decision fusion (PROPAG). Accuracy was evaluated by computing the Dice similarity index and the volume difference. The robustness and reproducibility of PET regional measurements obtained via automated segmentation was evaluated on four co-registered MRI/PET datasets, which included test-retest data. Dice indices were always over 0.7 and reached maximal values of 0.9 for PROPAG with all four individual atlases. There was no significant mean volume bias. The standard deviation of the bias decreased significantly as the number of individual atlases was increased. MAXPROB performed better when more atlases were used. When all four atlases were used for MAXPROB creation, the accuracy of morphometric segmentation approached that of the PROPAG method. PET measures extracted either via automatic methods or via the manually defined regions were strongly correlated, with no significant regional differences between methods. Intra-class correlation coefficients for test-retest data were over 0.87. Compared to single-atlas extractions, multi-atlas methods improve the accuracy of region definition. They also perform comparably to manually defined regions for PET quantification. Multiple atlases of Macaca fascicularis brains are now available and allow reproducible and simplified analyses. Copyright © 2013 Elsevier Inc. All rights reserved.
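Two primitives from the evaluation and fusion steps above, sketched with numpy/scipy; the decision fusion shown is simple per-voxel majority voting, one common realization:

```python
import numpy as np
from scipy import stats

def dice(mask_a, mask_b):
    """Dice similarity index between two boolean label volumes."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def majority_vote(label_volumes):
    """Per-voxel decision fusion of propagated atlas labelings.
    label_volumes: list of integer label arrays, all in the same space.
    Requires scipy >= 1.9 for the keepdims argument."""
    stacked = np.stack(label_volumes)
    fused, _ = stats.mode(stacked, axis=0, keepdims=False)
    return fused
```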
ANL statement of site strategy for computing workstations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fenske, K.R.; Boxberger, L.M.; Amiot, L.W.
1991-11-01
This Statement of Site Strategy describes the procedure at Argonne National Laboratory for defining, acquiring, using, and evaluating scientific and office workstations and related equipment and software in accord with DOE Order 1360.1A (5-30-85) and Laboratory policy. It is Laboratory policy to promote the installation and use of computing workstations to improve productivity and communications for both programmatic and support personnel, to ensure that computing workstation acquisitions meet the expressed need in a cost-effective manner, and to ensure that acquisitions of computing workstations are in accord with Laboratory and DOE policies. The overall computing site strategy at ANL is to develop a hierarchy of integrated computing system resources to address the current and future computing needs of the laboratory. The major system components of this hierarchical strategy are: supercomputers, parallel computers, centralized general-purpose computers, distributed multipurpose minicomputers, and computing workstations and office automation support systems. Computing workstations include personal computers, scientific and engineering workstations, computer terminals, microcomputers, word processing and office automation electronic workstations, and associated software and peripheral devices costing less than $25,000 per item.
Automated method for structural segmentation of nasal airways based on cone beam computed tomography
NASA Astrophysics Data System (ADS)
Tymkovych, Maksym Yu.; Avrunin, Oleg G.; Paliy, Victor G.; Filzow, Maksim; Gryshkov, Oleksandr; Glasmacher, Birgit; Omiotek, Zbigniew; DzierŻak, RóŻa; Smailova, Saule; Kozbekova, Ainur
2017-08-01
The work is dedicated to the problem of segmenting human nasal airways using cone beam computed tomography. We propose a specialized approach to structured segmentation of nasal airways that uses spatial information and symmetrisation of the structures. The proposed stages can be used to construct a virtual three-dimensional model of the nasal airways and to produce full-scale personalized atlases. During the research we built a virtual model of the nasal airways, which can be used for constructing specialized medical atlases and for aerodynamics research.
Xu, Zhoubing; Gertz, Adam L.; Burke, Ryan P.; Bansal, Neil; Kang, Hakmook; Landman, Bennett A.; Abramson, Richard G.
2016-01-01
OBJECTIVES Multi-atlas fusion is a promising approach for computer-assisted segmentation of anatomical structures. The purpose of this study was to evaluate the accuracy and time efficiency of multi-atlas segmentation for estimating spleen volumes on clinically-acquired CT scans. MATERIALS AND METHODS Under IRB approval, we obtained 294 deidentified (HIPAA-compliant) abdominal CT scans on 78 subjects from a recent clinical trial. We compared five pipelines for obtaining splenic volumes: Pipeline 1–manual segmentation of all scans, Pipeline 2–automated segmentation of all scans, Pipeline 3–automated segmentation of all scans with manual segmentation for outliers on a rudimentary visual quality check, Pipelines 4 and 5–volumes derived from a unidimensional measurement of craniocaudal spleen length and three-dimensional splenic index measurements, respectively. Using Pipeline 1 results as ground truth, the accuracy of Pipelines 2–5 (Dice similarity coefficient [DSC], Pearson correlation, R-squared, and percent and absolute deviation of volume from ground truth) were compared for point estimates of splenic volume and for change in splenic volume over time. Time cost was also compared for Pipelines 1–5. RESULTS Pipeline 3 was dominant in terms of both accuracy and time cost. With a Pearson correlation coefficient of 0.99, average absolute volume deviation 23.7 cm3, and 1 minute per scan, Pipeline 3 yielded the best results. The second-best approach was Pipeline 5, with a Pearson correlation coefficient 0.98, absolute deviation 46.92 cm3, and 1 minute 30 seconds per scan. Manual segmentation (Pipeline 1) required 11 minutes per scan. CONCLUSION A computer-automated segmentation approach with manual correction of outliers generated accurate splenic volumes with reasonable time efficiency. PMID:27519156
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
2000-01-01
The purpose of this paper is to discuss grid generation issues and to challenge the grid generation community to develop tools suitable for automated multidisciplinary analysis and design optimization of aerospace vehicles. Special attention is given to the grid generation issues of computational fluid dynamics and computational structural mechanics disciplines.
Multi-atlas pancreas segmentation: Atlas selection based on vessel structure.
Karasawa, Ken'ichi; Oda, Masahiro; Kitasaka, Takayuki; Misawa, Kazunari; Fujiwara, Michitaka; Chu, Chengwen; Zheng, Guoyan; Rueckert, Daniel; Mori, Kensaku
2017-07-01
Automated organ segmentation from medical images is an indispensable component for clinical applications such as computer-aided diagnosis (CAD) and computer-assisted surgery (CAS). We utilize a multi-atlas segmentation scheme, which has recently been used in different approaches in the literature to achieve more accurate and robust segmentation of anatomical structures in computed tomography (CT) volume data. Among abdominal organs, the pancreas has large inter-patient variability in its position, size and shape. Moreover, the CT intensity of the pancreas closely resembles adjacent tissues, rendering its segmentation a challenging task. Due to this, conventional intensity-based atlas selection for pancreas segmentation often fails to select atlases that are similar in pancreas position and shape to those of the unlabeled target volume. In this paper, we propose a new atlas selection strategy based on vessel structure around the pancreatic tissue and demonstrate its application to a multi-atlas pancreas segmentation. Our method utilizes vessel structure around the pancreas to select atlases with high pancreatic resemblance to the unlabeled volume. Also, we investigate two types of applications of the vessel structure information to the atlas selection. Our segmentations were evaluated on 150 abdominal contrast-enhanced CT volumes. The experimental results showed that our approach can segment the pancreas with an average Jaccard index of 66.3% and an average Dice overlap coefficient of 78.5%. Copyright © 2017 Elsevier B.V. All rights reserved.
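A hedged sketch of the selection idea above: rank atlases by the similarity of their vessel structures to the target volume, then fuse only the best matches. The Jaccard-on-vessel-masks criterion and k=5 are illustrative assumptions, not the paper's exact measure:

```python
import numpy as np

def vessel_similarity(vessel_mask_a, vessel_mask_b):
    """Jaccard overlap between two boolean vessel masks in a common space."""
    inter = np.logical_and(vessel_mask_a, vessel_mask_b).sum()
    union = np.logical_or(vessel_mask_a, vessel_mask_b).sum()
    return inter / union if union else 0.0

def select_atlases(target_vessels, atlas_vessel_masks, k=5):
    """Indices of the k atlases whose vessel structure best matches the target."""
    scores = [vessel_similarity(target_vessels, m) for m in atlas_vessel_masks]
    return np.argsort(scores)[::-1][:k]
```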
NASA Technical Reports Server (NTRS)
Mathur, F. P.
1972-01-01
Description of an on-line interactive computer program called CARE (Computer-Aided Reliability Estimation) which can model self-repair and fault-tolerant organizations and perform certain other functions. Essentially CARE consists of a repository of mathematical equations defining the various basic redundancy schemes. These equations, under program control, are then interrelated to generate the desired mathematical model to fit the architecture of the system under evaluation. The mathematical model is then supplied with ground instances of its variables and is then evaluated to generate values for the reliability-theoretic functions applied to the model.
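For a flavour of the kind of redundancy equation such a repository holds, a generic example (not necessarily one from CARE itself): triple modular redundancy with a perfect voter, where each module has reliability R(t) = exp(-λt):

```python
import numpy as np

def module_reliability(failure_rate, t):
    """Exponential reliability of a single module, R = exp(-lambda * t)."""
    return np.exp(-failure_rate * t)

def tmr_reliability(failure_rate, t):
    """Triple modular redundancy with a perfect voter: R_tmr = 3R^2 - 2R^3."""
    r = module_reliability(failure_rate, t)
    return 3 * r**2 - 2 * r**3

# tmr_reliability(1e-4, 1000) -> ~0.975, vs ~0.905 for a single module
```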
Perceptions of L1 Glossed Feedback in Automated Writing Evaluation: A Case Study
ERIC Educational Resources Information Center
Wilken, Jayme Lynn
2018-01-01
Learner perceptions toward and utilization of L1 glossed feedback in an automated writing evaluation (AWE) program were investigated in an Intensive English Program (IEP) class. This small case study focused on two Chinese students who responded to weekly surveys, semi-structured interviews, and screen capture videos of their revisions over a…
Automated data collection based on RoboDiff at the ESRF beamline MASSIF-1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nurizzo, Didier, E-mail: Didier.nurizzo@esrf.fr; Guichard, Nicolas; McSweeney, Sean
2016-07-27
The European Synchrotron Radiation Facility has a long standing history in the automation of experiments in Macromolecular Crystallography. MASSIF-1 (Massively Automated Sample Screening and evaluation Integrated Facility), a beamline constructed as part of the ESRF Upgrade Phase I program, has been open to the external user community since July 2014 and offers a unique completely automated data collection service to both academic and industrial structural biologists.
Promoting autonomy in a smart home environment with a smarter interface.
Brennan, C P; McCullagh, P J; Galway, L; Lightbody, G
2015-01-01
In the not too distant future, the median population age will tend towards 65, an age at which the need for support increases. Most older people want to remain autonomous and self-sufficient for as long as possible. As environments become smarter, home automation solutions can be provided to support this aspiration. The technology discussed within this paper focuses on providing a home automation system that can be controlled by most users regardless of mobility restrictions, and hence it may be applicable to older people. It comprises a hybrid Brain-Computer Interface (BCI), a home automation user interface, and actuators. In the first instance, our system is controlled with conventional computer input, which is then replaced with eye tracking and finally a BCI and eye tracking combination. The systems have been assessed in terms of information throughput; benefits and limitations are evaluated.
Parmodel: a web server for automated comparative modeling of proteins.
Uchôa, Hugo Brandão; Jorge, Guilherme Eberhart; Freitas Da Silveira, Nelson José; Camera, João Carlos; Canduri, Fernanda; De Azevedo, Walter Filgueira
2004-12-24
Parmodel is a web server for automated comparative modeling and evaluation of protein structures. The aim of this tool is to help inexperienced users perform modeling, assessment, visualization, and optimization of protein models, as well as to help crystallographers evaluate structures solved experimentally. It is subdivided into four modules: Parmodel Modeling, Parmodel Assessment, Parmodel Visualization, and Parmodel Optimization. The main module is Parmodel Modeling, which allows the building of several models for the same protein in a reduced time through the distribution of modeling processes on a Beowulf cluster. Parmodel automates and integrates the main software used in comparative modeling, such as MODELLER, Whatcheck, Procheck, Raster3D, Molscript, and Gromacs. This web server is freely accessible at .
Training and Personnel Systems Technology R&D Program Description FY 1988/1989. Revision
1988-05-20
scenario software/database, and computer generated imagery (CIG) subsystem resources; (d) investigation of feasibility of, and preparation of plans... computer language to Army flight simulator for demonstration and evaluation. The objective is to have flight simulators which use the same software as... the Automated Performance and Readiness Training System (APARTS), which is a computer software system which facilitates training management through
Which Do Students Prefer to Evaluate Their Essays: Peers or Computer Program
ERIC Educational Resources Information Center
Lai, Yi-hsiu
2010-01-01
The purpose of this study was to investigate problems and potentials of new technologies in English writing education. The effectiveness of automated writing evaluation (AWE) ("MY Access") and of peer evaluation (PE) was compared. Twenty-two English as a foreign language (EFL) learners in Taiwan participated in this study. They submitted…
Student Engagement with Computer-Generated Feedback: A Case Study
ERIC Educational Resources Information Center
Zhang, Zhe
2017-01-01
In order to benefit from feedback on their writing, students need to engage effectively with it. This article reports a case study on student engagement with computer-generated feedback, known as automated writing evaluation (AWE) feedback, in an EFL context. Differing from previous studies that explored commercially available AWE programs, this…
USSR Report, Cybernetics, Computers and Automation Technology
1987-03-31
version of the system was tested by adapting PAL-11 and MACRO-11 assembly code for the "Elektronika-60" and "Elektronika-60M" computers; ASM-86 for the... GS, "On the Results of Evaluation of Insurance Payments in Collective and State Farms and Private Households," the actuarial analysis tables based
ERIC Educational Resources Information Center
Shotsberger, Paul G.
The National Council of Teachers of Mathematics (1991) has identified the use of computers as a necessary teaching tool for enhancing mathematical discourse in schools. One possible vehicle of technological change in mathematics classrooms is the Intelligent Tutoring System (ITS), an artificially intelligent computer-based tutor. This paper…
Cockpit Adaptive Automation and Pilot Performance
NASA Technical Reports Server (NTRS)
Parasuraman, Raja
2001-01-01
The introduction of high-level automated systems in the aircraft cockpit has provided several benefits, e.g., new capabilities, enhanced operational efficiency, and reduced crew workload. At the same time, conventional 'static' automation has sometimes degraded human operator monitoring performance, increased workload, and reduced situation awareness. Adaptive automation represents an alternative to static automation. In this approach, task allocation between human operators and computer systems is flexible and context-dependent rather than static. Adaptive automation, or adaptive task allocation, is thought to provide for regulation of operator workload and performance, while preserving the benefits of static automation. In previous research we have reported beneficial effects of adaptive automation on the performance of both pilots and non-pilots of flight-related tasks. For adaptive systems to be viable, however, such benefits need to be examined jointly in the context of a single set of tasks. The studies carried out under this project evaluated a systematic method for combining different forms of adaptive automation. A model for effective combination of different forms of adaptive automation, based on matching adaptation to operator workload, was proposed and tested. The model was evaluated in studies using IFR-rated pilots flying a general-aviation simulator. Performance, subjective, and physiological (heart rate variability, eye scan-paths) measures of workload were recorded. The studies compared workload-based adaptation to non-adaptive control conditions and found evidence for systematic benefits of adaptive automation. The research provides an empirical basis for evaluating the effectiveness of adaptive automation in the cockpit. The results contribute to the development of design principles and guidelines for the implementation of adaptive automation in the cockpit, particularly in general aviation, and in other human-machine systems. Project goals were met or exceeded. The results of the research extended knowledge of automation-related performance decrements in pilots and demonstrated the positive effects of adaptive task allocation. In addition, several practical implications for cockpit automation design were drawn from the research conducted. A total of 12 articles deriving from the project were published.
Nondestructive Evaluation for Aerospace Composites
NASA Technical Reports Server (NTRS)
Leckey, Cara; Cramer, Elliott; Perey, Daniel
2015-01-01
Nondestructive evaluation (NDE) techniques are important for enabling NASA's missions in space exploration and aeronautics. The expanded and continued use of composite materials for aerospace components and vehicles leads to a need for advanced NDE techniques capable of quantitatively characterizing damage in composites. Quantitative damage detection techniques help to ensure safety, reliability and durability of space and aeronautic vehicles. This presentation will give a broad outline of NASA's range of technical work and an overview of the NDE research performed in the Nondestructive Evaluation Sciences Branch at NASA Langley Research Center. The presentation will focus on ongoing research in the development of NDE techniques for composite materials and structures, including development of automated data processing tools to turn NDE data into quantitative location and sizing results. Composites focused NDE research in the areas of ultrasonics, thermography, X-ray computed tomography, and NDE modeling will be discussed.
Computational neuroanatomy: ontology-based representation of neural components and connectivity.
Rubin, Daniel L; Talos, Ion-Florin; Halle, Michael; Musen, Mark A; Kikinis, Ron
2009-02-05
A critical challenge in neuroscience is organizing, managing, and accessing the explosion in neuroscientific knowledge, particularly anatomic knowledge. We believe that explicit knowledge-based approaches to make neuroscientific knowledge computationally accessible will be helpful in tackling this challenge and will enable a variety of applications exploiting this knowledge, such as surgical planning. We developed ontology-based models of neuroanatomy to enable symbolic lookup, logical inference and mathematical modeling of neural systems. We built a prototype model of the motor system that integrates descriptive anatomic and qualitative functional neuroanatomical knowledge. In addition to modeling normal neuroanatomy, our approach provides an explicit representation of abnormal neural connectivity in disease states, such as common movement disorders. The ontology-based representation encodes both structural and functional aspects of neuroanatomy. The ontology-based models can be evaluated computationally, enabling development of automated computer reasoning applications. Neuroanatomical knowledge can be represented in machine-accessible format using ontologies. Computational neuroanatomical approaches such as described in this work could become a key tool in translational informatics, leading to decision support applications that inform and guide surgical planning and personalized care for neurological disease in the future.
Bjornsson, Christopher S; Lin, Gang; Al-Kofahi, Yousef; Narayanaswamy, Arunachalam; Smith, Karen L; Shain, William; Roysam, Badrinath
2009-01-01
Brain structural complexity has confounded prior efforts to extract quantitative image-based measurements. We present a systematic ‘divide and conquer’ methodology for analyzing three-dimensional (3D) multi-parameter images of brain tissue to delineate and classify key structures, and compute quantitative associations among them. To demonstrate the method, thick (~100 μm) slices of rat brain tissue were labeled using 3 – 5 fluorescent signals, and imaged using spectral confocal microscopy and unmixing algorithms. Automated 3D segmentation and tracing algorithms were used to delineate cell nuclei, vasculature, and cell processes. From these segmentations, a set of 23 intrinsic and 8 associative image-based measurements was computed for each cell. These features were used to classify astrocytes, microglia, neurons, and endothelial cells. Associations among cells and between cells and vasculature were computed and represented as graphical networks to enable further analysis. The automated results were validated using a graphical interface that permits investigator inspection and corrective editing of each cell in 3D. Nuclear counting accuracy was >89%, and cell classification accuracy ranged from 81–92% depending on cell type. We present a software system named FARSIGHT implementing our methodology. Its output is a detailed XML file containing measurements that may be used for diverse quantitative hypothesis-driven and exploratory studies of the central nervous system. PMID:18294697
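An illustrative sketch of the associative step, representing cells and vessel points as nodes of a networkx graph with proximity edges; all names and the distance threshold are assumptions:

```python
import networkx as nx

def association_graph(cells, vessel_points, max_dist=25.0):
    """cells: {id: (x, y, z, cell_type)}; vessel_points: {id: (x, y, z)}.
    Edges connect cells to vessel points within max_dist (in image units)."""
    g = nx.Graph()
    for cid, (x, y, z, ctype) in cells.items():
        g.add_node(("cell", cid), pos=(x, y, z), cell_type=ctype)
    for vid, pos in vessel_points.items():
        g.add_node(("vessel", vid), pos=pos)
    for cid, (x, y, z, _) in cells.items():
        for vid, (vx, vy, vz) in vessel_points.items():
            d = ((x - vx) ** 2 + (y - vy) ** 2 + (z - vz) ** 2) ** 0.5
            if d <= max_dist:
                g.add_edge(("cell", cid), ("vessel", vid), distance=d)
    return g

# Example query: cells with at least one nearby vessel point
# perivascular = [n for n in g if n[0] == "cell"
#                 and any(m[0] == "vessel" for m in g[n])]
```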
A Decision Support System for Control and Automation of Dynamical Processes
1990-03-01
NASA Astrophysics Data System (ADS)
Zhou, Chuan; Chan, Heang-Ping; Hadjiiski, Lubomir M.; Chughtai, Aamer; Patel, Smita; Cascade, Philip N.; Sahiner, Berkman; Wei, Jun; Ge, Jun; Kazerooni, Ella A.
2007-03-01
CT pulmonary angiography (CTPA) has been reported to be an effective means for clinical diagnosis of pulmonary embolism (PE). We are developing a computer-aided detection (CAD) system to assist radiologist in PE detection in CTPA images. 3D multiscale filters in combination with a newly designed response function derived from the eigenvalues of Hessian matrices is used to enhance vascular structures including the vessel bifurcations and suppress non-vessel structures such as the lymphoid tissues surrounding the vessels. A hierarchical EM estimation is then used to segment the vessels by extracting the high response voxels at each scale. The segmented vessels are pre-screened for suspicious PE areas using a second adaptive multiscale EM estimation. A rule-based false positive (FP) reduction method was designed to identify the true PEs based on the features of PE and vessels. 43 CTPA scans were used as an independent test set to evaluate the performance of PE detection. Experienced chest radiologists identified the PE locations which were used as "gold standard". 435 PEs were identified in the artery branches, of which 172 and 263 were subsegmental and proximal to the subsegmental, respectively. The computer-detected volume was considered true positive (TP) when it overlapped with 10% or more of the gold standard PE volume. Our preliminary test results show that, at an average of 33 and 24 FPs/case, the sensitivities of our PE detection method were 81% and 78%, respectively, for proximal PEs, and 79% and 73%, respectively, for subsegmental PEs. The study demonstrates the feasibility that the automated method can identify PE accurately on CTPA images. Further study is underway to improve the sensitivity and reduce the FPs.
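As a rough stand-in for the Hessian-based enhancement step, scikit-image's Frangi filter applies the same eigenvalue principle (though not the authors' custom response function or the EM segmentation); the scales and threshold are illustrative:

```python
import numpy as np
from skimage.filters import frangi

def enhance_vessels(ct_volume, scales=(1, 2, 3, 4)):
    """Multiscale Hessian-eigenvalue vesselness map; bright tubular
    structures (contrast-filled vessels) receive a high response."""
    return frangi(ct_volume.astype(float), sigmas=scales, black_ridges=False)

# vessel_mask = enhance_vessels(ctpa) > threshold   # crude stand-in for the
#                                                   # hierarchical EM estimation
```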
A “loop” shape descriptor and its application to automated segmentation of airways from CT scans
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pu, Jiantao; Jin, Chenwang, E-mail: jcw76@163.com; Yu, Nan
2015-06-15
Purpose: A novel shape descriptor is presented to aid automated identification of the airways depicted on computed tomography (CT) images. Methods: Instead of simplifying the tubular characteristic of the airways as an ideal mathematical cylindrical or circular shape, the proposed "loop" shape descriptor exploits the fact that the cross sections of any tubular structure (regardless of its regularity) always appear as a loop. In implementation, the authors first reconstruct the anatomical structures in volumetric CT as a three-dimensional surface model using the classical marching cubes algorithm. Then, the loop descriptor is applied to locate the airways with a concave loop cross section. To deal with the variation of the airway walls in density as depicted on CT images, a multiple threshold strategy is proposed. A publicly available chest CT database consisting of 20 CT scans, which was designed specifically for evaluating an airway segmentation algorithm, was used for quantitative performance assessment. Measures, including length, branch count, and generations, were computed with the aid of a skeletonization operation. Results: For the test dataset, the airway length ranged from 64.6 to 429.8 cm, the generation ranged from 7 to 11, and the branch number ranged from 48 to 312. These results were comparable to the performance of the state-of-the-art algorithms validated on the same dataset. Conclusions: The authors' quantitative experiment demonstrated the feasibility and reliability of the developed shape descriptor in identifying lung airways.
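A minimal sketch of the surface-model step using scikit-image's classical marching cubes; the HU threshold is an illustrative air/tissue boundary, not the paper's multiple-threshold strategy:

```python
from skimage import measure

def airway_surface(ct_volume, hu_threshold=-500.0):
    """Extract an isosurface mesh from the CT volume; cross sections of
    tubular structures on this mesh appear as closed loops to be tested."""
    verts, faces, normals, values = measure.marching_cubes(
        ct_volume, level=hu_threshold)
    return verts, faces
```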
Human performance cognitive-behavioral modeling: a benefit for occupational safety.
Gore, Brian F
2002-01-01
Human Performance Modeling (HPM) is a computer-aided job analysis software methodology used to generate predictions of complex human-automation integration and system flow patterns with the goal of improving operator and system safety. The use of HPM tools has recently been increasing due to reductions in computational cost, augmentations in the tools' fidelity, and usefulness in the generated output. An examination of an Air Man-machine Integration Design and Analysis System (Air MIDAS) model evaluating complex human-automation integration currently underway at NASA Ames Research Center will highlight the importance to occupational safety of considering both cognitive and physical aspects of performance when researching human error.
Human Factors Assessment: The Passive Final Approach Spacing Tool (pFAST) Operational Evaluation
NASA Technical Reports Server (NTRS)
Lee, Katharine K.; Sanford, Beverly D.
1998-01-01
Automation to assist air traffic controllers in the current terminal and en route air traffic environments is being developed at Ames Research Center in conjunction with the Federal Aviation Administration. This automation, known collectively as the Center-TRACON Automation System (CTAS), provides decision-making assistance to air traffic controllers through computer-generated advisories. One of the CTAS tools developed specifically to assist terminal area air traffic controllers is the Passive Final Approach Spacing Tool (pFAST). An operational evaluation of pFAST was conducted at the Dallas/Ft. Worth, Texas, Terminal Radar Approach Control (TRACON) facility. Human factors data collected during the test describe the impact of the automation upon the air traffic controller in terms of perceived workload and acceptance. Results showed that controller self-reported workload was not significantly increased or reduced by the pFAST automation; rather, controllers reported that workload levels remained primarily the same. Controller coordination and communication data were analyzed, and significant differences in the nature of controller coordination were found. Controller acceptance ratings indicated that pFAST was acceptable. This report describes the human factors data and results from the 1996 operational field evaluation of Passive FAST.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, R.C.
This thesis involved the construction of (1) a grammar that incorporates knowledge of base invariancy and secondary structure in a molecule and (2) a parser engine that uses the grammar to position bases into the structural subunits of the molecule. These concepts were combined with a novel pinning technique to form a tool that semi-automates insertion of a new species into the alignment for the 16S rRNA molecule (a component of the ribosome) maintained by Dr. Carl Woese's group at the University of Illinois at Urbana. The tool was tested on species extracted from the alignment and on a group of entirely new species. The results were very encouraging, and the tool should be a substantial aid to the curators of the 16S alignment. The construction of the grammar was itself automated, allowing application of the tool to alignments for other molecules. The logic programming language Prolog was used to construct all programs involved. The computational linguistics approach used here was found to be a useful way to attack the problem of insertion into an alignment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
This technical note describes the current capabilities and availability of the Automated Dredging and Disposal Alternatives Management System (ADDAMS). The technical note replaces the earlier Technical Note EEDP-06-12, which should be discarded. Planning, design, and management of dredging and dredged material disposal projects often require complex or tedious calculations or involve complex decision-making criteria. In addition, the evaluations often must be done for several disposal alternatives or disposal sites. ADDAMS is a personal computer (PC)-based system developed to assist in making such evaluations in a timely manner. ADDAMS contains a collection of computer programs (applications) designed to assist in managing dredging projects. This technical note describes the system, currently available applications, mechanisms for acquiring and running the system, and provisions for revision and expansion.
2013-01-01
Background Infectious diseases are the second leading cause of death worldwide. In order to better understand and treat them, an accurate evaluation using multi-modal imaging techniques for anatomical and functional characterizations is needed. For non-invasive imaging techniques such as computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET), there have been many engineering improvements that have significantly enhanced the resolution and contrast of the images, but there are still insufficient computational algorithms available for researchers to use when accurately quantifying imaging data from anatomical structures and functional biological processes. Since the development of such tools may potentially translate basic research into the clinic, this study focuses on the development of a quantitative and qualitative image analysis platform that provides a computational radiology perspective for pulmonary infections in small animal models. Specifically, we designed (a) a fast and robust automated and semi-automated image analysis platform and a quantification tool that can facilitate accurate diagnostic measurements of pulmonary lesions as well as volumetric measurements of anatomical structures, and incorporated (b) an image registration pipeline to our proposed framework for volumetric comparison of serial scans. This is an important investigational tool for small animal infectious disease models that can help advance researchers' understanding of infectious diseases. Methods We tested the utility of our proposed methodology by using sequentially acquired CT and PET images of rabbit, ferret, and mouse models with respiratory infections of Mycobacterium tuberculosis (TB), H1N1 flu virus, and an aerosolized respiratory pathogen (necrotic TB), for a total of 92, 44, and 24 scans for the respective studies, with half of the scans from CT and the other half from PET. Institutional Administrative Panel on Laboratory Animal Care approvals were obtained prior to conducting this research. First, the proposed computational framework registered PET and CT images to provide spatial correspondences between images. Second, the lungs from the CT scans were segmented using an interactive region growing (IRG) segmentation algorithm with mathematical morphology operations to avoid false positive (FP) uptake in PET images. Finally, we segmented significant radiotracer uptake from the PET images in lung regions determined from CT and computed metabolic volumes of the significant uptake. All segmentation processes were compared with expert radiologists' delineations (ground truths). Metabolic and gross volume of lesions were automatically computed with the segmentation processes using PET and CT images, and percentage changes in those volumes over time were calculated. Standardized uptake value (SUV) analysis from PET images was conducted as a complementary quantitative metric for disease severity assessment. Thus, severity and extent of pulmonary lesions were examined through both PET and CT images using the aforementioned quantification metrics outputted from the proposed framework. Results Each animal study was evaluated within the same subject class, and all steps of the proposed methodology were evaluated separately. We quantified the accuracy of the proposed algorithm with respect to the state-of-the-art segmentation algorithms.
For evaluation of the segmentation results, the Dice similarity coefficient (DSC) was used as an overlap measure and the Hausdorff distance as a shape-dissimilarity measure. Significant correlations regarding the estimated lesion volumes were obtained in both CT and PET images with respect to the ground truths (R² = 0.8922, p < 0.01 and R² = 0.8664, p < 0.01, respectively). The segmentation accuracy (DSC (%)) was 93.4 ± 4.5% for normal lung CT scans and 86.0 ± 7.1% for pathological lung CT scans. Experiments showed excellent agreement (all above 85%) with expert evaluations for both structural and functional imaging modalities. Apart from quantitative analysis of each animal, we also qualitatively showed how metabolic volumes changed over time by examining serial PET/CT scans. Evaluation of the registration processes was based on anatomical landmark points precisely defined by expert clinicians. Average errors of 2.66, 3.93, and 2.52 mm were found in the rabbit, ferret, and mouse data, respectively (all within the resolution limits). Quantitative results obtained from the proposed methodology were visually related to the progress and severity of the pulmonary infections as verified by the participating radiologists. Moreover, we demonstrated that lesions due to the infections were metabolically active and appeared multi-focal in nature, and we observed similar patterns in the CT images as well. Consolidation and ground glass opacity were the main abnormal imaging patterns and consistently appeared in all CT images. We also found that the gross and metabolic lesion volume percentages follow the same trend as the SUV-based evaluation in the longitudinal analysis. Conclusions We explored the feasibility of using PET and CT imaging modalities in three distinct small animal models for two diverse pulmonary infections. We concluded from the clinical findings, derived from the proposed computational pipeline, that PET-CT imaging is an invaluable hybrid modality for tracking pulmonary infections longitudinally in small animals and has great potential to become routinely used in clinics. Our proposed methodology showed that automated computer-aided lesion detection and quantification of pulmonary infections in small animal models is efficient and accurate compared to the clinical standard of manual and semi-automated approaches. Automated analysis of images in pre-clinical applications can increase the efficiency and quality of pre-clinical findings that ultimately inform downstream experimental design in human clinical studies; this innovation will allow researchers and clinicians to more effectively allocate study resources with respect to research demands without compromising accuracy. PMID:23879987
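The two agreement measures named in this record, the Dice similarity coefficient and the Hausdorff distance, are standard. A minimal sketch (not the study's own code, which the abstract does not publish) of how they are typically computed from a binary segmentation mask and its ground truth:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(seg, truth):
    """DSC = 2|A ∩ B| / (|A| + |B|) for two binary masks of equal shape."""
    seg, truth = seg.astype(bool), truth.astype(bool)
    return 2.0 * np.logical_and(seg, truth).sum() / (seg.sum() + truth.sum())

def hausdorff_distance(seg_points, truth_points):
    """Symmetric Hausdorff distance between two (N x 3) arrays of surface
    voxel coordinates, used above as a shape-dissimilarity measure."""
    return max(directed_hausdorff(seg_points, truth_points)[0],
               directed_hausdorff(truth_points, seg_points)[0])
```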
Analyzing the Cohesion of English Text and Discourse with Automated Computer Tools
ERIC Educational Resources Information Center
Jeon, Moongee
2014-01-01
This article investigates the lexical and discourse features of English text and discourse with automated computer technologies. Specifically, this article examines the cohesion of English text and discourse with automated computer tools, Coh-Metrix and TEES. Coh-Metrix is a text analysis computer tool that can analyze English text and discourse…
Nakanishi, Rine; Sankaran, Sethuraman; Grady, Leo; Malpeso, Jenifer; Yousfi, Razik; Osawa, Kazuhiro; Ceponiene, Indre; Nazarat, Negin; Rahmani, Sina; Kissel, Kendall; Jayawardena, Eranthi; Dailing, Christopher; Zarins, Christopher; Koo, Bon-Kwon; Min, James K; Taylor, Charles A; Budoff, Matthew J
2018-03-23
Our goal was to evaluate the efficacy of a fully automated method for assessing the image quality (IQ) of coronary computed tomography angiography (CCTA). The machine learning method was trained using 75 CCTA studies by mapping features (noise, contrast, misregistration scores, and un-interpretability index) to an IQ score based on manual ground truth data. The automated method was validated on a set of 50 CCTA studies and subsequently tested on a new set of 172 CCTA studies against visual IQ scores on a 5-point Likert scale. The area under the curve in the validation set was 0.96. In the 172 CCTA studies, our method yielded a Cohen's kappa statistic of 0.67 (p < 0.01) for the agreement between automated and visual IQ assessment. Among studies graded visually as good to excellent (n = 163), fair (n = 6), and poor (n = 3), 155, 5, and 2 patients, respectively, received an automated IQ score above 50%. Fully automated assessment of the IQ of CCTA data sets by machine learning was reproducible and provided results similar to visual analysis within the limits of inter-operator variability. • The proposed method enables automated and reproducible image quality assessment. • Machine learning and visual assessments yielded comparable estimates of image quality. • Automated assessment potentially allows for more standardised image quality. • Image quality assessment enables standardisation of clinical trial results across different datasets.
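For illustration only, agreement between automated and visual grades of the kind reported in this record is commonly quantified with Cohen's kappa; the grade lists below are invented for the example:

```python
from sklearn.metrics import cohen_kappa_score

visual_grades    = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]  # 5-point Likert scores
automated_grades = [5, 4, 3, 3, 5, 2, 4, 4, 3, 4]  # from the automated model

kappa = cohen_kappa_score(visual_grades, automated_grades)
print(f"Cohen's kappa: {kappa:.2f}")  # 1.0 = perfect agreement, 0 = chance level
```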
Image segmentation evaluation for very-large datasets
NASA Astrophysics Data System (ADS)
Reeves, Anthony P.; Liu, Shuang; Xie, Yiting
2016-03-01
With the advent of modern machine learning methods and fully automated image analysis there is a need for very large image datasets having documented segmentations for both computer algorithm training and evaluation. Current approaches of visual inspection and manual markings do not scale well to big data. We present a new approach that depends on fully automated algorithm outcomes for segmentation documentation, requires no manual marking, and provides quantitative evaluation for computer algorithms. The documentation of new image segmentations and new algorithm outcomes are achieved by visual inspection. The burden of visual inspection on large datasets is minimized by (a) customized visualizations for rapid review and (b) reducing the number of cases to be reviewed through analysis of quantitative segmentation evaluation. This method has been applied to a dataset of 7,440 whole-lung CT images for 6 different segmentation algorithms designed to fully automatically facilitate the measurement of a number of very important quantitative image biomarkers. The results indicate that we could achieve 93% to 99% successful segmentation for these algorithms on this relatively large image database. The presented evaluation method may be scaled to much larger image databases.
Automated 3D closed surface segmentation: application to vertebral body segmentation in CT images.
Liu, Shuang; Xie, Yiting; Reeves, Anthony P
2016-05-01
A fully automated segmentation algorithm, progressive surface resolution (PSR), is presented in this paper to determine the closed surface of approximately convex blob-like structures that are common in biomedical imaging. The PSR algorithm was applied to the cortical surface segmentation of 460 vertebral bodies on 46 low-dose chest CT images, which can potentially be used for automated bone mineral density measurement and compression fracture detection. The target surface is realized by a closed triangular mesh, which thereby guarantees the enclosure. The surface vertices of the triangular mesh representation are constrained along radial trajectories that are uniformly distributed in 3D angle space. The segmentation is accomplished by determining for each radial trajectory the location of its intersection with the target surface. The surface is first initialized based on an input high-confidence boundary image and then resolved progressively based on a dynamic attraction map, in order of decreasing degree of evidence regarding the target surface location. In the visual evaluation, the algorithm achieved acceptable segmentation for 99.35% of vertebral bodies. Quantitative evaluation was performed on 46 vertebral bodies and achieved an overall mean Dice coefficient of 0.939 (max = 0.957, min = 0.906, standard deviation = 0.011) using manual annotations as the ground truth. Both visual and quantitative evaluations demonstrate encouraging performance of the PSR algorithm. This novel surface resolution strategy provides uniform angular resolution for the segmented surface, with computational complexity and runtime that are linearly constrained by the total number of vertices of the triangular mesh representation.
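The abstract does not specify how the radial trajectories are constructed; one common way to obtain directions that are near-uniform in 3D angle space is a Fibonacci-sphere scheme, sketched here purely as an illustration of the idea:

```python
import numpy as np

def uniform_directions(n):
    """Return n unit vectors spread nearly evenly over the sphere."""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i      # golden-angle increments
    z = 1.0 - 2.0 * (i + 0.5) / n               # even steps in z
    r = np.sqrt(1.0 - z * z)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

# One mesh vertex per direction: each ray cast from the structure's centroid
# is searched for its intersection with the target surface.
rays = uniform_directions(2562)
```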
Automated planning of computer assisted mosaic arthroplasty.
Inoue, Jiro; Kunz, Manuela; Hurtig, Mark B; Waldman, Stephen D; Stewart, A James
2011-01-01
We describe and evaluate a computer algorithm that automatically develops a surgical plan for computer assisted mosaic arthroplasty, a technically demanding procedure in which a set of osteochondral plugs are transplanted from a non-load-bearing area of the joint to the site of a cartilage defect. We found that the algorithm produced plans that were at least as good as a human expert, had less variability, and took less time.
1980-06-01
courseware package on how to program lessons for an automated system. Since PLANIT (Programming Language for Interactive Teaching) is the student/author...assisted instruction (CAI), how to program PLANIT lessons, and to evaluate the effectiveness of the package for select Army users. The resultant courseware
1979-11-23
ACKNOWLEDGMENTS The author hereby expresses his appreciation to Mr. J. A. Schaeffel Jr. for his guidance on interferometry and the computer…were collected by an automated laser speckle interferometry displacement contour analyzer developed by John A. Schaeffel, Jr. [3]. The new method of…Fringe Patterns, US Army Missile Command, Redstone Arsenal, Alabama, Technical Report RL-76-18, 20 April 1976. 3. Schaeffel, J. A., Automated Laser
Towards Automated Screening of Two-dimensional Crystals
Cheng, Anchi; Leung, Albert; Fellmann, Denis; Quispe, Joel; Suloway, Christian; Pulokas, James; Carragher, Bridget; Potter, Clinton S.
2007-01-01
Screening trials to determine the presence of two-dimensional (2D) protein crystals suitable for three-dimensional structure determination using electron crystallography is a very labor-intensive process. Methods compatible with fully automated screening have been developed for the process of crystal production by dialysis and for producing negatively stained grids of the resulting trials. Further automation via robotic handling of the EM grids, and semi-automated transmission electron microscopic imaging and evaluation of the trial grids is also possible. We, and others, have developed working prototypes for several of these tools and tested and evaluated them in a simple screen of 24 crystallization conditions. While further development of these tools is certainly required for a turn-key system, the goal of fully automated screening appears to be within reach. PMID:17977016
Fink, Christine; Uhlmann, Lorenz; Klose, Christina; Haenssle, Holger A
2018-05-17
Reliable and accurate assessment of severity in psoriasis is very important in order to meet indication criteria for initiation of systemic treatment or to evaluate treatment efficacy. The most acknowledged tool for measuring the extent of psoriatic skin changes is the Psoriasis Area and Severity Index (PASI). However, the calculation of PASI can be tedious and subjective and high intraobserver and interobserver variability is an important concern. Therefore, there is a great need for a standardised and objective method that guarantees a reproducible PASI calculation. Within this study we will investigate the precision and reproducibility of automated, computer-guided PASI measurements in comparison to trained physicians to address these limitations. Non-interventional analyses of PASI calculations by either physicians in a prospective versus retrospective setting or an automated computer-guided algorithm in 120 patients with plaque psoriasis. All retrospective PASI calculations by physicians or by the computer algorithm are based on total body digital images. The primary objective of this study is comparison of automated computer-guided PASI measurements by means of digital image analysis versus conventional, prospective or retrospective physicians' PASI assessments. Secondary endpoints include (1) the assessment of physicians' interobserver variance in PASI calculations, (2) the assessment of physicians' intraobserver variance in PASI assessments of the same patients' images after a time interval of at least 4 weeks, (3) the assessment of the deviation between physicians' prospective versus retrospective PASI calculations, and (4) the reproducibility of automated computer-guided PASI measurements by assessment of two sets of total body digital images of the same patients taken at one time point. Ethical approval was provided by the Ethics Committee of the Medical Faculty of the University of Heidelberg (ethics approval number S-379/2016). DRKS00011818; Results. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
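For context, the PASI that both the physicians and the algorithm must reproduce is a fixed weighted sum over four body regions; a direct transcription of the standard formula (the study's own image-analysis implementation is of course more involved):

```python
REGION_WEIGHTS = {"head": 0.1, "arms": 0.2, "trunk": 0.3, "legs": 0.4}

def pasi(scores):
    """scores[region] = (erythema, induration, desquamation, area), with the
    three severity items graded 0-4 and the area score graded 0-6.
    Returns the PASI on its 0-72 scale."""
    total = 0.0
    for region, weight in REGION_WEIGHTS.items():
        erythema, induration, desquamation, area = scores[region]
        total += weight * (erythema + induration + desquamation) * area
    return total

example = {"head": (2, 1, 1, 3), "arms": (2, 2, 1, 2),
           "trunk": (3, 2, 2, 4), "legs": (2, 2, 2, 3)}
print(pasi(example))  # 18.8
```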
Shingrani, Rahul; Krenz, Gary; Molthen, Robert
2010-01-01
With advances in medical imaging scanners, it has become commonplace to generate large multidimensional datasets. These datasets require tools for a rapid, thorough analysis. To address this need, we have developed an automated algorithm for morphometric analysis incorporating A Visualization Workshop computational and image processing libraries for three-dimensional segmentation, vascular tree generation and structural hierarchical ordering with a two-stage numeric optimization procedure for estimating vessel diameters. We combine this new technique with our mathematical models of pulmonary vascular morphology to quantify structural and functional attributes of lung arterial trees. Our physiological studies require repeated measurements of vascular structure to determine differences in vessel biomechanical properties between animal models of pulmonary disease. Automation provides many advantages including significantly improved speed and minimized operator interaction and biasing. The results are validated by comparison with previously published rat pulmonary arterial micro-CT data analysis techniques, in which vessels were manually mapped and measured using intense operator intervention. Published by Elsevier Ireland Ltd.
COINS: A composites information database system
NASA Technical Reports Server (NTRS)
Siddiqi, Shahid; Vosteen, Louis F.; Edlow, Ralph; Kwa, Teck-Seng
1992-01-01
An automated data abstraction form (ADAF) was developed to collect information on advanced fabrication processes and their related costs. The information will be collected for all components being fabricated as part of the ACT program and included in a COmposites INformation System (COINS) database. The aim of the COINS development effort is to provide future airframe preliminary design and fabrication teams with a tool through which production cost can become a deterministic variable in the design optimization process. The effort was initiated by the Structures Technology Program Office (STPO) of NASA LaRC to implement the recommendations of a working group comprised of representatives from the commercial airframe companies. The principal working group recommendation was to re-institute collection of composite part fabrication data in a format similar to the DOD/NASA Structural Composites Fabrication Guide. The fabrication information collection form was automated with current user-friendly computer technology. This work-in-progress paper describes the new automated form and the features that make the form easy to use by an aircraft structural design-manufacturing team.
EFFECTS OF BRANCHING IN A COMPUTER-CONTROLLED AUTO-INSTRUCTIONAL DEVICE.
ERIC Educational Resources Information Center
COULSON, JOHN E.; AND OTHERS
A study on the effectiveness of using both the student's errors on training items and his own evaluation of his learning progress was presented. Two groups of 15 high school students were given automated instruction on logic by means of a flexible-sequence, computer-controlled auto-instructional device. One group was designated the fixed-sequence…
ERIC Educational Resources Information Center
Protopapas, Athanassios; Skaloumbakas, Christos; Bali, Persefoni
2008-01-01
After reviewing past efforts related to computer-based reading disability (RD) assessment, we present a fully automated screening battery that evaluates critical skills relevant for RD diagnosis designed for unsupervised application in the Greek educational system. Psychometric validation in 301 children, 8-10 years old (grades 3 and 4; including…
A digital peer-to-peer learning platform for clinical skills development.
Basnak, Jesse; Ortynski, Jennifer; Chow, Meghan; Nzekwu, Emeka
2017-02-01
Due to constraints in time and resources, medical curricula may not provide adequate opportunities for pre-clerkship students to practice clinical skills. To address this, medical students at the University of Alberta developed a digital peer-to-peer learning initiative. The initiative assessed if students can learn clinical skills from their peers in co-curricular practice objective structured clinical exams (OSCEs). A total of 144 first-year medical students participated. Students wrote case scenarios that were reviewed by physicians. Students enacted the cases in practice OSCEs, acting as the patient, physician, and evaluator. Verbal and electronic evaluations were completed. A digital platform was used to automate the process. Surveys were disseminated to assess student perceptions of their experience. Seventy-five percent of participants said they needed opportunities to practice patient histories and physical exams in addition to those provided in the medical school curriculum. All participants agreed that the co-curricular practice OSCEs met this need. The majority of participants also agreed that the digital platform was efficient and easy to use. Students found the practice OSCEs and digital platform effective for learning clinical skills. Thus, peer-to-peer learning and computer automation can be useful adjuncts to traditional medical curricula.
Vorberg, Ellen; Fleischer, Heidi; Junginger, Steffen; Liu, Hui; Stoll, Norbert; Thurow, Kerstin
2016-10-01
Life science areas require specific sample pretreatment to increase the concentration of the analytes and/or to convert the analytes into an appropriate form for the detection and separation systems. Various workstations are commercially available, allowing for automated biological sample pretreatment. Nevertheless, due to the required temperature, pressure, and volume conditions in typical element- and structure-specific measurements, these automated platforms are not suitable for such analytical processes. Thus, the purpose of the presented investigation was the design, realization, and evaluation of an automated system ensuring high-precision sample preparation for a variety of analytical measurements. The developed system has to enable system adaptation and high performance flexibility. Furthermore, the system has to be capable of dealing with the wide range of required vessels simultaneously, allowing for less costly and time-consuming process steps. The system's functionality was confirmed in various validation sequences. Using element-specific measurements, the automated system was up to 25% more precise compared to the manual procedure, and as precise as the manual procedure using structure-specific measurements. © 2015 Society for Laboratory Automation and Screening.
NASA Technical Reports Server (NTRS)
Rives, T. B.; Ingels, F. M.
1988-01-01
An analysis of the Automated Booster Assembly Checkout System (ABACS) has been conducted. A computer simulation of the ETHERNET LAN has been written. The simulation allows one to investigate different structures of the ABACS system. The simulation code is in PASCAL and is VAX compatible.
Calculating Henry’s Constants of Charged Molecules Using SPARC
SPARC (SPARC Performs Automated Reasoning in Chemistry) is a computer program designed to model physical and chemical properties of molecules solely on the basis of their chemical structure. SPARC uses a toolbox of mechanistic perturbation models to model intermolecular interactions. SPARC has ...
NASA Technical Reports Server (NTRS)
Svalbonas, V.
1973-01-01
A procedure for the structural analysis of stiffened shells of revolution is presented. A digital computer program based on the Love-Reissner first order shell theory was developed. The computer program can analyze orthotropic thin shells of revolution, subjected to unsymmetric distributed loading or concentrated line loads, as well as thermal strains. The geometrical shapes of the shells which may be analyzed are described. The shell wall cross section can be a sheet, sandwich, or reinforced sheet or sandwich. General stiffness input options are also available.
Computer-automated ABCD versus dermatologists with different degrees of experience in dermoscopy.
Piccolo, Domenico; Crisman, Giuliana; Schoinas, Spyridon; Altamura, Davide; Peris, Ketty
2014-01-01
Dermoscopy is a very useful and non-invasive technique for in vivo observation and preoperative diagnosis of pigmented skin lesions (PSLs), inasmuch as it enables analysis of surface and subsurface structures that are not discernible to the naked eye. The authors used the ABCD rule of dermoscopy to test the accuracy of melanoma diagnosis on a panel of 165 PSLs, and the intra- and inter-observer diagnostic agreement obtained between three dermatologists with different degrees of experience, one general practitioner, and a DDA for computer-assisted diagnosis (Nevuscreen®, Arkè s.a.s., Avezzano, Italy). 165 pigmented skin lesions from 165 patients were selected. Histopathological examination revealed 132 benign melanocytic skin lesions and 33 melanomas. The kappa statistic, sensitivity, specificity, and positive and negative predictive values were calculated to measure agreement between all the human observers and in comparison with the automated DDA. Our results revealed poor reproducibility of the semi-quantitative algorithm devised by Stolz et al., independently of the observers' experience in dermoscopy. Nevuscreen® (Arkè s.a.s., Avezzano, Italy) proved to be 'user friendly' to all observers, enabling a more critical evaluation of each lesion and representing a helpful tool for clinicians without significant experience in dermoscopy, helping them achieve more accurate diagnosis of PSLs.
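The semi-quantitative algorithm of Stolz et al. tested in this record reduces to the Total Dermoscopy Score (TDS), a fixed weighted sum; a transcription of the published formula and its customary cut-offs:

```python
def total_dermoscopy_score(asymmetry, border, colors, structures):
    """Asymmetry 0-2, border 0-8, colors 1-6, dermoscopic structures 1-5."""
    return 1.3 * asymmetry + 0.1 * border + 0.5 * colors + 0.5 * structures

def classify(tds):
    if tds < 4.75:
        return "benign"
    if tds <= 5.45:
        return "suspicious"
    return "highly suspicious for melanoma"

tds = total_dermoscopy_score(asymmetry=2, border=5, colors=4, structures=4)
print(tds, classify(tds))  # 7.1 highly suspicious for melanoma
```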
Automation and Optimization of Multipulse Laser Zona Drilling of Mouse Embryos During Embryo Biopsy.
Wong, Christopher Yee; Mills, James K
2017-03-01
Laser zona drilling (LZD) is a required step in many embryonic surgical procedures, for example, assisted hatching and preimplantation genetic diagnosis. LZD involves the ablation of the zona pellucida (ZP) using a laser while minimizing potentially harmful thermal effects on critical internal cell structures. The objective was to develop a method for the automation and optimization of multipulse LZD, applied to cleavage-stage embryos. A two-stage optimization is used. The first stage uses computer vision algorithms to identify embryonic structures and determines the optimal ablation zone farthest from critical structures such as blastomeres. The second stage combines a genetic algorithm with a previously reported thermal analysis of LZD to optimize the combination of laser pulse locations and pulse durations. The goal is to minimize the peak temperature experienced by the blastomeres while creating the desired opening in the ZP. A proof of concept of the proposed LZD automation and optimization method is demonstrated through experiments on mouse embryos, with positive results, as adequately sized openings were created. Automation of LZD is feasible and is a viable step toward the automation of embryo biopsy procedures. LZD is a common but delicate procedure performed by human operators using subjective methods to gauge the proper LZD procedure. Automation of LZD removes human error, increasing the success rate of LZD. Although the proposed methods were developed for cleavage-stage embryos, the same methods may be applied to most types of LZD procedures, embryos at different developmental stages, or non-embryonic cells.
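A schematic of the second optimization stage described above: a genetic algorithm searching jointly over pulse locations and durations to minimize peak blastomere temperature. The thermal model peak_temperature(plan) stands in for the paper's previously reported analysis and is a hypothetical callable here:

```python
import random

def evolve(peak_temperature, n_pulses=4, pop_size=40, generations=100):
    """Return the pulse plan with the lowest peak blastomere temperature."""
    def random_plan():
        # each gene: (position along the chosen ablation arc [rad], duration [ms])
        return [(random.uniform(0.0, 0.5), random.uniform(0.5, 2.0))
                for _ in range(n_pulses)]

    population = [random_plan() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=peak_temperature)       # lower temperature = fitter
        survivors = population[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_pulses)
            child = a[:cut] + b[cut:]               # one-point crossover
            if random.random() < 0.2:               # Gaussian mutation
                i = random.randrange(n_pulses)
                child[i] = (child[i][0] + random.gauss(0, 0.05),
                            max(0.1, child[i][1] + random.gauss(0, 0.1)))
            children.append(child)
        population = survivors + children
    return min(population, key=peak_temperature)
```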
Automated Grading System for Evaluation of Superficial Punctate Keratitis Associated With Dry Eye.
Rodriguez, John D; Lane, Keith J; Ousler, George W; Angjeli, Endri; Smith, Lisa M; Abelson, Mark B
2015-04-01
To develop an automated method of grading fluorescein staining that accurately reproduces the clinical grading system currently in use. From the slit lamp photograph of the fluorescein-stained cornea, the region of interest was selected and the punctate dot number calculated using software developed with the OpenCV computer vision library. Images (n = 229) were then divided into six incremental severity categories based on computed scores. The final selection of 54 photographs represented the full range of scores: nine images from each of six categories. These were then evaluated by three investigators using a clinical 0 to 4 corneal staining scale. Pearson correlations were calculated to compare investigator scores, and mean investigator and automated scores. Lin's concordance correlation coefficients (CCC) and Bland-Altman plots were used to assess agreement between methods and between investigators. Pearson's correlation between investigators was 0.914; the mean CCC between investigators was 0.882. Bland-Altman analysis indicated that scores assessed by investigator 3 were significantly higher than those of investigators 1 and 2 (paired t-test). The predicted grade was calculated to be G_pred = 1.48 · log(N_dots) − 0.206. The two-point Pearson's correlation coefficient between the methods was 0.927 (P < 0.0001). The CCC between the predicted automated score G_pred and the mean investigator score was 0.929, 95% confidence interval (0.884-0.957). Bland-Altman analysis did not indicate bias. The difference in SD between clinical and automated methods was 0.398. An objective, automated analysis of corneal staining provides a quality assurance tool that can be used to substantiate clinical grading of key corneal staining endpoints in multicentered clinical trials of dry eye.
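The reported regression is directly usable as code. The abstract does not state the logarithm base, so base 10 is assumed in this sketch:

```python
import math

def predicted_grade(n_dots):
    """G_pred = 1.48 * log10(N_dots) - 0.206 on the 0-4 staining scale."""
    return 1.48 * math.log10(n_dots) - 0.206

print(predicted_grade(100))  # ~2.75 under the base-10 assumption
```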
Information Structure, Information Technology, and the Human Services Organizational Environment.
ERIC Educational Resources Information Center
Semke, Jeanette I.; Nurius, Paula S.
1991-01-01
Examines current trends in data collection and information use in human services organizations. Describes issues for managers who are planning information systems, including practitioner resistance to automation. Proposes that conceptual integration of agendas for human services automation, practice evaluation, and service effectiveness enables…
NASA Technical Reports Server (NTRS)
Svalbonas, V.; Ogilvie, P.
1973-01-01
The engineering programming information for the digital computer program for analyzing shell structures is presented. The program is designed to permit small changes such as altering the geometry or a table size to fit the specific requirements. Each major subroutine is discussed and the following subjects are included: (1) subroutine description, (2) pertinent engineering symbols and the FORTRAN coded counterparts, (3) subroutine flow chart, and (4) subroutine FORTRAN listing.
Automatic structured grid generation using Gridgen (some restrictions apply)
NASA Technical Reports Server (NTRS)
Chawner, John R.; Steinbrenner, John P.
1995-01-01
The authors have noticed in the recent grid generation literature an emphasis on the automation of structured grid generation. The motivation behind such work is clear; grid generation is easily the most despised task in the grid-analyze-visualize triad of computational analysis (CA). However, because grid generation is closely coupled to both the design and analysis software and because quantitative measures of grid quality are lacking, 'push button' grid generation usually results in a compromise between speed, control, and quality. Overt emphasis on automation obscures the substantive issues of providing users with flexible tools for generating and modifying high quality grids in a design environment. In support of this paper's tongue-in-cheek title, many features of the Gridgen software are described. Gridgen is by no stretch of the imagination an automatic grid generator. Despite this fact, the code does utilize many automation techniques that permit interesting regenerative features.
Zuluaga, Maria A; Rodionov, Roman; Nowell, Mark; Achhala, Sufyan; Zombori, Gergely; Mendelson, Alex F; Cardoso, M Jorge; Miserocchi, Anna; McEvoy, Andrew W; Duncan, John S; Ourselin, Sébastien
2015-08-01
Brain vessels are among the most critical landmarks that need to be assessed for mitigating surgical risks in stereo-electroencephalography (SEEG) implantation. Intracranial haemorrhage is the most common complication associated with implantation, carrying significant associated morbidity. SEEG planning is done pre-operatively to identify avascular trajectories for the electrodes. In current practice, neurosurgeons have no assistance in the planning of electrode trajectories. There is great interest in developing computer-assisted planning systems that can optimise the safety profile of electrode trajectories, maximising the distance to critical structures. This paper presents a method that integrates the concepts of scale, neighbourhood structure and feature stability with the aim of improving the robustness and accuracy of vessel extraction within a SEEG planning system. The developed method accounts for the scale and vicinity of a voxel by formulating the problem within a multi-scale tensor voting framework. Feature stability is achieved through a similarity measure that evaluates the multi-modal consistency in vesselness responses. The proposed measure allows the combination of multiple image modalities into a single image that is used within the planning system to visualise critical vessels. Twelve paired data sets from two image modalities available within the planning system were used for evaluation. The mean Dice similarity coefficient was 0.89 ± 0.04, a statistically significant improvement over a semi-automated, single-human-rater, single-modality segmentation protocol used in clinical practice (0.80 ± 0.03). Multi-modal vessel extraction is superior to semi-automated single-modality segmentation, indicating the possibility of safer SEEG planning with reduced patient morbidity.
An automated dose tracking system for adaptive radiation therapy.
Liu, Chang; Kim, Jinkoo; Kumarasiri, Akila; Mayyas, Essa; Brown, Stephen L; Wen, Ning; Siddiqui, Farzan; Chetty, Indrin J
2018-02-01
The implementation of adaptive radiation therapy (ART) into routine clinical practice is technically challenging and requires significant resources to perform and validate each process step. The objective of this report is to identify the key components of ART, to illustrate how a specific automated procedure improves efficiency, and to facilitate the routine clinical application of ART. Patient image data were exported from a clinical database and converted to an intermediate format for point-wise dose tracking and accumulation. The process was automated using in-house developed software containing three modularized components: an ART engine, user interactive tools, and integration tools. The ART engine conducts computing tasks using the following modules: data importing, image pre-processing, dose mapping, dose accumulation, and reporting. In addition, custom graphical user interfaces (GUIs) were developed to allow user interaction with select processes such as deformable image registration (DIR). A commercial scripting application programming interface was used to incorporate automated dose calculation for application in routine treatment planning. Each module was considered an independent program, written in C++ or C#, running in a distributed Windows environment, scheduled and monitored by integration tools. The automated tracking system was retrospectively evaluated for 20 patients with prostate cancer and 96 patients with head and neck cancer, under institutional review board (IRB) approval. In addition, the system was evaluated prospectively using 4 patients with head and neck cancer. Altogether, 780 prostate dose fractions and 2586 head and neck cancer dose fractions were processed, including DIR and dose mapping. On average, the daily cumulative dose was computed in 3 h, and the manual work was limited to 13 min per case, with approximately 10% of cases requiring an additional 10 min for image registration refinement. An efficient and convenient dose tracking system for ART in the clinical setting is presented. The software and automated processes were rigorously evaluated and validated using patient image datasets. Automation of the various procedures has improved efficiency significantly, allowing for the routine clinical application of ART for improving radiation therapy effectiveness. Copyright © 2017 Elsevier B.V. All rights reserved.
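A minimal sketch (not the authors' in-house software) of the core dose-mapping and accumulation step described above: each fraction's dose grid is pulled back onto the reference anatomy through the DIR deformation field and summed point-wise:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def accumulate_dose(daily_doses, deformation_fields):
    """daily_doses: list of 3D dose arrays, one per fraction.
    deformation_fields: matching list of (3, Z, Y, X) arrays giving, for each
    reference voxel, its coordinates in that fraction's frame (from DIR)."""
    total = np.zeros_like(daily_doses[0])
    for dose, dvf in zip(daily_doses, deformation_fields):
        # trilinear interpolation of the fraction dose at the mapped points
        total += map_coordinates(dose, dvf, order=1, mode="nearest")
    return total
```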
Cupek, Rafal; Ziębiński, Adam
2016-01-01
Rheumatoid arthritis is the most common rheumatic disease with arthritis, causing substantial functional disability in approximately 50% of patients after 10 years. Accurate measurement of disease activity is crucial to providing adequate treatment and care to patients. This study focuses on a computer-aided diagnostic system that supports the assessment of synovitis severity, developed within the joint Polish-Norwegian MEDUSA research project on automated assessment of synovitis severity. Semiquantitative ultrasound with power Doppler is a reliable and widely used method of assessing synovitis. Synovitis is graded by the ultrasound examiner on a scoring system from 0 to 3, with the activity score estimated on the basis of the examiner's experience or standardized ultrasound atlases. The method needs trained medical personnel, and the result can be affected by human error. The prototype of a computer-aided diagnostic system and the algorithms essential for analysing ultrasonic images of finger joints are the main scientific outputs of the MEDUSA project. The Medusa Evaluation System prototype uses bone, skin, joint and synovitis area detectors for mutual, structural-model-based evaluation of synovitis. Finally, several algorithms that support the semi-automatic or automatic detection of the bone region were prepared, as well as a system that uses a statistical data processing approach to automatically localize the regions of interest.
NASA Astrophysics Data System (ADS)
Neradilová, Hana; Fedorko, Gabriel
2016-12-01
Automated logistic systems are becoming more widely used within enterprise logistics processes. Their main advantage is that they increase the efficiency and reliability of logistics processes. In evaluating their effectiveness, it is necessary to take the economic aspect of the entire process into account. However, many users ignore or underestimate this aspect, which is a mistake. One reason the economic aspect is overlooked is that obtaining the information for such an analysis is not easy. The aim of this paper is to present the possibilities of computer simulation methods for obtaining the data needed for a full-scale economic analysis.
Automated event generation for loop-induced processes
Hirschi, Valentin; Mattelaer, Olivier
2015-10-22
We present the first fully automated implementation of cross-section computation and event generation for loop-induced processes. This work is integrated in the MadGraph5_aMC@NLO framework. We describe the optimisations implemented at the level of the matrix element evaluation, phase space integration and event generation, allowing for the simulation of large-multiplicity loop-induced processes. Along with some selected differential observables, we illustrate our results with a table showing inclusive cross-sections for all loop-induced hadronic scattering processes with up to three final states in the SM, as well as for some relevant 2 → 4 processes. Furthermore, many of these are computed here for the first time.
A human factors approach to range scheduling for satellite control
NASA Technical Reports Server (NTRS)
Wright, Cameron H. G.; Aitken, Donald J.
1991-01-01
Range scheduling for satellite control presents a classical problem: supervisory control of a large-scale dynamic system, with unwieldy amounts of interrelated data used as inputs to the decision process. Increased automation of the task, with the appropriate human-computer interface, is highly desirable. The development and user evaluation of a semi-automated network range scheduling system is described. The system incorporates a synergistic human-computer interface consisting of a large screen color display, voice input/output, a 'sonic pen' pointing device, a touchscreen color CRT, and a standard keyboard. From a human factors standpoint, this development represents the first major improvement in almost 30 years to the satellite control network scheduling task.
A Computational Methodology to Screen Activities of Enzyme Variants
Hediger, Martin R.; De Vico, Luca; Svendsen, Allan; Besenmatter, Werner; Jensen, Jan H.
2012-01-01
We present a fast computational method to efficiently screen enzyme activity. In the presented method, the effect of mutations on the barrier height of an enzyme-catalysed reaction can be computed within 24 hours on roughly 10 processors. The methodology is based on the PM6 and MOZYME methods as implemented in MOPAC2009, and is tested on the first step of the amide hydrolysis reaction catalyzed by the Candida antarctica lipase B (CalB) enzyme. The barrier heights are estimated using adiabatic mapping and shown to give barrier heights to within 3 kcal/mol of B3LYP/6-31G(d)//RHF/3-21G results for a small model system. Relatively strict convergence criteria (0.5 kcal/(mol·Å)), long NDDO cutoff distances within the MOZYME method (15 Å), and single point evaluations using conventional PM6 are needed for reliable results. The generation of mutant structures and subsequent setup of the semiempirical calculations are automated so that the effect on barrier heights can be estimated for hundreds of mutants in a matter of weeks using high performance computing. PMID:23284627
Unit cell-based computer-aided manufacturing system for tissue engineering.
Kang, Hyun-Wook; Park, Jeong Hun; Kang, Tae-Yun; Seol, Young-Joon; Cho, Dong-Woo
2012-03-01
Scaffolds play an important role in the regeneration of artificial tissues or organs. A scaffold is a porous structure with a micro-scale inner architecture in the range of several to several hundreds of micrometers. Therefore, computer-aided construction of scaffolds should provide sophisticated functionality for porous structure design and a tool path generation strategy that can achieve micro-scale architecture. In this study, a new unit cell-based computer-aided manufacturing (CAM) system was developed for the automated design and fabrication of a porous structure with micro-scale inner architecture that can be applied to composite tissue regeneration. The CAM system was developed by first defining a data structure for the computing process of a unit cell representing a single pore structure. Next, an algorithm and software were developed and applied to construct porous structures with a single or multiple pore design using solid freeform fabrication technology and a 3D tooth/spine computer-aided design model. We showed that this system is quite feasible for the design and fabrication of a scaffold for tissue engineering.
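A sketch of the unit-cell idea described in this record: a single pore is held as a small data structure and tiled into a scaffold lattice from which tool paths could then be generated. The field names are illustrative, not the authors' actual data structure:

```python
from dataclasses import dataclass

@dataclass
class UnitCell:
    pore_size_um: float    # edge length of the open pore
    strut_width_um: float  # width of the deposited strut

    @property
    def pitch_um(self):
        return self.pore_size_um + self.strut_width_um

def scaffold_lattice(cell, nx, ny, nz):
    """Yield the origin of every unit cell in an nx x ny x nz block."""
    for k in range(nz):
        for j in range(ny):
            for i in range(nx):
                yield (i * cell.pitch_um, j * cell.pitch_um, k * cell.pitch_um)

cell = UnitCell(pore_size_um=300.0, strut_width_um=100.0)
origins = list(scaffold_lattice(cell, 10, 10, 5))  # 500 pore positions
```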
Development of automation and robotics for space via computer graphic simulation methods
NASA Technical Reports Server (NTRS)
Fernandez, Ken
1988-01-01
A robot simulation system has been developed to perform automation and robotics system design studies. The system uses a procedure-oriented solid modeling language to produce a model of the robotic mechanism. The simulator generates the kinematics, inverse kinematics, dynamics, control, and real-time graphic simulations needed to evaluate the performance of the model. Simulation examples are presented, including simulation of the Space Station and the design of telerobotics for the Orbital Maneuvering Vehicle.
The Change to Administrative Computing in Schools.
ERIC Educational Resources Information Center
Brown, Daniel J.
1984-01-01
Describes a study of the process of school office automation which focuses on personnel reactions to administrative computing, what users view as advantages and disadvantages of the automation, perceived barriers and facilitators of the change to automation, school personnel views of long-term effects, and implications for school computer policy.…
Fundamentals of Library Automation and Technology. Participant Workbook.
ERIC Educational Resources Information Center
Bridge, Frank; Walton, Robert
This workbook presents outlines of topics to be covered during a two-day workshop on the fundamentals for library automation. Topics for the first day include: (1) Introduction; (2) Computer Technology--A Historical Overview; (3) Evolution of Library Automation; (4) Computer Hardware Technology--An Introduction; (5) Computer Software…
System for Computer Automated Typesetting (SCAT) of Computer Authored Texts.
ERIC Educational Resources Information Center
Keeler, F. Laurence
This description of the System for Computer Automated Typesetting (SCAT), an automated system for typesetting text and inserting special graphic symbols in programmed instructional materials created by the computer-aided authoring system AUTHOR, provides an outline of the design architecture of the system and an overview including the component…
Data engineering systems: Computerized modeling and data bank capabilities for engineering analysis
NASA Technical Reports Server (NTRS)
Kopp, H.; Trettau, R.; Zolotar, B.
1984-01-01
The Data Engineering System (DES) is a computer-based system that organizes technical data and provides automated mechanisms for storage, retrieval, and engineering analysis. The DES combines the benefits of a structured data base system with automated links to large-scale analysis codes. While the DES provides the user with many of the capabilities of a computer-aided design (CAD) system, the systems are actually quite different in several respects. A typical CAD system emphasizes interactive graphics capabilities and organizes data in a manner that optimizes these graphics. On the other hand, the DES is a computer-aided engineering system intended for the engineer who must operationally understand an existing or planned design or who desires to carry out additional technical analysis based on a particular design. The DES emphasizes data retrieval in a form that not only provides the engineer access to search and display the data but also links the data automatically with the computer analysis codes.
Software design for automated assembly of truss structures
NASA Technical Reports Server (NTRS)
Herstrom, Catherine L.; Grantham, Carolyn; Allen, Cheryl L.; Doggett, William R.; Will, Ralph W.
1992-01-01
Concern over the limited intravehicular activity time has increased the interest in performing in-space assembly and construction operations with automated robotic systems. A technique being considered at LaRC is a supervised-autonomy approach, which can be monitored by an Earth-based supervisor that intervenes only when the automated system encounters a problem. A test-bed to support evaluation of the hardware and software requirements for supervised-autonomy assembly methods was developed. This report describes the design of the software system necessary to support the assembly process. The software is hierarchical and supports both automated assembly operations and supervisor error-recovery procedures, including the capability to pause and reverse any operation. The software design serves as a model for the development of software for more sophisticated automated systems and as a test-bed for evaluation of new concepts and hardware components.
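A sketch of the pause-and-reverse capability described above: each assembly operation is recorded together with its inverse so that a supervisor can back out of a failed step. All names are illustrative, not the test-bed's actual software:

```python
class AssemblySequencer:
    def __init__(self):
        self.history = []   # stack of (operation, inverse) pairs
        self.paused = False

    def execute(self, operation, inverse):
        """Run one assembly step and remember how to undo it."""
        if self.paused:
            raise RuntimeError("sequencer paused; awaiting supervisor")
        operation()
        self.history.append((operation, inverse))

    def reverse_last(self):
        """Undo the most recent step, e.g. back a strut out of its node."""
        _, inverse = self.history.pop()
        inverse()
```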
DOT National Transportation Integrated Search
2009-02-01
The Office of Special Investigations at Iowa Department of Transportation (DOT) collects FWD data on regular basis to evaluate pavement structural conditions. The primary objective of this study was to develop a fully-automated software system for ra...
Mission Critical Computer Resources Management Guide
1988-09-01
As shown in Figure 13-2, in this model, showrooms of larger, more capable pieces are developed off-line for later integration and use in multiple systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stützer, Kristin; Haase, Robert; Exner, Florian
2016-09-15
Purpose: Rating both a lung segmentation algorithm and a deformable image registration (DIR) algorithm for subsequent lung computed tomography (CT) images by different evaluation techniques, and investigating the relative performance and the correlation of the different evaluation techniques to address their potential value in a clinical setting. Methods: Two to seven subsequent CT images (69 in total) of 15 lung cancer patients were acquired prior to, during, and after radiochemotherapy. Automated lung segmentations were compared to manually adapted contours. DIR between the first and all following CT images was performed with a fast algorithm specialized for lung tissue registration, requiring the lung segmentation as input. DIR results were evaluated based on landmark distances, lung contour metrics, and vector field inconsistencies in different subvolumes defined by eroding the lung contour. Correlations between the results from the three methods were evaluated. Results: Automated lung contour segmentation was satisfactory in 18 cases (26%), failed in 6 cases (9%), and required manual correction in 45 cases (66%). Initial and corrected contours had large overlap but showed strong local deviations. Landmark-based DIR evaluation revealed high accuracy compared to CT resolution, with an average error of 2.9 mm. Contour metrics of deformed contours were largely satisfactory. The median vector length of inconsistency vector fields was 0.9 mm in the lung volume and slightly smaller for the eroded volumes. There was no clear correlation between the three evaluation approaches. Conclusions: Automatic lung segmentation remains challenging but can assist the manual delineation process. Proven by three techniques, the inspected DIR algorithm delivers reliable results for the lung CT data sets acquired at different time points. Clinical application of DIR demands a fast DIR evaluation to identify unacceptable results, for instance by combining different automated DIR evaluation methods.
Characterization of a 16-Bit Digitizer for Lidar Data Acquisition
NASA Technical Reports Server (NTRS)
Williamson, Cynthia K.; DeYoung, Russell J.
2000-01-01
A 6-MHz 16-bit waveform digitizer was evaluated for use in atmospheric differential absorption lidar (DIAL) measurements of ozone. The digitizer noise characteristics were evaluated, and actual ozone DIAL atmospheric returns were digitized. This digitizer could replace computer-automated measurement and control (CAMAC)-based commercial digitizers and improve voltage accuracy.
A New Internet Tool for Automatic Evaluation in Control Systems and Programming
ERIC Educational Resources Information Center
Munoz de la Pena, D.; Gomez-Estern, F.; Dormido, S.
2012-01-01
In this paper we present a web-based innovative education tool designed for automating the collection, evaluation and error detection in practical exercises assigned to computer programming and control engineering students. By using a student/instructor code-fusion architecture, the conceptual limits of multiple-choice tests are overcome by far.…
Pilot interaction with automated airborne decision making systems
NASA Technical Reports Server (NTRS)
Rouse, W. B.; Chu, Y. Y.; Greenstein, J. S.; Walden, R. S.
1976-01-01
An investigation was made of interaction between a human pilot and automated on-board decision making systems. Research was initiated on the topic of pilot problem solving in automated and semi-automated flight management systems and attempts were made to develop a model of human decision making in a multi-task situation. A study was made of allocation of responsibility between human and computer, and discussed were various pilot performance parameters with varying degrees of automation. Optimal allocation of responsibility between human and computer was considered and some theoretical results found in the literature were presented. The pilot as a problem solver was discussed. Finally the design of displays, controls, procedures, and computer aids for problem solving tasks in automated and semi-automated systems was considered.
Reeves, Anthony P; Xie, Yiting; Liu, Shuang
2017-04-01
With the advent of fully automated image analysis and modern machine learning methods, there is a need for very large image datasets having documented segmentations for both computer algorithm training and evaluation. This paper presents a method and implementation for facilitating such datasets that addresses the critical issue of size scaling for algorithm validation and evaluation; current evaluation methods that are usually used in academic studies do not scale to large datasets. This method includes protocols for the documentation of many regions in very large image datasets; the documentation may be incrementally updated by new image data and by improved algorithm outcomes. This method has been used for 5 years in the context of chest health biomarkers from low-dose chest CT images that are now being used with increasing frequency in lung cancer screening practice. The lung scans are segmented into over 100 different anatomical regions, and the method has been applied to a dataset of over 20,000 chest CT images. Using this framework, the computer algorithms have been developed to achieve over 90% acceptable image segmentation on the complete dataset.
Operational Assessment of Color Vision
2016-06-20
evaluated in this study. Subject terms: color vision, aviation, cone contrast test, Colour Assessment & Diagnosis, color Dx, OBVA.…symbologies are frequently used to aid or direct critical activities such as aircraft landing approaches or railroad right-of-way designations…computer-generated display systems have facilitated the development of computer-based, automated tests of color vision [14,15]. The United Kingdom's
Visual texture for automated characterisation of geological features in borehole televiewer imagery
NASA Astrophysics Data System (ADS)
Al-Sit, Waleed; Al-Nuaimy, Waleed; Marelli, Matteo; Al-Ataby, Ali
2015-08-01
Detailed characterisation of the structure of subsurface fractures is greatly facilitated by digital borehole logging instruments, the interpretation of which is typically time-consuming and labour-intensive. Despite recent advances towards autonomy and automation, the final interpretation remains heavily dependent on the skill, experience, alertness and consistency of a human operator. Existing computational tools fail to detect layers between rocks that do not exhibit distinct fracture boundaries, and often struggle to characterise cross-cutting layers and partial fractures. This paper presents a novel approach to the characterisation of planar rock discontinuities from digital images of borehole logs. Multi-resolution texture segmentation and pattern recognition techniques utilising Gabor filters are combined with an iterative adaptation of the Hough transform to enable non-distinct, partial, distorted and steep fractures and layers to be accurately identified and characterised in a fully automated fashion. This approach has successfully detected fractures and layers with high detection accuracy and at a relatively low computational cost.
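A sketch of the texture front-end named in this record: a bank of Gabor filters at several scales and orientations applied to the unwrapped borehole image, in which a planar fracture traces a sinusoid for the subsequent Hough stage to fit. Parameter values are illustrative:

```python
import cv2
import numpy as np

def gabor_bank(scales=(4, 8, 16), n_orientations=6):
    """One kernel per (scale, orientation) pair."""
    kernels = []
    for sigma in scales:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            # ksize, sigma, theta, wavelength, aspect ratio, phase offset
            kernels.append(cv2.getGaborKernel((31, 31), sigma, theta,
                                              2.0 * sigma, 0.5, 0))
    return kernels

def texture_features(image):
    """Stack of filter responses: one channel per (scale, orientation)."""
    responses = [cv2.filter2D(image, cv2.CV_32F, k) for k in gabor_bank()]
    return np.stack(responses, axis=-1)
```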
A Hybrid Human-Computer Approach to the Extraction of Scientific Facts from the Literature.
Tchoua, Roselyne B; Chard, Kyle; Audus, Debra; Qin, Jian; de Pablo, Juan; Foster, Ian
2016-01-01
A wealth of valuable data is locked within the millions of research articles published each year. Reading and extracting pertinent information from those articles has become an unmanageable task for scientists. This problem hinders scientific progress by making it hard to build on results buried in the literature. Moreover, these data are loosely structured, encoded in manuscripts of various formats, embedded in different content types, and are, in general, not machine accessible. We present a hybrid human-computer solution for semi-automatically extracting scientific facts from the literature. This solution combines an automated discovery, download, and extraction phase with a semi-expert crowd assembled from students to extract specific scientific facts. To evaluate our approach, we apply it to a challenging molecular engineering scenario: extraction of a polymer property, the Flory-Huggins interaction parameter. We demonstrate useful contributions to a comprehensive database of polymer properties.
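As a rough illustration of the automated phase described above, a pattern-based scan can nominate candidate statements of the target property for crowd verification. The regular expression and function below are hypothetical, not the authors' code:

```python
import re

# Find candidate Flory-Huggins interaction parameter (chi) statements;
# matches would then be routed to the semi-expert crowd for verification.
CHI_PATTERN = re.compile(
    r"(Flory[- ]Huggins|interaction parameter|\bchi\b)[^.]{0,80}?(-?\d+\.\d+)",
    re.IGNORECASE)

def candidate_facts(text):
    return [(m.group(0), float(m.group(2))) for m in CHI_PATTERN.finditer(text)]

print(candidate_facts("The Flory-Huggins parameter chi = 0.45 for PS/PVME."))
```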
A unified approach for composite cost reporting and prediction in the ACT program
NASA Technical Reports Server (NTRS)
Freeman, W. Tom; Vosteen, Louis F.; Siddiqi, Shahid
1991-01-01
The Structures Technology Program Office (STPO) at NASA Langley Research Center has held two workshops with representatives from the commercial airframe companies to establish a plan for the development of a standard cost reporting format and a cost prediction tool for conceptual and preliminary designers. This paper reviews the findings of the workshop representatives along with a plan for implementing their recommendations. The recommendations of the cost tracking and reporting committee will be implemented by reinstituting the collection of composite part fabrication data in a format similar to the DoD/NASA Structural Composites Fabrication Guide. The process of data collection will be automated by taking advantage of current technology with user-friendly computer interfaces and electronic data transmission. Development of a conceptual and preliminary designers' cost prediction model will be initiated. The model will provide a technically sound method for evaluating the relative cost of different composite structural designs, fabrication processes, and assembly methods that can be compared to equivalent metallic parts or assemblies. The feasibility of developing cost prediction software in modular form for interfacing with state-of-the-art preliminary design tools and computer-aided design (CAD) programs is assessed.
Meeting the DHCP Challenge: A Model for Implementing a Decentralized Hospital Computer Program
Catellier, Julie; Benway, Paula K.; Perez, Kathleen
1987-01-01
The James A. Haley Veterans' Hospital in Tampa has been a consistent leader in the implementation of automated systems within the VA. Our approach has been essentially to focus on obtaining maximum user involvement in, and contribution to, the automation program within the Medical Center. Since clinical acceptance is vital to a viable program, much of our effort has been aimed at maximizing the training and participation of physicians, nurses and other clinical staff. The following is a description of our organizational structure relative to this topic. We believe it to be a highly workable approach which can be easily implemented structurally at any hospital, public or private.
Automated extraction of knowledge for model-based diagnostics
NASA Technical Reports Server (NTRS)
Gonzalez, Avelino J.; Myler, Harley R.; Towhidnejad, Massood; Mckenzie, Frederic D.; Kladke, Robin R.
1990-01-01
The concept of accessing computer-aided design (CAD) databases and automatically extracting a process model is investigated as a possible source for the generation of knowledge bases for model-based reasoning systems. The resulting system, referred to as automated knowledge generation (AKG), uses an object-oriented programming structure and constraint techniques, as well as an internal database of component descriptions, to generate a frame-based structure that describes the model. The procedure has been designed to be general enough to be easily coupled to CAD systems that feature a database capable of providing label and connectivity data from the drawn system. The AKG system is capable of defining knowledge bases in the formats required by various model-based reasoning tools.
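A minimal sketch of the kind of transformation AKG performs, assuming the CAD database supplies component labels, types, and connectivity (the frame slots and names here are illustrative, not the actual AKG format):

```python
def build_frames(components, connections):
    """components: {label: component_type}; connections: [(src, dst), ...].
    Returns a frame per component recording its type and its neighbors."""
    frames = {label: {"is_a": ctype, "inputs": [], "outputs": []}
              for label, ctype in components.items()}
    for src, dst in connections:
        frames[src]["outputs"].append(dst)
        frames[dst]["inputs"].append(src)
    return frames

frames = build_frames({"P1": "pump", "V1": "valve"}, [("P1", "V1")])
print(frames["V1"])  # {'is_a': 'valve', 'inputs': ['P1'], 'outputs': []}
```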
The role of automation and artificial intelligence
NASA Astrophysics Data System (ADS)
Schappell, R. T.
1983-07-01
Consideration is given to emerging technologies that are not currently in common use, yet will be mature enough for implementation in a space station. Artificial intelligence (AI) will permit more autonomous operation and improve the man-machine interfaces. Technology goals include the development of expert systems, a natural language query system, automated planning systems, and AI image understanding systems. Intelligent robots and teleoperators will be needed, together with improved sensory systems for robotics, vehicle control, and spacecraft housekeeping. Finally, NASA is developing the ROBSIM computer program to evaluate levels of automation, perform parametric studies and error analyses, optimize trajectories and control systems, and assess AI technology.
Reduced complexity structural modeling for automated airframe synthesis
NASA Technical Reports Server (NTRS)
Hajela, Prabhat
1987-01-01
A procedure is developed for the optimum sizing of wing structures based on representing the built-up finite element assembly of the structure by equivalent beam models. The reduced-order beam models are computationally less demanding in an optimum design environment, which dictates repetitive analysis of several trial designs. The design procedure is implemented in a computer program that requires geometry and loading information to create the wing finite element model and its equivalent beam model, and that provides a rapid estimate of the optimum weight obtained from a fully stressed design approach applied to the beam. The synthesis procedure is demonstrated for representative conventional cantilever and joined-wing configurations.
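The fully stressed design pass at the core of such a rapid weight estimate can be summarized by the classical resizing rule: scale each member's area by the ratio of its working stress to the allowable stress, and repeat analysis and resizing until the weight settles. A minimal sketch, assuming a caller-supplied analyze(areas) routine for the equivalent beam model:

```python
import numpy as np

def fsd_resize(areas, stresses, sigma_allow, a_min=1e-6):
    # Classical fully stressed design update, clamped to a minimum gauge.
    return np.maximum(areas * np.abs(stresses) / sigma_allow, a_min)

def fsd_weight(areas, lengths, rho, analyze, sigma_allow, n_pass=20):
    """analyze(areas) -> member stresses; returns the converged weight."""
    for _ in range(n_pass):
        areas = fsd_resize(areas, analyze(areas), sigma_allow)
    return rho * float(np.dot(areas, lengths))
```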
NASA Technical Reports Server (NTRS)
Hribar, Michelle R.; Frumkin, Michael; Jin, Haoqiang; Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)
1998-01-01
Over the past decade, high performance computing has evolved rapidly; systems based on commodity microprocessors have been introduced in quick succession from at least seven vendors/families. Porting codes to every new architecture is a difficult problem; in particular, here at NASA, there are many large CFD applications that are very costly to port to new machines by hand. The LCM ("Legacy Code Modernization") Project is the development of an integrated parallelization environment (IPE) which performs the automated mapping of legacy CFD (Fortran) applications to state-of-the-art high performance computers. While most porting projects focus on the parallelization of the code, we consider porting to be an iterative process consisting of several steps: 1) code cleanup, 2) serial optimization, 3) parallelization, 4) performance monitoring and visualization, 5) intelligent tools for automated tuning using performance prediction, and 6) machine-specific optimization. The approach for building this parallelization environment is to build the components for each of the steps simultaneously and then integrate them together. The demonstration will exhibit our latest research in building this environment: 1) parallelizing tools and compiler evaluation; 2) code cleanup and serial optimization using automated scripts; 3) development of a code generator for performance prediction; 4) automated partitioning; and 5) automated insertion of directives. These demonstrations will exhibit the effectiveness of an automated approach for all the steps involved in porting and tuning a legacy code application for a new architecture.
The application of automated operations at the Institutional Processing Center
NASA Technical Reports Server (NTRS)
Barr, Thomas H.
1993-01-01
The JPL Institutional and Mission Computing Division, Communications, Computing and Network Services Section, with its mission contractor, OAO Corporation, has for some time been applying automation to the operation of JPL's Information Processing Center (IPC). Automation does not come in one easy-to-use package; automation for a data processing center is made up of many different software and hardware products supported by trained personnel. The IPC automation effort formally began with console automation and has since spiraled out to include production scheduling, data entry, report distribution, online reporting, failure reporting and resolution, documentation, library storage, and operator and user education, while requiring the interaction of multi-vendor and locally developed software. To begin the process, automation goals are determined. Then a team including operations personnel is formed to research and evaluate available options. By acquiring knowledge of current products and those in development, taking an active role in industry organizations, and learning of other data centers' experiences, a forecast can be developed as to what direction technology is moving. With IPC management's approval, an implementation plan is developed and resources are identified to test or implement new systems. As an example, IPC's new automated data entry system was researched by Data Entry, Production Control, and Advance Planning personnel. A proposal was then submitted to management for review. A determination to implement the new system was made, and the elements/personnel involved with the initial planning performed the implementation. The final steps of the implementation were educating data entry personnel in the areas affected and making the procedural changes necessary for the successful operation of the new system.
Effects of automation of information-processing functions on teamwork.
Wright, Melanie C; Kaber, David B
2005-01-01
We investigated the effects of automation as applied to different stages of information processing on team performance in a complex decision-making task. Forty teams of 2 individuals performed a simulated Theater Defense Task. Four automation conditions were simulated, with computer assistance applied to realistic combinations of information acquisition, information analysis, and decision selection functions across two levels of task difficulty. Multiple measures of team effectiveness and team coordination were used. Results indicated that different forms of automation have different effects on teamwork. Compared with a baseline condition, an increase in automation of information acquisition led to an increase in the ratio of information transferred to information requested; an increase in automation of information analysis resulted in higher team coordination ratings; and automation of decision selection led to better team effectiveness under low levels of task difficulty, but at the cost of higher workload. The results support the use of early and intermediate forms of automation, related to the acquisition and analysis of information, in the design of team tasks. Decision-making automation may provide benefits in more limited contexts. Applications of this research include the design and evaluation of automation in team environments.
Laser interferometric measurement of ion electrode shape and charge exchange erosion
NASA Technical Reports Server (NTRS)
Macrae, Gregory S.; Mercer, Carolyn R.
1991-01-01
A projected fringe profilometry system was applied to surface contour measurements of an accelerator electrode from an ion thruster. The system permitted noncontact, nondestructive evaluation of the fine and gross structure of the electrode. A 3-D surface map of a dished electrode was generated without altering the electrode surface. The same system was used to examine charge exchange erosion pits near the periphery of the electrode to determine the depth, location, and volume of material lost. This electro-optical measurement system allowed rapid, nondestructive, digital data acquisition coupled with automated computer data processing. In addition, variable sensitivity allowed both coarse and fine measurements of objects having various surface finishes.
Automatically generated code for relativistic inhomogeneous cosmologies
NASA Astrophysics Data System (ADS)
Bentivegna, Eloisa
2017-02-01
The applications of numerical relativity to cosmology are on the rise, contributing insight into such cosmological problems as structure formation, primordial phase transitions, gravitational-wave generation, and inflation. In this paper, I present the infrastructure for the computation of inhomogeneous dust cosmologies which was used recently to measure the effect of nonlinear inhomogeneity on the cosmic expansion rate. I illustrate the code's architecture, provide evidence for its correctness in a number of familiar cosmological settings, and evaluate its parallel performance for grids of up to several billion points. The code, which is available as free software, is based on the Einstein Toolkit infrastructure, and in particular leverages the automated code generation capabilities provided by its component Kranc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walker, Andrew; Lawrence, Earl
The Response Surface Modeling (RSM) Tool Suite is a collection of three codes used to generate an empirical interpolation function for a collection of drag coefficient calculations computed with Test Particle Monte Carlo (TPMC) simulations. The first code, "Automated RSM", automates the generation of a drag coefficient RSM for a particular object to a single command. "Automated RSM" first creates a Latin Hypercube Sample (LHS) of 1,000 ensemble members to explore the global parameter space. For each ensemble member, a TPMC simulation is performed and the object drag coefficient is computed. In the next step of the "Automated RSM" code, a Gaussian process is used to fit the TPMC simulations. In the final step, Markov Chain Monte Carlo (MCMC) is used to evaluate the non-analytic probability distribution function from the Gaussian process. The second code, "RSM Area", creates a look-up table for the projected area of the object based on input limits on the minimum and maximum allowed pitch and yaw angles and pitch and yaw angle intervals. The projected area from the look-up table is used to compute the ballistic coefficient of the object based on its pitch and yaw angle. An accurate ballistic coefficient is crucial in accurately computing the drag on an object. The third code, "RSM Cd", uses the RSM generated by the "Automated RSM" code and the projected area look-up table generated by the "RSM Area" code to accurately compute the drag coefficient and ballistic coefficient of the object. The user can modify the object velocity, object surface temperature, the translational temperature of the gas, the species concentrations of the gas, and the pitch and yaw angles of the object. Together, these codes allow for the accurate derivation of an object's drag coefficient and ballistic coefficient under any conditions with only knowledge of the object's geometry and mass.
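A compact sketch of the surrogate-building step described above, with a stand-in function in place of a real TPMC run (SciPy's Latin Hypercube sampler and scikit-learn's Gaussian process are assumed; the parameter ranges are illustrative):

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def tpmc_drag_coefficient(x):
    # Placeholder for a Test Particle Monte Carlo run (not a physical model).
    velocity, t_surface, t_trans = x
    return 2.2 + 0.1 * np.sin(velocity / 500.0) + 1e-4 * (t_surface - t_trans)

lo, hi = [7000.0, 150.0, 600.0], [8000.0, 500.0, 1200.0]
X = qmc.scale(qmc.LatinHypercube(d=3, seed=0).random(n=1000), lo, hi)
y = np.array([tpmc_drag_coefficient(x) for x in X])

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([300.0, 100.0, 200.0]),
                              normalize_y=True).fit(X, y)
cd_mean, cd_std = gp.predict(X[:5], return_std=True)  # surrogate drag + uncertainty
```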
Automating ATLAS Computing Operations using the Site Status Board
NASA Astrophysics Data System (ADS)
Andreeva, J.; Borrego Iglesias, C.; Campana, S.; Di Girolamo, A.; Dzhunov, I.; Espinal Curull, X.; Gayazov, S.; Magradze, E.; Nowotka, M.; Rinaldi, L.; Saiz, P.; Schovancova, J.; Stewart, G. A.; Wright, M.
2012-12-01
The automation of operations is essential to reduce manpower costs and improve the reliability of the system. The Site Status Board (SSB) is a framework which allows Virtual Organizations to monitor their computing activities at distributed sites and to evaluate site performance. The ATLAS experiment uses the SSB intensively for distributed computing shifts, for estimating data processing and data transfer efficiencies at a particular site, and for implementing automatic exclusion of sites from computing activities in case of potential problems. The ATLAS SSB provides a real-time aggregated monitoring view and keeps the history of the monitoring metrics. Based on this history, the usability of a site from the perspective of ATLAS is calculated. The paper describes how the SSB is integrated into the ATLAS operations and computing infrastructure and covers implementation details of the ATLAS SSB sensors and alarm system based on the information in the SSB. It demonstrates the positive impact of the use of the SSB on the overall performance of ATLAS computing activities and gives an overview of future plans.
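A toy version of such an automatic exclusion rule, computing a site's usability as the passing fraction of recent metric samples (the names and thresholds are illustrative, not the actual ATLAS SSB logic):

```python
def usability(history, ok=lambda m: m["efficiency"] >= 0.8):
    # Fraction of recent metric samples that pass the acceptance test.
    return sum(ok(m) for m in history) / len(history)

def sites_to_exclude(metrics_by_site, threshold=0.6):
    return [site for site, hist in metrics_by_site.items()
            if usability(hist) < threshold]

print(sites_to_exclude({"SITE_A": [{"efficiency": 0.9}, {"efficiency": 0.5}],
                        "SITE_B": [{"efficiency": 0.95}, {"efficiency": 0.9}]}))
# ['SITE_A']
```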
Automated dynamic analytical model improvement for damped structures
NASA Technical Reports Server (NTRS)
Fuh, J. S.; Berman, A.
1985-01-01
A method is described to improve a linear, nonproportionally damped analytical model of a structure. The procedure finds the smallest changes in the analytical model such that the improved model matches the measured modal parameters. Features of the method are: (1) the ability to properly treat the complex-valued modal parameters of a damped system; (2) applicability to realistically large structural models; and (3) computational efficiency, achieved without eigensolutions or the inversion of a large matrix.
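For a single measured complex mode, the minimal-norm flavor of such an update can be written in closed form without any eigensolution or large-matrix inversion. The sketch below is a simplified single-mode illustration rather than the paper's full procedure, and it corrects only the stiffness matrix:

```python
import numpy as np

def min_norm_stiffness_update(M, C, K, lam, phi):
    """Smallest (Frobenius-norm) dK such that the measured complex mode
    (lam, phi) satisfies (lam**2 M + lam C + K + dK) phi = 0."""
    r = (lam**2 * M + lam * C + K) @ phi                 # modal residual
    dK = -np.outer(r, phi.conj()) / (phi.conj() @ phi)   # rank-one correction
    return dK
```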
Evaluating Management Information Systems, A Protocol for Automated Peer Review Systems
Black, Gordon C.
1980-01-01
This paper discusses key issues in evaluating an automated Peer Review System. Included are the conceptual base, design, steps in planning structural components, operational parameters, criteria, costs, and a detailed outline or protocol for use in the evaluation. At the heart of the Peer Review System are the criteria utilized for measuring quality. Criteria evaluation should embrace, as a minimum, appropriateness, validity and reliability, and completeness or comprehensiveness of content. Such an evaluation is not complete without determining the impact (clinical outcome) of the service system on the patient and the population served.
NASA Astrophysics Data System (ADS)
Hopp, T.; Zapf, M.; Ruiter, N. V.
2014-03-01
An essential processing step for the comparison of Ultrasound Computer Tomography images to other modalities, as well as for use in further image processing, is to segment the breast from the background. In this work we present a (semi-)automated 3D segmentation method which is based on the detection of the breast boundary in coronal slice images and a subsequent surface fitting. The method was evaluated using a software phantom and in-vivo data. The fully automatically processed phantom results showed that a segmentation of approximately 10% of the slices of a dataset is sufficient to recover the overall breast shape. Application to 16 in-vivo datasets was performed successfully using semi-automated processing, i.e. using a graphical user interface for manual corrections of the automated breast boundary detection. The processing time for the segmentation of an in-vivo dataset was reduced by a factor of four compared to a fully manual segmentation. Comparison to manually segmented images identified a smoother surface for the semi-automated segmentation, with an average of 11% differing voxels and an average surface deviation of 2 mm. Limitations of the edge detection may be overcome by future updates of the KIT USCT system, allowing fully automated use of our segmentation approach.
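Since only a fraction of the coronal slices need segmented boundaries, the remaining slices can be filled in by interpolating the detected contours along the slice axis. A minimal sketch of that surface-fitting idea, assuming boundaries stored as polar radii per slice (this representation is our assumption, not necessarily that of the paper):

```python
import numpy as np
from scipy.interpolate import interp1d

def interpolate_surface(slice_idx, radii, n_slices):
    """slice_idx: indices of segmented slices; radii: (len(slice_idx), n_angles)
    boundary radius per polar angle. Returns radii for every slice."""
    f = interp1d(slice_idx, radii, axis=0, kind="cubic",
                 fill_value="extrapolate")
    return f(np.arange(n_slices))
```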
Computational neuroanatomy: ontology-based representation of neural components and connectivity
Rubin, Daniel L; Talos, Ion-Florin; Halle, Michael; Musen, Mark A; Kikinis, Ron
2009-01-01
Background: A critical challenge in neuroscience is organizing, managing, and accessing the explosion in neuroscientific knowledge, particularly anatomic knowledge. We believe that explicit knowledge-based approaches to making neuroscientific knowledge computationally accessible will be helpful in tackling this challenge and will enable a variety of applications exploiting this knowledge, such as surgical planning. Results: We developed ontology-based models of neuroanatomy to enable symbolic lookup, logical inference and mathematical modeling of neural systems. We built a prototype model of the motor system that integrates descriptive anatomic and qualitative functional neuroanatomical knowledge. In addition to modeling normal neuroanatomy, our approach provides an explicit representation of abnormal neural connectivity in disease states, such as common movement disorders. The ontology-based representation encodes both structural and functional aspects of neuroanatomy. The ontology-based models can be evaluated computationally, enabling the development of automated computer reasoning applications. Conclusion: Neuroanatomical knowledge can be represented in machine-accessible format using ontologies. Computational neuroanatomical approaches such as those described in this work could become a key tool in translational informatics, leading to decision support applications that inform and guide surgical planning and personalized care for neurological disease in the future. PMID:19208191
Computer Technology for Industry
NASA Technical Reports Server (NTRS)
1979-01-01
In this age of the computer, more and more business firms are automating their operations for increased efficiency in a great variety of jobs, from simple accounting to managing inventories, from precise machining to analyzing complex structures. In the interest of national productivity, NASA is providing assistance both to longtime computer users and newcomers to automated operations. Through a special technology utilization service, NASA saves industry time and money by making available already-developed computer programs which have secondary utility. A computer program is essentially a set of instructions which tells the computer how to produce desired information or effects by drawing upon its stored input. Developing a new program from scratch can be costly and time-consuming. Very often, however, a program developed for one purpose can readily be adapted to a totally different application. To help industry take advantage of existing computer technology, NASA operates the Computer Software Management and Information Center (COSMIC) (registered trademark), located at the University of Georgia. COSMIC maintains a large library of computer programs developed for NASA, the Department of Defense, the Department of Energy and other technology-generating agencies of the government. The Center receives a continual flow of software packages, screens them for adaptability to private sector usage, stores them, and informs potential customers of their availability.
[Health technology assessment report: Computer-assisted Pap test for cervical cancer screening].
Della Palma, Paolo; Moresco, Luca; Giorgi Rossi, Paolo
2012-01-01
HEALTH PROBLEM: Cervical cancer is a disease which is highly preventable by means of Pap test screening for the precancerous lesions, which can be easily treated. Furthermore, in the near future, control of the disease will be enhanced by the vaccination which prevents infection with the human papillomavirus types that cause the vast majority of cervical cancers. The effectiveness of screening in drastically reducing cervical cancer incidence has been clearly demonstrated. The epidemiology of cervical cancer in industrialised countries is now determined mostly by the Pap test coverage of the female population and by the ability of health systems to assure appropriate follow-up after an abnormal Pap test. Today there are two fully automated systems for the computer-assisted Pap test: the BD FocalPoint and the Hologic Imager. Recently, the Hologic Integrated Imager, a semi-automated system, was launched. The two fully automated systems are composed of a central scanner, where the machine examines the cytologic slide, and of one or more review stations, where the cytologists analyze the slides previously scanned centrally. The software used by the two systems identifies the fields of interest so that the cytologists can look only at those points, automatically pointed out by the review station. Furthermore, the FocalPoint system classifies the slides according to their level of risk of containing signs of relevant lesions. Those in the upper classes (about one fifth of the slides) are labelled "further review", while those at the lower level of risk, i.e. slides whose risk is so low that they can be considered negative with no human review, are labelled "no further review". The aim of the computer-assisted Pap test is to reduce slide examination time and to increase productivity. Furthermore, the number of errors due to lack of attention may decrease. Both systems can be applied to liquid-based cytology, while only the BD FocalPoint can be used on conventional smears. Cytology screening has some critical points: there is a shortage of cytologists/cytotechnicians; the quality strongly depends on the experience and ability of the cytologist; there is a subjective component in the cytological diagnosis; and in highly screened populations, the prevalence of lesions is very low and the activity of cytologists is very monotonous. On the other hand, a progressive shift to molecular screening using the HPV-DNA test as the primary screening test is very likely in the near future; cytology will then be used as a triage test, dramatically reducing the number of slides to process and increasing the prevalence of lesions in those Pap tests. In this Report we assume that the diagnostic accuracy of the computer-assisted Pap test is equal to the accuracy of the manual Pap test and, consequently, that screening using the computer-assisted Pap test has the same efficacy in reducing cervical cancer incidence and mortality. Under this assumption, the effectiveness/benefit/utility is the same for the two screening modes, i.e. the economic analysis will be a cost minimization study. Furthermore, the screening process is identical for the two modalities in all phases except slide interpretation. The cost minimization analysis will therefore be limited to the only phase differing between the two modes, i.e. the study will be a differential cost analysis between a labour-intensive strategy (the traditional Pap test) and a technology-intensive strategy (the computer-assisted Pap test).
Briefly, the objectives of this HTA Report are: to determine the break-even point of computer-assisted Pap test systems, i.e. the volume of slides processed per year at which putting in place a computer-assisted Pap test system becomes economically convenient; to quantify the cost per Pap test in different scenarios according to screening centre activity volume, productivity of cytologists, and type of cytology (conventional smear or liquid-based, fully automated or semi-automated computer-assisted); to analyse the computer-assisted Pap test in the Italian context, through a survey of the centres using the technology, collecting data useful for the sensitivity analysis of the economic evaluation; to evaluate the acceptability of the technology in the screening services; to evaluate the organizational and financial impact of the computer-assisted Pap test in different scenarios; and to illustrate the ideal organization for implementing the computer-assisted Pap test in terms of volume of activity, productivity, and human and technological resources. To produce this Report, the following process was adopted: application to the Ministry of Health for a grant, "Analysis of the impact of professional involvement in evidence generation for the HTA process"; within this project, the sub-project "Cost effectiveness evaluation of the computer-assisted Pap test in the Italian screening programmes" was financed; constitution of the Working Group, which included the project coordinator, the principal investigator, and the health economist; identification of the centres using the computer-assisted Pap test and which had published scientific reports on the subject; and identification of the Consulting Committee (stakeholders), which included screening programme managers, pathologists, economists, health policy-makers, citizen organizations, and manufacturers. Once the evaluation was concluded, a plenary meeting of the Working Group and the Consulting Committee was held. The Working Group drafted the final version of this Report, which took into account the comments received. The fully automated computer-assisted Pap test has an important financial and organizational impact on screening programmes. The assessment of this health technology reached the following conclusions: according to the survey results, after some distrust, cytologists accepted the use of the machine and appreciated the reduction in interpretation time and the reliability in identifying the fields of interest; from an economic point of view, the automated computer-assisted Pap test can be convenient only with conventional smears, if the screening centre has a volume of more than 49,000 slides/year and cytologist productivity increases about threefold. It must be highlighted that adopting the automated Pap test is not in itself sufficient to reach such an increase in productivity; the laboratory must be organised or re-organised to optimise the use of the review stations and of person time. In the case of liquid-based cytology, the adoption of the automated computer-assisted Pap test can only increase the costs. In fact, liquid-based cytology increases the cost of consumable materials but reduces the interpretation time, even in manual screening. Consequently, the reduction in human costs is smaller in the case of computer-assisted screening.
Liquid-based cytology has other implications and advantages not linked to the use of the computer-assisted Pap test that should be taken into account and are beyond the scope of this Report; given that the computer-assisted Pap test reduces human costs, it may be more advantageous where the cost of cytologists is higher; given the relatively small volume of activity of screening centres in Italy, the computer-assisted Pap test may be reasonable for a network using only one central scanner and several remote review stations; the use of the automated computer-assisted Pap test only for quality control in a single centre is not economically sustainable. In this case as well, several centres, for example at the regional level, may form a consortium to reach a number of slides sufficient to achieve the break-even point. Regarding the use of a machine rather than human intelligence to interpret the slides, some ethical issues were initially raised, but both the scientific community and healthcare professionals have accepted this technology. The identification of fields of interest by the machine is highly reproducible, reducing subjectivity in the diagnostic process. The Hologic system always includes a check by the human eye, while the FocalPoint system identifies about one fifth of the slides as "no further review". Several studies, some of them conducted in Italy, confirmed the reliability of this classification. There is still some resistance to accepting the practice of "no further review"; a check of previous slides and clinical data can be useful to make the cytologist and the clinician more confident. The computer-assisted automated Pap test should be introduced only if there is a need to increase the volume of slides screened to cover the screening target population and sufficient human resources are not available. Switching a programme using conventional slides to automatic scanning can lead to a reduction in costs only if the volume exceeds 49,000 slides per year and cytologist productivity is optimised to more than 20,000 slides per year. At a productivity of 15,000 slides per year or fewer, the automated computer-assisted Pap test cannot be convenient. Switching from manual screening with conventional slides to automatic scanning with liquid-based cytology cannot generate any economic saving, but the system could increase output with a given number of staff. The transition from manual to computer-assisted automated screening of liquid-based cytology will not generate savings, and the increase in productivity will be lower than that of the switch from manual/conventional to automated/conventional. The use of biologists or pathologists as cytologists is more costly than the use of cytoscreeners. Given that the automated computer-assisted Pap test reduces human resource costs, its adoption in a model using only biologists and pathologists for screening is more economically advantageous. (ABSTRACT TRUNCATED)
Dynamic CT myocardial perfusion imaging: performance of 3D semi-automated evaluation software.
Ebersberger, Ullrich; Marcus, Roy P; Schoepf, U Joseph; Lo, Gladys G; Wang, Yining; Blanke, Philipp; Geyer, Lucas L; Gray, J Cranston; McQuiston, Andrew D; Cho, Young Jun; Scheuering, Michael; Canstein, Christian; Nikolaou, Konstantin; Hoffmann, Ellen; Bamberg, Fabian
2014-01-01
To evaluate the performance of three-dimensional semi-automated evaluation software for the assessment of myocardial blood flow (MBF) and blood volume (MBV) at dynamic myocardial perfusion computed tomography (CT). Volume-based software relying on marginal space learning and probabilistic boosting tree-based contour fitting was applied to CT myocardial perfusion imaging data of 37 subjects. In addition, all image data were analysed manually, and both approaches were compared with SPECT findings. Study endpoints included time of analysis and conventional measures of diagnostic accuracy. Of 592 analysable segments, 42 showed perfusion defects on SPECT. Average analysis times for the manual and software-based approaches were 49.1 ± 11.2 and 16.5 ± 3.7 min, respectively (P < 0.01). There was strong agreement between the two approaches for both measures of interest (MBF, ICC = 0.91, and MBV, ICC = 0.88, both P < 0.01) and no significant difference in diagnostic accuracy between the manual and software-based approaches for either MBF or MBV (all comparisons P > 0.05). Three-dimensional semi-automated evaluation of dynamic myocardial perfusion CT data provides similar measures and diagnostic accuracy to manual evaluation, albeit with substantially reduced analysis times. This capability may aid the integration of this test into clinical workflows. • Myocardial perfusion CT is attractive for comprehensive coronary heart disease assessment. • Traditional image analysis methods are cumbersome and time-consuming. • Automated 3D perfusion software shortens analysis times. • Automated 3D perfusion software increases standardisation of myocardial perfusion CT. • Automated, standardised analysis fosters myocardial perfusion CT integration into clinical practice.
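For reference, one common ICC variant for this kind of manual-versus-automated agreement is the two-way consistency ICC(3,1); the abstract does not state which variant was used, so the following is a general-purpose sketch:

```python
import numpy as np

def icc_consistency(X):
    """ICC(3,1) for an (n_subjects, k_raters) matrix, e.g. MBF per method."""
    n, k = X.shape
    grand = X.mean()
    ms_r = k * ((X.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # subjects
    sse = ((X - X.mean(axis=1, keepdims=True)
              - X.mean(axis=0, keepdims=True) + grand) ** 2).sum()
    ms_e = sse / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e)
```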
Juan-Albarracín, Javier; Fuster-Garcia, Elies; Manjón, José V; Robles, Montserrat; Aparici, F; Martí-Bonmatí, L; García-Gómez, Juan M
2015-01-01
Automatic brain tumour segmentation has become a key component for the future of brain tumour treatment. Currently, most brain tumour segmentation approaches arise from the supervised learning standpoint, which requires a labelled training dataset from which to infer the models of the classes. The performance of these models is directly determined by the size and quality of the training corpus, whose retrieval becomes a tedious and time-consuming task. On the other hand, unsupervised approaches avoid these limitations but often do not reach results comparable to those of supervised methods. In this sense, we propose an automated unsupervised method for brain tumour segmentation based on anatomical Magnetic Resonance (MR) images. Four unsupervised classification algorithms, grouped by their structured or non-structured condition, were evaluated within our pipeline. As non-structured algorithms, we evaluated K-means, Fuzzy K-means and the Gaussian Mixture Model (GMM), whereas as a structured classification algorithm we evaluated the Gaussian Hidden Markov Random Field (GHMRF). An automated postprocess based on a statistical approach supported by tissue probability maps is proposed to automatically identify the tumour classes after segmentation. We evaluated our brain tumour segmentation method with the public BRAin Tumor Segmentation (BRATS) 2013 Test and Leaderboard datasets. Our approach based on the GMM improves the results obtained by most of the supervised methods evaluated with the Leaderboard set and reaches the second position in the ranking. Our variant based on the GHMRF achieves the first position in the Test ranking of the unsupervised approaches and the seventh position in the general Test ranking, which confirms the method as a viable alternative for brain tumour segmentation.
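A bare-bones version of the GMM variant, clustering voxel intensities from co-registered MR channels (the channel names and class count are assumptions; the paper's tumour-class identification via tissue probability maps is only noted in a comment):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_segment(volumes, n_classes=5, seed=0):
    """volumes: co-registered 3D arrays (e.g. T1, T1c, T2, FLAIR).
    Returns a label volume; identifying which label is tumour would be
    done afterwards, e.g. with tissue probability maps."""
    features = np.stack([v.ravel() for v in volumes], axis=1)
    gmm = GaussianMixture(n_components=n_classes, covariance_type="full",
                          random_state=seed).fit(features)
    return gmm.predict(features).reshape(volumes[0].shape)
```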
Recycling isotachophoresis - A novel approach to preparative protein fractionation
NASA Technical Reports Server (NTRS)
Sloan, Jeffrey E.; Thormann, Wolfgang; Bier, Milan; Twitty, Garland E.; Mosher, Richard A.
1986-01-01
The concept of automated recycling isotachophoresis (RITP) as a purification methodology is discussed, along with a description of the apparatus. In the present automated RITP, the computer system tracks the attainment of steady state using arrays of universal and specific sensors, monitors the position of the front edge of the zone structure, activates the counterflow if the leading boundary passes a specified position along the separation axis, and adjusts the applied current accordingly. The system demonstrates high resolution, as well as higher processing rates than are possible in zone electrophoresis or isoelectric focusing.
DockoMatic: automated peptide analog creation for high throughput virtual screening.
Jacob, Reed B; Bullock, Casey W; Andersen, Tim; McDougal, Owen M
2011-10-01
The purpose of this manuscript is threefold: (1) to describe an update to DockoMatic that allows the user to generate cyclic peptide analog structure files based on Protein Data Bank (PDB) files, (2) to test the accuracy of the peptide analog structure generation utility, and (3) to evaluate the high-throughput capacity of DockoMatic. The DockoMatic graphical user interface interfaces with the software program Treepack to create user-defined peptide analogs. To validate this approach, DockoMatic-produced cyclic peptide analogs were tested for three-dimensional structure consistency and binding affinity against four experimentally determined peptide structure files available in the Research Collaboratory for Structural Bioinformatics database. The peptides used to evaluate this new functionality were the alpha-conotoxins ImI, PnIA, and their published analogs. Peptide analogs were generated by DockoMatic and tested for their ability to bind to X-ray crystal structure models of the acetylcholine binding protein originating from Aplysia californica. The results, consisting of more than 300 simulations, demonstrate that DockoMatic predicts the binding energy of peptide structures to within 3.5 kcal mol(-1), and the orientation of the bound ligand compares to within 1.8 Å root mean square deviation with experimental data. Evaluation of high-throughput virtual screening capacity demonstrated that DockoMatic can collect, evaluate, and summarize the output of 10,000 AutoDock jobs in less than 2 hours of computational time, while 100,000 jobs require approximately 15 hours and 1,000,000 jobs are estimated to take up to a week. Copyright © 2011 Wiley Periodicals, Inc.
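The pose-accuracy figure quoted above is an RMSD between docked and crystallographic ligand coordinates; since docking poses already sit in the receptor frame, no superposition is needed. A minimal sketch:

```python
import numpy as np

def ligand_rmsd(coords_a, coords_b):
    """RMSD between two same-order (N, 3) coordinate arrays, no alignment."""
    diff = np.asarray(coords_a) - np.asarray(coords_b)
    return float(np.sqrt((diff * diff).sum() / len(diff)))
```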
Code of Federal Regulations, 2012 CFR
2012-10-01
… records, financial records, and automated data systems; (ii) The data are free from computational errors and are internally consistent …
DOE Office of Scientific and Technical Information (OSTI.GOV)
Orimoto, Yuuichi; Aoki, Yuriko
2016-07-14
An automated property optimization method was developed based on the ab initio O(N) elongation (ELG) method and applied to the optimization of nonlinear optical (NLO) properties in DNA as a first test. The ELG method mimics a polymerization reaction on a computer, and the reaction terminal of a starting cluster is attacked by monomers sequentially to elongate the electronic structure of the system by solving in each step a limited space including the terminal (localized molecular orbitals at the terminal) and monomer. The ELG-finite field (ELG-FF) method for calculating (hyper-)polarizabilities was used as the engine program of the optimization method, and it was found to show linear scaling efficiency while maintaining high computational accuracy for a random sequenced DNA model. Furthermore, the self-consistent field convergence was significantly improved by using the ELG-FF method compared with a conventional method, and it can lead to more feasible NLO property values in the FF treatment. The automated optimization method successfully chose an appropriate base pair from four base pairs (A, T, G, and C) for each elongation step according to an evaluation function. From test optimizations for the first order hyper-polarizability (β) in DNA, a substantial difference was observed depending on optimization conditions between "choose-maximum" (choose a base pair giving the maximum β for each step) and "choose-minimum" (choose a base pair giving the minimum β). In contrast, there was an ambiguous difference between these conditions for optimizing the second order hyper-polarizability (γ) because of the small absolute value of γ and the limitation of numerical differential calculations in the FF method. It can be concluded that the ab initio level property optimization method introduced here can be an effective step towards an advanced computer aided material design method as long as the numerical limitation of the FF method is taken into account.
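The finite-field quantities discussed above come from numerical differentiation of the dipole moment with respect to an applied field; the abstract's caveat about γ reflects how quickly such difference formulas lose accuracy. A standard central-difference sketch for one diagonal component (the stencil choice is ours, not necessarily the ELG-FF one):

```python
def ff_derivatives(mu, f):
    """mu: callable returning the dipole at field strength F; f: field step.
    Returns finite-field estimates of alpha, beta, gamma (one component)."""
    m0, mp, mm = mu(0.0), mu(f), mu(-f)
    mp2, mm2 = mu(2 * f), mu(-2 * f)
    alpha = (8 * (mp - mm) - (mp2 - mm2)) / (12 * f)
    beta = (mp + mm - 2 * m0) / f**2
    gamma = (mp2 - 2 * mp + 2 * mm - mm2) / (2 * f**3)
    return alpha, beta, gamma
```

Because gamma is divided by f**3, small absolute values of γ are easily swamped by numerical noise, which is consistent with the ambiguity the authors report.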
Yousef Kalafi, Elham; Town, Christopher; Kaur Dhillon, Sarinder
2017-09-04
Identification of taxa at a specific level is time consuming and reliant upon expert ecologists; hence the demand for automated species identification has increased over the last two decades. Automation of data classification is primarily focussed on images, and incorporating and analysing image data has recently become easier due to developments in computational technology. Research efforts in species identification include processing specimen images, extracting identifying features, and classifying specimens into the correct categories. In this paper, we discuss recent automated species identification systems, categorizing and evaluating their methods. We reviewed and compared different methods in a step-by-step scheme of automated identification and classification systems for species images. The selection of methods is influenced by many variables, such as the level of classification, the amount of training data, and the complexity of the images. The aim of this paper is to provide researchers and scientists with an extensive background study on work related to automated species identification, focusing on the pattern recognition techniques used in building such systems for biodiversity studies.
The Plight of Manufacturing: What Can Be Done?
ERIC Educational Resources Information Center
Cyert, Richard M.
1985-01-01
Proposes that full automation is the best current option for the United States' manufacturing industries. Advocates increased use of electronics, robotics, and computers in the establishment of unmanned factories. Implications of this movement are examined in terms of labor, management, and the structure of the economy. (ML)
Jimenez-Del-Toro, Oscar; Muller, Henning; Krenn, Markus; Gruenberg, Katharina; Taha, Abdel Aziz; Winterstein, Marianne; Eggel, Ivan; Foncubierta-Rodriguez, Antonio; Goksel, Orcun; Jakab, Andras; Kontokotsios, Georgios; Langs, Georg; Menze, Bjoern H; Salas Fernandez, Tomas; Schaer, Roger; Walleyo, Anna; Weber, Marc-Andre; Dicente Cid, Yashin; Gass, Tobias; Heinrich, Mattias; Jia, Fucang; Kahl, Fredrik; Kechichian, Razmig; Mai, Dominic; Spanier, Assaf B; Vincent, Graham; Wang, Chunliang; Wyeth, Daniel; Hanbury, Allan
2016-11-01
Variations in the shape and appearance of anatomical structures in medical images are often relevant radiological signs of disease, and automatic tools can help automate parts of their otherwise manual evaluation. A cloud-based evaluation framework is presented in this paper, including results of benchmarking current state-of-the-art medical imaging algorithms for anatomical structure segmentation and landmark detection: the VISCERAL Anatomy benchmarks. The algorithms are implemented in virtual machines in the cloud, where participants can access only the training data, and can be run privately by the benchmark administrators to objectively compare their performance on an unseen common test set. Overall, 120 computed tomography and magnetic resonance patient volumes were manually annotated to create a standard Gold Corpus containing a total of 1295 structures and 1760 landmarks. Ten participants contributed automatic algorithms for the organ segmentation task, and three for the landmark localization task. Different algorithms obtained the best scores in the four available imaging modalities and for subsets of anatomical structures. The annotation framework, resulting data set, evaluation setup, results and performance analysis from the three VISCERAL Anatomy benchmarks are presented in this article. Both the VISCERAL data set and the Silver Corpus, generated by fusing the participant algorithms' outputs on a larger set of non-manually-annotated medical images, are available to the research community.
Automated matching software for clinical trials eligibility: measuring efficiency and flexibility.
Penberthy, Lynne; Brown, Richard; Puma, Federico; Dahman, Bassam
2010-05-01
Clinical trials (CT) serve as the medium that translates clinical research into standards of care. Low or slow recruitment leads to delays in the delivery of new therapies to the public. Determination of eligibility in all patients is one of the most important factors in assuring unbiased results from the clinical trials process, and it represents the first step in addressing underrepresentation and equal access to clinical trials. This pilot project evaluated the efficiency, flexibility, and generalizability of an automated clinical trials eligibility screening tool across 5 different clinical trials and clinical trial scenarios. There was a substantial total savings during the study period in research staff time spent evaluating patients for eligibility, ranging from 165 to 1,329 hours. There was a marked enhancement in efficiency with the automated system for all but one study in the pilot. The ratio of mean staff time required per eligible patient identified ranged from 0.8 to 19.4 for the manual versus the automated process. The results of this study demonstrate that automation offers an opportunity to reduce the burden of the manual processes required for CT eligibility screening and to assure that all patients have an opportunity to be evaluated for participation in clinical trials as appropriate. The automated process greatly reduces the time spent on eligibility screening compared with the traditional manual process by effectively transferring the load of the eligibility assessment to the computer. Copyright (c) 2010 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Colón, Yamil J.; Gómez-Gualdrón, Diego A.; Snurr, Randall Q.
Metal-organic frameworks (MOFs) are promising materials for a range of energy and environmental applications. Here we describe in detail a computational algorithm and code to generate MOFs based on edge-transitive topological nets for subsequent evaluation via molecular simulation. This algorithm has been previously used by us to construct and evaluate 13 512 MOFs of 41 different topologies for cryo-adsorbed hydrogen storage. Grand canonical Monte Carlo simulations are used here to evaluate the 13 512 structures for the storage of gaseous fuels such as hydrogen and methane and nondistillative separation of xenon/krypton mixtures at various operating conditions. MOF performance for both gaseous fuel storage and xenon/krypton separation is influenced by topology. Simulation data suggest that gaseous fuel storage performance is topology-dependent due to MOF properties such as void fraction and surface area combining differently in different topologies, whereas xenon/krypton separation performance is topology-dependent due to how topology constrains the pore size distribution.
1986-09-01
… implement a computer program as a function of the Function Point total. As shown in Table 9, the software product (referred to as SPQR) establishes the … language being used. Source code statements are defined in SPQR as consisting of executable statements and data definitions. The factors used to calculate … SPQR is a trademark of Software Productivity Research, Inc. (Table 9: Number of computer program source statements per Function Point total.)
Automated methods for hippocampus segmentation: the evolution and a review of the state of the art.
Dill, Vanderson; Franco, Alexandre Rosa; Pinho, Márcio Sarroglia
2015-04-01
The segmentation of the hippocampus in Magnetic Resonance Imaging (MRI) has been an important procedure for diagnosing and monitoring several clinical conditions. The precise delineation of the borders of this brain structure makes it possible to measure its volume and estimate its shape, which can be used to diagnose diseases such as Alzheimer's disease, schizophrenia and epilepsy. As the manual segmentation procedure in three-dimensional images is highly time consuming and its reproducibility is low, automated methods introduce substantial gains. On the other hand, implementing those methods is challenging because of the low contrast of this structure relative to neighboring areas of the brain. Within this context, this paper presents a review of the evolution of automated methods for the segmentation of the hippocampus in MRI. Many proposed segmentation methods have been published in leading journals in the medical image processing area; this paper describes these methods, presenting the techniques used and quantitatively comparing them based on the Dice Similarity Coefficient. Finally, we present an evaluation of those methods considering the degree of user intervention, computational cost, segmentation accuracy and feasibility of application in a clinical routine.
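The comparison metric used in such reviews, the Dice Similarity Coefficient, is simple to state: DSC = 2|A ∩ B| / (|A| + |B|) for binary masks A and B, equal to 1.0 at perfect overlap. A minimal implementation:

```python
import numpy as np

def dice_coefficient(seg_a, seg_b):
    # DSC = 2|A ∩ B| / (|A| + |B|) for two binary segmentation masks.
    a, b = seg_a.astype(bool), seg_b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```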
Computer-Aided Diagnosis of Acute Lymphoblastic Leukaemia
2018-01-01
Leukaemia is a form of blood cancer which affects the white blood cells and damages the bone marrow. Usually a complete blood count (CBC) and bone marrow aspiration are used to diagnose acute lymphoblastic leukaemia. It can be a fatal disease if not diagnosed at an early stage. In practice, manual microscopic evaluation of stained sample slides is used for the diagnosis of leukaemia. But manual diagnostic methods are time-consuming, less accurate, and prone to errors due to various human factors like stress and fatigue. Therefore, different automated systems have been proposed to overcome the shortcomings of the manual diagnostic methods. In the recent past, several computer-aided leukaemia diagnosis methods have been presented. These automated systems are fast, reliable, and accurate compared to manual diagnosis methods. This paper presents a review of computer-aided diagnosis systems with regard to their methodologies, which include enhancement, segmentation, feature extraction, classification, and accuracy. PMID:29681996
A knowledge-based approach to automated flow-field zoning for computational fluid dynamics
NASA Technical Reports Server (NTRS)
Vogel, Alison Andrews
1989-01-01
An automated three-dimensional zonal grid generation capability for computational fluid dynamics is shown through the development of a demonstration computer program capable of automatically zoning the flow field of representative two-dimensional (2-D) aerodynamic configurations. The applicability of a knowledge-based programming approach to the domain of flow-field zoning is examined. Several aspects of flow-field zoning make the application of knowledge-based techniques challenging: the need for perceptual information, the role of individual bias in the design and evaluation of zonings, and the fact that the zoning process is modeled as a constructive, design-type task (for which there are relatively few examples of successful knowledge-based systems in any domain). Engineering solutions to the problems arising from these aspects are developed, and a demonstration system is implemented which can design, generate, and output flow-field zonings for representative 2-D aerodynamic configurations.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-24
… Production Act of 1993: Joint Venture under TIP Award No. 70NANB10H014 to perform project entitled "Automated Nondestructive Evaluation and Rehabilitation System (ANDERS) for Bridge …" … approaches or fragmented NDE, (2) comprehensive condition and structural assessment (including the …
Automated diagnosis of fetal alcohol syndrome using 3D facial image analysis
Fang, Shiaofen; McLaughlin, Jason; Fang, Jiandong; Huang, Jeffrey; Autti-Rämö, Ilona; Fagerlund, Åse; Jacobson, Sandra W.; Robinson, Luther K.; Hoyme, H. Eugene; Mattson, Sarah N.; Riley, Edward; Zhou, Feng; Ward, Richard; Moore, Elizabeth S.; Foroud, Tatiana
2012-01-01
Objectives Use three-dimensional (3D) facial laser scanned images from children with fetal alcohol syndrome (FAS) and controls to develop an automated diagnosis technique that can reliably and accurately identify individuals prenatally exposed to alcohol. Methods A detailed dysmorphology evaluation, history of prenatal alcohol exposure, and 3D facial laser scans were obtained from 149 individuals (86 FAS; 63 Control) recruited from two study sites (Cape Town, South Africa and Helsinki, Finland). Computer graphics, machine learning, and pattern recognition techniques were used to automatically identify a set of facial features that best discriminated individuals with FAS from controls in each sample. Results An automated feature detection and analysis technique was developed and applied to the two study populations. A unique set of facial regions and features were identified for each population that accurately discriminated FAS and control faces without any human intervention. Conclusion Our results demonstrate that computer algorithms can be used to automatically detect facial features that can discriminate FAS and control faces. PMID:18713153
ICECAP: an integrated, general-purpose, automation-assisted IC50/EC50 assay platform.
Li, Ming; Chou, Judy; King, Kristopher W; Jing, Jing; Wei, Dong; Yang, Liyu
2015-02-01
IC50 and EC50 values are commonly used to evaluate drug potency. Mass spectrometry (MS)-centric bioanalytical and biomarker labs are now conducting IC50/EC50 assays, which, if done manually, are tedious and error-prone. Existing bioanalytical sample preparation automation systems cannot meet IC50/EC50 assay throughput demand. A general-purpose, automation-assisted IC50/EC50 assay platform was developed to automate the calculation of the spiking-solution and matrix-solution preparation schemes, the actual preparation of spiking and matrix solutions, and the flexible sample extraction procedures after incubation. In addition, the platform automates data extraction, nonlinear regression curve fitting, computation of IC50/EC50 values, graphing, and reporting. The automation-assisted IC50/EC50 assay platform can process a whole class of assays with varying assay conditions. In each run, the system can handle up to 32 compounds and up to 10 concentration levels per compound, and it greatly improves IC50/EC50 assay experimental productivity and data processing efficiency. © 2014 Society for Laboratory Automation and Screening.
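The nonlinear regression step in such platforms is typically a four-parameter logistic (4PL) fit. The abstract does not specify the platform's regression model, so the sketch below assumes a 4PL with IC50 as a fit parameter, using illustrative data:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """4PL model: response as a function of concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])    # concentrations (e.g. uM)
resp = np.array([98.0, 95.0, 80.0, 55.0, 30.0, 12.0, 5.0]) # % activity (illustrative)

params, _ = curve_fit(four_pl, conc, resp, p0=[0.0, 100.0, 0.3, 1.0])
print("IC50 ≈ %.3g" % params[2])
```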
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mooers, Blaine H. M.
2016-03-24
Using direct methods starting from random phases, the crystal structure of a 32-base-pair RNA (675 non-H RNA atoms in the asymmetric unit) was determined using only the native diffraction data (resolution limit 1.05 Å) and the computer program SIR2014. The almost three helical turns of the RNA in the asymmetric unit introduced partial or imperfect translational pseudosymmetry (TPS) that modulated the intensities when averaged by the Miller indices but still escaped automated detection. Almost six times as many random phase sets had to be tested on average to reach a correct structure compared with a similar-sized RNA hairpin (27 nucleotides, 580 non-H RNA atoms) without TPS. Lastly, more sensitive methods are needed for the automated detection of partial TPS.
Semi-Automated Identification of Rocks in Images
NASA Technical Reports Server (NTRS)
Bornstein, Benjamin; Castano, Andres; Anderson, Robert
2006-01-01
Rock Identification Toolkit Suite is a computer program that assists users in identifying and characterizing rocks shown in images returned by the Mars Exploration Rover mission. Included in the program are components for automated finding of rocks, interactive adjustment of rock outlines, active contouring of rocks, and automated analysis of shapes in two dimensions. The program assists users in evaluating the surface properties of rocks and soil and reports basic properties of rocks. The program requires either the Mac OS X operating system running on a G4 (or more capable) processor or a Linux operating system running on a Pentium (or more capable) processor, plus at least 128 MB of random-access memory.
A microprocessor-based automation test system for the experiment of the multi-stage compressor
NASA Astrophysics Data System (ADS)
Zhang, Huisheng; Lin, Chongping
1991-08-01
An automated test system controlled by a microprocessor and used in multistage-compressor experiments is described. Based on an analysis of the performance requirements of the compressor experiment, a complete hardware system structure is set up, composed of an IBM PC/XT computer, a large-scale sampled-data system, a traversing mechanism with three axes of motion, scanners, digital instrumentation, and output devices. The structure of the real-time software system is described. Test results show that the system can measure many parameters at the blade-row locations and in the boundary layer under different operating states. The degree of automation and the accuracy of the experiment are increased, and the experimental cost is reduced.
Automated Help System For A Supercomputer
NASA Technical Reports Server (NTRS)
Callas, George P.; Schulbach, Catherine H.; Younkin, Michael
1994-01-01
Expert-system software developed to provide automated system of user-helping displays in supercomputer system at Ames Research Center Advanced Computer Facility. Users located at remote computer terminals connected to supercomputer and each other via gateway computers, local-area networks, telephone lines, and satellite links. Automated help system answers routine user inquiries about how to use services of computer system. Available 24 hours per day and reduces burden on human experts, freeing them to concentrate on helping users with complicated problems.
Computer Programs For Automated Welding System
NASA Technical Reports Server (NTRS)
Agapakis, John E.
1993-01-01
Computer programs developed for use in controlling automated welding system described in MFS-28578. Together with control computer, computer input and output devices and control sensors and actuators, provide flexible capability for planning and implementation of schemes for automated welding of specific workpieces. Developed according to macro- and task-level programming schemes, which increases productivity and consistency by reducing amount of "teaching" of system by technician. System provides for three-dimensional mathematical modeling of workpieces, work cells, robots, and positioners.
Milanese, Gianluca; Eberhard, Matthias; Martini, Katharina; Vittoria De Martini, Ilaria; Frauenfelder, Thomas
2018-04-01
To evaluate whether vessel-suppressed computed tomography (VSCT) can be reliably used for semi-automated volumetric measurements of solid pulmonary nodules, as compared to standard CT (SCT). Ninety-three SCT datasets were processed by dedicated software (ClearRead CT, Riverain Technologies, Miamisburg, OH, USA) that subtracts vessels from the lung parenchyma. Semi-automated volumetric measurements of 65 solid nodules were compared between SCT and VSCT. The measurements were repeated by two readers. For each solid nodule, the volume measured on SCT by Reader 1 and Reader 2 was averaged, and the average volume between readers acted as the standard of reference value. Concordance between measurements was assessed using Lin's Concordance Correlation Coefficient (CCC). Limits of agreement (LoA) between readers and CT datasets were evaluated. Standard of reference nodule volume ranged from 13 to 366 mm³. The mean overestimation between readers was 3 mm³ and 2.9 mm³ on SCT and VSCT, respectively. Semi-automated volumetric measurements on VSCT showed substantial agreement with the standard of reference (Lin's CCC = 0.990 for Reader 1; 0.985 for Reader 2). The upper and lower LoA between readers' measurements were (16.3, -22.4 mm³) and (15.5, -21.4 mm³) for SCT and VSCT, respectively. VSCT datasets are feasible for measurement of solid nodules, showing an almost perfect concordance between readers and with measurements on SCT. Copyright © 2018 Elsevier B.V. All rights reserved.
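Lin's CCC, used above to quantify agreement, has a simple closed form; the following is a minimal NumPy version (the array names in the usage comment are placeholders):

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient:
    CCC = 2*cov(x,y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# e.g. lins_ccc(volumes_sct, volumes_vsct) -> values near 1 indicate agreement
```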
NASA Technical Reports Server (NTRS)
Rea, F. G.; Pittenger, J. L.; Conlon, R. J.; Allen, J. D.
1975-01-01
Techniques developed for identifying launch vehicle system requirements for NASA automated space missions are discussed. Emphasis is placed on the development of computer programs and the investigation of astrionics for OSS missions and Scout. The Earth Orbit Mission Program-1, which performs linear error analysis of launch vehicle dispersions for both vehicle and navigation system factors, is described, along with the Interactive Graphic Orbit Selection program, which allows the user to select orbits that satisfy mission requirements and to evaluate the necessary injection accuracy.
ERIC Educational Resources Information Center
Spielberg, Freya; Kurth, Ann; Reidy, William; McKnight, Teka; Dikobe, Wame; Wilson, Charles
2011-01-01
This article highlights findings from an evaluation that explored the impact of mobile versus clinic-based testing, rapid versus central-lab based testing, incentives for testing, and the use of a computer counseling program to guide counseling and automate evaluation in a mobile program reaching people of color at risk for HIV. The program's…
A Survey of Automated Assessment Approaches for Programming Assignments
ERIC Educational Resources Information Center
Ala-Mutka, Kirsti M.
2005-01-01
Practical programming is one of the basic skills pursued in computer science education. On programming courses, the coursework consists of programming assignments that need to be assessed from different points of view. Since the submitted assignments are executable programs with a formal structure, some features can be assessed automatically. The…
NASA Technical Reports Server (NTRS)
Pokras, V. M.; Yevdokimov, V. P.; Maslov, V. D.
1978-01-01
The structure and potential of the information reference system OZhUR, designed for the automated data processing systems of scientific space vehicles (SV), are considered. The system OZhUR ensures control of the extraction phase of processing with respect to a concrete SV and the exchange of data between phases. The practical application of the system OZhUR is exemplified in the construction of a data processing system for satellites of the Cosmos series. As a result of automating the operations of exchange and control, the volume of manual data preparation is significantly reduced, and individual logs recording the status of data processing are no longer needed. The system OZhUR is included in the automated data processing system Nauka, which is implemented in the PL-1 language on a binary one-address, one-state (BOS OS) electronic computer.
Shah, Pranav; Kerns, Edward; Nguyen, Dac-Trung; Obach, R Scott; Wang, Amy Q; Zakharov, Alexey; McKew, John; Simeonov, Anton; Hop, Cornelis E C A; Xu, Xin
2016-10-01
Advancement of in silico tools would be enabled by the availability of metabolic reaction rate and intrinsic clearance (CLint) data for a structurally diverse compound set measured with specific metabolic enzymes. Our goal is to measure CLint for a large set of compounds with each major human cytochrome P450 (P450) isozyme. To achieve this goal, it is of utmost importance to develop an automated, robust, sensitive, high-throughput metabolic stability assay that can efficiently handle large compound sets. The substrate depletion method [the in vitro half-life (t1/2) method] was chosen to determine CLint. The assay (384-well format) consisted of three parts: 1) a robotic system for incubation and sample cleanup; 2) two different integrated ultraperformance liquid chromatography/mass spectrometry (UPLC/MS) platforms to determine the percentage of parent compound remaining; and 3) an automated data analysis system. The CYP3A4 assay was evaluated using two long-t1/2 compounds, carbamazepine and antipyrine (t1/2 > 30 minutes); one moderate-t1/2 compound, ketoconazole (10 < t1/2 < 30 minutes); and two short-t1/2 compounds, loperamide and buspirone (t1/2 < 10 minutes). Interday and intraday precision and accuracy of the assay were within the acceptable range (∼12%) for the linear range observed. Using this assay, CYP3A4 CLint and t1/2 values were measured for more than 3000 compounds. This high-throughput, automated, and robust assay allows rapid metabolic stability screening of large compound sets and enables advanced computational modeling for individual human P450 isozymes. U.S. Government work not protected by U.S. copyright.
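The substrate-depletion calculation itself is compact: fit ln(% remaining) versus time, take t1/2 from the first-order rate constant, and scale to CLint. The sketch below uses illustrative data and an assumed scaling factor (mL of incubation per nmol of P450); the actual factor depends on incubation conditions:

```python
import numpy as np

time_min = np.array([0, 5, 10, 20, 30])          # incubation times (min)
pct_remaining = np.array([100, 78, 60, 37, 22])  # illustrative UPLC/MS data

slope, _ = np.polyfit(time_min, np.log(pct_remaining), 1)
k = -slope                       # first-order depletion rate constant (1/min)
t_half = np.log(2) / k           # in vitro half-life (min)
v_per_enzyme = 0.5               # assumed: mL incubation per nmol P450
cl_int = k * v_per_enzyme        # intrinsic clearance, mL/min/nmol P450

print(f"t1/2 = {t_half:.1f} min, CLint = {cl_int:.3f} mL/min/nmol")
```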
Design and real-time control of a robotic system for fracture manipulation.
Dagnino, G; Georgilas, I; Tarassoli, P; Atkins, R; Dogramadzi, S
2015-08-01
This paper presents the design, development, and control of a new robotic system for fracture manipulation. The objective is to improve the precision, ergonomics, and safety of the traditional surgical procedure to treat joint fractures. The achievements in this direction are reported here, including the design, the real-time control architecture, and the evaluation of the new robotic manipulator system. The robotic manipulator is a 6-DOF parallel robot with struts implemented as linear actuators. The high-level controller implements a host-target structure composed of a host computer (PC), a real-time controller, and an FPGA. A graphical user interface was designed to allow the surgeon to comfortably automate and monitor the robotic system. The real-time controller guarantees the determinism of the control algorithms, adding an extra level of safety for the robotic automation. The system's positioning accuracy and repeatability have been demonstrated, showing a maximum positioning RMSE of 1.18 ± 1.14 mm (translations) and 1.85 ± 1.54° (rotations).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ouyang, Lizhi
Advanced Ultra-Supercritical (AUSC) boilers require materials that can operate in a corrosive environment at temperatures and pressures as high as 760°C (1400°F) and 5000 psi, respectively, while maintaining good ductility at low temperature. We developed automated simulation software tools to enable fast, large-scale screening studies of candidate designs. While direct evaluation of creep rupture strength and ductility is currently not feasible, properties such as energy, elastic constants, surface energy, interface energy, and stacking fault energy can be used to assess relative ductility and creep strength. We implemented software to automate the complex calculations and minimize human input in the tedious screening studies, which involve model structure generation, settings for first-principles calculations, and results analysis and reporting. The software developed in the project, and the library of computed mechanical properties of phases found in ferritic steels, many of them complex solid solutions estimated for the first time, will certainly help the development of low-cost ferritic steel for AUSC.
Steiding, Christian; Kolditz, Daniel; Kalender, Willi A
2014-03-01
Thousands of cone-beam computed tomography (CBCT) scanners for vascular, maxillofacial, neurological, and body imaging are in clinical use today, but there is no consensus on uniform acceptance and constancy testing for image quality (IQ) and dose yet. The authors developed a quality assurance (QA) framework for fully automated and time-efficient performance evaluation of these systems. In addition, the dependence of objective Fourier-based IQ metrics on direction and position in 3D volumes was investigated for CBCT. The authors designed a dedicated QA phantom 10 cm in length consisting of five compartments, each with a diameter of 10 cm, and an optional extension ring 16 cm in diameter. A homogeneous section of water-equivalent material allows measuring CT value accuracy, image noise and uniformity, and multidimensional global and local noise power spectra (NPS). For the quantitative determination of 3D high-contrast spatial resolution, the modulation transfer function (MTF) of centrally and peripherally positioned aluminum spheres was computed from edge profiles. Additional in-plane and axial resolution patterns were used to assess resolution qualitatively. The characterization of low-contrast detectability as well as CT value linearity and artifact behavior was tested by utilizing sections with soft-tissue-equivalent and metallic inserts. For an automated QA procedure, a phantom detection algorithm was implemented. All tests used in the dedicated QA program were initially verified in simulation studies and experimentally confirmed on a clinical dental CBCT system. The automated IQ evaluation of volume data sets of the dental CBCT system was achieved with the proposed phantom requiring only one scan for the determination of all desired parameters. Typically, less than 5 min were needed for phantom set-up, scanning, and data analysis. Quantitative evaluation of system performance over time by comparison to previous examinations was also verified. The maximum percentage interscan variation of repeated measurements was less than 4% and 1.7% on average for all investigated quality criteria. The NPS-based image noise differed by less than 5% from the conventional standard deviation approach and spatially selective 10% MTF values were well comparable to subjective results obtained with 3D resolution pattern. Determining only transverse spatial resolution and global noise behavior in the central field of measurement turned out to be insufficient. The proposed framework transfers QA routines employed in conventional CT in an advanced version to CBCT for fully automated and time-efficient evaluation of technical equipment. With the modular phantom design, a routine as well as an expert version for assessing IQ is provided. The QA program can be used for arbitrary CT units to evaluate 3D imaging characteristics automatically.
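As an illustration of the Fourier-based IQ metrics mentioned above, the following sketch estimates a 2D noise power spectrum from ROIs taken in the homogeneous phantom section, using the usual ensemble average of squared FFTs; detrending and normalization are simplified relative to a full QA implementation:

```python
import numpy as np

def noise_power_spectrum(rois, pixel_spacing_mm):
    """Estimate the 2D NPS as the ensemble average of squared FFTs of
    mean-subtracted, equally sized ROIs from a homogeneous section."""
    acc, n = None, 0
    for roi in rois:
        detrended = roi - roi.mean()                  # remove the DC component
        spectrum = np.abs(np.fft.fft2(detrended)) ** 2
        acc = spectrum if acc is None else acc + spectrum
        n += 1
    ny, nx = acc.shape
    norm = (pixel_spacing_mm ** 2) / (nx * ny)        # scale to HU^2 * mm^2 units
    return np.fft.fftshift(acc / n * norm)

# e.g. nps = noise_power_spectrum(list_of_64x64_rois, pixel_spacing_mm=0.25)
```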
Benefits Analysis of Multi-Center Dynamic Weather Routes
NASA Technical Reports Server (NTRS)
Sheth, Kapil; McNally, David; Morando, Alexander; Clymer, Alexis; Lock, Jennifer; Petersen, Julien
2014-01-01
Dynamic weather routes are flight plan corrections that can provide airborne flights more than a user-specified number of minutes of flying-time savings compared with their current flight plans. These routes are computed from the aircraft's current location to a flight plan fix downstream (within a predefined limit region) while avoiding forecasted convective weather regions. The Dynamic Weather Routes automation has been running continuously with live air traffic data in a field evaluation at the American Airlines Integrated Operations Center in Fort Worth, TX since July 31, 2012, where flights within the Fort Worth Air Route Traffic Control Center are evaluated for time savings. This paper extends the methodology to all Centers in the United States and presents a benefits analysis of the Dynamic Weather Routes automation as if it were implemented in multiple airspace Centers individually and concurrently. The current computation of dynamic weather routes requires a limit rectangle so that a downstream capture fix can be selected, preventing very large route changes spanning several Centers. In this paper, first, a method of computing a limit polygon (as opposed to the rectangle used for Fort Worth Center) is described for each of the 20 Centers in the National Airspace System. The Future ATM Concepts Evaluation Tool, a nationwide simulation and analysis tool, is used for this purpose. After a comparison of results with the Center-based Dynamic Weather Routes automation in Fort Worth Center, results are presented for 11 Centers in the contiguous United States, the Centers generally most impacted by convective weather. A breakdown of individual Center and airline savings is presented, and the results indicate that an overall average savings of about 10 minutes of flying time is obtained per flight.
USSR Report: Cybernetics, Computers and Automation Technology. No. 69.
1983-05-06
computers in multiprocessor and multistation design, control and scientific research automation systems. The results of comparing the efficiency of... [Podvizhnaya, Scientific Research Institute of Control Computers, Severodonetsk] [Text] The most significant change in the design of the SM-2M compared to... UPRAVLYAYUSHCHIYE SISTEMY I MASHINY, Nov-Dec 82) 95 APPLICATIONS Kiev Automated Control System, Design Features and Prospects for Development (V. A
Development and verification testing of automation and robotics for assembly of space structures
NASA Technical Reports Server (NTRS)
Rhodes, Marvin D.; Will, Ralph W.; Quach, Cuong C.
1993-01-01
A program was initiated within the past several years to develop operational procedures for automated assembly of truss structures suitable for large-aperture antennas. The assembly operations require the use of a robotic manipulator and are based on the principle of supervised autonomy to minimize crew resources. A hardware testbed was established to support development and evaluation testing. A brute-force automation approach was used to develop the baseline assembly hardware and software techniques. As the system matured and an operation was proven, upgrades were incorporated and assessed against the baseline test results. This paper summarizes the developmental phases of the program, the results of several assembly tests, the current status, and a series of proposed developments for additional hardware and software control capability. No problems that would preclude automated in-space assembly of truss structures have been encountered. The current system was developed at a breadboard level, and continued development at an enhanced level is warranted.
Loss Factor Estimation Using the Impulse Response Decay Method on a Stiffened Structure
NASA Technical Reports Server (NTRS)
Cabell, Randolph; Schiller, Noah; Allen, Albert; Moeller, Mark
2009-01-01
High-frequency vibroacoustic modeling is typically performed using energy-based techniques such as Statistical Energy Analysis (SEA). Energy models require an estimate of the internal damping loss factor. Unfortunately, the loss factor is difficult to estimate analytically, and experimental methods such as the power injection method can require extensive measurements over the structure of interest. This paper discusses the implications of estimating damping loss factors using the impulse response decay method (IRDM) from a limited set of response measurements. An automated procedure for implementing IRDM is described and then evaluated using data from a finite element model of a stiffened, curved panel. Estimated loss factors are compared with loss factors computed using a power injection method and a manual curve fit. The paper discusses the sensitivity of the IRDM loss factor estimates to damping of connected subsystems and the number and location of points in the measurement ensemble.
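For reference, the decay-rate form of the IRDM estimate is compact: fit the initial slope of the band-filtered impulse-response envelope in dB and convert to a loss factor via eta = DR / (27.3 f), where DR is the decay rate in dB/s and f is the band centre frequency in Hz (the standard relation following from eta = 2.2 / (f T60)). A minimal sketch, with the envelope computation assumed done elsewhere:

```python
import numpy as np

def loss_factor_irdm(envelope_db, times_s, band_centre_hz):
    """Estimate the loss factor from the initial decay of a band-filtered
    impulse-response envelope given in dB."""
    slope, _ = np.polyfit(times_s, envelope_db, 1)   # dB per second (negative)
    decay_rate = -slope
    return decay_rate / (27.3 * band_centre_hz)

# e.g. eta = loss_factor_irdm(env_db[:200], t[:200], band_centre_hz=500.0)
```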
NASA Astrophysics Data System (ADS)
Christopher, Mark; Tang, Li; Fingert, John H.; Scheetz, Todd E.; Abramoff, Michael D.
2014-03-01
Evaluation of optic nerve head (ONH) structure is a commonly used clinical technique for both diagnosis and monitoring of glaucoma. Glaucoma is associated with characteristic changes in the structure of the ONH. We present a method for computationally identifying ONH structural features using both imaging and genetic data from a large cohort of participants at risk for primary open angle glaucoma (POAG). Using 1054 participants from the Ocular Hypertension Treatment Study, ONH structure was measured by application of a stereo correspondence algorithm to stereo fundus images. In addition, the genotypes of several known POAG genetic risk factors were considered for each participant. ONH structural features were discovered using both a principal component analysis approach to identify the major modes of variance within structural measurements and a linear discriminant analysis approach to capture the relationship between genetic risk factors and ONH structure. The identified ONH structural features were evaluated based on the strength of their associations with genotype and development of POAG by the end of the OHTS study. ONH structural features with strong associations with genotype were identified for each of the genetic loci considered. Several identified ONH structural features were significantly associated (p < 0.05) with the development of POAG after Bonferroni correction. Further, incorporation of genetic risk status was found to substantially increase performance of early POAG prediction. These results suggest incorporating both imaging and genetic data into ONH structural modeling significantly improves the ability to explain POAG-related changes to ONH structure.
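The two feature-discovery routes, PCA for the modes of structural variance and LDA for a genotype-linked axis, can be sketched with scikit-learn. The arrays below are synthetic stand-ins for the stereo-derived ONH measurements and genotype labels; the study's actual pipeline is more involved:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
onh = rng.normal(size=(1054, 64))          # stand-in: per-subject ONH structure measurements
genotype = rng.integers(0, 2, size=1054)   # stand-in: binary risk-genotype labels

# Major modes of structural variance (PCA), then the genotype-linked axis (LDA)
pca = PCA(n_components=10).fit(onh)
scores = pca.transform(onh)                # per-subject mode scores

lda = LinearDiscriminantAnalysis(n_components=1).fit(scores, genotype)
genetic_axis = lda.transform(scores)       # ONH feature most associated with genotype
print(genetic_axis.shape)                  # (1054, 1)
```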
1987-06-01
head. The electrical connection points are embedded in silicone sealing compound. The photo elements are varnished; the mirrors are chromium-plated metal... control of barrage walls and retaining dams using reversible pendulums, the suspension points of which are located in boreholes deep under the structure in... rock layers that can very probably be considered as invariable relation points. A measuring device installed in the foundation area of a barrage wall
An Evaluation of the TRIPS Computer System (Extended Technical Report)
2008-07-08
Mario Marino Nitya Ranganathan Behnam Robatmili Aaron Smith James Burrill Stephen W. Keckler Doug Burger Kathryn S. McKinley Computer Architecture and... Marino, Nitya Ranganathan, Behnam Robatmili, Aaron Smith, James Burrill, Stephen W. Keckler, Doug Burger, Kathryn S. McKinley; ASPLOS 2009, Washington DC... aggressively register allocate more memory accesses by using programmer knowledge about pointer aliasing, much of which may be automated. They also
NASA Technical Reports Server (NTRS)
Haley, D. C.; Almand, B. J.; Thomas, M. M.; Krauze, L. D.; Gremban, K. D.; Sanborn, J. C.; Kelly, J. H.; Depkovich, T. M.
1984-01-01
A generic computer simulation for manipulator systems (ROBSIM) was implemented, and the specific technologies necessary to increase the role of automation in various missions were developed. The specific items developed are: (1) the capability to define a manipulator system consisting of multiple arms, load objects, and an environment; (2) the capability for kinematic analysis, requirements analysis, and response simulation of manipulator motion; (3) postprocessing options such as graphic replay of simulated motion and manipulator parameter plotting; (4) investigation and simulation of various control methods, including manual force/torque and active compliance control; (5) evaluation and implementation of three obstacle avoidance methods; (6) video simulation and edge detection; and (7) software simulation validation.
ERIC Educational Resources Information Center
Atwood, Nancy K.
School districts have begun examining the feasibility of, and in some cases are developing and implementing automated systems for, managing and evaluating instructional programs. This paper describes and analyzes the issues and problems that emerged over the course of three projects--a large suburban school in the West, a consortium of five small…
Automated validation of a computer operating system
NASA Technical Reports Server (NTRS)
Dervage, M. M.; Milberg, B. A.
1970-01-01
Programs apply selected input/output loads to complex computer operating system and measure performance of that system under such loads. Technique lends itself to checkout of computer software designed to monitor automated complex industrial systems.
Reeves, Anthony P.; Xie, Yiting; Liu, Shuang
2017-01-01
With the advent of fully automated image analysis and modern machine learning methods, there is a need for very large image datasets having documented segmentations for both computer algorithm training and evaluation. This paper presents a method and implementation for facilitating such datasets that addresses the critical issue of size scaling for algorithm validation and evaluation; current evaluation methods that are usually used in academic studies do not scale to large datasets. The method includes protocols for the documentation of many regions in very large image datasets; the documentation may be incrementally updated with new image data and improved algorithm outcomes. The method has been used for five years in the context of chest health biomarkers from low-dose chest CT images that are now being used with increasing frequency in lung cancer screening practice. The lung scans are segmented into over 100 different anatomical regions, and the method has been applied to a dataset of over 20,000 chest CT images. Using this framework, computer algorithms have been developed to achieve over 90% acceptable image segmentation on the complete dataset. PMID:28612037
Computer-Aided Experiment Planning toward Causal Discovery in Neuroscience.
Matiasz, Nicholas J; Wood, Justin; Wang, Wei; Silva, Alcino J; Hsu, William
2017-01-01
Computers help neuroscientists to analyze experimental results by automating the application of statistics; however, computer-aided experiment planning is far less common, due to a lack of similar quantitative formalisms for systematically assessing evidence and uncertainty. While ontologies and other Semantic Web resources help neuroscientists to assimilate required domain knowledge, experiment planning requires not only ontological but also epistemological (e.g., methodological) information regarding how knowledge was obtained. Here, we outline how epistemological principles and graphical representations of causality can be used to formalize experiment planning toward causal discovery. We outline two complementary approaches to experiment planning: one that quantifies evidence per the principles of convergence and consistency, and another that quantifies uncertainty using logical representations of constraints on causal structure. These approaches operationalize experiment planning as the search for an experiment that either maximizes evidence or minimizes uncertainty. Despite work in laboratory automation, humans must still plan experiments and will likely continue to do so for some time. There is thus a great need for experiment-planning frameworks that are not only amenable to machine computation but also useful as aids in human reasoning.
Juan-Albarracín, Javier; Fuster-Garcia, Elies; Manjón, José V.; Robles, Montserrat; Aparici, F.; Martí-Bonmatí, L.; García-Gómez, Juan M.
2015-01-01
Automatic brain tumour segmentation has become a key component for the future of brain tumour treatment. Currently, most brain tumour segmentation approaches arise from the supervised learning standpoint, which requires a labelled training dataset from which to infer the models of the classes. The performance of these models is directly determined by the size and quality of the training corpus, whose retrieval becomes a tedious and time-consuming task. On the other hand, unsupervised approaches avoid these limitations but often do not reach results comparable to those of supervised methods. In this sense, we propose an automated unsupervised method for brain tumour segmentation based on anatomical Magnetic Resonance (MR) images. Four unsupervised classification algorithms, grouped by their structured or non-structured condition, were evaluated within our pipeline. As non-structured algorithms we evaluated K-means, Fuzzy K-means, and the Gaussian Mixture Model (GMM), whereas as structured classification algorithms we evaluated the Gaussian Hidden Markov Random Field (GHMRF). An automated postprocess based on a statistical approach supported by tissue probability maps is proposed to automatically identify the tumour classes after segmentation. We evaluated our brain tumour segmentation method with the public BRAin Tumor Segmentation (BRATS) 2013 Test and Leaderboard datasets. Our approach based on the GMM model improves on the results obtained by most of the supervised methods evaluated with the Leaderboard set and reaches the second position in the ranking. Our variant based on the GHMRF achieves the first position in the Test ranking of the unsupervised approaches and the seventh position in the general Test ranking, which confirms the method as a viable alternative for brain tumour segmentation. PMID:25978453
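A minimal sketch of the GMM variant, clustering per-voxel intensity features with scikit-learn; real pipelines add preprocessing, the spatial priors of the GHMRF variant, and the tissue-probability-map postprocess described above. The voxel array is a synthetic stand-in:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
voxels = rng.normal(size=(5000, 4))   # stand-in for (T1, T1c, T2, FLAIR) intensities

gmm = GaussianMixture(n_components=5, covariance_type="full", random_state=0)
labels = gmm.fit_predict(voxels)      # one cluster per tissue/tumour compartment

# A postprocess (e.g. comparing cluster statistics with tissue probability
# maps) would then decide which clusters correspond to tumour classes.
print(np.bincount(labels))
```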
Pre-operative segmentation of neck CT datasets for the planning of neck dissections
NASA Astrophysics Data System (ADS)
Cordes, Jeanette; Dornheim, Jana; Preim, Bernhard; Hertel, Ilka; Strauss, Gero
2006-03-01
For the pre-operative segmentation of CT neck datasets, we developed the software assistant NeckVision. The relevant anatomical structures for neck dissection planning can be segmented, and the resulting patient-specific 3D models are then visualized in another software system for intervention planning. As a first step, we examined the appropriateness of elementary segmentation techniques based on gray values and contour information for extracting the structures in the neck region from CT data. Region growing, interactive watershed transformation, and live-wire are employed for segmentation of the different target structures. We also examined which of the segmentation tasks can be automated. Based on this analysis, the software assistant NeckVision was developed to optimally support the clinicians' image analysis workflow. The usability of NeckVision was tested in a first evaluation with four otorhinolaryngologists from the University Hospital of Leipzig, four computer scientists from the University of Magdeburg, and two laymen in both fields.
Faster Evolution of More Multifunctional Logic Circuits
NASA Technical Reports Server (NTRS)
Stoica, Adrian; Zebulum, Ricardo
2005-01-01
A modification in a method of automated evolutionary synthesis of voltage-controlled multifunctional logic circuits makes it possible to synthesize more circuits in less time. Prior to the modification, the computations for synthesizing a four-function logic circuit by this method took about 10 hours. Using the method as modified, it is possible to synthesize a six-function circuit in less than half an hour. The concepts of automated evolutionary synthesis and voltage-controlled multifunctional logic circuits were described in a number of prior NASA Tech Briefs articles. To recapitulate: A circuit is designed to perform one of several different logic functions, depending on the value of an applied control voltage. The circuit design is synthesized following an automated evolutionary approach that is so named because it is modeled partly after the repetitive trial-and-error process of biological evolution. In this process, random populations of integer strings that encode electronic circuits play a role analogous to that of chromosomes. An evolved circuit is tested by computational simulation (prior to testing in real hardware to verify a final design). Then, in a fitness-evaluation step, responses of the circuit are compared with specifications of target responses and circuits are ranked according to how close they come to satisfying specifications. The results of the evaluation provide guidance for refining designs through further iteration.
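The evolutionary loop described above can be sketched in a few lines. The fitness function below is a stand-in for the circuit simulation and specification-matching step (the real system scores simulated circuit responses against target behaviours); the population size, gene alphabet, and mutation scheme are illustrative:

```python
import random

GENES, POP, GENERATIONS = 32, 40, 200

def fitness(chromosome):
    # Stand-in for circuit simulation: reward chromosomes close to an
    # arbitrary target encoding (higher, i.e. closer to 0, is better).
    target = [g % 4 for g in range(GENES)]
    return -sum(abs(a - b) for a, b in zip(chromosome, target))

# Random initial population of integer strings ("chromosomes")
population = [[random.randrange(8) for _ in range(GENES)] for _ in range(POP)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)   # rank by fitness
    parents = population[: POP // 2]             # keep the best half
    children = []
    for p in parents:
        child = p[:]
        child[random.randrange(GENES)] = random.randrange(8)  # point mutation
        children.append(child)
    population = parents + children

print("best fitness:", max(map(fitness, population)))
```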
Assessing Creative Problem-Solving with Automated Text Grading
ERIC Educational Resources Information Center
Wang, Hao-Chuan; Chang, Chun-Yen; Li, Tsai-Yen
2008-01-01
The work aims to improve the assessment of creative problem-solving in science education by employing language technologies and computational-statistical machine learning methods to grade students' natural language responses automatically. To evaluate constructs like creative problem-solving with validity, open-ended questions that elicit…
Software for Collaborative Engineering of Launch Rockets
NASA Technical Reports Server (NTRS)
Stanley, Thomas Troy
2003-01-01
The Rocket Evaluation and Cost Integration for Propulsion and Engineering (RECIPE) software enables collaborative computing with automated exchange of information in the design and analysis of launch rockets and other complex systems. RECIPE can interact with and incorporate a variety of programs, including legacy codes, that model aspects of a system from the perspectives of different technological disciplines (e.g., aerodynamics, structures, propulsion, trajectory, aeroheating, controls, and operations) and that are used by different engineers on different computers running different operating systems. RECIPE consists mainly of (1) ISCRM, a file-transfer subprogram that makes it possible for legacy codes executed in their original operating systems on their original computers to exchange data, and (2) CONES, an easy-to-use file-wrapper subprogram that enables the integration of legacy codes. RECIPE provides a tightly integrated conceptual framework that emphasizes connectivity among the programs used by the collaborators, linking these programs in a manner that provides some configuration control while facilitating collaborative engineering tradeoff studies, including design-to-cost studies. In comparison with prior collaborative-engineering schemes, one based on RECIPE enables fewer engineers to do more in less time.
NASA Technical Reports Server (NTRS)
1975-01-01
A system is presented which processes FORTRAN-based software systems to surface potential problems before they become execution malfunctions. The system complements the diagnostic capabilities of compilers, loaders, and execution monitors rather than duplicating these functions, and it emphasizes frequent sources of FORTRAN problems that require inordinate manual effort to identify. The principal value of the system is extracting small sections of unusual code from the bulk of normal sequences. Code structures likely to cause immediate or future problems are brought to the user's attention. These messages stimulate timely corrective action on solid errors and promote identification of 'tricky' code. Corrective action may require recoding or simply extending the software documentation to explain the unusual technique.
Distributed computing for macromolecular crystallography
Krissinel, Evgeny; Uski, Ville; Lebedev, Andrey; Ballard, Charles
2018-01-01
Modern crystallographic computing is characterized by the growing role of automated structure-solution pipelines, which represent complex expert systems utilizing a number of program components, decision makers and databases. They also require considerable computational resources and regular database maintenance, which is increasingly more difficult to provide at the level of individual desktop-based CCP4 setups. On the other hand, there is a significant growth in data processed in the field, which brings up the issue of centralized facilities for keeping both the data collected and structure-solution projects. The paradigm of distributed computing and data management offers a convenient approach to tackling these problems, which has become more attractive in recent years owing to the popularity of mobile devices such as tablets and ultra-portable laptops. In this article, an overview is given of developments by CCP4 aimed at bringing distributed crystallographic computations to a wide crystallographic community. PMID:29533240
Rosenblatt, Alana J; Scrivani, Peter V; Boisclair, Yves R; Reeves, Anthony P; Ramos-Nieves, Jose M; Xie, Yiting; Erb, Hollis N
2017-10-01
Computed tomography (CT) is a suitable tool for measuring body fat, since it is non-destructive and can be used to differentiate metabolically active visceral fat from total body fat. Whole body analysis of body fat is likely to be more accurate than single CT slice estimates of body fat. The aim of this study was to assess the agreement between semi-automated computer analysis of whole body volumetric CT data and conventional proximate (chemical) analysis of body fat in lambs. Data were collected prospectively from 12 lambs that underwent duplicate whole body CT, followed by slaughter and carcass analysis by dissection and chemical analysis. Agreement between methods for quantification of total and visceral fat was assessed by Bland-Altman plot analysis. The repeatability of CT was assessed for these measures using the mean difference of duplicated measures. When compared to chemical analysis, CT systematically underestimated total and visceral fat contents by more than 10% of the mean fat weight. Therefore, carcass analysis and semi-automated CT computer measurements were not interchangeable for quantifying body fat content without the use of a correction factor. CT acquisition was repeatable, with a mean difference of repeated measures being close to zero. Therefore, uncorrected whole body CT might have an application for assessment of relative changes in fat content, especially in growing lambs. Copyright © 2017 Elsevier Ltd. All rights reserved.
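The Bland-Altman analysis used above reduces to a bias and 95% limits of agreement computed from the paired differences; a minimal sketch (the array names in the usage comment are placeholders):

```python
import numpy as np

def bland_altman(a, b):
    """Return the bias (mean difference) and 95% limits of agreement
    between two paired measurement arrays."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)

# e.g. bias, (lo, hi) = bland_altman(fat_ct_kg, fat_chemical_kg)
```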
Closed-form solution of decomposable stochastic models
NASA Technical Reports Server (NTRS)
Sjogren, Jon A.
1990-01-01
Markov and semi-Markov processes are increasingly being used in the modeling of complex reconfigurable systems (fault tolerant computers). The estimation of the reliability (or some measure of performance) of the system reduces to solving the process for its state probabilities. Such a model may exhibit numerous states and complicated transition distributions, contributing to an expensive and numerically delicate solution procedure. Thus, when a system exhibits a decomposition property, either structurally (autonomous subsystems), or behaviorally (component failure versus reconfiguration), it is desirable to exploit this decomposition in the reliability calculation. In interesting cases there can be failure states which arise from non-failure states of the subsystems. Equations are presented which allow the computation of failure probabilities of the total (combined) model without requiring a complete solution of the combined model. This material is presented within the context of closed-form functional representation of probabilities as utilized in the Symbolic Hierarchical Automated Reliability and Performance Evaluator (SHARPE) tool. The techniques adopted enable one to compute such probability functions for a much wider class of systems at a reduced computational cost. Several examples show how the method is used, especially in enhancing the versatility of the SHARPE tool.
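To illustrate the cost argument on the simplest possible case: under a strong independence assumption, two subsystem Markov models can be solved separately and their failure probabilities combined, avoiding the product-space model entirely. SHARPE's closed-form machinery is far more general; this sketch only shows the decomposition idea:

```python
import numpy as np
from scipy.linalg import expm

def failure_prob(Q, p0, fail_states, t):
    """Transient solution p(t) = p0 @ expm(Q t); sum the mass in failure states."""
    pt = p0 @ expm(Q * t)
    return pt[fail_states].sum()

# Subsystem A: working -> failed at rate 1e-3 per hour
QA = np.array([[-1e-3, 1e-3], [0.0, 0.0]])
# Subsystem B: working -> failed at rate 5e-4 per hour
QB = np.array([[-5e-4, 5e-4], [0.0, 0.0]])
p0 = np.array([1.0, 0.0])

t = 1000.0
fa = failure_prob(QA, p0, [1], t)
fb = failure_prob(QB, p0, [1], t)
# Combined system fails if either independent subsystem fails
print("system failure probability:", fa + fb - fa * fb)
```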
Detection of lobular structures in normal breast tissue.
Apou, Grégory; Schaadt, Nadine S; Naegel, Benoît; Forestier, Germain; Schönmeyer, Ralf; Feuerhake, Friedrich; Wemmert, Cédric; Grote, Anne
2016-07-01
Ongoing research into inflammatory conditions raises an increasing need to evaluate immune cells in histological sections in biologically relevant regions of interest (ROIs). Herein, we compare different approaches to automatically detecting lobular structures in human normal breast tissue in digitized whole slide images (WSIs). This automation is required to perform objective and consistent quantitative studies on large data sets. In normal breast tissue from nine healthy patients immunohistochemically stained for different markers, we evaluated and compared three image analysis methods for automatically detecting lobular structures in WSIs: (1) a bottom-up approach using cell-based data for subsequent tissue-level classification, (2) a top-down method starting with texture classification at the tissue level followed by analysis of cell densities in specific ROIs, and (3) direct texture classification using deep learning technology. All three methods result in comparable overall quality, allowing automated detection of lobular structures with minor advantages in sensitivity (approach 3), specificity (approach 2), or processing time (approach 1). Combining the outputs of the approaches further improved the precision. Different approaches to automated ROI detection are feasible and should be selected according to the individual needs of biomarker research. Additionally, detected ROIs could be used as a basis for quantifying immune infiltration in lobular structures. Copyright © 2016 Elsevier Ltd. All rights reserved.
Stella, Stefano; Italia, Leonardo; Geremia, Giulia; Rosa, Isabella; Ancona, Francesco; Marini, Claudia; Capogrosso, Cristina; Giglio, Manuela; Montorfano, Matteo; Latib, Azeem; Margonato, Alberto; Colombo, Antonio; Agricola, Eustachio
2018-02-06
A 3D transoesophageal echocardiography (3D-TOE) reconstruction tool has recently been introduced. The system automatically configures a geometric model of the aortic root and performs quantitative analysis of these structures. We compared measurements of the aortic annulus (AA) obtained by semi-automated 3D-TOE quantitative software and by manual analysis against multislice computed tomography (MSCT) measurements. One hundred and seventy-five patients (mean age 81.3 ± 6.3 years, 77 men) who underwent both MSCT and 3D-TOE for annulus assessment before transcatheter aortic valve implantation were analysed. Hypothetical prosthetic valve sizing was evaluated for the 3D manual and semi-automated measurements using the manufacturer-recommended CT-based sizing algorithm as the gold standard. Good correlation between both 3D-TOE methods and MSCT measurements was found, but the semi-automated analysis demonstrated slightly better correlations for AA major diameter (r = 0.89), perimeter (r = 0.89), and area (r = 0.85) (all P < 0.0001) than the manual one. Both 3D methods underestimated the MSCT measurements, but the semi-automated measurements showed narrower limits of agreement and less bias than the manual measurements for most AA parameters. On average, 3D-TOE semi-automated major diameter, area, and perimeter underestimated the respective MSCT measurements by 7.4%, 3.5%, and 4.4%, respectively, whereas the minor diameter was overestimated by 0.3%. Moderate agreement for valve sizing was found for both 3D-TOE techniques: kappa 0.5 for both semi-automated and manual analysis. Interobserver and intraobserver agreement for the AA measurements was excellent for both techniques (intraclass correlation coefficients >0.80 for all parameters). The 3D-TOE semi-automated analysis of AA is feasible and reliable and can be used in clinical practice as an alternative to MSCT for AA assessment. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author(s) 2018. For permissions, please email: journals.permissions@oup.com.
Pilot factors guidelines for the operational inspection of navigation systems
NASA Technical Reports Server (NTRS)
Sadler, J. F.; Boucek, G. P.
1988-01-01
A computerized, human-engineered inspection technique is developed for use by FAA inspectors in evaluating the pilot factors aspects of aircraft navigation systems. The short title for this project is Nav Handbook. A menu-driven checklist, computer program, and database (Human Factors Design Criteria) were developed and merged to form a self-contained, portable human factors inspection checklist tool for use in a laboratory or field setting. The automated checklist is tailored for general aviation navigation systems and can be expanded for use with other aircraft systems, transports, or military aircraft. The Nav Handbook inspection concept was demonstrated using a laptop computer and an Omega/VLF CDU. The program generates standardized inspection reports. Automated checklists for LORAN-C and RNAV were also developed. A Nav Handbook User's Guide is included.
NASA Technical Reports Server (NTRS)
Newcomb, J. S.
1975-01-01
The present paper describes an automated system for measuring stellar proper motions on the basis of information contained in photographic plates. In this system, the images on a star plate are digitized by a scanning microdensitometer using light from a He-Ne gas laser, and a special-purpose computer arranges the measurements in computer-compatible form on magnetic tape. The scanning and image-reconstruction processes are briefly outlined, and the image-evaluation techniques are discussed. It is shown that the present system has been especially successful in measuring the proper motions of low-luminosity stars, including 119 stars with less than 1/10,000 of the solar bolometric luminosity. Plans for measurements of high-density Milky Way star plates are noted.
SciSpark's SRDD : A Scientific Resilient Distributed Dataset for Multidimensional Data
NASA Astrophysics Data System (ADS)
Palamuttam, R. S.; Wilson, B. D.; Mogrovejo, R. M.; Whitehall, K. D.; Mattmann, C. A.; McGibbney, L. J.; Ramirez, P.
2015-12-01
Remote sensing data and climate model output are multi-dimensional arrays of massive sizes locked away in heterogeneous file formats (HDF5/4, NetCDF 3/4) and metadata models (HDF-EOS, CF) making it difficult to perform multi-stage, iterative science processing since each stage requires writing and reading data to and from disk. We have developed SciSpark, a robust Big Data framework, that extends Apache™ Spark for scaling scientific computations. Apache Spark improves the map-reduce implementation in Apache™ Hadoop for parallel computing on a cluster, by emphasizing in-memory computation, "spilling" to disk only as needed, and relying on lazy evaluation. Central to Spark is the Resilient Distributed Dataset (RDD), an in-memory distributed data structure that extends the functional paradigm provided by the Scala programming language. However, RDDs are ideal for tabular or unstructured data, and not for highly dimensional data. The SciSpark project introduces the Scientific Resilient Distributed Dataset (sRDD), a distributed-computing array structure which supports iterative scientific algorithms for multidimensional data. SciSpark processes data stored in NetCDF and HDF files by partitioning them across time or space and distributing the partitions among a cluster of compute nodes. We show usability and extensibility of SciSpark by implementing distributed algorithms for geospatial operations on large collections of multi-dimensional grids. In particular we address the problem of scaling an automated method for finding Mesoscale Convective Complexes. SciSpark provides a tensor interface to support the pluggability of different matrix libraries. We evaluate performance of the various matrix libraries in distributed pipelines, such as Nd4j™ and Breeze™. We detail the architecture and design of SciSpark, our efforts to integrate climate science algorithms, parallel ingest and partitioning (sharding) of A-Train satellite observations from model grids. These solutions are encompassed in SciSpark, an open-source software framework for distributed computing on scientific data.
Superior model for fault tolerance computation in designing nano-sized circuit systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, N. S. S., E-mail: narinderjit@petronas.com.my; Muthuvalu, M. S., E-mail: msmuthuvalu@gmail.com; Asirvadam, V. S., E-mail: vijanth-sagayan@petronas.com.my
2014-10-24
As CMOS technology scales into the nanometre regime, reliability becomes a decisive subject in the design methodology of nano-sized circuit systems. As a result, several computational approaches have been developed to compute and evaluate the reliability of desired nano-electronic circuits. The process of computing reliability becomes very troublesome and time consuming as the computational complexity builds up with the desired circuit size. Therefore, being able to measure reliability quickly and accurately is fast becoming necessary in designing modern logic integrated circuits. For this purpose, the paper first looks into the development of an automated reliability evaluation tool based on the generalization of the Probabilistic Gate Model (PGM) and Boolean Difference-based Error Calculator (BDEC) models. The Matlab-based tool allows users to significantly speed up the task of reliability analysis for a very large number of nano-electronic circuits. Secondly, using the developed automated tool, the paper presents a comparative study of reliability computation and evaluation by the PGM and BDEC models for different implementations of same-functionality circuits. Based on the reliability analysis, BDEC gives exact and transparent reliability measures, but as the complexity of the same-functionality circuits with respect to gate error increases, the reliability measure by BDEC tends to be lower than that by PGM. The lower reliability measure by BDEC is explained in this paper using the distribution of different input signal patterns over time for same-functionality circuits. Simulation results conclude that the reliability measure by BDEC depends not only on faulty gates but also on circuit topology, the probability of input signals being one or zero, and the probability of error on signal lines.
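A toy version of the PGM idea: propagate the probability that each signal is logic '1' through gates whose outputs flip with error probability eps, under the usual gate-level independence assumption (the BDEC-style Boolean-difference analysis is not shown):

```python
def nand_pgm(p_a, p_b, eps):
    """P(output = 1) for a NAND gate whose inputs are '1' with probabilities
    p_a and p_b, and whose output flips with error probability eps."""
    ideal_one = 1.0 - p_a * p_b          # fault-free NAND output probability
    return (1.0 - eps) * ideal_one + eps * (1.0 - ideal_one)

# A chain of noisy NAND stages, one input held at p = 0.5
p = 0.5
for _ in range(4):
    p = nand_pgm(p, 0.5, eps=0.01)
print("P(output = 1) after 4 noisy NAND stages:", round(p, 4))
```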
Ikeya, Teppei; Terauchi, Tsutomu; Güntert, Peter; Kainosho, Masatsune
2006-07-01
Recently we have developed the stereo-array isotope labeling (SAIL) technique to overcome the conventional molecular size limitation in NMR protein structure determination by employing complete stereo- and regiospecific patterns of stable isotopes. SAIL sharpens signals and simplifies spectra without the loss of requisite structural information, thus making large classes of proteins newly accessible to detailed solution structure determination. The automated structure calculation program CYANA can efficiently analyze SAIL-NOESY spectra and calculate structures without manual analysis. Nevertheless, the original SAIL method might not be capable of determining the structures of proteins larger than 50 kDa or membrane proteins, for which the spectra are characterized by many broadened and overlapped peaks. Here we have carried out simulations of new SAIL patterns optimized for minimal relaxation and overlap, to evaluate the combined use of SAIL and CYANA for solving the structures of larger proteins and membrane proteins. The modified approach reduces the number of peaks to nearly half of that observed with uniform labeling, while still yielding well-defined structures and is expected to enable NMR structure determinations of these challenging systems.
Chung, Chi-Jung; Kuo, Yu-Chen; Hsieh, Yun-Yu; Li, Tsai-Chung; Lin, Cheng-Chieh; Liang, Wen-Miin; Liao, Li-Na; Li, Chia-Ing; Lin, Hsueh-Chun
2017-11-01
This study applied open source technology to establish a subject-enabled analytics model that can enhance measurement statistics of case studies with public health data in cloud computing. The infrastructure of the proposed model comprises three domains: 1) the health measurement data warehouse (HMDW) as the case study repository, 2) self-developed modules for online health risk information statistics (HRIStat) in cloud computing, and 3) the prototype of a Web-based process automation system in statistics (PASIS) for the health risk assessment of case studies with subject-enabled evaluation. The system design employed freeware, including Java applications, MySQL, and R packages, to drive a health risk expert system (HRES). In the design, the HRIStat modules implement typical biomedical statistics methods, and the PASIS interfaces enable process automation of the HRES for cloud computing. The Web-based model supports two modes, step-by-step analysis and an automated computing process, for preliminary evaluation and real-time computation, respectively. The proposed model was evaluated by recomputing previously published studies on the epidemiological measurement of diseases caused either by heavy metal exposure in the environment or by clinical complications in hospital. The validity of the computations was verified against commercial statistics software. The model was installed on a stand-alone computer and on a cloud-server workstation to verify computing performance on more than 230K data sets. Both setups reached an efficiency of about 10^5 sets per second. The Web-based PASIS interface can be used for cloud computing, and the HRIStat module can be flexibly expanded with advanced subjects for measurement statistics. The analytics procedure of the HRES prototype is capable of providing assessment criteria prior to estimating the potential risk to public health. Copyright © 2017 Elsevier B.V. All rights reserved.
Brohet, C R; Richman, H G
1979-06-01
Automated processing of electrocardiograms by the Veterans Administration program was evaluated for both agreement with physician interpretation and interpretative accuracy as assessed with nonelectrocardiographic criteria. One thousand unselected electrocardiograms were analyzed by two reviewer groups, one familiar and the other unfamiliar with the computer program. A significant number of measurement errors involving repolarization changes and left axis deviation occurred; however, interpretative disagreements related to statistical decision were largely language-related. Use of a printout with a more traditional format resulted in agreement with physician interpretation by both reviewer groups in more than 80 percent of cases. Overall sensitivity based on agreement with nonelectrocardiographic criteria was significantly greater with use of the computer program than with use of the conventional criteria utilized by the reviewers. This difference was particularly evident in the subgroup analysis of myocardial infarction and left ventricular hypertrophy. The degree of overdiagnosis of left ventricular hypertrophy and posteroinferior infarction was initially unacceptable, but this difficulty was corrected by adjustment of probabilities. Clinical acceptability of the Veterans Administration program appears to require greater physician education than that needed for other computer programs of electrocardiographic analysis; the flexibility of interpretation by statistical decision offers the potential for better diagnostic accuracy.
Control mechanism of double-rotator-structure ternary optical computer
NASA Astrophysics Data System (ADS)
Kai, SONG; Liping, YAN
2017-03-01
The double-rotator-structure ternary optical processor (DRSTOP) has two key characteristics, giant data-bit parallel computing and processor reconfigurability, which allow it to handle thousands of data bits in parallel and to run much faster than electronic computers and the other optical computing systems developed so far. In order to put DRSTOP into practical application, this paper establishes a series of methods, namely: task classification, data-bit allocation, control-information generation, control-information formatting and transmission, and retrieval of decoded results. These methods form the control mechanism of DRSTOP and turn it into an automated computing platform. Compared with traditional computing tools, the DRSTOP platform can ease the tension between high energy consumption and big-data computing by greatly reducing the cost of communications and I/O. Finally, a set of experiments was designed for the DRSTOP control mechanism to verify its feasibility and correctness. Experimental results showed that the control mechanism is correct, feasible, and efficient.
Azadmanjir, Zahra; Safdari, Reza; Ghazisaeedi, Marjan; Mokhtaran, Mehrshad; Kameli, Mohammad Esmail
2017-06-01
Accurate coded data are critical in healthcare. Computer-Assisted Coding (CAC) is an effective tool for improving clinical coding, particularly when a new classification is being developed and implemented. However, determining the appropriate development method requires consideration of the specifications of existing CAC systems, the requirements of each type, the available infrastructure, and the classification scheme itself. The aim of the study was to develop a decision model for determining the accurate code of each medical intervention in the Iranian Classification of Health Interventions (IRCHI) that could be implemented as a suitable CAC system. First, a sample of existing CAC systems was reviewed. Then the feasibility of each type of CAC was examined with regard to the prerequisites for its implementation. Next, a model was proposed according to the structure of the classification scheme and implemented as an interactive system. There is a significant relationship between the level of assistance a CAC system provides and its integration with electronic medical documents. Implementing a fully automated CAC system is currently impossible due to the immature development of electronic medical records and problems in the language used for medical documentation. A model was therefore proposed to develop a semi-automated CAC system based on hierarchical relationships between entities in the classification scheme, with decision logic that specifies the characters of the code step by step through a Web-based interactive user interface. It is composed of three phases, selecting the Target, Action, and Means, respectively, for an intervention. The proposed model suited the current status of clinical documentation and coding in Iran as well as the structure of the new classification scheme. Our results show it is practical; however, the model needs to be evaluated in the next stage of the research.
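The three-phase, character-by-character selection can be illustrated with a toy interactive walk over a hierarchy. The mini-hierarchy below is invented for illustration and is not the actual IRCHI scheme; only the Target/Action/Means structure is taken from the abstract.

```python
# Toy sketch of stepwise code construction; hierarchy contents are hypothetical.
HIERARCHY = {
    "Target": {"K": "kidney", "H": "heart"},
    "Action": {"E": "excision", "R": "repair"},
    "Means": {"O": "open approach", "L": "laparoscopic"},
}

def select_code():
    """Build an intervention code one character per phase, interactively."""
    code = ""
    for phase, options in HIERARCHY.items():
        print(f"{phase}:")
        for char, label in options.items():
            print(f"  {char} = {label}")
        choice = input("choose> ").strip().upper()
        while choice not in options:
            choice = input("invalid, choose again> ").strip().upper()
        code += choice
    return code

if __name__ == "__main__":
    print("intervention code:", select_code())
```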
AELAS: Automatic ELAStic property derivations via high-throughput first-principles computation
NASA Astrophysics Data System (ADS)
Zhang, S. H.; Zhang, R. F.
2017-11-01
The elastic properties are fundamental and important for crystalline materials as they relate to other mechanical properties, various thermodynamic qualities, and some critical physical properties. However, a complete set of experimentally determined elastic properties is available for only a small subset of known materials, and an automatic scheme for deriving elastic properties that is adapted to high-throughput computation is in high demand. In this paper, we present the AELAS code, an automated program for calculating second-order elastic constants of both two-dimensional and three-dimensional single-crystal materials with any symmetry, designed mainly for high-throughput first-principles computation. Derivations of general elastic properties, such as the Young's, bulk and shear moduli and the Poisson's ratio of polycrystalline materials, the Pugh ratio, Cauchy pressure, elastic anisotropy, and the elastic stability criterion, are also implemented in this code. The implementation has been critically validated by extensive evaluations and tests on a broad class of two-dimensional and three-dimensional materials, demonstrating its efficiency and capability for high-throughput screening of materials with targeted mechanical properties.
Program file doi: http://dx.doi.org/10.17632/f8fwg4j9tw.1
Licensing provisions: BSD 3-Clause
Programming language: Fortran
Nature of problem: To automate the calculation of second-order elastic constants and the derivation of other elastic properties for two-dimensional and three-dimensional materials with any symmetry via high-throughput first-principles computation.
Solution method: The space-group number is first determined by the SPGLIB code [1] and the structure is then redefined to an IEEE-format unit cell [2]. Second, based on the determined space-group number, a set of distortion modes is automatically specified and the distorted structure files are generated. Afterwards, the total energy for each distorted structure is calculated by a first-principles code, e.g. VASP [3]. Finally, the second-order elastic constants are determined from the quadratic coefficients of polynomial fits of the energy vs. strain relations, and the other elastic properties are derived accordingly.
References: [1] http://atztogo.github.io/spglib/. [2] A. Meitzler, H.F. Tiersten, A.W. Warner, D. Berlincourt, G.A. Couqin, F.S. Welsh III, IEEE standard on piezoelectricity, 1988. [3] G. Kresse, J. Furthmüller, Phys. Rev. B 54 (1996) 11169.
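The final fitting step can be shown in a few lines. The energies, cell volume, and strain grid below are synthetic stand-ins for DFT output, and the example treats a single effective constant; AELAS handles the full symmetry-resolved tensor.

```python
import numpy as np

EV_A3_TO_GPA = 160.21766208  # unit conversion: 1 eV/Angstrom^3 in GPa

# Hypothetical DFT total energies (eV) for small strains applied to a cell
# of equilibrium volume V0 (Angstrom^3); numbers are synthetic.
strains = np.array([-0.02, -0.01, 0.0, 0.01, 0.02])
energies = np.array([-9.9960, -9.9990, -10.0000, -9.9990, -9.9960])
V0 = 20.0

# E(s) = E0 + (V0/2) * C * s^2  =>  C = 2 * a2 / V0 from the quadratic fit.
a2 = np.polyfit(strains, energies, 2)[0]   # leading (quadratic) coefficient
C = 2.0 * a2 / V0 * EV_A3_TO_GPA
print(f"effective second-order elastic constant: {C:.1f} GPa")  # ~160 GPa
```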
Computed tomography-based volumetric tool for standardized measurement of the maxillary sinus
Giacomini, Guilherme; Pavan, Ana Luiza Menegatti; Altemani, João Mauricio Carrasco; Duarte, Sergio Barbosa; Fortaleza, Carlos Magno Castelo Branco; Miranda, José Ricardo de Arruda
2018-01-01
Volume measurements of maxillary sinus may be useful to identify diseases affecting paranasal sinuses. However, literature shows a lack of consensus in studies measuring the volume. This may be attributable to different computed tomography data acquisition techniques, segmentation methods, focuses of investigation, among other reasons. Furthermore, methods for volumetrically quantifying the maxillary sinus are commonly manual or semiautomated, which require substantial user expertise and are time-consuming. The purpose of the present study was to develop an automated tool for quantifying the total and air-free volume of the maxillary sinus based on computed tomography images. The quantification tool seeks to standardize maxillary sinus volume measurements, thus allowing better comparisons and determinations of factors that influence maxillary sinus size. The automated tool utilized image processing techniques (watershed, threshold, and morphological operators). The maxillary sinus volume was quantified in 30 patients. To evaluate the accuracy of the automated tool, the results were compared with manual segmentation that was performed by an experienced radiologist using a standard procedure. The mean percent differences between the automated and manual methods were 7.19% ± 5.83% and 6.93% ± 4.29% for total and air-free maxillary sinus volume, respectively. Linear regression and Bland-Altman statistics showed good agreement and low dispersion between both methods. The present automated tool for maxillary sinus volume assessment was rapid, reliable, robust, accurate, and reproducible and may be applied in clinical practice. The tool may be used to standardize measurements of maxillary volume. Such standardization is extremely important for allowing comparisons between studies, providing a better understanding of the role of the maxillary sinus, and determining the factors that influence maxillary sinus size under normal and pathological conditions. PMID:29304130
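A compact sketch of the volume computation is given below. It stands in for the paper's full watershed pipeline with a simple HU threshold plus morphology; the sinus mask source, the -500 HU air threshold, and the closing parameters are assumptions of this sketch.

```python
import numpy as np
from scipy import ndimage

def sinus_volumes(ct_hu, sinus_mask, voxel_mm3, air_threshold=-500):
    """Total and air-free maxillary sinus volume (cm^3) from a CT volume
    in Hounsfield units. `sinus_mask` is a precomputed binary mask of the
    sinus region (e.g. from a watershed segmentation)."""
    cleaned = ndimage.binary_closing(sinus_mask, iterations=2)
    air = cleaned & (ct_hu < air_threshold)        # air voxels inside the sinus
    total = cleaned.sum() * voxel_mm3 / 1000.0
    air_free = (cleaned.sum() - air.sum()) * voxel_mm3 / 1000.0
    return total, air_free
```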
The Effect of Computer Automation on Institutional Review Board (IRB) Office Efficiency
ERIC Educational Resources Information Center
Oder, Karl; Pittman, Stephanie
2015-01-01
Companies purchase computer systems to make their processes more efficient through automation. Some academic medical centers (AMC) have purchased computer systems for their institutional review boards (IRB) to increase efficiency and compliance with regulations. IRB computer systems are expensive to purchase, deploy, and maintain. An AMC should…
pmx: Automated protein structure and topology generation for alchemical perturbations
Gapsys, Vytautas; Michielssens, Servaas; Seeliger, Daniel; de Groot, Bert L
2015-01-01
Computational protein design requires methods to accurately estimate free energy changes in protein stability or binding upon an amino acid mutation. Of the different approaches available, molecular dynamics-based alchemical free energy calculations are unique in their accuracy and solid theoretical basis. The challenge in using these methods lies in the need to generate hybrid structures and topologies representing two physical states of a system. A custom-made hybrid topology may prove useful for a particular mutation of interest; however, high-throughput mutation analysis calls for a more general approach. In this work, we present an automated procedure to generate hybrid structures and topologies for amino acid mutations in all commonly used force fields. The described software is compatible with the Gromacs simulation package. The mutation libraries are readily supported for five force fields, namely Amber99SB, Amber99SB*-ILDN, OPLS-AA/L, Charmm22*, and Charmm36. PMID:25487359
Solving coiled-coil protein structures
Dauter, Zbigniew
2015-02-26
With the availability of more than 100,000 entries stored in the Protein Data Bank (PDB) that can be used as search models, molecular replacement (MR) is currently the most popular method of solving crystal structures of macromolecules. Significant methodological efforts have been directed in recent years towards making this approach more powerful and practical. This resulted in the creation of several computer programs, highly automated and user friendly, that are able to successfully solve many structures even by researchers who, although interested in structures of biomolecules, are not very experienced in crystallography.
Automated acquisition system for routine, noninvasive monitoring of physiological data.
Ogawa, M; Tamura, T; Togawa, T
1998-01-01
A fully automated, noninvasive data-acquisition system was developed to permit long-term measurement of physiological functions at home, without disturbing subjects' normal routines. The system consists of unconstrained monitors built into furnishings and structures in a home environment. An electrocardiographic (ECG) monitor in the bathtub measures heart function during bathing, a temperature monitor in the bed measures body temperature, and a weight monitor built into the toilet serves as a scale to record weight. All three monitors are connected to one computer and function with data-acquisition programs and a data format rule. The unconstrained physiological parameter monitors and fully automated measurement procedures collect data noninvasively without the subject's awareness. The system was tested for 1 week by a healthy male subject, aged 28, in laboratory-based facilities.
Space missions for automation and robotics technologies (SMART) program
NASA Technical Reports Server (NTRS)
Ciffone, D. L.; Lum, H., Jr.
1985-01-01
The motivations, features and expected benefits and applications of the NASA SMART program are summarized. SMART is intended to push the state of the art in automation and robotics, a goal that Public Law 98-371 mandated be an inherent part of the Space Station program. The effort would first require tests of sensors, manipulators, computers and other subsystems as seeds for the evolution of flight-qualified subsystems. Consideration is currently being given to robotics systems as add-ons to the RMS, MMU and OMV and a self-contained automation and robotics module which would be tended by astronaut visits. Probable experimentation and development paths that would be pursued with the equipment are discussed, along with the management structure and procedures for the program. The first hardware flight is projected for 1989.
Possible Computer Vision Systems and Automated or Computer-Aided Edging and Trimming
Philip A. Araman
1990-01-01
This paper discusses research which is underway to help our industry reduce costs, increase product volume and value recovery, and market more accurately graded and described products. The research is part of a team effort to help the hardwood sawmill industry automate with computer vision systems, and computer-aided or computer controlled processing. This paper...
NASA Technical Reports Server (NTRS)
Svalbonas, V.; Levine, H.; Ogilvie, P.
1975-01-01
Engineering programming information is presented for the STARS-2P (Shell Theory Automated for Rotational Structures - 2P, Plasticity) digital computer program; FORTRAN IV was used in writing the various subroutines. Execution of this program requires the use of thirteen temporary storage units. The program was initially written and debugged on the IBM 370-165 computer and converted to the UNIVAC 1108 computer, where it utilizes approximately 60,000 words of core. Only basic FORTRAN library routines are required by the program: sine, cosine, absolute value, and square root.
DELTA: An Expert System for Diesel Electric Locomotive Repair
1984-06-01
Rules and Inference Mechanisms. AD-P003 943 The ACE (Automated Cable Expert) Experiment: Initial Evaluation of an Expert System for Preventive...tions. The first field prototype expert system, designated CATS-1 (Computer-Aided Troubleshooting System - Version 1), was delivered in July 1983 and is
DOT National Transportation Integrated Search
1998-05-01
Recent technological advances in computer hardware, software, and image processing have led to the development of automated license plate reading equipment. This equipment has primarily been developed for enforcement and security applications, such a...
Towards fully automated structure-based function prediction in structural genomics: a case study.
Watson, James D; Sanderson, Steve; Ezersky, Alexandra; Savchenko, Alexei; Edwards, Aled; Orengo, Christine; Joachimiak, Andrzej; Laskowski, Roman A; Thornton, Janet M
2007-04-13
As the global Structural Genomics projects have picked up pace, the number of structures annotated in the Protein Data Bank as hypothetical protein or unknown function has grown significantly. A major challenge now involves the development of computational methods to assign functions to these proteins accurately and automatically. As part of the Midwest Center for Structural Genomics (MCSG) we have developed a fully automated functional analysis server, ProFunc, which performs a battery of analyses on a submitted structure. The analyses combine a number of sequence-based and structure-based methods to identify functional clues. After the first stage of the Protein Structure Initiative (PSI), we review the success of the pipeline and the importance of structure-based function prediction. As a dataset, we have chosen all structures solved by the MCSG during the 5 years of the first PSI. Our analysis suggests that two of the structure-based methods are particularly successful and provide examples of local similarity that is difficult to identify using current sequence-based methods. No one method is successful in all cases, so, through the use of a number of complementary sequence and structural approaches, the ProFunc server increases the chances that at least one method will find a significant hit that can help elucidate function. Manual assessment of the results is a time-consuming process and subject to individual interpretation and human error. We present a method based on the Gene Ontology (GO) schema using GO-slims that can allow the automated assessment of hits with a success rate approaching that of expert manual assessment.
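The GO-slim-based automated assessment can be approximated by a simple set-overlap score between predicted and curated terms. The scoring function and the example GO identifiers below are illustrative assumptions, not ProFunc's actual scoring scheme.

```python
def go_slim_agreement(predicted, reference):
    """Jaccard overlap between GO-slim term sets as a crude automated
    stand-in for manual assessment of a function-prediction hit."""
    predicted, reference = set(predicted), set(reference)
    if not predicted | reference:
        return 0.0
    return len(predicted & reference) / len(predicted | reference)

# Hypothetical GO-slim identifiers for a hit vs. a curated annotation.
hit = {"GO:0003824", "GO:0008152"}       # catalytic activity, metabolic process
curated = {"GO:0003824", "GO:0006810"}   # catalytic activity, transport
print(f"agreement: {go_slim_agreement(hit, curated):.2f}")  # 0.33
```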
Liu, Y; Wickens, C D
1994-11-01
The evaluation of mental workload is becoming increasingly important in system design and analysis. The present study examined the structure and assessment of mental workload in performing decision and monitoring tasks by focusing on two mental workload measurements: subjective assessment and time estimation. The task required the assignment of a series of incoming customers to the shortest of three parallel service lines displayed on a computer monitor. The subject was either in charge of the customer assignment (manual mode) or was monitoring an automated system performing the same task (automatic mode). In both cases, the subjects were required to detect the non-optimal assignments that they or the computer had made. Time pressure was manipulated by the experimenter to create fast and slow conditions. The results revealed a multi-dimensional structure of mental workload and a multi-step process of subjective workload assessment. The results also indicated that subjective workload was more influenced by the subject's participatory mode than by the factor of task speed. The time estimation intervals produced while performing the decision and monitoring tasks had significantly greater length and larger variability than those produced while either performing no other tasks or performing a well practised customer assignment task. This result seemed to indicate that time estimation was sensitive to the presence of perceptual/cognitive demands, but not to response related activities to which behavioural automaticity has developed.
NASA Technical Reports Server (NTRS)
Goodrich, Kenneth H.; Flemisch, Frank O.; Schutte, Paul C.; Williams, Ralph A.
2006-01-01
Driven by increased safety, efficiency, and airspace capacity, automation is playing an increasing role in aircraft operations. As aircraft become increasingly able to autonomously respond to a range of situations with performance surpassing human operators, we are compelled to look for new methods that help us understand their use and guide their design using new forms of automation and interaction. We propose a novel design metaphor to aid the conceptualization, design, and operation of highly-automated aircraft. Design metaphors transfer meaning from common experiences to less familiar applications or functions. A notable example is the "Desktop metaphor" for manipulating files on a computer. This paper describes a metaphor for highly automated vehicles known as the H-metaphor and a specific embodiment of the metaphor known as the H-mode as applied to aircraft. The fundamentals of the H-metaphor are reviewed followed by an overview of an exploratory usability study investigating human-automation interaction issues for a simple H-mode implementation. The envisioned application of the H-mode concept to aircraft is then described as are two planned evaluations.
NASA Technical Reports Server (NTRS)
Al-Jaar, Robert Y.; Desrochers, Alan A.
1989-01-01
The main objective of this research is to develop a generic modeling methodology with a flexible and modular framework to aid in the design and performance evaluation of integrated manufacturing systems using a unified model. After a thorough examination of the available modeling methods, the Petri Net approach was adopted. The concurrent and asynchronous nature of manufacturing systems is easily captured by Petri Net models. Three basic modules were developed: machine, buffer, and Decision Making Unit. The machine and buffer modules are used for modeling transfer lines and production networks. The Decision Making Unit models the functions of a computer node in a complex Decision Making Unit Architecture. The underlying model is a Generalized Stochastic Petri Net (GSPN) that can be used for performance evaluation and structural analysis. GSPNs were chosen because they help manage the complexity of modeling large manufacturing systems. There is no need to enumerate all the possible states of the Markov Chain since they are automatically generated from the GSPN model.
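A toy machine-plus-buffer module can be simulated directly with exponential firing delays, which is the essence of GSPN performance evaluation. The two rates and the buffer capacity below are illustrative, not taken from the report.

```python
import random

# Toy stochastic Petri net: one machine deposits parts into a finite buffer,
# a second machine removes them; rates and capacity are hypothetical.
marking = {"buffer": 0}
RATE = {"produce": 1.0, "consume": 0.8}
CAP = 3

def enabled(m):
    ts = []
    if m["buffer"] < CAP:
        ts.append("produce")   # upstream machine may fire
    if m["buffer"] > 0:
        ts.append("consume")   # downstream machine may fire
    return ts

random.seed(0)
t = busy = 0.0
for _ in range(10_000):
    delays = {tr: random.expovariate(RATE[tr]) for tr in enabled(marking)}
    winner = min(delays, key=delays.get)   # exponential race between transitions
    dt = delays[winner]
    busy += dt * (marking["buffer"] > 0)   # state held during the sojourn
    t += dt
    marking["buffer"] += 1 if winner == "produce" else -1

print(f"fraction of time the buffer is non-empty: {busy / t:.2f}")
```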
Open chemistry: RESTful web APIs, JSON, NWChem and the modern web application.
Hanwell, Marcus D; de Jong, Wibe A; Harris, Christopher J
2017-10-30
An end-to-end platform for chemical science research has been developed that integrates data from computational and experimental approaches through a modern web-based interface. The platform offers an interactive visualization and analytics environment that functions well on mobile, laptop and desktop devices. It offers pragmatic solutions to ensure that large and complex data sets are more accessible. Existing desktop applications/frameworks were extended to integrate with high-performance computing resources, and offer command-line tools to automate interaction - connecting distributed teams to this software platform on their own terms. The platform was developed openly, and all source code hosted on the GitHub platform with automated deployment possible using Ansible coupled with standard Ubuntu-based machine images deployed to cloud machines. The platform is designed to enable teams to reap the benefits of the connected web - going beyond what conventional search and analytics platforms offer in this area. It also has the goal of offering federated instances, that can be customized to the sites/research performed. Data gets stored using JSON, extending upon previous approaches using XML, building structures that support computational chemistry calculations. These structures were developed to make it easy to process data across different languages, and send data to a JavaScript-based web client.
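In the spirit of the JSON-over-REST design described above, a client interaction might look like the sketch below. The record schema, endpoint URL, and response field are assumptions for illustration, not the project's actual API.

```python
import json
import requests  # third-party: pip install requests

# Hypothetical JSON record for one calculation submission.
calc = {
    "molecule": {"atoms": {"elements": ["O", "H", "H"],
                           "coords3d": [0.0, 0.0, 0.0,
                                        0.757, 0.586, 0.0,
                                        -0.757, 0.586, 0.0]}},
    "code": "NWChem",
    "theory": {"method": "B3LYP", "basis": "6-31G*"},
}

resp = requests.post("https://example.org/api/v1/calculations",  # assumed endpoint
                     data=json.dumps(calc),
                     headers={"Content-Type": "application/json"},
                     timeout=30)
resp.raise_for_status()
print(resp.json()["id"])  # server-assigned identifier (assumed response field)
```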
ClassyFire: automated chemical classification with a comprehensive, computable taxonomy.
Djoumbou Feunang, Yannick; Eisner, Roman; Knox, Craig; Chepelev, Leonid; Hastings, Janna; Owen, Gareth; Fahy, Eoin; Steinbeck, Christoph; Subramanian, Shankar; Bolton, Evan; Greiner, Russell; Wishart, David S
2016-01-01
Scientists have long been driven by the desire to describe, organize, classify, and compare objects using taxonomies and/or ontologies. In contrast to biology, geology, and many other scientific disciplines, the world of chemistry still lacks a standardized chemical ontology or taxonomy. Several attempts at chemical classification have been made; but they have mostly been limited to either manual, or semi-automated proof-of-principle applications. This is regrettable as comprehensive chemical classification and description tools could not only improve our understanding of chemistry but also improve the linkage between chemistry and many other fields. For instance, the chemical classification of a compound could help predict its metabolic fate in humans, its druggability or potential hazards associated with it, among others. However, the sheer number (tens of millions of compounds) and complexity of chemical structures is such that any manual classification effort would prove to be near impossible. We have developed a comprehensive, flexible, and computable, purely structure-based chemical taxonomy (ChemOnt), along with a computer program (ClassyFire) that uses only chemical structures and structural features to automatically assign all known chemical compounds to a taxonomy consisting of >4800 different categories. This new chemical taxonomy consists of up to 11 different levels (Kingdom, SuperClass, Class, SubClass, etc.) with each of the categories defined by unambiguous, computable structural rules. Furthermore each category is named using a consensus-based nomenclature and described (in English) based on the characteristic common structural properties of the compounds it contains. The ClassyFire webserver is freely accessible at http://classyfire.wishartlab.com/. Moreover, a Ruby API version is available at https://bitbucket.org/wishartlab/classyfire_api, which provides programmatic access to the ClassyFire server and database. ClassyFire has been used to annotate over 77 million compounds and has already been integrated into other software packages to automatically generate textual descriptions for, and/or infer biological properties of over 100,000 compounds. Additional examples and applications are provided in this paper. ClassyFire, in combination with ChemOnt (ClassyFire's comprehensive chemical taxonomy), now allows chemists and cheminformaticians to perform large-scale, rapid and automated chemical classification. Moreover, a freely accessible API allows easy access to more than 77 million "ClassyFire" classified compounds. The results can be used to help annotate well studied, as well as lesser-known compounds. In addition, these chemical classifications can be used as input for data integration, and many other cheminformatics-related tasks.
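Programmatic lookup of an already-classified compound might look like the sketch below. The `/entities/<InChIKey>.json` route and the response shape are assumptions based on the webserver's entity pages; consult the ClassyFire API documentation before relying on them.

```python
import requests  # third-party: pip install requests

inchikey = "BSYNRYMUTXBXSQ-UHFFFAOYSA-N"  # aspirin
url = f"http://classyfire.wishartlab.com/entities/{inchikey}.json"  # assumed route

resp = requests.get(url, timeout=30)
resp.raise_for_status()
entity = resp.json()
print(entity["class"]["name"])  # a ChemOnt class label (assumed response shape)
```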
Computational Methods for Structural Mechanics and Dynamics, part 1
NASA Technical Reports Server (NTRS)
Stroud, W. Jefferson (Editor); Housner, Jerrold M. (Editor); Tanner, John A. (Editor); Hayduk, Robert J. (Editor)
1989-01-01
The structural analysis methods research has several goals. One goal is to develop analysis methods that are general. This goal of generality leads naturally to finite-element methods, but the research will also include other structural analysis methods. Another goal is that the methods be amenable to error analysis; that is, given a physical problem and a mathematical model of that problem, an analyst would like to know the probable error in predicting a given response quantity. The ultimate objective is to specify the error tolerances and to use automated logic to adjust the mathematical model or solution strategy to obtain that accuracy. A third goal is to develop structural analysis methods that can exploit parallel processing computers. The structural analysis methods research will focus initially on three types of problems: local/global nonlinear stress analysis, nonlinear transient dynamics, and tire modeling.
Computers Launch Faster, Better Job Matching
ERIC Educational Resources Information Center
Stevenson, Gloria
1976-01-01
Employment Security Automation Project (ESAP), a five-year program sponsored by the Employment and Training Administration, features an innovative computer-assisted job matching system and instantaneous computer-assisted service for unemployment insurance claimants. ESAP will also consolidate existing automated employment security systems to…
InPRO: Automated Indoor Construction Progress Monitoring Using Unmanned Aerial Vehicles
NASA Astrophysics Data System (ADS)
Hamledari, Hesam
In this research, an envisioned intelligent robotic solution for automated indoor data collection and inspection that employs a series of unmanned aerial vehicles (UAV), entitled "InPRO", is presented. InPRO consists of four stages, namely: 1) automated path planning; 2) autonomous UAV-based indoor inspection; 3) automated computer vision-based assessment of progress; and 4) automated updating of 4D building information models (BIM). The work presented in this thesis addresses the third stage of InPRO. A series of computer vision-based methods that automate the assessment of construction progress using images captured at indoor sites are introduced. The proposed methods employ computer vision and machine learning techniques to detect the components of under-construction indoor partitions. In particular, framing (studs), insulation, electrical outlets, and different states of drywall sheets (installing, plastering, and painting) are automatically detected using digital images. High accuracy rates, real-time performance, and operation without a priori information are indicators of the methods' promising performance.
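As a flavor of the detection task, the sketch below flags near-vertical line segments as stud candidates using classical edge and line detection. This is a deliberately simple stand-in for the thesis's learned detectors; all thresholds are illustrative assumptions.

```python
import cv2
import numpy as np

def detect_stud_candidates(image_path, min_len_px=200):
    """Return near-vertical line segments as framing (stud) candidates."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=min_len_px, maxLineGap=10)
    studs = []
    for x1, y1, x2, y2 in (lines[:, 0] if lines is not None else []):
        angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
        if 80 <= angle <= 100:           # within 10 degrees of vertical
            studs.append((x1, y1, x2, y2))
    return studs
```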
Segmentation of images of abdominal organs.
Wu, Jie; Kamath, Markad V; Noseworthy, Michael D; Boylan, Colm; Poehlman, Skip
2008-01-01
Abdominal organ segmentation, that is, the delineation of organ areas in the abdomen, plays an important role in the process of radiological evaluation. Attempts to automate segmentation of abdominal organs will aid radiologists who are required to view thousands of images daily. This review outlines the current state-of-the-art semi-automated and automated methods used to segment abdominal organ regions from computed tomography (CT), magnetic resonance imaging (MRI), and ultrasound images. Segmentation methods generally fall into three categories: pixel based, region based and boundary tracing. While pixel-based methods classify each individual pixel, region-based methods identify regions with similar properties. Boundary tracing is accomplished by a model of the image boundary. This paper evaluates the effectiveness of the above algorithms with an emphasis on their advantages and disadvantages for abdominal organ segmentation. Several evaluation metrics that compare machine-based segmentation with that of an expert (radiologist) are identified and examined. Finally, features based on intensity as well as the texture of a small region around a pixel are explored. This review concludes with a discussion of possible future trends for abdominal organ segmentation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, J; Ates, O; Li, X
Purpose: To develop a tool that can quickly and automatically assess the quality of contours generated by auto-segmentation during online adaptive replanning. Methods: Because of the strict time requirements of online replanning and the lack of 'ground truth' contours in daily images, our method starts by assessing image registration accuracy, focusing on the surface of the organ in question. Several metrics tightly related to registration accuracy, including Jacobian maps, contour shell deformation, and voxel-based root mean square (RMS) analysis, were computed. To identify correct contours, additional metrics and an adaptive decision tree are introduced. As a proof of principle, tests were performed with CT sets (planning and daily CTs) acquired using a CT-on-rails system during routine CT-guided RT delivery for 20 prostate cancer patients. The contours generated on the daily CTs using an auto-segmentation tool (ADMIRE, Elekta, MIM) based on deformable image registration of the planning CT and daily CT were tested. Results: The deformed contours of the 20 patients, comprising a total of 60 structures, were manually checked as baselines; overall, 49% of the contours were found to be incorrect. To evaluate the quality of local deformation, the Jacobian determinant on the contours (1.047 ± 0.045) was analyzed. In an analysis of the deformed rectum contour shells, a higher error-detection rate (0.41) was obtained, compared with 0.32 for the manual check. All automated detections took less than 5 seconds. Conclusion: The proposed method can effectively detect contour errors at both micro and macro scales by evaluating multiple deformable registration metrics in a parallel computing process. Future work will focus on improving practicability and optimizing the calculation algorithms and metric selection.
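The Jacobian metric mentioned above can be computed directly from a dense displacement field. The sketch below assumes a field of shape (3, Z, Y, X) in physical units; the array layout is an assumption of this sketch, not the tool's actual data format.

```python
import numpy as np

def jacobian_determinant(disp, spacing=(1.0, 1.0, 1.0)):
    """Voxel-wise Jacobian determinant of phi(x) = x + u(x) for a dense
    displacement field `disp` of shape (3, Z, Y, X)."""
    grads = [np.gradient(disp[i], *spacing) for i in range(3)]  # du_i/dx_j
    J = np.empty(disp.shape[1:] + (3, 3))
    for i in range(3):
        for j in range(3):
            J[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)
    return np.linalg.det(J)

# Values near 1 indicate locally volume-preserving registration; a mean of
# 1.047, as reported above, would correspond to mild local expansion.
```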
NASA Astrophysics Data System (ADS)
Tang, Xiaoying; Kutten, Kwame; Ceritoglu, Can; Mori, Susumu; Miller, Michael I.
2015-03-01
In this paper, we propose and validate a fully automated pipeline for simultaneous skull-stripping and lateral ventricle segmentation using T1-weighted images. The pipeline is built upon a segmentation algorithm entitled fast multi-atlas likelihood-fusion (MALF) which utilizes multiple T1 atlases that have been pre-segmented into six whole-brain labels - the gray matter, the white matter, the cerebrospinal fluid, the lateral ventricles, the skull, and the background of the entire image. This algorithm, MALF, was designed for estimating brain anatomical structures in the framework of coordinate changes via large diffeomorphisms. In the proposed pipeline, we use a variant of MALF to estimate those six whole-brain labels in the test T1-weighted image. The three tissue labels (gray matter, white matter, and cerebrospinal fluid) and the lateral ventricles are then grouped together to form a binary brain mask to which we apply morphological smoothing so as to create the final mask for brain extraction. For computational purposes, all input images to MALF are down-sampled by a factor of two. In addition, small deformations are used for the changes of coordinates. This substantially reduces the computational complexity, hence we use the term "fast MALF". The skull-stripping performance is qualitatively evaluated on a total of 486 brain scans from a longitudinal study on Alzheimer dementia. Quantitative error analysis is carried out on 36 scans for evaluating the accuracy of the pipeline in segmenting the lateral ventricle. The volumes of the automated lateral ventricle segmentations, obtained from the proposed pipeline, are compared across three different clinical groups. The ventricle volumes from our pipeline are found to be sensitive to the diagnosis.
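The mask-construction step of the pipeline (grouping tissue labels with the ventricles, then morphological smoothing) can be sketched as follows. The numeric label codes and the structuring-element iteration counts are assumptions; MALF's actual label coding may differ.

```python
import numpy as np
from scipy import ndimage

def brain_mask_from_labels(labels, gm=1, wm=2, csf=3, ventricles=4):
    """Union of the three tissue labels and the lateral ventricles,
    followed by morphological smoothing, mirroring the pipeline's
    brain-mask construction (label values assumed)."""
    mask = np.isin(labels, [gm, wm, csf, ventricles])
    mask = ndimage.binary_closing(mask, iterations=3)
    mask = ndimage.binary_opening(mask, iterations=2)
    # keep only the largest connected component as the final brain mask
    comps, n = ndimage.label(mask)
    if n > 1:
        largest = np.argmax(ndimage.sum(mask, comps, range(1, n + 1))) + 1
        mask = comps == largest
    return mask
```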
MRIVIEW: An interactive computational tool for investigation of brain structure and function
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ranken, D.; George, J.
MRIVIEW is a software system which uses image processing and visualization to provide neuroscience researchers with an integrated environment for combining functional and anatomical information. Key features of the software include semi-automated segmentation of volumetric head data and an interactive coordinate reconciliation method which utilizes surface visualization. The current system is a precursor to a computational brain atlas. We describe features this atlas will incorporate, including methods under development for visualizing brain functional data obtained from several different research modalities.
Designing Semiconductor Heterostructures Using Digitally Accessible Electronic-Structure Data
NASA Astrophysics Data System (ADS)
Shapera, Ethan; Schleife, Andre
Semiconductor sandwich structures, so-called heterojunctions, are at the heart of modern applications with tremendous societal impact: Light-emitting diodes shape the future of lighting and solar cells are promising for renewable energy. However, their computer-based design is hampered by the high cost of electronic structure techniques used to select materials based on alignment of valence and conduction bands and to evaluate excited state properties. We describe, validate, and demonstrate an open source Python framework which rapidly screens existing online databases and user-provided data to find combinations of suitable, previously fabricated materials for optoelectronic applications. The branch point energy aligns valence and conduction bands of different materials, requiring only the bulk density functional theory band structure. We train machine learning algorithms to predict the dielectric constant, electron mobility, and hole mobility with material descriptors available in online databases. Using CdSe and InP as emitting layers for LEDs and CH3NH3PbI3 and nanoparticle PbS as absorbers for solar cells, we demonstrate our broadly applicable, automated method.
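The branch-point alignment step reduces to a band-structure average. The sketch below uses a common band-count weighting (two valence, one conduction band), which is a tunable assumption rather than the framework's fixed choice.

```python
import numpy as np

def branch_point_energy(eigenvalues, n_vbm, n_vb=2, n_cb=1):
    """Branch-point energy from a bulk band structure: the k-averaged mean
    of the highest `n_vb` valence and lowest `n_cb` conduction bands.
    `eigenvalues` has shape (n_kpoints, n_bands), sorted per k-point;
    `n_vbm` is the index of the highest occupied band."""
    vb = eigenvalues[:, n_vbm - n_vb + 1 : n_vbm + 1].mean(axis=1)
    cb = eigenvalues[:, n_vbm + 1 : n_vbm + 1 + n_cb].mean(axis=1)
    return 0.5 * (vb.mean() + cb.mean())

# Aligning two materials: offset = E_BP(A) - E_BP(B) shifts B's bands so the
# branch points coincide, giving an approximate band alignment at the junction.
```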
Automatic system for computer program documentation
NASA Technical Reports Server (NTRS)
Simmons, D. B.; Elliott, R. W.; Arseven, S.; Colunga, D.
1972-01-01
Work on a project to design an automatic system for computer program documentation aids was undertaken to determine which existing programs could be used effectively to document computer programs. Results of the study are included in the form of an extensive bibliography and working papers on appropriate operating systems, text editors, program editors, data structures, standards, decision tables, flowchart systems, and proprietary documentation aids. The preliminary design for an automated documentation system is also included. An actual program has been documented in detail to demonstrate the types of output that can be produced by the proposed system.
Computer vision in the poultry industry
USDA-ARS?s Scientific Manuscript database
Computer vision is becoming increasingly important in the poultry industry due to increasing use and speed of automation in processing operations. Growing awareness of food safety concerns has helped add food safety inspection to the list of tasks that automated computer vision can assist. Researc...
Computing and Office Automation: Changing Variables.
ERIC Educational Resources Information Center
Staman, E. Michael
1981-01-01
Trends in computing and office automation and their applications, including planning, institutional research, and general administrative support in higher education, are discussed. Changing aspects of information processing and an increasingly larger user community are considered. The computing literacy cycle may involve programming, analysis, use…
In the Face of Fallible AWE Feedback: How Do Students Respond?
ERIC Educational Resources Information Center
Bai, Lifang; Hu, Guangwei
2017-01-01
Automated writing evaluation (AWE) systems can provide immediate computer-generated quantitative assessments and qualitative diagnostic feedback on an enormous number of submitted essays. However, limited research attention has been paid to locally designed AWE systems used in English as a foreign language (EFL) classroom contexts. This study…
Read-across is an important data gap filling technique used within category and analog approaches for regulatory hazard identification and risk assessment. Although much technical guidance is available that describes how to develop category/analog approaches, practical principles...
Evaluation of a laser scanning sensor for variable-rate tree sprayer development
USDA-ARS?s Scientific Manuscript database
Accurate canopy measurement capabilities are prerequisites to automate variable-rate sprayers. A 270° radial range laser scanning sensor was tested for its scanning accuracy to detect tree canopy profiles. Signals from the laser sensor and a ground speed sensor were processed with an embedded comput...
Independent Verification and Validation of Complex User Interfaces: A Human Factors Approach
NASA Technical Reports Server (NTRS)
Whitmore, Mihriban; Berman, Andrea; Chmielewski, Cynthia
1996-01-01
The Usability Testing and Analysis Facility (UTAF) at the NASA Johnson Space Center has identified and evaluated a potential automated software interface inspection tool capable of assessing the degree to which space-related critical and high-risk software system user interfaces meet objective human factors standards across each NASA program and project. Testing consisted of two distinct phases. Phase 1 compared analysis times and similarity of results for the automated tool and for human-computer interface (HCI) experts. In Phase 2, HCI experts critiqued the prototype tool's user interface. Based on this evaluation, it appears that a more fully developed version of the tool will be a promising complement to a human factors-oriented independent verification and validation (IV&V) process.
ADP Analysis project for the Human Resources Management Division
NASA Technical Reports Server (NTRS)
Tureman, Robert L., Jr.
1993-01-01
The ADP (Automated Data Processing) Analysis Project was conducted for the Human Resources Management Division (HRMD) of NASA's Langley Research Center. The three major areas of work in the project were computer support, automated inventory analysis, and an ADP study for the Division. The goal of the computer support work was to determine automation needs of Division personnel and help them solve computing problems. The goal of automated inventory analysis was to find a way to analyze installed software and usage on a Macintosh. Finally, the ADP functional systems study for the Division was designed to assess future HRMD needs concerning ADP organization and activities.
Textile technology development
NASA Technical Reports Server (NTRS)
Shah, Bharat M.
1995-01-01
The objectives of this report were to evaluate and select resin systems for Resin Transfer Molding (RTM) and Powder Towpreg Material, to develop and evaluate advanced textile processes by comparing 2-D and 3-D braiding for fuselage frame applications and develop window belt and side panel structural design concepts, to evaluate textile material properties, and to develop low cost manufacturing and tooling processes for the automated manufacturing of fuselage primary structures. This research was in support of the NASA and Langley Research Center (LaRc) Advanced Composite Structural Concepts and Materials Technologies for Primary Aircraft Structures program.
Rigo, Vincent; Graas, Estelle; Rigo, Jacques
2012-07-01
Selected optimal respiratory cycles should allow calculation of respiratory mechanics parameters that focus on patient-ventilator interaction. New computer software that automatically selects optimal breaths, and the respiratory mechanics derived from those cycles, are evaluated. Retrospective study. University level III neonatal intensive care unit. Ten-minute synchronized intermittent mandatory ventilation and assist/control ventilation recordings from ten newborns. The ventilator provided respiratory mechanics data (ventilator respiratory cycles) every 10 secs. Pressure, flow, and volume waves and pressure-volume, pressure-flow, and volume-flow loops were reconstructed from continuous pressure-volume recordings. Visual assessment determined assisted, leak-free optimal respiratory cycles (selected respiratory cycles). New software graded the quality of cycles (automated respiratory cycles). Respiratory mechanics values were derived from both sets of optimal cycles. We evaluated selection quality and compared mean values and their variability according to ventilatory mode and respiratory mechanics provenance. To assess discriminating power, all 45 "t" values obtained from interpatient comparisons were compared for each respiratory mechanics parameter. A total of 11,724 breaths were evaluated. Agreement between the automated and selected respiratory cycles is high: 88% of maximal κ with linear weighting. Specificity and positive predictive values are 0.98 and 0.96, respectively. Averaged values are similar between automated and ventilator respiratory cycles. C20/C alone is markedly decreased in automated respiratory cycles (1.27 ± 0.37 vs. 1.81 ± 0.67). The apparent similarity in tidal volume disappears in assist/control: automated respiratory cycle tidal volume (4.8 ± 1.0 mL/kg) is significantly lower than for ventilator respiratory cycles (5.6 ± 1.8 mL/kg). Coefficients of variation decrease for all automated respiratory cycle parameters in all infants. "t" values from automated respiratory cycle data are two to three times higher than those from ventilator respiratory cycles. Automated selection is highly specific. Automated respiratory cycles best reflect the interaction of ventilator and patient. Improving the discriminating power of ventilator monitoring will likely help in assessing disease status and following trends. Averaged parameters derived from automated respiratory cycles are more precise and could be displayed by ventilators to improve real-time fine tuning of ventilator settings.
NASA Astrophysics Data System (ADS)
Girolamo, D.; Girolamo, L.; Yuan, F. G.
2015-03-01
Nondestructive evaluation (NDE) for detection and quantification of damage in composite materials is fundamental in the assessment of the overall structural integrity of modern aerospace systems. Conventional NDE systems have been extensively used to detect the location and size of damages by propagating ultrasonic waves normal to the surface. However they usually require physical contact with the structure and are time consuming and labor intensive. An automated, contactless laser ultrasonic imaging system for barely visible impact damage (BVID) detection in advanced composite structures has been developed to overcome these limitations. Lamb waves are generated by a Q-switched Nd:YAG laser, raster scanned by a set of galvano-mirrors over the damaged area. The out-of-plane vibrations are measured through a laser Doppler Vibrometer (LDV) that is stationary at a point on the corner of the grid. The ultrasonic wave field of the scanned area is reconstructed in polar coordinates and analyzed for high resolution characterization of impact damage in the composite honeycomb panel. Two methodologies are used for ultrasonic wave-field analysis: scattered wave field analysis (SWA) and standing wave energy analysis (SWEA) in the frequency domain. The SWA is employed for processing the wave field and estimate spatially dependent wavenumber values, related to discontinuities in the structural domain. The SWEA algorithm extracts standing waves trapped within damaged areas and, by studying the spectrum of the standing wave field, returns high fidelity damage imaging. While the SWA can be used to locate the impact damage in the honeycomb panel, the SWEA produces damage images in good agreement with X-ray computed tomographic (X-ray CT) scans. The results obtained prove that the laser-based nondestructive system is an effective alternative to overcome limitations of conventional NDI technologies.
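The frequency-domain energy mapping at the heart of the SWEA step can be sketched with a plain FFT over the temporal axis. The array layout and band limits below are assumptions of this sketch, not the authors' implementation.

```python
import numpy as np

def standing_wave_energy(wavefield, fs, f_lo, f_hi):
    """Spatial map of spectral energy in a frequency band: damage-trapped
    standing waves show up as localized energy at resonance frequencies.
    `wavefield` has shape (ny, nx, nt); fs is the sampling rate in Hz."""
    spec = np.fft.rfft(wavefield, axis=-1)
    freqs = np.fft.rfftfreq(wavefield.shape[-1], d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return (np.abs(spec[..., band]) ** 2).sum(axis=-1)
```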
Automated computation of autonomous spectral submanifolds for nonlinear modal analysis
NASA Astrophysics Data System (ADS)
Ponsioen, Sten; Pedergnana, Tiemo; Haller, George
2018-04-01
We discuss an automated computational methodology for computing two-dimensional spectral submanifolds (SSMs) in autonomous nonlinear mechanical systems of arbitrary degrees of freedom. In our algorithm, SSMs, the smoothest nonlinear continuations of modal subspaces of the linearized system, are constructed up to arbitrary orders of accuracy, using the parameterization method. An advantage of this approach is that the construction of the SSMs does not break down when the SSM folds over its underlying spectral subspace. A further advantage is an automated a posteriori error estimation feature that enables a systematic increase in the orders of the SSM computation until the required accuracy is reached. We find that the present algorithm provides a major speed-up, relative to numerical continuation methods, in the computation of backbone curves, especially in higher-dimensional problems. We illustrate the accuracy and speed of the automated SSM algorithm on lower- and higher-dimensional mechanical systems.
Chan, Ernest G; Landreneau, James R; Schuchert, Matthew J; Odell, David D; Gu, Suicheng; Pu, Jiantao; Luketich, James D; Landreneau, Rodney J
2015-09-01
Accurate cancer localization and negative resection margins are necessary for successful segmentectomy. In this study, we evaluate a newly developed software package that permits automated segmentation of the pulmonary parenchyma, allowing 3-dimensional assessment of tumor size, location, and estimates of surgical margins. A pilot study using a newly developed 3-dimensional computed tomography analytic software package was performed to retrospectively evaluate preoperative computed tomography images of patients who underwent segmentectomy (n = 36) or lobectomy (n = 15) for stage 1 non-small cell lung cancer. The software accomplishes an automated reconstruction of anatomic pulmonary segments of the lung based on bronchial arborization. Estimates of anticipated surgical margins and pulmonary segmental volume were made on the basis of 3-dimensional reconstruction. Autosegmentation was achieved in 72.7% (32/44) of preoperative computed tomography images with slice thicknesses of 3 mm or less. Reasons for segmentation failure included local severe emphysema or pneumonitis, and lower computed tomography resolution. Tumor segmental localization was achieved in all autosegmented studies. The 3-dimensional computed tomography analysis provided a positive predictive value of 87% in predicting a marginal clearance greater than 1 cm and a 75% positive predictive value in predicting a margin to tumor diameter ratio greater than 1 in relation to the surgical pathology assessment. This preoperative 3-dimensional computed tomography analysis of segmental anatomy can confirm the tumor location within an anatomic segment and aid in predicting surgical margins. This 3-dimensional computed tomography information may assist in the preoperative assessment regarding the suitability of segmentectomy for peripheral lung cancers. Published by Elsevier Inc.
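The margin estimate from segmented masks reduces to a Euclidean distance transform. The sketch below assumes binary tumor and segment masks on the same CT grid, with the tumor lying inside the segment; it is a stand-in for the software's proprietary computation.

```python
import numpy as np
from scipy import ndimage

def min_margin_mm(tumor_mask, segment_mask, spacing):
    """Minimum distance (mm) from the tumor to the boundary of its anatomic
    segment: an estimate of the anticipated surgical margin. Masks are 3-D
    booleans on the same grid; `spacing` is the (z, y, x) voxel size in mm."""
    # distance from every in-segment voxel to the nearest out-of-segment voxel
    dist_to_boundary = ndimage.distance_transform_edt(segment_mask,
                                                      sampling=spacing)
    return float(dist_to_boundary[tumor_mask].min())
```

Dividing this margin by the tumor diameter gives the margin-to-tumor-diameter ratio used in the study's suitability assessment.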
NASA Astrophysics Data System (ADS)
Bredfeldt, Jeremy S.; Liu, Yuming; Pehlke, Carolyn A.; Conklin, Matthew W.; Szulczewski, Joseph M.; Inman, David R.; Keely, Patricia J.; Nowak, Robert D.; Mackie, Thomas R.; Eliceiri, Kevin W.
2014-01-01
Second-harmonic generation (SHG) imaging can help reveal interactions between collagen fibers and cancer cells. Quantitative analysis of SHG images of collagen fibers is challenged by the heterogeneity of collagen structures and low signal-to-noise ratio often found while imaging collagen in tissue. The role of collagen in breast cancer progression can be assessed post acquisition via enhanced computation. To facilitate this, we have implemented and evaluated four algorithms for extracting fiber information, such as number, length, and curvature, from a variety of SHG images of collagen in breast tissue. The image-processing algorithms included a Gaussian filter, SPIRAL-TV filter, Tubeness filter, and curvelet-denoising filter. Fibers are then extracted using an automated tracking algorithm called fiber extraction (FIRE). We evaluated the algorithm performance by comparing length, angle and position of the automatically extracted fibers with those of manually extracted fibers in twenty-five SHG images of breast cancer. We found that the curvelet-denoising filter followed by FIRE, a process we call CT-FIRE, outperforms the other algorithms under investigation. CT-FIRE was then successfully applied to track collagen fiber shape changes over time in an in vivo mouse model for breast cancer.
45 CFR 310.1 - What definitions apply to this part?
Code of Federal Regulations, 2010 CFR
2010-10-01
... existing automated data processing computer system through an Intergovernmental Service Agreement; (4...) Office Automation means a generic adjunct component of a computer system that supports the routine... timely and satisfactory; (iv) Assurances that information in the computer system as well as access, use...
45 CFR 310.1 - What definitions apply to this part?
Code of Federal Regulations, 2013 CFR
2013-10-01
... existing automated data processing computer system through an Intergovernmental Service Agreement; (4...) Office Automation means a generic adjunct component of a computer system that supports the routine... timely and satisfactory; (iv) Assurances that information in the computer system as well as access, use...
45 CFR 310.1 - What definitions apply to this part?
Code of Federal Regulations, 2014 CFR
2014-10-01
... existing automated data processing computer system through an Intergovernmental Service Agreement; (4...) Office Automation means a generic adjunct component of a computer system that supports the routine... timely and satisfactory; (iv) Assurances that information in the computer system as well as access, use...
45 CFR 310.1 - What definitions apply to this part?
Code of Federal Regulations, 2011 CFR
2011-10-01
... existing automated data processing computer system through an Intergovernmental Service Agreement; (4...) Office Automation means a generic adjunct component of a computer system that supports the routine... timely and satisfactory; (iv) Assurances that information in the computer system as well as access, use...
45 CFR 310.1 - What definitions apply to this part?
Code of Federal Regulations, 2012 CFR
2012-10-01
... existing automated data processing computer system through an Intergovernmental Service Agreement; (4...) Office Automation means a generic adjunct component of a computer system that supports the routine... timely and satisfactory; (iv) Assurances that information in the computer system as well as access, use...
Automated Generation of Message-Passing Programs: An Evaluation Using CAPTools
NASA Technical Reports Server (NTRS)
Hribar, Michelle R.; Jin, Haoqiang; Yan, Jerry C.; Saini, Subhash (Technical Monitor)
1998-01-01
Scientists at NASA Ames Research Center have been developing computational aeroscience applications on highly parallel architectures over the past ten years. During that same time period, a steady transition of hardware and system software also occurred, forcing us to expend great effort migrating and re-coding our applications. As applications and machine architectures become increasingly complex, the cost and time required for this process will become prohibitive. In this paper, we present the first set of results in our evaluation of interactive parallelization tools. In particular, we evaluate CAPTools' ability to parallelize computational aeroscience applications. CAPTools was tested on serial versions of the NAS Parallel Benchmarks and ARC3D, a computational fluid dynamics application, on two platforms: the SGI Origin 2000 and the Cray T3E. This evaluation includes performance, amount of user interaction required, limitations and portability. Based on these results, a discussion on the feasibility of computer-aided parallelization of aerospace applications is presented along with suggestions for future work.
Computer-automated tinnitus assessment: noise-band matching, maskability, and residual inhibition.
Henry, James A; Roberts, Larry E; Ellingson, Roger M; Thielman, Emily J
2013-06-01
Psychoacoustic measures of tinnitus typically include loudness and pitch match, minimum masking level (MML), and residual inhibition (RI). We previously developed and documented a computer-automated tinnitus evaluation system (TES) capable of subject-guided loudness and pitch matching. The TES was further developed to conduct computer-aided, subject-guided testing for noise-band matching (NBM), MML, and RI. The purpose of the present study was to document the capability of the upgraded TES to obtain measures of NBM, MML, and RI, and to determine the test-retest reliability of the responses obtained. Three subject-guided, computer-automated testing protocols were developed to conduct NBM. For MML and RI testing, a 2-12 kHz band of noise was used. All testing was repeated during a second session. Subjects meeting study criteria were selected from those who had previously been tested for loudness and pitch matching in our laboratory. A total of 21 subjects completed testing, including seven females and 14 males. The upgraded TES was found to be fairly time efficient. Subjects were generally reliable, both within and between sessions, with respect to the type of stimulus they chose as the best match to their tinnitus. Matching to bandwidth was more variable between measurements, with greater consistency seen for subjects reporting tonal tinnitus or wide-band noisy tinnitus than intermediate types. Between-session repeated MMLs were within 10 dB of each other for all but three of the subjects. Subjects who experienced RI during Session 1 tended to be those who experienced it during Session 2. This study may represent the first time that NBM, MML, and RI audiometric testing results have been obtained entirely through a self-contained, computer-automated system designed specifically for use in the clinic. Future plans include refinements to achieve greater testing efficiency. American Academy of Audiology.
17 CFR 38.156 - Automated trade surveillance system.
Code of Federal Regulations, 2014 CFR
2014-04-01
... potential trade practice violations. The automated system must load and process daily orders and trades no... anomalies; compute, retain, and compare trading statistics; compute trade gains, losses, and futures...
17 CFR 38.156 - Automated trade surveillance system.
Code of Federal Regulations, 2013 CFR
2013-04-01
... potential trade practice violations. The automated system must load and process daily orders and trades no... anomalies; compute, retain, and compare trading statistics; compute trade gains, losses, and futures...
ERIC Educational Resources Information Center
Majchrzak, Ann
A study was conducted of the training programs used by plants with Computer Automated Design/Computer Automated Manufacturing (CAD/CAM) to help their employees adapt to automated manufacturing. The study sought to determine the relative priorities of manufacturing establishments for training certain workers in certain skills; the status of…
Simulation of a combined-cycle engine
NASA Technical Reports Server (NTRS)
Vangerpen, Jon
1991-01-01
A FORTRAN computer program was developed to simulate the performance of combined-cycle engines. These engines combine features of both gas turbines and reciprocating engines. The computer program can simulate both design point and off-design operation. Widely varying engine configurations can be evaluated for their power, performance, and efficiency as well as the influence of altitude and air speed. Although the program was developed to simulate aircraft engines, it can be used with equal success for stationary and automotive applications.
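For flavor, the design-point cycle arithmetic such a simulation performs can be sketched in a few lines; the efficiencies, temperatures, and gas properties below are illustrative assumptions, not values from the program.

```python
# Design-point performance of a simple gas-turbine (Brayton) cycle:
# a minimal sketch of the kind of arithmetic such a simulation performs.
cp, gamma = 1005.0, 1.4          # air, J/(kg K)
T1, p_ratio = 288.15, 10.0       # compressor inlet temperature [K], pressure ratio
T3 = 1400.0                      # turbine inlet temperature [K]
eta_c, eta_t = 0.85, 0.90        # isentropic efficiencies (assumed)

tau = p_ratio ** ((gamma - 1.0) / gamma)       # isentropic temperature ratio
T2 = T1 * (1.0 + (tau - 1.0) / eta_c)          # compressor exit temperature
T4 = T3 * (1.0 - eta_t * (1.0 - 1.0 / tau))    # turbine exit temperature

w_net = cp * ((T3 - T4) - (T2 - T1))           # net specific work [J/kg]
q_in = cp * (T3 - T2)                          # combustor heat input [J/kg]
print(f"specific work {w_net/1e3:.1f} kJ/kg, thermal efficiency {w_net/q_in:.3f}")
```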
Design Aids for Real-Time Systems (DARTS)
NASA Technical Reports Server (NTRS)
Szulewski, P. A.
1982-01-01
Design-Aids for Real-Time Systems (DARTS) is a tool that assists in defining embedded computer systems through tree structured graphics, military standard documentation support, and various analyses including automated Software Science parameter counting and metrics calculation. These analyses provide both static and dynamic design quality feedback which can potentially aid in producing efficient, high quality software systems.
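The Software Science (Halstead) metrics mentioned above are computed from operator and operand counts. A toy counter follows; the regex-based token split is a deliberately crude assumption (a real tool would use a language-aware parser):

```python
import math
import re

def halstead(source):
    """Toy Halstead 'Software Science' parameter counter for a code fragment."""
    operators = re.findall(r"[+\-*/=<>!&|%(){}\[\];,]+", source)
    operands = re.findall(r"\b\w+\b", source)
    n1, n2 = len(set(operators)), len(set(operands))   # unique counts
    N1, N2 = len(operators), len(operands)             # total counts
    vocab, length = n1 + n2, N1 + N2
    volume = length * math.log2(vocab) if vocab > 1 else 0.0
    difficulty = (n1 / 2.0) * (N2 / n2 if n2 else 0.0)
    effort = difficulty * volume
    return {"volume": volume, "difficulty": difficulty, "effort": effort}

print(halstead("a = b + c * (b - d);"))
```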
Multilevel decomposition of complete vehicle configuration in a parallel computing environment
NASA Technical Reports Server (NTRS)
Bhatt, Vinay; Ragsdell, K. M.
1989-01-01
This research summarizes various approaches to multilevel decomposition to solve large structural problems. A linear decomposition scheme based on the Sobieski algorithm is selected as a vehicle for automated synthesis of a complete vehicle configuration in a parallel processing environment. The research is in a developmental state. Preliminary numerical results are presented for several example problems.
Automation of checkout for the shuttle operations era
NASA Technical Reports Server (NTRS)
Anderson, J. A.; Hendrickson, K. O.
1985-01-01
The Space Shuttle checkout is different from its Apollo predecessor. The complexity of the hardware, the shortened turnaround time, and the software that performs ground checkout are outlined. New techniques and standards for software development, and the management structure to control them, are described. The utilization of computer systems for vehicle testing is highlighted.
ERIC Educational Resources Information Center
King, D.; And Others
1994-01-01
Discusses the computational problems of automating paper-based spatial information. A new relational structure for soil science information based on the main conceptual concepts used during conventional cartographic work is proposed. This model is a computerized framework for coherent description of the geographical variability of soils, combined…
A unified representation of findings in clinical radiology using the UMLS and DICOM.
Bertaud, Valérie; Lasbleiz, Jérémy; Mougin, Fleur; Burgun, Anita; Duvauferrier, Régis
2008-09-01
Collecting and analyzing findings constitute the basis of medical activity. Computer-assisted medical activity raises the problem of modelling findings. We propose a unified representation of findings integrating the representations of findings in the GAMUTS in Radiology [M.M. Reeder, B. Felson, GAMUTS in radiology Comprehensive lists of roentgen differential diagnosis, fourth ed., 2003], the Unified Medical Language System (UMLS), and the Digital Imaging and Communication in Medicine Structured Report (DICOM-SR). Starting from a corpus of findings in bone and joint radiology [M.M. Reeder, B. Felson, GAMUTS in Radiology comprehensive lists of roentgen differential diagnosis, fourth ed., 2003] (3481 words), an automated mapping to the UMLS was performed with the MetaMap program. The resulting UMLS terms and Semantic Types were analyzed in order to find a generic template in accordance with the DICOM-SR structure. UMLS Concepts were missing for 45% of the GAMUTS findings. Three kinds of regularities were observed in the way the Semantic Types were combined: "pathological findings", "physiological findings" and "anatomical findings". A generic and original DICOM-SR template for modelling findings was proposed. It was evaluated for representing GAMUTS jaw findings. 21% of the terms were missing and had to be taken from RadLex (5%) or created (16%). This article shows that it is possible to represent findings using the UMLS and the DICOM-SR formalism with a semi-automated method. The MetaMap program helped to find a model to represent the semantic structure of free texts with standardized terms (UMLS Concepts). Nevertheless, the coverage of the UMLS is not comprehensive. This study shows that the UMLS should include more technical concepts and more concepts regarding findings, signs and symptoms to be suitable for radiology representation. The semi-automated translation of the whole GAMUTS using the UMLS concepts and the DICOM-SR relations could help to create or supplement the DCMR Templates and Context Groups pertaining to the description of imaging findings.
Foreign object detection and removal to improve automated analysis of chest radiographs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogeweg, Laurens; Sanchez, Clara I.; Melendez, Jaime
2013-07-15
Purpose: Chest radiographs commonly contain projections of foreign objects, such as buttons, brassiere clips, jewellery, or pacemakers and wires. The presence of these structures can substantially affect the output of computer analysis of these images. An automated method is presented to detect, segment, and remove foreign objects from chest radiographs. Methods: Detection is performed using supervised pixel classification with a kNN classifier, resulting in a probability estimate per pixel to belong to a projected foreign object. Segmentation is performed by grouping and post-processing pixels with a probability above a certain threshold. Next, the objects are replaced by texture inpainting. Results: The method is evaluated in experiments on 257 chest radiographs. The detection at pixel level is evaluated with receiver operating characteristic analysis on pixels within the unobscured lung fields and an A_z value of 0.949 is achieved. Free-response operating characteristic analysis is performed at the object level, and 95.6% of objects are detected with on average 0.25 false positive detections per image. To investigate the effect of removing the detected objects through inpainting, a texture analysis system for tuberculosis detection is applied to images with and without pathology and with and without foreign object removal. Unprocessed, the texture analysis abnormality score of normal images with foreign objects is comparable to those with pathology. After removing foreign objects, the texture score of normal images with and without foreign objects is similar, while abnormal images, whether they contain foreign objects or not, achieve on average higher scores. Conclusions: The authors conclude that removal of foreign objects from chest radiographs is feasible and beneficial for automated image analysis.
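The detect-segment-inpaint pipeline lends itself to a compact sketch. The following assumes per-pixel feature vectors and manual object labels are already available, uses sklearn's kNN for the pixel classifier, and substitutes OpenCV's Telea inpainting for the texture inpainting used in the paper:

```python
import numpy as np
import cv2
from sklearn.neighbors import KNeighborsClassifier

def remove_foreign_objects(image, feats, train_feats, train_labels,
                           prob_threshold=0.5, k=15):
    """image: 8-bit grayscale radiograph; feats: (H, W, F) per-pixel
    feature array; train_labels: 1 = foreign object, 0 = background."""
    clf = KNeighborsClassifier(n_neighbors=k).fit(train_feats, train_labels)
    prob = clf.predict_proba(feats.reshape(-1, feats.shape[-1]))[:, 1]
    mask = (prob.reshape(image.shape) > prob_threshold).astype(np.uint8)
    # Group thresholded pixels into objects by simple morphological closing
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
    # Replace detected objects (Telea inpainting as a stand-in for the
    # texture inpainting used in the paper)
    return cv2.inpaint(image, mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)
```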
Using satellite communications for a mobile computer network
NASA Technical Reports Server (NTRS)
Wyman, Douglas J.
1993-01-01
The topics discussed include the following: patrol car automation, mobile computer network, network requirements, network design overview, MCN mobile network software, MCN hub operation, mobile satellite software, hub satellite software, the benefits of patrol car automation, the benefits of satellite mobile computing, and national law enforcement satellite.
Extraction of prostatic lumina and automated recognition for prostatic calculus image using PCA-SVM.
Wang, Zhuocai; Xu, Xiangmin; Ding, Xiaojun; Xiao, Hui; Huang, Yusheng; Liu, Jian; Xing, Xiaofen; Wang, Hua; Liao, D Joshua
2011-01-01
Identification of prostatic calculi is an important basis for determining the tissue origin. Computer-assisted diagnosis of prostatic calculi may have promising potential but has so far received little study. We studied the extraction of prostatic lumina and automated recognition for calculus images. Extraction of lumina from prostate histology images was based on local entropy and Otsu thresholding; recognition was based on PCA-SVM using the texture features of prostatic calculi. The SVM classifier showed an average runtime of 0.1432 seconds, an average training accuracy of 100%, an average test accuracy of 93.12%, a sensitivity of 87.74%, and a specificity of 94.82%. We concluded that the algorithm, based on texture features and PCA-SVM, can recognize the concentric structure and visualized features easily. Therefore, this method is effective for the automated recognition of prostatic calculi.
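A hedged sketch of the PCA-SVM classification stage (feature extraction from the lumina regions assumed done, giving a feature matrix X and binary labels y; kernel and component count are illustrative):

```python
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def make_pca_svm(n_components=10):
    # Standardize texture features, reduce dimensionality, classify
    return make_pipeline(StandardScaler(),
                         PCA(n_components=n_components),
                         SVC(kernel="rbf", C=1.0, gamma="scale"))

# scores = cross_val_score(make_pca_svm(), X, y, cv=5)  # X, y from your data
```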
Computer-aided diagnostic detection system of venous beading in retinal images
NASA Astrophysics Data System (ADS)
Yang, Ching-Wen; Ma, DyeJyun; Chao, ShuennChing; Wang, ChuinMu; Wen, Chia-Hsien; Lo, ChienShun; Chung, Pau-Choo; Chang, Chein-I.
2000-05-01
The detection of venous beading in retinal images provides an early sign of diabetic retinopathy and plays an important role as a preprocessing step in diagnosing ocular diseases. We present a computer-aided diagnostic system to automatically detect venous beading of blood vessels. It comprises two modules, referred to as the blood vessel extraction module and the venous beading detection module. The former uses a bell-shaped Gaussian kernel with 12 azimuths to extract blood vessels, while the latter applies a neural network-based shape cognitron to detect venous beading among the extracted blood vessels for diagnosis. Both modules are fully computer-automated. To evaluate the proposed system, 61 retinal images (32 beaded and 29 normal images) are used for performance evaluation.
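The bell-shaped Gaussian kernel with 12 azimuths corresponds to a classic rotated matched-filter bank for vessel extraction; a minimal sketch with illustrative kernel dimensions (not the authors' exact kernel) is:

```python
import numpy as np
from scipy import ndimage

def matched_filter_bank(image, sigma=2.0, length=9, n_angles=12):
    """Vessel enhancement with a bank of rotated Gaussian matched filters:
    Gaussian profile across the vessel, constant along it, zero-mean."""
    half = length // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    responses = []
    for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        u = x * np.cos(theta) + y * np.sin(theta)   # across-vessel coordinate
        kernel = -np.exp(-u**2 / (2.0 * sigma**2))  # vessels are dark
        kernel -= kernel.mean()                     # zero-mean matched filter
        responses.append(ndimage.convolve(image.astype(float), kernel))
    return np.max(responses, axis=0)                # best response over azimuths
```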
NASA Technical Reports Server (NTRS)
Lowrie, J. W.; Fermelia, A. J.; Haley, D. C.; Gremban, K. D.; Vanbaalen, J.; Walsh, R. W.
1982-01-01
Documentation of the preliminary software developed as a framework for a generalized integrated robotic system simulation is presented. The program structure is composed of three major functions controlled by a program executive. The three major functions are: system definition, analysis tools, and post processing. The system definition function handles user input of system parameters and definition of the manipulator configuration. The analysis tools function handles the computational requirements of the program. The post processing function allows for more detailed study of the results of analysis tool function executions. Also documented is the manipulator joint model software to be used as the basis of the manipulator simulation which will be part of the analysis tools capability.
Artificial intelligence for multi-mission planetary operations
NASA Technical Reports Server (NTRS)
Atkinson, David J.; Lawson, Denise L.; James, Mark L.
1990-01-01
A brief introduction is given to an automated system called the Spacecraft Health Automated Reasoning Prototype (SHARP). SHARP is designed to demonstrate automated health and status analysis for multi-mission spacecraft and ground data systems operations. The SHARP system combines conventional computer science methodologies with artificial intelligence techniques to produce an effective method for detecting and analyzing potential spacecraft and ground systems problems. The system performs real-time analysis of spacecraft and other related telemetry, and is also capable of examining data in historical context. Telecommunications link analysis of the Voyager II spacecraft is the initial focus for evaluation of the prototype in a real-time operations setting during the Voyager spacecraft encounter with Neptune in August 1989. The preliminary results of the SHARP project and plans for future application of the technology are discussed.
FOLD-EM: automated fold recognition in medium- and low-resolution (4-15 Å) electron density maps.
Saha, Mitul; Morais, Marc C
2012-12-15
Owing to the size and complexity of large multi-component biological assemblies, the most tractable approach to determining their atomic structure is often to fit high-resolution radiographic or nuclear magnetic resonance structures of isolated components into lower resolution electron density maps of the larger assembly obtained using cryo-electron microscopy (cryo-EM). This hybrid approach to structure determination requires that an atomic resolution structure of each component, or a suitable homolog, is available. If neither is available, then the amount of structural information regarding that component is limited by the resolution of the cryo-EM map. However, even if a suitable homolog cannot be identified using sequence analysis, a search for structural homologs should still be performed because structural homology often persists throughout evolution even when sequence homology is undetectable. As macromolecules can often be described as a collection of independently folded domains, one way of searching for structural homologs would be to systematically fit representative domain structures from a protein domain database into the medium/low resolution cryo-EM map and return the best fits. Taken together, the best fitting non-overlapping structures would constitute a 'mosaic' backbone model of the assembly that could aid map interpretation and illuminate biological function. Using the computational principles of the Scale-Invariant Feature Transform (SIFT), we have developed FOLD-EM, a computational tool that can identify folded macromolecular domains in medium to low resolution (4-15 Å) electron density maps and return a model of the constituent polypeptides in a fully automated fashion. As a by-product, FOLD-EM can also do flexible multi-domain fitting that may provide insight into conformational changes that occur in macromolecular assemblies.
NASA Technical Reports Server (NTRS)
Boyle, W. G.; Barton, G. W.
1979-01-01
The feasibility of computerized automation of the Analytical Laboratories Section at NASA's Lewis Research Center was considered. Since that laboratory's duties are not routine, the automation goals were set with that in mind. Four instruments were selected as the most likely automation candidates: an atomic absorption spectrophotometer, an emission spectrometer, an X-ray fluorescence spectrometer, and an X-ray diffraction unit. Two options for computer automation were described: a time-shared central computer and a system with microcomputers for each instrument connected to a central computer. A third option, presented for future planning, expands the microcomputer version. Costs and benefits for each option were considered. It was concluded that the microcomputer version best fits the goals and duties of the laboratory and that such an automated system is needed to meet the laboratory's future requirements.
Automated segmentation of foveal avascular zone in fundus fluorescein angiography.
Zheng, Yalin; Gandhi, Jagdeep Singh; Stangos, Alexandros N; Campa, Claudio; Broadbent, Deborah M; Harding, Simon P
2010-07-01
PURPOSE. To describe and evaluate the performance of a computerized automated segmentation technique for use in quantification of the foveal avascular zone (FAZ). METHODS. A computerized technique for automated segmentation of the FAZ using images from fundus fluorescein angiography (FFA) was applied to 26 transit-phase images obtained from patients with various grades of diabetic retinopathy. The area containing the FAZ zone was first extracted from the original image and smoothed by a Gaussian kernel (sigma = 1.5). An initializing contour was manually placed inside the FAZ of the smoothed image and iteratively moved by the segmentation program toward the FAZ boundary. Five tests with different initializing curves were run on each of 26 images to assess reproducibility. The accuracy of the program was also validated by comparing results obtained by the program with the FAZ boundaries manually delineated by medical retina specialists. Interobserver performance was then evaluated by comparing delineations from two of the experts. RESULTS. One-way analysis of variance indicated that the disparities between different tests were not statistically significant, signifying excellent reproducibility for the computer program. There was a statistically significant linear correlation between the results obtained by automation and manual delineations by experts. CONCLUSIONS. This automated segmentation program can produce highly reproducible results that are comparable to those made by clinical experts. It has the potential to assist in the detection and management of foveal ischemia and to be integrated into automated grading systems.
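As a rough illustration of seeded FAZ extraction, deliberately simpler than the iterative contour evolution described above, one can grow the dark region around the manually placed seed by thresholding and connected-component labeling; the tolerance rule below is an assumption:

```python
import numpy as np
from scipy import ndimage

def segment_faz(smoothed, seed, tol=None):
    """Greatly simplified stand-in for the contour-evolution step:
    keep the dark connected region containing the manually placed seed.
    `smoothed` is the Gaussian-smoothed FFA image (sigma = 1.5)."""
    if tol is None:
        tol = 0.5 * smoothed.std()            # illustrative tolerance
    dark = smoothed <= smoothed[seed] + tol   # FAZ is hypofluorescent (dark)
    labels, _ = ndimage.label(dark)
    return labels == labels[seed]             # component containing the seed

# faz_mask = segment_faz(ndimage.gaussian_filter(ffa_image, 1.5), (row, col))
# faz_area_px = faz_mask.sum()
```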
NASA Astrophysics Data System (ADS)
Krappe, Sebastian; Wittenberg, Thomas; Haferlach, Torsten; Münzenmayer, Christian
2016-03-01
The morphological differentiation of bone marrow is fundamental for the diagnosis of leukemia. Currently, the counting and classification of the different types of bone marrow cells is done manually under the use of bright field microscopy. This is a time-consuming, subjective, tedious and error-prone process. Furthermore, repeated examinations of a slide may yield intra- and inter-observer variances. For that reason a computer assisted diagnosis system for bone marrow differentiation is pursued. In this work we focus (a) on a new method for the separation of nucleus and plasma parts and (b) on a knowledge-based hierarchical tree classifier for the differentiation of bone marrow cells in 16 different classes. Classification trees are easily interpretable and understandable and provide a classification together with an explanation. Using classification trees, expert knowledge (i.e. knowledge about similar classes and cell lines in the tree model of hematopoiesis) is integrated in the structure of the tree. The proposed segmentation method is evaluated with more than 10,000 manually segmented cells. For the evaluation of the proposed hierarchical classifier more than 140,000 automatically segmented bone marrow cells are used. Future automated solutions for the morphological analysis of bone marrow smears could potentially apply such an approach for the pre-classification of bone marrow cells and thereby shortening the examination time.
Comparison of Automated Brain Volume Measures obtained with NeuroQuant and FreeSurfer.
Ochs, Alfred L; Ross, David E; Zannoni, Megan D; Abildskov, Tracy J; Bigler, Erin D
2015-01-01
To examine intermethod reliabilities and differences between FreeSurfer and the FDA-cleared congener, NeuroQuant, both fully automated methods for structural brain MRI measurements. MRI scans from 20 normal control subjects, 20 Alzheimer's disease patients, and 20 mild traumatically brain-injured patients were analyzed with NeuroQuant and with FreeSurfer. Intermethod reliability was evaluated. Pairwise correlation coefficients, intraclass correlation coefficients, and effect size differences were computed. NeuroQuant versus FreeSurfer measures showed excellent to good intermethod reliability for the 21 regions evaluated (r: .63 to .99/ICC: .62 to .99/ES: -.33 to 2.08) except for the pallidum (r/ICC/ES = .31/.29/-2.2) and cerebellar white matter (r/ICC/ES = .31/.31/.08). Volumes reported by NeuroQuant were generally larger than those reported by FreeSurfer with the whole brain parenchyma volume reported by NeuroQuant 6.50% larger than the volume reported by FreeSurfer. There was no systematic difference in results between the 3 subgroups. NeuroQuant and FreeSurfer showed good to excellent intermethod reliability in volumetric measurements for all brain regions examined with the only exceptions being the pallidum and cerebellar white matter. This finding was robust for normal individuals, patients with Alzheimer's disease, and patients with mild traumatic brain injury. Copyright © 2015 by the American Society of Neuroimaging.
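The reported comparisons boil down to paired correlation, intraclass correlation, and effect-size computations. A compact numpy sketch follows, using the two-way absolute-agreement ICC(2,1) form and Cohen's d with pooled SD, which may differ from the exact variants used in the study:

```python
import numpy as np

def intermethod_stats(a, b):
    """Pearson r, ICC(2,1), and Cohen's d for paired measurements
    from two methods (e.g. NeuroQuant vs. FreeSurfer volumes)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n = len(a)
    r = np.corrcoef(a, b)[0, 1]
    data = np.stack([a, b], axis=1)              # n subjects x 2 raters
    mean_s, mean_r, gm = data.mean(1), data.mean(0), data.mean()
    msr = 2 * np.sum((mean_s - gm) ** 2) / (n - 1)    # between-subject MS
    msc = n * np.sum((mean_r - gm) ** 2)              # between-rater MS (k=2)
    mse = np.sum((data - mean_s[:, None] - mean_r[None, :] + gm) ** 2) / (n - 1)
    icc = (msr - mse) / (msr + mse + 2 * (msc - mse) / n)
    d = (a - b).mean() / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return r, icc, d
```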
Automated pulmonary lobar ventilation measurements using volume-matched thoracic CT and MRI
NASA Astrophysics Data System (ADS)
Guo, F.; Svenningsen, S.; Bluemke, E.; Rajchl, M.; Yuan, J.; Fenster, A.; Parraga, G.
2015-03-01
Objectives: To develop and evaluate an automated registration and segmentation pipeline for regional lobar pulmonary structure-function measurements, using volume-matched thoracic CT and MRI in order to guide therapy. Methods: Ten subjects underwent pulmonary function tests and volume-matched 1H and 3He MRI and thoracic CT during a single 2-hr visit. CT was registered to 1H MRI using an affine method that incorporated block-matching and this was followed by a deformable step using free-form deformation. The resultant deformation field was used to deform the associated CT lobe mask that was generated using commercial software. 3He-1H image registration used the same two-step registration method and 3He ventilation was segmented using hierarchical k-means clustering. Whole lung and lobar 3He ventilation and ventilation defect percent (VDP) were generated by mapping ventilation defects to CT-defined whole lung and lobe volumes. Target CT-3He registration accuracy was evaluated using region-, surface distance- and volume-based metrics. Automated whole lung and lobar VDP was compared with semi-automated and manual results using paired t-tests. Results: The proposed pipeline yielded regional spatial agreement of 88.0 ± 0.9% and surface distance error of 3.9 ± 0.5 mm. Automated and manual whole lung and lobar ventilation and VDP were not significantly different and they were significantly correlated (r = 0.77, p < 0.0001). Conclusion: The proposed automated pipeline can be used to generate regional pulmonary structural-functional maps with high accuracy and robustness, providing an important tool for image-guided pulmonary interventions.
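A single k-means pass can stand in for the hierarchical ventilation clustering when sketching the VDP computation; the cluster count and the lowest-mean-equals-defect rule are illustrative assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans

def ventilation_defect_percent(he3, lung_mask, n_clusters=4):
    """Cluster 3He signal intensities inside a CT-defined lung or lobe
    mask and call the lowest-intensity cluster 'ventilation defect';
    a single k-means pass simplifies the hierarchical scheme above."""
    vals = he3[lung_mask].reshape(-1, 1).astype(float)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(vals)
    defect_cluster = np.argmin(km.cluster_centers_.ravel())
    defect_voxels = np.sum(km.labels_ == defect_cluster)
    return 100.0 * defect_voxels / vals.shape[0]
```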
ERIC Educational Resources Information Center
Dikli, Semire
2006-01-01
The impacts of computers on writing have been widely studied for three decades. Even basic computers functions, i.e. word processing, have been of great assistance to writers in modifying their essays. The research on Automated Essay Scoring (AES) has revealed that computers have the capacity to function as a more effective cognitive tool (Attali,…
Computer-Aided Instruction in Automated Instrumentation.
ERIC Educational Resources Information Center
Stephenson, David T.
1986-01-01
Discusses functions of automated instrumentation systems, i.e., systems which combine electrical measuring instruments and a controlling computer to measure responses of a unit under test. The computer-assisted tutorial then described is programmed for use on such a system--a modern microwave spectrum analyzer--to introduce engineering students to…
Gaussian curvature analysis allows for automatic block placement in multi-block hexahedral meshing.
Ramme, Austin J; Shivanna, Kiran H; Magnotta, Vincent A; Grosland, Nicole M
2011-10-01
Musculoskeletal finite element analysis (FEA) has been essential to research in orthopaedic biomechanics. The generation of a volumetric mesh is often the most challenging step in a FEA. Hexahedral meshing tools that are based on a multi-block approach rely on the manual placement of building blocks for their mesh generation scheme. We hypothesise that Gaussian curvature analysis could be used to automatically develop a building block structure for multi-block hexahedral mesh generation. The Automated Building Block Algorithm incorporates principles from differential geometry, combinatorics, statistical analysis and computer science to automatically generate a building block structure to represent a given surface without prior information. We have applied this algorithm to 29 bones of varying geometries and successfully generated a usable mesh in all cases. This work represents a significant advancement in automating the definition of building blocks.
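The angle-deficit formula gives a simple discrete Gaussian curvature that an algorithm of this kind could analyze: K(v) = (2π minus the sum of incident triangle angles at v) divided by the barycentric vertex area. A minimal numpy version for a closed triangle mesh:

```python
import numpy as np

def gaussian_curvature(verts, faces):
    """Discrete Gaussian curvature at each vertex of a closed triangle
    mesh via the angle deficit; verts: (V, 3) floats, faces: (F, 3) ints."""
    angle_sum = np.zeros(len(verts))
    area = np.zeros(len(verts))
    for tri in faces:
        p = verts[tri]
        for i in range(3):
            e1 = p[(i + 1) % 3] - p[i]
            e2 = p[(i + 2) % 3] - p[i]
            c = e1 @ e2 / (np.linalg.norm(e1) * np.linalg.norm(e2))
            angle_sum[tri[i]] += np.arccos(np.clip(c, -1.0, 1.0))
        tri_area = 0.5 * np.linalg.norm(np.cross(p[1] - p[0], p[2] - p[0]))
        area[tri] += tri_area / 3.0          # barycentric area share
    return (2.0 * np.pi - angle_sum) / area
```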
Creation of a virtual cutaneous tissue bank
NASA Astrophysics Data System (ADS)
LaFramboise, William A.; Shah, Sujal; Hoy, R. W.; Letbetter, D.; Petrosko, P.; Vennare, R.; Johnson, Peter C.
2000-04-01
Cellular and non-cellular constituents of skin contain fundamental morphometric features and structural patterns that correlate with tissue function. High resolution digital image acquisitions performed using an automated system and proprietary software to assemble adjacent images and create a contiguous, lossless, digital representation of individual microscope slide specimens. Serial extraction, evaluation and statistical analysis of cutaneous feature is performed utilizing an automated analysis system, to derive normal cutaneous parameters comprising essential structural skin components. Automated digital cutaneous analysis allows for fast extraction of microanatomic dat with accuracy approximating manual measurement. The process provides rapid assessment of feature both within individual specimens and across sample populations. The images, component data, and statistical analysis comprise a bioinformatics database to serve as an architectural blueprint for skin tissue engineering and as a diagnostic standard of comparison for pathologic specimens.
Application of a simple cerebellar model to geologic surface mapping
Hagens, A.; Doveton, J.H.
1991-01-01
Neurophysiological research into the structure and function of the cerebellum has inspired computational models that simulate information processing associated with coordination and motor movement. The cerebellar model arithmetic computer (CMAC) has a design structure which makes it readily applicable as an automated mapping device that "senses" a surface, based on a sample of discrete observations of surface elevation. The model operates as an iterative learning process, where cell weights are continuously modified by feedback to improve surface representation. The storage requirements are substantially less than those of a conventional memory allocation, and the model is extended easily to mapping in multidimensional space, where the memory savings are even greater. © 1991.
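The CMAC idea of overlapping coarse receptive fields with iterative error-driven weight updates fits in a few lines. In this sketch the layer count, resolution, and learning rate are illustrative, and a hash-table weight store stands in for CMAC's memory-saving address mapping:

```python
import numpy as np

class TinyCMAC:
    """Minimal CMAC for learning z = f(x, y) from scattered elevations:
    several offset coarse grids each hash the input to one cell, and the
    output is the sum of the addressed cell weights."""
    def __init__(self, n_layers=8, res=0.5, lr=0.2):
        self.n_layers, self.res, self.lr = n_layers, res, lr
        self.tables = [dict() for _ in range(n_layers)]   # sparse weights

    def _cells(self, x, y):
        for i, table in enumerate(self.tables):
            off = i * self.res / self.n_layers            # layer offset
            yield table, (int((x + off) // self.res), int((y + off) // self.res))

    def predict(self, x, y):
        return sum(t.get(c, 0.0) for t, c in self._cells(x, y))

    def learn(self, x, y, z):
        err = z - self.predict(x, y)                      # feedback signal
        for t, c in self._cells(x, y):
            t[c] = t.get(c, 0.0) + self.lr * err / self.n_layers

cmac = TinyCMAC()
for _ in range(200):                       # iterative learning from samples
    cmac.learn(1.3, 2.7, 10.0)
print(round(cmac.predict(1.3, 2.7), 2))    # -> approaches 10.0
```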
An Analysis for Capital Expenditure Decisions at a Naval Regional Medical Center.
1981-12-01
Ranked equipment priorities (service ranking vs. Equipment Review Committee ranking): 1. portable defibrillator and cardioscope / computed tomographic scanner; 2. ECG cart / automated blood cell counter; 3. gas system sterilizer / gas system sterilizer; 4. automated blood cell counter / portable defibrillator and cardioscope; 5. computed tomographic scanner / ECG cart. ... word processing (dictating and automated typing) systems; e. filing equipment; f. automatic data processing equipment, including data communications equipment; g. ...
Fananapazir, Ghaneh; Bashir, Mustafa R; Marin, Daniele; Boll, Daniel T
2015-06-01
To evaluate the performance of a prototype fully-automated post-processing solution for whole-liver and lobar segmentation based on MDCT datasets. A polymer liver phantom was used to assess accuracy of post-processing applications comparing phantom volumes determined via Archimedes' principle with MDCT segmented datasets. For the IRB-approved, HIPAA-compliant study, 25 patients were enrolled. Volumetry performance compared the manual approach with the automated prototype, assessing intraobserver variability and intraclass correlation for whole-organ and lobar segmentation using ANOVA comparison. Fidelity of segmentation was evaluated qualitatively. Phantom volume was 1581.0 ± 44.7 mL; manually segmented datasets estimated 1628.0 ± 47.8 mL, representing a mean overestimation of 3.0%, while automatically segmented datasets estimated 1601.9 ± 0 mL, representing a mean overestimation of 1.3%. Whole-liver and segmental volumetry demonstrated no significant intraobserver variability for either manual or automated measurements. For whole-liver volumetry, automated measurement repetitions resulted in identical values; reproducible whole-organ volumetry was also achieved with manual segmentation, p(ANOVA) 0.98. For lobar volumetry, automated segmentation improved reproducibility over the manual approach, without significant measurement differences for either methodology, p(ANOVA) 0.95-0.99. Whole-organ and lobar segmentation results from manual and automated segmentation showed no significant differences, p(ANOVA) 0.96-1.00. Assessment of segmentation fidelity found that segments I-IV/VI showed greater segmentation inaccuracies compared to the remaining right hepatic lobe segments. Automated whole-liver segmentation showed non-inferiority compared to manual approaches, with improved reproducibility and post-processing duration; automated dual-seed lobar segmentation showed slight tendencies toward underestimating the right hepatic lobe volume and greater variability in edge detection for the left hepatic lobe compared to manual segmentation.
Advanced Methodology for Simulation of Complex Flows Using Structured Grid Systems
NASA Technical Reports Server (NTRS)
Steinthorsson, Erlendur; Modiano, David
1995-01-01
Detailed simulations of viscous flows in complicated geometries pose a significant challenge to current capabilities of Computational Fluid Dynamics (CFD). To enable routine application of CFD to this class of problems, advanced methodologies are required that employ (a) automated grid generation, (b) adaptivity, (c) accurate discretizations and efficient solvers, and (d) advanced software techniques. Each of these ingredients contributes to increased accuracy, efficiency (in terms of human effort and computer time), and/or reliability of CFD software. In the long run, methodologies employing structured grid systems will remain a viable choice for routine simulation of flows in complex geometries only if genuinely automatic grid generation techniques for structured grids can be developed and if adaptivity is employed more routinely. More research in both these areas is urgently needed.
Developments in REDES: The rocket engine design expert system
NASA Technical Reports Server (NTRS)
Davidian, Kenneth O.
1990-01-01
The Rocket Engine Design Expert System (REDES) is being developed at NASA-Lewis to collect, automate, and perpetuate the existing expertise of performing a comprehensive rocket engine analysis and design. Currently, REDES uses the rigorous JANNAF methodology to analyze the performance of the thrust chamber and perform computational studies of liquid rocket engine problems. The following computer codes were included in REDES: a gas properties program named GASP, a nozzle design program named RAO, a regenerative cooling channel performance evaluation code named RTE, and the JANNAF standard liquid rocket engine performance prediction code TDK (including performance evaluation modules ODE, ODK, TDE, TDK, and BLM). Computational analyses are being conducted by REDES to provide solutions to liquid rocket engine thrust chamber problems. REDES is built in the Knowledge Engineering Environment (KEE) expert system shell and runs on a Sun 4/110 computer.
Developments in REDES: The Rocket Engine Design Expert System
NASA Technical Reports Server (NTRS)
Davidian, Kenneth O.
1990-01-01
The Rocket Engine Design Expert System (REDES) was developed at NASA-Lewis to collect, automate, and perpetuate the existing expertise of performing a comprehensive rocket engine analysis and design. Currently, REDES uses the rigorous JANNAF methodology to analyze the performance of the thrust chamber and perform computational studies of liquid rocket engine problems. The following computer codes were included in REDES: a gas properties program named GASP; a nozzle design program named RAO; a regenerative cooling channel performance evaluation code named RTE; and the JANNAF standard liquid rocket engine performance prediction code TDK (including performance evaluation modules ODE, ODK, TDE, TDK, and BLM). Computational analyses are being conducted by REDES to provide solutions to liquid rocket engine thrust chamber problems. REDES was built in the Knowledge Engineering Environment (KEE) expert system shell and runs on a Sun 4/110 computer.
1982-02-01
... to develop an awareness of the T&E roles and responsibilities of the various Air Force organizations involved in the T&E process... mathematical models to determine controller messages and issue controller messages using computer-generated speech. AUTOMATED PERFORMANCE ALERTS: Signals
Automated Error Detection for Developing Grammar Proficiency of ESL Learners
ERIC Educational Resources Information Center
Feng, Hui-Hsien; Saricaoglu, Aysel; Chukharev-Hudilainen, Evgeny
2016-01-01
Thanks to natural language processing technologies, computer programs are actively being used not only for holistic scoring, but also for formative evaluation of writing. CyWrite is one such program that is under development. The program is built upon Second Language Acquisition theories and aims to assist ESL learners in higher education by…
How Learners Use Automated Computer-Based Feedback to Produce Revised Drafts of Essays
ERIC Educational Resources Information Center
Laing, Jonny; El Ebyary, Khaled; Windeatt, Scott
2012-01-01
Our previous results suggest that the use of "Criterion", an automatic writing evaluation (AWE) system, is particularly successful in encouraging learners to produce amended drafts of their essays, and that those amended drafts generally represent an improvement on the original submission. Our analysis of the submitted essays and the…
NDE: A key to engine rotor life prediction
NASA Technical Reports Server (NTRS)
Doherty, J. E.
1977-01-01
A key ingredient in the establishment of safe life times for critical components is the means of reliably detecting flaws which may potentially exist. Currently used nondestructive evaluation procedures are successful in detecting life limiting defects; however, the development of automated and computer aided NDE technology permits even greater assurance of flight safety.
Automated detection scheme of architectural distortion in mammograms using adaptive Gabor filter
NASA Astrophysics Data System (ADS)
Yoshikawa, Ruriha; Teramoto, Atsushi; Matsubara, Tomoko; Fujita, Hiroshi
2013-03-01
Breast cancer is a serious health concern for all women. Computer-aided detection for mammography has been used for detecting masses and micro-calcifications. However, automated detection of architectural distortion remains challenging in terms of sensitivity. In this study, we propose a novel automated method for detecting architectural distortion. Our method consists of the analysis of the mammary gland structure, detection of the distorted region, and reduction of false positive results. We developed an adaptive Gabor filter for analyzing the mammary gland structure, which selects filter parameters according to the thickness of the gland structure. As for post-processing, healthy mammary glands that run from the nipple to the chest wall are eliminated by angle analysis. Moreover, background mammary glands are removed based on the intensity output image obtained from the adaptive Gabor filter. The distorted region of the mammary gland is then detected as an initial candidate using a concentration index, followed by binarization and labeling. False positives in the initial candidates are eliminated using 23 types of characteristic features and a support vector machine. In the experiments, we compared the automated detection results with interpretations by a radiologist using 50 cases (200 images) from the Digital Database for Screening Mammography (DDSM). As a result, the true positive rate was 82.72%, and the number of false positives per image was 1.39. These results indicate that the proposed method may be useful for detecting architectural distortion in mammograms.
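A plain, non-adaptive Gabor bank conveys the filtering step; here the adaptation to gland thickness is approximated by sweeping several wavelengths and keeping the per-pixel maximum response, which is an assumption rather than the authors' parameter rule:

```python
import cv2
import numpy as np

def gabor_bank_response(image, wavelengths=(4, 8, 16), n_angles=16):
    """Maximum Gabor response per pixel over several wavelengths
    (standing in for gland thickness) and orientations."""
    img = image.astype(np.float32)
    best = np.full(img.shape, -np.inf, np.float32)
    for lam in wavelengths:
        sigma = 0.5 * lam                       # tie envelope to wavelength
        for theta in np.linspace(0, np.pi, n_angles, endpoint=False):
            kern = cv2.getGaborKernel((31, 31), sigma, theta, lam, 0.5, 0,
                                      ktype=cv2.CV_32F)
            kern -= kern.mean()                 # zero-mean to suppress offset
            best = np.maximum(best, cv2.filter2D(img, cv2.CV_32F, kern))
    return best
```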
Alxneit, Ivo
2018-03-30
A Python module (HRTEMFringeAnalyzer) is reported to evaluate the local crystallinity of samples from high-resolution transmission electron microscopy images in a mostly automated fashion. The user only selects the size of a square analyser window and a step size which translates the window across the micrograph. Together they define the resolution of the results obtained. Regions where fringe patterns are visible are identified, and their lattice spacing d and direction φ, as well as the corresponding mean errors σ_d and σ_φ, are determined. 1/σ_d is proportional to the coherence length of the structure, whereas σ_φ is a measure of how well the direction of the fringes is defined. Maps of these four indicators are computed. The performance of the program is demonstrated on two very different samples: ill-crystalline carbon deposits on a coked Ni/LFNO (reduced LaFe0.8Ni0.2O3±δ) catalyst and well-crystallized nanoparticles of zinc-doped ceria. In the latter case, the automatic segmentation of large aggregates into individual crystalline domains is achieved by the φ maps. © 2018 The Authors Journal of Microscopy © 2018 Royal Microscopical Society.
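The per-window fringe measurement reduces to locating the strongest non-DC peak of a 2D FFT. A naive sketch follows (the module's actual peak fitting and error estimation are more elaborate; the DC-suppression radius is an assumption):

```python
import numpy as np

def fringe_spacing_and_direction(tile, px_size):
    """Estimate lattice-fringe spacing d and direction phi from one
    square analyser window via the strongest non-DC 2D FFT component.
    tile: square 2D array; px_size: pixel size in the same length unit."""
    n = tile.shape[0]
    win = np.outer(np.hanning(n), np.hanning(n))
    spec = np.abs(np.fft.fftshift(np.fft.fft2((tile - tile.mean()) * win)))
    cy = cx = n // 2
    spec[cy - 2:cy + 3, cx - 2:cx + 3] = 0.0      # suppress the DC region
    iy, ix = np.unravel_index(np.argmax(spec), spec.shape)
    fy, fx = (iy - cy) / (n * px_size), (ix - cx) / (n * px_size)
    d = 1.0 / np.hypot(fx, fy)                    # fringe spacing
    phi = np.degrees(np.arctan2(fy, fx))          # fringe-normal direction
    return d, phi
```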
Survey statistics of automated segmentations applied to optical imaging of mammalian cells.
Bajcsy, Peter; Cardone, Antonio; Chalfoun, Joe; Halter, Michael; Juba, Derek; Kociolek, Marcin; Majurski, Michael; Peskin, Adele; Simon, Carl; Simon, Mylene; Vandecreme, Antoine; Brady, Mary
2015-10-15
The goal of this survey paper is to overview cellular measurements using optical microscopy imaging followed by automated image segmentation. The cellular measurements of primary interest are taken from mammalian cells and their components. They are denoted as two- or three-dimensional (2D or 3D) image objects of biological interest. In our applications, such cellular measurements are important for understanding cell phenomena, such as cell counts, cell-scaffold interactions, cell colony growth rates, or cell pluripotency stability, as well as for establishing quality metrics for stem cell therapies. In this context, this survey paper is focused on automated segmentation as a software-based measurement leading to quantitative cellular measurements. We define the scope of this survey and a classification schema first. Next, all found and manually filtered publications are classified according to the main categories: (1) objects of interest (or objects to be segmented), (2) imaging modalities, (3) digital data axes, (4) segmentation algorithms, (5) segmentation evaluations, (6) computational hardware platforms used for segmentation acceleration, and (7) object (cellular) measurements. Finally, all classified papers are converted programmatically into a set of hyperlinked web pages with occurrence and co-occurrence statistics of assigned categories. The survey paper presents to a reader: (a) the state-of-the-art overview of published papers about automated segmentation applied to optical microscopy imaging of mammalian cells, (b) a classification of segmentation aspects in the context of cell optical imaging, (c) histogram and co-occurrence summary statistics about cellular measurements, segmentations, segmented objects, segmentation evaluations, and the use of computational platforms for accelerating segmentation execution, and (d) open research problems to pursue. The novel contributions of this survey paper are: (1) a new type of classification of cellular measurements and automated segmentation, (2) statistics about the published literature, and (3) a web hyperlinked interface to classification statistics of the surveyed papers at https://isg.nist.gov/deepzoomweb/resources/survey/index.html.
NASA Technical Reports Server (NTRS)
Baumann, P. R. (Principal Investigator)
1979-01-01
Three computer quantitative techniques for determining urban land cover patterns are evaluated. The techniques examined deal with the selection of training samples by an automated process, the overlaying of two scenes from different seasons of the year, and the use of individual pixels as training points. Evaluation is based on the number and type of land cover classes generated and the marks obtained from an accuracy test. New Orleans, Louisiana and its environs form the study area.
Design and validation of an automated hydrostatic weighing system.
McClenaghan, B A; Rocchio, L
1986-08-01
The purpose of this study was to design and evaluate the validity of an automated technique to assess body density using a computerized hydrostatic weighing system. An existing hydrostatic tank was modified and interfaced with a microcomputer equipped with an analog-to-digital converter. Software was designed to input variables, control the collection of data, calculate selected measurements, and provide a summary of the results of each session. Validity of the data obtained utilizing the automated hydrostatic weighing system was estimated by: evaluating the reliability of the transducer/computer interface to measure objects of known underwater weight; comparing the data against a criterion measure; and determining inter-session subject reliability. Values obtained from the automated system were found to be highly correlated with known underwater weights (r = 0.99, SEE = 0.0060 kg). Data concurrently obtained utilizing the automated system and a manual chart recorder were also found to be highly correlated (r = 0.99, SEE = 0.0606 kg). Inter-session subject reliability was determined utilizing data collected on subjects (N = 16) tested on two occasions approximately 24 h apart. Correlations revealed high relationships between measures of underwater weight (r = 0.99, SEE = 0.1399 kg) and body density (r = 0.98, SEE = 0.00244 g/cm³). Results indicate that a computerized hydrostatic weighing system is a valid and reliable method for determining underwater weight.
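The underlying densitometry arithmetic is short: body volume follows from the apparent loss of weight under water (Archimedes' principle), corrected for residual lung volume. The water density and residual volume below are illustrative values:

```python
def body_density(weight_air_kg, weight_water_kg, water_density=0.9957,
                 residual_volume_l=1.2):
    """Hydrostatic densitometry: density = mass / (displaced volume minus
    residual lung volume). Water density here corresponds to roughly 30 C;
    the residual volume is an illustrative estimate. kg/L equals g/cm^3."""
    volume_l = (weight_air_kg - weight_water_kg) / water_density - residual_volume_l
    return weight_air_kg / volume_l

# Example: 75.0 kg in air, 3.2 kg under water -> density ~1.058 g/cm^3
print(round(body_density(75.0, 3.2), 4))
```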
Study to design and develop remote manipulator system. [computer simulation of human performance
NASA Technical Reports Server (NTRS)
Hill, J. W.; Mcgovern, D. E.; Sword, A. J.
1974-01-01
Modeling of human performance in remote manipulation tasks is reported, based on automated procedures that use computers to analyze and count motions during a manipulation task. Performance is monitored by an on-line computer capable of measuring the joint angles of both master and slave and in some cases the trajectory and velocity of the hand itself. In this way the operator's strategies with different transmission delays, displays, tasks, and manipulators can be analyzed in detail for comparison. Some progress is described in obtaining a set of standard tasks and difficulty measures for evaluating manipulator performance.
Abriata, Luciano A; Kinch, Lisa N; Tamò, Giorgio E; Monastyrskyy, Bohdan; Kryshtafovych, Andriy; Dal Peraro, Matteo
2018-03-01
For assessment purposes, CASP targets are split into evaluation units. We herein present the official definition of CASP12 evaluation units (EUs) and their classification into difficulty categories. Each target can be evaluated as one EU (the whole target) and/or several EUs (separate structural domains or groups of structural domains). The specific scenario for a target split is determined by the domain organization of available templates, the difference in server performance on separate domains versus combination of the domains, and visual inspection. In the end, 71 targets were split into 96 EUs. Classification of the EUs into difficulty categories was done semi-automatically with the assistance of metrics provided by the Prediction Center. These metrics account for sequence and structural similarities of the EUs to potential structural templates from the Protein Data Bank, and for the baseline performance of automated server predictions. The metrics readily separate the 96 EUs into 38 EUs that should be straightforward for template-based modeling (TBM) and 39 that are expected to be hard for homology modeling and are thus left for free modeling (FM). The remaining 19 borderline evaluation units were dubbed FM/TBM, and were inspected case by case. The article also overviews structural and evolutionary features of selected targets relevant to our accompanying article presenting the assessment of FM and FM/TBM predictions, and overviews structural features of the hardest evaluation units from the FM category. We finally suggest improvements for the EU definition and classification procedures. © 2017 Wiley Periodicals, Inc.
Automating usability of ATLAS Distributed Computing resources
NASA Astrophysics Data System (ADS)
Tupputi, S. A.; Di Girolamo, A.; Kouba, T.; Schovancová, J.; Atlas Collaboration
2014-06-01
The automation of ATLAS Distributed Computing (ADC) operations is essential to reduce manpower costs and allow performance-enhancing actions, which improve the reliability of the system. In this perspective a crucial case is the automatic handling of outages of ATLAS computing sites storage resources, which are continuously exploited at the edge of their capabilities. It is challenging to adopt unambiguous decision criteria for storage resources of non-homogeneous types, sizes and roles. The recently developed Storage Area Automatic Blacklisting (SAAB) tool has provided a suitable solution, by employing an inference algorithm which processes history of storage monitoring tests outcome. SAAB accomplishes both the tasks of providing global monitoring as well as automatic operations on single sites. The implementation of the SAAB tool has been the first step in a comprehensive review of the storage areas monitoring and central management at all levels. Such review has involved the reordering and optimization of SAM tests deployment and the inclusion of SAAB results in the ATLAS Site Status Board with both dedicated metrics and views. The resulting structure allows monitoring the storage resources status with fine time-granularity and automatic actions to be taken in foreseen cases, like automatic outage handling and notifications to sites. Hence, the human actions are restricted to reporting and following up problems, where and when needed. In this work we show SAAB working principles and features. We present also the decrease of human interactions achieved within the ATLAS Computing Operation team. The automation results in a prompt reaction to failures, which leads to the optimization of resource exploitation.
NASA Astrophysics Data System (ADS)
Morris, Phillip A.
The prevalence of low-cost side scanning sonar systems mounted on small recreational vessels has created improved opportunities to identify and map submerged navigational hazards in freshwater impoundments. However, these economical sensors also present unique challenges for automated techniques. This research explores related literature in automated sonar imagery processing and mapping technology, proposes and implements a framework derived from these sources, and evaluates the approach with video collected from a recreational grade sonar system. Image analysis techniques including optical character recognition and an unsupervised computer automated detection (CAD) algorithm are employed to extract the transducer GPS coordinates and slant range distance of objects protruding from the lake bottom. The retrieved information is formatted for inclusion into a spatial mapping model. Specific attributes of the sonar sensors are modeled such that probability profiles may be projected onto a three dimensional gridded map. These profiles are computed from multiple points of view as sonar traces crisscross or come near each other. As lake levels fluctuate over time so do the elevation points of view. With each sonar record, the probability of a hazard existing at certain elevations at the respective grid points is updated with Bayesian mechanics. As reinforcing data is collected, the confidence of the map improves. Given a lake's current elevation and a vessel draft, a final generated map can identify areas of the lake that have a high probability of containing hazards that threaten navigation. The approach is implemented in C/C++ utilizing OpenCV, Tesseract OCR, and QGIS open source software and evaluated in a designated test area at Lake Lavon, Collin County, Texas.
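The per-cell Bayesian update described above is conveniently done in log-odds form, as in standard occupancy-grid mapping; the sensor-model probabilities below are assumptions for a recreational-grade side-scan sonar, not values from this work:

```python
import numpy as np

def update_cell(logodds, hit, p_hit=0.7, p_miss=0.4):
    """Bayesian update of one grid cell's hazard belief in log-odds form:
    add log(p/(1-p)) where p is the sensor model for a hit or a miss."""
    p = p_hit if hit else p_miss
    return logodds + np.log(p / (1.0 - p))

def probability(logodds):
    return 1.0 / (1.0 + np.exp(-logodds))

l = 0.0                                          # prior P(hazard) = 0.5
for observation in [True, True, False, True]:    # crisscrossing passes
    l = update_cell(l, observation)
print(round(probability(l), 3))                  # belief after 4 looks (~0.894)
```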
A fully automated system for quantification of background parenchymal enhancement in breast DCE-MRI
NASA Astrophysics Data System (ADS)
Ufuk Dalmiş, Mehmet; Gubern-Mérida, Albert; Borelli, Cristina; Vreemann, Suzan; Mann, Ritse M.; Karssemeijer, Nico
2016-03-01
Background parenchymal enhancement (BPE) observed in breast dynamic contrast enhanced magnetic resonance imaging (DCE-MRI) has been identified as an important biomarker associated with risk for developing breast cancer. In this study, we present a fully automated framework for quantification of BPE. We initially segmented fibroglandular tissue (FGT) of the breasts using an improved version of an existing method. Subsequently, we computed BPEabs (volume of the enhancing tissue), BPErf (BPEabs divided by FGT volume) and BPErb (BPEabs divided by breast volume), using different relative enhancement threshold values between 1% and 100%. To evaluate and compare the previous and improved FGT segmentation methods, we used 20 breast DCE-MRI scans and we computed Dice similarity coefficient (DSC) values with respect to manual segmentations. For evaluation of the BPE quantification, we used a dataset of 95 breast DCE-MRI scans. Two radiologists, in individual reading sessions, visually analyzed the dataset and categorized each breast into minimal, mild, moderate and marked BPE. To measure the correlation between automated BPE values and the radiologists' assessments, we converted these values into ordinal categories and we used Spearman's rho as a measure of correlation. According to our results, the new segmentation method obtained an average DSC of 0.81 ± 0.09, which was significantly higher (p<0.001) compared to the previous method (0.76 ± 0.10). The highest correlation values between automated BPE categories and radiologists' assessments were obtained with the BPErf measurement (r=0.55, r=0.49, p<0.001 for both), while the correlation between the scores given by the two radiologists was 0.82 (p<0.001). The presented framework can be used to systematically investigate the correlation between BPE and risk in large screening cohorts.
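The three BPE quantities follow directly from the masks and a relative-enhancement threshold. A sketch at a single threshold follows; the 20% threshold and the 1 mm isotropic voxel size are illustrative assumptions:

```python
import numpy as np

def bpe_metrics(pre, post, fgt_mask, breast_mask, threshold=0.2,
                voxel_ml=0.001):
    """BPEabs, BPErf, BPErb from pre- and early post-contrast volumes and
    boolean FGT/breast masks; the study sweeps thresholds from 1% to 100%."""
    rel_enh = (post - pre) / np.maximum(pre, 1e-6)
    enhancing = (rel_enh > threshold) & fgt_mask
    bpe_abs = enhancing.sum() * voxel_ml                 # enhancing volume [mL]
    bpe_rf = bpe_abs / (fgt_mask.sum() * voxel_ml)       # fraction of FGT
    bpe_rb = bpe_abs / (breast_mask.sum() * voxel_ml)    # fraction of breast
    return bpe_abs, bpe_rf, bpe_rb
```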
Pahn, Gregor; Skornitzke, Stephan; Schlemmer, Hans-Peter; Kauczor, Hans-Ulrich; Stiller, Wolfram
2016-01-01
Based on the guidelines from "Report 87: Radiation Dose and Image-quality Assessment in Computed Tomography" of the International Commission on Radiation Units and Measurements (ICRU), a software framework for automated quantitative image quality analysis was developed and its usability for a variety of scientific questions demonstrated. The extendable framework currently implements the calculation of the recommended Fourier image quality (IQ) metrics, the modulation transfer function (MTF) and the noise-power spectrum (NPS), and additional IQ quantities such as noise magnitude, CT number accuracy, uniformity across the field of view, contrast-to-noise ratio (CNR), and signal-to-noise ratio (SNR) of simulated lesions for a commercially available cone-beam phantom. Sample image data were acquired with different scan and reconstruction settings on CT systems from different manufacturers. Spatial resolution is analyzed in terms of the edge-spread function, line-spread function, and MTF. The 3D NPS is calculated according to ICRU Report 87 and condensed to 2D and radially averaged 1D representations. Noise magnitude, CT numbers, and uniformity of these quantities are assessed on large samples of ROIs. Low-contrast resolution (CNR, SNR) is quantitatively evaluated as a function of lesion contrast and diameter. Simultaneous automated processing of several image datasets allows for straightforward comparative assessment. The presented framework enables systematic, reproducible, automated, and time-efficient quantitative IQ analysis. Consistent application of the ICRU guidelines facilitates standardization of quantitative assessment not only for routine quality assurance, but also for a number of research questions, e.g. the comparison of different scanner models or acquisition protocols, and the evaluation of new technology or reconstruction methods. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
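The radial condensation of a 2-D NPS to a 1-D profile can be sketched as follows; ICRU Report 87's detrending and ROI-extraction details are omitted, and the binning scheme is an assumption:

```python
import numpy as np

def radial_average_nps(nps_2d, pixel_size=1.0):
    """Condense a 2-D noise-power spectrum to a radially averaged 1-D profile."""
    n = nps_2d.shape[0]
    f = np.fft.fftshift(np.fft.fftfreq(n, d=pixel_size))   # spatial frequencies
    fx, fy = np.meshgrid(f, f)
    radius = np.hypot(fx, fy)                              # radial frequency per pixel
    n_bins = n // 2
    idx = np.minimum((radius / radius.max() * n_bins).astype(int), n_bins - 1)
    sums = np.bincount(idx.ravel(), weights=nps_2d.ravel(), minlength=n_bins)
    counts = np.bincount(idx.ravel(), minlength=n_bins)
    freqs = (np.arange(n_bins) + 0.5) * radius.max() / n_bins  # bin centers
    return freqs, sums / np.maximum(counts, 1)             # average NPS per bin
```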
Structural/aerodynamic Blade Analyzer (SAB) User's Guide, Version 1.0
NASA Technical Reports Server (NTRS)
Morel, M. R.
1994-01-01
The structural/aerodynamic blade (SAB) analyzer provides an automated tool for the static-deflection analysis of turbomachinery blades with aerodynamic and rotational loads. A structural code calculates a deflected blade shape from aerodynamic loads given as input; an aerodynamic solver computes aerodynamic loads from the deflected blade shape. The two programs are iterated automatically until the deflections converge. Currently, SAB version 1.0 is interfaced with MSC/NASTRAN to perform the structural analysis and PROP3D to perform the aerodynamic analysis. This document serves as a guide for the operation of the SAB system with specific emphasis on its use at NASA Lewis Research Center (LeRC). This guide consists of six chapters: an introduction which gives a summary of SAB; SAB's methodology, component files, links, and interfaces; input/output file structure; setup and execution of the SAB files on the Cray computers; hints and tips to advise the user; and an example problem demonstrating the SAB process. In addition, four appendices are presented to define the different computer programs used within the SAB analyzer and describe the required input decks.
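The coupled analysis can be sketched as a fixed-point loop; `aero_solve` and `structural_solve` below are toy stand-ins for the PROP3D and MSC/NASTRAN steps, so only the automatic iteration-to-convergence logic is meaningful:

```python
import numpy as np

def aero_solve(shape):
    return 0.1 * shape                       # toy load model

def structural_solve(loads):
    return 1.0 + 0.5 * loads                 # toy deflection model

def iterate_sab(shape, tol=1e-6, max_iter=50):
    """Iterate aerodynamic and structural solutions until deflections converge."""
    for _ in range(max_iter):
        new_shape = structural_solve(aero_solve(shape))
        if np.max(np.abs(new_shape - shape)) < tol:
            return new_shape                 # deflections converged
        shape = new_shape
    raise RuntimeError("SAB iteration did not converge")

print(iterate_sab(np.zeros(3)))              # converges to about 1.0526
```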
Extracting and identifying concrete structural defects in GPR images
NASA Astrophysics Data System (ADS)
Ye, Qiling; Jiao, Liangbao; Liu, Chuanxin; Cao, Xuehong; Huston, Dryver; Xia, Tian
2018-03-01
Traditionally, most GPR data interpretation is performed manually. With the advancement of computing technologies, how to automate GPR data interpretation to achieve high efficiency and accuracy has become an active research subject. In this paper, analytical characterizations of major defects in concrete structures, including delamination, air voids, and moisture, as they appear in GPR images are performed. In the study, the image features of the different defect types are compared. Algorithms are developed for defect feature extraction and identification. For validation, both simulation results and field test data are utilized.
Development and application of structural dynamics analysis capabilities
NASA Technical Reports Server (NTRS)
Heinemann, Klaus W.; Hozaki, Shig
1994-01-01
Extensive research activities were performed in the area of multidisciplinary modeling and simulation of aerospace vehicles that are relevant to NASA Dryden Flight Research Facility. The efforts involved theoretical development, computer coding, and debugging of the STARS code. New solution procedures were developed in such areas as structures, CFD, and graphics, among others. Furthermore, systems-oriented codes were developed for rendering the code truly multidisciplinary and rather automated in nature. Also, work was performed in pre- and post-processing of engineering analysis data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, Olive; Chan, Biu; Moseley, Joanne
Purpose: We have developed a semi-automated dose accumulation workflow for Head and Neck Cancer (HNC) patients to evaluate volumetric and dosimetric changes that take place during radiotherapy. This work will be used to assess how dosimetric changes affect both toxicity and disease control, hence informing the feasibility and design of a prospective HNC adaptive trial. Methods: RayStation 4.5.2 features deformable image registration (DIR), where structures already defined on the planning CT image set can be deformably mapped onto cone-beam computed tomography (CBCT) images, accounting for daily treatment set-up shifts and changes in patient anatomy. The daily delivered dose can be calculated on each CBCT and mapped back to the planning CT to allow dose accumulation. The process is partially automated using Python scripts developed in collaboration with RaySearch. Results: To date we have performed dose accumulation on 18 HNC patients treated at our institution during 2013–2015 under REB approval. Our semi-automated process establishes clinical feasibility. Generally, dose accumulation for the entire treatment course of one case takes 60–120 minutes: importing all CBCTs requires 20–30 minutes as each patient has 30 to 40 treated fractions; image registration and dose accumulation require 60–90 minutes. This is in contrast to the process without automated scripts, where dose accumulation alone would take 3–5 hours. Conclusions: We have developed a reliable workflow for retrospective dose tracking in HNC using RayStation. The process has been validated for HNC patients treated on both Elekta and Varian linacs with CBCTs acquired on XVI and OBI platforms, respectively.
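The map-back-and-sum step can be sketched generically with SciPy; this is not the RayStation scripting API, and the displacement-field convention below (each planning-CT voxel mapped to its CBCT coordinates) is an assumption:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def accumulate_dose(fraction_doses, dvfs, planning_shape):
    """Pull each fraction's CBCT dose back to the planning CT grid and sum it.

    fraction_doses: list of 3-D dose arrays computed on daily CBCTs.
    dvfs: list of displacement fields, shape (3, z, y, x), giving for each
    planning-CT voxel its coordinates in the corresponding CBCT.
    """
    total = np.zeros(planning_shape)
    base = np.indices(planning_shape).astype(float)      # identity coordinates
    for dose, dvf in zip(fraction_doses, dvfs):
        coords = base + dvf                              # deformed sample points
        total += map_coordinates(dose, coords, order=1)  # trilinear pull-back
    return total

# Sanity check: with zero displacement, one fraction accumulates to itself.
dose = np.random.rand(8, 8, 8)
print(np.allclose(accumulate_dose([dose], [np.zeros((3, 8, 8, 8))], (8, 8, 8)), dose))
```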
Automated computer grading of hardwood lumber
P. Klinkhachorn; J.P. Franklin; Charles W. McMillin; R.W. Conners; H.A. Huber
1988-01-01
This paper describes an improved computer program to grade hardwood lumber. The program was created as part of a system to automate various aspects of the hardwood manufacturing industry. It enhances previous efforts by considering both faces of the board and provides easy application of species dependent rules. The program can be readily interfaced with a computer...
NASA Astrophysics Data System (ADS)
Liu, Iching; Sun, Ying
1992-10-01
A system for reconstructing 3-D vascular structure from two orthogonally projected images is presented. The formidable problem of matching segments between two views is solved using knowledge of the epipolar constraint and the similarity of segment geometry and connectivity. The knowledge is represented in a rule-based system, which also controls the operation of several computational algorithms for tracking segments in each image, representing 2-D segments with directed graphs, and reconstructing 3-D segments from matching 2-D segment pairs. Uncertain reasoning governs the interaction between segmentation and matching; it also provides a framework for resolving the matching ambiguities in an iterative way. The system was implemented in the C language and the C Language Integrated Production System (CLIPS) expert system shell. Using video images of a tree model, the standard deviation of reconstructed centerlines was estimated to be 0.8 mm (1.7 mm) when the view direction was parallel (perpendicular) to the epipolar plane. Feasibility of clinical use was shown using x-ray angiograms of a human chest phantom. The correspondence of vessel segments between two views was accurate. Computational time for the entire reconstruction process was under 30 s on a workstation. A fully automated system for two-view reconstruction that does not require the a priori knowledge of vascular anatomy is demonstrated.
STAR - A computer language for hybrid AI applications
NASA Technical Reports Server (NTRS)
Borchardt, G. C.
1986-01-01
Constructing Artificial Intelligence application systems which rely on both symbolic and non-symbolic processing places heavy demands on the communication of data between dissimilar languages. This paper describes STAR (Simple Tool for Automated Reasoning), a computer language for the development of AI application systems which supports the transfer of data structures between a symbolic level and a non-symbolic level defined in languages such as FORTRAN, C and PASCAL. The organization of STAR is presented, followed by the description of an application involving STAR in the interpretation of airborne imaging spectrometer data.
Spielberg, Freya; Kurth, Ann; Reidy, William; McKnight, Teka; Dikobe, Wame; Wilson, Charles
2011-06-01
This article highlights findings from an evaluation that explored the impact of mobile versus clinic-based testing, rapid versus central-lab based testing, incentives for testing, and the use of a computer counseling program to guide counseling and automate evaluation in a mobile program reaching people of color at risk for HIV. The program's results show that an increased focus on mobile outreach using rapid testing, incentives and health information technology tools may improve program acceptability, quality, productivity and timeliness of reports. This article describes program design decisions based on continuous quality assessment efforts. It also examines the impact of the Computer Assessment and Risk Reduction Education computer tool on HIV testing rates, staff perception of counseling quality, program productivity, and on the timeliness of evaluation reports. The article concludes with a discussion of implications for programmatic responses to the Centers for Disease Control and Prevention's HIV testing recommendations.
1985-04-01
Tank 9.1 m (30 ft) in diameter by 6.7 m (22 ft) deep, automated with computer control and analysis for detailed studies of the acoustic characteristics of targets ... structures; and conducts experiments in the deep ocean, in acoustically shallow water, and in the Arctic. The Division carries out theoretical and ... Laser Materials-Application Center; Failure Analysis and Fractography Staff; Research Activity Areas; Environmental Effects; Microstructural characterization
The Effect of Specific Language Features on the Complexity of Systems for Automated Essay Scoring.
ERIC Educational Resources Information Center
Cohen, Yoav; Ben-Simon, Anat; Hovav, Myra
This paper focuses on the relationship between different aspects of the linguistic structure of a given language and the complexity of the computer program, whether existing or prospective, that is to be used for the scoring of essays in that language. The first part of the paper discusses common scales used to assess writing products, then…
The Loci Multidisciplinary Simulation System Overview and Status
NASA Technical Reports Server (NTRS)
Luke, Edward A.; Tong, Xiao-Ling; Tang, Lin
2002-01-01
This paper will discuss the Loci system, an innovative tool for developing tightly coupled multidisciplinary three dimensional simulations. This presentation will overview some of the unique capabilities of the Loci system to automate the assembly of numerical simulations from libraries of fundamental computational components. We will discuss the demonstration of the Loci system on coupled fluid-structure problems related to RBCC propulsion systems.
SEC sensor parametric test and evaluation system
NASA Technical Reports Server (NTRS)
1978-01-01
This system provides the automated hardware required to carry out, in conjunction with the existing 70 mm SEC television camera, the sensor evaluation tests described in detail. The Parametric Test Set (PTS) was completed and is used in a semiautomatic data acquisition and control mode to test the development of the 70 mm SEC sensor, WX 32193. Analysis of the raw data is performed on the Princeton IBM 360-91 computer.
Automated grading system for evaluation of ocular redness associated with dry eye.
Rodriguez, John D; Johnston, Patrick R; Ousler, George W; Smith, Lisa M; Abelson, Mark B
2013-01-01
We have observed that dry eye redness is characterized by a prominence of fine horizontal conjunctival vessels in the exposed ocular surface of the interpalpebral fissure, and we have incorporated this feature into the grading of redness in clinical studies of dry eye. Our aim was to develop an automated method of grading dry eye-associated ocular redness in order to expand on the clinical grading system currently used. Ninety-nine images from 26 dry eye subjects were evaluated by five graders using a 0-4 (in 0.5 increments) dry eye redness (Ora Calibra™ Dry Eye Redness Scale [OCDER]) scale. For the automated method, the OpenCV computer vision library was used to develop software for calculating redness and the prominence of horizontal conjunctival vessels (denoted "horizontality"). From the original photograph, the region of interest (ROI) was selected manually using the open-source ImageJ software. Total average redness intensity (Com-Red) was calculated from a single-channel 8-bit image computed as R - 0.83G - 0.17B, where R, G, and B were the respective intensities of the red, green, and blue channels. The location of vessels was detected by normalizing the blue channel and selecting pixels with an intensity of less than 97% of the mean. The horizontal component (Com-Hor) was calculated by the first-order Sobel derivative in the vertical direction, and the score was calculated as the average blue-channel image intensity of this vertical derivative. Pearson correlation coefficients, accuracy, and concordance correlation coefficients (CCC) were calculated after regression and standardized regression of the dataset. The agreement (both Pearson's and CCC) among investigators using the OCDER scale was 0.67, while the agreement between investigator and computer was 0.76. A multiple regression using both redness and horizontality improved the agreement CCC from 0.66 and 0.69 to 0.76, demonstrating the contribution of vessel geometry to the overall grade. Computer analysis of a given image has 100% repeatability and zero variability from session to session. This objective means of grading ocular redness in a unified fashion has potential significance as a new clinical endpoint. In comparisons between computer and investigator, computer grading proved more reliable than a second investigator using the OCDER scale. The best-fitting model based on the present sample, and usable for future studies, expresses the predicted investigator grade as a regression on logarithmic transformations of the computer-calculated parameters Com-Hor and Com-Red. Considering its superior repeatability, computer-automated grading might be preferable to investigator grading in multicentered dry eye studies, in which the subtle differences in redness incurred by treatment have historically been difficult to define.
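A minimal sketch of the two scores, using the channel weights given in the abstract; how the vessel mask and the derivative image are combined into the final score is an assumption on our part:

```python
import cv2
import numpy as np

def redness_scores(roi_bgr):
    """Compute Com-Red and Com-Hor for a manually selected ROI (a sketch)."""
    b, g, r = [c.astype(float) for c in cv2.split(roi_bgr)]
    com_red = float(np.mean(r - 0.83 * g - 0.17 * b))   # average redness intensity

    blue = b / max(b.max(), 1.0)                        # normalized blue channel
    vessels = blue < 0.97 * blue.mean()                 # darker than 97% of the mean
    sobel_y = cv2.Sobel(blue, cv2.CV_64F, 0, 1)         # first-order vertical derivative
    com_hor = float(np.mean(np.abs(sobel_y)[vessels]))  # horizontality over vessel pixels
    return com_red, com_hor
```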
Development of Moire machine vision
NASA Technical Reports Server (NTRS)
Harding, Kevin G.
1987-01-01
Three dimensional perception is essential to the development of versatile robotics systems in order to handle complex manufacturing tasks in future factories and in providing high accuracy measurements needed in flexible manufacturing and quality control. A program is described which will develop the potential of Moire techniques to provide this capability in vision systems and automated measurements, and demonstrate artificial intelligence (AI) techniques to take advantage of the strengths of Moire sensing. Moire techniques provide a means of optically manipulating the complex visual data in a three dimensional scene into a form which can be easily and quickly analyzed by computers. This type of optical data manipulation provides high productivity through integrated automation, producing a high quality product while reducing computer and mechanical manipulation requirements and thereby the cost and time of production. This nondestructive evaluation is developed to be able to make full field range measurement and three dimensional scene analysis.
Automation of Data Traffic Control on DSM Architecture
NASA Technical Reports Server (NTRS)
Frumkin, Michael; Jin, Hao-Qiang; Yan, Jerry
2001-01-01
The design of distributed shared memory (DSM) computers liberates users from the duty of distributing data across processors and allows for the incremental development of parallel programs using, for example, OpenMP or Java threads. DSM architecture greatly simplifies the development of parallel programs having good performance on a few processors. However, achieving good program scalability on DSM computers requires that the user understand data flow in the application and use various techniques to avoid data traffic congestion. In this paper we discuss a number of such techniques, including data blocking, data placement, data transposition, and page size control, and evaluate their efficiency on the NAS (NASA Advanced Supercomputing) Parallel Benchmarks. We also present a tool which automates the detection of constructs causing data congestion in Fortran array-oriented codes and advises the user on code transformations for improving data traffic in the application.
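Data blocking, the first of these techniques, can be illustrated with a toy NumPy example; the panel width is arbitrary, and the snippet shows only the access pattern, not a DSM measurement:

```python
import numpy as np

# Toy illustration of data blocking (loop tiling). The blocked loop touches one
# narrow panel of columns at a time, so each row contributes a short contiguous
# segment that fits in cache; on a DSM machine the same idea keeps most
# references local. NumPy's built-in reduction is already optimized, so this
# only illustrates the pattern.
n, block = 4096, 256
a = np.random.rand(n, n)

col_sums = np.zeros(n)
for j0 in range(0, n, block):                # one panel of columns at a time
    col_sums[j0:j0 + block] = a[:, j0:j0 + block].sum(axis=0)

assert np.allclose(col_sums, a.sum(axis=0))  # same result as the unblocked reduction
```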
Niazi, Muhammad Khalid Khan; Abas, Fazly Salleh; Senaras, Caglar; Pennell, Michael; Sahiner, Berkman; Chen, Weijie; Opfer, John; Hasserjian, Robert; Louissaint, Abner; Shana'ah, Arwa; Lozanski, Gerard; Gurcan, Metin N
2018-01-01
Automatic and accurate detection of positive and negative nuclei from images of immunostained tissue biopsies is critical to the success of digital pathology. The evaluation of most nuclei detection algorithms relies on manually generated ground truth prepared by pathologists, which is unfortunately time-consuming and suffers from inter-pathologist variability. In this work, we developed a digital immunohistochemistry (IHC) phantom that can be used for evaluating computer algorithms for enumeration of IHC-positive cells. Our phantom development consists of two main steps: 1) extraction of individual nuclei as well as nuclei clumps of both positive and negative nuclei from real whole slide images (WSIs), and 2) systematic placement of the extracted nuclei clumps on an image canvas. The resulting images are visually similar to the original tissue images. We created a set of 42 images with different concentrations of positive and negative nuclei. These images were evaluated by four board-certified pathologists in the task of estimating the ratio of positive to total number of nuclei. The resulting concordance correlation coefficients (CCC) between the pathologists and the true ratio range from 0.86 to 0.95 (point estimates). The same ratio was also computed by an automated computer algorithm, which yielded a CCC value of 0.99. Reading the phantom data with known ground truth, the human readers show substantial variability and lower average performance than the computer algorithm in terms of CCC. This shows the limitation of using a human reader panel to establish a reference standard for the evaluation of computer algorithms, thereby highlighting the usefulness of the phantom developed in this work. Using our phantom images, we further developed a function that can approximate the true ratio from the areas of the positive and negative nuclei, hence avoiding the need to detect individual nuclei. The predicted ratios of 10 held-out images using the function (trained on 32 images) are within ±2.68% of the true ratio. Moreover, we also report the evaluation of a computerized image analysis method on the synthetic tissue dataset.
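The area-based approximation can be sketched as below; `alpha`, a calibration factor absorbing the difference in average nucleus size between the two classes, is a hypothetical parameter, as the paper's fitted function is not reproduced here:

```python
def ratio_from_areas(area_pos, area_neg, alpha=1.0):
    """Approximate the positive-nuclei ratio from stain areas (a sketch).

    alpha would be fitted on training images with known ground truth,
    avoiding the need to detect individual nuclei at prediction time.
    """
    return area_pos / (area_pos + alpha * area_neg)
```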
Liu, Yijin; Meirer, Florian; Williams, Phillip A.; Wang, Junyue; Andrews, Joy C.; Pianetta, Piero
2012-01-01
Transmission X-ray microscopy (TXM) has been well recognized as a powerful tool for non-destructive investigation of the three-dimensional inner structure of a sample with spatial resolution down to a few tens of nanometers, especially when combined with synchrotron radiation sources. Recent developments of this technique have presented a need for new tools for both system control and data analysis. Here a software package developed in MATLAB for script command generation and analysis of TXM data is presented. The first toolkit, the script generator, allows automating complex experimental tasks which involve up to several thousand motor movements. The second package was designed to accomplish computationally intense tasks such as data processing of mosaic and mosaic tomography datasets; dual-energy contrast imaging, where data are recorded above and below a specific X-ray absorption edge; and TXM X-ray absorption near-edge structure imaging datasets. Furthermore, analytical and iterative tomography reconstruction algorithms were implemented. The compiled software package is freely available. PMID:22338691
The normative structure of mathematization in systematic biology.
Sterner, Beckett; Lidgard, Scott
2014-06-01
We argue that the mathematization of science should be understood as a normative activity of advocating for a particular methodology with its own criteria for evaluating good research. As a case study, we examine the mathematization of taxonomic classification in systematic biology. We show how mathematization is a normative activity by contrasting its distinctive features in numerical taxonomy in the 1960s with an earlier reform advocated by Ernst Mayr starting in the 1940s. Both Mayr and the numerical taxonomists sought to formalize the work of classification, but Mayr introduced a qualitative formalism based on human judgment for determining the taxonomic rank of populations, while the numerical taxonomists introduced a quantitative formalism based on automated procedures for computing classifications. The key contrast between Mayr and the numerical taxonomists is how they conceptualized the temporal structure of the workflow of classification, specifically where they allowed meta-level discourse about difficulties in producing the classification. Copyright © 2014. Published by Elsevier Ltd.
Haider, Kamran; Cruz, Anthony; Ramsey, Steven; Gilson, Michael K; Kurtzman, Tom
2018-01-09
We have developed SSTMap, a software package for mapping structural and thermodynamic water properties in molecular dynamics trajectories. The package introduces automated analysis and mapping of local measures of frustration and enhancement of water structure. The thermodynamic calculations are based on Inhomogeneous Fluid Solvation Theory (IST), which is implemented using both site-based and grid-based approaches. The package also extends the applicability of solvation analysis calculations to multiple molecular dynamics (MD) simulation programs by using existing cross-platform tools for parsing MD parameter and trajectory files. SSTMap is implemented in Python and contains both command-line tools and a Python module to facilitate flexibility in setting up calculations and for automated generation of large data sets involving analysis of multiple solutes. Output is generated in formats compatible with popular Python data science packages. This tool will be used by the molecular modeling community for computational analysis of water in problems of biophysical interest such as ligand binding and protein function.
NASA Astrophysics Data System (ADS)
Paganelli, Chiara; Peroni, Marta; Riboldi, Marco; Sharp, Gregory C.; Ciardo, Delia; Alterio, Daniela; Orecchia, Roberto; Baroni, Guido
2013-01-01
Adaptive radiation therapy (ART) aims at compensating for anatomic and pathological changes to improve delivery along a treatment fraction sequence. Current ART protocols require time-consuming manual updating of all volumes of interest on the images acquired during treatment. Deformable image registration (DIR) and contour propagation stand as the state-of-the-art methods to automate the process, but the lack of DIR quality-control methods hinders their introduction into clinical practice. We investigated the scale invariant feature transform (SIFT) method as a quantitative automated tool (1) for DIR evaluation and (2) for re-planning decision-making in the framework of ART treatments. As a preliminary test, SIFT invariance properties under shape-preserving and deformable transformations were studied on a computational phantom, yielding residual matching errors below the voxel dimension. Then a clinical dataset composed of 19 head and neck ART patients was used to quantify performance in ART treatments. For goal (1), the results demonstrated SIFT's potential as an operator-independent DIR quality assessment metric. We measured DIR group systematic residual errors up to 0.66 mm, against 1.35 mm provided by rigid registration. The group systematic errors of both bony and all other structures were also analyzed, attesting to the presence of anatomical deformations. The correct automated identification of 18 patients who might benefit from ART out of the total 22 cases using SIFT demonstrated its capability toward achieving goal (2).
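A minimal version of SIFT-based residual measurement using OpenCV: matched-keypoint displacements serve as an operator-independent registration check, though the study's actual matching and error analysis are more involved than this sketch:

```python
import cv2
import numpy as np

def sift_residuals(fixed, moving):
    """Match SIFT keypoints between two images and return displacement residuals.

    After DIR, residuals of matched landmarks between the registered images
    should approach zero; large residuals flag questionable registrations.
    """
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(fixed, None)
    k2, d2 = sift.detectAndCompute(moving, None)
    matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
    residuals = [np.hypot(*np.subtract(k1[m.queryIdx].pt, k2[m.trainIdx].pt))
                 for m, n in matches
                 if m.distance < 0.75 * n.distance]   # Lowe's ratio test
    return np.array(residuals)
```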
Systematic review automation technologies
2014-01-01
Systematic reviews, a cornerstone of evidence-based medicine, are not produced quickly enough to support clinical practice. The cost of production, availability of the requisite expertise and timeliness are often quoted as major contributors for the delay. This detailed survey of the state of the art of information systems designed to support or automate individual tasks in the systematic review, and in particular systematic reviews of randomized controlled clinical trials, reveals trends that see the convergence of several parallel research projects. We surveyed literature describing informatics systems that support or automate the processes of systematic review or each of the tasks of the systematic review. Several projects focus on automating, simplifying and/or streamlining specific tasks of the systematic review. Some tasks are already fully automated while others are still largely manual. In this review, we describe each task and the effect that its automation would have on the entire systematic review process, summarize the existing information system support for each task, and highlight where further research is needed for realizing automation for the task. Integration of the systems that automate systematic review tasks may lead to a revised systematic review workflow. We envisage the optimized workflow will lead to system in which each systematic review is described as a computer program that automatically retrieves relevant trials, appraises them, extracts and synthesizes data, evaluates the risk of bias, performs meta-analysis calculations, and produces a report in real time. PMID:25005128
NASA Astrophysics Data System (ADS)
Petrie, C.; Margaria, T.; Lausen, H.; Zaremba, M.
Explores trade-offs among existing approaches. Reveals strengths and weaknesses of proposed approaches, as well as which aspects of the problem are not yet covered. Introduces software engineering approach to evaluating semantic web services. Service-Oriented Computing is one of the most promising software engineering trends because of the potential to reduce the programming effort for future distributed industrial systems. However, only a small part of this potential rests on the standardization of tools offered by the web services stack. The larger part of this potential rests upon the development of sufficient semantics to automate service orchestration. Currently there are many different approaches to semantic web service descriptions and many frameworks built around them. A common understanding, evaluation scheme, and test bed to compare and classify these frameworks in terms of their capabilities and shortcomings, is necessary to make progress in developing the full potential of Service-Oriented Computing. The Semantic Web Services Challenge is an open source initiative that provides a public evaluation and certification of multiple frameworks on common industrially-relevant problem sets. This edited volume reports on the first results in developing common understanding of the various technologies intended to facilitate the automation of mediation, choreography and discovery for Web Services using semantic annotations. Semantic Web Services Challenge: Results from the First Year is designed for a professional audience composed of practitioners and researchers in industry. Professionals can use this book to evaluate SWS technology for their potential practical use. The book is also suitable for advanced-level students in computer science.
Automated high-grade prostate cancer detection and ranking on whole slide images
NASA Astrophysics Data System (ADS)
Huang, Chao-Hui; Racoceanu, Daniel
2017-03-01
Recently, digital pathology (DP) has been largely improved due to developments in computer vision and machine learning. Automated detection of high-grade prostate carcinoma (HG-PCa) is an impactful medical use case illustrating the paradigm of collaboration between DP and computer science: given a field of view (FOV) from a whole slide image (WSI), the computer-aided system is able to determine the grade by classifying the FOV. Various approaches based on this paradigm have been reported. However, two considerations motivated this work: first, there is still room for improvement in the detection accuracy of HG-PCa; second, clinical practice is more complex than simple image classification, and FOV ranking is also an essential step. For example, in clinical practice a pathologist usually evaluates a case based on a few FOVs from the given WSI and then makes a decision based on the most severe FOV. This important ranking scenario has not yet been well discussed. In this work, we introduce an automated detection and ranking system for PCa based on Gleason pattern discrimination. Our experiments suggest that the proposed system is able to perform high-accuracy detection (95.57% ± 2.1%) and achieves excellent ranking performance. Hence, the proposed system has great potential to support daily tasks in the clinical pathology routine.
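The ranking step can be sketched as follows; `classify` stands for a hypothetical trained model returning a severity score per FOV, since the paper's Gleason-pattern classifier is not reproduced here:

```python
import numpy as np

def rank_fovs(fovs, classify):
    """Rank FOVs by predicted severity and return the ranking and worst FOV."""
    scores = np.array([classify(f) for f in fovs])
    order = np.argsort(scores)[::-1]          # most severe first
    return order, fovs[order[0]]              # the case-level decision follows
                                              # from the most severe FOV
```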
SeqMule: automated pipeline for analysis of human exome/genome sequencing data.
Guo, Yunfei; Ding, Xiaolei; Shen, Yufeng; Lyon, Gholson J; Wang, Kai
2015-09-18
Next-generation sequencing (NGS) technology has greatly helped us identify disease-contributory variants for Mendelian diseases. However, users are often faced with issues such as software compatibility, complicated configuration, and lack of access to high-performance computing facilities, and discrepancies exist among aligners and variant callers. We developed a computational pipeline, SeqMule, to perform automated variant calling from NGS data on human genomes and exomes. SeqMule integrates computational-cluster-free parallelization capability built on top of the variant callers, and facilitates normalization/intersection of variant calls to generate a consensus set with high confidence. SeqMule integrates 5 alignment tools and 5 variant-calling algorithms and accepts various combinations, all by one-line command, therefore allowing highly flexible yet fully automated variant calling. On a modern machine (2 Intel Xeon X5650 CPUs, 48 GB memory), when fast turn-around is needed, SeqMule generates annotated VCF files in a day from a 30X whole-genome sequencing data set; when more accurate calling is needed, SeqMule generates a consensus call set that improves over single callers, as measured by both Mendelian error rate and consistency. SeqMule supports Sun Grid Engine for parallel processing, offers a turn-key solution for deployment on Amazon Web Services, and allows quality checks, Mendelian error checks, consistency evaluation, and HTML-based reports. SeqMule is available at http://seqmule.openbioinformatics.org.
Suppa, Per; Hampel, Harald; Spies, Lothar; Fiebach, Jochen B; Dubois, Bruno; Buchert, Ralph
2015-01-01
Hippocampus volumetry based on magnetic resonance imaging (MRI) has not yet been translated into everyday clinical diagnostic patient care, at least in part due to limited availability of appropriate software tools. In the present study, we evaluate a fully-automated and computationally efficient processing pipeline for atlas based hippocampal volumetry using freely available Statistical Parametric Mapping (SPM) software in 198 amnestic mild cognitive impairment (MCI) subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI1). Subjects were grouped into MCI stable and MCI to probable Alzheimer's disease (AD) converters according to follow-up diagnoses at 12, 24, and 36 months. Hippocampal grey matter volume (HGMV) was obtained from baseline T1-weighted MRI and then corrected for total intracranial volume and age. Average processing time per subject was less than 4 minutes on a standard PC. The area under the receiver operator characteristic curve of the corrected HGMV for identification of MCI to probable AD converters within 12, 24, and 36 months was 0.78, 0.72, and 0.71, respectively. Thus, hippocampal volume computed with the fully-automated processing pipeline provides similar power for prediction of MCI to probable AD conversion as computationally more expensive methods. The whole processing pipeline has been made freely available as an SPM8 toolbox. It is easily set up and integrated into everyday clinical patient care.
Automated crack detection in conductive smart-concrete structures using a resistor mesh model
NASA Astrophysics Data System (ADS)
Downey, Austin; D'Alessandro, Antonella; Ubertini, Filippo; Laflamme, Simon
2018-03-01
Various nondestructive evaluation techniques are currently used to automatically detect and monitor cracks in concrete infrastructure. However, these methods often lack scalability and cost-effectiveness over large geometries. A solution is the use of self-sensing carbon-doped cementitious materials. These self-sensing materials are capable of providing a measurable change in electrical output that can be related to their damage state. Previous work by the authors showed that a resistor mesh model could be used to track damage in structural components fabricated from electrically conductive concrete, where damage was located through the identification of high-resistance resistors in the mesh model. In this work, an automated damage detection strategy is introduced that places high-value resistors into the previously developed resistor mesh model using a sequential Monte Carlo method. Here, high-value resistors are used to mimic the internal condition of damaged cementitious specimens. The proposed automated damage detection method is experimentally validated using a 500 × 500 × 50 mm³ reinforced cement paste plate doped with multi-walled carbon nanotubes and exposed to 100 identical impact tests. Results demonstrate that the proposed Monte Carlo method is capable of detecting and localizing the most prominent damage in a structure, demonstrating that automated damage detection in smart-concrete structures is a promising strategy for real-time structural health monitoring of civil infrastructure.
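The resistor-mesh forward model underlying this strategy can be sketched with NumPy: a grid Laplacian is solved for node potentials, and a near-zero-conductance edge (the "high-value resistor" standing in for a crack) raises the measured resistance. The sequential Monte Carlo search itself is not reproduced here, and the grid size and conductances are illustrative:

```python
import numpy as np

def mesh_resistance(n, damaged=frozenset(), g_ok=1.0, g_dmg=1e-6):
    """Solve an n x n resistor mesh and return the corner-to-corner resistance."""
    N = n * n
    L = np.zeros((N, N))                      # conductance Laplacian of the mesh
    idx = lambda i, j: i * n + j
    for i in range(n):
        for j in range(n):
            for di, dj in ((0, 1), (1, 0)):   # right and down neighbours
                if i + di < n and j + dj < n:
                    a, b = idx(i, j), idx(i + di, j + dj)
                    g = g_dmg if (a, b) in damaged else g_ok
                    L[a, a] += g; L[b, b] += g
                    L[a, b] -= g; L[b, a] -= g
    source, drain = idx(0, 0), idx(n - 1, n - 1)
    rhs = np.zeros(N); rhs[source] = 1.0      # inject 1 A at the source
    L[drain, :] = 0; L[drain, drain] = 1      # ground the drain node
    v = np.linalg.solve(L, rhs)
    return v[source]                          # R = V/I with I = 1 A

# A damaged interior edge raises the measured resistance:
print(mesh_resistance(8), mesh_resistance(8, damaged={(8 * 3 + 3, 8 * 3 + 4)}))
```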
Chest wall segmentation in automated 3D breast ultrasound scans.
Tan, Tao; Platel, Bram; Mann, Ritse M; Huisman, Henkjan; Karssemeijer, Nico
2013-12-01
In this paper, we present an automatic method to segment the chest wall in automated 3D breast ultrasound images. Determining the location of the chest wall in automated 3D breast ultrasound images is necessary in computer-aided detection systems to remove automatically detected cancer candidates beyond the chest wall and it can be of great help for inter- and intra-modal image registration. We show that the visible part of the chest wall in an automated 3D breast ultrasound image can be accurately modeled by a cylinder. We fit the surface of our cylinder model to a set of automatically detected rib-surface points. The detection of the rib-surface points is done by a classifier using features representing local image intensity patterns and presence of rib shadows. Due to attenuation of the ultrasound signal, a clear shadow is visible behind the ribs. Evaluation of our segmentation method is done by computing the distance of manually annotated rib points to the surface of the automatically detected chest wall. We examined the performance on images obtained with the two most common 3D breast ultrasound devices in the market. In a dataset of 142 images, the average mean distance of the annotated points to the segmented chest wall was 5.59 ± 3.08 mm. Copyright © 2012 Elsevier B.V. All rights reserved.
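A simplified version of the cylinder fit, assuming the axis is aligned with one volume axis so the fit reduces to a circle in the orthogonal plane; the actual method fits the full cylinder surface to automatically detected rib-surface points:

```python
import numpy as np
from scipy.optimize import least_squares

def fit_cylinder(points):
    """Fit a cylinder with axis along x to rib-surface points (a sketch)."""
    yz = points[:, 1:]                         # project points onto the y-z plane
    def residuals(p):
        cy, cz, r = p
        return np.hypot(yz[:, 0] - cy, yz[:, 1] - cz) - r
    p0 = np.append(yz.mean(axis=0), yz.std())  # crude initial center and radius
    return least_squares(residuals, p0).x      # (cy, cz, radius)

# Synthetic rib-surface points on a noisy cylinder of radius 60 mm:
rng = np.random.default_rng(0)
theta = rng.uniform(0, np.pi, 500)
pts = np.c_[rng.uniform(0, 100, 500),          # x along the cylinder axis
            30 + 60 * np.cos(theta) + rng.normal(0, 1, 500),
            80 + 60 * np.sin(theta) + rng.normal(0, 1, 500)]
print(fit_cylinder(pts))                       # approximately (30, 80, 60)
```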
Comparison of exercise blood pressure measured by technician and an automated system.
Garcia-Gregory, J A; Jackson, A S; Studeville, J; Squires, W G; Owen, C A
1984-05-01
We evaluated the automated Blood Pressure Measuring System (BPMS) developed by NASA on 277 adult males who elected to have a treadmill test as part of their annual physical. The BPMS uses acoustic transduction with computer-assisted ECG gating to detect nonsynchronous noise. The BPMS readings were compared to pressures measured simultaneously by trained technicians. For all stages of work, BPMS readings were higher for systolic and lower for diastolic pressure than technician readings. At peak stages of work, BPMS systolic pressures were about 20 mmHg higher than technician readings. Within each 3-min work stage, BPMS readings were found to be more inconsistent than technician readings. The standard errors of measurement for the BPMS were two to three times higher than technician values. These data showed that automated blood pressure readings were significantly different from technician values and subject to more random fluctuation. These findings demonstrate the need to view exercise blood pressure measured by automated systems with caution.
Extraction of Prostatic Lumina and Automated Recognition for Prostatic Calculus Image Using PCA-SVM
Wang, Zhuocai; Xu, Xiangmin; Ding, Xiaojun; Xiao, Hui; Huang, Yusheng; Liu, Jian; Xing, Xiaofen; Wang, Hua; Liao, D. Joshua
2011-01-01
Identification of prostatic calculi is an important basis for determining the tissue origin. Computer-assisted diagnosis of prostatic calculi may have promising potential but is currently still understudied. We studied the extraction of prostatic lumina and the automated recognition of calculus images. Extraction of lumina from prostate histology images was based on local entropy and Otsu thresholding; recognition used PCA-SVM based on the texture features of prostatic calculi. The SVM classifier showed an average classification time of 0.1432 s, an average training accuracy of 100%, an average test accuracy of 93.12%, a sensitivity of 87.74%, and a specificity of 94.82%. We conclude that the algorithm, based on texture features and PCA-SVM, can recognize the concentric structure and visualized features easily. Therefore, this method is effective for the automated recognition of prostatic calculi. PMID:21461364
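The recognition stage can be sketched with scikit-learn; the random feature vectors below are placeholders for the paper's texture features, so only the PCA-plus-SVM pipeline structure is meaningful:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))                # 200 regions x 64 texture features
y = rng.integers(0, 2, size=200)              # 1 = calculus, 0 = background

# Standardize, reduce dimensionality with PCA, then classify with an RBF SVM.
model = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
model.fit(X[:150], y[:150])
print("test accuracy:", model.score(X[150:], y[150:]))
```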
Automated design of degenerate codon libraries.
Mena, Marco A; Daugherty, Patrick S
2005-12-01
Degenerate codon libraries are frequently used in protein engineering and evolution studies but are often limited to targeting a small number of positions to adequately limit the search space. To mitigate this, codon degeneracy can be limited using heuristics or previous knowledge of the targeted positions. To automate design of libraries given a set of amino acid sequences, an algorithm (LibDesign) was developed that generates a set of possible degenerate codon libraries, their resulting size, and their score relative to a user-defined scoring function. A gene library of a specified size can then be constructed that is representative of the given amino acid distribution or that includes specific sequences or combinations thereof. LibDesign provides a new tool for automated design of high-quality protein libraries that more effectively harness existing sequence-structure information derived from multiple sequence alignment or computational protein design data.
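The core of such a design loop, enumerating the amino acids encoded by a degenerate codon and scoring it against a target set, can be sketched with Biopython's standard codon table; the simple count-based score is a toy stand-in for LibDesign's user-defined scoring function:

```python
from itertools import product
from Bio.Data.CodonTable import standard_dna_table  # codon -> amino acid map

IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T", "R": "AG", "Y": "CT",
         "S": "CG", "W": "AT", "K": "GT", "M": "AC", "B": "CGT", "D": "AGT",
         "H": "ACT", "V": "ACG", "N": "ACGT"}

def expand(degenerate_codon):
    """Return the set of amino acids ('*' = stop) encoded by e.g. 'NNK'."""
    aas = set()
    for bases in product(*(IUPAC[b] for b in degenerate_codon)):
        aas.add(standard_dna_table.forward_table.get("".join(bases), "*"))
    return aas

def score(degenerate_codon, wanted):
    """Toy score: coverage of wanted residues minus unwanted residues."""
    encoded = expand(degenerate_codon)
    return len(encoded & wanted) - len(encoded - wanted)

print(sorted(expand("NNK")), score("NNK", set("ACDEFGHIKLMNPQRSTVWY")))
```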
DOT National Transportation Integrated Search
1974-08-01
Volume 4 describes the automation requirements. A presentation of automation requirements is made for an advanced air traffic management system in terms of controller work force, computer resources, controller productivity, system manning, failure ef...
Zeng, Jianyang; Zhou, Pei; Donald, Bruce Randall
2011-01-01
One bottleneck in NMR structure determination lies in the laborious and time-consuming process of side-chain resonance and NOE assignments. Compared to the well-studied backbone resonance assignment problem, automated side-chain resonance and NOE assignments are relatively less explored. Most NOE assignment algorithms require nearly complete side-chain resonance assignments from a series of through-bond experiments such as HCCH-TOCSY or HCCCONH. Unfortunately, these TOCSY experiments perform poorly on large proteins. To overcome this deficiency, we present a novel algorithm, called NASCA (NOE Assignment and Side-Chain Assignment), to automate both side-chain resonance and NOE assignments and to perform high-resolution protein structure determination in the absence of any explicit through-bond experiment to facilitate side-chain resonance assignment, such as HCCH-TOCSY. After casting the assignment problem into a Markov Random Field (MRF), NASCA extends and applies combinatorial protein design algorithms to compute optimal assignments that best interpret the NMR data. The MRF captures the contact map information of the protein derived from NOESY spectra, exploits the backbone structural information determined by RDCs, and considers all possible side-chain rotamers. The complexity of the combinatorial search is reduced by using a dead-end elimination (DEE) algorithm, which prunes side-chain resonance assignments that are provably not part of the optimal solution. Then an A* search algorithm is employed to find a set of optimal side-chain resonance assignments that best fit the NMR data. These side-chain resonance assignments are then used to resolve the NOE assignment ambiguity and compute high-resolution protein structures. Tests on five proteins show that NASCA assigns resonances for more than 90% of side-chain protons, and achieves about 80% correct assignments. The final structures computed using the NOE distance restraints assigned by NASCA have backbone RMSD 0.8 – 1.5 Å from the reference structures determined by traditional NMR approaches. PMID:21706248
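The dead-end elimination step can be illustrated generically. The sketch below implements a Goldstein-style DEE criterion on random energy tables; the problem sizes and energies are placeholders, and this is not NASCA's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
n_pos, n_choices = 4, 5
E_self = rng.normal(size=(n_pos, n_choices))              # self energies
E_pair = rng.normal(size=(n_pos, n_pos, n_choices, n_choices))  # pairwise energies

def dee_prune(E_self, E_pair):
    """Prune assignment r at position i if some r' is provably always better."""
    n, m = E_self.shape
    pruned = [set() for _ in range(n)]
    for i in range(n):
        for r in range(m):
            for rp in range(m):
                if rp == r:
                    continue
                gap = E_self[i, r] - E_self[i, rp]
                for j in range(n):
                    if j != i:
                        # best case for r relative to r' over choices at j
                        gap += (E_pair[i, j, r] - E_pair[i, j, rp]).min()
                if gap > 0:           # r can never beat r'; eliminate it
                    pruned[i].add(r)
                    break
    return pruned

print(dee_prune(E_self, E_pair))
```

Pruned assignments are provably not part of the optimal solution, which is what makes the subsequent A* search tractable.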
Development of a First-of-a-Kind Deterministic Decision-Making Tool for Supervisory Control System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cetiner, Sacit M; Kisner, Roger A; Muhlheim, Michael David
2015-07-01
Decision-making is the process of identifying and choosing alternatives where each alternative offers a different approach or path to move from a given state or condition to a desired state or condition. The generation of consistent decisions requires that a structured, coherent process be defined, immediately leading to a decision-making framework. The overall objective of the generalized framework is for it to be adopted into an autonomous decision-making framework and tailored to specific requirements for various applications. In this context, automation is the use of computing resources to make decisions and implement a structured decision-making process with limited or no human intervention. The overriding goal of automation is to replace or supplement human decision makers with reconfigurable decision-making modules that can perform a given set of tasks reliably. Risk-informed decision making requires a probabilistic assessment of the likelihood of success given the status of the plant/systems and component health, and a deterministic assessment between plant operating parameters and reactor protection parameters to prevent unnecessary trips and challenges to plant safety systems. The implementation of the probabilistic portion of the decision-making engine of the proposed supervisory control system was detailed in previous milestone reports. Once the control options are identified and ranked based on the likelihood of success, the supervisory control system transmits the options to the deterministic portion of the platform. The deterministic multi-attribute decision-making framework uses variable sensor data (e.g., outlet temperature) and calculates where it is within the challenge state, its trajectory, and margin within the controllable domain using utility functions to evaluate current and projected plant state space for different control decisions. Metrics to be evaluated include stability, cost, time to complete (action), power level, etc. The integration of deterministic calculations using multi-physics analyses (i.e., neutronics, thermal, and thermal-hydraulics) and probabilistic safety calculations allows for the examination and quantification of margin recovery strategies. This also provides validation of the control options identified from the probabilistic assessment. Thus, the thermal-hydraulics analyses are used to validate the control options identified from the probabilistic assessment. Future work includes evaluating other possible metrics and computational efficiencies.
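The multi-attribute evaluation can be sketched as a weighted utility ranking; the attribute names and weights below are illustrative assumptions, not the report's actual metrics or utility functions:

```python
def rank_control_options(options, weights):
    """Rank control options by weighted multi-attribute utility (a sketch).

    Each option maps attribute name -> utility in [0, 1]; higher is better.
    """
    def total(opt):
        return sum(weights[k] * opt[k] for k in weights)
    return sorted(options, key=total, reverse=True)

options = [{"name": "reduce power", "stability": 0.9, "cost": 0.4, "time": 0.6},
           {"name": "trip unit",    "stability": 1.0, "cost": 0.1, "time": 0.9}]
weights = {"stability": 0.5, "cost": 0.3, "time": 0.2}
print(rank_control_options(options, weights)[0]["name"])  # highest-utility option
```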
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
SCALE--a modular code system for Standardized Computer Analyses for Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.3 of the system.
Comparison of Different Methods of Grading a Level Turn Task on a Flight Simulator
NASA Technical Reports Server (NTRS)
Heath, Bruce E.; Crier, tomyka
2003-01-01
With advancements in the computing power of personal computers, PC-based flight simulators and trainers have opened new avenues in the training of airplane pilots. It may be desirable to have the flight simulator make a quantitative evaluation of the progress of a pilot's training, thereby reducing the need for the flight instructor to watch every flight. In an experiment, university students conducted six different flights, each three minutes in duration and consisting of two level turns. By evaluating videotapes, two certified flight instructors provided separate letter grades for each turn. The level turns were also evaluated using two computer-based grading methods. One method assigned automated grades based on prescribed tolerances in bank angle, airspeed, and altitude. The other used deviations in altitude and bank angle to compute a performance index and performance grades.
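A tolerance-band grader of the first kind might look like the sketch below; the band widths, targets, and letter mapping are illustrative assumptions, as the study's exact tolerances are not given here:

```python
def grade_turn(samples, target_bank=30.0, target_alt=3000.0, target_ias=100.0):
    """Grade a level turn from tolerance bands (a sketch with assumed limits).

    samples: list of (bank_deg, altitude_ft, airspeed_kt) tuples sampled
    during the turn; the grade is set by the worst sample.
    """
    bands = [(5.0, 100.0, 5.0, "A"), (10.0, 200.0, 10.0, "B"),
             (15.0, 300.0, 15.0, "C")]
    worst = "A"
    for bank, alt, ias in samples:
        for db, da, ds, letter in bands:
            if (abs(bank - target_bank) <= db and abs(alt - target_alt) <= da
                    and abs(ias - target_ias) <= ds):
                worst = max(worst, letter)   # letters compare alphabetically
                break
        else:
            worst = "F"                      # outside every tolerance band
    return worst
```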
ERIC Educational Resources Information Center
Collins, Michael J.; Vitz, Ed
1988-01-01
Examines two computer-interfaced lab experiments: (1) discusses the automation of a Perkin Elmer 337 infrared spectrophotometer, noting the mechanical and electronic changes needed; and (2) uses the Gouy method and Lotus Measure software to automate magnetic susceptibility determinations. Methodology is described. (MVL)
Safety in the Automated Office.
ERIC Educational Resources Information Center
Graves, Pat R.; Greathouse, Lillian R.
1990-01-01
Office automation has introduced new hazards to the workplace: electrical hazards related to computer wiring, musculoskeletal problems resulting from use of computer terminals and design of work stations, and environmental concerns related to ventilation, noise levels, and office machine chemicals. (SK)
Hosseini, Mohammad-Parsa; Nazem-Zadeh, Mohammad-Reza; Pompili, Dario; Jafari-Khouzani, Kourosh; Elisevich, Kost; Soltanian-Zadeh, Hamid
2016-01-01
Segmentation of the hippocampus from magnetic resonance (MR) images is a key task in the evaluation of mesial temporal lobe epilepsy (mTLE) patients. Several automated algorithms have been proposed although manual segmentation remains the benchmark. Choosing a reliable algorithm is problematic since structural definition pertaining to multiple edges, missing and fuzzy boundaries, and shape changes varies among mTLE subjects. Lack of statistical references and guidance for quantifying the reliability and reproducibility of automated techniques has further detracted from automated approaches. The purpose of this study was to develop a systematic and statistical approach using a large dataset for the evaluation of automated methods and establish a method that would achieve results better approximating those attained by manual tracing in the epileptogenic hippocampus. A template database of 195 (81 males, 114 females; age range 32-67 yr, mean 49.16 yr) MR images of mTLE patients was used in this study. Hippocampal segmentation was accomplished manually and by two well-known tools (FreeSurfer and hammer) and two previously published methods developed at their institution [Automatic brain structure segmentation (ABSS) and LocalInfo]. To establish which method was better performing for mTLE cases, several voxel-based, distance-based, and volume-based performance metrics were considered. Statistical validations of the results using automated techniques were compared with the results of benchmark manual segmentation. Extracted metrics were analyzed to find the method that provided a more similar result relative to the benchmark. Among the four automated methods, ABSS generated the most accurate results. For this method, the Dice coefficient was 5.13%, 14.10%, and 16.67% higher, Hausdorff was 22.65%, 86.73%, and 69.58% lower, precision was 4.94%, -4.94%, and 12.35% higher, and the root mean square (RMS) was 19.05%, 61.90%, and 65.08% lower than LocalInfo, FreeSurfer, and hammer, respectively. The Bland-Altman similarity analysis revealed a low bias for the ABSS and LocalInfo techniques compared to the others. The ABSS method for automated hippocampal segmentation outperformed other methods, best approximating what could be achieved by manual tracing. This study also shows that four categories of input data can cause automated segmentation methods to fail. They include incomplete studies, artifact, low signal-to-noise ratio, and inhomogeneity. Different scanner platforms and pulse sequences were considered as means by which to improve reliability of the automated methods. Other modifications were specially devised to enhance a particular method assessed in this study.
Supervised and Unsupervised Learning Technology in the Study of Rodent Behavior
Gris, Katsiaryna V.; Coutu, Jean-Philippe; Gris, Denis
2017-01-01
Quantifying behavior is a challenge for scientists studying neuroscience, ethology, psychology, pathology, and related fields. Until recently, behavior was mostly captured through qualitative descriptions of postures or labor-intensive counting of bouts of individual movements. Many prominent behavioral scientists have conducted studies describing the postures of mice and rats, depicting step by step their eating, grooming, courting, and other behaviors. Automated video assessment technologies now permit scientists to quantify daily behavioral patterns and routines, social interactions, and postural changes in an unbiased manner. Here, we extensively review published research on the structural blocks of behavior and propose a structure of behavior based on the latest publications. We discuss the importance of defining a clear structure of behavior so that professionals can write viable algorithms. We present a discussion of the technologies used in automated video assessment of behavior in mice and rats, consider the advantages and limitations of supervised and unsupervised learning, and summarize the latest scientific discoveries made using automated video assessment. In conclusion, we propose that the automated quantitative approach to evaluating animal behavior is the future of understanding the effects of brain signaling, pathologies, genetic content, and environment on behavior. PMID:28804452
A Fully Automated High-Throughput Zebrafish Behavioral Ototoxicity Assay.
Todd, Douglas W; Philip, Rohit C; Niihori, Maki; Ringle, Ryan A; Coyle, Kelsey R; Zehri, Sobia F; Zabala, Leanne; Mudery, Jordan A; Francis, Ross H; Rodriguez, Jeffrey J; Jacob, Abraham
2017-08-01
Zebrafish animal models lend themselves to behavioral assays that can facilitate rapid screening of ototoxic, otoprotective, and otoregenerative drugs. Structurally similar to human inner ear hair cells, the mechanosensory hair cells on the lateral line allow zebrafish to sense water flow and orient head-to-current in a behavior called rheotaxis. This rheotaxis behavior deteriorates in a dose-dependent manner with increasing exposure to the ototoxin cisplatin, establishing it as an excellent biomarker for anatomic damage to lateral line hair cells. Building on work by our group and others, we have developed a new, fully automated high-throughput behavioral assay system that uses automated image analysis techniques to quantify rheotaxis behavior. The system consists of a custom-designed swimming apparatus and an imaging system built around network-controlled Raspberry Pi microcomputers capturing infrared video. Automated analysis techniques detect individual zebrafish, compute their orientation, and quantify the rheotaxis behavior of a zebrafish test population, producing a powerful, high-throughput behavioral assay. Using this fully automated biological assay to test a standardized ototoxic dose of cisplatin against varying doses of compounds that protect or regenerate hair cells may facilitate rapid translation of candidate drugs into preclinical mammalian models of hearing loss.
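To illustrate the final quantification step, here is a hedged sketch (the heading convention and the 45-degree tolerance are assumptions, not taken from the assay) of a population rheotaxis index over detected fish orientations:

    import numpy as np

    def rheotaxis_index(headings_deg, tolerance_deg=45.0):
        # Heading 0 deg = facing directly into the current. Wrap angles
        # to [-180, 180] and count fish oriented head-to-current.
        wrapped = (np.asarray(headings_deg) + 180.0) % 360.0 - 180.0
        return float(np.mean(np.abs(wrapped) <= tolerance_deg))

    print(rheotaxis_index([5.0, -30.0, 170.0, 12.0]))  # 0.75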
Computational methods for evaluation of cell-based data assessment--Bioconductor.
Le Meur, Nolwenn
2013-02-01
Recent advances in the miniaturization and automation of technologies have enabled high-throughput screening of cell-based assays, bringing with it new challenges in data analysis. Automation, standardization, and reproducibility have become requirements for high-quality research. The Bioconductor community has worked in that direction, proposing several R packages for handling high-throughput data, including flow cytometry (FCM) experiments. Altogether, these packages cover the main steps of an FCM analysis workflow: data management, quality assessment, normalization, outlier detection, automated gating, cluster labeling, and feature extraction. Additionally, the open-source philosophy of R and Bioconductor, which offers room for new development, continuously drives research and improvement of these analysis methods, especially in the fields of clustering and data mining. This review presents the principal FCM packages currently available in R and Bioconductor, along with their advantages and limits. Copyright © 2012 Elsevier Ltd. All rights reserved.
Delpon, Grégory; Escande, Alexandre; Ruef, Timothée; Darréon, Julien; Fontaine, Jimmy; Noblet, Caroline; Supiot, Stéphane; Lacornerie, Thomas; Pasquier, David
2016-01-01
Automated atlas-based segmentation (ABS) algorithms have the potential to reduce variability in volume delineation. Several vendors offer software packages that are mainly used for cranial, head and neck, and prostate cases. The present study compares the contours produced by a radiation oncologist with the contours computed by different automated ABS algorithms for prostate bed cases, including the femoral heads, bladder, and rectum. Contour agreement was evaluated with several metrics, such as volume ratio, Dice coefficient, and Hausdorff distance. Results depended on the volume of interest and showed some discrepancies among the different software packages. Automatic contours can be a good starting point for organ delineation, since efficient editing tools are provided by the different vendors, and should become an important aid for organ-at-risk delineation in the next few years. PMID:27536556
Ahern, Thomas P.; Beck, Andrew H.; Rosner, Bernard A.; Glass, Ben; Frieling, Gretchen; Collins, Laura C.; Tamimi, Rulla M.
2017-01-01
Background: Computational pathology platforms incorporate digital microscopy with sophisticated image analysis to permit rapid, continuous measurement of protein expression. We compared two computational pathology platforms on their measurement of breast tumor estrogen receptor (ER) and progesterone receptor (PR) expression. Methods: Breast tumor microarrays from the Nurses' Health Study were stained for ER (n=592) and PR (n=187). One expert pathologist scored cases as positive if ≥1% of tumor nuclei exhibited stain. ER and PR were then measured with the Definiens Tissue Studio (automated) and Aperio Digital Pathology (user-supervised) platforms. Platform-specific measurements were compared using boxplots, scatter plots, and correlation statistics. Classification of ER and PR positivity by platform-specific measurements was evaluated with areas under receiver operating characteristic curves (AUC) from univariable logistic regression models, using expert pathologist classification as the standard. Results: Both platforms showed considerable overlap in continuous measurements of ER and PR between positive and negative groups classified by expert pathologist. Platform-specific measurements were strongly and positively correlated with one another (rho ≥ 0.77). The user-supervised Aperio workflow performed slightly better than the automated Definiens workflow at classifying ER positivity (AUC(Aperio) = 0.97; AUC(Definiens) = 0.90; difference = 0.07, 95% CI: 0.05, 0.09) and PR positivity (AUC(Aperio) = 0.94; AUC(Definiens) = 0.87; difference = 0.07, 95% CI: 0.03, 0.12). Conclusion: Paired hormone receptor expression measurements from two different computational pathology platforms agreed well with one another. The user-supervised workflow yielded better classification accuracy than the automated workflow. Appropriately validated computational pathology algorithms enrich molecular epidemiology studies with continuous protein expression data and may accelerate tumor biomarker discovery. PMID:27729430
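A minimal sketch of the AUC evaluation described above, with synthetic data standing in for the expert calls and platform measurements (all names illustrative):

    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    expert_positive = rng.integers(0, 2, size=200)               # pathologist standard
    platform_score = expert_positive + rng.normal(0, 0.5, 200)   # continuous measure

    # AUC of the univariable classification, as in AUC(Aperio) above.
    print(roc_auc_score(expert_positive, platform_score))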
Experiments in cooperative-arm object manipulation with a two-armed free-flying robot. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Koningstein, Ross
1990-01-01
Developing computed-torque controllers for complex manipulator systems is difficult with current techniques and tools because they address issues pertinent to simulation, as opposed to control. A new formulation of computed-torque (CT) control that leads to an automated computed-torque robot controller program is presented. This automated tool is used for simulations and experimental demonstrations of endpoint and object control from a free-flying robot. The new computed-torque formulation states the multibody control problem in an elegant, homogeneous, and practical form. A recursive dynamics algorithm is presented that numerically evaluates kinematics and dynamics terms for multibody systems given a topological description. Manipulators may be free-flying and may have closed-chain constraints. With the exception of object squeeze-force control, the algorithm does not deal with actuator redundancy. The algorithm is used to implement an automated 2D computed-torque dynamics and control package that allows joint, endpoint, orientation, momentum, and object squeeze-force control. This package obviates the need for hand derivation of kinematics and dynamics, and is used for both simulation and experimental control. Endpoint control experiments were performed on a laboratory robot that has two arms to manipulate payloads and uses an air bearing to achieve very low drag characteristics. Simulations and experimental data for endpoint and object controllers are presented for the experimental robot, a complex dynamic system. There is a rather wide set of conditions under which CT endpoint controllers can neglect robot base accelerations (but not motions) and achieve performance comparable to controllers that include base accelerations in the model. The regime over which this simplification holds is explored by simulation and experiment.
NASA Technical Reports Server (NTRS)
Borchardt, G. C.
1994-01-01
The Simple Tool for Automated Reasoning program (STAR) is an interactive, interpreted programming language for the development and operation of artificial intelligence (AI) application systems. STAR provides an environment for integrating traditional AI symbolic processing with functions and data structures defined in compiled languages such as C, FORTRAN and PASCAL. This type of integration occurs in a number of AI applications including interpretation of numerical sensor data, construction of intelligent user interfaces to existing compiled software packages, and coupling AI techniques with numerical simulation techniques and control systems software. The STAR language was created as part of an AI project for the evaluation of imaging spectrometer data at NASA's Jet Propulsion Laboratory. Programming in STAR is similar to other symbolic processing languages such as LISP and CLIP. STAR includes seven primitive data types and associated operations for the manipulation of these structures. A semantic network is used to organize data in STAR, with capabilities for inheritance of values and generation of side effects. The AI knowledge base of STAR can be a simple repository of records or it can be a highly interdependent association of implicit and explicit components. The symbolic processing environment of STAR may be extended by linking the interpreter with functions defined in conventional compiled languages. These external routines interact with STAR through function calls in either direction, and through the exchange of references to data structures. The hybrid knowledge base may thus be accessed and processed in general by either side of the application. STAR is initially used to link externally compiled routines and data structures. It is then invoked to interpret the STAR rules and symbolic structures. In a typical interactive session, the user enters an expression to be evaluated, STAR parses the input, evaluates the expression, performs any file input/output required, and displays the results. The STAR interpreter is written in the C language for interactive execution. It has been implemented on a VAX 11/780 computer operating under VMS, and the UNIX version has been implemented on a Sun Microsystems 2/170 workstation. STAR has a memory requirement of approximately 200K of 8 bit bytes, excluding externally compiled functions and application-dependent symbolic definitions. This program was developed in 1985.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Linguraru, Marius George; Panjwani, Neil; Fletcher, Joel G.
2011-12-15
Purpose: To evaluate the performance of a computer-aided detection (CAD) system for detecting colonic polyps at noncathartic computed tomography colonography (CTC) in conjunction with an automated image-based colon cleansing algorithm. Methods: An automated colon cleansing algorithm was designed to detect and subtract tagged stool, accounting for heterogeneity and poor tagging, for use in conjunction with a colon CAD system. The method is locally adaptive and combines intensity, shape, and texture analysis with probabilistic optimization. CTC data from cathartic-free bowel preparation were acquired for testing and training the parameters. Patients underwent various colonic preparations with barium or Gastroview in divided doses over 48 h before scanning. No laxatives were administered and no dietary modifications were required. Cases were selected from a polyp-enriched cohort and included scans in which at least 90% of the solid stool was visually estimated to be tagged and each colonic segment was distended in either the prone or supine view. The CAD system was run comparatively with and without the stool subtraction algorithm. Results: The dataset comprised 38 CTC scans from prone and/or supine scans of 19 patients containing 44 polyps larger than 10 mm (22 unique polyps, if matched between prone and supine scans). The stool subtraction results are robust in fine details around folds, in thin stool linings on the colonic wall, near polyps, and in large fluid/stool pools. The sensitivity of the CAD system is 70.5% per polyp at a rate of 5.75 false positives/scan without the stool subtraction module. Detection improved significantly (p = 0.009) after automated colon cleansing of the cathartic-free data, to an 86.4% true positive rate at 5.75 false positives/scan. Conclusions: An automated image-based colon cleansing algorithm designed to overcome the challenges of the noncathartic colon significantly improves the sensitivity of colon CAD, by approximately 15%.
RNA-Puzzles: A CASP-like evaluation of RNA three-dimensional structure prediction
Cruz, José Almeida; Blanchet, Marc-Frédérick; Boniecki, Michal; Bujnicki, Janusz M.; Chen, Shi-Jie; Cao, Song; Das, Rhiju; Ding, Feng; Dokholyan, Nikolay V.; Flores, Samuel Coulbourn; Huang, Lili; Lavender, Christopher A.; Lisi, Véronique; Major, François; Mikolajczak, Katarzyna; Patel, Dinshaw J.; Philips, Anna; Puton, Tomasz; Santalucia, John; Sijenyi, Fredrick; Hermann, Thomas; Rother, Kristian; Rother, Magdalena; Serganov, Alexander; Skorupski, Marcin; Soltysinski, Tomasz; Sripakdeevong, Parin; Tuszynska, Irina; Weeks, Kevin M.; Waldsich, Christina; Wildauer, Michael; Leontis, Neocles B.; Westhof, Eric
2012-01-01
We report the results of a first, collective, blind experiment in RNA three-dimensional (3D) structure prediction, encompassing three prediction puzzles. The goals are to assess the leading edge of RNA structure prediction techniques; compare existing methods and tools; and evaluate their relative strengths, weaknesses, and limitations in terms of sequence length and structural complexity. The results should give potential users insight into the suitability of available methods for different applications and should support the RNA structure prediction community's ongoing efforts to improve prediction tools. We also report the creation of an automated evaluation pipeline to facilitate the analysis of future RNA structure prediction exercises. PMID:22361291
Luna, Jorge M; Yip, Natalie; Pivovarov, Rimma; Vawdrey, David K
2016-08-01
Clinical teams in acute inpatient settings can benefit greatly from automated charting technologies that continuously monitor patient vital status. NewYork-Presbyterian has designed and developed a real-time patient monitoring system that integrates vital signs sensors, networking, and electronic health records to allow automatic charting of patient status. We evaluate the representativeness (a combination of agreement, safety, and timing) of a core vital sign across nursing intensity care protocols for a preliminary feasibility assessment. Our findings suggest that an automated way of summarizing heart rate provides a faithful representation of true heart rate status and can facilitate alternative approaches to burdensome manual nurse charting of physiological parameters.
Evaluating a variety of text-mined features for automatic protein function prediction with GOstruct.
Funk, Christopher S; Kahanda, Indika; Ben-Hur, Asa; Verspoor, Karin M
2015-01-01
Most computational methods that predict protein function do not take advantage of the large amount of information contained in the biomedical literature. In this work we evaluate both ontology term co-mention and bag-of-words features mined from the biomedical literature and analyze their impact in the context of a structured output support vector machine model, GOstruct. We find that even simple literature-based features are useful for predicting human protein function (F-max: Molecular Function = 0.408, Biological Process = 0.461, Cellular Component = 0.608). One advantage of using literature features is their ability to offer easy verification of automated predictions. We find through manual inspection of misclassifications that some false positive predictions could be biologically valid predictions based upon support extracted from the literature. Additionally, we present a "medium-throughput" pipeline that was used to annotate a large subset of co-mentions; we suggest that this strategy could help to speed up the rate at which proteins are curated.
ASI's space automation and robotics programs: The second step
NASA Technical Reports Server (NTRS)
Dipippo, Simonetta
1994-01-01
The strategic decisions taken by ASI in the last few years in building up the overall A&R program represent the technological drivers for other applications (i.e., internal automation of the Columbus Orbital Facility in the ESA Manned Space program, applications to mobile robots in both space and non-space environments, etc.). In this context, the main area of application now emerging is the scientific missions domain. Due to the broad range of applications of the developed technologies, both in the in-orbit servicing and maintenance of space structures and in scientific missions, ASI foresaw the need for a common technological development path, mainly focusing on: (1) control; (2) manipulation; (3) on-board computing; (4) sensors; and (5) teleoperation. Before entering into new applications in the scientific missions field, a brief overview of the status of the SPIDER-related projects is given, also underlining possible new applications for LEO/GEO space structures.
ERIC Educational Resources Information Center
Connelly, E. M.; And Others
A new approach to deriving human performance measures and criteria for use in automatically evaluating trainee performance is described. Ultimately, this approach will allow automatic measurement of pilot performance in a flight simulator or from recorded in-flight data. An efficient method of representing performance data within a computer is…
Linguistic Features of Writing Quality
ERIC Educational Resources Information Center
McNamara, Danielle S.; Crossley, Scott A.; McCarthy, Philip M.
2010-01-01
In this study, a corpus of expert-graded essays, based on a standardized scoring rubric, is computationally evaluated so as to distinguish the differences between those essays that were rated as high and those rated as low. The automated tool, Coh-Metrix, is used to examine the degree to which high- and low-proficiency essays can be predicted by…
Optimum Edging and Trimming of Hardwood Lumber
Carmen Regalado; D. Earl Kline; Philip A. Araman
1992-01-01
Before the adoption of an automated system for optimizing edging and trimming in hardwood mills, the performance of present manual systems must be evaluated to provide a basis for comparison. A study was made in which lumber values recovered in actual hardwood operations were compared to the output of a computer-based procedure for edging and trimming optimization. The...
Automated breast segmentation in ultrasound computer tomography SAFT images
NASA Astrophysics Data System (ADS)
Hopp, T.; You, W.; Zapf, M.; Tan, W. Y.; Gemmeke, H.; Ruiter, N. V.
2017-03-01
Ultrasound Computer Tomography (USCT) is a promising new imaging system for breast cancer diagnosis. An essential step before further processing is to remove the water background from the reconstructed images. In this paper we present a fully-automated image segmentation method based on three-dimensional active contours. The active contour method is extended by applying gradient vector flow and encoding the USCT aperture characteristics as additional weighting terms. A surface detection algorithm based on a ray model is developed to initialize the active contour, which is iteratively deformed to capture the breast outline in USCT reflection images. The evaluation with synthetic data showed that the method is able to cope with noisy images, and is not influenced by the position of the breast and the presence of scattering objects within the breast. The proposed method was applied to 14 in-vivo images resulting in an average surface deviation from a manual segmentation of 2.7 mm. We conclude that automated segmentation of USCT reflection images is feasible and produces results comparable to a manual segmentation. By applying the proposed method, reproducible segmentation results can be obtained without manual interaction by an expert.
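For context, the surface-deviation figure quoted above can be computed along these lines; a minimal sketch assuming both segmentations are reduced to N x 3 arrays of surface points in millimetres (names are ours, not from the paper):

    import numpy as np
    from scipy.spatial import cKDTree

    def mean_surface_deviation(auto_points, manual_points):
        # Mean nearest-neighbour distance (mm) from the automated
        # surface to the manually segmented surface.
        distances, _ = cKDTree(manual_points).query(auto_points)
        return float(np.mean(distances))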
Automated Computerized Analysis of Speech in Psychiatric Disorders
Cohen, Alex S.; Elvevåg, Brita
2014-01-01
Purpose of Review: Disturbances in communication are a hallmark of severe mental illness (SMI). Recent technological advances have paved the way for objectifying communication using automated computerized linguistic and acoustic analysis. We review recent studies applying various computer-based assessments to the natural language produced by adult patients with severe mental illness. Recent Findings: Automated computerized methods afford tools with which it is possible to objectively evaluate patients in a reliable, valid, and efficient manner that complements human ratings. Crucially, these measures correlate with important clinical measures. The clinical relevance of these novel metrics has been demonstrated by showing their relationship to functional outcome measures, their in vivo link to classic 'language' regions in the brain, and, in the case of linguistic analysis, their relationship to candidate genes for severe mental illness. Summary: Computer-based assessments of natural language afford a framework with which to measure communication disturbances in adults with SMI. Emerging evidence suggests that they can be reliable and valid, and overcome many practical limitations of more traditional assessment methods. The advancement of these technologies offers unprecedented potential for measuring and understanding some of the most crippling symptoms of some of the most debilitating illnesses known to humankind. PMID:24613984
Reliability analysis of laminated CMC components through shell subelement techniques
NASA Technical Reports Server (NTRS)
Starlinger, Alois; Duffy, Stephen F.; Gyekenyesi, John P.
1992-01-01
An updated version of the integrated design program Composite Ceramics Analysis and Reliability Evaluation of Structures (C/CARES) was developed for the reliability evaluation of ceramic matrix composites (CMC) laminated shell components. The algorithm is now split into two modules: a finite-element data interface program and a reliability evaluation algorithm. More flexibility is achieved, allowing for easy implementation with various finite-element programs. The interface program creates a neutral data base which is then read by the reliability module. This neutral data base concept allows easy data transfer between different computer systems. The new interface program from the finite-element code Matrix Automated Reduction and Coupling (MARC) also includes the option of using hybrid laminates (a combination of plies of different materials or different layups) and allows for variations in temperature fields throughout the component. In the current version of C/CARES, a subelement technique was implemented, enabling stress gradients within an element to be taken into account. The noninteractive reliability function is now evaluated at each Gaussian integration point instead of using averaging techniques. As a result of the increased number of stress evaluation points, considerable improvements in the accuracy of reliability analyses were realized.
DOT National Transportation Integrated Search
1974-08-01
Volume 4 describes the automation requirements. A presentation of automation requirements is made for an advanced air traffic management system in terms of controller work force, computer resources, controller productivity, system manning, failure ef...
MIEC-SVM: automated pipeline for protein peptide/ligand interaction prediction.
Li, Nan; Ainsworth, Richard I; Wu, Meixin; Ding, Bo; Wang, Wei
2016-03-15
MIEC-SVM is a structure-based method for predicting protein recognition specificity. Here, we present an automated MIEC-SVM pipeline providing an integrated and user-friendly workflow for construction and application of MIEC-SVM models. The pipeline can handle standard amino acids and those with post-translational modifications (PTMs), as well as small molecules. Moreover, multi-threading and support for Sun Grid Engine (SGE) are implemented to significantly boost computational efficiency. The program is available at http://wanglab.ucsd.edu/MIEC-SVM. Contact: wei-wang@ucsd.edu. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
EON: a component-based approach to automation of protocol-directed therapy.
Musen, M A; Tu, S W; Das, A K; Shahar, Y
1996-01-01
Provision of automated support for planning protocol-directed therapy requires a computer program to take as input clinical data stored in an electronic patient-record system and to generate as output recommendations for therapeutic interventions and laboratory testing that are defined by applicable protocols. This paper presents a synthesis of research carried out at Stanford University to model the therapy-planning task and to demonstrate a component-based architecture for building protocol-based decision-support systems. We have constructed general-purpose software components that (1) interpret abstract protocol specifications to construct appropriate patient-specific treatment plans; (2) infer from time-stamped patient data higher-level, interval-based, abstract concepts; (3) perform time-oriented queries on a time-oriented patient database; and (4) allow acquisition and maintenance of protocol knowledge in a manner that facilitates efficient processing both by humans and by computers. We have implemented these components in a computer system known as EON. Each of the components has been developed, evaluated, and reported independently. We have evaluated the integration of the components as a composite architecture by implementing T-HELPER, a computer-based patient-record system that uses EON to offer advice regarding the management of patients who are following clinical trial protocols for AIDS or HIV infection. A test of the reuse of the software components in a different clinical domain demonstrated rapid development of a prototype application to support protocol-based care of patients who have breast cancer. PMID:8930854
Zelesky, Veronica; Schneider, Richard; Janiszewski, John; Zamora, Ismael; Ferguson, James; Troutman, Matthew
2013-05-01
The ability to supplement high-throughput metabolic clearance data with structural information defining the site of metabolism should allow design teams to streamline their synthetic decisions. However, broad application of metabolite identification in early drug discovery has been limited, largely due to the time required for data review and structural assignment. The advent of mass defect filtering and its application toward metabolite scouting paved the way for the development of software automation tools capable of rapidly identifying drug-related material in complex biological matrices. Two semi-automated commercial software applications, MetabolitePilot™ and Mass-MetaSite™, were evaluated to assess the relative speed and accuracy of structural assignments using data generated on a high-resolution MS platform. Review of these applications has demonstrated their utility in providing accurate results in a time-efficient manner, leading to acceleration of metabolite identification initiatives while highlighting the continued need for biotransformation expertise in the interpretation of more complex metabolic reactions.
Verification Test of Automated Robotic Assembly of Space Truss Structures
NASA Technical Reports Server (NTRS)
Rhodes, Marvin D.; Will, Ralph W.; Quach, Cuong C.
1995-01-01
A multidisciplinary program has been conducted at the Langley Research Center to develop operational procedures for supervised autonomous assembly of truss structures suitable for large-aperture antennas. The hardware and operations required to assemble a 102-member tetrahedral truss and attach 12 hexagonal panels were developed and evaluated. A brute-force automation approach was used to develop baseline assembly hardware and software techniques. However, as the system matured and operations were proven, upgrades were incorporated and assessed against the baseline test results. These upgrades included the use of distributed microprocessors to control dedicated end-effector operations, machine vision guidance for strut installation, and the use of an expert-system-based executive control program. This paper summarizes the developmental phases of the program, the results of several assembly tests, and a series of proposed enhancements. No problems that would preclude automated in-space assembly of truss structures have been encountered. The test system was developed at a breadboard level, and continued development at an enhanced level is warranted.
Ratanawongsa, Neda; Quan, Judy; Handley, Margaret A; Sarkar, Urmimala; Schillinger, Dean
2018-04-06
Clinicians have difficulty accurately assessing medication non-adherence within chronic disease care settings. Health information technology (HIT) could offer novel tools for assessing medication adherence in diverse populations outside of usual health care settings. In a multilingual urban safety net population, we examined the validity of assessing adherence using automated telephone self-management (ATSM) queries, when compared with non-adherence measured by the continuous medication gap (CMG) on pharmacy claims. We hypothesized that patients reporting more days of missed pills in ATSM queries would have higher rates of non-adherence as measured by CMG, and that ATSM adherence assessments would perform as well as structured interview assessments. As part of an ATSM-facilitated diabetes self-management program, low-income health plan members typed numeric responses to rotating weekly ATSM queries: "In the last 7 days, how many days did you MISS taking your …" diabetes, blood pressure, or cholesterol pill. Research assistants asked similar questions in computer-assisted structured telephone interviews. We measured CMG from claims over the 12 preceding months. To evaluate convergent validity, we compared rates of optimal adherence (CMG ≤ 20%) across respondents reporting 0, 1, and ≥ 2 missed pill days on ATSM and on structured interview. Among 210 participants, 46% had limited health literacy, 57% spoke Cantonese, and 19% Spanish. ATSM respondents reported ≥ 1 missed day for diabetes (33%), blood pressure (19%), and cholesterol (36%) pills. Interview respondents reported ≥ 1 missed day for diabetes (28%), blood pressure (21%), and cholesterol (26%) pills. Optimal adherence rates by CMG were lower among ATSM respondents reporting more missed days for blood pressure (p = 0.02) and cholesterol (p < 0.01); by interview, differences were significant for cholesterol (p = 0.01). Language-concordant ATSM demonstrated modest potential for assessing adherence. Studies should evaluate HIT assessments of medication beliefs and concerns in diverse populations. NCT00683020, registered May 21, 2008.
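For readers unfamiliar with the claims-based measure, here is a hedged sketch of a continuous medication gap computation; the fill-record layout is an assumption, not the study's claims schema:

    def continuous_medication_gap(fills, period_days=365):
        # fills: list of (start_day, days_supply) tuples from pharmacy
        # claims. CMG = fraction of observed days with no supply on hand.
        covered = set()
        for start, supply in fills:
            covered.update(range(start, min(start + supply, period_days)))
        return 1.0 - len(covered) / period_days

    # Optimal adherence in the study corresponds to CMG <= 20%.
    print(continuous_medication_gap([(0, 90), (95, 90), (190, 90), (285, 80)]))  # ~0.04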
Lardy, Matthew A; Lebrun, Laurie; Bullard, Drew; Kissinger, Charles; Gobbi, Alberto
2012-05-25
In modern drug discovery campaigns, computational chemists must be concerned not only with improving the potency of molecules but also with reducing any off-target ADMET activity. There is a plethora of antitargets that computational chemists may have to consider. Fortunately, many antitargets have crystal structures deposited in the PDB. These structures are immediately useful to our Autocorrelator: an automated model generator that optimizes variables for building computational models. This paper describes the use of the Autocorrelator to construct high-quality docking models for cytochrome P450 2C9 (CYP2C9) from two publicly available crystal structures. Both models result in strong correlation coefficients (R² > 0.66) between the predicted and experimentally determined log(IC₅₀) values. Results from the two models overlap well with each other, converging on the same scoring function and deprotonated charge state, and predicting the same binding orientation for our collection of molecules.
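The reported fit check amounts to a squared Pearson correlation; a small illustrative sketch (names and data are ours):

    import numpy as np

    def r_squared(predicted_log_ic50, measured_log_ic50):
        # Squared Pearson correlation between docking-model predictions
        # and experimentally determined log(IC50) values.
        r = np.corrcoef(predicted_log_ic50, measured_log_ic50)[0, 1]
        return r * r

    print(r_squared([0.1, 0.5, 0.9, 1.4], [0.2, 0.4, 1.0, 1.3]))  # near 1.0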
Automating the parallel processing of fluid and structural dynamics calculations
NASA Technical Reports Server (NTRS)
Arpasi, Dale J.; Cole, Gary L.
1987-01-01
The NASA Lewis Research Center is actively involved in the development of expert system technology to assist users in applying parallel processing to computational fluid and structural dynamic analysis. The goal of this effort is to eliminate the necessity for the physical scientist to become a computer scientist in order to effectively use the computer as a research tool. Programming and operating software utilities have previously been developed to solve systems of ordinary nonlinear differential equations on parallel scalar processors. Current efforts are aimed at extending these capabilities to systems of partial differential equations, that describe the complex behavior of fluids and structures within aerospace propulsion systems. This paper presents some important considerations in the redesign, in particular, the need for algorithms and software utilities that can automatically identify data flow patterns in the application program and partition and allocate calculations to the parallel processors. A library-oriented multiprocessing concept for integrating the hardware and software functions is described.
Decision making and problem solving with computer assistance
NASA Technical Reports Server (NTRS)
Kraiss, F.
1980-01-01
In modern guidance and control systems, the human as manager, supervisor, decision maker, problem solver, and trouble shooter often has to cope with a marginal mental workload. To improve this situation, computers should be used to relieve the operator of mental stress. This should not be done solely by increased automation, but by a reasonable sharing of tasks in a human-computer team, in which the computer supports the human intelligence. Recent developments in this area are summarized. It is shown that interactive support of the operator by an intelligent computer is feasible during information evaluation, decision making, and problem solving. The applied artificial intelligence algorithms comprise pattern recognition and classification, adaptation and machine learning, as well as dynamic and heuristic programming. Elementary examples are presented to explain the basic principles.
Automated clinical system for chromosome analysis
NASA Technical Reports Server (NTRS)
Castleman, K. R.; Friedan, H. J.; Johnson, E. T.; Rennie, P. A.; Wall, R. J. (Inventor)
1978-01-01
An automatic chromosome analysis system is provided wherein a suitably prepared slide with chromosome spreads thereon is placed on the stage of an automated microscope. The automated microscope stage is computer operated to move the slide to enable detection of chromosome spreads on the slide. The X and Y location of each chromosome spread that is detected is stored. The computer measures the chromosomes in a spread, classifies them by group or by type and also prepares a digital karyotype image. The computer system can also prepare a patient report summarizing the result of the analysis and listing suspected abnormalities.
Automated essay scoring and the future of educational assessment in medical education.
Gierl, Mark J; Latifi, Syed; Lai, Hollis; Boulais, André-Philippe; De Champlain, André
2014-10-01
Constructed-response tasks, which range from short-answer tests to essay questions, are included in assessments of medical knowledge because they allow educators to measure students' ability to think, reason, solve complex problems, communicate, and collaborate through their use of writing. However, constructed-response tasks are also costly to administer and challenging to score because they rely on human raters. One alternative to the manual scoring process is to integrate computer technology with writing assessment. The process of scoring written responses using computer programs is known as 'automated essay scoring' (AES). An AES system uses a computer program that builds a scoring model by extracting linguistic features from a constructed-response prompt that has been pre-scored by human raters and then, using machine learning algorithms, maps the linguistic features to the human scores so that the computer can be used to classify (i.e. score or grade) the responses of a new group of students. The accuracy of the score classification can be evaluated using different measures of agreement. Automated essay scoring provides a method for scoring constructed-response tests that complements the current use of selected-response testing in medical education. The method can serve medical educators by providing the summative scores required for high-stakes testing. It can also serve medical students by providing them with detailed feedback as part of a formative assessment process. Automated essay scoring systems yield scores that consistently agree with those of human raters at a level as high as, if not higher than, the level of agreement among human raters themselves. The system offers medical educators many benefits for scoring constructed-response tasks, such as improving the consistency of scoring, reducing the time required for scoring and reporting, minimising the costs of scoring, and providing students with immediate feedback on constructed-response tasks. © 2014 John Wiley & Sons Ltd.
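As a toy illustration of the AES training loop described above (the TF-IDF feature set and ridge learner are assumptions for the sketch; operational systems extract far richer linguistic features):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import Ridge
    from sklearn.pipeline import make_pipeline

    essays = ["coherent, well-structured clinical reasoning ...",
              "short fragmentary answer ...",
              "detailed but poorly organised response ...",
              "off-topic response ..."]
    human_scores = [5.0, 2.0, 4.0, 1.0]      # pre-scored by human raters

    # Map linguistic features to human scores, then score new responses.
    model = make_pipeline(TfidfVectorizer(), Ridge())
    model.fit(essays, human_scores)
    print(model.predict(["a new constructed response"]))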
Micromechanics based simulation of ductile fracture in structural steels
NASA Astrophysics Data System (ADS)
Yellavajjala, Ravi Kiran
The broader aim of this research is to develop a fundamental understanding of the ductile fracture process in structural steels, propose robust computational models to quantify the associated damage, and provide numerical tools to simplify the implementation of these computational models into a general finite element framework. Mechanical testing on different geometries of test specimens made of ASTM A992 steel is conducted to experimentally characterize ductile fracture at different stress states under monotonic and ultra-low cycle fatigue (ULCF) loading. Scanning electron microscopy studies of the fractured surfaces are conducted to decipher the underlying microscopic damage mechanisms that cause fracture in ASTM A992 steels. Detailed micromechanical analyses for monotonic and cyclic loading are conducted to understand the influence of stress triaxiality and the Lode parameter on the void growth phase of ductile fracture. Based on the monotonic analyses, an uncoupled micromechanical void growth model is proposed to predict ductile fracture. This model is then incorporated into the finite element program as a weakly coupled model to simulate the loss of load-carrying capacity in the post-microvoid-coalescence regime at high triaxialities. Based on the cyclic analyses, an uncoupled micromechanics-based cyclic void growth model is developed to predict the ULCF life of ASTM A992 steels subjected to high stress triaxialities. Furthermore, a computational fracture locus for ASTM A992 steels is developed and incorporated into the finite element program as an uncoupled ductile fracture model. This model can be used to predict ductile fracture initiation under monotonic loading over a wide range of triaxialities and Lode parameters. In addition, a coupled microvoid elongation and dilation based continuum damage model is proposed, implemented, calibrated, and validated. This model is capable of simulating the local softening caused by the various phases of the ductile fracture process under monotonic loading for a wide range of stress states. Novel differentiation procedures based on complex analysis, along with existing finite-difference methods and automatic differentiation, are extended using perturbation techniques to evaluate tensor derivatives. These tensor differentiation techniques are then used to automate nonlinear constitutive models into an implicit finite element framework. Finally, the efficiency of these automation procedures is demonstrated using benchmark problems.
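The complex-step method is one such complex-perturbation technique; a minimal scalar sketch (the tensor-valued case follows component-wise):

    import numpy as np

    def complex_step_derivative(f, x, h=1e-20):
        # f'(x) ~ Im(f(x + i*h)) / h, with no subtractive cancellation,
        # so the step h can be taken extremely small.
        return np.imag(f(x + 1j * h)) / h

    print(complex_step_derivative(np.sin, 1.0))  # ~cos(1.0) = 0.540302...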
Computational mining for hypothetical patterns of amino acid side chains in protein data bank (PDB)
NASA Astrophysics Data System (ADS)
Ghani, Nur Syatila Ab; Firdaus-Raih, Mohd
2018-04-01
The three-dimensional structure of a protein can provide insights into its function. Functional relationships between proteins can be inferred from fold and sequence similarities. In certain cases, sequence or fold comparison fails to establish homology between proteins with similar mechanisms. Since structure is more conserved than sequence, a constellation of functional residues can be similarly arranged among proteins of similar mechanism. Local structural similarity searches are able to detect such constellations of amino acids in distinct proteins, which can be useful for annotating proteins of unknown function. Detection of such patterns of amino acids on a large scale can increase the repertoire of important 3D motifs, since the currently known 3D motifs cannot keep pace with the ever-increasing number of uncharacterized proteins to be annotated. Here, a computational platform for the automated detection of 3D motifs is described. A fuzzy-pattern searching algorithm derived from IMagine an Amino Acid 3D Arrangement search EnGINE (IMAAAGINE) was implemented to develop an automated method for searching hypothetical patterns of amino acid side chains in the Protein Data Bank (PDB), without the need for prior knowledge of related sequences or structures of the pattern of interest. We present an example search: the detection of a hypothetical pattern derived from the known C2H2 structural motif of zinc fingers. The conservation of particular patterns of amino acid side chains in unrelated proteins is highlighted. This approach can act as a complementary method to available structure- and sequence-based platforms and may contribute to improving functional association between proteins.
Automation of Precise Time Reference Stations (PTRS)
NASA Astrophysics Data System (ADS)
Wheeler, P. J.
1985-04-01
The U.S. Naval Observatory is presently engaged in a program of automating precise time stations (PTS) and precise time reference stations (PTRS) by using a versatile minicomputer-controlled data acquisition system (DAS). The data acquisition system is configured to monitor locally available PTTI signals such as LORAN-C, OMEGA, and/or the Global Positioning System. In addition, the DAS performs local standard intercomparison. Computer telephone communications provide automatic data transfer to the Naval Observatory. Subsequently, after analysis of the data, results and information can be sent back to the precise time reference station to provide automatic control of remote station timing. The DAS configuration is designed around state-of-the-art, high-reliability standard industrial modules. The system integration and software are standardized but allow considerable flexibility to satisfy special local requirements such as stability measurements, performance evaluation, and printing of messages and certificates. The DAS operates completely independently and may be queried or controlled at any time with a computer or terminal device (control is protected for use by authorized personnel only). Such DAS-equipped PTS are operational in Hawaii, California, Texas, and Florida.
Ingenious Snake: An Adaptive Multi-Class Contours Extraction
NASA Astrophysics Data System (ADS)
Li, Baolin; Zhou, Shoujun
2018-04-01
The active contour model (ACM) plays an important role in computer vision and medical image applications. Traditional ACMs extract a single class of object contours, whereas the simultaneous extraction of multiple classes of contours of interest (i.e., various contours that may be closed or open-ended) has not been solved so far. Therefore, a novel ACM named "Ingenious Snake" is proposed to adaptively extract these contours of interest. In the first place, ridge points are extracted based on local phase measurement of the gradient vector flow field, so that the subsequent ridgeline initialization is automated and fast. Secondly, the contours' deformation and evolution are implemented with the ingenious snake. In the experiments, the results of initialization, deformation, and evolution are compared with those of existing methods; the quantitative evaluation of the structure extraction is satisfactory with respect to effectiveness and accuracy.
Automated analysis of biological oscillator models using mode decomposition.
Konopka, Tomasz
2011-04-01
Oscillating signals produced by biological systems have shapes, described by their Fourier spectra, that can potentially reveal the mechanisms that generate them. Extracting this information from measured signals is interesting for the validation of theoretical models, discovery and classification of interaction types, and for optimal experiment design. An automated workflow is described for the analysis of oscillating signals. A software package is developed to match signal shapes to hundreds of a priori viable model structures defined by a class of first-order differential equations. The package computes parameter values for each model by exploiting the mode decomposition of oscillating signals and formulating the matching problem in terms of systems of simultaneous polynomial equations. On the basis of the computed parameter values, the software returns a list of models consistent with the data. In validation tests with synthetic datasets, it not only shortlists those model structures used to generate the data but also shows that excellent fits can sometimes be achieved with alternative equations. The listing of all consistent equations is indicative of how further invalidation might be achieved with additional information. When applied to data from a microarray experiment on mice, the procedure finds several candidate model structures to describe interactions related to the circadian rhythm. This shows that experimental data on oscillators is indeed rich in information about gene regulation mechanisms. The software package is available at http://babylone.ulb.ac.be/autoosc/.
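A minimal sketch of the decomposition step, assuming a uniformly sampled oscillating signal (the synthetic circadian-like signal is ours); the magnitudes and phases of the low-order modes are the "shape" information matched against candidate model structures:

    import numpy as np

    t = np.linspace(0.0, 48.0, 480, endpoint=False)            # hours
    signal = 1.0 + np.cos(2*np.pi*t/24) + 0.3*np.cos(4*np.pi*t/24 + 0.5)

    modes = np.fft.rfft(signal) / len(signal)                  # Fourier modes
    freqs = np.fft.rfftfreq(len(signal), d=t[1] - t[0])        # cycles per hour
    top = np.argsort(np.abs(modes[1:]))[::-1][:2] + 1          # dominant modes
    print(freqs[top], np.abs(modes[top]))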
Computer Automated Ultrasonic Inspection System
1985-02-06
Only table-of-contents fragments of this report survive in the record; they cover statistical analysis capability, nondestructive evaluation terminal hardware and vendor information, system status, creating a hold tape, statistical analysis data extraction and report generation, quality assurance reports, and nondestructive inspection.
Automated array assembly task, phase 1
NASA Technical Reports Server (NTRS)
Carbajal, B. G.
1977-01-01
State-of-the-art technologies applicable to silicon solar cell and solar cell module fabrication were assessed. The assessment consisted of a technical feasibility evaluation and a cost projection for high-volume production of solar cell modules. Design equations based on minimum power loss were used as a tool in the evaluation of metallization technologies. A solar cell process sensitivity study using models, computer calculations, and experimental data was used to identify correlations between process step variation and cell output variation.
Preliminary Results from the Application of Automated Adjoint Code Generation to CFL3D
NASA Technical Reports Server (NTRS)
Carle, Alan; Fagan, Mike; Green, Lawrence L.
1998-01-01
This report describes preliminary results obtained using an automated adjoint code generator for Fortran to augment a widely-used computational fluid dynamics flow solver to compute derivatives. These preliminary results with this augmented code suggest that, even in its infancy, the automated adjoint code generator can accurately and efficiently deliver derivatives for use in transonic Euler-based aerodynamic shape optimization problems with hundreds to thousands of independent design variables.
1977-01-26
Sisteme Matematicheskogo Obespecheniya YeS EVM [Applied Programs in the Software System for the Unified System of Computers], by A. Ye. Fateyev, A. I...computerized systems are most effective in large production complexes, in which the level of utilization of computers can be as high as 500,000...performance of these tasks could be furthered by the complex introduction of electronic computers in automated control systems. The creation of ASU
AN ULTRAVIOLET-VISIBLE SPECTROPHOTOMETER AUTOMATION SYSTEM. PART I: FUNCTIONAL SPECIFICATIONS
This document contains the project definition, the functional requirements, and the functional design for a proposed computer automation system for scanning spectrophotometers. The system will be implemented on a Data General computer using the BASIC language. The system is a rea...