Science.gov

Sample records for computationally efficient cad

  1. A new computationally efficient CAD system for pulmonary nodule detection in CT imagery.

    PubMed

    Messay, Temesguen; Hardie, Russell C; Rogers, Steven K

    2010-06-01

    Early detection of lung nodules is extremely important for the diagnosis and clinical management of lung cancer. In this paper, a novel computer-aided detection (CAD) system for the detection of pulmonary nodules in thoracic computed tomography (CT) imagery is presented. The paper describes the architecture of the CAD system and assesses its performance on a publicly available database to serve as a benchmark for future research efforts. Training and tuning of all modules in our CAD system are done using a separate and independent dataset provided courtesy of the University of Texas Medical Branch (UTMB). The publicly available testing dataset is that created by the Lung Image Database Consortium (LIDC). The LIDC data used here comprise 84 CT scans containing 143 nodules, ranging from 3 to 30 mm in effective size, that were manually segmented by at least one of the four radiologists. The CAD system uses a fully automated lung segmentation algorithm to define the boundaries of the lung regions. It combines intensity thresholding with morphological processing to detect and segment nodule candidates simultaneously. A set of 245 features is computed for each segmented nodule candidate. A sequential forward selection process is used to determine the optimum subset of features for two distinct classifiers, a Fisher Linear Discriminant (FLD) classifier and a quadratic classifier. A performance comparison between the two classifiers is presented, and based on this, the FLD classifier is selected for the CAD system. With an average of 517.5 nodule candidates per case/scan (517.5 ± 72.9), the proposed front-end detector/segmentor is able to detect 92.8% of all the nodules in the LIDC testing dataset (based on merged ground truth). The mean overlap between the nodule regions delineated by three or more radiologists and the ones segmented by the proposed segmentation algorithm is approximately 63%. Overall, with a specificity of 3 false positives (FPs) per case/patient on
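
    The classification stage described above (sequential forward selection wrapped around a Fisher Linear Discriminant) follows a standard pattern that can be sketched compactly. This is a minimal illustration, not the authors' implementation: the feature matrix X, labels y, the regularization constant, and the AUC selection criterion are all assumptions.

```python
import numpy as np

def fld_train(X, y):
    """Fit a two-class Fisher Linear Discriminant: w solves Sw w = (m1 - m0)."""
    m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    Sw = np.cov(X[y == 0], rowvar=False) + np.cov(X[y == 1], rowvar=False)
    return np.linalg.solve(Sw + 1e-6 * np.eye(X.shape[1]), m1 - m0)

def fld_auc(X, y, w):
    """Rank-based AUC of the projected scores, used as the selection criterion."""
    s = X @ w
    return (s[y == 1][:, None] > s[y == 0][None, :]).mean()

def sequential_forward_selection(X, y, k):
    """Greedily add the feature that most improves FLD AUC, up to k features."""
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < k:
        scores = [fld_auc(X[:, selected + [f]], y,
                          fld_train(X[:, selected + [f]], y)) for f in remaining]
        best = remaining[int(np.argmax(scores))]
        selected.append(best)
        remaining.remove(best)
    return selected
```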

  2. Computing Mass Properties From AutoCAD

    NASA Technical Reports Server (NTRS)

    Jones, A.

    1990-01-01

    Mass properties of structures computed from data in drawings. AutoCAD to Mass Properties (ACTOMP) computer program developed to facilitate quick calculations of mass properties of structures containing many simple elements in such complex configurations as trusses or sheet-metal containers. Mathematically modeled in AutoCAD or compatible computer-aided design (CAD) system in minutes by use of three-dimensional elements. Written in Microsoft Quick-Basic (Version 2.0).
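
    ACTOMP-style roll-ups of many simple elements reduce to summing element contributions and applying the parallel-axis theorem. Below is a minimal sketch under a point-mass-per-element assumption (the actual program models true three-dimensional elements and is written in Quick-Basic):

```python
import numpy as np

def composite_mass_properties(masses, centroids):
    """Total mass, centre of mass, and inertia tensor about that centre,
    treating each element as a point mass (parallel-axis terms only)."""
    m = np.asarray(masses, float)
    c = np.asarray(centroids, float)          # shape (n, 3)
    M = m.sum()
    cg = (m[:, None] * c).sum(axis=0) / M
    r = c - cg                                # element offsets from combined CG
    I = np.zeros((3, 3))
    for mi, ri in zip(m, r):
        I += mi * ((ri @ ri) * np.eye(3) - np.outer(ri, ri))
    return M, cg, I

# Example: two 1 kg elements 2 m apart along x
M, cg, I = composite_mass_properties([1.0, 1.0], [[0, 0, 0], [2, 0, 0]])
print(M, cg, I[1, 1])   # 2.0, [1. 0. 0.], Iyy = 2 * 1 * 1^2 = 2.0
```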

  3. A CAD (Classroom Assessment Design) of a Computer Programming Course

    ERIC Educational Resources Information Center

    Hawi, Nazir S.

    2012-01-01

    This paper presents a CAD (classroom assessment design) of an entry-level undergraduate computer programming course "Computer Programming I". CAD has been the product of a long experience in teaching computer programming courses including teaching "Computer Programming I" 22 times. Each semester, CAD is evaluated and modified…

  4. Information-theoretic CAD system in mammography: Entropy-based indexing for computational efficiency and robust performance

    SciTech Connect

    Tourassi, Georgia D.; Harrawood, Brian; Singh, Swatee; Lo, Joseph Y.

    2007-08-15

    We have previously presented a knowledge-based computer-assisted detection (KB-CADe) system for the detection of mammographic masses. The system is designed to compare a query mammographic region with mammographic templates of known ground truth. The templates are stored in an adaptive knowledge database. Image similarity is assessed with information theoretic measures (e.g., mutual information) derived directly from the image histograms. A previous study suggested that the diagnostic performance of the system steadily improves as the knowledge database is initially enriched with more templates. However, as the database increases in size, an exhaustive comparison of the query case with each stored template becomes computationally burdensome. Furthermore, blind storing of new templates may result in redundancies that do not necessarily improve diagnostic performance. To address these concerns we investigated an entropy-based indexing scheme for improving the speed of analysis and for satisfying database storage restrictions without compromising the overall diagnostic performance of our KB-CADe system. The indexing scheme was evaluated on two different datasets as (i) a search mechanism to sort through the knowledge database, and (ii) a selection mechanism to build a smaller, concise knowledge database that is easier to maintain but still effective. There were two important findings in the study. First, entropy-based indexing is an effective strategy to quickly identify a subset of templates that are most relevant to a given query. Only this subset needs to be analyzed in more detail using mutual information for optimized decision making regarding the query. Second, a selective entropy-based deposit strategy may be preferable, in which only high-entropy cases are maintained in the knowledge database. Overall, the proposed entropy-based indexing scheme was shown to reduce the computational cost of our KB-CADe system by 55% to 80% while maintaining the system's diagnostic
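
    The two-stage retrieval described here (a cheap entropy index to shortlist templates, with the more expensive mutual information computed only on the shortlist) can be sketched as follows. The bin count, shortlist size k, and function names are illustrative assumptions, not the authors' code:

```python
import numpy as np

def hist_entropy(img, bins=64):
    """Shannon entropy of the grey-level histogram; cheap index key."""
    p, _ = np.histogram(img, bins=bins)
    p = p / p.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def mutual_information(a, b, bins=64):
    """MI from the joint grey-level histogram of two equally sized regions."""
    pxy, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return (pxy[nz] * np.log2(pxy[nz] / (px[:, None] * py[None, :])[nz])).sum()

def retrieve(query, templates, entropies, k=50):
    """Entropy index: shortlist the k templates closest in entropy, then
    rank only that subset by the (more expensive) mutual information."""
    hq = hist_entropy(query)
    shortlist = np.argsort(np.abs(np.asarray(entropies) - hq))[:k]
    return sorted(shortlist, key=lambda i: -mutual_information(query, templates[i]))
```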

  5. Computer-aided diagnosis (CAD) for colonoscopy

    NASA Astrophysics Data System (ADS)

    Gu, Jia; Poirson, Allen

    2007-03-01

    Colorectal cancer is the second leading cause of cancer deaths, and ranks third for new cancer cases and cancer mortality for both men and women. However, its death rate can be dramatically reduced by appropriate treatment when early detection is available. The purpose of colonoscopy is to identify and assess the severity of lesions, which may be flat or protruding. Due to the subjective nature of the examination, colonoscopic proficiency is highly variable and dependent upon the colonoscopist's knowledge and experience. An automated image processing system providing an objective, rapid, and inexpensive analysis of video from a standard colonoscope could provide a valuable tool for screening and diagnosis. In this paper, we present the design, functionality and preliminary results of our Computer-Aided-Diagnosis (CAD) system for colonoscopy, ColonoCAD™. ColonoCAD is a complex multi-sensor, multi-data and multi-algorithm image processing system, incorporating data management and visualization, video quality assessment and enhancement, calibration, multiple-view-based reconstruction, feature extraction and classification. As this is a new field in medical image processing, our hope is that this paper will provide the framework to encourage and facilitate collaboration and discussion between industry, academia, and medical practitioners.

  6. Computer-aided-diagnosis (CAD) for colposcopy

    NASA Astrophysics Data System (ADS)

    Lange, Holger; Ferris, Daron G.

    2005-04-01

    Uterine cervical cancer is the second most common cancer among women worldwide. Colposcopy is a diagnostic method, whereby a physician (colposcopist) visually inspects the lower genital tract (cervix, vulva and vagina), with special emphasis on the subjective appearance of metaplastic epithelium comprising the transformation zone on the cervix. Cervical cancer precursor lesions and invasive cancer exhibit certain distinctly abnormal morphologic features. Lesion characteristics such as margin; color or opacity; blood vessel caliber, intercapillary spacing and distribution; and contour are considered by colposcopists to derive a clinical diagnosis. Clinicians and academia have suggested and shown proof of concept that automated image analysis of cervical imagery can be used for cervical cancer screening and diagnosis, with the potential to directly improve women's health care and reduce associated costs. STI Medical Systems is developing a Computer-Aided-Diagnosis (CAD) system for colposcopy -- ColpoCAD. At the heart of ColpoCAD is a complex multi-sensor, multi-data and multi-feature image analysis system. A functional description is presented of the envisioned ColpoCAD system, broken down into: Modality Data Management System, Image Enhancement, Feature Extraction, Reference Database, and Diagnosis and directed Biopsies. The system design and development process of the image analysis system is outlined. The system design provides a modular and open architecture built on feature-based processing. The core feature set includes the visual features used by colposcopists. This feature set can be extended to include new features introduced by new instrument technologies, like fluorescence and impedance, and any other plausible feature that can be extracted from the cervical data. Preliminary results of our research on detecting the three most important features: blood vessel structures, acetowhite regions and lesion margins are shown. As this is a new

  7. CAD-centric Computation Management System for a Virtual TBM

    SciTech Connect

    Ramakanth Munipalli; K.Y. Szema; P.Y. Huang; C.M. Rowell; A.Ying; M. Abdou

    2011-05-03

    HyPerComp Inc., in research collaboration with TEXCEL, has set out to build a Virtual Test Blanket Module (VTBM) computational system to address the need in contemporary fusion research for simulating the integrated behavior of the blanket, divertor and plasma-facing components in a fusion environment. Physical phenomena to be considered in a VTBM include fluid flow, heat transfer, mass transfer, neutronics, structural mechanics and electromagnetics. We seek to integrate well-established (third-party) simulation software in the various disciplines mentioned above. The integrated modeling process will enable user groups to interoperate using a common modeling platform at various stages of the analysis. Since CAD is at the core of the simulation (as opposed to computational meshes, which differ from problem to problem), VTBM will have a well-developed CAD interface governing CAD model editing, cleanup, parameter extraction, model deformation (based on simulation), and CAD-based data interpolation. In Phase-I, we built the CAD hub of the proposed VTBM and demonstrated its use in modeling a liquid breeder blanket module with coupled MHD and structural mechanics using HIMAG and ANSYS. A complete graphical user interface of the VTBM was created, which will form the foundation of any future development. Conservative data interpolation via CAD (as opposed to mesh-based transfer) and the regeneration of CAD models based upon computed deflections are among the other highlights of Phase-I activity.

  8. Computer-aided engineering: the step beyond CAD/CAM

    SciTech Connect

    Hatfield, L.; Trost, S.R.; O'Brien, D.W.; Pomernacki, C.L.

    1981-11-06

    At Lawrence Livermore National Laboratory, the need for increased engineering productivity, coupled with increasingly difficult engineering problems, presents a significant challenge. Advances in computer technology coupled with successful CAD/CAM experiences suggest that computer-aided engineering (CAE) will help meet this challenge. Requirements for a CAE system are developed, a CAE system plan is outlined, and some remaining problem areas are discussed.

  9. Introduction to CAD/Computers. High-Technology Training Module.

    ERIC Educational Resources Information Center

    Lockerby, Hugh

    This learning module for an eighth-grade introductory technology course is designed to help teachers introduce students to computer-assisted design (CAD) in a communications unit on graphics. The module contains a module objective and five specific objectives, a content outline, suggested instructor methodology, student activities, a list of six…

  10. Effectiveness and efficiency of a CAD/CAM orthodontic bracket system.

    PubMed

    Brown, Matthew W; Koroluk, Lorne; Ko, Ching-Chang; Zhang, Kai; Chen, Mengqi; Nguyen, Tung

    2015-12-01

    The first straight-wire appliance was introduced over 40 years ago to increase the consistency and efficiency of orthodontic treatment. More recently, computer-aided design and computer-aided manufacturing (CAD/CAM) technology has been used to create individualized orthodontic appliances. The purpose of this study was to investigate the clinical effectiveness and efficiency of CAD/CAM customized orthodontic appliances compared with direct and indirect bonded stock orthodontic brackets. This retrospective study included 3 treatment groups: group 1 patients were direct bonded with self-ligating appliances, group 2 patients were indirect bonded with self-ligating appliances, and group 3 patients were indirect bonded with CAD/CAM self-ligating appliances. Complete pretreatment and posttreatment records were obtained for all patients. The American Board of Orthodontics (ABO) Discrepancy Index was used to evaluate the pretreatment records, and the posttreatment outcomes were analyzed using the ABO Cast-Radiograph Evaluation. All data collection and analysis were completed by 1 evaluator. There were no statistically significant differences in the ABO Discrepancy Index or the ABO Cast-Radiograph Evaluation among the groups. Treatment times for the 3 groups were significantly different; the CAD/CAM group was the shortest at 13.8 ± 3.4 months, compared with 21.9 ± 5.0 and 16.9 ± 4.1 months for the direct bonded and indirect bonded groups, respectively. The number of treatment appointments for the CAD/CAM group was significantly fewer than for the direct bonded group. The CAD/CAM orthodontic bracket system evaluated in this study was as effective in treatment outcome measures as were standard brackets bonded both directly and indirectly. The CAD/CAM appliance was more efficient in regard to treatment duration, although the decrease in total archwire appointments was minimal. Further investigation is needed to better quantify the clinical benefits of CAD/CAM orthodontic

  11. Intelligent Embedded Instruction for Computer-Aided Design (CAD) systems

    DTIC Science & Technology

    1988-10-01

    ...convenience and according to their individual learning preferences. This benefit is extremely valuable for adult professionals whose needs may be highly...solving problems. Adult designers tend to develop their own personal ways of using CAD software which can optimize a system's use. This ability has been...average age for subjects with more than 1 year of computer experience was 34, whereas those with less than 2 months of experience averaged 41 years old

  12. Role of computer aided detection (CAD) integration: case study with meniscal and articular cartilage CAD applications

    NASA Astrophysics Data System (ADS)

    Safdar, Nabile; Ramakrishna, Bharath; Saiprasad, Ganesh; Siddiqui, Khan; Siegel, Eliot

    2008-03-01

    Knee-related injuries involving the meniscal or articular cartilage are common and require accurate diagnosis and surgical intervention when appropriate. With proper techniques and experience, confidence in detection of meniscal tears and articular cartilage abnormalities can be quite high. However, for radiologists without musculoskeletal training, diagnosis of such abnormalities can be challenging. In this paper, the potential of improving diagnosis through integration of computer-aided detection (CAD) algorithms for automatic detection of meniscal tears and articular cartilage injuries of the knees is studied. An integrated approach in which the results of algorithms evaluating either meniscal tears or articular cartilage injuries provide feedback to each other is believed to improve the diagnostic accuracy of the individual CAD algorithms due to the known association between abnormalities in these distinct anatomic structures. The correlation between meniscal tears and articular cartilage injuries is exploited to improve the final diagnostic results of the individual algorithms. Preliminary results from the integrated application are encouraging and more comprehensive tests are being planned.

  13. Converting Between PLY and Ballistic Research Laboratory-Computer-Aided Design (BRL-CAD) File Formats

    DTIC Science & Technology

    2015-02-01

    Converting Between PLY and Ballistic Research Laboratory–Computer-Aided Design (BRL-CAD) File Formats, by Rishub Jain, US Army Research Laboratory report ARL-CR-0760, February 2015, under contract W911NF-10-2-0076.

  14. Target Impact Detection Algorithm Using Computer-aided Design (CAD) Model Geometry

    DTIC Science & Technology

    2014-09-01

    Technical Report ARMET-TR-13024 (AD-E403 558): Target Impact Detection Algorithm Using Computer-Aided Design (CAD) Model Geometry. This report documents a method and algorithm to export geometry from a three-dimensional, computer-aided design (CAD) model in a format that can be

  15. Computer Aided Detection (CAD) Systems for Mammography and the Use of GRID in Medicine

    NASA Astrophysics Data System (ADS)

    Lauria, Adele

    It is well known that the most effective way to defeat breast cancer is early detection, as surgery and medical therapies are more efficient when the disease is diagnosed at an early stage. The principal diagnostic technique for breast cancer detection is X-ray mammography. Screening programs have been introduced in many European countries to invite women to have periodic radiological breast examinations. In such screenings, radiologists are often required to examine large numbers of mammograms with a double reading, that is, two radiologists examine the images independently and then compare their results. In this way an increment in sensitivity (the rate of correctly identified images with a lesion) of up to 15% is obtained [1,2]. In most radiological centres, however, it is a rarity to have two radiologists examine each report. In recent years different Computer Aided Detection (CAD) systems have been developed as a support to radiologists working in mammography: one may hope that the "second opinion" provided by CAD might represent a lower-cost alternative to improve the diagnosis. At present, four CAD systems have obtained FDA approval in the USA. Studies [3,4] show an increment in sensitivity when CAD systems are used. Freer and Ulissey in 2001 [5] demonstrated that the use of a commercial CAD system (ImageChecker M1000, R2 Technology) increases the number of cancers detected by up to 19.5% with little increment in recall rate. Ciatto et al. [5], in a study simulating a double reading with a commercial CAD system (SecondLook), showed a moderate increment in sensitivity while reducing specificity (the rate of correctly identified images without a lesion). Notwithstanding these optimistic results, there is an ongoing debate to define the advantages of the use of CAD as second reader: the main limits underlined, e.g., by Nishikawa [6] are that retrospective studies are considered much too optimistic and that clinical studies must be performed to demonstrate a statistically

  16. Efficiency of a mathematical model in generating CAD/CAM-partial crowns with natural tooth morphology.

    PubMed

    Ender, Andreas; Mörmann, Werner H; Mehl, Albert

    2011-04-01

    The "biogeneric tooth model" can be used for computer-aided design (CAD) of the occlusal surface of dental restorations. From digital 3D-data, it automatically retrieves a morphology matching the natural surface left after preparation. This study evaluates the potential of this method for generating well-matched and well-adjusted CAD/computer-aided manufacturing (CAM) fabricated partial crowns. Twelve models with partial crown preparations were mounted into an articulator. Partial crowns were designed with the Cerec 3D CAD software based on the biogeneric tooth model (Biog.CAD) and, for control, with a conventional data-based Cerec 3D CAD software (Conv.CAD). The design time was measured, and the naturalness of the morphology was visually assessed. The restorations were milled, cemented on the models, and the vertical discrepancy and the time for final occlusal adjustment were measured. The Biog.CAD software offered a significantly higher naturalness (up to 225 to 11 scores) and was significantly faster by 251 (± 78) s in designing partial crowns (p < 0.01) compared to Conv.CAD software. Vertical discrepancy, 0.52 (± 0.28) mm for Conv.CAD and 0.46 (± 0.19)mm for Biog.CAD, and occlusal adjustment time, 118 (± 132)s for Conv.CAD and 102 (± 77)s for Biog.CAD, did not differ significantly. In conclusion, the biogeneric tooth model is able to generate occlusal morphology of partial crowns in a fully automated process with higher naturalness compared to conventional interactive CAD software.

  17. Analog Computer-Aided Detection (CAD) information can be more effective than binary marks.

    PubMed

    Cunningham, Corbin A; Drew, Trafton; Wolfe, Jeremy M

    2017-02-01

    In socially important visual search tasks, such as baggage screening and diagnostic radiology, experts miss more targets than is desirable. Computer-aided detection (CAD) programs have been developed specifically to improve performance in these professional search tasks. For example, in breast cancer screening, many CAD systems are capable of detecting approximately 90% of breast cancers, with approximately 0.5 false-positive detections per image. Nevertheless, benefits of CAD in clinical settings tend to be small (Birdwell, 2009) or even absent (Meziane et al., 2011; Philpotts, 2009). The marks made by a CAD system can be "binary," giving the same signal to any location where the signal is above some threshold. Alternatively, a CAD system may present an analog signal that reflects the strength of the signal at each location. In the experiments reported here, we compare analog and binary CAD presentations using nonexpert observers and artificial stimuli defined by two noisy signals: a visible color signal and an "invisible" signal that informed our simulated CAD system. We found that analog CAD generally yielded better overall performance than binary CAD. The analog benefit is similar at high and low target prevalence. Our data suggest that the form of the CAD signal can directly influence performance. Analog CAD may allow the computer to be more helpful to the searcher.

  18. Computer-aided design and computer-aided manufacture (CAD/CAM) system for construction of spinal orthosis for patients with adolescent idiopathic scoliosis.

    PubMed

    Wong, M S

    2011-01-01

    Spinal orthoses are commonly prescribed to patients with moderate adolescent idiopathic scoliosis (AIS) for prevention of further curve deterioration. In the conventional manufacturing method, plaster bandages are used to obtain the patient's body contour, and the plaster cast is then rectified manually. With a computer-aided design and computer-aided manufacture (CAD/CAM) system, a series of automated processes from body scanning to digital rectification and milling of the positive model can be performed in a fast and accurate fashion. The purpose of this manuscript is to introduce the application of the CAD/CAM system to the construction of spinal orthoses for patients with AIS. Based on evidence within the literature, the CAD/CAM method can achieve similar clinical outcomes with higher efficiency than the conventional fabrication method. Therefore, the CAD/CAM method should be considered a substitute for the conventional method in the fabrication of spinal orthoses for patients with AIS.

  19. Creation of Anatomically Accurate Computer-Aided Design (CAD) Solid Models from Medical Images

    NASA Technical Reports Server (NTRS)

    Stewart, John E.; Graham, R. Scott; Samareh, Jamshid A.; Oberlander, Eric J.; Broaddus, William C.

    1999-01-01

    Most surgical instrumentation and implants used in the world today are designed with sophisticated Computer-Aided Design (CAD)/Computer-Aided Manufacturing (CAM) software. This software automates the mechanical development of a product from its conceptual design through manufacturing. CAD software also provides a means of manipulating solid models prior to Finite Element Modeling (FEM). Few surgical products are designed in conjunction with accurate CAD models of human anatomy because of the difficulty with which these models are created. We have developed a novel technique that creates anatomically accurate, patient specific CAD solids from medical images in a matter of minutes.

  1. Computer-aided diagnosis (CAD) of subsolid nodules: Evaluation of a commercial CAD system.

    PubMed

    Benzakoun, Joseph; Bommart, Sébastien; Coste, Joël; Chassagnon, Guillaume; Lederlin, Mathieu; Boussouar, Samia; Revel, Marie-Pierre

    2016-10-01

    To evaluate the performance of a commercially available CAD system for automated detection and measurement of subsolid nodules. The CAD system was tested on 50 pure ground-glass and 50 part-solid nodules (median diameter: 17 mm) previously found on standard-dose CT scans in 100 different patients. True nodule detection and the total number of CAD marks were evaluated at different sensitivity settings. The influence of nodule and CT acquisition characteristics was analyzed with logistic regression. Software-measured and manually measured diameters were compared with Spearman and Bland-Altman methods. With sensitivity adjusted for 3-mm nodule detection, 50/100 (50%) subsolid nodules were detected, at the average cost of 17 CAD marks per CT. These figures were respectively 26/100 (26%) and 2 at the 5-mm setting. At the highest sensitivity setting (2-mm nodule detection), the average number of CAD marks per CT was 41, but the nodule detection rate only increased to 54%. Part-solid nodules were better detected than pure ground-glass nodules: 36/50 (72%) versus 14/50 (28%) at the 3-mm setting (p < 0.0001), with no influence of the solid component size. Except for the type (i.e., part-solid or pure ground-glass), no other nodule characteristic influenced the detection rate. High-quality segmentation was obtained for 79 nodules, whose automated measurements correlated well with manual measurements (rho = 0.90 [0.84-0.93]). All part-solid nodules had software-measured attenuation values above -671 Hounsfield units (HU). The detection rate of subsolid nodules by this CAD system was insufficient, but high-quality segmentation was obtained in 79% of cases, allowing automated measurement of size and attenuation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
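
    The Bland-Altman comparison mentioned above reduces to a mean bias and 95% limits of agreement on the paired differences. A minimal sketch with hypothetical diameters (the study's own data are not reproduced):

```python
import numpy as np

def bland_altman(manual, automated):
    """Bland-Altman statistics: mean bias and 95% limits of agreement."""
    diff = np.asarray(automated, float) - np.asarray(manual, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical diameters (mm), for illustration only
bias, loa = bland_altman([17, 12, 20, 9], [16.5, 12.4, 19.2, 9.3])
print(f"bias={bias:.2f} mm, limits of agreement={loa[0]:.2f}..{loa[1]:.2f} mm")
```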

  2. A Multidisciplinary Research Team Approach to Computer-Aided Drafting (CAD) System Selection. Final Report.

    ERIC Educational Resources Information Center

    Franken, Ken; And Others

    A multidisciplinary research team was assembled to review existing computer-aided drafting (CAD) systems for the purpose of enabling staff in the Design Drafting Department at Linn Technical College (Missouri) to select the best system out of the many CAD systems in existence. During the initial stage of the evaluation project, researchers…

  3. Role of Computer Aided Diagnosis (CAD) in the detection of pulmonary nodules on 64 row multi detector computed tomography.

    PubMed

    Prakashini, K; Babu, Satish; Rajgopal, K V; Kokila, K Raja

    2016-01-01

    To determine the overall performance of an existing CAD algorithm with thin-section computed tomography (CT) in the detection of pulmonary nodules and to evaluate detection sensitivity over a varying range of nodule density, size, and location. A cross-sectional prospective study was conducted on 20 patients with 322 suspected nodules who underwent diagnostic chest imaging using 64-row multi-detector CT. The examinations were evaluated on reconstructed images of 1.4 mm thickness and 0.7 mm interval. Detection of pulmonary nodules, initially by a radiologist with 2 years' experience (RAD) and later by CAD lung-nodule software, was assessed. CAD nodule candidates were then accepted or rejected accordingly. Detected nodules were classified based on their size, density, and location. The performance of the RAD and the CAD system was compared with the gold standard, that is, true nodules confirmed by consensus of the senior RAD and CAD together. The overall sensitivity and false-positive (FP) rate of the CAD software were calculated. Of the 322 suspected nodules, 221 were classified as true nodules by the consensus of the senior RAD and CAD together. Of the true nodules, 206 (93.2%) were detected by the RAD and 202 (91.4%) by the CAD. CAD and RAD together picked up more nodules than either CAD or RAD alone. Overall sensitivity for nodule detection with the CAD program was 91.4%, and FP detection per patient was 5.5%. The CAD showed comparatively higher sensitivity for nodules of size 4-10 mm (93.4%) and for nodules in hilar (100%) and central (96.5%) locations when compared to the RAD's performance. CAD performance was high in detecting pulmonary nodules, including small and low-density nodules. CAD, even with a relatively high FP rate, assists and improves the RAD's performance as a second reader, especially for nodules located in the central and hilar regions and for small nodules, while saving the RAD's time.
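
    The headline sensitivity follows directly from the counts reported in the abstract; a one-line check:

```python
def sensitivity(tp, fn):
    """Per-lesion detection sensitivity."""
    return tp / (tp + fn)

# CAD found 202 of the 221 consensus-confirmed true nodules
print(f"{sensitivity(202, 221 - 202):.1%}")   # -> 91.4%, as reported
```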

  4. Integrating CAD modules in a PACS environment using a wide computing infrastructure.

    PubMed

    Suárez-Cuenca, Jorge J; Tilve, Amara; López, Ricardo; Ferro, Gonzalo; Quiles, Javier; Souto, Miguel

    2017-04-01

    The aim of this paper is to describe a project designed to achieve a total integration of different CAD algorithms into the PACS environment by using a wide computing infrastructure. The aim is to build a system for the entire region of Galicia, Spain, to make CAD accessible to multiple hospitals by employing different PACSs and clinical workstations. The new CAD model seeks to connect different devices (CAD systems, acquisition modalities, workstations and PACS) by means of networking based on a platform that will offer different CAD services. This paper describes some aspects related to the health services of the region where the project was developed, CAD algorithms that were either employed or selected for inclusion in the project, and several technical aspects and results. We have built a standard-based platform with which users can request a CAD service and receive the results in their local PACS. The process runs through a web interface that allows sending data to the different CAD services. A DICOM SR object is received with the results of the algorithms stored inside the original study in the proper folder with the original images. As a result, a homogeneous service to the different hospitals of the region will be offered. End users will benefit from a homogeneous workflow and a standardised integration model to request and obtain results from CAD systems in any modality, not dependant on commercial integration models. This new solution will foster the deployment of these technologies in the entire region of Galicia.

  5. Evolution of facility layout requirements and CAD (computer-aided design) system development

    SciTech Connect

    Jones, M.

    1990-06-01

    The overall configuration of the Superconducting Super Collider (SSC) including the infrastructure and land boundary requirements were developed using a computer-aided design (CAD) system. The evolution of the facility layout requirements and the use of the CAD system are discussed. The emphasis has been on minimizing the amount of input required and maximizing the speed by which the output may be obtained. The computer system used to store the data is also described.

  6. Computationally efficient multibody simulations

    NASA Technical Reports Server (NTRS)

    Ramakrishnan, Jayant; Kumar, Manoj

    1994-01-01

    Computationally efficient approaches to the solution of the dynamics of multibody systems are presented in this work. The computational efficiency is derived from both the algorithmic and the implementational standpoint. Order(n) approaches provide a new formulation of the equations of motion, eliminating the assembly and numerical inversion of a system mass matrix required by conventional algorithms. Computational efficiency is also gained in the implementation phase by the symbolic processing and parallel implementation of these equations. Comparison of this algorithm with existing multibody simulation programs illustrates the increased computational efficiency.

  7. Longitudinal Study of Factors Impacting the Implementation of Notebook Computer Based CAD Instruction

    ERIC Educational Resources Information Center

    Goosen, Richard F.

    2009-01-01

    This study provides information for higher education leaders who have conducted or are considering conducting Computer Aided Design (CAD) instruction using student-owned notebook computers. Survey data were collected during the first 8 years of a pilot program requiring engineering technology students at a four-year public university to acquire a notebook…

  8. Role of Computer Aided Diagnosis (CAD) in the detection of pulmonary nodules on 64 row multi detector computed tomography

    PubMed Central

    Prakashini, K; Babu, Satish; Rajgopal, KV; Kokila, K Raja

    2016-01-01

    Aims and Objectives: To determine the overall performance of an existing CAD algorithm with thin-section computed tomography (CT) in the detection of pulmonary nodules and to evaluate detection sensitivity over a varying range of nodule density, size, and location. Materials and Methods: A cross-sectional prospective study was conducted on 20 patients with 322 suspected nodules who underwent diagnostic chest imaging using 64-row multi-detector CT. The examinations were evaluated on reconstructed images of 1.4 mm thickness and 0.7 mm interval. Detection of pulmonary nodules, initially by a radiologist with 2 years' experience (RAD) and later by CAD lung-nodule software, was assessed. CAD nodule candidates were then accepted or rejected accordingly. Detected nodules were classified based on their size, density, and location. The performance of the RAD and the CAD system was compared with the gold standard, that is, true nodules confirmed by consensus of the senior RAD and CAD together. The overall sensitivity and false-positive (FP) rate of the CAD software were calculated. Observations and Results: Of the 322 suspected nodules, 221 were classified as true nodules by the consensus of the senior RAD and CAD together. Of the true nodules, 206 (93.2%) were detected by the RAD and 202 (91.4%) by the CAD. CAD and RAD together picked up more nodules than either CAD or RAD alone. Overall sensitivity for nodule detection with the CAD program was 91.4%, and FP detection per patient was 5.5%. The CAD showed comparatively higher sensitivity for nodules of size 4–10 mm (93.4%) and for nodules in hilar (100%) and central (96.5%) locations when compared to the RAD's performance. Conclusion: CAD performance was high in detecting pulmonary nodules, including small and low-density nodules. CAD, even with a relatively high FP rate, assists and improves the RAD's performance as a second reader, especially for nodules located in the central and hilar regions and for small nodules, while saving the RAD's time. PMID:27578931

  9. Operating efficiency of computers

    NASA Technical Reports Server (NTRS)

    Pac, J.

    1977-01-01

    A method is outlined which can be used to guarantee to users of computing systems a measure of operating efficiency. The monthly utilization coefficient should be equal to or exceed a value agreed on in advance. In addition, the repair time during a computer breakdown should not be longer than a period agreed on in advance.

  10. When and why might a Computer Aided Detection (CAD) system interfere with visual search? An eye-tracking study

    PubMed Central

    Drew, Trafton; Cunningham, Corbin; Wolfe, Jeremy

    2012-01-01

    Rationale and Objectives: Computer Aided Detection (CAD) systems are intended to improve performance. This study investigates how CAD might actually interfere with a visual search task. This is a laboratory study with implications for clinical use of CAD. Methods: 47 naïve observers in two studies were asked to search for a target embedded in 1/f^2.4 noise while we monitored their eye movements. For some observers, a CAD system marked 75% of targets and 10% of distractors, while other observers completed the study without CAD. In Experiment 1, the CAD system's primary function was to tell observers where the target might be. In Experiment 2, CAD provided information about target identity. Results: In Experiment 1, there was a significant enhancement of observer sensitivity in the presence of CAD (t(22)=4.74, p<.001), but there was also a substantial cost. Targets that were not marked by the CAD system were missed more frequently than equivalent targets in No CAD blocks of the experiment (t(22)=7.02, p<.001). Experiment 2 showed no behavioral benefit from CAD, but also no significant cost in sensitivity to unmarked targets (t(22)=0.6, n.s.). Finally, in both experiments, CAD produced reliable changes in eye movements: CAD observers examined a lower total percentage of the search area than the No CAD observers (Ex 1: t(48)=3.05, p<.005; Ex 2: t(50)=7.31, p<.001). Conclusions: CAD signals do not combine with observers' unaided performance in a straightforward manner. CAD can engender a sense of certainty that can lead to incomplete search and elevated chances of missing unmarked stimuli. PMID:22958720

  11. Adjoint Sensitivity Computations for an Embedded-Boundary Cartesian Mesh Method and CAD Geometry

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.

    2006-01-01

    Cartesian-mesh methods are perhaps the most promising approach for addressing the issues of flow solution automation for aerodynamic design problems. In these methods, the discretization of the wetted surface is decoupled from that of the volume mesh. This not only enables fast and robust mesh generation for geometry of arbitrary complexity, but also facilitates access to geometry modeling and manipulation using parametric Computer-Aided Design (CAD) tools. Our goal is to combine the automation capabilities of Cartesian methods with an efficient computation of design sensitivities. We address this issue using the adjoint method, where the computational cost of the design sensitivities, or objective function gradients, is essentially independent of the number of design variables. In previous work, we presented an accurate and efficient algorithm for the solution of the adjoint Euler equations discretized on Cartesian meshes with embedded, cut-cell boundaries. Novel aspects of the algorithm included the computation of surface shape sensitivities for triangulations based on parametric-CAD models and the linearization of the coupling between the surface triangulation and the cut-cells. The objective of the present work is to extend our adjoint formulation to problems involving general shape changes. Central to this development is the computation of volume-mesh sensitivities to obtain a reliable approximation of the objective function gradient. Motivated by the success of mesh-perturbation schemes commonly used in body-fitted unstructured formulations, we propose an approach based on a local linearization of a mesh-perturbation scheme similar to the spring analogy. This approach circumvents most of the difficulties that arise due to non-smooth changes in the cut-cell layer as the boundary shape evolves and provides a consistent approximation to the exact gradient of the discretized objective function. A detailed gradient accuracy study is presented to verify our approach
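
    The central property claimed here (gradient cost essentially independent of the number of design variables) is easiest to see on a linear model problem. The sketch below is generic, not the Cartesian cut-cell formulation: one state solve plus one adjoint solve yields every component of dJ/dx, verified against a finite difference.

```python
import numpy as np

# Minimal discrete-adjoint sketch for a linear state equation A u = B x:
# objective J(u) = g^T u, residual R(u, x) = A u - B x = 0.
# With A^T lam = g, the gradient is dJ/dx = lam^T B: one extra (adjoint)
# solve gives all components, independent of the number of design variables.
rng = np.random.default_rng(0)
n, m = 50, 8                          # states, design variables
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
g = rng.standard_normal(n)

x = rng.standard_normal(m)
u = np.linalg.solve(A, B @ x)         # state solve
lam = np.linalg.solve(A.T, g)         # single adjoint solve
grad = lam @ B                        # all m gradient components at once

# Finite-difference check of one component
eps = 1e-6
x2 = x.copy()
x2[3] += eps
fd = (g @ np.linalg.solve(A, B @ x2) - g @ u) / eps
assert abs(fd - grad[3]) < 1e-4
```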

  12. Web-based computer-aided-diagnosis (CAD) system for bone age assessment (BAA) of children

    NASA Astrophysics Data System (ADS)

    Zhang, Aifeng; Uyeda, Joshua; Tsao, Sinchai; Ma, Kevin; Vachon, Linda A.; Liu, Brent J.; Huang, H. K.

    2008-03-01

    Bone age assessment (BAA) of children is a clinical procedure frequently performed in pediatric radiology to evaluate the stage of skeletal maturation based on a left hand and wrist radiograph. The most commonly used standard, the Greulich and Pyle (G&P) Hand Atlas, was developed 50 years ago and is based exclusively on a Caucasian population. Moreover, inter- and intra-observer discrepancies using this method create a need for an objective and automatic BAA method. A digital hand atlas (DHA) has been collected with 1,400 hand images of normal children of Asian, African American, Caucasian and Hispanic descent. Based on the DHA, a fully automatic, objective computer-aided-diagnosis (CAD) method was developed and adapted to specific populations. To bring the DHA and CAD method to the clinical environment as a useful tool for assisting radiologists to achieve higher accuracy in BAA, a web-based system with a direct connection to a clinical site is designed as a novel clinical implementation approach for online and real-time BAA. The core of the system, a CAD server, receives the image from the clinical site, processes it with the CAD method and, finally, generates a report. A web service publishes the results, and radiologists at the clinical site can review them online within minutes. This prototype can be easily extended to multiple clinical sites and will provide the foundation for broader use of the CAD system for BAA.
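
    The client-server flow (the clinical site posts an image, the CAD server processes it and returns a report) can be sketched with the Python standard library alone. Here assess_bone_age is a hypothetical stand-in for the actual CAD method, and DICOM transport, security, and PACS integration are omitted:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def assess_bone_age(image_bytes):
    # Placeholder for the actual CAD pipeline (hand segmentation, feature
    # extraction, comparison against the digital hand atlas).
    return "bone age report: N/A (stub)"

class CadServer(BaseHTTPRequestHandler):
    """Toy stand-in for the CAD server: a clinical site POSTs a hand
    radiograph and receives a bone-age report in the response body."""
    def do_POST(self):
        image_bytes = self.rfile.read(int(self.headers["Content-Length"]))
        report = assess_bone_age(image_bytes)
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(report.encode())

if __name__ == "__main__":
    HTTPServer(("", 8080), CadServer).serve_forever()
```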

  13. An Analysis of Computer Aided Design (CAD) Packages Used at MSFC for the Recent Initiative to Integrate Engineering Activities

    NASA Technical Reports Server (NTRS)

    Smith, Leigh M.; Parker, Nelson C. (Technical Monitor)

    2002-01-01

    This paper analyzes the use of Computer Aided Design (CAD) packages at NASA's Marshall Space Flight Center (MSFC). It examines the effectiveness of recent efforts to standardize CAD practices across MSFC engineering activities. An assessment of the roles played by management, designers, analysts, and manufacturers in this initiative will be explored. Finally, solutions are presented for better integration of CAD across MSFC in the future.

  14. 3D object optonumerical acquisition methods for CAD/CAM and computer graphics systems

    NASA Astrophysics Data System (ADS)

    Sitnik, Robert; Kujawinska, Malgorzata; Pawlowski, Michal E.; Woznicki, Jerzy M.

    1999-08-01

    The creation of a virtual object for CAD/CAM and computer graphics on the basis of data gathered by full-field optical measurement of a 3D object is presented. The experimental coordinates are alternatively obtained by a combined fringe projection/photogrammetry system or a fringe projection/virtual markers setup. A new and fully automatic procedure which processes the cloud of measured points into a triangular mesh accepted by CAD/CAM and computer graphics systems is presented. Its applicability to various classes of objects is tested, including error analysis of the generated virtual objects. The usefulness of the method is proved by applying the virtual object in a rapid prototyping system and in a computer graphics environment.
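
    The abstract does not spell out the triangulation procedure; a common simplification for single-valued (2.5D) optical measurements is Delaunay triangulation of the (x, y) projection, sketched below with SciPy as an illustrative stand-in:

```python
import numpy as np
from scipy.spatial import Delaunay

def triangulate_cloud(points):
    """Triangulate a measured 2.5D point cloud (x, y, z) via Delaunay
    triangulation of its (x, y) projection; returns vertex index triples."""
    pts = np.asarray(points, float)
    tri = Delaunay(pts[:, :2])          # connectivity from the projection
    return tri.simplices                # (n_triangles, 3) indices into pts

# Toy cloud: a coarse grid with measured heights
xx, yy = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5))
cloud = np.column_stack([xx.ravel(), yy.ravel(), np.sin(xx.ravel() * 3)])
faces = triangulate_cloud(cloud)
print(len(faces), "triangles")          # mesh ready for export (e.g. STL)
```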

  15. Computationally efficient control allocation

    NASA Technical Reports Server (NTRS)

    Durham, Wayne (Inventor)

    2001-01-01

    A computationally efficient method for calculating near-optimal solutions to the three-objective, linear control allocation problem is disclosed. The control allocation problem is that of distributing the effort of redundant control effectors to achieve some desired set of objectives. The problem is deemed linear if control effectiveness is affine with respect to the individual control effectors. The optimal solution is that which exploits the collective maximum capability of the effectors within their individual physical limits. Computational efficiency is measured by the number of floating-point operations required for solution. The method presented returned optimal solutions in more than 90% of the cases examined; non-optimal solutions returned by the method were typically much less than 1% different from optimal, and the errors tended to become smaller than 0.01% as the number of controls was increased. The magnitude of the errors returned by the present method was much smaller than those that resulted from either pseudoinverse or cascaded generalized inverse solutions. The computational complexity of the method presented varied linearly with increasing numbers of controls; the number of required floating-point operations increased from 5.5 to seven times faster than did the minimum-norm solution (the pseudoinverse), and at about the same rate as did the cascaded generalized inverse solution. The computational requirements of the method presented were much better than those of previously described facet-searching methods, which increase in proportion to the square of the number of controls.
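
    For context, the minimum-norm (pseudoinverse) allocator that the method is benchmarked against can be written in a few lines; clipping to effector limits is what sacrifices attainable moments and motivates the near-optimal method. The effectiveness matrix and limits below are illustrative only:

```python
import numpy as np

def pseudoinverse_allocation(B, m_des, u_min, u_max):
    """Baseline allocator: minimum-norm solution of B u = m_des,
    clipped to effector limits (clipping loses attainable moments)."""
    u = np.linalg.pinv(B) @ m_des
    return np.clip(u, u_min, u_max)

# 3 objectives (roll, pitch, yaw moments), 5 redundant effectors
B = np.array([[ 1.0, -1.0, 0.2, -0.2, 0.0],
              [ 0.5,  0.5, 1.0,  1.0, 0.0],
              [ 0.1, -0.1, 0.3, -0.3, 1.0]])
u = pseudoinverse_allocation(B, np.array([0.4, 0.8, 0.1]),
                             u_min=-np.ones(5), u_max=np.ones(5))
print(B @ u)   # ~= the commanded moments when no limit is active
```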

  16. Computer-aided detection (CAD) of breast masses in mammography: combined detection and ensemble classification

    NASA Astrophysics Data System (ADS)

    Choi, Jae Young; Kim, Dae Hoe; Plataniotis, Konstantinos N.; Ro, Yong Man

    2014-07-01

    We propose a novel computer-aided detection (CAD) framework for breast masses in mammography. To increase detection sensitivity for various types of mammographic masses, we propose the combined use of different detection algorithms. In particular, we develop a region-of-interest combination mechanism that integrates detection information gained from unsupervised and supervised detection algorithms. Also, to significantly reduce the number of false-positive (FP) detections, a new ensemble classification algorithm is developed. Extensive experiments have been conducted on a benchmark mammogram database. Results show that our combined detection approach can considerably improve the detection sensitivity with a small loss in FP rate, compared to representative detection algorithms previously developed for mammographic CAD systems. The proposed ensemble classification solution also has a dramatic impact on the reduction of FP detections: as much as 70% (from 15 to 4.5 per image) at a cost of only 4.6% sensitivity loss (from 90.0% to 85.4%). Moreover, our proposed CAD method performs as well as or better than (70.7% and 80.0% at 1.5 and 3.5 FPs per image, respectively) mammography CAD algorithms previously reported in the literature.

  17. Computer-aided detection (CAD) of breast masses in mammography: combined detection and ensemble classification.

    PubMed

    Choi, Jae Young; Kim, Dae Hoe; Plataniotis, Konstantinos N; Ro, Yong Man

    2014-07-21

    We propose a novel computer-aided detection (CAD) framework for breast masses in mammography. To increase detection sensitivity for various types of mammographic masses, we propose the combined use of different detection algorithms. In particular, we develop a region-of-interest combination mechanism that integrates detection information gained from unsupervised and supervised detection algorithms. Also, to significantly reduce the number of false-positive (FP) detections, a new ensemble classification algorithm is developed. Extensive experiments have been conducted on a benchmark mammogram database. Results show that our combined detection approach can considerably improve the detection sensitivity with a small loss in FP rate, compared to representative detection algorithms previously developed for mammographic CAD systems. The proposed ensemble classification solution also has a dramatic impact on the reduction of FP detections: as much as 70% (from 15 to 4.5 per image) at a cost of only 4.6% sensitivity loss (from 90.0% to 85.4%). Moreover, our proposed CAD method performs as well as or better than (70.7% and 80.0% at 1.5 and 3.5 FPs per image, respectively) mammography CAD algorithms previously reported in the literature.

  18. Project Integration Architecture (PIA) and Computational Analysis Programming Interface (CAPRI) for Accessing Geometry Data from CAD Files

    NASA Technical Reports Server (NTRS)

    Benyo, Theresa L.

    2002-01-01

    Integration of a supersonic inlet simulation with a computer-aided design (CAD) system is demonstrated. The integration is performed using the Project Integration Architecture (PIA). PIA provides a common environment for wrapping many types of applications. Accessing geometry data from CAD files is accomplished by incorporating appropriate function calls from the Computational Analysis Programming Interface (CAPRI). CAPRI is a CAD-vendor-neutral programming interface that aids in acquiring geometry data directly from CAD files. The benefits of wrapping a supersonic inlet simulation into PIA using CAPRI are: direct access to geometry data, accurate capture of geometry data, automatic conversion of data units, CAD-vendor-neutral operation, and on-line interactive history capture. This paper describes the PIA and the CAPRI wrapper and details the supersonic inlet simulation demonstration.

  19. Computer-Aided Diagnostic (CAD) Scheme by Use of Contralateral Subtraction Technique

    NASA Astrophysics Data System (ADS)

    Nagashima, Hiroyuki; Harakawa, Tetsumi

    We developed a computer-aided diagnostic (CAD) scheme for detection of subtle image findings of acute cerebral infarction in brain computed tomography (CT) by using a contralateral subtraction technique. In our computerized scheme, the lateral inclination of the image was first corrected automatically by rotation and shifting. The contralateral subtraction image was then derived by subtracting the reversed image from the original image. Initial candidates for acute cerebral infarctions were identified using multiple-thresholding and image filtering techniques. As the first step in removing false-positive candidates, fourteen image features were extracted for each of the initial candidates. Intermediate ("halfway") candidates were detected by applying a rule-based test to these image features. In the second step, five image features were extracted using the overlap between halfway candidates in the slice of interest and the adjacent upper/lower slice images. Finally, acute cerebral infarction candidates were detected by applying a rule-based test to these five image features. The sensitivity of detection for 74 training cases was 97.4% with 3.7 false positives per image. The performance of the CAD scheme on 44 testing cases was similar to that on the training cases. Our CAD scheme using the contralateral subtraction technique can reveal suspected image findings of acute cerebral infarctions in CT images.
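
    The core contralateral subtraction step is a mirror-and-subtract operation followed by thresholding. A minimal sketch assuming the midline has already been aligned (the paper's rotation/shift correction and multiple-thresholding are reduced here to a single threshold):

```python
import numpy as np

def contralateral_subtraction(ct_slice):
    """Subtract the left-right mirrored slice from the original so that
    bilaterally symmetric anatomy cancels and asymmetric (suspect)
    regions stand out. Assumes the midline is already aligned."""
    mirrored = ct_slice[:, ::-1]
    return ct_slice.astype(float) - mirrored

def candidate_mask(diff, threshold):
    """Initial infarction candidates via simple thresholding of the
    subtraction image (stand-in for the paper's multiple thresholds)."""
    return diff > threshold

img = np.random.rand(8, 8)            # placeholder for an aligned CT slice
mask = candidate_mask(contralateral_subtraction(img), threshold=0.5)
```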

  20. Teaching for CAD Expertise

    ERIC Educational Resources Information Center

    Chester, Ivan

    2007-01-01

    CAD (Computer Aided Design) has now become an integral part of Technology Education. The recent introduction of highly sophisticated, low-cost CAD software and CAM hardware capable of running on desktop computers has accelerated this trend. There is now quite widespread introduction of solid modeling CAD software into secondary schools but how…

  2. Computer-assisted orthognathic surgery: feasibility study using multiple CAD/CAM surgical splints.

    PubMed

    Zinser, Max J; Mischkowski, Robert A; Sailer, Hermann F; Zöller, Joachim E

    2012-05-01

    We present a virtual planning protocol incorporating a patented three-surgical-splint technique for orthognathic surgery. The purpose of this investigation was to demonstrate the feasibility and validity of the method in vivo. The protocol consisted of (1) computed tomography (CT) or cone-beam computed tomography (CBCT) maxillofacial imaging, optical scanning of articulated dental study models, segmentation, and fusion; (2) diagnosis and virtual treatment planning; (3) computer-aided design and manufacture (CAD/CAM) of the surgical splints; and (4) intraoperative surgical transfer. The accuracy of the technique was validated by applying the protocol to 8 adult class III patients treated with bimaxillary osteotomies. The virtual plan was compared with the postoperative surgical result using image fusion of the CT/CBCT datasets, analyzing measurements between hard and soft tissue landmarks relative to reference planes. The virtual planning approach showed clinically acceptable precision for the position of the maxilla (<0.23 mm) and condyle (<0.19 mm), marginal precision for the mandible (<0.33 mm), and low precision for the soft tissue (<2.52 mm). Virtual diagnosis, planning, and use of a patented CAD/CAM surgical splint technique provide a reliable method that may offer an alternative to the use of arbitrary splints and 2-dimensional planning. Copyright © 2012 Elsevier Inc. All rights reserved.

  3. Development of simulation tools for numerical investigation and computer-aided design (CAD) of gyrotrons

    NASA Astrophysics Data System (ADS)

    Damyanova, M.; Sabchevski, S.; Zhelyazkov, I.; Vasileva, E.; Balabanova, E.; Dankov, P.; Malinov, P.

    2016-10-01

    As the most powerful CW sources of coherent radiation in the sub-terahertz to terahertz frequency range, gyrotrons have demonstrated remarkable potential for numerous novel and prospective applications in fundamental physical research and technology. Among them are powerful gyrotrons for electron cyclotron resonance heating (ECRH) and current drive (ECCD) of magnetically confined plasma in various reactors for controlled thermonuclear fusion (e.g., tokamaks and most notably ITER), high-frequency gyrotrons for sub-terahertz spectroscopy (for example NMR-DNP, XDMR, study of the hyperfine structure of positronium, etc.), gyrotrons for thermal processing, and so on. Modelling and simulation are indispensable tools for numerical studies, computer-aided design (CAD) and optimization of such sophisticated vacuum tubes (fast-wave devices) operating on a physical principle known as the electron cyclotron resonance maser (ECRM) instability. During recent years, our research team has been involved in the development of physical models and problem-oriented software packages for numerical analysis and CAD of different gyrotrons in the framework of a broad international collaboration. In this paper we present the current status of our simulation tools (the GYROSIM and GYREOSS packages) and illustrate their functionality with results of numerical experiments carried out recently. Finally, we provide an outlook on the envisaged further development of the computer codes and the computational modules belonging to these packages and specialized to different subsystems of the gyrotrons.

  4. Is computer aided detection (CAD) cost effective in screening mammography? A model based on the CADET II study

    PubMed Central

    2011-01-01

    Background: Single reading with computer aided detection (CAD) is an alternative to double reading for detecting cancer in screening mammograms. The aim of this study is to investigate whether the use of a single reader with CAD is more cost-effective than double reading. Methods: Based on data from the CADET II study, the cost-effectiveness of single reading with CAD versus double reading was measured in terms of cost per cancer detected. The cost (pounds (£), year 2007/08) of single reading with CAD versus double reading was estimated assuming a health and social service perspective and a 7-year time horizon. As the equipment cost varies according to the unit size, a separate analysis was conducted for high-, average- and low-volume screening units. One-way sensitivity analyses were performed by varying the reading time, equipment and assessment cost, recall rate, and reader qualification. Results: CAD is cost-increasing for all sizes of screening unit. The introduction of CAD is cost-increasing compared to double reading because the cost of CAD equipment, staff training and the higher assessment cost associated with CAD are greater than the saving in reading costs. The introduction of single reading with CAD, in place of double reading, would produce an additional cost of £227 and £253 per 1,000 women screened in high- and average-volume units respectively. In low-volume screening units, the high cost of purchasing the equipment results in an additional cost of £590 per 1,000 women screened. One-way sensitivity analysis showed that the factors having the greatest effect on the cost-effectiveness of CAD with single reading compared with double reading were the reading time and the reader's professional qualification (radiologist versus advanced practitioner). Conclusions: Without improvements in CAD effectiveness (e.g. a decrease in the recall rate) CAD is unlikely to be a cost-effective alternative to double reading for mammography screening in the UK. This study
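
    The cost comparison reduces to simple arithmetic: single reading with CAD is cost-increasing whenever equipment, training, and extra assessment costs exceed the saving in reading costs. A sketch with purely illustrative figures (not the study's inputs):

```python
def added_cost_per_1000(equipment, training, extra_assessment, reading_saving):
    """Net additional cost per 1,000 women of single reading with CAD
    versus double reading (all inputs are costs per 1,000 screens)."""
    return equipment + training + extra_assessment - reading_saving

# Hypothetical figures only; the study reports net results of
# GBP 227-590 per 1,000 women depending on unit volume.
print(added_cost_per_1000(equipment=400, training=50,
                          extra_assessment=150, reading_saving=350))  # 250
```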

  5. Efficient Universal Blind Quantum Computation

    NASA Astrophysics Data System (ADS)

    Giovannetti, Vittorio; Maccone, Lorenzo; Morimae, Tomoyuki; Rudolph, Terry G.

    2013-12-01

    We give a cheat sensitive protocol for blind universal quantum computation that is efficient in terms of computational and communication resources: it allows one party to perform an arbitrary computation on a second party’s quantum computer without revealing either which computation is performed, or its input and output. The first party’s computational capabilities can be extremely limited: she must only be able to create and measure single-qubit superposition states. The second party is not required to use measurement-based quantum computation. The protocol requires the (optimal) exchange of O(J log₂(N)) single-qubit states, where J is the computational depth and N is the number of qubits needed for the computation.

  6. Efficient universal blind quantum computation.

    PubMed

    Giovannetti, Vittorio; Maccone, Lorenzo; Morimae, Tomoyuki; Rudolph, Terry G

    2013-12-06

    We give a cheat sensitive protocol for blind universal quantum computation that is efficient in terms of computational and communication resources: it allows one party to perform an arbitrary computation on a second party's quantum computer without revealing either which computation is performed, or its input and output. The first party's computational capabilities can be extremely limited: she must only be able to create and measure single-qubit superposition states. The second party is not required to use measurement-based quantum computation. The protocol requires the (optimal) exchange of O(J log₂(N)) single-qubit states, where J is the computational depth and N is the number of qubits needed for the computation.
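
    As a back-of-the-envelope illustration of the quoted O(J log₂(N)) communication cost, the sketch below counts exchanged single-qubit states for a hypothetical circuit size; the constant factor c is an assumption of this sketch, not a figure from the paper.

```python
import math

def exchanged_qubits(depth_j: int, n_qubits: int, c: float = 1.0) -> int:
    """Illustrative count of single-qubit states exchanged by the protocol,
    assuming the stated O(J * log2(N)) scaling with an unknown constant c."""
    return math.ceil(c * depth_j * math.log2(n_qubits))

# Hypothetical example: a depth-1000 computation on 1024 logical qubits.
print(exchanged_qubits(1000, 1024))  # -> 10000 single-qubit states (for c = 1)
```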

  7. Materials for chairside CAD/CAM restorations.

    PubMed

    Fasbinder, Dennis J

    2010-01-01

    Chairside computer-aided design/computer-aided manufacturing (CAD/CAM) systems have become considerably more accurate, efficient, and prevalent as the technology has evolved in the past 25 years. The initial restorative material option for chairside CAD/CAM restorations was limited to ceramic blocks. Restorative material options have multiplied and now include esthetic ceramics, high-strength ceramics, and composite materials for both definitive and temporary restoration applications. This article will review current materials available for chairside CAD/CAM restorations.

  8. The computation of all plane/surface intersections for CAD/CAM applications

    NASA Technical Reports Server (NTRS)

    Hoitsma, D. H., Jr.; Roche, M.

    1984-01-01

    The problem of the computation and display of all intersections of a given plane with a rational bicubic surface patch for use on an interactive CAD/CAM system is examined. The general problem of calculating all intersections of a plane and a surface consisting of rational bicubic patches is reduced to the case of a single generic patch by applying a rejection algorithm which excludes all patches that do not intersect the plane. For each pertinent patch, the algorithm presented computes the intersection curves by locating an initial point on each curve and then computing successive points on the curve using a tolerance step equation. A single cubic equation solver is used to compute the initial curve points lying on the boundary of a surface patch, and the method of resultants as applied to curve theory is used to determine critical points which, in turn, are used to locate initial points that lie on intersection curves in the interior of the patch. Examples are given to illustrate the ability of this algorithm to produce all intersection curves.
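
    The boundary step of the algorithm — finding initial curve points with a single cubic solver — can be sketched as follows. This is a minimal illustration assuming a polynomial (non-rational) bicubic patch and made-up test data; the paper's rational patches and tolerance stepping are not reproduced.

```python
import numpy as np

def edge_intersections(C, n, d):
    """Initial intersection points on the v=0 boundary of a bicubic patch
    P(u,v) = sum_{i,j} C[i,j] * u**i * v**j  (C has shape (4, 4, 3))
    with the plane n.x = d. Along v=0, f(u) = n.P(u,0) - d is a cubic in u,
    so one cubic root-finder yields the boundary curve points."""
    a = np.array([np.dot(n, C[i, 0]) for i in range(4)])  # a[i] multiplies u**i
    a[0] -= d
    roots = np.roots(a[::-1])                # np.roots wants highest power first
    us = [r.real for r in roots if abs(r.imag) < 1e-9 and 0.0 <= r.real <= 1.0]
    return [sum(C[i, 0] * u**i for i in range(4)) for u in us]

rng = np.random.default_rng(0)
C = rng.normal(size=(4, 4, 3))               # random test patch (illustrative)
print(edge_intersections(C, n=np.array([0.0, 0.0, 1.0]), d=0.1))
```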

  10. Improvement of MS (multiple sclerosis) CAD (computer aided diagnosis) performance using C/C++ and computing engine in the graphical processing unit (GPU)

    NASA Astrophysics Data System (ADS)

    Suh, Joohyung; Ma, Kevin; Le, Anh

    2011-03-01

    Multiple sclerosis (MS) is a disease caused by damage to the myelin around axons of the brain and spinal cord. Currently, MR imaging is used for diagnosis, but the process is highly variable and time-consuming since lesion detection and estimation of lesion volume are performed manually. For this reason, we developed a CAD (computer-aided diagnosis) system to assist segmentation of MS lesions and thereby facilitate the physician's diagnosis. The MS CAD system utilizes the k-NN (k-nearest neighbor) algorithm to detect and segment the lesion volume on a per-voxel basis. The prototype MS CAD system was developed in the MATLAB environment and currently consumes a large amount of time to process data. In this paper we present the development of a second version of the MS CAD system, converted to C/C++ in order to take advantage of parallel computation on the GPU (graphical processing unit). With the C/C++ realization and GPU utilization, we expect to cut running time drastically. The paper investigates the conversion from MATLAB to C/C++ and the use of a high-end GPU for parallel processing of data to improve the performance of the MS CAD algorithms.
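
    For orientation, a per-voxel k-NN labeling step of the kind the abstract describes might look like the following NumPy sketch (names and data are illustrative; the actual system's features and GPU kernels are not shown). The fully vectorized distance matrix also hints at why this computation maps well onto a GPU.

```python
import numpy as np

def knn_classify_voxels(feats, train_feats, train_labels, k=5):
    """Label each voxel by majority vote among its k nearest labeled voxels.
    feats: (n_voxels, n_feat); train_feats: (n_train, n_feat);
    train_labels: (n_train,) with 0 = normal tissue, 1 = lesion."""
    # Squared Euclidean distances for every voxel/training pair at once.
    d2 = ((feats[:, None, :] - train_feats[None, :, :]) ** 2).sum(axis=2)
    nearest = np.argpartition(d2, k, axis=1)[:, :k]    # k smallest, unordered
    return (train_labels[nearest].mean(axis=1) >= 0.5).astype(np.uint8)

rng = np.random.default_rng(1)
train = rng.normal(size=(200, 3))
labels = (train[:, 0] > 0).astype(int)                 # toy ground truth
print(knn_classify_voxels(rng.normal(size=(10, 3)), train, labels))
```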

  11. Revision of Electro-Mechanical Drafting Program to Include CAD/D (Computer-Aided Drafting/Design). Final Report.

    ERIC Educational Resources Information Center

    Snyder, Nancy V.

    North Seattle Community College decided to integrate computer-aided design/drafting (CAD/D) into its Electro-Mechanical Drafting Program. This choice necessitated a redefinition of the program through new curriculum and course development. To initiate the project, a new industrial advisory council was formed. Major electronic and recruiting firms…

  12. Computer-assisted detection (CAD) of pulmonary nodules on thoracic CT scans using image processing and classification techniques

    NASA Astrophysics Data System (ADS)

    Dehmeshki, Jamshid; Valdivieso-Casique, Manlio; Siddique, Musib; Dehkordi, Mandana E.; Costello, John; Roddie, Mary

    2004-05-01

    Computer-assisted methods for the detection of pulmonary nodules have become more important as the resolution of CT scanners has increased and as more accurate and reproducible detection is needed. In this paper we describe the results of a CAD system for the detection of lung nodules and compare them against the interpretations of three independent radiologists.

  13. Computer-aided analysis of the influence of digitizing and surfacing on the accuracy in dental CAD/CAM technology.

    PubMed

    Rudolph, Heike; Luthardt, Ralph G; Walter, Michael H

    2007-05-01

    In dentistry, ceramic materials with high fracture resistance are needed for all-ceramic fixed partial dentures (FPDs). The sophisticated processing of advanced ceramics that can be used for such dental restorations demands the application of CAD/CAM technologies. These techniques necessitate digitizing of the prepared teeth or the planned restoration itself and surfacing of the acquired digital data before milling paths can be generated. As precision in fit is crucial for dental restorations, a computer-aided method for the quantitative and qualitative 3D analysis has been developed and applied. Factors influencing the obtainable precision in the application of CAD/CAM techniques were taken into consideration.

  14. Comparison of two software versions of a commercially available computer-aided detection (CAD) system for detecting breast cancer.

    PubMed

    Kim, Seung Ja; Moon, Woo Kyung; Kim, Soo-Yeon; Chang, Jung Min; Kim, Sun Mi; Cho, Nariya

    2010-06-01

    The performance of a computer-aided detection (CAD) system is characterized by its sensitivity and its false-positive mark rate, and both factors may change when the software version of the CAD system is upgraded. The aim of this study was to compare retrospectively the performances of two software versions of a commercially available CAD system when applied to full-field digital mammograms for the detection of breast cancers in a screening group. Versions 3.1 and 8.3 of a CAD software system (ImageChecker, R2 Technology) were applied to the full-field digital mammograms of 130 women (age range 36-80 years, mean age 53 years) with 130 breast cancers detected by screening. The overall sensitivities of the version 3.1 and 8.3 CAD systems were 92.3% (120 of 130) and 96.2% (125 of 130) (P=0.025), respectively; sensitivities for masses were 78.3% (36 of 46) and 89.1% (41 of 46) (P=0.024), and for microcalcifications 100% (84 of 84) for both versions. Version 8.3 correctly marked five lesions of invasive ductal carcinoma that were missed by version 3.1. Average numbers of false-positive marks per image were 0.38 (0.15 for calcifications, 0.23 for masses) for version 3.1 and 0.46 (0.13 for calcifications, 0.33 for masses) for version 8.3 (P=0.1420). The newer version 8.3 of the CAD system showed better overall sensitivity for the detection of breast cancer than version 3.1, owing to its improved sensitivity for masses when applied to full-field digital mammograms.
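
    The reported percentages follow directly from the stated counts; a few lines of arithmetic reproduce them:

```python
def sensitivity_pct(detected: int, total: int) -> float:
    return 100.0 * detected / total

# Counts quoted in the abstract (130 screening-detected cancers, 46 masses).
print(f"v3.1 overall: {sensitivity_pct(120, 130):.1f}%")  # 92.3%
print(f"v8.3 overall: {sensitivity_pct(125, 130):.1f}%")  # 96.2%
print(f"v3.1 masses:  {sensitivity_pct(36, 46):.1f}%")    # 78.3%
print(f"v8.3 masses:  {sensitivity_pct(41, 46):.1f}%")    # 89.1%
```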

  15. Report on the 2nd European conference on computer-aided design (CAD) in small- and medium-size industries (MICAD 82)

    SciTech Connect

    Magnuson, W.G. Jr.

    1982-10-01

    A summary is presented of the 2nd European conference on computer aided design (CAD) in small- and medium-size industries (MICAD82) held in Paris, France, September 21-23, 1982. The conference emphasized applications of CAD in industries with limited investment resources and which are forced to innovate in order to sustain competition.

  16. Development of problem-oriented software packages for numerical studies and computer-aided design (CAD) of gyrotrons

    NASA Astrophysics Data System (ADS)

    Damyanova, M.; Sabchevski, S.; Zhelyazkov, I.; Vasileva, E.; Balabanova, E.; Dankov, P.; Malinov, P.

    2016-03-01

    Gyrotrons are the most powerful sources of coherent CW (continuous wave) radiation in the frequency range situated between the long-wavelength edge of infrared light (the far-infrared region) and the microwaves, i.e., in the region of the electromagnetic spectrum usually called the THz gap (or T-gap), since the output power of other devices (e.g., solid-state oscillators) operating in this interval is several orders of magnitude lower. In recent years, the unique capabilities of sub-THz and THz gyrotrons have opened the road to many novel and future prospective applications in various physical studies and advanced high-power terahertz technologies. In this paper, we present the current status and functionality of the problem-oriented software packages (most notably GYROSIM and GYREOSS) used for numerical studies, computer-aided design (CAD) and optimization of gyrotrons for diverse applications. They consist of a hierarchy of codes specialized to the modelling and simulation of different gyrotron subsystems (EOS, resonant cavity, etc.) and are based on adequate physical models, efficient numerical methods and algorithms.
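
    For context, a gyrotron's operating frequency is set by the relativistic electron cyclotron frequency f_c = eB/(2πγm_e) (times the chosen harmonic number). The snippet below evaluates it for illustrative field and beam values; the numbers are not taken from the paper.

```python
import math

E = 1.602176634e-19      # elementary charge, C
M_E = 9.1093837015e-31   # electron mass, kg
C_LIGHT = 299792458.0    # speed of light, m/s

def cyclotron_freq_ghz(b_tesla: float, beam_kev: float = 0.0) -> float:
    """Relativistic electron cyclotron frequency f_c = e*B / (2*pi*gamma*m_e)."""
    gamma = 1.0 + beam_kev * 1e3 * E / (M_E * C_LIGHT**2)
    return E * b_tesla / (2.0 * math.pi * gamma * M_E) / 1e9

print(f"{cyclotron_freq_ghz(1.0):.1f} GHz at 1 T (non-relativistic)")        # ~28.0
print(f"{cyclotron_freq_ghz(10.0, beam_kev=80.0):.0f} GHz at 10 T, 80 keV")  # ~242
```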

  17. Establishment of a Computer-Aided Design (CAD)/Computer-Aided Manufacturing (CAM) Process for the Production of Cold Forged Gears

    DTIC Science & Technology

    1984-01-01

    Computer Aided Design/Manufacturing (CAD/CAM), spur and helical gears, cold forging...for cold forging spur and helical gears. The geometry of the spur and helical gears has been obtained from the kinematics of the hobbing/shaper machines...or shaping) to cut the electrode for a helical gear die were then computed using the corrections described above. A computer program called GEARDI

  18. Web-Based Architecture to Enable Compute-Intensive CAD Tools and Multi-user Synchronization in Teleradiology

    NASA Astrophysics Data System (ADS)

    Mehta, Neville; Kompalli, Suryaprakash; Chaudhary, Vipin

    Teleradiology is the electronic transmission of radiological patient images, such as x-rays, CT, or MR, across multiple locations. The goal could be interpretation, consultation, or medical record keeping. Information technology solutions have enabled electronic records, and their associated benefits are evident in health care today. However, salient aspects of collaborative interfaces and computer-assisted diagnostic (CAD) tools are yet to be integrated into workflow designs. The Computer Assisted Diagnostics and Interventions (CADI) group at the University at Buffalo has developed an architecture that facilitates web-enabled use of CAD tools, along with the novel concept of synchronized collaboration. The architecture can support multiple teleradiology applications, and case studies are presented here.

  19. Improved Foundry Castings Utilizing CAD/CAM (Computer Aided Design/ Computer Aided Manufacture). Volume 1. Overview

    DTIC Science & Technology

    1988-06-30

    several organizations. Members of the project staffs at the University of Pittsburgh, Battelle Columbus Laboratories, Blaw-Knox Foundry and Mill...with the University of Pittsburgh, James Echlin, Blaw-Knox, and A. Roulet, General Dynamics. Computing facilities on the DEC 10 system were made...Akgerman, A. Badawy, C. Wilson, and T. Altan. The project staff at Blaw-Knox included Messrs. R. Nariman, K. Fahey, and S. Miller. Mr. W. Northey

  20. CAD/CAM/CNC.

    ERIC Educational Resources Information Center

    Domermuth, Dave; And Others

    1996-01-01

    Includes "Quick Start CNC (computer numerical control) with a Vacuum Filter and Laminated Plastic" (Domermuth); "School and Industry Cooperate for Mutual Benefit" (Buckler); and "CAD (computer-assisted drafting) Careers--What Professionals Have to Say" (Skinner). (JOW)

  2. TGeoCad: an Interface between ROOT and CAD Systems

    NASA Astrophysics Data System (ADS)

    Luzzi, C.; Carminati, F.

    2014-06-01

    In the simulation of high-energy physics experiments, a very high precision in the description of the detector geometry is essential to achieve the required performance. The physicists in charge of the Monte Carlo simulation of the detector need to collaborate efficiently with the engineers working on the mechanical design of the detector. Often, this collaboration is made hard by the usage of different and incompatible software. ROOT is an object-oriented C++ framework used by physicists for storing, analyzing and simulating data produced by high-energy physics experiments, while CAD (Computer-Aided Design) software is used for mechanical design in the engineering field. The necessity to improve the level of communication between physicists and engineers led to the implementation of an interface between the ROOT geometrical modeler, used by the virtual Monte Carlo simulation software, and CAD systems. In this paper we describe the design and implementation of the TGeoCad interface that has been developed to enable the use of ROOT geometrical models in several CAD systems. To achieve this goal, the ROOT geometry description is converted into the STEP file format (ISO 10303), which can be imported and used by many CAD systems.

  3. Shape optimization and CAD

    NASA Technical Reports Server (NTRS)

    Rasmussen, John

    1990-01-01

    Structural optimization has attracted attention since the days of Galileo. Olhoff and Taylor have produced an excellent overview of the classical research within this field. However, interest in structural optimization has increased greatly during the last decade due to the advent of reliable general numerical analysis methods and the computer power necessary to use them efficiently. This has created the possibility of developing general numerical systems for shape optimization. Several authors, e.g., Esping; Braibant & Fleury; Bennet & Botkin; Botkin, Yang, and Bennet; and Stanton, have published practical and successful applications of general optimization systems. Ding and Homlein have produced extensive overviews of available systems. Furthermore, a number of commercial optimization systems based on well-established finite element codes have been introduced. Systems like ANSYS, IDEAS, OASIS, and NISAOPT are widely known examples. In parallel to this development, the technology of computer-aided design (CAD) has gained a large influence on the design process in mechanical engineering. CAD technology has already lived through a rapid development driven by the drastically growing capabilities of digital computers. However, the systems of today are still considered to be only the first generation of a long line of computer-integrated manufacturing (CIM) systems. The systems to come will offer an integrated environment for the design, analysis, and fabrication of products of almost any character. Thus, the CAD system can be regarded as a database for geometrical information equipped with a number of tools to help the user in the design process. Among these tools are facilities for structural analysis and optimization, as well as standard CAD features like drawing, modeling, and visualization tools. The state of the art of structural optimization is that a large amount of mathematical and mechanical techniques are

  5. Improving Computational Efficiency of VAST

    DTIC Science & Technology

    2013-09-01

    Improving Computational Efficiency of VAST. Lei Jiang and Tom Macadam, Martec Limited. Prepared by: Martec Limited, 400...1800 Brunswick Street, Halifax, Nova Scotia B3J 3J8, Canada. Contract Project Manager: Lei Jiang, 902-425-5101 Ext 228. Contract Number: W7707... Principal author: Lei Jiang, Senior Research Engineer.

  6. CAD/CAE Integration Enhanced by New CAD Services Standard

    NASA Technical Reports Server (NTRS)

    Claus, Russell W.

    2002-01-01

    A Government-industry team led by the NASA Glenn Research Center has developed a computer interface standard for accessing data from computer-aided design (CAD) systems. The Object Management Group, an international computer standards organization, has adopted this CAD services standard. The new standard allows software (e.g., computer-aided engineering (CAE) and computer-aided manufacturing software) to access multiple CAD systems through one programming interface. The interface is built on top of a distributed computing system called the Common Object Request Broker Architecture (CORBA). CORBA allows the CAD services software to operate in a distributed, heterogeneous computing environment.

  7. CAD: Designs on Business.

    ERIC Educational Resources Information Center

    Milburn, Ken

    1988-01-01

    Provides a general review of the field of Computer-Aided Design Software including specific reviews of "Autosketch,""Generic CADD,""Drafix 1 Plus,""FastCAD," and "Autocad Release 9." Brief articles include "Blueprint for Generation,""CAD for Every Department,""Ideas…

  9. Computer-aided drafting and design (CAD) in the Plant Engineering organization at Sandia National Laboratories

    SciTech Connect

    Hall, J.T.; Knott, D.D.; Moore, M.B.

    1983-03-01

    The Plant Engineering organization at Sandia National Laboratories, Albuquerque (SNLA), has been working with a CAD system for approximately 2 1/2 years and finds itself at a crossroads. CAD has not been a panacea for workload problems to date, and Plant Engineering commissioned a study to determine why and to make recommendations to management on what steps might be taken in the future. Recommendations range from making the current system more productive to enhancing it significantly with newer and more powerful graphics technology.

  10. Computer Graphic Design Using Auto-CAD and Plug Nozzle Research

    NASA Technical Reports Server (NTRS)

    Rogers, Rayna C.

    2004-01-01

    The purpose of creating computer-generated images varies widely. They can be used for computational fluid dynamics (CFD) or as a blueprint for designing parts. The schematic that I will be working on this summer will be used to create nozzles that are part of a larger system. At this phase in the project, the nozzles needed for the systems have been fabricated. One part of my mission is to create both three-dimensional and two-dimensional models of the nozzles in Auto-CAD 2002. The research on plug nozzles will allow me to have a better understanding of how they provide the thrust needed for a missile to take off. NASA and the United States military are working together to develop a new design concept. On most missiles a convergent-divergent nozzle is used to create thrust. However, the two are looking into different concepts for the nozzle. The standard convergent-divergent nozzle forces a mixture of combustible fluids and air through an area that is smaller than the one where the combination was mixed. Once it passes through this smaller area, known as A8, it exits the end of the nozzle through a larger area, A9. This creates enough thrust for the mechanism, whether it is an F-18 fighter jet or a missile. The A9 section of the convergent-divergent nozzle has a mechanism that controls how large A9 can be. This is needed because the pressure of the air coming out of the nozzle must be equal to the ambient pressure; otherwise there will be a loss of performance in the machine. The plug nozzle, however, does not need a variable A9. When the air flow comes out, it automatically senses the ambient pressure and adjusts accordingly. The objective of this design is to create a plug nozzle that is not as mechanically complicated as its counterpart, the convergent-divergent nozzle.

  12. Assessment of the Incremental Benefit of Computer-Aided Detection (CAD) for Interpretation of CT Colonography by Experienced and Inexperienced Readers.

    PubMed

    Boone, Darren; Mallett, Susan; McQuillan, Justine; Taylor, Stuart A; Altman, Douglas G; Halligan, Steve

    2015-01-01

    To quantify the incremental benefit of computer-assisted-detection (CAD) for polyps, for inexperienced readers versus experienced readers of CT colonography. 10 inexperienced and 16 experienced radiologists interpreted 102 colonography studies unassisted and with CAD utilised in a concurrent paradigm. They indicated any polyps detected on a study sheet. Readers' interpretations were compared against a ground-truth reference standard: 46 studies were normal and 56 had at least one polyp (132 polyps in total). The primary study outcome was the difference in CAD net benefit (a combination of change in sensitivity and change in specificity with CAD, weighted towards sensitivity) for detection of patients with polyps. Inexperienced readers' per-patient sensitivity rose from 39.1% to 53.2% with CAD and specificity fell from 94.1% to 88.0%, both statistically significant. Experienced readers' sensitivity rose from 57.5% to 62.1% and specificity fell from 91.0% to 88.3%, both non-significant. Net benefit with CAD assistance was significant for inexperienced readers but not for experienced readers: 11.2% (95%CI 3.1% to 18.9%) versus 3.2% (95%CI -1.9% to 8.3%) respectively. Concurrent CAD resulted in a significant net benefit when used by inexperienced readers to identify patients with polyps by CT colonography. The net benefit was nearly four times the magnitude of that observed for experienced readers. Experienced readers did not benefit significantly from concurrent CAD.
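
    The net-benefit figure combines the sensitivity gain with a down-weighted specificity loss. The exact weighting used in the study is not given in the abstract, so the weight below is a free parameter of this sketch:

```python
def net_benefit(delta_sens: float, delta_spec: float, weight: float) -> float:
    """Change in sensitivity plus a down-weighted change in specificity
    ('weighted towards sensitivity'); `weight` is an assumption here."""
    return delta_sens + weight * delta_spec

# Inexperienced readers: sensitivity 39.1 -> 53.2%, specificity 94.1 -> 88.0%.
nb = net_benefit(53.2 - 39.1, 88.0 - 94.1, weight=0.5)
print(f"{nb:.1f}%")  # ~11.1%, close to the reported 11.2% net benefit
```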

  13. Surface analysis of study models generated from OrthoCAD and cone-beam computed tomography imaging.

    PubMed

    Lightheart, Kurtis G; English, Jeryl D; Kau, Chung H; Akyalcin, Sercan; Bussa, Harry I; McGrory, Kathleen R; McGrory, Kevin J

    2012-06-01

    The purpose of this research was to determine the accuracy of digital models generated by cone-beam computed tomography and compare it with that of OrthoCAD models (Cadent, Carlstadt, NJ) for orthodontic diagnosis and treatment planning by using surface area analysis. Two sets of maxillary and mandibular digital models of 30 subjects were obtained. The models were made from impressions scanned with OrthoCAD and by conversion of related cone-beam computed tomography files. Each patient's matched pairs of maxillary and mandibular models were superimposed by using a software program and a best-fit algorithm; surface-to-surface analysis was then performed. The average linear differences between the 2 files at all points on the surfaces were measured, and tolerance levels of 0.25, 0.5, 0.75, 1.0, 1.25, and 1.5 mm were set to determine the surface correlation amounts between the 2 files. Additionally, 6 linear measurements from predetermined landmarks were also measured and analyzed. The average maxillary model linear difference was 0.28 to 0.60 mm, whereas the average mandibular model linear difference ranged between 0.34 and 0.61 mm. Greater than a 90% surface correlation was obtained on average at 1.00 mm in the maxillary models and at 1.25 mm in the mandibular models. The mean differences obtained from the linear measurements of the maxillary and mandibular models were 0.071 and 0.018 mm, respectively. Surface-to-surface analysis of OrthoCAD and digital models generated by cone-beam computed tomography pointed to a fair overlap between the protocols. The accuracy of digital models generated by cone-beam computed tomography is adequate for initial diagnosis and treatment planning in orthodontics. Copyright © 2012 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.

  14. The VE/CAD synergism

    SciTech Connect

    Sperling, R.B.

    1993-03-19

    Value Engineering (VE) and Computer-Aided Design (CAD) can be used synergistically to reduce costs and improve facility designs. The cost and schedule impacts of implementing alternative design ideas developed by VE teams can be greatly reduced when the drawings have been produced with interactive CAD systems. To better understand the interrelationship between VE and CAD, the fundamentals of the VE process are explained; an example of a VE proposal is described; and the way CAD drawings facilitated its implementation is illustrated.

  15. IGIS (Interactive Geologic Interpretation System) computer-aided photogeologic mapping with image processing, graphics and CAD/CAM capabilities

    SciTech Connect

    McGuffie, B.A.; Johnson, L.F.; Alley, R.E.; Lang, H.R. )

    1989-10-01

    Advances in computer technology are changing the way geologists integrate and use data. Although many geoscience disciplines are absolutely dependent upon computer processing, photogeological and map-interpretation computer procedures are just now being developed. Historically, geologists collected data in the field and mapped manually on a topographic map or aerial photographic base. New software called the Interactive Geologic Interpretation System (IGIS) is being developed at the Jet Propulsion Laboratory (JPL) within the National Aeronautics and Space Administration (NASA)-funded Multispectral Analysis of Sedimentary Basins Project. To complement conventional geological mapping techniques, Landsat Thematic Mapper (TM) or other digital remote sensing image data and co-registered digital elevation data are combined using computer imaging, graphics, and CAD/CAM techniques to provide tools for photogeologic interpretation, strike/dip determination, cross section construction, stratigraphic section measurement, topographic slope measurement, terrain profile generation, rotatable 3-D block diagram generation, and seismic analysis.

  16. A computational investigation on radiation damage and activation of structural material for C-ADS

    NASA Astrophysics Data System (ADS)

    Liang, Tairan; Shen, Fei; Yin, Wen; Yu, Quanzhi; Liang, Tianjiao

    2015-11-01

    The C-ADS (China Accelerator-Driven Subcritical System) project, which aims at transmuting high-level radiotoxic waste (HLW) and power generation, is now in the research and development stage. In this paper, a simplified ADS model is set up based on the IAEA Th-ADS benchmark calculation model, then the radiation damage as well as the residual radioactivity of the structural material are estimated using the Monte Carlo simulation method. The peak displacement production rate, gas productions, activity and residual dose rate of the structural components like beam window and outer casing of subcritical reactor core are calculated. The calculation methods and the corresponding results provide the basic reference for making reasonable predictions for the lifetime and maintenance operations of the structural material of C-ADS.

  17. Modification to the Monte Carlo N-Particle (MCNP) Visual Editor (MCNPVised) to Read in Computer Aided Design (CAD) Files

    SciTech Connect

    Randolph Schwarz; Leland L. Carter; Alysia Schwarz

    2005-08-23

    Monte Carlo N-Particle Transport Code (MCNP) is the code of choice for doing complex neutron/photon/electron transport calculations for the nuclear industry and research institutions. The Visual Editor for Monte Carlo N-Particle is internationally recognized as the best code for visually creating and graphically displaying input files for MCNP. The work performed in this grant was used to enhance the capabilities of the MCNP Visual Editor to allow it to read in both 2D and 3D Computer Aided Design (CAD) files, allowing the user to electronically generate a valid MCNP input geometry.

  18. Implementation and display of Computer Aided Design (CAD) models in Monte Carlo radiation transport and shielding applications

    SciTech Connect

    Burns, T.J.

    1994-03-01

    An Xwindow application capable of importing geometric information directly from two Computer Aided Design (CAD) based formats for use in radiation transport and shielding analyses is being developed at ORNL. The application permits the user to graphically view the geometric models imported from the two formats for verification and debugging. Previous models, specifically formatted for the radiation transport and shielding codes can also be imported. Required extensions to the existing combinatorial geometry analysis routines are discussed. Examples illustrating the various options and features which will be implemented in the application are presented. The use of the application as a visualization tool for the output of the radiation transport codes is also discussed.
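
    The combinatorial geometry the abstract refers to builds cells as boolean combinations of primitive inside/outside tests. A toy version of that membership test (with made-up primitives, not ORNL's code) looks like this:

```python
import numpy as np

def inside_sphere(p, center, r):
    return np.linalg.norm(p - center) <= r

def inside_halfspace(p, normal, d):      # points with normal . p <= d
    return np.dot(normal, p) <= d

def in_cell(p):
    """Example cell: (sphere A union sphere B) intersected with z <= 0."""
    a = inside_sphere(p, np.array([0.0, 0.0, 0.0]), 1.0)
    b = inside_sphere(p, np.array([1.0, 0.0, 0.0]), 1.0)
    cap = inside_halfspace(p, np.array([0.0, 0.0, 1.0]), 0.0)
    return (a or b) and cap

print(in_cell(np.array([0.5, 0.0, -0.1])))  # True: in both spheres, z <= 0
print(in_cell(np.array([0.5, 0.0, 0.1])))   # False: cut away by the half-space
```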

  19. Efficient computation of optimal actions.

    PubMed

    Todorov, Emanuel

    2009-07-14

    Optimal choice of actions is a fundamental problem relevant to fields as diverse as neuroscience, psychology, economics, computer science, and control engineering. Despite this broad relevance the abstract setting is similar: we have an agent choosing actions over time, an uncertain dynamical system whose state is affected by those actions, and a performance criterion that the agent seeks to optimize. Solving problems of this kind remains hard, in part, because of overly generic formulations. Here, we propose a more structured formulation that greatly simplifies the construction of optimal control laws in both discrete and continuous domains. An exhaustive search over actions is avoided and the problem becomes linear. This yields algorithms that outperform Dynamic Programming and Reinforcement Learning, and thereby solve traditional problems more efficiently. Our framework also enables computations that were not possible before: composing optimal control laws by mixing primitives, applying deterministic methods to stochastic systems, quantifying the benefits of error tolerance, and inferring goals from behavioral data via convex optimization. Development of a general class of easily solvable problems tends to accelerate progress--as linear systems theory has done, for example. Our framework may have similar impact in fields where optimal choice of actions is relevant.

  20. Efficient computation of optimal actions

    PubMed Central

    Todorov, Emanuel

    2009-01-01

    Optimal choice of actions is a fundamental problem relevant to fields as diverse as neuroscience, psychology, economics, computer science, and control engineering. Despite this broad relevance the abstract setting is similar: we have an agent choosing actions over time, an uncertain dynamical system whose state is affected by those actions, and a performance criterion that the agent seeks to optimize. Solving problems of this kind remains hard, in part, because of overly generic formulations. Here, we propose a more structured formulation that greatly simplifies the construction of optimal control laws in both discrete and continuous domains. An exhaustive search over actions is avoided and the problem becomes linear. This yields algorithms that outperform Dynamic Programming and Reinforcement Learning, and thereby solve traditional problems more efficiently. Our framework also enables computations that were not possible before: composing optimal control laws by mixing primitives, applying deterministic methods to stochastic systems, quantifying the benefits of error tolerance, and inferring goals from behavioral data via convex optimization. Development of a general class of easily solvable problems tends to accelerate progress—as linear systems theory has done, for example. Our framework may have similar impact in fields where optimal choice of actions is relevant. PMID:19574462
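
    The structural trick the two records above allude to — the Bellman equation becoming linear in an exponentiated value function — can be sketched for a first-exit problem as follows. This is a minimal toy example in the spirit of the framework, with made-up passive dynamics and costs, not the paper's reference implementation:

```python
import numpy as np

def desirability(P, q, goal, tol=1e-12):
    """First-exit linearly-solvable MDP (sketch). P: (n, n) passive transition
    probabilities (rows sum to 1); q: (n,) state costs; goal: boolean mask of
    absorbing goal states. The optimal cost-to-go is v = -log z, where z
    solves the *linear* fixed point z = exp(-q) * (P @ z) off the goal."""
    z = np.ones(len(q))
    z[goal] = np.exp(-q[goal])
    while True:
        z_new = np.exp(-q) * (P @ z)
        z_new[goal] = np.exp(-q[goal])
        if np.max(np.abs(z_new - z)) < tol:
            return z_new
        z = z_new

# Tiny chain: states 0..3, absorbing goal at state 3.
P = np.array([[.5, .5, 0, 0], [.25, .5, .25, 0], [0, .25, .5, .25], [0, 0, 0, 1]])
q = np.array([1.0, 1.0, 1.0, 0.0])
z = desirability(P, q, goal=np.array([False, False, False, True]))
print(-np.log(z))  # cost-to-go; optimal policy reweights: u*(j|i) ~ P[i, j] * z[j]
```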

  1. Computationally efficient lossless image coder

    NASA Astrophysics Data System (ADS)

    Sriram, Parthasarathy; Sudharsanan, Subramania I.

    1999-12-01

    Lossless coding of image data has been a very active area of research in the fields of medical imaging, remote sensing and document processing/delivery. While several lossless image coders such as JPEG and JBIG have been in existence for a while, their compression performance for encoding continuous-tone images was rather poor. Recently, several state-of-the-art techniques like CALIC and LOCO were introduced with significant improvements in compression performance over traditional coders. However, these coders are very difficult to implement using dedicated hardware or in software using media processors due to the inherently serial nature of their encoding process. In this work, we propose a lossless image coding technique with a compression performance that is very close to that of CALIC and LOCO while being very efficient to implement both in hardware and software. Comparisons for encoding the JPEG-2000 image set show that the compression performance of the proposed coder is within 2-5% of the more complex coders while being computationally very efficient. In addition, the encoder is shown to be parallelizable at a hierarchy of levels. The execution time of the proposed encoder is smaller than that of LOCO, while the decoder is 2-3 times faster than the LOCO decoder.
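
    For a flavor of the context-prediction stage such coders build on, here is the median edge detector (MED) predictor from LOCO-I/JPEG-LS; the raster-order dependence of each prediction on already-decoded neighbors is exactly the serial bottleneck the abstract mentions. The test image is illustrative:

```python
import numpy as np

def med_residuals(img):
    """MED prediction residuals: for west/north/north-west neighbors a, b, c,
    predict min(a,b) if c >= max(a,b); max(a,b) if c <= min(a,b); else a+b-c."""
    img = img.astype(np.int32)
    pred = np.zeros_like(img)
    for y in range(img.shape[0]):          # serial raster scan, as in LOCO
        for x in range(img.shape[1]):
            a = img[y, x - 1] if x > 0 else 0
            b = img[y - 1, x] if y > 0 else 0
            c = img[y - 1, x - 1] if x > 0 and y > 0 else 0
            if c >= max(a, b):
                pred[y, x] = min(a, b)
            elif c <= min(a, b):
                pred[y, x] = max(a, b)
            else:
                pred[y, x] = a + b - c
    return img - pred  # these residuals are what the entropy coder sees

ramp = np.add.outer(np.arange(8), np.arange(8)) * 8   # smooth test gradient
print(np.abs(med_residuals(ramp)).max())  # small, repetitive residuals
```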

  2. Computer simulation of Sealed NiCad Cells for space application

    NASA Astrophysics Data System (ADS)

    Owen, John R.; Hargreaves, Neil J.; Hay, John L.

    A computer simulation has been developed for the behavior of sealed nickel-cadmium cells operating in a low earth orbit satellite. The model incorporates elements that account for energy losses due to electrolyte resistance, electrode overpotentials and charge leakage. A thermal sub-model simulates the cell temperature and automatically sets the loss elements to fit data collected in tests at variable temperature and charging parameters. The complete model provides the recharge efficiency, power loss and state of charge as a function of the solar power input and load.

  3. Assessment of updated CAD without a new reader study: effect of calibration of computer output on the computer-aided reader performance in CADx

    NASA Astrophysics Data System (ADS)

    Chen, Weijie; Petrick, Nicholas; Sahiner, Berkman

    2011-03-01

    It is very resource-demanding to assess each new version of a CAD system through a new reader study. We conjecture that the aided reader performance on a new version can be predicted by using certain characteristics of the computer output and the reader study conducted when the CAD system was initially introduced. This would likely reduce the need for additional reader studies. However, investigations are needed to develop a sound scientific foundation to test this conjecture. In this work, we consider a CADx system that outputs a disease score to aid the physician in making a diagnostic decision on a located lesion. Our major contribution is to show that calibration, reflected as a change in scale, is a characteristic of the computer output that needs to be considered in order to predict the aided reader performance in a new CADx version without a reader study. We used a bivariate bi-beta distribution to model the joint distribution of the decision variable underlying the reader without aid and the decision variable underlying the version 1 computer output in the initial version. We then applied a monotonic transformation to the computer output to simulate the computer output in a new version, i.e., the scores in the two versions differ only in calibration (specifically a change in scale). By further modeling certain mechanisms that the human reader may use for combining the computer output and the reader-alone scores, we computed the aided reader performance in terms of AUC for the new version of the CADx system. Our results show that the aided reader performance could depend on the degree of calibration difference between the two CAD system outputs. We conclude that for the purpose of predicting the aided reader performance of a new version of the CADx system, ROC performance (or any other rank-based metric) of the stand-alone CADx system may not be sufficient by itself.
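
    The paper's central observation — that a rank-based metric cannot see a change of scale — is easy to demonstrate: apply a monotonic recalibration to simulated computer outputs and the stand-alone AUC is unchanged, even though the scores the reader sees are different. The distributions below are illustrative, not the study's bi-beta fit:

```python
import numpy as np

def auc(neg, pos):
    """Rank-based (Mann-Whitney) AUC: P(positive score > negative score)."""
    gt = (pos[:, None] > neg[None, :]).mean()
    eq = (pos[:, None] == neg[None, :]).mean()
    return gt + 0.5 * eq

rng = np.random.default_rng(3)
neg = rng.beta(2, 5, size=2000)    # computer output, actually-negative cases
pos = rng.beta(5, 2, size=2000)    # computer output, actually-positive cases
recalibrated = lambda s: s ** 3    # monotonic "new version": ranks unchanged
print(auc(neg, pos))                              # e.g. ~0.9
print(auc(recalibrated(neg), recalibrated(pos)))  # identical AUC
```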

  4. Evaluation of fit and efficiency of CAD/CAM fabricated all-ceramic restorations based on direct and indirect digitalization: a double-blinded, randomized clinical trial.

    PubMed

    Ahrberg, Danush; Lauer, Hans Christoph; Ahrberg, Martin; Weigl, Paul

    2016-03-01

    The aim of this clinical trial was to evaluate the marginal and internal fit of CAD/CAM fabricated zirconia crowns and three-unit fixed dental prostheses (FDPs) resulting from direct versus indirect digitalization. The efficiency of both methods was analyzed. In 25 patients, 17 single crowns and eight three-unit FDPs were fabricated with all-ceramic zirconia using CAD/CAM technology. Each patient underwent two different impression methods: a computer-aided impression with Lava C.O.S. (CAI) and a conventional polyether impression with Impregum Penta Soft (CI). The working time for each group was recorded. Before insertion, the marginal and internal fit was recorded using silicone replicas of the frameworks. Each sample was cut into four sections and evaluated at four sites (marginal gap, mid-axial wall, axio-occlusal transition, centro-occlusal site) under ×64 magnification. The Mann-Whitney U test was used to detect significant differences between the two groups in terms of marginal and internal fit (α = 0.05). The mean for the marginal gap was 61.08 μm (±24.77 μm) for CAI compared with 70.40 μm (±28.87 μm) for CI, a statistically significant difference. The other mean values for CAI and CI, respectively, were as follows, in micrometers (±standard deviation): 88.27 (±41.49) and 92.13 (±49.87) at the mid-axial wall; 144.78 (±46.23) and 155.60 (±55.77) at the axio-occlusal transition; and 155.57 (±49.85) and 171.51 (±60.98) at the centro-occlusal site. The CAI group showed significantly lower values of internal fit at the centro-occlusal site. A quadrant scan with a computer-aided impression was 5 min 6 s more time efficient than a conventional impression, and a full-arch scan was 1 min 34 s more efficient. Although both direct and indirect digitalization facilitate the fabrication of single crowns and three-unit FDPs with clinically acceptable marginal fit, a significantly better marginal fit was noted with direct
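
    The study's significance test is the standard Mann-Whitney U; with SciPy it amounts to the following. The samples here are synthetic stand-ins generated from the reported means and standard deviations, not the trial data:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(4)
cai = rng.normal(61.08, 24.77, size=25)  # computer-aided impression group (um)
ci = rng.normal(70.40, 28.87, size=25)   # conventional impression group (um)

u_stat, p_value = mannwhitneyu(cai, ci, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.3f}")  # compare against alpha = 0.05
```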

  5. Computer-aided design and computer-aided modeling (CAD/CAM) generated surgical splints, cutting guides and custom-made implants: Which indications in orthognathic surgery?

    PubMed

    Scolozzi, P

    2015-12-01

    The purpose of the present report was to describe our indications, results and complications of computer-aided design and computer-aided modeling CAD/CAM surgical splints, cutting guides and custom-made implants in orthognathic surgery. We analyzed the clinical and radiological data of ten consecutive patients with dentofacial deformities treated using a CAD/CAM technique. Four patients had surgical splints and cutting guides for correction of maxillomandibular asymmetries, three had surgical cutting guides and customized internal distractors for correction of severe maxillary deficiencies and three had custom-made implants for additional chin contouring and/or mandibular defects following bimaxillary osteotomies and sliding genioplasty. We recorded age, gender, dentofacial deformity, surgical procedure and intra- and postoperative complications. All of the patients had stable cosmetic results with a high rate of patient satisfaction at the 1-year follow-up examination. No intra- and/or postoperative complications were encountered during any of the different steps of the procedure. This study demonstrated that the application of CAD/CAM patient-specific surgical splints, cutting guides and custom-made implants in orthognathic surgery allows for a successful outcome in the ten patients presented in this series. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  6. Evaluation of Five Microcomputer CAD Packages.

    ERIC Educational Resources Information Center

    Leach, James A.

    1987-01-01

    Discusses the similarities, differences, advanced features, applications and number of users of five microcomputer computer-aided design (CAD) packages. Included are: "AutoCAD (V.2.17)"; "CADKEY (V.2.0)"; "CADVANCE (V.1.0)"; "Super MicroCAD"; and "VersaCAD Advanced (V.4.00)." Describes the…

  7. Assessment of the Incremental Benefit of Computer-Aided Detection (CAD) for Interpretation of CT Colonography by Experienced and Inexperienced Readers

    PubMed Central

    Boone, Darren; Mallett, Susan; McQuillan, Justine; Taylor, Stuart A.; Altman, Douglas G.; Halligan, Steve

    2015-01-01

    Objectives To quantify the incremental benefit of computer-assisted-detection (CAD) for polyps, for inexperienced readers versus experienced readers of CT colonography. Methods 10 inexperienced and 16 experienced radiologists interpreted 102 colonography studies unassisted and with CAD utilised in a concurrent paradigm. They indicated any polyps detected on a study sheet. Readers’ interpretations were compared against a ground-truth reference standard: 46 studies were normal and 56 had at least one polyp (132 polyps in total). The primary study outcome was the difference in CAD net benefit (a combination of change in sensitivity and change in specificity with CAD, weighted towards sensitivity) for detection of patients with polyps. Results Inexperienced readers’ per-patient sensitivity rose from 39.1% to 53.2% with CAD and specificity fell from 94.1% to 88.0%, both statistically significant. Experienced readers’ sensitivity rose from 57.5% to 62.1% and specificity fell from 91.0% to 88.3%, both non-significant. Net benefit with CAD assistance was significant for inexperienced readers but not for experienced readers: 11.2% (95%CI 3.1% to 18.9%) versus 3.2% (95%CI -1.9% to 8.3%) respectively. Conclusions Concurrent CAD resulted in a significant net benefit when used by inexperienced readers to identify patients with polyps by CT colonography. The net benefit was nearly four times the magnitude of that observed for experienced readers. Experienced readers did not benefit significantly from concurrent CAD. PMID:26355745

  8. Effectiveness of braces designed using computer-aided design and manufacturing (CAD/CAM) and finite element simulation compared to CAD/CAM only for the conservative treatment of adolescent idiopathic scoliosis: a prospective randomized controlled trial.

    PubMed

    Cobetto, N; Aubin, C E; Parent, S; Clin, J; Barchi, S; Turgeon, I; Labelle, Hubert

    2016-10-01

    Clinical assessment of the immediate in-brace effect of braces designed using CAD/CAM and FEM versus CAD/CAM only for the conservative treatment of AIS, using a randomized, blinded and controlled study design. Forty AIS patients were prospectively recruited and randomized into two groups. For 19 patients (control group), the brace was designed using a scan of the patient's torso and a conventional CAD/CAM approach (CtrlBrace). For the 21 other patients (test group), the brace was additionally designed using finite element modeling (FEM) and 3D reconstructions of the spine, rib cage and pelvis (NewBrace). The NewBrace design was simulated and iteratively optimized to maximize the correction and minimize the contact surface and material. Both groups had comparable age, sex, weight, height, curve type and severity. Scoliosis Research Society standardized criteria for bracing were followed. Average Cobb angle prior to bracing was 27° and 28° for main thoracic (MT) and lumbar (L) curves, respectively, for the control group, and 33° and 28° for the test group. CtrlBraces reduced MT and L curves by 8° (29%) and 10° (40%), respectively, compared to 14° (43%) and 13° (46%) for NewBraces, whose simulations differed from the achieved correction by less than 5°. NewBraces were 50% thinner and had 20% less covering surface than CtrlBraces. Braces designed with CAD/CAM and 3D FEM simulation were more efficient and lighter than standard CAD/CAM TLSOs at the first immediate in-brace evaluation. These results suggest that the long-term effect of bracing in AIS may be improved using this new platform for brace fabrication. NCT02285621.

  9. The use of computer-aided design/manufacturing (CAD/CAM) technology to aid in the reconstruction of congenitally deficient pediatric mandibles: A case series.

    PubMed

    Gougoutas, Alexander J; Bastidas, Nicholas; Bartlett, Scott P; Jackson, Oksana

    2015-12-01

    Microvascular reconstruction of the pediatric mandible, particularly when necessitated by severe, congenital hypoplasia, presents a formidable challenge. Complex cases, however, may be simplified by computer-aided design/computer-aided manufacturing (CAD/CAM) assisted surgical planning. This series represents the senior authors' preliminary experiences with CAD/CAM assisted, microvascular reconstruction of the pediatric mandible. Presented are two patients with hemifacial/bifacial microsomia, both with profound mandibular hypoplasia, who underwent CAD/CAM assisted reconstruction of their mandibles with vascularized fibula flaps. Surgical techniques, CAD/CAM routines employed, complications, and long-term outcomes are reported. Successful mandibular reconstructions were achieved in both patients with centralization of their native mandibles and augmentation of deficient mandibular subunits. No long-term complications were observed. CAD/CAM technology can be utilized in pediatric mandibular reconstruction, and is particularly beneficial in cases of profound, congenital hypoplasia requiring extensive, multi-planar, bony reconstructions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  10. Quantum computing: Efficient fault tolerance

    NASA Astrophysics Data System (ADS)

    Gottesman, Daniel

    2016-12-01

    Dealing with errors in a quantum computer typically requires complex programming and many additional quantum bits. A technique for controlling errors has been proposed that alleviates both of these problems.

  11. CAD in der Praxis

    NASA Astrophysics Data System (ADS)

    Labisch, Susanna

    In practice, design and manufacturing are carried out almost exclusively with computer support. With this use of computers in design (CAD, Computer Aided Design) and manufacturing (CAM, Computer Aided Manufacturing), the technical drawing appears to lose importance, since communication between the design and manufacturing departments can take place primarily through the exchange of digital data.

  12. Influence of surface roughness on mechanical properties of two computer-aided design/computer-aided manufacturing (CAD/CAM) ceramic materials.

    PubMed

    Flury, S; Peutzfeldt, A; Lussi, A

    2012-01-01

    The aim of this study was to evaluate the influence of surface roughness on surface hardness (Vickers; VHN), elastic modulus (EM), and flexural strength (FLS) of two computer-aided design/computer-aided manufacturing (CAD/CAM) ceramic materials. One hundred sixty-two samples of VITABLOCS Mark II (VMII) and 162 samples of IPS Empress CAD (IPS) were ground according to six standardized protocols producing decreasing surface roughnesses (n=27/group): grinding with 1) silicon carbide (SiC) paper #80, 2) SiC paper #120, 3) SiC paper #220, 4) SiC paper #320, 5) SiC paper #500, and 6) SiC paper #1000. Surface roughness (Ra/Rz) was measured with a surface roughness meter, VHN and EM with a hardness indentation device, and FLS with a three-point bending test. To test for a correlation between surface roughness (Ra/Rz) and VHN, EM, or FLS, Spearman rank correlation coefficients were calculated. The decrease in surface roughness led to an increase in VHN from (VMII/IPS; medians) 263.7/256.5 VHN to 646.8/601.5 VHN, an increase in EM from 45.4/41.0 GPa to 66.8/58.4 GPa, and an increase in FLS from 49.5/44.3 MPa to 73.0/97.2 MPa. For both ceramic materials, Spearman rank correlation coefficients showed a strong negative correlation between surface roughness (Ra/Rz) and VHN or EM and a moderate negative correlation between Ra/Rz and FLS. In conclusion, a decrease in surface roughness generally improved the mechanical properties of the CAD/CAM ceramic materials tested. However, FLS was less influenced by surface roughness than expected.
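
    The reported correlations are Spearman rank coefficients; computing one takes a single SciPy call. The roughness/strength pairs below are invented to mimic the reported negative trend:

```python
import numpy as np
from scipy.stats import spearmanr

ra = np.array([2.10, 1.45, 0.95, 0.60, 0.40, 0.25])   # Ra (um), SiC #80..#1000
fls = np.array([49.5, 55.0, 58.0, 66.0, 70.0, 73.0])  # flexural strength (MPa)

rho, p = spearmanr(ra, fls)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")  # negative: rougher => weaker
```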

  13. [Retrospective analysis of a computer-aided detection (CAD) system in full-field digital mammography in correlation to tumor histology].

    PubMed

    Obenauer, S; Sohns, C; Werner, C; Grabbe, E

    2005-08-01

    To evaluate the usefulness of a computer-aided detection (CAD) system in full-field digital mammography in correlation to tumor histology. A total of 476 patients (226 patients with histologically proven malignant tumors, 250 healthy women) took part in this study. The mammograms were studied retrospectively using the CAD system ImageChecker. For 226 patients, digital mammograms in the MLO projection were available; for 186 of these patients, the CC projection was also available. CAD markers that correlated with histologically proven carcinomas were considered true-positive markers. All other CAD markers were considered false-positive. Histologically proven carcinomas without markers were false-negative results. The dependence of CAD marker placement on the different carcinoma histologies was studied using the chi-square test. No significant difference could be proven for the detectability of malignant breast lesions of different histologic types. For the detectability of ductal carcinoma in situ (DCIS), invasive ductal carcinoma (IDC), invasive lobular carcinoma (ILC), lobular carcinoma in situ (LCIS), tubular carcinoma and ductulo-lobular carcinoma, the true-positive rates were 71.1%, 75%, 70.7%, 70%, 60% and 80%, respectively, in the MLO projection and 83.9%, 75.9%, 81.8%, 77.8%, 87.5% and 33.3%, respectively, in the CC projection. There was an average of 0.5 false-positive markers per mammographic image. The histologic type of carcinoma seems to have no influence on detectability when using the CAD system. The high rate of false-positive markers shows, however, the limited specificity of the CAD system and that improvements are necessary.
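
    The histology comparison is a chi-square test on detected-versus-missed counts per tumor type. The table below is reconstructed approximately from the quoted MLO percentages with assumed denominators, so it only illustrates the procedure:

```python
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([   # [detected, missed]; rows: DCIS, IDC, ILC (illustrative)
    [27, 11],        # ~71.1% detected
    [75, 25],        # ~75.0% detected
    [29, 12],        # ~70.7% detected
])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")  # large p: no significant difference
```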

  14. Efficient Computational Model of Hysteresis

    NASA Technical Reports Server (NTRS)

    Shields, Joel

    2005-01-01

    A recently developed mathematical model of the output (displacement) versus the input (applied voltage) of a piezoelectric transducer accounts for hysteresis. For the sake of computational speed, the model is kept simple by neglecting the dynamic behavior of the transducer. Hence, the model applies to static and quasistatic displacements only. A piezoelectric transducer of the type to which the model applies is used as an actuator in a computer-based control system to effect fine position adjustments. Because the response time of the rest of such a system is usually much greater than that of a piezoelectric transducer, the model remains an acceptably close approximation for the purpose of control computations, even though the dynamics are neglected. The model (see Figure 1) represents an electrically parallel, mechanically series combination of backlash elements, each having a unique deadband width and output gain. The zeroth element in the parallel combination has zero deadband width and, hence, represents a linear component of the input/output relationship. The other elements, which have nonzero deadband widths, are used to model the nonlinear components of the hysteresis loop. The deadband widths and output gains of the elements are computed from experimental displacement-versus-voltage data. The hysteresis curve calculated by use of this model is piecewise linear beyond deadband limits.
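    The parallel-backlash structure described above lends itself to a compact implementation. The following is a minimal NumPy sketch of that structure, assuming a scalar voltage input; the deadband widths and gains below are hypothetical placeholders, whereas the model in the record computes them from measured displacement-versus-voltage data.

```python
import numpy as np

def backlash_response(voltage, deadbands, gains):
    """Quasistatic hysteresis model: an electrically parallel bank of
    backlash (play) operators. deadbands[0] == 0 gives the linear term."""
    w = np.zeros(len(deadbands))          # internal state of each element
    out = np.empty(len(voltage))
    for t, v in enumerate(voltage):
        # each play operator tracks the input only outside its deadband
        w = np.maximum(v - deadbands, np.minimum(v + deadbands, w))
        out[t] = gains @ w                # weighted sum of element outputs
    return out

# A triangular voltage sweep traces out a piecewise-linear hysteresis loop.
v = np.concatenate([np.linspace(0, 10, 50), np.linspace(10, -10, 100),
                    np.linspace(-10, 10, 100)])
deadbands = np.array([0.0, 1.0, 2.5, 4.0])   # hypothetical half-widths
gains = np.array([0.60, 0.20, 0.15, 0.05])   # hypothetical output gains
displacement = backlash_response(v, deadbands, gains)
```

    Because each play operator only clips its state against the current input, the cost per sample is linear in the number of elements, which is what keeps such a model cheap enough for control computations.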

  15. Effects of Iterative Reconstruction Algorithms on Computer-assisted Detection (CAD) Software for Lung Nodules in Ultra-low-dose CT for Lung Cancer Screening.

    PubMed

    Nomura, Yukihiro; Higaki, Toru; Fujita, Masayo; Miki, Soichiro; Awaya, Yoshikazu; Nakanishi, Toshio; Yoshikawa, Takeharu; Hayashi, Naoto; Awai, Kazuo

    2017-02-01

    This study aimed to evaluate the effects of iterative reconstruction (IR) algorithms on computer-assisted detection (CAD) software for lung nodules in ultra-low-dose computed tomography (ULD-CT) for lung cancer screening. We selected 85 subjects who underwent both a low-dose CT (LD-CT) scan and an additional ULD-CT scan in our lung cancer screening program for high-risk populations. The LD-CT scans were reconstructed with filtered back projection (FBP; LD-FBP). The ULD-CT scans were reconstructed with FBP (ULD-FBP), adaptive iterative dose reduction 3D (AIDR 3D; ULD-AIDR 3D), and forward projected model-based IR solution (FIRST; ULD-FIRST). CAD software for lung nodules was applied to each image dataset, and the performance of the CAD software was compared among the different IR algorithms. The mean volume CT dose indexes were 3.02 mGy (LD-CT) and 0.30 mGy (ULD-CT). For overall nodules, the sensitivities of CAD software at 3.0 false positives per case were 78.7% (LD-FBP), 9.3% (ULD-FBP), 69.4% (ULD-AIDR 3D), and 77.8% (ULD-FIRST). Statistical analysis showed that the sensitivities of ULD-AIDR 3D and ULD-FIRST were significantly higher than that of ULD-FBP (P < .001). The performance of CAD software in ULD-CT was improved by using IR algorithms. In particular, the performance of CAD in ULD-FIRST was almost equivalent to that in LD-FBP. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  16. Energy-efficient quantum computing

    NASA Astrophysics Data System (ADS)

    Ikonen, Joni; Salmilehto, Juha; Möttönen, Mikko

    2017-04-01

    In the near future, one of the major challenges in the realization of large-scale quantum computers operating at low temperatures is the management of harmful heat loads owing to thermal conduction of cabling and dissipation at cryogenic components. This naturally raises the question of what the fundamental limitations of energy consumption in scalable quantum computing are. In this work, we derive the greatest lower bound for the gate error induced by a single application of a bosonic drive mode of given energy. Previously, such an error type had been considered to be inversely proportional to the total driving power, but we show that this limitation can be circumvented by introducing a qubit driving scheme which reuses and corrects drive pulses. Specifically, our method serves to reduce the average energy consumption per gate operation without increasing the average gate error. Thus our work shows that precise, scalable control of quantum systems can, in principle, be implemented without the introduction of excessive heat or decoherence.

  17. Use of CAD systems in design of Space Station and space robots

    NASA Technical Reports Server (NTRS)

    Dwivedi, Suren N.; Yadav, P.; Jones, Gary; Travis, Elmer W.

    1988-01-01

    The evolution of CAD systems is traced. State-of-the-art CAD systems are reviewed and various advanced CAD facilities and supplementing systems being used at NASA-Goddard are described. CAD hardware, computer software, and protocols are detailed.

  19. A low cost computer aided design (CAD) system for 3D-reconstruction from serial sections.

    PubMed

    Keri, C; Ahnelt, P K

    1991-05-01

    This paper describes an approach to computer-assisted 3D reconstruction of neuronal specimens based on a low-cost yet powerful software package for a personal computer (Atari ST). It provides an easy-to-handle (mouse-driven) object editor to create 3D models of medium complexity (15,000 vertices) from sections or from scratch. The models may be displayed in various modes, including stereo viewing and complex animation sequences.

  20. ESPC Computational Efficiency of Earth System Models

    DTIC Science & Technology

    2014-09-30

    Approved for public release; distribution is unlimited. Reporting period: 2014. Figure 1 – Plot showing wall-clock time in seconds per forecast day for a T639L64 (~21 km at the equator) NAVGEM run.

  1. Gathering Empirical Evidence Concerning Links between Computer Aided Design (CAD) and Creativity

    ERIC Educational Resources Information Center

    Musta'amal, Aede Hatib; Norman, Eddie; Hodgson, Tony

    2009-01-01

    Discussion is often reported concerning potential links between computer-aided designing and creativity, but there is a lack of systematic enquiry to gather empirical evidence concerning such links. This paper reports an indication of findings from other research studies carried out in contexts beyond general education that have sought evidence…

  2. CAD-Based Monte Carlo Neutron Transport Analysis for KSTAR

    NASA Astrophysics Data System (ADS)

    Seo, Geon Ho; Choi, Sung Hoon; Shim, Hyung Jin

    2017-09-01

    The Monte Carlo (MC) neutron transport analysis of a complex nuclear system such as a fusion facility may require accurate modeling of its complicated geometry. In order to take advantage of the modeling capability of the computer-aided design (CAD) system for the MC neutronics analysis, the Seoul National University MC code, McCARD, has been augmented with a CAD-based geometry processing module by embedding the OpenCASCADE CAD kernel. In the developed module, the CAD geometry data are internally converted to the constructive solid geometry model with the help of the CAD kernel. An efficient cell-searching algorithm is devised for the void space treatment. The performance of the CAD-based McCARD calculations is tested for the Korea Superconducting Tokamak Advanced Research device by comparing with results of conventional MC calculations using a text-based geometry input.
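    As a rough illustration of the kind of cell lookup such a geometry module performs once CAD data have been converted to constructive solid geometry, the toy sketch below represents each cell as an intersection of signed surfaces and assigns unmatched points to a void region. The surfaces, cell names, and linear search are illustrative assumptions, not McCARD's actual implementation.

```python
import numpy as np

# Signed surface functions: negative inside (sense -1), positive outside (+1).
def sphere(center, radius):
    return lambda p: np.dot(p - center, p - center) - radius ** 2

# A cell is an intersection of (surface, required sense) pairs, mirroring the
# constructive solid geometry model the CAD data are converted to.
cells = {
    "core":  [(sphere(np.zeros(3), 1.0), -1)],
    "shell": [(sphere(np.zeros(3), 1.0), +1),
              (sphere(np.zeros(3), 1.5), -1)],
}

def find_cell(p):
    """Linear cell search: return the first cell whose senses all match."""
    for name, surfaces in cells.items():
        if all(np.sign(f(p)) == sense for f, sense in surfaces):
            return name
    return "void"   # points matching no cell belong to the void region

print(find_cell(np.array([0.5, 0.0, 0.0])))   # core
print(find_cell(np.array([1.2, 0.0, 0.0])))   # shell
print(find_cell(np.array([2.0, 0.0, 0.0])))   # void
```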

  3. Orbital implant placement using a computer-aided design and manufacturing (CAD/CAM) stereolithographic surgical template protocol.

    PubMed

    Goh, B T; Teoh, K H

    2015-05-01

    Surgical implant placement in the orbital region for the support of a prosthesis is challenging due to the thin orbital rim and proximity to vital structures. This article reports the use of a computer-aided design and manufacturing (CAD/CAM) stereolithographic surgical template protocol for orbital implant placement in four patients, who were followed-up for about 7 years. A total of 11 orbital implants were inserted, eight of these in irradiated bone. No intraoperative complications were noted in any of the patients and the implants were all inserted in the planned positions. The survival rate of implants placed in irradiated bone that received hyperbaric oxygen therapy was 62.5% (5/8). One implant failed in a burns injury patient at 74 months after functional loading. The overall survival of implants in the orbital region and the cumulative survival at 7 years was 63.6%. With regard to skin reactions around the abutments, 85% were grade 0, 13% were grade 1, and 2% were grade 2 according to the Holgers classification. The mean survival time of the first prosthesis was 49 months. High patient satisfaction was achieved with the implant-retained orbital prostheses.

  4. Coronary artery computed tomography as the first-choice imaging diagnostics in patients with high pre-test probability of coronary artery disease (CAT-CAD).

    PubMed

    Rudziński, Piotr N; Kruk, Mariusz; Demkow, Marcin; Dzielińska, Zofia; Pręgowski, Jerzy; Witkowski, Adam; Rużyłło, Witold; Kępka, Cezary

    2015-01-01

    The primary diagnostic examination performed in patients with a high pre-test probability of coronary artery disease (CAD) is invasive coronary angiography. Currently, approximately 50% of all invasive coronary angiographies do not end with percutaneous coronary intervention (PCI) because of the absence of significant coronary artery lesions. It is desirable to eliminate such situations. There is an alternative, non-invasive method useful for the exclusion of significant CAD: coronary computed tomography angiography (CCTA). We hypothesize that use of CCTA as the first-choice method in the diagnosis of patients with high pre-test probability of CAD may reduce the number of invasive coronary angiographies not followed by interventional treatment. CCTA also does not appear to entail additional diagnostic risks or costs. Confirmation of these assumptions may influence cardiology guidelines. One hundred and twenty patients with indications for invasive coronary angiography, as determined by current ESC guidelines on stable CAD, are randomized 1:1 to a classic invasive coronary angiography group and a CCTA group. All patients included in the study are monitored for the occurrence of possible end points during the diagnostic and therapeutic cycle (from the first imaging examination to either complete revascularization or disqualification from invasive treatment), or during the follow-up period. Based on the literature, it appears that the use of modern CT systems in patients with high pre-test probability of CAD, as well as appropriate clinical interpretation of the imaging study by invasive cardiologists, enables precise planning of invasive therapeutic procedures. Our randomized study will provide data to verify these assumptions.

  5. CAD/CAM (Computer Aided Design/Computer Aided Manufacture). A Brief Guide to Materials in the Library of Congress.

    ERIC Educational Resources Information Center

    Havas, George D.

    This brief guide to materials in the Library of Congress (LC) on computer aided design and/or computer aided manufacturing lists reference materials and other information sources under 13 headings: (1) brief introductions; (2) LC subject headings used for such materials; (3) textbooks; (4) additional titles; (5) glossaries and handbooks; (6)…

  6. HistoCAD: Machine Facilitated Quantitative Histoimaging with Computer Assisted Diagnosis

    NASA Astrophysics Data System (ADS)

    Tomaszewski, John E.

    Prostatic adenocarcinoma (CAP) is the most common malignancy in American men. In 2010 there will be an estimated 217,730 new cases and 32,050 deaths from CAP in the US. The diagnosis of prostatic adenocarcinoma is made exclusively from the histological evaluation of prostate tissue. The sampling protocols used to obtain 18 gauge (1.5 mm diameter) needle cores are standard sampling templates consisting of 6-12 cores performed in the context of an elevated serum value for prostate specific antigen (PSA). In this context, the prior probability of cancer is somewhat increased. However, even in this screened population, the efficiency of finding cancer is low at only approximately 20%. Histopathologists are faced with the task of reviewing the 5-10 million cores of tissue resulting from approximately 1,000,000 biopsy procedures yearly, parsing all the benign scenes from the worrisome scenes, and deciding which of the worrisome images are cancer.

  7. Micro-computed tomography evaluation of marginal fit of lithium disilicate crowns fabricated by using chairside CAD/CAM systems or the heat-pressing technique.

    PubMed

    Neves, Flávio D; Prado, Célio J; Prudente, Marcel S; Carneiro, Thiago A P N; Zancopé, Karla; Davi, Letícia R; Mendonça, Gustavo; Cooper, Lyndon F; Soares, Carlos José

    2014-11-01

    No consensus exists concerning the acceptable ranges of marginal fit for lithium disilicate crowns fabricated with either heat-pressing techniques or computer-aided design and computer-aided manufacturing (CAD/CAM) systems. The purpose of the study was to evaluate with micro-computed tomography the marginal fit of lithium disilicate crowns fabricated with different chairside CAD/CAM systems (Cerec or E4D) or the heat-pressing technique. Lithium disilicate crowns were fabricated to fit an in vitro cast of a single human premolar. Three fabrication techniques were used: digital impressions with Cerec 3D Bluecam scanner with titanium dioxide powder, followed by milling from IPS e.max CAD for Cerec; digital impressions with E4D Laser scanner without powder, followed by milling from IPS e.max CAD for E4D; and fabrication from IPS e.max Press by using the lost-wax and heat-pressing techniques. Each crown was fixed to the cast and scanned with micro-computed tomography to obtain 52 images for measuring the vertical and horizontal fit. Data were statistically analyzed by 1-way ANOVA, followed by the Tukey honestly significant difference test (α=.05). The mean values of vertical misfit were 36.8 ±13.9 μm for the heat-pressing group and 39.2 ±8.7 μm for the Cerec group, which were significantly smaller values than for the E4D group at 66.9 ±31.9 μm (P=.046). The percentage of crowns with a vertical misfit <75 μm was 83.8% for Cerec and heat-pressing, whereas this value was 65% for E4D. Both types of horizontal misfit (underextended and overextended) were 49.2% for heat-pressing, 50.8% for Cerec, and 58.8% for E4D. Lithium disilicate crowns fabricated by using the Cerec 3D Bluecam scanner CAD/CAM system or the heat-pressing technique exhibited a significantly smaller vertical misfit than crowns fabricated by using an E4D Laser scanner CAD/CAM system. Copyright © 2014 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  8. A supervised 'lesion-enhancement' filter by use of a massive-training artificial neural network (MTANN) in computer-aided diagnosis (CAD)

    NASA Astrophysics Data System (ADS)

    Suzuki, Kenji

    2009-09-01

    Computer-aided diagnosis (CAD) has been an active area of study in medical image analysis. A filter for the enhancement of lesions plays an important role for improving the sensitivity and specificity in CAD schemes. The filter enhances objects similar to a model employed in the filter; e.g. a blob-enhancement filter based on the Hessian matrix enhances sphere-like objects. Actual lesions, however, often differ from a simple model; e.g. a lung nodule is generally modeled as a solid sphere, but there are nodules of various shapes and with internal inhomogeneities such as a nodule with spiculations and ground-glass opacity. Thus, conventional filters often fail to enhance actual lesions. Our purpose in this study was to develop a supervised filter for the enhancement of actual lesions (as opposed to a lesion model) by use of a massive-training artificial neural network (MTANN) in a CAD scheme for detection of lung nodules in CT. The MTANN filter was trained with actual nodules in CT images to enhance actual patterns of nodules. By use of the MTANN filter, the sensitivity and specificity of our CAD scheme were improved substantially. With a database of 69 lung cancers, nodule candidate detection by the MTANN filter achieved a 97% sensitivity with 6.7 false positives (FPs) per section, whereas nodule candidate detection by a difference-image technique achieved a 96% sensitivity with 19.3 FPs per section. Classification-MTANNs were applied for further reduction of the FPs. The classification-MTANNs removed 60% of the FPs with a loss of one true positive; thus, it achieved a 96% sensitivity with 2.7 FPs per section. Overall, with our CAD scheme based on the MTANN filter and classification-MTANNs, an 84% sensitivity with 0.5 FPs per section was achieved. First presented at the Seventh International Conference on Machine Learning and Applications, San Diego, CA, USA, 11-13 December 2008.

  9. A supervised 'lesion-enhancement' filter by use of a massive-training artificial neural network (MTANN) in computer-aided diagnosis (CAD).

    PubMed

    Suzuki, Kenji

    2009-09-21

    Computer-aided diagnosis (CAD) has been an active area of study in medical image analysis. A filter for the enhancement of lesions plays an important role for improving the sensitivity and specificity in CAD schemes. The filter enhances objects similar to a model employed in the filter; e.g. a blob-enhancement filter based on the Hessian matrix enhances sphere-like objects. Actual lesions, however, often differ from a simple model; e.g. a lung nodule is generally modeled as a solid sphere, but there are nodules of various shapes and with internal inhomogeneities such as a nodule with spiculations and ground-glass opacity. Thus, conventional filters often fail to enhance actual lesions. Our purpose in this study was to develop a supervised filter for the enhancement of actual lesions (as opposed to a lesion model) by use of a massive-training artificial neural network (MTANN) in a CAD scheme for detection of lung nodules in CT. The MTANN filter was trained with actual nodules in CT images to enhance actual patterns of nodules. By use of the MTANN filter, the sensitivity and specificity of our CAD scheme were improved substantially. With a database of 69 lung cancers, nodule candidate detection by the MTANN filter achieved a 97% sensitivity with 6.7 false positives (FPs) per section, whereas nodule candidate detection by a difference-image technique achieved a 96% sensitivity with 19.3 FPs per section. Classification-MTANNs were applied for further reduction of the FPs. The classification-MTANNs removed 60% of the FPs with a loss of one true positive; thus, it achieved a 96% sensitivity with 2.7 FPs per section. Overall, with our CAD scheme based on the MTANN filter and classification-MTANNs, an 84% sensitivity with 0.5 FPs per section was achieved.
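    To make the patch-regression idea concrete, here is a heavily simplified sketch in the spirit of an MTANN: a small network is trained to map an image patch to a lesion likelihood for the patch's center pixel. The synthetic image, teacher map, patch size, and network shape are all invented stand-ins for the CT nodule data and architecture used in the record.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def extract_patches(image, half=4):
    """One row per interior pixel: the (2*half+1)^2 patch centered on it."""
    H, W = image.shape
    rows, centers = [], []
    for i in range(half, H - half):
        for j in range(half, W - half):
            rows.append(image[i - half:i + half + 1,
                              j - half:j + half + 1].ravel())
            centers.append((i, j))
    return np.array(rows), centers

# Synthetic training pair: the teacher map is 1.0 inside a known "nodule".
rng = np.random.default_rng(0)
train_img = rng.random((64, 64))
teacher = np.zeros_like(train_img)
teacher[30:36, 30:36] = 1.0

X, centers = extract_patches(train_img)
y = np.array([teacher[i, j] for i, j in centers])
net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=500,
                   random_state=0).fit(X, y)

# Applying the trained regressor pixel by pixel yields an enhancement map.
likelihood_map = net.predict(X).reshape(56, 56)
```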

  10. Immersive CAD

    SciTech Connect

    Ames, A.L.

    1999-02-01

    This paper documents development of a capability for performing shape-changing editing operations on solid model representations in an immersive environment. The capability includes part- and assembly-level operations, with part modeling supporting topology-invariant and topology-changing modifications. A discussion of various design considerations in developing an immersive capability is included, along with discussion of a prototype implementation we have developed and explored. The project investigated approaches to providing both topology-invariant and topology-changing editing. A prototype environment was developed to test the approaches and determine the usefulness of immersive editing. The prototype showed exciting potential in redefining the CAD interface. It is fun to use. Editing is much faster and friendlier than with traditional feature-based CAD software. The prototype algorithms did not reliably provide a sufficient frame rate for complex geometries, but they provided the necessary roadmap for development of a production capability.

  11. Efficient Methods to Compute Genomic Predictions

    USDA-ARS?s Scientific Manuscript database

    Efficient methods for processing genomic data were developed to increase reliability of estimated breeding values and simultaneously estimate thousands of marker effects. Algorithms were derived and computer programs tested on simulated data for 50,000 markers and 2,967 bulls. Accurate estimates of ...
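    Although the abstract is truncated, the class of solvers it refers to is well established: marker effects can be estimated from a ridge-type system solved one marker at a time, so the marker-by-marker coefficient matrix is never formed. Below is a minimal sketch with synthetic data; the marker coding, shrinkage parameter, and sweep count are assumptions for illustration only.

```python
import numpy as np

def snp_effects_gauss_seidel(Z, y, lam, n_sweeps=50):
    """Solve the ridge system (Z'Z + lam*I) a = Z'y one marker at a time,
    so the m-by-m coefficient matrix is never formed (m can be ~50,000)."""
    n, m = Z.shape
    a = np.zeros(m)
    e = y.astype(float).copy()            # residuals, e = y - Z @ a
    zz = (Z * Z).sum(axis=0)              # z_j' z_j for every marker
    for _ in range(n_sweeps):
        for j in range(m):
            zj = Z[:, j]
            a_new = (zj @ e + zz[j] * a[j]) / (zz[j] + lam)
            e -= zj * (a_new - a[j])      # keep residuals consistent
            a[j] = a_new
    return a

# Toy data: 200 animals, 1000 markers coded 0/1/2, 10 true QTL.
rng = np.random.default_rng(1)
Z = rng.integers(0, 3, size=(200, 1000)).astype(float)
true = np.zeros(1000); true[:10] = rng.normal(size=10)
y = Z @ true + rng.normal(size=200)
a_hat = snp_effects_gauss_seidel(Z, y, lam=100.0)
gebv = Z @ a_hat                          # genomic predictions for the animals
```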

  12. Use of CAD Geometry in MDO

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid A.

    1996-01-01

    The purpose of this paper is to discuss the use of Computer-Aided Design (CAD) geometry in a Multi-Disciplinary Design Optimization (MDO) environment. Two techniques are presented to facilitate the use of CAD geometry by different disciplines, such as Computational Fluid Dynamics (CFD) and Computational Structural Mechanics (CSM). One method is to transfer the load from a CFD grid to a CSM grid. The second method is to update the CAD geometry for CSM deflection.
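    The first method, load transfer from a CFD grid to a CSM grid, can be illustrated with a deliberately simple conservative scheme. The paper does not specify its interpolation, so the nearest-neighbor lumping below is an assumption, chosen only because it preserves the total force exactly.

```python
import numpy as np

def transfer_loads(cfd_xyz, cfd_forces, csm_xyz):
    """Conservative nearest-neighbor load transfer: every CFD nodal force is
    lumped onto the closest CSM node, so the total force is preserved."""
    csm_forces = np.zeros_like(csm_xyz)
    for p, f in zip(cfd_xyz, cfd_forces):
        nearest = np.argmin(((csm_xyz - p) ** 2).sum(axis=1))
        csm_forces[nearest] += f
    return csm_forces

# Toy surface grids: a fine CFD grid and a coarse CSM grid on the same patch.
rng = np.random.default_rng(2)
cfd_xyz = rng.random((500, 3)); cfd_forces = rng.random((500, 3))
csm_xyz = rng.random((40, 3))
csm_forces = transfer_loads(cfd_xyz, cfd_forces, csm_xyz)
assert np.allclose(cfd_forces.sum(axis=0), csm_forces.sum(axis=0))
```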

  13. Conservative restorative treatment using a single-visit, all-ceramic CAD/CAM system.

    PubMed

    Benk, Joel

    2007-01-01

    Computer-aided design/computer-aided manufacturing (CAD/CAM) continues to radically change the way in which the dental team plans, prepares, and fabricates a patient's restoration. This advancing technology offers the clinician the ability to scan the patient's failing dentition and then design a long-lasting, reliable restoration based on these data. CAD/CAM systems also permit efficient, single-visit placement of the restoration while preserving much of the natural tooth structure. This article discusses how a chairside CAD/CAM system can be used to provide such a restoration in the posterior region in a single visit.

  14. Efficient Calibration of Computationally Intensive Hydrological Models

    NASA Astrophysics Data System (ADS)

    Poulin, A.; Huot, P. L.; Audet, C.; Alarie, S.

    2015-12-01

    A new hybrid optimization algorithm for the calibration of computationally-intensive hydrological models is introduced. The calibration of hydrological models is a blackbox optimization problem where the only information available to the optimization algorithm is the objective function value. In the case of distributed hydrological models, the calibration process is often known to be hampered by computational efficiency issues. Running a single simulation may take several minutes and since the optimization process may require thousands of model evaluations, the computational time can easily expand to several hours or days. A blackbox optimization algorithm, which can substantially improve the calibration efficiency, has been developed. It merges both the convergence analysis and robust local refinement from the Mesh Adaptive Direct Search (MADS) algorithm, and the global exploration capabilities from the heuristic strategies used by the Dynamically Dimensioned Search (DDS) algorithm. The new algorithm is applied to the calibration of the distributed and computationally-intensive HYDROTEL model on three different river basins located in the province of Quebec (Canada). Two calibration problems are considered: (1) calibration of a 10-parameter version of HYDROTEL, and (2) calibration of a 19-parameter version of the same model. A previous study by the authors had shown that the original version of DDS was the most efficient method for the calibration of HYDROTEL, when compared to the MADS and the very well-known SCEUA algorithms. The computational efficiency of the hybrid DDS-MADS method is therefore compared with the efficiency of the DDS algorithm based on a 2000 model evaluations budget. Results show that the hybrid DDS-MADS method can reduce the total number of model evaluations by 70% for the 10-parameter version of HYDROTEL and by 40% for the 19-parameter version without compromising the quality of the final objective function value.
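    For readers unfamiliar with DDS, the heuristic is compact enough to sketch in full: the number of perturbed decision variables shrinks as the evaluation budget is consumed, moving the search from global exploration to local refinement. The version below is a minimal rendering under simplifying assumptions (the published algorithm reflects at the bounds rather than clipping, and the test function is a cheap stand-in for an expensive hydrological model run).

```python
import numpy as np

def dds(objective, lo, hi, budget=1000, r=0.2, seed=0):
    """Dynamically Dimensioned Search: a greedy blackbox optimizer that
    perturbs fewer and fewer decision variables as the evaluation budget
    is used up."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x_best = lo + rng.random(lo.size) * (hi - lo)
    f_best = objective(x_best)
    for i in range(1, budget):
        p = 1.0 - np.log(i) / np.log(budget)      # inclusion probability decays
        mask = rng.random(lo.size) < p
        if not mask.any():                        # always perturb >= 1 variable
            mask[rng.integers(lo.size)] = True
        x = x_best.copy()
        x[mask] += r * (hi - lo)[mask] * rng.standard_normal(mask.sum())
        x = np.clip(x, lo, hi)                    # simplified bound handling
        f = objective(x)
        if f < f_best:                            # greedy acceptance
            x_best, f_best = x, f
    return x_best, f_best

sphere = lambda x: float(np.sum((x - 1.5) ** 2))  # stand-in objective
x, f = dds(sphere, lo=np.zeros(10), hi=np.full(10, 5.0), budget=2000)
```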

  15. A novel method of computer aided orthognathic surgery using individual CAD/CAM templates: a combination of osteotomy and repositioning guides.

    PubMed

    Li, Biao; Zhang, Lei; Sun, Hao; Yuan, Jianbing; Shen, Steve G F; Wang, Xudong

    2013-12-01

    The maxilla is usually positioned during orthognathic surgery using surgical splints, which has many limitations. In this preliminary study we present a new computer-aided design and manufacture (CAD/CAM) template to guide the osteotomy and the repositioning, and illustrate its feasibility and validity. Six patients with dental maxillofacial deformities were studied. The design of the templates was based on three-dimensional surgical planning, including the Le Fort osteotomy and the repositioning of the maxilla, and were made using a three-dimensional printing technique. Two parts of the templates, respectively, guided the osteotomy and repositioned the maxilla during operation. The traditional occlusal splint was used to achieve the final occlusion with the mandible in the expected position. Postoperative measurements were made between maxillary hard tissue landmarks, relative to reference planes based on computed tomographic (CT) data. The results of the measurements were analysed and compared with the virtual plan. The preliminary results showed that we achieved clinically acceptable precision for the position of the maxilla (<1.0 mm). Preoperative preparation time was reduced to about 145 min. All patients were satisfied with the aesthetic results. Our CAD/CAM templates provide a reliable method for transfer of maxillary surgical planning, which may be a useful alternative to the intermediate splint technique. Our technique does not require traditional model surgery, scanning of dental casts, or recording of the CAD/CAM splint. Copyright © 2013 The British Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  16. Two-view information fusion for improvement of computer-aided detection (CAD) of breast masses on mammograms

    NASA Astrophysics Data System (ADS)

    Wei, Jun; Sahiner, Berkman; Hadjiiski, Lubomir M.; Chan, Heang-Ping; Helvie, Mark A.; Roubidoux, Marilyn A.; Zhou, Chuan; Ge, Jun; Zhang, Yiheng

    2006-03-01

    We are developing a two-view information fusion method to improve the performance of our CAD system for mass detection. Mass candidates on each mammogram were first detected with our single-view CAD system. Potential object pairs on the two-view mammograms were then identified by using the distance between the object and the nipple. Morphological features, a Hessian feature, correlation coefficients between the two paired objects, and texture features were used as input to train a similarity classifier that estimated a similarity score for each pair. Finally, a linear discriminant analysis (LDA) classifier was used to fuse the score from the single-view CAD system and the similarity score. A data set of 475 patients containing 972 mammograms with 475 biopsy-proven masses was used to train and test the CAD system. All cases contained the CC view and the MLO or LM view. We randomly divided the data set into two independent sets of 243 cases and 232 cases. The training and testing were performed using the 2-fold cross validation method. The detection performance of the CAD system was assessed by free response receiver operating characteristic (FROC) analysis. The average test FROC curve was obtained from averaging the FP rates at the same sensitivity along the two corresponding test FROC curves from the 2-fold cross validation. At the case-based sensitivities of 90%, 85% and 80% on the test set, the single-view CAD system achieved an FP rate of 2.0, 1.5, and 1.2 FPs/image, respectively. With the two-view fusion system, the FP rates were reduced to 1.7, 1.3, and 1.0 FPs/image, respectively, at the corresponding sensitivities. The improvement was found to be statistically significant (p<0.05) by the AFROC method. Our results indicate that the two-view fusion scheme can improve the performance of mass detection on mammograms.
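    The final fusion step amounts to a two-feature linear classifier. Below is a minimal sketch with synthetic scores; the score distributions are invented, whereas the real inputs are the single-view CAD score and the trained similarity score.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Synthetic stand-ins: one score per candidate from the single-view detector
# and one similarity score from the two-view pairing stage.
rng = np.random.default_rng(3)
n = 400
labels = rng.integers(0, 2, n)                   # 1 = true mass
single_view = labels * 1.0 + rng.normal(0, 1.0, n)
similarity = labels * 0.8 + rng.normal(0, 1.0, n)

X = np.column_stack([single_view, similarity])
lda = LinearDiscriminantAnalysis().fit(X, labels)
fused = lda.decision_function(X)   # fused score used to re-rank candidates
```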

  17. An Efficient Method for Computing All Reducts

    NASA Astrophysics Data System (ADS)

    Bao, Yongguang; Du, Xiaoyong; Deng, Mingrong; Ishii, Naohiro

    In the process of data mining of a decision table using Rough Sets methodology, the main computational effort is associated with the determination of the reducts. Computing all reducts is a combinatorial NP-hard problem. Therefore the only way to achieve faster execution is to provide an algorithm, with a better constant factor, which may solve this problem in reasonable time for real-life data sets. The purpose of this presentation is to propose two new efficient algorithms for computing reducts in information systems. The proposed algorithms are based on properties of reducts and on the relation between reducts and the discernibility matrix. Experiments measuring execution time have been conducted on several real-world domains. The results show that the algorithms improve execution time when compared with other methods. In real applications, the two proposed algorithms can be combined.
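    The discernibility matrix the algorithms build on can be made concrete with a small decision table: entry (i, j) collects the condition attributes that distinguish two objects whose decisions differ, and reducts are minimal attribute sets that hit every entry. The toy table below is invented for illustration.

```python
from itertools import combinations

# Decision table: rows are objects, last column is the decision attribute.
table = [
    ("high", "yes", "normal", "flu"),
    ("high", "yes", "high",   "flu"),
    ("low",  "no",  "normal", "healthy"),
    ("low",  "yes", "high",   "flu"),
]
n_cond = 3   # number of condition attributes

def discernibility_matrix(table, n_cond):
    """Entry (i, j) lists the condition attributes that distinguish objects
    i and j when their decisions differ; reducts are prime implicants of
    the conjunction of these entries."""
    matrix = {}
    for i, j in combinations(range(len(table)), 2):
        if table[i][-1] != table[j][-1]:
            matrix[(i, j)] = frozenset(
                a for a in range(n_cond) if table[i][a] != table[j][a])
    return matrix

for pair, attrs in discernibility_matrix(table, n_cond).items():
    print(pair, sorted(attrs))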

  18. Computationally Efficient Multiconfigurational Reactive Molecular Dynamics.

    PubMed

    Yamashita, Takefumi; Peng, Yuxing; Knight, Chris; Voth, Gregory A

    2012-12-11

    It is a computationally demanding task to explicitly simulate the electronic degrees of freedom in a system to observe the chemical transformations of interest, while at the same time sampling the time and length scales required to converge statistical properties and thus reduce artifacts due to initial conditions, finite-size effects, and limited sampling. One solution that significantly reduces the computational expense consists of molecular models in which effective interactions between particles govern the dynamics of the system. If the interaction potentials in these models are developed to reproduce calculated properties from electronic structure calculations and/or ab initio molecular dynamics simulations, then one can calculate accurate properties at a fraction of the computational cost. Multiconfigurational algorithms model the system as a linear combination of several chemical bonding topologies to simulate chemical reactions, also sometimes referred to as "multistate". These algorithms typically utilize energy and force calculations already found in popular molecular dynamics software packages, thus facilitating their implementation without significant changes to the structure of the code. However, the evaluation of energies and forces for several bonding topologies per simulation step can lead to poor computational efficiency if redundancy is not efficiently removed, particularly with respect to the calculation of long-ranged Coulombic interactions. This paper presents accurate approximations (effective long-range interaction and resulting hybrid methods) and multiple-program parallelization strategies for the efficient calculation of electrostatic interactions in reactive molecular simulations.

  19. Computationally Efficient Multiconfigurational Reactive Molecular Dynamics

    PubMed Central

    Yamashita, Takefumi; Peng, Yuxing; Knight, Chris; Voth, Gregory A.

    2012-01-01

    It is a computationally demanding task to explicitly simulate the electronic degrees of freedom in a system to observe the chemical transformations of interest, while at the same time sampling the time and length scales required to converge statistical properties and thus reduce artifacts due to initial conditions, finite-size effects, and limited sampling. One solution that significantly reduces the computational expense consists of molecular models in which effective interactions between particles govern the dynamics of the system. If the interaction potentials in these models are developed to reproduce calculated properties from electronic structure calculations and/or ab initio molecular dynamics simulations, then one can calculate accurate properties at a fraction of the computational cost. Multiconfigurational algorithms model the system as a linear combination of several chemical bonding topologies to simulate chemical reactions, also sometimes referred to as “multistate”. These algorithms typically utilize energy and force calculations already found in popular molecular dynamics software packages, thus facilitating their implementation without significant changes to the structure of the code. However, the evaluation of energies and forces for several bonding topologies per simulation step can lead to poor computational efficiency if redundancy is not efficiently removed, particularly with respect to the calculation of long-ranged Coulombic interactions. This paper presents accurate approximations (effective long-range interaction and resulting hybrid methods) and multiple-program parallelization strategies for the efficient calculation of electrostatic interactions in reactive molecular simulations. PMID:25100924
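    The core multiconfigurational step is small enough to sketch: the system energy is the lowest eigenvalue of a matrix whose diagonal holds the energies of the individual bonding topologies and whose off-diagonal elements couple them. The two-state example below uses invented matrix elements and is only a schematic of the general idea, not any particular package's implementation.

```python
import numpy as np

def two_state_energy(h11, h22, h12):
    """Ground-state energy and topology weights of a two-configuration
    system: diagonalize the 2x2 Hamiltonian whose diagonal holds the
    energies of the individual bonding topologies."""
    H = np.array([[h11, h12],
                  [h12, h22]])
    vals, vecs = np.linalg.eigh(H)        # ascending eigenvalues
    c = vecs[:, 0]                        # ground-state coefficients
    return vals[0], c ** 2                # energy, weight of each topology

# Invented matrix elements: topology 1 slightly favored, moderate coupling.
E0, weights = two_state_energy(h11=-10.0, h22=-9.5, h12=-1.2)
print(E0, weights)
# By the Hellmann-Feynman theorem the forces combine the per-topology
# forces with the same coefficients: F = c1^2*F11 + 2*c1*c2*F12 + c2^2*F22.
```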

  20. [Evaluation of production and clinical working time of computer-aided design/computer-aided manufacturing (CAD/CAM) custom trays for complete denture].

    PubMed

    Wei, L; Chen, H; Zhou, Y S; Sun, Y C; Pan, S X

    2017-02-18

    To compare the technician fabrication time and clinical working time of custom trays fabricated using two different methods, three-dimensional printed custom trays and conventional custom trays, and to prove the feasibility of computer-aided design/computer-aided manufacturing (CAD/CAM) custom trays in clinical use from the perspective of clinical time cost. Twenty edentulous patients were recruited into this study, which was a prospective, single-blind, randomized, self-controlled clinical trial. Two custom trays were fabricated for each participant. One of the custom trays was fabricated using the functional suitable denture (FSD) system through a CAD/CAM process, and the other was manually fabricated using conventional methods. Then the final impressions were taken using both custom trays, and these final impressions were used to fabricate complete dentures respectively. The technician production time of the custom trays and the clinical working time of taking the final impression were recorded. The average times spent on fabricating the three-dimensional printed custom trays using the FSD system and fabricating the conventional custom trays manually were (28.6±2.9) min and (31.1±5.7) min, respectively. The average times spent on making the final impression with the three-dimensional printed custom trays and with the conventional custom trays were (23.4±11.5) min and (25.4±13.0) min, respectively. There was a significant difference in both the technician fabrication time and the clinical working time between the three-dimensional printed custom trays and the conventional custom trays (P<0.05). The average times spent on fabricating three-dimensional printed custom trays using the FSD system and making the final impression with them are less than those of the conventional custom trays fabricated manually, which reveals that the FSD three-dimensional printed custom trays are less time-consuming.

  1. The reliability of an easy measuring method for abutment convergence angle with a computer-aided design (CAD) system

    PubMed Central

    Seo, Yong-Joon; Kwon, Taek-Ka; Han, Jung-Suk; Lee, Jai-Bong; Kim, Sung-Hun

    2014-01-01

    PURPOSE The purpose of this study was to evaluate the intra-rater reliability and inter-rater reliability of three different methods using a drawing protractor, a digital protractor after tracing, and a CAD system. MATERIALS AND METHODS Twenty-four artificial abutments that had been prepared by dental students were used in this study. Three dental students measured the convergence angles by each method three times. Bland-Altman plots were applied to examine the overall reliability by comparing the traditional tracing method with a new method using the CAD system. Intraclass Correlation Coefficients (ICC) evaluated intra-rater reliability and inter-rater reliability. RESULTS All three methods exhibited high intra-rater and inter-rater reliability (ICC>0.80, P<.05). Measurements with the CAD system showed the highest intra-rater reliability. In addition, it showed improved inter-rater reliability compared with the traditional tracing methods. CONCLUSION Based on the results of this study, the CAD system may be an easy and reliable tool for measuring the abutment convergence angle. PMID:25006382

  2. Computationally Efficient Prediction of Ionic Liquid Properties.

    PubMed

    Chaban, Vitaly V; Prezhdo, Oleg V

    2014-06-05

    Due to fundamental differences, room-temperature ionic liquids (RTIL) are significantly more viscous than conventional molecular liquids and require long simulation times. At the same time, RTILs remain in the liquid state over a much broader temperature range than the ordinary liquids. We exploit the ability of RTILs to stay liquid at several hundred degrees Celsius and introduce a straightforward and computationally efficient method for predicting RTIL properties at ambient temperature. RTILs do not alter phase behavior at 600-800 K. Therefore, their properties can be smoothly extrapolated down to ambient temperatures. We numerically prove the validity of the proposed concept for density and ionic diffusion of four different RTILs. This simple method enhances the computational efficiency of the existing simulation approaches as applied to RTILs by more than an order of magnitude.
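    The proposed extrapolation is essentially an Arrhenius fit through high-temperature simulation results. A minimal sketch follows; the diffusion coefficients and fitting range below are hypothetical numbers invented for illustration, not the paper's data.

```python
import numpy as np

# Hypothetical self-diffusion coefficients (m^2/s) from simulations at
# temperatures where the RTIL equilibrates quickly (600-800 K).
T = np.array([600.0, 650.0, 700.0, 750.0, 800.0])
D = np.array([2.1e-10, 3.4e-10, 5.1e-10, 7.3e-10, 1.0e-9])

# Fit the Arrhenius form ln D = ln D0 - Ea/(R*T) and extrapolate to 300 K.
slope, intercept = np.polyfit(1.0 / T, np.log(D), 1)
D_300 = np.exp(intercept + slope / 300.0)
print(f"extrapolated D(300 K) = {D_300:.2e} m^2/s")
```

    The extrapolation is only defensible because, as the record argues, these liquids do not change phase behavior between the simulated and target temperatures.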

  3. Changing computing paradigms towards power efficiency

    PubMed Central

    Klavík, Pavel; Malossi, A. Cristiano I.; Bekas, Costas; Curioni, Alessandro

    2014-01-01

    Power awareness is fast becoming immensely important in computing, ranging from the traditional high-performance computing applications to the new generation of data centric workloads. In this work, we describe our efforts towards a power-efficient computing paradigm that combines low- and high-precision arithmetic. We showcase our ideas for the widely used kernel of solving systems of linear equations that finds numerous applications in scientific and engineering disciplines as well as in large-scale data analytics, statistics and machine learning. Towards this goal, we developed tools for the seamless power profiling of applications at a fine-grain level. In addition, we verify here previous work on post-FLOPS/W metrics and show that these can shed much more light in the power/energy profile of important applications. PMID:24842033
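    The low-/high-precision combination described here is typified by iterative refinement: factor and solve cheaply in single precision, then correct with double-precision residuals. Below is a minimal NumPy sketch; a production code would reuse the single-precision factorization rather than re-solving, and the energy savings materialize only on hardware where the precision gap matters.

```python
import numpy as np

def mixed_precision_solve(A, b, iters=5):
    """Iterative refinement: solve in float32 (cheap, low power), compute
    residuals in float64, and correct until near double accuracy."""
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                               # high-precision residual
        dx = np.linalg.solve(A32, r.astype(np.float32))
        x += dx.astype(np.float64)
    return x

rng = np.random.default_rng(4)
A = rng.random((200, 200)) + 200 * np.eye(200)      # well-conditioned system
b = rng.random(200)
x = mixed_precision_solve(A, b)
print(np.linalg.norm(A @ x - b))                    # ~double-precision residual
```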

  4. Effects of tributylborane-activated adhesive and two silane agents on bonding computer-aided design and manufacturing (CAD/CAM) resin composite.

    PubMed

    Shinohara, Ayano; Taira, Yohsuke; Sawase, Takashi

    2017-01-09

    The present study was conducted to evaluate the effects of an experimental adhesive agent [methyl methacrylate-tributylborane liquid (MT)] and two adhesive agents containing silane on the bonding between a resin composite block of a computer-aided design and manufacturing (CAD/CAM) system and a light-curing resin composite veneering material. The surfaces of CAD/CAM resin composite specimens were ground with silicon-carbide paper, treated with phosphoric acid, and then primed with either one of the two silane agents [Scotchbond Universal Adhesive (SC) and GC Ceramic Primer II (GC)], no adhesive control (Cont), or one of three combinations (MT/SC, MT/GC, and MT/Cont). A light-curing resin composite was veneered on the primed CAD/CAM resin composite surface. The veneered specimens were subjected to thermocycling between 4 and 60 °C for 10,000 cycles, and the shear bond strengths were determined. All data were analyzed using analysis of variance and a post hoc Tukey-Kramer HSD test (α = 0.05, n = 8). MT/SC (38.7 MPa) exhibited the highest mean bond strengths, followed by MT/GC (30.4 MPa), SC (27.9 MPa), and MT/Cont (25.7 MPa), while Cont (12.9 MPa) and GC (12.3 MPa) resulted in the lowest bond strengths. The use of MT in conjunction with a silane agent significantly improved the bond strength. Surface treatment with appropriate adhesive agents was confirmed as a prerequisite for veneering CAD/CAM resin composite restorations.

  5. Fabricating a tooth- and implant-supported maxillary obturator for a patient after maxillectomy with computer-guided surgery and CAD/CAM technology: A clinical report.

    PubMed

    Noh, Kwantae; Pae, Ahran; Lee, Jung-Woo; Kwon, Yong-Dae

    2016-05-01

    An obturator prosthesis with insufficient retention and support may be improved with implant placement. However, implant surgery in patients after maxillary tumor resection can be complicated because of limited visibility and anatomic complexity. Therefore, computer-guided surgery can be advantageous even for experienced surgeons. In this clinical report, the use of computer-guided surgery is described for implant placement using a bone-supported surgical template for a patient with maxillary defects. The prosthetic procedure was facilitated and simplified by using computer-aided design/computer-aided manufacture (CAD/CAM) technology. Oral function and phonetics were restored using a tooth- and implant-supported obturator prosthesis. No clinical symptoms and no radiographic signs of significant bone loss around the implants were found at a 3-year follow-up. The treatment approach presented here can be a viable option for patients with insufficient remaining zygomatic bone after a hemimaxillectomy.

  6. PC Board Layout and Electronic Drafting with CAD. Teacher Edition.

    ERIC Educational Resources Information Center

    Bryson, Jimmy

    This teacher's guide contains 11 units of instruction for a course on computer electronics and computer-assisted drafting (CAD) using a personal computer (PC). The course covers the following topics: introduction to electronic drafting with CAD; CAD system and software; basic electronic theory; component identification; basic integrated circuit…

  8. A computationally efficient superresolution image reconstruction algorithm.

    PubMed

    Nguyen, N; Milanfar, P; Golub, G

    2001-01-01

    Superresolution reconstruction produces a high-resolution image from a set of low-resolution images. Previous iterative methods for superresolution had not adequately addressed the computational and numerical issues for this ill-conditioned and typically underdetermined large scale problem. We propose efficient block circulant preconditioners for solving the Tikhonov-regularized superresolution problem by the conjugate gradient method. We also extend to underdetermined systems the derivation of the generalized cross-validation method for automatic calculation of regularization parameters. The effectiveness of our preconditioners and regularization techniques is demonstrated with superresolution results for a simulated sequence and a forward looking infrared (FLIR) camera image sequence.
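    The regularized normal equations the authors solve by conjugate gradients can be sketched with a toy one-dimensional forward model (blur plus downsampling). The operator, regularization weight, and data below are invented, and no preconditioner is applied, which is precisely the gap their block-circulant preconditioners address.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n, m = 128, 64                      # high-res and low-res lengths

def A(x):
    """Forward model: circular 3-tap blur, then 2x downsample."""
    blurred = (np.roll(x, -1) + x + np.roll(x, 1)) / 3.0
    return blurred[::2]

def AT(y):
    """Adjoint: upsample with zeros, then the same symmetric blur."""
    up = np.zeros(n); up[::2] = y
    return (np.roll(up, -1) + up + np.roll(up, 1)) / 3.0

lam = 1e-2                          # Tikhonov regularization weight
normal_op = LinearOperator((n, n), matvec=lambda x: AT(A(x)) + lam * x)

# One noisy low-resolution observation of a smooth test signal.
truth = np.sin(np.linspace(0, 4 * np.pi, n))
y = A(truth) + 0.01 * np.random.default_rng(5).standard_normal(m)

x_hat, info = cg(normal_op, AT(y))  # solve (A'A + lam*I) x = A'y
```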

  9. Efficient quantum computing of complex dynamics.

    PubMed

    Benenti, G; Casati, G; Montangero, S; Shepelyansky, D L

    2001-11-26

    We propose a quantum algorithm which uses the number of qubits in an optimal way and efficiently simulates a physical model with rich and complex dynamics described by the quantum sawtooth map. The numerical study of the effect of static imperfections in the quantum computer hardware shows that the main elements of the phase space structures are accurately reproduced up to a time scale which is polynomial in the number of qubits. The errors generated by these imperfections are more significant than the errors of random noise in gate operations.
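    A classical split-operator simulation of the quantum sawtooth map makes the comparison concrete: each map step alternates a kick that is diagonal in the angle basis with a free rotation that is diagonal in the momentum basis, with FFTs switching between them. The parameters below are illustrative; on a classical machine each step costs O(N log N), whereas the quantum algorithm needs only a number of gates polynomial in the number of qubits n_q = log2 N.

```python
import numpy as np

nq = 8                                   # qubits a quantum computer would need
N = 2 ** nq                              # number of simulated levels
theta = 2 * np.pi * np.arange(N) / N     # angle grid
n = np.fft.fftfreq(N, d=1.0 / N)         # integer momentum grid

k, T = 1.5, 2 * np.pi * 0.31             # illustrative map parameters
kick = np.exp(1j * k * (theta - np.pi) ** 2 / 2)   # diagonal in theta
rot = np.exp(-1j * T * n ** 2 / 2)                 # diagonal in n

psi = np.ones(N, complex) / np.sqrt(N)   # the n = 0 momentum eigenstate
for _ in range(100):                     # 100 iterations of the map
    psi = np.fft.ifft(rot * np.fft.fft(kick * psi))

p_n = np.abs(np.fft.fft(psi)) ** 2       # momentum distribution
p_n /= p_n.sum()
print("<n^2> =", (n ** 2 * p_n).sum())   # spread over momentum levels
```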

  10. Efficient Radiative Transfer Computations in the Atmosphere.

    DTIC Science & Technology

    1981-01-01

    With absorptance A = 1 - r, the net flux at level Z is given by equation (5):

$$F(Z) = I_{\uparrow} - I_{\downarrow} = B(Z_{\mathrm{sfc}}) - B(Z_{\mathrm{top}})\,A(Z_{\mathrm{top}}, Z) - \int_{Z_{\mathrm{top}}}^{Z_{\mathrm{sfc}}} A(Z', Z)\,dB(Z') \tag{5}$$

    Cited reference: Alyea, F., N. Phillips, and R. Prinn, 1975: A three-dimensional dynamical-chemical model of atmospheric ozone, J. Atmos. Sci., 32:170-194.

  11. Improving the radiologist-CAD interaction: designing for appropriate trust.

    PubMed

    Jorritsma, W; Cnossen, F; van Ooijen, P M A

    2015-02-01

    Computer-aided diagnosis (CAD) has great potential to improve radiologists' diagnostic performance. However, the reported performance of the radiologist-CAD team is lower than what might be expected based on the performance of the radiologist and the CAD system in isolation. This indicates that the interaction between radiologists and the CAD system is not optimal. An important factor in the interaction between humans and automated aids (such as CAD) is trust. Suboptimal performance of the human-automation team is often caused by an inappropriate level of trust in the automation. In this review, we examine the role of trust in the radiologist-CAD interaction and suggest ways to improve the output of the CAD system so that it allows radiologists to calibrate their trust in the CAD system more effectively. Observer studies of CAD systems show that radiologists often have an inappropriate level of trust in the CAD system. They sometimes under-trust CAD, thereby reducing its potential benefits, and sometimes over-trust it, leading to diagnostic errors they would not have made without CAD. Based on the literature on trust in human-automation interaction and the results of CAD observer studies, we have identified four ways to improve the output of CAD so that it allows radiologists to form a more appropriate level of trust in CAD. Designing CAD systems for appropriate trust is important and can improve the performance of the radiologist-CAD team. Future CAD research and development should acknowledge the importance of the radiologist-CAD interaction, and specifically the role of trust therein, in order to create the perfect artificial partner for the radiologist. This review focuses on the role of trust in the radiologist-CAD interaction. The aim of the review is to encourage CAD developers to design for appropriate trust and thereby improve the performance of the radiologist-CAD team. Copyright © 2014 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  12. Computationally efficient variable resolution depth estimation

    NASA Astrophysics Data System (ADS)

    Calder, B. R.; Rice, G.

    2017-09-01

    A new algorithm for data-adaptive, large-scale, computationally efficient estimation of bathymetry is proposed. The algorithm uses a first pass over the observations to construct a spatially varying estimate of data density, which is then used to predict achievable estimate sample spacing for robust depth estimation across the area of interest. A low-resolution estimate of depth is also constructed during the first pass as a guide for further work. A piecewise-regular grid is then constructed following the sample spacing estimates, and accurate depth is finally estimated using the composite refined grid and an extended and re-implemented version of the CUBE algorithm. Resource-efficient data structures allow for the algorithm to operate over large areas and large datasets without excessive compute resources; modular design allows for more complex spatial representations to be included if required. The proposed system is demonstrated on a pair of hydrographic datasets, illustrating the adaptation of the algorithm to different depth- and sensor-driven data densities. Although the algorithm was designed for bathymetric estimation, it could be readily used on other two dimensional scalar fields where variable data density is a driver.
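    The first-pass density-to-spacing idea can be sketched simply: bin the soundings into coarse tiles, then pick the finest node spacing that still collects roughly a target number of observations per estimation node. The tile size and target count below are invented, and the published algorithm's piecewise-regular grid refinement and CUBE estimation stages are omitted.

```python
import numpy as np

def spacing_map(x, y, tile=100.0, target=10):
    """First-pass density estimate: bin soundings into coarse tiles, then
    convert density to node spacing (spacing ~ sqrt(target / density))."""
    xe = np.arange(x.min(), x.max() + tile, tile)
    ye = np.arange(y.min(), y.max() + tile, tile)
    counts, _, _ = np.histogram2d(x, y, bins=[xe, ye])
    density = counts / tile ** 2                  # soundings per square meter
    with np.errstate(divide="ignore"):
        return np.sqrt(target / density)          # inf where a tile is empty

rng = np.random.default_rng(6)
x, y = rng.normal(0, 300, 20000), rng.normal(0, 300, 20000)
s = spacing_map(x, y)
print(s[np.isfinite(s)].min())   # finest spacing, in the densest tile
```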

  13. The effect of radiation dose reduction on computer-aided detection (CAD) performance in a low-dose lung cancer screening population.

    PubMed

    Young, Stefano; Lo, Pechin; Kim, Grace; Brown, Matthew; Hoffman, John; Hsu, William; Wahi-Anwar, Wasil; Flores, Carlos; Lee, Grace; Noo, Frederic; Goldin, Jonathan; McNitt-Gray, Michael

    2017-04-01

    Lung cancer screening with low-dose CT has recently been approved for reimbursement, heralding the arrival of such screening services worldwide. Computer-aided detection (CAD) tools offer the potential to assist radiologists in detecting nodules in these screening exams. In lung screening, as in all CT exams, there is interest in further reducing radiation dose. However, the effects of continued dose reduction on CAD performance are not fully understood. In this work, we investigated the effect of reducing radiation dose on CAD lung nodule detection performance in a screening population. The raw projection data files were collected from 481 patients who underwent low-dose screening CT exams at our institution as part of the National Lung Screening Trial (NLST). All scans were performed on a multidetector scanner (Sensation 64, Siemens Healthcare, Forchheim Germany) according to the NLST protocol, which called for a fixed tube current scan of 25 effective mAs for standard-sized patients and 40 effective mAs for larger patients. The raw projection data were input to a reduced-dose simulation software to create simulated reduced-dose scans corresponding to 50% and 25% of the original protocols. All raw data files were reconstructed at the scanner with 1 mm slice thickness and B50 kernel. The lungs were segmented semi-automatically, and all images and segmentations were input to an in-house CAD algorithm trained on higher dose scans (75-300 mAs). CAD findings were compared to a reference standard generated by an experienced reader. Nodule- and patient-level sensitivities were calculated along with false positives per scan, all of which were evaluated in terms of the relative change with respect to dose. Nodules were subdivided based on size and solidity into categories analogous to the LungRADS assessment categories, and sub-analyses were performed. From the 481 patients in this study, 82 had at least one nodule (prevalence of 17%) and 399 did not (83%). A total of 118

  14. The application of CAD / CAM technology in Dentistry

    NASA Astrophysics Data System (ADS)

    Susic, I.; Travar, M.; Susic, M.

    2017-05-01

    Information and communication technologies have found their application in the healthcare sector, including modern dentistry. The application of CAD/CAM in dentistry is the process by which a finished dental restoration is obtained through fine milling of prefabricated ceramic blocks. CAD/CAM is an acronym for Computer-Aided Design (CAD) / Computer-Aided Manufacture (CAM), that is, the computer-aided design and computer-aided manufacture of inlays, onlays, crowns and bridges. CAD/CAM technology essentially allows the creation of two-dimensional and three-dimensional models and their materialization by numerically controlled machines. In order to operate more efficiently, reduce costs, increase patient satisfaction and ultimately achieve profit, many dental offices in the world have focused their attention on the implementation of modern IT solutions in everyday practice. In addition to specialized clinic-management software, inventory control, etc., or hardware such as lasers in cosmetic dentistry and intraoral scanning, importance has recently been given to the application of CAD/CAM technology in the field of prosthetics. After the removal of pathologically altered tooth structure, it is necessary to achieve a restoration that is as similar as possible to the anatomy of the natural tooth. By applying CAD/CAM technology to suitable ceramic blocks, a very quick and also very accurate restoration can be obtained in the form of inlays, onlays, bridges and crowns. The paper presents the advantages of using this technology as well as the satisfaction of patients and dentists using systems such as Cercon, Celay, Cerec, Lava, and Everest, which represent the imperative of modern dentistry in creating fixed dental restorations.

  15. Viewing CAD Drawings on the Internet

    ERIC Educational Resources Information Center

    Schwendau, Mark

    2004-01-01

    Computer aided design (CAD) has been producing 3-D models for years. AutoCAD software is frequently used to create sophisticated 3-D models. These CAD files can be exported as 3DS files for import into Autodesk's 3-D Studio Viz. In this program, the user can render and modify the 3-D model before exporting it out as a WRL (world file hyperlinked)…

  17. CAD/CAM. High-Technology Training Module.

    ERIC Educational Resources Information Center

    Zuleger, Robert

    This high technology training module is an advanced course on computer-assisted design/computer-assisted manufacturing (CAD/CAM) for grades 11 and 12. This unit, to be used with students in advanced drafting courses, introduces the concept of CAD/CAM. The content outline includes the following seven sections: (1) CAD/CAM software; (2) computer…

  18. Current techniques in CAD/CAM denture fabrication.

    PubMed

    Baba, Nadim Z; AlRumaih, Hamad S; Goodacre, Brian J; Goodacre, Charles J

    2016-01-01

    Recently, the use of computer-aided design/computer-aided manufacturing (CAD/CAM) to produce complete dentures has seen exponential growth in the dental market, and the number of commercially available CAD/CAM denture systems grows every year. The purpose of this article is to describe the clinical and laboratory procedures of 5 CAD/CAM denture systems.

  19. A primer on the energy efficiency of computing

    SciTech Connect

    Koomey, Jonathan G.

    2015-03-30

    The efficiency of computing at peak output has increased rapidly since the dawn of the computer age. This paper summarizes some of the key factors affecting the efficiency of computing in all usage modes. While there is still great potential for improving the efficiency of computing devices, we will need to alter how we do computing in the next few decades because we are finally approaching the limits of current technologies.

  20. CAD Services: an Industry Standard Interface for Mechanical CAD Interoperability

    NASA Technical Reports Server (NTRS)

    Claus, Russell; Weitzer, Ilan

    2002-01-01

    Most organizations seek to design and develop new products in increasingly shorter time periods. At the same time, increased performance demands require a team-based multidisciplinary design process that may span several organizations. One approach to meeting these demands is 'Geometry Centric' design. In this approach, design engineers team their efforts through one unified representation of the design, usually captured in a CAD system. Standards-based interfaces are critical to provide the uniform, simple, distributed services that enable the 'Geometry Centric' design approach. This paper describes an industry-wide effort, under the Object Management Group's (OMG) Manufacturing Domain Task Force, to define interfaces that enable the interoperability of CAD, Computer Aided Manufacturing (CAM), and Computer Aided Engineering (CAE) tools. This critical link to enable 'Geometry Centric' design is called CAD Services V1.0. This paper discusses the features of this standard and its proposed application.

  1. Efficient computation of volume in flow predictions

    NASA Technical Reports Server (NTRS)

    Vinokur, M.; Kordulla, W.

    1983-01-01

    An efficient method for calculating cell volumes for time-dependent three-dimensional flow predictions by finite volume calculations is presented. Cells with eight arbitrary corner points are considered, and each face is divided into two planar triangles, so the computed volume depends on the orientation chosen for the partitioning. In the case of a hexahedron, it is noted that any open surface whose boundary is a closed curve possesses a surface vector independent of the surface shape. Expressions are defined for the surface vector that do not depend on which face diagonal is used to partition the surface when quantifying the volume. A decomposition of the cell volume involving two corners that are each the vertex of three diagonals, and six corners that are each the vertex of one diagonal, yields portions that are tetrahedra. The resulting scheme can be used for time-dependent finite volume calculations and requires less computer time than previous methods.
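
    As a concrete illustration of the tetrahedral decomposition described above, the cell volume can be accumulated from scalar triple products. This is a minimal sketch assuming VTK-style corner ordering and one particular five-tetrahedron split, not necessarily the paper's exact partitioning:

    ```python
    import numpy as np

    def tet_volume(a, b, c, d):
        """Volume of tetrahedron (a, b, c, d) via the scalar triple product."""
        return abs(np.dot(np.cross(b - a, c - a), d - a)) / 6.0

    def hex_volume(c):
        """Volume of a hexahedral cell from its 8 corners (VTK ordering assumed).

        The five-tetrahedron split below is one valid choice; the paper's point
        is that the decomposition must be chosen consistently across cells.
        """
        tets = [(0, 1, 2, 5), (0, 2, 3, 7), (0, 5, 7, 4), (2, 6, 7, 5), (0, 2, 7, 5)]
        return sum(tet_volume(c[i], c[j], c[k], c[l]) for i, j, k, l in tets)

    # Sanity check on the unit cube: should print 1.0.
    corners = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                        [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]], dtype=float)
    print(hex_volume(corners))
    ```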

  2. Impact of image normalization and quantization on the performance of sonar computer-aided detection/computer-aided classification (CAD/CAC) algorithms

    NASA Astrophysics Data System (ADS)

    Ciany, Charles M.; Zurawski, William C.

    2007-04-01

    Raytheon has extensively processed high-resolution sonar images with its CAD/CAC algorithms to provide real-time classification of mine-like bottom objects in a wide range of shallow-water environments. The algorithm performance is measured in terms of probability of correct classification (Pcc) as a function of false alarm rate, and is impacted by variables associated with both the physics of the problem and the signal processing design choices. Some examples of prominent variables pertaining to the choices of signal processing parameters are image resolution (i.e., pixel dimensions), image normalization scheme, and pixel intensity quantization level (i.e., number of bits used to represent the intensity of each image pixel). Improvements in image resolution associated with the technology transition from sidescan to synthetic aperture sonars have prompted the use of image decimation algorithms to reduce the number of pixels per image that are processed by the CAD/CAC algorithms, in order to meet real-time processor throughput requirements. Additional improvements in digital signal processing hardware have also facilitated the use of an increased quantization level in converting the image data from analog to digital format. This study evaluates modifications to the normalization algorithm and image pixel quantization level within the image processing prior to CAD/CAC processing, and examines their impact on the resulting CAD/CAC algorithm performance. The study utilizes a set of at-sea data from multiple test exercises in varying shallow water environments.
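
    Two of the signal-processing variables examined in this study, pixel quantization level and image decimation, are easy to state concretely. The following is a generic sketch of both operations, not a reproduction of Raytheon's processing chain:

    ```python
    import numpy as np

    def requantize(img, bits):
        """Map a float image normalized to [0, 1] onto 2**bits uniform
        intensity levels (e.g., bits=8 gives 256 gray levels)."""
        levels = 2 ** bits
        return np.round(img * (levels - 1)) / (levels - 1)

    def decimate(img, factor):
        """Block-averaging decimation that reduces the pixel count by
        factor**2, as used to meet real-time throughput limits."""
        h = (img.shape[0] // factor) * factor
        w = (img.shape[1] // factor) * factor
        blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
        return blocks.mean(axis=(1, 3))
    ```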

  3. Computer-aided detection (CAD) of solid pulmonary nodules in chest x-ray equivalent ultralow dose chest CT - first in-vivo results at dose levels of 0.13 mSv.

    PubMed

    Messerli, Michael; Kluckert, Thomas; Knitel, Meinhard; Rengier, Fabian; Warschkow, René; Alkadhi, Hatem; Leschka, Sebastian; Wildermuth, Simon; Bauer, Ralf W

    2016-12-01

    To determine the value of computer-aided detection (CAD) for solid pulmonary nodules in ultralow radiation dose single-energy computed tomography (CT) of the chest using third-generation dual-source CT at 100 kV and fixed tube current at 70 mAs with tin filtration. 202 consecutive patients undergoing clinically indicated standard dose chest CT (1.8±0.7 mSv) were prospectively included and scanned with an additional ultralow dose CT (0.13±0.01 mSv) in the same session. Standard of reference (SOR) was established by consensus reading of standard dose CT by two radiologists. CAD was performed in standard dose and ultralow dose CT with two different reconstruction kernels. CAD detection rate of nodules was evaluated, including subgroups of different nodule sizes (<5, 5-7, >7 mm). Sensitivity was further analysed in multivariable mixed effects logistic regression. The SOR included 279 solid nodules (mean diameter 4.3±3.4 mm, range 1-24 mm). There was no significant difference in per-nodule sensitivity of CAD in standard dose with 70% compared to 68% in ultralow dose CT, both overall and in different size subgroups (all p>0.05). CAD led to a significant increase of sensitivity for both radiologists reading the ultralow dose CT scans (all p<0.001). In multivariable analysis, the use of CAD (p<0.001) and nodule size (p<0.0001) were independent predictors for nodule detection, but not BMI (p=0.933) or the use of contrast agents (p=0.176). Computer-aided detection of solid pulmonary nodules using ultralow dose CT with chest x-ray equivalent radiation dose has similar sensitivities to those from standard dose CT. Adding CAD in ultralow dose CT significantly improves the sensitivity of radiologists. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  4. Dimensioning storage and computing clusters for efficient high throughput computing

    NASA Astrophysics Data System (ADS)

    Accion, E.; Bria, A.; Bernabeu, G.; Caubet, M.; Delfino, M.; Espinal, X.; Merino, G.; Lopez, F.; Martinez, F.; Planas, E.

    2012-12-01

    Scientific experiments are producing huge amounts of data, and the size of their datasets and total volume of data continue to increase. These data are then processed by researchers belonging to large scientific collaborations, with the Large Hadron Collider being a good example. The focal point of scientific data centers has shifted from efficiently coping with petabyte-scale storage to delivering high-quality data-processing throughput. The dimensioning of the internal components in High Throughput Computing (HTC) data centers is of crucial importance to cope with all the activities demanded by the experiments, both online (data acceptance) and offline (data processing, simulation, and user analysis). This requires a precise setup involving disk and tape storage services, a computing cluster, and the internal networking to prevent bottlenecks, overloads, and undesired slowness that lead to lost CPU cycles and batch job failures. In this paper we point out relevant features for running a successful data storage and processing service in an intensive HTC environment.

  5. Computed tomography and CAD/CAE methods for the study of the osseous inner ear bone of Greek Quaternary endemic mammals

    NASA Astrophysics Data System (ADS)

    Provatidis, C. G.; Theodorou, E. G.; Theodorou, G. E.

    It is undisputed that the use of computed tomography gives the researcher an inside view of the internal morphology of precious findings. The main goal of this study is to take advantage of the possibilities that derive from the use of CT scans in the field of vertebrate palaeontology. Rare fossil skull parts (the Os petrosum of Elephas tiliensis from Tilos, Phanourios minor from Cyprus, and Candiacervus sp. from Crete) brought to light by excavations required further analysis of their internal structure by non-destructive methods. Selected specimens were scanned and exported as DICOM files. These were then imported into the MIMICS software in order to develop the required 3D digital CAD models. By using distinctive reference points on the bone geometry, selected on palaeontological criteria, section views were created, revealing the extremely complex internal structure and making it available for further palaeontological analysis.

  6. Education and Training Packages for CAD/CAM.

    ERIC Educational Resources Information Center

    Wright, I. C.

    1986-01-01

    Discusses educational efforts in the fields of Computer Assisted Design and Manufacturing (CAD/CAM). Describes two educational training initiatives underway in the United Kingdom, one of which is a resource materials package for teachers of CAD/CAM at the undergraduate level, and the other a training course for managers of CAD/CAM systems. (TW)

  8. Efficient quantum computing using coherent photon conversion.

    PubMed

    Langford, N K; Ramelow, S; Prevedel, R; Munro, W J; Milburn, G J; Zeilinger, A

    2011-10-12

    Single photons are excellent quantum information carriers: they were used in the earliest demonstrations of entanglement and in the production of the highest-quality entanglement reported so far. However, current schemes for preparing, processing and measuring them are inefficient. For example, down-conversion provides heralded, but randomly timed, single photons, and linear optics gates are inherently probabilistic. Here we introduce a deterministic process--coherent photon conversion (CPC)--that provides a new way to generate and process complex, multiquanta states for photonic quantum information applications. The technique uses classically pumped nonlinearities to induce coherent oscillations between orthogonal states of multiple quantum excitations. One example of CPC, based on a pumped four-wave-mixing interaction, is shown to yield a single, versatile process that provides a full set of photonic quantum processing tools. This set satisfies the DiVincenzo criteria for a scalable quantum computing architecture, including deterministic multiqubit entanglement gates (based on a novel form of photon-photon interaction), high-quality heralded single- and multiphoton states free from higher-order imperfections, and robust, high-efficiency detection. It can also be used to produce heralded multiphoton entanglement, create optically switchable quantum circuits and implement an improved form of down-conversion with reduced higher-order effects. Such tools are valuable building blocks for many quantum-enabled technologies. Finally, using photonic crystal fibres we experimentally demonstrate quantum correlations arising from a four-colour nonlinear process suitable for CPC and use these measurements to study the feasibility of reaching the deterministic regime with current technology. Our scheme, which is based on interacting bosonic fields, is not restricted to optical systems but could also be implemented in optomechanical, electromechanical and superconducting

  9. Microcomputer Simulated CAD for Engineering Graphics.

    ERIC Educational Resources Information Center

    Huggins, David L.; Myers, Roy E.

    1983-01-01

    Describes a simulated computer-aided design (CAD) program at The Pennsylvania State University. Rationale for the program, facilities, microcomputer equipment (Apple) used, and development of a software package for simulating applied engineering graphics are considered. (JN)

  10. Application of Fisher fusion techniques to improve the individual performance of sonar computer-aided detection/computer-aided classification (CAD/CAC) algorithms

    NASA Astrophysics Data System (ADS)

    Ciany, Charles M.; Zurawski, William C.

    2009-05-01

    Raytheon has extensively processed high-resolution sidescan sonar images with its CAD/CAC algorithms to provide classification of targets in a variety of shallow underwater environments. The Raytheon CAD/CAC algorithm is based on non-linear image segmentation into highlight, shadow, and background regions, followed by extraction, association, and scoring of features from candidate highlight and shadow regions of interest (ROIs). The targets are classified by thresholding an overall classification score, which is formed by summing the individual feature scores. The algorithm performance is measured in terms of probability of correct classification as a function of false alarm rate, and is determined by both the choice of classification features and the manner in which the classifier rates and combines these features to form its overall score. In general, the algorithm performs very reliably against targets that exhibit "strong" highlight and shadow regions in the sonar image; i.e., both the highlight echo and its associated shadow region from the target are distinct relative to the ambient background. However, many real-world undersea environments can produce sonar images in which a significant percentage of the targets exhibit either "weak" highlight or shadow regions in the sonar image. The challenge of achieving robust performance in these environments has traditionally been addressed by modifying the individual feature scoring algorithms to optimize the separation between the corresponding highlight or shadow feature scores of targets and non-targets. This study examines an alternate approach that employs principles of Fisher fusion to determine a set of optimal weighting coefficients that are applied to the individual feature scores before summing to form the overall classification score. The results demonstrate improved performance of the CAD/CAC algorithm on at-sea data sets.
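
    The Fisher-weight idea can be stated compactly. Assuming the "Fisher fusion" here is the standard Fisher linear discriminant applied to feature-score vectors (the paper's exact construction is not reproduced), the optimal weights are

    $$
    w \;\propto\; S_W^{-1}\left(\mu_{\text{target}} - \mu_{\text{non-target}}\right),
    \qquad
    \text{score}(x) = w^{T} x,
    $$

    where $\mu_{\text{target}}$ and $\mu_{\text{non-target}}$ are the class means of the feature scores and $S_W$ is the pooled within-class scatter matrix; the weighted sum then replaces the unweighted sum of feature scores.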

  11. Graphical method for analyzing digital computer efficiency

    NASA Technical Reports Server (NTRS)

    Chan, S. P.; Munoz, R. M.

    1971-01-01

    Analysis method utilizes graph-theoretic approach for evaluating computation cost and makes logical distinction between linear graph of a computation and linear graph of a program. It applies equally well to other processes which depend on quantitative edge nomenclature and precedence relationships between edges.

  12. Duality quantum computer and the efficient quantum simulations

    NASA Astrophysics Data System (ADS)

    Wei, Shijie; Long, Guilu (Tsinghua National Laboratory for Information Science and Technology; Collaborative Innovation Center of Quantum Matter)

    The duality quantum computer is a new kind of quantum computer that is able to perform an arbitrary sum of unitaries, and therefore a general quantum operator. This gives it more computational power than a normal quantum computer. All linear bounded operators can be realized in a duality quantum computer, and unitary operators are just the extreme points of the set of generalized quantum gates. The duality quantum computer provides flexibility and a clear physical picture in designing quantum algorithms, serving as a useful bridge between quantum and classical algorithms. In this report, we first briefly review the theory of the duality quantum computer. We then introduce its application to Hamiltonian simulation and show that a duality quantum computer can simulate quantum systems more efficiently than an ordinary quantum computer, by providing descriptions of the recent efficient quantum simulation algorithms.
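
    The "arbitrary sum of unitaries" can be made concrete with the standard linear-combination-of-unitaries circuit, shown here as generic background rather than as the authors' exact formulation. To apply $A = \sum_k \alpha_k U_k$ (with $\alpha_k > 0$) to a state $|\psi\rangle$, prepare an ancilla, apply the controlled unitaries, and unprepare:

    $$
    |0\rangle_a|\psi\rangle
    \;\xrightarrow{\;V\;}\;
    \frac{1}{\sqrt{\lambda}}\sum_k \sqrt{\alpha_k}\,|k\rangle_a|\psi\rangle
    \;\xrightarrow{\;\sum_k |k\rangle\langle k|_a \otimes U_k\;}\;
    \frac{1}{\sqrt{\lambda}}\sum_k \sqrt{\alpha_k}\,|k\rangle_a\,U_k|\psi\rangle,
    \qquad \lambda = \sum_k \alpha_k .
    $$

    Applying $V^{\dagger}$ and finding the ancilla in $|0\rangle_a$ leaves the system in a state proportional to $A|\psi\rangle/\lambda$, which is precisely a generalized (non-unitary) quantum gate.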

  13. Using AutoCAD for descriptive geometry exercises. in undergraduate structural geology

    NASA Astrophysics Data System (ADS)

    Jacobson, Carl E.

    2001-02-01

    The exercises in descriptive geometry typically utilized in undergraduate structural geology courses are quickly and easily solved using the computer drafting program AutoCAD. The key to efficient use of AutoCAD for descriptive geometry involves taking advantage of User Coordinate Systems, alternative angle conventions, relative coordinates, and other aspects of AutoCAD that may not be familiar to the beginning user. A summary of these features and an illustration of their application to the creation of structure contours for a planar dipping bed provides the background necessary to solve other problems in descriptive geometry with the computer. The ease of the computer constructions reduces frustration for the student and provides more time to think about the principles of the problems.
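
    The underlying geometry is a standard descriptive-geometry relation, not specific to this paper: structure contours of a planar bed dipping at angle $\delta$ are straight map lines parallel to strike, and for a contour interval $\Delta z$ their horizontal spacing is

    $$
    s = \frac{\Delta z}{\tan\delta} .
    $$

    For example, a bed dipping $30^\circ$ contoured every 10 m gives $s = 10/\tan 30^\circ \approx 17.3$ m; in AutoCAD this fixes the offset distance between successive contour lines.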

  14. Productivity increase through implementation of CAD/CAE workstation

    NASA Technical Reports Server (NTRS)

    Bromley, L. K.

    1985-01-01

    The Tracking and Communication Division's computer-aided design/computer-aided engineering (CAD/CAE) system is now operational. The system is used to automate certain tasks that were previously performed manually. These tasks include detailed test configuration diagrams of systems under certification test in the ESTL, floorplan layouts of future planned laboratory reconfigurations, and other graphical documentation of division activities. The significant time savings achieved with this CAD/CAE system are examined: (1) input of drawings and diagrams; (2) editing of initial drawings; (3) accessibility of the data; and (4) added versatility. It is shown that the Applicon CAD/CAE system, with its ease of input and editing, the accessibility of data, and its added versatility, has made many of the necessary but often time-consuming tasks associated with engineering design and testing more efficient.

  16. Efficient Computation Of Behavior Of Aircraft Tires

    NASA Technical Reports Server (NTRS)

    Tanner, John A.; Noor, Ahmed K.; Andersen, Carl M.

    1989-01-01

    NASA technical paper discusses challenging application of computational structural mechanics to numerical simulation of responses of aircraft tires during taxiing, takeoff, and landing. Presents details of three main elements of computational strategy: use of special three-field, mixed-finite-element models; use of operator splitting; and application of technique reducing substantially number of degrees of freedom. Proposed computational strategy applied to two quasi-symmetric problems: linear analysis of anisotropic tires through use of two-dimensional-shell finite elements and nonlinear analysis of orthotropic tires subjected to unsymmetric loading. Three basic types of symmetry, and combinations, exhibited by response of tire identified.

  18. Determining efficacy of mammographic CAD systems.

    PubMed

    Hoffmeister, Jeffrey W; Rogers, Steven K; DeSimio, Martin P; Brem, Rachel F

    2002-01-01

    Computer-aided detection (CAD) system sensitivity estimates without a radiologist in the loop are straightforward to measure but are extremely data dependent. The only relevant performance metric is improvement in CAD-assisted radiologist sensitivity. Unfortunately, this is difficult to accurately assess. Without a large study measuring the improvement in CAD-assisted radiologist sensitivity over the same cases, it is not possible to make valid comparisons between systems. As multiple CAD systems become commercially available, comparison issues need to be explored and resolved. Data from clinical trials of 2 systems are examined. Statistical hypothesis tests are applied to these data. Additionally, sensitivities of 2 systems are compared from an experiment testing over the same 120 cases. Even with large databases, there is not sufficient evidence to conclude performance differences exist between the 2 systems. It is prohibitively expensive to show conclusive sensitivity differences between commercially available mammographic CAD systems.

  19. Project CAD as of July 1978: CAD support project, situation in July 1978

    NASA Technical Reports Server (NTRS)

    Boesch, L.; Lang-Lendorff, G.; Rothenberg, R.; Stelzer, V.

    1979-01-01

    The structure of computer-aided design (CAD) and the requirements for past and future program development are described. The current status and future aims of CAD programs are presented. Developed programs in (1) civil engineering; (2) mechanical engineering; (3) chemical engineering/shipbuilding; (4) electrical engineering; and (5) general applications are discussed.

  20. Synthesis of Efficient Structures for Concurrent Computation.

    DTIC Science & Technology

    1983-10-01

    Contract F49620-82-C-0007. 'Synthesis of Efficient Structures for Concurrent Computation,' by Richard M. King and Ernst W. Mayr; Cordell Green, Principal Investigator; Kestrel Institute, 1801 Page Mill Road, Palo Alto, CA. Cites R. M. King, E. W. Mayr, and A. Siegel, 'Techniques for Solving Graph Problems in Parallel Environments,' Proceedings of the 24th Symposium on Foundations of Computer Science.

  1. Efficient Computation Of Confidence Intervals Of Parameters

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.

    1992-01-01

    Study focuses on obtaining efficient algorithm for estimation of confidence intervals of ML estimates. Four algorithms selected to solve associated constrained optimization problem. Hybrid algorithms, following search and gradient approaches, prove best.

  3. Experimental Implementation of Efficient Linear Optics Quantum Computation

    DTIC Science & Technology

    2007-11-02

    Final report by G. J. Milburn, T. C. Ralph, and A. G. White, University of Queensland, Australia. Statement of problem: one of the earliest proposals [1] for implementing quantum computation was based on encoding ... containing few photons. In 2001, Knill, Laflamme and Milburn (KLM) found a way to circumvent this restriction and implement efficient quantum computation.

  4. Efficient Parallel Engineering Computing on Linux Workstations

    NASA Technical Reports Server (NTRS)

    Lou, John Z.

    2010-01-01

    A C software module has been developed that creates lightweight processes (LWPs) dynamically to achieve parallel computing performance in a variety of engineering simulation and analysis applications to support NASA and DoD project tasks. The required interface between the module and the application it supports is simple, minimal and almost completely transparent to the user applications, and it can achieve nearly ideal computing speed-up on multi-CPU engineering workstations of all operating system platforms. The module can be integrated into an existing application (C, C++, Fortran and others) either as part of a compiled module or as a dynamically linked library (DLL).

  5. Experimental Realization of High-Efficiency Counterfactual Computation

    NASA Astrophysics Data System (ADS)

    Kong, Fei; Ju, Chenyong; Huang, Pu; Wang, Pengfei; Kong, Xi; Shi, Fazhan; Jiang, Liang; Du, Jiangfeng

    2015-08-01

    Counterfactual computation (CFC) exemplifies the fascinating quantum process by which the result of a computation may be learned without actually running the computer. In previous experimental studies, the counterfactual efficiency was limited to below 50%. Here we report an experimental realization of the generalized CFC protocol, in which the counterfactual efficiency can break the 50% limit and even approach unity in principle. The experiment is performed with the spins of a negatively charged nitrogen-vacancy color center in diamond. Taking advantage of the quantum Zeno effect, the computer can remain in the not-running subspace due to the frequent projection by the environment, while the computation result can be revealed by the final detection. A counterfactual efficiency of up to 85% has been demonstrated in our experiment, which opens the possibility of many exciting applications of CFC, such as high-efficiency quantum integration and imaging.

  6. Efficient Computation Of Manipulator Inertia Matrix

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1991-01-01

    Improved method for computation of manipulator inertia matrix developed, based on concept of spatial inertia of composite rigid body. Required for implementation of advanced dynamic-control schemes as well as dynamic simulation of manipulator motion. Motivated by increasing demand for fast algorithms to provide real-time control and simulation capability and, particularly, need for faster-than-real-time simulation capability, required in many anticipated space teleoperation applications.

  7. Efficient Kinematic Computations For 7-DOF Manipulators

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun; Long, Mark K.; Kreutz-Delgado, Kenneth

    1994-01-01

    Efficient algorithms for forward kinematic mappings of seven-degree-of-freedom (7-DOF) robotic manipulator having revolute joints developed on basis of representation of redundant DOF in terms of parameter called "arm angle." Continuing effort to exploit redundancy in manipulator according to concept of basic and additional tasks. Concept also discussed in "Configuration-Control Scheme Copes With Singularities" (NPO-18556) and "Increasing the Dexterity of Redundant Robots" (NPO-17801).

  9. Efficient Associative Computation with Discrete Synapses.

    PubMed

    Knoblauch, Andreas

    2016-01-01

    Neural associative networks are a promising computational paradigm for both modeling neural circuits of the brain and implementing associative memory and Hebbian cell assemblies in parallel VLSI or nanoscale hardware. Previous work has extensively investigated synaptic learning in linear models of the Hopfield type and simple nonlinear models of the Steinbuch/Willshaw type. Optimized Hopfield networks of size n can store a large number of about n(2)/k memories of size k (or associations between them) but require real-valued synapses, which are expensive to implement and can store at most C = 0.72 bits per synapse. Willshaw networks can store a much smaller number of about n(2)/k(2) memories but get along with much cheaper binary synapses. Here I present a learning model employing synapses with discrete synaptic weights. For optimal discretization parameters, this model can store, up to a factor ζ close to one, the same number of memories as for optimized Hopfield-type learning--for example, ζ = 0.64 for binary synapses, ζ = 0.88 for 2 bit (four-state) synapses, ζ = 0.96 for 3 bit (8-state) synapses, and ζ > 0.99 for 4 bit (16-state) synapses. The model also provides the theoretical framework to determine optimal discretization parameters for computer implementations or brainlike parallel hardware including structural plasticity. In particular, as recently shown for the Willshaw network, it is possible to store C(I) = 1 bit per computer bit and up to C(S) = log n bits per nonsilent synapse, whereas the absolute number of stored memories can be much larger than for the Willshaw model.
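
    For orientation, the Steinbuch/Willshaw baseline that the discrete-synapse model generalizes can be written in a few lines. This is a minimal sketch of binary-synapse storage and threshold recall, not Knoblauch's multi-bit learning rule:

    ```python
    import numpy as np

    def willshaw_store(X, Y):
        """Binary Willshaw learning: synapse W[i, j] switches on if input
        unit j and output unit i are ever active in the same pattern pair."""
        W = np.zeros((Y.shape[1], X.shape[1]), dtype=bool)
        for x, y in zip(X.astype(bool), Y.astype(bool)):
            W |= np.outer(y, x)
        return W

    def willshaw_recall(W, x, k):
        """Recall: threshold each output's dendritic sum at k, the number
        of active units in the input pattern."""
        return (W.astype(int) @ x >= k).astype(int)

    # Store one association and recall it from the full cue.
    x = np.array([1, 1, 1, 0, 0, 0])
    y = np.array([0, 1, 0, 1])
    W = willshaw_store(x[None, :], y[None, :])
    print(willshaw_recall(W, x, k=3))  # [0 1 0 1]
    ```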

  10. Aerodynamic Design of Complex Configurations Using Cartesian Methods and CAD Geometry

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.; Pulliam, Thomas H.

    2003-01-01

    The objective of this paper is to present the development of an optimization capability for the Cartesian inviscid-flow analysis package of Aftosmis et al. We evaluate and characterize the following modules within the new optimization framework: (1) a component-based geometry parameterization approach using a CAD solid representation and the CAPRI interface; (2) the use of Cartesian methods in the development of automated optimization tools; and (3) optimization techniques using a genetic algorithm and a gradient-based algorithm. The discussion and investigations focus on several real-world problems of the optimization process. We examine the architectural issues associated with the deployment of a CAD-based design approach in a heterogeneous parallel computing environment that contains both CAD workstations and dedicated compute nodes. In addition, we study the influence of noise on the performance of optimization techniques, and the overall efficiency of the optimization process for aerodynamic design of complex three-dimensional configurations.

  11. A panorama of dental CAD/CAM restorative systems.

    PubMed

    Liu, Perng-Ru

    2005-07-01

    In the last 2 decades, exciting new developments in dental materials and computer technology have led to the success of contemporary dental computer-aided design/computer-aided manufacturing (CAD/CAM) technology. Several highly sophisticated chairside and laboratory CAD/CAM systems have been introduced or are under development. This article provides an overview of the development of various CAD/CAM systems. Operational components, methodologies, and restorative materials used with common CAD/CAM systems are discussed. Research data and clinical studies are presented to substantiate the clinical performance of these systems.

  12. Panorama of dental CAD/CAM restorative systems.

    PubMed

    Liu, Perng-Ru; Essig, Milton E

    2008-10-01

    In the past two decades, exciting new developments in dental materials and computer technology have led to the success of contemporary dental computer-aided design/computer-aided manufacture (CAD/CAM) technology. Several highly sophisticated in-office and laboratory CAD/CAM systems have been introduced or are under development. This article provides an overview of the development of various CAD/CAM systems. Operational components, methodologies, and restorative materials used with common CAD/CAM systems are discussed. Research data and clinical studies are presented to substantiate the clinical performance of these systems.

  13. AutoCAD-To-NASTRAN Translator Program

    NASA Technical Reports Server (NTRS)

    Jones, A.

    1989-01-01

    Program facilitates creation of finite-element mathematical models from geometric entities. AutoCAD to NASTRAN translator (ACTON) computer program developed to facilitate quick generation of small finite-element mathematical models for use with NASTRAN finite-element modeling program. Reads geometric data of drawing from Data Exchange File (DXF) used in AutoCAD and other PC-based drafting programs. Written in Microsoft Quick-Basic (Version 2.0).

  14. Multi-site evaluation of a computer aided detection (CAD) algorithm for small acute intra-cranial hemorrhage and development of a stand-alone CAD system ready for deployment in a clinical environment

    NASA Astrophysics Data System (ADS)

    Deshpande, Ruchi R.; Fernandez, James; Lee, Joon K.; Chan, Tao; Liu, Brent J.; Huang, H. K.

    2010-03-01

    Timely detection of Acute Intra-cranial Hemorrhage (AIH) in an emergency environment is essential for the triage of patients suffering from Traumatic Brain Injury. Moreover, the small size of lesions and lack of experience on the reader's part could lead to difficulties in the detection of AIH. A CT based CAD algorithm for the detection of AIH has been developed in order to improve upon the current standard of identification and treatment of AIH. A retrospective analysis of the algorithm has already been carried out with 135 AIH CT studies with 135 matched normal head CT studies from the Los Angeles County General Hospital/ University of Southern California Hospital System (LAC/USC). In the next step, AIH studies have been collected from Walter Reed Army Medical Center, and are currently being processed using the AIH CAD system as part of implementing a multi-site assessment and evaluation of the performance of the algorithm. The sensitivity and specificity numbers from the Walter Reed study will be compared with the numbers from the LAC/USC study to determine if there are differences in the presentation and detection due to the difference in the nature of trauma between the two sites. Simultaneously, a stand-alone system with a user friendly GUI has been developed to facilitate implementation in a clinical setting.

  15. Efficient tree codes on SIMD computer architectures

    NASA Astrophysics Data System (ADS)

    Olson, Kevin M.

    1996-11-01

    This paper describes changes made to a previous implementation of an N -body tree code developed for a fine-grained, SIMD computer architecture. These changes include (1) switching from a balanced binary tree to a balanced oct tree, (2) addition of quadrupole corrections, and (3) having the particles search the tree in groups rather than individually. An algorithm for limiting errors is also discussed. In aggregate, these changes have led to a performance increase of over a factor of 10 compared to the previous code. For problems several times larger than the processor array, the code now achieves performance levels of ~ 1 Gflop on the Maspar MP-2 or roughly 20% of the quoted peak performance of this machine. This percentage is competitive with other parallel implementations of tree codes on MIMD architectures. This is significant, considering the low relative cost of SIMD architectures.

  16. Efficient computation of parameter confidence intervals

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.

    1987-01-01

    An important step in system identification of aircraft is the estimation of stability and control derivatives from flight data along with an assessment of parameter accuracy. When the maximum likelihood estimation technique is used, parameter accuracy is commonly assessed by the Cramer-Rao lower bound. It is known, however, that in some cases the lower bound can be substantially different from the parameter variance. Under these circumstances the Cramer-Rao bounds may be misleading as an accuracy measure. This paper discusses the confidence interval estimation problem based on likelihood ratios, which offers a more general estimate of the error bounds. Four approaches are considered for computing confidence intervals of maximum likelihood parameter estimates. Each approach is applied to real flight data and compared.
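
    Concretely, the likelihood-ratio confidence set referred to here is the standard one. For a scalar parameter $\theta$ with log-likelihood $\ell$ and maximum likelihood estimate $\hat\theta$, the $1-\alpha$ interval is

    $$
    \left\{\, \theta \;:\; 2\left[\ell(\hat\theta) - \ell(\theta)\right] \le \chi^2_{1,\,1-\alpha} \,\right\},
    $$

    which coincides with the Cramer-Rao-based interval $\hat\theta \pm z_{1-\alpha/2}\,\hat\sigma$ only when the log-likelihood is locally quadratic; locating the interval endpoints is the constrained optimization problem that the four compared approaches solve.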

  18. An efficient numerical method for orbit computations

    NASA Astrophysics Data System (ADS)

    Palacios, M.; Abad, A.; Elipe, A.

    1992-08-01

    A nonstandard formulation of perturbed Keplerian motion is set forth based on the analysis by Deprit (1975) and incorporating quaternions to integrate the equations of motion. The properties of quaternions are discussed and applied to the portion of the equations of motion describing the rotations between the space frame and the departure frame. Angular momentum is assumed to be constant, and a redundant set of variables is introduced to test the equations of motion for different step sizes. The method is analyzed for the cases of artificial satellites in Keplerian circular orbits, Keplerian elliptical orbits, and zonal harmonics. The present formulation is shown to adequately represent the dynamical behavior while avoiding small inclinations. The rotations described by quaternions require less arithmetic operations and therefore save computation time, and the accuracy of the solutions are improved by at least two significant digits.
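
    The quaternion machinery involved is compact. Below is a minimal sketch of the Hamilton product and vector rotation; the paper's redundant variable set and integration scheme are not reproduced:

    ```python
    import numpy as np

    def qmul(p, q):
        """Hamilton product of quaternions stored as (w, x, y, z)."""
        pw, px, py, pz = p
        qw, qx, qy, qz = q
        return np.array([
            pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw,
        ])

    def rotate(q, v):
        """Rotate a 3-vector v by the unit quaternion q via v' = q v q*."""
        qv = np.array([0.0, *v])                    # embed v as a pure quaternion
        qc = q * np.array([1.0, -1.0, -1.0, -1.0])  # conjugate of q
        return qmul(qmul(q, qv), qc)[1:]

    # 90-degree rotation about z sends (1, 0, 0) to (0, 1, 0).
    q = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
    print(rotate(q, np.array([1.0, 0.0, 0.0])))
    ```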

  19. A computable expression of closure to efficient causation.

    PubMed

    Mossio, Matteo; Longo, Giuseppe; Stewart, John

    2009-04-07

    In this paper, we propose a mathematical expression of closure to efficient causation in terms of lambda-calculus; we argue that this opens up the perspective of developing principled computer simulations of systems closed to efficient causation in an appropriate programming language. An important implication of our formulation is that, by exhibiting an expression in lambda-calculus, which is a paradigmatic formalism for computability and programming, we show that there are no conceptual or principled problems in realizing a computer simulation or model of closure to efficient causation. We conclude with a brief discussion of the question whether closure to efficient causation captures all relevant properties of living systems. We suggest that it might not be the case, and that more complex definitions could indeed create some crucial obstacles to computability.

  20. Efficient algorithm to compute the Berry conductivity

    NASA Astrophysics Data System (ADS)

    Dauphin, A.; Müller, M.; Martin-Delgado, M. A.

    2014-07-01

    We propose and construct a numerical algorithm to calculate the Berry conductivity in topological band insulators. The method is applicable to cold atom systems as well as solid state setups, both for the insulating case where the Fermi energy lies in the gap between two bulk bands as well as in the metallic regime. This method interpolates smoothly between both regimes. The algorithm is gauge-invariant by construction, efficient, and yields the Berry conductivity with known and controllable statistical error bars. We apply the algorithm to several paradigmatic models in the field of topological insulators, including Haldane's model on the honeycomb lattice, the multi-band Hofstadter model, and the BHZ model, which describes the 2D spin Hall effect observed in CdTe/HgTe/CdTe quantum well heterostructures.
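
    A widely used relative of such methods, the gauge-invariant lattice algorithm of Fukui, Hatsugai, and Suzuki for the Chern number, conveys the flavor of the computation. This sketch is an illustrative stand-in, not the Berry-conductivity algorithm of the paper:

    ```python
    import numpy as np

    def chern_number(hk, n=60, band=0):
        """Lattice Chern number of one band from plaquette link products.
        hk(kx, ky) must return the Bloch Hamiltonian as a Hermitian matrix."""
        ks = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
        u = np.empty((n, n), dtype=object)  # band eigenvector at each k-point
        for i, kx in enumerate(ks):
            for j, ky in enumerate(ks):
                _, vecs = np.linalg.eigh(hk(kx, ky))
                u[i, j] = vecs[:, band]
        total = 0.0
        for i in range(n):
            for j in range(n):
                u1, u2 = u[i, j], u[(i + 1) % n, j]
                u3, u4 = u[(i + 1) % n, (j + 1) % n], u[i, (j + 1) % n]
                # The product of link variables around a plaquette is gauge invariant.
                prod = (np.vdot(u1, u2) * np.vdot(u2, u3)
                        * np.vdot(u3, u4) * np.vdot(u4, u1))
                total += np.angle(prod)
        return total / (2.0 * np.pi)
    ```

    For a two-band model such as Haldane's, the result converges to the integer Chern number even on coarse k-grids, which is the practical appeal of gauge-invariant formulations.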

  1. Computationally efficient Bayesian inference for inverse problems.

    SciTech Connect

    Marzouk, Youssef M.; Najm, Habib N.; Rahn, Larry A.

    2007-10-01

    Bayesian statistics provides a foundation for inference from noisy and incomplete data, a natural mechanism for regularization in the form of prior information, and a quantitative assessment of uncertainty in the inferred results. Inverse problems - representing indirect estimation of model parameters, inputs, or structural components - can be fruitfully cast in this framework. Complex and computationally intensive forward models arising in physical applications, however, can render a Bayesian approach prohibitive. This difficulty is compounded by high-dimensional model spaces, as when the unknown is a spatiotemporal field. We present new algorithmic developments for Bayesian inference in this context, showing strong connections with the forward propagation of uncertainty. In particular, we introduce a stochastic spectral formulation that dramatically accelerates the Bayesian solution of inverse problems via rapid evaluation of a surrogate posterior. We also explore dimensionality reduction for the inference of spatiotemporal fields, using truncated spectral representations of Gaussian process priors. These new approaches are demonstrated on scalar transport problems arising in contaminant source inversion and in the inference of inhomogeneous material or transport properties. We also present a Bayesian framework for parameter estimation in stochastic models, where intrinsic stochasticity may be intermingled with observational noise. Evaluation of a likelihood function may not be analytically tractable in these cases, and thus several alternative Markov chain Monte Carlo (MCMC) schemes, operating on the product space of the observations and the parameters, are introduced.
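
    The computational pattern behind the acceleration, MCMC run against a cheap surrogate posterior instead of the expensive forward model, can be sketched generically. Random-walk Metropolis is shown below; the stochastic spectral surrogate construction itself is not reproduced:

    ```python
    import numpy as np

    def metropolis(log_post, x0, n_steps, step, rng):
        """Random-walk Metropolis sampler. log_post is the (possibly
        surrogate) unnormalized log-posterior; substituting a cheap
        surrogate here is what buys the speed-up."""
        x, lp = np.asarray(x0, dtype=float), log_post(x0)
        chain = []
        for _ in range(n_steps):
            prop = x + step * rng.standard_normal(x.shape)
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:  # accept
                x, lp = prop, lp_prop
            chain.append(x.copy())
        return np.array(chain)

    # Toy usage: a Gaussian log-posterior standing in for a surrogate.
    rng = np.random.default_rng(0)
    samples = metropolis(lambda t: -0.5 * np.sum(t**2), np.zeros(2), 5000, 0.5, rng)
    ```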

  2. Earthquake detection through computationally efficient similarity search.

    PubMed

    Yoon, Clara E; O'Reilly, Ossian; Bergen, Karianne J; Beroza, Gregory C

    2015-12-01

    Seismology is experiencing rapid growth in the quantity of data, which has outpaced the development of processing algorithms. Earthquake detection-identification of seismic events in continuous data-is a fundamental operation for observational seismology. We developed an efficient method to detect earthquakes using waveform similarity that overcomes the disadvantages of existing detection methods. Our method, called Fingerprint And Similarity Thresholding (FAST), can analyze a week of continuous seismic waveform data in less than 2 hours, or 140 times faster than autocorrelation. FAST adapts a data mining algorithm, originally designed to identify similar audio clips within large databases; it first creates compact "fingerprints" of waveforms by extracting key discriminative features, then groups similar fingerprints together within a database to facilitate fast, scalable search for similar fingerprint pairs, and finally generates a list of earthquake detections. FAST detected most (21 of 24) cataloged earthquakes and 68 uncataloged earthquakes in 1 week of continuous data from a station located near the Calaveras Fault in central California, achieving detection performance comparable to that of autocorrelation, with some additional false detections. FAST is expected to realize its full potential when applied to extremely long duration data sets over a distributed network of seismic stations. The widespread application of FAST has the potential to aid in the discovery of unexpected seismic signals, improve seismic monitoring, and promote a greater understanding of a variety of earthquake processes.
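
    A toy version of the fingerprint-and-group pattern shows the shape of the method. The sign-of-random-projection fingerprints and banded hash buckets below are illustrative stand-ins for FAST's actual wavelet-based fingerprints and locality-sensitive hashing:

    ```python
    import numpy as np
    from collections import defaultdict

    def fingerprint(window, n_bits=64, seed=0):
        """Toy binary fingerprint: signs of random projections of the
        window's magnitude spectrum."""
        rng = np.random.default_rng(seed)  # fixed seed: same projections per call
        spec = np.abs(np.fft.rfft(window))
        proj = rng.standard_normal((n_bits, spec.size)) @ spec
        return tuple(int(b) for b in proj > 0)

    def candidate_pairs(windows, n_tables=4, band=16):
        """LSH-style grouping: windows whose fingerprints agree on any
        contiguous band of bits become candidate similar pairs."""
        fps = [fingerprint(w) for w in windows]
        pairs = set()
        for t in range(n_tables):
            buckets = defaultdict(list)
            for i, fp in enumerate(fps):
                buckets[fp[t * band:(t + 1) * band]].append(i)
            for idxs in buckets.values():
                pairs.update((i, j) for i in idxs for j in idxs if i < j)
        return pairs
    ```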

  4. Efficiently modeling neural networks on massively parallel computers

    NASA Technical Reports Server (NTRS)

    Farber, Robert M.

    1993-01-01

    Neural networks are a very useful tool for analyzing and modeling complex real world systems. Applying neural network simulations to real world problems generally involves large amounts of data and massive amounts of computation. To efficiently handle the computational requirements of large problems, we have implemented at Los Alamos a highly efficient neural network compiler for serial computers, vector computers, vector parallel computers, and fine grain SIMD computers such as the CM-2 connection machine. This paper describes the mapping used by the compiler to implement feed-forward backpropagation neural networks for a SIMD (Single Instruction Multiple Data) architecture parallel computer. Thinking Machines Corporation has benchmarked our code at 1.3 billion interconnects per second (approximately 3 gigaflops) on a 64,000 processor CM-2 connection machine (Singer 1990). This mapping is applicable to other SIMD computers and can be implemented on MIMD computers such as the CM-5 connection machine. Our mapping has virtually no communications overhead with the exception of the communications required for a global summation across the processors (which has a sub-linear runtime growth on the order of O(log(number of processors))). We can efficiently model very large neural networks which have many neurons and interconnects, and our mapping can extend to arbitrarily large networks (within memory limitations) by merging the memory space of separate processors with fast adjacent-processor communications. This paper considers the simulation of only feed-forward neural networks, although the method is extendable to recurrent networks.
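
    The essence of the mapping, one training example per processor with all weight arithmetic expressed as array-wide operations, can be suggested in vectorized form. This is a rough analogue under that interpretation, not the CM-2 implementation:

    ```python
    import numpy as np

    # Each row of X is one training example, conceptually resident on its own
    # SIMD processor; the matrix expressions act on all rows in lockstep.
    rng = np.random.default_rng(1)
    X = rng.standard_normal((4096, 64))   # 4096 examples, one per "processor"
    T = rng.standard_normal((4096, 8))    # training targets
    W = 0.1 * rng.standard_normal((64, 8))

    H = np.tanh(X @ W)                    # forward pass for every example at once
    delta = (H - T) * (1.0 - H**2)        # local error terms, still per-processor
    grad = X.T @ delta                    # the lone cross-processor step: a global sum
    W -= 1e-3 * grad                      # gradient-descent weight update
    ```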

  5. CAD/CAM: Practical and Persuasive in Canadian Schools

    ERIC Educational Resources Information Center

    Willms, Ed

    2007-01-01

    Chances are that many high school students would not know how to use drafting instruments, but some might want to gain competence in computer-assisted design (CAD) and possibly computer-assisted manufacturing (CAM). These students are often attracted to tech courses by the availability of CAD/CAM instructions, and many go on to impress employers…

  7. An efficient method for computation of the manipulator inertia matrix

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1989-01-01

    An efficient method of computation of the manipulator inertia matrix is presented. Using spatial notations, the method leads to the definition of the composite rigid-body spatial inertia, which is a spatial representation of the notion of augmented body. The previously proposed methods, the physical interpretations leading to their derivation, and their redundancies are analyzed. The proposed method achieves a greater efficiency by eliminating the redundancy in the intrinsic equations as well as by a better choice of coordinate frame for their projection. In this case, removing the redundancy leads to greater efficiency of the computation in both serial and parallel senses.
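
    Schematically, in spatial-algebra notation (a textbook statement of the composite-rigid-body idea rather than this paper's exact derivation), the composite inertia of link $i$ accumulates its outboard subtree in one inward sweep, and inertia-matrix entries follow by projection onto the joint axes:

    $$
    I_i^{c} = I_i + \sum_{j \in \mathrm{children}(i)} {}^{i}X_j^{*}\, I_j^{c}\, {}^{j}X_i,
    \qquad
    M_{ij} = M_{ji} = S_i^{T}\, {}^{i}X_j^{*}\, I_j^{c}\, S_j \quad (j\ \text{outboard of}\ i),
    $$

    where $S_i$ is the motion subspace of joint $i$ and the $X$ terms are spatial transforms; each entry is formed once from quantities accumulated in a single sweep, which is where the efficiency comes from.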

  8. Conceptual Fuselage Design with Direct CAD Modeling

    NASA Astrophysics Data System (ADS)

    Anderson, Benjamin K.

    The use of automated technology is becoming increasingly prevalent. Throughout the aerospace industry, we see the use of automated systems in manufacturing, testing, and, progressively, in design. This thesis focuses on the idea of automated structural design that can be directly coupled with parametric Computer-Aided Drafting (CAD) and used to support aircraft conceptual design. This idea has been around for many years; however, with the advancement of CAD technology, it is becoming more realistic. Having the ability to input design parameters, analyze the structure, and produce a basic CAD model not only saves time in the design process but provides an excellent platform to communicate ideas. The user has the ability to change parameters and quickly determine the effect on the structure. Coupling this idea with automated parametric CAD provides visual verification and a platform to export into Finite Element Analysis (FEA) for further verification.

  9. An application protocol for CAD to CAD transfer of electronic information

    NASA Technical Reports Server (NTRS)

    Azu, Charles C., Jr.

    1993-01-01

    The exchange of Computer Aided Design (CAD) information between dissimilar CAD systems is a problem. This is especially true for transferring electronics CAD information such as multi-chip module (MCM), hybrid microcircuit assembly (HMA), and printed circuit board (PCB) designs. Currently, there exist several neutral data formats for transferring electronics CAD information, including the IGES, EDIF, and DXF formats. All these formats have limitations for use in exchanging electronic data. In an attempt to overcome these limitations, the Navy's MicroCIM program implemented a project to transfer hybrid microcircuit design information between dissimilar CAD systems. The IGES (Initial Graphics Exchange Specification) format is used since it is well established within the CAD industry. The goal of the project is to achieve a complete transfer of microelectronic CAD information, using IGES, without any data loss. An Application Protocol (AP) is being developed to specify how hybrid microcircuit CAD information will be represented by IGES entity constructs. The AP defines which IGES data items are appropriate for describing HMA geometry, connectivity, and processing, as well as HMA material characteristics.

  10. A highly efficient cocaine detoxifying enzyme obtained by computational design

    PubMed Central

    Zheng, Fang; Xue, Liu; Hou, Shurong; Liu, Junjun; Zhan, Max; Yang, Wenchao; Zhan, Chang-Guo

    2014-01-01

    Compared to naturally occurring enzymes, computationally designed enzymes are usually much less efficient, with their catalytic activities being more than six orders of magnitude below the diffusion limit. Here we use a two-step computational design approach, combined with experimental work, to design a highly efficient cocaine-hydrolysing enzyme. We engineer E30-6 from human butyrylcholinesterase (BChE), which is specific for cocaine hydrolysis, and obtain a much higher catalytic efficiency for cocaine conversion than for conversion of the natural BChE substrate, acetylcholine (ACh). The catalytic efficiency of E30-6 for cocaine hydrolysis is comparable to that of the most efficient known naturally occurring hydrolytic enzyme, acetylcholinesterase, the catalytic activity of which approaches the diffusion limit. We further show that E30-6 can protect mice from a subsequently administered lethal dose of cocaine, suggesting the enzyme may have therapeutic potential in the setting of cocaine detoxification or cocaine abuse. PMID:24643289

  11. Some Workplace Effects of CAD and CAM.

    ERIC Educational Resources Information Center

    Ebel, Karl-H.; Ulrich, Erhard

    1987-01-01

    Examines the impact of computer-aided design (CAD) and computer-aided manufacturing (CAM) on employment, work organization, working conditions, job content, training, and industrial relations in several countries. Finds little evidence of negative employment effects since productivity gains are offset by various compensatory factors. (Author/CH)

  13. Fit of CAD/CAM implant frameworks: a comprehensive review.

    PubMed

    Abduo, Jaafar

    2014-12-01

    Computer-aided design and computer-aided manufacturing (CAD/CAM) is a strongly emerging prosthesis fabrication method for implant dentistry. Currently, CAD/CAM allows the construction of implant frameworks from different materials. This review evaluates the literature pertaining to the precision fit of fixed implant frameworks fabricated by CAD/CAM. Following a comprehensive electronic search through PubMed (MEDLINE), 14 relevant articles were identified. The results indicate that the precision fit of CAD/CAM frameworks exceeded the fit of the 1-piece cast frameworks and laser-welded frameworks. A similar fit was observed for CAD/CAM frameworks and bonding of the framework body to prefabricated cylinders. The influence of CAD/CAM materials on the fit of a framework is minimal.

  14. CAD/CAM technology for implant abutments, crowns, and superstructures.

    PubMed

    Kapos, Theodoros; Evans, Christopher

    2014-01-01

    The aim of this systematic review was to compare implant prostheses fabricated by computer-assisted design and computer-assisted manufacturing (CAD/CAM) with conventionally fabricated implant prostheses when assessing esthetics, complications (biologic and mechanical), patient satisfaction, and economic factors. Electronic searches for clinical studies focusing on long-term follow-up were performed using the PubMed and Ovid search engines. Concentrating on the restorative aspect of the CAD/CAM technology applicable to implant dentistry, pertinent literature was divided into articles related to implant abutments, crowns, and frameworks. A total of 18 articles satisfied the inclusion criteria. Two articles reported on CAD/CAM crowns, six on abutments, and 10 on implant-supported CAD/CAM frameworks. The mean survival rate for CAD/CAM crowns was 98.85% and for CAD/CAM abutments 100%. The mean survival rate for CAD/CAM frameworks was 95.98%. Based on the current literature, CAD/CAM fabricated crowns, abutments, and frameworks demonstrate survival rates comparable to conventionally fabricated prostheses. Implant survival appears unaffected by fabrication technique. Since this technology encompasses several manufacturing variations, a new definition might be necessary to accurately define the processes under which the CAD/CAM restorations are fabricated. "Complete CAD/CAM product" where no or minimal manual intervention is employed could be a possible term.

  15. Positive Wigner Functions Render Classical Simulation of Quantum Computation Efficient

    NASA Astrophysics Data System (ADS)

    Mari, A.; Eisert, J.

    2012-12-01

    We show that quantum circuits where the initial state and all the following quantum operations can be represented by positive Wigner functions can be classically efficiently simulated. This is true both for continuous-variable as well as discrete variable systems in odd prime dimensions, two cases which will be treated on entirely the same footing. Noting the fact that Clifford and Gaussian operations preserve the positivity of the Wigner function, our result generalizes the Gottesman-Knill theorem. Our algorithm provides a way of sampling from the output distribution of a computation or a simulation, including the efficient sampling from an approximate output distribution in the case of sampling imperfections for initial states, gates, or measurements. In this sense, this work highlights the role of the positive Wigner function as separating classically efficiently simulable systems from those that are potentially universal for quantum computing and simulation, and it emphasizes the role of negativity of the Wigner function as a computational resource.

  16. Positive Wigner functions render classical simulation of quantum computation efficient.

    PubMed

    Mari, A; Eisert, J

    2012-12-07

    We show that quantum circuits where the initial state and all the following quantum operations can be represented by positive Wigner functions can be classically efficiently simulated. This is true both for continuous-variable as well as discrete variable systems in odd prime dimensions, two cases which will be treated on entirely the same footing. Noting the fact that Clifford and Gaussian operations preserve the positivity of the Wigner function, our result generalizes the Gottesman-Knill theorem. Our algorithm provides a way of sampling from the output distribution of a computation or a simulation, including the efficient sampling from an approximate output distribution in the case of sampling imperfections for initial states, gates, or measurements. In this sense, this work highlights the role of the positive Wigner function as separating classically efficiently simulable systems from those that are potentially universal for quantum computing and simulation, and it emphasizes the role of negativity of the Wigner function as a computational resource.
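
    The essence of the simulation algorithm is easiest to see in the continuous-variable case. The sketch below (my own illustration, with hbar = 1 and all parameters arbitrary) treats the vacuum state's Gaussian Wigner function as a genuine probability density, pushes sampled phase-space points through a symplectic map representing a Gaussian operation, and reads homodyne statistics off the marginal of the samples.

```python
# A minimal sketch of the phase-space sampling idea for the continuous-variable
# case, assuming hbar = 1: the vacuum-state Wigner function is a Gaussian
# probability density, Gaussian unitaries act as symplectic maps on phase-space
# points, and homodyne statistics are recovered by marginalizing the samples.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 100_000

# Vacuum state of one mode: W(x, p) is a Gaussian with covariance I/2.
points = rng.multivariate_normal(mean=[0.0, 0.0], cov=0.5 * np.eye(2),
                                 size=n_samples)

# A Gaussian operation, e.g. a phase-space rotation (phase shifter), is a
# symplectic matrix acting on the (x, p) sample points.
theta = np.pi / 3
S = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])
points = points @ S.T

# Homodyne measurement of x: its distribution is the x-marginal of the samples.
x_samples = points[:, 0]
print("mean =", x_samples.mean(), "var =", x_samples.var())  # var stays ~1/2
```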

  17. Applying Performance Models to Understand Data-Intensive Computing Efficiency

    DTIC Science & Technology

    2010-05-01

    Keywords: data-intensive computing, cloud computing, analytical modeling, Hadoop, MapReduce, performance and efficiency. Excerpt: "Data-intensive scalable ... the writing of the output data to disk. In systems that replicate data across multiple nodes, such as the GFS [11] and HDFS [3] distributed file ... evenly distributed across all participating nodes in the cluster, that nodes are homogeneous, and that each node retrieves its initial input from local ..."

  18. I/O-Efficient Scientific Computation Using TPIE

    NASA Technical Reports Server (NTRS)

    Vengroff, Darren Erik; Vitter, Jeffrey Scott

    1996-01-01

    In recent years, input/output (I/O)-efficient algorithms for a wide variety of problems have appeared in the literature. However, systems specifically designed to assist programmers in implementing such algorithms have remained scarce. TPIE is a system designed to support I/O-efficient paradigms for problems from a variety of domains, including computational geometry, graph algorithms, and scientific computation. The TPIE interface frees programmers from having to deal not only with explicit read and write calls, but also the complex memory management that must be performed for I/O-efficient computation. In this paper we discuss applications of TPIE to problems in scientific computation. We discuss algorithmic issues underlying the design and implementation of the relevant components of TPIE and present performance results of programs written to solve a series of benchmark problems using our current TPIE prototype. Some of the benchmarks we present are based on the NAS parallel benchmarks while others are of our own creation. We demonstrate that the central processing unit (CPU) overhead required to manage I/O is small and that even with just a single disk, the I/O overhead of I/O-efficient computation ranges from negligible to the same order of magnitude as CPU time. We conjecture that if we use a number of disks in parallel this overhead can be all but eliminated.

  19. Equilibrium analysis of the efficiency of an autonomous molecular computer

    NASA Astrophysics Data System (ADS)

    Rose, John A.; Deaton, Russell J.; Hagiya, Masami; Suyama, Akira

    2002-02-01

    In the whiplash polymerase chain reaction (WPCR), autonomous molecular computation is implemented in vitro by the recursive, self-directed polymerase extension of a mixture of DNA hairpins. Although computational efficiency is known to be reduced by a tendency for DNAs to self-inhibit by backhybridization, both the magnitude of this effect and its dependence on the reaction conditions have remained open questions. In this paper, the impact of backhybridization on WPCR efficiency is addressed by modeling the recursive extension of each strand as a Markov chain. The extension efficiency per effective polymerase-DNA encounter is then estimated within the framework of a statistical thermodynamic model. Model predictions are shown to provide close agreement with the premature halting of computation reported in a recent in vitro WPCR implementation, a particularly significant result, given that backhybridization had been discounted as the dominant error process. The scaling behavior further indicates completion times to be sufficiently long to render WPCR-based massive parallelism infeasible. A modified architecture, PNA-mediated WPCR (PWPCR) is then proposed in which the occupancy of backhybridized hairpins is reduced by targeted PNA2/DNA triplex formation. The efficiency of PWPCR is discussed using a modified form of the model developed for WPCR. Predictions indicate the PWPCR efficiency is sufficient to allow the implementation of autonomous molecular computation on a massive scale.
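
    To make the Markov-chain picture concrete, the toy model below (entirely illustrative; the extension probabilities and encounter count are hypothetical, not the paper's fitted values) tracks the distribution over extension steps when each polymerase-DNA encounter advances a strand with a per-step efficiency that decays to mimic increasing backhybridization.

```python
# A toy Markov-chain model in the spirit of the paper's analysis (parameters
# are hypothetical): state i = number of completed extension steps, and each
# polymerase-DNA encounter advances the strand with probability p_ext[i],
# which we let decay to mimic increasing backhybridization.
import numpy as np

n_steps = 10                              # planned recursive extensions
p_ext = 0.8 * 0.7 ** np.arange(n_steps)   # per-encounter extension efficiency

# Transition matrix over states 0..n_steps (the final state is absorbing).
T = np.zeros((n_steps + 1, n_steps + 1))
for i in range(n_steps):
    T[i, i + 1] = p_ext[i]
    T[i, i] = 1.0 - p_ext[i]
T[n_steps, n_steps] = 1.0

dist = np.zeros(n_steps + 1)
dist[0] = 1.0
for _ in range(500):                      # 500 effective encounters
    dist = dist @ T

print("P(computation completed) =", dist[-1])
print("P(halted prematurely)    =", dist[:-1].sum())
```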

  20. A CAD System for Hemorrhagic Stroke

    PubMed Central

    Nowinski, Wieslaw L; Qian, Guoyu; Hanley, Daniel F

    2014-01-01

    Summary Computer-aided detection/diagnosis (CAD) is a key component of routine clinical practice, increasingly used for detection, interpretation, quantification and decision support. Despite a critical need, there is no clinically accepted CAD system for stroke yet. Here we introduce a CAD system for hemorrhagic stroke. This CAD system segments, quantifies, and displays hematoma in 2D/3D, and supports evacuation of hemorrhage by thrombolytic treatment, monitoring progression and quantifying clot removal. It supports a seven-step workflow: select patient, add a new study, process patient's scans, show segmentation results, plot hematoma volumes, show 3D synchronized time series hematomas, and generate report. The system architecture contains four components: library, tools, application with user interface, and hematoma segmentation algorithm. The tools include a contour editor, 3D surface modeler, 3D volume measure, histogramming, hematoma volume plot, and 3D synchronized time-series hematoma display. The CAD system has been designed and implemented in C++. It has also been employed in the CLEAR and MISTIE phase-III, multicenter clinical trials. This stroke CAD system is potentially useful in research and clinical applications, particularly for clinical trials. PMID:25196612

  1. A CAD System for Hemorrhagic Stroke.

    PubMed

    Nowinski, Wieslaw L; Qian, Guoyu; Hanley, Daniel F

    2014-09-01

    Computer-aided detection/diagnosis (CAD) is a key component of routine clinical practice, increasingly used for detection, interpretation, quantification and decision support. Despite a critical need, there is no clinically accepted CAD system for stroke yet. Here we introduce a CAD system for hemorrhagic stroke. This CAD system segments, quantifies, and displays hematoma in 2D/3D, and supports evacuation of hemorrhage by thrombolytic treatment, monitoring progression and quantifying clot removal. It supports a seven-step workflow: select patient, add a new study, process patient's scans, show segmentation results, plot hematoma volumes, show 3D synchronized time series hematomas, and generate report. The system architecture contains four components: library, tools, application with user interface, and hematoma segmentation algorithm. The tools include a contour editor, 3D surface modeler, 3D volume measure, histogramming, hematoma volume plot, and 3D synchronized time-series hematoma display. The CAD system has been designed and implemented in C++. It has also been employed in the CLEAR and MISTIE phase-III, multicenter clinical trials. This stroke CAD system is potentially useful in research and clinical applications, particularly for clinical trials.

  2. Future CAD in multi-dimensional medical images--project on multi-organ, multi-disease CAD system.

    PubMed

    Kobatake, Hidefumi

    2007-01-01

    A large research project on the subject of computer-aided diagnosis (CAD) entitled "Intelligent Assistance in Diagnosis of Multi-dimensional Medical Images" was initiated in Japan in 2003. The objective of this research project is to develop a multi-organ, multi-disease CAD system that incorporates anatomical knowledge of the human body and diagnostic knowledge of various types of diseases. The present paper provides an overview of the project and clarifies the trend of future CAD technologies in Japan.

  3. CAD software lights up the environmental scene

    SciTech Connect

    Basta, N.

    1996-01-01

    There seems to be a natural affinity between the data requirements of environmental work and computer-aided design (CAD) software. Perhaps the best example of this is the famous shots of the ozone hole produced by computer-enhanced satellite imagery in the mid-1980s. Once this image was published, the highly abstract discussion of ozone concentrations and arctic wind patterns suddenly became very real. On ground level, in the day-to-day work of environmental managers and site restorers, CAD software is proving its value over and over. Graphic images are a convenient, readily understandable way of presenting the large volumes of data produced by environmental projects. With the latest CAD systems, the work of specifying process equipment or subsurface conditions can be reused again and again as projects move from the study and design phase to the construction or remediation phases. An important subset of CAD is geographic information systems (GIS), which are used to organize data on a site-specific basis. Like CAD itself, GIS reaches out beyond the borders of the computer screen or printout, in such ways making use of the Geostationary Positioning System (a global method of locating position precisely), and matching current with historical data. Good GIS software can also make use of the large database of geological data produced by government and industry, thus saving on surveying costs and exploratory well construction.

  4. Computationally Efficient Composite Likelihood Statistics for Demographic Inference.

    PubMed

    Coffman, Alec J; Hsieh, Ping Hsun; Gravel, Simon; Gutenkunst, Ryan N

    2016-02-01

    Many population genetics tools employ composite likelihoods, because fully modeling genomic linkage is challenging. But traditional approaches to estimating parameter uncertainties and performing model selection require full likelihoods, so these tools have relied on computationally expensive maximum-likelihood estimation (MLE) on bootstrapped data. Here, we demonstrate that statistical theory can be applied to adjust composite likelihoods and perform robust computationally efficient statistical inference in two demographic inference tools: ∂a∂i and TRACTS. On both simulated and real data, the adjustments perform comparably to MLE bootstrapping while using orders of magnitude less computational time.
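
    The adjustment rests on the Godambe (sandwich) information, which replaces the naive inverse-Hessian covariance when the likelihood is composite. A generic sketch follows, with placeholder inputs rather than anything from ∂a∂i or TRACTS.

```python
# A generic sketch of the Godambe (sandwich) adjustment underlying such
# methods: with composite likelihoods, the usual inverse-Hessian variance is
# replaced by H^{-1} J H^{-1}, where H is the negative Hessian and J the
# covariance of the per-observation scores. All inputs here are placeholders.
import numpy as np

def godambe_variance(scores, hessian):
    """scores: (n_obs, n_params) per-observation score vectors at the MLE
    (assumed mean-zero there); hessian: (n_params, n_params) negative Hessian
    of the composite log-likelihood. Returns the adjusted covariance."""
    J = scores.T @ scores                 # variability matrix
    H_inv = np.linalg.inv(hessian)
    return H_inv @ J @ H_inv              # sandwich estimator

# Toy usage with synthetic values:
rng = np.random.default_rng(1)
scores = rng.normal(size=(200, 2))
hessian = np.array([[150.0, 20.0], [20.0, 180.0]])
cov = godambe_variance(scores, hessian)
print("adjusted std errors:", np.sqrt(np.diag(cov)))
```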

  5. Popescu-Rohrlich correlations imply efficient instantaneous nonlocal quantum computation

    NASA Astrophysics Data System (ADS)

    Broadbent, Anne

    2016-08-01

    In instantaneous nonlocal quantum computation, two parties cooperate in order to perform a quantum computation on their joint inputs, while being restricted to a single round of simultaneous communication. Previous results showed that instantaneous nonlocal quantum computation is possible, at the cost of an exponential amount of prior shared entanglement (in the size of the input). Here, we show that a linear amount of entanglement suffices (in the size of the computation), as long as the parties share nonlocal correlations as given by the Popescu-Rohrlich box. This means that communication is not required for efficient instantaneous nonlocal quantum computation. Exploiting the well-known relation to position-based cryptography, our result also implies the impossibility of secure position-based cryptography against adversaries with nonsignaling correlations. Furthermore, our construction establishes a quantum analog of the classical communication complexity collapse under nonsignaling correlations.
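
    For reference, the Popescu-Rohrlich box itself is simple to state and simulate: on inputs x, y it emits locally uniform bits a, b with a XOR b = x AND y, achieving the algebraic maximum CHSH value of 4. A minimal sketch:

```python
# A minimal sketch of the Popescu-Rohrlich box: on inputs x, y in {0, 1} it
# returns uniformly random bits a, b constrained by a XOR b = x AND y, which
# attains the algebraic maximum CHSH value of 4 while remaining nonsignaling.
import random

def pr_box(x, y):
    a = random.randint(0, 1)
    b = a ^ (x & y)          # enforce a XOR b = x AND y
    return a, b

# Empirical CHSH score: E(x,y) = P(a=b) - P(a!=b), S = E00 + E01 + E10 - E11.
def chsh(n=100_000):
    S = 0.0
    for x in (0, 1):
        for y in (0, 1):
            agree = sum(1 if a == b else -1
                        for a, b in (pr_box(x, y) for _ in range(n))) / n
            S += -agree if (x, y) == (1, 1) else agree
    return S

print("CHSH value ~", chsh())   # ~4, beyond the quantum Tsirelson bound 2*sqrt(2)
```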

  6. Cement Thickness of Inlay Restorations Made of Lithium Disilicate, Polymer-Infiltrated Ceramic and Nano-Ceramic CAD/CAM Materials Evaluated Using 3D X-Ray Micro-Computed Tomography.

    PubMed

    Uzgur, Recep; Ercan, Ertuğrul; Uzgur, Zeynep; Çolak, Hakan; Yalçın, Muhammet; Özcan, Mutlu

    2016-08-12

    To evaluate the marginal and internal cement thicknesses of inlay restorations made of various CAD/CAM materials using the 3D X-ray micro-computed tomography (micro-CT) technique. Caries-free extracted mandibular molars (N = 30) of similar size were randomly assigned to three groups (N = 10 per group). Mesio-occlusal-distal (MOD) cavities were prepared, and inlay restorations were obtained by milling out CAD/CAM materials, namely (a) IPS: monolithic lithium disilicate (control), (b) VE: polymer-infiltrated ceramic, and (c) CS: nano-ceramic, using a CAM unit. Marginal and internal cement thicknesses were measured using 3D micro-CT. Data were analyzed using 1-way ANOVA and Tukey's tests (alpha = 0.05). Mean marginal and internal cement thicknesses did not differ significantly among the inlay materials (p > 0.05). Mean marginal cement thickness (μm) was the lowest for the IPS group (67.54 ± 10.16) followed by VE (84.09 ± 3.94) and CS (95.18 ± 10.58) (p > 0.05). The internal cement thickness (μm) was the lowest in the CS group (54.85 ± 6.94) followed by IPS (60.58 ± 9.22) and VE (77.53 ± 12.13) (p > 0.05). Marginal and internal cement thicknesses of MOD inlays made of monolithic lithium disilicate, polymer-infiltrated ceramic, and nano-ceramic CAD/CAM materials were similar and all less than 100 μm, which could be considered clinically acceptable. MOD inlays made of different CAD/CAM materials presented similar cement thickness, less than 100 μm. © 2016 by the American College of Prosthodontists.

  7. On the Use of CAD and Cartesian Methods for Aerodynamic Optimization

    NASA Technical Reports Server (NTRS)

    Nemec, M.; Aftosmis, M. J.; Pulliam, T. H.

    2004-01-01

    The objective of this paper is to present the development of an optimization capability for Cart3D, a Cartesian inviscid-flow analysis package. We present the construction of a new optimization framework and we focus on the following issues: 1) Component-based geometry parameterization approach using parametric-CAD models and CAPRI. A novel geometry server is introduced that addresses the issue of parallel efficiency while only sparingly consuming CAD resources; 2) The use of genetic and gradient-based algorithms for three-dimensional aerodynamic design problems. The influence of noise on the optimization methods is studied. Our goal is to create a responsive and automated framework that efficiently identifies design modifications that result in substantial performance improvements. In addition, we examine the architectural issues associated with the deployment of a CAD-based approach in a heterogeneous parallel computing environment that contains both CAD workstations and dedicated compute engines. We demonstrate the effectiveness of the framework for a design problem that features topology changes and complex geometry.

  8. A Computationally Efficient Algorithm for Aerosol Phase Equilibrium

    SciTech Connect

    Zaveri, Rahul A.; Easter, Richard C.; Peters, Len K.; Wexler, Anthony S.

    2004-10-04

    Three-dimensional models of atmospheric inorganic aerosols need an accurate yet computationally efficient thermodynamic module that is repeatedly used to compute internal aerosol phase state equilibrium. In this paper, we describe the development and evaluation of a computationally efficient numerical solver called MESA (Multicomponent Equilibrium Solver for Aerosols). The unique formulation of MESA allows iteration of all the equilibrium equations simultaneously while maintaining overall mass conservation and electroneutrality in both the solid and liquid phases. MESA is unconditionally stable, shows robust convergence, and typically requires only 10 to 20 single-level iterations (where all activity coefficients and aerosol water content are updated) per internal aerosol phase equilibrium calculation. Accuracy of MESA is comparable to that of the highly accurate Aerosol Inorganics Model (AIM), which uses a rigorous Gibbs free energy minimization approach. Performance evaluation will be presented for a number of complex multicomponent mixtures commonly found in urban and marine tropospheric aerosols.

  9. An overview of energy efficiency techniques in cluster computing systems

    SciTech Connect

    Valentini, Giorgio Luigi; Lassonde, Walter; Khan, Samee Ullah; Min-Allah, Nasro; Madani, Sajjad A.; Li, Juan; Zhang, Limin; Wang, Lizhe; Ghani, Nasir; Kolodziej, Joanna; Li, Hongxiang; Zomaya, Albert Y.; Xu, Cheng-Zhong; Balaji, Pavan; Vishnu, Abhinav; Pinel, Fredric; Pecero, Johnatan E.; Kliazovich, Dzmitry; Bouvry, Pascal

    2011-09-10

    Two major constraints demand more consideration for energy efficiency in cluster computing: (a) operational costs, and (b) system reliability. Increasing energy efficiency in cluster systems will reduce energy consumption and excess heat, lower operational costs, and improve system reliability. Based on the energy-power relationship, and the fact that energy consumption can be reduced with strategic power management, we focus in this survey on the characteristics of two main power management technologies: (a) static power management (SPM) systems that utilize low-power components to save the energy, and (b) dynamic power management (DPM) systems that utilize software and power-scalable components to optimize the energy consumption. We present the current state of the art in both the SPM and DPM techniques, citing representative examples. The survey is concluded with a brief discussion of possible future directions that could be explored to improve energy efficiency in cluster computing.

  10. Making a Case for CAD in the Curriculum.

    ERIC Educational Resources Information Center

    Threlfall, K. Denise

    1995-01-01

    Computer-assisted design (CAD) technology is transforming the apparel industry. Students of fashion merchandising and clothing design must be prepared on state-of-the-art equipment. ApparelCAD software is one example of courseware for instruction in pattern design and production. (SK)

  12. An Evaluation of Internet-Based CAD Collaboration Tools

    ERIC Educational Resources Information Center

    Smith, Shana Shiang-Fong

    2004-01-01

    Due to the now widespread use of the Internet, most companies now require computer aided design (CAD) tools that support distributed collaborative design on the Internet. Such CAD tools should enable designers to share product models, as well as related data, from geographically distant locations. However, integrated collaborative design…

  13. Efficient Computation of 3D Clipped Voronoi Diagram

    NASA Astrophysics Data System (ADS)

    Yan, Dong-Ming; Wang, Wenping; Lévy, Bruno; Liu, Yang

    The Voronoi diagram is a fundamental geometry structure widely used in various fields, especially in computer graphics and geometry computing. For a set of points in a compact 3D domain (i.e. a finite 3D volume), some Voronoi cells of their Voronoi diagram are infinite, but in practice only the parts of the cells inside the domain are needed, as when computing the centroidal Voronoi tessellation. Such a Voronoi diagram confined to a compact domain is called a clipped Voronoi diagram. We present an efficient algorithm for computing the clipped Voronoi diagram for a set of sites with respect to a compact 3D volume, assuming that the volume is represented as a tetrahedral mesh. We also describe an application of the proposed method to implementing a fast method for optimal tetrahedral mesh generation based on the centroidal Voronoi tessellation.
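
    The clipping idea carries over directly to 2D, where it is easy to illustrate with off-the-shelf tools. The sketch below is a 2D analogue, not the paper's tetrahedral-mesh algorithm, and assumes shapely >= 1.8.

```python
# A 2D illustration of the clipping idea (the paper works with tetrahedral
# meshes in 3D): build the unbounded Voronoi diagram of a few sites, then
# intersect every cell with a compact domain polygon. Requires shapely >= 1.8.
from shapely.geometry import MultiPoint, Polygon
from shapely.ops import voronoi_diagram

domain = Polygon([(0, 0), (4, 0), (4, 3), (0, 3)])       # compact 2D domain
sites = MultiPoint([(1, 1), (3, 1), (2, 2), (1.5, 2.5)])

# envelope= only hints at the extent; cells can still stick out, so clip them.
cells = voronoi_diagram(sites, envelope=domain)
clipped = [cell.intersection(domain) for cell in cells.geoms]

for i, cell in enumerate(clipped):
    print(f"cell {i}: area = {cell.area:.3f}")
print("total =", sum(c.area for c in clipped), "~ domain area", domain.area)
```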

  14. A concurrent computer aided detection (CAD) tool for articular cartilage disease of the knee on MR imaging using active shape models

    NASA Astrophysics Data System (ADS)

    Ramakrishna, Bharath; Saiprasad, Ganesh; Safdar, Nabile; Siddiqui, Khan; Chang, Chein-I.; Siegel, Eliot

    2008-03-01

    Osteoarthritis (OA) is the most common form of arthritis and a major cause of morbidity affecting millions of adults in the US and worldwide. In the knee, OA begins with the degeneration of joint articular cartilage, eventually resulting in the femur and tibia coming in contact, and leading to severe pain and stiffness. There has been extensive research examining 3D MR imaging sequences and automatic/semi-automatic techniques for 2D/3D articular cartilage extraction. However, in routine clinical practice the most popular technique still remains radiographic examination and qualitative assessment of the joint space. This may be in large part because of a lack of tools that can provide clinically relevant diagnoses in near-real-time adjunct with the radiologist's reading, serving the needs of radiologists and reducing inter-observer variation. Our work aims to fill this void by developing a CAD application that can generate clinically relevant diagnoses of articular cartilage damage in near-real-time fashion. The algorithm features a 2D Active Shape Model (ASM) for modeling the bone-cartilage interface on all the slices of a Double Echo Steady State (DESS) MR sequence, followed by measurement of the cartilage thickness from the surface of the bone, and finally by the identification of regions of abnormal thinness and focal/degenerative lesions. A preliminary evaluation of the CAD tool was carried out on 10 cases taken from the Osteoarthritis Initiative (OAI) database. When compared with 2 board-certified musculoskeletal radiologists, the automatic CAD application was able to get segmentation/thickness maps in a little over 60 seconds for all of the cases. This observation poses interesting possibilities for increasing radiologist productivity and confidence, improving patient outcomes, and applying more sophisticated CAD algorithms to routine orthopedic imaging tasks.

  15. The effects of slice thickness and radiation dose level variations on computer-aided diagnosis (CAD) nodule detection performance in pediatric chest CT scans

    NASA Astrophysics Data System (ADS)

    Emaminejad, Nastaran; Lo, Pechin; Ghahremani, Shahnaz; Kim, Grace H.; Brown, Matthew S.; McNitt-Gray, Michael F.

    2017-03-01

    For pediatric oncology patients, CT scans are performed to assess treatment response and disease progression. CAD may be used to detect lung nodules, which would reflect metastatic disease. The purpose of this study was to investigate the effects of reducing radiation dose and varying slice thickness on CAD performance in the detection of solid lung nodules in pediatric patients. The dataset consisted of CT scans of 58 pediatric chest cases, of which 7 cases had lung nodules detected by a radiologist, and a total of 28 nodules were marked. For each case, the original raw data (sinogram data) was collected and a noise addition model was used to simulate reduced-dose scans of 50%, 25% and 10% of the original dose. In addition, the original and reduced-dose raw data were reconstructed at slice thicknesses of 1.5 and 3 mm using a medium sharp (B45) kernel; the result was eight datasets (4 dose levels x 2 thicknesses) for each case. An in-house CAD tool was applied to all reconstructed scans, and results were compared with the radiologist's markings. Patient-level mean sensitivities at 3 mm thickness were 24%, 26%, 25%, 27%, and at 1.5 mm thickness were 23%, 29%, 35%, 36% for 10%, 25%, 50%, and 100% dose levels, respectively. Mean FP numbers were 1.5, 0.9, 0.8, 0.7 at 3 mm and 11.4, 3.5, 2.8, 2.8 at 1.5 mm thickness for 10%, 25%, 50%, and 100% dose levels, respectively. CAD sensitivity did not change with dose level for 3 mm thickness, but did change with dose for 1.5 mm. False positives increased at low dose levels where noise values were high.

  16. An Accurate and Efficient Method of Computing Differential Seismograms

    NASA Astrophysics Data System (ADS)

    Hu, S.; Zhu, L.

    2013-12-01

    Inversion of seismic waveforms for Earth structure usually requires computing partial derivatives of seismograms with respect to velocity model parameters. We developed an accurate and efficient method to calculate differential seismograms for multi-layered elastic media, based on the Thompson-Haskell propagator matrix technique. We first derived the partial derivatives of the Haskell matrix and its compound matrix with respect to the layer parameters (P wave velocity, shear wave velocity and density). We then derived the partial derivatives of surface displacement kernels in the frequency-wavenumber domain. The differential seismograms are obtained by using the frequency-wavenumber double integration method. The implementation is computationally efficient and the total computing time is proportional to the time of computing the seismogram itself, i.e., independent of the number of layers in the model. We verified the correctness of the results by comparing with differential seismograms computed using the finite-difference method. Our results are more accurate because of the analytical nature of the derived partial derivatives.
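
    The analytic core of such an approach is the product rule applied to the stack of layer propagator matrices; in the notation sketched below (mine, not the authors'), differentiating with respect to a parameter of layer ℓ only replaces the ℓ-th factor, which is why the cost stays proportional to that of the seismogram itself.

```latex
% Sketch of the analytic core (notation is mine, not the authors'): the
% response is built from a product of layer propagator matrices, so the
% derivative with respect to a parameter m_l of layer l replaces only the
% l-th factor in the product.
\[
\mathbf{P}(\omega, k) = \mathbf{M}_N \mathbf{M}_{N-1} \cdots \mathbf{M}_1 ,
\qquad
\frac{\partial \mathbf{P}}{\partial m_\ell}
  = \mathbf{M}_N \cdots \mathbf{M}_{\ell+1}
    \,\frac{\partial \mathbf{M}_\ell}{\partial m_\ell}\,
    \mathbf{M}_{\ell-1} \cdots \mathbf{M}_1 .
\]
```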

  17. Efficient computations of quantum canonical Gibbs state in phase space

    NASA Astrophysics Data System (ADS)

    Bondar, Denys I.; Campos, Andre G.; Cabrera, Renan; Rabitz, Herschel A.

    2016-06-01

    The Gibbs canonical state, as a maximum entropy density matrix, represents a quantum system in equilibrium with a thermostat. This state plays an essential role in thermodynamics and serves as the initial condition for nonequilibrium dynamical simulations. We solve a long-standing problem of computing the Gibbs state Wigner function with nearly machine accuracy by solving the Bloch equation directly in the phase space. Furthermore, algorithms are provided that yield high-quality Wigner distributions for pure stationary states as well as for Thomas-Fermi and Bose-Einstein distributions. The developed numerical methods furnish a long-sought efficient computation framework for nonequilibrium quantum simulations directly in the Wigner representation.
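
    For orientation, the governing equations in standard form (stated here as a sketch; the phase-space version uses the Moyal star product between the Hamiltonian's Weyl symbol H and the unnormalized Wigner function W_β):

```latex
% A sketch of the governing equations (standard form, for orientation): the
% unnormalized Gibbs state obeys the Bloch equation in inverse temperature
% beta, and its Wigner transform W_beta evolves under the symmetrized Moyal
% star product with the Hamiltonian's Weyl symbol H.
\[
\frac{\partial \rho_\beta}{\partial \beta}
  = -\tfrac{1}{2}\left(\hat{H}\rho_\beta + \rho_\beta \hat{H}\right),
\qquad \rho_0 = \hat{1},
\qquad
\frac{\partial W_\beta}{\partial \beta}
  = -\tfrac{1}{2}\left(H \star W_\beta + W_\beta \star H\right).
\]
```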

  18. Efficient quantum circuits for one-way quantum computing.

    PubMed

    Tanamoto, Tetsufumi; Liu, Yu-Xi; Hu, Xuedong; Nori, Franco

    2009-03-13

    While Ising-type interactions are ideal for implementing controlled phase flip gates in one-way quantum computing, natural interactions between solid-state qubits are most often described by either the XY or the Heisenberg models. We show an efficient way of generating cluster states directly using either the imaginary SWAP (iSWAP) gate for the XY model, or the sqrt[SWAP] gate for the Heisenberg model. Our approach thus makes one-way quantum computing more feasible for solid-state devices.
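
    For reference, the standard iSWAP gate that the XY interaction generates natively, written in the computational basis {|00>, |01>, |10>, |11>}:

```latex
% Standard definition (for reference): the XY exchange interaction natively
% generates the iSWAP gate, shown here in the computational basis
% {|00>, |01>, |10>, |11>}.
\[
\mathrm{iSWAP} =
\begin{pmatrix}
1 & 0 & 0 & 0\\
0 & 0 & i & 0\\
0 & i & 0 & 0\\
0 & 0 & 0 & 1
\end{pmatrix}
\]
```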

  19. A Case Study in CAD Design Automation

    ERIC Educational Resources Information Center

    Lowe, Andrew G.; Hartman, Nathan W.

    2011-01-01

    Computer-aided design (CAD) software and other product life-cycle management (PLM) tools have become ubiquitous in industry during the past 20 years. Over this time they have continuously evolved, becoming programs with enormous capabilities, but the companies that use them have not evolved their design practices at the same rate. Due to the…

  20. Mechanical Drafting with CAD. Teacher Edition.

    ERIC Educational Resources Information Center

    McClain, Gerald R.

    This instructor's manual contains 13 units of instruction for a course on mechanical drafting with options for using computer-aided drafting (CAD). Each unit includes some or all of the following basic components of a unit of instruction: objective sheet, suggested activities for the teacher, assignment sheets and answers to assignment sheets,…

  2. CAD-RADS(TM) Coronary Artery Disease - Reporting and Data System. An expert consensus document of the Society of Cardiovascular Computed Tomography (SCCT), the American College of Radiology (ACR) and the North American Society for Cardiovascular Imaging (NASCI). Endorsed by the American College of Cardiology.

    PubMed

    Cury, Ricardo C; Abbara, Suhny; Achenbach, Stephan; Agatston, Arthur; Berman, Daniel S; Budoff, Matthew J; Dill, Karin E; Jacobs, Jill E; Maroules, Christopher D; Rubin, Geoffrey D; Rybicki, Frank J; Schoepf, U Joseph; Shaw, Leslee J; Stillman, Arthur E; White, Charles S; Woodard, Pamela K; Leipsic, Jonathon A

    2016-01-01

    The intent of CAD-RADS - Coronary Artery Disease Reporting and Data System is to create a standardized method to communicate findings of coronary CT angiography (coronary CTA) in order to facilitate decision-making regarding further patient management. The suggested CAD-RADS classification is applied on a per-patient basis and represents the highest-grade coronary artery lesion documented by coronary CTA. It ranges from CAD-RADS 0 (Zero) for the complete absence of stenosis and plaque to CAD-RADS 5 for the presence of at least one totally occluded coronary artery and should always be interpreted in conjunction with the impression found in the report. Specific recommendations are provided for further management of patients with stable or acute chest pain based on the CAD-RADS classification. The main goal of CAD-RADS is to standardize reporting of coronary CTA results and to facilitate communication of test results to referring physicians along with suggestions for subsequent patient management. In addition, CAD-RADS will provide a framework of standardization that may benefit education, research, peer-review and quality assurance with the potential to ultimately result in improved quality of care. Copyright © 2016 Society of Cardiovascular Computed Tomography. Published by Elsevier Inc. All rights reserved.

  3. CAD-RADS™: Coronary Artery Disease - Reporting and Data System: An Expert Consensus Document of the Society of Cardiovascular Computed Tomography (SCCT), the American College of Radiology (ACR) and the North American Society for Cardiovascular Imaging (NASCI). Endorsed by the American College of Cardiology.

    PubMed

    Cury, Ricardo C; Abbara, Suhny; Achenbach, Stephan; Agatston, Arthur; Berman, Daniel S; Budoff, Matthew J; Dill, Karin E; Jacobs, Jill E; Maroules, Christopher D; Rubin, Geoffrey D; Rybicki, Frank J; Schoepf, U Joseph; Shaw, Leslee J; Stillman, Arthur E; White, Charles S; Woodard, Pamela K; Leipsic, Jonathon A

    2016-12-01

    The intent of CAD-RADS - Coronary Artery Disease Reporting and Data System is to create a standardized method to communicate findings of coronary CT angiography (coronary CTA) in order to facilitate decision-making regarding further patient management. The suggested CAD-RADS classification is applied on a per-patient basis and represents the highest-grade coronary artery lesion documented by coronary CTA. It ranges from CAD-RADS 0 (Zero) for the complete absence of stenosis and plaque to CAD-RADS 5 for the presence of at least one totally occluded coronary artery and should always be interpreted in conjunction with the impression found in the report. Specific recommendations are provided for further management of patients with stable or acute chest pain based on the CAD-RADS classification. The main goal of CAD-RADS is to standardize reporting of coronary CTA results and to facilitate communication of test results to referring physicians along with suggestions for subsequent patient management. In addition, CAD-RADS will provide a framework of standardization that may benefit education, research, peer-review and quality assurance with the potential to ultimately result in improved quality of care. Copyright © 2016 Society of Cardiovascular Computed Tomography and the American College of Radiology. Published by Elsevier Inc. All rights reserved.
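
    As a rough illustration of the classification logic (thresholds paraphrased from commonly cited summaries of the scheme; the 4A/4B split and the published modifiers are omitted, so consult the consensus document for authoritative definitions):

```python
# An illustrative lookup of the CAD-RADS 0-5 categories against commonly
# cited maximal-stenosis thresholds. Simplified: the published scheme also
# defines modifiers and splits category 4 into 4A/4B; the consensus document
# is authoritative.
CAD_RADS = {
    0: "0% stenosis, no plaque",
    1: "1-24% stenosis (minimal)",
    2: "25-49% stenosis (mild)",
    3: "50-69% stenosis (moderate)",
    4: "70-99% stenosis (severe; 4A/4B per consensus rules)",
    5: "100% stenosis (total occlusion)",
}

def cad_rads_category(max_stenosis_percent: float) -> int:
    """Map the highest-grade per-patient stenosis to a CAD-RADS category."""
    for category, lower in ((5, 100), (4, 70), (3, 50), (2, 25), (1, 1)):
        if max_stenosis_percent >= lower:
            return category
    return 0

print(cad_rads_category(60), "->", CAD_RADS[cad_rads_category(60)])  # 3
```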

  4. Efficient and accurate computation of the incomplete Airy functions

    NASA Technical Reports Server (NTRS)

    Constantinides, E. D.; Marhefka, R. J.

    1993-01-01

    The incomplete Airy integrals serve as canonical functions for the uniform ray optical solutions to several high-frequency scattering and diffraction problems that involve a class of integrals characterized by two stationary points that are arbitrarily close to one another or to an integration endpoint. Integrals with such analytical properties describe transition region phenomena associated with composite shadow boundaries. An efficient and accurate method for computing the incomplete Airy functions would make the solutions to such problems useful for engineering purposes. In this paper a convergent series solution for the incomplete Airy functions is derived. Asymptotic expansions involving several terms are also developed and serve as large argument approximations. The combination of the series solution with the asymptotic formulae provides for an efficient and accurate computation of the incomplete Airy functions. Validation of accuracy is accomplished using direct numerical integration data.

  5. Computational methods for efficient structural reliability and reliability sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.

    1993-01-01

    This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
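
    A compact sketch of the adaptive importance sampling idea follows; the toy limit-state function and all parameters are mine, not the paper's. The sampling density is recentered on the failure region found so far, and each estimate reweights samples by f(x)/h(x).

```python
# A compact sketch of adaptive importance sampling for a failure probability
# P[g(X) <= 0] (toy limit state and parameters are mine): sampling is centered
# on failure points found so far, and estimates are reweighted by f(x)/h(x).
import numpy as np
from scipy import stats

def g(x):                               # toy limit-state function, fails if <= 0
    return 4.0 - x[:, 0] - x[:, 1]

rng = np.random.default_rng(2)
f = stats.multivariate_normal(mean=[0, 0], cov=np.eye(2))   # true density

center = np.zeros(2)
for it in range(4):                     # adapt the sampling density
    h = stats.multivariate_normal(mean=center, cov=np.eye(2))
    x = h.rvs(size=20_000, random_state=rng)
    fail = g(x) <= 0
    w = f.pdf(x) / h.pdf(x)             # importance weights
    p_f = np.mean(w * fail)
    if fail.any():                      # recenter on the failure region
        center = np.average(x[fail], axis=0, weights=w[fail])
    print(f"iter {it}: P_f ~ {p_f:.2e}, center = {center.round(2)}")
```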

  6. A Computationally Efficient Method for Polyphonic Pitch Estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Ruohua; Reiss, Joshua D.; Mattavelli, Marco; Zoia, Giorgio

    2009-12-01

    This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimation is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. This spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then incorrect estimations are removed based on spectral irregularity and knowledge of the harmonic structures of notes played on commonly used musical instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and results demonstrate the high performance and computational efficiency of the approach.
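
    The harmonic-grouping step can be sketched independently of the RTFI front end. The toy example below (my own simplification, not the paper's method) builds a pitch-salience spectrum by summing FFT magnitudes at harmonic multiples of each candidate F0 and then peak-picks it.

```python
# A bare-bones harmonic-grouping sketch (not the RTFI front end used in the
# paper): build a pitch-salience spectrum by summing spectral magnitude at
# harmonic multiples of each candidate F0, then pick the strongest candidates.
import numpy as np

def pitch_salience(signal, sr, f0_grid, n_harmonics=3):
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / sr)
    salience = np.zeros(len(f0_grid))
    for i, f0 in enumerate(f0_grid):
        for h in range(1, n_harmonics + 1):
            bin_idx = np.argmin(np.abs(freqs - h * f0))   # nearest FFT bin
            salience[i] += spectrum[bin_idx]
    return salience

sr = 16_000
t = np.arange(sr) / sr                                    # one second of audio
def note(f0):                                             # tone with 3 harmonics
    return sum(np.sin(2 * np.pi * h * f0 * t) / h for h in (1, 2, 3))

mix = note(220.0) + note(311.1)                           # two-note dyad
f0_grid = np.arange(200.0, 400.0, 1.0)
sal = pitch_salience(mix, sr, f0_grid)
print("estimated F0s:", sorted(f0_grid[np.argsort(sal)[-2:]]))  # ~[220, 311]
```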

  8. Selecting a CAD/CAM system for the first time

    SciTech Connect

    Link, C.H.

    1984-01-01

    A good percentage of manufacturing companies with gross sales exceeding $100 million already use some form of computer-aided design and manufacturing (CAD/CAM). More recently, however, a new, larger group with annual gross sales between $10 million and $100 million have entered the CAD/CAM marketplace. Interested because the technology has proven itself to be economical, beneficial, and relatively easy to implement without prior computer experience, these companies are actively investigating and applying CAD and CAD/CAM. Many are subcontractors to large companies with low design requirements. Others may design and/or manufacture a proprietary product line. Yet, while resembling one another, the needs of these companies vary widely. Applying the proper kind and level of CAD/CAM, therefore, requires a careful analysis of each user's needs.

  9. Efficient MATLAB computations with sparse and factored tensors.

    SciTech Connect

    Bader, Brett William; Kolda, Tamara Gibson (Sandia National Lab, Livermore, CA)

    2006-12-01

    In this paper, the term tensor refers simply to a multidimensional or N-way array, and we consider how specially structured tensors allow for efficient storage and computation. First, we study sparse tensors, which have the property that the vast majority of the elements are zero. We propose storing sparse tensors using coordinate format and describe the computational efficiency of this scheme for various mathematical operations, including those typical to tensor decomposition algorithms. Second, we study factored tensors, which have the property that they can be assembled from more basic components. We consider two specific types: a Tucker tensor can be expressed as the product of a core tensor (which itself may be dense, sparse, or factored) and a matrix along each mode, and a Kruskal tensor can be expressed as the sum of rank-1 tensors. We are interested in the case where the storage of the components is less than the storage of the full tensor, and we demonstrate that many elementary operations can be computed using only the components. All of the efficiencies described in this paper are implemented in the Tensor Toolbox for MATLAB.
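
    The coordinate storage scheme is easy to mirror outside MATLAB. The sketch below is a Python illustration of the COO idea, not the Tensor Toolbox API: it stores one subscript row per nonzero and computes a tensor-times-vector product directly on the coordinates.

```python
# A small illustration of coordinate (COO) storage for sparse tensors: keep
# one row of subscripts per nonzero plus a value array, and implement
# operations directly on the coordinates without densifying the tensor.
import numpy as np

class CooTensor:
    def __init__(self, subs, vals, shape):
        self.subs = np.asarray(subs)                 # (nnz, ndims) subscripts
        self.vals = np.asarray(vals, dtype=float)    # (nnz,) nonzero values
        self.shape = tuple(shape)

    def ttv(self, vector, mode):
        """Tensor-times-vector along `mode`, returning a dense array: each
        nonzero contributes vals[k] * vector[subs[k, mode]] to the entry
        indexed by the remaining subscripts."""
        keep = [d for d in range(len(self.shape)) if d != mode]
        out = np.zeros([self.shape[d] for d in keep])
        contrib = self.vals * vector[self.subs[:, mode]]
        np.add.at(out, tuple(self.subs[:, d] for d in keep), contrib)
        return out

# 3-way tensor of shape (2, 3, 4) with three nonzeros:
T = CooTensor(subs=[[0, 1, 2], [1, 0, 0], [1, 2, 3]],
              vals=[5.0, -1.0, 2.0], shape=(2, 3, 4))
print(T.ttv(np.array([1.0, 10.0, 100.0]), mode=1))   # contracts the 3-long mode
```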

  10. Convolutional networks for fast, energy-efficient neuromorphic computing

    PubMed Central

    Esser, Steven K.; Merolla, Paul A.; Arthur, John V.; Cassidy, Andrew S.; Appuswamy, Rathinakumar; Andreopoulos, Alexander; Berg, David J.; McKinstry, Jeffrey L.; Melano, Timothy; Barch, Davis R.; di Nolfo, Carmelo; Datta, Pallab; Amir, Arnon; Taba, Brian; Flickner, Myron D.; Modha, Dharmendra S.

    2016-01-01

    Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware’s underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer. PMID:27651489

  11. Improving computational efficiency of Monte Carlo simulations with variance reduction

    SciTech Connect

    Turner, A.

    2013-07-01

    CCFE perform Monte-Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging since streaming and deep penetration effects are equally important. In order to make such simulations tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that the application of VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows. The weight window represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting being performed and a long history. When running on parallel clusters, a long history can have a detrimental effect on the parallel efficiency - if one process is computing the long history, the other CPUs complete their batch of histories and wait idle. Furthermore some long histories have been found to be effectively intractable. To combat this effect, CCFE has developed an adaptation of MCNP which dynamically adjusts the WW where a large weight deviation is encountered. The method effectively 'de-optimises' the WW, reducing the VR performance but this is offset by a significant increase in parallel efficiency. Testing with a simple geometry has shown the method does not bias the result. This 'long history method' has enabled CCFE to significantly improve the performance of MCNP calculations for ITER on parallel clusters, and will be beneficial for any geometry combining streaming and deep penetration effects. (authors)

  12. Computationally efficient ASIC implementation of space-time block decoding

    NASA Astrophysics Data System (ADS)

    Cavus, Enver; Daneshrad, Babak

    2002-12-01

    In this paper, we describe a computationally efficient ASIC design that leads to a highly power- and area-efficient implementation of a space-time block decoder compared to a direct implementation of the original algorithm. Our study analyzes alternative methods of evaluating as well as implementing the previously reported maximum likelihood algorithms (Tarokh et al. 1998) for a more favorable hardware design. In our previous study (Cavus et al. 2001), after defining some intermediate variables at the algorithm level, highly computationally efficient decoding approaches, namely the sign and double-sign methods, were developed and their effectiveness illustrated for 2x2, 8x3 and 8x4 systems using BPSK, QPSK, 8-PSK, or 16-QAM modulation. In this work, alternative architectures for the decoder implementation are investigated and an implementation having a low computation approach is proposed. The techniques applied at the higher algorithm and architectural levels lead to a substantial simplification of the hardware architecture and significantly reduced power consumption. The proposed architecture is being fabricated in a TSMC 0.18 μm process.
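
    For context, the textbook Alamouti 2x1 maximum-likelihood decoding that such decoders implement (this shows the standard combining rule, not the paper's sign/double-sign hardware optimizations):

```python
# The classic Alamouti 2x1 combining that space-time block decoders implement:
# two symbols are sent over two time slots, and linear combining with channel
# conjugates decouples them, so ML detection reduces to per-symbol slicing.
import numpy as np

rng = np.random.default_rng(3)
qpsk = np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2)
s1, s2 = rng.choice(qpsk, 2)                 # transmitted symbols
h1, h2 = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)  # fading
noise = 0.05 * (rng.normal(size=2) + 1j * rng.normal(size=2))

# Slot 1 transmits (s1, s2); slot 2 transmits (-s2*, s1*).
r1 = h1 * s1 + h2 * s2 + noise[0]
r2 = -h1 * np.conj(s2) + h2 * np.conj(s1) + noise[1]

# Linear combining decouples the symbols (up to the gain |h1|^2 + |h2|^2):
y1 = np.conj(h1) * r1 + h2 * np.conj(r2)
y2 = np.conj(h2) * r1 - h1 * np.conj(r2)

gain = abs(h1) ** 2 + abs(h2) ** 2
detect = lambda y: qpsk[np.argmin(np.abs(qpsk - y / gain))]
print("sent:", s1, s2, "-> detected:", detect(y1), detect(y2))
```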

  13. Convolutional networks for fast, energy-efficient neuromorphic computing.

    PubMed

    Esser, Steven K; Merolla, Paul A; Arthur, John V; Cassidy, Andrew S; Appuswamy, Rathinakumar; Andreopoulos, Alexander; Berg, David J; McKinstry, Jeffrey L; Melano, Timothy; Barch, Davis R; di Nolfo, Carmelo; Datta, Pallab; Amir, Arnon; Taba, Brian; Flickner, Myron D; Modha, Dharmendra S

    2016-10-11

    Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware's underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer.

  14. Computationally Efficient Spline-Based Time Delay Estimation

    PubMed Central

    Viola, Francesco; Walker, William F.

    2008-01-01

    We have previously presented a highly accurate, spline-based time delay estimator (TDE) that directly determines sub-sample time delay estimates from sampled data. The algorithm uses cubic splines to produce a continuous time representation of a reference signal, then computes an analytical matching function between this reference and a delayed signal. The location of the minima of this function yields estimates of the time delay. In this paper we present more computationally efficient formulations of this algorithm. We present the results of computer simulations and ultrasound experiments which indicate that the bias and the standard deviation of the proposed algorithms are comparable to those of the original method, and thus superior to other published algorithms. PMID:18986905

  15. Computationally efficient spline-based time delay estimation.

    PubMed

    Viola, Francesco; Walker, William F

    2008-09-01

    We previously presented a highly accurate, spline-based time delay estimator that directly determines subsample time delay estimates from sampled data. The algorithm uses cubic splines to produce a continuous time representation of a reference signal, and then computes an analytical matching function between this reference and a delayed signal. The location of the minima of this function yields estimates of the time delay. In this paper we present more computationally efficient formulations of this algorithm. We present the results of computer simulations and ultrasound experiments which indicate that the bias and the standard deviation of the proposed algorithms are comparable to those of the original method, and thus superior to other published algorithms.
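
    A simplified stand-in for the approach (the authors derive a closed-form minimization of the matching function; here a generic numerical minimizer is used instead, and the test signal is arbitrary):

```python
# A sketch of the spline-based idea: fit a cubic spline to the reference
# signal so it can be evaluated at arbitrary sub-sample shifts, then minimize
# the sum of squared differences against the delayed signal over a continuous
# delay. (The original paper minimizes its matching function analytically.)
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize_scalar

def pulse(t):                                  # smooth test signal
    return np.exp(-((t - 128.0) / 20.0) ** 2) * np.sin(0.3 * t)

n = np.arange(256.0)
true_delay = 0.37                              # samples (sub-sample shift)
ref, delayed = pulse(n), pulse(n - true_delay)

spline = CubicSpline(n, ref)                   # continuous-time reference
inner = n[4:-4].astype(int)                    # avoid spline edge effects

def cost(d):                                   # continuous matching function
    return np.sum((spline(inner - d) - delayed[inner]) ** 2)

est = minimize_scalar(cost, bounds=(-2.0, 2.0), method="bounded").x
print(f"true delay {true_delay}, estimated {est:.4f} samples")
```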

  16. Efficient Computation of Exchange Energy Density with Gaussian Basis Functions.

    PubMed

    Liu, Fenglai; Kong, Jing

    2017-06-13

    Density functional theory (DFT) is widely applied in chemistry and physics. Still, it fails to predict correctly, quantitatively or even qualitatively, for systems with significant nondynamic correlation. Several DFT functionals were proposed in recent years to treat the nondynamic correlation, most of which add the exact exchange energy density as a new variable. This quantity, calculated as the Hartree-Fock (HF) exchange energy density, is the computational bottleneck for calculations with these new functionals. We present an implementation of an efficient seminumerical algorithm in this paper as a solution for this computational bottleneck. The method scales quadratically with respect to the molecular size and the basis set size. The scheme, exact for the purpose of computing the HF exchange energy density, is favored for medium-sized basis sets and can be competitive even for large basis sets with efficient grids when compared with our previous approximate resolution-of-identity scheme. It can also be used as a seminumerical integration scheme to compute the HF exchange energy and matrix on a standard atom-centered grid. Calculations on a series of alanine peptides show that for large basis sets the seminumerical scheme becomes competitive to the conventional analytical method and can be about six times faster for the aug-cc-pvtz basis. The practicality of the algorithm is demonstrated through a local hybrid self-consistent calculation of the acenes-20 molecule.
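
    For orientation, one common convention for the quantity in question (conventions, gauges, and spin factors vary across papers, so treat this as a sketch): the exchange energy density built from occupied orbitals integrates to the HF exchange energy.

```latex
% One common convention (stated for orientation; conventions for the gauge
% and spin factors vary across papers): the HF exchange energy density at a
% point r, built from occupied orbitals phi_i, whose integral over r
% recovers the exchange energy E_x.
\[
e_X(\mathbf{r})
  = -\frac{1}{2} \sum_{i,j}^{\mathrm{occ}}
    \phi_i(\mathbf{r})\,\phi_j(\mathbf{r})
    \int \frac{\phi_i(\mathbf{r}')\,\phi_j(\mathbf{r}')}
              {\lvert \mathbf{r}-\mathbf{r}' \rvert}\, d\mathbf{r}',
\qquad
E_X = \int e_X(\mathbf{r})\, d\mathbf{r}.
\]
```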

  17. Simulation model to analyze the scatter radiation effects on breast cancer diagnosis by CAD system

    NASA Astrophysics Data System (ADS)

    Irita, Ricardo T.; Frere, Annie F.; Fujita, Hiroshi

    2002-05-01

    One of the factors that most affects radiographic image quality is the scatter radiation produced by interaction between the x-rays and the radiographed object. Computer-aided diagnosis (CAD) systems are increasingly used to aid the detection of small details in the breast. Nevertheless, it is not clear how much scatter radiation decreases the efficiency of these systems. This work presents a model to quantify the scatter radiation and relate it to the results of a CAD system used for microcalcification detection. We simulated scatter photons reaching the film and added them to the mammography image. The new images were processed and the alterations of the CAD results were analyzed. The information loss for a breast composed of 80 percent adipose tissue was 0.0561 per centimeter of added breast thickness. Computing the same data for varying proportions of adipose tissue, breasts composed of 90 percent and 70 percent adipose tissue would lose 0.0504 and 0.07559 per added centimeter, respectively. The model can thus add a desired amount of scattered radiation to any image with its own characteristics, allowing efficient analysis of the disturbances it brings to visual inspection or to automatic detection by a CAD system.

  18. CAD-CAM experiences at Bendix Kansas City: the user perspective

    SciTech Connect

    Mentesana, C.

    1983-04-01

    The Bendix Kansas City Division manufactures a variety of precision mechanical, electrical and electronic components and assemblies for the Department of Energy. CAD-CAM has been in use at Bendix for about two years. Development of CAD-CAM is the responsibility of the CAD-CAM Operations group. This group works with users, in-house computer professionals and vendors to provide CAD-CAM products and services.

  19. Energy efficient hybrid computing systems using spin devices

    NASA Astrophysics Data System (ADS)

    Sharad, Mrigank

    Emerging spin-devices like magnetic tunnel junctions (MTJ's), spin-valves and domain wall magnets (DWM) have opened new avenues for spin-based logic design. This work explored potential computing applications which can exploit such devices for higher energy-efficiency and performance. The proposed applications involve hybrid design schemes, where charge-based devices supplement the spin-devices, to gain large benefits at the system level. As an example, lateral spin valves (LSV) involve switching of nanomagnets using spin-polarized current injection through a metallic channel such as Cu. Such spin-torque based devices possess several interesting properties that can be exploited for ultra-low power computation. Analog characteristic of spin current facilitate non-Boolean computation like majority evaluation that can be used to model a neuron. The magneto-metallic neurons can operate at ultra-low terminal voltage of ˜20mV, thereby resulting in small computation power. Moreover, since nano-magnets inherently act as memory elements, these devices can facilitate integration of logic and memory in interesting ways. The spin based neurons can be integrated with CMOS and other emerging devices leading to different classes of neuromorphic/non-Von-Neumann architectures. The spin-based designs involve `mixed-mode' processing and hence can provide very compact and ultra-low energy solutions for complex computation blocks, both digital as well as analog. Such low-power, hybrid designs can be suitable for various data processing applications like cognitive computing, associative memory, and currentmode on-chip global interconnects. Simulation results for these applications based on device-circuit co-simulation framework predict more than ˜100x improvement in computation energy as compared to state of the art CMOS design, for optimal spin-device parameters.

  20. CAD/CAM: influencing the skilled trades

    SciTech Connect

    Plomp, P.W.

    1983-01-01

    CAD/CAM is influencing many of the skilled trades. Computers are permitting an increase in productivity with a resulting need for people with greater mental rather than motor skills. Computers are used for the control of a wide variety of manufacturing operations and the trend is accelerating rapidly. The net result is a smaller need for semi-skilled jobs but an increasing demand for skilled craftsmen.

  1. CAD-CAE in Electrical Machines and Drives Teaching.

    ERIC Educational Resources Information Center

    Belmans, R.; Geysen, W.

    1988-01-01

    Describes the use of computer-aided design (CAD) techniques in teaching the design of electrical motors. Approaches described include three technical viewpoints, such as electromagnetics, thermal, and mechanical aspects. Provides three diagrams, a table, and conclusions. (YP)

  2. Overview of NASA MSFC IEC Multi-CAD Collaboration Capability

    NASA Technical Reports Server (NTRS)

    Moushon, Brian; McDuffee, Patrick

    2005-01-01

    This viewgraph presentation provides an overview of a Design and Data Management System (DDMS) for Computer Aided Design (CAD) collaboration in order to support the Integrated Engineering Capability (IEC) at Marshall Space Flight Center (MSFC).

  4. Single unit CAD/CAM restorations: a literature review.

    PubMed

    Freedman, Michael; Quinn, Frank; O'Sullivan, Michael

    2007-01-01

    Computer-aided design/computer-aided manufacture (CAD/CAM) has been used in dentistry since 1987. Since then, many CAD/CAM systems have been described, which enable the production of chair-side single unit dental restorations. These restorations are of comparable quality to those made by conventional techniques and have some specific advantages, including rapid production, improved wear properties, decreased laboratory fees and improved cross-infection control. This literature review investigates the evidence base for the use of single unit CAD/CAM restorations. Materials, marginal gap, aesthetics, post-operative sensitivity, cementation, cost-effectiveness and longevity are discussed.

  5. Volume-averaged SAR in adult and child head models when using mobile phones: a computational study with detailed CAD-based models of commercial mobile phones.

    PubMed

    Keshvari, Jafar; Heikkilä, Teemu

    2011-12-01

    Previous studies comparing SAR differences in the heads of children and adults used highly simplified generic models or half-wave dipole antennas. The objective of this study was to investigate the SAR difference in the heads of children and adults using realistic EMF sources based on CAD models of commercial mobile phones. Four MRI-based head phantoms were used in the study. CAD models of Nokia 8310 and 6630 mobile phones were used as exposure sources. Commercially available FDTD software was used for the SAR calculations. SAR values were simulated at frequencies of 900 MHz and 1747 MHz for the Nokia 8310, and 900 MHz, 1747 MHz and 1950 MHz for the Nokia 6630. The main finding of this study was that the SAR distribution/variation in the head models depends strongly on the structure of the antenna and the phone model, which suggests that the type of exposure source is the main parameter to focus on in EMF exposure studies. Although previous findings regarding the significant roles of head anatomy, phone position, frequency, local tissue inhomogeneity and tissue composition in the exposed area were confirmed, the SAR values and SAR distributions caused by generic source models cannot be extrapolated to real device exposures. The general conclusion is that, from a volume-averaged SAR point of view, no systematic differences between child and adult heads were found.
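
    As a hedged illustration of the FDTD method on which such SAR simulations rely, the sketch below reduces the scheme to a 1D free-space Yee grid with a soft Gaussian source; the grid size, Courant number, and source parameters are assumptions, and real SAR work uses 3D tissue models and CAD phone geometry, none of which is modeled here.

      #include <cmath>
      #include <cstdio>
      #include <vector>

      int main() {
        const int N = 200, steps = 400;          // grid cells, time steps (assumed)
        std::vector<double> ez(N, 0.0), hy(N - 1, 0.0);
        for (int t = 0; t < steps; ++t) {
          for (int i = 0; i < N - 1; ++i)        // magnetic-field update
            hy[i] += 0.5 * (ez[i + 1] - ez[i]);  // Courant number 0.5
          for (int i = 1; i < N - 1; ++i)        // electric-field update
            ez[i] += 0.5 * (hy[i] - hy[i - 1]);
          ez[N / 2] += std::exp(-0.01 * (t - 30) * (t - 30));  // soft Gaussian source
        }
        std::printf("ez[60] = %f\n", ez[60]);
        return 0;
      }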

  6. Improving robustness and computational efficiency using modern C++

    SciTech Connect

    Paterno, M.; Kowalkowski, J.; Green, C.

    2014-01-01

    For nearly two decades, the C++ programming language has been the dominant programming language for experimental HEP. The publication of ISO/IEC 14882:2011, the current version of the international standard for the C++ programming language, makes available a variety of language and library facilities for improving the robustness, expressiveness, and computational efficiency of C++ code. However, much of the C++ written by the experimental HEP community does not take advantage of the features of the language to obtain these benefits, either due to lack of familiarity with these features or concern that these features must somehow be computationally inefficient. In this paper, we address some of the features of modern C++, and show how they can be used to make programs that are both robust and computationally efficient. We compare and contrast simple yet realistic examples of some common implementation patterns in C, currently-typical C++, and modern C++, and show (when necessary, down to the level of generated assembly language code) the quality of the executable code produced by recent C++ compilers, with the aim of allowing the HEP community to make informed decisions on the costs and benefits of the use of modern C++.
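
    The paper's own examples are not reproduced here; as a minimal sketch of the kind of contrast it describes, the snippet below sets a C-style indexed loop against a modern C++ standard-library equivalent that recent compilers typically lower to the same machine code.

      #include <cstdio>
      #include <numeric>
      #include <vector>

      // Currently-typical C-style accumulation: explicit index management.
      double sum_c_style(const std::vector<double>& v) {
        double total = 0.0;
        for (std::size_t i = 0; i < v.size(); ++i) total += v[i];
        return total;
      }

      // Modern C++: std::accumulate states the intent directly; compilers
      // generally emit code equivalent to the hand-written loop.
      double sum_modern(const std::vector<double>& v) {
        return std::accumulate(v.begin(), v.end(), 0.0);
      }

      int main() {
        const std::vector<double> v{1.0, 2.0, 3.5};
        std::printf("%f %f\n", sum_c_style(v), sum_modern(v));
        return 0;
      }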

  7. Improving robustness and computational efficiency using modern C++

    NASA Astrophysics Data System (ADS)

    Paterno, M.; Kowalkowski, J.; Green, C.

    2014-06-01

    For nearly two decades, the C++ programming language has been the dominant programming language for experimental HEP. The publication of ISO/IEC 14882:2011, the current version of the international standard for the C++ programming language, makes available a variety of language and library facilities for improving the robustness, expressiveness, and computational efficiency of C++ code. However, much of the C++ written by the experimental HEP community does not take advantage of the features of the language to obtain these benefits, either due to lack of familiarity with these features or concern that these features must somehow be computationally inefficient. In this paper, we address some of the features of modern C++, and show how they can be used to make programs that are both robust and computationally efficient. We compare and contrast simple yet realistic examples of some common implementation patterns in C, currently-typical C++, and modern C++, and show (when necessary, down to the level of generated assembly language code) the quality of the executable code produced by recent C++ compilers, with the aim of allowing the HEP community to make informed decisions on the costs and benefits of the use of modern C++.

  8. Computing highly specific and mismatch tolerant oligomers efficiently.

    PubMed

    Yamada, Tomoyuki; Morishita, Shinichi

    2003-01-01

    The sequencing of the genomes of a variety of species and the growing databases containing expressed sequence tags (ESTs) and complementary DNAs (cDNAs) facilitate the design of highly specific oligomers for use as genomic markers, PCR primers, or DNA oligo microarrays. The first step in evaluating the specificity of short oligomers of about twenty units in length is to determine the frequencies at which the oligomers occur. However, for oligomers longer than about fifty units this is not efficient, as they usually have a frequency of only 1. A more suitable procedure is to consider the mismatch tolerance of an oligomer, that is, the minimum number of mismatches that allows a given oligomer to match a sub-sequence other than the target sequence anywhere in the genome or the EST database. However, calculating the exact value of mismatch tolerance is computationally costly and impractical. Therefore, we studied the problem of checking whether an oligomer meets the constraint that its mismatch tolerance is no less than a given threshold. Here, we present an efficient dynamic programming algorithm solution that utilizes suffix and height arrays. We demonstrated the effectiveness of this algorithm by efficiently computing a dense list of oligo-markers applicable to the human genome. Experimental results show that the algorithm runs orders of magnitude faster than Abrahamson's well-known algorithm and is able to enumerate 63% to approximately 79% of qualified oligomers.
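
    The paper's suffix- and height-array algorithm is not reproduced here; the naive sketch below only illustrates the constraint being checked, namely whether every off-target k-mer differs from the oligomer in at least a threshold number of positions. The toy genome and threshold are assumptions, and this brute-force scan is exactly the kind of computation the authors' method avoids.

      #include <iostream>
      #include <string>

      // Returns true if every k-mer occurrence other than the target position
      // differs from `oligo` in at least `threshold` positions.
      bool meets_tolerance(const std::string& genome, const std::string& oligo,
                           std::size_t target_pos, int threshold) {
        const std::size_t k = oligo.size();
        for (std::size_t i = 0; i + k <= genome.size(); ++i) {
          if (i == target_pos) continue;  // skip the intended binding site
          int mismatches = 0;
          for (std::size_t j = 0; j < k && mismatches < threshold; ++j)
            if (genome[i + j] != oligo[j]) ++mismatches;
          if (mismatches < threshold) return false;  // off-target near-match
        }
        return true;
      }

      int main() {
        const std::string genome = "ACGTACGGACGT";  // toy data (assumed)
        std::cout << meets_tolerance(genome, "ACGT", 0, 2) << "\n";  // prints 0
        return 0;
      }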

  9. Computing highly specific and noise-tolerant oligomers efficiently.

    PubMed

    Yamada, Tomoyuki; Morishita, Shinichi

    2004-03-01

    The sequencing of the genomes of a variety of species and the growing databases containing expressed sequence tags (ESTs) and complementary DNAs (cDNAs) facilitate the design of highly specific oligomers for use as genomic markers, PCR primers, or DNA oligo microarrays. The first step in evaluating the specificity of short oligomers of about 20 units in length is to determine the frequencies at which the oligomers occur. However, for oligomers longer than about fifty units this is not efficient, as they usually have a frequency of only 1. A more suitable procedure is to consider the mismatch tolerance of an oligomer, that is, the minimum number of mismatches that allows a given oligomer to match a substring other than the target sequence anywhere in the genome or the EST database. However, calculating the exact value of mismatch tolerance is computationally costly and impractical. Therefore, we studied the problem of checking whether an oligomer meets the constraint that its mismatch tolerance is no less than a given threshold. Here, we present an efficient dynamic programming algorithm solution that utilizes suffix and height arrays. We demonstrated the effectiveness of this algorithm by efficiently computing a dense list of numerous oligo-markers applicable to the human genome. Experimental results show that the algorithm runs orders of magnitude faster than Abrahamson's well-known algorithm and is able to enumerate 65% to approximately 76% of qualified oligomers.

  10. Methods for increased computational efficiency of multibody simulations

    NASA Astrophysics Data System (ADS)

    Epple, Alexander

    This thesis is concerned with the efficient numerical simulation of finite element based flexible multibody systems. Scaling operations are systematically applied to the governing index-3 differential algebraic equations in order to solve the problem of ill conditioning for small time step sizes. The importance of augmented Lagrangian terms is demonstrated. The use of fast sparse solvers is justified for the solution of the linearized equations of motion resulting in significant savings of computational costs. Three time stepping schemes for the integration of the governing equations of flexible multibody systems are discussed in detail. These schemes are the two-stage Radau IIA scheme, the energy decaying scheme, and the generalized-α method. Their formulations are adapted to the specific structure of the governing equations of flexible multibody systems. The efficiency of the time integration schemes is comprehensively evaluated on a series of test problems. Formulations for structural and constraint elements are reviewed and the problem of interpolation of finite rotations in geometrically exact structural elements is revisited. This results in the development of a new improved interpolation algorithm, which preserves the objectivity of the strain field and guarantees stable simulations in the presence of arbitrarily large rotations. Finally, strategies for the spatial discretization of beams in the presence of steep variations in cross-sectional properties are developed. These strategies reduce the number of degrees of freedom needed to accurately analyze beams with discontinuous properties, resulting in improved computational efficiency.

  11. A SINDA thermal model using CAD/CAE technologies

    NASA Technical Reports Server (NTRS)

    Rodriguez, Jose A.; Spencer, Steve

    1992-01-01

    The approach to thermal analysis described in this paper is a technique that incorporates Computer Aided Design (CAD) and Computer Aided Engineering (CAE) to develop a thermal model that has the advantages of Finite Element Methods (FEM) without abandoning the unique advantages of Finite Difference Methods (FDM) in the analysis of thermal systems. The incorporation of existing CAD geometry, the powerful use of pre- and post-processors, and the ability to do interdisciplinary analysis are described.

  12. Improving the Efficiency of Abdominal Aortic Aneurysm Wall Stress Computations

    PubMed Central

    Zelaya, Jaime E.; Goenezen, Sevan; Dargon, Phong T.; Azarbal, Amir-Farzin; Rugonyi, Sandra

    2014-01-01

    An abdominal aortic aneurysm is a pathological dilation of the abdominal aorta, which carries a high mortality rate if ruptured. The most commonly used surrogate marker of rupture risk is the maximal transverse diameter of the aneurysm. More recent studies suggest that wall stress from models of patient-specific aneurysm geometries extracted, for instance, from computed tomography images may be a more accurate predictor of rupture risk and an important factor in AAA size progression. However, quantification of wall stress is typically computationally intensive and time-consuming, mainly due to the nonlinear mechanical behavior of the abdominal aortic aneurysm walls. These difficulties have limited the potential of computational models in clinical practice. To facilitate computation of wall stresses, we propose to use a linear approach that ensures equilibrium of wall stresses in the aneurysms. This proposed linear model approach is easy to implement and eliminates the burden of nonlinear computations. To assess the accuracy of our proposed approach to compute wall stresses, results from idealized and patient-specific model simulations were compared to those obtained using conventional approaches and to those of a hypothetical, reference abdominal aortic aneurysm model. For the reference model, wall mechanical properties and the initial unloaded and unstressed configuration were assumed to be known, and the resulting wall stresses were used as reference for comparison. Our proposed linear approach accurately approximates wall stresses for varying model geometries and wall material properties. Our findings suggest that the proposed linear approach could be used as an effective, efficient, easy-to-use clinical tool to estimate patient-specific wall stresses. PMID:25007052

  13. Improving the efficiency of abdominal aortic aneurysm wall stress computations.

    PubMed

    Zelaya, Jaime E; Goenezen, Sevan; Dargon, Phong T; Azarbal, Amir-Farzin; Rugonyi, Sandra

    2014-01-01

    An abdominal aortic aneurysm is a pathological dilation of the abdominal aorta, which carries a high mortality rate if ruptured. The most commonly used surrogate marker of rupture risk is the maximal transverse diameter of the aneurysm. More recent studies suggest that wall stress from models of patient-specific aneurysm geometries extracted, for instance, from computed tomography images may be a more accurate predictor of rupture risk and an important factor in AAA size progression. However, quantification of wall stress is typically computationally intensive and time-consuming, mainly due to the nonlinear mechanical behavior of the abdominal aortic aneurysm walls. These difficulties have limited the potential of computational models in clinical practice. To facilitate computation of wall stresses, we propose to use a linear approach that ensures equilibrium of wall stresses in the aneurysms. This proposed linear model approach is easy to implement and eliminates the burden of nonlinear computations. To assess the accuracy of our proposed approach to compute wall stresses, results from idealized and patient-specific model simulations were compared to those obtained using conventional approaches and to those of a hypothetical, reference abdominal aortic aneurysm model. For the reference model, wall mechanical properties and the initial unloaded and unstressed configuration were assumed to be known, and the resulting wall stresses were used as reference for comparison. Our proposed linear approach accurately approximates wall stresses for varying model geometries and wall material properties. Our findings suggest that the proposed linear approach could be used as an effective, efficient, easy-to-use clinical tool to estimate patient-specific wall stresses.

  14. Full-mouth rehabilitation with monolithic CAD/CAM-fabricated hybrid and all-ceramic materials: A case report and 3-year follow up.

    PubMed

    Selz, Christian F; Vuck, Alexander; Guess, Petra C

    2016-02-01

    Esthetic full-mouth rehabilitation represents a great challenge for clinicians and dental technicians. Computer-aided design/computer-assisted manufacture (CAD/CAM) technology and novel ceramic materials in combination with adhesive cementation provide a reliable, predictable, and economic workflow. Polychromatic feldspathic CAD/CAM ceramics that are specifically designed for anterior indications result in superior esthetics, whereas novel CAD/CAM hybrid ceramics provide sufficient fracture resistance and absorption of the occlusal load in posterior areas. Screw-retained monolithic CAD/CAM lithium disilicate crowns (ie, hybrid abutment crowns) represent a reliable and time- and cost-efficient prosthetic implant solution. This case report details a CAD/CAM approach to the full-arch rehabilitation of a 65-year-old patient with tooth- and implant-supported restorations and provides an overview of the applied CAD/CAM materials and the utilized chairside intraoral scanner. The esthetics, functional occlusion, and gingival and peri-implant tissues remained stable over a follow-up period of 3 years. No signs of fracture within the restorations were observed.

  15. Adding computationally efficient realism to Monte Carlo turbulence simulation

    NASA Technical Reports Server (NTRS)

    Campbell, C. W.

    1985-01-01

    Frequently in aerospace vehicle flight simulation, random turbulence is generated using the assumption that the craft is small compared to the length scales of turbulence. The turbulence is presumed to vary only along the flight path of the vehicle but not across the vehicle span. The addition of the realism of three-dimensionality is a worthy goal, but any such attempt will not gain acceptance in the simulator community unless it is computationally efficient. A concept for adding three-dimensional realism with a minimum of computational complexity is presented. The concept involves the use of close rational approximations to irrational spectra and cross-spectra so that systems of stable, explicit difference equations can be used to generate the turbulence.
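
    As a hedged sketch of the difference-equation idea (not the paper's actual spectra or cross-spectra), the snippet below drives a stable, explicit first-order recursion with white noise to produce correlated turbulence-like samples; the time step, correlation time, and intensity are illustrative assumptions.

      #include <cmath>
      #include <cstdio>
      #include <random>

      int main() {
        std::mt19937 rng(42);
        std::normal_distribution<double> white(0.0, 1.0);

        const double dt = 0.01;    // time step [s] (assumed)
        const double tau = 0.5;    // correlation time [s] (assumed)
        const double sigma = 1.0;  // turbulence intensity (assumed)

        // Discretized first-order Gauss-Markov process:
        //   u[n+1] = a*u[n] + b*w[n], with a = exp(-dt/tau) and b chosen so
        //   that the stationary variance equals sigma^2.
        const double a = std::exp(-dt / tau);
        const double b = sigma * std::sqrt(1.0 - a * a);

        double u = 0.0;
        for (int n = 0; n < 10; ++n) {
          u = a * u + b * white(rng);
          std::printf("%d %f\n", n, u);
        }
        return 0;
      }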

  16. Efficient simulation of open quantum system in duality quantum computing

    NASA Astrophysics Data System (ADS)

    Wei, Shi-Jie; Long, Gui-Lu

    2016-11-01

    Practical quantum systems are open systems due to interactions with their environment. Understanding the evolution of open-system dynamics is important for understanding quantum noise processes, designing quantum error-correcting codes, and performing simulations of open quantum systems. Here we propose an efficient quantum algorithm for simulating the evolution of an open quantum system on a duality quantum computer. In contrast to the unitary evolution in a usual quantum computer, the evolution operator in a duality quantum computer is a linear combination of unitary operators. In this duality algorithm, the time evolution of the open quantum system is realized by using Kraus operators, which are naturally realized in duality quantum computing. Compared to Lloyd's quantum algorithm [Science 273, 1073 (1996)], the dependence on the dimension of the open quantum system in our algorithm is decreased. Moreover, our algorithm uses a truncated Taylor series of the evolution operators, exponentially improving the precision compared with existing quantum simulation algorithms based on unitary evolution operations.

  17. Experiences With Efficient Methodologies for Teaching Computer Programming to Geoscientists

    NASA Astrophysics Data System (ADS)

    Jacobs, Christian T.; Gorman, Gerard J.; Rees, Huw E.; Craig, Lorraine E.

    2016-08-01

    Computer programming was once thought of as a skill required only by professional software developers. But today, given the ubiquitous nature of computation and data science, it is quickly becoming necessary for all scientists and engineers to have at least a basic knowledge of how to program. Teaching how to program, particularly to those students with little or no computing background, is well known to be a difficult task. However, there is also a wealth of evidence-based practices for teaching programming skills which can be applied to greatly improve learning outcomes and the student experience. Adopting these practices naturally gives rise to greater learning efficiency; this is critical if programming is to be integrated into an already busy geoscience curriculum. This paper considers an undergraduate computer programming course, run during the last 5 years in the Department of Earth Science and Engineering at Imperial College London. The teaching methodologies that were used each year are discussed alongside the challenges that were encountered, and how the methodologies affected student performance. Anonymised student marks and feedback are used to highlight this, and also how the adjustments made to the course eventually resulted in a highly effective learning environment.

  18. Increasing Computational Efficiency of Cochlear Models Using Boundary Layers

    PubMed Central

    Alkhairy, Samiya A.; Shera, Christopher A.

    2016-01-01

    Our goal is to develop methods to improve the efficiency of computational models of the cochlea for applications that require the solution accurately only within a basal region of interest, specifically by decreasing the number of spatial sections needed for simulation of the problem with good accuracy. We design algebraic spatial and parametric transformations to computational models of the cochlea. These transformations are applied after the basal region of interest and allow for spatial preservation, driven by the natural characteristics of approximate spatial causality of cochlear models. The project is of foundational nature and hence the goal is to design, characterize and develop an understanding and framework rather than optimization and globalization. Our scope is as follows: designing the transformations; understanding the mechanisms by which computational load is decreased for each transformation; development of performance criteria; characterization of the results of applying each transformation to a specific physical model and discretization and solution schemes. In this manuscript, we introduce one of the proposed methods (complex spatial transformation) for a case study physical model that is a linear, passive, transmission line model in which the various abstraction layers (electric parameters, filter parameters, wave parameters) are clearer than other models. This is conducted in the frequency domain for multiple frequencies using a second order finite difference scheme for discretization and direct elimination for solving the discrete system of equations. The performance is evaluated using two developed simulative criteria for each of the transformations. In conclusion, the developed methods serve to increase efficiency of a computational traveling wave cochlear model when spatial preservation can hold, while maintaining good correspondence with the solution of interest and good accuracy, for applications in which the interest is in the solution

  19. Increasing computational efficiency of cochlear models using boundary layers

    NASA Astrophysics Data System (ADS)

    Alkhairy, Samiya A.; Shera, Christopher A.

    2015-12-01

    Our goal is to develop methods to improve the efficiency of computational models of the cochlea for applications that require the solution accurately only within a basal region of interest, specifically by decreasing the number of spatial sections needed for simulation of the problem with good accuracy. We design algebraic spatial and parametric transformations to computational models of the cochlea. These transformations are applied after the basal region of interest and allow for spatial preservation, driven by the natural characteristics of approximate spatial causality of cochlear models. The project is of foundational nature and hence the goal is to design, characterize and develop an understanding and framework rather than optimization and globalization. Our scope is as follows: designing the transformations; understanding the mechanisms by which computational load is decreased for each transformation; development of performance criteria; characterization of the results of applying each transformation to a specific physical model and discretization and solution schemes. In this manuscript, we introduce one of the proposed methods (complex spatial transformation) for a case study physical model that is a linear, passive, transmission line model in which the various abstraction layers (electric parameters, filter parameters, wave parameters) are clearer than other models. This is conducted in the frequency domain for multiple frequencies using a second order finite difference scheme for discretization and direct elimination for solving the discrete system of equations. The performance is evaluated using two developed simulative criteria for each of the transformations. In conclusion, the developed methods serve to increase efficiency of a computational traveling wave cochlear model when spatial preservation can hold, while maintaining good correspondence with the solution of interest and good accuracy, for applications in which the interest is in the solution

  20. Efficient quantum algorithm for computing n-time correlation functions.

    PubMed

    Pedernales, J S; Di Candia, R; Egusquiza, I L; Casanova, J; Solano, E

    2014-07-11

    We propose a method for computing n-time correlation functions of arbitrary spinorial, fermionic, and bosonic operators, consisting of an efficient quantum algorithm that encodes these correlations in an initially added ancillary qubit for probe and control tasks. For spinorial and fermionic systems, the reconstruction of arbitrary n-time correlation functions requires the measurement of two ancilla observables, while for bosonic variables time derivatives of the same observables are needed. Finally, we provide examples applicable to different quantum platforms within the framework of linear response theory.

  1. Efficient Parallel Kernel Solvers for Computational Fluid Dynamics Applications

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He

    1997-01-01

    Distributed-memory parallel computers dominate today's parallel computing arena. These machines, such as the Intel Paragon, IBM SP2, and Cray Origin2000, have successfully delivered high-performance computing power for solving some of the so-called "grand-challenge" problems. Despite initial success, parallel machines have not been widely accepted in production engineering environments due to the complexity of parallel programming. On a parallel computing system, a task has to be partitioned and distributed appropriately among processors to reduce communication cost and to attain load balance. More importantly, even with careful partitioning and mapping, the performance of an algorithm may still be unsatisfactory, since conventional sequential algorithms may be serial in nature and may not be implemented efficiently on parallel machines. In many cases, new algorithms have to be introduced to increase parallel performance. In order to achieve optimal performance, in addition to partitioning and mapping, a careful performance study should be conducted for a given application to find a good algorithm-machine combination. This process, however, is usually painful and elusive. The goal of this project is to design and develop efficient parallel algorithms for highly accurate Computational Fluid Dynamics (CFD) simulations and other engineering applications. The work plan is to 1) develop highly accurate parallel numerical algorithms, 2) conduct preliminary testing to verify the effectiveness and potential of these algorithms, and 3) incorporate newly developed algorithms into actual simulation packages. The work plan has been well achieved. Two highly accurate, efficient Poisson solvers have been developed and tested based on two different approaches: (1) adopting a mathematical geometry which has a better capacity to describe the fluid, and (2) using a compact scheme to gain high-order accuracy in numerical discretization. The previously developed Parallel Diagonal Dominant (PDD) algorithm

  2. IMPROVING TACONITE PROCESSING PLANT EFFICIENCY BY COMPUTER SIMULATION, Final Report

    SciTech Connect

    William M. Bond; Salih Ersayin

    2007-03-30

    This project involved industrial scale testing of a mineral processing simulator to improve the efficiency of a taconite processing plant, namely the Minorca mine. The Concentrator Modeling Center at the Coleraine Minerals Research Laboratory, University of Minnesota Duluth, enhanced the capabilities of available software, Usim Pac, by developing mathematical models needed for accurate simulation of taconite plants. This project provided funding for this technology to prove itself in the industrial environment. As the first step, data representing existing plant conditions were collected by sampling and sample analysis. Data were then balanced and provided a basis for assessing the efficiency of individual devices and the plant, and also for performing simulations aimed at improving plant efficiency. Performance evaluation served as a guide in developing alternative process strategies for more efficient production. A large number of computer simulations were then performed to quantify the benefits and effects of implementing these alternative schemes. Modification of makeup ball size was selected as the most feasible option for the target performance improvement. This was combined with replacement of existing hydrocyclones with more efficient ones. After plant implementation of these modifications, plant sampling surveys were carried out to validate findings of the simulation-based study. Plant data showed very good agreement with the simulated data, confirming results of simulation. After the implementation of modifications in the plant, several upstream bottlenecks became visible. Despite these bottlenecks limiting full capacity, concentrator energy improvement of 7% was obtained. Further improvements in energy efficiency are expected in the near future. The success of this project demonstrated the feasibility of a simulation-based approach. Currently, the Center provides simulation-based service to all the iron ore mining companies operating in northern

  3. CAD-CAM at Bendix Kansas city: the BICAM system

    SciTech Connect

    Witte, D.R.

    1983-04-01

    Bendix Kansas City Division (BEKC) has been involved in Computer Aided Manufacturing (CAM) technology since the late 1950s, when numerical control (N/C) analysts installed computers to aid in N/C tape preparation for numerically controlled machines. Computer Aided Design (CAD) technology was introduced in 1976, when a number of 2D turnkey drafting stations were procured for printed wiring board (PWB) drawing definition and maintenance. In June 1980, CAD-CAM Operations was formed to incorporate an integrated CAD-CAM capability into Bendix operations. In March 1982, a ninth division, Computer Integrated Manufacturing (CIM), was added to the existing eight divisions at Bendix. CIM is a small organization, reporting directly to the general manager, with responsibility for coordinating the overall integration of computer-aided systems at Bendix. As a long-range plan, CIM has adopted a National Bureau of Standards (NBS) architecture titled Factory of the Future. Conceptually, the Bendix CAD-CAM system has a centrally located database which can be accessed by both CAD and CAM tools, processes, and personnel, thus forming an integrated Computer Aided Engineering (CAE) system. This is a key requirement of the Bendix CAD-CAM system that will be presented in more detail.

  4. High-frequency CAD-based scattering model: SERMAT

    NASA Astrophysics Data System (ADS)

    Goupil, D.; Boutillier, M.

    1991-09-01

    Specifications for an industrial radar cross section (RCS) calculation code are given: it must be able to exchange data with many computer-aided design (CAD) systems, it must be fast, and it must have powerful graphic tools. Classical physical optics (PO) and equivalent currents (EC) techniques have long proven their efficiency on simple objects. Difficult geometric problems occur when objects with very complex shapes have to be computed; only a specific geometric code can solve these problems. We have established that, once these problems have been solved: (1) PO and EC give good results on complex objects that are large compared to the wavelength; and (2) the implementation of these techniques in a software package (SERMAT) allows RCS calculations fast and sufficiently precise to meet industry requirements in the domain of stealth.

  5. A/E/C Graphics Standard: Release 2.0 (formerly titled CAD Drafting Standard)

    DTIC Science & Technology

    2015-08-01

    Covers Building Information Modeling (BIM), Civil Information Modeling (CIM), and Computer-Aided Design (CAD), and defines the acronyms A/E/C (Architecture, Engineering, and Construction), BIM (Building Information Modeling), CAD (Computer-Aided Design), and CIM (Civil Information Modeling). It is through the collection and documentation of these practices that consistent deliverables are defined.

  6. TinkerCell: modular CAD tool for synthetic biology

    PubMed Central

    Chandran, Deepak; Bergmann, Frank T; Sauro, Herbert M

    2009-01-01

    Background: Synthetic biology brings together concepts and techniques from engineering and biology. In this field, computer-aided design (CAD) is necessary in order to bridge the gap between computational modeling and biological data. Using a CAD application, it would be possible to construct models using available biological "parts" and directly generate the DNA sequence that represents the model, thus increasing the efficiency of design and construction of synthetic networks. Results: An application named TinkerCell has been developed in order to serve as a CAD tool for synthetic biology. TinkerCell is a visual modeling tool that supports a hierarchy of biological parts. Each part in this hierarchy consists of a set of attributes that define the part, such as sequence or rate constants. Models that are constructed using these parts can be analyzed using various third-party C and Python programs that are hosted by TinkerCell via an extensive C and Python application programming interface (API). TinkerCell supports the notion of a module, which is a network with an interface. Such modules can be connected to each other, forming larger modular networks. TinkerCell is a free and open-source project under the Berkeley Software Distribution license. Downloads, documentation, and tutorials are available at http://www.tinkercell.com. Conclusion: An ideal CAD application for engineering biological systems would provide features such as: building and simulating networks, analyzing robustness of networks, and searching databases for components that meet the design criteria. At the current state of synthetic biology, there are no established methods for measuring robustness or identifying components that fit a design. The same is true for databases of biological parts. TinkerCell's flexible modeling framework allows it to cope with changes in the field. Such changes may involve the way parts are characterized or the way synthetic networks are modeled and analyzed computationally. TinkerCell can readily

  7. TinkerCell: modular CAD tool for synthetic biology.

    PubMed

    Chandran, Deepak; Bergmann, Frank T; Sauro, Herbert M

    2009-10-29

    Synthetic biology brings together concepts and techniques from engineering and biology. In this field, computer-aided design (CAD) is necessary in order to bridge the gap between computational modeling and biological data. Using a CAD application, it would be possible to construct models using available biological "parts" and directly generate the DNA sequence that represents the model, thus increasing the efficiency of design and construction of synthetic networks. An application named TinkerCell has been developed in order to serve as a CAD tool for synthetic biology. TinkerCell is a visual modeling tool that supports a hierarchy of biological parts. Each part in this hierarchy consists of a set of attributes that define the part, such as sequence or rate constants. Models that are constructed using these parts can be analyzed using various third-party C and Python programs that are hosted by TinkerCell via an extensive C and Python application programming interface (API). TinkerCell supports the notion of a module, which is a network with an interface. Such modules can be connected to each other, forming larger modular networks. TinkerCell is a free and open-source project under the Berkeley Software Distribution license. Downloads, documentation, and tutorials are available at http://www.tinkercell.com. An ideal CAD application for engineering biological systems would provide features such as: building and simulating networks, analyzing robustness of networks, and searching databases for components that meet the design criteria. At the current state of synthetic biology, there are no established methods for measuring robustness or identifying components that fit a design. The same is true for databases of biological parts. TinkerCell's flexible modeling framework allows it to cope with changes in the field. Such changes may involve the way parts are characterized or the way synthetic networks are modeled and analyzed computationally. TinkerCell can readily accept

  8. A computationally efficient modelling of laminar separation bubbles

    NASA Technical Reports Server (NTRS)

    Dini, Paolo; Maughmer, Mark D.

    1990-01-01

    In predicting the aerodynamic characteristics of airfoils operating at low Reynolds numbers, it is often important to account for the effects of laminar (transitional) separation bubbles. Previous approaches to the modelling of this viscous phenomenon range from fast but sometimes unreliable empirical correlations for the length of the bubble and the associated increase in momentum thickness, to more accurate but significantly slower displacement-thickness iteration methods employing inverse boundary-layer formulations in the separated regions. Since the penalty in computational time associated with the more general methods is unacceptable for airfoil design applications, use of an accurate yet computationally efficient model is highly desirable. To this end, a semi-empirical bubble model was developed and incorporated into the Eppler and Somers airfoil design and analysis program. Generality and efficiency were achieved by successfully approximating the local viscous/inviscid interaction, the transition location, and the turbulent reattachment process within the framework of an integral boundary-layer method. Comparisons of the predicted aerodynamic characteristics with experimental measurements for several airfoils show excellent and consistent agreement for Reynolds numbers from 2,000,000 down to 100,000.

  9. Efficient Hessian computation using sparse matrix derivatives in RAM notation.

    PubMed

    von Oertzen, Timo; Brick, Timothy R

    2014-06-01

    This article proposes a new, more efficient method to compute the minus two log likelihood, its gradient, and the Hessian for structural equation models (SEMs) in reticular action model (RAM) notation. The method exploits the beneficial aspect of RAM notation that the matrix derivatives used in RAM are sparse. For an SEM with K variables, P parameters, and P' entries in the symmetrical or asymmetrical matrix of the RAM notation filled with parameters, the asymptotic run time of the algorithm is O(P'K^2 + P^2K^2 + K^3). The naive implementation and numerical implementations are both O(P^2K^3), so that for typical applications of SEM, the proposed algorithm is asymptotically K times faster than the best previously known algorithm. A simulation comparison with a numerical algorithm shows that the asymptotic efficiency is transferred to an applied computational advantage that is crucial for the application of maximum likelihood estimation, even in small, but especially in moderate or large, SEMs.

  10. Computationally efficient angles-only tracking with particle flow filters

    NASA Astrophysics Data System (ADS)

    Costa, Russell; Wettergren, Thomas A.

    2015-05-01

    Particle filters represent the current state of the art in nonlinear, non-Gaussian filtering. They are easy to implement and have been applied in numerous domains. That being said, particle filters can be impractical for problems with state dimensions greater than four, if some other problem-specific efficiencies can't be identified. This "curse of dimensionality" makes particle filters a computationally burdensome approach, and the associated re-sampling makes parallel processing difficult. In the past several years an alternative to particle filters, dubbed particle flows, has emerged as a (potentially) much more efficient method for solving nonlinear, non-Gaussian problems. Particle flow filtering (unlike particle filtering) is a deterministic approach; however, its implementation entails solving an under-determined system of partial differential equations which has infinitely many potential solutions. In this work we apply both filters to angles-only target motion analysis problems in order to quantify the computational gains (if any) over standard particle filtering approaches. In particular we focus on the simplest form of particle flow filter, known as the exact particle flow filter. This form assumes a Gaussian prior and likelihood function for the unknown target states and is then linearized, as is standard practice for extended Kalman filters. We implement both particle filters and particle flows and perform numerous numerical experiments for comparison.

  11. Efficient Computation of the Topology of Level Sets

    SciTech Connect

    Pascucci, V; Cole-McLaughlin, K

    2002-07-19

    This paper introduces two efficient algorithms that compute the Contour Tree of a 3D scalar field F and its augmented version with the Betti numbers of each isosurface. The Contour Tree is a fundamental data structure in scientific visualization that is used to pre-process the domain mesh to allow optimal computation of isosurfaces with minimal storage overhead. The Contour Tree can also be used to build user interfaces reporting the complete topological characterization of a scalar field, as shown in Figure 1. In the first part of the paper we present a new scheme that augments the Contour Tree with the Betti numbers of each isocontour in linear time. We show how to extend the scheme introduced in [3] with the Betti number computation without increasing its complexity. Thus we improve the time complexity of our previous approach [8] from O(m log m) to O(n log n + m), where m is the number of tetrahedra and n is the number of vertices in the domain of F. In the second part of the paper we introduce a new divide-and-conquer algorithm that computes the Augmented Contour Tree for scalar fields defined on rectilinear grids. The central part of the scheme computes the output contour tree by merging two intermediate contour trees and is independent of the interpolant. In this way we confine any knowledge regarding a specific interpolant to an oracle that computes the tree for a single cell. We have implemented this oracle for the trilinear interpolant and plan to replace it with higher-order interpolants when needed. The complexity of the scheme is O(n + t log n), where t is the number of critical points of F. This allows, for the first time, computation of the Contour Tree in linear time in many practical cases where t = O(n^(1-ε)). We report the running times for a parallel implementation of our algorithm, showing good scalability with the number of processors.
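
    As a hedged illustration of the sweep-and-union machinery behind join/contour tree construction (and not the paper's algorithms), the sketch below processes a toy 1D scalar field in increasing order of value, uniting each vertex with already-swept neighbors; vertices that start a new component correspond to branch births (minima) in the join tree. A full contour tree also needs the symmetric split tree and a merge step, omitted here.

      #include <algorithm>
      #include <cstdio>
      #include <numeric>
      #include <vector>

      int find(std::vector<int>& p, int x) { return p[x] == x ? x : p[x] = find(p, p[x]); }

      int main() {
        const std::vector<double> f = {3.0, 1.0, 4.0, 1.5, 5.0, 0.5};  // toy field
        const int n = static_cast<int>(f.size());
        std::vector<int> order(n), uf(n);
        std::iota(order.begin(), order.end(), 0);
        std::iota(uf.begin(), uf.end(), 0);
        std::sort(order.begin(), order.end(),
                  [&](int a, int b) { return f[a] < f[b]; });  // sweep by value

        std::vector<bool> swept(n, false);
        for (int v : order) {
          swept[v] = true;
          bool merged = false;
          const int nbs[2] = {v - 1, v + 1};  // 1D neighbors
          for (int nb : nbs)
            if (nb >= 0 && nb < n && swept[nb]) {
              uf[find(uf, v)] = find(uf, nb);
              merged = true;
            }
          if (!merged) std::printf("minimum (new join-tree branch) at vertex %d\n", v);
        }
        return 0;
      }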

  12. Computationally efficient implementation of combustion chemistry in parallel PDF calculations

    SciTech Connect

    Lu, Liuyan; Lantz, Steven R.; Ren, Zhuyin; Pope, Stephen B.

    2009-08-20

    In parallel calculations of combustion processes with realistic chemistry, the serial in situ adaptive tabulation (ISAT) algorithm [S.B. Pope, Computationally efficient implementation of combustion chemistry using in situ adaptive tabulation, Combustion Theory and Modelling, 1 (1997) 41-63; L. Lu, S.B. Pope, An improved algorithm for in situ adaptive tabulation, Journal of Computational Physics 228 (2009) 361-386] substantially speeds up the chemistry calculations on each processor. To improve the parallel efficiency of large ensembles of such calculations in parallel computations, in this work, the ISAT algorithm is extended to the multi-processor environment, with the aim of minimizing the wall clock time required for the whole ensemble. Parallel ISAT strategies are developed by combining the existing serial ISAT algorithm with different distribution strategies, namely purely local processing (PLP), uniformly random distribution (URAN), and preferential distribution (PREF). The distribution strategies enable the queued load redistribution of chemistry calculations among processors using message passing. They are implemented in the software x2f_mpi, which is a Fortran 95 library for facilitating many parallel evaluations of a general vector function. The relative performance of the parallel ISAT strategies is investigated in different computational regimes via the PDF calculations of multiple partially stirred reactors burning methane/air mixtures. The results show that the performance of ISAT with a fixed distribution strategy strongly depends on certain computational regimes, based on how much memory is available and how much overlap exists between tabulated information on different processors. No one fixed strategy consistently achieves good performance in all the regimes. Therefore, an adaptive distribution strategy, which blends PLP, URAN and PREF, is devised and implemented. It yields consistently good performance in all regimes. In the adaptive

  13. Computationally efficient implementation of combustion chemistry in parallel PDF calculations

    NASA Astrophysics Data System (ADS)

    Lu, Liuyan; Lantz, Steven R.; Ren, Zhuyin; Pope, Stephen B.

    2009-08-01

    In parallel calculations of combustion processes with realistic chemistry, the serial in situ adaptive tabulation (ISAT) algorithm [S.B. Pope, Computationally efficient implementation of combustion chemistry using in situ adaptive tabulation, Combustion Theory and Modelling, 1 (1997) 41-63; L. Lu, S.B. Pope, An improved algorithm for in situ adaptive tabulation, Journal of Computational Physics 228 (2009) 361-386] substantially speeds up the chemistry calculations on each processor. To improve the parallel efficiency of large ensembles of such calculations in parallel computations, in this work, the ISAT algorithm is extended to the multi-processor environment, with the aim of minimizing the wall clock time required for the whole ensemble. Parallel ISAT strategies are developed by combining the existing serial ISAT algorithm with different distribution strategies, namely purely local processing (PLP), uniformly random distribution (URAN), and preferential distribution (PREF). The distribution strategies enable the queued load redistribution of chemistry calculations among processors using message passing. They are implemented in the software x2f_mpi, which is a Fortran 95 library for facilitating many parallel evaluations of a general vector function. The relative performance of the parallel ISAT strategies is investigated in different computational regimes via the PDF calculations of multiple partially stirred reactors burning methane/air mixtures. The results show that the performance of ISAT with a fixed distribution strategy strongly depends on certain computational regimes, based on how much memory is available and how much overlap exists between tabulated information on different processors. No one fixed strategy consistently achieves good performance in all the regimes. Therefore, an adaptive distribution strategy, which blends PLP, URAN and PREF, is devised and implemented. It yields consistently good performance in all regimes. In the adaptive parallel

  14. EXCAVATOR: a computer program for efficiently mining gene expression data.

    PubMed

    Xu, Dong; Olman, Victor; Wang, Li; Xu, Ying

    2003-10-01

    Massive amounts of gene expression data are generated using microarrays for functional studies of genes and gene expression data clustering is a useful tool for studying the functional relationship among genes in a biological process. We have developed a computer package EXCAVATOR for clustering gene expression profiles based on our new framework for representing gene expression data as a minimum spanning tree. EXCAVATOR uses a number of rigorous and efficient clustering algorithms. This program has a number of unique features, including capabilities for: (i) data-constrained clustering; (ii) identification of genes with similar expression profiles to pre-specified seed genes; (iii) cluster identification from a noisy background; (iv) computational comparison between different clustering results of the same data set. EXCAVATOR can be run from a Unix/Linux/DOS shell, from a Java interface or from a Web server. The clustering results can be visualized as colored figures and 2-dimensional plots. Moreover, EXCAVATOR provides a wide range of options for data formats, distance measures, objective functions, clustering algorithms, methods to choose number of clusters, etc. The effectiveness of EXCAVATOR has been demonstrated on several experimental data sets. Its performance compares favorably against the popular K-means clustering method in terms of clustering quality and computing time.
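
    EXCAVATOR's actual algorithms are not reproduced here; the sketch below only shows the core of the minimum-spanning-tree view of clustering it builds on: under single linkage, stopping Kruskal's algorithm after n-k unions is equivalent to cutting the k-1 heaviest MST edges, leaving k clusters. The toy profiles, Euclidean distance, and cluster count are assumptions.

      #include <algorithm>
      #include <cmath>
      #include <cstdio>
      #include <numeric>
      #include <vector>

      struct Edge { int u, v; double w; };

      int find(std::vector<int>& p, int x) { return p[x] == x ? x : p[x] = find(p, p[x]); }

      int main() {
        // Four toy "expression profiles" measured under three conditions.
        const std::vector<std::vector<double>> prof = {
            {1.0, 2.0, 3.0}, {1.1, 2.1, 2.9}, {5.0, 5.0, 5.0}, {5.2, 4.9, 5.1}};
        const int n = static_cast<int>(prof.size()), k = 2;  // k clusters (assumed)

        std::vector<Edge> edges;
        for (int i = 0; i < n; ++i)
          for (int j = i + 1; j < n; ++j) {
            double d = 0.0;  // accumulate squared differences between profiles
            for (std::size_t c = 0; c < prof[i].size(); ++c)
              d += (prof[i][c] - prof[j][c]) * (prof[i][c] - prof[j][c]);
            edges.push_back({i, j, std::sqrt(d)});  // Euclidean distance
          }
        std::sort(edges.begin(), edges.end(),
                  [](const Edge& a, const Edge& b) { return a.w < b.w; });

        // Kruskal's algorithm, stopped after n-k unions: the k-1 heaviest
        // MST edges are never added, so k connected components remain.
        std::vector<int> parent(n);
        std::iota(parent.begin(), parent.end(), 0);
        int joined = 0;
        for (const Edge& e : edges) {
          const int ru = find(parent, e.u), rv = find(parent, e.v);
          if (ru != rv && joined < n - k) { parent[ru] = rv; ++joined; }
        }
        for (int i = 0; i < n; ++i)
          std::printf("profile %d -> cluster %d\n", i, find(parent, i));
        return 0;
      }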

  15. Efficient parameter sensitivity computation for spatially extended reaction networks

    NASA Astrophysics Data System (ADS)

    Lester, C.; Yates, C. A.; Baker, R. E.

    2017-01-01

    Reaction-diffusion models are widely used to study spatially extended chemical reaction systems. In order to understand how the dynamics of a reaction-diffusion model are affected by changes in its input parameters, efficient methods for computing parametric sensitivities are required. In this work, we focus on the stochastic models of spatially extended chemical reaction systems that involve partitioning the computational domain into voxels. Parametric sensitivities are often calculated using Monte Carlo techniques that are typically computationally expensive; however, variance reduction techniques can decrease the number of Monte Carlo simulations required. By exploiting the characteristic dynamics of spatially extended reaction networks, we are able to adapt existing finite difference schemes to robustly estimate parametric sensitivities in a spatially extended network. We show that algorithmic performance depends on the dynamics of the given network and the choice of summary statistics. We then describe a hybrid technique that dynamically chooses the most appropriate simulation method for the network of interest. Our method is tested for functionality and accuracy in a range of different scenarios.
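
    To make the variance-reduction idea concrete, the hedged sketch below estimates a parametric sensitivity of a toy (non-spatial) birth-death process by finite differences with common random numbers, so that most simulation noise cancels in the difference; the model, rates, and step size are illustrative assumptions, not the authors' reaction-diffusion setting.

      #include <cmath>
      #include <cstdio>
      #include <random>

      // Gillespie simulation of a birth-death process (birth rate kb, death
      // rate kd*x) up to time T; returns the final population.
      int simulate(double kb, double kd, double T, unsigned seed) {
        std::mt19937 rng(seed);
        std::uniform_real_distribution<double> u(0.0, 1.0);
        int x = 0;
        double t = 0.0;
        while (true) {
          const double a_birth = kb, a_death = kd * x, a0 = a_birth + a_death;
          t += -std::log(u(rng)) / a0;  // exponential waiting time
          if (t > T) break;
          if (u(rng) * a0 < a_birth) ++x; else --x;
        }
        return x;
      }

      int main() {
        const double kb = 10.0, kd = 1.0, T = 5.0, h = 0.1;  // assumed values
        const int N = 10000;
        double sum = 0.0;
        for (int i = 0; i < N; ++i) {
          const unsigned seed = 1000 + i;  // same seed => common random numbers
          sum += simulate(kb + h, kd, T, seed) - simulate(kb, kd, T, seed);
        }
        // Estimates d E[X(T)]/d kb; the exact value (1 - exp(-kd*T))/kd ~= 0.993.
        std::printf("sensitivity ~ %f\n", sum / (N * h));
        return 0;
      }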

  16. A Computational Framework for Efficient Low Temperature Plasma Simulations

    NASA Astrophysics Data System (ADS)

    Verma, Abhishek Kumar; Venkattraman, Ayyaswamy

    2016-10-01

    Over the past years, scientific computing has emerged as an essential tool for the investigation and prediction of low temperature plasma (LTP) applications, which include electronics, nanomaterial synthesis, metamaterials, etc. To further explore LTP behavior with greater fidelity, we present a computational toolbox developed to perform LTP simulations. This framework will allow us to enhance our understanding of multiscale plasma phenomena using high performance computing tools, mainly based on the OpenFOAM FVM distribution. Although aimed at microplasma simulations, the modular framework is able to perform multiscale, multiphysics simulations of physical systems comprising LTPs. Salient introductory features include the capability to perform parallel, 3D simulations of LTP applications on unstructured meshes. Performance of the solver is tested based on numerical results assessing the accuracy and efficiency of benchmarks for problems in microdischarge devices. Numerical simulation of a microplasma reactor at atmospheric pressure with hemispherical dielectric-coated electrodes will be discussed, providing an overview of the applicability and future scope of this framework.

  17. Efficient Homotopy Continuation Algorithms with Application to Computational Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Brown, David A.

    New homotopy continuation algorithms are developed and applied to a parallel implicit finite-difference Newton-Krylov-Schur external aerodynamic flow solver for the compressible Euler, Navier-Stokes, and Reynolds-averaged Navier-Stokes equations with the Spalart-Allmaras one-equation turbulence model. Many new analysis tools, calculations, and numerical algorithms are presented for the study and design of efficient and robust homotopy continuation algorithms applicable to solving very large and sparse nonlinear systems of equations. Several specific homotopies are presented and studied and a methodology is presented for assessing the suitability of specific homotopies for homotopy continuation. A new class of homotopy continuation algorithms, referred to as monolithic homotopy continuation algorithms, is developed. These algorithms differ from classical predictor-corrector algorithms by combining the predictor and corrector stages into a single update, significantly reducing the amount of computation and avoiding wasted computational effort resulting from over-solving in the corrector phase. The new algorithms are also simpler from a user perspective, with fewer input parameters, which also improves the user's ability to choose effective parameters on the first flow solve attempt. Conditional convergence is proved analytically and studied numerically for the new algorithms. The performance of a fully-implicit monolithic homotopy continuation algorithm is evaluated for several inviscid, laminar, and turbulent flows over NACA 0012 airfoils and ONERA M6 wings. The monolithic algorithm is demonstrated to be more efficient than the predictor-corrector algorithm for all applications investigated. It is also demonstrated to be more efficient than the widely-used pseudo-transient continuation algorithm for all inviscid and laminar cases investigated, and good performance scaling with grid refinement is demonstrated for the inviscid cases. Performance is also demonstrated

  18. A computationally efficient method for hand-eye calibration.

    PubMed

    Zhang, Zhiqiang; Zhang, Lin; Yang, Guang-Zhong

    2017-07-19

    Surgical robots with cooperative control and semiautonomous features have shown increasing clinical potential, particularly for repetitive tasks under imaging and vision guidance. Effective performance of an autonomous task requires accurate hand-eye calibration so that the transformation between the robot coordinate frame and the camera coordinates is well defined. In practice, due to changes in surgical instruments, online hand-eye calibration must be performed regularly. In order to ensure seamless execution of the surgical procedure without affecting the normal surgical workflow, it is important to derive fast and efficient hand-eye calibration methods. We present a computationally efficient iterative method for hand-eye calibration. In this method, the dual quaternion is introduced to represent the rigid transformation, and a two-step iterative method is proposed to recover the real and dual parts of the dual quaternion simultaneously, and thus the rotation and translation of the transformation. The proposed method was applied to determine the rigid transformation between the stereo laparoscope and the robot manipulator. Promising experimental and simulation results show a significant improvement in convergence speed, from more than 30 iterations with a standard optimization method to 3 iterations, which illustrates the effectiveness and efficiency of the proposed method.
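
    The two-step calibration iteration itself is not reproduced here; the hedged sketch below only shows the dual-quaternion representation the method rests on, packing a rotation qr and a translation t into real and dual parts via qd = 0.5 * (0, t) * qr. The example rotation and translation are arbitrary assumptions.

      #include <array>
      #include <cmath>
      #include <cstdio>

      using Quat = std::array<double, 4>;  // {w, x, y, z}

      // Hamilton product of two quaternions.
      Quat mul(const Quat& a, const Quat& b) {
        return {a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3],
                a[0]*b[1] + a[1]*b[0] + a[2]*b[3] - a[3]*b[2],
                a[0]*b[2] - a[1]*b[3] + a[2]*b[0] + a[3]*b[1],
                a[0]*b[3] + a[1]*b[2] - a[2]*b[1] + a[3]*b[0]};
      }

      int main() {
        // Real part: rotation of 90 degrees about the z axis as a unit quaternion.
        const double half = std::acos(-1.0) / 4.0;  // pi/4
        const Quat qr = {std::cos(half), 0.0, 0.0, std::sin(half)};

        // Dual part encodes the translation t = (1, 2, 3): qd = 0.5 * (0, t) * qr.
        const Quat t = {0.0, 1.0, 2.0, 3.0};
        Quat qd = mul(t, qr);
        for (double& c : qd) c *= 0.5;

        std::printf("real part: %f %f %f %f\n", qr[0], qr[1], qr[2], qr[3]);
        std::printf("dual part: %f %f %f %f\n", qd[0], qd[1], qd[2], qd[3]);
        return 0;
      }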

  19. Efficient Universal Computing Architectures for Decoding Neural Activity

    PubMed Central

    Rapoport, Benjamin I.; Turicchia, Lorenzo; Wattanapanitch, Woradorn; Davidson, Thomas J.; Sarpeshkar, Rahul

    2012-01-01

    The ability to decode neural activity into meaningful control signals for prosthetic devices is critical to the development of clinically useful brain–machine interfaces (BMIs). Such systems require input from tens to hundreds of brain-implanted recording electrodes in order to deliver robust and accurate performance; in serving that primary function they should also minimize power dissipation in order to avoid damaging neural tissue; and they should transmit data wirelessly in order to minimize the risk of infection associated with chronic, transcutaneous implants. Electronic architectures for brain–machine interfaces must therefore minimize size and power consumption, while maximizing the ability to compress data to be transmitted over limited-bandwidth wireless channels. Here we present a system of extremely low computational complexity, designed for real-time decoding of neural signals, and suited for highly scalable implantable systems. Our programmable architecture is an explicit implementation of a universal computing machine emulating the dynamics of a network of integrate-and-fire neurons; it requires no arithmetic operations except for counting, and decodes neural signals using only computationally inexpensive logic operations. The simplicity of this architecture does not compromise its ability to compress raw neural data by factors greater than . We describe a set of decoding algorithms based on this computational architecture, one designed to operate within an implanted system, minimizing its power consumption and data transmission bandwidth; and a complementary set of algorithms for learning, programming the decoder, and postprocessing the decoded output, designed to operate in an external, nonimplanted unit. The implementation of the implantable portion is estimated to require fewer than 5000 operations per second. A proof-of-concept, 32-channel field-programmable gate array (FPGA) implementation of this portion is consequently energy efficient
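
    As a hedged sketch of the counting-only flavor of this architecture (not the authors' decoder), the snippet below implements an integrate-and-fire unit using nothing but increment, compare, and reset; the threshold and spike train are illustrative assumptions.

      #include <cstdio>

      struct CountingNeuron {
        unsigned count;
        unsigned threshold;
        // Emits a decoded output "spike" (true) once enough input spikes have
        // been counted; only increment, compare, and reset are used.
        bool step(bool input_spike) {
          if (input_spike) ++count;
          if (count >= threshold) { count = 0; return true; }
          return false;
        }
      };

      int main() {
        CountingNeuron n{0, 3};  // fire after every third input spike (assumed)
        const bool spikes[] = {true, true, false, true, true, true, true};
        for (bool s : spikes) std::printf("%d", n.step(s) ? 1 : 0);
        std::printf("\n");  // prints 0001001
        return 0;
      }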

  20. ProperCAD: A portable object-oriented parallel environment for VLSI CAD

    NASA Technical Reports Server (NTRS)

    Ramkumar, Balkrishna; Banerjee, Prithviraj

    1993-01-01

    Most parallel algorithms for VLSI CAD proposed to date have one important drawback: they work efficiently only on the machines for which they were designed. As a result, such algorithms are dependent on the architecture for which they were developed and do not port easily to other parallel architectures. A new project under way to address this problem is described: a portable object-oriented parallel environment for CAD algorithms (ProperCAD). The objectives of this research are (1) to develop new parallel algorithms that run in a portable object-oriented environment (the CAD algorithms are built on a general-purpose platform for portable parallel programming called CARM, and a C++ environment that is truly object-oriented and specialized for CAD applications is also being developed); and (2) to design the parallel algorithms around a good sequential algorithm with a well-defined parallel-sequential interface (permitting the parallel algorithm to benefit from future developments in sequential algorithms). One CAD application that has been implemented as part of the ProperCAD project, flat VLSI circuit extraction, is described. The algorithm, its implementation, and its performance on a range of parallel machines are discussed in detail. It currently runs on an Encore Multimax, a Sequent Symmetry, Intel iPSC/2 and i860 hypercubes, an NCUBE 2 hypercube, and a network of Sun Sparc workstations. Performance data are also provided for other applications that were developed, namely test pattern generation for sequential circuits, parallel logic synthesis, and standard cell placement.

  1. Ergonomics Perspective in Agricultural Research: A User-Centred Approach Using CAD and Digital Human Modeling (DHM) Technologies

    NASA Astrophysics Data System (ADS)

    Patel, Thaneswer; Sanjog, J.; Karmakar, Sougata

    2016-09-01

    Computer-aided design (CAD) and digital human modeling (DHM; specialized CAD software for virtual human representation) technologies offer unique opportunities to incorporate human factors proactively in design development. The challenge of enhancing agricultural productivity through improved agricultural tools/machinery and better human-machine compatibility can be met by adopting these modern technologies. The objectives of the present work are to describe in detail the state of CAD and DHM applications in the agricultural sector, and to identify means for the wide adoption of these technologies in the design and development of cost-effective, user-friendly, efficient and safe agricultural tools/equipment and operator workplaces. An extensive literature review was conducted to systematically organize the available information and draw inferences from it. Although various CAD software applications have gained momentum in agricultural research, particularly for the design and manufacturing of agricultural equipment/machinery, the use of DHM is still in its infancy in this sector. The review discusses the reasons for the limited adoption of these technologies in the agricultural sector and the steps to be taken for their wide adoption. It also suggests possible future research directions toward better ergonomic design strategies for improving agricultural equipment/machines and workstations through the application of CAD and DHM.

  2. CAD/CAM for optomechatronics

    NASA Astrophysics Data System (ADS)

    Zhou, Haiguang; Han, Min

    2003-10-01

    We focus on CAD/CAM for optomechatronics. We have developed a CAD/CAM system that covers not only mechanics but also optics and electronics. The software can be used for training and education. We introduce mechanical CAD, optical CAD and electrical CAD, and show how to draw a circuit diagram, a mechanical diagram and a luminous-transmission diagram, from 2D drawing to 3D drawing. We describe how to create 2D and 3D parts for optomechatronics, how to edit tool paths, how to select process parameters, how to run the post-processor, how to dynamically display the tool path and how to generate the CNC program. We introduce the joint application of CAD and CAM, aiming to match the requirements of optics, mechanics and electronics.

  3. The Enzyme Activity and Substrate Specificity of Two Major Cinnamyl Alcohol Dehydrogenases in Sorghum (Sorghum bicolor), SbCAD2 and SbCAD4.

    PubMed

    Jun, Se-Young; Walker, Alexander M; Kim, Hoon; Ralph, John; Vermerris, Wilfred; Sattler, Scott E; Kang, ChulHee

    2017-08-01

    Cinnamyl alcohol dehydrogenase (CAD) catalyzes the final step in monolignol biosynthesis, reducing sinapaldehyde, coniferaldehyde, and p-coumaraldehyde to their corresponding alcohols in an NADPH-dependent manner. Because of its terminal location in monolignol biosynthesis, the variation in substrate specificity and activity of CAD can result in significant changes in overall composition and amount of lignin. Our in-depth characterization of two major CAD isoforms, SbCAD2 (Brown midrib 6 [bmr6]) and SbCAD4, in lignifying tissues of sorghum (Sorghum bicolor), a strategic plant for generating renewable chemicals and fuels, indicates their similarity in both structure and activity to Arabidopsis (Arabidopsis thaliana) CAD5 and Populus tremuloides sinapyl alcohol dehydrogenase, respectively. This first crystal structure of a monocot CAD combined with enzyme kinetic data and a catalytic model supported by site-directed mutagenesis allows full comparison with dicot CADs and elucidates the potential signature sequence for their substrate specificity and activity. The L119W/G301F-SbCAD4 double mutant displayed its substrate preference in the order coniferaldehyde > p-coumaraldehyde > sinapaldehyde, with higher catalytic efficiency than that of both wild-type SbCAD4 and SbCAD2. As SbCAD4 is the only major CAD isoform in bmr6 mutants, replacing SbCAD4 with L119W/G301F-SbCAD4 in bmr6 plants could produce a phenotype that is more amenable to biomass processing. © 2017 American Society of Plant Biologists. All Rights Reserved.

  4. DeviceEditor visual biological CAD canvas

    PubMed Central

    2012-01-01

    Background Biological Computer Aided Design (bioCAD) assists the de novo design and selection of existing genetic components to achieve a desired biological activity, as part of an integrated design-build-test cycle. To meet the emerging needs of Synthetic Biology, bioCAD tools must address the increasing prevalence of combinatorial library design, design rule specification, and scar-less multi-part DNA assembly. Results We report the development and deployment of web-based bioCAD software, DeviceEditor, which provides a graphical design environment that mimics the intuitive visual whiteboard design process practiced in biological laboratories. The key innovations of DeviceEditor include visual combinatorial library design, direct integration with scar-less multi-part DNA assembly design automation, and a graphical user interface for the creation and modification of design specification rules. We demonstrate how biological designs are rendered on the DeviceEditor canvas, and we present effective visualizations of genetic component ordering and combinatorial variations within complex designs. Conclusions DeviceEditor liberates researchers from DNA base-pair manipulation, and enables users to create successful prototypes using standardized, functional, and visual abstractions. Open and documented software interfaces support further integration of DeviceEditor with other bioCAD tools and software platforms. DeviceEditor saves researcher time and institutional resources through correct-by-construction design, the automation of tedious tasks, design reuse, and the minimization of DNA assembly costs. PMID:22373390

  5. Generating Composite Overlapping Grids on CAD Geometries

    SciTech Connect

    Henshaw, W.D.

    2002-02-07

    We describe some algorithms and tools that have been developed to generate composite overlapping grids on geometries that have been defined with computer-aided design (CAD) programs. This process consists of five main steps. Starting from a description of the surfaces defining the computational domain, we (1) correct errors in the CAD representation, (2) determine the topology of the patched surface, (3) build a global triangulation of the surface, (4) construct structured surface and volume grids using hyperbolic grid generation, and (5) generate the overlapping grid by determining the holes and the interpolation points. The overlapping grid generator which is used for the final step also supports the rapid generation of grids for block-structured adaptive mesh refinement and for moving grids. These algorithms have been implemented as part of the Overture object-oriented framework.

  6. Color shading technology for design CAD systems

    SciTech Connect

    Mori, K.; Mori, H.; Tanaka, T.; Okuyama, Y.

    1986-01-01

    One issue in new vehicle development that has become increasingly important is the need to put attractive vehicles on the market at the right time. In recent years vehicle design has become a very crucial factor in this effort. Automakers are required to create vehicles having a higher quality design in a shorter period of time and supply them to the market in a timely fashion. As part of the effort to meet these requirements, the automakers have developed a variety of CAD/CAM systems, as have their counterparts in industry in general. Although most CAD/CAM systems are currently being used primarily at the design and manufacturing stages, the full potential of CAD systems has yet to be realized at the design stage. At Nissan, a CAD styling system called the Digitized Image Modeling System (DIMS) has been developed, which serves as a support tool for the creation of new vehicle designs. This system has greatly enhanced the efficiency of the creative activities in the design process and enables the creation of vehicles with higher quality designs in a more timely manner. This paper describes how DIMS is being utilized in the design process and focuses in detail on the theory of shading, which occupies a vital position in the overall system.

  7. The Efficiency of Various Computers and Optimizations in Performing Finite Element Computations

    NASA Technical Reports Server (NTRS)

    Marcus, Martin H.; Broduer, Steve (Technical Monitor)

    2001-01-01

    With the advent of computers with many processors, it becomes unclear how best to exploit this advantage. For example, matrices can be inverted by applying several processors to each vector operation, or one processor can be applied to each matrix. The former approach has diminishing returns beyond a handful of processors, but how many depends on the computer architecture. Applying one processor to each matrix is feasible with enough RAM and scratch disk space, but the speed at which this is done is found to vary by a factor of three depending on how it is done. The cost of the computer must also be taken into account. A computer with many processors and fast interprocessor communication is much more expensive than the same computer and processors with slow interprocessor communication. Consequently, for problems that require several matrices to be inverted, the best speed per dollar is found to come from several small workstations networked together, such as in a Beowulf cluster. Since these machines typically have two processors per node, each matrix is most efficiently inverted with no more than two processors assigned to it.

  9. Method for computationally efficient design of dielectric laser accelerator structures.

    PubMed

    Hughes, Tyler; Veronis, Georgios; Wootton, Kent P; Joel England, R; Fan, Shanhui

    2017-06-26

    Dielectric microstructures have generated much interest in recent years as a means of accelerating charged particles when powered by solid state lasers. The acceleration gradient (or particle energy gain per unit length) is an important figure of merit. To design structures with high acceleration gradients, we explore the adjoint variable method, a highly efficient technique used to compute the sensitivity of an objective with respect to a large number of parameters. With this formalism, the sensitivity of the acceleration gradient of a dielectric structure with respect to its entire spatial permittivity distribution is calculated by the use of only two full-field electromagnetic simulations, the original and 'adjoint'. The adjoint simulation corresponds physically to the reciprocal situation of a point charge moving through the accelerator gap and radiating. Using this formalism, we perform numerical optimizations aimed at maximizing acceleration gradients, which generate fabricable structures of greatly improved performance in comparison to previously examined geometries.
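
    The core economy of the adjoint variable method can be sketched on a generic linear model: for an objective J = cᵀx constrained by A(p)x = b, one forward solve and one adjoint solve yield the gradient with respect to every parameter at once. This is a minimal illustration of the formalism, not the authors' electromagnetic solver; the matrices and parameterization below are arbitrary.

```python
import numpy as np

# Adjoint method on J = c^T x with A(p) x = b:
#   dJ/dp_i = -lambda^T (dA/dp_i) x,  with one adjoint solve A^T lambda = c,
# independent of how many parameters p there are.
rng = np.random.default_rng(0)
n = 5
A0 = rng.normal(size=(n, n)) + n * np.eye(n)       # well-conditioned base matrix
dA = [rng.normal(size=(n, n)) for _ in range(3)]   # dA/dp_i for 3 parameters
b = rng.normal(size=n)
c = rng.normal(size=n)

def J(p):
    A = A0 + sum(pi * dAi for pi, dAi in zip(p, dA))
    return c @ np.linalg.solve(A, b)

p = np.zeros(3)
x = np.linalg.solve(A0, b)        # one forward solve
lam = np.linalg.solve(A0.T, c)    # one adjoint solve
grad = np.array([-(lam @ (dAi @ x)) for dAi in dA])

# Check against finite differences, one extra solve per parameter.
eps = 1e-6
fd = np.array([(J(p + eps * np.eye(3)[i]) - J(p)) / eps for i in range(3)])
print(np.allclose(grad, fd, rtol=1e-4, atol=1e-8))  # True
```

    The finite-difference check needs one extra solve per parameter, which is exactly the cost the adjoint method avoids when the parameter count is large (here, an entire permittivity distribution).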

  10. An efficient algorithm for computing the crossovers in satellite altimetry

    NASA Technical Reports Server (NTRS)

    Tai, Chang-Kou

    1988-01-01

    An efficient algorithm has been devised to compute the crossovers in satellite altimetry. The significance of the crossovers is twofold. First, they are needed to perform the crossover adjustment to remove the orbit error. Second, they yield important insight into oceanic variability. Nevertheless, no algorithm had been published to make this very time-consuming task easier, which is the goal of this report. The success of the algorithm is predicated on the ability to predict (by analytical means) the crossover coordinates to within 6 km and 1 s of the true values. Hence, only one interpolation/extrapolation step on the data is needed to derive the crossover coordinates, in contrast to the many interpolation/extrapolation operations usually needed to arrive at the same accuracy level without this information.
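
    The single refinement step that such a prediction enables can be sketched as follows: near the predicted crossover, each ground track is treated as locally linear and the intersection is solved directly. This is a schematic illustration under that local-linearity assumption, not the report's algorithm; the positions and velocities are hypothetical.

```python
import numpy as np

def refine_crossover(p1, v1, p2, v2):
    """Refine a predicted crossover: each track is linearized near the
    prediction as p_i + t_i * v_i (position p_i, along-track velocity v_i).
    Solve p1 + t1*v1 = p2 + t2*v2 for the time offsets t1, t2."""
    M = np.column_stack([v1, -v2])      # 2x2 system [v1, -v2] [t1, t2]^T = p2 - p1
    t1, t2 = np.linalg.solve(M, p2 - p1)
    return p1 + t1 * v1, (t1, t2)

# Two nearly straight tracks predicted to cross near the origin (units arbitrary):
xing, dt = refine_crossover(np.array([-3.0, 1.0]), np.array([6.0, 1.0]),
                            np.array([2.0, -4.0]), np.array([-1.0, 6.0]))
print(xing, dt)  # crossover point and the small along-track corrections
```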

  11. CAD of control systems: Application of nonlinear programming to a linear quadratic formulation

    NASA Technical Reports Server (NTRS)

    Fleming, P.

    1983-01-01

    The familiar suboptimal regulator design approach is recast as a constrained optimization problem and incorporated in a Computer Aided Design (CAD) package where both design objective and constraints are quadratic cost functions. This formulation permits the separate consideration of, for example, model following errors, sensitivity measures and control energy as objectives to be minimized or limits to be observed. Efficient techniques for computing the interrelated cost functions and their gradients are utilized in conjunction with a nonlinear programming algorithm. The effectiveness of the approach and the degree of insight into the problem which it affords is illustrated in a helicopter regulation design example.

  12. A computationally efficient spectral method for modeling core dynamics

    NASA Astrophysics Data System (ADS)

    Marti, P.; Calkins, M. A.; Julien, K.

    2016-08-01

    An efficient, spectral numerical method is presented for solving problems in a spherical shell geometry that employs spherical harmonics in the angular dimensions and Chebyshev polynomials in the radial direction. We exploit the three-term recurrence relation for Chebyshev polynomials that renders all matrices sparse in spectral space. This approach is significantly more efficient than the collocation approach and is generalizable to both the Galerkin and tau methodologies for enforcing boundary conditions. The sparsity of the matrices reduces the computational complexity of the linear solution of implicit-explicit time stepping schemes to O(N) operations, compared to O(N^2) operations for a collocation method. The method is illustrated by considering several example problems of important dynamical processes in the Earth's liquid outer core. Results are presented from both fully nonlinear, time-dependent numerical simulations and eigenvalue problems arising from the investigation of the onset of convection and the inertial wave spectrum. We compare the explicit and implicit temporal discretization of the Coriolis force; the latter becomes computationally feasible given the sparsity of the differential operators. We find that implicit treatment of the Coriolis force allows for significantly larger time step sizes compared to explicit algorithms; for hydrodynamic and dynamo problems at an Ekman number of E = 10^-5, time step sizes can be increased by a factor of 3 to 16 times that of the explicit algorithm, depending on the order of the time stepping scheme. The implementation with explicit Coriolis force scales well to at least 2048 cores, while the implicit implementation scales to 512 cores.
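
    The source of the sparsity can be seen directly in coefficient space: by the recurrence x·T_0 = T_1 and x·T_n = (T_(n-1) + T_(n+1))/2, multiplication by x maps Chebyshev coefficients through a tridiagonal matrix. The minimal sketch below (ours, not the paper's code) verifies this against NumPy's Chebyshev product.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Multiplication by x acting on Chebyshev coefficients is tridiagonal, so
# operators built from such recurrences stay sparse and linear solves cost
# O(N) rather than the O(N^2) of dense collocation matrices.
N = 8
M = np.zeros((N + 1, N))
M[1, 0] = 1.0                 # x * T_0 = T_1
for j in range(1, N):         # x * T_j = (T_{j-1} + T_{j+1}) / 2
    M[j - 1, j] += 0.5
    M[j + 1, j] += 0.5

a = np.random.default_rng(1).normal(size=N)   # coefficients of f
b_sparse = M @ a                              # coefficients of x * f
b_ref = C.chebmul([0.0, 1.0], a)              # reference product with x
print(np.allclose(b_sparse, b_ref))           # True
```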

  13. CAD/CAM improves productivity in nonaerospace job shops

    NASA Astrophysics Data System (ADS)

    Koenig, D. T.

    1982-12-01

    Business cost improvements that can result from Computer Aided Design/Computer Aided Manufacturing (CAD/CAM), when properly applied, are discussed. Emphasis is placed on the use of CAD/CAM for machine and process control, design and planning control, and production and measurement control. It is pointed out that the implementation of CAD/CAM should be based on the following priorities: (1) recognize interrelationships between the principal functions of CAD/CAM; (2) establish a Systems Council to determine overall strategy and specify the communications/decision-making system; (3) implement the communications/decision-making system to improve productivity; and (4) implement interactive graphics and other additions to further improve productivity.

  14. An image database management system for conducting CAD research

    NASA Astrophysics Data System (ADS)

    Gruszauskas, Nicholas; Drukker, Karen; Giger, Maryellen L.

    2007-03-01

    The development of image databases for CAD research is not a trivial task. The collection and management of images and their related metadata from multiple sources is a time-consuming but necessary process. By standardizing and centralizing the methods by which these data are maintained, one can generate subsets of a larger database that match the specific criteria needed for a particular research project in a quick and efficient manner. A research-oriented management system of this type is highly desirable in a multi-modality CAD research environment. An online, web-based database system for the storage and management of research-specific medical image metadata was designed for use with four modalities of breast imaging: screen-film mammography, full-field digital mammography, breast ultrasound and breast MRI. The system was designed to consolidate data from multiple clinical sources and provide the user with the ability to anonymize the data. Input concerning the type of data to be stored as well as desired searchable parameters was solicited from researchers in each modality. The backbone of the database was created using MySQL. A robust and easy-to-use interface for entering, removing, modifying and searching information in the database was created using HTML and PHP. This standardized system can be accessed using any modern web-browsing software and is fundamental for our various research projects on computer-aided detection, diagnosis, cancer risk assessment, multimodality lesion assessment, and prognosis. Our CAD database system stores large amounts of research-related metadata and successfully generates subsets of cases that match the user's desired search criteria.
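
    The subset-generation idea can be sketched with a toy schema. The table and column names below are hypothetical, and SQLite stands in for the MySQL/PHP stack described in the paper.

```python
import sqlite3

# Hypothetical schema standing in for the paper's MySQL backbone: one row of
# research metadata per image, searchable by modality and anonymized fields.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE images (
    image_id    INTEGER PRIMARY KEY,
    modality    TEXT,     -- e.g. 'FFDM', 'ultrasound', 'MRI', 'SFM'
    finding     TEXT,     -- e.g. 'mass', 'calcification', 'normal'
    biopsy      TEXT,     -- e.g. 'malignant', 'benign', or NULL
    patient_age INTEGER)""")
con.executemany("INSERT INTO images VALUES (?, ?, ?, ?, ?)", [
    (1, "FFDM", "mass", "malignant", 62),
    (2, "FFDM", "mass", "benign", 47),
    (3, "ultrasound", "mass", "benign", 51),
])

# Generate a research subset matching specific search criteria.
rows = con.execute("""SELECT image_id FROM images
                      WHERE modality = ? AND finding = ?""",
                   ("FFDM", "mass")).fetchall()
print([r[0] for r in rows])  # -> [1, 2]
```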

  15. Efficient free energy calculations of quantum systems through computer simulations

    NASA Astrophysics Data System (ADS)

    Antonelli, Alex; Ramirez, Rafael; Herrero, Carlos; Hernandez, Eduardo

    2009-03-01

    In general, the classical limit is assumed in computer simulation calculations of free energy. This approximation, however, is not justifiable for a class of systems in which quantum contributions to the free energy cannot be neglected. The inclusion of quantum effects is important for the determination of reliable phase diagrams of these systems. In this work, we present a new methodology to compute the free energy of many-body quantum systems [1]. This methodology results from the combination of the path integral formulation of statistical mechanics and efficient non-equilibrium methods to estimate free energy, namely, the adiabatic switching and reversible scaling methods. A quantum Einstein crystal is used as a model to show the accuracy and reliability of the methodology. The new method is applied to the calculation of solid-liquid coexistence properties of neon. Our findings indicate that quantum contributions to properties such as melting point, latent heat of fusion, entropy of fusion, and slope of the melting line can be up to 10% of the values calculated using the classical approximation. [1] R. M. Ramirez, C. P. Herrero, A. Antonelli, and E. R. Hernández, Journal of Chemical Physics 129, 064110 (2008)

  16. An efficient parallel algorithm for accelerating computational protein design

    PubMed Central

    Zhou, Yichao; Xu, Wei; Donald, Bruce R.; Zeng, Jianyang

    2014-01-01

    Motivation: Structure-based computational protein design (SCPR) is an important topic in protein engineering. Under the assumption of a rigid backbone and a finite set of discrete conformations of side-chains, various methods have been proposed to address this problem. A popular method is to combine the dead-end elimination (DEE) and A* tree search algorithms, which provably finds the global minimum energy conformation (GMEC) solution. Results: In this article, we improve the efficiency of computing A* heuristic functions for protein design and propose a variant of the A* algorithm in which the search process can be performed on a single GPU in a massively parallel fashion. In addition, we address the problem of excessive memory usage in A* search. As a result, our enhancements achieve a significant speedup of the A*-based protein design algorithm, by four orders of magnitude on large-scale test data, through pre-computation and parallelization, while still maintaining an acceptable memory overhead. We also show that our parallel A* search algorithm can be successfully combined with iMinDEE, a state-of-the-art DEE criterion, for rotamer pruning to further improve SCPR with the consideration of continuous side-chain flexibility. Availability: Our software is available and distributed open-source under the GNU Lesser General Public License Version 2.1 (GNU, February 1999). The source code can be downloaded from http://www.cs.duke.edu/donaldlab/osprey.php or http://iiis.tsinghua.edu.cn/∼compbio/software.html. Contact: zengjy321@tsinghua.edu.cn Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24931991
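
    For readers unfamiliar with the underlying search, a generic A* loop looks like the sketch below. This is textbook A* with a priority queue, not OSPREY's GPU-parallel variant or its protein-design heuristic; the tiny example graph is invented.

```python
import heapq

def a_star(start, goal, neighbors, h):
    """Generic A*: neighbors(n) yields (next_node, edge_cost) pairs; h(n) is
    an admissible heuristic (it never overestimates the remaining cost)."""
    frontier = [(h(start), 0.0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0.0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path                         # first goal pop is optimal
        for nxt, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, [*path, nxt]))
    return None

# Tiny example graph: the direct edge A->C is more expensive than A->B->C.
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1)], "C": []}
print(a_star("A", "C", lambda n: graph[n], h=lambda n: 0))
# -> (2.0, ['A', 'B', 'C'])
```

    The heuristic drives the efficiency: the tighter (yet still admissible) h is, the fewer nodes the loop expands, which is why cheaper and sharper heuristic computation translates directly into faster design searches.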

  17. Bio++: efficient extensible libraries and tools for computational molecular evolution.

    PubMed

    Guéguen, Laurent; Gaillard, Sylvain; Boussau, Bastien; Gouy, Manolo; Groussin, Mathieu; Rochette, Nicolas C; Bigot, Thomas; Fournier, David; Pouyet, Fanny; Cahais, Vincent; Bernard, Aurélien; Scornavacca, Céline; Nabholz, Benoît; Haudry, Annabelle; Dachary, Loïc; Galtier, Nicolas; Belkhir, Khalid; Dutheil, Julien Y

    2013-08-01

    Efficient algorithms and programs for the analysis of the ever-growing amount of biological sequence data are strongly needed in the genomics era. The pace at which new data and methodologies are generated calls for the use of pre-existing, optimized yet extensible code, typically distributed as libraries or packages. This motivated the Bio++ project, aimed at developing a set of C++ libraries for sequence analysis, phylogenetics, population genetics, and molecular evolution. The main attraction of Bio++ is the extensibility and reusability of its components through its object-oriented design, without compromising the computational efficiency of the underlying methods. We present here the second major release of the libraries, which provides an extended set of classes and methods. These extensions notably provide built-in access to sequence databases and new data structures for handling and manipulating sequences from the omics era, such as multiple genome alignments and sequencing-read libraries. More complex models of sequence evolution, such as mixture models and generic n-tuple alphabets, are also included.

  18. Digital dentistry: an overview of recent developments for CAD/CAM generated restorations.

    PubMed

    Beuer, F; Schweiger, J; Edelhoff, D

    2008-05-10

    As in many other industries, production stages are increasingly becoming automated in dental technology. As the price of dental laboratory work has become a major factor in treatment planning and therapy, automation could enable more competitive production in high-wage areas like Western Europe and the USA. Advances in computer technology now enable cost-effective production of individual pieces. Dental restorations produced with computer assistance have become more common in recent years. Most dental companies have access to CAD/CAM procedures, either in the dental practice, the dental laboratory or in the form of production centres. The many benefits associated with CAD/CAM generated dental restorations include: the access to new, almost defect-free, industrially prefabricated and controlled materials; an increase in quality and reproducibility and also data storage commensurate with a standardised chain of production; an improvement in precision and planning, as well as an increase in efficiency. As a result of continual developments in computer hardware and software, new methods of production and new treatment concepts are to be expected, which will enable an additional reduction in costs. Dentists, who will be confronted with these techniques in the future, require certain basic knowledge if they are to benefit from these new procedures. This article gives an overview of CAD/CAM-technologies and systems available for dentistry today.

  19. Development of a CAD Model Simplification Framework for Finite Element Analysis

    DTIC Science & Technology

    2012-01-01

    Engineering models today are mainly developed within a 3D computer aided design (CAD) software...and manufacturability. The generated 3D CAD models are used for downstream applications within the design and manufacturing process. Finite

  20. Textbook Multigrid Efficiency for Computational Fluid Dynamics Simulations

    NASA Technical Reports Server (NTRS)

    Brandt, Achi; Thomas, James L.; Diskin, Boris

    2001-01-01

    Considerable progress over the past thirty years has been made in the development of large-scale computational fluid dynamics (CFD) solvers for the Euler and Navier-Stokes equations. Computations are used routinely to design the cruise shapes of transport aircraft through complex-geometry simulations involving the solution of 25-100 million equations; in this arena the number of wind-tunnel tests for a new design has been substantially reduced. However, simulations of the entire flight envelope of the vehicle, including maximum lift, buffet onset, flutter, and control effectiveness have not been as successful in eliminating the reliance on wind-tunnel testing. These simulations involve unsteady flows with more separation and stronger shock waves than at cruise. The main reasons limiting further inroads of CFD into the design process are: (1) the reliability of turbulence models; and (2) the time and expense of the numerical simulation. Because of the prohibitive resolution requirements of direct simulations at high Reynolds numbers, transition and turbulence modeling is expected to remain an issue for the near term. The focus of this paper addresses the latter problem by attempting to attain optimal efficiencies in solving the governing equations. Typically, current CFD codes based on the use of multigrid acceleration techniques and multistage Runge-Kutta time-stepping schemes are able to converge lift and drag values for cruise configurations within approximately 1000 residual evaluations. An optimally convergent method is defined as having textbook multigrid efficiency (TME), meaning the solutions to the governing system of equations are attained in a computational work which is a small (less than 10) multiple of the operation count in the discretized system of equations (residual equations). In this paper, a distributed relaxation approach to achieving TME for the Reynolds-averaged Navier-Stokes (RANS) equations is discussed along with the foundations that form the

  1. A new CAD approach for improving efficacy of cancer screening

    NASA Astrophysics Data System (ADS)

    Zheng, Bin; Qian, Wei; Li, Lihua; Pu, Jiantao; Kang, Yan; Lure, Fleming; Tan, Maxine; Qiu, Yuchen

    2015-03-01

    Since the performance and clinical utility of current computer-aided detection (CAD) schemes for detecting and classifying soft tissue lesions (e.g., breast masses and lung nodules) are not satisfactory, many researchers in the CAD field have called for new CAD research ideas and approaches. The purpose of this opinion paper is to share our vision and stimulate more discussion in the CAD research community of how to overcome or compensate for the limitations of current lesion-detection-based CAD schemes. Based on our observation that analyzing global image information plays an important role in radiologists' decision making, we hypothesized that targeted quantitative image features computed from global images could also provide highly discriminatory power, supplementary to the lesion-based information. To test our hypothesis, we recently performed a number of independent studies. Based on our published preliminary study results, we demonstrated that global mammographic image features and background parenchymal enhancement of breast MR images carry useful information to (1) predict near-term breast cancer risk based on negative screening mammograms, (2) distinguish between true- and false-positive recalls in mammography screening examinations, and (3) classify between malignant and benign breast MR examinations. The global case-based CAD scheme only warns of the risk level of a case, without cueing a large number of false-positive lesions. It can also be applied to guide lesion-based CAD cueing to reduce false positives while enhancing clinically relevant true-positive cueing. However, before such a new CAD approach is clinically acceptable, more work is needed to optimize not only the scheme's performance but also how to integrate it with lesion-based CAD schemes in clinical practice.

  2. CAD Model and Visual Assisted Control System for NIF Target Area Positioners

    SciTech Connect

    Tekle, E A; Wilson, E F; Paik, T S

    2007-10-03

    The National Ignition Facility (NIF) target chamber contains precision motion control systems that reach up to 6 meters into the target chamber for handling targets and diagnostics. Systems include the target positioner, an alignment sensor, and diagnostic manipulators (collectively called positioners). Target chamber shot experiments require a variety of positioner arrangements near the chamber center to be aligned to an accuracy of 10 micrometers. Positioners are some of the largest devices in NIF, and they require careful monitoring and control in 3 dimensions to prevent interferences. The Integrated Computer Control System provides efficient and flexible multi-positioner controls. This is accomplished through advanced video-control integration incorporating remote position sensing and real-time analysis of a CAD model of target chamber devices. The control system design, the method used to integrate existing mechanical CAD models, and the offline test laboratory used to verify proper operation of the control system are described.

  3. Toward clinical end-user computing: programmable order protocols for efficient human computer interaction.

    PubMed

    Cho, H; Kwak, Y S; Noh, Y S; Yang, M O; Kim, D J

    1998-01-01

    We have developed and implemented an efficient method of managing routine patient care information as a programmable group order protocol. The purpose of the protocol is to minimize labor-intensive manual computer interaction by grouping clinically related routine orders into a single entity, thus greatly reducing the time spent on manual entry such as keystrokes and/or mouse clicks. User programmability is included so that inserting, deleting and updating order items is a locally independent operation. The sequence of menu screens is also programmable when a change of standard operation is needed. Department-specific order protocols are classified into four categories to improve user convenience. The degree of efficiency is measured by the number of keystrokes and the entry time. In most cases, entering an order protocol with corrections is found to take less than one minute and fewer than five keystrokes. This method of order protocol entry clearly demonstrates an end-user computing capability, so that department-specific requirements can be met without resorting to computer department personnel. The flexibility of managing individual physician-specific protocols is also beneficial enough to enhance morale toward the hospital information system currently in use.
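
    The essence of a programmable group order protocol can be shown in a few lines: one named entity expands into many routine orders, and editing the group is a local operation that needs no programmer involvement. The protocol contents below are invented for illustration.

```python
# Toy "programmable group order protocol": clinically related routine orders
# are grouped under one name; users can insert or delete items locally.
protocols = {
    "post-op hip": ["CBC", "basic metabolic panel", "hip X-ray", "PT consult"],
}

def order(protocol_name):
    """One entry action expands into the whole group of routine orders."""
    return list(protocols[protocol_name])

# User-programmable maintenance: a local, independent operation.
protocols["post-op hip"].append("DVT prophylaxis")
protocols["post-op hip"].remove("PT consult")
print(order("post-op hip"))
```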

  4. CAD/CAM data management

    NASA Technical Reports Server (NTRS)

    Bray, O. H.

    1984-01-01

    The role of data base management in CAD/CAM, particularly for geometric data is described. First, long term and short term objectives for CAD/CAM data management are identified. Second, the benefits of the data base management approach are explained. Third, some of the additional work needed in the data base area is discussed.

  6. Efficiently computing exact geodesic loops within finite steps.

    PubMed

    Xin, Shi-Qing; He, Ying; Fu, Chi-Wing

    2012-06-01

    Closed geodesics, or geodesic loops, are crucial to the study of differential topology and differential geometry. Although the existence and properties of closed geodesics on smooth surfaces have been widely studied in the mathematics community, relatively little progress has been made on how to compute them on polygonal surfaces. Most existing algorithms simply consider the mesh as a graph, so the resultant loops are restricted to mesh edges, which are far from the actual geodesics. This paper is the first to prove the existence and uniqueness of a geodesic loop restricted to a closed face sequence; it also contributes an efficient algorithm to iteratively evolve an initial closed path on a given mesh into an exact geodesic loop within finite steps. Our proposed algorithm takes only O(k) space and, experimentally, O(mk) time, where m is the number of vertices in the region bounded by the initial loop and the resultant geodesic loop, and k is the average number of edges in the edge sequences that the evolving loop passes through. In contrast to existing geodesic curvature flow methods, which compute an approximate geodesic loop within a predefined threshold, our method is exact and applies directly to triangular meshes without needing to solve any differential equation with a numerical solver; it can run at interactive speed, e.g., on the order of milliseconds for a mesh with around 50K vertices, and hence significantly outperforms existing algorithms. Indeed, our algorithm can run at interactive speed even for larger meshes. Besides the complexity of the input mesh, the geometric shape can also affect the number of evolving steps, i.e., the performance. We motivate our algorithm with an interactive shape segmentation example shown later in the paper.

  7. Efficient clustering of large EST data sets on parallel computers

    PubMed Central

    Kalyanaraman, Anantharaman; Aluru, Srinivas; Kothari, Suresh; Brendel, Volker

    2003-01-01

    Clustering expressed sequence tags (ESTs) is a powerful strategy for gene identification, gene expression studies and identifying important genetic variations such as single nucleotide polymorphisms. To enable fast clustering of large-scale EST data, we developed PaCE (for Parallel Clustering of ESTs), a software program for EST clustering on parallel computers. In this paper, we report on the design and development of PaCE and its evaluation using Arabidopsis ESTs. The novel features of our approach include: (i) design of memory efficient algorithms to reduce the memory required to linear in the size of the input, (ii) a combination of algorithmic techniques to reduce the computational work without sacrificing the quality of clustering, and (iii) use of parallel processing to reduce run-time and facilitate clustering of larger data sets. Using a combination of these techniques, we report the clustering of 168 200 Arabidopsis ESTs in 15 min on an IBM xSeries cluster with 30 dual-processor nodes. We also clustered 327 632 rat ESTs in 47 min and 420 694 Triticum aestivum ESTs in 3 h and 15 min. We demonstrate the quality of our software using benchmark Arabidopsis EST data, and by comparing it with CAP3, a software widely used for EST assembly. Our software allows clustering of much larger EST data sets than is possible with current software. Because of its speed, it also facilitates multiple runs with different parameters, providing biologists a tool to better analyze EST sequence data. Using PaCE, we clustered EST data from 23 plant species and the results are available at the PlantGDB website. PMID:12771222

  8. Machinability of CAD-CAM materials.

    PubMed

    Chavali, Ramakiran; Nejat, Amir H; Lawson, Nathaniel C

    2017-08-01

    Although new materials are available for computer-aided design and computer-aided manufacturing (CAD-CAM) fabrication, limited information is available regarding their machinability. The depth of penetration of a milling tool into a material during a timed milling cycle may indicate its machinability. The purpose of this in vitro study was to compare the tool penetration rate for 2 polymer-containing CAD-CAM materials (Lava Ultimate and Enamic) and 2 ceramic-based CAD-CAM materials (e.max CAD and Celtra Duo). The materials were sectioned into 4-mm-thick specimens (n=5/material) and polished with 320-grit SiC paper. Each specimen was loaded into a custom milling apparatus. The apparatus pushed the specimens against a milling tool (E4D Tapered 2016000) rotating at 40 000 RPM with a constant force of 0.98 N. After a 6-minute timed milling cycle, the length of each milling cut was measured with image analysis software under a digital light microscope. Representative specimens and milling tools were examined with scanning electron microscopy (SEM) and energy dispersive x-ray spectroscopy. The penetration rate of Lava Ultimate (3.21 ±0.46 mm/min) and Enamic (2.53 ±0.57 mm/min) was significantly greater than that of e.max CAD (1.12 ±0.32 mm/min) or Celtra Duo (0.80 ±0.21 mm/min) materials. SEM observations showed little tool damage, regardless of material type. Residual material was found on the tools used with polymer-containing materials, and wear of the embedding medium was seen on the tools used with the ceramic-based materials. Edge chipping was noted on cuts made in the ceramic-based materials. Lava Ultimate and Enamic have greater machinability and less edge chipping than e.max CAD and Celtra Duo. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  9. Use of CAD-CAM technology for the fabrication of complete dentures: An alternative technique.

    PubMed

    Yilmaz, Burak; Azak, Aysen Nekora; Alp, Gülce; Ekşi, Hilal

    2017-01-31

    Computer-aided design and computer-aided manufacturing (CAD-CAM) technology is available for the fabrication of complete dentures as an alternative to conventional fabrication techniques. This report describes a work flow for a technique that combines the use of conventional impressions and maxillomandibular relationship records with CAD-CAM technology for the fabrication of maxillary and mandibular complete dentures.

  10. Method for computationally efficient design of dielectric laser accelerator structures

    DOE PAGES

    Hughes, Tyler; Veronis, Georgios; Wootton, Kent P.; ...

    2017-06-22

    Here, dielectric microstructures have generated much interest in recent years as a means of accelerating charged particles when powered by solid state lasers. The acceleration gradient (or particle energy gain per unit length) is an important figure of merit. To design structures with high acceleration gradients, we explore the adjoint variable method, a highly efficient technique used to compute the sensitivity of an objective with respect to a large number of parameters. With this formalism, the sensitivity of the acceleration gradient of a dielectric structure with respect to its entire spatial permittivity distribution is calculated by the use of only two full-field electromagnetic simulations, the original and 'adjoint'. The adjoint simulation corresponds physically to the reciprocal situation of a point charge moving through the accelerator gap and radiating. Using this formalism, we perform numerical optimizations aimed at maximizing acceleration gradients, which generate fabricable structures of greatly improved performance in comparison to previously examined geometries.

  11. Efficient Computing Budget Allocation for Finding Simplest Good Designs

    PubMed Central

    Jia, Qing-Shan; Zhou, Enlu; Chen, Chun-Hung

    2012-01-01

    In many applications some designs are easier to implement, require less training data and shorter training time, and consume less storage than others. Such designs are called simple designs, and they are usually preferred over complex ones when both perform well. Despite the abundant existing studies on how to find good designs in simulation-based optimization (SBO), there are few studies on finding the simplest good designs. We consider this important problem in this paper and make the following contributions. First, we provide lower bounds for the probabilities of correctly selecting the m simplest designs with top performance, and of selecting the best m such simplest good designs, respectively. Second, we develop two efficient computing budget allocation methods, one to find the m simplest good designs and one to find the best m such designs, and show their asymptotic optimality. Third, we compare the performance of the two methods with equal allocation over 6 academic examples and a smoke detection problem in wireless sensor networks. We hope that this work brings insight into finding the simplest good designs in general. PMID:23687404

  12. Efficient computation of bifurcation diagrams via adaptive ROMs

    NASA Astrophysics Data System (ADS)

    Terragni, F.; Vega, J. M.

    2014-08-01

    Various ideas concerning model reduction based on proper orthogonal decomposition are discussed, exploited, and suited to the approximation of complex bifurcations in some dissipative systems. The observation that the most energetic modes involved in these low dimensional descriptions depend only weakly on the actual values of the problem parameters is firstly highlighted and used to develop a simple strategy to capture the transitions occurring over a given bifurcation parameter span. Flexibility of the approach is stressed by means of some numerical experiments. A significant improvement is obtained by introducing a truncation error estimate to detect when the approximation fails. Thus, the considered modes are suitably updated on demand, as the bifurcation parameter is varied, in order to account for possible changes in the phase space of the system that might be missed. A further extension of the method to more complex (quasi-periodic and chaotic) attractors is finally outlined by implementing a control of truncation instabilities, which leads to a general, adaptive reduced order model for the construction of bifurcation diagrams. Illustration of the ideas and methods in the complex Ginzburg-Landau equation (a paradigm of laminar flows on a bounded domain) evidences a fairly good computational efficiency.
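
    A minimal sketch of the two building blocks, proper orthogonal decomposition of a snapshot matrix plus a truncation error estimate that can trigger a basis update, is given below. It is our illustration of the general idea, not the paper's adaptive algorithm; the synthetic snapshot data are invented.

```python
import numpy as np

# POD via SVD of a snapshot matrix whose columns are solution snapshots.
rng = np.random.default_rng(2)
modes_true = rng.normal(size=(200, 5))                 # hidden low-rank structure
snapshots = modes_true @ rng.normal(size=(5, 40))
snapshots += 1e-8 * rng.normal(size=snapshots.shape)   # small noise

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.9999)) + 1           # modes for 99.99% energy
basis = U[:, :r]                                       # reduced-order basis

def truncation_error(x):
    """Norm of the component of a state x outside the current POD basis;
    a large value signals that the basis should be updated on demand."""
    return np.linalg.norm(x - basis @ (basis.T @ x))

print(r)                                   # 5, the hidden rank
print(truncation_error(snapshots[:, 0]))   # ~0: snapshot is well captured
```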

  13. The Effect of Computer Automation on Institutional Review Board (IRB) Office Efficiency

    ERIC Educational Resources Information Center

    Oder, Karl; Pittman, Stephanie

    2015-01-01

    Companies purchase computer systems to make their processes more efficient through automation. Some academic medical centers (AMC) have purchased computer systems for their institutional review boards (IRB) to increase efficiency and compliance with regulations. IRB computer systems are expensive to purchase, deploy, and maintain. An AMC should…

  14. An Algorithm for Projecting Points onto a Patched CAD Model

    SciTech Connect

    Henshaw, W D

    2001-05-29

    We are interested in building structured overlapping grids for geometries defined by computer-aided-design (CAD) packages. Geometric information defining the boundary surfaces of a computational domain is often provided in the form of a collection of possibly hundreds of trimmed patches. The first step in building an overlapping volume grid on such a geometry is to build overlapping surface grids. A surface grid is typically built using hyperbolic grid generation; starting from a curve on the surface, a grid is grown by marching over the surface. A given hyperbolic grid will typically cover many of the underlying CAD surface patches. The fundamental operation needed for building surface grids is that of projecting a point in space onto the closest point on the CAD surface. We describe a fast algorithm for performing this projection; it makes use of a fairly coarse global triangulation of the CAD geometry. We describe how to build this global triangulation by first determining the connectivity of the CAD surface patches. This step is necessary since it is often the case that the CAD description contains no information specifying how a given patch connects to other neighboring patches. Determining the connectivity is difficult since the surface patches may contain mistakes such as gaps or overlaps between neighboring patches.
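
    The fundamental operation, projecting a point onto the closest point of a triangulated surface, can be sketched as below. The closest-point-on-triangle routine follows the standard Voronoi-region construction (e.g., Ericson, Real-Time Collision Detection); the brute-force loop over triangles stands in for the paper's accelerated search through the coarse global triangulation.

```python
import numpy as np

def closest_point_on_triangle(p, a, b, c):
    """Closest point to p on triangle (a, b, c), by Voronoi-region tests."""
    ab, ac, ap = b - a, c - a, p - a
    d1, d2 = ab @ ap, ac @ ap
    if d1 <= 0 and d2 <= 0:
        return a                                  # vertex region A
    bp = p - b
    d3, d4 = ab @ bp, ac @ bp
    if d3 >= 0 and d4 <= d3:
        return b                                  # vertex region B
    vc = d1 * d4 - d3 * d2
    if vc <= 0 and d1 >= 0 and d3 <= 0:
        return a + ab * (d1 / (d1 - d3))          # edge region AB
    cp = p - c
    d5, d6 = ab @ cp, ac @ cp
    if d6 >= 0 and d5 <= d6:
        return c                                  # vertex region C
    vb = d5 * d2 - d1 * d6
    if vb <= 0 and d2 >= 0 and d6 <= 0:
        return a + ac * (d2 / (d2 - d6))          # edge region AC
    va = d3 * d6 - d5 * d4
    if va <= 0 and (d4 - d3) >= 0 and (d5 - d6) >= 0:
        return b + (c - b) * ((d4 - d3) / ((d4 - d3) + (d5 - d6)))  # edge BC
    denom = 1.0 / (va + vb + vc)
    return a + ab * (vb * denom) + ac * (vc * denom)  # interior of the face

def project_to_surface(p, vertices, triangles):
    """Brute-force projection onto a triangulated surface; a coarse global
    triangulation lets a real implementation narrow this search drastically."""
    best, best_d = None, np.inf
    for i, j, k in triangles:
        q = closest_point_on_triangle(p, vertices[i], vertices[j], vertices[k])
        d = np.linalg.norm(p - q)
        if d < best_d:
            best, best_d = q, d
    return best
```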

  15. Building Efficient Wireless Infrastructures for Pervasive Computing Environments

    ERIC Educational Resources Information Center

    Sheng, Bo

    2010-01-01

    Pervasive computing is an emerging concept that thoroughly brings computing devices and the consequent technology into people's daily life and activities. Most of these computing devices are very small, sometimes even "invisible", and often embedded into the objects surrounding people. In addition, these devices usually are not isolated, but…

  17. Balancing Accuracy and Computational Efficiency for Ternary Gas Hydrate Systems

    NASA Astrophysics Data System (ADS)

    White, M. D.

    2011-12-01

    phase transitions. This paper describes and demonstrates a numerical solution scheme for ternary hydrate systems that seeks a balance between accuracy and computational efficiency. This scheme uses a generalized cubic equation of state, functional forms for the hydrate equilibria and cage occupancies, a variable switching scheme for phase transitions, and kinetic exchange of hydrate formers (i.e., CH4, CO2, and N2) between the mobile phases (i.e., aqueous, liquid CO2, and gas) and the hydrate phase. Accuracy of the scheme will be evaluated by comparing property values and phase equilibria against experimental data. Computational efficiency of the scheme will be evaluated by comparing the base scheme against variants. The application of interest will be the production of a natural gas hydrate deposit from a geologic formation using the guest-molecule exchange process, where a mixture of CO2 and N2 is injected into the formation. During the guest-molecule exchange, CO2 and N2 will predominantly replace CH4 in the large and small cages of the sI structure, respectively.
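
    As an illustration of one ingredient, a generalized cubic equation of state, the sketch below solves the Peng-Robinson cubic for the compressibility factor. The paper does not specify this particular EOS, so the choice, the code, and the CO2 constants are our assumptions.

```python
import numpy as np

R = 8.314462618  # universal gas constant, J/(mol K)

def peng_robinson_Z(T, P, Tc, Pc, omega):
    """Real roots Z of the Peng-Robinson cubic EOS at temperature T (K) and
    pressure P (Pa), given critical constants and acentric factor omega."""
    a = 0.45724 * R**2 * Tc**2 / Pc
    b = 0.07780 * R * Tc / Pc
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc)))**2
    A = a * alpha * P / (R * T)**2
    B = b * P / (R * T)
    # Z^3 - (1 - B) Z^2 + (A - 3B^2 - 2B) Z - (AB - B^2 - B^3) = 0
    coeffs = [1.0, -(1.0 - B), A - 3*B**2 - 2*B, -(A*B - B**2 - B**3)]
    return sorted(r.real for r in np.roots(coeffs) if abs(r.imag) < 1e-10)

# CO2 at 280 K and 4 MPa (Tc = 304.13 K, Pc = 7.377 MPa, omega = 0.224):
print(peng_robinson_Z(280.0, 4.0e6, 304.13, 7.377e6, 0.224))
```

    Multiple real roots correspond to coexisting liquid-like and vapor-like solutions, which is exactly the kind of phase information a variable-switching scheme must track across transitions.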

  18. Schools (Students) Exchanging CAD/CAM Files over the Internet.

    ERIC Educational Resources Information Center

    Mahoney, Gary S.; Smallwood, James E.

    This document discusses how students and schools can benefit from exchanging computer-aided design/computer-aided manufacturing (CAD/CAM) files over the Internet, explains how files are exchanged, and examines the problem of selected hardware/software incompatibility. Key terms associated with information search services are defined, and several…

  19. Grayscale optical correlator for CAD/CAC applications

    NASA Astrophysics Data System (ADS)

    Chao, Tien-Hsin; Lu, Thomas

    2008-03-01

    This paper describes JPL's recent work on a high-performance automatic target recognition (ATR) processor consisting of a Grayscale Optical Correlator (GOC) and a neural network for various Computer Aided Detection and Computer Aided Classification (CAD/CAC) applications. A simulation study for sonar mine and mine-like target detection and classification is presented. Application to periscope video ATR is also presented.

  20. Computationally efficient models of neuromuscular recruitment and mechanics

    NASA Astrophysics Data System (ADS)

    Song, D.; Raphael, G.; Lan, N.; Loeb, G. E.

    2008-06-01

    We have improved the stability and computational efficiency of a physiologically realistic, virtual muscle (VM 3.*) model (Cheng et al 2000 J. Neurosci. Methods 101 117-30) by a simpler structure of lumped fiber types and a novel recruitment algorithm. In the new version (VM 4.0), the mathematical equations are reformulated into state-space representation and structured into a CMEX S-function in SIMULINK. A continuous recruitment scheme approximates the discrete recruitment of slow and fast motor units under physiological conditions. This makes it possible to predict force output during smooth recruitment and derecruitment without having to simulate explicitly a large number of independently recruited units. We removed the intermediate state variable, effective length (Leff), which had been introduced to model the delayed length dependency of the activation-frequency relationship, but which had little effect and could introduce instability under physiological conditions of use. Both of these changes greatly reduce the number of state variables with little loss of accuracy compared to the original VM. The performance of VM 4.0 was validated by comparison with VM 3.1.5 for both single-muscle force production and a multi-joint task. The improved VM 4.0 model is more suitable for the analysis of neural control of movements and for design of prosthetic systems to restore lost or impaired motor functions. VM 4.0 is available via the internet and includes options to use the original VM model, which remains useful for detailed simulations of single motor unit behavior.
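
    The idea of continuous recruitment, a smooth fraction of each lumped fiber type recruited as drive increases rather than many discrete motor units switching on one by one, can be caricatured in a few lines. This toy is not VM 4.0's formulation; the thresholds and weights are invented.

```python
import numpy as np

def total_activation(u, u_thresh=(0.1, 0.5), weights=(0.4, 0.6)):
    """Toy continuous recruitment: normalized drive u in [0, 1] smoothly
    recruits slow (low-threshold) then fast (high-threshold) lumped fiber
    groups, replacing the stepwise recruitment of many discrete units."""
    act = 0.0
    for thr, w in zip(u_thresh, weights):
        frac = np.clip((u - thr) / (1.0 - thr), 0.0, 1.0)  # fraction recruited
        act += w * frac
    return act

for u in (0.0, 0.3, 0.6, 1.0):
    print(u, round(total_activation(u), 3))  # rises smoothly from 0 to 1
```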

  1. Computer Aided Drafting. Instructor's Guide.

    ERIC Educational Resources Information Center

    Henry, Michael A.

    This guide is intended for use in introducing students to the operation and applications of computer-aided drafting (CAD) systems. The following topics are covered in the individual lessons: understanding CAD (CAD versus traditional manual drafting and care of software and hardware); using the components of a CAD system (primary and other input…

  2. Computer Aided Design of Computer Generated Holograms for electron beam fabrication

    NASA Technical Reports Server (NTRS)

    Urquhart, Kristopher S.; Lee, Sing H.; Guest, Clark C.; Feldman, Michael R.; Farhoosh, Hamid

    1989-01-01

    Computer Aided Design (CAD) systems that have been developed for electrical and mechanical design tasks are also effective tools for the process of designing Computer Generated Holograms (CGHs), particularly when these holograms are to be fabricated using electron beam lithography. CAD workstations provide efficient and convenient means of computing, storing, displaying, and preparing for fabrication many of the features that are common to CGH designs. Experience gained in the process of designing CGHs with various types of encoding methods is presented. Suggestions are made so that future workstations may further accommodate the CGH design process.

  4. A Computationally Efficient Multicomponent Equilibrium Solver for Aerosols (MESA)

    SciTech Connect

    Zaveri, Rahul A.; Easter, Richard C.; Peters, Len K.

    2005-12-23

    deliquescence points as well as mass growth factors for the sulfate-rich systems. The MESA-MTEM configuration required only 5 to 10 single-level iterations to obtain the equilibrium solution for ~44% of the 328 multiphase problems solved in the 16 test cases at RH values ranging between 20% and 90%, while ~85% of the problems solved required less than 20 iterations. Based on the accuracy and computational efficiency considerations, the MESA-MTEM configuration is attractive for use in 3-D aerosol/air quality models.

  5. The use of CAD/CAM in dentistry.

    PubMed

    Davidowitz, Gary; Kotick, Philip G

    2011-07-01

    Computer-aided design (CAD) and computer-aided manufacturing (CAM) have become an increasingly popular part of dentistry over the past 25 years. The technology, which is used in both the dental laboratory and the dental office, can be applied to inlays, onlays, veneers, crowns, fixed partial dentures, implant abutments, and even full-mouth reconstruction. This article discusses the history of CAD/CAM in dentistry and gives an overview of how it works. It also provides information on the advantages and disadvantages, describes the main products available, discusses how to incorporate the new technology into your practice, and addresses future applications. Copyright © 2011 Elsevier Inc. All rights reserved.

  6. CAD/CAM generated all-ceramic primary telescopic prostheses.

    PubMed

    Kurbad, A; Ganz, S; Kurbad, S

    2012-01-01

    Computer-aided design and manufacturing (CAD/CAM) systems have proven effective not only for the manufacture of crown and bridge frameworks, inlays, onlays and veneers, but also for the generation of all-ceramic primary telescopic prostheses in more than 10 years of use in dental technology. The new InLab 4.0 software generation makes it possible to design and mill primary telescopic prostheses with CAD/CAM technology. The computer-generated raw crowns for these restorations require very little manual adaptation. The secondary crowns are manufactured by electroforming and bonded onto the tertiary structure or framework.

  7. Productivity improvements through the use of CAD/CAM

    NASA Astrophysics Data System (ADS)

    Wehrman, M. D.

    This paper focuses on Computer Aided Design/Computer Aided Manufacturing (CAD/CAM) productivity improvements that occurred in the Boeing Commercial Airplane Company (BCAC) between 1979 and 1983, with a look at future direction. Since the introduction of numerically controlled machinery in the 1950s, a wide range of engineering and manufacturing applications has evolved. The main portion of this paper includes a summarized and illustrated cross-section of these applications, touching on benefits such as reduced tooling, shortened flow time, increased accuracy, and reduced labor hours. The current CAD/CAM integration activity, directed toward capitalizing on this productivity in the future, is addressed.

  8. CAD/CAM: improved design quality, increased productivity

    SciTech Connect

    Evans, D. E.; England, J.

    1980-01-01

    Maintaining productivity levels while assuring the quality of engineering products grows increasingly more difficult and costly for industries such as the energy industry which are heavily committed to product design. The man/machine interface made possible through the development of computer-aided design/computer-aided manufacturing (CAD/CAM) technology can be applied to the design process as a tool for increased control to assure the quality of the final engineering product. The quality-control aspects of CAD/CAM technology are addressed in this presentation.

  9. Experiences with Efficient Methodologies for Teaching Computer Programming to Geoscientists

    ERIC Educational Resources Information Center

    Jacobs, Christian T.; Gorman, Gerard J.; Rees, Huw E.; Craig, Lorraine E.

    2016-01-01

    Computer programming was once thought of as a skill required only by professional software developers. But today, given the ubiquitous nature of computation and data science, it is quickly becoming necessary for all scientists and engineers to have at least a basic knowledge of how to program. Teaching how to program, particularly to those students…

  10. Efficient computation of spatial eigenvalues for hydrodynamic stability analysis

    NASA Technical Reports Server (NTRS)

    Khorrami, Mehdi R.; Malik, Mujeeb R.

    1993-01-01

    The simple procedure presented for spatial stability computations can substantially reduce the computational requirements of such analyses, as illustrated for both internal and external cases of compressible and incompressible flows, and for both viscous and inviscid instability modes. Excellent estimates of spatial eigenvalues are obtained.

  12. False positive reduction for lung nodule CAD

    NASA Astrophysics Data System (ADS)

    Zhao, Luyin; Boroczky, Lilla; Drysdale, Jeremy; Agnihotri, Lalitha; Lee, Michael C.

    2007-03-01

    Computer-aided detection (CAD) algorithms 'automatically' identify lung nodules on thoracic multi-slice CT scans (MSCT), thereby providing physicians with a computer-generated 'second opinion'. While CAD systems can achieve high sensitivity, their limited specificity has hindered clinical acceptance. To overcome this problem, we propose a false positive reduction (FPR) system based on image processing and machine learning to reduce the number of false positive lung nodules identified by CAD algorithms and thereby improve system specificity. To discriminate between true and false nodules, twenty-three 3D features were calculated from each candidate nodule's volume of interest (VOI). A genetic algorithm (GA) and support vector machine (SVM) were then used to select an optimal subset of features from this pool of candidate features. Using this feature subset, we trained an SVM classifier to eliminate as many false positives as possible while retaining all the true nodules. To overcome the imbalanced nature of typical datasets (significantly more false positives than true positives), an intelligent data selection algorithm was designed and integrated into the machine learning framework, further improving the FPR rate. Three independent datasets were used to train and validate the system. Using two datasets for training and the third for validation, we achieved a 59.4% FPR rate while removing one true nodule on the validation datasets. In a second experiment, 75% of the cases were randomly selected from each of the three datasets and the remaining cases were used for validation. Similar FPR and true positive retention rates were achieved. Additional experiments showed that the GA feature selection process integrated with the proposed data selection algorithm outperforms the process without it by 5%-10% in FPR rate. The proposed methods can also be applied to other areas, such as computer-aided diagnosis of lung nodules.
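
    As a rough illustration of the SVM-with-feature-selection idea described above, the sketch below pairs scikit-learn's SVC with a tiny genetic-style search over binary feature masks. The feature matrix, labels, population size, and mutation rate are all hypothetical stand-ins; the authors' exact GA, 3D features, and data selection step are not reproduced.

      # Hypothetical data: X (candidates x features), y (1 = true nodule,
      # 0 = false positive). A toy genetic-style search over feature masks.
      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X = rng.normal(size=(500, 23))            # 23 features per candidate VOI
      y = ((X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500)) > 0.5).astype(int)

      def fitness(mask):
          """Cross-validated AUC of an SVM restricted to the masked features."""
          if not mask.any():
              return 0.0
          clf = SVC(kernel="rbf", gamma="scale")
          return cross_val_score(clf, X[:, mask], y, cv=3, scoring="roc_auc").mean()

      # Tiny generational search over binary feature masks.
      pop = rng.random((12, X.shape[1])) < 0.5
      for generation in range(10):
          scores = np.array([fitness(m) for m in pop])
          parents = pop[np.argsort(scores)[-4:]]        # keep the best masks
          children = parents[rng.integers(0, 4, size=8)].copy()
          children ^= rng.random(children.shape) < 0.1  # mutation
          pop = np.vstack([parents, children])

      best = pop[np.argmax([fitness(m) for m in pop])]
      print("selected features:", np.flatnonzero(best))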

  13. Energy efficient computing exploiting the properties of light

    NASA Astrophysics Data System (ADS)

    Shamir, Joseph

    2013-09-01

    Significant reduction of energy dissipation in computing can be achieved by addressing the theoretical lower limit of energy consumption and replacing arrays of traditional Boolean logic gates with other methods of implementing logic operations. In particular, a slight modification of the concept of computing allows the incorporation of fundamentally lossless optical processes as part of the computing operation. While the new concepts introduced here can be implemented electronically or by other means, using optics also eliminates the energy dissipation involved in the translation of electric charges. A possible realization of the indicated concepts is based on directed logic networks composed of reversible optical logic gate arrays.

  14. Computationally efficient algorithms for real-time attitude estimation

    NASA Technical Reports Server (NTRS)

    Pringle, Steven R.

    1993-01-01

    For many practical spacecraft applications, algorithms for determining spacecraft attitude must combine inputs from diverse sensors and provide redundancy in the event of sensor failure. A Kalman filter is suitable for this task; however, it may impose a computational burden that can be avoided by suboptimal methods. A suboptimal estimator is presented which was implemented successfully on the Delta Star spacecraft, which performed a 9-month SDI flight experiment in 1989. This design sought to minimize algorithm complexity to accommodate the limitations of an 8K guidance computer. The algorithm used is interpreted in the framework of Kalman filtering, and a derivation is given for the computation.
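
    A minimal sketch of the underlying trade-off, assuming a toy single-angle model rather than the Delta Star design: a full Kalman filter updates its gain and covariance every step, while a fixed-gain (steady-state) filter drops that bookkeeping at some cost in optimality.

      # Toy single-angle model: full Kalman filter vs. a fixed-gain filter.
      import numpy as np

      rng = np.random.default_rng(1)
      dt, q, r = 0.1, 1e-4, 1e-2            # step, process and measurement noise
      truth, gyro_bias = 0.0, 0.02          # assumed values for illustration

      def kalman_step(x, P, rate_meas, angle_meas):
          x_pred = x + dt * rate_meas       # propagate with the gyro
          P_pred = P + q
          K = P_pred / (P_pred + r)         # time-varying optimal gain
          return x_pred + K * (angle_meas - x_pred), (1 - K) * P_pred

      K_FIXED = 0.1                         # precomputed steady-state gain
      def fixed_gain_step(x, rate_meas, angle_meas):
          x_pred = x + dt * rate_meas
          return x_pred + K_FIXED * (angle_meas - x_pred)

      xk, P, xf = 0.0, 1.0, 0.0
      for _ in range(200):
          truth += dt * 0.05
          rate = 0.05 + gyro_bias + rng.normal(scale=0.01)
          angle = truth + rng.normal(scale=np.sqrt(r))
          xk, P = kalman_step(xk, P, rate, angle)
          xf = fixed_gain_step(xf, rate, angle)
      print(f"kalman err={abs(xk - truth):.4f}  fixed-gain err={abs(xf - truth):.4f}")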

  15. Computer-Aided Apparel Design in University Curricula.

    ERIC Educational Resources Information Center

    Belleau, Bonnie D.; Bourgeois, Elva B.

    1991-01-01

    As computer-assisted design (CAD) becomes an integral part of the fashion industry, universities must integrate CAD into the apparel curriculum. Louisiana State University's curriculum enables students to collaborate in CAD problem solving with industry personnel. (SK)

  17. Efficient computation of root mean square deviations under rigid transformations.

    PubMed

    Hildebrandt, Anna K; Dietzen, Matthias; Lengauer, Thomas; Lenhof, Hans-Peter; Althaus, Ernst; Hildebrandt, Andreas

    2014-04-15

    The computation of root mean square deviations (RMSD) is an important step in many bioinformatics applications. If approached naively, each RMSD computation takes time linear in the number of atoms. In addition, a careful implementation is required to achieve numerical stability, which further increases runtimes. In practice, the structural variations under consideration are often induced by rigid transformations of the protein, or are at least dominated by a rigid component. In this work, we show how RMSD values resulting from rigid transformations can be computed in constant time from the protein's covariance matrix, which can be precomputed in linear time. As a typical application scenario is protein clustering, we also show how the Ward distance, which is popular in this field, can be reduced to RMSD evaluations, yielding a constant-time approach for its computation. Copyright © 2014 Wiley Periodicals, Inc.
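
    A sketch of the constant-time evaluation, reconstructed from the abstract rather than taken from the paper's code. Expanding RMSD^2 = (1/n) Σ ||R x_i + t - x_i||^2 gives tr((R - I) M (R - I)^T) + 2 t·((R - I) xbar) + ||t||^2, where xbar is the mean coordinate and M = (1/n) Σ x_i x_i^T is the second-moment matrix precomputed in linear time:

      # Constant-time RMSD under a rigid transform x -> R x + t, given the
      # precomputed mean xbar and second-moment matrix M of the structure.
      import numpy as np

      def precompute(X):
          """X: (n, 3) atom coordinates. O(n), done once per structure."""
          return X.mean(axis=0), X.T @ X / len(X)

      def rmsd_rigid(xbar, M, R, t):
          """O(1) per query, independent of the number of atoms."""
          A = R - np.eye(3)
          return np.sqrt(np.trace(A @ M @ A.T) + 2.0 * t @ (A @ xbar) + t @ t)

      rng = np.random.default_rng(2)
      X = rng.normal(size=(1000, 3))
      theta = 0.3
      R = np.array([[np.cos(theta), -np.sin(theta), 0],
                    [np.sin(theta),  np.cos(theta), 0],
                    [0, 0, 1]])
      t = np.array([0.5, -0.2, 0.1])

      xbar, M = precompute(X)
      fast = rmsd_rigid(xbar, M, R, t)
      naive = np.sqrt(((X @ R.T + t - X) ** 2).sum(axis=1).mean())
      print(fast, naive)                    # agree to rounding error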

  18. Efficient reinforcement learning: computational theories, neuroscience and robotics.

    PubMed

    Kawato, Mitsuo; Samejima, Kazuyuki

    2007-04-01

    Reinforcement learning algorithms have provided some of the most influential computational theories for behavioral learning that depends on reward and penalty. After briefly reviewing supporting experimental data, this paper tackles three difficult theoretical issues that remain to be explored. First, plain reinforcement learning is much too slow to be considered a plausible brain model. Second, although the temporal-difference error has an important role both in theory and in experiments, how to compute it remains an enigma. Third, the function of all brain areas, including the cerebral cortex, cerebellum, brainstem and basal ganglia, seems to necessitate a new computational framework. Computational studies that emphasize meta-parameters, hierarchy, modularity and supervised learning to resolve these issues are reviewed here, together with the related experimental data.

  19. Mapping methods for computationally efficient and accurate structural reliability

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Chamis, Christos C.

    1992-01-01

    Mapping methods are developed to improve the accuracy and efficiency of probabilistic structural analyses with coarse finite element meshes. The mapping methods consist of the following: (1) deterministic structural analyses with fine (convergent) finite element meshes; (2) probabilistic structural analyses with coarse finite element meshes; (3) the relationship between the probabilistic structural responses from the coarse and fine finite element meshes; and (4) a probabilistic mapping. The results show that the scatter in the probabilistic structural responses and structural reliability can be efficiently predicted using a coarse finite element model and proper mapping methods with good accuracy. Therefore, large structures can be efficiently analyzed probabilistically using finite element methods.

  20. Reader studies for validation of CAD systems.

    PubMed

    Gallas, Brandon D; Brown, David G

    2008-01-01

    Evaluation of computational intelligence (CI) systems designed to improve the performance of a human operator is complicated by the need to include the effect of human variability. In this paper we consider human (reader) variability in the context of medical imaging computer-assisted diagnosis (CAD) systems, and we outline how to compare the detection performance of readers with and without the CAD. An effective and statistically powerful comparison can be accomplished with a receiver operating characteristic (ROC) experiment, summarized by the reader-averaged area under the ROC curve (AUC). The comparison requires sophisticated yet well-developed methods for multi-reader multi-case (MRMC) variance analysis. MRMC variance analysis accounts for random readers, random cases, and correlations in the experiment. In this paper, we extend the methods available for estimating this variability. Specifically, we present a method that can treat arbitrary study designs. Most methods treat only the fully-crossed study design, where every reader reads every case in two experimental conditions. We demonstrate our method with a computer simulation, and we assess the statistical power of a variety of study designs.
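
    A minimal sketch of the reader-averaged AUC figure of merit on simulated ratings; the reader count, score model, and the assumption that CAD improves discrimination are all hypothetical, and the MRMC variance analysis itself is not shown.

      # Reader-averaged AUC for two reading conditions on simulated scores.
      import numpy as np
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(3)
      n_readers, n_cases = 5, 100
      truth = rng.integers(0, 2, size=n_cases)          # 1 = diseased case

      def reader_scores(boost):
          """Hypothetical confidence-of-disease ratings per reader and case."""
          return truth[None, :] * boost + rng.normal(size=(n_readers, n_cases))

      without_cad = reader_scores(boost=1.0)
      with_cad = reader_scores(boost=1.4)               # assume CAD helps

      auc_without = np.mean([roc_auc_score(truth, s) for s in without_cad])
      auc_with = np.mean([roc_auc_score(truth, s) for s in with_cad])
      print(f"reader-averaged AUC: {auc_without:.3f} -> {auc_with:.3f}")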

  1. CAD data exchange with Martin Marietta Energy Systems, Inc., Oak Ridge, TN

    SciTech Connect

    Smith, K.L.

    1994-10-01

    This document has been developed to provide guidance in the interchange of electronic CAD data with Martin Marietta Energy Systems, Inc., Oak Ridge, Tennessee. It is not meant to be as comprehensive as the existing standards and specifications, but to provide a minimum set of practices that will enhance the success of the CAD data exchange. It is now a Department of Energy (DOE) Oak Ridge Field Office requirement that Architect-Engineering (A-E) firms prepare all new drawings using a Computer Aided Design (CAD) system that is compatible with the Facility Manager's (FM) CAD system. For Oak Ridge facilities, the CAD system used for facility design by the FM, Martin Marietta Energy Systems, Inc., is Intergraph. The format for interchange of CAD data for Oak Ridge facilities will be the Intergraph MicroStation/IGDS format.

  2. Limits on efficient computation in the physical world

    NASA Astrophysics Data System (ADS)

    Aaronson, Scott Joel

    More than a speculative technology, quantum computing seems to challenge our most basic intuitions about how the physical world should behave. In this thesis I show that, while some intuitions from classical computer science must be jettisoned in the light of modern physics, many others emerge nearly unscathed; and I use powerful tools from computational complexity theory to help determine which are which. In the first part of the thesis, I attack the common belief that quantum computing resembles classical exponential parallelism, by showing that quantum computers would face serious limitations on a wider range of problems than was previously known. In particular, any quantum algorithm that solves the collision problem---that of deciding whether a sequence of n integers is one-to-one or two-to-one---must query the sequence Ω(n^(1/5)) times. This resolves a question that was open for years; previously no lower bound better than constant was known. A corollary is that there is no "black-box" quantum algorithm to break cryptographic hash functions or solve the Graph Isomorphism problem in polynomial time. I also show that relative to an oracle, quantum computers could not solve NP-complete problems in polynomial time, even with the help of nonuniform "quantum advice states"; and that any quantum algorithm needs Ω(2^(n/4)/n) queries to find a local minimum of a black-box function on the n-dimensional hypercube. Surprisingly, the latter result also leads to new classical lower bounds for the local search problem. Finally, I give new lower bounds on quantum one-way communication complexity, and on the quantum query complexity of total Boolean functions and recursive Fourier sampling. The second part of the thesis studies the relationship of the quantum computing model to physical reality. I first examine the arguments of Leonid Levin, Stephen Wolfram, and others who believe quantum computing to be fundamentally impossible. I find their arguments unconvincing without a "Sure

  3. 3D-CAD Effects on Creative Design Performance of Different Spatial Abilities Students

    ERIC Educational Resources Information Center

    Chang, Y.

    2014-01-01

    Students' creativity is an important focus globally and is interrelated with students' spatial abilities. Additionally, three-dimensional computer-assisted drawing (3D-CAD) overcomes barriers to spatial expression during the creative design process. Does 3D-CAD affect students' creative abilities? The purpose of this study was to explore the…

  4. The Use of a Parametric Feature Based CAD System to Teach Introductory Engineering Graphics.

    ERIC Educational Resources Information Center

    Howell, Steven K.

    1995-01-01

    Describes the use of a parametric-feature-based computer-aided design (CAD) System, AutoCAD Designer, in teaching concepts of three dimensional geometrical modeling and design. Allows engineering graphics to go beyond the role of documentation and communication and allows an engineer to actually build a virtual prototype of a design idea and…

  6. Vocational "CAD" Education at the Indian Valley Vocational Center, Sandwich, Illinois.

    ERIC Educational Resources Information Center

    Schwendau, Mark

    Indian Valley Vocational Center (IVVC) has implemented the first vocational computer-aided design (CAD) program in the state of Illinois. The CADAPPLE turnkey system is used by students to do drafting problems assigned from their self-paced workbook "CAD-Tutor." A buddy system allows students to alternate weeks as operators and…

  7. Teaching an Introductory CAD Course with the System-Engineering Approach.

    ERIC Educational Resources Information Center

    Pao, Y. C.

    1985-01-01

    Advocates that introductory computer aided design (CAD) courses be incorporated into engineering curricula in close conjunction with the system dynamics course. Block diagram manipulation/Bode analysis and finite element analysis are used as examples to illustrate the interdisciplinary nature of CAD teaching. (JN)

  8. The fabrication of a customized occlusal splint based on the merging of dynamic jaw tracking records, cone beam computed tomography, and CAD-CAM digital impression

    PubMed Central

    Aslanidou, Katerina; Kau, Chung How; Vlachos, Christos; Saleh, Tayem Abou

    2017-01-01

    OBJECTIVES: The aim of this case report was to present the procedure of fabricating a customized occlusal splint through revolutionary software that combines cone beam computed tomography (CBCT) with jaw motion tracking (JMT) data and superimposes a digital impression. MATERIALS AND METHODS: The case report was conducted on a 46-year-old female patient diagnosed with a temporomandibular disorder. A CBCT scan and an optical impression were obtained. The range of the patient's mandibular movements was captured with a JMT device. The data were combined in the SICAT software (SICAT, Sirona, Bonn, Germany). RESULTS: The software enabled the visualization of patient-specific mandibular movements and provided a real dynamic anatomical evaluation of the condylar position in the glenoid fossa. After the assessment of the range of movements during opening, protrusion, and lateral movements, all the data were sent to SICAT and a customized occlusal splint was manufactured. CONCLUSIONS: The SICAT software provides a three-dimensional real-dynamic simulation of mandibular movements relative to the patient-specific anatomy of the jaw; thus, it opens new possibilities and potentials for the management of temporomandibular disorders. PMID:28717635

  9. The fabrication of a customized occlusal splint based on the merging of dynamic jaw tracking records, cone beam computed tomography, and CAD-CAM digital impression.

    PubMed

    Aslanidou, Katerina; Kau, Chung How; Vlachos, Christos; Saleh, Tayem Abou

    2017-01-01

    The aim of this case report was to present the procedure of fabricating a customized occlusal splint through revolutionary software that combines cone beam computed tomography (CBCT) with jaw motion tracking (JMT) data and superimposes a digital impression. The case report was conducted on a 46-year-old female patient diagnosed with a temporomandibular disorder. A CBCT scan and an optical impression were obtained. The range of the patient's mandibular movements was captured with a JMT device. The data were combined in the SICAT software (SICAT, Sirona, Bonn, Germany). The software enabled the visualization of patient-specific mandibular movements and provided a real dynamic anatomical evaluation of the condylar position in the glenoid fossa. After the assessment of the range of movements during opening, protrusion, and lateral movements, all the data were sent to SICAT and a customized occlusal splint was manufactured. The SICAT software provides a three-dimensional real-dynamic simulation of mandibular movements relative to the patient-specific anatomy of the jaw; thus, it opens new possibilities and potentials for the management of temporomandibular disorders.

  10. [Computer-assisted orbital floor reconstruction. Use of a CAD/CAM implant with intraoperative contact-free 3D endo- and exophthalmometry].

    PubMed

    Kühnel, T V; Vairaktaris, E; Alexiou, C; Schlegel, K A; Neukam, F W; Nkenke, E

    2008-11-01

    Pronounced enophthalmos can restrict patients both functionally and aesthetically. Typical symptoms are double vision in both eyes and obvious asymmetry, both of which were present in the 67-year-old male patient presented in this paper. Computed tomography data were used to fabricate a patient-specific ceramic implant for reconstruction of the left orbital floor, where an enophthalmos of 4 mm was present. During surgery the implant fit anatomically correctly, but exophthalmos occurred. The implant had to be ground down and recontoured in its dorsal portion so that the overcorrection could be reduced. With the assistance of optical 3D endo- and exophthalmometry during surgery, the position of the corneal vertex was reproducibly measured. At the end of surgery, the exophthalmos was 1.5 mm. After 12 months, an enophthalmos of only 1 mm remains. This case demonstrates the combination of a patient-specific implant for reconstruction of the orbital floor with optical 3D endo- and exophthalmometry to correct enophthalmos with a high degree of accuracy. These two techniques should therefore be used in combination when complex corrections of enophthalmos are needed.

  11. Methods for Computationally Efficient Structured CFD Simulations of Complex Turbomachinery Flows

    NASA Technical Reports Server (NTRS)

    Herrick, Gregory P.; Chen, Jen-Ping

    2012-01-01

    This research presents more efficient computational methods by which to perform multi-block structured Computational Fluid Dynamics (CFD) simulations of turbomachinery, thus facilitating higher-fidelity solutions of complicated geometries and their associated flows. This computational framework offers flexibility in allocating resources to balance process count and wall-clock computation time, while facilitating research interests of simulating axial compressor stall inception with more complete gridding of the flow passages and rotor tip clearance regions than is typically practiced with structured codes. The paradigm presented herein facilitates CFD simulation of previously impractical geometries and flows. These methods are validated and demonstrate improved computational efficiency when applied to complicated geometries and flows.

  12. Efficient computational simulation of actin stress fiber remodeling.

    PubMed

    Ristori, T; Obbink-Huizer, C; Oomens, C W J; Baaijens, F P T; Loerakker, S

    2016-09-01

    Understanding collagen and stress fiber remodeling is essential for the development of engineered tissues with good functionality. These processes are complex, highly interrelated, and occur over different time scales. As a result, excessive computational costs are required to predict the final organization of these fibers in response to dynamic mechanical conditions. In this study, an analytical approximation of a stress fiber remodeling evolution law was derived. A comparison of the developed technique with direct numerical integration of the evolution law showed relatively small differences in results, and the proposed method is one to two orders of magnitude faster.

  13. Quantum-enhanced Sensing and Efficient Quantum Computation

    DTIC Science & Technology

    2015-07-27

    detectors with multiport interferometers, and used many of the same techniques with nanowire (faster but less efficient) detectors. They demonstrated single... in-situ quantum state engineering. Finally, we note that we have also adapted the CDL to simultaneously operate superconducting nanowire detectors along with TESs. While nanowire detectors have yet to match the efficiency or counting capabilities of TESs, detector response times shortened by a

  14. Efficiency of Computer Literacy Course in Communication Studies

    ERIC Educational Resources Information Center

    Gümüs, Agah; Özad, Bahire Efe

    2004-01-01

    Following the exponential increase in the global usage of the Internet as one of the main tools for communication, the Internet established itself as the fourth most powerful medium. In a similar vein, computer literacy education and related courses established themselves as the essential components of the Faculty of Communication and Media…

  15. An Efficient Virtual Machine Consolidation Scheme for Multimedia Cloud Computing

    PubMed Central

    Han, Guangjie; Que, Wenhui; Jia, Gangyong; Shu, Lei

    2016-01-01

    Cloud computing has innovated the IT industry in recent years, as it can deliver subscription-based services to users in the pay-as-you-go model. Meanwhile, multimedia cloud computing is emerging based on cloud computing to provide a variety of media services on the Internet. However, with the growing popularity of multimedia cloud computing, its large energy consumption can not only contribute to greenhouse gas emissions but also raise cloud users' costs. Therefore, multimedia cloud providers should try to minimize their energy consumption as much as possible while satisfying consumers' resource requirements and guaranteeing quality of service (QoS). In this paper, we have proposed a remaining utilization-aware (RUA) algorithm for virtual machine (VM) placement, and a power-aware (PA) algorithm to find proper hosts to shut down for energy saving. These two algorithms have been combined and applied to cloud data centers to complete the process of VM consolidation. Simulation results have shown that there exists a trade-off between the cloud data center's energy consumption and service-level agreement (SLA) violations. Besides, the RUA algorithm is able to deal with variable workload to prevent hosts from overloading after VM placement and to reduce SLA violations dramatically. PMID:26901201

  16. An Efficient Virtual Machine Consolidation Scheme for Multimedia Cloud Computing.

    PubMed

    Han, Guangjie; Que, Wenhui; Jia, Gangyong; Shu, Lei

    2016-02-18

    Cloud computing has innovated the IT industry in recent years, as it can deliver subscription-based services to users in the pay-as-you-go model. Meanwhile, multimedia cloud computing is emerging based on cloud computing to provide a variety of media services on the Internet. However, with the growing popularity of multimedia cloud computing, its large energy consumption can not only contribute to greenhouse gas emissions but also raise cloud users' costs. Therefore, multimedia cloud providers should try to minimize their energy consumption as much as possible while satisfying consumers' resource requirements and guaranteeing quality of service (QoS). In this paper, we have proposed a remaining utilization-aware (RUA) algorithm for virtual machine (VM) placement, and a power-aware (PA) algorithm to find proper hosts to shut down for energy saving. These two algorithms have been combined and applied to cloud data centers to complete the process of VM consolidation. Simulation results have shown that there exists a trade-off between the cloud data center's energy consumption and service-level agreement (SLA) violations. Besides, the RUA algorithm is able to deal with variable workload to prevent hosts from overloading after VM placement and to reduce SLA violations dramatically.
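
    A sketch in the spirit of a remaining-utilization-aware placement heuristic; the paper's exact RUA scoring and the PA host-shutdown rule are not specified here, and the headroom margin, loads, and capacities are assumed values.

      # Greedy placement: put each VM on the active host whose remaining
      # headroom after placement is smallest but still above a safety margin.
      HEADROOM = 0.15                   # reserve against load spikes (assumed)

      def place_vms(vm_loads, host_capacity, n_hosts):
          hosts = [0.0] * n_hosts       # current utilization per host
          placement = {}
          for vm, load in sorted(enumerate(vm_loads), key=lambda p: -p[1]):
              best, best_left = None, None
              for h in range(n_hosts):
                  left = host_capacity - hosts[h] - load
                  if left >= HEADROOM * host_capacity:      # host stays safe
                      if best is None or left < best_left:  # tightest fit wins
                          best, best_left = h, left
              if best is None:
                  raise RuntimeError("no host can take this VM safely")
              hosts[best] += load
              placement[vm] = best
          # hosts left at zero utilization can be powered down to save energy
          idle = [h for h, u in enumerate(hosts) if u == 0.0]
          return placement, idle

      placement, idle = place_vms([0.3, 0.2, 0.25, 0.1, 0.4], 1.0, 4)
      print(placement, "power down:", idle)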

  17. Automated CD-SEM recipe creation technology for mass production using CAD data

    NASA Astrophysics Data System (ADS)

    Kawahara, Toshikazu; Yoshida, Masamichi; Tanaka, Masashi; Ido, Sanyu; Nakano, Hiroyuki; Adachi, Naokaka; Abe, Yuichi; Nagatomo, Wataru

    2011-03-01

    Critical Dimension Scanning Electron Microscope (CD-SEM) recipe creation requires sample preparation for pattern-matching registration and recipe creation on the CD-SEM using that sample, which hinders the reduction of test production cost and time in semiconductor manufacturing factories. From the perspective of cost reduction and improvement of test production efficiency, automated CD-SEM recipe creation without sample preparation and manual operation has become important in production lines. For automated CD-SEM recipe creation, we have introduced RecipeDirector (RD), which enables recipe creation using Computer-Aided Design (CAD) data and text data that include measurement information. We have developed a system that automatically creates the CAD data and the text data necessary for recipe creation on RD; and, to eliminate manual operation, we have enhanced RD so that all measurement information can be specified in the text data. As a result, we have established an automated CD-SEM recipe creation system without sample preparation or manual operation. For the introduction of the CD-SEM recipe creation system using RD to the production lines, the accuracy of the pattern matching was an issue. The design templates for matching created from the CAD data differed in appearance from the SEM images, so a robust pattern matching algorithm that accounts for this shape difference was needed. Adding image processing of the matching templates and shape processing of the CAD patterns in the lower layer has enabled robust pattern matching. This paper describes the automated CD-SEM recipe creation technology for production lines without sample preparation and manual operation using RD, applied in Sony Semiconductor Kyusyu Corporation Kumamoto Technology Center (SCK Corporation Kumamoto TEC).
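
    The generic counterpart of this matching step is normalized cross-correlation between a rendered CAD template and the SEM image. The sketch below uses OpenCV with hypothetical file names, and a Gaussian blur as a crude stand-in for the paper's shape processing of the CAD template.

      # Template matching of a rendered CAD pattern against an SEM image.
      import cv2

      sem = cv2.imread("sem_image.png", cv2.IMREAD_GRAYSCALE)        # hypothetical file
      template = cv2.imread("cad_template.png", cv2.IMREAD_GRAYSCALE)

      template = cv2.GaussianBlur(template, (9, 9), 2.0)  # soften sharp CAD corners
      result = cv2.matchTemplate(sem, template, cv2.TM_CCOEFF_NORMED)
      _, score, _, loc = cv2.minMaxLoc(result)            # best match and its score
      print(f"match at {loc} with score {score:.2f}")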

  18. A Wheat Cinnamyl Alcohol Dehydrogenase TaCAD12 Contributes to Host Resistance to the Sharp Eyespot Disease.

    PubMed

    Rong, Wei; Luo, Meiying; Shan, Tianlei; Wei, Xuening; Du, Lipu; Xu, Huijun; Zhang, Zengyan

    2016-01-01

    Sharp eyespot, caused mainly by the necrotrophic fungus Rhizoctonia cerealis, is a destructive disease in hexaploid wheat (Triticum aestivum L.). In Arabidopsis, certain cinnamyl alcohol dehydrogenases (CADs) have been implicated in monolignol biosynthesis and in the defense response to bacterial pathogen infection. However, little is known about CADs in wheat defense responses to necrotrophic or soil-borne pathogens. In this study, we isolated a wheat CAD gene, TaCAD12, responsive to R. cerealis infection through microarray-based comparative transcriptomics, and studied the enzyme activity and defense role of TaCAD12 in wheat. The transcriptional levels of TaCAD12 in sharp eyespot-resistant wheat lines were significantly higher than those in susceptible wheat lines. Sequence and phylogenetic analyses revealed that TaCAD12 belongs to group IV of the CAD family. A biochemical assay proved that the TaCAD12 protein is an authentic CAD enzyme and possesses catalytic efficiencies toward both coniferyl aldehyde and sinapyl aldehyde. Knock-down of the TaCAD12 transcript significantly repressed resistance of the gene-silenced wheat plants to sharp eyespot caused by R. cerealis, whereas TaCAD12 overexpression markedly enhanced resistance of the transgenic wheat lines to sharp eyespot. Furthermore, certain defense genes (Defensin, PR10, PR17c, and Chitinase1) and monolignol biosynthesis-related genes (TaCAD1, TaCCR, and TaCOMT1) were up-regulated in the TaCAD12-overexpressing wheat plants but down-regulated in TaCAD12-silenced plants. These results suggest that TaCAD12 positively contributes to resistance against sharp eyespot through regulation of the expression of certain defense genes and monolignol biosynthesis-related genes in wheat.

  19. A Wheat Cinnamyl Alcohol Dehydrogenase TaCAD12 Contributes to Host Resistance to the Sharp Eyespot Disease

    PubMed Central

    Rong, Wei; Luo, Meiying; Shan, Tianlei; Wei, Xuening; Du, Lipu; Xu, Huijun; Zhang, Zengyan

    2016-01-01

    Sharp eyespot, caused mainly by the necrotrophic fungus Rhizoctonia cerealis, is a destructive disease in hexaploid wheat (Triticum aestivum L.). In Arabidopsis, certain cinnamyl alcohol dehydrogenases (CADs) have been implicated in monolignol biosynthesis and in the defense response to bacterial pathogen infection. However, little is known about CADs in wheat defense responses to necrotrophic or soil-borne pathogens. In this study, we isolated a wheat CAD gene, TaCAD12, responsive to R. cerealis infection through microarray-based comparative transcriptomics, and studied the enzyme activity and defense role of TaCAD12 in wheat. The transcriptional levels of TaCAD12 in sharp eyespot-resistant wheat lines were significantly higher than those in susceptible wheat lines. Sequence and phylogenetic analyses revealed that TaCAD12 belongs to group IV of the CAD family. A biochemical assay proved that the TaCAD12 protein is an authentic CAD enzyme and possesses catalytic efficiencies toward both coniferyl aldehyde and sinapyl aldehyde. Knock-down of the TaCAD12 transcript significantly repressed resistance of the gene-silenced wheat plants to sharp eyespot caused by R. cerealis, whereas TaCAD12 overexpression markedly enhanced resistance of the transgenic wheat lines to sharp eyespot. Furthermore, certain defense genes (Defensin, PR10, PR17c, and Chitinase1) and monolignol biosynthesis-related genes (TaCAD1, TaCCR, and TaCOMT1) were up-regulated in the TaCAD12-overexpressing wheat plants but down-regulated in TaCAD12-silenced plants. These results suggest that TaCAD12 positively contributes to resistance against sharp eyespot through regulation of the expression of certain defense genes and monolignol biosynthesis-related genes in wheat. PMID:27899932

  20. A functional variant in APOA5/A4/C3/A1 gene cluster contributes to elevated triglycerides and severity of CAD by interfering with microRNA 3201 binding efficiency.

    PubMed

    Cui, Guanglin; Li, Zongzhe; Li, Rui; Huang, Jin; Wang, Haoran; Zhang, Lina; Ding, Hu; Wang, Dao Wen

    2014-07-22

    Recent genome-wide association studies identified APOA5/A4/C3/A1 gene cluster polymorphisms influencing triglyceride level and risk of coronary artery disease (CAD). The purposes of this study were to fine-map triglyceride association signals in the APOA5/A4/C3/A1 gene cluster and then explore the clinical relevance in CAD and potential underlying mechanisms. We resequenced the APOA5/A4/C3/A1 gene cluster in 200 patients with extremely high triglyceride levels (≥10 mmol/l) and 200 ethnically matched healthy control subjects, and genotyped 20 genetic markers among 4,991 participants of Chinese Han ethnicity. Subsequently, 8 risk markers were investigated in 917 early-onset and 1,149 late-onset CAD patients, respectively. The molecular mechanism was explored. By resequencing, a number of new and potentially functional variants were identified, and both the common and rare variants have remarkable cumulative effects on hypertriglyceridemia risk. Of note, gene dosage of rs2266788 demonstrated a robust association with triglyceride level (p = 1.39 × 10^(-19)), modified Gensini scores (p = 1.67 × 10^(-3)), and numbers of vascular lesions in CAD patients (odds ratio: 1.96, 95% confidence interval: 1.31 to 2.14, p = 8.96 × 10^(-4)). A functional study demonstrated that the rs2266788 C allele destroys microRNA 3201 binding to the 3' UTR of APOA5, prolonging the half-life of APOA5 messenger RNA and increasing its expression levels. Genetic variants in the APOA5/A4/C3/A1 gene cluster play an important role in the regulation of plasma triglyceride levels through increased APOA5 concentration and contribute to the severity of CAD. Copyright © 2014 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  1. College Students' Reading Efficiency with Computer-Presented Text.

    ERIC Educational Resources Information Center

    Wepner, Shelley B.; Feeley, Joan T.

    Focusing on improving college students' reading efficiency, a study investigated whether a commercially-prepared computerized speed reading package, Speed Reader II, could be utilized as effectively as traditionally printed text. Subjects were 70 college freshmen from a college reading and rate improvement course with borderline scores on the…

  2. Learning with Computer-Based Multimedia: Gender Effects on Efficiency

    ERIC Educational Resources Information Center

    Pohnl, Sabine; Bogner, Franz X.

    2012-01-01

    Up to now, only a few studies in multimedia learning have focused on gender effects. While research has mostly focused on learning success, the effect of gender on instructional efficiency (IE) has not yet been considered. Consequently, we used a quasi-experimental design to examine possible gender differences in the learning success, mental…

  4. Computational Complexity, Efficiency and Accountability in Large Scale Teleprocessing Systems.

    DTIC Science & Technology

    1980-12-01

    Complexity, Efficiency and Accountability in Large Scale Teleprocessing Systems (contract DAAG29-78-C-0036, Stanford University; John T. Gill, Martin E. Hellman). ...solve but easy to check. We have also suggested how such random tapes can be simulated by deterministically generating "pseudorandom" numbers by a

  5. On the Use of CAD-Native Predicates and Geometry in Surface Meshing

    NASA Technical Reports Server (NTRS)

    Aftosmis, M. J.

    1999-01-01

    Several paradigms for accessing computer-aided design (CAD) geometry during surface meshing for computational fluid dynamics are discussed. File translation, inconsistent geometry engines, and nonnative point construction are all identified as sources of nonrobustness. The paper argues in favor of accessing CAD parts and assemblies in their native format, without translation, and for the use of CAD-native predicates and constructors in surface mesh generation. The discussion also emphasizes the importance of examining the computational requirements for exact evaluation of triangulation predicates during surface meshing.

  6. CAD-PACS integration tool kit based on DICOM secondary capture, structured report and IHE workflow profiles.

    PubMed

    Zhou, Zheng; Liu, Brent J; Le, Anh H

    2007-01-01

    Computer aided diagnosis/detection (CAD) goes beyond subjective visual assessment of clinical images by providing quantitative computer analysis of the image content, and can greatly improve clinical diagnostic outcomes. Many CAD applications, both commercial and research, have been developed with no ability to integrate the CAD results with a clinical picture archiving and communication system (PACS). This has hindered the extensive use of CAD for maximum benefit within a clinical environment. In this paper, we present a CAD-PACS integration toolkit that integrates CAD results with a clinical PACS. The toolkit is a software package with two versions: DICOM (digital imaging and communications in medicine)-SC (secondary capture) and DICOM-IHE (Integrating the Healthcare Enterprise). The former uses the DICOM secondary capture object model to convert a screen shot of the CAD results into a DICOM image file for PACS workstations to display, while the latter converts the CAD results to a DICOM structured report (SR) based on IHE workflow profiles. The DICOM-SC method is simple and easy to implement but does not support further data mining of CAD results; the DICOM-IHE method supports future data mining of CAD results but is more complicated to implement.
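
    A sketch of the DICOM-SC route using pydicom, with illustrative field values (the toolkit itself is not shown, and the DICOM-SR/IHE path is omitted): a CAD screenshot held in a numpy array is wrapped in a Secondary Capture object that a PACS workstation can display.

      # Wrap a CAD-result screenshot in a DICOM Secondary Capture object.
      import datetime
      import numpy as np
      from pydicom.dataset import FileDataset, FileMetaDataset
      from pydicom.uid import ExplicitVRLittleEndian, generate_uid

      screenshot = np.zeros((512, 512), dtype=np.uint8)   # stand-in CAD screen shot

      meta = FileMetaDataset()
      meta.MediaStorageSOPClassUID = "1.2.840.10008.5.1.4.1.1.7"  # Secondary Capture
      meta.MediaStorageSOPInstanceUID = generate_uid()
      meta.TransferSyntaxUID = ExplicitVRLittleEndian

      ds = FileDataset("cad_result_sc.dcm", {}, file_meta=meta,
                       preamble=b"\x00" * 128, is_implicit_VR=False,
                       is_little_endian=True)
      ds.SOPClassUID = meta.MediaStorageSOPClassUID
      ds.SOPInstanceUID = meta.MediaStorageSOPInstanceUID
      ds.Modality = "OT"                        # "other", typical for SC objects
      ds.ConversionType = "WSD"                 # workstation-created image
      ds.ContentDate = datetime.date.today().strftime("%Y%m%d")
      ds.Rows, ds.Columns = screenshot.shape
      ds.SamplesPerPixel = 1
      ds.PhotometricInterpretation = "MONOCHROME2"
      ds.BitsAllocated = ds.BitsStored = 8
      ds.HighBit, ds.PixelRepresentation = 7, 0
      ds.PixelData = screenshot.tobytes()
      ds.save_as("cad_result_sc.dcm")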

  7. Program Evolves from Basic CAD to Total Manufacturing Experience

    ERIC Educational Resources Information Center

    Cassola, Joel

    2011-01-01

    Close to a decade ago, John Hersey High School (JHHS) in Arlington Heights, Illinois, made a transition from a traditional classroom-based pre-engineering program. The new program is geared towards helping students understand the entire manufacturing process. Previously, a JHHS student would design a project in computer-aided design (CAD) software…

  8. Present State of CAD Teaching in Spanish Universities

    ERIC Educational Resources Information Center

    Garcia, Ramon Rubio; Santos, Ramon Gallego; Quiros, Javier Suarez; Penin, Pedro I. Alvarez

    2005-01-01

    During the 1990s, all Spanish Universities updated the syllabuses of their courses as a result of the entry into force of the new Organic Law of Universities ("Ley Organica de Universidades") and, for the first time, "Computer Assisted Design" (CAD) appears in the list of core subjects (compulsory teaching content set by the…

  9. Correlating Trainee Attributes to Performance in 3D CAD Training

    ERIC Educational Resources Information Center

    Hamade, Ramsey F.; Artail, Hassan A.; Sikstrom, Sverker

    2007-01-01

    Purpose: The purpose of this exploratory study is to identify trainee attributes relevant for development of skills in 3D computer-aided design (CAD). Design/methodology/approach: Participants were trained to perform cognitive tasks of comparable complexity over time. Performance data were collected on the time needed to construct test models, and…

  12. The design and construction of the CAD-1 airship

    NASA Technical Reports Server (NTRS)

    Kleiner, H. J.; Schneider, R.; Duncan, J. L.

    1975-01-01

    The background history, design philosophy, and computer applications related to the design of the envelope shape, stress calculations, and flight trajectories of the CAD-1 airship, now under construction by Canadian Airship Development Corporation, are reported. A three-phase proposal for future development of larger cargo-carrying airships is included.

  14. Labeled trees and the efficient computation of derivations

    NASA Technical Reports Server (NTRS)

    Grossman, Robert; Larson, Richard G.

    1989-01-01

    The effective parallel symbolic computation of operators under composition is discussed. Examples include differential operators under composition and vector fields under the Lie bracket. Data structures consisting of formal linear combinations of rooted labeled trees are discussed. A multiplication on rooted labeled trees is defined, thereby making the set of these data structures into an associative algebra. An algebra homomorphism is defined from the original algebra of operators into this algebra of trees. An algebra homomorphism from the algebra of trees into the algebra of differential operators is then described. The cancellation which occurs when noncommuting operators are expressed in terms of commuting ones occurs naturally when the operators are represented using this data structure. This leads to an algorithm which, for operators which are derivations, speeds up the computation exponentially in the degree of the operator. It is shown that the algebra of trees leads naturally to a parallel version of the algorithm.
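
    A sketch of the data structure only, under the assumption that a dict from a canonical tree encoding to a numeric coefficient is an acceptable stand-in for the paper's representation; the tree product that drives the exponential speedup is not implemented here.

      # Formal linear combinations of rooted labeled trees as coefficient dicts.
      def tree(label, *children):
          """Canonical form: (label, sorted tuple of subtrees)."""
          return (label, tuple(sorted(children)))

      def add(a, b):
          """Sum of two formal linear combinations, dropping zero terms."""
          out = dict(a)
          for t, c in b.items():
              out[t] = out.get(t, 0) + c
              if out[t] == 0:
                  del out[t]
          return out

      def scale(k, a):
          return {t: k * c for t, c in a.items()}

      # 2*(root r with one child x) minus (bare root r)
      t1 = tree("r", tree("x"))
      t2 = tree("r")
      combo = add(scale(2, {t1: 1}), {t2: -1})
      print(combo)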

  15. A New Stochastic Computing Methodology for Efficient Neural Network Implementation.

    PubMed

    Canals, Vincent; Morro, Antoni; Oliver, Antoni; Alomar, Miquel L; Rosselló, Josep L

    2016-03-01

    This paper presents a new methodology for the hardware implementation of neural networks (NNs) based on probabilistic laws. The proposed encoding scheme circumvents the limitations of classical stochastic computing (based on unipolar or bipolar encoding) by extending the representation range to any real number using the ratio of two bipolar-encoded pulsed signals. Furthermore, the novel approach provides practically total noise immunity owing to its specific codification. We introduce different designs for building the fundamental blocks needed to implement NNs. The validity of the present approach is demonstrated through a regression and a pattern recognition task. The low cost of the methodology in terms of hardware, along with its capacity to implement complex mathematical functions (such as the hyperbolic tangent), allows its use for building highly reliable systems and parallel computing.
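
    For background, the classical bipolar encoding that the paper extends can be sketched in a few lines: a value v in [-1, 1] becomes a random bitstream with P(1) = (v + 1)/2, and a single XNOR gate multiplies two independent streams. The stream length below is an arbitrary accuracy knob; the paper's ratio-of-two-streams encoding is not reproduced.

      # Classical bipolar stochastic computing: multiplication via XNOR.
      import numpy as np

      rng = np.random.default_rng(4)
      N = 100_000                                 # stream length vs. accuracy

      def encode(v):
          """Bipolar encoding: bitstream with P(1) = (v + 1) / 2."""
          return rng.random(N) < (v + 1) / 2

      def decode(bits):
          return 2 * bits.mean() - 1

      a, b = 0.6, -0.5
      product_stream = ~(encode(a) ^ encode(b))   # one XNOR gate per bit
      print(decode(product_stream), "~", a * b)   # converges to the product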

  16. Computationally efficient statistical differential equation modeling using homogenization

    USGS Publications Warehouse

    Hooten, Mevin B.; Garlick, Martha J.; Powell, James A.

    2013-01-01

    Statistical models using partial differential equations (PDEs) to describe dynamically evolving natural systems are appearing in the scientific literature with some regularity in recent years. Often such studies seek to characterize the dynamics of temporal or spatio-temporal phenomena such as invasive species, consumer-resource interactions, community evolution, and resource selection. Specifically, in the spatial setting, data are often available at varying spatial and temporal scales. Additionally, the necessary numerical integration of a PDE may be computationally infeasible over the spatial support of interest. We present an approach to impose computationally advantageous changes of support in statistical implementations of PDE models and demonstrate its utility through simulation using a form of PDE known as “ecological diffusion.” We also apply a statistical ecological diffusion model to a data set involving the spread of mountain pine beetle (Dendroctonus ponderosae) in Idaho, USA.

  17. Chunking as the result of an efficiency computation trade-off

    PubMed Central

    Ramkumar, Pavan; Acuna, Daniel E.; Berniker, Max; Grafton, Scott T.; Turner, Robert S.; Kording, Konrad P.

    2016-01-01

    How to move efficiently is an optimal control problem, whose computational complexity grows exponentially with the horizon of the planned trajectory. Breaking a compound movement into a series of chunks, each planned over a shorter horizon can thus reduce the overall computational complexity and associated costs while limiting the achievable efficiency. This trade-off suggests a cost-effective learning strategy: to learn new movements we should start with many short chunks (to limit the cost of computation). As practice reduces the impediments to more complex computation, the chunking structure should evolve to allow progressively more efficient movements (to maximize efficiency). Here we show that monkeys learning a reaching sequence over an extended period of time adopt this strategy by performing movements that can be described as locally optimal trajectories. Chunking can thus be understood as a cost-effective strategy for producing and learning efficient movements. PMID:27397420

  18. Invited review: efficient computation strategies in genomic selection.

    PubMed

    Misztal, I; Legarra, A

    2016-11-21

    The purpose of this study is the review and evaluation of computing methods used in genomic selection for animal breeding. Commonly used models include SNP BLUP with extensions (BayesA, etc.), genomic BLUP (GBLUP) and single-step GBLUP (ssGBLUP). These models are applied for genome-wide association studies (GWAS), genomic prediction and parameter estimation. Solving methods include finite Cholesky decomposition, possibly with a sparse implementation, and iterative Gauss-Seidel (GS) or preconditioned conjugate gradient (PCG), the last two methods possibly with iteration on data. Details are provided that can drastically decrease some computations. For SNP BLUP, especially with sampling and a large number of SNP, the only choice is GS with iteration on data and adjustment of residuals. If only solutions are required, PCG by iteration on data is a clear choice. A genomic relationship matrix (GRM) has limited dimensionality due to small effective population size, resulting in an infinite number of generalized inverses of the GRM for large genotyped populations. A specific inverse called APY requires only a small fraction of the GRM, is sparse, and can be computed and stored at low cost for millions of animals. With the APY inverse and PCG iteration, GBLUP and ssGBLUP can be applied to any population. Both tools can be applied to GWAS. When the system of equations is sparse but contains dense blocks, a recently developed package for sparse Cholesky decomposition and sparse inversion called YAMS has greatly improved performance over packages where such blocks were treated as sparse. With YAMS, GREML and possibly single-step GREML can be applied to populations with >50,000 genotyped animals. From a computational perspective, genomic selection is becoming a mature methodology.
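
    A minimal sketch of the PCG solver the review favors when only solutions are needed, here with a Jacobi (diagonal) preconditioner on a dense toy system; production implementations iterate on data without ever forming the coefficient matrix.

      # Preconditioned conjugate gradients with a Jacobi preconditioner.
      import numpy as np

      def pcg(A, b, tol=1e-10, max_iter=1000):
          M_inv = 1.0 / np.diag(A)             # Jacobi preconditioner
          x = np.zeros_like(b)
          r = b - A @ x
          z = M_inv * r
          p = z.copy()
          rz = r @ z
          for _ in range(max_iter):
              Ap = A @ p
              alpha = rz / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              if np.linalg.norm(r) < tol:
                  break
              z = M_inv * r
              rz_new = r @ z
              p = z + (rz_new / rz) * p
              rz = rz_new
          return x

      rng = np.random.default_rng(5)
      G = rng.normal(size=(200, 200))
      A = G @ G.T + 200 * np.eye(200)          # SPD toy coefficient matrix
      b = rng.normal(size=200)
      x = pcg(A, b)
      print(np.linalg.norm(A @ x - b))         # residual near machine precision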

  19. Evaluation of the fit of CAD/CAM abutments.

    PubMed

    Hamilton, Adam; Judge, Roy B; Palamara, Joseph E; Evans, Christopher

    2013-01-01

    This study aimed to compare the fit of computer-aided design/computer-assisted manufacture (CAD/CAM) abutments provided by a single system with proprietary prefabricated abutments on various implant systems. Titanium CAD/CAM abutments were compared with prefabricated abutments on five different implant types. The samples were embedded in epoxy resin, sectioned longitudinally, and polished. Scanning electron microscopy was used to measure the gap between the implants and abutments at the connecting flanges and internal features. Independent t tests were used to compare data. A mean difference of 1.86 μm between the gold synOcta and CAD/CAM abutments on the Straumann Standard Plus implant was observed to be statistically significant (P = .002). Less than 0.4 μm of difference was found between the CAD/CAM and prefabricated abutments for the remaining implant types, and statistical significance was not observed. Mean differences of 34.4 μm (gold) and 44.7 μm (titanium) were observed between the CAD/CAM and prefabricated abutments on the Straumann Standard Plus implants, which were statistically significant (P < .001). A mean difference of 15 μm was also observed between the CAD/CAM and prefabricated abutment on the NobelReplace implant, which was statistically significant (P = .026). All other groups had less than 4 μm of difference, and statistical significance was not observed. The CAD/CAM abutments appeared to have a fit comparable with prefabricated abutments for most of the systems evaluated. Design differences between the abutment connections for both Straumann implants were observed that affected the fit of internal components of the implant-abutment connections.

  20. Bendix CAD-CAM site plan

    SciTech Connect

    Smith, M.L.

    1982-12-01

    The Bendix Site Plan for CAD-CAM encompasses the development and integration of interactive graphics systems, factory data management systems, robotics, direct numerical control, automated inspection, factory automation, and shared data bases to achieve significant plant-wide gains in productivity. This plan does not address all current or planned computerization projects within our facility. A summary of planning proposals and rationale is presented in the following paragraphs. Interactive Graphics System (IGS) capability presently consists of two Applicon CAD systems and the CD-2000 software program processing on a time-shared CYBER 174 computer and a dedicated CYBER 173. Proposed plans include phased procurement through FY85 of additional computers and sufficient graphics terminals to support projected needs in drafting, tool/gage design, N/C programming, and process engineering. Planned procurement of additional computer equipment in FY86 and FY87 will provide the capacity necessary for a comprehensive graphics data base management system, computer-aided process planning graphics, and special graphics requirements in facilities and test equipment design. The overall IGS plan, designated BICAM (Bendix Integrated Computer Aided Manufacturing), will provide the capability and capacity to integrate manufacturing activities through a shared product data base and standardized data exchange format. Planned efforts in robotics will result in productive applications of low- to medium-technology robots beginning in FY82, extending by FY85 to work-cell capabilities utilizing higher-technology robots with sensors such as vision and instrumented remote compliance devices. A number of robots are projected to be in service by 1990.

  1. Unwrapping ADMM: Efficient Distributed Computing via Transpose Reduction

    DTIC Science & Technology

    2016-05-11

    convergence rates of the proposed schemes and demonstrate the efficiency of this approach by fitting linear classifiers and sparse linear models to... unwrapped ADMM for this problem requires the formation of D_i D_i^T on each server, rather than D_i^T D_i. Applications: Linear Classifiers and Sparsity... In addition to penalized regression problems, transpose reduction can train linear classifiers. If D ∈ R^(m×n) contains feature vectors and l ∈ R^m

  2. Robust, efficient computational methods for axially symmetric optical aspheres.

    PubMed

    Forbes, G W

    2010-09-13

    Whether in design or the various stages of fabrication and testing, an effective representation of an asphere's shape is critical. Some algorithms are given for implementing tailored polynomials that are ideally suited to these needs. With minimal coding, these results allow a recently introduced orthogonal polynomial basis to be employed to arbitrary orders. Interestingly, these robust and efficient methods are enabled by the introduction of an auxiliary polynomial basis.

  3. Construction CAE; Integration of CAD, simulation, planning and cost control

    SciTech Connect

    Wickard, D.A. ); Bill, R.D.; Gates, K.H.; Yoshinaga, T.; Ohcoshi, S. )

    1989-01-01

    Construction CAE is a simulation, planning, scheduling, and cost control tool that is integrated with a computer aided design (CAD) system. The system uses a CAD model and allows the user to perform construction simulation on objects defined within the model. Initial cost/schedule reports as well as those required for project chronicling are supported through an interface to a work breakdown structure (WBS) and a client's existing schedule reporting system. By integrating currently available project control tools with a simulation system, Construction CAE is more effective than its individual components.

  4. How to Quickly Import CAD Geometry into Thermal Desktop

    NASA Technical Reports Server (NTRS)

    Wright, Shonte; Beltran, Emilio

    2002-01-01

    There are several groups at JPL (Jet Propulsion Laboratory) that are committed to concurrent design efforts; two are featured here. The Center for Space Mission Architecture and Design (CSMAD) enables the practical application of advanced process technologies in JPL's mission architecture process. Team I functions as an incubator for projects that are in the Discovery, and even pre-Discovery, proposal stages. JPL's concurrent design environment is to a large extent centered on the CAD (Computer Aided Design) file. During concurrent design sessions, CAD geometry is ported to other, more specialized engineering design packages.

  5. Efficient Helicopter Aerodynamic and Aeroacoustic Predictions on Parallel Computers

    NASA Technical Reports Server (NTRS)

    Wissink, Andrew M.; Lyrintzis, Anastasios S.; Strawn, Roger C.; Oliker, Leonid; Biswas, Rupak

    1996-01-01

    This paper presents parallel implementations of two codes used in a combined CFD/Kirchhoff methodology to predict the aerodynamic and aeroacoustic properties of helicopters. The rotorcraft Navier-Stokes code, TURNS, computes the aerodynamic flowfield near the helicopter blades, and the Kirchhoff acoustics code computes the noise in the far field, using the TURNS solution as input. The overall parallel strategy adds MPI message passing calls to the existing serial codes to allow for communication between processors. As a result, the total code modifications required for parallel execution are relatively small. The biggest bottleneck in running the TURNS code in parallel comes from the LU-SGS algorithm that solves the implicit system of equations. We use a new hybrid domain decomposition implementation of LU-SGS to obtain good parallel performance on the SP-2. TURNS demonstrates excellent parallel speedups for quasi-steady and unsteady three-dimensional calculations of a helicopter blade in forward flight. The execution rate attained by the code on 114 processors is six times faster than the same cases run on one processor of the Cray C-90. The parallel Kirchhoff code also shows excellent parallel speedups and fast execution rates. As a performance demonstration, unsteady acoustic pressures are computed at 1886 far-field observer locations for a sample acoustics problem. The calculation requires over two hundred hours of CPU time on one C-90 processor but takes only a few hours on 80 processors of the SP-2. The resultant far-field acoustic field is analyzed with state-of-the-art audio and video rendering of the propagating acoustic signals.

  6. Design of efficient computational workflows for in silico drug repurposing.

    PubMed

    Vanhaelen, Quentin; Mamoshina, Polina; Aliper, Alexander M; Artemov, Artem; Lezhnina, Ksenia; Ozerov, Ivan; Labat, Ivan; Zhavoronkov, Alex

    2017-02-01

    Here, we provide a comprehensive overview of the current status of in silico repurposing methods by establishing links between current technological trends, data availability and characteristics of the algorithms used in these methods. Using the case of the computational repurposing of fasudil as an alternative autophagy enhancer, we suggest a generic modular organization of a repurposing workflow. We also review 3D structure-based, similarity-based, inference-based and machine learning (ML)-based methods. We summarize the advantages and disadvantages of these methods to emphasize three current technical challenges. We finish by discussing current directions of research, including possibilities offered by new methods, such as deep learning.

  7. An efficient computational tool for ramjet combustor research

    SciTech Connect

    Vanka, S.P.; Krazinski, J.L.; Nejad, A.S.

    1988-01-01

    A multigrid based calculation procedure is presented for the efficient solution of the time-averaged equations of a turbulent elliptic reacting flow. The equations are solved on a non-orthogonal curvilinear coordinate system. The physical models currently incorporated are a two equation k-epsilon turbulence model, a four-step chemical kinetics mechanism, and a Lagrangian particle tracking procedure applicable for dilute sprays. Demonstration calculations are presented to illustrate the performance of the calculation procedure for a ramjet dump combustor configuration. 21 refs., 9 figs., 2 tabs.

  8. Mapping methods for computationally efficient and accurate structural reliability

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Chamis, Christos C.

    1992-01-01

    Mapping methods are developed to improve the accuracy and efficiency of probabilistic structural analyses with coarse finite element meshes. The mapping methods consist of: (1) deterministic structural analyses with fine (convergent) finite element meshes, (2) probabilistic structural analyses with coarse finite element meshes, (3) the relationship between the probabilistic structural responses from the coarse and fine finite element meshes, and (4) a probabilistic mapping. The results show that the scatter of the probabilistic structural responses and structural reliability can be accurately predicted using a coarse finite element model with proper mapping methods. Therefore, large structures can be analyzed probabilistically using finite element methods.
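
    The four-step idea can be sketched in a few lines. The toy NumPy example below (hypothetical response functions, not the NASA implementation) calibrates a coarse-to-fine map from a handful of paired deterministic runs, then pushes cheap coarse-mesh Monte Carlo samples through it.

    # Toy sketch of the coarse-to-fine mapping idea: calibrate a map from
    # paired deterministic runs, then correct cheap coarse-mesh Monte Carlo
    # samples with it. Both response functions are hypothetical stand-ins.
    import numpy as np

    rng = np.random.default_rng(1)

    def response_fine(x):    # stand-in for a convergent fine-mesh analysis
        return 1.05 * x**2 + 0.2

    def response_coarse(x):  # stand-in for a cheap coarse-mesh analysis
        return 0.90 * x**2

    # (1) a few paired deterministic runs calibrate the mapping
    x_cal = np.linspace(0.5, 2.0, 8)
    a, b = np.polyfit(response_coarse(x_cal), response_fine(x_cal), 1)

    # (2) probabilistic analysis on the coarse mesh only (cheap Monte Carlo)
    x_mc = rng.normal(1.2, 0.15, 100_000)
    r_coarse = response_coarse(x_mc)

    # (3)-(4) map the coarse samples to the fine-mesh scale
    r_mapped = a * r_coarse + b
    print(r_mapped.mean(), r_mapped.std(), response_fine(x_mc).mean())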

  9. Probabilistic Damage Characterization Using the Computationally-Efficient Bayesian Approach

    NASA Technical Reports Server (NTRS)

    Warner, James E.; Hochhalter, Jacob D.

    2016-01-01

    This work presents a computationally-efficient approach for damage determination that quantifies uncertainty in the provided diagnosis. Given strain sensor data that are polluted with measurement errors, Bayesian inference is used to estimate the location, size, and orientation of damage. This approach uses Bayes' Theorem to combine any prior knowledge an analyst may have about the nature of the damage with information provided implicitly by the strain sensor data to form a posterior probability distribution over possible damage states. The unknown damage parameters are then estimated based on samples drawn numerically from this distribution using a Markov Chain Monte Carlo (MCMC) sampling algorithm. Several modifications are made to the traditional Bayesian inference approach to provide significant computational speedup. First, an efficient surrogate model is constructed using sparse grid interpolation to replace a costly finite element model that must otherwise be evaluated for each sample drawn with MCMC. Next, the standard Bayesian posterior distribution is modified using a weighted likelihood formulation, which is shown to improve the convergence of the sampling process. Finally, a robust MCMC algorithm, Delayed Rejection Adaptive Metropolis (DRAM), is adopted to sample the probability distribution more efficiently. Numerical examples demonstrate that the proposed framework effectively provides damage estimates with uncertainty quantification and can yield orders of magnitude speedup over standard Bayesian approaches.
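
    A stripped-down sketch of the core loop may help. The example below is ours: plain random-walk Metropolis rather than DRAM, with a hypothetical one-parameter strain model standing in for the sparse-grid surrogate. It shows how a cheap surrogate replaces the costly finite element model inside the sampler.

    # Minimal Metropolis sketch of the idea (not the authors' DRAM sampler):
    # a cheap surrogate is evaluated in place of the finite element model for
    # every MCMC sample. The 1-D "damage location" parameter is hypothetical.
    import numpy as np

    rng = np.random.default_rng(2)

    def surrogate_strain(loc):          # stand-in for a sparse-grid surrogate
        return np.exp(-(np.linspace(0, 1, 8) - loc) ** 2 / 0.02)

    true_loc, sigma = 0.6, 0.05
    data = surrogate_strain(true_loc) + rng.normal(0, sigma, 8)  # noisy sensors

    def log_post(loc):                  # flat prior on [0, 1]
        if not 0.0 <= loc <= 1.0:
            return -np.inf
        r = data - surrogate_strain(loc)
        return -0.5 * np.sum(r ** 2) / sigma ** 2

    samples, loc, lp = [], 0.5, log_post(0.5)
    for _ in range(20_000):
        prop = loc + 0.02 * rng.standard_normal()
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
            loc, lp = prop, lp_prop
        samples.append(loc)

    post = np.array(samples[5000:])               # discard burn-in
    print(f"posterior mean {post.mean():.3f} +/- {post.std():.3f}")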

  10. Computationally Efficient Marginal Models for Clustered Recurrent Event Data

    PubMed Central

    Liu, Dandan; Schaubel, Douglas E.; Kalbfleisch, John D.

    2012-01-01

    Large observational databases derived from disease registries and retrospective cohort studies have proven very useful for the study of health services utilization. However, the use of large databases may introduce computational difficulties, particularly when the event of interest is recurrent. In such settings, grouping the recurrent event data into pre-specified intervals leads to a flexible event rate model and a data reduction which remedies the computational issues. We propose a possibly stratified marginal proportional rates model with a piecewise-constant baseline event rate for recurrent event data. Both the absence and the presence of a terminal event are considered. Large-sample distributions are derived for the proposed estimators. Simulation studies are conducted under various data configurations, including settings in which the model is misspecified. Guidelines for interval selection are provided and assessed using numerical studies. We then show that the proposed procedures can be carried out using standard statistical software (e.g., SAS, R). An application based on national hospitalization data for end stage renal disease patients is provided. PMID:21957989
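
    The data reduction itself is simple to illustrate. The toy NumPy sketch below (hypothetical data; a crude events-over-person-time estimate, not the authors' semiparametric estimator) groups recurrent event times into pre-specified intervals and estimates a piecewise-constant rate in each.

    # Toy sketch of the data reduction behind a piecewise-constant rate model:
    # count events per pre-specified interval and divide by the person-time
    # at risk in that interval. Follow-up lengths and times are hypothetical.
    import numpy as np

    cuts = np.array([0.0, 30.0, 90.0, 180.0])   # pre-specified intervals
    # per-subject (follow-up length, recurrent event times)
    subjects = [(180.0, [12.0, 40.0, 100.0]),
                (75.0, [20.0, 70.0]),
                (150.0, [95.0])]

    events = np.zeros(len(cuts) - 1)
    at_risk = np.zeros(len(cuts) - 1)
    for follow_up, times in subjects:
        events += np.histogram([t for t in times if t <= follow_up], bins=cuts)[0]
        # person-time contributed to each interval, censored at follow-up
        at_risk += np.clip(follow_up - cuts[:-1], 0.0, np.diff(cuts))

    print(events / at_risk)   # piecewise-constant event-rate estimates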

  11. WE-E-217A-02: Methodologies for Evaluation of Standalone CAD System Performance.

    PubMed

    Sahiner, B

    2012-06-01

    Standalone performance evaluation of a CAD system provides information about the abnormality detection or classification performance of the computerized system alone. Although the performance of the reader with CAD is the final step in CAD system assessment, standalone performance evaluation is an important component for several reasons: First, standalone evaluation informs the reader about the performance level of the CAD system and may have an impact on how the reader uses the system. Second, it provides essential information to the system designer for algorithm optimization during system development. Third, standalone evaluation can provide a detailed description of algorithm performance (e.g., on subgroups of the population) because a larger data set with more samples from different subgroups can be included in standalone studies compared to reader studies. Proper standalone evaluation of a CAD system involves a number of key components, some of which are shared with the assessment of reader performance with CAD. These include (1) selection of a test data set that allows performance assessment with little or no bias and acceptable uncertainty; (2) a reference standard that indicates disease status as well as the location and extent of disease; (3) a clearly defined method for labeling each CAD mark as a true-positive or false-positive; and (4) a properly selected set of metrics to summarize the accuracy of the computer marks and their corresponding scores. In this lecture, we will discuss various approaches for the key components of standalone CAD performance evaluation listed above, and present some of the recommendations and opinions from the AAPM CAD subcommittee on these issues. Learning objectives: (1) identify basic components and metrics in the assessment of standalone CAD systems; (2) understand how each component may affect the assessed performance; (3) learn about AAPM CAD subcommittee's opinions and recommendations on factors and metrics related to the
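
    As an illustration of components (3) and (4), the following sketch (ours; it assumes the TP/FP labeling against the reference standard has already been done) computes one FROC operating point, sensitivity versus false positives per scan, from a list of scored CAD marks.

    # Hedged sketch: one FROC operating point from labeled, scored CAD marks.
    import numpy as np

    def froc_point(scores, is_tp, n_scans, n_true_lesions, threshold):
        """Sensitivity and FP/scan for marks scoring at or above threshold."""
        scores, is_tp = np.asarray(scores), np.asarray(is_tp, bool)
        keep = scores >= threshold
        # simplification: each TP mark is assumed to hit a distinct lesion
        sensitivity = is_tp[keep].sum() / n_true_lesions
        fp_per_scan = (~is_tp[keep]).sum() / n_scans
        return sensitivity, fp_per_scan

    # toy marks from 4 scans containing 5 true lesions in total
    scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.2]
    is_tp  = [1,   0,   1,   0,   1,    0,   1,   0  ]
    print(froc_point(scores, is_tp, n_scans=4, n_true_lesions=5, threshold=0.5))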

  12. Computer modeling of high-efficiency solar cells

    NASA Technical Reports Server (NTRS)

    Schartz, R. J.; Lundstrom, M. S.

    1980-01-01

    Transport equations which describe the flow of holes and electrons in the heavily doped regions of a solar cell are presented in a form that is suitable for device modeling. Two experimentally determinable parameters, the effective bandgap shrinkage and the effective asymmetry factor are required to completely model the cell in these regions. Nevertheless, a knowledge of only the effective bandgap shrinkage is sufficient to model the terminal characteristics of the cell. The results of computer simulations of the effects of heavy doping are presented. The insensitivity of the terminal characteristics to the choice of effective asymmetry factor is shown along with the sensitivity of the electric field and quasielectric fields to this parameter. The dependence of the terminal characteristics on the effective bandgap shrinkage is also presented.

  13. Efficient relaxed-Jacobi smoothers for multigrid on parallel computers

    NASA Astrophysics Data System (ADS)

    Yang, Xiang; Mittal, Rajat

    2017-03-01

    In this Technical Note, we present a family of Jacobi-based multigrid smoothers suitable for the solution of discretized elliptic equations. These smoothers are based on the idea of scheduled-relaxation Jacobi proposed recently by Yang & Mittal (2014) [18] and employ two or three successive relaxed Jacobi iterations with relaxation factors derived so as to maximize the smoothing property of these iterations. The performance of these new smoothers, measured in terms of convergence acceleration and computational workload, is assessed for multi-domain implementations typical of parallelized solvers, and compared to the lexicographic point Gauss-Seidel smoother. The tests include the geometric multigrid method on structured grids as well as the algebraic multigrid method on unstructured grids. The tests demonstrate that unlike Gauss-Seidel, the convergence of these Jacobi-based smoothers is unaffected by domain decomposition, and furthermore, they outperform the lexicographic Gauss-Seidel by factors that increase with domain partition count.
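
    The basic building block is easy to sketch. The NumPy example below (ours; the relaxation factors are illustrative placeholders, not the optimized values derived in the paper) applies a two-step schedule of relaxed Jacobi sweeps to a 1-D Poisson problem.

    # Sketch of relaxed-Jacobi sweeps with a short schedule of relaxation
    # factors, applied to -u'' = f on a uniform grid. The factors (0.8, 1.3)
    # are illustrative only.
    import numpy as np

    def relaxed_jacobi_sweeps(u, f, h, omegas):
        for omega in omegas:                       # e.g. a 2- or 3-step schedule
            u_new = u.copy()
            u_new[1:-1] = 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
            u = (1.0 - omega) * u + omega * u_new  # relaxation step
        return u

    n = 17
    h = 1.0 / (n - 1)
    x = np.linspace(0.0, 1.0, n)
    f = np.pi**2 * np.sin(np.pi * x)               # exact solution sin(pi x)
    u = np.zeros(n)
    for _ in range(500):
        u = relaxed_jacobi_sweeps(u, f, h, omegas=(0.8, 1.3))
    # iterative error is driven below the discretization error (~3e-3 here)
    print(np.max(np.abs(u - np.sin(np.pi * x))))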

  14. Efficient Computation of Approximate Gene Clusters Based on Reference Occurrences

    NASA Astrophysics Data System (ADS)

    Jahn, Katharina

    Whole genome comparison based on the analysis of gene cluster conservation has become a popular approach in comparative genomics. While gene order and gene content as a whole randomize over time, it is observed that certain groups of genes which are often functionally related remain co-located across species. However, the conservation is usually not perfect which turns the identification of these structures, often referred to as approximate gene clusters, into a challenging task. In this paper, we present a polynomial time algorithm that computes approximate gene clusters based on reference occurrences. We show that our approach yields highly comparable results to a more general approach and allows for approximate gene cluster detection in parameter ranges currently not feasible for non-reference based approaches.

  15. BSC/T: A Tool For Efficient Magnetic Field Computation

    NASA Astrophysics Data System (ADS)

    Hebert, Jonathan; Hanson, James

    2010-11-01

    The modeling of fusion plasmas requires the accurate modeling of the fields which confine these plasmas. BSC/T is a Fortran module which allows for accurate and expedient computation of these fields from the currents which produce them. Near field-producing elements, analytic solutions are used to retain maximum accuracy for geometries such as pure dipoles, infinite line currents, current rings, and finite current segments [1]. At greater distances, series expansions are used to quicken calculation with little loss of precision. More complex geometries may be modeled by arrays of these simple geometries and by current carrying mesh forms. Accuracy and performance benchmarks are presented, as well as reconstructions using the V3FIT code with various vacuum vessel models. [1] J. D. Hanson and S. P. Hirshman, Phys. Plasmas 9, 4410 (2002).
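
    For the finite-segment case, the cited Hanson & Hirshman paper gives a compact closed form; the NumPy sketch below is our reading of that expression and should be treated as an assumption, so it is checked against the infinite-wire limit.

    # Sketch of a compact finite-segment Biot-Savart expression (our reading
    # of the cited Hanson & Hirshman form; verify before relying on it).
    import numpy as np

    MU0 = 4e-7 * np.pi

    def b_segment(r, p1, p2, current):
        """Magnetic field at point r from a straight segment p1 -> p2 [T]."""
        ri, rf = r - p1, r - p2
        ri_n, rf_n = np.linalg.norm(ri), np.linalg.norm(rf)
        denom = ri_n * rf_n * (ri_n * rf_n + ri @ rf)  # singular on the segment
        return MU0 * current / (4 * np.pi) * (ri_n + rf_n) / denom * np.cross(ri, rf)

    # check: a long segment approaches the infinite-wire field mu0 I / (2 pi d)
    d, I = 0.1, 1.0
    b = b_segment(np.array([d, 0.0, 0.0]),
                  np.array([0.0, 0.0, -100.0]), np.array([0.0, 0.0, 100.0]), I)
    print(np.linalg.norm(b), MU0 * I / (2 * np.pi * d))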

  16. Chino Hills --- A highly computationally efficient 2 Hz validation exercise

    NASA Astrophysics Data System (ADS)

    Taborda, R.; Karaoglu, H.; Bielak, J.; Urbanic, J.; Lopez, J.; Ramirez Guzman, L.

    2009-12-01

    The 2008 Chino Hills earthquake was the largest earthquake in the Los Angeles metropolitan region since the 1994 Northridge earthquake. With a magnitude Mw 5.4, the July 29, 2008 Chino Hills earthquake was recorded by most networks in the area. Its occurrence constitutes an excellent opportunity to study the response of the greater Los Angeles basin and to test the most common assumptions for crustal structure and material properties under ideal conditions of anelastic modeling due to a kinematic point source excitation. We present here a preliminary validation study for a set of simulations of the Chino Hills earthquake using Hercules---the parallel octree-based finite element simulator developed by the Quake Group at Carnegie Mellon University. In the past, we have reported on the simulation capabilities of Hercules for more complex---yet hypothetical---earthquake scenarios such as TeraShake and ShakeOut. For the latter, we have also conducted a comprehensive verification of results in collaboration with other SCEC simulation groups, using different methodologies. With this new simulation we attempt to come full circle in the verification and validation paradigm as understood by the modeling and simulation community. The results presented here correspond to a set of four different simulations, the most challenging one with a maximum frequency of 2 Hz and a minimum shear wave velocity of 200 m/s. These particular values of these two critical parameters help us explore the influence of higher frequencies and lower velocity profiles on ground motion. The extension to these parameters is becoming possible in our simulations thanks to the latest computational improvements we are implementing into Hercules. While our focus is on the physical interpretation of the results of our simulations and their comparison with observations, we also report on the computing resources employed. Our preliminary results suggest that extending the maximum frequency beyond the de facto

  17. CAD-CAM data exchange pilot project

    SciTech Connect

    Hintz, J.; Williams, D.

    1986-03-01

    CAD-CAM data were exchanged between dissimilar CAD systems and the information was used to fabricate three parts. Problems were identified and solutions were proposed or implemented in the area of translation methods, data verification, CAD drawing conventions, and data handling. Additional software needed for productive data exchange has been identified.

  18. CAD Skills Increased through Multicultural Design Project

    ERIC Educational Resources Information Center

    Clemons, Stephanie

    2006-01-01

    This article discusses how students in a college-entry-level CAD course researched four generations of their family histories and documented cultural and symbolic influences within their family backgrounds. AutoCAD software was then used to manipulate those cultural and symbolic images to create the design for a multicultural area rug. AutoCAD was…

  19. Cool-and Unusual-CAD Applications

    ERIC Educational Resources Information Center

    Calhoun, Ken

    2004-01-01

    This article describes several very useful applications of AutoCAD that may lie outside the normal scope of application. AutoCAD commands used in this article are based on AutoCAD 2000I. The author and his students used a Hewlett Packard 750C DesignJet plotter for plotting. (Contains 5 figures and 5 photos.)

  2. Efficient computation of the compositional model for gas condensate reservoirs

    NASA Astrophysics Data System (ADS)

    Zhou, Jifu; Li, Jiachun; Ye, Jigen

    2000-12-01

    In this paper, a direct method, unsymmetric-pattern multifrontal factorization, for large sparse systems of linear equations is applied in a compositional reservoir model. The good performance of this approach is shown by solving the Poisson equation. The numerical module is then embedded in the compositional model to simulate the X1/5 (3) gas condensate reservoir in the KeKeYa gas field, Northwest China. The results for oil/gas reserves, variations of stratum pressure, oil/gas production, etc., are compared with observations. Good agreement, comparable to the COMP4 model, is achieved, suggesting that the present model is both efficient and powerful in compositional reservoir simulations.
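
    The Poisson benchmark is easy to reproduce in outline. In the SciPy sketch below (ours), splu (SuperLU) stands in for the unsymmetric-pattern multifrontal factorization used in the paper: the sparse matrix is factored once and then solved by cheap back-substitution.

    # Sketch: assemble a 2-D Poisson system and solve it with a sparse direct
    # LU factorization (SuperLU standing in for the multifrontal method).
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import splu

    n = 64                                        # interior grid points per side
    I = sp.identity(n)
    T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
    A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()   # 5-point Laplacian stencil

    rhs = np.ones(n * n)                          # uniform source term
    lu = splu(A)                                  # factor once...
    u = lu.solve(rhs)                             # ...then back-substitute cheaply
    print(np.linalg.norm(A @ u - rhs))            # residual near machine precision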

  3. fjoin: simple and efficient computation of feature overlaps.

    PubMed

    Richardson, Joel E

    2006-10-01

    Sets of biological features with genome coordinates (e.g., genes and promoters) are a particularly common form of data in bioinformatics today. Accordingly, an increasingly important processing step involves comparing coordinates from large sets of features to find overlapping feature pairs. This paper presents fjoin, an efficient, robust, and simple algorithm for finding these pairs, and a downloadable implementation. For typical bioinformatics feature sets, fjoin requires O(n log(n)) time (O(n) if the inputs are sorted) and uses O(1) space. The reference implementation is a stand-alone Python program; it implements the basic algorithm and a number of useful extensions, which are also discussed in this paper.
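
    The algorithm itself fits in a few lines. The sketch below (ours, not the reference implementation) shows the sorted sweep at the heart of an fjoin-style overlap join: with both inputs sorted by start coordinate, a small window of still-open features yields linear work after sorting.

    # Sketch of a sorted sweep for feature-overlap joins (inclusive
    # coordinates, both inputs pre-sorted by start position).
    def overlap_join(a, b):
        """Yield (x, y) pairs with x in a, y in b that overlap."""
        window = []                   # b-features that may still overlap
        j = 0
        for x in a:
            while j < len(b) and b[j][0] <= x[1]:    # admit b-features that
                window.append(b[j]); j += 1          # start before x ends
            window = [y for y in window if y[1] >= x[0]]  # drop finished ones
            for y in window:
                if y[0] <= x[1]:                     # guard for nested x's
                    yield x, y

    genes     = [(10, 50), (60, 90), (100, 140)]
    promoters = [(5, 12), (55, 65), (95, 99)]
    print(list(overlap_join(genes, promoters)))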

  4. Quality assurance and training procedures for computer-aided detection and diagnosis systems in clinical use.

    PubMed

    Huo, Zhimin; Summers, Ronald M; Paquerault, Sophie; Lo, Joseph; Hoffmeister, Jeffrey; Armato, Samuel G; Freedman, Matthew T; Lin, Jesse; Lo, Shih-Chung Ben; Petrick, Nicholas; Sahiner, Berkman; Fryd, David; Yoshida, Hiroyuki; Chan, Heang-Ping

    2013-07-01

    Computer-aided detection/diagnosis (CAD) is increasingly used for decision support by clinicians for detection and interpretation of diseases. However, there are no quality assurance (QA) requirements for CAD in clinical use at present. QA of CAD is important so that end users can be made aware of changes in CAD performance both due to intentional or unintentional causes. In addition, end-user training is critical to prevent improper use of CAD, which could potentially result in lower overall clinical performance. Research on QA of CAD and user training are limited to date. The purpose of this paper is to bring attention to these issues, inform the readers of the opinions of the members of the American Association of Physicists in Medicine (AAPM) CAD subcommittee, and thus stimulate further discussion in the CAD community on these topics. The recommendations in this paper are intended to be work items for AAPM task groups that will be formed to address QA and user training issues on CAD in the future. The work items may serve as a framework for the discussion and eventual design of detailed QA and training procedures for physicists and users of CAD. Some of the recommendations are considered by the subcommittee to be reasonably easy and practical and can be implemented immediately by the end users; others are considered to be "best practice" approaches, which may require significant effort, additional tools, and proper training to implement. The eventual standardization of the requirements of QA procedures for CAD will have to be determined through consensus from members of the CAD community, and user training may require support of professional societies. It is expected that high-quality CAD and proper use of CAD could allow these systems to achieve their true potential, thus benefiting both the patients and the clinicians, and may bring about more widespread clinical use of CAD for many other diseases and applications. It is hoped that the awareness of the need

  5. Quality assurance and training procedures for computer-aided detection and diagnosis systems in clinical usea)

    PubMed Central

    Huo, Zhimin; Summers, Ronald M.; Paquerault, Sophie; Lo, Joseph; Hoffmeister, Jeffrey; Armato, Samuel G.; Freedman, Matthew T.; Lin, Jesse; Ben Lo, Shih-Chung; Petrick, Nicholas; Sahiner, Berkman; Fryd, David; Yoshida, Hiroyuki; Chan, Heang-Ping

    2013-01-01

    Computer-aided detection/diagnosis (CAD) is increasingly used for decision support by clinicians for detection and interpretation of diseases. However, there are no quality assurance (QA) requirements for CAD in clinical use at present. QA of CAD is important so that end users can be made aware of changes in CAD performance both due to intentional or unintentional causes. In addition, end-user training is critical to prevent improper use of CAD, which could potentially result in lower overall clinical performance. Research on QA of CAD and user training are limited to date. The purpose of this paper is to bring attention to these issues, inform the readers of the opinions of the members of the American Association of Physicists in Medicine (AAPM) CAD subcommittee, and thus stimulate further discussion in the CAD community on these topics. The recommendations in this paper are intended to be work items for AAPM task groups that will be formed to address QA and user training issues on CAD in the future. The work items may serve as a framework for the discussion and eventual design of detailed QA and training procedures for physicists and users of CAD. Some of the recommendations are considered by the subcommittee to be reasonably easy and practical and can be implemented immediately by the end users; others are considered to be “best practice” approaches, which may require significant effort, additional tools, and proper training to implement. The eventual standardization of the requirements of QA procedures for CAD will have to be determined through consensus from members of the CAD community, and user training may require support of professional societies. It is expected that high-quality CAD and proper use of CAD could allow these systems to achieve their true potential, thus benefiting both the patients and the clinicians, and may bring about more widespread clinical use of CAD for many other diseases and applications. It is hoped that the awareness of the

  6. An accurate and efficient computation method of the hydration free energy of a large, complex molecule.

    PubMed

    Yoshidome, Takashi; Ekimoto, Toru; Matubayasi, Nobuyuki; Harano, Yuichi; Kinoshita, Masahiro; Ikeguchi, Mitsunori

    2015-05-07

    The hydration free energy (HFE) is a crucially important physical quantity to discuss various chemical processes in aqueous solutions. Although an explicit-solvent computation with molecular dynamics (MD) simulations is a preferable treatment of the HFE, huge computational load has been inevitable for large, complex solutes like proteins. In the present paper, we propose an efficient computation method for the HFE. In our method, the HFE is computed as a sum of ⟨U_UV⟩/2 (⟨U_UV⟩ is the ensemble average of the sum of pair interaction energies between the solute and water molecules) and the water reorganization term mainly reflecting the excluded volume effect. Since ⟨U_UV⟩ can readily be computed through an MD of the system composed of the solute and water, an efficient computation of the latter term leads to a reduction of computational load. We demonstrate that the water reorganization term can quantitatively be calculated using the morphometric approach (MA) which expresses the term as a linear combination of the four geometric measures of a solute and the corresponding coefficients determined with the energy representation (ER) method. Since the MA enables us to finish the computation of the solvent reorganization term in less than 0.1 s once the coefficients are determined, the use of the MA enables us to provide an efficient computation of the HFE even for large, complex solutes. Through the applications, we find that our method has almost the same quantitative performance as the ER method with substantial reduction of the computational load.
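
    In our reading, the decomposition can be written as follows, where the four geometric measures of the morphometric approach are usually taken to be the excluded volume, the water-accessible surface area, and the integrated mean and Gaussian curvatures, with the coefficients c_i fitted using the energy-representation method:

    \mu_{\mathrm{hyd}} \;\approx\; \tfrac{1}{2}\langle U_{UV}\rangle \;+\; \mu_{\mathrm{reorg}},
    \qquad
    \mu_{\mathrm{reorg}} \;=\; c_1 V_{\mathrm{ex}} + c_2 A + c_3 C + c_4 X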

  7. Comparative fracture strength analysis of Lava and Digident CAD/CAM zirconia ceramic crowns.

    PubMed

    Kwon, Taek-Ka; Pak, Hyun-Soon; Yang, Jae-Ho; Han, Jung-Suk; Lee, Jai-Bong; Kim, Sung-Hun; Yeo, In-Sung

    2013-05-01

    All-ceramic crowns are subject to fracture during function. To minimize this common clinical complication, zirconium oxide has been used as the framework for all-ceramic crowns. The aim of this study was to compare the fracture strengths of two computer-aided design/computer-aided manufacturing (CAD/CAM) zirconia crown systems: Lava and Digident. Twenty Lava CAD/CAM zirconia crowns and twenty Digident CAD/CAM zirconia crowns were fabricated. A metal die was also duplicated from the original prepared tooth for fracture testing. A universal testing machine was used to determine the fracture strength of the crowns. The mean fracture strengths were as follows: 54.9 ± 15.6 N for the Lava CAD/CAM zirconia crowns and 87.0 ± 16.0 N for the Digident CAD/CAM zirconia crowns. The difference between the mean fracture strengths of the Lava and Digident crowns was statistically significant (P<.001). Lava CAD/CAM zirconia crowns showed a complete fracture of both the veneering porcelain and the core whereas the Digident CAD/CAM zirconia crowns showed fracture only of the veneering porcelain. The fracture strengths of CAD/CAM zirconia crowns differ depending on the compatibility of the core material and the veneering porcelain.

  8. Comparative fracture strength analysis of Lava and Digident CAD/CAM zirconia ceramic crowns

    PubMed Central

    Kwon, Taek-Ka; Pak, Hyun-Soon; Han, Jung-Suk; Lee, Jai-Bong; Kim, Sung-Hun

    2013-01-01

    PURPOSE All-ceramic crowns are subject to fracture during function. To minimize this common clinical complication, zirconium oxide has been used as the framework for all-ceramic crowns. The aim of this study was to compare the fracture strengths of two computer-aided design/computer-aided manufacturing (CAD/CAM) zirconia crown systems: Lava and Digident. MATERIALS AND METHODS Twenty Lava CAD/CAM zirconia crowns and twenty Digident CAD/CAM zirconia crowns were fabricated. A metal die was also duplicated from the original prepared tooth for fracture testing. A universal testing machine was used to determine the fracture strength of the crowns. RESULTS The mean fracture strengths were as follows: 54.9 ± 15.6 N for the Lava CAD/CAM zirconia crowns and 87.0 ± 16.0 N for the Digident CAD/CAM zirconia crowns. The difference between the mean fracture strengths of the Lava and Digident crowns was statistically significant (P<.001). Lava CAD/CAM zirconia crowns showed a complete fracture of both the veneering porcelain and the core whereas the Digident CAD/CAM zirconia crowns showed fracture only of the veneering porcelain. CONCLUSION The fracture strengths of CAD/CAM zirconia crowns differ depending on the compatibility of the core material and the veneering porcelain. PMID:23755332

  9. Efficient computer algebra algorithms for polynomial matrices in control design

    NASA Technical Reports Server (NTRS)

    Baras, J. S.; Macenany, D. C.; Munach, R.

    1989-01-01

    The theory of polynomial matrices plays a key role in the design and analysis of multi-input multi-output control and communications systems using frequency domain methods. Examples include coprime factorizations of transfer functions, canonical realizations from matrix fraction descriptions, and the transfer function design of feedback compensators. Typically, such problems abstract in a natural way to the need to solve systems of Diophantine equations or systems of linear equations over polynomials. These and other problems involving polynomial matrices can in turn be reduced to polynomial matrix triangularization procedures, a result which is not surprising given the importance of matrix triangularization techniques in numerical linear algebra. Matrices with entries from a field and Gaussian elimination play a fundamental role in understanding the triangularization process. In the case of polynomial matrices (matrices with entries from a ring), Gaussian elimination is not defined, and triangularization is accomplished by what is quite properly called Euclidean elimination. Unfortunately, the numerical stability and sensitivity issues which accompany floating point approaches to Euclidean elimination are not very well understood. New algorithms are presented which circumvent such numerical issues entirely through the use of exact, symbolic methods in computer algebra. The use of such error-free algorithms guarantees that the results are accurate to within the precision of the model data, the best that can be hoped for. Care must be taken in the design of such algorithms due to the phenomenon of intermediate expression swell.
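
    The error-free flavor of such algorithms is easy to demonstrate with a modern computer algebra system. The SymPy sketch below (ours; SymPy stands in for the authors' routines) triangularizes a small polynomial matrix exactly, with no floating-point arithmetic involved.

    # Sketch of exact, symbolic triangularization of a polynomial matrix:
    # results are limited only by the precision of the model data.
    import sympy as sp

    s = sp.symbols('s')
    M = sp.Matrix([[s + 1, s**2 - 1],
                   [s - 1, s**2 + s]])

    # exact elimination over rational functions of s (no floating point)
    T = M.echelon_form()
    print(sp.simplify(T))
    print(sp.factor(M.det()))   # exact determinant, e.g. for coprimeness checks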

  10. Enabling Efficient Climate Science Workflows in High Performance Computing Environments

    NASA Astrophysics Data System (ADS)

    Krishnan, H.; Byna, S.; Wehner, M. F.; Gu, J.; O'Brien, T. A.; Loring, B.; Stone, D. A.; Collins, W.; Prabhat, M.; Liu, Y.; Johnson, J. N.; Paciorek, C. J.

    2015-12-01

    A typical climate science workflow often involves a combination of acquisition of data, modeling, simulation, analysis, visualization, publishing, and storage of results. Each of these tasks provide a myriad of challenges when running on a high performance computing environment such as Hopper or Edison at NERSC. Hurdles such as data transfer and management, job scheduling, parallel analysis routines, and publication require a lot of forethought and planning to ensure that proper quality control mechanisms are in place. These steps require effectively utilizing a combination of well tested and newly developed functionality to move data, perform analysis, apply statistical routines, and finally, serve results and tools to the greater scientific community. As part of the CAlibrated and Systematic Characterization, Attribution and Detection of Extremes (CASCADE) project we highlight a stack of tools our team utilizes and has developed to ensure that large scale simulation and analysis work are commonplace and provide operations that assist in everything from generation/procurement of data (HTAR/Globus) to automating publication of results to portals like the Earth Systems Grid Federation (ESGF), all while executing everything in between in a scalable environment in a task parallel way (MPI). We highlight the use and benefit of these tools by showing several climate science analysis use cases they have been applied to.

  11. Efficient computation of coherent synchrotron radiation in a rectangular chamber

    NASA Astrophysics Data System (ADS)

    Warnock, Robert L.; Bizzozero, David A.

    2016-09-01

    We study coherent synchrotron radiation (CSR) in a perfectly conducting vacuum chamber of rectangular cross section, in a formalism allowing an arbitrary sequence of bends and straight sections. We apply the paraxial method in the frequency domain, with a Fourier development in the vertical coordinate but with no other mode expansions. A line charge source is handled numerically by a new method that rids the equations of singularities through a change of dependent variable. The resulting algorithm is fast compared to earlier methods, works for short bunches with complicated structure, and yields all six field components at any space-time point. As an example we compute the tangential magnetic field at the walls. From that one can make a perturbative treatment of the Poynting flux to estimate the energy deposited in resistive walls. The calculation was motivated by a design issue for LCLS-II, the question of how much wall heating from CSR occurs in the last bend of a bunch compressor and the following straight section. Working with a realistic longitudinal bunch form of r.m.s. length 10.4 μm and a charge of 100 pC we conclude that the radiated power is quite small (28 W at a 1 MHz repetition rate), and all radiated energy is absorbed in the walls within 7 m along the straight section.

  12. Complete denture fabrication supported by CAD/CAM.

    PubMed

    Wimmer, Timea; Gallus, Korbinian; Eichberger, Marlis; Stawarczyk, Bogna

    2016-05-01

    The inclusion of computer-aided design/computer-aided manufacturing (CAD/CAM) technology into complete denture fabrication facilitates the procedures. The presented workflow for complete denture fabrication combines conventional and digitally supported treatment steps for improving dental care. With the presented technique, the registration of the occlusal plane, the determination of the ideal lip support, and the verification of the maxillomandibular relationship record are considered.

  13. IFEMS, an Interactive Finite Element Modeling System Using a CAD/CAM System

    NASA Technical Reports Server (NTRS)

    Mckellip, S.; Schuman, T.; Lauer, S.

    1980-01-01

    A method of coupling a CAD/CAM system with a general purpose finite element mesh generator is described. The three computer programs which make up the interactive finite element graphics system are discussed.

  14. Computational Efficiency through Visual Argument: Do Graphic Organizers Communicate Relations in Text Too Effectively?

    ERIC Educational Resources Information Center

    Robinson, Daniel H.; Schraw, Gregory

    1994-01-01

    Three experiments involving 138 college students investigated why one type of graphic organizer (a matrix) may communicate interconcept relations better than an outline or text. Results suggest that a matrix is more computationally efficient than either outline or text, allowing the easier computation of relationships. (SLD)

  15. Parametric bicubic spline and CAD tools for complex targets shape modelling in physical optics radar cross section prediction

    NASA Astrophysics Data System (ADS)

    Delogu, A.; Furini, F.

    1991-09-01

    Increasing interest in radar cross section (RCS) reduction is placing new demands on theoretical, computational, and graphic techniques for calculating the scattering properties of complex targets. In particular, computer codes capable of predicting the RCS of an entire aircraft at high frequency and of achieving RCS control with modest structural changes are becoming of paramount importance in stealth design. A computer code, evaluating the RCS of arbitrarily shaped metallic objects that are computer aided design (CAD) generated, and its validation with measurements carried out using ALENIA RCS test facilities are presented. The code, based on the physical optics method, is characterized by an efficient integration algorithm with error control, in order to contain the computer time within acceptable limits, and by an accurate parametric representation of the target surface in terms of bicubic splines.
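
    Codes of this kind are typically sanity-checked against closed-form physical optics limits before being applied to CAD-generated targets; a standard example (not taken from the paper) is the normal-incidence RCS of a flat plate of area A at wavelength λ:

    \sigma_{\mathrm{PO}} \;=\; \frac{4\pi A^{2}}{\lambda^{2}}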

  16. Digital data management for CAD/CAM technology. An update of current systems.

    PubMed

    Andreiotelli, M; Kamposiora, P; Papavasiliou, G

    2013-03-01

    Computer-aided design/computer-aided manufacturing (CAD/CAM) technology continues to rapidly evolve in the dental community. This review article provides an overview of the operational components and methodologies used with some of the CAD/CAM systems. Future trends are also discussed. While these systems show great promise, the quality of performance varies among systems. No single system currently acquires data directly in the oral cavity and produces restorations using all materials available. Further refinements of these CAD/CAM technologies may increase their capabilities, but further special training will be required for effective use.

  17. Fracture resistance of CAD/CAM-fabricated fiber-reinforced composite denture retainers.

    PubMed

    Nagata, Kohji; Wakabayashi, Noriyuki; Takahashi, Hidekazu; Vallittu, Pekka K; Lassila, Lippo V J

    2013-01-01

    The purpose of this study was to evaluate the fracture resistance of computer-aided design/computer-assisted manufacture (CAD/CAM)-fabricated fiber-reinforced composite (FRC) denture retainers. Distal extension dentures incorporating two telescopic retainers and two molar pontics, with or without fiberglass, were fabricated by CAD/CAM or by the conventional polymerization method. The dentures were subjected to a vertical load on the second molar pontic until fracture. Within each manufacturing method, embedment of the FRC increased the mean final fracture load, suggesting the reinforcing effect of fiberglass. The polymerized dentures with FRC showed greater mean final fracture load than the CAD/CAM dentures with FRC.

  18. An Efficient Objective Analysis System for Parallel Computers

    NASA Technical Reports Server (NTRS)

    Stobie, J.

    1999-01-01

    A new atmospheric objective analysis system designed for parallel computers will be described. The system can produce a global analysis (on a 1 x 1 lat-lon grid with 18 levels of heights and winds and 10 levels of moisture) using 120,000 observations in 17 minutes on 32 CPUs (SGI Origin 2000). No special parallel code is needed (e.g. MPI or multitasking) and the 32 CPUs do not have to be on the same platform. The system is totally portable and can run on several different architectures at once. In addition, the system can easily scale up to 100 or more CPUs. This will allow for much higher resolution and significant increases in input data. The system scales linearly with the number of observations and the number of grid points. The cost overhead in going from 1 to 32 CPUs is 18%. In addition, the analysis results are identical regardless of the number of processors used. This system has all the characteristics of optimal interpolation, combining detailed instrument and first guess error statistics to produce the best estimate of the atmospheric state. Static tests with a 2 x 2.5 resolution version of this system showed its analysis increments are comparable to the latest NASA operational system, including maintenance of mass-wind balance. Results from several months of cycling tests in the Goddard EOS Data Assimilation System (GEOS DAS) show this new analysis retains the same level of agreement between the first guess and observations (O-F statistics) as the current operational system.

  19. An Efficient Objective Analysis System for Parallel Computers

    NASA Technical Reports Server (NTRS)

    Stobie, James G.

    1999-01-01

    A new objective analysis system designed for parallel computers will be described. The system can produce a global analysis (on a 2 x 2.5 lat-lon grid with 20 levels of heights and winds and 10 levels of moisture) using 120,000 observations in less than 3 minutes on 32 CPUs (SGI Origin 2000). No special parallel code is needed (e.g. MPI or multitasking) and the 32 CPUs do not have to be on the same platform. The system is totally portable and can run on several different architectures at once. In addition, the system can easily scale up to 100 or more CPUs. This will allow for much higher resolution and significant increases in input data. The system scales linearly with the number of observations and the number of grid points. The cost overhead in going from 1 to 32 CPUs is 18%. In addition, the analysis results are identical regardless of the number of processors used. This system has all the characteristics of optimal interpolation, combining detailed instrument and first guess error statistics to produce the best estimate of the atmospheric state. It also includes a new quality control (buddy check) system. Static tests with the system showed its analysis increments are comparable to the latest NASA operational system, including maintenance of mass-wind balance. Results from a 2-month cycling test in the Goddard EOS Data Assimilation System (GEOS DAS) show this new analysis retains the same level of agreement between the first guess and observations (O-F statistics) throughout the entire two months.
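
    For reference, the optimal interpolation update that systems of this kind implement can be written as follows (our summary in standard notation, not the paper's):

    \mathbf{x}_a \;=\; \mathbf{x}_b \;+\; \mathbf{K}\,(\mathbf{y} - H\mathbf{x}_b),
    \qquad
    \mathbf{K} \;=\; \mathbf{B}H^{\mathsf{T}}\left(H\mathbf{B}H^{\mathsf{T}} + \mathbf{R}\right)^{-1}

    where x_b is the first guess, y the observations, H the observation operator, and B and R the first-guess and observation error covariances; the O-F statistics quoted above are the innovations y - Hx_b.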

  20. Computer-aided design development transition for IPAD environment

    NASA Technical Reports Server (NTRS)

    Owens, H. G.; Mock, W. D.; Mitchell, J. C.

    1980-01-01

    The relationship of federally sponsored computer-aided design/computer-aided manufacturing (CAD/CAM) programs to the aircraft life cycle design process, an overview of NAAD'S CAD development program, an evaluation of the CAD design process, a discussion of the current computing environment within which NAAD is developing its CAD system, some of the advantages/disadvantages of the NAAD-IPAD approach, and CAD developments during transition into the IPAD system are discussed.

  2. Fabricating a fiber-reinforced post and zirconia core with CAD/CAM technology.

    PubMed

    Lee, Ju-Hyoung; Sohn, Dong-Seok; Lee, Cheong-Hee

    2014-09-01

    This article describes a technique for overcoming the limitations of dental scanners in imaging post spaces by using a single fiber-reinforced post and computer-aided design and computer-aided manufacturing (CAD/CAM) technology, thereby eliminating the need for a 'Scan Post' and the post and core module in the CAD. This technique produces an anatomically correct core and ensures the correct thickness of crown restorations.

  3. Model-Based Engineering and Manufacturing CAD/CAM Benchmark.

    SciTech Connect

    Domm, T.C.; Underwood, R.S.

    1999-10-13

    The Benchmark Project was created from a desire to identify best practices and improve the overall efficiency and performance of the Y-12 Plant's systems and personnel supporting the manufacturing mission. The mission of the benchmark team was to search out industry leaders in manufacturing and evaluate their engineering practices and processes to determine direction and focus for Y-12 modernization efforts. The companies visited included several large established companies and a new, small, high-tech machining firm. As a result of this effort, changes are recommended that will enable Y-12 to become a more modern, responsive, cost-effective manufacturing facility capable of supporting the needs of the Nuclear Weapons Complex (NWC) into the 21st century. The benchmark team identified key areas of interest, both focused and general. The focus areas included Human Resources, Information Management, Manufacturing Software Tools, and Standards/Policies and Practices. Areas of general interest included Infrastructure, Computer Platforms and Networking, and Organizational Structure. The results of this benchmark showed that all companies are moving in the direction of model-based engineering and manufacturing. There was evidence that many companies are trying to grasp how to manage current and legacy data. In terms of engineering design software tools, the companies contacted were somewhere between 3-D solid modeling and surfaced wire-frame models. The manufacturing computer tools were varied, with most companies using more than one software product to generate machining data and none currently performing model-based manufacturing (MBM) from a common model. The majority of companies were closer to identifying or using a single computer-aided design (CAD) system than a single computer-aided manufacturing (CAM) system. The Internet was a technology that all companies were looking to either transport information more easily throughout the corporation or as a conduit for

  4. Model-Based Engineering and Manufacturing CAD/CAM Benchmark

    SciTech Connect

    Domm, T.D.; Underwood, R.S.

    1999-04-26

    The Benchmark Project was created from a desire to identify best practices and improve the overall efficiency and performance of the Y-12 Plant's systems and personnel supporting the manufacturing mission. The mission of the benchmark team was to search out industry leaders in manufacturing and evaluate their engineering practices and processes to determine direction and focus for Y-12 modernization efforts. The companies visited included several large established companies and a new, small, high-tech machining firm. As a result of this effort, changes are recommended that will enable Y-12 to become a more responsive, cost-effective manufacturing facility capable of supporting the needs of the Nuclear Weapons Complex (NWC) and Work For Others into the 21st century. The benchmark team identified key areas of interest, both focused and general. The focus areas included Human Resources, Information Management, Manufacturing Software Tools, and Standards/Policies and Practices. Areas of general interest included Infrastructure, Computer Platforms and Networking, and Organizational Structure. The method for obtaining the desired information in these areas centered on the creation of a benchmark questionnaire. The questionnaire was used throughout each of the visits as the basis for information gathering. The results of this benchmark showed that all companies are moving in the direction of model-based engineering and manufacturing. There was evidence that many companies are trying to grasp how to manage current and legacy data. In terms of engineering design software tools, the companies contacted were using both 3-D solid modeling and surfaced wire-frame models. The manufacturing computer tools were varied, with most companies using more than one software product to generate machining data and none currently performing model-based manufacturing (MBM) from a common model. The majority of companies were closer to identifying or using a single computer-aided design (CAD) system than a

  5. Computer aided production engineering

    SciTech Connect

    Not Available

    1986-01-01

    This book presents the following contents: CIM in avionics; computer analysis of product designs for robot assembly; a simulation decision model for manpower forecast and its application; development of flexible manufacturing system; advances in microcomputer applications in CAD/CAM; an automated interface between CAD and process planning; CAM and computer vision; low friction pneumatic actuators for accurate robot control; robot assembly of printed circuit boards; information systems design for computer integrated manufacture; and a CAD engineering language to aid manufacture.

  6. CAD-Based Aerodynamic Design of Complex Configurations using a Cartesian Method

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.; Pulliam, Thomas H.

    2003-01-01

    A modular framework for aerodynamic optimization of complex geometries is developed. By working directly with a parametric CAD system, complex-geometry models are modified and tessellated in an automatic fashion. The use of a component-based Cartesian method significantly reduces the demands on the CAD system, and also provides for robust and efficient flowfield analysis. The optimization is controlled using either a genetic or quasi-Newton algorithm. Parallel efficiency of the framework is maintained even when subject to limited CAD resources by dynamically re-allocating the processors of the flow solver. Overall, the resulting framework can explore designs incorporating large shape modifications and changes in topology.

  7. Computationally efficient multipoint linkage analysis on extended pedigrees for trait models with two contributing major loci

    PubMed Central

    Su, Ming; Thompson, Elizabeth A.

    2013-01-01

    We have developed a computationally efficient method for multipoint linkage analysis on extended pedigrees for trait models having a two-locus QTL effect. The method has been implemented in the program hg-lod, which uses an MCMC method to sample realizations of descent patterns conditional on marker data, then calculates the trait likelihood for each realization by efficient exact computation. Given its computational efficiency, hg-lod can handle data on large pedigrees with many unobserved individuals, and can compute accurate estimates of lod scores at a much larger number of hypothesized locations than can any existing method. We have compared hg-lod to lm-twoqtl, the first publicly available linkage program for trait models with two major loci, using simulated data. Results show that our method is orders of magnitude faster while the accuracy of QTL localization is retained. The efficiency of our method also facilitates analyses with multiple trait models, e.g., sensitivity analysis. Additionally, since the MCMC sampling conditions only on the marker data, there is no need to resample the descent patterns to compute likelihoods under alternative trait models. This achieves additional computational efficiency. PMID:22740194

  8. Mechanical properties and DIC analyses of CAD/CAM materials

    PubMed Central

    Roperto, Renato; Akkus, Anna; Akkus, Ozan; Porto-Neto, Sizenando; Teich, Sorin; Lang, Lisa; Campos, Edson

    2016-01-01

    Background This study compared two well-known computer-aided-design/computer-aided-manufactured (CAD/CAM) blocks (Paradigm MZ100 [3M ESPE] and Vitablocs Mark II [Vita]) in terms of fracture toughness (K_Ic), index of brittleness (BI) and stress/strain distributions. Material and Methods A three-point bending test was used to calculate the fracture toughness, and the relationship between the K_Ic and the Vickers hardness was used to calculate the index of brittleness. Additionally, digital image correlation (DIC) was used to analyze the stress/strain distribution on both materials. Results The values for fracture toughness obtained under three-point bending were 1.87 Pa√m (±0.69) for Paradigm MZ100 and 1.18 Pa√m (±0.17) for Vitablocs Mark II. For the index of brittleness, the values for Paradigm and Vitablocs were 73.13 μm^(-1/2) (±30.72) and 550.22 μm^(-1/2) (±82.46). One-way ANOVA was performed to find differences (α=0.05) and detected deviation between the stress/strain distributions on both materials. Conclusions Both CAD/CAM materials tested presented similar fracture toughness but different strain/stress distributions. Both materials may perform similarly when used in CAD/CAM restorations. Key words: Ceramic, CAD/CAM, hybrid materials, composite resin, fracture toughness. PMID:27957262

  9. Recovery Act - CAREER: Sustainable Silicon -- Energy-Efficient VLSI Interconnect for Extreme-Scale Computing

    SciTech Connect

    Chiang, Patrick

    2014-01-31

    The research goal of this CAREER proposal is to develop energy-efficient, VLSI interconnect circuits and systems that will facilitate future massively-parallel, high-performance computing. Extreme-scale computing will exhibit massive parallelism on multiple vertical levels, from thousands of computational units on a single processor to thousands of processors in a single data center. Unfortunately, the energy required to communicate between these units at every level (on-chip, off-chip, off-rack) will be the critical limitation to energy efficiency. Therefore, the PI's career goal is to become a leading researcher in the design of energy-efficient VLSI interconnect for future computing systems.

  10. Modelling and computationally efficient time domain linear equalisation of nonlinear bandlimited QPSK satellite channels

    NASA Technical Reports Server (NTRS)

    Konstantinides, K.; Yao, K.

    1990-01-01

    The problem of modeling and equalization of a nonlinear satellite channel is considered. The channel is assumed to be bandlimited and exhibits both amplitude and phase nonlinearities. In traditional models, computations are usually performed in the frequency domain and solutions are based on complex numerical techniques. A discrete-time model is used to represent the satellite link with both uplink and downlink white Gaussian noise. Under conditions of practical interest, a simple and computationally efficient time-domain design technique for the minimum mean square error linear equalizer is presented. The efficiency of this technique is enhanced by the use of a fast and simple iterative algorithm for the computation of the autocorrelation coefficients of the output of the nonlinear channel. Numerical results on the evaluation of bit error probability and other relevant parameters needed in the design and analysis of a nonlinear bandlimited QPSK system demonstrate the simplicity and computational efficiency of the proposed approach.
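
    Once the autocorrelation coefficients of the nonlinear channel output are available (the paper computes them with a fast iterative algorithm), the final equalizer design step reduces to a Toeplitz linear solve; a sketch of that step, not the authors' code:

    ```python
    import numpy as np
    from scipy.linalg import toeplitz

    def mmse_equalizer(r, p):
        """Time-domain MMSE linear equalizer taps.
        r: autocorrelation of the channel output, r[k] = E[y[n] y*[n-k]]
        p: cross-correlation with the desired symbol, p[k] = E[y[n-k] d*[n]]
        """
        R = toeplitz(r)               # Hermitian Toeplitz correlation matrix
        return np.linalg.solve(R, p)  # w minimizing E|w^H y - d|^2
    ```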

  11. Selective reduction of CAD false-positive findings

    NASA Astrophysics Data System (ADS)

    Camarlinghi, N.; Gori, I.; Retico, A.; Bagagli, F.

    2010-03-01

    Computer-Aided Detection (CAD) systems are becoming widespread supporting tools for radiologists' diagnosis, especially in screening contexts. However, a large number of false-positive (FP) alarms would inevitably lead both to an undesired increase in the time for diagnosis and to a reduction in radiologists' confidence in CAD as a useful tool. Most CAD systems implement, as the final step of the analysis, a classifier which assigns a score to each entry of a list of findings; by thresholding this score it is possible to define the system performance on an annotated validation dataset in terms of a FROC curve (sensitivity vs. FP per scan). To use a CAD as a supportive tool for most clinical activities, an operating point has to be chosen on the system FROC curve, according to the obvious criterion of keeping the sensitivity as high as possible while maintaining an acceptable number of FP alarms. The strategy proposed in this study is to choose an operating point with high sensitivity on the CAD FROC curve, then to implement in cascade a further classification step, constituted by a smarter classifier. The key feature of this approach is that the smarter classifier is actually a meta-classifier of more than one decision system, each specialized in rejecting a particular type of FP finding generated by the CAD. The application of this approach to a dataset of 16 lung CT scans previously processed by the VBNACAD system is presented. The VBNACAD performance on lung CT of 87.1% sensitivity to juxtapleural nodules is improved from 18.5 to 10.1 FP per scan while maintaining the same sensitivity. This work has been carried out in the framework of the MAGIC-V collaboration.
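
    In outline, the proposed cascade amounts to thresholding at a high-sensitivity FROC operating point and then letting a bank of specialized classifiers veto candidates; the `rejectors` below are hypothetical per-FP-type models with a boolean `predict` method, standing in for the paper's meta-classifier:

    ```python
    def cascade_filter(findings, scores, threshold, rejectors):
        """Keep CAD findings above the chosen operating point, then drop any
        finding that some specialized FP rejector flags."""
        kept = []
        for finding, score in zip(findings, scores):
            if score < threshold:        # operating point chosen on the FROC curve
                continue
            if not any(r.predict(finding) for r in rejectors):
                kept.append(finding)     # survives the meta-classifier stage
        return kept
    ```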

  12. Expert validation of the knowledge base for E-CAD - a pre-hospital dispatch triage decision support system.

    PubMed

    Mirza, Muzna; Saini, Devashish; Brown, Todd B; Orthner, Helmuth F; Mazza, Giovanni; Battles, Marcie M

    2007-10-11

    The knowledge base (KB) for E-CAD (Enhanced Computer-Aided Dispatch), a triage decision support system for Emergency Medical Dispatch (EMD) of medical resources in trauma cases, is being evaluated. We aim to achieve expert consensus for validation and refinement of the E-CAD KB using the modified Delphi technique. An evidence-based, expert-validated, and refined KB will provide improved EMD practice guidelines and may facilitate acceptance of E-CAD by state-wide professionals.

  13. A single user efficiency measure for evaluation of parallel or pipeline computer architectures

    NASA Technical Reports Server (NTRS)

    Jones, W. P.

    1978-01-01

    A precise statement was developed of the relationship between sequential computation at one rate, parallel or pipeline computation at a much higher rate, the data-movement rate between levels of memory, the fraction of inherently sequential operations or data that must be processed sequentially, the fraction of data movement that cannot be overlapped with computation, and the relative computational complexity of the algorithms for the two processes, scalar and vector. This relationship should be applied to the multirate processes that arise when various new or proposed computer architectures are employed for computational aerodynamics. The relationship, an efficiency measure as perceived by the single user of the computer system, argues strongly in favor of separating scalar and vector processes, sometimes referred to as loosely coupled processes, to achieve optimum use of the hardware.
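
    The abstract does not reproduce the measure itself, so the following is only an illustrative Amdahl-style reconstruction built from the same ingredients (sequential fraction, non-overlapped data movement, relative algorithmic complexity); it should not be read as the paper's exact formula:

    ```python
    def single_user_efficiency(f_seq, f_io, r_scalar, r_vector, r_mem, c_rel=1.0):
        """Illustrative multirate efficiency model with total work normalized
        to 1. f_seq: inherently sequential fraction; f_io: fraction of data
        movement that cannot be overlapped with computation; r_*: scalar,
        vector, and memory rates; c_rel: relative complexity of the vector
        algorithm."""
        t_ideal = 1.0 / r_vector                        # all work at vector speed
        t_actual = (f_seq / r_scalar                    # sequential (scalar) part
                    + (1.0 - f_seq) * c_rel / r_vector  # vectorized part
                    + f_io / r_mem)                     # exposed data movement
        return t_ideal / t_actual
    ```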

  14. A Simple and Resource-efficient Setup for the Computer-aided Drug Design Laboratory.

    PubMed

    Moretti, Loris; Sartori, Luca

    2016-10-01

    Undertaking modelling investigations for Computer-Aided Drug Design (CADD) requires a proper environment. In principle, this could be done on a single computer, but the reality of a drug discovery program requires robustness and high-throughput computing (HTC) to efficiently support the research. A more capable alternative is therefore needed, but its implementation has no widespread solution. Here, the realization of such a computing facility is discussed; all aspects are covered, from general layout to technical details. © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Understanding dental CAD/CAM for restorations--the digital workflow from a mechanical engineering viewpoint.

    PubMed

    Tapie, L; Lebon, N; Mawussi, B; Fron Chabouis, H; Duret, F; Attal, J-P

    2015-01-01

    As digital technology infiltrates every area of daily life, including the field of medicine, so it is increasingly being introduced into dental practice. Apart from chairside practice, computer-aided design/computer-aided manufacturing (CAD/CAM) solutions are available for creating inlays, crowns, fixed partial dentures (FPDs), implant abutments, and other dental prostheses. CAD/CAM dental solutions can be considered a chain of digital devices and software for the almost automatic design and creation of dental restorations. However, dentists who want to use the technology often do not have the time or knowledge to understand it. A basic knowledge of the CAD/CAM digital workflow for dental restorations can help dentists to grasp the technology and purchase a CAD/CAM system that meets the needs of their office. This article provides a computer-science and mechanical-engineering approach to the CAD/CAM digital workflow to help dentists understand the technology.

  16. Intrinsic Efficiency Calibration Considering Geometric Factors in Gamma-ray Computed Tomography for Radioactive Waste Assay

    SciTech Connect

    Liu, Zhe; Zhang, Li

    2015-07-01

    In radioactive waste assay with gamma-ray computed tomography, calibration for the intrinsic efficiency of the system is important to the reconstruction of the radioactivity distribution. Due to the geometric characteristics of the system, the non-uniformity of intrinsic efficiency for gamma-rays with different incident positions and directions is often non-negligible. Intrinsic efficiency curves versus geometric parameters of the incident gamma-ray are obtained by Monte-Carlo simulation, and two intrinsic efficiency models are suggested to characterize, in the system matrix, the intrinsic efficiency determined by the relative source-detector position and the system geometry. Monte-Carlo simulation is performed to compare the different intrinsic efficiency models. Both suggested models achieve better reconstruction of the radioactivity distribution than the uniform intrinsic efficiency model. Compared with the model based on detector position, the model based on point response increases reconstruction accuracy as well as the complexity and time of the calculation. (authors)

  17. Perceptual challenges to computer-aided diagnosis

    NASA Astrophysics Data System (ADS)

    Jiang, Yulei

    2012-03-01

    We review the motivation and development of computer-aided diagnosis (CAD) in diagnostic medical imaging, particularly in breast cancer screening. After briefly describing typical CAD methods in generic terms, we focus on the question of whether CAD helps improve diagnostic accuracy. We review both studies that support the notion that CAD helps improve diagnostic accuracy and studies that do not. We further identify difficulties in conducting this type of evaluation study and suggest areas where perception poses challenges to applications of CAD.

  18. Marginal adaptation and CAD-CAM technology: A systematic review of restorative material and fabrication techniques.

    PubMed

    Papadiochou, Sofia; Pissiotis, Argirios L

    2017-09-27

    The comparative assessment of computer-aided design and computer-aided manufacturing (CAD-CAM) technology and other fabrication techniques pertaining to marginal adaptation should be documented. Limited evidence exists on the effect of restorative material on the performance of a CAD-CAM system relative to marginal adaptation. The purpose of this systematic review was to investigate whether the marginal adaptation of CAD-CAM single crowns, fixed dental prostheses, and implant-retained fixed dental prostheses or their infrastructures differs from that obtained by other fabrication techniques using a similar restorative material and whether it depends on the type of restorative material. An electronic search of English-language literature published between January 1, 2000, and June 30, 2016, was conducted of the Medline/PubMed database. Of the 55 included comparative studies, 28 compared CAD-CAM technology with conventional fabrication techniques, 12 contrasted CAD-CAM technology and copy milling, 4 compared CAD-CAM milling with direct metal laser sintering (DMLS), and 22 investigated the performance of a CAD-CAM system regarding marginal adaptation in restorations/infrastructures produced with different restorative materials. Most of the CAD-CAM restorations/infrastructures were within the clinically acceptable marginal discrepancy (MD) range. The performance of a CAD-CAM system relative to marginal adaptation is influenced by the restorative material. Compared with CAD-CAM, most of the heat-pressed lithium disilicate crowns displayed equal or smaller MD values. Slip-casting crowns exhibited similar or better marginal accuracy than those fabricated with CAD-CAM. Cobalt-chromium and titanium implant infrastructures produced using a CAD-CAM system elicited smaller MD values than zirconia. The majority of cobalt-chromium restorations/infrastructures produced by DMLS displayed better marginal accuracy than those fabricated with the casting technique. Compared with copy

  19. Learning-based image preprocessing for robust computer-aided detection

    NASA Astrophysics Data System (ADS)

    Raghupathi, Laks; Devarakota, Pandu R.; Wolf, Matthias

    2013-03-01

    Recent studies have shown that low-dose computed tomography (LDCT) can be an effective screening tool to reduce lung cancer mortality. Computer-aided detection (CAD) would be a beneficial second reader for radiologists in such cases. Studies demonstrate that while iterative reconstruction (IR) improves LDCT diagnostic quality, it significantly degrades CAD performance (increased false positives) when applied directly. For improving CAD performance, solutions such as retraining with newer data or applying a standard preprocessing technique may not suffice, given the high prevalence of CT scanners and non-uniform acquisition protocols. Here, we present a learning-based framework that can adaptively transform a wide variety of input data to boost the performance of an existing CAD. This not only enhances its robustness but also its applicability in clinical workflows. Our solution consists of automatically applying a suitable preprocessing filter to a given image based on its characteristics. This requires preparing ground truth (GT) for choosing an appropriate filter that results in improved CAD performance. Accordingly, we propose an efficient consolidation process with a novel metric. Using key anatomical landmarks, we then derive consistent feature descriptors for the classification scheme, which uses a priority mechanism to automatically choose an optimal preprocessing filter. We demonstrate CAD prototype performance improvement using hospital-scale datasets acquired from North America, Europe, and Asia. Though we demonstrated our results for a lung nodule CAD, this scheme is straightforward to extend to other post-processing tools dedicated to other organs and modalities.

  1. Efficient use of high performance computers for integrated controls and structures design. [of large space platforms

    NASA Technical Reports Server (NTRS)

    Belvin, W. K.; Maghami, P. G.; Nguyen, D. T.

    1992-01-01

    Simply transporting design codes from sequential-scalar computers to parallel-vector computers does not fully utilize the computational benefits offered by high performance computers. By performing integrated controls and structures design on an experimental truss platform with both sequential-scalar and parallel-vector design codes, conclusive results are presented to substantiate this claim. The efficiency of a Cholesky factorization scheme in conjunction with a variable-band row data structure is presented. In addition, the Lanczos eigensolution algorithm has been incorporated in the design code for both parallel and vector computations. Comparisons of computational efficiency between the initial design code and the parallel-vector design code are presented. It is shown that the Lanczos algorithm with the Cholesky factorization scheme is far superior to the sub-space iteration method of eigensolution when substantial numbers of eigenvectors are required for control design and/or performance optimization. Integrated design results show the need for continued efficiency studies in the area of element computations and matrix assembly.

  2. An approximate solution to improve computational efficiency of impedance-type payload load prediction

    NASA Technical Reports Server (NTRS)

    White, C. W.

    1981-01-01

    The computational efficiency of the impedance type loads prediction method was studied. Three goals were addressed: devise a method to make the impedance method operate more efficiently in the computer; assess the accuracy and convenience of the method for determining the effect of design changes; and investigate the use of the method to identify design changes for reduction of payload loads. The method is suitable for calculation of dynamic response in either the frequency or time domain. It is concluded that: the choice of an orthogonal coordinate system will allow the impedance method to operate more efficiently in the computer; the approximate mode impedance technique is adequate for determining the effect of design changes, and is applicable for both statically determinate and statically indeterminate payload attachments; and beneficial design changes to reduce payload loads can be identified by the combined application of impedance techniques and energy distribution review techniques.

  3. A Computationally Efficient Parallel Levenberg-Marquardt Algorithm for Large-Scale Big-Data Inversion

    NASA Astrophysics Data System (ADS)

    Lin, Y.; O'Malley, D.; Vesselinov, V. V.

    2015-12-01

    Inverse modeling seeks model parameters given a set of observed state variables. However, because the observed data sets are often large and the model parameters are often numerous, conventional methods for solving such inverse problems can be computationally expensive for many practical problems. We have developed a new, computationally efficient Levenberg-Marquardt method for solving large-scale inverse modeling problems. Levenberg-Marquardt methods require the solution of a dense linear system of equations which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, such that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed when solving for the first damping parameter and recycle it for all the following damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new method to invert for a random transmissivity field. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Compared with a Levenberg-Marquardt method using standard linear inversion techniques, our method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment. Therefore, our new inverse modeling method is a
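
    A numerical sketch of the two ideas described here: project the damped normal equations onto a Krylov subspace of J^T J, and recycle that subspace across all damping parameters (the production implementation is in Julia within MADS; this Python version is only illustrative):

    ```python
    import numpy as np

    def lm_steps_recycled(J, r, dampings, k=50):
        """Return LM steps solving (J^T J + lam*I) dx = -J^T r for several
        damping values lam, reusing one Krylov basis of A = J^T J."""
        g = J.T @ r
        Q = np.zeros((J.shape[1], k))
        H = np.zeros((k, k))
        Q[:, 0] = g / np.linalg.norm(g)
        for j in range(k):                      # Arnoldi on A, matrix-free
            w = J.T @ (J @ Q[:, j])
            for i in range(j + 1):
                H[i, j] = Q[:, i] @ w
                w -= H[i, j] * Q[:, i]
            if j + 1 < k:
                H[j + 1, j] = np.linalg.norm(w)
                Q[:, j + 1] = w / H[j + 1, j]
        beta = np.linalg.norm(g)
        e1 = np.zeros(k)
        e1[0] = beta
        steps = []
        for lam in dampings:                    # recycle Q, H for every lam
            y = np.linalg.solve(H + lam * np.eye(k), e1)
            steps.append(-Q @ y)                # dx = -Q (H + lam I)^{-1} Q^T g
        return steps
    ```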

  4. Improving CAD performance by fusion of the bilateral mammographic tissue asymmetry information

    NASA Astrophysics Data System (ADS)

    Wang, Xingwei; Li, Lihua; Liu, Wei; Xu, Weidong; Lederman, Dror; Zheng, Bin

    2012-03-01

    Bilateral mammographic tissue density asymmetry could be an important factor in assessing the risk of developing breast cancer and improving the detection of suspicious lesions. This study aims to assess whether fusing bilateral mammographic density asymmetry information into a computer-aided detection (CAD) scheme could improve CAD performance in detecting mass-like breast cancers. A testing dataset involving 1352 full-field digital mammograms (FFDM) acquired from 338 cases was used. In this dataset, half of the cases (169) are positive, containing malignant masses, and half are negative. Two computerized schemes were first independently applied to process the FFDM images of each case. The first, single-image-based CAD scheme detected suspicious mass regions on each image. The second scheme detected and computed the bilateral mammographic tissue density asymmetry for each case. A fusion method was then applied to combine the output scores of the two schemes. The CAD performance levels using the original CAD-generated detection scores and the new fusion scores were evaluated and compared using a free-response receiver operating characteristic (FROC) type data analysis method. By fusing with the bilateral mammographic density asymmetry scores, the case-based CAD sensitivity was increased from 79.2% to 84.6% at a false-positive rate of 0.3 per image. CAD also cued more "difficult" masses with lower CAD-generated detection scores while discarding some "easy" cases. The study indicated that fusing the scores generated by a single-image-based CAD scheme with the computed bilateral mammographic density asymmetry scores made it possible to increase mass detection sensitivity, in particular for more subtle masses.
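
    The fusion step itself can be as simple as a weighted combination of the two case-level outputs; the max-pooling over regions and the weight below are illustrative assumptions, since the abstract does not state the exact fusion rule:

    ```python
    import numpy as np

    def fuse_case_score(detection_scores, asymmetry_score, alpha=0.7):
        """Combine per-region scores from the single-image CAD scheme with
        the case-level bilateral density asymmetry score."""
        case_detection = np.max(detection_scores)  # most suspicious region
        return alpha * case_detection + (1.0 - alpha) * asymmetry_score
    ```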

  5. Dental students' preferences and performance in crown design: conventional wax-added versus CAD.

    PubMed

    Douglas, R Duane; Hopp, Christa D; Augustin, Marcus A

    2014-12-01

    The purpose of this study was to evaluate dental students' perceptions of traditional waxing vs. computer-aided crown design and to determine the effectiveness of either technique through comparative grading of the final products. On one of two identical tooth preparations, second-year students at one dental school fabricated a wax pattern for a full contour crown; on the second tooth preparation, the same students designed and fabricated an all-ceramic crown using computer-aided design (CAD) and computer-aided manufacturing (CAM) technology. Projects were graded for occlusion and anatomic form by three faculty members. On completion of the projects, 100 percent of the students (n=50) completed an eight-question, five-point Likert scale survey designed to assess their perceptions of and learning associated with the two design techniques. The average grades for the crown design projects were 78.3 (CAD) and 79.1 (wax design). The mean numbers of occlusal contacts were 3.8 (CAD) and 2.9 (wax design), which was significantly higher for CAD (p=0.02). The survey results indicated that students enjoyed designing a full contour crown using CAD as compared to using conventional wax techniques and spent less time designing the crown using CAD. From a learning perspective, students felt that they learned more about the position and size/strength of occlusal contacts using CAD. However, students recognized that CAD technology has limits in terms of representing anatomic contours and excursive occlusion compared to conventional wax techniques. The results suggest that crown design using CAD could be considered as an adjunct to conventional wax-added techniques in preclinical fixed prosthodontic curricula.

  6. Efficient and Flexible Computation of Many-Electron Wave Function Overlaps

    PubMed Central

    2016-01-01

    A new algorithm for the computation of the overlap between many-electron wave functions is described. This algorithm allows for the extensive use of recurring intermediates and thus provides high computational efficiency. Because of the general formalism employed, overlaps can be computed for varying wave function types, molecular orbitals, basis sets, and molecular geometries. This paves the way for efficiently computing nonadiabatic interaction terms for dynamics simulations. In addition, other application areas can be envisaged, such as the comparison of wave functions constructed at different levels of theory. Aside from explaining the algorithm and evaluating the performance, a detailed analysis of the numerical stability of wave function overlaps is carried out, and strategies for overcoming potential severe pitfalls due to displaced atoms and truncated wave functions are presented. PMID:26854874
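
    The base case of such an overlap, for two single determinants over a common atomic-orbital basis, is a single determinant of an MO overlap matrix; the published algorithm generalizes this to CI expansions by reusing recurring minors, which this sketch does not attempt:

    ```python
    import numpy as np

    def determinant_overlap(C1_occ, C2_occ, S_ao):
        """<Phi1|Phi2> = det(C1_occ^T S_ao C2_occ) for two single-determinant
        wave functions whose occupied MO coefficient matrices (n_AO x n_occ)
        may come from different geometries, orbitals, or basis-set expansions,
        with S_ao the AO overlap matrix between the two."""
        return np.linalg.det(C1_occ.T @ S_ao @ C2_occ)
    ```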

  7. An efficient numerical algorithm for computing densely distributed positive interior transmission eigenvalues

    NASA Astrophysics Data System (ADS)

    Li, Tiexiang; Huang, Tsung-Ming; Lin, Wen-Wei; Wang, Jenn-Nan

    2017-03-01

    We propose an efficient eigensolver for computing densely distributed spectra of the two-dimensional transmission eigenvalue problem (TEP), which is derived from Maxwell’s equations with Tellegen media and the transverse magnetic mode. The governing equations, when discretized by the standard piecewise linear finite element method, give rise to a large-scale quadratic eigenvalue problem (QEP). Our numerical simulation shows that half of the positive eigenvalues of the QEP are densely distributed in some interval near the origin. The quadratic Jacobi-Davidson method with a so-called non-equivalence deflation technique is proposed to compute the dense spectrum of the QEP. Extensive numerical simulations show that our proposed method converges efficiently, even when it needs to compute more than 5000 desired eigenpairs. Numerical results also illustrate that the computed eigenvalue curves can be approximated by nonlinear functions, which can be applied to estimate the denseness of the eigenvalues for the TEP.
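
    For context, a small dense QEP can be solved by the textbook companion linearization below; the quadratic Jacobi-Davidson solver with non-equivalence deflation described here exists precisely to avoid this dense approach at large scale:

    ```python
    import numpy as np
    from scipy.linalg import eig

    def qep_eigs(M, C, K):
        """Solve (lam^2 M + lam C + K) x = 0 via the first companion form
        A z = lam B z with z = [lam*x; x]."""
        n = M.shape[0]
        A = np.block([[-C, -K], [np.eye(n), np.zeros((n, n))]])
        B = np.block([[M, np.zeros((n, n))], [np.zeros((n, n)), np.eye(n)]])
        vals, vecs = eig(A, B)
        return vals, vecs[n:, :]  # lower block of z carries the eigenvector x
    ```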

  8. Efficient Computation of Functional Brain Networks: toward Real-Time Functional Connectivity

    PubMed Central

    García-Prieto, Juan; Bajo, Ricardo; Pereda, Ernesto

    2017-01-01

    Functional Connectivity has been demonstrated to be a key concept for unraveling how the brain balances functional segregation and integration properties while processing information. This work presents a set of open-source tools that significantly increase the computational efficiency of some well-known connectivity indices and Graph-Theory measures. PLV, PLI, ImC, and wPLI as Phase Synchronization measures, Mutual Information as an information-theory-based measure, and Generalized Synchronization indices are computed much more efficiently than in prior available open-source implementations. Furthermore, network-theory-related measures like Strength, Shortest Path Length, Clustering Coefficient, and Betweenness Centrality are also implemented, showing computational times up to thousands of times faster than most well-known implementations. Altogether, this work significantly expands what can be computed in feasible times, even enabling whole-head real-time network analysis of brain function. PMID:28220071
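
    As an illustration of the kind of vectorization behind these speed-ups, the PLV for all channel pairs can be obtained with a single matrix product (a sketch in the toolbox's spirit, not its actual code):

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def plv_matrix(data):
        """Phase Locking Value for every channel pair.
        data: (n_channels, n_samples) band-passed signals."""
        phases = np.angle(hilbert(data, axis=1))  # instantaneous phases
        z = np.exp(1j * phases)                   # unit phasors per sample
        # PLV[i, j] = |mean_t exp(i*(phi_i - phi_j))|
        return np.abs(z @ z.conj().T) / data.shape[1]
    ```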

  9. Efficient and Flexible Computation of Many-Electron Wave Function Overlaps.

    PubMed

    Plasser, Felix; Ruckenbauer, Matthias; Mai, Sebastian; Oppel, Markus; Marquetand, Philipp; González, Leticia

    2016-03-08

    A new algorithm for the computation of the overlap between many-electron wave functions is described. This algorithm allows for the extensive use of recurring intermediates and thus provides high computational efficiency. Because of the general formalism employed, overlaps can be computed for varying wave function types, molecular orbitals, basis sets, and molecular geometries. This paves the way for efficiently computing nonadiabatic interaction terms for dynamics simulations. In addition, other application areas can be envisaged, such as the comparison of wave functions constructed at different levels of theory. Aside from explaining the algorithm and evaluating the performance, a detailed analysis of the numerical stability of wave function overlaps is carried out, and strategies for overcoming potential severe pitfalls due to displaced atoms and truncated wave functions are presented.

  10. Efficient Computation of Functional Brain Networks: toward Real-Time Functional Connectivity.

    PubMed

    García-Prieto, Juan; Bajo, Ricardo; Pereda, Ernesto

    2017-01-01

    Functional Connectivity has been demonstrated to be a key concept for unraveling how the brain balances functional segregation and integration properties while processing information. This work presents a set of open-source tools that significantly increase the computational efficiency of some well-known connectivity indices and Graph-Theory measures. PLV, PLI, ImC, and wPLI as Phase Synchronization measures, Mutual Information as an information-theory-based measure, and Generalized Synchronization indices are computed much more efficiently than in prior available open-source implementations. Furthermore, network-theory-related measures like Strength, Shortest Path Length, Clustering Coefficient, and Betweenness Centrality are also implemented, showing computational times up to thousands of times faster than most well-known implementations. Altogether, this work significantly expands what can be computed in feasible times, even enabling whole-head real-time network analysis of brain function.

  11. Microhardness evaluations of CAD/CAM ceramics irradiated with CO2 or Nd:YAP laser.

    PubMed

    El Gamal, Ahmed; Rocca, Jean Paul; Fornaini, Carlo; Medioni, Etienne; Brulat-Bouchard, Nathalie

    2017-03-31

    The aim of this study was to measure the microhardness values of irradiated computer-aided design/computer-aided manufacturing (CAD/CAM) ceramic surfaces before and after thermal treatment. Sixty CAD/CAM ceramic discs were prepared and grouped by material, i.e., lithium disilicate ceramic (Emax CAD) and zirconia ceramic (Emax ZirCAD). Laser irradiation of the material surface was performed with a carbon dioxide (CO2) laser at 5 Watt (W) or 10 W power in continuous mode (CW mode), or with a neodymium:yttrium aluminum perovskite (Nd:YAP) laser at 10 W on graphite and non-graphite surfaces. Vickers hardness was tested at 0.3 kgf for lithium disilicate and 1 kgf for zirconia. Emax CAD irradiated with CO2 at 5 W increased microhardness by 6.32 GPa, whereas Emax ZirCAD irradiated with Nd:YAP decreased microhardness by 17.46 GPa. The CO2 laser effectively increases the microhardness of lithium disilicate ceramics (Emax CAD).

  12. CAD-Based Shielding Analysis for ITER Port Diagnostics

    NASA Astrophysics Data System (ADS)

    Serikov, Arkady; Fischer, Ulrich; Anthoine, David; Bertalot, Luciano; De Bock, Maartin; O'Connor, Richard; Juarez, Rafael; Krasilnikov, Vitaly

    2017-09-01

    Radiation shielding analysis conducted in support of the design development of contemporary diagnostic systems integrated inside the ITER ports relies on the use of CAD models. This paper presents the CAD-based MCNP Monte Carlo radiation transport and activation analyses for the Diagnostic Upper and Equatorial Port Plugs (UPP #3 and EPP #8, #17). The creation of the complicated 3D MCNP models of the diagnostic systems was substantially accelerated by the application of the CAD-to-MCNP converter programs MCAM and McCad. High-performance computing resources of the Helios supercomputer made it possible to speed up the parallel MCNP transport calculations with the MPI/OpenMP interface. The shielding solutions found could be universal, reducing port R&D costs. The shield block behind the Tritium and Deposit Monitor (TDM) optical box was added to study its influence on the Shut-Down Dose Rate (SDDR) in the Port Interspace (PI) of EPP#17. The influence of neutron streaming along the Lost Alpha Monitor (LAM) on the neutron energy spectra calculated in the Tangential Neutron Spectrometer (TNS) of EPP#8 was also studied. For UPP#3 with the Charge eXchange Recombination Spectroscopy (CXRS-core) system, the analysis revealed excessive neutron streaming along the CXRS shutter, which should be prevented in a further design iteration.

  13. Some computational challenges of developing efficient parallel algorithms for data-dependent computations in thermal-hydraulics supercomputer applications

    SciTech Connect

    Woodruff, S.B.

    1992-01-01

    The Transient Reactor Analysis Code (TRAC), which features a two-fluid treatment of thermal-hydraulics, is designed to model transients in water reactors and related facilities. One of the major computational costs associated with TRAC and similar codes is calculating constitutive coefficients. Although the formulations for these coefficients are local, the costs are flow-regime- or data-dependent; i.e., the computations needed for a given spatial node often vary widely as a function of time. Consequently, poor load balancing will degrade efficiency on either vector or data-parallel architectures when the data are organized according to spatial location. Unfortunately, a general automatic solution to the load-balancing problem associated with data-dependent computations is not yet available for massively parallel architectures. This document discusses why developers should consider alternative algorithms, such as a neural net representation, that do not exhibit load-balancing problems.

  14. Development of efficient computer program for dynamic simulation of telerobotic manipulation

    NASA Technical Reports Server (NTRS)

    Chen, J.; Ou, Y. J.

    1989-01-01

    Research in robot control has generated interest in computationally efficient forms of dynamic equations for multi-body systems. For a simply connected open-loop linkage, dynamic equations arranged in recursive form were found to be particularly efficient. A general computer program capable of simulating an open-loop manipulator with an arbitrary number of links has been developed based on an efficient recursive form of Kane's dynamic equations. Also included in the program is some of the important dynamics of the joint drive system, i.e., the rotational effect of the motor rotors. Further efficiency is achieved by the use of a symbolic manipulation program to generate the FORTRAN simulation program tailored for a specific manipulator based on the given parameter values. The formulation and the validation of the program are described, and some results are shown.

  15. Efficient migration of complex off-line computer vision software to real-time system implementation on generic computer hardware.

    PubMed

    Tyrrell, James Alexander; LaPre, Justin M; Carothers, Christopher D; Roysam, Badrinath; Stewart, Charles V

    2004-06-01

    This paper addresses the problem of migrating large and complex computer vision code bases that have been developed off-line into efficient real-time implementations, avoiding the need to rewrite the software and the associated costs. Creative linking strategies based on Linux loadable kernel modules are presented to create a simultaneous realization of real-time and off-line frame-rate computer vision systems from a single code base. In this approach, systemic predictability is achieved by inserting time-critical components of a user-level executable directly into the kernel as a virtual device driver. This effectively emulates a single process-space model that is nonpreemptable, nonpageable, and has direct access to a powerful set of system-level services. This overall approach is shown to provide the basis for building a predictable frame-rate vision system using commercial off-the-shelf hardware and a standard uniprocessor Linux operating system. Experiments on a frame-rate vision system designed for computer-assisted laser retinal surgery show that this method reduces the variance of observed per-frame central processing unit cycle counts by two orders of magnitude. The conclusion is that when predictable application algorithms are used, it is possible to efficiently migrate to a predictable frame-rate computer vision system.

  16. A computationally efficient denoising and hole-filling method for depth image enhancement

    NASA Astrophysics Data System (ADS)

    Liu, Soulan; Chen, Chen; Kehtarnavaz, Nasser

    2016-04-01

    Depth maps captured by Kinect depth cameras are widely used for 3D action recognition. However, such images often appear noisy and contain missing pixels or black holes. This paper presents a computationally efficient method for both denoising and hole-filling in depth images. The denoising is achieved by utilizing a combination of Gaussian kernel filtering and anisotropic filtering. The hole-filling is achieved by utilizing a combination of morphological filtering and zero-block filtering. Experimental results on publicly available datasets indicate the superiority of the developed method, in terms of both depth error and computational efficiency, over three existing methods.
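
    A minimal sketch combining the ingredients the paper names (Gaussian smoothing plus morphological filling), treating zero pixels as holes; the published method's exact filters, parameters, and zero-block step are not reproduced here:

    ```python
    import numpy as np
    from scipy import ndimage

    def enhance_depth(depth, sigma=1.0, hole_size=3):
        """Denoise a depth map and fill its holes (zeros)."""
        holes = depth == 0
        # hole-aware (normalized) Gaussian smoothing of the valid pixels
        blurred = ndimage.gaussian_filter(np.where(holes, 0.0, depth), sigma)
        weight = ndimage.gaussian_filter((~holes).astype(float), sigma)
        denoised = blurred / np.maximum(weight, 1e-6)
        # morphological filling pulls surrounding depth into the holes
        filled = ndimage.grey_closing(denoised, size=hole_size)
        return np.where(holes, filled, denoised)
    ```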

  17. Integration of a CAD System Into an MDO Framework

    NASA Technical Reports Server (NTRS)

    Townsend, J. C.; Samareh, J. A.; Weston, R. P.; Zorumski, W. E.

    1998-01-01

    NASA Langley has developed a heterogeneous distributed computing environment, called the Framework for Interdisciplinary Design Optimization, or FIDO. Its purpose has been to demonstrate the technical feasibility and usefulness of a framework for optimizing the preliminary design of complex systems and to provide a working environment for testing optimization schemes. Its initial implementation has been for a simplified model of the preliminary design of a high-speed civil transport. Upgrades being considered for the FIDO system include a more complete geometry description, required by high-fidelity aerodynamics and structures codes and based on a commercial Computer Aided Design (CAD) system. This report presents the philosophy behind some of the decisions that have shaped the FIDO system and gives a brief case study of the problems and successes encountered in integrating a CAD system into the FIDO framework.

  18. Energy-efficient Data-intensive Computing with a Fast Array of Wimpy Nodes

    DTIC Science & Technology

    2011-10-01

    power to servers. Providing such a low PUE has required innovation in battery backup systems, efficient power supplies, voltage regulators, and novel... Large-scale data-intensive computing systems have become a critical foundation for Internet-scale services. Their widespread growth during the... power servers that are individually optimized for energy efficiency rather than raw performance alone. FAWN systems, however, have a different set of

  19. Computationally Efficient Use of Derivatives in Emulation of Complex Computational Models

    SciTech Connect

    Williams, Brian J.; Marcy, Peter W.

    2012-06-07

    We will investigate the use of derivative information in complex computer model emulation when the correlation function is of the compactly supported Bohman class. To this end, a Gaussian process model similar to that used by Kaufman et al. (2011) is extended to a situation where first partial derivatives in each dimension are calculated at each input site (i.e. using gradients). A simulation study in the ten-dimensional case is conducted to assess the utility of the Bohman correlation function against strictly positive correlation functions when a high degree of sparsity is induced.
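
    The Bohman correlation function referred to here has a simple closed form; a sketch, which also shows why compact support makes large covariance matrices sparse (entries vanish exactly beyond the support radius):

    ```python
    import numpy as np

    def bohman(r, support):
        """Bohman compactly supported correlation:
        B(t) = (1 - t) * cos(pi * t) + sin(pi * t) / pi for t = |r|/support <= 1,
        and exactly 0 beyond the support."""
        t = np.abs(r) / support
        val = (1.0 - t) * np.cos(np.pi * t) + np.sin(np.pi * t) / np.pi
        return np.where(t < 1.0, val, 0.0)
    ```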

  20. Energy-Efficient Computational Chemistry: Comparison of x86 and ARM Systems.

    PubMed

    Keipert, Kristopher; Mitra, Gaurav; Sunriyal, Vaibhav; Leang, Sarom S; Sosonkina, Masha; Rendell, Alistair P; Gordon, Mark S

    2015-11-10

    The computational efficiency and energy-to-solution of several applications using the GAMESS quantum chemistry suite of codes is evaluated for 32-bit and 64-bit ARM-based computers, and compared to an x86 machine. The x86 system completes all benchmark computations more quickly than either ARM system and is the best choice to minimize time to solution. The ARM64 and ARM32 computational performances are similar to each other for Hartree-Fock and density functional theory energy calculations. However, for memory-intensive second-order perturbation theory energy and gradient computations the lower ARM32 read/write memory bandwidth results in computation times as much as 86% longer than on the ARM64 system. The ARM32 system is more energy efficient than the x86 and ARM64 CPUs for all benchmarked methods, while the ARM64 CPU is more energy efficient than the x86 CPU for some core counts and molecular sizes.

  1. Combining associative computing and distributed arithmetic methods for efficient implementation of multiple inner products

    NASA Astrophysics Data System (ADS)

    Guevorkian, David; Yli-Pietilä, Timo; Liuha, Petri; Egiazarian, Karen

    2012-02-01

    Many multimedia processing algorithms, as well as communication algorithms implemented in mobile devices, are based on intensive use of linear algebra methods, in particular implying the implementation of a large number of inner products in real time. Among the most efficient approaches to performing inner products are the Associative Computing (ASC) approach and the Distributed Arithmetic (DA) approach. In ASC, computations are performed on Associative Processors (ASPs), where Content-Addressable Memories (CAMs) are used instead of traditional processing elements to perform basic arithmetic operations. In the DA approach, computations are reduced to look-up table reads with respect to binary planes of the inputs. In this work, we propose a modification of associative processors that supports efficient implementation of the DA method. Thus, the two powerful methods are combined to further improve the efficiency of multiple inner product computation. Computational complexity analysis of the proposed method illustrates significant speed-up when computing multiple inner products as compared both to the pure ASC method and to the pure DA method, as well as to other state-of-the-art traditional methods for inner product calculation.
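
    A plain software model of the DA half of the idea: precompute a look-up table of coefficient partial sums, then shift-accumulate one table read per input bit plane (the ASP/CAM hardware mapping proposed in the paper is abstracted away):

    ```python
    import numpy as np

    def da_inner_product(x_bits, coeffs):
        """Distributed-arithmetic inner product.
        x_bits: (n_inputs, n_bits) binary planes of unsigned inputs, MSB first.
        coeffs: the n_inputs fixed coefficients."""
        n, n_bits = x_bits.shape
        # look-up table: partial sum of coefficients for each bit pattern
        table = np.array([sum(c for c, b in zip(coeffs, format(m, f'0{n}b'))
                              if b == '1')
                          for m in range(2 ** n)])
        acc = 0.0
        for k in range(n_bits):                      # one LUT read per plane
            idx = int(''.join(str(int(b)) for b in x_bits[:, k]), 2)
            acc = acc * 2 + table[idx]               # shift-accumulate
        return acc
    ```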

  2. Evaluation of Emerging Energy-Efficient Heterogeneous Computing Platforms for Biomolecular and Cellular Simulation Workloads

    PubMed Central

    Stone, John E.; Hallock, Michael J.; Phillips, James C.; Peterson, Joseph R.; Luthey-Schulten, Zaida; Schulten, Klaus

    2016-01-01

    Many of the continuing scientific advances achieved through computational biology are predicated on the availability of ongoing increases in computational power required for detailed simulation and analysis of cellular processes on biologically-relevant timescales. A critical challenge facing the development of future exascale supercomputer systems is the development of new computing hardware and associated scientific applications that dramatically improve upon the energy efficiency of existing solutions, while providing increased simulation, analysis, and visualization performance. Mobile computing platforms have recently become powerful enough to support interactive molecular visualization tasks that were previously only possible on laptops and workstations, creating future opportunities for their convenient use for meetings, remote collaboration, and as head mounted displays for immersive stereoscopic viewing. We describe early experiences adapting several biomolecular simulation and analysis applications for emerging heterogeneous computing platforms that combine power-efficient system-on-chip multi-core CPUs with high-performance massively parallel GPUs. We present low-cost power monitoring instrumentation that provides sufficient temporal resolution to evaluate the power consumption of individual CPU algorithms and GPU kernels. We compare the performance and energy efficiency of scientific applications running on emerging platforms with results obtained on traditional platforms, identify hardware and algorithmic performance bottlenecks that affect the usability of these platforms, and describe avenues for improving both the hardware and applications in pursuit of the needs of molecular modeling tasks on mobile devices and future exascale computers. PMID:27516922

  3. Evaluation of Emerging Energy-Efficient Heterogeneous Computing Platforms for Biomolecular and Cellular Simulation Workloads.

    PubMed

    Stone, John E; Hallock, Michael J; Phillips, James C; Peterson, Joseph R; Luthey-Schulten, Zaida; Schulten, Klaus

    2016-05-01

    Many of the continuing scientific advances achieved through computational biology are predicated on the availability of ongoing increases in computational power required for detailed simulation and analysis of cellular processes on biologically-relevant timescales. A critical challenge facing the development of future exascale supercomputer systems is the development of new computing hardware and associated scientific applications that dramatically improve upon the energy efficiency of existing solutions, while providing increased simulation, analysis, and visualization performance. Mobile computing platforms have recently become powerful enough to support interactive molecular visualization tasks that were previously only possible on laptops and workstations, creating future opportunities for their convenient use for meetings, remote collaboration, and as head mounted displays for immersive stereoscopic viewing. We describe early experiences adapting several biomolecular simulation and analysis applications for emerging heterogeneous computing platforms that combine power-efficient system-on-chip multi-core CPUs with high-performance massively parallel GPUs. We present low-cost power monitoring instrumentation that provides sufficient temporal resolution to evaluate the power consumption of individual CPU algorithms and GPU kernels. We compare the performance and energy efficiency of scientific applications running on emerging platforms with results obtained on traditional platforms, identify hardware and algorithmic performance bottlenecks that affect the usability of these platforms, and describe avenues for improving both the hardware and applications in pursuit of the needs of molecular modeling tasks on mobile devices and future exascale computers.

  4. A Software Demonstration of 'rap': Preparing CAD Geometries for Overlapping Grid Generation

    SciTech Connect

    Anders Petersson, N.

    2002-02-15

    We demonstrate the application code "rap", which is part of the "Overture" library. A CAD geometry imported from an IGES file is first cleaned up and simplified to suit the needs of mesh generation. Thereafter, the topology of the model is computed and a water-tight surface triangulation is created on the CAD surface. This triangulation is used to speed up the projection of points onto the CAD surface during the generation of overlapping surface grids. From each surface grid, volume grids are grown into the domain using a hyperbolic marching procedure. The final step is to fill any remaining parts of the interior with background meshes.

  5. CYBERSECURITY AND USER ACCOUNTABILITY IN THE C-AD CONTROL SYSTEM

    SciTech Connect

    MORRIS,J.T.; BINELLO, S.; D OTTAVIO, T.; KATZ, R.A.

    2007-10-15

    A heightened awareness of cybersecurity has led to a review of the procedures that ensure user accountability for actions performed on the computers of the Collider-Accelerator Department (C-AD) Control System. Control system consoles are shared by multiple users in control rooms throughout the C-AD complex. A significant challenge has been the establishment of procedures that securely control and monitor access to these shared consoles without impeding accelerator operations. This paper provides an overview of C-AD cybersecurity strategies with an emphasis on recent enhancements in user authentication and tracking methods.

  6. Adjoint Algorithm for CAD-Based Shape Optimization Using a Cartesian Method

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.

    2004-01-01

    Adjoint solutions of the governing flow equations are becoming increasingly important for the development of efficient analysis and optimization algorithms. A well-known use of the adjoint method is gradient-based shape optimization. Given an objective function that defines some measure of performance, such as the lift and drag functionals, its gradient is computed at a cost that is essentially independent of the number of design variables (geometric parameters that control the shape). More recently, emerging adjoint applications focus on the analysis problem, where the adjoint solution is used to drive mesh adaptation, as well as to provide estimates of functional error bounds and corrections. The attractive feature of this approach is that the mesh-adaptation procedure targets a specific functional, thereby localizing the mesh refinement and reducing computational cost. Our focus is on the development of adjoint-based optimization techniques for a Cartesian method with embedded boundaries. In contrast to implementations on structured and unstructured grids, Cartesian methods decouple the surface discretization from the volume mesh. This feature makes Cartesian methods well suited for the automated analysis of complex geometry problems, and consequently a promising approach to aerodynamic optimization. Melvin et al. developed an adjoint formulation for the TRANAIR code, which is based on the full-potential equation with viscous corrections. More recently, Dadone and Grossman presented an adjoint formulation for the Euler equations. In both approaches, a boundary condition is introduced to approximate the effects of the evolving surface shape, resulting in accurate gradient computation. Central to automated shape optimization algorithms is the issue of geometry modeling and control. The need to optimize complex, "real-life" geometry provides a strong incentive for the use of parametric-CAD systems within the optimization procedure. In previous work, we presented

  7. An Educational Exercise Examining the Role of Model Attributes on the Creation and Alteration of CAD Models

    ERIC Educational Resources Information Center

    Johnson, Michael D.; Diwakaran, Ram Prasad

    2011-01-01

    Computer-aided design (CAD) is a ubiquitous tool that today's students will be expected to use proficiently for numerous engineering purposes. Taking full advantage of the features available in modern CAD programs requires that models are created in a manner that allows others to easily understand how they are organized and alter them in an…

  8. Computationally efficient scalar nonparaxial modeling of optical wave propagation in the far-field.

    PubMed

    Nguyen, Giang-Nam; Heggarty, Kevin; Gérard, Philippe; Serio, Bruno; Meyrueis, Patrick

    2014-04-01

    We present a scalar model to overcome the computation-time and sampling-interval limitations of the traditional Rayleigh-Sommerfeld (RS) formula and the angular spectrum method in computing wide-angle diffraction in the far-field. Numerical and experimental results show that our proposed method, based on an accurate nonparaxial diffraction step onto a hemisphere and a projection onto a plane, accurately predicts the observed nonparaxial far-field diffraction pattern, while its calculation time is much lower than that of the more rigorous RS integral. The results enable a fast and efficient way to compute far-field nonparaxial diffraction when the conventional Fraunhofer pattern fails to predict correctly.

  9. Performance evaluation of the NASA/KSC CAD/CAE and office automation LAN's

    NASA Technical Reports Server (NTRS)

    Zobrist, George W.

    1994-01-01

    This study's objective is the performance evaluation of the existing CAD/CAE (Computer Aided Design/Computer Aided Engineering) network at NASA/KSC. This evaluation also includes a similar study of the Office Automation network, since it is being planned to integrate this network into the CAD/CAE network. The Microsoft mail facility which is presently on the CAD/CAE network was monitored to determine its present usage. This performance evaluation of the various networks will aid the NASA/KSC network managers in planning for the integration of future workload requirements into the CAD/CAE network and determining the effectiveness of the planned FDDI (Fiber Distributed Data Interface) migration.

  10. Spin-neurons: A possible path to energy-efficient neuromorphic computers

    NASA Astrophysics Data System (ADS)

    Sharad, Mrigank; Fan, Deliang; Roy, Kaushik

    2013-12-01

    Recent years have witnessed growing interest in the field of brain-inspired computing based on neural-network architectures. In order to translate the related algorithmic models into powerful, yet energy-efficient cognitive-computing hardware, computing devices beyond CMOS may need to be explored. The suitability of such devices to this field of computing would strongly depend upon how closely their physical characteristics match the essential computing primitives employed in such models. In this work, we discuss the rationale of applying emerging spin-torque devices to bio-inspired computing. Recent spin-torque experiments have shown the path to low-current, low-voltage, and high-speed magnetization switching in nano-scale magnetic devices. Such magneto-metallic, current-mode spin-torque switches can mimic the analog summing and "thresholding" operation of an artificial neuron with high energy efficiency. Comparison with a CMOS-based analog circuit model of a neuron shows that "spin-neurons" (spin-based circuit models of neurons) can achieve more than two orders of magnitude lower energy and beyond three orders of magnitude reduction in energy-delay product. The application of spin-neurons can therefore be an attractive option for the neuromorphic computers of the future.

  11. Spin-neurons: A possible path to energy-efficient neuromorphic computers

    SciTech Connect

    Sharad, Mrigank; Fan, Deliang; Roy, Kaushik

    2013-12-21

    Recent years have witnessed growing interest in the field of brain-inspired computing based on neural-network architectures. In order to translate the related algorithmic models into powerful, yet energy-efficient cognitive-computing hardware, computing devices beyond CMOS may need to be explored. The suitability of such devices to this field of computing would strongly depend upon how closely their physical characteristics match the essential computing primitives employed in such models. In this work, we discuss the rationale of applying emerging spin-torque devices to bio-inspired computing. Recent spin-torque experiments have shown the path to low-current, low-voltage, and high-speed magnetization switching in nano-scale magnetic devices. Such magneto-metallic, current-mode spin-torque switches can mimic the analog summing and “thresholding” operation of an artificial neuron with high energy efficiency. Comparison with a CMOS-based analog circuit model of a neuron shows that “spin-neurons” (spin-based circuit models of neurons) can achieve more than two orders of magnitude lower energy and beyond three orders of magnitude reduction in energy-delay product. The application of spin-neurons can therefore be an attractive option for the neuromorphic computers of the future.

  12. Dual vs. single computer monitor in a Canadian hospital Archiving Department: a study of efficiency and satisfaction.

    PubMed

    Poder, Thomas G; Godbout, Sylvie T; Bellemare, Christian

    2011-01-01

    This paper describes a comparative study of clinical coding by Archivists (also known as Clinical Coders in some other countries) using single and dual computer monitors. In the present context, processing a record corresponds to checking the available information; searching for the missing physician information; and finally, performing clinical coding. We collected data for each Archivist during her use of the single monitor for 40 hours and during her use of the dual monitor for 20 hours. During the experimental periods, Archivists did not perform other related duties, so we were able to measure the real-time processing of records. To control for the type of records and their impact on the process time required, we categorised the cases as major or minor, based on whether acute care or day surgery was involved. Overall results show that 1,234 records were processed using a single monitor and 647 records using a dual monitor. The time required to process a record was significantly higher (p= .071) with a single monitor compared to a dual monitor (19.83 vs.18.73 minutes). However, the percentage of major cases was significantly higher (p= .000) in the single monitor group compared to the dual monitor group (78% vs. 69%). As a consequence, we adjusted our results, which reduced the difference in time required to process a record between the two systems from 1.1 to 0.61 minutes. Thus, the net real-time difference was only 37 seconds in favour of the dual monitor system. Extrapolated over a 5-year period, this would represent a time savings of 3.1% and generate a net cost savings of $7,729 CAD (Canadian dollars) for each workstation that devoted 35 hours per week to the processing of records. Finally, satisfaction questionnaire responses indicated a high level of satisfaction and support for the dual-monitor system. The implementation of a dual-monitor system in a hospital archiving department is an efficient option in the context of scarce human resources and has the

  14. NREL's Building-Integrated Supercomputer Provides Heating and Efficient Computing (Fact Sheet)

    SciTech Connect

    Not Available

    2014-09-01

    NREL's Energy Systems Integration Facility (ESIF) is meant to investigate new ways to integrate energy sources so they work together efficiently, and one of the key tools to that investigation, a new supercomputer, is itself a prime example of energy systems integration. NREL teamed with Hewlett-Packard (HP) and Intel to develop the innovative warm-water, liquid-cooled Peregrine supercomputer, which not only operates efficiently but also serves as the primary source of building heat for ESIF offices and laboratories. This innovative high-performance computer (HPC) can perform more than a quadrillion calculations per second as part of the world's most energy-efficient HPC data center.

  15. Dynamic MRI-based computer aided diagnostic systems for early detection of kidney transplant rejection: A survey

    NASA Astrophysics Data System (ADS)

    Mostapha, Mahmoud; Khalifa, Fahmi; Alansary, Amir; Soliman, Ahmed; Gimel'farb, Georgy; El-Baz, Ayman

    2013-10-01

    Early detection of renal transplant rejection is important to implement appropriate medical and immune therapy in patients with transplanted kidneys. In the literature, a large number of computer-aided diagnostic (CAD) systems using different image modalities, such as ultrasound (US), magnetic resonance imaging (MRI), computed tomography (CT), and radionuclide imaging, have been proposed for early detection of kidney diseases. A typical CAD system for kidney diagnosis consists of a set of processing steps including: motion correction, segmentation of the kidney and/or its internal structures (e.g., cortex, medulla), construction of agent kinetic curves, functional parameter estimation, diagnosis, and assessment of the kidney status. In this paper, we survey the current state-of-the-art CAD systems that have been developed for kidney disease diagnosis using dynamic MRI. In addition, the paper addresses several challenges that researchers face in developing efficient, fast, and reliable CAD systems for the early detection of kidney diseases.
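
    The processing chain enumerated above maps naturally onto a staged pipeline; a minimal structural sketch with crude stand-in stages (every function below is an illustrative placeholder, not taken from any surveyed system):

    ```python
    import numpy as np

    def motion_correct(frames):
        return frames                    # real systems: rigid/deformable registration

    def segment_kidney(frames):
        return frames.mean(axis=0) > frames.mean()    # crude intensity mask

    def kinetic_curve(frames, mask):
        return frames[:, mask].mean(axis=1)   # mean agent intensity per time point

    def estimate_parameters(curve):
        return {"peak": float(curve.max()), "time_to_peak": int(curve.argmax())}

    def classify(params):
        return "non-rejection" if params["time_to_peak"] < 5 else "possible rejection"

    frames = np.random.rand(10, 64, 64)      # synthetic dynamic MRI series
    mask = segment_kidney(motion_correct(frames))
    print(classify(estimate_parameters(kinetic_curve(frames, mask))))
    ```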

  16. Computational prediction of efficient splice sites for trans-splicing ribozymes

    PubMed Central

    Meluzzi, Dario; Olson, Karen E.; Dolan, Gregory F.; Arya, Gaurav; Müller, Ulrich F.

    2012-01-01

    Group I introns have been engineered into trans-splicing ribozymes capable of replacing the 3′-terminal portion of an external mRNA with their own 3′-exon. Although this design makes trans-splicing ribozymes potentially useful for therapeutic application, their trans-splicing efficiency is usually too low for medical use. One factor that strongly influences trans-splicing efficiency is the position of the target splice site on the mRNA substrate. Viable splice sites are currently determined using a biochemical trans-tagging assay. Here, we propose a rapid and inexpensive alternative approach to identify efficient splice sites. This approach involves the computation of the binding free energies between ribozyme and mRNA substrate. We found that the computed binding free energies correlate well with the trans-splicing efficiency experimentally determined at 18 different splice sites on the mRNA of chloramphenicol acetyl transferase. In contrast, our results from the trans-tagging assay correlate less well with measured trans-splicing efficiency. The computed free energy components suggest that splice site efficiency depends on the following secondary structure rearrangements: hybridization of the ribozyme's internal guide sequence (IGS) with mRNA substrate (most important), unfolding of substrate proximal to the splice site, and release of the IGS from the 3′-exon (least important). The proposed computational approach can also be extended to fulfill additional design requirements of efficient trans-splicing ribozymes, such as the optimization of 3′-exon and extended guide sequences. PMID:22274956
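
    The free-energy decomposition described above amounts to scoring each candidate splice site by a sum of three terms; a minimal sketch of that bookkeeping (component values and the sign convention, where more negative means more favorable, are illustrative):

    ```python
    def splice_site_binding_energy(dG_igs_hybridization,
                                   dG_substrate_unfolding,
                                   dG_exon_release):
        """Total binding free energy as the sum of the three components the
        study identifies, listed from most to least important."""
        return dG_igs_hybridization + dG_substrate_unfolding + dG_exon_release

    # Illustrative numbers in kcal/mol (not from the paper): strong IGS
    # hybridization and little required substrate unfolding favor the site.
    print(splice_site_binding_energy(-12.3, +3.1, +1.4))  # -7.8: a favorable site
    ```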

  17. Framework for computationally efficient optimal irrigation scheduling using ant colony optimization

    USDA-ARS?s Scientific Manuscript database

    A general optimization framework is introduced with the overall goal of reducing search space size and increasing the computational efficiency of evolutionary algorithm application for optimal irrigation scheduling. The framework achieves this goal by representing the problem in the form of a decisi...

  18. The Improvement of Efficiency in the Numerical Computation of Orbit Trajectories

    NASA Technical Reports Server (NTRS)

    Dyer, J.; Danchick, R.; Pierce, S.; Haney, R.

    1972-01-01

    An analysis, system design, programming, and evaluation of results are described for numerical computation of orbit trajectories. Evaluation of generalized methods, interaction of different formulations for satellite motion, transformation of equations of motion and integrator loads, and development of efficient integrators are also considered.

  19. Improving the Efficiency and Effectiveness of Grading through the Use of Computer-Assisted Grading Rubrics

    ERIC Educational Resources Information Center

    Anglin, Linda; Anglin, Kenneth; Schumann, Paul L.; Kaliski, John A.

    2008-01-01

    This study tests the use of computer-assisted grading rubrics compared to other grading methods with respect to the efficiency and effectiveness of different grading processes for subjective assignments. The test was performed on a large Introduction to Business course. The students in this course were randomly assigned to four treatment groups…

  1. Integrated Computer-Aided Drafting Instruction (ICADI).

    ERIC Educational Resources Information Center

    Chen, C. Y.; McCampbell, David H.

    Until recently, computer-aided drafting and design (CAD) systems were almost exclusively operated on mainframes or minicomputers, and their cost prohibited many schools from offering CAD instruction. Today, many powerful personal computers are capable of performing the high-speed calculation and analysis required by the CAD application; however,…

  2. Efficient shortest-path-tree computation in network routing based on pulse-coupled neural networks.

    PubMed

    Qu, Hong; Yi, Zhang; Yang, Simon X

    2013-06-01

    Shortest path tree (SPT) computation is a critical issue for routers using link-state routing protocols, such as the most commonly used Open Shortest Path First (OSPF) and Intermediate System to Intermediate System (IS-IS). Each router needs to recompute a new SPT rooted at itself whenever a change happens in the link state. Most commercial routers do this computation by deleting the current SPT and building a new one from scratch using static algorithms such as Dijkstra's algorithm. Such recomputation of an entire SPT is inefficient, which may consume a considerable amount of CPU time and result in a time delay in the network. Some dynamic updating methods using the information in the updated SPT have been proposed in recent years. However, there are still many limitations in those dynamic algorithms. In this paper, a new modified model of pulse-coupled neural networks (M-PCNNs) is proposed for the SPT computation. It is rigorously proved that the proposed model is capable of solving some optimization problems, such as the SPT. A static algorithm is proposed based on the M-PCNNs to compute the SPT efficiently for large-scale problems. In addition, a dynamic algorithm that makes use of the structure of the previously computed SPT is proposed, which significantly improves the efficiency of the algorithm. Simulation results demonstrate the effective and efficient performance of the proposed approach.
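
    For reference, the static baseline the paper compares against, full recomputation with Dijkstra's algorithm, fits in a few lines; a minimal sketch using a binary heap (the example graph is illustrative):

    ```python
    import heapq

    def shortest_path_tree(graph, root):
        """Dijkstra's algorithm: returns each node's parent in the SPT rooted
        at `root`. `graph` maps node -> list of (neighbor, cost) pairs."""
        dist, parent = {root: 0}, {root: None}
        heap = [(0, root)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue                     # stale heap entry
            for v, w in graph[u]:
                if d + w < dist.get(v, float("inf")):
                    dist[v], parent[v] = d + w, u
                    heapq.heappush(heap, (d + w, v))
        return parent

    g = {"A": [("B", 1), ("C", 4)], "B": [("C", 2)], "C": []}
    print(shortest_path_tree(g, "A"))        # {'A': None, 'B': 'A', 'C': 'B'}
    ```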

  3. The efficient implementation of correction procedure via reconstruction with GPU computing

    NASA Astrophysics Data System (ADS)

    Zimmerman, Ben J.

    Computational fluid dynamics (CFD) has long been a useful tool to model fluid flow problems across many engineering disciplines, and while problem size, complexity, and difficulty continue to expand, the demands for robustness and accuracy grow. Furthermore, generating high-order accurate solutions has escalated the required computational resources, and as problems continue to increase in complexity, so will computational needs such as memory requirements and calculation time for accurate flow field prediction. To reduce computational time, vast amounts of computational power and resources are employed, but even over dozens to hundreds of central processing units (CPUs), the required computational time to formulate solutions can be weeks, months, or longer, which is particularly true when generating high-order accurate solutions over large computational domains. One response to lower the computational time for CFD problems is to implement graphical processing units (GPUs) with current CFD solvers. GPUs have illustrated the ability to solve problems orders of magnitude faster than their CPU counterparts with identical accuracy. The goal of the presented work is to combine a CFD solver and GPU computing with the intent to solve complex problems at a high order of accuracy while lowering the computational time required to generate the solution. The CFD solver should have high-order spatial capabilities to evaluate small fluctuations and fluid structures not generally captured by lower-order methods and be efficient for the GPU architecture. This research combines the high-order Correction Procedure via Reconstruction (CPR) method with compute unified device architecture (CUDA) from NVIDIA to reach these goals. In addition, the study demonstrates the accuracy of the developed solver by comparing results with other solvers and exact solutions. Accuracy and speed are two factors to consider for the next generation of solvers. GPU computing is a

  4. Computationally efficient measure of topological redundancy of biological and social networks

    NASA Astrophysics Data System (ADS)

    Albert, Réka; Dasgupta, Bhaskar; Hegde, Rashmi; Sivanathan, Gowri Sangeetha; Gitter, Anthony; Gürsoy, Gamze; Paul, Pradyut; Sontag, Eduardo

    2011-09-01

    It is well known that biological and social interaction networks have a varying degree of redundancy, though a consensus on the precise cause of this is so far lacking. In this paper, we introduce a topological redundancy measure for labeled directed networks that is formal, computationally efficient, and applicable to a variety of directed networks such as cellular signaling, metabolic, and social interaction networks. We demonstrate the computational efficiency of our measure by computing its value and statistical significance on a number of biological and social networks with up to several thousands of nodes and edges. Our results suggest a number of interesting observations: (1) social networks are more redundant than their biological counterparts, (2) transcriptional networks are less redundant than signaling networks, (3) the topological redundancy of the C. elegans metabolic network is largely due to its inclusion of currency metabolites, and (4) the redundancy of signaling networks is highly (negatively) correlated with the monotonicity of their dynamics.

  5. Developing a computationally efficient dynamic multilevel hybrid optimization scheme using multifidelity model interactions.

    SciTech Connect

    Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Castro, Joseph Pete Jr.; Giunta, Anthony Andrew

    2006-01-01

    Many engineering application problems use optimization algorithms in conjunction with numerical simulators to search for solutions. The formulation of relevant objective functions and constraints dictates possible optimization algorithms. Often, a gradient-based approach is not possible since objective functions and constraints can be nonlinear, nonconvex, non-differentiable, or even discontinuous, and the simulations involved can be computationally expensive. Moreover, computational efficiency and accuracy are desirable and also influence the choice of solution method. With the advent and increasing availability of massively parallel computers, computational speed has increased tremendously. Unfortunately, the numerical and model complexities of many problems still demand significant computational resources. Moreover, in optimization, these expenses can be a limiting factor since obtaining solutions often requires the completion of numerous computationally intensive simulations. Therefore, we propose a multifidelity optimization algorithm (MFO) designed to improve the computational efficiency of an optimization method for a wide range of applications. In developing the MFO algorithm, we take advantage of the interactions between multifidelity models to develop a dynamic, computation-time-saving optimization algorithm. First, a direct search method is applied to the high-fidelity model over a reduced design space. In conjunction with this search, a specialized oracle is employed to map the design space of this high-fidelity model to that of a computationally cheaper low-fidelity model using space-mapping techniques. Then, in the low-fidelity space, an optimum is obtained using gradient- or non-gradient-based optimization, and it is mapped back to the high-fidelity space. In this paper, we describe the theory and implementation details of our MFO algorithm. We also demonstrate our MFO method on some example problems and on two applications: earth penetrators and
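
    The alternation just described, search the expensive model over a reduced space, align a cheap model to it, optimize the cheap model, and map the result back, can be illustrated generically; a minimal 1-D sketch in which toy functions and a first-order additive correction stand in for the simulators and the space-mapping oracle (nothing below is the authors' implementation):

    ```python
    import numpy as np

    def hi_fi(x):                      # "expensive" high-fidelity model (toy)
        return (x - 2.0) ** 2 + 0.1 * np.sin(8 * x)

    def lo_fi(x):                      # cheap, biased low-fidelity model (toy)
        return (x - 1.6) ** 2

    def mfo_step(x0, radius=1.0, h=1e-4):
        # 1. Direct search on the high-fidelity model over a reduced design space.
        cand = np.linspace(x0 - radius, x0 + radius, 5)
        xb = cand[np.argmin([hi_fi(x) for x in cand])]
        # 2. "Oracle": first-order additive correction aligning the cheap model's
        #    value and slope with the expensive model at xb.
        dv = hi_fi(xb) - lo_fi(xb)
        dg = ((hi_fi(xb + h) - hi_fi(xb - h)) - (lo_fi(xb + h) - lo_fi(xb - h))) / (2 * h)
        corrected = lambda x: lo_fi(x) + dv + dg * (x - xb)
        # 3. Optimize the corrected low-fidelity model cheaply on a fine grid.
        xs = np.linspace(xb - radius, xb + radius, 2001)
        xl = xs[np.argmin(corrected(xs))]
        # 4. Map back: keep the new point only if the true model confirms it improved.
        return xl if hi_fi(xl) < hi_fi(xb) else xb

    x = 0.0
    for _ in range(5):
        x = mfo_step(x)
    print(round(float(x), 3), round(float(hi_fi(x)), 3))
    ```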

  6. Efficient texture mapping by adaptive mesh division in mesh-based computer generated hologram.

    PubMed

    Ji, Yeong-Min; Yeom, Han-Ju; Park, Jae-Hyeung

    2016-11-28

    We propose a method that achieves efficient texture mapping in fully-analytic computer generated holograms based on triangular meshes. In computer graphics, texture mapping is commonly used to represent the details of objects without increasing the number of triangular meshes. In fully-analytic triangular-mesh-based computer generated holograms, however, those methods cannot be directly applied because each mesh cannot have an arbitrary amplitude distribution inside the triangular mesh area if the analytic representation is to be kept. In this paper, we propose an efficient texture mapping method for fully-analytic mesh-based computer generated holograms. The proposed method uses an adaptive triangular mesh division to minimize the increase in the number of triangular meshes for the given texture image data. The geometrical similarity between the original triangular mesh and the divided one is also exploited to obtain the angular spectrum of a divided mesh from data pre-calculated for the original one. As a result, the proposed method enables highly detailed computer generated holograms to be obtained with much smaller computation time than the brute-force approach. The feasibility of the proposed method is confirmed by simulations and optical experiments.
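
    The driving idea, divide a triangle only where the texture it covers is far from uniform, so that each leaf mesh can keep a single analytic amplitude, can be sketched with a simple variance test; a minimal 2-D illustration (the sampling rule, tolerance, and texture below are illustrative, not the paper's division criterion):

    ```python
    import numpy as np

    def subdivide(tri, texture, tol=0.05, depth=0, max_depth=5):
        """Recursively split a triangle at its edge midpoints until the texture
        over it is nearly uniform. tri: 3x2 vertex array; texture: point -> value."""
        a, b, c = tri
        samples = [texture(p) for p in (a, b, c, (a + b) / 2, (b + c) / 2,
                                        (c + a) / 2, (a + b + c) / 3)]
        if max(samples) - min(samples) < tol or depth >= max_depth:
            return [tri]                 # uniform enough: one flat-amplitude mesh
        ab, bc, ca = (a + b) / 2, (b + c) / 2, (c + a) / 2
        children = ((a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca))
        return [leaf for ch in children
                for leaf in subdivide(np.array(ch), texture, tol, depth + 1, max_depth)]

    tex = lambda p: 0.5 + 0.5 * np.sin(6 * p[0]) * np.cos(6 * p[1])  # illustrative
    leaves = subdivide(np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]), tex)
    print(len(leaves))                   # flat-amplitude sub-meshes produced
    ```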

  7. Efficient implementation for spherical flux computation and its application to vascular segmentation.

    PubMed

    Law, Max W K; Chung, Albert C S

    2009-03-01

    Spherical flux is the flux inside a spherical region, and it is very useful in the analysis of tubular structures in magnetic resonance angiography and computed tomographic angiography. The conventional approach is to estimate the spherical flux in the spatial domain. Its running time depends on the sphere radius quadratically, which leads to very slow spherical flux computation when the sphere size is large. This paper proposes a more efficient implementation for spherical flux computation in the Fourier domain. Our implementation is based on the reformulation of the spherical flux calculation using the divergence theorem, spherical step function, and the convolution operation. With this reformulation, most of the calculations are performed in the Fourier domain. We show how to select the frequency subband so that the computation accuracy can be maintained. It is experimentally demonstrated that, using the synthetic and clinical phase contrast magnetic resonance angiographic volumes, our implementation is more computationally efficient than the conventional spatial implementation. The accuracies of our implementation and that of the conventional spatial implementation are comparable. Finally, the proposed implementation can definitely benefit the computation of the multiscale spherical flux with a set of radii because, unlike the conventional spatial implementation, the time complexity of the proposed implementation does not depend on the sphere radius.
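
    The reformulation at the heart of the method, flux through a sphere rewritten via the divergence theorem as a convolution of the field's divergence with a spherical step (indicator) function, is what lets FFTs make the cost independent of radius; a minimal 3-D sketch without the paper's frequency subband selection (the field and radius are illustrative):

    ```python
    import numpy as np

    def spherical_flux_fft(vx, vy, vz, radius, spacing=1.0):
        """Flux of (vx, vy, vz) through a sphere of `radius` centered at every
        voxel: by the divergence theorem this equals the field's divergence
        convolved with the indicator of the ball, evaluated here with FFTs."""
        div = (np.gradient(vx, spacing, axis=0)
               + np.gradient(vy, spacing, axis=1)
               + np.gradient(vz, spacing, axis=2))
        n = div.shape[0]                  # assumes a cubic volume for brevity
        ax = (np.arange(n) - n // 2) * spacing
        X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
        ball = (X**2 + Y**2 + Z**2 <= radius**2) * spacing**3
        ball = np.fft.ifftshift(ball)     # move the kernel's center to index 0
        return np.real(np.fft.ifftn(np.fft.fftn(div) * np.fft.fftn(ball)))

    # Check: V = (x, y, z) has div = 3, so the flux through a radius-r sphere
    # should be close to 3 * (4/3) * pi * r**3 at every interior voxel.
    n = 32
    ax = np.arange(n) - n // 2
    X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
    flux = spherical_flux_fft(X.astype(float), Y.astype(float), Z.astype(float), 5)
    print(flux[n // 2, n // 2, n // 2], 4 * np.pi * 5**3)
    ```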

  8. The Challenging Academic Development (CAD) Collective

    ERIC Educational Resources Information Center

    Peseta, Tai

    2005-01-01

    This article discusses the Challenging Academic Development (CAD) Collective and describes how it came out of a symposium called "Liminality, identity, and hybridity: On the promise of new conceptual frameworks for theorising academic/faculty development." The CAD Collective is and represents a space where people can open up their…

  10. Stress-induced alteration of left ventricular eccentricity: An additional marker of multivessel CAD.

    PubMed

    Gimelli, Alessia; Liga, Riccardo; Giorgetti, Assuero; Casagranda, Mirta; Marzullo, Paolo

    2017-03-28

    Abnormal left ventricular (LV) eccentricity index (EI) is a marker of adverse cardiac remodeling. However, the interaction between stress-induced alterations of EI and major cardiac parameters has not been explored. We sought to evaluate the relationship between LV EI and coronary artery disease (CAD) burden in patients undergoing myocardial perfusion imaging (MPI). Three hundred forty-three patients underwent MPI and coronary angiography. LV ejection fraction (EF) and EI were computed from gated stress images as measures of stress-induced functional impairment. One hundred thirty-six (40%), 122 (35%), and 85 (25%) patients had normal coronary arteries, single-vessel CAD, and multivessel CAD, respectively. Post-stress EI was lower in patients with multivessel CAD than in those with normal coronary arteries and single-vessel CAD (P = 0.001). This relationship was confirmed only in patients undergoing exercise stress testing, where a lower post-stress EI predicted the presence of multivessel CAD (P = 0.039). Post-stress alterations of LV EI on MPI may unmask the presence of multivessel CAD.

  11. Interproximal Papilla Stability Around CAD/CAM and Stock Abutments in Anterior Regions: A 2-Year Prospective Multicenter Cohort Study.

    PubMed

    Lops, Diego; Parpaiola, Andrea; Paniz, Gianluca; Sbricoli, Luca; Magaz, Vanessa Ruiz; Venezze, Alvise Cenzi; Bressan, Eriberto; Stellini, Edoardo

    The aim of this study was to compare the interproximal papilla stability of restorations supported by computer-aided design/computer-assisted manufacture (CAD/CAM) abutments to those supported by prefabricated stock abutments in anterior areas over a 2-year follow-up. Abutments were selected from the following, depending on implant inclination and the thickness of the buccal peri-implant soft tissues: zirconia stock, titanium stock, zirconia CAD/CAM, and titanium CAD/CAM. Changes in the height of the papilla tip (REC) were measured. REC values of titanium and zirconia CAD/CAM abutments were significantly lower than those of titanium and zirconia stock abutments. The use of titanium and zirconia CAD/CAM abutments is related to better interproximal papilla stability.

  12. Indications for Computer-Aided Design and Manufacturing in Congenital Craniofacial Reconstruction.

    PubMed

    Fisher, Mark; Medina, Miguel; Bojovic, Branko; Ahn, Edward; Dorafshar, Amir H

    2016-09-01

    The complex three-dimensional relationships in congenital craniofacial reconstruction lend themselves uniquely to the accurate planning and modeling provided by computer-aided design and manufacturing (CAD/CAM). The goal of this study was to illustrate indications where CAD/CAM is helpful in congenital craniofacial reconstruction and to discuss the application of this technology and its outcomes. A retrospective review was performed of all congenital craniofacial cases performed by the senior author between 2010 and 2014. Cases where CAD/CAM was used were identified, and illustrative cases demonstrating the benefits of CAD/CAM were selected. Preoperative appearance, the computerized plan, the intraoperative course, and the final outcome were analyzed. Preoperative planning enabled efficient execution of the operative plan with predictable results. Risk factors that made these patients good candidates for CAD/CAM were identified and compiled. Several indications, including multisuture and revisional craniosynostosis, facial bipartition, four-wall box osteotomy, reduction cranioplasty, and distraction osteogenesis, could benefit most from this technology. We illustrate the use of CAD/CAM for these applications and describe the decision-making process both before and during surgery. We explore why we believe that CAD/CAM is indicated in these scenarios as well as its disadvantages and risks.

  13. A computationally efficient approach for hidden-Markov model-augmented fingerprint-based positioning

    NASA Astrophysics Data System (ADS)

    Roth, John; Tummala, Murali; McEachen, John

    2016-09-01

    This paper presents a computationally efficient approach for mobile subscriber position estimation in wireless networks. A method of data scaling assisted by timing adjust is introduced for fingerprint-based location estimation under a framework that allows computational cost to be minimised. The proposed method maintains a level of accuracy comparable to the traditional case where no data scaling is used and is evaluated in a simulated environment under varying channel conditions. The proposed scheme is studied when it is augmented by a hidden-Markov model to match its internal parameters to the prevailing channel conditions, thus minimising computational cost while maximising accuracy. Furthermore, the timing adjust quantity, available in modern wireless signalling messages, is shown to further reduce computational cost and increase accuracy when available. The results may be seen as a significant step towards integrating advanced position-based modelling with power-sensitive mobile devices.
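
    The core fingerprint step, matching a measured signal vector against a database of surveyed positions, is a nearest-neighbour search, and both the data scaling and the timing-adjust gating amount to shrinking what must be searched; a minimal weighted k-nearest-neighbour sketch with an optional boolean prefilter standing in for such gating (all names and values are illustrative, not the paper's scheme):

    ```python
    import numpy as np

    def knn_position(rss, fingerprints, positions, k=3, mask=None):
        """Weighted k-nearest-neighbour fingerprint positioning.
        rss: measured signal vector; fingerprints: (N, d) surveyed vectors;
        positions: (N, 2) surveyed coordinates; mask: optional boolean
        prefilter (e.g. a timing-adjust range gate) shrinking the search."""
        idx = np.arange(len(fingerprints)) if mask is None else np.flatnonzero(mask)
        d = np.linalg.norm(fingerprints[idx] - rss, axis=1)
        order = np.argsort(d)[:k]
        w = 1.0 / (d[order] + 1e-9)       # inverse-distance weights
        return (w[:, None] * positions[idx[order]]).sum(axis=0) / w.sum()

    db = np.array([[-60, -70], [-62, -71], [-80, -55], [-81, -58.0]])
    pos = np.array([[0, 0], [1, 0], [5, 5], [6, 5.0]])
    print(knn_position(np.array([-61, -70.5]), db, pos, k=2))  # ~[0.5, 0.0]
    ```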

  14. Computationally efficient analysis of extraordinary optical transmission through infinite and truncated subwavelength hole arrays

    NASA Astrophysics Data System (ADS)

    Camacho, Miguel; Boix, Rafael R.; Medina, Francisco

    2016-06-01

    The authors present a computationally efficient technique for the analysis of extraordinary transmission through both infinite and truncated periodic arrays of slots in perfect conductor screens of negligible thickness. An integral equation is obtained for the tangential electric field in the slots both in the infinite case and in the truncated case. The unknown functions are expressed as linear combinations of known basis functions, and the unknown weight coefficients are determined by means of Galerkin's method. The coefficients of Galerkin's matrix are obtained in the spatial domain in terms of double finite integrals containing the Green's functions (which, in the infinite case, are efficiently computed by means of Ewald's method) times cross-correlations between both the basis functions and their divergences. The computation in the spatial domain is an efficient alternative to the direct computation in the spectral domain since this latter approach involves the determination of either slowly convergent double infinite summations (infinite case) or slowly convergent double infinite integrals (truncated case). The results obtained are validated by means of commercial software, and it is found that the integral equation technique presented in this paper is at least two orders of magnitude faster than commercial software for a similar accuracy. It is also shown that the phenomena related to periodicity such as extraordinary transmission and Wood's anomaly start to appear in the truncated case for arrays with more than 100 (10 × 10) slots.
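
    The discretization strategy, expand the unknown in known basis functions and determine the weight coefficients by Galerkin testing, is generic enough to show in one dimension; a minimal sketch for a Fredholm integral equation of the second kind with a smooth kernel (the kernel, interval, and monomial basis are illustrative and far simpler than the vector slot-field problem treated in the paper):

    ```python
    import numpy as np

    # Solve u(x) - ∫₀¹ K(x,y) u(y) dy = f(x) by Galerkin projection onto the
    # monomial basis φ_j(x) = x**j (illustrative problem, not the paper's).
    K = lambda x, y: 0.5 * np.exp(-(x - y) ** 2)    # smooth kernel
    f = lambda x: np.sin(np.pi * x)                 # right-hand side

    nb, nq = 6, 200                                 # basis size, quadrature points
    x, w = np.polynomial.legendre.leggauss(nq)      # Gauss-Legendre on [-1, 1]
    x, w = 0.5 * (x + 1), 0.5 * w                   # mapped to [0, 1]
    phi = np.array([x ** j for j in range(nb)])     # basis sampled at the nodes

    Kphi = (K(x[:, None], x[None, :]) * w) @ phi.T  # (∫ K(x,·) φ_j) at each node
    A = (phi * w) @ phi.T - (phi * w) @ Kphi        # Galerkin matrix <φ_i, (I-K)φ_j>
    b = (phi * w) @ f(x)                            # load vector <φ_i, f>
    c = np.linalg.solve(A, b)                       # weight coefficients

    u = c @ phi                                     # solution at the nodes
    residual = u - (K(x[:, None], x[None, :]) * w) @ u - f(x)
    print(np.max(np.abs(residual)))                 # small if the basis suffices
    ```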

  15. Fabricating Complete Dentures with CAD/CAM and RP Technologies.

    PubMed

    Bilgin, Mehmet Selim; Erdem, Ali; Aglarci, Osman Sami; Dilber, Erhan

    2015-06-01

    Two technological approaches to fabricating dentures, computer-aided design/computer-aided manufacturing (CAD/CAM) and rapid prototyping (RP), are combined with the conventional techniques of impression and jaw-relation recording to determine their feasibility and applicability. Maxillary and mandibular edentulous jaw models were produced using silicone molds. After obtaining a gypsum working model, acrylic bases were crafted, and occlusal rims for each model were fabricated with previously determined standard vertical and centric relationships. The maxillary and mandibular relationships were recorded with guides. The occlusal rims were then scanned with a digital scanner. The alignment of the maxillary and mandibular teeth was verified. The teeth in each arch were fabricated in one piece, or set, by either CAM or RP. Conventional waxing and flasking were then performed for both methods. These techniques obviate a practitioner's need for technicians during design and provide the patient with an opportunity to participate in esthetic design with the dentist. In addition, CAD/CAM and RP reduce chair time; however, the materials and techniques need further improvement. Both CAD/CAM and RP techniques seem promising for reducing chair time and allowing the patient to participate in esthetic design. Furthermore, the one-set aligned artificial tooth design may increase the acrylic's durability.

  16. Use of three-dimensional, CAD/CAM-assisted, virtual surgical simulation and planning in the pediatric craniofacial population.

    PubMed

    Gray, Rachel; Gougoutas, Alexander; Nguyen, Vinh; Taylor, Jesse; Bastidas, Nicholas

    2017-06-01

    Virtual Surgical Planning (VSP) and computer-aided design/computer-aided manufacturing (CAD/CAM) have recently helped improve efficiency and accuracy in many different craniofacial surgeries. Research has mainly focused on use in the adult population, with the exception of mandibular distraction and cranial vault remodeling in the pediatric population. This study aims to elucidate the role of VSP and CAD/CAM in complex pediatric craniofacial cases by exploring its use in the correction of midface hypoplasia, orbital dystopia, mandibular reconstruction, and posterior cranial vault expansion. A retrospective analysis of thirteen patients who underwent 3D, CAD/CAM-assisted preoperative surgical planning between 2012 and 2016 was performed. All CAD/CAM-assisted surgical planning was done in conjunction with a third-party vendor (either 3D Systems or Materialise). Cutting and positioning guides as well as models were produced based on the virtual plan. Surgeries included free fibula mandible reconstruction (n = 4), Le Fort I osteotomy and distraction (n = 2), Le Fort II osteotomy with monobloc distraction (n = 1), expansion of the posterior vault for correction of Chiari malformation (n = 3), and secondary orbital and midface reconstruction for facial trauma (n = 3). The patients' age, diagnosis, previous surgeries, length of operating time, complications, and post-surgery satisfaction were determined. In all cases we found presurgical planning helpful in improving accuracy and significantly decreasing intra-operative time. In cases where distraction was used, the planned and actual vectors were found to be accurate, with excellent clinical outcomes. There were no complications except for one patient who experienced a wound infection post-operatively, which did not alter the ultimate reconstruction. All patients expressed high satisfaction with their outcomes, and excellent subjective aesthetic results were achieved. Preoperative planning using

  17. Can computational efficiency alone drive the evolution of modularity in neural networks?

    PubMed Central

    Tosh, Colin R.

    2016-01-01

    Some biologists have abandoned the idea that computational efficiency in processing multipart tasks or input sets alone drives the evolution of modularity in biological networks. A recent study confirmed that small modular (neural) networks are relatively computationally inefficient but that large modular networks are slightly more efficient than non-modular ones. The present study determines whether these efficiency advantages with network size can drive the evolution of modularity in networks whose connective architecture can evolve. The answer is no, but the reason why is interesting. All simulations (run in a wide variety of parameter states) involving gradualistic connective evolution end in non-modular local attractors. Thus, while a high-performance modular attractor exists, it cannot be reached by gradualistic evolution. Non-gradualistic evolutionary simulations, in which multi-modularity is obtained through duplication of existing architecture, appear viable. Fundamentally, this study indicates that computational efficiency alone does not drive the evolution of modularity, even in large biological networks, but it may still be a viable mechanism when networks evolve by non-gradualistic means. PMID:27573614

  18. Efficient scatter model for simulation of ultrasound images from computed tomography data

    NASA Astrophysics Data System (ADS)

    D'Amato, J. P.; Lo Vercio, L.; Rubi, P.; Fernandez Vera, E.; Barbuzza, R.; Del Fresno, M.; Larrabide, I.

    2015-12-01

    Background and motivation: Real-time ultrasound simulation refers to the process of computationally creating fully synthetic ultrasound images instantly. Due to the high value of specialized low-cost training for healthcare professionals, there is growing interest in the use of this technology and in the development of high-fidelity systems that simulate the acquisition of echographic images. The objective is to create an efficient and reproducible simulator that can run on either notebooks or desktops using low-cost devices. Materials and methods: We present an interactive ultrasound simulator based on CT data. The simulator is based on ray-casting and provides real-time interaction capabilities. The simulation of scattering that is coherent with the transducer position in real time is also introduced. Such noise is produced using a simplified model of multiplicative noise and convolution with point spread functions (PSF) tailored for this purpose. Results: The computational efficiency of scattering-map generation was revised with improved performance. This allowed a more efficient simulation of coherent scattering in the synthetic echographic images while providing highly realistic results. We describe quality and performance metrics to validate these results, where a performance of up to 55 fps was achieved. Conclusion: The proposed technique for real-time scattering modeling provides realistic yet computationally efficient scatter distributions. The error between the original image and the simulated scattering image was compared for the proposed method and the state of the art, showing negligible differences in its distribution.
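
    The scatter model described, multiplicative noise shaped by convolution with a point spread function, is compact enough to sketch directly; a minimal 2-D version (the PSF shape, noise statistics, and input image are illustrative):

    ```python
    import numpy as np

    def simulate_speckle(echo, psf_sigma=1.5, noise_strength=0.6, seed=0):
        """Multiplicative noise followed by FFT convolution with a Gaussian
        PSF -- the simplified scatter-model style the paper describes
        (all parameters here are illustrative)."""
        rng = np.random.default_rng(seed)
        speckled = echo * (1.0 + noise_strength * (rng.random(echo.shape) - 0.5))
        n0, n1 = echo.shape                  # separable Gaussian PSF
        g0 = np.exp(-0.5 * ((np.arange(n0) - n0 // 2) / psf_sigma) ** 2)
        g1 = np.exp(-0.5 * ((np.arange(n1) - n1 // 2) / psf_sigma) ** 2)
        psf = np.outer(g0, g1)
        psf /= psf.sum()
        psf = np.fft.ifftshift(psf)          # center the kernel at index 0
        return np.real(np.fft.ifft2(np.fft.fft2(speckled) * np.fft.fft2(psf)))

    tissue = np.zeros((64, 64))
    tissue[20:40, 20:40] = 1.0               # synthetic "tissue" block
    print(simulate_speckle(tissue).shape)    # (64, 64)
    ```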

  19. Can computational efficiency alone drive the evolution of modularity in neural networks?

    PubMed

    Tosh, Colin R

    2016-08-30

    Some biologists have abandoned the idea that computational efficiency in processing multipart tasks or input sets alone drives the evolution of modularity in biological networks. A recent study confirmed that small modular (neural) networks are relatively computationally inefficient but that large modular networks are slightly more efficient than non-modular ones. The present study determines whether these efficiency advantages with network size can drive the evolution of modularity in networks whose connective architecture can evolve. The answer is no, but the reason why is interesting. All simulations (run in a wide variety of parameter states) involving gradualistic connective evolution end in non-modular local attractors. Thus, while a high-performance modular attractor exists, it cannot be reached by gradualistic evolution. Non-gradualistic evolutionary simulations, in which multi-modularity is obtained through duplication of existing architecture, appear viable. Fundamentally, this study indicates that computational efficiency alone does not drive the evolution of modularity, even in large biological networks, but it may still be a viable mechanism when networks evolve by non-gradualistic means.

  20. Reducing Vehicle Weight and Improving U.S. Energy Efficiency Using Integrated Computational Materials Engineering

    NASA Astrophysics Data System (ADS)

    Joost, William J.

    2012-09-01

    Transportation accounts for approximately 28% of U.S. energy consumption with the majority of transportation energy derived from petroleum sources. Many technologies such as vehicle electrification, advanced combustion, and advanced fuels can reduce transportation energy consumption by improving the efficiency of cars and trucks. Lightweight materials are another important technology that can improve passenger vehicle fuel efficiency by 6-8% for each 10% reduction in weight while also making electric and alternative vehicles more competitive. Despite the opportunities for improved efficiency, widespread deployment of lightweight materials for automotive structures is hampered by technology gaps most often associated with performance, manufacturability, and cost. In this report, the impact of reduced vehicle weight on energy efficiency is discussed with a particular emphasis on quantitative relationships determined by several researchers. The most promising lightweight materials systems are described along with a brief review of the most significant technical barriers to their implementation. For each material system, the development of accurate material models is critical to support simulation-intensive processing and structural design for vehicles; improved models also contribute to an integrated computational materials engineering (ICME) approach for addressing technical barriers and accelerating deployment. The value of computational techniques is described by considering recent ICME and computational materials science success stories with an emphasis on applying problem-specific methods.
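
    The quantitative relationship cited here, a 6-8% fuel-efficiency gain for each 10% weight reduction, is straightforward to apply; a minimal sketch that treats the relationship as linear over modest reductions (an illustrative simplification, not a claim from the report):

    ```python
    def efficiency_gain_pct(weight_reduction_pct, per_10pct=(6.0, 8.0)):
        """Range of fuel-efficiency improvement implied by the 6-8% per 10%
        weight-reduction relationship, assuming linear scaling."""
        lo, hi = per_10pct
        return (weight_reduction_pct / 10.0 * lo, weight_reduction_pct / 10.0 * hi)

    # A vehicle lightened by 20% under this relationship: roughly 12-16%.
    print(efficiency_gain_pct(20.0))
    ```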