Science.gov

Sample records for computationally efficient cad

  1. A new computationally efficient CAD system for pulmonary nodule detection in CT imagery.

    PubMed

    Messay, Temesguen; Hardie, Russell C; Rogers, Steven K

    2010-06-01

    Early detection of lung nodules is extremely important for the diagnosis and clinical management of lung cancer. In this paper, a novel computer-aided detection (CAD) system for the detection of pulmonary nodules in thoracic computed tomography (CT) imagery is presented. The paper describes the architecture of the CAD system and assesses its performance on a publicly available database to serve as a benchmark for future research efforts. Training and tuning of all modules in our CAD system are done using a separate and independent dataset provided courtesy of the University of Texas Medical Branch (UTMB). The publicly available testing dataset is that created by the Lung Image Database Consortium (LIDC). The LIDC data used here comprise 84 CT scans containing 143 nodules ranging from 3 to 30 mm in effective size that are manually segmented by at least one of four radiologists. The CAD system uses a fully automated lung segmentation algorithm to define the boundaries of the lung regions. It combines intensity thresholding with morphological processing to detect and segment nodule candidates simultaneously. A set of 245 features is computed for each segmented nodule candidate. A sequential forward selection process is used to determine the optimum subset of features for two distinct classifiers, a Fisher Linear Discriminant (FLD) classifier and a quadratic classifier. A performance comparison between the two classifiers is presented, and based on this, the FLD classifier is selected for the CAD system. With an average of 517.5 nodule candidates per case/scan (517.5+/-72.9), the proposed front-end detector/segmentor is able to detect 92.8% of all the nodules in the LIDC/testing dataset (based on merged ground truth). The mean overlap between the nodule regions delineated by three or more radiologists and the ones segmented by the proposed segmentation algorithm is approximately 63%. Overall, with a specificity of 3 false positives (FPs) per case/patient on
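
    The pipeline sketched in this abstract (per-candidate features, sequential forward selection, and a Fisher Linear Discriminant) follows a standard pattern. Below is a minimal sketch of that pattern, not the authors' code; the random feature matrix, the labels, and the five-feature cap are invented stand-ins for the real 245 candidate features.

      # Hedged sketch: sequential forward selection wrapped around a linear
      # (Fisher-style) discriminant, scored by cross-validation. Synthetic data.
      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_score

      def forward_select(X, y, max_features=5):
          remaining = list(range(X.shape[1]))
          selected, history = [], []
          while remaining and len(selected) < max_features:
              # score every not-yet-chosen feature appended to the current subset
              scored = [(cross_val_score(LinearDiscriminantAnalysis(),
                                         X[:, selected + [f]], y, cv=5).mean(), f)
                        for f in remaining]
              best_score, best_f = max(scored)
              selected.append(best_f)
              remaining.remove(best_f)
              history.append(best_score)
          return selected, history

      rng = np.random.default_rng(0)
      X = rng.normal(size=(400, 245))    # stand-in for 245 features per candidate
      y = rng.integers(0, 2, size=400)   # stand-in nodule / non-nodule labels
      print(forward_select(X, y))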

  2. Computing Mass Properties From AutoCAD

    NASA Technical Reports Server (NTRS)

    Jones, A.

    1990-01-01

    Mass properties of structures computed from data in drawings. AutoCAD to Mass Properties (ACTOMP) computer program developed to facilitate quick calculations of mass properties of structures containing many simple elements in such complex configurations as trusses or sheet-metal containers. Mathematically modeled in AutoCAD or compatible computer-aided design (CAD) system in minutes by use of three-dimensional elements. Written in Microsoft Quick-Basic (Version 2.0).
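
    The bookkeeping that ACTOMP automates reduces, for mass and center of mass, to mass-weighted sums over the simple elements. A minimal sketch with invented element data (this is not ACTOMP's input format):

      # Composite mass properties from simple elements: total mass and centre
      # of mass are mass-weighted sums. Numbers are illustrative only.
      import numpy as np

      elements = [                          # (mass in kg, centroid [x, y, z] in m)
          (2.0, np.array([0.0, 0.0, 0.0])),
          (1.5, np.array([1.0, 0.0, 0.5])),
          (0.5, np.array([0.5, 2.0, 0.0])),
      ]

      total_mass = sum(m for m, _ in elements)
      centre = sum(m * c for m, c in elements) / total_mass
      print(total_mass, centre)             # mass and centre of the assembly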

  3. A CAD (Classroom Assessment Design) of a Computer Programming Course

    ERIC Educational Resources Information Center

    Hawi, Nazir S.

    2012-01-01

    This paper presents a CAD (classroom assessment design) of an entry-level undergraduate computer programming course "Computer Programming I". CAD has been the product of a long experience in teaching computer programming courses including teaching "Computer Programming I" 22 times. Each semester, CAD is evaluated and modified…

  4. CAD-centric Computation Management System for a Virtual TBM

    SciTech Connect

    Ramakanth Munipalli; K.Y. Szema; P.Y. Huang; C.M. Rowell; A.Ying; M. Abdou

    2011-05-03

    HyPerComp Inc. in research collaboration with TEXCEL has set out to build a Virtual Test Blanket Module (VTBM) computational system to address the need in contemporary fusion research for simulating the integrated behavior of the blanket, divertor and plasma facing components in a fusion environment. Physical phenomena to be considered in a VTBM will include fluid flow, heat transfer, mass transfer, neutronics, structural mechanics and electromagnetics. We seek to integrate well established (third-party) simulation software in various disciplines mentioned above. The integrated modeling process will enable user groups to interoperate using a common modeling platform at various stages of the analysis. Since CAD is at the core of the simulation (as opposed to computational meshes which are different for each problem,) VTBM will have a well developed CAD interface, governing CAD model editing, cleanup, parameter extraction, model deformation (based on simulation,) CAD-based data interpolation. In Phase-I, we built the CAD-hub of the proposed VTBM and demonstrated its use in modeling a liquid breeder blanket module with coupled MHD and structural mechanics using HIMAG and ANSYS. A complete graphical user interface of the VTBM was created, which will form the foundation of any future development. Conservative data interpolation via CAD (as opposed to mesh-based transfer), the regeneration of CAD models based upon computed deflections, are among the other highlights of phase-I activity.

  5. CAD/CAM (Computer-Aided Design/Computer-Aided Manufacturing) Highlights.

    DTIC Science & Technology

    1984-10-01

    CAD/CAM (Computer-Aided Design/Computer-Aided Manufacturing) Highlights, US Army Industrial Base Engineering Activity, Rock Island, IL...This document presents information for the US Army Materiel Command (AMC) Computer-Aided Design/Computer-Aided Manufacturing...contains summaries of Army Computer-Aided Design (CAD) and Computer-Aided Manufacturing (CAM) efforts that are either completed or ongoing. The Army CAD

  6. Introduction to CAD/Computers. High-Technology Training Module.

    ERIC Educational Resources Information Center

    Lockerby, Hugh

    This learning module for an eighth-grade introductory technology course is designed to help teachers introduce students to computer-assisted design (CAD) in a communications unit on graphics. The module contains a module objective and five specific objectives, a content outline, suggested instructor methodology, student activities, a list of six…

  7. Intelligent Embedded Instruction for Computer-Aided Design (CAD) systems

    DTIC Science & Technology

    1988-10-01

    ...convenience and according to their individual learning preferences. This benefit is extremely valuable for adult professionals whose needs may be highly...solving problems. Adult designers tend to develop their own personal ways of using CAD software which can optimize a system's use. This ability has been...average age for subjects with more than 1 year of computer experience was 34, whereas those with less than 2 months of experience averaged 41 years old

  8. Role of computer aided detection (CAD) integration: case study with meniscal and articular cartilage CAD applications

    NASA Astrophysics Data System (ADS)

    Safdar, Nabile; Ramakrishna, Bharath; Saiprasad, Ganesh; Siddiqui, Khan; Siegel, Eliot

    2008-03-01

    Knee-related injuries involving the meniscal or articular cartilage are common and require accurate diagnosis and surgical intervention when appropriate. With proper techniques and experience, confidence in detection of meniscal tears and articular cartilage abnormalities can be quite high. However, for radiologists without musculoskeletal training, diagnosis of such abnormalities can be challenging. In this paper, the potential of improving diagnosis through integration of computer-aided detection (CAD) algorithms for automatic detection of meniscal tears and articular cartilage injuries of the knees is studied. An integrated approach in which the results of algorithms evaluating either meniscal tears or articular cartilage injuries provide feedback to each other is believed to improve the diagnostic accuracy of the individual CAD algorithms due to the known association between abnormalities in these distinct anatomic structures. The correlation between meniscal tears and articular cartilage injuries is exploited to improve the final diagnostic results of the individual algorithms. Preliminary results from the integrated application are encouraging and more comprehensive tests are being planned.

  9. Converting Between PLY and Ballistic Research Laboratory-Computer-Aided Design (BRL-CAD) File Formats

    DTIC Science & Technology

    2015-02-01

    Converting Between PLY and Ballistic Research Laboratory–Computer-Aided Design (BRL-CAD) File Formats, by Rishub Jain, US Army Research Laboratory, ARL-CR-0760, February 2015. Contract Number: W911NF-10-2-0076.

  10. Target Impact Detection Algorithm Using Computer-aided Design (CAD) Model Geometry

    DTIC Science & Technology

    2014-09-01

    Technical Report ARMET-TR-13024, Target Impact Detection Algorithm Using Computer-Aided Design (CAD) Model Geometry (AD-E403 558, unclassified)...This report documents a method and algorithm to export geometry from a three-dimensional, computer-aided design (CAD) model in a format that can be

  11. Computer Aided Detection (CAD) Systems for Mammography and the Use of GRID in Medicine

    NASA Astrophysics Data System (ADS)

    Lauria, Adele

    It is well known that the most effective way to defeat breast cancer is early detection, as surgery and medical therapies are more efficient when the disease is diagnosed at an early stage. The principal diagnostic technique for breast cancer detection is X-ray mammography. Screening programs have been introduced in many European countries to invite women to have periodic radiological breast examinations. In such screenings, radiologists are often required to examine large numbers of mammograms with a double reading, that is, two radiologists examine the images independently and then compare their results. In this way an increment in sensitivity (the rate of correctly identified images with a lesion) of up to 15% is obtained.1,2 In most radiological centres, it is a rarity to find two radiologists to examine each report. In recent years different Computer Aided Detection (CAD) systems have been developed as a support to radiologists working in mammography: one may hope that the "second opinion" provided by CAD might represent a lower-cost alternative to improve the diagnosis. At present, four CAD systems have obtained FDA approval in the USA. Studies3,4 show an increment in sensitivity when CAD systems are used. Freer and Ulissey5 demonstrated in 2001 that the use of a commercial CAD system (ImageChecker M1000, R2 Technology) increases the number of cancers detected by up to 19.5% with little increment in recall rate. Ciatto et al.,5 in a study simulating a double reading with a commercial CAD system (SecondLook), showed a moderate increment in sensitivity while reducing specificity (the rate of correctly identified images without a lesion). Notwithstanding these optimistic results, there is an ongoing debate to define the advantages of the use of CAD as second reader: the main limits underlined, e.g., by Nishikawa6 are that retrospective studies are considered much too optimistic and that clinical studies must be performed to demonstrate a statistically

  12. Analog Computer-Aided Detection (CAD) information can be more effective than binary marks.

    PubMed

    Cunningham, Corbin A; Drew, Trafton; Wolfe, Jeremy M

    2017-02-01

    In socially important visual search tasks, such as baggage screening and diagnostic radiology, experts miss more targets than is desirable. Computer-aided detection (CAD) programs have been developed specifically to improve performance in these professional search tasks. For example, in breast cancer screening, many CAD systems are capable of detecting approximately 90% of breast cancer, with approximately 0.5 false-positive detections per image. Nevertheless, benefits of CAD in clinical settings tend to be small (Birdwell, 2009) or even absent (Meziane et al., 2011; Philpotts, 2009). The marks made by a CAD system can be "binary," giving the same signal to any location where the signal is above some threshold. Alternatively, a CAD system presents an analog signal that reflects strength of the signal at a location. In the experiments reported, we compare analog and binary CAD presentations using nonexpert observers and artificial stimuli defined by two noisy signals: a visible color signal and an "invisible" signal that informed our simulated CAD system. We found that analog CAD generally yielded better overall performance than binary CAD. The analog benefit is similar at high and low target prevalence. Our data suggest that the form of the CAD signal can directly influence performance. Analog CAD may allow the computer to be more helpful to the searcher.

  13. Computer-aided design and computer-aided manufacture (CAD/CAM) system for construction of spinal orthosis for patients with adolescent idiopathic scoliosis.

    PubMed

    Wong, M S

    2011-01-01

    Spinal orthoses are commonly prescribed to patients with moderate adolescent idiopathic scoliosis (AIS) for prevention of further curve deterioration. In the conventional manufacturing method, plaster bandages are used to obtain the patient's body contour and then the plaster cast is rectified manually. With a computer-aided design and computer-aided manufacture (CAD/CAM) system, a series of automated processes from body scanning to digital rectification and milling of the positive model can be performed in a fast and accurate fashion. The purpose of this manuscript is to introduce the application of the CAD/CAM system to the construction of spinal orthoses for patients with AIS. Based on evidence within the literature, the CAD/CAM method can achieve similar clinical outcomes but with higher efficiency than the conventional fabrication method. Therefore, the CAD/CAM method should be considered a substitute for the conventional method in fabrication of spinal orthoses for patients with AIS.

  14. Creation of Anatomically Accurate Computer-Aided Design (CAD) Solid Models from Medical Images

    NASA Technical Reports Server (NTRS)

    Stewart, John E.; Graham, R. Scott; Samareh, Jamshid A.; Oberlander, Eric J.; Broaddus, William C.

    1999-01-01

    Most surgical instrumentation and implants used in the world today are designed with sophisticated Computer-Aided Design (CAD)/Computer-Aided Manufacturing (CAM) software. This software automates the mechanical development of a product from its conceptual design through manufacturing. CAD software also provides a means of manipulating solid models prior to Finite Element Modeling (FEM). Few surgical products are designed in conjunction with accurate CAD models of human anatomy because of the difficulty with which these models are created. We have developed a novel technique that creates anatomically accurate, patient specific CAD solids from medical images in a matter of minutes.

  15. A Multidisciplinary Research Team Approach to Computer-Aided Drafting (CAD) System Selection. Final Report.

    ERIC Educational Resources Information Center

    Franken, Ken; And Others

    A multidisciplinary research team was assembled to review existing computer-aided drafting (CAD) systems for the purpose of enabling staff in the Design Drafting Department at Linn Technical College (Missouri) to select the best system out of the many CAD systems in existence. During the initial stage of the evaluation project, researchers…

  16. Evolution of facility layout requirements and CAD (computer-aided design) system development

    SciTech Connect

    Jones, M.

    1990-06-01

    The overall configuration of the Superconducting Super Collider (SSC), including the infrastructure and land boundary requirements, was developed using a computer-aided design (CAD) system. The evolution of the facility layout requirements and the use of the CAD system are discussed. The emphasis has been on minimizing the amount of input required and maximizing the speed by which the output may be obtained. The computer system used to store the data is also described.

  17. Methodology for Benefit Analysis of CAD/CAM (Computer-Aided Design/Computer-Aided Manufacturing) in USN Shipyards.

    DTIC Science & Technology

    1984-03-01

    Methodology for Benefit Analysis of CAD/CAM (Computer-Aided Design/Computer-Aided Manufacturing) in USN Shipyards (AD-A138 398), by Richard B. Grahlman, thesis, Naval Postgraduate School, Monterey, California, March 1984.

  18. Computationally efficient multibody simulations

    NASA Technical Reports Server (NTRS)

    Ramakrishnan, Jayant; Kumar, Manoj

    1994-01-01

    Computationally efficient approaches to the solution of the dynamics of multibody systems are presented in this work. The computational efficiency is derived from both the algorithmic and implementational standpoint. Order(n) approaches provide a new formulation of the equations of motion eliminating the assembly and numerical inversion of a system mass matrix as required by conventional algorithms. Computational efficiency is also gained in the implementation phase by the symbolic processing and parallel implementation of these equations. Comparison of this algorithm with existing multibody simulation programs illustrates the increased computational efficiency.

  19. Adjoint Sensitivity Computations for an Embedded-Boundary Cartesian Mesh Method and CAD Geometry

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.

    2006-01-01

    Cartesian-mesh methods are perhaps the most promising approach for addressing the issues of flow solution automation for aerodynamic design problems. In these methods, the discretization of the wetted surface is decoupled from that of the volume mesh. This not only enables fast and robust mesh generation for geometry of arbitrary complexity, but also facilitates access to geometry modeling and manipulation using parametric Computer-Aided Design (CAD) tools. Our goal is to combine the automation capabilities of Cartesian methods with an efficient computation of design sensitivities. We address this issue using the adjoint method, where the computational cost of the design sensitivities, or objective function gradients, is essentially independent of the number of design variables. In previous work, we presented an accurate and efficient algorithm for the solution of the adjoint Euler equations discretized on Cartesian meshes with embedded, cut-cell boundaries. Novel aspects of the algorithm included the computation of surface shape sensitivities for triangulations based on parametric-CAD models and the linearization of the coupling between the surface triangulation and the cut-cells. The objective of the present work is to extend our adjoint formulation to problems involving general shape changes. Central to this development is the computation of volume-mesh sensitivities to obtain a reliable approximation of the objective function gradient. Motivated by the success of mesh-perturbation schemes commonly used in body-fitted unstructured formulations, we propose an approach based on a local linearization of a mesh-perturbation scheme similar to the spring analogy. This approach circumvents most of the difficulties that arise due to non-smooth changes in the cut-cell layer as the boundary shape evolves and provides a consistent approximation to the exact gradient of the discretized objective function. A detailed gradient accuracy study is presented to verify our approach.
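
    The spring-analogy idea mentioned above can be pictured with a toy linear system: interior mesh nodes respond to prescribed boundary motion through a spring network, and because that response is linear, its Jacobian is precisely a volume-mesh sensitivity. The five-node chain, stiffnesses, and boundary motion below are invented for illustration and are not the paper's formulation.

      # Spring-analogy mesh perturbation on a 1-D node chain: solve the interior
      # displacements given prescribed boundary displacements.
      import numpy as np

      edges = [(0, 1), (1, 2), (2, 3), (3, 4)]   # nodes 0 and 4 are boundary
      n, k = 5, 1.0                              # uniform spring stiffness

      K = np.zeros((n, n))
      for i, j in edges:
          K[i, i] += k; K[j, j] += k
          K[i, j] -= k; K[j, i] -= k

      boundary = {0: 0.0, 4: 0.1}                # the "surface" node moves by 0.1
      interior = [i for i in range(n) if i not in boundary]

      Kii = K[np.ix_(interior, interior)]
      Kib = K[np.ix_(interior, list(boundary))]
      xb = np.array(list(boundary.values()))
      xi = np.linalg.solve(Kii, -Kib @ xb)       # smooth blend of boundary values
      print(dict(zip(interior, xi)))

      # The map xb -> xi is linear, so d(xi)/d(xb) = -inv(Kii) @ Kib plays the
      # role of a volume-mesh sensitivity in a linearized perturbation scheme.
      print(np.linalg.solve(Kii, -Kib))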

  20. An Analysis of Computer Aided Design (CAD) Packages Used at MSFC for the Recent Initiative to Integrate Engineering Activities

    NASA Technical Reports Server (NTRS)

    Smith, Leigh M.; Parker, Nelson C. (Technical Monitor)

    2002-01-01

    This paper analyzes the use of Computer Aided Design (CAD) packages at NASA's Marshall Space Flight Center (MSFC). It examines the effectiveness of recent efforts to standardize CAD practices across MSFC engineering activities. An assessment of the roles played by management, designers, analysts, and manufacturers in this initiative will be explored. Finally, solutions are presented for better integration of CAD across MSFC in the future.

  1. Computer-aided detection (CAD) of breast masses in mammography: combined detection and ensemble classification

    NASA Astrophysics Data System (ADS)

    Choi, Jae Young; Kim, Dae Hoe; Plataniotis, Konstantinos N.; Ro, Yong Man

    2014-07-01

    We propose a novel computer-aided detection (CAD) framework for breast masses in mammography. To increase detection sensitivity for various types of mammographic masses, we propose the combined use of different detection algorithms. In particular, we develop a region-of-interest combination mechanism that integrates detection information gained from unsupervised and supervised detection algorithms. Also, to significantly reduce the number of false-positive (FP) detections, a new ensemble classification algorithm is developed. Extensive experiments have been conducted on a benchmark mammogram database. Results show that our combined detection approach can considerably improve the detection sensitivity at a small cost in FP rate, compared to representative detection algorithms previously developed for mammographic CAD systems. The proposed ensemble classification solution also has a dramatic impact on the reduction of FP detections: as much as 70% (from 15 to 4.5 per image) at a cost of only 4.6% in sensitivity (from 90.0% to 85.4%). Moreover, our proposed CAD method performs as well as or better than mammography CAD algorithms previously reported in the literature (sensitivities of 70.7% and 80.0% at 1.5 and 3.5 FPs per image, respectively).
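
    As a loose illustration of the two mechanisms named in the abstract (combining regions of interest from different detectors, then suppressing false positives by averaging an ensemble of classifier scores), here is a hedged sketch; the boxes, scores, overlap threshold, and ensemble are all invented and far simpler than the paper's method.

      # Toy ROI combination (merge detections that overlap strongly) followed by
      # an ensemble score average and threshold to discard false positives.
      import numpy as np

      def iou(a, b):                      # boxes as (x0, y0, x1, y1)
          x0, y0 = max(a[0], b[0]), max(a[1], b[1])
          x1, y1 = min(a[2], b[2]), min(a[3], b[3])
          inter = max(0, x1 - x0) * max(0, y1 - y0)
          union = ((a[2] - a[0]) * (a[3] - a[1]) +
                   (b[2] - b[0]) * (b[3] - b[1]) - inter)
          return inter / union if union else 0.0

      unsup = [(10, 10, 30, 30), (50, 50, 70, 70)]   # unsupervised detections
      sup = [(12, 11, 31, 29), (80, 80, 95, 95)]     # supervised detections
      merged = list(unsup)
      for b in sup:                       # keep supervised ROIs that are new
          if all(iou(b, a) < 0.5 for a in merged):
              merged.append(b)

      # three classifiers score the three merged ROIs; average and threshold
      ensemble = np.array([[0.90, 0.20, 0.80],
                           [0.85, 0.30, 0.70],
                           [0.95, 0.10, 0.75]])
      kept = [roi for roi, s in zip(merged, ensemble.mean(axis=0)) if s > 0.5]
      print(kept)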

  2. Project Integration Architecture (PIA) and Computational Analysis Programming Interface (CAPRI) for Accessing Geometry Data from CAD Files

    NASA Technical Reports Server (NTRS)

    Benyo, Theresa L.

    2002-01-01

    Integration of a supersonic inlet simulation with a computer aided design (CAD) system is demonstrated. The integration is performed using the Project Integration Architecture (PIA). PIA provides a common environment for wrapping many types of applications. Accessing geometry data from CAD files is accomplished by incorporating appropriate function calls from the Computational Analysis Programming Interface (CAPRI). CAPRI is a CAD-vendor-neutral programming interface that aids in acquiring geometry data directly from CAD files. The benefits of wrapping a supersonic inlet simulation into PIA using CAPRI are: direct access to geometry data, accurate capture of geometry data, automatic conversion of data units, CAD-vendor-neutral operation, and on-line interactive history capture. This paper describes the PIA and the CAPRI wrapper and details the supersonic inlet simulation demonstration.

  3. Computationally efficient control allocation

    NASA Technical Reports Server (NTRS)

    Durham, Wayne (Inventor)

    2001-01-01

    A computationally efficient method for calculating near-optimal solutions to the three-objective, linear control allocation problem is disclosed. The control allocation problem is that of distributing the effort of redundant control effectors to achieve some desired set of objectives. The problem is deemed linear if control effectiveness is affine with respect to the individual control effectors. The optimal solution is that which exploits the collective maximum capability of the effectors within their individual physical limits. Computational efficiency is measured by the number of floating-point operations required for solution. The method presented returned optimal solutions in more than 90% of the cases examined; non-optimal solutions returned by the method were typically much less than 1% different from optimal, and the errors tended to become smaller than 0.01% as the number of controls was increased. The magnitude of the errors returned by the present method was much smaller than those that resulted from either pseudoinverse or cascaded generalized inverse solutions. The computational complexity of the method presented varied linearly with increasing numbers of controls; the number of required floating-point operations increased from 5.5 to seven times faster than did the minimum-norm solution (the pseudoinverse), and at about the same rate as did the cascaded generalized inverse solution. The computational requirements of the method presented were much better than those of previously described facet-searching methods, which increase in proportion to the square of the number of controls.
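
    For context, the minimum-norm (pseudoinverse) allocation that the method is benchmarked against is a one-line computation; its weakness is that clipping to effector limits gives up attainable moments, which is what direct-allocation methods avoid. The effectiveness matrix and limits below are arbitrary illustrative numbers, not data from the patent.

      # Pseudoinverse control allocation with limit clipping: B u = m_des,
      # minimum-norm u, then saturate each effector.
      import numpy as np

      B = np.array([[1.0, 0.5, -0.5, -1.0],    # 3 objectives (roll/pitch/yaw)
                    [0.2, -0.8, -0.8, 0.2],    # across 4 redundant effectors
                    [0.3, 0.3, 0.3, 0.3]])
      m_des = np.array([0.4, -0.2, 0.1])

      u = np.linalg.pinv(B) @ m_des            # minimum-norm solution
      u = np.clip(u, -1.0, 1.0)                # enforce effector limits
      print(u, B @ u)                          # achieved moments fall short of
                                               # m_des whenever clipping is active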

  4. Teaching for CAD Expertise

    ERIC Educational Resources Information Center

    Chester, Ivan

    2007-01-01

    CAD (Computer Aided Design) has now become an integral part of Technology Education. The recent introduction of highly sophisticated, low-cost CAD software and CAM hardware capable of running on desktop computers has accelerated this trend. There is now quite widespread introduction of solid modeling CAD software into secondary schools but how…

  5. Development of simulation tools for numerical investigation and computer-aided design (CAD) of gyrotrons

    NASA Astrophysics Data System (ADS)

    Damyanova, M.; Sabchevski, S.; Zhelyazkov, I.; Vasileva, E.; Balabanova, E.; Dankov, P.; Malinov, P.

    2016-10-01

    As the most powerful CW sources of coherent radiation in the sub-terahertz to terahertz frequency range, gyrotrons have demonstrated a remarkable potential for numerous novel and prospective applications in fundamental physical research and technology. Among them are powerful gyrotrons for electron cyclotron resonance heating (ECRH) and current drive (ECCD) of magnetically confined plasma in various reactors for controlled thermonuclear fusion (e.g., tokamaks and most notably ITER), high-frequency gyrotrons for sub-terahertz spectroscopy (for example NMR-DNP, XDMR, study of the hyperfine structure of positronium, etc.), gyrotrons for thermal processing and so on. Modelling and simulation are indispensable tools for numerical studies, computer-aided design (CAD) and optimization of such sophisticated vacuum tubes (fast-wave devices) operating on a physical principle known as the electron cyclotron resonance maser (ECRM) instability. In recent years, our research team has been involved in the development of physical models and problem-oriented software packages for numerical analysis and CAD of different gyrotrons in the framework of a broad international collaboration. In this paper we present the current status of our simulation tools (GYROSIM and GYREOSS packages) and illustrate their functionality by results of numerical experiments carried out recently. Finally, we provide an outlook on the envisaged further development of the computer codes and the computational modules belonging to these packages and specialized to different subsystems of the gyrotrons.

  6. Materials for chairside CAD/CAM restorations.

    PubMed

    Fasbinder, Dennis J

    2010-01-01

    Chairside computer-aided design/computer-aided manufacturing (CAD/CAM) systems have become considerably more accurate, efficient, and prevalent as the technology has evolved in the past 25 years. The initial restorative material option for chairside CAD/CAM restorations was limited to ceramic blocks. Restorative material options have multiplied and now include esthetic ceramics, high-strength ceramics, and composite materials for both definitive and temporary restoration applications. This article will review current materials available for chairside CAD/CAM restorations.

  7. The computation of all plane/surface intersections for CAD/CAM applications

    NASA Technical Reports Server (NTRS)

    Hoitsma, D. H., Jr.; Roche, M.

    1984-01-01

    The problem of the computation and display of all intersections of a given plane with a rational bicubic surface patch for use on an interactive CAD/CAM system is examined. The general problem of calculating all intersections of a plane and a surface consisting of rational bicubic patches is reduced to the case of a single generic patch by applying a rejection algorithm which excludes all patches that do not intersect the plane. For each pertinent patch, the algorithm presented computes the intersection curves by locating an initial point on each curve and computing successive points on the curve using a tolerance step equation. A single cubic equation solver is used to compute the initial curve points lying on the boundary of a surface patch, and the method of resultants as applied to curve theory is used to determine critical points which, in turn, are used to locate initial points that lie on intersection curves in the interior of the patch. Examples are given to illustrate the ability of this algorithm to produce all intersection curves.
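
    The tracing idea is straightforward to sketch numerically: on the intersection curve, the tangent is perpendicular to both the plane normal and the surface normal, so one can march along it and re-project onto the plane after every step. The patch below is a simple explicit surface rather than a rational bicubic, and the stepping and root-finding are crude stand-ins for the paper's tolerance step equation and cubic solver.

      # March along the intersection of the plane n.x = d with a patch S(u, v).
      import numpy as np

      def S(u, v):                              # toy curved patch
          return np.array([u, v, 0.3 * u * u + 0.2 * v])

      def project(uv, n, d, h=1e-6):
          for _ in range(25):                   # Newton steps back onto the plane
              f = n @ S(*uv) - d
              if abs(f) < 1e-10:
                  break
              g = np.array([(n @ S(uv[0] + h, uv[1]) - d - f) / h,
                            (n @ S(uv[0], uv[1] + h) - d - f) / h])
              uv = uv - f * g / (g @ g)
          return uv

      n, d, h = np.array([0.0, 0.0, 1.0]), 0.05, 1e-6    # plane z = 0.05
      uv = project(np.array([0.1, 0.0]), n, d)           # initial curve point
      curve = [S(*uv)]
      for _ in range(30):
          Su = (S(uv[0] + h, uv[1]) - S(*uv)) / h        # surface tangents
          Sv = (S(uv[0], uv[1] + h) - S(*uv)) / h
          t3 = np.cross(n, np.cross(Su, Sv))             # curve tangent in 3-D
          duv, *_ = np.linalg.lstsq(np.column_stack([Su, Sv]), t3, rcond=None)
          uv = project(uv + 0.05 * duv / np.linalg.norm(duv), n, d)
          curve.append(S(*uv))
      print(np.round(curve[-1], 4))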

  8. Improvement of MS (multiple sclerosis) CAD (computer aided diagnosis) performance using C/C++ and computing engine in the graphical processing unit (GPU)

    NASA Astrophysics Data System (ADS)

    Suh, Joohyung; Ma, Kevin; Le, Anh

    2011-03-01

    Multiple Sclerosis (MS) is a disease which is caused by damaged myelin around axons of the brain and spinal cord. Currently, MR imaging is used for diagnosis, but the process is highly variable and time-consuming since lesion detection and estimation of lesion volume are performed manually. For this reason, we developed a CAD (Computer Aided Diagnosis) system which would assist segmentation of MS to facilitate the physician's diagnosis. The MS CAD system utilizes the k-NN (k-nearest neighbor) algorithm to detect and segment the lesion volume in an area based on the voxel. The prototype MS CAD system was developed under the MATLAB environment. Currently, the MS CAD system consumes a huge amount of time to process data. In this paper we will present the development of a second version of the MS CAD system which has been converted into C/C++ in order to take advantage of the GPU (Graphical Processing Unit), which provides parallel computation. With the realization of C/C++ and utilization of the GPU, we expect to cut running time drastically. The paper investigates the conversion from MATLAB to C/C++ and the utilization of a high-end GPU for parallel computing of data to improve algorithm performance of MS CAD.
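
    The classification step itself is ordinary k-NN over per-voxel feature vectors, which is also why it parallelizes so naturally on a GPU: every voxel is classified independently. A hedged CPU sketch with fabricated features and labels (the paper's features and data differ):

      # k-NN voxel labelling: train on labelled voxels, predict on new ones.
      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier

      rng = np.random.default_rng(1)
      X_train = np.vstack([rng.normal(0.3, 0.1, (200, 4)),   # normal tissue
                           rng.normal(0.8, 0.1, (200, 4))])  # lesion-like voxels
      y_train = np.array([0] * 200 + [1] * 200)

      knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

      X_new = rng.normal(0.75, 0.15, (10, 4))   # voxels from a "new" scan
      labels = knn.predict(X_new)
      print(labels, labels.sum() * 1.0)         # lesion voxels x assumed 1 mm^3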

  9. Revision of Electro-Mechanical Drafting Program to Include CAD/D (Computer-Aided Drafting/Design). Final Report.

    ERIC Educational Resources Information Center

    Snyder, Nancy V.

    North Seattle Community College decided to integrate computer-aided design/drafting (CAD/D) into its Electro-Mechanical Drafting Program. This choice necessitated a redefinition of the program through new curriculum and course development. To initiate the project, a new industrial advisory council was formed. Major electronic and recruiting firms…

  10. Computer-assisted detection (CAD) of pulmonary nodules on thoracic CT scans using image processing and classification techniques

    NASA Astrophysics Data System (ADS)

    Dehmeshki, Jamshid; Valdivieso-Casique, Manlio; Siddique, Musib; Dehkordi, Mandana E.; Costello, John; Roddie, Mary

    2004-05-01

    Computer assisted methods for the detection of pulmonary nodules have become more important as the resolution of CT scanners has increased and as more accurate and reproducible detections are needed. In this paper we describe the results of a CAD system for the detection of lung nodules and compare them against the interpretations of three independent radiologists.

  11. Efficient Universal Blind Quantum Computation

    NASA Astrophysics Data System (ADS)

    Giovannetti, Vittorio; Maccone, Lorenzo; Morimae, Tomoyuki; Rudolph, Terry G.

    2013-12-01

    We give a cheat sensitive protocol for blind universal quantum computation that is efficient in terms of computational and communication resources: it allows one party to perform an arbitrary computation on a second party’s quantum computer without revealing either which computation is performed, or its input and output. The first party’s computational capabilities can be extremely limited: she must only be able to create and measure single-qubit superposition states. The second party is not required to use measurement-based quantum computation. The protocol requires the (optimal) exchange of O(J log2(N)) single-qubit states, where J is the computational depth and N is the number of qubits needed for the computation.

  12. Efficient universal blind quantum computation.

    PubMed

    Giovannetti, Vittorio; Maccone, Lorenzo; Morimae, Tomoyuki; Rudolph, Terry G

    2013-12-06

    We give a cheat sensitive protocol for blind universal quantum computation that is efficient in terms of computational and communication resources: it allows one party to perform an arbitrary computation on a second party's quantum computer without revealing either which computation is performed, or its input and output. The first party's computational capabilities can be extremely limited: she must only be able to create and measure single-qubit superposition states. The second party is not required to use measurement-based quantum computation. The protocol requires the (optimal) exchange of O(J log2(N)) single-qubit states, where J is the computational depth and N is the number of qubits needed for the computation.

  13. Development of problem-oriented software packages for numerical studies and computer-aided design (CAD) of gyrotrons

    NASA Astrophysics Data System (ADS)

    Damyanova, M.; Sabchevski, S.; Zhelyazkov, I.; Vasileva, E.; Balabanova, E.; Dankov, P.; Malinov, P.

    2016-03-01

    Gyrotrons are the most powerful sources of coherent CW (continuous wave) radiation in the frequency range situated between the long-wavelength edge of the infrared light (far-infrared region) and the microwaves, i.e., in the region of the electromagnetic spectrum which is usually called the THz-gap (or T-gap), since the output power of other devices (e.g., solid-state oscillators) operating in this interval is several orders of magnitude lower. In recent years, the unique capabilities of the sub-THz and THz gyrotrons have opened the road to many novel and future prospective applications in various physical studies and advanced high-power terahertz technologies. In this paper, we present the current status and functionality of the problem-oriented software packages (most notably GYROSIM and GYREOSS) used for numerical studies, computer-aided design (CAD) and optimization of gyrotrons for diverse applications. They consist of a hierarchy of codes specialized to modelling and simulation of different subsystems of the gyrotrons (EOS, resonant cavity, etc.) and are based on adequate physical models, efficient numerical methods and algorithms.

  14. Establishment of a Computer-Aided Design (CAD)/Computer-Aided Manufacturing (CAM) Process for the Production of Cold Forged Gears

    DTIC Science & Technology

    1984-01-01

    Keywords: Computer Aided Design/Manufacturing (CAD/CAM), Spur and Helical Gears, Cold Forging...for cold forging spur and helical gears. The geometry of the spur and helical gears has been obtained from the kinematics of the hobbing/shaper machines...or shaping) to cut the electrode for a helical gear die were then computed using the corrections described above. A computer program called GEARDI

  15. Web-Based Architecture to Enable Compute-Intensive CAD Tools and Multi-user Synchronization in Teleradiology

    NASA Astrophysics Data System (ADS)

    Mehta, Neville; Kompalli, Suryaprakash; Chaudhary, Vipin

    Teleradiology is the electronic transmission of radiological patient images, such as x-rays, CT, or MR across multiple locations. The goal could be interpretation, consultation, or medical records keeping. Information technology solutions have enabled electronic records and their associated benefits are evident in health care today. However, salient aspects of collaborative interfaces, and computer assisted diagnostic (CAD) tools are yet to be integrated into workflow designs. The Computer Assisted Diagnostics and Interventions (CADI) group at the University at Buffalo has developed an architecture that facilitates web-enabled use of CAD tools, along with the novel concept of synchronized collaboration. The architecture can support multiple teleradiology applications and case studies are presented here.

  16. CAD/CAM/CNC.

    ERIC Educational Resources Information Center

    Domermuth, Dave; And Others

    1996-01-01

    Includes "Quick Start CNC (computer numerical control) with a Vacuum Filter and Laminated Plastic" (Domermuth); "School and Industry Cooperate for Mutual Benefit" (Buckler); and "CAD (computer-assisted drafting) Careers--What Professionals Have to Say" (Skinner). (JOW)

  17. TGeoCad: an Interface between ROOT and CAD Systems

    NASA Astrophysics Data System (ADS)

    Luzzi, C.; Carminati, F.

    2014-06-01

    In the simulation of High Energy Physics experiments a very high precision in the description of the detector geometry is essential to achieve the required performance. The physicists in charge of Monte Carlo simulation of the detector need to collaborate efficiently with the engineers working on the mechanical design of the detector. Often, this collaboration is made hard by the usage of different and incompatible software. ROOT is an object-oriented C++ framework used by physicists for storing, analyzing and simulating data produced by high-energy physics experiments, while CAD (Computer-Aided Design) software is used for mechanical design in the engineering field. The necessity to improve the level of communication between physicists and engineers led to the implementation of an interface between the ROOT geometrical modeler used by the virtual Monte Carlo simulation software and the CAD systems. In this paper we describe the design and implementation of the TGeoCad interface that has been developed to enable the use of ROOT geometrical models in several CAD systems. To achieve this goal, the ROOT geometry description is converted into the STEP file format (ISO 10303), which can be imported and used by many CAD systems.

  18. Improved Foundry Castings Utilizing CAD/CAM (Computer Aided Design/ Computer Aided Manufacture). Volume 1. Overview

    DTIC Science & Technology

    1988-06-30

    several organizations. Members of the project staffs at the University of Pittsburgh, Battelle Columbus Laboratories, Blaw-Knox Foundry and Mill...with the University of Pittsburgh, James Echlin, Blaw-Knox, and A. Roulet, General Dynamics. Computing facilities on the DEC 10 system were made...Akgerman, A. Badawy, C. Wilson, and T. Altan. The project staff at Blaw-Knox included Messrs. R. Nariman, K. Fahey, and S. Miller. Mr. W. Northey

  19. Shape optimization and CAD

    NASA Technical Reports Server (NTRS)

    Rasmussen, John

    1990-01-01

    Structural optimization has attracted attention since the days of Galileo. Olhoff and Taylor have produced an excellent overview of the classical research within this field. However, the interest in structural optimization has increased greatly during the last decade due to the advent of reliable general numerical analysis methods and the computer power necessary to use them efficiently. This has created the possibility of developing general numerical systems for shape optimization. Several authors, e.g., Esping; Braibant & Fleury; Bennet & Botkin; Botkin, Yang, and Bennet; and Stanton, have published practical and successful applications of general optimization systems. Ding and Homlein have produced extensive overviews of available systems. Furthermore, a number of commercial optimization systems based on well-established finite element codes have been introduced. Systems like ANSYS, IDEAS, OASIS, and NISAOPT are widely known examples. In parallel with this development, the technology of computer aided design (CAD) has gained a large influence on the design process of mechanical engineering. The CAD technology has already lived through a rapid development driven by the drastically growing capabilities of digital computers. However, the systems of today are still considered as being only the first generation of a long line of computer integrated manufacturing (CIM) systems. These systems to come will offer an integrated environment for design, analysis, and fabrication of products of almost any character. Thus, the CAD system could be regarded as simply a database for geometrical information equipped with a number of tools with the purpose of helping the user in the design process. Among these tools are facilities for structural analysis and optimization as well as present standard CAD features like drawing, modeling, and visualization tools. The state of the art of structural optimization is that a large amount of mathematical and mechanical techniques are

  20. CAD/CAE Integration Enhanced by New CAD Services Standard

    NASA Technical Reports Server (NTRS)

    Claus, Russell W.

    2002-01-01

    A Government-industry team led by the NASA Glenn Research Center has developed a computer interface standard for accessing data from computer-aided design (CAD) systems. The Object Management Group, an international computer standards organization, has adopted this CAD services standard. The new standard allows software (e.g., computer-aided engineering (CAE) and computer-aided manufacturing software) to access multiple CAD systems through one programming interface. The interface is built on top of a distributed computing system called the Common Object Request Broker Architecture (CORBA). CORBA allows the CAD services software to operate in a distributed, heterogeneous computing environment.

  1. CAD: Designs on Business.

    ERIC Educational Resources Information Center

    Milburn, Ken

    1988-01-01

    Provides a general review of the field of Computer-Aided Design Software, including specific reviews of "Autosketch," "Generic CADD," "Drafix 1 Plus," "FastCAD," and "AutoCAD Release 9." Brief articles include "Blueprint for Generation," "CAD for Every Department," "Ideas…

  2. Computer-aided drafting and design (CAD) in the Plant Engineering organization at Sandia National Laboratories

    SciTech Connect

    Hall, J.T.; Knott, D.D.; Moore, M.B.

    1983-03-01

    The Plant Engineering organization at Sandia National Laboratories, Albuquerque (SNLA), has been working with a CAD system for approximately 2 1/2 yr, and finds itself at a crossroads. CAD has not been a panacea to workload problems to date, and Plant Engineering commissioned a study to try to determine why and to make recommendations to management on what steps might be taken in the future. Recommendations range from making the current system more productive to enhancing it significantly with newer and more powerful graphics technology.

  3. Computer Graphic Design Using Auto-CAD and Plug Nozzle Research

    NASA Technical Reports Server (NTRS)

    Rogers, Rayna C.

    2004-01-01

    The purpose of creating computer-generated images varies widely. They can be used for computational fluid dynamics (CFD) or as blueprints for designing parts. The schematic that I will be working on this summer will be used to create nozzles that are part of a larger system. At this phase in the project, the nozzles needed for the systems have been fabricated. One part of my mission is to create both three-dimensional and two-dimensional models of the nozzles in Auto-CAD 2002. The research on plug nozzles will allow me to have a better understanding of how they provide the thrust needed for a missile to take off. NASA and the United States military are working together to develop a new design concept. On most missiles a convergent-divergent nozzle is used to create thrust. However, the two are looking into different concepts for the nozzle. The standard convergent-divergent nozzle forces a mixture of combustible fluids and air through an area that is smaller than the one in which the combination was mixed. Once it passes through this smaller area, known as A8, it exits through the end of the nozzle, which is larger than the first and known as area A9. This creates enough thrust for the mechanism, whether it is an F-18 fighter jet or a missile. The A9 section of the convergent-divergent nozzle has a mechanism that controls how large A9 can be. This is needed because the pressure of the air coming out of the nozzle must be equal to the ambient pressure; otherwise there will be a loss of performance in the machine. The plug nozzle, however, does not need an A9 that can vary. When the air flow comes out, it automatically senses the ambient pressure and adjusts accordingly. The objective of this design is to create a plug nozzle that is not as complicated mechanically as its counterpart, the convergent-divergent nozzle.
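
    The pressure-matching point in this abstract can be made concrete with the isentropic area-Mach relation: a fixed area ratio A9/A8 pins down the exit Mach number, and with it the exit static pressure that should ideally equal the ambient pressure. A back-of-the-envelope sketch under ideal-gas assumptions (gamma and the area ratio are illustrative values):

      # Solve the supersonic exit Mach number from A9/A8, then the exit/chamber
      # pressure ratio, for an ideal convergent-divergent nozzle (choked at A8).
      gamma = 1.4

      def area_ratio(M):                 # isentropic A/A* as a function of Mach
          t = (2 / (gamma + 1)) * (1 + (gamma - 1) / 2 * M * M)
          return t ** ((gamma + 1) / (2 * (gamma - 1))) / M

      def mach_from_area(ratio, lo=1.0001, hi=10.0):
          for _ in range(80):            # bisection on the supersonic branch
              mid = 0.5 * (lo + hi)
              if area_ratio(mid) < ratio:
                  lo = mid
              else:
                  hi = mid
          return 0.5 * (lo + hi)

      M_exit = mach_from_area(3.0)       # illustrative A9/A8 = 3.0
      p_ratio = (1 + (gamma - 1) / 2 * M_exit**2) ** (-gamma / (gamma - 1))
      print(M_exit, p_ratio)             # exit static / chamber pressure; ideally
                                         # the exit pressure matches ambient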

  4. Area-Efficient VLSI Computation.

    DTIC Science & Technology

    1981-10-01

    CMU-CS-82-108, Area-Efficient VLSI Computation, Charles Eric Leiserson, Department of Computer Science, Carnegie...Doctor of Philosophy. This research was sponsored in part by the Defense Advanced Research Projects Agency (DOD), ARPA Order No. 3597...Office of Naval Research under Contract N00014-76-C-0370. The views and conclusions contained in this document are those of the author and should not

  5. Improving Computational Efficiency of VAST

    DTIC Science & Technology

    2013-09-01

    Improving Computational Efficiency of VAST, by Lei Jiang and Tom Macadam, Martec Limited, 400-1800 Brunswick Street, Halifax, Nova Scotia B3J 3J8, Canada. Contract Project Manager: Lei Jiang, 902-425-5101 Ext 228. Contract Number: W7707...Principal Author: Lei Jiang, Senior Research Engineer

  6. The VE/CAD synergism

    SciTech Connect

    Sperling, R.B.

    1993-03-19

    Value Engineering (VE) and Computer-Aided Design (CAD) can be used synergistically to reduce costs and improve facility designs. The cost and schedule impacts of implementing alternative design ideas developed by VE teams can be greatly reduced when the drawings have been produced with interactive CAD systems. To better understand the interrelationship between VE and CAD, the fundamentals of the VE process are explained; an example of a VE proposal is described, and the way CAD drawings facilitated its implementation is illustrated.

  7. A computational investigation on radiation damage and activation of structural material for C-ADS

    NASA Astrophysics Data System (ADS)

    Liang, Tairan; Shen, Fei; Yin, Wen; Yu, Quanzhi; Liang, Tianjiao

    2015-11-01

    The C-ADS (China Accelerator-Driven Subcritical System) project, which aims at the transmutation of high-level radiotoxic waste (HLW) and power generation, is now in the research and development stage. In this paper, a simplified ADS model is set up based on the IAEA Th-ADS benchmark calculation model, and the radiation damage as well as the residual radioactivity of the structural material are estimated using the Monte Carlo simulation method. The peak displacement production rate, gas production, activity, and residual dose rate of structural components such as the beam window and the outer casing of the subcritical reactor core are calculated. The calculation methods and the corresponding results provide a basic reference for making reasonable predictions of the lifetime and maintenance operations of the structural material of C-ADS.

  8. Implementation and display of Computer Aided Design (CAD) models in Monte Carlo radiation transport and shielding applications

    SciTech Connect

    Burns, T.J.

    1994-03-01

    An Xwindow application capable of importing geometric information directly from two Computer Aided Design (CAD) based formats for use in radiation transport and shielding analyses is being developed at ORNL. The application permits the user to graphically view the geometric models imported from the two formats for verification and debugging. Previous models, specifically formatted for the radiation transport and shielding codes, can also be imported. Required extensions to the existing combinatorial geometry analysis routines are discussed. Examples illustrating the various options and features which will be implemented in the application are presented. The use of the application as a visualization tool for the output of the radiation transport codes is also discussed.

  9. Computer-Aided Design/Manufacturing (CAD/M) for High-Speed Interconnect.

    DTIC Science & Technology

    1981-10-01

    Signal timing, particularly for synchronous logic circuits...Interconnection ordering is performed by a software tool which determines the order in which...element equivalent circuits. This is particularly true for thru-holes and vias. This approach lends itself especially well to a CAD/M approach, because the...software can automatically determine, for each discontinuity, its location, type, and the equivalent lumped RLC network. Then, transparent to the

  10. Assessment of updated CAD without a new reader study: effect of calibration of computer output on the computer-aided reader performance in CADx

    NASA Astrophysics Data System (ADS)

    Chen, Weijie; Petrick, Nicholas; Sahiner, Berkman

    2011-03-01

    It is very resource-demanding to assess each new version of a CAD system through a new reader study. We conjecture that the aided reader performance on a new version can be predicted by using certain characteristics of the computer output and the reader study conducted when the CAD system was initially introduced. This would likely reduce the need for additional reader studies. However, investigations are needed to develop a sound scientific foundation to test this conjecture. In this work, we consider a CADx system that outputs a disease score to aid the physician in making a diagnostic decision on a located lesion. Our major contribution is to show that calibration, reflected as a change in scale, is a characteristic of the computer output that needs to be considered in order to predict the aided reader performance in a new CADx version without a reader study. We used a bivariate bi-beta distribution to model the joint distribution of the decision variable underlying the reader without aid and the decision variable underlying the version 1 computer output in the initial version. We then applied a monotonic transformation to the computer output to simulate the computer output in a new version, i.e., the scores in the two versions differ only in calibration (specifically a change in scale). By further modeling certain mechanisms that the human reader may use for combining the computer output and the reader-alone scores, we computed the aided reader performance in terms of AUC for the new version of the CADx system. Our results show that the aided reader performance could depend on the degree of calibration difference between the two CAD system outputs. We conclude that for the purpose of predicting the aided reader performance of a new version of the CADx system, ROC performance (or any other rank-based metric) of the stand-alone CADx system may not be sufficient by itself.
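
    The core point, that a pure recalibration of the CAD output can move the aided performance even though rank-based metrics of the stand-alone CAD are unchanged, is easy to reproduce in a toy simulation. Everything below is invented (normal score distributions and a fixed weighted-sum combining rule instead of the paper's bivariate bi-beta model):

      # Rescaling the CAD score leaves its own AUC fixed but shifts the AUC of
      # a reader who combines scores with a fixed rule.
      import numpy as np

      def auc(pos, neg):                 # Mann-Whitney estimate of ROC area
          return (pos[:, None] > neg[None, :]).mean()

      rng = np.random.default_rng(2)
      n = 4000
      reader_pos, reader_neg = rng.normal(1.0, 1, n), rng.normal(0, 1, n)
      cad_pos, cad_neg = rng.normal(1.5, 1, n), rng.normal(0, 1, n)
      for scale in (1.0, 0.2, 5.0):      # scale = 1.0 plays "version 1"
          aided_pos = 0.5 * reader_pos + 0.5 * scale * cad_pos
          aided_neg = 0.5 * reader_neg + 0.5 * scale * cad_neg
          print(scale,
                auc(scale * cad_pos, scale * cad_neg),   # unchanged by scale
                auc(aided_pos, aided_neg))               # changes with scale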

  11. Evaluation of Five Microcomputer CAD Packages.

    ERIC Educational Resources Information Center

    Leach, James A.

    1987-01-01

    Discusses the similarities, differences, advanced features, applications and number of users of five microcomputer computer-aided design (CAD) packages. Included are: "AutoCAD (V.2.17)"; "CADKEY (V.2.0)"; "CADVANCE (V.1.0)"; "Super MicroCAD"; and "VersaCAD Advanced (V.4.00)." Describes the…

  12. Efficient computation of optimal actions.

    PubMed

    Todorov, Emanuel

    2009-07-14

    Optimal choice of actions is a fundamental problem relevant to fields as diverse as neuroscience, psychology, economics, computer science, and control engineering. Despite this broad relevance the abstract setting is similar: we have an agent choosing actions over time, an uncertain dynamical system whose state is affected by those actions, and a performance criterion that the agent seeks to optimize. Solving problems of this kind remains hard, in part, because of overly generic formulations. Here, we propose a more structured formulation that greatly simplifies the construction of optimal control laws in both discrete and continuous domains. An exhaustive search over actions is avoided and the problem becomes linear. This yields algorithms that outperform Dynamic Programming and Reinforcement Learning, and thereby solve traditional problems more efficiently. Our framework also enables computations that were not possible before: composing optimal control laws by mixing primitives, applying deterministic methods to stochastic systems, quantifying the benefits of error tolerance, and inferring goals from behavioral data via convex optimization. Development of a general class of easily solvable problems tends to accelerate progress--as linear systems theory has done, for example. Our framework may have similar impact in fields where optimal choice of actions is relevant.

  13. Efficient computation of optimal actions

    PubMed Central

    Todorov, Emanuel

    2009-01-01

    Optimal choice of actions is a fundamental problem relevant to fields as diverse as neuroscience, psychology, economics, computer science, and control engineering. Despite this broad relevance the abstract setting is similar: we have an agent choosing actions over time, an uncertain dynamical system whose state is affected by those actions, and a performance criterion that the agent seeks to optimize. Solving problems of this kind remains hard, in part, because of overly generic formulations. Here, we propose a more structured formulation that greatly simplifies the construction of optimal control laws in both discrete and continuous domains. An exhaustive search over actions is avoided and the problem becomes linear. This yields algorithms that outperform Dynamic Programming and Reinforcement Learning, and thereby solve traditional problems more efficiently. Our framework also enables computations that were not possible before: composing optimal control laws by mixing primitives, applying deterministic methods to stochastic systems, quantifying the benefits of error tolerance, and inferring goals from behavioral data via convex optimization. Development of a general class of easily solvable problems tends to accelerate progress—as linear systems theory has done, for example. Our framework may have similar impact in fields where optimal choice of actions is relevant. PMID:19574462
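
    The "problem becomes linear" claim has a compact demonstration in the discrete first-exit setting: writing the exponentiated negative cost-to-go as a desirability z, the Bellman equation becomes the linear fixed point z = exp(-q) * (P z) for passive dynamics P and state cost q, and the optimal policy is P reweighted by z. The four-state chain below is invented and simplifies the paper's general framework.

      # Linearly-solvable MDP sketch: fixed-point iteration for the desirability
      # z, then the optimal cost-to-go and controlled dynamics.
      import numpy as np

      q = np.array([0.1, 0.2, 0.3, 0.0])     # state costs; state 3 is the goal
      P = np.array([[0.5, 0.5, 0.0, 0.0],    # passive (uncontrolled) dynamics
                    [0.3, 0.4, 0.3, 0.0],
                    [0.0, 0.3, 0.4, 0.3],
                    [0.0, 0.0, 0.0, 1.0]])   # absorbing goal state

      z = np.ones(4)
      for _ in range(1000):                  # iterate z = exp(-q) * P z
          z_new = np.exp(-q) * (P @ z)
          z_new[3] = 1.0                     # boundary condition: zero exit cost
          if np.allclose(z_new, z):
              break
          z = z_new

      v = -np.log(z)                               # optimal cost-to-go
      u_star = P * z[None, :] / (P @ z)[:, None]   # optimal transition probs
      print(np.round(v, 3))
      print(np.round(u_star, 3))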

  14. Computationally efficient lossless image coder

    NASA Astrophysics Data System (ADS)

    Sriram, Parthasarathy; Sudharsanan, Subramania I.

    1999-12-01

    Lossless coding of image data has been a very active area of research in the fields of medical imaging, remote sensing, and document processing/delivery. While several lossless image coders such as JPEG and JBIG have been in existence for a while, their compression performance for encoding continuous-tone images was rather poor. Recently, several state-of-the-art techniques like CALIC and LOCO were introduced with significant improvement in compression performance over traditional coders. However, these coders are very difficult to implement using dedicated hardware or in software using media processors due to the inherently serial nature of their encoding process. In this work, we propose a lossless image coding technique with a compression performance that is very close to the performance of CALIC and LOCO while being very efficient to implement both in hardware and software. Comparisons for encoding the JPEG-2000 image set show that the compression performance of the proposed coder is within 2-5% of the more complex coders while being computationally very efficient. In addition, the encoder is shown to be parallelizable at a hierarchy of levels. The execution time of the proposed encoder is smaller than that required by LOCO, while the decoder is 2-3 times faster than the LOCO decoder.
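
    For a sense of why coders in the LOCO family resist parallel implementation, here is the median edge detector predictor at their core: each prediction depends on already-decoded neighbours, forcing a serial raster scan on the decoder side. This shows only the predictor stage on a toy image; context modeling and entropy coding are omitted.

      # LOCO-style median edge detector (MED) prediction and residuals.
      import numpy as np

      def med_residuals(img):
          h, w = img.shape
          res = np.zeros((h, w), dtype=np.int32)
          for y in range(h):
              for x in range(w):
                  a = int(img[y, x - 1]) if x > 0 else 0              # left
                  b = int(img[y - 1, x]) if y > 0 else 0              # above
                  c = int(img[y - 1, x - 1]) if x > 0 and y > 0 else 0
                  if c >= max(a, b):
                      pred = min(a, b)       # horizontal or vertical edge
                  elif c <= min(a, b):
                      pred = max(a, b)
                  else:
                      pred = a + b - c       # smooth region: planar predictor
                  res[y, x] = int(img[y, x]) - pred
          return res                         # residuals go to the entropy coder

      img = np.arange(16, dtype=np.uint8).reshape(4, 4)
      print(med_residuals(img))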

  15. Influence of surface roughness on mechanical properties of two computer-aided design/computer-aided manufacturing (CAD/CAM) ceramic materials.

    PubMed

    Flury, S; Peutzfeldt, A; Lussi, A

    2012-01-01

    The aim of this study was to evaluate the influence of surface roughness on surface hardness (Vickers; VHN), elastic modulus (EM), and flexural strength (FLS) of two computer-aided design/computer-aided manufacturing (CAD/CAM) ceramic materials. One hundred sixty-two samples of VITABLOCS Mark II (VMII) and 162 samples of IPS Empress CAD (IPS) were ground according to six standardized protocols producing decreasing surface roughnesses (n=27/group): grinding with 1) silicon carbide (SiC) paper #80, 2) SiC paper #120, 3) SiC paper #220, 4) SiC paper #320, 5) SiC paper #500, and 6) SiC paper #1000. Surface roughness (Ra/Rz) was measured with a surface roughness meter, VHN and EM with a hardness indentation device, and FLS with a three-point bending test. To test for a correlation between surface roughness (Ra/Rz) and VHN, EM, or FLS, Spearman rank correlation coefficients were calculated. The decrease in surface roughness led to an increase in VHN from (VMII/IPS; medians) 263.7/256.5 VHN to 646.8/601.5 VHN, an increase in EM from 45.4/41.0 GPa to 66.8/58.4 GPa, and an increase in FLS from 49.5/44.3 MPa to 73.0/97.2 MPa. For both ceramic materials, Spearman rank correlation coefficients showed a strong negative correlation between surface roughness (Ra/Rz) and VHN or EM and a moderate negative correlation between Ra/Rz and FLS. In conclusion, a decrease in surface roughness generally improved the mechanical properties of the CAD/CAM ceramic materials tested. However, FLS was less influenced by surface roughness than expected.
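
    The reported rank correlations are a one-line computation on any paired roughness/property measurements. The values below are illustrative stand-ins that merely mimic the hardness trend quoted above, not the study's raw data:

```python
import numpy as np
from scipy.stats import spearmanr

ra  = np.array([5.1, 3.8, 2.4, 1.6, 0.9, 0.4])   # hypothetical Ra values (µm)
vhn = np.array([264, 310, 395, 470, 560, 647])   # hypothetical hardness (VHN)

rho, p = spearmanr(ra, vhn)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")  # negative correlation vs. roughness
```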

  16. Use of CAD systems in design of Space Station and space robots

    NASA Technical Reports Server (NTRS)

    Dwivedi, Suren N.; Yadav, P.; Jones, Gary; Travis, Elmer W.

    1988-01-01

    The evolution of CAD systems is traced. State-of-the-art CAD systems are reviewed and various advanced CAD facilities and supplementing systems being used at NASA-Goddard are described. CAD hardware, computer software, and protocols are detailed.

  17. Quantum computing: Efficient fault tolerance

    NASA Astrophysics Data System (ADS)

    Gottesman, Daniel

    2016-12-01

    Dealing with errors in a quantum computer typically requires complex programming and many additional quantum bits. A technique for controlling errors has been proposed that alleviates both of these problems.

  18. Efficient Computational Model of Hysteresis

    NASA Technical Reports Server (NTRS)

    Shields, Joel

    2005-01-01

    A recently developed mathematical model of the output (displacement) versus the input (applied voltage) of a piezoelectric transducer accounts for hysteresis. For the sake of computational speed, the model is kept simple by neglecting the dynamic behavior of the transducer. Hence, the model applies to static and quasistatic displacements only. A piezoelectric transducer of the type to which the model applies is used as an actuator in a computer-based control system to effect fine position adjustments. Because the response time of the rest of such a system is usually much greater than that of a piezoelectric transducer, the model remains an acceptably close approximation for the purpose of control computations, even though the dynamics are neglected. The model (see Figure 1) represents an electrically parallel, mechanically series combination of backlash elements, each having a unique deadband width and output gain. The zeroth element in the parallel combination has zero deadband width and, hence, represents a linear component of the input/output relationship. The other elements, which have nonzero deadband widths, are used to model the nonlinear components of the hysteresis loop. The deadband widths and output gains of the elements are computed from experimental displacement-versus-voltage data. The hysteresis curve calculated by use of this model is piecewise linear beyond deadband limits.
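
    The parallel backlash bank described above is straightforward to simulate. A minimal sketch, with invented deadband widths, gains, and input waveform; in practice these parameters are fitted to displacement-versus-voltage data, as the abstract notes:

```python
import numpy as np

def backlash_model(voltage, widths, gains):
    """Parallel bank of backlash (play) operators; widths[0] == 0 gives
    the linear component, nonzero widths build the hysteresis loop."""
    y = np.zeros(len(widths))                  # internal element states
    out = []
    for v in voltage:
        # play operator: an element's output follows the input only once
        # the input leaves that element's deadband
        y = np.clip(y, v - widths, v + widths)
        out.append(np.dot(gains, y))
    return np.array(out)

# A quasistatic triangle wave traces a piecewise-linear hysteresis loop.
v = np.concatenate([np.linspace(0, 10, 50), np.linspace(10, -10, 100),
                    np.linspace(-10, 10, 100)])
disp = backlash_model(v, widths=np.array([0.0, 1.0, 2.5, 4.0]),
                      gains=np.array([0.5, 0.2, 0.2, 0.1]))
```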

  19. Gathering Empirical Evidence Concerning Links between Computer Aided Design (CAD) and Creativity

    ERIC Educational Resources Information Center

    Musta'amal, Aede Hatib; Norman, Eddie; Hodgson, Tony

    2009-01-01

    Discussion is often reported concerning potential links between computer-aided designing and creativity, but there has been a lack of systematic enquiry to gather empirical evidence concerning such links. This paper reports an indication of findings from other research studies carried out in contexts beyond general education that have sought evidence…

  20. A Dedicated Computational Platform for Cellular Monte Carlo T-CAD Software Tools

    DTIC Science & Technology

    2015-07-14

    School of Electrical, Computer, and Energy Engineering, Arizona State University; final report for AFOSR grant FA9550-14-1-0083. Surviving fragments describe the development of a novel approach for the self-consistent microscopic simulation of the electrical and thermal properties of semiconductor devices, and note that the reduction of mobility due to dislocation scattering results in a smaller velocity response to the electric field.

  1. Orbital implant placement using a computer-aided design and manufacturing (CAD/CAM) stereolithographic surgical template protocol.

    PubMed

    Goh, B T; Teoh, K H

    2015-05-01

    Surgical implant placement in the orbital region for the support of a prosthesis is challenging due to the thin orbital rim and proximity to vital structures. This article reports the use of a computer-aided design and manufacturing (CAD/CAM) stereolithographic surgical template protocol for orbital implant placement in four patients, who were followed up for approximately 7 years. A total of 11 orbital implants were inserted, eight of these in irradiated bone. No intraoperative complications were noted in any of the patients, and the implants were all inserted in the planned positions. The survival rate of implants placed in irradiated bone that received hyperbaric oxygen therapy was 62.5% (5/8). One implant failed in a burns injury patient at 74 months after functional loading. The overall cumulative survival of implants in the orbital region at 7 years was 63.6%. With regard to skin reactions around the abutments, 85% were grade 0, 13% were grade 1, and 2% were grade 2 according to the Holgers classification. The mean survival time of the first prosthesis was 49 months. High patient satisfaction was achieved with the implant-retained orbital prostheses.

  2. ESPC Computational Efficiency of Earth System Models

    DTIC Science & Technology

    2014-09-30

    Report (2014) on the computational efficiency of Earth system models under the Earth System Prediction Capability (ESPC) program; approved for public release, distribution unlimited. Surviving fragments mention optimization work in this system and a figure plotting wallclock seconds per forecast day for a T639L64 (~21 km at the equator) NAVGEM configuration.

  3. CAD/CAM (Computer Aided Design/Computer Aided Manufacture). A Brief Guide to Materials in the Library of Congress.

    ERIC Educational Resources Information Center

    Havas, George D.

    This brief guide to materials in the Library of Congress (LC) on computer aided design and/or computer aided manufacturing lists reference materials and other information sources under 13 headings: (1) brief introductions; (2) LC subject headings used for such materials; (3) textbooks; (4) additional titles; (5) glossaries and handbooks; (6)…

  4. Extension of a Computer Assisted Decision Support (CADS) Study to Improve Outcomes in Patients with Type 2 DM Treated by Primary Care Providers. Addendum

    DTIC Science & Technology

    2015-04-01

    …test the clinical effects of a Computer Assisted Decision Support (CADS) system for the management of type 2 diabetes (T2D) by primary care providers. Diabetes mellitus (DM) affects more than 29 million people in the United States and is associated with devastating complications in both personal and financial terms. Diabetes is the leading cause of blindness, non-traumatic amputations, and renal failure in adults and reduces life expectancy by 5…

  5. Immersive CAD

    SciTech Connect

    Ames, A.L.

    1999-02-01

    This paper documents development of a capability for performing shape-changing editing operations on solid model representations in an immersive environment. The capability includes part- and assembly-level operations, with part modeling supporting both topology-invariant and topology-changing modifications. A discussion of various design considerations in developing an immersive capability is included, along with discussion of a prototype implementation we have developed and explored. A prototype environment was built to test the approaches and determine the usefulness of immersive editing. The prototype showed exciting potential in redefining the CAD interface; it is fun to use, and editing is much faster and friendlier than in traditional feature-based CAD software. The prototype algorithms did not reliably provide a sufficient frame rate for complex geometries, but they provided the necessary roadmap for development of a production capability.

  6. A supervised 'lesion-enhancement' filter by use of a massive-training artificial neural network (MTANN) in computer-aided diagnosis (CAD)

    NASA Astrophysics Data System (ADS)

    Suzuki, Kenji

    2009-09-01

    Computer-aided diagnosis (CAD) has been an active area of study in medical image analysis. A filter for the enhancement of lesions plays an important role for improving the sensitivity and specificity in CAD schemes. The filter enhances objects similar to a model employed in the filter; e.g. a blob-enhancement filter based on the Hessian matrix enhances sphere-like objects. Actual lesions, however, often differ from a simple model; e.g. a lung nodule is generally modeled as a solid sphere, but there are nodules of various shapes and with internal inhomogeneities such as a nodule with spiculations and ground-glass opacity. Thus, conventional filters often fail to enhance actual lesions. Our purpose in this study was to develop a supervised filter for the enhancement of actual lesions (as opposed to a lesion model) by use of a massive-training artificial neural network (MTANN) in a CAD scheme for detection of lung nodules in CT. The MTANN filter was trained with actual nodules in CT images to enhance actual patterns of nodules. By use of the MTANN filter, the sensitivity and specificity of our CAD scheme were improved substantially. With a database of 69 lung cancers, nodule candidate detection by the MTANN filter achieved a 97% sensitivity with 6.7 false positives (FPs) per section, whereas nodule candidate detection by a difference-image technique achieved a 96% sensitivity with 19.3 FPs per section. Classification-MTANNs were applied for further reduction of the FPs. The classification-MTANNs removed 60% of the FPs with a loss of one true positive; thus, it achieved a 96% sensitivity with 2.7 FPs per section. Overall, with our CAD scheme based on the MTANN filter and classification-MTANNs, an 84% sensitivity with 0.5 FPs per section was achieved. First presented at the Seventh International Conference on Machine Learning and Applications, San Diego, CA, USA, 11-13 December 2008.

  7. Use of CAD Geometry in MDO

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid A.

    1996-01-01

    The purpose of this paper is to discuss the use of Computer-Aided Design (CAD) geometry in a Multi-Disciplinary Design Optimization (MDO) environment. Two techniques are presented to facilitate the use of CAD geometry by different disciplines, such as Computational Fluid Dynamics (CFD) and Computational Structural Mechanics (CSM). One method is to transfer the load from a CFD grid to a CSM grid. The second method is to update the CAD geometry for CSM deflection.

  8. Conservative restorative treatment using a single-visit, all-ceramic CAD/CAM system.

    PubMed

    Benk, Joel

    2007-01-01

    Computer-aided design/computer-aided manufacturing (CAD/CAM) continues to radically change the way in which the dental team plans, prepares, and fabricates a patient's restoration. This advancing technology offers the clinician the ability to scan the patient's failing dentition and then design a long-lasting, reliable restoration based on these data. CAD/CAM systems also permit efficient, single-visit placement of the restoration while preserving much of the natural tooth structure. This article discusses how a chairside CAD/CAM system can be used to provide such a restoration in the posterior region in a single visit.

  9. Two-view information fusion for improvement of computer-aided detection (CAD) of breast masses on mammograms

    NASA Astrophysics Data System (ADS)

    Wei, Jun; Sahiner, Berkman; Hadjiiski, Lubomir M.; Chan, Heang-Ping; Helvie, Mark A.; Roubidoux, Marilyn A.; Zhou, Chuan; Ge, Jun; Zhang, Yiheng

    2006-03-01

    We are developing a two-view information fusion method to improve the performance of our CAD system for mass detection. Mass candidates on each mammogram were first detected with our single-view CAD system. Potential object pairs on the two-view mammograms were then identified by using the distance between the object and the nipple. Morphological features, a Hessian feature, correlation coefficients between the two paired objects, and texture features were used as input to train a similarity classifier that estimated a similarity score for each pair. Finally, a linear discriminant analysis (LDA) classifier was used to fuse the score from the single-view CAD system and the similarity score. A data set of 475 patients containing 972 mammograms with 475 biopsy-proven masses was used to train and test the CAD system. All cases contained the CC view and the MLO or LM view. We randomly divided the data set into two independent sets of 243 cases and 232 cases. The training and testing were performed using the 2-fold cross-validation method. The detection performance of the CAD system was assessed by free-response receiver operating characteristic (FROC) analysis. The average test FROC curve was obtained by averaging the FP rates at the same sensitivity along the two corresponding test FROC curves from the 2-fold cross validation. At case-based sensitivities of 90%, 85%, and 80% on the test set, the single-view CAD system achieved FP rates of 2.0, 1.5, and 1.2 FPs/image, respectively. With the two-view fusion system, the FP rates were reduced to 1.7, 1.3, and 1.0 FPs/image, respectively, at the corresponding sensitivities. The improvement was found to be statistically significant (p<0.05) by the AFROC method. Our results indicate that the two-view fusion scheme can improve the performance of mass detection on mammograms.
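
    The final fusion step, an LDA over the single-view score and the pair similarity score, is simple to sketch with scikit-learn; the random arrays below are stand-ins for real CAD scores and ground-truth labels:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
single_view_score = rng.random(200)        # placeholder single-view CAD scores
similarity_score  = rng.random(200)        # placeholder two-view similarity scores
is_mass           = rng.random(200) > 0.7  # placeholder ground-truth labels

X = np.column_stack([single_view_score, similarity_score])
lda = LinearDiscriminantAnalysis().fit(X, is_mass)
fused = lda.decision_function(X)           # fused score used to re-rank detections
```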

  10. Efficient Methods to Compute Genomic Predictions

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Efficient methods for processing genomic data were developed to increase reliability of estimated breeding values and simultaneously estimate thousands of marker effects. Algorithms were derived and computer programs tested on simulated data for 50,000 markers and 2,967 bulls. Accurate estimates of ...

  11. Effects of tributylborane-activated adhesive and two silane agents on bonding computer-aided design and manufacturing (CAD/CAM) resin composite.

    PubMed

    Shinohara, Ayano; Taira, Yohsuke; Sawase, Takashi

    2017-01-09

    The present study was conducted to evaluate the effects of an experimental adhesive agent [methyl methacrylate-tributylborane liquid (MT)] and two adhesive agents containing silane on the bonding between a resin composite block of a computer-aided design and manufacturing (CAD/CAM) system and a light-curing resin composite veneering material. The surfaces of CAD/CAM resin composite specimens were ground with silicon-carbide paper, treated with phosphoric acid, and then primed with either one of the two silane agents [Scotchbond Universal Adhesive (SC) and GC Ceramic Primer II (GC)], no adhesive control (Cont), or one of three combinations (MT/SC, MT/GC, and MT/Cont). A light-curing resin composite was veneered on the primed CAD/CAM resin composite surface. The veneered specimens were subjected to thermocycling between 4 and 60 °C for 10,000 cycles, and the shear bond strengths were determined. All data were analyzed using analysis of variance and a post hoc Tukey-Kramer HSD test (α = 0.05, n = 8). MT/SC (38.7 MPa) exhibited the highest mean bond strengths, followed by MT/GC (30.4 MPa), SC (27.9 MPa), and MT/Cont (25.7 MPa), while Cont (12.9 MPa) and GC (12.3 MPa) resulted in the lowest bond strengths. The use of MT in conjunction with a silane agent significantly improved the bond strength. Surface treatment with appropriate adhesive agents was confirmed as a prerequisite for veneering CAD/CAM resin composite restorations.

  12. Efficient Calibration of Computationally Intensive Hydrological Models

    NASA Astrophysics Data System (ADS)

    Poulin, A.; Huot, P. L.; Audet, C.; Alarie, S.

    2015-12-01

    A new hybrid optimization algorithm for the calibration of computationally-intensive hydrological models is introduced. The calibration of hydrological models is a blackbox optimization problem where the only information available to the optimization algorithm is the objective function value. In the case of distributed hydrological models, the calibration process is often known to be hampered by computational efficiency issues. Running a single simulation may take several minutes and since the optimization process may require thousands of model evaluations, the computational time can easily expand to several hours or days. A blackbox optimization algorithm, which can substantially improve the calibration efficiency, has been developed. It merges both the convergence analysis and robust local refinement from the Mesh Adaptive Direct Search (MADS) algorithm, and the global exploration capabilities from the heuristic strategies used by the Dynamically Dimensioned Search (DDS) algorithm. The new algorithm is applied to the calibration of the distributed and computationally-intensive HYDROTEL model on three different river basins located in the province of Quebec (Canada). Two calibration problems are considered: (1) calibration of a 10-parameter version of HYDROTEL, and (2) calibration of a 19-parameter version of the same model. A previous study by the authors had shown that the original version of DDS was the most efficient method for the calibration of HYDROTEL, when compared to the MADS and the very well-known SCEUA algorithms. The computational efficiency of the hybrid DDS-MADS method is therefore compared with the efficiency of the DDS algorithm based on a 2000 model evaluations budget. Results show that the hybrid DDS-MADS method can reduce the total number of model evaluations by 70% for the 10-parameter version of HYDROTEL and by 40% for the 19-parameter version without compromising the quality of the final objective function value.
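
    For context, the DDS component is easy to sketch. This is the generic Dynamically Dimensioned Search of Tolson and Shoemaker (2007), not the authors' DDS-MADS hybrid: each iteration perturbs a randomly chosen, gradually shrinking subset of the parameters around the current best point:

```python
import numpy as np

def dds(objective, lo, hi, budget=2000, r=0.2, rng=np.random.default_rng(0)):
    d = len(lo)
    best = lo + rng.random(d) * (hi - lo)      # random initial point
    fbest = objective(best)
    for k in range(1, budget):
        p = 1.0 - np.log(k) / np.log(budget)   # inclusion probability decays
        mask = rng.random(d) < p
        if not mask.any():
            mask[rng.integers(d)] = True       # always perturb >= 1 dimension
        cand = best.copy()
        step = r * (hi - lo) * rng.standard_normal(d)
        cand[mask] += step[mask]
        cand = np.clip(cand, lo, hi)           # simple bound handling
        fc = objective(cand)
        if fc < fbest:                         # greedy acceptance
            best, fbest = cand, fc
    return best, fbest

# e.g. dds(lambda x: ((x - 3.0)**2).sum(), np.zeros(5), np.full(5, 10.0))
```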

  13. PC Board Layout and Electronic Drafting with CAD. Teacher Edition.

    ERIC Educational Resources Information Center

    Bryson, Jimmy

    This teacher's guide contains 11 units of instruction for a course on computer electronics and computer-assisted drafting (CAD) using a personal computer (PC). The course covers the following topics: introduction to electronic drafting with CAD; CAD system and software; basic electronic theory; component identification; basic integrated circuit…

  14. An Efficient Method for Computing All Reducts

    NASA Astrophysics Data System (ADS)

    Bao, Yongguang; Du, Xiaoyong; Deng, Mingrong; Ishii, Naohiro

    In the process of data mining a decision table using rough sets methodology, the main computational effort is associated with the determination of the reducts. Computing all reducts is a combinatorial NP-hard problem. Therefore, the only way to achieve faster execution is to provide an algorithm with a better constant factor that can solve this problem in reasonable time for real-life data sets. The purpose of this presentation is to propose two new efficient algorithms for computing reducts in information systems. The proposed algorithms are based on properties of reducts and on the relation between the reduct and the discernibility matrix. Experiments measuring execution time have been conducted on several real-world domains. The results show improved execution time when compared with other methods. In real applications, the two proposed algorithms can be combined.
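
    The discernibility-matrix connection can be illustrated with a brute-force baseline (not the paper's faster algorithms): a reduct is a minimal attribute subset that intersects every entry of the discernibility matrix. The decision table below is a toy example:

```python
from itertools import combinations

# Toy decision table: attribute columns 0..2, last entry is the decision.
table = [(1, 0, 1, 'yes'), (1, 1, 1, 'yes'),
         (0, 0, 1, 'no'),  (0, 1, 0, 'no')]
n_attrs = 3

# Discernibility matrix: for each pair of objects with different decisions,
# the set of attributes on which the two objects differ.
disc = [{a for a in range(n_attrs) if x[a] != y[a]}
        for x, y in combinations(table, 2) if x[-1] != y[-1]]

def discerns_all(subset):
    return all(entry & subset for entry in disc)

reducts = []                       # minimal hitting sets of the matrix
for k in range(1, n_attrs + 1):
    for s in map(set, combinations(range(n_attrs), k)):
        if discerns_all(s) and not any(r <= s for r in reducts):
            reducts.append(s)
print(reducts)                     # [{0}]: attribute 0 alone is a reduct here
```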

  15. Fabricating a tooth- and implant-supported maxillary obturator for a patient after maxillectomy with computer-guided surgery and CAD/CAM technology: A clinical report.

    PubMed

    Noh, Kwantae; Pae, Ahran; Lee, Jung-Woo; Kwon, Yong-Dae

    2016-05-01

    An obturator prosthesis with insufficient retention and support may be improved with implant placement. However, implant surgery in patients after maxillary tumor resection can be complicated because of limited visibility and anatomic complexity. Therefore, computer-guided surgery can be advantageous even for experienced surgeons. In this clinical report, the use of computer-guided surgery is described for implant placement using a bone-supported surgical template for a patient with maxillary defects. The prosthetic procedure was facilitated and simplified by using computer-aided design/computer-aided manufacture (CAD/CAM) technology. Oral function and phonetics were restored using a tooth- and implant-supported obturator prosthesis. No clinical symptoms and no radiographic signs of significant bone loss around the implants were found at a 3-year follow-up. The treatment approach presented here can be a viable option for patients with insufficient remaining zygomatic bone after a hemimaxillectomy.

  16. Computationally Efficient Multiconfigurational Reactive Molecular Dynamics

    PubMed Central

    Yamashita, Takefumi; Peng, Yuxing; Knight, Chris; Voth, Gregory A.

    2012-01-01

    It is a computationally demanding task to explicitly simulate the electronic degrees of freedom in a system to observe the chemical transformations of interest, while at the same time sampling the time and length scales required to converge statistical properties and thus reduce artifacts due to initial conditions, finite-size effects, and limited sampling. One solution that significantly reduces the computational expense consists of molecular models in which effective interactions between particles govern the dynamics of the system. If the interaction potentials in these models are developed to reproduce calculated properties from electronic structure calculations and/or ab initio molecular dynamics simulations, then one can calculate accurate properties at a fraction of the computational cost. Multiconfigurational algorithms model the system as a linear combination of several chemical bonding topologies to simulate chemical reactions, also sometimes referred to as “multistate”. These algorithms typically utilize energy and force calculations already found in popular molecular dynamics software packages, thus facilitating their implementation without significant changes to the structure of the code. However, the evaluation of energies and forces for several bonding topologies per simulation step can lead to poor computational efficiency if redundancy is not efficiently removed, particularly with respect to the calculation of long-ranged Coulombic interactions. This paper presents accurate approximations (effective long-range interaction and resulting hybrid methods) and multiple-program parallelization strategies for the efficient calculation of electrostatic interactions in reactive molecular simulations. PMID:25100924

  17. Improving the radiologist-CAD interaction: designing for appropriate trust.

    PubMed

    Jorritsma, W; Cnossen, F; van Ooijen, P M A

    2015-02-01

    Computer-aided diagnosis (CAD) has great potential to improve radiologists' diagnostic performance. However, the reported performance of the radiologist-CAD team is lower than what might be expected based on the performance of the radiologist and the CAD system in isolation. This indicates that the interaction between radiologists and the CAD system is not optimal. An important factor in the interaction between humans and automated aids (such as CAD) is trust. Suboptimal performance of the human-automation team is often caused by an inappropriate level of trust in the automation. In this review, we examine the role of trust in the radiologist-CAD interaction and suggest ways to improve the output of the CAD system so that it allows radiologists to calibrate their trust in the CAD system more effectively. Observer studies of the CAD systems show that radiologists often have an inappropriate level of trust in the CAD system. They sometimes under-trust CAD, thereby reducing its potential benefits, and sometimes over-trust it, leading to diagnostic errors they would not have made without CAD. Based on the literature on trust in human-automation interaction and the results of CAD observer studies, we have identified four ways to improve the output of CAD so that it allows radiologists to form a more appropriate level of trust in CAD. Designing CAD systems for appropriate trust is important and can improve the performance of the radiologist-CAD team. Future CAD research and development should acknowledge the importance of the radiologist-CAD interaction, and specifically the role of trust therein, in order to create the perfect artificial partner for the radiologist. This review focuses on the role of trust in the radiologist-CAD interaction. The aim of the review is to encourage CAD developers to design for appropriate trust and thereby improve the performance of the radiologist-CAD team.

  18. Computationally Efficient Prediction of Ionic Liquid Properties.

    PubMed

    Chaban, Vitaly V; Prezhdo, Oleg V

    2014-06-05

    Due to fundamental differences, room-temperature ionic liquids (RTIL) are significantly more viscous than conventional molecular liquids and require long simulation times. At the same time, RTILs remain in the liquid state over a much broader temperature range than the ordinary liquids. We exploit the ability of RTILs to stay liquid at several hundred degrees Celsius and introduce a straightforward and computationally efficient method for predicting RTIL properties at ambient temperature. RTILs do not alter phase behavior at 600-800 K. Therefore, their properties can be smoothly extrapolated down to ambient temperatures. We numerically prove the validity of the proposed concept for density and ionic diffusion of four different RTILs. This simple method enhances the computational efficiency of the existing simulation approaches as applied to RTILs by more than an order of magnitude.
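
    A sketch of the extrapolation step, assuming the usual Arrhenius-type temperature dependence (ln D linear in 1/T); the high-temperature diffusion coefficients below are invented placeholders, not the paper's data:

```python
import numpy as np

T = np.array([600.0, 650.0, 700.0, 750.0, 800.0])          # simulation temps (K)
D = np.array([2.1e-10, 3.9e-10, 6.6e-10, 1.0e-9, 1.5e-9])  # hypothetical D (m^2/s)

# Fit ln D against 1/T at high temperature, then extrapolate down to ambient.
slope, intercept = np.polyfit(1.0 / T, np.log(D), 1)
D_298 = np.exp(intercept + slope / 298.15)
print(f"extrapolated D(298 K) ≈ {D_298:.2e} m^2/s")
```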

  19. Changing computing paradigms towards power efficiency

    PubMed Central

    Klavík, Pavel; Malossi, A. Cristiano I.; Bekas, Costas; Curioni, Alessandro

    2014-01-01

    Power awareness is fast becoming immensely important in computing, ranging from the traditional high-performance computing applications to the new generation of data centric workloads. In this work, we describe our efforts towards a power-efficient computing paradigm that combines low- and high-precision arithmetic. We showcase our ideas for the widely used kernel of solving systems of linear equations that finds numerous applications in scientific and engineering disciplines as well as in large-scale data analytics, statistics and machine learning. Towards this goal, we developed tools for the seamless power profiling of applications at a fine-grain level. In addition, we verify here previous work on post-FLOPS/W metrics and show that these can shed much more light in the power/energy profile of important applications. PMID:24842033
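
    For linear systems, the low/high-precision pairing described above is commonly realized as mixed-precision iterative refinement: solve cheaply in low precision, then correct with residuals accumulated in high precision. A generic NumPy sketch, not the authors' instrumented kernels:

```python
import numpy as np

def mixed_precision_solve(A, b, iters=5):
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                                    # float64 residual
        dx = np.linalg.solve(A32, r.astype(np.float32))  # cheap correction
        x += dx.astype(np.float64)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 200)) + 200 * np.eye(200)  # well conditioned
b = rng.standard_normal(200)
x = mixed_precision_solve(A, b)
print("residual norm:", np.linalg.norm(b - A @ x))
```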

  20. Viewing CAD Drawings on the Internet

    ERIC Educational Resources Information Center

    Schwendau, Mark

    2004-01-01

    Computer aided design (CAD) has been producing 3-D models for years. AutoCAD software is frequently used to create sophisticated 3-D models. These CAD files can be exported as 3DS files for import into Autodesk's 3-D Studio Viz. In this program, the user can render and modify the 3-D model before exporting it out as a WRL (world file hyperlinked)…

  1. CAD/CAM. High-Technology Training Module.

    ERIC Educational Resources Information Center

    Zuleger, Robert

    This high technology training module is an advanced course on computer-assisted design/computer-assisted manufacturing (CAD/CAM) for grades 11 and 12. This unit, to be used with students in advanced drafting courses, introduces the concept of CAD/CAM. The content outline includes the following seven sections: (1) CAD/CAM software; (2) computer…

  2. Current techniques in CAD/CAM denture fabrication.

    PubMed

    Baba, Nadim Z; AlRumaih, Hamad S; Goodacre, Brian J; Goodacre, Charles J

    2016-01-01

    Recently, the use of computer-aided design/computer-aided manufacturing (CAD/CAM) to produce complete dentures has seen exponential growth in the dental market, and the number of commercially available CAD/CAM denture systems grows every year. The purpose of this article is to describe the clinical and laboratory procedures of 5 CAD/CAM denture systems.

  3. CAD Services: an Industry Standard Interface for Mechanical CAD Interoperability

    NASA Technical Reports Server (NTRS)

    Claus, Russell; Weitzer, Ilan

    2002-01-01

    Most organizations seek to design and develop new products in increasingly shorter time periods. At the same time, increased performance demands require a team-based multidisciplinary design process that may span several organizations. One approach to meeting these demands is to use 'Geometry Centric' design. In this approach, design engineers team their efforts through one unified representation of the design, which is usually captured in a CAD system. Standards-based interfaces are critical to provide the uniform, simple, distributed services that enable the 'Geometry Centric' design approach. This paper describes an industry-wide effort, under the Object Management Group's (OMG) Manufacturing Domain Task Force, to define interfaces that enable the interoperability of CAD, Computer Aided Manufacturing (CAM), and Computer Aided Engineering (CAE) tools. This critical link to enable 'Geometry Centric' design is called CAD Services V1.0. This paper discusses the features of this standard and its proposed application.

  4. Efficient quantum computing of complex dynamics.

    PubMed

    Benenti, G; Casati, G; Montangero, S; Shepelyansky, D L

    2001-11-26

    We propose a quantum algorithm which uses the number of qubits in an optimal way and efficiently simulates a physical model with rich and complex dynamics described by the quantum sawtooth map. The numerical study of the effect of static imperfections in the quantum computer hardware shows that the main elements of the phase space structures are accurately reproduced up to a time scale which is polynomial in the number of qubits. The errors generated by these imperfections are more significant than the errors of random noise in gate operations.

  5. Efficient Radiative Transfer Computations in the Atmosphere.

    DTIC Science & Technology

    1981-01-01

    With absorptance A = 1 − τ (τ being the flux transmittance), the net flux at level Z is given by equation (5):

        F(Z) = I↑ − I↓ = B(Z_sfc) − B(Z_top) A(Z_top, Z) − ∫ from Z_top to Z_sfc of A(Z′, Z) dB(Z′)    (5)

    From: "Efficient Radiative Transfer Computations in the Atmosphere," Air Force Institute of Technology, Wright-Patterson AFB, OH, January 1981 (C. R. Posey). The fragment also cites F. Alyea, N. Phillips and R. Prinn, 1975, "A three-dimensional dynamical-chemical model of atmospheric ozone," J. Atmos. Sci., 32:170-194.

  6. Impact of image normalization and quantization on the performance of sonar computer-aided detection/computer-aided classification (CAD/CAC) algorithms

    NASA Astrophysics Data System (ADS)

    Ciany, Charles M.; Zurawski, William C.

    2007-04-01

    Raytheon has extensively processed high-resolution sonar images with its CAD/CAC algorithms to provide real-time classification of mine-like bottom objects in a wide range of shallow-water environments. The algorithm performance is measured in terms of probability of correct classification (Pcc) as a function of false alarm rate, and is impacted by variables associated with both the physics of the problem and the signal processing design choices. Some examples of prominent variables pertaining to the choices of signal processing parameters are image resolution (i.e., pixel dimensions), image normalization scheme, and pixel intensity quantization level (i.e., number of bits used to represent the intensity of each image pixel). Improvements in image resolution associated with the technology transition from sidescan to synthetic aperture sonars have prompted the use of image decimation algorithms to reduce the number of pixels per image that are processed by the CAD/CAC algorithms, in order to meet real-time processor throughput requirements. Additional improvements in digital signal processing hardware have also facilitated the use of an increased quantization level in converting the image data from analog to digital format. This study evaluates modifications to the normalization algorithm and image pixel quantization level within the image processing prior to CAD/CAC processing, and examines their impact on the resulting CAD/CAC algorithm performance. The study utilizes a set of at-sea data from multiple test exercises in varying shallow water environments.
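
    The pixel-depth variable studied above amounts to requantizing a normalized image; a generic illustration follows (this is not Raytheon's normalization algorithm):

```python
import numpy as np

def requantize(image, bits):
    """Normalize an image to [0, 1] and round it to 2**bits - 1 levels."""
    levels = 2**bits - 1
    img = (image - image.min()) / (np.ptp(image) + 1e-12)
    return np.round(img * levels).astype(np.uint16)

# e.g. compare CAD/CAC output on 8-bit vs. 12-bit versions of the same scene:
# requantize(sonar_image, 8) vs. requantize(sonar_image, 12)
```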

  7. A primer on the energy efficiency of computing

    SciTech Connect

    Koomey, Jonathan G.

    2015-03-30

    The efficiency of computing at peak output has increased rapidly since the dawn of the computer age. This paper summarizes some of the key factors affecting the efficiency of computing in all usage modes. While there is still great potential for improving the efficiency of computing devices, we will need to alter how we do computing in the next few decades because we are finally approaching the limits of current technologies.

  8. Efficient computation of volume in flow predictions

    NASA Technical Reports Server (NTRS)

    Vinokur, M.; Kordulla, W.

    1983-01-01

    An efficient method for calculating cell volumes in time-dependent, three-dimensional finite-volume flow predictions is presented. A cell with eight arbitrary corner points is considered, and each face is divided into two planar triangles, so the computed volume depends on the orientation of this partitioning. For a hexahedron, it is noted that any open surface whose boundary is a closed curve possesses a surface vector independent of the surface shape. Expressions are given for the surface vector of each face, which is independent of the diagonal chosen to partition that face. Decomposing the cell volume around two corners that are each the vertex of three diagonals, and six corners that are each the vertex of one diagonal, gives portions that are tetrahedra. The resulting method can be used for time-dependent finite-volume calculations and requires less computer time than previous methods.
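
    The face-splitting construction described above can be checked with a divergence-theorem volume computation over the triangulated cell surface. A minimal sketch; the corner numbering and the choice of face diagonals are assumptions for illustration:

```python
import numpy as np

# Quad faces of a hexahedron, counter-clockwise seen from outside
# (a common, assumed corner numbering).
FACES = [(0, 3, 2, 1), (4, 5, 6, 7), (0, 1, 5, 4),
         (2, 3, 7, 6), (0, 4, 7, 3), (1, 2, 6, 5)]

def hex_volume(corners):
    """Split each (possibly warped) quad face into two planar triangles and
    sum signed tetrahedron volumes; the result depends on the diagonals."""
    vol = 0.0
    for a, b, c, d in FACES:
        for tri in ((a, b, c), (a, c, d)):   # split along diagonal a-c
            p0, p1, p2 = (corners[i] for i in tri)
            vol += np.dot(p0, np.cross(p1, p2)) / 6.0
    return abs(vol)

cube = np.array([[0,0,0],[1,0,0],[1,1,0],[0,1,0],
                 [0,0,1],[1,0,1],[1,1,1],[0,1,1]], float)
print(hex_volume(cube))                      # 1.0 for the unit cube
```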

  9. Computed tomography and CAD/CAE methods for the study of the osseus inner Ear bone of Greek quaternary endemic mammals

    NASA Astrophysics Data System (ADS)

    Provatidis, C. G.; Theodorou, E. G.; Theodorou, G. E.

    It is undisputed that the use of computed tomography gives the researcher an inside view of the internal morphology of precious findings. The main goal of this study is to take advantage of the huge possibilities that derive from the use of CT scans in the field of vertebrate palaeontology. Rare fossil skull parts (the os petrosum of Elephas tiliensis from Tilos, Phanourios minor from Cyprus, and Candiacervus sp. from Crete) brought to light by excavations required further analysis of their inside structure by non-destructive methods. Selected specimens were scanned and exported as DICOM files. These were then imported into the MIMICS software in order to develop the required 3D digital CAD models. By using distinctive reference points on the bone geometry selected on palaeontological criteria, section views were created, revealing the extremely complex internal structure and making it available for further palaeontological analysis.

  10. Dimensioning storage and computing clusters for efficient high throughput computing

    NASA Astrophysics Data System (ADS)

    Accion, E.; Bria, A.; Bernabeu, G.; Caubet, M.; Delfino, M.; Espinal, X.; Merino, G.; Lopez, F.; Martinez, F.; Planas, E.

    2012-12-01

    Scientific experiments are producing huge amounts of data, and the size of their datasets and total volume of data continue to increase. These data are then processed by researchers belonging to large scientific collaborations, the Large Hadron Collider being a good example. The focal point of scientific data centers has shifted from efficiently coping with petabyte-scale storage to delivering quality data processing throughput. The dimensioning of the internal components in high-throughput computing (HTC) data centers is of crucial importance to cope with all the activities demanded by the experiments, both online (data acceptance) and offline (data processing, simulation, and user analysis). This requires a precise setup involving disk and tape storage services, a computing cluster, and the internal networking to prevent bottlenecks, overloads, and undesired slowness that lead to lost CPU cycles and batch job failures. In this paper we point out relevant features for running a successful data storage and processing service in an intensive HTC environment.

  11. Education and Training Packages for CAD/CAM.

    ERIC Educational Resources Information Center

    Wright, I. C.

    1986-01-01

    Discusses educational efforts in the fields of Computer Assisted Design and Manufacturing (CAD/CAM). Describes two educational training initiatives underway in the United Kingdom, one of which is a resource materials package for teachers of CAD/CAM at the undergraduate level, and the other a training course for managers of CAD/CAM systems. (TW)

  12. Microcomputer Simulated CAD for Engineering Graphics.

    ERIC Educational Resources Information Center

    Huggins, David L.; Myers, Roy E.

    1983-01-01

    Describes a simulated computer-aided design (CAD) program at The Pennsylvania State University. Rationale for the program, facilities, microcomputer equipment (Apple) used, and development of a software package for simulating applied engineering graphics are considered. (JN)

  13. Application of Fisher fusion techniques to improve the individual performance of sonar computer-aided detection/computer-aided classification (CAD/CAC) algorithms

    NASA Astrophysics Data System (ADS)

    Ciany, Charles M.; Zurawski, William C.

    2009-05-01

    Raytheon has extensively processed high-resolution sidescan sonar images with its CAD/CAC algorithms to provide classification of targets in a variety of shallow underwater environments. The Raytheon CAD/CAC algorithm is based on non-linear image segmentation into highlight, shadow, and background regions, followed by extraction, association, and scoring of features from candidate highlight and shadow regions of interest (ROIs). The targets are classified by thresholding an overall classification score, which is formed by summing the individual feature scores. The algorithm performance is measured in terms of probability of correct classification as a function of false alarm rate, and is determined by both the choice of classification features and the manner in which the classifier rates and combines these features to form its overall score. In general, the algorithm performs very reliably against targets that exhibit "strong" highlight and shadow regions in the sonar image, i.e., where both the highlight echo and its associated shadow region from the target are distinct relative to the ambient background. However, many real-world undersea environments can produce sonar images in which a significant percentage of the targets exhibit either "weak" highlight or shadow regions in the sonar image. The challenge of achieving robust performance in these environments has traditionally been addressed by modifying the individual feature scoring algorithms to optimize the separation between the corresponding highlight or shadow feature scores of targets and non-targets. This study examines an alternate approach that employs principles of Fisher fusion to determine a set of optimal weighting coefficients that are applied to the individual feature scores before summing to form the overall classification score. The results demonstrate improved performance of the CAD/CAC algorithm on at-sea data sets.

  14. Characterizing and Implementing Efficient Primitives for Privacy-Preserving Computation

    DTIC Science & Technology

    2015-07-01

    Final technical report, Georgia Institute of Technology, July 2015 (covering May 2011 to March 2015). Only a fragment of the abstract survives: "…computation to be executed upon it. However, the primitives making such computation possible are extremely expensive, and have long been viewed as…"

  15. Using AutoCAD for descriptive geometry exercises. in undergraduate structural geology

    NASA Astrophysics Data System (ADS)

    Jacobson, Carl E.

    2001-02-01

    The exercises in descriptive geometry typically utilized in undergraduate structural geology courses are quickly and easily solved using the computer drafting program AutoCAD. The key to efficient use of AutoCAD for descriptive geometry involves taking advantage of User Coordinate Systems, alternative angle conventions, relative coordinates, and other aspects of AutoCAD that may not be familiar to the beginning user. A summary of these features and an illustration of their application to the creation of structure contours for a planar dipping bed provides the background necessary to solve other problems in descriptive geometry with the computer. The ease of the computer constructions reduces frustration for the student and provides more time to think about the principles of the problems.
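
    Independent of any CAD commands, the structure-contour construction mentioned above reduces to one line of trigonometry: in map view, the contours of a planar bed run parallel to strike and are spaced by the contour interval divided by the tangent of the dip. A sketch:

```python
import math

def structure_contour_spacing(dip_deg, contour_interval):
    """Map-view spacing between structure contours of a planar dipping bed."""
    return contour_interval / math.tan(math.radians(dip_deg))

print(structure_contour_spacing(30.0, 10.0))   # 10 m contours, 30° dip: ~17.3 m
```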

  16. Project CAD as of July 1978: CAD support project, situation in July 1978

    NASA Technical Reports Server (NTRS)

    Boesch, L.; Lang-Lendorff, G.; Rothenberg, R.; Stelzer, V.

    1979-01-01

    The structure of Computer Aided Design (CAD) and the requirements for program developments in past and future are described. The actual standard and the future aims of CAD programs are presented. The developed programs in: (1) civil engineering; (2) mechanical engineering; (3) chemical engineering/shipbuilding; (4) electrical engineering; and (5) general programs are discussed.

  17. Productivity increase through implementation of CAD/CAE workstation

    NASA Technical Reports Server (NTRS)

    Bromley, L. K.

    1985-01-01

    The Tracking and Communication Division's computer-aided design/computer-aided engineering (CAD/CAE) system is now operational. The system is used to automate certain tasks that were previously performed manually. These tasks include detailed test configuration diagrams of systems under certification test in the ESTL, floor-plan layouts of planned future laboratory reconfigurations, and other graphical documentation of division activities. The significant time savings achieved with this CAD/CAE system are examined: (1) input of drawings and diagrams; (2) editing of initial drawings; (3) accessibility of the data; and (4) added versatility. It is shown that the Applicon CAD/CAE system, with its ease of input and editing, the accessibility of data, and its added versatility, has made many of the necessary but often time-consuming tasks associated with engineering design and testing more efficient.

  18. Efficient quantum computing using coherent photon conversion.

    PubMed

    Langford, N K; Ramelow, S; Prevedel, R; Munro, W J; Milburn, G J; Zeilinger, A

    2011-10-12

    Single photons are excellent quantum information carriers: they were used in the earliest demonstrations of entanglement and in the production of the highest-quality entanglement reported so far. However, current schemes for preparing, processing and measuring them are inefficient. For example, down-conversion provides heralded, but randomly timed, single photons, and linear optics gates are inherently probabilistic. Here we introduce a deterministic process--coherent photon conversion (CPC)--that provides a new way to generate and process complex, multiquanta states for photonic quantum information applications. The technique uses classically pumped nonlinearities to induce coherent oscillations between orthogonal states of multiple quantum excitations. One example of CPC, based on a pumped four-wave-mixing interaction, is shown to yield a single, versatile process that provides a full set of photonic quantum processing tools. This set satisfies the DiVincenzo criteria for a scalable quantum computing architecture, including deterministic multiqubit entanglement gates (based on a novel form of photon-photon interaction), high-quality heralded single- and multiphoton states free from higher-order imperfections, and robust, high-efficiency detection. It can also be used to produce heralded multiphoton entanglement, create optically switchable quantum circuits and implement an improved form of down-conversion with reduced higher-order effects. Such tools are valuable building blocks for many quantum-enabled technologies. Finally, using photonic crystal fibres we experimentally demonstrate quantum correlations arising from a four-colour nonlinear process suitable for CPC and use these measurements to study the feasibility of reaching the deterministic regime with current technology. Our scheme, which is based on interacting bosonic fields, is not restricted to optical systems but could also be implemented in optomechanical, electromechanical and superconducting

  19. Efficient Computation Of Behavior Of Aircraft Tires

    NASA Technical Reports Server (NTRS)

    Tanner, John A.; Noor, Ahmed K.; Andersen, Carl M.

    1989-01-01

    NASA technical paper discusses challenging application of computational structural mechanics to numerical simulation of responses of aircraft tires during taxing, takeoff, and landing. Presents details of three main elements of computational strategy: use of special three-field, mixed-finite-element models; use of operator splitting; and application of technique reducing substantially number of degrees of freedom. Proposed computational strategy applied to two quasi-symmetric problems: linear analysis of anisotropic tires through use of two-dimensional-shell finite elements and nonlinear analysis of orthotropic tires subjected to unsymmetric loading. Three basic types of symmetry and combinations exhibited by response of tire identified.

  20. Synthesis of Efficient Structures for Concurrent Computation.

    DTIC Science & Technology

    1983-10-01

    Synthesis of Efficient Structures for Concurrent Computation, by Richard M. King and Ernst W. Mayr; Cordell Green, principal investigator; Kestrel Institute, 1801 Page Mill Road, Palo Alto, CA; contract F49620-82-C-0007. The fragment also cites R. M. King, E. W. Mayr, and A. Siegel, "Techniques for Solving Graph Problems in Parallel Environments," Proceedings of the 24th Symposium on Foundations of Computer Science.

  1. A panorama of dental CAD/CAM restorative systems.

    PubMed

    Liu, Perng-Ru

    2005-07-01

    In the last 2 decades, exciting new developments in dental materials and computer technology have led to the success of contemporary dental computer-aided design/computer-aided manufacturing (CAD/CAM) technology. Several highly sophisticated chairside and laboratory CAD/CAM systems have been introduced or are under development. This article provides an overview of the development of various CAD/CAM systems. Operational components, methodologies, and restorative materials used with common CAD/CAM systems are discussed. Research data and clinical studies are presented to substantiate the clinical performance of these systems.

  2. Panorama of dental CAD/CAM restorative systems.

    PubMed

    Liu, Perng-Ru; Essig, Milton E

    2008-10-01

    In the past two decades, exciting new developments in dental materials and computer technology have led to the success of contemporary dental computer-aided design/computer-aided manufacture (CAD/CAM) technology. Several highly sophisticated in-office and laboratory CAD/CAM systems have been introduced or are under development. This article provides an overview of the development of various CAD/CAM systems. Operational components, methodologies, and restorative materials used with common CAD/CAM systems are discussed. Research data and clinical studies are presented to substantiate the clinical performance of these systems.

  3. Aerodynamic Design of Complex Configurations Using Cartesian Methods and CAD Geometry

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.; Pulliam, Thomas H.

    2003-01-01

    The objective of this paper is to present the development of an optimization capability for the Cartesian inviscid-flow analysis package of Aftosmis et al. We evaluate and characterize the following modules within the new optimization framework: (1) a component-based geometry parameterization approach using a CAD solid representation and the CAPRI interface, and (2) the use of Cartesian methods in the development of automated optimization tools, considering optimization techniques based on both a genetic algorithm and a gradient-based algorithm. The discussion and investigations focus on several real-world problems of the optimization process. We examine the architectural issues associated with the deployment of a CAD-based design approach in a heterogeneous parallel computing environment that contains both CAD workstations and dedicated compute nodes. In addition, we study the influence of noise on the performance of optimization techniques, and the overall efficiency of the optimization process for aerodynamic design of complex three-dimensional configurations.

  4. AutoCAD-To-NASTRAN Translator Program

    NASA Technical Reports Server (NTRS)

    Jones, A.

    1989-01-01

    Program facilitates creation of finite-element mathematical models from geometric entities. AutoCAD to NASTRAN translator (ACTON) computer program developed to facilitate quick generation of small finite-element mathematical models for use with NASTRAN finite-element modeling program. Reads geometric data of drawing from Data Exchange File (DXF) used in AutoCAD and other PC-based drafting programs. Written in Microsoft Quick-Basic (Version 2.0).

  5. Efficient Computation Of Confidence Intervals Of Parameters

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.

    1992-01-01

    Study focuses on obtaining efficient algorithm for estimation of confidence intervals of ML estimates. Four algorithms selected to solve associated constrained optimization problem. Hybrid algorithms, following search and gradient approaches, prove best.

  6. Experimental Implementation of Efficient Linear Optics Quantum Computation

    DTIC Science & Technology

    2007-11-02

    Final report by G. J. Milburn, T. C. Ralph, and A. G. White, University of Queensland, Australia. One of the earliest proposals [1] for implementing quantum computation was based on encoding … containing few photons. In 2001, Knill, Laflamme and Milburn (KLM) found a way to circumvent this restriction and implement efficient quantum computation…

  7. Efficient Parallel Engineering Computing on Linux Workstations

    NASA Technical Reports Server (NTRS)

    Lou, John Z.

    2010-01-01

    A C software module has been developed that creates lightweight processes (LWPs) dynamically to achieve parallel computing performance in a variety of engineering simulation and analysis applications to support NASA and DoD project tasks. The required interface between the module and the application it supports is simple, minimal and almost completely transparent to the user applications, and it can achieve nearly ideal computing speed-up on multi-CPU engineering workstations of all operating system platforms. The module can be integrated into an existing application (C, C++, Fortran and others) either as part of a compiled module or as a dynamically linked library (DLL).

  8. CAD/CAM: Practical and Persuasive in Canadian Schools

    ERIC Educational Resources Information Center

    Willms, Ed

    2007-01-01

    Chances are that many high school students would not know how to use drafting instruments, but some might want to gain competence in computer-assisted design (CAD) and possibly computer-assisted manufacturing (CAM). These students are often attracted to tech courses by the availability of CAD/CAM instructions, and many go on to impress employers…

  9. Efficient Kinematic Computations For 7-DOF Manipulators

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun; Long, Mark K.; Kreutz-Delgado, Kenneth

    1994-01-01

    Efficient algorithms for forward kinematic mappings of seven-degree-of-freedom (7-DOF) robotic manipulator having revolute joints developed on basis of representation of redundant DOF in terms of parameter called "arm angle." Continuing effort to exploit redundancy in manipulator according to concept of basic and additional tasks. Concept also discussed in "Configuration-Control Scheme Copes With Singularities" (NPO-18556) and "Increasing the Dexterity of Redundant Robots" (NPO-17801).

  10. An application protocol for CAD to CAD transfer of electronic information

    NASA Technical Reports Server (NTRS)

    Azu, Charles C., Jr.

    1993-01-01

    The exchange of Computer Aided Design (CAD) information between dissimilar CAD systems is a problem. This is especially true for transferring electronics CAD information such as multi-chip module (MCM), hybrid microcircuit assembly (HMA), and printed circuit board (PCB) designs. Currently, there exist several neutral data formats for transferring electronics CAD information, including the IGES, EDIF, and DXF formats. All of these formats have limitations for exchanging electronic data. In an attempt to overcome these limitations, the Navy's MicroCIM program implemented a project to transfer hybrid microcircuit design information between dissimilar CAD systems. The IGES (Initial Graphics Exchange Specification) format is used since it is well established within the CAD industry. The goal of the project is a complete transfer of microelectronic CAD information, using IGES, without any data loss. An Application Protocol (AP) is being developed to specify how hybrid microcircuit CAD information will be represented by IGES entity constructs. The AP defines which IGES data items are appropriate for describing HMA geometry, connectivity, and processing, as well as HMA material characteristics.

  11. Efficient Associative Computation with Discrete Synapses.

    PubMed

    Knoblauch, Andreas

    2016-01-01

    Neural associative networks are a promising computational paradigm for both modeling neural circuits of the brain and implementing associative memory and Hebbian cell assemblies in parallel VLSI or nanoscale hardware. Previous work has extensively investigated synaptic learning in linear models of the Hopfield type and simple nonlinear models of the Steinbuch/Willshaw type. Optimized Hopfield networks of size n can store a large number of about n²/k memories of size k (or associations between them) but require real-valued synapses, which are expensive to implement and can store at most C = 0.72 bits per synapse. Willshaw networks can store a much smaller number of about n²/k² memories but get along with much cheaper binary synapses. Here I present a learning model employing synapses with discrete synaptic weights. For optimal discretization parameters, this model can store, up to a factor ζ close to one, the same number of memories as for optimized Hopfield-type learning; for example, ζ = 0.64 for binary synapses, ζ = 0.88 for 2-bit (four-state) synapses, ζ = 0.96 for 3-bit (8-state) synapses, and ζ > 0.99 for 4-bit (16-state) synapses. The model also provides the theoretical framework to determine optimal discretization parameters for computer implementations or brainlike parallel hardware including structural plasticity. In particular, as recently shown for the Willshaw network, it is possible to store C(I) = 1 bit per computer bit and up to C(S) = log n bits per nonsilent synapse, whereas the absolute number of stored memories can be much larger than for the Willshaw model.
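
    The binary-synapse (Willshaw-type) end of the spectrum discussed above has a very short implementation: storage is clipped Hebbian learning, retrieval a threshold on dendritic sums. Pattern sizes and memory load below are illustrative:

```python
import numpy as np

def store(pairs, n, m):
    """Willshaw learning: W[i, j] = 1 iff inputs i, j were ever co-active."""
    W = np.zeros((n, m), dtype=np.uint8)
    for x, y in pairs:
        W |= np.outer(x, y).astype(np.uint8)
    return W

def recall(W, x, k):
    """Threshold the dendritic sums at the input activity k."""
    return (x.astype(np.int32) @ W >= k).astype(np.uint8)

rng = np.random.default_rng(0)
n = m = 100
k = 5                                    # active units per pattern
pairs = []
for _ in range(20):
    x = np.zeros(n, np.uint8); x[rng.choice(n, k, replace=False)] = 1
    y = np.zeros(m, np.uint8); y[rng.choice(m, k, replace=False)] = 1
    pairs.append((x, y))
W = store(pairs, n, m)
print(np.array_equal(recall(W, pairs[0][0], k), pairs[0][1]))  # True at low load
```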

  12. Efficient tree codes on SIMD computer architectures

    NASA Astrophysics Data System (ADS)

    Olson, Kevin M.

    1996-11-01

    This paper describes changes made to a previous implementation of an N-body tree code developed for a fine-grained, SIMD computer architecture. These changes include (1) switching from a balanced binary tree to a balanced oct tree, (2) addition of quadrupole corrections, and (3) having the particles search the tree in groups rather than individually. An algorithm for limiting errors is also discussed. In aggregate, these changes have led to a performance increase of over a factor of 10 compared to the previous code. For problems several times larger than the processor array, the code now achieves performance levels of ~ 1 Gflop on the MasPar MP-2 or roughly 20% of the quoted peak performance of this machine. This percentage is competitive with other parallel implementations of tree codes on MIMD architectures. This is significant, considering the low relative cost of SIMD architectures.
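
    The tree walk that changes (1)-(3) accelerate can be sketched with the standard opening-angle test: a cell of size s at distance r is accepted as a single source when s/r < θ and opened otherwise. A monopole-only Python schematic (the hypothetical Node class, the softening eps, and the omission of quadrupole terms and self-interaction handling are all simplifications of ours):

      import numpy as np

      class Node:
          """Octree cell: either a leaf holding one particle or a cell with children."""
          def __init__(self, size, mass, com, children=()):
              self.size, self.mass = size, mass
              self.com = np.asarray(com, dtype=float)   # center of mass
              self.children = children                  # empty tuple for leaves

      def accel(node, pos, theta=0.5, eps=1e-2):
          """Gravitational acceleration at pos from the tree rooted at node (G = 1)."""
          d = node.com - pos
          r = np.sqrt(d @ d + eps**2)                   # softened distance
          if not node.children or node.size / r < theta:
              return node.mass * d / r**3               # accept cell: monopole term
          return sum(accel(c, pos, theta, eps) for c in node.children)

      leaf1 = Node(0.0, 1.0, [1.0, 0.0, 0.0])
      leaf2 = Node(0.0, 1.0, [1.2, 0.1, 0.0])
      root = Node(0.5, 2.0, [1.1, 0.05, 0.0], (leaf1, leaf2))
      print(accel(root, np.zeros(3)))   # distant cell is accepted as one monopole

    Walking the tree once per group of nearby particles, as in change (3), amortizes this traversal cost across the group.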

  13. Efficient computation of parameter confidence intervals

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.

    1987-01-01

    An important step in system identification of aircraft is the estimation of stability and control derivatives from flight data along with an assessment of parameter accuracy. When the maximum likelihood estimation technique is used, parameter accuracy is commonly assessed by the Cramer-Rao lower bound. It is known, however, that in some cases the lower bound can be substantially different from the parameter variance. Under these circumstances the Cramer-Rao bounds may be misleading as an accuracy measure. This paper discusses the confidence interval estimation problem based on likelihood ratios, which offers a more general estimate of the error bounds. Four approaches are considered for computing confidence intervals of maximum likelihood parameter estimates. Each approach is applied to real flight data and compared.
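
    In outline, a likelihood-ratio confidence interval is the set of parameter values whose log-likelihood lies within half the chi-squared critical value of the maximum, i.e. 2[l(theta_hat) - l(theta)] <= chi2_1(level). A generic one-parameter sketch using bisection (illustrated on a normal-mean toy problem, not the aircraft flight-data models of the paper; the bracketing span is an assumption):

      import numpy as np
      from scipy.stats import chi2
      from scipy.optimize import brentq

      def lr_interval(loglik, theta_hat, level=0.95, span=10.0):
          """Interval {theta : 2*(l(theta_hat) - l(theta)) <= chi2_1(level)}."""
          cut = loglik(theta_hat) - 0.5 * chi2.ppf(level, df=1)
          f = lambda t: loglik(t) - cut          # zero at the interval endpoints
          lo = brentq(f, theta_hat - span, theta_hat)   # span must bracket the roots
          hi = brentq(f, theta_hat, theta_hat + span)
          return lo, hi

      # Toy example: mean of normal data with known sigma = 1.
      x = np.random.default_rng(1).normal(2.0, 1.0, size=25)
      ll = lambda mu: -0.5 * np.sum((x - mu) ** 2)
      print(lr_interval(ll, x.mean()))           # ~ x.mean() +/- 1.96/sqrt(25)

    For this quadratic log-likelihood the interval coincides with the usual Wald interval; the likelihood-ratio construction generalizes to the asymmetric cases where Cramer-Rao bounds mislead.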

  14. Fit of CAD/CAM implant frameworks: a comprehensive review.

    PubMed

    Abduo, Jaafar

    2014-12-01

    Computer-aided design and computer-aided manufacturing (CAD/CAM) is a strongly emerging prosthesis fabrication method for implant dentistry. Currently, CAD/CAM allows the construction of implant frameworks from different materials. This review evaluates the literature pertaining to the precision fit of fixed implant frameworks fabricated by CAD/CAM. Following a comprehensive electronic search through PubMed (MEDLINE), 14 relevant articles were identified. The results indicate that the precision fit of CAD/CAM frameworks exceeded the fit of the 1-piece cast frameworks and laser-welded frameworks. A similar fit was observed for CAD/CAM frameworks and bonding of the framework body to prefabricated cylinders. The influence of CAD/CAM materials on the fit of a framework is minimal.

  15. Some Workplace Effects of CAD and CAM.

    ERIC Educational Resources Information Center

    Ebel, Karl-H.; Ulrich, Erhard

    1987-01-01

    Examines the impact of computer-aided design (CAD) and computer-aided manufacturing (CAM) on employment, work organization, working conditions, job content, training, and industrial relations in several countries. Finds little evidence of negative employment effects since productivity gains are offset by various compensatory factors. (Author/CH)

  16. Quantum-enhanced Sensing and Efficient Quantum Computation

    DTIC Science & Technology

    2015-07-27

    [Report documentation page; only fragments of the abstract are recoverable.] Report AFRL-AFOSR-UK-TR-2015-0039, "Quantum-enhanced sensing and efficient quantum computation," Ian Walmsley, The University of …; period of performance 1 February 2013 to 31 January 2015. The surviving text notes that the system achieved high accuracy and was used to improve quantum boson sampling tests. Subject terms: EOARD, Quantum Information Processing, Transition Edge Sensors.

  17. Computationally efficient Bayesian inference for inverse problems.

    SciTech Connect

    Marzouk, Youssef M.; Najm, Habib N.; Rahn, Larry A.

    2007-10-01

    Bayesian statistics provides a foundation for inference from noisy and incomplete data, a natural mechanism for regularization in the form of prior information, and a quantitative assessment of uncertainty in the inferred results. Inverse problems - representing indirect estimation of model parameters, inputs, or structural components - can be fruitfully cast in this framework. Complex and computationally intensive forward models arising in physical applications, however, can render a Bayesian approach prohibitive. This difficulty is compounded by high-dimensional model spaces, as when the unknown is a spatiotemporal field. We present new algorithmic developments for Bayesian inference in this context, showing strong connections with the forward propagation of uncertainty. In particular, we introduce a stochastic spectral formulation that dramatically accelerates the Bayesian solution of inverse problems via rapid evaluation of a surrogate posterior. We also explore dimensionality reduction for the inference of spatiotemporal fields, using truncated spectral representations of Gaussian process priors. These new approaches are demonstrated on scalar transport problems arising in contaminant source inversion and in the inference of inhomogeneous material or transport properties. We also present a Bayesian framework for parameter estimation in stochastic models, where intrinsic stochasticity may be intermingled with observational noise. Evaluation of a likelihood function may not be analytically tractable in these cases, and thus several alternative Markov chain Monte Carlo (MCMC) schemes, operating on the product space of the observations and the parameters, are introduced.
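
    The acceleration idea, replacing the expensive forward model inside MCMC with a cheap surrogate fitted offline, can be sketched generically. Here an ordinary polynomial fit stands in for the stochastic spectral (polynomial chaos) surrogate of the paper, and a random-walk Metropolis sampler explores the surrogate posterior; the forward model, prior, noise level, and step size are all illustrative:

      import numpy as np

      rng = np.random.default_rng(0)

      def forward(theta):                        # stand-in for an expensive solver
          return np.array([np.exp(-theta), np.exp(-2.0 * theta)])

      # Offline: fit a cheap polynomial surrogate on a handful of solver runs.
      train = np.linspace(0.0, 2.0, 9)
      coeffs = [np.polyfit(train, [forward(t)[i] for t in train], deg=4) for i in range(2)]
      surrogate = lambda t: np.array([np.polyval(c, t) for c in coeffs])

      data = forward(0.7) + rng.normal(0.0, 0.01, size=2)   # noisy observations
      sigma2 = 0.01 ** 2

      def log_post(t):                           # surrogate posterior, flat prior on [0, 2]
          if not 0.0 <= t <= 2.0:
              return -np.inf
          return -0.5 * np.sum((data - surrogate(t)) ** 2) / sigma2

      # Random-walk Metropolis on the surrogate posterior: no solver calls here.
      chain, t = [], 1.0
      for _ in range(5000):
          prop = t + 0.05 * rng.normal()
          if np.log(rng.uniform()) < log_post(prop) - log_post(t):
              t = prop
          chain.append(t)
      print("posterior mean:", np.mean(chain[1000:]))       # should sit near 0.7

    The point of the construction is that the sampler's many posterior evaluations hit the polynomial, not the solver; the solver is queried only for the few training runs.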

  18. Earthquake detection through computationally efficient similarity search.

    PubMed

    Yoon, Clara E; O'Reilly, Ossian; Bergen, Karianne J; Beroza, Gregory C

    2015-12-01

    Seismology is experiencing rapid growth in the quantity of data, which has outpaced the development of processing algorithms. Earthquake detection, the identification of seismic events in continuous data, is a fundamental operation for observational seismology. We developed an efficient method to detect earthquakes using waveform similarity that overcomes the disadvantages of existing detection methods. Our method, called Fingerprint And Similarity Thresholding (FAST), can analyze a week of continuous seismic waveform data in less than 2 hours, or 140 times faster than autocorrelation. FAST adapts a data mining algorithm, originally designed to identify similar audio clips within large databases; it first creates compact "fingerprints" of waveforms by extracting key discriminative features, then groups similar fingerprints together within a database to facilitate fast, scalable search for similar fingerprint pairs, and finally generates a list of earthquake detections. FAST detected most (21 of 24) cataloged earthquakes and 68 uncataloged earthquakes in 1 week of continuous data from a station located near the Calaveras Fault in central California, achieving detection performance comparable to that of autocorrelation, with some additional false detections. FAST is expected to realize its full potential when applied to extremely long duration data sets over a distributed network of seismic stations. The widespread application of FAST has the potential to aid in the discovery of unexpected seismic signals, improve seismic monitoring, and promote a greater understanding of a variety of earthquake processes.
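
    The fingerprint-and-group idea can be shown in miniature: reduce each waveform window to a compact discrete fingerprint, bucket identical fingerprints in a hash table, and report windows that collide. The feature below (indices of the strongest low-frequency bins) is a crude stand-in for FAST's wavelet-based fingerprints, and, as in the paper, occasional false detections are expected:

      import numpy as np
      from collections import defaultdict

      def fingerprint(window, n_top=3, n_bins=32):
          """Compact fingerprint: indices of the strongest low-frequency bins."""
          mag = np.abs(np.fft.rfft(window))[:n_bins]
          return tuple(sorted(np.argsort(mag)[-n_top:]))

      def detect_similar(trace, win=256, hop=64):
          """Bucket windows by fingerprint; buckets with >1 window are candidates."""
          buckets = defaultdict(list)
          for start in range(0, len(trace) - win + 1, hop):
              buckets[fingerprint(trace[start:start + win])].append(start)
          return [starts for starts in buckets.values() if len(starts) > 1]

      rng = np.random.default_rng(0)
      trace = rng.normal(size=5000)
      event = 10 * np.sin(2 * np.pi * 0.05 * np.arange(256)) * np.hanning(256)
      trace[1024:1280] += event          # plant the same event twice in the noise
      trace[3520:3776] += event
      print(detect_similar(trace))       # expect a bucket pairing 1024 and 3520

    Real fingerprints are near-duplicate-tolerant (locality-sensitive hashing rather than exact matching), which is what lets FAST find similar but not identical waveforms.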

  19. Earthquake detection through computationally efficient similarity search

    PubMed Central

    Yoon, Clara E.; O’Reilly, Ossian; Bergen, Karianne J.; Beroza, Gregory C.

    2015-01-01

    Seismology is experiencing rapid growth in the quantity of data, which has outpaced the development of processing algorithms. Earthquake detection—identification of seismic events in continuous data—is a fundamental operation for observational seismology. We developed an efficient method to detect earthquakes using waveform similarity that overcomes the disadvantages of existing detection methods. Our method, called Fingerprint And Similarity Thresholding (FAST), can analyze a week of continuous seismic waveform data in less than 2 hours, or 140 times faster than autocorrelation. FAST adapts a data mining algorithm, originally designed to identify similar audio clips within large databases; it first creates compact “fingerprints” of waveforms by extracting key discriminative features, then groups similar fingerprints together within a database to facilitate fast, scalable search for similar fingerprint pairs, and finally generates a list of earthquake detections. FAST detected most (21 of 24) cataloged earthquakes and 68 uncataloged earthquakes in 1 week of continuous data from a station located near the Calaveras Fault in central California, achieving detection performance comparable to that of autocorrelation, with some additional false detections. FAST is expected to realize its full potential when applied to extremely long duration data sets over a distributed network of seismic stations. The widespread application of FAST has the potential to aid in the discovery of unexpected seismic signals, improve seismic monitoring, and promote a greater understanding of a variety of earthquake processes. PMID:26665176

  20. Efficiently modeling neural networks on massively parallel computers

    NASA Technical Reports Server (NTRS)

    Farber, Robert M.

    1993-01-01

    Neural networks are a very useful tool for analyzing and modeling complex real world systems. Applying neural network simulations to real world problems generally involves large amounts of data and massive amounts of computation. To efficiently handle the computational requirements of large problems, we have implemented at Los Alamos a highly efficient neural network compiler for serial computers, vector computers, vector parallel computers, and fine grain SIMD computers such as the CM-2 connection machine. This paper describes the mapping used by the compiler to implement feed-forward backpropagation neural networks for a SIMD (Single Instruction Multiple Data) architecture parallel computer. Thinking Machines Corporation has benchmarked our code at 1.3 billion interconnects per second (approximately 3 gigaflops) on a 64,000 processor CM-2 connection machine (Singer 1990). This mapping is applicable to other SIMD computers and can be implemented on MIMD computers such as the CM-5 connection machine. Our mapping has virtually no communications overhead, with the exception of the communications required for a global summation across the processors, which has sub-linear runtime growth of order O(log(number of processors)). We can efficiently model very large neural networks which have many neurons and interconnects, and our mapping can extend to arbitrarily large networks (within memory limitations) by merging the memory space of separate processors with fast adjacent-processor communications. This paper considers only the simulation of feed-forward neural networks, although the method is extendable to recurrent networks.

  1. A CAD System for Hemorrhagic Stroke

    PubMed Central

    Nowinski, Wieslaw L; Qian, Guoyu; Hanley, Daniel F

    2014-01-01

    Summary Computer-aided detection/diagnosis (CAD) is a key component of routine clinical practice, increasingly used for detection, interpretation, quantification and decision support. Despite a critical need, there is no clinically accepted CAD system for stroke yet. Here we introduce a CAD system for hemorrhagic stroke. This CAD system segments, quantifies, and displays hematoma in 2D/3D, and supports evacuation of hemorrhage by thrombolytic treatment, monitoring progression and quantifying clot removal. It supports a seven-step workflow: select patient, add a new study, process patient's scans, show segmentation results, plot hematoma volumes, show 3D synchronized time series hematomas, and generate report. The system architecture contains four components: library, tools, application with user interface, and hematoma segmentation algorithm. The tools include a contour editor, 3D surface modeler, 3D volume measure, histogramming, hematoma volume plot, and 3D synchronized time-series hematoma display. The CAD system has been designed and implemented in C++. It has also been employed in the CLEAR and MISTIE phase-III, multicenter clinical trials. This stroke CAD system is potentially useful in research and clinical applications, particularly for clinical trials. PMID:25196612

  2. A CAD System for Hemorrhagic Stroke.

    PubMed

    Nowinski, Wieslaw L; Qian, Guoyu; Hanley, Daniel F

    2014-09-01

    Computer-aided detection/diagnosis (CAD) is a key component of routine clinical practice, increasingly used for detection, interpretation, quantification and decision support. Despite a critical need, there is no clinically accepted CAD system for stroke yet. Here we introduce a CAD system for hemorrhagic stroke. This CAD system segments, quantifies, and displays hematoma in 2D/3D, and supports evacuation of hemorrhage by thrombolytic treatment, monitoring progression and quantifying clot removal. It supports a seven-step workflow: select patient, add a new study, process patient's scans, show segmentation results, plot hematoma volumes, show 3D synchronized time series hematomas, and generate report. The system architecture contains four components: library, tools, application with user interface, and hematoma segmentation algorithm. The tools include a contour editor, 3D surface modeler, 3D volume measure, histogramming, hematoma volume plot, and 3D synchronized time-series hematoma display. The CAD system has been designed and implemented in C++. It has also been employed in the CLEAR and MISTIE phase-III, multicenter clinical trials. This stroke CAD system is potentially useful in research and clinical applications, particularly for clinical trials.

  3. Future CAD in multi-dimensional medical images--project on multi-organ, multi-disease CAD system.

    PubMed

    Kobatake, Hidefumi

    2007-01-01

    A large research project on the subject of computer-aided diagnosis (CAD) entitled "Intelligent Assistance in Diagnosis of Multi-dimensional Medical Images" was initiated in Japan in 2003. The objective of this research project is to develop a multi-organ, multi-disease CAD system that incorporates anatomical knowledge of the human body and diagnostic knowledge of various types of diseases. The present paper provides an overview of the project and clarifies the trend of future CAD technologies in Japan.

  4. A highly efficient cocaine detoxifying enzyme obtained by computational design

    PubMed Central

    Zheng, Fang; Xue, Liu; Hou, Shurong; Liu, Junjun; Zhan, Max; Yang, Wenchao; Zhan, Chang-Guo

    2014-01-01

    Compared to naturally occurring enzymes, computationally designed enzymes are usually much less efficient, with their catalytic activities being more than six orders of magnitude below the diffusion limit. Here we use a two-step computational design approach, combined with experimental work, to design a highly efficient cocaine-hydrolysing enzyme. We engineer E30-6 from human butyrylcholinesterase (BChE), which is specific for cocaine hydrolysis, and obtain a much higher catalytic efficiency for cocaine conversion than for conversion of the natural BChE substrate, acetylcholine (ACh). The catalytic efficiency of E30-6 for cocaine hydrolysis is comparable to that of the most efficient known naturally-occurring hydrolytic enzyme, acetylcholinesterase, the catalytic activity of which approaches the diffusion limit. We further show that E30-6 can protect mice from a subsequently administered lethal dose of cocaine, suggesting the enzyme may have therapeutic potential in the setting of cocaine detoxification or cocaine abuse. PMID:24643289

  5. On the Use of CAD and Cartesian Methods for Aerodynamic Optimization

    NASA Technical Reports Server (NTRS)

    Nemec, M.; Aftosmis, M. J.; Pulliam, T. H.

    2004-01-01

    The objective of this paper is to present the development of an optimization capability for Cart3D, a Cartesian inviscid-flow analysis package. We present the construction of a new optimization framework and we focus on the following issues: 1) Component-based geometry parameterization approach using parametric-CAD models and CAPRI. A novel geometry server is introduced that addresses the issue of parallel efficiency while only sparingly consuming CAD resources; 2) The use of genetic and gradient-based algorithms for three-dimensional aerodynamic design problems. The influence of noise on the optimization methods is studied. Our goal is to create a responsive and automated framework that efficiently identifies design modifications that result in substantial performance improvements. In addition, we examine the architectural issues associated with the deployment of a CAD-based approach in a heterogeneous parallel computing environment that contains both CAD workstations and dedicated compute engines. We demonstrate the effectiveness of the framework for a design problem that features topology changes and complex geometry.

  6. Making a Case for CAD in the Curriculum.

    ERIC Educational Resources Information Center

    Threlfall, K. Denise

    1995-01-01

    Computer-assisted design (CAD) technology is transforming the apparel industry. Students of fashion merchandising and clothing design must be prepared on state-of-the-art equipment. ApparelCAD software is one example of courseware for instruction in pattern design and production. (SK)

  7. An Evaluation of Internet-Based CAD Collaboration Tools

    ERIC Educational Resources Information Center

    Smith, Shana Shiang-Fong

    2004-01-01

    Due to the widespread use of the Internet, most companies now require computer aided design (CAD) tools that support distributed collaborative design on the Internet. Such CAD tools should enable designers to share product models, as well as related data, from geographically distant locations. However, integrated collaborative design…

  8. Positive Wigner functions render classical simulation of quantum computation efficient.

    PubMed

    Mari, A; Eisert, J

    2012-12-07

    We show that quantum circuits where the initial state and all the following quantum operations can be represented by positive Wigner functions can be classically efficiently simulated. This is true both for continuous-variable as well as discrete variable systems in odd prime dimensions, two cases which will be treated on entirely the same footing. Noting the fact that Clifford and Gaussian operations preserve the positivity of the Wigner function, our result generalizes the Gottesman-Knill theorem. Our algorithm provides a way of sampling from the output distribution of a computation or a simulation, including the efficient sampling from an approximate output distribution in the case of sampling imperfections for initial states, gates, or measurements. In this sense, this work highlights the role of the positive Wigner function as separating classically efficiently simulable systems from those that are potentially universal for quantum computing and simulation, and it emphasizes the role of negativity of the Wigner function as a computational resource.
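
    For context (standard definition, supplied here rather than quoted from the paper), the Wigner function of a state ρ of a continuous-variable system is

      W_\rho(x,p) = \frac{1}{\pi\hbar}\int_{-\infty}^{\infty} \langle x+y \,|\, \rho \,|\, x-y \rangle \, e^{-2ipy/\hbar}\, dy .

    Since W integrates to one over phase space, pointwise nonnegativity makes it a genuine joint probability density for (x, p), which is exactly what permits the classical sampling described above; negativity breaks that interpretation and acts as the computational resource.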

  9. Applying Performance Models to Understand Data-Intensive Computing Efficiency

    DTIC Science & Technology

    2010-05-01

    [Report documentation page; only fragments of the abstract are recoverable.] Keywords: data-intensive computing, cloud computing, analytical modeling, Hadoop, MapReduce, performance and efficiency. The surviving text discusses the writing of output data to disk; replication of data across multiple nodes in distributed file systems such as GFS [11] and HDFS [3]; and modeling assumptions that input data are evenly distributed across all participating nodes in the cluster, that nodes are homogeneous, and that each node retrieves its initial input locally.

  10. I/O-Efficient Scientific Computation Using TPIE

    NASA Technical Reports Server (NTRS)

    Vengroff, Darren Erik; Vitter, Jeffrey Scott

    1996-01-01

    In recent years, input/output (I/O)-efficient algorithms for a wide variety of problems have appeared in the literature. However, systems specifically designed to assist programmers in implementing such algorithms have remained scarce. TPIE is a system designed to support I/O-efficient paradigms for problems from a variety of domains, including computational geometry, graph algorithms, and scientific computation. The TPIE interface frees programmers from having to deal not only with explicit read and write calls, but also the complex memory management that must be performed for I/O-efficient computation. In this paper we discuss applications of TPIE to problems in scientific computation. We discuss algorithmic issues underlying the design and implementation of the relevant components of TPIE and present performance results of programs written to solve a series of benchmark problems using our current TPIE prototype. Some of the benchmarks we present are based on the NAS parallel benchmarks while others are of our own creation. We demonstrate that the central processing unit (CPU) overhead required to manage I/O is small and that even with just a single disk, the I/O overhead of I/O-efficient computation ranges from negligible to the same order of magnitude as CPU time. We conjecture that if we use a number of disks in parallel this overhead can be all but eliminated.

  11. Equilibrium analysis of the efficiency of an autonomous molecular computer

    NASA Astrophysics Data System (ADS)

    Rose, John A.; Deaton, Russell J.; Hagiya, Masami; Suyama, Akira

    2002-02-01

    In the whiplash polymerase chain reaction (WPCR), autonomous molecular computation is implemented in vitro by the recursive, self-directed polymerase extension of a mixture of DNA hairpins. Although computational efficiency is known to be reduced by a tendency for DNAs to self-inhibit by backhybridization, both the magnitude of this effect and its dependence on the reaction conditions have remained open questions. In this paper, the impact of backhybridization on WPCR efficiency is addressed by modeling the recursive extension of each strand as a Markov chain. The extension efficiency per effective polymerase-DNA encounter is then estimated within the framework of a statistical thermodynamic model. Model predictions are shown to provide close agreement with the premature halting of computation reported in a recent in vitro WPCR implementation, a particularly significant result, given that backhybridization had been discounted as the dominant error process. The scaling behavior further indicates completion times to be sufficiently long to render WPCR-based massive parallelism infeasible. A modified architecture, PNA-mediated WPCR (PWPCR) is then proposed in which the occupancy of backhybridized hairpins is reduced by targeted PNA2/DNA triplex formation. The efficiency of PWPCR is discussed using a modified form of the model developed for WPCR. Predictions indicate the PWPCR efficiency is sufficient to allow the implementation of autonomous molecular computation on a massive scale.

  12. A concurrent computer aided detection (CAD) tool for articular cartilage disease of the knee on MR imaging using active shape models

    NASA Astrophysics Data System (ADS)

    Ramakrishna, Bharath; Saiprasad, Ganesh; Safdar, Nabile; Siddiqui, Khan; Chang, Chein-I.; Siegel, Eliot

    2008-03-01

    Osteoarthritis (OA) is the most common form of arthritis and a major cause of morbidity affecting millions of adults in the US and worldwide. In the knee, OA begins with the degeneration of joint articular cartilage, eventually resulting in the femur and tibia coming in contact, and leading to severe pain and stiffness. There has been extensive research examining 3D MR imaging sequences and automatic/semi-automatic techniques for 2D/3D articular cartilage extraction. However, in routine clinical practice the most popular technique still remains radiographic examination and qualitative assessment of the joint space. This may be in large part because of a lack of tools that can provide clinically relevant diagnoses in near real time, working in adjunct with the radiologist, serving radiologists' needs and reducing inter-observer variation. Our work aims to fill this void by developing a CAD application that can generate clinically relevant diagnoses of articular cartilage damage in near real time. The algorithm features a 2D Active Shape Model (ASM) for modeling the bone-cartilage interface on all the slices of a Double Echo Steady State (DESS) MR sequence, followed by measurement of the cartilage thickness from the surface of the bone, and finally by the identification of regions of abnormal thinness and focal/degenerative lesions. A preliminary evaluation of the CAD tool was carried out on 10 cases taken from the Osteoarthritis Initiative (OAI) database. When compared with 2 board-certified musculoskeletal radiologists, the automatic CAD application was able to produce segmentation/thickness maps in a little over 60 seconds for all of the cases. This observation poses interesting possibilities for increasing radiologist productivity and confidence, improving patient outcomes, and applying more sophisticated CAD algorithms to routine orthopedic imaging tasks.

  13. A Case Study in CAD Design Automation

    ERIC Educational Resources Information Center

    Lowe, Andrew G.; Hartman, Nathan W.

    2011-01-01

    Computer-aided design (CAD) software and other product life-cycle management (PLM) tools have become ubiquitous in industry during the past 20 years. Over this time they have continuously evolved, becoming programs with enormous capabilities, but the companies that use them have not evolved their design practices at the same rate. Due to the…

  14. The BRL-CAD Package: An Overview

    DTIC Science & Technology

    2013-04-01

    [Report documentation page; only fragments are recoverable.] Subject terms: NURBS, B-spline, ray tracing, CSG, BRL-CAD. The fragments cite "Definition and Raytracing of B-spline Objects in a Combinatorial Solid Geometric Modeling System," USENIX: Proceedings of the Fourth Computer Graphics …

  15. Mechanical Drafting with CAD. Teacher Edition.

    ERIC Educational Resources Information Center

    McClain, Gerald R.

    This instructor's manual contains 13 units of instruction for a course on mechanical drafting with options for using computer-aided drafting (CAD). Each unit includes some or all of the following basic components of a unit of instruction: objective sheet, suggested activities for the teacher, assignment sheets and answers to assignment sheets,…

  16. Computationally Efficient Composite Likelihood Statistics for Demographic Inference.

    PubMed

    Coffman, Alec J; Hsieh, Ping Hsun; Gravel, Simon; Gutenkunst, Ryan N

    2016-02-01

    Many population genetics tools employ composite likelihoods, because fully modeling genomic linkage is challenging. But traditional approaches to estimating parameter uncertainties and performing model selection require full likelihoods, so these tools have relied on computationally expensive maximum-likelihood estimation (MLE) on bootstrapped data. Here, we demonstrate that statistical theory can be applied to adjust composite likelihoods and perform robust computationally efficient statistical inference in two demographic inference tools: ∂a∂i and TRACTS. On both simulated and real data, the adjustments perform comparably to MLE bootstrapping while using orders of magnitude less computational time.
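
    The adjustment in question follows general composite-likelihood theory (sketched here from that theory, not quoted from the paper): with H the expected negative Hessian of the composite log-likelihood l_c and J the variance of its score, the sandwich (Godambe) information

      G(\theta) = H(\theta)\, J(\theta)^{-1} H(\theta), \qquad
      H(\theta) = -\mathbb{E}\!\left[\nabla^2 \ell_c(\theta)\right], \quad
      J(\theta) = \operatorname{Var}\!\left[\nabla \ell_c(\theta)\right],

    replaces the Fisher information, so that the composite-likelihood estimator is asymptotically normal with covariance G(θ₀)⁻¹ and likelihood-ratio statistics can be rescaled accordingly; H and J are estimable from far fewer replicates than full MLE bootstrapping requires.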

  17. Popescu-Rohrlich correlations imply efficient instantaneous nonlocal quantum computation

    NASA Astrophysics Data System (ADS)

    Broadbent, Anne

    2016-08-01

    In instantaneous nonlocal quantum computation, two parties cooperate in order to perform a quantum computation on their joint inputs, while being restricted to a single round of simultaneous communication. Previous results showed that instantaneous nonlocal quantum computation is possible, at the cost of an exponential amount of prior shared entanglement (in the size of the input). Here, we show that a linear amount of entanglement suffices, (in the size of the computation), as long as the parties share nonlocal correlations as given by the Popescu-Rohrlich box. This means that communication is not required for efficient instantaneous nonlocal quantum computation. Exploiting the well-known relation to position-based cryptography, our result also implies the impossibility of secure position-based cryptography against adversaries with nonsignaling correlations. Furthermore, our construction establishes a quantum analog of the classical communication complexity collapse under nonsignaling correlations.

  18. Simulation model to analyze the scatter radiation effects on breast cancer diagnosis by CAD system

    NASA Astrophysics Data System (ADS)

    Irita, Ricardo T.; Frere, Annie F.; Fujita, Hiroshi

    2002-05-01

    One of the factors that most affects radiographic image quality is the scatter radiation produced by interaction between the x-rays and the radiographed object. Computer Aided Diagnosis (CAD) systems are increasingly used to aid the detection of small details in the breast. Nevertheless, it is not clear how much scatter radiation decreases the efficiency of these systems. This work presents a model to quantify the scatter radiation and relate it to the results of a CAD system used for microcalcification detection. We simulated scatter photons reaching the film and added them to the mammography image. The new images were processed and the alterations in the CAD results were analyzed. The information loss for a breast composed of 80 percent adipose tissue was 0.0561 per centimeter of increase in breast thickness. Recomputing these data for varying proportions of adipose tissue, breast compositions of 90 percent and 70 percent gave losses of 0.0504 and 0.07559 per added centimeter, respectively. The model can add the desired amount of scattered radiation to any image with its own characteristics, allowing efficient analysis of the disturbances it introduces into visual inspection or automatic detection by a CAD system.

  19. A Software for CAD Photomask --- ZB-761,

    DTIC Science & Technology

    1981-05-21

    Xian-long, Department of Electronic Engineering, Qinghua University. Abstract: As a part of the LSI CAD, a software for CAD photomask ZB-761 was designed... by means of paper tape or keyboard. After the processing of the CAD language compiler, the computer produces a paper tape for program control which... The matrix representation of the transformation computation can be generalized to handle the data representing an array of 15 regularly arranged...

  20. CAD-CAE in Electrical Machines and Drives Teaching.

    ERIC Educational Resources Information Center

    Belmans, R.; Geysen, W.

    1988-01-01

    Describes the use of computer-aided design (CAD) techniques in teaching the design of electrical motors. The approaches described address three technical viewpoints: electromagnetic, thermal, and mechanical. Provides three diagrams, a table, and conclusions. (YP)

  1. Overview of NASA MSFC IEC Multi-CAD Collaboration Capability

    NASA Technical Reports Server (NTRS)

    Moushon, Brian; McDuffee, Patrick

    2005-01-01

    This viewgraph presentation provides an overview of a Design and Data Management System (DDMS) for Computer Aided Design (CAD) collaboration in order to support the Integrated Engineering Capability (IEC) at Marshall Space Flight Center (MSFC).

  2. Single unit CAD/CAM restorations: a literature review.

    PubMed

    Freedman, Michael; Quinn, Frank; O'Sullivan, Michael

    2007-01-01

    Computer-aided design/computer-aided manufacture (CAD/CAM) has been used in dentistry since 1987. Since then, many CAD/CAM systems have been described, which enable the production of chair-side single unit dental restorations. These restorations are of comparable quality to those made by conventional techniques and have some specific advantages, including rapid production, improved wear properties, decreased laboratory fee and improved cross infection control. This literature review investigates the evidence base for the use of single unit CAD/CAM restorations. Materials, marginal gap, aesthetics, post-operative sensitivity, cementation, cost-effectiveness and longevity are discussed.

  3. An Accurate and Efficient Method of Computing Differential Seismograms

    NASA Astrophysics Data System (ADS)

    Hu, S.; Zhu, L.

    2013-12-01

    Inversion of seismic waveforms for Earth structure usually requires computing partial derivatives of seismograms with respect to velocity model parameters. We developed an accurate and efficient method to calculate differential seismograms for multi-layered elastic media, based on the Thomson-Haskell propagator matrix technique. We first derived the partial derivatives of the Haskell matrix and its compound matrix with respect to the layer parameters (P-wave velocity, shear-wave velocity, and density). We then derived the partial derivatives of surface displacement kernels in the frequency-wavenumber domain. The differential seismograms are obtained by using the frequency-wavenumber double integration method. The implementation is computationally efficient and the total computing time is proportional to the time of computing the seismogram itself, i.e., independent of the number of layers in the model. We verified the correctness of the results by comparing with differential seismograms computed using the finite-difference method. Our results are more accurate because of the analytical nature of the derived partial derivatives.
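
    The analytic derivatives rest on the product rule for the layer-stacked propagator: in a stack of N layers, only the matrix of the perturbed layer is differentiated (notation ours, schematic):

      \frac{\partial}{\partial p_k}\left(\mathbf{A}_N \mathbf{A}_{N-1}\cdots \mathbf{A}_1\right)
        = \mathbf{A}_N \cdots \mathbf{A}_{k+1}\,\frac{\partial \mathbf{A}_k}{\partial p_k}\,\mathbf{A}_{k-1}\cdots \mathbf{A}_1,
        \qquad p_k \in \{\alpha_k, \beta_k, \rho_k\},

    where A_j is the Haskell matrix of layer j. Caching the prefix products A_{k-1}···A_1 and suffix products A_N···A_{k+1} lets all layer derivatives be assembled in a single pass, consistent with the reported total cost being proportional to that of one seismogram evaluation.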

  4. Efficient quantum circuits for one-way quantum computing.

    PubMed

    Tanamoto, Tetsufumi; Liu, Yu-Xi; Hu, Xuedong; Nori, Franco

    2009-03-13

    While Ising-type interactions are ideal for implementing controlled phase flip gates in one-way quantum computing, natural interactions between solid-state qubits are most often described by either the XY or the Heisenberg models. We show an efficient way of generating cluster states directly using either the imaginary SWAP (iSWAP) gate for the XY model, or the sqrt[SWAP] gate for the Heisenberg model. Our approach thus makes one-way quantum computing more feasible for solid-state devices.

  5. Efficient computations of quantum canonical Gibbs state in phase space

    NASA Astrophysics Data System (ADS)

    Bondar, Denys I.; Campos, Andre G.; Cabrera, Renan; Rabitz, Herschel A.

    2016-06-01

    The Gibbs canonical state, as a maximum entropy density matrix, represents a quantum system in equilibrium with a thermostat. This state plays an essential role in thermodynamics and serves as the initial condition for nonequilibrium dynamical simulations. We solve a long-standing problem for computing the Gibbs state Wigner function with nearly machine accuracy by solving the Bloch equation directly in the phase space. Furthermore, algorithms are provided that yield high-quality Wigner distributions for pure stationary states as well as for Thomas-Fermi and Bose-Einstein distributions. The developed numerical methods furnish a long-sought efficient computation framework for nonequilibrium quantum simulations directly in the Wigner representation.
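
    The Bloch equation referred to here propagates the unnormalized Gibbs operator in inverse temperature β = 1/(k_B T); in one common symmetrized operator form (stated for context, not quoted from the paper):

      \frac{\partial \rho}{\partial \beta} = -\frac{1}{2}\left(\hat{H}\rho + \rho\hat{H}\right),
      \qquad \rho(0) = \hat{1} \;\;\Rightarrow\;\; \rho(\beta) = e^{-\beta \hat{H}},

    and in the phase-space formulation the operator products become Moyal star-products acting on the Wigner function, so the whole propagation can be carried out on W(x, p) without ever forming the density matrix.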

  6. A SINDA thermal model using CAD/CAE technologies

    NASA Technical Reports Server (NTRS)

    Rodriguez, Jose A.; Spencer, Steve

    1992-01-01

    The approach to thermal analysis described by this paper is a technique that incorporates Computer Aided Design (CAD) and Computer Aided Engineering (CAE) to develop a thermal model that has the advantages of Finite Element Methods (FEM) without abandoning the unique advantages of Finite Difference Methods (FDM) in the analysis of thermal systems. The incorporation of existing CAD geometry, the powerful use of a pre and post processor and the ability to do interdisciplinary analysis, will be described.

  7. Volume-averaged SAR in adult and child head models when using mobile phones: a computational study with detailed CAD-based models of commercial mobile phones.

    PubMed

    Keshvari, Jafar; Heikkilä, Teemu

    2011-12-01

    Previous studies comparing SAR differences in the heads of children and adults used highly simplified generic models or half-wave dipole antennas. The objective of this study was to investigate the SAR difference in the heads of children and adults using realistic EMF sources based on CAD models of commercial mobile phones. Four MRI-based head phantoms were used in the study. CAD models of Nokia 8310 and 6630 mobile phones were used as exposure sources. Commercially available FDTD software was used for the SAR calculations. SAR values were simulated at frequencies of 900 MHz and 1747 MHz for the Nokia 8310, and 900 MHz, 1747 MHz and 1950 MHz for the Nokia 6630. The main finding of this study was that the SAR distribution/variation in the head models depends strongly on the structure of the antenna and the phone model, which suggests that the type of the exposure source is the main parameter to focus on in EMF exposure studies. Although the previous findings regarding the significant roles of head anatomy, phone position, frequency, local tissue inhomogeneity, and tissue composition in the exposed area were confirmed, the SAR values and SAR distributions caused by generic source models cannot be extrapolated to real device exposures. The general conclusion is that, from a volume-averaged SAR point of view, no systematic differences between child and adult heads were found.

  8. Computational methods for efficient structural reliability and reliability sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.

    1993-01-01

    This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
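
    The estimator underneath importance-sampling reliability analysis is the density-ratio identity P_f = E_h[ I(g(X) <= 0) f(X)/h(X) ]. A fixed-density Python illustration (the adaptive re-centering and incremental growth of the sampling domain that define AIS are omitted; the limit state and the shift toward the failure region are toy choices):

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(0)

      def g(x):                        # limit state: failure when g(x) <= 0
          return 4.0 - x[..., 0] - x[..., 1]

      # Sampling density h: standard normal shifted toward the failure region.
      shift = np.array([2.0, 2.0])
      n = 100_000
      x = rng.normal(size=(n, 2)) + shift

      # Weight = true joint PDF over sampling PDF, evaluated in log space.
      log_w = norm.logpdf(x).sum(axis=1) - norm.logpdf(x - shift).sum(axis=1)
      pf = np.mean((g(x) <= 0) * np.exp(log_w))
      print("P_f ~", pf)               # exact: 1 - Phi(4/sqrt(2)) ~ 2.34e-3

    Plain Monte Carlo would spend almost all of its samples in the safe region at this failure probability; shifting the sampling density concentrates the effort where the indicator is nonzero, which is the effect AIS achieves adaptively.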

  9. A Computationally Efficient Method for Polyphonic Pitch Estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Ruohua; Reiss, Joshua D.; Mattavelli, Marco; Zoia, Giorgio

    2009-12-01

    This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimation is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. This spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then the incorrect estimations are removed according to spectral irregularity and knowledge of the harmonic structures of the music notes played on commonly used music instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and results demonstrate the high performance and computational efficiency of the approach.
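
    The harmonic-grouping stage can be sketched generically: sum spectral energy at the harmonic positions of each candidate fundamental to form a pitch-energy spectrum, then peak-pick with suppression. A plain FFT magnitude spectrum stands in for the RTFI front end here, so this shows only the grouping idea; the toy signal and all parameters are ours:

      import numpy as np

      def pitch_salience(signal, fs, f0_grid, n_harm=5):
          """Pitch-energy spectrum: sum spectral energy at harmonics of each f0."""
          spec = np.abs(np.fft.rfft(signal * np.hanning(len(signal)))) ** 2
          df = fs / len(signal)                  # FFT bin spacing in Hz
          sal = np.empty(len(f0_grid))
          for i, f0 in enumerate(f0_grid):
              bins = np.round(np.arange(1, n_harm + 1) * f0 / df).astype(int)
              sal[i] = spec[bins[bins < len(spec)]].sum()
          return sal

      fs = 8000
      t = np.arange(4096) / fs
      tone = lambda f0: sum(np.sin(2 * np.pi * k * f0 * t) / k for k in range(1, 4))
      sig = tone(220.0) + tone(277.0)            # two simultaneous notes

      grid = np.arange(100.0, 500.0, 1.0)
      sal = pitch_salience(sig, fs, grid)
      found = []
      for _ in range(2):                         # greedy peak-picking with suppression
          i = int(np.argmax(sal))
          found.append(grid[i])
          sal[max(0, i - 20):i + 20] = 0.0
      print(sorted(found))                       # expect values near 220 and 277

    Subharmonic candidates (e.g., 110 Hz under a 220 Hz note) also accumulate energy, which is why the paper's second stage prunes estimates using spectral irregularity and instrument knowledge.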

  10. Efficient and accurate computation of the incomplete Airy functions

    NASA Technical Reports Server (NTRS)

    Constantinides, E. D.; Marhefka, R. J.

    1993-01-01

    The incomplete Airy integrals serve as canonical functions for the uniform ray optical solutions to several high-frequency scattering and diffraction problems that involve a class of integrals characterized by two stationary points that are arbitrarily close to one another or to an integration endpoint. Integrals with such analytical properties describe transition region phenomena associated with composite shadow boundaries. An efficient and accurate method for computing the incomplete Airy functions would make the solutions to such problems useful for engineering purposes. In this paper a convergent series solution for the incomplete Airy functions is derived. Asymptotic expansions involving several terms are also developed and serve as large argument approximations. The combination of the series solution with the asymptotic formulae provides for an efficient and accurate computation of the incomplete Airy functions. Validation of accuracy is accomplished using direct numerical integration data.

  11. Full-mouth rehabilitation with monolithic CAD/CAM-fabricated hybrid and all-ceramic materials: A case report and 3-year follow up.

    PubMed

    Selz, Christian F; Vuck, Alexander; Guess, Petra C

    2016-02-01

    Esthetic full-mouth rehabilitation represents a great challenge for clinicians and dental technicians. Computer-aided design/computer-assisted manufacture (CAD/CAM) technology and novel ceramic materials in combination with adhesive cementation provide a reliable, predictable, and economic workflow. Polychromatic feldspathic CAD/CAM ceramics that are specifically designed for anterior indications result in superior esthetics, whereas novel CAD/CAM hybrid ceramics provide sufficient fracture resistance and adsorption of the occlusal load in posterior areas. Screw-retained monolithic CAD/CAM lithium disilicate crowns (ie, hybrid abutment crowns) represent a reliable and time- and cost-efficient prosthetic implant solution. This case report details a CAD/CAM approach to the full-arch rehabilitation of a 65-year-old patient with tooth- and implant-supported restorations and provides an overview of the applied CAD/CAM materials and the utilized chairside intraoral scanner. The esthetics, functional occlusion, and gingival and peri-implant tissues remained stable over a follow-up period of 3 years. No signs of fractures within the restorations were observed.

  12. Convolutional networks for fast, energy-efficient neuromorphic computing

    PubMed Central

    Esser, Steven K.; Merolla, Paul A.; Arthur, John V.; Cassidy, Andrew S.; Appuswamy, Rathinakumar; Andreopoulos, Alexander; Berg, David J.; McKinstry, Jeffrey L.; Melano, Timothy; Barch, Davis R.; di Nolfo, Carmelo; Datta, Pallab; Amir, Arnon; Taba, Brian; Flickner, Myron D.; Modha, Dharmendra S.

    2016-01-01

    Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware’s underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer. PMID:27651489

  13. Efficient MATLAB computations with sparse and factored tensors.

    SciTech Connect

    Bader, Brett William; Kolda, Tamara Gibson (Sandia National Lab, Livermore, CA)

    2006-12-01

    In this paper, the term tensor refers simply to a multidimensional or N-way array, and we consider how specially structured tensors allow for efficient storage and computation. First, we study sparse tensors, which have the property that the vast majority of the elements are zero. We propose storing sparse tensors using coordinate format and describe the computational efficiency of this scheme for various mathematical operations, including those typical to tensor decomposition algorithms. Second, we study factored tensors, which have the property that they can be assembled from more basic components. We consider two specific types: a Tucker tensor can be expressed as the product of a core tensor (which itself may be dense, sparse, or factored) and a matrix along each mode, and a Kruskal tensor can be expressed as the sum of rank-1 tensors. We are interested in the case where the storage of the components is less than the storage of the full tensor, and we demonstrate that many elementary operations can be computed using only the components. All of the efficiencies described in this paper are implemented in the Tensor Toolbox for MATLAB.
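
    Coordinate storage keeps one subscript row and one value per nonzero, and many operations then touch only the nonzeros. A Python sketch of that scheme with a mode-n tensor-times-matrix product (this mirrors the idea, not the Tensor Toolbox API; the dense output is purely for illustration):

      import numpy as np

      class SparseTensor:
          """Coordinate (COO) storage: one row of subscripts per nonzero, plus its value."""
          def __init__(self, subs, vals, shape):
              self.subs = np.asarray(subs)           # (nnz, ndim) integer subscripts
              self.vals = np.asarray(vals, dtype=float)
              self.shape = tuple(shape)

          def ttm(self, U, mode):
              """Tensor-times-matrix along `mode`; loops over nonzeros only."""
              new_shape = list(self.shape)
              new_shape[mode] = U.shape[0]
              out = np.zeros(new_shape)
              for sub, v in zip(self.subs, self.vals):
                  idx = list(sub)
                  for r in range(U.shape[0]):        # scatter v * U[r, sub[mode]]
                      idx[mode] = r
                      out[tuple(idx)] += v * U[r, sub[mode]]
              return out

      # 3-way tensor with two nonzeros, multiplied along mode 0.
      T = SparseTensor(subs=[[0, 1, 2], [3, 0, 1]], vals=[1.0, 2.0], shape=(4, 2, 3))
      U = np.arange(8.0).reshape(2, 4)               # maps mode-0 dimension 4 -> 2
      print(T.ttm(U, mode=0).shape)                  # (2, 2, 3)

    The cost scales with the number of nonzeros rather than the full tensor size, which is the point of the coordinate scheme.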

  14. Convolutional networks for fast, energy-efficient neuromorphic computing.

    PubMed

    Esser, Steven K; Merolla, Paul A; Arthur, John V; Cassidy, Andrew S; Appuswamy, Rathinakumar; Andreopoulos, Alexander; Berg, David J; McKinstry, Jeffrey L; Melano, Timothy; Barch, Davis R; di Nolfo, Carmelo; Datta, Pallab; Amir, Arnon; Taba, Brian; Flickner, Myron D; Modha, Dharmendra S

    2016-10-11

    Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware's underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer.

  15. Computationally efficient ASIC implementation of space-time block decoding

    NASA Astrophysics Data System (ADS)

    Cavus, Enver; Daneshrad, Babak

    2002-12-01

    In this paper, we describe a computationally efficient ASIC design that leads to a highly power- and area-efficient implementation of a space-time block decoder compared to a direct implementation of the original algorithm. Our study analyzes alternative methods of evaluating as well as implementing the previously reported maximum likelihood algorithms (Tarokh et al. 1998) for a more favorable hardware design. In our previous study (Cavus et al. 2001), after defining some intermediate variables at the algorithm level, highly computationally efficient decoding approaches, namely the sign and double-sign methods, were developed and their effectiveness illustrated for 2x2, 8x3 and 8x4 systems using BPSK, QPSK, 8-PSK, or 16-QAM modulation. In this work, alternative architectures for the decoder implementation are investigated and an implementation with a low computational cost is proposed. The techniques applied at the algorithm and architecture levels lead to a substantial simplification of the hardware and significantly reduced power consumption. The proposed architecture is being fabricated in a TSMC 0.18 μm process.
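
    For the simplest 2x1 Alamouti code underlying such decoders, maximum-likelihood detection reduces to linear combining, which is what makes intermediate-variable and sign-based simplifications possible; in standard notation (the 8x3 and 8x4 codes treated in the paper generalize this):

      r_1 = h_1 s_1 + h_2 s_2 + n_1, \qquad
      r_2 = -h_1 s_2^{*} + h_2 s_1^{*} + n_2,

      \tilde{s}_1 = h_1^{*} r_1 + h_2 r_2^{*} = \left(|h_1|^2 + |h_2|^2\right) s_1 + \tilde{n}_1, \qquad
      \tilde{s}_2 = h_2^{*} r_1 - h_1 r_2^{*} = \left(|h_1|^2 + |h_2|^2\right) s_2 + \tilde{n}_2.

    Each decision statistic is then sliced to the nearest constellation point, so the datapath reduces to complex multiply-accumulates and comparisons.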

  16. A/E/C Graphics Standard: Release 2.0 (formerly titled CAD Drafting Standard)

    DTIC Science & Technology

    2015-08-01

    [Report documentation page; only fragments are recoverable.] The standard collects and documents practices for Building Information Modeling (BIM), Civil Information Modeling (CIM), and Computer-Aided Design (CAD); it is through the collection and documentation of these practices that consistent… Acronyms: A/E/C, Architecture, Engineering, and Construction; BIM, Building Information Modeling; CAD, Computer-Aided Design; CIM, Civil Information Modeling.

  17. Energy Efficient Biomolecular Simulations with FPGA-based Reconfigurable Computing

    SciTech Connect

    Hampton, Scott S; Agarwal, Pratul K

    2010-05-01

    Reconfigurable computing (RC) is being investigated as a hardware solution for improving time-to-solution for biomolecular simulations. A number of popular molecular dynamics (MD) codes are used to study various aspects of biomolecules. These codes are now capable of simulating nanosecond time-scale trajectories per day on conventional microprocessor-based hardware, but biomolecular processes often occur at the microsecond time-scale or longer. A wide gap exists between the desired and achievable simulation capability; therefore, there is considerable interest in alternative algorithms and hardware for improving the time-to-solution of MD codes. The fine-grain parallelism provided by Field Programmable Gate Arrays (FPGA) combined with their low power consumption make them an attractive solution for improving the performance of MD simulations. In this work, we use an FPGA-based coprocessor to accelerate the compute-intensive calculations of LAMMPS, a popular MD code, achieving up to 5.5 fold speed-up on the non-bonded force computations of the particle mesh Ewald method and up to 2.2 fold speed-up in overall time-to-solution, and potentially an increase by a factor of 9 in power-performance efficiencies for the pair-wise computations. The results presented here provide an example of the multi-faceted benefits to an application in a heterogeneous computing environment.

  18. Improving robustness and computational efficiency using modern C++

    SciTech Connect

    Paterno, M.; Kowalkowski, J.; Green, C.

    2014-01-01

    For nearly two decades, the C++ programming language has been the dominant programming language for experimental HEP. The publication of ISO/IEC 14882:2011, the current version of the international standard for the C++ programming language, makes available a variety of language and library facilities for improving the robustness, expressiveness, and computational efficiency of C++ code. However, much of the C++ written by the experimental HEP community does not take advantage of the features of the language to obtain these benefits, either due to lack of familiarity with these features or concern that these features must somehow be computationally inefficient. In this paper, we address some of the features of modern C++, and show how they can be used to make programs that are both robust and computationally efficient. We compare and contrast simple yet realistic examples of some common implementation patterns in C, currently-typical C++, and modern C++, and show (when necessary, down to the level of generated assembly language code) the quality of the executable code produced by recent C++ compilers, with the aim of allowing the HEP community to make informed decisions on the costs and benefits of the use of modern C++.

  19. Improving robustness and computational efficiency using modern C++

    NASA Astrophysics Data System (ADS)

    Paterno, M.; Kowalkowski, J.; Green, C.

    2014-06-01

    For nearly two decades, the C++ programming language has been the dominant programming language for experimental HEP. The publication of ISO/IEC 14882:2011, the current version of the international standard for the C++ programming language, makes available a variety of language and library facilities for improving the robustness, expressiveness, and computational efficiency of C++ code. However, much of the C++ written by the experimental HEP community does not take advantage of the features of the language to obtain these benefits, either due to lack of familiarity with these features or concern that these features must somehow be computationally inefficient. In this paper, we address some of the features of modern C++, and show how they can be used to make programs that are both robust and computationally efficient. We compare and contrast simple yet realistic examples of some common implementation patterns in C, currently-typical C++, and modern C++, and show (when necessary, down to the level of generated assembly language code) the quality of the executable code produced by recent C++ compilers, with the aim of allowing the HEP community to make informed decisions on the costs and benefits of the use of modern C++.

  20. CAD/CAM for optomechatronics

    NASA Astrophysics Data System (ADS)

    Zhou, Haiguang; Han, Min

    2003-10-01

    We focus on CAD/CAM for optomechatronics. We have developed a CAD/CAM package that covers not only mechanics but also optics and electronics. The software can be used for training and education. We introduce mechanical CAD, optical CAD, and electrical CAD, and show how to draw circuit diagrams, mechanical diagrams, and luminous-transmission diagrams, from 2D drawing to 3D drawing. We describe how to create 2D and 3D parts for optomechatronics, how to edit tool paths, how to select process parameters, how to run the post processor, how to dynamically display the tool path, and how to generate the CNC program. We also introduce the joint application of CAD and CAM, and address how to meet the combined requirements of optics, mechanics, and electronics.

  1. Computing highly specific and mismatch tolerant oligomers efficiently.

    PubMed

    Yamada, Tomoyuki; Morishita, Shinichi

    2003-01-01

    The sequencing of the genomes of a variety of species and the growing databases containing expressed sequence tags (ESTs) and complementary DNAs (cDNAs) facilitate the design of highly specific oligomers for use as genomic markers, PCR primers, or DNA oligo microarrays. The first step in evaluating the specificity of short oligomers of about twenty units in length is to determine the frequencies at which the oligomers occur. However, for oligomers longer than about fifty units this is not efficient, as they usually have a frequency of only 1. A more suitable procedure is to consider the mismatch tolerance of an oligomer, that is, the minimum number of mismatches that allows a given oligomer to match a sub-sequence other than the target sequence anywhere in the genome or the EST database. However, calculating the exact value of mismatch tolerance is computationally costly and impractical. Therefore, we studied the problem of checking whether an oligomer meets the constraint that its mismatch tolerance is no less than a given threshold. Here, we present an efficient dynamic programming algorithm solution that utilizes suffix and height arrays. We demonstrated the effectiveness of this algorithm by efficiently computing a dense list of oligo-markers applicable to the human genome. Experimental results show that the algorithm runs orders of magnitude faster than Abrahamson's well-known algorithm and is able to enumerate 63% to approximately 79% of qualified oligomers.
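
    The thresholded question posed here (is the minimum off-target mismatch count at least d?) admits early termination: a window can be abandoned as soon as d mismatches accumulate. A brute-force Python sketch of that check (the paper's contribution is a far faster suffix-array/height-array dynamic program; the toy sequence and helper name are ours):

      def meets_tolerance(oligo, genome, target_pos, d):
          """True iff every non-target window differs from oligo in >= d places."""
          k = len(oligo)
          for start in range(len(genome) - k + 1):
              if start == target_pos:
                  continue                          # skip the intended binding site
              mismatches = 0
              for a, b in zip(oligo, genome[start:start + k]):
                  if a != b:
                      mismatches += 1
                      if mismatches >= d:           # early exit: window already fine
                          break
              else:
                  return False                      # window finished with < d mismatches
          return True

      genome = "ACGTACGTTTGACGTAACGTACGTGGGACGTA"
      oligo = genome[8:16]                          # candidate oligo, target at position 8
      print(meets_tolerance(oligo, genome, 8, d=2))

    This scan is O(genome length x oligomer length) per oligomer, which is exactly the cost the suffix- and height-array formulation avoids when screening millions of candidates.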

  2. Computing highly specific and noise-tolerant oligomers efficiently.

    PubMed

    Yamada, Tomoyuki; Morishita, Shinichi

    2004-03-01

    The sequencing of the genomes of a variety of species and the growing databases containing expressed sequence tags (ESTs) and complementary DNAs (cDNAs) facilitate the design of highly specific oligomers for use as genomic markers, PCR primers, or DNA oligo microarrays. The first step in evaluating the specificity of short oligomers of about 20 units in length is to determine the frequencies at which the oligomers occur. However, for oligomers longer than about fifty units this is not efficient, as they usually have a frequency of only 1. A more suitable procedure is to consider the mismatch tolerance of an oligomer, that is, the minimum number of mismatches that allows a given oligomer to match a substring other than the target sequence anywhere in the genome or the EST database. However, calculating the exact value of mismatch tolerance is computationally costly and impractical. Therefore, we studied the problem of checking whether an oligomer meets the constraint that its mismatch tolerance is no less than a given threshold. Here, we present an efficient dynamic programming algorithm solution that utilizes suffix and height arrays. We demonstrated the effectiveness of this algorithm by efficiently computing a dense list of numerous oligo-markers applicable to the human genome. Experimental results show that the algorithm runs orders of magnitude faster than Abrahamson's well-known algorithm and is able to enumerate 65% to approximately 76% of qualified oligomers.

  3. Methods for increased computational efficiency of multibody simulations

    NASA Astrophysics Data System (ADS)

    Epple, Alexander

    This thesis is concerned with the efficient numerical simulation of finite element based flexible multibody systems. Scaling operations are systematically applied to the governing index-3 differential algebraic equations in order to solve the problem of ill conditioning for small time step sizes. The importance of augmented Lagrangian terms is demonstrated. The use of fast sparse solvers is justified for the solution of the linearized equations of motion resulting in significant savings of computational costs. Three time stepping schemes for the integration of the governing equations of flexible multibody systems are discussed in detail. These schemes are the two-stage Radau IIA scheme, the energy decaying scheme, and the generalized-α method. Their formulations are adapted to the specific structure of the governing equations of flexible multibody systems. The efficiency of the time integration schemes is comprehensively evaluated on a series of test problems. Formulations for structural and constraint elements are reviewed and the problem of interpolation of finite rotations in geometrically exact structural elements is revisited. This results in the development of a new improved interpolation algorithm, which preserves the objectivity of the strain field and guarantees stable simulations in the presence of arbitrarily large rotations. Finally, strategies for the spatial discretization of beams in the presence of steep variations in cross-sectional properties are developed. These strategies reduce the number of degrees of freedom needed to accurately analyze beams with discontinuous properties, resulting in improved computational efficiency.
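
    Of the three integrators named above, the generalized-α method is compact enough to sketch. Below it is applied to a single-DOF linear oscillator m·a + c·v + k·d = 0 using the standard Chung-Hulbert parameterization; this is our toy illustration of the update structure only, omitting the thesis's scaling, augmented Lagrangian terms, and index-3 constraint handling.

        import numpy as np

        def generalized_alpha(m, c, k, d0, v0, h, steps, rho_inf=0.8):
            # Chung-Hulbert parameters from the spectral radius rho_inf in [0, 1]
            am = (2.0 * rho_inf - 1.0) / (rho_inf + 1.0)
            af = rho_inf / (rho_inf + 1.0)
            gamma = 0.5 - am + af
            beta = 0.25 * (1.0 - am + af) ** 2

            d, v = d0, v0
            a = -(c * v0 + k * d0) / m  # consistent initial acceleration
            lhs = (1 - am) * m + (1 - af) * (c * gamma * h + k * beta * h * h)
            history = [d]
            for _ in range(steps):
                d_pred = d + h * v + h * h * (0.5 - beta) * a  # Newmark predictors
                v_pred = v + h * (1.0 - gamma) * a
                # enforce m*a + c*v + k*d = 0 at the generalized midpoints
                rhs = -(am * m * a
                        + c * (af * v + (1 - af) * v_pred)
                        + k * (af * d + (1 - af) * d_pred))
                a = rhs / lhs
                d = d_pred + h * h * beta * a
                v = v_pred + h * gamma * a
                history.append(d)
            return np.array(history)

        # undamped oscillator with omega = 2: exact displacement is cos(2 t)
        traj = generalized_alpha(m=1.0, c=0.0, k=4.0, d0=1.0, v0=0.0, h=0.01, steps=1000)
        print(traj[-1], np.cos(2.0 * 10.0))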

  4. ProperCAD: A portable object-oriented parallel environment for VLSI CAD

    NASA Technical Reports Server (NTRS)

    Ramkumar, Balkrishna; Banerjee, Prithviraj

    1993-01-01

    Most parallel algorithms for VLSI CAD proposed to date have one important drawback: they work efficiently only on the machines for which they were designed. As a result, algorithms designed to date are dependent on the architecture for which they are developed and do not port easily to other parallel architectures. A new project under way to address this problem is described. A Portable object-oriented parallel environment for CAD algorithms (ProperCAD) is being developed. The objectives of this research are (1) to develop new parallel algorithms that run in a portable object-oriented environment (CAD algorithms are being developed on a general-purpose platform for portable parallel programming called CARM, along with a truly object-oriented C++ environment specialized for CAD applications); and (2) to design the parallel algorithms around a good sequential algorithm with a well-defined parallel-sequential interface (permitting the parallel algorithm to benefit from future developments in sequential algorithms). One CAD application that has been implemented as part of the ProperCAD project, flat VLSI circuit extraction, is described. The algorithm, its implementation, and its performance on a range of parallel machines are discussed in detail. It currently runs on an Encore Multimax, a Sequent Symmetry, Intel iPSC/2 and i860 hypercubes, an NCUBE 2 hypercube, and a network of Sun Sparc workstations. Performance data are also provided for the other applications developed: test pattern generation for sequential circuits, parallel logic synthesis, and standard cell placement.

  5. DeviceEditor visual biological CAD canvas

    PubMed Central

    2012-01-01

    Background Biological Computer Aided Design (bioCAD) assists the de novo design and selection of existing genetic components to achieve a desired biological activity, as part of an integrated design-build-test cycle. To meet the emerging needs of Synthetic Biology, bioCAD tools must address the increasing prevalence of combinatorial library design, design rule specification, and scar-less multi-part DNA assembly. Results We report the development and deployment of web-based bioCAD software, DeviceEditor, which provides a graphical design environment that mimics the intuitive visual whiteboard design process practiced in biological laboratories. The key innovations of DeviceEditor include visual combinatorial library design, direct integration with scar-less multi-part DNA assembly design automation, and a graphical user interface for the creation and modification of design specification rules. We demonstrate how biological designs are rendered on the DeviceEditor canvas, and we present effective visualizations of genetic component ordering and combinatorial variations within complex designs. Conclusions DeviceEditor liberates researchers from DNA base-pair manipulation, and enables users to create successful prototypes using standardized, functional, and visual abstractions. Open and documented software interfaces support further integration of DeviceEditor with other bioCAD tools and software platforms. DeviceEditor saves researcher time and institutional resources through correct-by-construction design, the automation of tedious tasks, design reuse, and the minimization of DNA assembly costs. PMID:22373390

  6. Ergonomics Perspective in Agricultural Research: A User-Centred Approach Using CAD and Digital Human Modeling (DHM) Technologies

    NASA Astrophysics Data System (ADS)

    Patel, Thaneswer; Sanjog, J.; Karmakar, Sougata

    2016-09-01

    Computer-aided Design (CAD) and Digital Human Modeling (DHM) (specialized CAD software for virtual human representation) technologies offer unique opportunities to incorporate human factors proactively in design development. The challenges of enhancing agricultural productivity through improvement of agricultural tools/machinery and better human-machine compatibility can be met by adoption of these modern technologies. The objectives of the present work are to provide a detailed picture of CAD and DHM applications in the agricultural sector, and to identify means for wide adoption of these technologies for the design and development of cost-effective, user-friendly, efficient and safe agricultural tools/equipment and operator workplaces. An extensive literature review was conducted to systematically organize the available information and draw inferences. Although various CAD software applications have gained momentum in agricultural research, particularly for the design and manufacturing of agricultural equipment/machinery, the use of DHM is still in its infancy in this sector. The current review discusses the reasons for the limited adoption of these technologies in the agricultural sector and the steps to be taken for their wider adoption. It also suggests possible future research directions toward better ergonomic design strategies for the improvement of agricultural equipment/machines and workstations through application of CAD and DHM.

  7. Improving the efficiency of abdominal aortic aneurysm wall stress computations.

    PubMed

    Zelaya, Jaime E; Goenezen, Sevan; Dargon, Phong T; Azarbal, Amir-Farzin; Rugonyi, Sandra

    2014-01-01

    An abdominal aortic aneurysm (AAA) is a pathological dilation of the abdominal aorta, which carries a high mortality rate if ruptured. The most commonly used surrogate marker of rupture risk is the maximal transverse diameter of the aneurysm. More recent studies suggest that wall stress from models of patient-specific aneurysm geometries, extracted, for instance, from computed tomography images, may be a more accurate predictor of rupture risk and an important factor in AAA size progression. However, quantification of wall stress is typically computationally intensive and time-consuming, mainly due to the nonlinear mechanical behavior of the abdominal aortic aneurysm walls. These difficulties have limited the potential of computational models in clinical practice. To facilitate computation of wall stresses, we propose to use a linear approach that ensures equilibrium of wall stresses in the aneurysms. This proposed linear model approach is easy to implement and eliminates the burden of nonlinear computations. To assess the accuracy of our proposed approach to compute wall stresses, results from idealized and patient-specific model simulations were compared to those obtained using conventional approaches and to those of a hypothetical, reference abdominal aortic aneurysm model. For the reference model, wall mechanical properties and the initial unloaded and unstressed configuration were assumed to be known, and the resulting wall stresses were used as reference for comparison. Our proposed linear approach accurately approximates wall stresses for varying model geometries and wall material properties. Our findings suggest that the proposed linear approach could be used as an effective, efficient, easy-to-use clinical tool to estimate patient-specific wall stresses.

  8. Generating Composite Overlapping Grids on CAD Geometries

    SciTech Connect

    Henshaw, W.D.

    2002-02-07

    We describe some algorithms and tools that have been developed to generate composite overlapping grids on geometries that have been defined with computer aided design (CAD) programs. This process consists of five main steps. Starting from a description of the surfaces defining the computational domain we (1) correct errors in the CAD representation, (2) determine the topology of the patched surface, (3) build a global triangulation of the surface, (4) construct structured surface and volume grids using hyperbolic grid generation, and (5) generate the overlapping grid by determining the holes and the interpolation points. The overlapping grid generator which is used for the final step also supports the rapid generation of grids for block-structured adaptive mesh refinement and for moving grids. These algorithms have been implemented as part of the Overture object-oriented framework.

  9. Adding computationally efficient realism to Monte Carlo turbulence simulation

    NASA Technical Reports Server (NTRS)

    Campbell, C. W.

    1985-01-01

    Frequently in aerospace vehicle flight simulation, random turbulence is generated using the assumption that the craft is small compared to the length scales of turbulence. The turbulence is presumed to vary only along the flight path of the vehicle but not across the vehicle span. The addition of the realism of three-dimensionality is a worthy goal, but any such attempt will not gain acceptance in the simulator community unless it is computationally efficient. A concept for adding three-dimensional realism with a minimum of computational complexity is presented. The concept involves the use of close rational approximations to irrational spectra and cross-spectra so that systems of stable, explicit difference equations can be used to generate the turbulence.
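
    The following sketch illustrates the concept with the simplest such filter: a rational (first-order, Dryden-type) spectrum realized as a stable, explicit difference equation driven by white noise. The parameter names (sigma for gust intensity, L for length scale, V for airspeed) follow common convention and are our assumptions, not details taken from the paper.

        import numpy as np

        def gust_series(sigma, L, V, dt, n, seed=0):
            """First-order (Dryden-type) gust filter: u[i] = a*u[i-1] + b*w[i]."""
            rng = np.random.default_rng(seed)
            a = 1.0 - V * dt / L                   # stable whenever dt < L / V
            b = sigma * np.sqrt(2.0 * V * dt / L)  # sets the output variance ~sigma^2
            u = np.zeros(n)
            for i in range(1, n):
                u[i] = a * u[i - 1] + b * rng.standard_normal()
            return u

        u = gust_series(sigma=1.5, L=533.0, V=100.0, dt=0.01, n=100_000)
        print(u.std())  # close to sigma when V*dt/L << 1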

  10. Efficient simulation of open quantum system in duality quantum computing

    NASA Astrophysics Data System (ADS)

    Wei, Shi-Jie; Long, Gui-Lu

    2016-11-01

    Practical quantum systems are open systems due to interactions with their environment. Understanding the evolution of open-system dynamics is important for studying quantum noise processes, designing quantum error correcting codes, and performing simulations of open quantum systems. Here we propose an efficient quantum algorithm for simulating the evolution of an open quantum system on a duality quantum computer. In contrast to the unitary evolution in a usual quantum computer, the evolution operator in a duality quantum computer is a linear combination of unitary operators. In this duality algorithm, the time evolution of the open quantum system is realized via Kraus operators, which are naturally realized in duality quantum computing. Compared to Lloyd's quantum algorithm [Science 273, 1073 (1996)], the dependence on the dimension of the open quantum system in our algorithm is decreased. Moreover, our algorithm uses a truncated Taylor series of the evolution operators, exponentially improving the precision compared with existing quantum simulation algorithms based on unitary evolution operations.
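
    The Kraus-operator evolution that the algorithm realizes is easy to check classically. The sketch below applies the amplitude-damping channel as a sum of Kraus conjugations with numpy and verifies trace preservation; it is our algebra check only, not a simulation of the duality algorithm itself.

        import numpy as np

        p = 0.3  # per-step decay probability
        K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - p)]], dtype=complex)
        K1 = np.array([[0.0, np.sqrt(p)], [0.0, 0.0]], dtype=complex)
        # completeness: K0^dag K0 + K1^dag K1 = I, so the map is trace preserving

        def apply_channel(rho, kraus):
            return sum(K @ rho @ K.conj().T for K in kraus)

        rho = np.array([[0.0, 0.0], [0.0, 1.0]], dtype=complex)  # excited state |1><1|
        for step in range(3):
            rho = apply_channel(rho, [K0, K1])
            # population decays as (1-p)^n while the trace stays 1
            print(step, rho[1, 1].real, np.trace(rho).real)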

  11. Experiences With Efficient Methodologies for Teaching Computer Programming to Geoscientists

    NASA Astrophysics Data System (ADS)

    Jacobs, Christian T.; Gorman, Gerard J.; Rees, Huw E.; Craig, Lorraine E.

    2016-08-01

    Computer programming was once thought of as a skill required only by professional software developers. But today, given the ubiquitous nature of computation and data science, it is quickly becoming necessary for all scientists and engineers to have at least a basic knowledge of how to program. Teaching programming, particularly to students with little or no computing background, is well known to be a difficult task. However, there is also a wealth of evidence-based teaching practices for programming skills that can be applied to greatly improve learning outcomes and the student experience. Adopting these practices naturally gives rise to greater learning efficiency, which is critical if programming is to be integrated into an already busy geoscience curriculum. This paper considers an undergraduate computer programming course, run during the last 5 years in the Department of Earth Science and Engineering at Imperial College London. The teaching methodologies used each year are discussed alongside the challenges that were encountered and how the methodologies affected student performance. Anonymised student marks and feedback are used to illustrate this, as well as how the adjustments made to the course eventually resulted in a highly effective learning environment.

  12. Increasing Computational Efficiency of Cochlear Models Using Boundary Layers

    PubMed Central

    Alkhairy, Samiya A.; Shera, Christopher A.

    2016-01-01

    Our goal is to develop methods to improve the efficiency of computational models of the cochlea for applications that require the solution accurately only within a basal region of interest, specifically by decreasing the number of spatial sections needed for simulation of the problem with good accuracy. We design algebraic spatial and parametric transformations to computational models of the cochlea. These transformations are applied after the basal region of interest and allow for spatial preservation, driven by the natural characteristics of approximate spatial causality of cochlear models. The project is of foundational nature and hence the goal is to design, characterize and develop an understanding and framework rather than optimization and globalization. Our scope is as follows: designing the transformations; understanding the mechanisms by which computational load is decreased for each transformation; development of performance criteria; characterization of the results of applying each transformation to a specific physical model and discretization and solution schemes. In this manuscript, we introduce one of the proposed methods (complex spatial transformation) for a case study physical model that is a linear, passive, transmission line model in which the various abstraction layers (electric parameters, filter parameters, wave parameters) are clearer than other models. This is conducted in the frequency domain for multiple frequencies using a second order finite difference scheme for discretization and direct elimination for solving the discrete system of equations. The performance is evaluated using two developed simulative criteria for each of the transformations. In conclusion, the developed methods serve to increase efficiency of a computational traveling wave cochlear model when spatial preservation can hold, while maintaining good correspondence with the solution of interest and good accuracy, for applications in which the interest is in the solution

  13. Increasing computational efficiency of cochlear models using boundary layers

    NASA Astrophysics Data System (ADS)

    Alkhairy, Samiya A.; Shera, Christopher A.

    2015-12-01

    Our goal is to develop methods to improve the efficiency of computational models of the cochlea for applications that require the solution accurately only within a basal region of interest, specifically by decreasing the number of spatial sections needed for simulation of the problem with good accuracy. We design algebraic spatial and parametric transformations to computational models of the cochlea. These transformations are applied after the basal region of interest and allow for spatial preservation, driven by the natural characteristics of approximate spatial causality of cochlear models. The project is of foundational nature and hence the goal is to design, characterize and develop an understanding and framework rather than optimization and globalization. Our scope is as follows: designing the transformations; understanding the mechanisms by which computational load is decreased for each transformation; development of performance criteria; characterization of the results of applying each transformation to a specific physical model and discretization and solution schemes. In this manuscript, we introduce one of the proposed methods (complex spatial transformation) for a case study physical model that is a linear, passive, transmission line model in which the various abstraction layers (electric parameters, filter parameters, wave parameters) are clearer than other models. This is conducted in the frequency domain for multiple frequencies using a second order finite difference scheme for discretization and direct elimination for solving the discrete system of equations. The performance is evaluated using two developed simulative criteria for each of the transformations. In conclusion, the developed methods serve to increase efficiency of a computational traveling wave cochlear model when spatial preservation can hold, while maintaining good correspondence with the solution of interest and good accuracy, for applications in which the interest is in the solution

  14. Efficient quantum algorithm for computing n-time correlation functions.

    PubMed

    Pedernales, J S; Di Candia, R; Egusquiza, I L; Casanova, J; Solano, E

    2014-07-11

    We propose a method for computing n-time correlation functions of arbitrary spinorial, fermionic, and bosonic operators, consisting of an efficient quantum algorithm that encodes these correlations in an initially added ancillary qubit for probe and control tasks. For spinorial and fermionic systems, the reconstruction of arbitrary n-time correlation functions requires the measurement of two ancilla observables, while for bosonic variables time derivatives of the same observables are needed. Finally, we provide examples applicable to different quantum platforms within the framework of linear response theory.

  15. Efficient Parallel Kernel Solvers for Computational Fluid Dynamics Applications

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He

    1997-01-01

    Distributed-memory parallel computers dominate today's parallel computing arena. These machines, such as the Intel Paragon, IBM SP2, and Cray Origin2000, have successfully delivered high-performance computing power for solving some of the so-called "grand-challenge" problems. Despite initial success, parallel machines have not been widely accepted in production engineering environments due to the complexity of parallel programming. On a parallel computing system, a task has to be partitioned and distributed appropriately among processors to reduce communication cost and to attain load balance. More importantly, even with careful partitioning and mapping, the performance of an algorithm may still be unsatisfactory, since conventional sequential algorithms may be serial in nature and may not be implemented efficiently on parallel machines. In many cases, new algorithms have to be introduced to increase parallel performance. In order to achieve optimal performance, in addition to partitioning and mapping, a careful performance study should be conducted for a given application to find a good algorithm-machine combination. This process, however, is usually painful and elusive. The goal of this project is to design and develop efficient parallel algorithms for highly accurate Computational Fluid Dynamics (CFD) simulations and other engineering applications. The work plan was to 1) develop highly accurate parallel numerical algorithms, 2) conduct preliminary testing to verify the effectiveness and potential of these algorithms, and 3) incorporate the newly developed algorithms into actual simulation packages. This work plan has been achieved. Two highly accurate, efficient Poisson solvers have been developed and tested based on two different approaches: (1) adopting a mathematical geometry which has a better capacity to describe the fluid, and (2) using a compact scheme to gain high-order accuracy in the numerical discretization. The previously developed Parallel Diagonal Dominant (PDD) algorithm

  16. IMPROVING TACONITE PROCESSING PLANT EFFICIENCY BY COMPUTER SIMULATION, Final Report

    SciTech Connect

    William M. Bond; Salih Ersayin

    2007-03-30

    This project involved industrial scale testing of a mineral processing simulator to improve the efficiency of a taconite processing plant, namely the Minorca mine. The Concentrator Modeling Center at the Coleraine Minerals Research Laboratory, University of Minnesota Duluth, enhanced the capabilities of available software, Usim Pac, by developing mathematical models needed for accurate simulation of taconite plants. This project provided funding for this technology to prove itself in the industrial environment. As the first step, data representing existing plant conditions were collected by sampling and sample analysis. Data were then balanced and provided a basis for assessing the efficiency of individual devices and the plant, and also for performing simulations aimed at improving plant efficiency. Performance evaluation served as a guide in developing alternative process strategies for more efficient production. A large number of computer simulations were then performed to quantify the benefits and effects of implementing these alternative schemes. Modification of makeup ball size was selected as the most feasible option for the target performance improvement. This was combined with replacement of existing hydrocyclones with more efficient ones. After plant implementation of these modifications, plant sampling surveys were carried out to validate findings of the simulation-based study. Plant data showed very good agreement with the simulated data, confirming results of simulation. After the implementation of modifications in the plant, several upstream bottlenecks became visible. Despite these bottlenecks limiting full capacity, a 7% improvement in concentrator energy efficiency was obtained. Further improvements in energy efficiency are expected in the near future. The success of this project demonstrated the feasibility of a simulation-based approach. Currently, the Center provides simulation-based service to all the iron ore mining companies operating in northern

  17. CAD/CAM improves productivity in nonaerospace job shops

    NASA Astrophysics Data System (ADS)

    Koenig, D. T.

    1982-12-01

    Business cost improvements that can result from Computer Aided Design/Computer Aided Manufacturing (CAD/CAM), when properly applied, are discussed. Emphasis is placed on the use of CAD/CAM for machine and process control, design and planning control, and production and measurement control. It is pointed out that the implementation of CAD/CAM should be based on the following priorities: (1) recognize interrelationships between the principal functions of CAD/CAM; (2) establish a Systems Council to determine overall strategy and specify the communications/decision-making system; (3) implement the communications/decision-making system to improve productivity; and (4) implement interactive graphics and other additions to further improve productivity.

  18. An image database management system for conducting CAD research

    NASA Astrophysics Data System (ADS)

    Gruszauskas, Nicholas; Drukker, Karen; Giger, Maryellen L.

    2007-03-01

    The development of image databases for CAD research is not a trivial task. The collection and management of images and their related metadata from multiple sources is a time-consuming but necessary process. By standardizing and centralizing the methods in which these data are maintained, one can generate subsets of a larger database that match the specific criteria needed for a particular research project in a quick and efficient manner. A research-oriented management system of this type is highly desirable in a multi-modality CAD research environment. An online, web-based database system for the storage and management of research-specific medical image metadata was designed for use with four modalities of breast imaging: screen-film mammography, full-field digital mammography, breast ultrasound and breast MRI. The system was designed to consolidate data from multiple clinical sources and provide the user with the ability to anonymize the data. Input concerning the type of data to be stored as well as desired searchable parameters was solicited from researchers in each modality. The backbone of the database was created using MySQL. A robust and easy-to-use interface for entering, removing, modifying and searching information in the database was created using HTML and PHP. This standardized system can be accessed using any modern web-browsing software and is fundamental for our various research projects on computer-aided detection, diagnosis, cancer risk assessment, multimodality lesion assessment, and prognosis. Our CAD database system stores large amounts of research-related metadata and successfully generates subsets of cases that match the user's desired search criteria.

  19. CAD of control systems: Application of nonlinear programming to a linear quadratic formulation

    NASA Technical Reports Server (NTRS)

    Fleming, P.

    1983-01-01

    The familiar suboptimal regulator design approach is recast as a constrained optimization problem and incorporated in a Computer Aided Design (CAD) package where both design objective and constraints are quadratic cost functions. This formulation permits the separate consideration of, for example, model following errors, sensitivity measures and control energy as objectives to be minimized or limits to be observed. Efficient techniques for computing the interrelated cost functions and their gradients are utilized in conjunction with a nonlinear programming algorithm. The effectiveness of the approach and the degree of insight into the problem which it affords is illustrated in a helicopter regulation design example.
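
    A minimal sketch of the formulation: treat the feedback gain as the decision variables, one quadratic cost as the objective and another as a constraint, and hand the problem to a nonlinear programming routine. The plant, horizon, and energy budget below are invented for illustration, and scipy's SLSQP with numerical gradients stands in for the paper's analytic cost and gradient computations.

        import numpy as np
        from scipy.optimize import minimize

        A = np.array([[1.0, 0.1], [0.0, 0.95]])  # toy discrete-time plant
        B = np.array([[0.0], [0.1]])
        x_init = np.array([1.0, 0.0])
        N = 100  # horizon

        def costs(K):
            """Quadratic state cost and control energy for gain K (1 x 2)."""
            x, Jx, Ju = x_init.copy(), 0.0, 0.0
            for _ in range(N):
                u = -K @ x
                Jx += float(x @ x)
                Ju += float(u @ u)
                x = A @ x + B @ u
            return Jx, Ju

        objective = lambda k: costs(k.reshape(1, 2))[0]
        budget = 2.0  # control-energy limit, the quadratic constraint
        constraint = {"type": "ineq",
                      "fun": lambda k: budget - costs(k.reshape(1, 2))[1]}

        result = minimize(objective, np.zeros(2), method="SLSQP",
                          constraints=[constraint])
        print(result.x, costs(result.x.reshape(1, 2)))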

  20. A new CAD approach for improving efficacy of cancer screening

    NASA Astrophysics Data System (ADS)

    Zheng, Bin; Qian, Wei; Li, Lihua; Pu, Jiantao; Kang, Yan; Lure, Fleming; Tan, Maxine; Qiu, Yuchen

    2015-03-01

    Since the performance and clinical utility of current computer-aided detection (CAD) schemes for detecting and classifying soft tissue lesions (e.g., breast masses and lung nodules) are not satisfactory, many researchers in the CAD field have called for new CAD research ideas and approaches. The purpose of this opinion paper is to share our vision and stimulate discussion in the CAD research community of how to overcome or compensate for the limitations of current lesion-detection-based CAD schemes. Based on our observation that analyzing global image information plays an important role in radiologists' decision making, we hypothesized that targeted quantitative image features computed from global images could also provide highly discriminatory power, supplementary to the lesion-based information. To test our hypothesis, we recently performed a number of independent studies. Based on our published preliminary results, we demonstrated that global mammographic image features and background parenchymal enhancement of breast MR images carry useful information to (1) predict near-term breast cancer risk based on negative screening mammograms, (2) distinguish between true- and false-positive recalls in mammography screening examinations, and (3) classify between malignant and benign breast MR examinations. A global, case-based CAD scheme only warns of the risk level of a case, without cueing a large number of false-positive lesions. It can also be applied to guide lesion-based CAD cueing to reduce false positives while enhancing clinically relevant true-positive cueing. However, before such a new CAD approach is clinically acceptable, more work is needed to optimize not only the scheme performance but also its integration with lesion-based CAD schemes in clinical practice.

  1. Digital dentistry: an overview of recent developments for CAD/CAM generated restorations.

    PubMed

    Beuer, F; Schweiger, J; Edelhoff, D

    2008-05-10

    As in many other industries, production stages are increasingly becoming automated in dental technology. As the price of dental laboratory work has become a major factor in treatment planning and therapy, automation could enable more competitive production in high-wage areas like Western Europe and the USA. Advances in computer technology now enable cost-effective production of individual pieces. Dental restorations produced with computer assistance have become more common in recent years. Most dental companies have access to CAD/CAM procedures, either in the dental practice, the dental laboratory or in the form of production centres. The many benefits associated with CAD/CAM generated dental restorations include: the access to new, almost defect-free, industrially prefabricated and controlled materials; an increase in quality and reproducibility and also data storage commensurate with a standardised chain of production; an improvement in precision and planning, as well as an increase in efficiency. As a result of continual developments in computer hardware and software, new methods of production and new treatment concepts are to be expected, which will enable an additional reduction in costs. Dentists, who will be confronted with these techniques in the future, require certain basic knowledge if they are to benefit from these new procedures. This article gives an overview of CAD/CAM-technologies and systems available for dentistry today.

  2. Efficient Hessian computation using sparse matrix derivatives in RAM notation.

    PubMed

    von Oertzen, Timo; Brick, Timothy R

    2014-06-01

    This article proposes a new, more efficient method to compute the minus two log likelihood, its gradient, and the Hessian for structural equation models (SEMs) in reticular action model (RAM) notation. The method exploits the beneficial aspect of RAM notation that the matrix derivatives used in RAM are sparse. For an SEM with K variables, P parameters, and P' entries in the symmetrical or asymmetrical matrix of the RAM notation filled with parameters, the asymptotic run time of the algorithm is O(P'K² + P²K² + K³). The naive implementation and numerical implementations are both O(P²K³), so that for typical applications of SEM, the proposed algorithm is asymptotically K times faster than the best previously known algorithm. A simulation comparison with a numerical algorithm shows that the asymptotic efficiency is transferred to an applied computational advantage that is crucial for the application of maximum likelihood estimation, even in small, but especially in moderate or large, SEMs.

  3. A computationally efficient modelling of laminar separation bubbles

    NASA Technical Reports Server (NTRS)

    Dini, Paolo; Maughmer, Mark D.

    1990-01-01

    In predicting the aerodynamic characteristics of airfoils operating at low Reynolds numbers, it is often important to account for the effects of laminar (transitional) separation bubbles. Previous approaches to the modelling of this viscous phenomenon range from fast but sometimes unreliable empirical correlations for the length of the bubble and the associated increase in momentum thickness, to more accurate but significantly slower displacement-thickness iteration methods employing inverse boundary-layer formulations in the separated regions. Since the penalty in computational time associated with the more general methods is unacceptable for airfoil design applications, use of an accurate yet computationally efficient model is highly desirable. To this end, a semi-empirical bubble model was developed and incorporated into the Eppler and Somers airfoil design and analysis program. The generality and the efficiency was achieved by successfully approximating the local viscous/inviscid interaction, the transition location, and the turbulent reattachment process within the framework of an integral boundary-layer method. Comparisons of the predicted aerodynamic characteristics with experimental measurements for several airfoils show excellent and consistent agreement for Reynolds numbers from 2,000,000 down to 100,000.

  4. Efficient Computation of the Topology of Level Sets

    SciTech Connect

    Pascucci, V; Cole-McLaughlin, K

    2002-07-19

    This paper introduces two efficient algorithms that compute the Contour Tree of a 3D scalar field F and its augmented version with the Betti numbers of each isosurface. The Contour Tree is a fundamental data structure in scientific visualization that is used to pre-process the domain mesh to allow optimal computation of isosurfaces with minimal storage overhead. The Contour Tree can also be used to build user interfaces reporting the complete topological characterization of a scalar field, as shown in Figure 1. In the first part of the paper we present a new scheme that augments the Contour Tree with the Betti numbers of each isocontour in linear time. We show how to extend the scheme introduced in [3] with the Betti number computation without increasing its complexity. Thus we improve on the time complexity of our previous approach [8], from O(m log m) to O(n log n + m), where m is the number of tetrahedra and n is the number of vertices in the domain of F. In the second part of the paper we introduce a new divide-and-conquer algorithm that computes the Augmented Contour Tree for scalar fields defined on rectilinear grids. The central part of the scheme computes the output contour tree by merging two intermediate contour trees and is independent of the interpolant. In this way we confine any knowledge regarding a specific interpolant to an oracle that computes the tree for a single cell. We have implemented this oracle for the trilinear interpolant and plan to replace it with higher-order interpolants when needed. The complexity of the scheme is O(n + t log n), where t is the number of critical points of F. This allows, for the first time, computation of the Contour Tree in linear time in many practical cases, when t = O(n^(1-ε)). We report the running times for a parallel implementation of our algorithm, showing good scalability with the number of processors.
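
    The sweep at the heart of such algorithms reduces to union-find. The sketch below is our 1D reduction of the idea, not the paper's code: processing vertices from high to low value and merging with already-active neighbors yields the component count (0-th Betti number) of each superlevel set, which is the join-tree half of a Contour Tree computation.

        def superlevel_components(values):
            """Component count of {F >= t} as t sweeps down a 1D scalar field."""
            parent = list(range(len(values)))

            def find(i):
                while parent[i] != i:
                    parent[i] = parent[parent[i]]  # path halving
                    i = parent[i]
                return i

            active, counts, n_comp = set(), [], 0
            for i in sorted(range(len(values)), key=lambda i: -values[i]):
                active.add(i)
                n_comp += 1  # each new vertex starts its own component...
                for j in (i - 1, i + 1):  # ...then merges with active neighbors
                    if j in active and find(j) != find(i):
                        parent[find(j)] = find(i)
                        n_comp -= 1
                counts.append((values[i], n_comp))
            return counts

        # two peaks: components go 1 -> 2 -> 1 as the level drops past the saddle
        print(superlevel_components([0, 3, 1, 4, 0]))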

  5. Computationally efficient implementation of combustion chemistry in parallel PDF calculations

    SciTech Connect

    Lu, Liuyan; Lantz, Steven R.; Ren, Zhuyin; Pope, Stephen B.

    2009-08-20

    In parallel calculations of combustion processes with realistic chemistry, the serial in situ adaptive tabulation (ISAT) algorithm [S.B. Pope, Computationally efficient implementation of combustion chemistry using in situ adaptive tabulation, Combustion Theory and Modelling, 1 (1997) 41-63; L. Lu, S.B. Pope, An improved algorithm for in situ adaptive tabulation, Journal of Computational Physics 228 (2009) 361-386] substantially speeds up the chemistry calculations on each processor. To improve the parallel efficiency of large ensembles of such calculations in parallel computations, in this work, the ISAT algorithm is extended to the multi-processor environment, with the aim of minimizing the wall clock time required for the whole ensemble. Parallel ISAT strategies are developed by combining the existing serial ISAT algorithm with different distribution strategies, namely purely local processing (PLP), uniformly random distribution (URAN), and preferential distribution (PREF). The distribution strategies enable the queued load redistribution of chemistry calculations among processors using message passing. They are implemented in the software x2f_mpi, which is a Fortran 95 library for facilitating many parallel evaluations of a general vector function. The relative performance of the parallel ISAT strategies is investigated in different computational regimes via the PDF calculations of multiple partially stirred reactors burning methane/air mixtures. The results show that the performance of ISAT with a fixed distribution strategy strongly depends on certain computational regimes, based on how much memory is available and how much overlap exists between tabulated information on different processors. No one fixed strategy consistently achieves good performance in all the regimes. Therefore, an adaptive distribution strategy, which blends PLP, URAN and PREF, is devised and implemented. It yields consistently good performance in all regimes. In the adaptive

  6. Computationally efficient implementation of combustion chemistry in parallel PDF calculations

    NASA Astrophysics Data System (ADS)

    Lu, Liuyan; Lantz, Steven R.; Ren, Zhuyin; Pope, Stephen B.

    2009-08-01

    In parallel calculations of combustion processes with realistic chemistry, the serial in situ adaptive tabulation (ISAT) algorithm [S.B. Pope, Computationally efficient implementation of combustion chemistry using in situ adaptive tabulation, Combustion Theory and Modelling, 1 (1997) 41-63; L. Lu, S.B. Pope, An improved algorithm for in situ adaptive tabulation, Journal of Computational Physics 228 (2009) 361-386] substantially speeds up the chemistry calculations on each processor. To improve the parallel efficiency of large ensembles of such calculations in parallel computations, in this work, the ISAT algorithm is extended to the multi-processor environment, with the aim of minimizing the wall clock time required for the whole ensemble. Parallel ISAT strategies are developed by combining the existing serial ISAT algorithm with different distribution strategies, namely purely local processing (PLP), uniformly random distribution (URAN), and preferential distribution (PREF). The distribution strategies enable the queued load redistribution of chemistry calculations among processors using message passing. They are implemented in the software x2f_mpi, which is a Fortran 95 library for facilitating many parallel evaluations of a general vector function. The relative performance of the parallel ISAT strategies is investigated in different computational regimes via the PDF calculations of multiple partially stirred reactors burning methane/air mixtures. The results show that the performance of ISAT with a fixed distribution strategy strongly depends on certain computational regimes, based on how much memory is available and how much overlap exists between tabulated information on different processors. No one fixed strategy consistently achieves good performance in all the regimes. Therefore, an adaptive distribution strategy, which blends PLP, URAN and PREF, is devised and implemented. It yields consistently good performance in all regimes. In the adaptive parallel

  7. EXCAVATOR: a computer program for efficiently mining gene expression data.

    PubMed

    Xu, Dong; Olman, Victor; Wang, Li; Xu, Ying

    2003-10-01

    Massive amounts of gene expression data are generated using microarrays for functional studies of genes and gene expression data clustering is a useful tool for studying the functional relationship among genes in a biological process. We have developed a computer package EXCAVATOR for clustering gene expression profiles based on our new framework for representing gene expression data as a minimum spanning tree. EXCAVATOR uses a number of rigorous and efficient clustering algorithms. This program has a number of unique features, including capabilities for: (i) data-constrained clustering; (ii) identification of genes with similar expression profiles to pre-specified seed genes; (iii) cluster identification from a noisy background; (iv) computational comparison between different clustering results of the same data set. EXCAVATOR can be run from a Unix/Linux/DOS shell, from a Java interface or from a Web server. The clustering results can be visualized as colored figures and 2-dimensional plots. Moreover, EXCAVATOR provides a wide range of options for data formats, distance measures, objective functions, clustering algorithms, methods to choose number of clusters, etc. The effectiveness of EXCAVATOR has been demonstrated on several experimental data sets. Its performance compares favorably against the popular K-means clustering method in terms of clustering quality and computing time.
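
    The MST framing is simple to reproduce with standard tools. The sketch below (synthetic data; the textbook tree-cutting construction, not EXCAVATOR's own code or options) builds a minimum spanning tree over expression profiles and cuts the k-1 heaviest edges so the remaining forest gives k clusters.

        import numpy as np
        from scipy.sparse import csr_matrix
        from scipy.sparse.csgraph import connected_components, minimum_spanning_tree
        from scipy.spatial.distance import pdist, squareform

        rng = np.random.default_rng(1)
        profiles = np.vstack([rng.normal(0.0, 0.3, (10, 5)),   # two synthetic
                              rng.normal(3.0, 0.3, (10, 5))])  # expression groups

        dist = squareform(pdist(profiles))  # pairwise Euclidean distances
        mst = minimum_spanning_tree(csr_matrix(dist)).toarray()

        k = 2  # desired number of clusters (k >= 2 here)
        weights = np.sort(mst[mst > 0])
        mst[mst >= weights[-(k - 1)]] = 0  # cut the k-1 heaviest tree edges

        n_clusters, labels = connected_components(csr_matrix(mst), directed=False)
        print(n_clusters, labels)  # 2 clusters separating the two groups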

  8. A Computational Framework for Efficient Low Temperature Plasma Simulations

    NASA Astrophysics Data System (ADS)

    Verma, Abhishek Kumar; Venkattraman, Ayyaswamy

    2016-10-01

    Over the past years, scientific computing has emerged as an essential tool for the investigation and prediction of low temperature plasma (LTP) applications, which include electronics, nanomaterial synthesis, metamaterials, etc. To further explore LTP behavior with greater fidelity, we present a computational toolbox developed to perform LTP simulations. This framework will allow us to enhance our understanding of multiscale plasma phenomena using high performance computing tools, mainly based on the OpenFOAM FVM distribution. Although aimed at microplasma simulations, the modular framework is able to perform multiscale, multiphysics simulations of physical systems comprising LTPs. Salient introductory features include the capability to perform parallel, 3D simulations of LTP applications on unstructured meshes. Performance of the solver is tested based on numerical results assessing the accuracy and efficiency of benchmarks for problems in microdischarge devices. Numerical simulation of a microplasma reactor at atmospheric pressure with hemispherical dielectric-coated electrodes will be discussed, providing an overview of the applicability and future scope of this framework.

  9. Efficient parameter sensitivity computation for spatially extended reaction networks

    NASA Astrophysics Data System (ADS)

    Lester, C.; Yates, C. A.; Baker, R. E.

    2017-01-01

    Reaction-diffusion models are widely used to study spatially extended chemical reaction systems. In order to understand how the dynamics of a reaction-diffusion model are affected by changes in its input parameters, efficient methods for computing parametric sensitivities are required. In this work, we focus on the stochastic models of spatially extended chemical reaction systems that involve partitioning the computational domain into voxels. Parametric sensitivities are often calculated using Monte Carlo techniques that are typically computationally expensive; however, variance reduction techniques can decrease the number of Monte Carlo simulations required. By exploiting the characteristic dynamics of spatially extended reaction networks, we are able to adapt existing finite difference schemes to robustly estimate parametric sensitivities in a spatially extended network. We show that algorithmic performance depends on the dynamics of the given network and the choice of summary statistics. We then describe a hybrid technique that dynamically chooses the most appropriate simulation method for the network of interest. Our method is tested for functionality and accuracy in a range of different scenarios.
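
    The coupling idea can be made concrete on a toy model. The sketch below estimates dE[X(T)]/dk for a pure degradation reaction (propensity k·X) by a central finite difference, re-seeding the generator so both perturbed paths consume common random numbers; the model and every name here are our assumptions, whereas the paper treats voxel-based spatial networks and selects among estimators adaptively.

        import numpy as np

        def ssa_final(k, x0, T, rng):
            """Gillespie simulation of X -> 0 with propensity k*x; returns X(T)."""
            x, t = x0, 0.0
            while x > 0:
                t += rng.exponential(1.0 / (k * x))
                if t > T:
                    break
                x -= 1
            return x

        def sensitivity(k, dk, x0, T, samples, seed=0):
            est = 0.0
            for s in range(samples):
                # common random numbers: both perturbed paths reuse one seed
                xp = ssa_final(k + dk, x0, T, np.random.default_rng(seed + s))
                xm = ssa_final(k - dk, x0, T, np.random.default_rng(seed + s))
                est += (xp - xm) / (2.0 * dk)
            return est / samples

        # exact answer: E[X(T)] = x0*exp(-k*T), so dE/dk = -T*x0*exp(-k*T)
        k, x0, T = 0.5, 100, 2.0
        print(sensitivity(k, 0.05, x0, T, samples=2000), -T * x0 * np.exp(-k * T))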

  10. Efficient Homotopy Continuation Algorithms with Application to Computational Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Brown, David A.

    New homotopy continuation algorithms are developed and applied to a parallel implicit finite-difference Newton-Krylov-Schur external aerodynamic flow solver for the compressible Euler, Navier-Stokes, and Reynolds-averaged Navier-Stokes equations with the Spalart-Allmaras one-equation turbulence model. Many new analysis tools, calculations, and numerical algorithms are presented for the study and design of efficient and robust homotopy continuation algorithms applicable to solving very large and sparse nonlinear systems of equations. Several specific homotopies are presented and studied and a methodology is presented for assessing the suitability of specific homotopies for homotopy continuation. A new class of homotopy continuation algorithms, referred to as monolithic homotopy continuation algorithms, is developed. These algorithms differ from classical predictor-corrector algorithms by combining the predictor and corrector stages into a single update, significantly reducing the amount of computation and avoiding wasted computational effort resulting from over-solving in the corrector phase. The new algorithms are also simpler from a user perspective, with fewer input parameters, which also improves the user's ability to choose effective parameters on the first flow solve attempt. Conditional convergence is proved analytically and studied numerically for the new algorithms. The performance of a fully-implicit monolithic homotopy continuation algorithm is evaluated for several inviscid, laminar, and turbulent flows over NACA 0012 airfoils and ONERA M6 wings. The monolithic algorithm is demonstrated to be more efficient than the predictor-corrector algorithm for all applications investigated. It is also demonstrated to be more efficient than the widely-used pseudo-transient continuation algorithm for all inviscid and laminar cases investigated, and good performance scaling with grid refinement is demonstrated for the inviscid cases. Performance is also demonstrated
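
    For orientation, the sketch below runs the classical, non-monolithic version on a scalar toy problem: a convex homotopy H(x, λ) = (1-λ)G(x) + λF(x) deforms an easy equation into the target one while Newton correctors track the root. The thesis's monolithic algorithms fuse the predictor and corrector into one update, which this plain sketch deliberately does not do.

        import numpy as np

        F = lambda x: x**3 - 2.0 * x - 5.0  # "hard" target problem F(x) = 0
        dF = lambda x: 3.0 * x**2 - 2.0
        x0 = 1.0
        G = lambda x: x - x0                # trivial start problem, root known
        dG = lambda x: 1.0

        x = x0
        for lam in np.linspace(0.0, 1.0, 21):
            # zero-order predictor: reuse x from the previous lambda;
            # corrector: Newton iterations on H(x, lam) = (1-lam)*G + lam*F
            for _ in range(10):
                H = (1.0 - lam) * G(x) + lam * F(x)
                dH = (1.0 - lam) * dG(x) + lam * dF(x)
                step = H / dH
                x -= step
                if abs(step) < 1e-12:
                    break
        print(x, F(x))  # converges to the real root ~2.0945515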

  11. CAD/CAM data management

    NASA Technical Reports Server (NTRS)

    Bray, O. H.

    1984-01-01

    The role of data base management in CAD/CAM, particularly for geometric data is described. First, long term and short term objectives for CAD/CAM data management are identified. Second, the benefits of the data base management approach are explained. Third, some of the additional work needed in the data base area is discussed.

  12. Efficient Universal Computing Architectures for Decoding Neural Activity

    PubMed Central

    Rapoport, Benjamin I.; Turicchia, Lorenzo; Wattanapanitch, Woradorn; Davidson, Thomas J.; Sarpeshkar, Rahul

    2012-01-01

    The ability to decode neural activity into meaningful control signals for prosthetic devices is critical to the development of clinically useful brain–machine interfaces (BMIs). Such systems require input from tens to hundreds of brain-implanted recording electrodes in order to deliver robust and accurate performance; in serving that primary function they should also minimize power dissipation in order to avoid damaging neural tissue; and they should transmit data wirelessly in order to minimize the risk of infection associated with chronic, transcutaneous implants. Electronic architectures for brain–machine interfaces must therefore minimize size and power consumption, while maximizing the ability to compress data to be transmitted over limited-bandwidth wireless channels. Here we present a system of extremely low computational complexity, designed for real-time decoding of neural signals, and suited for highly scalable implantable systems. Our programmable architecture is an explicit implementation of a universal computing machine emulating the dynamics of a network of integrate-and-fire neurons; it requires no arithmetic operations except for counting, and decodes neural signals using only computationally inexpensive logic operations. The simplicity of this architecture does not compromise its ability to compress raw neural data by factors greater than . We describe a set of decoding algorithms based on this computational architecture, one designed to operate within an implanted system, minimizing its power consumption and data transmission bandwidth; and a complementary set of algorithms for learning, programming the decoder, and postprocessing the decoded output, designed to operate in an external, nonimplanted unit. The implementation of the implantable portion is estimated to require fewer than 5000 operations per second. A proof-of-concept, 32-channel field-programmable gate array (FPGA) implementation of this portion is consequently energy efficient

  13. CAD Model and Visual Assisted Control System for NIF Target Area Positioners

    SciTech Connect

    Tekle, E A; Wilson, E F; Paik, T S

    2007-10-03

    The National Ignition Facility (NIF) target chamber contains precision motion control systems that reach up to 6 meters into the target chamber for handling targets and diagnostics. Systems include the target positioner, an alignment sensor, and diagnostic manipulators (collectively called positioners). Target chamber shot experiments require a variety of positioner arrangements near the chamber center to be aligned to an accuracy of 10 micrometers. Positioners are some of the largest devices in NIF, and they require careful monitoring and control in 3 dimensions to prevent interferences. The Integrated Computer Control System provides efficient and flexible multi-positioner controls. This is accomplished through advanced video-control integration incorporating remote position sensing and real-time analysis of a CAD model of target chamber devices. The control system design, the method used to integrate existing mechanical CAD models, and the offline test laboratory used to verify proper operation of the control system are described.

  14. The Efficiency of Various Computers and Optimizations in Performing Finite Element Computations

    NASA Technical Reports Server (NTRS)

    Marcus, Martin H.; Broduer, Steve (Technical Monitor)

    2001-01-01

    With the advent of computers with many processors, it becomes unclear how best to exploit this advantage. For example, matrices can be inverted by applying several processors to each vector operation, or one processor can be applied to each matrix. The former approach has diminishing returns beyond a handful of processors, but how many processors depends on the computer architecture. Applying one processor to each matrix is feasible with enough RAM and scratch disk space, but the speed at which this is done is found to vary by a factor of three depending on how it is done. The cost of the computer must also be taken into account. A computer with many processors and fast interprocessor communication is much more expensive than the same computer and processors with slow interprocessor communication. Consequently, for problems that require several matrices to be inverted, the best speed per dollar for computers is found to be several small workstations that are networked together, such as in a Beowulf cluster. Since these machines typically have two processors per node, each matrix is most efficiently inverted with no more than two processors assigned to it.

  15. An efficient algorithm for computing the crossovers in satellite altimetry

    NASA Technical Reports Server (NTRS)

    Tai, Chang-Kou

    1988-01-01

    An efficient algorithm has been devised to compute the crossovers in satellite altimetry. The significance of the crossovers is twofold. First, they are needed to perform the crossover adjustment to remove the orbit error. Secondly, they yield important insight into oceanic variability. Nevertheless, there is no published algorithm to make this very time consuming task easier, which is the goal of this report. The success of the algorithm is predicated on the ability to predict (by analytical means) the crossover coordinates to within 6 km and 1 sec of the true values. Hence, only one interpolation/extrapolation step on the data is needed to derive the crossover coordinates in contrast to the many interpolation/extrapolation operations usually needed to arrive at the same accuracy level if deprived of this information.

  16. A computationally efficient spectral method for modeling core dynamics

    NASA Astrophysics Data System (ADS)

    Marti, P.; Calkins, M. A.; Julien, K.

    2016-08-01

    An efficient, spectral numerical method is presented for solving problems in a spherical shell geometry that employs spherical harmonics in the angular dimensions and Chebyshev polynomials in the radial direction. We exploit the three-term recurrence relation for Chebyshev polynomials that renders all matrices sparse in spectral space. This approach is significantly more efficient than the collocation approach and is generalizable to both the Galerkin and tau methodologies for enforcing boundary conditions. The sparsity of the matrices reduces the computational complexity of the linear solution of implicit-explicit time stepping schemes to O(N) operations, compared to O(N²) operations for a collocation method. The method is illustrated by considering several example problems of important dynamical processes in the Earth's liquid outer core. Results are presented from both fully nonlinear, time-dependent numerical simulations and eigenvalue problems arising from the investigation of the onset of convection and the inertial wave spectrum. We compare the explicit and implicit temporal discretization of the Coriolis force; the latter becomes computationally feasible given the sparsity of the differential operators. We find that implicit treatment of the Coriolis force allows for significantly larger time step sizes compared to explicit algorithms; for hydrodynamic and dynamo problems at an Ekman number of E = 10⁻⁵, time step sizes can be increased by a factor of 3 to 16 times that of the explicit algorithm, depending on the order of the time stepping scheme. The implementation with explicit Coriolis force scales well to at least 2048 cores, while the implicit implementation scales to 512 cores.
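
    The sparsity granted by the three-term recurrence is easy to see in one dimension: since x·T_n = (T_{n-1} + T_{n+1})/2, multiplication by x acts on Chebyshev coefficients as a tridiagonal matrix. The sketch below, our illustration rather than the paper's spherical-shell code, builds that operator and checks it against numpy's Chebyshev arithmetic.

        import numpy as np
        from numpy.polynomial import chebyshev as C
        from scipy.sparse import diags

        N = 8
        # tridiagonal "multiply by x" operator on Chebyshev coefficients:
        # x*T_0 = T_1 and x*T_n = (T_{n-1} + T_{n+1}) / 2 for n >= 1
        lower = np.full(N - 1, 0.5)
        lower[0] = 1.0                # the x*T_0 = T_1 special case
        upper = np.full(N - 1, 0.5)
        Mx = diags([lower, upper], offsets=[-1, 1])

        c = np.arange(1.0, N + 1.0)               # an arbitrary coefficient vector
        reference = C.chebmul([0.0, 1.0], c)[:N]  # multiply by x, truncate to N
        print(np.allclose(Mx @ c, reference))     # True: sparse operator agrees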

  17. Efficient free energy calculations of quantum systems through computer simulations

    NASA Astrophysics Data System (ADS)

    Antonelli, Alex; Ramirez, Rafael; Herrero, Carlos; Hernandez, Eduardo

    2009-03-01

    In general, the classical limit is assumed in computer simulation calculations of free energy. This approximation, however, is not justifiable for a class of systems in which quantum contributions to the free energy cannot be neglected. The inclusion of quantum effects is important for the determination of reliable phase diagrams of these systems. In this work, we present a new methodology to compute the free energy of many-body quantum systems [1]. This methodology results from the combination of the path integral formulation of statistical mechanics and efficient non-equilibrium methods to estimate free energy, namely, the adiabatic switching and reversible scaling methods. A quantum Einstein crystal is used as a model to show the accuracy and reliability of the methodology. This new method is applied to the calculation of solid-liquid coexistence properties of neon. Our findings indicate that quantum contributions to properties such as melting point, latent heat of fusion, entropy of fusion, and slope of the melting line can be up to 10% of the values calculated using the classical approximation. [1] R. M. Ramirez, C. P. Herrero, A. Antonelli, and E. R. Hernández, Journal of Chemical Physics 129, 064110 (2008)

  18. An Algorithm for Projecting Points onto a Patched CAD Model

    SciTech Connect

    Henshaw, W D

    2001-05-29

    We are interested in building structured overlapping grids for geometries defined by computer-aided-design (CAD) packages. Geometric information defining the boundary surfaces of a computational domain is often provided in the form of a collection of possibly hundreds of trimmed patches. The first step in building an overlapping volume grid on such a geometry is to build overlapping surface grids. A surface grid is typically built using hyperbolic grid generation; starting from a curve on the surface, a grid is grown by marching over the surface. A given hyperbolic grid will typically cover many of the underlying CAD surface patches. The fundamental operation needed for building surface grids is that of projecting a point in space onto the closest point on the CAD surface. We describe a fast algorithm for performing this projection; it makes use of a fairly coarse global triangulation of the CAD geometry. We describe how to build this global triangulation by first determining the connectivity of the CAD surface patches. This step is necessary since it is often the case that the CAD description contains no information specifying how a given patch connects to other neighboring patches. Determining the connectivity is difficult since the surface patches may contain mistakes such as gaps or overlaps between neighboring patches.
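
    The inner kernel of that projection is the classical closest-point-on-triangle test from computational geometry. The sketch below implements it and wraps it in a brute-force search over a triangle list; the paper's contribution, which this omits, is using the coarse global triangulation to avoid visiting every triangle.

        import numpy as np

        def closest_on_triangle(p, a, b, c):
            """Closest point to p on triangle abc (standard region-based test)."""
            ab, ac, ap = b - a, c - a, p - a
            d1, d2 = ab @ ap, ac @ ap
            if d1 <= 0 and d2 <= 0:
                return a                                  # vertex region A
            bp = p - b
            d3, d4 = ab @ bp, ac @ bp
            if d3 >= 0 and d4 <= d3:
                return b                                  # vertex region B
            vc = d1 * d4 - d3 * d2
            if vc <= 0 and d1 >= 0 and d3 <= 0:
                return a + ab * (d1 / (d1 - d3))          # edge AB
            cp = p - c
            d5, d6 = ab @ cp, ac @ cp
            if d6 >= 0 and d5 <= d6:
                return c                                  # vertex region C
            vb = d5 * d2 - d1 * d6
            if vb <= 0 and d2 >= 0 and d6 <= 0:
                return a + ac * (d2 / (d2 - d6))          # edge AC
            va = d3 * d6 - d5 * d4
            if va <= 0 and d4 - d3 >= 0 and d5 - d6 >= 0:
                w = (d4 - d3) / ((d4 - d3) + (d5 - d6))
                return b + (c - b) * w                    # edge BC
            denom = va + vb + vc                          # interior: barycentric
            return a + ab * (vb / denom) + ac * (vc / denom)

        def project(p, triangles):
            """Brute-force projection onto a triangulated surface."""
            return min((closest_on_triangle(p, *t) for t in triangles),
                       key=lambda q: np.linalg.norm(p - q))

        tri = [(np.array([0.0, 0.0, 0.0]),
                np.array([1.0, 0.0, 0.0]),
                np.array([0.0, 1.0, 0.0]))]
        print(project(np.array([0.2, 0.2, 1.0]), tri))  # -> [0.2 0.2 0. ]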

  19. An efficient parallel algorithm for accelerating computational protein design

    PubMed Central

    Zhou, Yichao; Xu, Wei; Donald, Bruce R.; Zeng, Jianyang

    2014-01-01

    Motivation: Structure-based computational protein design (SCPR) is an important topic in protein engineering. Under the assumption of a rigid backbone and a finite set of discrete conformations of side-chains, various methods have been proposed to address this problem. A popular method is to combine the dead-end elimination (DEE) and A* tree search algorithms, which provably finds the global minimum energy conformation (GMEC) solution. Results: In this article, we improve the efficiency of computing A* heuristic functions for protein design and propose a variant of the A* algorithm in which the search process can be performed on a single GPU in a massively parallel fashion. In addition, we address the problem of excessive memory use in A* search. As a result, our enhancements achieve a significant speedup of the A*-based protein design algorithm, by four orders of magnitude on large-scale test data, through pre-computation and parallelization, while still maintaining an acceptable memory overhead. We also show that our parallel A* search algorithm can be successfully combined with iMinDEE, a state-of-the-art DEE criterion, for rotamer pruning to further improve SCPR with the consideration of continuous side-chain flexibility. Availability: Our software is available and distributed open-source under the GNU Lesser General Public License Version 2.1 (GNU, February 1999). The source code can be downloaded from http://www.cs.duke.edu/donaldlab/osprey.php or http://iiis.tsinghua.edu.cn/∼compbio/software.html. Contact: zengjy321@tsinghua.edu.cn Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24931991
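
    For reference, the serial baseline that the paper parallelizes is textbook A* with f = g + h for an admissible heuristic h. A generic sketch, with the protein-design specifics (rotamer assignments, energy-based heuristic) abstracted behind caller-supplied functions:

        import heapq
        import itertools

        def a_star(start, is_goal, neighbors, h):
            # Textbook A*: expand by f = g + h; with an admissible h, the
            # first goal popped is optimal.  'neighbors' yields (node, cost)
            # pairs; in the paper's setting nodes would be partial rotamer
            # assignments and h a precomputed energy lower bound.
            tie = itertools.count()                  # avoids comparing nodes
            open_heap = [(h(start), next(tie), 0.0, start)]
            best_g = {start: 0.0}
            while open_heap:
                _, _, g, node = heapq.heappop(open_heap)
                if is_goal(node):
                    return node, g
                if g > best_g.get(node, float("inf")):
                    continue                         # stale heap entry
                for nxt, cost in neighbors(node):
                    g2 = g + cost
                    if g2 < best_g.get(nxt, float("inf")):
                        best_g[nxt] = g2
                        heapq.heappush(open_heap, (g2 + h(nxt), next(tie), g2, nxt))
            return None, float("inf")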

  20. Textbook Multigrid Efficiency for Computational Fluid Dynamics Simulations

    NASA Technical Reports Server (NTRS)

    Brandt, Achi; Thomas, James L.; Diskin, Boris

    2001-01-01

    Considerable progress over the past thirty years has been made in the development of large-scale computational fluid dynamics (CFD) solvers for the Euler and Navier-Stokes equations. Computations are used routinely to design the cruise shapes of transport aircraft through complex-geometry simulations involving the solution of 25-100 million equations; in this arena the number of wind-tunnel tests for a new design has been substantially reduced. However, simulations of the entire flight envelope of the vehicle, including maximum lift, buffet onset, flutter, and control effectiveness have not been as successful in eliminating the reliance on wind-tunnel testing. These simulations involve unsteady flows with more separation and stronger shock waves than at cruise. The main reasons limiting further inroads of CFD into the design process are: (1) the reliability of turbulence models; and (2) the time and expense of the numerical simulation. Because of the prohibitive resolution requirements of direct simulations at high Reynolds numbers, transition and turbulence modeling is expected to remain an issue for the near term. The focus of this paper addresses the latter problem by attempting to attain optimal efficiencies in solving the governing equations. Typically current CFD codes based on the use of multigrid acceleration techniques and multistage Runge-Kutta time-stepping schemes are able to converge lift and drag values for cruise configurations within approximately 1000 residual evaluations. An optimally convergent method is defined as having textbook multigrid efficiency (TME), meaning the solutions to the governing system of equations are attained in a computational work which is a small (less than 10) multiple of the operation count in the discretized system of equations (residual equations). In this paper, a distributed relaxation approach to achieving TME for the Reynolds-averaged Navier-Stokes (RANS) equations is discussed along with the foundations that form the
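
    The multigrid cycle structure behind TME is easiest to see on a model problem. Below is a minimal 1D Poisson V-cycle (damped Jacobi smoothing, full-weighting restriction, linear interpolation); it is far simpler than the distributed-relaxation RANS solvers discussed, and every parameter is illustrative, but each cycle costs O(N) and a handful of cycles reach discretization-level error:

        import numpy as np

        def smooth(u, f, h, sweeps, omega=0.8):
            # Damped Jacobi for -u'' = f on interior points (Dirichlet u = 0).
            for _ in range(sweeps):
                u[1:-1] += 0.5 * omega * (f[1:-1] * h**2 - 2 * u[1:-1] + u[:-2] + u[2:])

        def v_cycle(u, f, h):
            smooth(u, f, h, 3)
            if len(u) <= 5:                           # coarsest level
                smooth(u, f, h, 50)
                return u
            r = np.zeros_like(u)
            r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h**2
            rc = np.zeros((len(u) + 1) // 2)          # full-weighting restriction
            rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
            ec = np.zeros_like(rc)
            v_cycle(ec, rc, 2 * h)                    # coarse-grid correction
            e = np.zeros_like(u)
            e[::2] = ec                               # linear interpolation back
            e[1:-1:2] = 0.5 * (ec[:-1] + ec[1:])
            u += e
            smooth(u, f, h, 3)
            return u

        n = 129                                       # 2^7 + 1 points on [0, 1]
        x = np.linspace(0.0, 1.0, n)
        h = x[1] - x[0]
        f = np.pi**2 * np.sin(np.pi * x)              # -u'' = f, u = sin(pi x)
        u = np.zeros(n)
        for _ in range(10):
            v_cycle(u, f, h)
        print(np.abs(u - np.sin(np.pi * x)).max())    # ~ h^2 discretization error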

  1. Schools (Students) Exchanging CAD/CAM Files over the Internet.

    ERIC Educational Resources Information Center

    Mahoney, Gary S.; Smallwood, James E.

    This document discusses how students and schools can benefit from exchanging computer-aided design/computer-aided manufacturing (CAD/CAM) files over the Internet, explains how files are exchanged, and examines the problem of selected hardware/software incompatibility. Key terms associated with information search services are defined, and several…

  2. Grayscale optical correlator for CAD/CAC applications

    NASA Astrophysics Data System (ADS)

    Chao, Tien-Hsin; Lu, Thomas

    2008-03-01

    This paper describes JPL's recent work on a high-performance automatic target recognition (ATR) processor consisting of a Grayscale Optical Correlator (GOC) and a neural network for various Computer Aided Detection and Computer Aided Classification (CAD/CAC) applications. A simulation study for sonar mine and mine-like target detection and classification is presented. An application to periscope video ATR is also presented.

  3. Efficiently computing exact geodesic loops within finite steps.

    PubMed

    Xin, Shi-Qing; He, Ying; Fu, Chi-Wing

    2012-06-01

    Closed geodesics, or geodesic loops, are crucial to the study of differential topology and differential geometry. Although the existence and properties of closed geodesics on smooth surfaces have been widely studied in the mathematics community, relatively little progress has been made on how to compute them on polygonal surfaces. Most existing algorithms simply consider the mesh as a graph, so the resultant loops are restricted to mesh edges, which are far from the actual geodesics. This paper is the first to prove the existence and uniqueness of the geodesic loop restricted to a closed face sequence; it also contributes an efficient algorithm to iteratively evolve an initial closed path on a given mesh into an exact geodesic loop within finitely many steps. Our proposed algorithm has only O(k) space complexity and O(mk) time complexity (experimentally), where m is the number of vertices in the region bounded by the initial loop and the resultant geodesic loop, and k is the average number of edges in the edge sequences that the evolving loop passes through. In contrast to the existing geodesic curvature flow methods, which compute an approximate geodesic loop within a predefined threshold, our method is exact and applies directly to triangular meshes without needing to solve any differential equation with a numerical solver; it can run at interactive speed, e.g., in the order of milliseconds, for a mesh with around 50K vertices, and hence significantly outperforms existing algorithms. Indeed, our algorithm can run at interactive speed even for larger meshes. Besides the complexity of the input mesh, the geometric shape can also affect the number of evolving steps, i.e., the performance. We motivate our algorithm with an interactive shape segmentation example shown later in the paper.

  4. Computer Aided Drafting. Instructor's Guide.

    ERIC Educational Resources Information Center

    Henry, Michael A.

    This guide is intended for use in introducing students to the operation and applications of computer-aided drafting (CAD) systems. The following topics are covered in the individual lessons: understanding CAD (CAD versus traditional manual drafting and care of software and hardware); using the components of a CAD system (primary and other input…

  5. CAD/CAM generated all-ceramic primary telescopic prostheses.

    PubMed

    Kurbad, A; Ganz, S; Kurbad, S

    2012-01-01

    Computer-aided design and manufacturing (CAD/CAM) systems have proven effective not only for the manufacture of crown and bridge frameworks, inlays, onlays and veneers, but also for the generation of all-ceramic primary telescopic prostheses in more than 10 years of use in dental technology. The new InLab 4.0 software generation makes it possible to design and mill primary telescopic prostheses with CAD/CAM technology. The computer-generated raw crowns for these restorations require very little manual adaptation. The secondary crowns are manufactured by electroforming and bonded onto the tertiary structure or framework.

  6. The Effect of Computer Automation on Institutional Review Board (IRB) Office Efficiency

    ERIC Educational Resources Information Center

    Oder, Karl; Pittman, Stephanie

    2015-01-01

    Companies purchase computer systems to make their processes more efficient through automation. Some academic medical centers (AMC) have purchased computer systems for their institutional review boards (IRB) to increase efficiency and compliance with regulations. IRB computer systems are expensive to purchase, deploy, and maintain. An AMC should…

  7. False positive reduction for lung nodule CAD

    NASA Astrophysics Data System (ADS)

    Zhao, Luyin; Boroczky, Lilla; Drysdale, Jeremy; Agnihotri, Lalitha; Lee, Michael C.

    2007-03-01

    Computer-aided detection (CAD) algorithms 'automatically' identify lung nodules on thoracic multi-slice CT scans (MSCT), thereby providing physicians with a computer-generated 'second opinion'. While CAD systems can achieve high sensitivity, their limited specificity has hindered clinical acceptance. To overcome this problem, we propose a false positive reduction (FPR) system based on image processing and machine learning to reduce the number of false positive lung nodules identified by CAD algorithms and thereby improve system specificity. To discriminate between true and false nodules, twenty-three 3D features were calculated from each candidate nodule's volume of interest (VOI). A genetic algorithm (GA) and support vector machine (SVM) were then used to select an optimal subset of features from this pool of candidate features. Using this feature subset, we trained an SVM classifier to eliminate as many false positives as possible while retaining all the true nodules. To overcome the imbalanced nature of typical datasets (significantly more false positives than true positives), an intelligent data selection algorithm was designed and integrated into the machine learning framework, further improving the FPR rate. Three independent datasets were used to train and validate the system. Using two datasets for training and the third for validation, we achieved a 59.4% FPR rate while removing one true nodule on the validation datasets. In a second experiment, 75% of the cases were randomly selected from each of the three datasets and the remaining cases were used for validation. A similar FPR rate and true positive retention rate were achieved. Additional experiments showed that the GA feature selection process integrated with the proposed data selection algorithm outperforms the one without it by 5%-10% in FPR rate. The proposed methods can also be applied to other areas, such as computer-aided diagnosis of lung nodules.
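
    A hedged sketch of the GA-wrapped SVM feature-selection stage, run on synthetic data (scikit-learn; the population size, rates, and 23-feature toy dataset are invented for the demo and are not the paper's configuration):

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X, y = make_classification(n_samples=300, n_features=23,
                                   n_informative=6, random_state=0)

        def fitness(mask):
            # Cross-validated SVM accuracy of a candidate feature subset.
            if not mask.any():
                return 0.0
            return cross_val_score(SVC(), X[:, mask], y, cv=3).mean()

        pop = rng.random((12, X.shape[1])) < 0.5         # random initial subsets
        for gen in range(15):
            scores = np.array([fitness(m) for m in pop])
            parents = pop[np.argsort(scores)[::-1][:6]]  # truncation selection
            children = []
            for _ in range(len(pop) - len(parents)):
                a, b = parents[rng.integers(6)], parents[rng.integers(6)]
                cut = rng.integers(1, X.shape[1])        # one-point crossover
                child = np.concatenate([a[:cut], b[cut:]])
                children.append(child ^ (rng.random(X.shape[1]) < 0.05))  # mutate
            pop = np.vstack([parents, *children])

        best = pop[np.argmax([fitness(m) for m in pop])]
        print("selected features:", np.flatnonzero(best))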

  8. Computer Aided Design of Computer Generated Holograms for electron beam fabrication

    NASA Technical Reports Server (NTRS)

    Urquhart, Kristopher S.; Lee, Sing H.; Guest, Clark C.; Feldman, Michael R.; Farhoosh, Hamid

    1989-01-01

    Computer Aided Design (CAD) systems that have been developed for electrical and mechanical design tasks are also effective tools for the process of designing Computer Generated Holograms (CGHs), particularly when these holograms are to be fabricated using electron beam lithography. CAD workstations provide efficient and convenient means of computing, storing, displaying, and preparing for fabrication many of the features that are common to CGH designs. Experience gained in the process of designing CGHs with various types of encoding methods is presented. Suggestions are made so that future workstations may further accommodate the CGH design process.

  9. Building Efficient Wireless Infrastructures for Pervasive Computing Environments

    ERIC Educational Resources Information Center

    Sheng, Bo

    2010-01-01

    Pervasive computing is an emerging concept that thoroughly brings computing devices and the consequent technology into people's daily life and activities. Most of these computing devices are very small, sometimes even "invisible", and often embedded into the objects surrounding people. In addition, these devices usually are not isolated, but…

  10. Computer-Aided Apparel Design in University Curricula.

    ERIC Educational Resources Information Center

    Belleau, Bonnie D.; Bourgeois, Elva B.

    1991-01-01

    As computer-assisted design (CAD) becomes an integral part of the fashion industry, universities must integrate CAD into the apparel curriculum. Louisiana State University's curriculum enables students to collaborate in CAD problem solving with industry personnel. (SK)

  11. The Use of a Parametric Feature Based CAD System to Teach Introductory Engineering Graphics.

    ERIC Educational Resources Information Center

    Howell, Steven K.

    1995-01-01

    Describes the use of a parametric-feature-based computer-aided design (CAD) system, AutoCAD Designer, in teaching concepts of three-dimensional geometric modeling and design. Allows engineering graphics to go beyond the role of documentation and communication and allows an engineer to actually build a virtual prototype of a design idea and…

  12. 3D-CAD Effects on Creative Design Performance of Different Spatial Abilities Students

    ERIC Educational Resources Information Center

    Chang, Y.

    2014-01-01

    Students' creativity is an important focus globally and is interrelated with students' spatial abilities. Additionally, three-dimensional computer-assisted drawing (3D-CAD) overcomes barriers to spatial expression during the creative design process. Does 3D-CAD affect students' creative abilities? The purpose of this study was to explore the…

  13. [Computer-assisted orbital floor reconstruction. Use of a CAD/CAM implant with intraoperative contact-free 3D endo- and exophthalmometry].

    PubMed

    Kühnel, T V; Vairaktaris, E; Alexiou, C; Schlegel, K A; Neukam, F W; Nkenke, E

    2008-11-01

    Pronounced enophthalmos can restrict patients both functionally and aesthetically. Typical symptoms are binocular double vision and obvious asymmetry, both of which were present in the 67-year-old male patient presented in this paper. The computed tomography data were used to fabricate a patient-specific ceramic implant for reconstruction of the left orbital floor, which showed an enophthalmos of 4 mm. During surgery the implant fitted anatomically, but an exophthalmos occurred. The implant had to be reworked and recontoured in its dorsal portion so that the overcorrection could be reduced. With the assistance of intraoperative contact-free optical 3D endo- and exophthalmometry, the position of the corneal vertex was measured reproducibly. At the end of surgery, the exophthalmos was 1.5 mm. After 12 months, an enophthalmos of only 1 mm remained. This case demonstrates the combination of a patient-specific implant for reconstruction of the orbital floor with optical 3D endo- and exophthalmometry to correct enophthalmos with a high degree of accuracy. These two techniques should therefore be used in combination when complex corrections of enophthalmos are needed.

  14. A Wheat Cinnamyl Alcohol Dehydrogenase TaCAD12 Contributes to Host Resistance to the Sharp Eyespot Disease.

    PubMed

    Rong, Wei; Luo, Meiying; Shan, Tianlei; Wei, Xuening; Du, Lipu; Xu, Huijun; Zhang, Zengyan

    2016-01-01

    Sharp eyespot, caused mainly by the necrotrophic fungus Rhizoctonia cerealis, is a destructive disease in hexaploid wheat (Triticum aestivum L.). In Arabidopsis, certain cinnamyl alcohol dehydrogenases (CADs) have been implicated in monolignol biosynthesis and in the defense response to bacterial pathogen infection. However, little is known about CADs in wheat defense responses to necrotrophic or soil-borne pathogens. In this study, we isolated a wheat CAD gene, TaCAD12, responsive to R. cerealis infection, through microarray-based comparative transcriptomics, and studied the enzyme activity and defense role of TaCAD12 in wheat. The transcriptional levels of TaCAD12 in sharp eyespot-resistant wheat lines were significantly higher than those in susceptible wheat lines. Sequence and phylogenetic analyses revealed that TaCAD12 belongs to group IV of the CAD family. A biochemical assay proved that the TaCAD12 protein is an authentic CAD enzyme and possesses catalytic efficiencies toward both coniferyl aldehyde and sinapyl aldehyde. Knock-down of the TaCAD12 transcript significantly repressed resistance of the gene-silenced wheat plants to sharp eyespot caused by R. cerealis, whereas TaCAD12 overexpression markedly enhanced resistance of the transgenic wheat lines to sharp eyespot. Furthermore, certain defense genes (Defensin, PR10, PR17c, and Chitinase1) and monolignol biosynthesis-related genes (TaCAD1, TaCCR, and TaCOMT1) were up-regulated in the TaCAD12-overexpressing wheat plants but down-regulated in the TaCAD12-silenced plants. These results suggest that TaCAD12 positively contributes to resistance against sharp eyespot through regulation of the expression of certain defense genes and monolignol biosynthesis-related genes in wheat.

  15. A Wheat Cinnamyl Alcohol Dehydrogenase TaCAD12 Contributes to Host Resistance to the Sharp Eyespot Disease

    PubMed Central

    Rong, Wei; Luo, Meiying; Shan, Tianlei; Wei, Xuening; Du, Lipu; Xu, Huijun; Zhang, Zengyan

    2016-01-01

    Sharp eyespot, caused mainly by the necrotrophic fungus Rhizoctonia cerealis, is a destructive disease in hexaploid wheat (Triticum aestivum L.). In Arabidopsis, certain cinnamyl alcohol dehydrogenases (CADs) have been implicated in monolignol biosynthesis and in the defense response to bacterial pathogen infection. However, little is known about CADs in wheat defense responses to necrotrophic or soil-borne pathogens. In this study, we isolated a wheat CAD gene, TaCAD12, responsive to R. cerealis infection, through microarray-based comparative transcriptomics, and studied the enzyme activity and defense role of TaCAD12 in wheat. The transcriptional levels of TaCAD12 in sharp eyespot-resistant wheat lines were significantly higher than those in susceptible wheat lines. Sequence and phylogenetic analyses revealed that TaCAD12 belongs to group IV of the CAD family. A biochemical assay proved that the TaCAD12 protein is an authentic CAD enzyme and possesses catalytic efficiencies toward both coniferyl aldehyde and sinapyl aldehyde. Knock-down of the TaCAD12 transcript significantly repressed resistance of the gene-silenced wheat plants to sharp eyespot caused by R. cerealis, whereas TaCAD12 overexpression markedly enhanced resistance of the transgenic wheat lines to sharp eyespot. Furthermore, certain defense genes (Defensin, PR10, PR17c, and Chitinase1) and monolignol biosynthesis-related genes (TaCAD1, TaCCR, and TaCOMT1) were up-regulated in the TaCAD12-overexpressing wheat plants but down-regulated in the TaCAD12-silenced plants. These results suggest that TaCAD12 positively contributes to resistance against sharp eyespot through regulation of the expression of certain defense genes and monolignol biosynthesis-related genes in wheat. PMID:27899932

  16. Experiences with Efficient Methodologies for Teaching Computer Programming to Geoscientists

    ERIC Educational Resources Information Center

    Jacobs, Christian T.; Gorman, Gerard J.; Rees, Huw E.; Craig, Lorraine E.

    2016-01-01

    Computer programming was once thought of as a skill required only by professional software developers. But today, given the ubiquitous nature of computation and data science it is quickly becoming necessary for all scientists and engineers to have at least a basic knowledge of how to program. Teaching how to program, particularly to those students…

  17. Automated CD-SEM recipe creation technology for mass production using CAD data

    NASA Astrophysics Data System (ADS)

    Kawahara, Toshikazu; Yoshida, Masamichi; Tanaka, Masashi; Ido, Sanyu; Nakano, Hiroyuki; Adachi, Naokaka; Abe, Yuichi; Nagatomo, Wataru

    2011-03-01

    Critical Dimension Scanning Electron Microscope (CD-SEM) recipe creation normally requires preparing a sample for pattern-matching registration and then creating the recipe on the CD-SEM using that sample, which hinders the reduction of test production cost and time in semiconductor manufacturing factories. From the perspective of cost reduction and improved test production efficiency, automated CD-SEM recipe creation without sample preparation or manual operation has become important in production lines. For automated CD-SEM recipe creation, we have introduced RecipeDirector (RD), which enables recipe creation from Computer-Aided Design (CAD) data and text data that includes the measurement information. We have developed a system that automatically creates the CAD data and text data necessary for recipe creation on RD; and, to eliminate manual operation, we have enhanced RD so that all measurement information can be specified in the text data. As a result, we have established an automated CD-SEM recipe creation system that requires neither sample preparation nor manual operation. For the introduction of this system to the production lines, the accuracy of pattern matching was an issue: the design templates for matching, created from the CAD data, differ in appearance from the SEM images, so a robust pattern-matching algorithm that tolerates this shape difference was needed. Adding image processing of the matching templates and shape processing of the CAD patterns in the lower layer has enabled robust pattern matching. This paper describes the automated CD-SEM recipe creation technology for production lines, without sample preparation or manual operation, using RD as applied at Sony Semiconductor Kyusyu Corporation Kumamoto Technology Center (SCK Corporation Kumamoto TEC).

  18. CAD-PACS integration tool kit based on DICOM secondary capture, structured report and IHE workflow profiles.

    PubMed

    Zhou, Zheng; Liu, Brent J; Le, Anh H

    2007-01-01

    Computer aided diagnosis/detection (CAD) goes beyond subjective visual assessment of clinical images by providing quantitative computer analysis of the image content, and can greatly improve clinical diagnostic outcomes. Many CAD applications, both commercial and research, have been developed with no ability to integrate the CAD results with a clinical picture archiving and communication system (PACS). This has hindered the extensive use of CAD for maximum benefit within a clinical environment. In this paper, we present a CAD-PACS integration toolkit that integrates CAD results with a clinical PACS. The toolkit is a software package with two versions: DICOM (digital imaging and communications in medicine)-SC (secondary capture) and DICOM-IHE (Integrating the Healthcare Enterprise). The former uses the DICOM secondary capture object model to convert a screen shot of the CAD results to a DICOM image file for PACS workstations to display, while the latter converts the CAD results to a DICOM structured report (SR) based on IHE Workflow Profiles. The DICOM-SC method is simple and easy to implement but offers no support for further data mining of CAD results, while the DICOM-IHE method enables future data mining of CAD results but is more complicated to implement.
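
    A minimal sketch of the DICOM-SC route using pydicom (assuming pydicom 2.x): a CAD-result screen shot, here faked with a random 8-bit array, is wrapped into a Secondary Capture image that a PACS workstation can display. All tag values are illustrative placeholders, not the toolkit's actual implementation:

        import numpy as np
        from pydicom.dataset import FileDataset, FileMetaDataset
        from pydicom.uid import ExplicitVRLittleEndian, generate_uid

        SC_SOP_CLASS = "1.2.840.10008.5.1.4.1.1.7"   # Secondary Capture Image Storage

        # A random 8-bit array stands in for the CAD-result screen shot.
        shot = (np.random.default_rng(0).random((512, 512)) * 255).astype(np.uint8)

        meta = FileMetaDataset()
        meta.MediaStorageSOPClassUID = SC_SOP_CLASS
        meta.MediaStorageSOPInstanceUID = generate_uid()
        meta.TransferSyntaxUID = ExplicitVRLittleEndian

        ds = FileDataset("cad_result_sc.dcm", {}, file_meta=meta,
                         preamble=b"\x00" * 128,
                         is_implicit_VR=False, is_little_endian=True)
        ds.SOPClassUID = SC_SOP_CLASS
        ds.SOPInstanceUID = meta.MediaStorageSOPInstanceUID
        ds.Modality = "OT"                # "other" is typical for secondary capture
        ds.ConversionType = "WSD"         # workstation-generated image
        ds.PatientName = "CAD^Demo"       # placeholder identifiers only
        ds.PatientID = "000000"
        ds.StudyInstanceUID = generate_uid()
        ds.SeriesInstanceUID = generate_uid()
        ds.Rows, ds.Columns = (int(s) for s in shot.shape)
        ds.SamplesPerPixel = 1
        ds.PhotometricInterpretation = "MONOCHROME2"
        ds.BitsAllocated = ds.BitsStored = 8
        ds.HighBit = 7
        ds.PixelRepresentation = 0
        ds.PixelData = shot.tobytes()
        ds.save_as("cad_result_sc.dcm", write_like_original=False)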

  19. On the Use of CAD-Native Predicates and Geometry in Surface Meshing

    NASA Technical Reports Server (NTRS)

    Aftosmis, M. J.

    1999-01-01

    Several paradigms for accessing computer-aided design (CAD) geometry during surface meshing for computational fluid dynamics are discussed. File translation, inconsistent geometry engines, and nonnative point construction are all identified as sources of nonrobustness. The paper argues in favor of accessing CAD parts and assemblies in their native format, without translation, and for the use of CAD-native predicates and constructors in surface mesh generation. The discussion also emphasizes the importance of examining the computational requirements for exact evaluation of triangulation predicates during surface meshing.

  20. Creation of the Driver Fixed Heel Point (FHP) CAD Accommodation Model for Military Ground Vehicle Design

    DTIC Science & Technology

    2016-08-04

    The objective of this effort is to create a parametric Computer-Aided Design (CAD) accommodation model for the Fixed Heel Point (FHP) driver workspace. Subject terms: Fixed Heel Point (FHP), accommodation model, occupant work space, central 90% of the Soldier population, encumbrance, posture and position, computer-aided design.

  1. Program Evolves from Basic CAD to Total Manufacturing Experience

    ERIC Educational Resources Information Center

    Cassola, Joel

    2011-01-01

    Close to a decade ago, John Hersey High School (JHHS) in Arlington Heights, Illinois, made a transition from a traditional classroom-based pre-engineering program. The new program is geared towards helping students understand the entire manufacturing process. Previously, a JHHS student would design a project in computer-aided design (CAD) software…

  2. Present State of CAD Teaching in Spanish Universities

    ERIC Educational Resources Information Center

    Garcia, Ramon Rubio; Santos, Ramon Gallego; Quiros, Javier Suarez; Penin, Pedro I. Alvarez

    2005-01-01

    During the 1990s, all Spanish Universities updated the syllabuses of their courses as a result of the entry into force of the new Organic Law of Universities ("Ley Organica de Universidades") and, for the first time, "Computer Assisted Design" (CAD) appears in the list of core subjects (compulsory teaching content set by the…

  3. Correlating Trainee Attributes to Performance in 3D CAD Training

    ERIC Educational Resources Information Center

    Hamade, Ramsey F.; Artail, Hassan A.; Sikstrom, Sverker

    2007-01-01

    Purpose: The purpose of this exploratory study is to identify trainee attributes relevant for development of skills in 3D computer-aided design (CAD). Design/methodology/approach: Participants were trained to perform cognitive tasks of comparable complexity over time. Performance data were collected on the time needed to construct test models, and…

  4. The design and construction of the CAD-1 airship

    NASA Technical Reports Server (NTRS)

    Kleiner, H. J.; Schneider, R.; Duncan, J. L.

    1975-01-01

    The background history, design philosophy and computer applications related to the design of the envelope shape, stress calculations and flight trajectories of the CAD-1 airship, now under construction by the Canadian Airship Development Corporation, are reported. A three-phase proposal for future development of larger cargo-carrying airships is included.

  5. Computationally efficient algorithms for real-time attitude estimation

    NASA Technical Reports Server (NTRS)

    Pringle, Steven R.

    1993-01-01

    For many practical spacecraft applications, algorithms for determining spacecraft attitude must combine inputs from diverse sensors and provide redundancy in the event of sensor failure. A Kalman filter is suitable for this task; however, it may impose a computational burden that can be avoided by suboptimal methods. A suboptimal estimator is presented that was implemented successfully on the Delta Star spacecraft, which performed a 9-month SDI flight experiment in 1989. The design sought to minimize algorithm complexity to accommodate the limitations of an 8K guidance computer. The algorithm used is interpreted in the framework of Kalman filtering, and a derivation is given for the computation.
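
    The flavor of such a fixed-gain, low-complexity estimator can be sketched for a single axis: propagate the attitude with the gyro, then correct toward a reference sensor with a constant gain (a complementary-filter simplification; the signals, noise levels, and gain below are invented, not the Delta Star design):

        import numpy as np

        # One-axis toy: integrate a biased, noisy gyro and correct toward a
        # noisy reference sensor with a constant gain K.  In a real design K
        # would come from offline Kalman analysis; everything here is invented.
        rng = np.random.default_rng(0)
        dt, K, n = 0.01, 0.02, 5000
        true_angle, bias, est = 0.0, 0.05, 0.0        # rad, rad/s, rad
        for _ in range(n):
            true_angle += 0.1 * dt                    # vehicle slews at 0.1 rad/s
            gyro = 0.1 + bias + rng.normal(scale=0.01)
            ref = true_angle + rng.normal(scale=0.005)  # e.g. a sun-sensor angle
            est += gyro * dt                          # propagate with the gyro
            est += K * (ref - est)                    # fixed-gain update
        print(f"steady-state error: {abs(est - true_angle):.3f} rad")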

  6. Efficient reinforcement learning: computational theories, neuroscience and robotics.

    PubMed

    Kawato, Mitsuo; Samejima, Kazuyuki

    2007-04-01

    Reinforcement learning algorithms have provided some of the most influential computational theories for behavioral learning that depends on reward and penalty. After briefly reviewing supporting experimental data, this paper tackles three difficult theoretical issues that remain to be explored. First, plain reinforcement learning is much too slow to be considered a plausible brain model. Second, although the temporal-difference error has an important role both in theory and in experiments, how the brain computes it remains an enigma. Third, the function of all brain areas, including the cerebral cortex, cerebellum, brainstem and basal ganglia, seems to necessitate a new computational framework. Computational studies that emphasize meta-parameters, hierarchy, modularity and supervised learning to resolve these issues are reviewed here, together with the related experimental data.
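
    For the temporal-difference error mentioned above, the textbook TD(0) update is delta = r + gamma V(s') - V(s), followed by V(s) += alpha * delta. A minimal demo on an invented five-state chain with a terminal reward:

        import numpy as np

        rng = np.random.default_rng(0)
        n_states, alpha, gamma = 5, 0.1, 0.95
        V = np.zeros(n_states)                       # state values; terminal stays 0

        for episode in range(500):
            s = 0
            while s < n_states - 1:
                s2 = s + 1 if rng.random() < 0.9 else max(s - 1, 0)  # noisy walk
                r = 1.0 if s2 == n_states - 1 else 0.0
                target = r + gamma * V[s2] * (s2 != n_states - 1)
                delta = target - V[s]                # the TD error
                V[s] += alpha * delta
                s = s2

        print(np.round(V, 3))                        # values rise toward the goal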

  7. How to Quickly Import CAD Geometry into Thermal Desktop

    NASA Technical Reports Server (NTRS)

    Wright, Shonte; Beltran, Emilio

    2002-01-01

    There are several groups at JPL (Jet Propulsion Laboratory) that are committed to concurrent design efforts; two are featured here. The Center for Space Mission Architecture and Design (CSMAD) enables the practical application of advanced process technologies in JPL's mission architecture process. Team I functions as an incubator for projects that are in the Discovery, and even pre-Discovery, proposal stages. JPL's concurrent design environment is to a large extent centered on the CAD (Computer Aided Design) file. During concurrent design sessions, CAD geometry is ported to other, more specialized engineering design packages.

  8. Mapping methods for computationally efficient and accurate structural reliability

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Chamis, Christos C.

    1992-01-01

    Mapping methods are developed to improve the accuracy and efficiency of probabilistic structural analyses with coarse finite element meshes. The mapping methods consist of the following: (1) deterministic structural analyses with fine (convergent) finite element meshes; (2) probabilistic structural analyses with coarse finite element meshes; (3) the relationship between the probabilistic structural responses from the coarse and fine finite element meshes; and (4) a probabilistic mapping. The results show that the scatter in the probabilistic structural responses and structural reliability can be efficiently predicted using a coarse finite element model and proper mapping methods with good accuracy. Therefore, large structures can be efficiently analyzed probabilistically using finite element methods.

  9. Limits on efficient computation in the physical world

    NASA Astrophysics Data System (ADS)

    Aaronson, Scott Joel

    More than a speculative technology, quantum computing seems to challenge our most basic intuitions about how the physical world should behave. In this thesis I show that, while some intuitions from classical computer science must be jettisoned in the light of modern physics, many others emerge nearly unscathed; and I use powerful tools from computational complexity theory to help determine which are which. In the first part of the thesis, I attack the common belief that quantum computing resembles classical exponential parallelism, by showing that quantum computers would face serious limitations on a wider range of problems than was previously known. In particular, any quantum algorithm that solves the collision problem---that of deciding whether a sequence of n integers is one-to-one or two-to-one---must query the sequence Ω(n^{1/5}) times. This resolves a question that was open for years; previously no lower bound better than constant was known. A corollary is that there is no "black-box" quantum algorithm to break cryptographic hash functions or solve the Graph Isomorphism problem in polynomial time. I also show that relative to an oracle, quantum computers could not solve NP-complete problems in polynomial time, even with the help of nonuniform "quantum advice states"; and that any quantum algorithm needs Ω(2^{n/4}/n) queries to find a local minimum of a black-box function on the n-dimensional hypercube. Surprisingly, the latter result also leads to new classical lower bounds for the local search problem. Finally, I give new lower bounds on quantum one-way communication complexity, and on the quantum query complexity of total Boolean functions and recursive Fourier sampling. The second part of the thesis studies the relationship of the quantum computing model to physical reality. I first examine the arguments of Leonid Levin, Stephen Wolfram, and others who believe quantum computing to be fundamentally impossible. I find their arguments unconvincing without a "Sure

  10. Methods for Computationally Efficient Structured CFD Simulations of Complex Turbomachinery Flows

    NASA Technical Reports Server (NTRS)

    Herrick, Gregory P.; Chen, Jen-Ping

    2012-01-01

    This research presents more efficient computational methods by which to perform multi-block structured Computational Fluid Dynamics (CFD) simulations of turbomachinery, thus facilitating higher-fidelity solutions of complicated geometries and their associated flows. This computational framework offers flexibility in allocating resources to balance process count and wall-clock computation time, while facilitating research interests of simulating axial compressor stall inception with more complete gridding of the flow passages and rotor tip clearance regions than is typically practiced with structured codes. The paradigm presented herein facilitates CFD simulation of previously impractical geometries and flows. These methods are validated and demonstrate improved computational efficiency when applied to complicated geometries and flows.

  11. CAD Skills Increased through Multicultural Design Project

    ERIC Educational Resources Information Center

    Clemons, Stephanie

    2006-01-01

    This article discusses how students in a college-entry-level CAD course researched four generations of their family histories and documented cultural and symbolic influences within their family backgrounds. AutoCAD software was then used to manipulate those cultural and symbolic images to create the design for a multicultural area rug. AutoCAD was…

  12. Cool-and Unusual-CAD Applications

    ERIC Educational Resources Information Center

    Calhoun, Ken

    2004-01-01

    This article describes several very useful applications of AutoCAD that may lie outside the normal scope of application. AutoCAD commands used in this article are based on AutoCAD 2000I. The author and his students used a Hewlett Packard 750C DesignJet plotter for plotting. (Contains 5 figures and 5 photos.)

  13. An Efficient Virtual Machine Consolidation Scheme for Multimedia Cloud Computing

    PubMed Central

    Han, Guangjie; Que, Wenhui; Jia, Gangyong; Shu, Lei

    2016-01-01

    Cloud computing has innovated the IT industry in recent years, as it can deliver subscription-based services to users in the pay-as-you-go model. Meanwhile, multimedia cloud computing is emerging based on cloud computing to provide a variety of media services on the Internet. However, with the growing popularity of multimedia cloud computing, its large energy consumption not only contributes to greenhouse gas emissions but also raises cloud users' costs. Therefore, multimedia cloud providers should try to minimize their energy consumption as much as possible while satisfying consumers' resource requirements and guaranteeing quality of service (QoS). In this paper, we propose a remaining utilization-aware (RUA) algorithm for virtual machine (VM) placement and a power-aware (PA) algorithm to find proper hosts to shut down for energy saving. These two algorithms are combined and applied to cloud data centers to complete the process of VM consolidation. Simulation results show that there exists a trade-off between the cloud data center's energy consumption and service-level agreement (SLA) violations. Moreover, the RUA algorithm is able to deal with variable workloads to prevent hosts from overloading after VM placement and to reduce SLA violations dramatically. PMID:26901201
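
    The family of heuristics involved can be sketched with a tightest-fit placement that packs VMs so that spare hosts empty out and can be powered down (a generic sketch of utilization-aware consolidation, not the paper's RUA/PA algorithms; all capacities and demands are invented):

        hosts = [{"id": h, "cap": 100.0, "used": 0.0} for h in range(3)]
        vms = [37.0, 52.0, 14.0, 25.0, 61.0, 9.0]    # CPU demand of each VM

        for demand in sorted(vms, reverse=True):     # biggest VMs first
            feasible = [h for h in hosts if h["cap"] - h["used"] >= demand]
            if not feasible:
                raise RuntimeError("no host fits; a real system would scale out")
            # Tightest fit: the host left with the least remaining capacity.
            # Packing tightly tends to leave other hosts empty for shutdown.
            target = min(feasible, key=lambda h: h["cap"] - h["used"] - demand)
            target["used"] += demand

        for h in hosts:
            print(f'host {h["id"]}: {h["used"]:.0f}/100 used')

    With these numbers, two hosts end up nearly full and the third stays empty, which is exactly the state that lets a power-aware policy shut it down.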

  14. Efficiency of Computer Literacy Course in Communication Studies

    ERIC Educational Resources Information Center

    Gümüs, Agah; Özad, Bahire Efe

    2004-01-01

    Following the exponential increase in the global usage of the Internet as one of the main tools for communication, the Internet established itself as the fourth most powerful media. In a similar vein, computer literacy education and related courses established themselves as the essential components of the Faculty of Communication and Media…

  15. An Efficient Virtual Machine Consolidation Scheme for Multimedia Cloud Computing.

    PubMed

    Han, Guangjie; Que, Wenhui; Jia, Gangyong; Shu, Lei

    2016-02-18

    Cloud computing has innovated the IT industry in recent years, as it can deliver subscription-based services to users in the pay-as-you-go model. Meanwhile, multimedia cloud computing is emerging based on cloud computing to provide a variety of media services on the Internet. However, with the growing popularity of multimedia cloud computing, its large energy consumption not only contributes to greenhouse gas emissions but also raises cloud users' costs. Therefore, multimedia cloud providers should try to minimize their energy consumption as much as possible while satisfying consumers' resource requirements and guaranteeing quality of service (QoS). In this paper, we propose a remaining utilization-aware (RUA) algorithm for virtual machine (VM) placement and a power-aware (PA) algorithm to find proper hosts to shut down for energy saving. These two algorithms are combined and applied to cloud data centers to complete the process of VM consolidation. Simulation results show that there exists a trade-off between the cloud data center's energy consumption and service-level agreement (SLA) violations. Moreover, the RUA algorithm is able to deal with variable workloads to prevent hosts from overloading after VM placement and to reduce SLA violations dramatically.

  16. Comparative fracture strength analysis of Lava and Digident CAD/CAM zirconia ceramic crowns

    PubMed Central

    Kwon, Taek-Ka; Pak, Hyun-Soon; Han, Jung-Suk; Lee, Jai-Bong; Kim, Sung-Hun

    2013-01-01

    PURPOSE All-ceramic crowns are subject to fracture during function. To minimize this common clinical complication, zirconium oxide has been used as the framework for all-ceramic crowns. The aim of this study was to compare the fracture strengths of two computer-aided design/computer-aided manufacturing (CAD/CAM) zirconia crown systems: Lava and Digident. MATERIALS AND METHODS Twenty Lava CAD/CAM zirconia crowns and twenty Digident CAD/CAM zirconia crowns were fabricated. A metal die was also duplicated from the original prepared tooth for fracture testing. A universal testing machine was used to determine the fracture strength of the crowns. RESULTS The mean fracture strengths were as follows: 54.9 ± 15.6 N for the Lava CAD/CAM zirconia crowns and 87.0 ± 16.0 N for the Digident CAD/CAM zirconia crowns. The difference between the mean fracture strengths of the Lava and Digident crowns was statistically significant (P<.001). Lava CAD/CAM zirconia crowns showed a complete fracture of both the veneering porcelain and the core whereas the Digident CAD/CAM zirconia crowns showed fracture only of the veneering porcelain. CONCLUSION The fracture strengths of CAD/CAM zirconia crowns differ depending on the compatibility of the core material and the veneering porcelain. PMID:23755332

  17. Quality assurance and training procedures for computer-aided detection and diagnosis systems in clinical use.

    PubMed

    Huo, Zhimin; Summers, Ronald M; Paquerault, Sophie; Lo, Joseph; Hoffmeister, Jeffrey; Armato, Samuel G; Freedman, Matthew T; Lin, Jesse; Lo, Shih-Chung Ben; Petrick, Nicholas; Sahiner, Berkman; Fryd, David; Yoshida, Hiroyuki; Chan, Heang-Ping

    2013-07-01

    Computer-aided detection/diagnosis (CAD) is increasingly used for decision support by clinicians for the detection and interpretation of diseases. However, there are no quality assurance (QA) requirements for CAD in clinical use at present. QA of CAD is important so that end users can be made aware of changes in CAD performance due to either intentional or unintentional causes. In addition, end-user training is critical to prevent improper use of CAD, which could potentially result in lower overall clinical performance. Research on QA of CAD and user training is limited to date. The purpose of this paper is to bring attention to these issues, inform the readers of the opinions of the members of the American Association of Physicists in Medicine (AAPM) CAD subcommittee, and thus stimulate further discussion in the CAD community on these topics. The recommendations in this paper are intended to be work items for AAPM task groups that will be formed to address QA and user training issues on CAD in the future. The work items may serve as a framework for the discussion and eventual design of detailed QA and training procedures for physicists and users of CAD. Some of the recommendations are considered by the subcommittee to be reasonably easy and practical and can be implemented immediately by the end users; others are considered to be "best practice" approaches, which may require significant effort, additional tools, and proper training to implement. The eventual standardization of the requirements of QA procedures for CAD will have to be determined through consensus from members of the CAD community, and user training may require support of professional societies. It is expected that high-quality CAD and proper use of CAD could allow these systems to achieve their true potential, thus benefiting both the patients and the clinicians, and may bring about more widespread clinical use of CAD for many other diseases and applications. It is hoped that the awareness of the need

  18. Learning with Computer-Based Multimedia: Gender Effects on Efficiency

    ERIC Educational Resources Information Center

    Pohnl, Sabine; Bogner, Franz X.

    2012-01-01

    Up to now, only a few studies in multimedia learning have focused on gender effects. While research has mostly focused on learning success, the effect of gender on instructional efficiency (IE) has not yet been considered. Consequently, we used a quasi-experimental design to examine possible gender differences in the learning success, mental…

  19. Computational Complexity, Efficiency and Accountability in Large Scale Teleprocessing Systems.

    DTIC Science & Technology

    1980-12-01

    Complexity, Efficiency and Accountability in Large Scale Teleprocessing Systems; Stanford University; John T. Gill, Martin E. Hellman; contract DAAG29-78-C-0036. ...solve but easy to check. We have also suggested how such random tapes can be simulated by deterministically generating "pseudorandom" numbers by a

  20. College Students' Reading Efficiency with Computer-Presented Text.

    ERIC Educational Resources Information Center

    Wepner, Shelley B.; Feeley, Joan T.

    Focusing on improving college students' reading efficiency, a study investigated whether a commercially-prepared computerized speed reading package, Speed Reader II, could be utilized as effectively as traditionally printed text. Subjects were 70 college freshmen from a college reading and rate improvement course with borderline scores on the…

  1. Complete denture fabrication supported by CAD/CAM.

    PubMed

    Wimmer, Timea; Gallus, Korbinian; Eichberger, Marlis; Stawarczyk, Bogna

    2016-05-01

    The inclusion of computer-aided design/computer-aided manufacturing (CAD/CAM) technology into complete denture fabrication facilitates the procedures. The presented workflow for complete denture fabrication combines conventional and digitally supported treatment steps for improving dental care. With the presented technique, the registration of the occlusal plane, the determination of the ideal lip support, and the verification of the maxillomandibular relationship record are considered.

  2. IFEMS, an Interactive Finite Element Modeling System Using a CAD/CAM System

    NASA Technical Reports Server (NTRS)

    Mckellip, S.; Schuman, T.; Lauer, S.

    1980-01-01

    A method of coupling a CAD/CAM system with a general purpose finite element mesh generator is described. The three computer programs which make up the interactive finite element graphics system are discussed.

  3. A New Stochastic Computing Methodology for Efficient Neural Network Implementation.

    PubMed

    Canals, Vincent; Morro, Antoni; Oliver, Antoni; Alomar, Miquel L; Rosselló, Josep L

    2016-03-01

    This paper presents a new methodology for the hardware implementation of neural networks (NNs) based on probabilistic laws. The proposed encoding scheme circumvents the limitations of classical stochastic computing (based on unipolar or bipolar encoding) by extending the representation range to any real number using the ratio of two bipolar-encoded pulsed signals. Furthermore, the novel approach is practically immune to noise due to its specific codification. We introduce different designs for building the fundamental blocks needed to implement NNs. The validity of the approach is demonstrated through a regression and a pattern recognition task. The low hardware cost of the methodology, along with its capacity to implement complex mathematical functions (such as the hyperbolic tangent), makes it suitable for building highly reliable systems and for parallel computing.
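
    For orientation, classical bipolar stochastic computing encodes a value v in [-1, 1] as a bit stream with P(1) = (v + 1)/2, and multiplication then reduces to a single XNOR gate between independent streams; the paper's two-signal ratio encoding extends the range beyond this. A minimal demo of the classical bipolar scheme:

        import numpy as np

        rng = np.random.default_rng(0)
        N = 100_000                                   # stream length = accuracy knob

        def encode(v):
            # Bipolar encoding: value v in [-1, 1] -> bit stream, P(1) = (v+1)/2.
            return rng.random(N) < (v + 1) / 2

        def decode(bits):
            return 2 * bits.mean() - 1

        a, b = 0.6, -0.4
        product = decode(~(encode(a) ^ encode(b)))    # XNOR multiplies bipolar values
        print(product, "vs exact", a * b)             # ~ -0.24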

  4. Labeled trees and the efficient computation of derivations

    NASA Technical Reports Server (NTRS)

    Grossman, Robert; Larson, Richard G.

    1989-01-01

    The effective parallel symbolic computation of operators under composition is discussed. Examples include differential operators under composition and vector fields under the Lie bracket. Data structures consisting of formal linear combinations of rooted labeled trees are discussed. A multiplication on rooted labeled trees is defined, thereby making the set of these data structures into an associative algebra. An algebra homomorphism is defined from the original algebra of operators into this algebra of trees. An algebra homomorphism from the algebra of trees into the algebra of differential operators is then described. The cancellation which occurs when noncommuting operators are expressed in terms of commuting ones occurs naturally when the operators are represented using this data structure. This leads to an algorithm which, for operators which are derivations, speeds up the computation exponentially in the degree of the operator. It is shown that the algebra of trees leads naturally to a parallel version of the algorithm.

  5. Computationally efficient statistical differential equation modeling using homogenization

    USGS Publications Warehouse

    Hooten, Mevin B.; Garlick, Martha J.; Powell, James A.

    2013-01-01

    Statistical models using partial differential equations (PDEs) to describe dynamically evolving natural systems are appearing in the scientific literature with some regularity in recent years. Often such studies seek to characterize the dynamics of temporal or spatio-temporal phenomena such as invasive species, consumer-resource interactions, community evolution, and resource selection. Specifically, in the spatial setting, data are often available at varying spatial and temporal scales. Additionally, the necessary numerical integration of a PDE may be computationally infeasible over the spatial support of interest. We present an approach to impose computationally advantageous changes of support in statistical implementations of PDE models and demonstrate its utility through simulation using a form of PDE known as “ecological diffusion.” We also apply a statistical ecological diffusion model to a data set involving the spread of mountain pine beetle (Dendroctonus ponderosae) in Idaho, USA.

  6. A Simple and Resource-efficient Setup for the Computer-aided Drug Design Laboratory.

    PubMed

    Moretti, Loris; Sartori, Luca

    2016-10-01

    Undertaking modelling investigations for Computer-Aided Drug Design (CADD) requires a proper environment. In principle, this could be done on a single computer, but the reality of a drug discovery program requires robustness and high-throughput computing (HTC) to efficiently support the research. Therefore, a more capable alternative is needed, but its implementation has no widespread solution. Here, the realization of such a computing facility is discussed; all aspects, from the general layout to technical details, are covered.

  7. Unwrapping ADMM: Efficient Distributed Computing via Transpose Reduction

    DTIC Science & Technology

    2016-05-11

    8.2 Empirical Case Study: Classifying Guide Stars. We perform experiments using the Second Generation Guide Star Catalog (GSC-II), a database containing spectral and geometric features for 950 million stars and other objects. The GSC-II also classifies each astronomical body as "star" or "not a star." We train a sparse logistic classifier to discern this

  8. Invited review: efficient computation strategies in genomic selection.

    PubMed

    Misztal, I; Legarra, A

    2016-11-21

    The purpose of this study is the review and evaluation of computing methods used in genomic selection for animal breeding. Commonly used models include SNP BLUP with extensions (BayesA, etc.), genomic BLUP (GBLUP) and single-step GBLUP (ssGBLUP). These models are applied for genome-wide association studies (GWAS), genomic prediction and parameter estimation. Solving methods include finite Cholesky decomposition, possibly with a sparse implementation, and iterative Gauss-Seidel (GS) or preconditioned conjugate gradient (PCG), the last two methods possibly with iteration on data. Details are provided that can drastically decrease some computations. For SNP BLUP, especially with sampling and a large number of SNPs, the only choice is GS with iteration on data and adjustment of residuals. If only solutions are required, PCG with iteration on data is a clear choice. A genomic relationship matrix (GRM) has limited dimensionality due to small effective population size, resulting in an infinite number of generalized inverses of the GRM for large genotyped populations. A specific inverse called APY requires only a small fraction of the GRM, is sparse, and can be computed and stored at a low cost for millions of animals. With the APY inverse and PCG iteration, GBLUP and ssGBLUP can be applied to any population. Both tools can be applied to GWAS. When the system of equations is sparse but contains dense blocks, a recently developed package for sparse Cholesky decomposition and sparse inversion called YAMS has greatly improved performance over packages where such blocks were treated as sparse. With YAMS, GREML and possibly single-step GREML can be applied to populations with >50 000 genotyped animals. From a computational perspective, genomic selection is becoming a mature methodology.
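
    The PCG solver named above is short enough to sketch in full; here with a Jacobi (diagonal) preconditioner on a small dense symmetric positive-definite stand-in for the mixed-model equations (illustrative only; production codes iterate on data without ever forming the matrix):

        import numpy as np

        def pcg(A, b, tol=1e-10, max_iter=1000):
            # Preconditioned conjugate gradient with a Jacobi preconditioner.
            x = np.zeros_like(b)
            r = b - A @ x
            M_inv = 1.0 / np.diag(A)
            z = M_inv * r
            p = z.copy()
            rz = r @ z
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol:
                    break
                z = M_inv * r
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x

        rng = np.random.default_rng(0)
        G = rng.normal(size=(50, 50))
        A = G @ G.T + 50 * np.eye(50)        # SPD stand-in for the MME matrix
        b = rng.normal(size=50)
        assert np.allclose(A @ pcg(A, b), b, atol=1e-6)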

  9. Digital data management for CAD/CAM technology. An update of current systems.

    PubMed

    Andreiotelli, M; Kamposiora, P; Papavasiliou, G

    2013-03-01

    Computer-aided design/computer-aided manufacturing (CAD/CAM) technology continues to rapidly evolve in the dental community. This review article provides an overview of the operational components and methodologies used with some of the CAD/CAM systems. Future trends are also discussed. While these systems show great promise, the quality of performance varies among systems. No single system currently acquires data directly in the oral cavity and produces restorations using all available materials. Further refinements of these CAD/CAM technologies may increase their capabilities, but further special training will be required for effective use.

  10. Fracture resistance of CAD/CAM-fabricated fiber-reinforced composite denture retainers.

    PubMed

    Nagata, Kohji; Wakabayashi, Noriyuki; Takahashi, Hidekazu; Vallittu, Pekka K; Lassila, Lippo V J

    2013-01-01

    The purpose of this study was to evaluate the fracture resistance of computer-aided design/computer-assisted manufacture (CAD/CAM)-fabricated fiber-reinforced composite (FRC) denture retainers. Distal extension dentures incorporating two telescopic retainers and two molar pontics, with or without fiberglass, were fabricated by CAD/CAM or by the conventional polymerization method. The dentures were subjected to a vertical load on the second molar pontic until fracture. Within each manufacturing method, embedment of the FRC increased the mean final fracture load, suggesting the reinforcing effect of fiberglass. The polymerized dentures with FRC showed greater mean final fracture load than the CAD/CAM dentures with FRC.

  11. Chunking as the result of an efficiency computation trade-off

    PubMed Central

    Ramkumar, Pavan; Acuna, Daniel E.; Berniker, Max; Grafton, Scott T.; Turner, Robert S.; Kording, Konrad P.

    2016-01-01

    How to move efficiently is an optimal control problem, whose computational complexity grows exponentially with the horizon of the planned trajectory. Breaking a compound movement into a series of chunks, each planned over a shorter horizon can thus reduce the overall computational complexity and associated costs while limiting the achievable efficiency. This trade-off suggests a cost-effective learning strategy: to learn new movements we should start with many short chunks (to limit the cost of computation). As practice reduces the impediments to more complex computation, the chunking structure should evolve to allow progressively more efficient movements (to maximize efficiency). Here we show that monkeys learning a reaching sequence over an extended period of time adopt this strategy by performing movements that can be described as locally optimal trajectories. Chunking can thus be understood as a cost-effective strategy for producing and learning efficient movements. PMID:27397420

  12. Efficient Helicopter Aerodynamic and Aeroacoustic Predictions on Parallel Computers

    NASA Technical Reports Server (NTRS)

    Wissink, Andrew M.; Lyrintzis, Anastasios S.; Strawn, Roger C.; Oliker, Leonid; Biswas, Rupak

    1996-01-01

    This paper presents parallel implementations of two codes used in a combined CFD/Kirchhoff methodology to predict the aerodynamic and aeroacoustic properties of helicopters. The rotorcraft Navier-Stokes code, TURNS, computes the aerodynamic flowfield near the helicopter blades and the Kirchhoff acoustics code computes the noise in the far field, using the TURNS solution as input. The overall parallel strategy adds MPI message passing calls to the existing serial codes to allow for communication between processors. As a result, the total code modifications required for parallel execution are relatively small. The biggest bottleneck in running the TURNS code in parallel comes from the LU-SGS algorithm that solves the implicit system of equations. We use a new hybrid domain decomposition implementation of LU-SGS to obtain good parallel performance on the SP-2. TURNS demonstrates excellent parallel speedups for quasi-steady and unsteady three-dimensional calculations of a helicopter blade in forward flight. The execution rate attained by the code on 114 processors is six times faster than the same cases run on one processor of the Cray C-90. The parallel Kirchhoff code also shows excellent parallel speedups and fast execution rates. As a performance demonstration, unsteady acoustic pressures are computed at 1886 far-field observer locations for a sample acoustics problem. The calculation requires over two hundred hours of CPU time on one C-90 processor but takes only a few hours on 80 processors of the SP2. The resultant far-field acoustic field is analyzed with state-of-the-art audio and video rendering of the propagating acoustic signals.

  13. Design of efficient computational workflows for in silico drug repurposing.

    PubMed

    Vanhaelen, Quentin; Mamoshina, Polina; Aliper, Alexander M; Artemov, Artem; Lezhnina, Ksenia; Ozerov, Ivan; Labat, Ivan; Zhavoronkov, Alex

    2017-02-01

    Here, we provide a comprehensive overview of the current status of in silico repurposing methods by establishing links between current technological trends, data availability and characteristics of the algorithms used in these methods. Using the case of the computational repurposing of fasudil as an alternative autophagy enhancer, we suggest a generic modular organization of a repurposing workflow. We also review 3D structure-based, similarity-based, inference-based and machine learning (ML)-based methods. We summarize the advantages and disadvantages of these methods to emphasize three current technical challenges. We finish by discussing current directions of research, including possibilities offered by new methods, such as deep learning.

  14. Computer-aided design development transition for IPAD environment

    NASA Technical Reports Server (NTRS)

    Owens, H. G.; Mock, W. D.; Mitchell, J. C.

    1980-01-01

    The relationship of federally sponsored computer-aided design/computer-aided manufacturing (CAD/CAM) programs to the aircraft life cycle design process, an overview of NAAD's CAD development program, an evaluation of the CAD design process, a discussion of the current computing environment within which NAAD is developing its CAD system, some of the advantages/disadvantages of the NAAD-IPAD approach, and CAD developments during transition into the IPAD system are discussed.

  15. Mechanical properties and DIC analyses of CAD/CAM materials

    PubMed Central

    Roperto, Renato; Akkus, Anna; Akkus, Ozan; Porto-Neto, Sizenando; Teich, Sorin; Lang, Lisa; Campos, Edson

    2016-01-01

    Background: This study compared two well-known computer-aided design/computer-aided manufacturing (CAD/CAM) blocks (Paradigm MZ100 [3M ESPE] and Vitablocs Mark II [Vita]) in terms of fracture toughness (KIC), index of brittleness (BI) and stress/strain distributions. Material and Methods: A three-point bending test was used to calculate the fracture toughness, and the relationship between the KIC and the Vickers hardness was used to calculate the index of brittleness. Additionally, digital image correlation (DIC) was used to analyze the stress/strain distribution in both materials. Results: The fracture toughness values obtained under three-point bending were 1.87 MPa√m (±0.69) for Paradigm MZ100 and 1.18 MPa√m (±0.17) for Vitablocs Mark II. The index of brittleness values for Paradigm and Vitablocs were 73.13 μm^(-1/2) (±30.72) and 550.22 μm^(-1/2) (±82.46), respectively. One-way ANOVA was performed to find differences (α=0.05), and differences were detected between the stress/strain distributions of the two materials. Conclusions: Both CAD/CAM materials tested presented similar fracture toughness but different strain/stress distributions. Both materials may perform similarly when used in CAD/CAM restorations. Key words: Ceramic, CAD/CAM, hybrid materials, composite resin, fracture toughness. PMID:27957262

  16. Model-Based Engineering and Manufacturing CAD/CAM Benchmark

    SciTech Connect

    Domm, T.D.; Underwood, R.S.

    1999-04-26

    The Benchmark Project was created from a desire to identify best practices and improve the overall efficiency and performance of the Y-12 Plant's systems and personnel supporting the manufacturing mission. The mission of the benchmark team was to search out industry leaders in manufacturing and evaluate their engineering practices and processes to determine direction and focus for Y-12 modernization efforts. The companies visited included several large established companies and a new, small, high-tech machining firm. As a result of this effort, changes are recommended that will enable Y-12 to become a more responsive, cost-effective manufacturing facility capable of supporting the needs of the Nuclear Weapons Complex (NWC) and Work For Others into the 21st century. The benchmark team identified key areas of interest, both focused and general. The focus areas included Human Resources, Information Management, Manufacturing Software Tools, and Standards/Policies and Practices. Areas of general interest included Infrastructure, Computer Platforms and Networking, and Organizational Structure. The method for obtaining the desired information in these areas centered on the creation of a benchmark questionnaire. The questionnaire was used throughout each of the visits as the basis for information gathering. The results of this benchmark showed that all companies are moving in the direction of model-based engineering and manufacturing. There was evidence that many companies are trying to grasp how to manage current and legacy data. In terms of engineering design software tools, the companies contacted were using both 3-D solid modeling and surfaced wire-frame models. The manufacturing computer tools were varied, with most companies using more than one software product to generate machining data and none currently performing model-based manufacturing (MBM) from a common model. The majority of companies were closer to identifying or using a single computer-aided design (CAD) system than a

  17. CAD-Based Aerodynamic Design of Complex Configurations using a Cartesian Method

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.; Pulliam, Thomas H.

    2003-01-01

    A modular framework for aerodynamic optimization of complex geometries is developed. By working directly with a parametric CAD system, complex-geometry models are modified and tessellated in an automatic fashion. The use of a component-based Cartesian method significantly reduces the demands on the CAD system, and also provides for robust and efficient flowfield analysis. The optimization is controlled using either a genetic or quasi-Newton algorithm. Parallel efficiency of the framework is maintained even when subject to limited CAD resources by dynamically re-allocating the processors of the flow solver. Overall, the resulting framework can explore designs incorporating large shape modifications and changes in topology.

  18. Computationally Efficient Marginal Models for Clustered Recurrent Event Data

    PubMed Central

    Liu, Dandan; Schaubel, Douglas E.; Kalbfleisch, John D.

    2012-01-01

    Summary Large observational databases derived from disease registries and retrospective cohort studies have proven very useful for the study of health services utilization. However, the use of large databases may introduce computational difficulties, particularly when the event of interest is recurrent. In such settings, grouping the recurrent event data into pre-specified intervals leads to a flexible event rate model and a data reduction which remedies the computational issues. We propose a possibly stratified marginal proportional rates model with a piecewise-constant baseline event rate for recurrent event data. Both the absence and the presence of a terminal event are considered. Large-sample distributions are derived for the proposed estimators. Simulation studies are conducted under various data configurations, including settings in which the model is misspecified. Guidelines for interval selection are provided and assessed using numerical studies. We then show that the proposed procedures can be carried out using standard statistical software (e.g., SAS, R). An application based on national hospitalization data for end stage renal disease patients is provided. PMID:21957989

  19. Probabilistic Damage Characterization Using the Computationally-Efficient Bayesian Approach

    NASA Technical Reports Server (NTRS)

    Warner, James E.; Hochhalter, Jacob D.

    2016-01-01

    This work presents a computationally-efficient approach for damage determination that quantifies uncertainty in the provided diagnosis. Given strain sensor data that are polluted with measurement errors, Bayesian inference is used to estimate the location, size, and orientation of damage. This approach uses Bayes' Theorem to combine any prior knowledge an analyst may have about the nature of the damage with information provided implicitly by the strain sensor data to form a posterior probability distribution over possible damage states. The unknown damage parameters are then estimated based on samples drawn numerically from this distribution using a Markov Chain Monte Carlo (MCMC) sampling algorithm. Several modifications are made to the traditional Bayesian inference approach to provide significant computational speedup. First, an efficient surrogate model is constructed using sparse grid interpolation to replace a costly finite element model that must otherwise be evaluated for each sample drawn with MCMC. Next, the standard Bayesian posterior distribution is modified using a weighted likelihood formulation, which is shown to improve the convergence of the sampling process. Finally, a robust MCMC algorithm, Delayed Rejection Adaptive Metropolis (DRAM), is adopted to sample the probability distribution more efficiently. Numerical examples demonstrate that the proposed framework effectively provides damage estimates with uncertainty quantification and can yield orders of magnitude speedup over standard Bayesian approaches.

  20. Selective reduction of CAD false-positive findings

    NASA Astrophysics Data System (ADS)

    Camarlinghi, N.; Gori, I.; Retico, A.; Bagagli, F.

    2010-03-01

    Computer-Aided Detection (CAD) systems are becoming widespread supporting tools for radiologists' diagnosis, especially in screening contexts. However, a large number of false positive (FP) alarms would inevitably lead both to an undesired increase in diagnosis time and to a reduction in radiologists' confidence in CAD as a useful tool. Most CAD systems implement, as the final step of the analysis, a classifier which assigns a score to each entry of a list of findings; by thresholding this score it is possible to define the system performance on an annotated validation dataset in terms of a FROC curve (sensitivity vs. FP per scan). To use a CAD as a supportive tool for most clinical activities, an operative point has to be chosen on the system FROC curve, according to the obvious criterion of keeping the sensitivity as high as possible while maintaining an acceptable number of FP alarms. The strategy proposed in this study is to choose an operative point with high sensitivity on the CAD FROC curve, then to implement in cascade a further classification step, constituted by a smarter classifier. The key feature of this approach is that the smarter classifier is actually a meta-classifier of more than one decision system, each specialized in rejecting a particular type of FP finding generated by the CAD. The application of this approach to a dataset of 16 lung CT scans previously processed by the VBNACAD system is presented. The lung CT VBNACAD performance of 87.1% sensitivity to juxtapleural nodules is improved from 18.5 down to 10.1 FP per scan while maintaining the same value of sensitivity. This work has been carried out in the framework of the MAGIC-V collaboration.
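
    The cascaded idea can be sketched in a few lines. In this hypothetical fragment, findings that clear a high-sensitivity threshold on the primary CAD score are then screened by a bank of specialized rejectors, one per false-positive type, acting together as the meta-classifier; the field names and the vessel rejector are invented for illustration:

    ```python
    def cascade_filter(findings, cad_threshold, rejectors):
        """Keep findings that clear the high-sensitivity operating point and
        are not vetoed by any specialized false-positive rejector."""
        kept = []
        for f in findings:
            if f["cad_score"] < cad_threshold:
                continue                          # below operating point
            if any(reject(f) for reject in rejectors):
                continue                          # vetoed by meta-classifier
            kept.append(f)
        return kept

    # Toy rejector specialized against elongated, vessel-like candidates.
    vessel_like = lambda f: f["elongation"] > 3.0 and f["cad_score"] < 0.9

    findings = [{"cad_score": 0.95, "elongation": 1.2},
                {"cad_score": 0.55, "elongation": 4.0}]
    print(cascade_filter(findings, cad_threshold=0.5, rejectors=[vessel_like]))
    ```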

  1. Mapping methods for computationally efficient and accurate structural reliability

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Chamis, Christos C.

    1992-01-01

    Mapping methods are developed to improve the accuracy and efficiency of probabilistic structural analyses with coarse finite element meshes. The mapping methods consist of: (1) deterministic structural analyses with fine (convergent) finite element meshes, (2) probabilistic structural analyses with coarse finite element meshes, (3) the relationship between the probabilistic structural responses from the coarse and fine finite element meshes, and (4) a probabilistic mapping. The results show that the scatter of the probabilistic structural responses and structural reliability can be accurately predicted using a coarse finite element model with proper mapping methods. Therefore, large structures can be analyzed probabilistically using finite element methods.

  2. An efficient computational tool for ramjet combustor research

    SciTech Connect

    Vanka, S.P.; Krazinski, J.L.; Nejad, A.S.

    1988-01-01

    A multigrid based calculation procedure is presented for the efficient solution of the time-averaged equations of a turbulent elliptic reacting flow. The equations are solved on a non-orthogonal curvilinear coordinate system. The physical models currently incorporated are a two equation k-epsilon turbulence model, a four-step chemical kinetics mechanism, and a Lagrangian particle tracking procedure applicable for dilute sprays. Demonstration calculations are presented to illustrate the performance of the calculation procedure for a ramjet dump combustor configuration. 21 refs., 9 figs., 2 tabs.

  3. Expert validation of the knowledge base for E-CAD - a pre-hospital dispatch triage decision support system.

    PubMed

    Mirza, Muzna; Saini, Devashish; Brown, Todd B; Orthner, Helmuth F; Mazza, Giovanni; Battles, Marcie M

    2007-10-11

    The knowledge base (KB) for E-CAD (Enhanced Computer-Aided Dispatch), a triage decision support system for Emergency Medical Dispatch (EMD) of medical resources in trauma cases, is being evaluated. We aim to achieve expert consensus for validation and refinement of the E-CAD KB using the modified Delphi technique. Evidence-based, expert-validated and refined KB will provide improved EMD practice guidelines and may facilitate acceptance of the E-CAD by state-wide professionals.

  4. A computationally efficient modelling of laminar separation bubbles

    NASA Technical Reports Server (NTRS)

    Maughmer, Mark D.

    1988-01-01

    The goal of this research is to accurately predict the characteristics of the laminar separation bubble and its effects on airfoil performance. To this end, a model of the bubble is under development and will be incorporated in the analysis section of the Eppler and Somers program. As a first step in this direction, an existing bubble model was inserted into the program. It was decided to address the problem of the short bubble before attempting the prediction of the long bubble. Second, an integral boundary-layer method is considered more desirable than a finite-difference approach: while the two methods achieve similar prediction accuracy, finite-difference methods tend to involve significantly longer computer run times than integral methods. Finally, as the boundary-layer analysis in the Eppler and Somers program employs the momentum and kinetic-energy integral equations, a short-bubble model compatible with these equations is most preferable.

  5. Efficient relaxed-Jacobi smoothers for multigrid on parallel computers

    NASA Astrophysics Data System (ADS)

    Yang, Xiang; Mittal, Rajat

    2017-03-01

    In this Technical Note, we present a family of Jacobi-based multigrid smoothers suitable for the solution of discretized elliptic equations. These smoothers are based on the idea of scheduled-relaxation Jacobi proposed recently by Yang & Mittal (2014) [18] and employ two or three successive relaxed Jacobi iterations with relaxation factors derived so as to maximize the smoothing property of these iterations. The performance of these new smoothers, measured in terms of convergence acceleration and computational workload, is assessed for multi-domain implementations typical of parallelized solvers, and compared to the lexicographic point Gauss-Seidel smoother. The tests include the geometric multigrid method on structured grids as well as the algebraic multigrid method on unstructured grids. The tests demonstrate that unlike Gauss-Seidel, the convergence of these Jacobi-based smoothers is unaffected by domain decomposition, and furthermore, they outperform lexicographic Gauss-Seidel by factors that increase with domain partition count.
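
    A minimal sketch of the idea on a 1D Poisson model problem: two successive weighted-Jacobi sweeps with distinct relaxation factors. The factor values below are placeholders, not the optimized factors derived in the paper:

    ```python
    import numpy as np

    def relaxed_jacobi_sweep(u, f, h, omega):
        """One weighted-Jacobi sweep for -u'' = f on a uniform 1D grid with
        Dirichlet ends; the plain Jacobi update is (u_left+u_right+h^2 f)/2."""
        u_new = u.copy()
        u_new[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (
            u[:-2] + u[2:] + h ** 2 * f[1:-1])
        return u_new

    def srj_smoother(u, f, h, omegas=(0.9, 1.7)):   # placeholder factors
        for omega in omegas:
            u = relaxed_jacobi_sweep(u, f, h, omega)
        return u

    n = 65
    h = 1.0 / (n - 1)
    u = srj_smoother(np.zeros(n), np.ones(n), h)
    ```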

  6. Efficient Computation of Approximate Gene Clusters Based on Reference Occurrences

    NASA Astrophysics Data System (ADS)

    Jahn, Katharina

    Whole genome comparison based on the analysis of gene cluster conservation has become a popular approach in comparative genomics. While gene order and gene content as a whole randomize over time, it is observed that certain groups of genes which are often functionally related remain co-located across species. However, the conservation is usually not perfect which turns the identification of these structures, often referred to as approximate gene clusters, into a challenging task. In this paper, we present a polynomial time algorithm that computes approximate gene clusters based on reference occurrences. We show that our approach yields highly comparable results to a more general approach and allows for approximate gene cluster detection in parameter ranges currently not feasible for non-reference based approaches.
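
    A brute-force sketch conveys what "reference-based" means here: scan another genome for intervals whose gene content approximately matches a reference occurrence's gene set. The paper's algorithm achieves this in polynomial time with a far more refined search; the quadratic scan below is only illustrative:

    ```python
    def approx_cluster_hits(reference, genome, max_diff):
        """Return (start, end, diff) for intervals of `genome` whose gene set
        differs from the `reference` gene set by at most `max_diff` genes
        (missing plus extra). Brute-force illustration only."""
        ref = set(reference)
        hits = []
        for i in range(len(genome)):
            for j in range(i + 1, len(genome) + 1):
                content = set(genome[i:j])
                diff = len(ref - content) + len(content - ref)
                if diff <= max_diff:
                    hits.append((i, j, diff))
        return hits

    print(approx_cluster_hits(["a", "b", "c"], ["x", "a", "c", "b", "y"], 1))
    ```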

  7. Grid infrastructures for developing mammography CAD systems.

    PubMed

    Ramos-Pollan, Raul; Franco, Jose M; Sevilla, Jorge; Guevara-Lopez, Miguel A; de Posada, Naimy Gonzalez; Loureiro, Joanna; Ramos, Isabel

    2010-01-01

    This paper presents a set of technologies developed to exploit Grid infrastructures for breast cancer CAD, which include (1) federated repositories of mammography images and clinical data over Grid storage, (2) a workstation for mammography image analysis and diagnosis and (3) a framework for data analysis and training machine learning classifiers over Grid computing power, specially tuned for medical image based data. An experimental mammography digital repository of approximately 300 mammograms from the MIAS database was created, and classifiers were built achieving a 0.85 average area under the ROC curve on a dataset of 100 selected mammograms with representative pathological lesions and normal cases. Similar results were achieved with classifiers built for the UCI Breast Cancer Wisconsin dataset (699 feature vectors). These technologies are now being validated in a real medical environment at the Faculty of Medicine of Porto University, after a process of integrating the tools within the clinicians' workflows and IT systems.

  8. fjoin: simple and efficient computation of feature overlaps.

    PubMed

    Richardson, Joel E

    2006-10-01

    Sets of biological features with genome coordinates (e.g., genes and promoters) are a particularly common form of data in bioinformatics today. Accordingly, an increasingly important processing step involves comparing coordinates from large sets of features to find overlapping feature pairs. This paper presents fjoin, an efficient, robust, and simple algorithm for finding these pairs, and a downloadable implementation. For typical bioinformatics feature sets, fjoin requires O(n log(n)) time (O(n) if the inputs are sorted) and uses O(1) space. The reference implementation is a stand-alone Python program; it implements the basic algorithm and a number of useful extensions, which are also discussed in this paper.
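
    The flavor of such an algorithm is a single sweep over both start-sorted inputs with a small window of still-active features. The sketch below captures that idea for half-open (start, end) intervals; it is a reconstruction of the approach, not the downloadable reference implementation:

    ```python
    def overlap_join(a, b):
        """Yield index pairs (i, j) with a[i] overlapping b[j]; a and b are
        lists of half-open (start, end) intervals sorted by start."""
        i = j = 0
        active_a, active_b = [], []   # features that can still overlap
        while i < len(a) or j < len(b):
            take_a = j >= len(b) or (i < len(a) and a[i][0] <= b[j][0])
            if take_a:
                s, e = a[i]
                active_b = [t for t in active_b if t[2] > s]  # drop dead ones
                for jb, _, _ in active_b:                     # all overlap a[i]
                    yield i, jb
                active_a.append((i, s, e))
                i += 1
            else:
                s, e = b[j]
                active_a = [t for t in active_a if t[2] > s]
                for ia, _, _ in active_a:
                    yield ia, j
                active_b.append((j, s, e))
                j += 1

    print(list(overlap_join([(0, 10), (20, 30)], [(5, 25)])))  # [(0,0), (1,0)]
    ```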

  9. Efficient computation of the compositional model for gas condensate reservoirs

    NASA Astrophysics Data System (ADS)

    Zhou, Jifu; Li, Jiachun; Ye, Jigen

    2000-12-01

    In this paper, a direct method, unsymmetric-pattern multifrontal factorization for large sparse systems of linear equations, is applied in a compositional reservoir model. The good performance of this approach is demonstrated by solving the Poisson equation. The numerical module is then embedded in the compositional model to simulate the X1/5 (3) gas condensate reservoir in the KeKeYa gas field, Northwest China. The results for oil/gas reserves, variations of stratum pressure, oil/gas production, etc. are compared with observations. Good agreement, comparable to the COMP4 model, is achieved, suggesting that the present model is both efficient and powerful for compositional reservoir simulations.
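
    For illustration, the factor-once/solve-cheaply pattern of a direct sparse method on a 2D Poisson test problem, echoing the abstract's validation step. SciPy's splu (SuperLU) stands in here; the paper's solver is an unsymmetric-pattern multifrontal factorization (UMFPACK-style), which splu is not:

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import splu

    n = 64
    I = sp.identity(n)
    T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
    A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()   # 5-point Laplacian
    b = np.ones(n * n)

    lu = splu(A)        # factor once ...
    x = lu.solve(b)     # ... then back-substitute cheaply per right-hand side
    ```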

  10. Understanding dental CAD/CAM for restorations--the digital workflow from a mechanical engineering viewpoint.

    PubMed

    Tapie, L; Lebon, N; Mawussi, B; Fron Chabouis, H; Duret, F; Attal, J-P

    2015-01-01

    As digital technology infiltrates every area of daily life, including the field of medicine, so it is increasingly being introduced into dental practice. Apart from chairside practice, computer-aided design/computer-aided manufacturing (CAD/CAM) solutions are available for creating inlays, crowns, fixed partial dentures (FPDs), implant abutments, and other dental prostheses. CAD/CAM dental solutions can be considered a chain of digital devices and software for the almost automatic design and creation of dental restorations. However, dentists who want to use the technology often do not have the time or knowledge to understand it. A basic knowledge of the CAD/CAM digital workflow for dental restorations can help dentists to grasp the technology and purchase a CAD/CAM system that meets the needs of their office. This article provides a computer-science and mechanical-engineering approach to the CAD/CAM digital workflow to help dentists understand the technology.

  11. An accurate and efficient computation method of the hydration free energy of a large, complex molecule.

    PubMed

    Yoshidome, Takashi; Ekimoto, Toru; Matubayasi, Nobuyuki; Harano, Yuichi; Kinoshita, Masahiro; Ikeguchi, Mitsunori

    2015-05-07

    The hydration free energy (HFE) is a crucially important physical quantity for discussing various chemical processes in aqueous solutions. Although an explicit-solvent computation with molecular dynamics (MD) simulations is a preferable treatment of the HFE, a huge computational load has been inevitable for large, complex solutes like proteins. In the present paper, we propose an efficient computation method for the HFE. In our method, the HFE is computed as the sum of 〈U_UV〉/2 (〈U_UV〉 is the ensemble average of the sum of pair interaction energies between the solute and water molecules) and the water reorganization term, mainly reflecting the excluded volume effect. Since 〈U_UV〉 can readily be computed through an MD simulation of the system composed of solute and water, an efficient computation of the latter term leads to a reduction of computational load. We demonstrate that the water reorganization term can quantitatively be calculated using the morphometric approach (MA), which expresses the term as a linear combination of the four geometric measures of a solute and the corresponding coefficients determined with the energy representation (ER) method. Since the MA enables us to finish the computation of the solvent reorganization term in less than 0.1 s once the coefficients are determined, the use of the MA provides an efficient computation of the HFE even for large, complex solutes. Through the applications, we find that our method has almost the same quantitative performance as the ER method with substantial reduction of the computational load.
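
    The decomposition reduces to one line of arithmetic: the HFE as 〈U_UV〉/2 plus a morphometric reorganization term that is linear in four geometric measures of the solute. All numbers below are placeholders; in the method the coefficients come from energy-representation fits:

    ```python
    import numpy as np

    def hydration_free_energy(u_uv_mean, measures, coeffs):
        """HFE ~ <U_UV>/2 + sum_i c_i m_i, with m_i the four geometric
        measures (excluded volume, surface area, integrated mean and
        Gaussian curvatures) and c_i the ER-fitted coefficients (sketch)."""
        return 0.5 * u_uv_mean + float(np.dot(coeffs, measures))

    measures = np.array([1.8e4, 6.2e3, 4.1e2, 12.6])   # hypothetical values
    coeffs = np.array([0.12, -0.04, 0.9, -2.1])        # assumed ER fits
    print(hydration_free_energy(-3.5e3, measures, coeffs))
    ```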

  12. Computational efficiencies for calculating rare earth f^n energies

    NASA Astrophysics Data System (ADS)

    Beck, Donald R.

    2009-05-01

    Recently [D. R. Beck and E. J. Domeier, Can. J. Phys. (Walter Johnson issue), Jan. 2009], we have used new computational strategies to obtain wavefunctions and energies for Gd IV 4f^7 and 4f^6 5d levels. Here we extend one of these techniques to allow efficient inclusion of 4f^2 pair correlation effects, using radial pair energies obtained from much simpler calculations [e.g. K. Jankowski et al., Int. J. Quant. Chem. XXVII, 665 (1985)] and angular factors which can be simply computed [D. R. Beck and C. A. Nicolaides, in Excited States in Quantum Chemistry, C. A. Nicolaides and D. R. Beck (editors), D. Reidel (1978), p. 105ff]. This is a revitalization of an older idea [I. Oksuz and O. Sinanoglu, Phys. Rev. 181, 54 (1969)]. We display relationships between angular factors involving the exchange of holes and electrons (e.g. f^6 vs f^8, f^13d vs fd^9). We apply the results to Tb IV and Gd IV, whose spectra are largely unknown, but which may play a role in MRI medicine as endohedral metallofullerenes (e.g. Gd3N-C80 [M. C. Qian and S. N. Khanna, J. Appl. Phys. 101, 09E105 (2007)]). Pr III results are in good agreement (910 cm-1) with experiment. Pu I 5f^2 radial pair energies are also presented.

  13. Efficient computer algebra algorithms for polynomial matrices in control design

    NASA Technical Reports Server (NTRS)

    Baras, J. S.; Macenany, D. C.; Munach, R.

    1989-01-01

    The theory of polynomial matrices plays a key role in the design and analysis of multi-input multi-output control and communications systems using frequency-domain methods. Examples include coprime factorizations of transfer functions, canonical realizations from matrix fraction descriptions, and the transfer function design of feedback compensators. Typically, such problems abstract in a natural way to the need to solve systems of Diophantine equations or systems of linear equations over polynomials. These and other problems involving polynomial matrices can in turn be reduced to polynomial matrix triangularization procedures, a result which is not surprising given the importance of matrix triangularization techniques in numerical linear algebra. Matrices with entries from a field, and Gaussian elimination, play a fundamental role in understanding the triangularization process. Polynomial matrices, however, have entries from a ring, for which Gaussian elimination is not defined; their triangularization is accomplished by what is quite properly called Euclidean elimination. Unfortunately, the numerical stability and sensitivity issues which accompany floating-point approaches to Euclidean elimination are not very well understood. New algorithms are presented which circumvent entirely such numerical issues through the use of exact, symbolic methods in computer algebra. The use of such error-free algorithms guarantees that the results are accurate to within the precision of the model data--the best that can be hoped for. Care must be taken in the design of such algorithms due to the phenomenon of intermediate expression swell.

  14. Enabling Efficient Climate Science Workflows in High Performance Computing Environments

    NASA Astrophysics Data System (ADS)

    Krishnan, H.; Byna, S.; Wehner, M. F.; Gu, J.; O'Brien, T. A.; Loring, B.; Stone, D. A.; Collins, W.; Prabhat, M.; Liu, Y.; Johnson, J. N.; Paciorek, C. J.

    2015-12-01

    A typical climate science workflow often involves a combination of acquisition of data, modeling, simulation, analysis, visualization, publishing, and storage of results. Each of these tasks poses a myriad of challenges when running in a high performance computing environment such as Hopper or Edison at NERSC. Hurdles such as data transfer and management, job scheduling, parallel analysis routines, and publication require a lot of forethought and planning to ensure that proper quality control mechanisms are in place. These steps require effectively utilizing a combination of well-tested and newly developed functionality to move data, perform analysis, apply statistical routines, and, finally, serve results and tools to the greater scientific community. As part of the CAlibrated and Systematic Characterization, Attribution and Detection of Extremes (CASCADE) project, we highlight a stack of tools our team utilizes and has developed to make large-scale simulation and analysis work commonplace. These provide operations that assist in everything from generation/procurement of data (HTAR/Globus) to automating publication of results to portals like the Earth Systems Grid Federation (ESGF), all while executing everything in between in a scalable environment in a task-parallel way (MPI). We highlight the use and benefit of these tools by showing several climate science analysis use cases to which they have been applied.

  15. Efficient computation of coherent synchrotron radiation in a rectangular chamber

    NASA Astrophysics Data System (ADS)

    Warnock, Robert L.; Bizzozero, David A.

    2016-09-01

    We study coherent synchrotron radiation (CSR) in a perfectly conducting vacuum chamber of rectangular cross section, in a formalism allowing an arbitrary sequence of bends and straight sections. We apply the paraxial method in the frequency domain, with a Fourier development in the vertical coordinate but with no other mode expansions. A line charge source is handled numerically by a new method that rids the equations of singularities through a change of dependent variable. The resulting algorithm is fast compared to earlier methods, works for short bunches with complicated structure, and yields all six field components at any space-time point. As an example, we compute the tangential magnetic field at the walls. From that one can make a perturbative treatment of the Poynting flux to estimate the energy deposited in resistive walls. The calculation was motivated by a design issue for LCLS-II, the question of how much wall heating from CSR occurs in the last bend of a bunch compressor and the following straight section. Working with a realistic longitudinal bunch form of r.m.s. length 10.4 μm and a charge of 100 pC, we conclude that the radiated power is quite small (28 W at a 1 MHz repetition rate), and all radiated energy is absorbed in the walls within 7 m along the straight section.

  16. Computational Efficiency through Visual Argument: Do Graphic Organizers Communicate Relations in Text Too Effectively?

    ERIC Educational Resources Information Center

    Robinson, Daniel H.; Schraw, Gregory

    1994-01-01

    Three experiments involving 138 college students investigated why one type of graphic organizer (a matrix) may communicate interconcept relations better than an outline or text. Results suggest that a matrix is more computationally efficient than either outline or text, allowing the easier computation of relationships. (SLD)

  17. An Efficient Objective Analysis System for Parallel Computers

    NASA Technical Reports Server (NTRS)

    Stobie, J.

    1999-01-01

    A new atmospheric objective analysis system designed for parallel computers will be described. The system can produce a global analysis (on a 1 x 1 lat-lon grid with 18 levels of heights and winds and 10 levels of moisture) using 120,000 observations in 17 minutes on 32 CPUs (SGI Origin 2000). No special parallel code is needed (e.g. MPI or multitasking) and the 32 CPUs do not have to be on the same platform. The system is totally portable and can run on several different architectures at once. In addition, the system can easily scale up to 100 or more CPUs. This will allow for much higher resolution and significant increases in input data. The system scales linearly with the number of observations and the number of grid points. The cost overhead in going from 1 to 32 CPUs is 18%. In addition, the analysis results are identical regardless of the number of processors used. This system has all the characteristics of optimal interpolation, combining detailed instrument and first guess error statistics to produce the best estimate of the atmospheric state. Static tests with a 2 x 2.5 resolution version of this system showed its analysis increments are comparable to the latest NASA operational system, including maintenance of mass-wind balance. Results from several months of cycling tests in the Goddard EOS Data Assimilation System (GEOS DAS) show this new analysis retains the same level of agreement between the first guess and observations (O-F statistics) as the current operational system.
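
    For reference, the optimal-interpolation update such a system implements at scale is the textbook gain formula below (a generic sketch, not the operational analysis code):

    ```python
    import numpy as np

    def oi_update(xb, B, y, H, R):
        """xa = xb + K (y - H xb) with gain K = B H^T (H B H^T + R)^-1,
        combining first-guess (B) and instrument (R) error statistics."""
        K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
        return xb + K @ (y - H @ xb)

    xb = np.array([1.0, 2.0])                  # first guess
    B = np.array([[1.0, 0.5], [0.5, 1.0]])     # background error covariance
    H = np.array([[1.0, 0.0]])                 # observe first component only
    R = np.array([[0.25]])                     # instrument error covariance
    y = np.array([1.4])                        # observation
    print(oi_update(xb, B, y, H, R))
    ```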

  18. An Efficient Objective Analysis System for Parallel Computers

    NASA Technical Reports Server (NTRS)

    Stobie, James G.

    1999-01-01

    A new objective analysis system designed for parallel computers will be described. The system can produce a global analysis (on a 2 x 2.5 lat-lon grid with 20 levels of heights and winds and 10 levels of moisture) using 120,000 observations in less than 3 minutes on 32 CPUs (SGI Origin 2000). No special parallel code is needed (e.g. MPI or multitasking) and the 32 CPUs do not have to be on the same platform. The system is totally portable and can run on several different architectures at once. In addition, the system can easily scale up to 100 or more CPUs. This will allow for much higher resolution and significant increases in input data. The system scales linearly with the number of observations and the number of grid points. The cost overhead in going from 1 to 32 CPUs is 18%. In addition, the analysis results are identical regardless of the number of processors used. This system has all the characteristics of optimal interpolation, combining detailed instrument and first guess error statistics to produce the best estimate of the atmospheric state. It also includes a new quality control (buddy check) system. Static tests with the system showed its analysis increments are comparable to the latest NASA operational system, including maintenance of mass-wind balance. Results from a 2-month cycling test in the Goddard EOS Data Assimilation System (GEOS DAS) show this new analysis retains the same level of agreement between the first guess and observations (O-F statistics) throughout the entire two months.

  19. Learning-based image preprocessing for robust computer-aided detection

    NASA Astrophysics Data System (ADS)

    Raghupathi, Laks; Devarakota, Pandu R.; Wolf, Matthias

    2013-03-01

    Recent studies have shown that low dose computed tomography (LDCT) can be an effective screening tool to reduce lung cancer mortality. Computer-aided detection (CAD) would be a beneficial second reader for radiologists in such cases. Studies demonstrate that while iterative reconstruction (IR) improves LDCT diagnostic quality, it significantly degrades CAD performance (increased false positives) when applied directly. For improving CAD performance, solutions such as retraining with newer data or applying a standard preprocessing technique may not suffice, given the wide variety of CT scanners and non-uniform acquisition protocols. Here, we present a learning-based framework that can adaptively transform a wide variety of input data to boost an existing CAD's performance. This not only enhances robustness but also applicability in clinical workflows. Our solution consists of automatically applying a suitable pre-processing filter to the given image based on its characteristics. This requires preparing ground truth (GT) on which filter choice results in improved CAD performance. Accordingly, we propose an efficient consolidation process with a novel metric. Using key anatomical landmarks, we then derive consistent feature descriptors for the classification scheme, which uses a priority mechanism to automatically choose an optimal preprocessing filter. We demonstrate CAD prototype performance improvement using hospital-scale datasets acquired from North America, Europe and Asia. Though we demonstrate our results for a lung nodule CAD, this scheme is straightforward to extend to other post-processing tools dedicated to other organs and modalities.

  20. Improving CAD performance by fusion of the bilateral mammographic tissue asymmetry information

    NASA Astrophysics Data System (ADS)

    Wang, Xingwei; Li, Lihua; Liu, Wei; Xu, Weidong; Lederman, Dror; Zheng, Bin

    2012-03-01

    Bilateral mammographic tissue density asymmetry could be an important factor in assessing the risk of developing breast cancer and improving the detection of suspicious lesions. This study aims to assess whether fusing bilateral mammographic density asymmetry information into a computer-aided detection (CAD) scheme could improve CAD performance in detecting mass-like breast cancers. A testing dataset involving 1352 full-field digital mammograms (FFDM) acquired from 338 cases was used. In this dataset, half (169) of the cases are positive, containing malignant masses, and half are negative. Two computerized schemes were first independently applied to process the FFDM images of each case. The first, single-image-based CAD scheme detected suspicious mass regions on each image. The second scheme detected and computed the bilateral mammographic tissue density asymmetry for each case. A fusion method was then applied to combine the output scores of the two schemes. The CAD performance levels using the original CAD-generated detection scores and the new fusion scores were evaluated and compared using a free-response receiver operating characteristic (FROC) type data analysis method. By fusing with the bilateral mammographic density asymmetry scores, the case-based CAD sensitivity was increased from 79.2% to 84.6% at a false-positive rate of 0.3 per image. CAD also cued more "difficult" masses with lower CAD-generated detection scores while discarding some "easy" cases. The study indicated that fusion between the scores generated by a single-image-based CAD scheme and the computed bilateral mammographic density asymmetry scores increased mass detection sensitivity, in particular for more subtle masses.
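
    At its simplest, score-level fusion of this kind is a weighted combination of the region-level CAD score and the case-level asymmetry score. The weighted average below is an assumed rule for illustration; the paper evaluates its own fusion method:

    ```python
    def fused_score(cad_score, asymmetry_score, weight=0.7):
        """Combine a single-image CAD detection score with a case-level
        bilateral density-asymmetry score (assumed weighted-average rule)."""
        return weight * cad_score + (1.0 - weight) * asymmetry_score

    print(fused_score(cad_score=0.62, asymmetry_score=0.81))
    ```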

  1. Dental students' preferences and performance in crown design: conventional wax-added versus CAD.

    PubMed

    Douglas, R Duane; Hopp, Christa D; Augustin, Marcus A

    2014-12-01

    The purpose of this study was to evaluate dental students' perceptions of traditional waxing vs. computer-aided crown design and to determine the effectiveness of either technique through comparative grading of the final products. On one of two identical tooth preparations, second-year students at one dental school fabricated a wax pattern for a full contour crown; on the second tooth preparation, the same students designed and fabricated an all-ceramic crown using computer-aided design (CAD) and computer-aided manufacturing (CAM) technology. Projects were graded for occlusion and anatomic form by three faculty members. On completion of the projects, 100 percent of the students (n=50) completed an eight-question, five-point Likert scale survey, designed to assess their perceptions of and learning associated with the two design techniques. The average grades for the crown design projects were 78.3 (CAD) and 79.1 (wax design). The mean numbers of occlusal contacts were 3.8 (CAD) and 2.9 (wax design), which was significantly higher for CAD (p=0.02). The survey results indicated that students enjoyed designing a full contour crown using CAD as compared to using conventional wax techniques and spent less time designing the crown using CAD. From a learning perspective, students felt that they learned more about position and the size/strength of occlusal contacts using CAD. However, students recognized that CAD technology has limits in terms of representing anatomic contours and excursive occlusion compared to conventional wax techniques. The results suggest that crown design using CAD could be considered as an adjunct to conventional wax-added techniques in preclinical fixed prosthodontic curricula.

  2. Recovery Act - CAREER: Sustainable Silicon -- Energy-Efficient VLSI Interconnect for Extreme-Scale Computing

    SciTech Connect

    Chiang, Patrick

    2014-01-31

    The research goal of this CAREER proposal is to develop energy-efficient VLSI interconnect circuits and systems that will facilitate future massively-parallel, high-performance computing. Extreme-scale computing will exhibit massive parallelism on multiple vertical levels, from thousands of computational units on a single processor to thousands of processors in a single data center. Unfortunately, the energy required to communicate between these units at every level (on-chip, off-chip, off-rack) will be the critical limitation to energy efficiency. Therefore, the PI's career goal is to become a leading researcher in the design of energy-efficient VLSI interconnect for future computing systems.

  3. Computationally generated velocity taper for efficiency enhancement in a coupled-cavity traveling-wave tube

    NASA Technical Reports Server (NTRS)

    Wilson, Jeffrey D.

    1989-01-01

    A computational routine has been created to generate velocity tapers for efficiency enhancement in coupled-cavity TWTs. Programmed into the NASA multidimensional large-signal coupled-cavity TWT computer code, the routine generates the gradually decreasing cavity periods required to maintain a prescribed relationship between the circuit phase velocity and the electron-bunch velocity. Computational results for several computer-generated tapers are compared to those for an existing coupled-cavity TWT with a three-step taper. Guidelines are developed for prescribing the bunch-phase profile to produce a taper for efficiency. The resulting taper provides a calculated RF efficiency 45 percent higher than the step taper at center frequency and at least 37 percent higher over the bandwidth.

  4. Modelling and computationally efficient time domain linear equalisation of nonlinear bandlimited QPSK satellite channels

    NASA Technical Reports Server (NTRS)

    Konstantinides, K.; Yao, K.

    1990-01-01

    The problem of modeling and equalization of a nonlinear satellite channel is considered. The channel is assumed to be bandlimited and exhibits both amplitude and phase nonlinearities. In traditional models, computations are usually performed in the frequency domain and solutions are based on complex numerical techniques. A discrete-time model is used to represent the satellite link with both uplink and downlink white Gaussian noise. Under conditions of practical interest, a simple and computationally efficient time-domain design technique for the minimum mean square error linear equalizer is presented. The efficiency of this technique is enhanced by the use of a fast and simple iterative algorithm for the computation of the autocorrelation coefficients of the output of the nonlinear channel. Numerical results on the evaluation of bit error probability and other relevant parameters needed in the design and analysis of a nonlinear bandlimited QPSK system demonstrate the simplicity and computational efficiency of the proposed approach.
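
    The end result of such a design is a set of equalizer taps from the Wiener solution R w = p. The fragment below is a generic training-data version of that computation, not the paper's iterative autocorrelation algorithm; the toy channel and parameters are invented:

    ```python
    import numpy as np

    def mmse_equalizer(rx, tx, n_taps, delay):
        """Solve R w = p for linear MMSE taps, with R the tap-input
        autocorrelation and p the cross-correlation with desired symbols."""
        N = len(rx) - n_taps
        X = np.array([rx[k:k + n_taps] for k in range(N)])
        d = tx[delay:delay + N]
        R = X.conj().T @ X / N
        p = X.conj().T @ d / N
        return np.linalg.solve(R, p)

    rng = np.random.default_rng(0)
    sym = rng.integers(0, 2, (2, 500)) * 2 - 1
    tx = sym[0] + 1j * sym[1]                          # QPSK symbols
    rx = np.convolve(tx, [1.0, 0.4 + 0.2j])[:500]      # toy dispersive channel
    rx = rx + 0.05 * (rng.standard_normal(500) + 1j * rng.standard_normal(500))
    w = mmse_equalizer(rx, tx, n_taps=7, delay=0)
    ```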

  5. Integration of a CAD System Into an MDO Framework

    NASA Technical Reports Server (NTRS)

    Townsend, J. C.; Samareh, J. A.; Weston, R. P.; Zorumski, W. E.

    1998-01-01

    NASA Langley has developed a heterogeneous distributed computing environment, called the Framework for Inter-disciplinary Design Optimization, or FIDO. Its purpose has been to demonstrate framework technical feasibility and usefulness for optimizing the preliminary design of complex systems and to provide a working environment for testing optimization schemes. Its initial implementation has been for a simplified model of preliminary design of a high-speed civil transport. Upgrades being considered for the FIDO system include a more complete geometry description, required by high-fidelity aerodynamics and structures codes and based on a commercial Computer Aided Design (CAD) system. This report presents the philosophy behind some of the decisions that have shaped the FIDO system and gives a brief case study of the problems and successes encountered in integrating a CAD system into the FIDO framework.

  6. A Software Demonstration of 'rap': Preparing CAD Geometries for Overlapping Grid Generation

    SciTech Connect

    Anders Petersson, N.

    2002-02-15

    We demonstrate the application code "rap", which is part of the "Overture" library. A CAD geometry imported from an IGES file is first cleaned up and simplified to suit the needs of mesh generation. Thereafter, the topology of the model is computed and a water-tight surface triangulation is created on the CAD surface. This triangulation is used to speed up the projection of points onto the CAD surface during the generation of overlapping surface grids. From each surface grid, volume grids are grown into the domain using a hyperbolic marching procedure. The final step is to fill any remaining parts of the interior with background meshes.

  7. CYBERSECURITY AND USER ACCOUNTABILITY IN THE C-AD CONTROL SYSTEM

    SciTech Connect

    MORRIS,J.T.; BINELLO, S.; D OTTAVIO, T.; KATZ, R.A.

    2007-10-15

    A heightened awareness of cybersecurity has led to a review of the procedures that ensure user accountability for actions performed on the computers of the Collider-Accelerator Department (C-AD) Control System. Control system consoles are shared by multiple users in control rooms throughout the C-AD complex. A significant challenge has been the establishment of procedures that securely control and monitor access to these shared consoles without impeding accelerator operations. This paper provides an overview of C-AD cybersecurity strategies with an emphasis on recent enhancements in user authentication and tracking methods.

  8. An Educational Exercise Examining the Role of Model Attributes on the Creation and Alteration of CAD Models

    ERIC Educational Resources Information Center

    Johnson, Michael D.; Diwakaran, Ram Prasad

    2011-01-01

    Computer-aided design (CAD) is a ubiquitous tool that today's students will be expected to use proficiently for numerous engineering purposes. Taking full advantage of the features available in modern CAD programs requires that models are created in a manner that allows others to easily understand how they are organized and alter them in an…

  9. Adjoint Algorithm for CAD-Based Shape Optimization Using a Cartesian Method

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.

    2004-01-01

    Adjoint solutions of the governing flow equations are becoming increasingly important for the development of efficient analysis and optimization algorithms. A well-known use of the adjoint method is gradient-based shape optimization. Given an objective function that defines some measure of performance, such as the lift and drag functionals, its gradient is computed at a cost that is essentially independent of the number of design variables (geometric parameters that control the shape). More recently, emerging adjoint applications focus on the analysis problem, where the adjoint solution is used to drive mesh adaptation, as well as to provide estimates of functional error bounds and corrections. The attractive feature of this approach is that the mesh-adaptation procedure targets a specific functional, thereby localizing the mesh refinement and reducing computational cost. Our focus is on the development of adjoint-based optimization techniques for a Cartesian method with embedded boundaries. In contrast to implementations on structured and unstructured grids, Cartesian methods decouple the surface discretization from the volume mesh. This feature makes Cartesian methods well suited for the automated analysis of complex geometry problems, and consequently a promising approach to aerodynamic optimization. Melvin et al. developed an adjoint formulation for the TRANAIR code, which is based on the full-potential equation with viscous corrections. More recently, Dadone and Grossman presented an adjoint formulation for the Euler equations. In both approaches, a boundary condition is introduced to approximate the effects of the evolving surface shape, resulting in accurate gradient computation. Central to automated shape optimization algorithms is the issue of geometry modeling and control. The need to optimize complex, "real-life" geometry provides a strong incentive for the use of parametric-CAD systems within the optimization procedure. In previous work, we presented
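
    The key property named above, gradient cost essentially independent of the number of design variables, is easy to see in a generic discrete-adjoint sketch (a linear model problem, not the Cartesian-method code):

    ```python
    import numpy as np

    # State equation A u = b(x); objective J = g^T u. One adjoint solve
    # A^T lam = g yields dJ/dx = (db/dx)^T lam for every design variable.
    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    g = np.array([1.0, 2.0])                        # dJ/du

    def b(x):
        return np.array([x[0] + 2 * x[1], 3 * x[0] - x[1]])

    dbdx = np.array([[1.0, 2.0], [3.0, -1.0]])      # d b / d x

    lam = np.linalg.solve(A.T, g)                   # single adjoint solve
    grad = dbdx.T @ lam                             # gradient, all designs

    # Finite-difference check of the first component.
    J = lambda x: g @ np.linalg.solve(A, b(x))
    x0 = np.array([1.0, 1.0]); eps = 1e-6
    print(grad[0], (J(x0 + np.array([eps, 0.0])) - J(x0)) / eps)
    ```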

  10. Performance evaluation of the NASA/KSC CAD/CAE and office automation LAN's

    NASA Technical Reports Server (NTRS)

    Zobrist, George W.

    1994-01-01

    This study's objective is the performance evaluation of the existing CAD/CAE (Computer Aided Design/Computer Aided Engineering) network at NASA/KSC. This evaluation also includes a similar study of the Office Automation network, since it is being planned to integrate this network into the CAD/CAE network. The Microsoft mail facility which is presently on the CAD/CAE network was monitored to determine its present usage. This performance evaluation of the various networks will aid the NASA/KSC network managers in planning for the integration of future workload requirements into the CAD/CAE network and determining the effectiveness of the planned FDDI (Fiber Distributed Data Interface) migration.

  11. Intrinsic Efficiency Calibration Considering Geometric Factors in Gamma-ray Computed Tomography for Radioactive Waste Assay

    SciTech Connect

    Liu, Zhe; Zhang, Li

    2015-07-01

    In radioactive waste assay with gamma-ray computed tomography, calibration for the intrinsic efficiency of the system is important to the reconstruction of the radioactivity distribution. Due to the geometric characteristics of the system, the non-uniformity of intrinsic efficiency for gamma-rays with different incident positions and directions is often non-negligible. Intrinsic efficiency curves versus geometric parameters of the incident gamma-ray are obtained by Monte-Carlo simulation, and two intrinsic efficiency models are suggested to characterize, in the system matrix, the intrinsic efficiency determined by the relative source-detector position and the system geometry. Monte-Carlo simulation is performed to compare the different intrinsic efficiency models. Better reconstruction results for the radioactivity distribution are achieved by both suggested models than by the uniform intrinsic efficiency model. Compared to the model based on detector position, the model based on point response increases reconstruction accuracy, as well as the complexity and time of calculation. (authors)

  12. Dynamic MRI-based computer aided diagnostic systems for early detection of kidney transplant rejection: A survey

    NASA Astrophysics Data System (ADS)

    Mostapha, Mahmoud; Khalifa, Fahmi; Alansary, Amir; Soliman, Ahmed; Gimel'farb, Georgy; El-Baz, Ayman

    2013-10-01

    Early detection of renal transplant rejection is important to implement appropriate medical and immune therapy in patients with transplanted kidneys. In literature, a large number of computer-aided diagnostic (CAD) systems using different image modalities, such as ultrasound (US), magnetic resonance imaging (MRI), computed tomography (CT), and radionuclide imaging, have been proposed for early detection of kidney diseases. A typical CAD system for kidney diagnosis consists of a set of processing steps including: motion correction, segmentation of the kidney and/or its internal structures (e.g., cortex, medulla), construction of agent kinetic curves, functional parameter estimation, diagnosis, and assessment of the kidney status. In this paper, we survey the current state-of-the-art CAD systems that have been developed for kidney disease diagnosis using dynamic MRI. In addition, the paper addresses several challenges that researchers face in developing efficient, fast and reliable CAD systems for the early detection of kidney diseases.

  13. Efficient use of high performance computers for integrated controls and structures design. [of large space platforms

    NASA Technical Reports Server (NTRS)

    Belvin, W. K.; Maghami, P. G.; Nguyen, D. T.

    1992-01-01

    Simply transporting design codes from sequential-scalar computers to parallel-vector computers does not fully utilize the computational benefits offered by high performance computers. By performing integrated controls and structures design on an experimental truss platform with both sequential-scalar and parallel-vector design codes, conclusive results are presented to substantiate this claim. The efficiency of a Cholesky factorization scheme in conjunction with a variable-band row data structure is presented. In addition, the Lanczos eigensolution algorithm has been incorporated in the design code for both parallel and vector computations. Comparisons of computational efficiency between the initial design code and the parallel-vector design code are presented. It is shown that the Lanczos algorithm with the Cholesky factorization scheme is far superior to the subspace iteration method of eigensolution when substantial numbers of eigenvectors are required for control design and/or performance optimization. Integrated design results show the need for continued efficiency studies in the area of element computations and matrix assembly.
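
    As a small illustration of why Lanczos pays off when many eigenvectors are needed, the call below extracts the 30 lowest generalized eigenpairs of a sparse stiffness-like matrix via shift-invert Lanczos (SciPy's eigsh; a generic sketch, not the paper's parallel-vector implementation):

    ```python
    import scipy.sparse as sp
    from scipy.sparse.linalg import eigsh

    n = 2000
    K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
    M = sp.identity(n, format="csc")

    # Smallest 30 eigenpairs of K v = lam M v, via shift-invert about 0.
    vals, vecs = eigsh(K, k=30, M=M, sigma=0.0, which="LM")
    ```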

  14. Integrated Computer-Aided Drafting Instruction (ICADI).

    ERIC Educational Resources Information Center

    Chen, C. Y.; McCampbell, David H.

    Until recently, computer-aided drafting and design (CAD) systems were almost exclusively operated on mainframes or minicomputers and their cost prohibited many schools from offering CAD instruction. Today, many powerful personal computers are capable of performing the high-speed calculation and analysis required by the CAD application; however,…

  15. An approximate solution to improve computational efficiency of impedance-type payload load prediction

    NASA Technical Reports Server (NTRS)

    White, C. W.

    1981-01-01

    The computational efficiency of the impedance type loads prediction method was studied. Three goals were addressed: devise a method to make the impedance method operate more efficiently in the computer; assess the accuracy and convenience of the method for determining the effect of design changes; and investigate the use of the method to identify design changes for reduction of payload loads. The method is suitable for calculation of dynamic response in either the frequency or time domain. It is concluded that: the choice of an orthogonal coordinate system will allow the impedance method to operate more efficiently in the computer; the approximate mode impedance technique is adequate for determining the effect of design changes, and is applicable for both statically determinate and statically indeterminate payload attachments; and beneficial design changes to reduce payload loads can be identified by the combined application of impedance techniques and energy distribution review techniques.

  16. A Computationally Efficient Parallel Levenberg-Marquardt Algorithm for Large-Scale Big-Data Inversion

    NASA Astrophysics Data System (ADS)

    Lin, Y.; O'Malley, D.; Vesselinov, V. V.

    2015-12-01

    Inverse modeling seeks model parameters given a set of observed state variables. However, for many practical problems, because the observed data sets are often large and the model parameters numerous, conventional methods for solving the inverse problem can be computationally expensive. We have developed a new, computationally efficient Levenberg-Marquardt method for large-scale inverse modeling. Levenberg-Marquardt methods require the solution of a dense linear system of equations which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, such that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed when solving for the first damping parameter and recycle it for all the following damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for a random transmissivity field. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Compared with a Levenberg-Marquardt method using standard linear inversion techniques, our method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment. Therefore, our new inverse modeling method is a
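
    The recycling trick can be sketched with Golub-Kahan bidiagonalization: build a Krylov basis of the Jacobian once, then solve the damped least-squares subproblem in that small basis for every damping parameter. This is a schematic NumPy reconstruction of the idea, not the Julia/MADS implementation:

    ```python
    import numpy as np

    def golub_kahan(J, r, k):
        """k steps of Golub-Kahan bidiagonalization started from residual r,
        giving J V ~= U B with B a (k+1) x k lower-bidiagonal matrix."""
        m, n = J.shape
        U = np.zeros((m, k + 1)); V = np.zeros((n, k)); B = np.zeros((k + 1, k))
        U[:, 0] = r / np.linalg.norm(r)
        for i in range(k):
            w = J.T @ U[:, i] - (B[i, i - 1] * V[:, i - 1] if i > 0 else 0.0)
            B[i, i] = np.linalg.norm(w); V[:, i] = w / B[i, i]
            w = J @ V[:, i] - B[i, i] * U[:, i]
            B[i + 1, i] = np.linalg.norm(w); U[:, i + 1] = w / B[i + 1, i]
        return U, V, B

    rng = np.random.default_rng(1)
    J = rng.standard_normal((200, 50)); r = rng.standard_normal(200)
    k = 20
    U, V, B = golub_kahan(J, r, k)        # built once ...
    e1 = np.zeros(k + 1); e1[0] = np.linalg.norm(r)
    for lam in (1e-2, 1e-1, 1.0, 10.0):   # ... recycled per damping parameter
        # min ||B y - ||r|| e1||^2 + lam^2 ||y||^2 in the Krylov basis
        y, *_ = np.linalg.lstsq(np.vstack([B, lam * np.eye(k)]),
                                np.concatenate([e1, np.zeros(k)]), rcond=None)
        step = V @ y                      # Levenberg-Marquardt step
    ```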

  17. The Challenging Academic Development (CAD) Collective

    ERIC Educational Resources Information Center

    Peseta, Tai

    2005-01-01

    This article discusses the Challenging Academic Development (CAD) Collective and describes how it came out of a symposium called "Liminality, identity, and hybridity: On the promise of new conceptual frameworks for theorising academic/faculty development." The CAD Collective is and represents a space where people can open up their…

  18. An efficient numerical algorithm for computing densely distributed positive interior transmission eigenvalues

    NASA Astrophysics Data System (ADS)

    Li, Tiexiang; Huang, Tsung-Ming; Lin, Wen-Wei; Wang, Jenn-Nan

    2017-03-01

    We propose an efficient eigensolver for computing densely distributed spectra of the two-dimensional transmission eigenvalue problem (TEP), which is derived from Maxwell’s equations with Tellegen media and the transverse magnetic mode. The governing equations, when discretized by the standard piecewise linear finite element method, give rise to a large-scale quadratic eigenvalue problem (QEP). Our numerical simulation shows that half of the positive eigenvalues of the QEP are densely distributed in some interval near the origin. The quadratic Jacobi–Davidson method with a so-called non-equivalence deflation technique is proposed to compute the dense spectrum of the QEP. Extensive numerical simulations show that our proposed method converges efficiently, even when it needs to compute more than 5000 desired eigenpairs. Numerical results also illustrate that the computed eigenvalue curves can be approximated by nonlinear functions, which can be applied to estimate the denseness of the eigenvalues for the TEP.
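
    For readers unfamiliar with quadratic eigenvalue problems, the small dense sketch below (an illustration, not the paper's Jacobi–Davidson method with non-equivalence deflation) shows the standard companion linearization that turns a QEP into a generalized linear eigenproblem, here solved with SciPy.

    ```python
    import numpy as np
    from scipy.linalg import eig

    def qep_eigs(M, C, K):
        """Solve (lam^2 M + lam C + K) x = 0 by companion linearization."""
        n = K.shape[0]
        I, Z = np.eye(n), np.zeros((n, n))
        A = np.block([[Z, I], [-K, -C]])
        B = np.block([[I, Z], [Z, M]])
        vals, vecs = eig(A, B)        # generalized problem A z = lam B z
        return vals, vecs[:n, :]      # x is the top block of z = [x; lam x]
    ```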

  19. Efficient and Flexible Computation of Many-Electron Wave Function Overlaps.

    PubMed

    Plasser, Felix; Ruckenbauer, Matthias; Mai, Sebastian; Oppel, Markus; Marquetand, Philipp; González, Leticia

    2016-03-08

    A new algorithm for the computation of the overlap between many-electron wave functions is described. This algorithm allows for the extensive use of recurring intermediates and thus provides high computational efficiency. Because of the general formalism employed, overlaps can be computed for varying wave function types, molecular orbitals, basis sets, and molecular geometries. This paves the way for efficiently computing nonadiabatic interaction terms for dynamics simulations. In addition, other application areas can be envisaged, such as the comparison of wave functions constructed at different levels of theory. Aside from explaining the algorithm and evaluating the performance, a detailed analysis of the numerical stability of wave function overlaps is carried out, and strategies for overcoming potential severe pitfalls due to displaced atoms and truncated wave functions are presented.

  20. Efficient and Flexible Computation of Many-Electron Wave Function Overlaps

    PubMed Central

    2016-01-01

    A new algorithm for the computation of the overlap between many-electron wave functions is described. This algorithm allows for the extensive use of recurring intermediates and thus provides high computational efficiency. Because of the general formalism employed, overlaps can be computed for varying wave function types, molecular orbitals, basis sets, and molecular geometries. This paves the way for efficiently computing nonadiabatic interaction terms for dynamics simulations. In addition, other application areas can be envisaged, such as the comparison of wave functions constructed at different levels of theory. Aside from explaining the algorithm and evaluating the performance, a detailed analysis of the numerical stability of wave function overlaps is carried out, and strategies for overcoming potential severe pitfalls due to displaced atoms and truncated wave functions are presented. PMID:26854874

  1. Efficient Computation of Functional Brain Networks: toward Real-Time Functional Connectivity

    PubMed Central

    García-Prieto, Juan; Bajo, Ricardo; Pereda, Ernesto

    2017-01-01

    Functional Connectivity has been shown to be a key concept for unraveling how the brain balances functional segregation and integration properties while processing information. This work presents a set of open-source tools that significantly increase the computational efficiency of some well-known connectivity indices and Graph-Theory measures. PLV, PLI, ImC, and wPLI as Phase Synchronization measures, Mutual Information as an information-theory-based measure, and Generalized Synchronization indices are computed much more efficiently than in prior open-source implementations. Furthermore, network-theory measures like Strength, Shortest Path Length, Clustering Coefficient, and Betweenness Centrality are also implemented, showing computational times up to thousands of times faster than most well-known implementations. Altogether, this work significantly expands what can be computed in feasible times, even enabling whole-head real-time network analysis of brain function. PMID:28220071
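
    As an illustration of the kind of vectorization that makes such speed-ups possible, the sketch below computes the phase-locking value (PLV) for all channel pairs with a single matrix product; it is a minimal stand-in, not the authors' released toolbox code.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def plv_matrix(data):
        """PLV for all channel pairs in one vectorized pass.

        data: (n_channels, n_samples) array of band-passed signals.
        """
        phases = np.angle(hilbert(data, axis=1))   # instantaneous phase
        expph = np.exp(1j * phases)                # unit phasors
        # PLV_ij = |mean_t exp(i (phi_i - phi_j))| via one matrix product.
        return np.abs(expph @ expph.conj().T) / data.shape[1]
    ```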

  2. Efficient Computation of Functional Brain Networks: toward Real-Time Functional Connectivity.

    PubMed

    García-Prieto, Juan; Bajo, Ricardo; Pereda, Ernesto

    2017-01-01

    Functional Connectivity has been shown to be a key concept for unraveling how the brain balances functional segregation and integration properties while processing information. This work presents a set of open-source tools that significantly increase the computational efficiency of some well-known connectivity indices and Graph-Theory measures. PLV, PLI, ImC, and wPLI as Phase Synchronization measures, Mutual Information as an information-theory-based measure, and Generalized Synchronization indices are computed much more efficiently than in prior open-source implementations. Furthermore, network-theory measures like Strength, Shortest Path Length, Clustering Coefficient, and Betweenness Centrality are also implemented, showing computational times up to thousands of times faster than most well-known implementations. Altogether, this work significantly expands what can be computed in feasible times, even enabling whole-head real-time network analysis of brain function.

  3. Some computational challenges of developing efficient parallel algorithms for data-dependent computations in thermal-hydraulics supercomputer applications

    SciTech Connect

    Woodruff, S.B.

    1992-01-01

    The Transient Reactor Analysis Code (TRAC), which features a two-fluid treatment of thermal-hydraulics, is designed to model transients in water reactors and related facilities. One of the major computational costs associated with TRAC and similar codes is calculating constitutive coefficients. Although the formulations for these coefficients are local, the costs are flow-regime- or data-dependent; i.e., the computations needed for a given spatial node often vary widely as a function of time. Consequently, poor load balancing will degrade efficiency on either vector or data-parallel architectures when the data are organized according to spatial location. Unfortunately, a general automatic solution to the load-balancing problem associated with data-dependent computations is not yet available for massively parallel architectures. This document discusses why developers should consider alternative representations, such as a neural net representation, that do not exhibit load-balancing problems.

  4. A CAD interface for GEANT4.

    PubMed

    Poole, C M; Cornelius, I; Trapp, J V; Langton, C M

    2012-09-01

    Often CAD models already exist for parts of a geometry being simulated using GEANT4. Direct import of these CAD models into GEANT4, however, may not be possible, and complex components may be difficult to define via other means. Solutions that allow users to work around the limited support in the GEANT4 toolkit for loading predefined CAD geometries have been presented by others; however, these solutions require intermediate file format conversion using commercial software. Herein we describe a technique that allows CAD models to be loaded directly as geometry without the need for commercial software and intermediate file format conversion. Robustness of the interface was tested using a set of CAD models of various complexity; for the models used in testing, no import errors were reported and all geometry was found to be navigable by GEANT4.

  5. Computationally Efficient Use of Derivatives in Emulation of Complex Computational Models

    SciTech Connect

    Williams, Brian J.; Marcy, Peter W.

    2012-06-07

    We will investigate the use of derivative information in complex computer model emulation when the correlation function is of the compactly supported Bohman class. To this end, a Gaussian process model similar to that used by Kaufman et al. (2011) is extended to a situation where first partial derivatives in each dimension are calculated at each input site (i.e., using gradients). A simulation study in the ten-dimensional case is conducted to assess the utility of the Bohman correlation function against strictly positive correlation functions when a high degree of sparsity is induced.
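
    A minimal sketch of the key ingredient, assuming only the standard definition of the Bohman correlation: because the function is exactly zero beyond its support range, the covariance matrix over distant input sites is genuinely sparse, which is what makes large derivative-augmented designs tractable. The cross-covariances between function values and gradients used in the actual study are omitted here.

    ```python
    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.spatial.distance import cdist

    def bohman_cov(X, theta):
        """Compactly supported Bohman correlation over input sites X.

        Entries vanish exactly for distances beyond the range theta,
        so the returned covariance matrix is sparse.
        """
        t = cdist(X, X) / theta
        R = np.where(t < 1.0,
                     (1.0 - t) * np.cos(np.pi * t) + np.sin(np.pi * t) / np.pi,
                     0.0)
        return csr_matrix(R)
    ```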

  6. A computationally efficient denoising and hole-filling method for depth image enhancement

    NASA Astrophysics Data System (ADS)

    Liu, Soulan; Chen, Chen; Kehtarnavaz, Nasser

    2016-04-01

    Depth maps captured by Kinect depth cameras are being widely used for 3D action recognition. However, such images often appear noisy and contain missing pixels or black holes. This paper presents a computationally efficient method for both denoising and hole-filling in depth images. The denoising is achieved by utilizing a combination of Gaussian kernel filtering and anisotropic filtering. The hole-filling is achieved by utilizing a combination of morphological filtering and zero block filtering. Experimental results using the publicly available datasets are provided indicating the superiority of the developed method in terms of both depth error and computational efficiency compared to three existing methods.
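
    The following sketch captures the two-stage denoise-then-fill idea with standard building blocks (hole-aware Gaussian smoothing and morphological closing); it is a simplified stand-in, since the paper's anisotropic and zero-block filters are not reproduced here.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, grey_closing

    def enhance_depth(depth, sigma=1.0, close_size=5):
        """Denoise valid depth pixels, then fill zero-valued holes."""
        valid = depth > 0
        # Normalized (hole-aware) Gaussian smoothing of valid pixels only.
        num = gaussian_filter(np.where(valid, depth, 0.0).astype(float), sigma)
        den = gaussian_filter(valid.astype(float), sigma)
        smoothed = np.where(den > 1e-6, num / den, 0.0)
        # Morphological closing fills small black holes with nearby depth.
        filled = grey_closing(smoothed, size=(close_size, close_size))
        return np.where(valid, smoothed, filled)
    ```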

  7. Development of efficient computer program for dynamic simulation of telerobotic manipulation

    NASA Technical Reports Server (NTRS)

    Chen, J.; Ou, Y. J.

    1989-01-01

    Research in robot control has generated interest in computationally efficient forms of dynamic equations for multi-body systems. For a simply connected open-loop linkage, dynamic equations arranged in recursive form were found to be particularly efficient. A general computer program capable of simulating an open-loop manipulator with an arbitrary number of links has been developed based on an efficient recursive form of Kane's dynamic equations. Also included in the program is some of the important dynamics of the joint drive system, i.e., the rotational effect of the motor rotors. Further efficiency is achieved by the use of a symbolic manipulation program to generate the FORTRAN simulation program tailored for a specific manipulator based on the parameter values given. The formulations and the validation of the program are described, and some results are shown.

  8. Energy-Efficient Computational Chemistry: Comparison of x86 and ARM Systems.

    PubMed

    Keipert, Kristopher; Mitra, Gaurav; Sunriyal, Vaibhav; Leang, Sarom S; Sosonkina, Masha; Rendell, Alistair P; Gordon, Mark S

    2015-11-10

    The computational efficiency and energy-to-solution of several applications using the GAMESS quantum chemistry suite of codes is evaluated for 32-bit and 64-bit ARM-based computers, and compared to an x86 machine. The x86 system completes all benchmark computations more quickly than either ARM system and is the best choice to minimize time to solution. The ARM64 and ARM32 computational performances are similar to each other for Hartree-Fock and density functional theory energy calculations. However, for memory-intensive second-order perturbation theory energy and gradient computations the lower ARM32 read/write memory bandwidth results in computation times as much as 86% longer than on the ARM64 system. The ARM32 system is more energy efficient than the x86 and ARM64 CPUs for all benchmarked methods, while the ARM64 CPU is more energy efficient than the x86 CPU for some core counts and molecular sizes.

  9. Evaluation of Emerging Energy-Efficient Heterogeneous Computing Platforms for Biomolecular and Cellular Simulation Workloads

    PubMed Central

    Stone, John E.; Hallock, Michael J.; Phillips, James C.; Peterson, Joseph R.; Luthey-Schulten, Zaida; Schulten, Klaus

    2016-01-01

    Many of the continuing scientific advances achieved through computational biology are predicated on the availability of ongoing increases in computational power required for detailed simulation and analysis of cellular processes on biologically-relevant timescales. A critical challenge facing the development of future exascale supercomputer systems is the development of new computing hardware and associated scientific applications that dramatically improve upon the energy efficiency of existing solutions, while providing increased simulation, analysis, and visualization performance. Mobile computing platforms have recently become powerful enough to support interactive molecular visualization tasks that were previously only possible on laptops and workstations, creating future opportunities for their convenient use for meetings, remote collaboration, and as head mounted displays for immersive stereoscopic viewing. We describe early experiences adapting several biomolecular simulation and analysis applications for emerging heterogeneous computing platforms that combine power-efficient system-on-chip multi-core CPUs with high-performance massively parallel GPUs. We present low-cost power monitoring instrumentation that provides sufficient temporal resolution to evaluate the power consumption of individual CPU algorithms and GPU kernels. We compare the performance and energy efficiency of scientific applications running on emerging platforms with results obtained on traditional platforms, identify hardware and algorithmic performance bottlenecks that affect the usability of these platforms, and describe avenues for improving both the hardware and applications in pursuit of the needs of molecular modeling tasks on mobile devices and future exascale computers. PMID:27516922

  10. Evaluation of Emerging Energy-Efficient Heterogeneous Computing Platforms for Biomolecular and Cellular Simulation Workloads.

    PubMed

    Stone, John E; Hallock, Michael J; Phillips, James C; Peterson, Joseph R; Luthey-Schulten, Zaida; Schulten, Klaus

    2016-05-01

    Many of the continuing scientific advances achieved through computational biology are predicated on the availability of ongoing increases in computational power required for detailed simulation and analysis of cellular processes on biologically-relevant timescales. A critical challenge facing the development of future exascale supercomputer systems is the development of new computing hardware and associated scientific applications that dramatically improve upon the energy efficiency of existing solutions, while providing increased simulation, analysis, and visualization performance. Mobile computing platforms have recently become powerful enough to support interactive molecular visualization tasks that were previously only possible on laptops and workstations, creating future opportunities for their convenient use for meetings, remote collaboration, and as head mounted displays for immersive stereoscopic viewing. We describe early experiences adapting several biomolecular simulation and analysis applications for emerging heterogeneous computing platforms that combine power-efficient system-on-chip multi-core CPUs with high-performance massively parallel GPUs. We present low-cost power monitoring instrumentation that provides sufficient temporal resolution to evaluate the power consumption of individual CPU algorithms and GPU kernels. We compare the performance and energy efficiency of scientific applications running on emerging platforms with results obtained on traditional platforms, identify hardware and algorithmic performance bottlenecks that affect the usability of these platforms, and describe avenues for improving both the hardware and applications in pursuit of the needs of molecular modeling tasks on mobile devices and future exascale computers.

  11. Dual vs. single computer monitor in a Canadian hospital Archiving Department: a study of efficiency and satisfaction.

    PubMed

    Poder, Thomas G; Godbout, Sylvie T; Bellemare, Christian

    2011-01-01

    This paper describes a comparative study of clinical coding by Archivists (also known as Clinical Coders in some other countries) using single and dual computer monitors. In the present context, processing a record corresponds to checking the available information; searching for the missing physician information; and finally, performing clinical coding. We collected data for each Archivist during her use of the single monitor for 40 hours and during her use of the dual monitor for 20 hours. During the experimental periods, Archivists did not perform other related duties, so we were able to measure the real-time processing of records. To control for the type of records and their impact on the process time required, we categorised the cases as major or minor, based on whether acute care or day surgery was involved. Overall results show that 1,234 records were processed using a single monitor and 647 records using a dual monitor. The time required to process a record was significantly higher (p = .071) with a single monitor compared to a dual monitor (19.83 vs. 18.73 minutes). However, the percentage of major cases was significantly higher (p = .000) in the single monitor group compared to the dual monitor group (78% vs. 69%). As a consequence, we adjusted our results, which reduced the difference in time required to process a record between the two systems from 1.1 to 0.61 minutes. Thus, the net real-time difference was only 37 seconds in favour of the dual monitor system. Extrapolated over a 5-year period, this would represent a time savings of 3.1% and generate a net cost savings of $7,729 CAD (Canadian dollars) for each workstation that devoted 35 hours per week to the processing of records. Finally, satisfaction questionnaire responses indicated a high level of satisfaction and support for the dual-monitor system. The implementation of a dual-monitor system in a hospital archiving department is an efficient option in the context of scarce human resources and has the

  12. Indications for Computer-Aided Design and Manufacturing in Congenital Craniofacial Reconstruction.

    PubMed

    Fisher, Mark; Medina, Miguel; Bojovic, Branko; Ahn, Edward; Dorafshar, Amir H

    2016-09-01

    The complex three-dimensional relationships in congenital craniofacial reconstruction uniquely lend themselves to the accurate planning and modeling provided by computer-aided design and manufacturing (CAD/CAM). The goal of this study was to illustrate indications where CAD/CAM would be helpful in congenital craniofacial reconstruction and to discuss the application of this technology and its outcomes. A retrospective review was performed of all congenital craniofacial cases performed by the senior author between 2010 and 2014. Cases where CAD/CAM was used were identified, and illustrative cases to demonstrate the benefits of CAD/CAM were selected. Preoperative appearance, computerized plan, intraoperative course, and final outcome were analyzed. Preoperative planning enabled efficient execution of the operative plan with predictable results. Risk factors that made these patients good candidates for CAD/CAM were identified and compiled. Several indications, including multisuture and revisional craniosynostosis, facial bipartition, four-wall box osteotomy, reduction cranioplasty, and distraction osteogenesis, could benefit most from this technology. We illustrate the use of CAD/CAM for these applications and describe the decision-making process both before and during surgery. We explore why we believe that CAD/CAM is indicated in these scenarios as well as the disadvantages and risks.

  13. Fabricating Complete Dentures with CAD/CAM and RP Technologies.

    PubMed

    Bilgin, Mehmet Selim; Erdem, Ali; Aglarci, Osman Sami; Dilber, Erhan

    2015-06-01

    Two technological approaches to fabricating dentures, computer-aided design and computer-aided manufacturing (CAD/CAM) and rapid prototyping (RP), are combined with the conventional techniques of impression and jaw relation recording to determine their feasibility and applicability. Maxillary and mandibular edentulous jaw models were produced using silicone molds. After obtaining a gypsum working model, acrylic bases were crafted, and occlusal rims for each model were fabricated with previously determined standard vertical and centric relationships. The maxillary and mandibular relationships were recorded with guides. The occlusal rims were then scanned with a digital scanner. The alignment of the maxillary and mandibular teeth was verified. The teeth in each arch were fabricated in one piece, or set, either by CAM or RP. Conventional waxing and flasking was then performed for both methods. These techniques obviate a practitioner's need for technicians during design and provide the patient with an opportunity to participate in esthetic design with the dentist. In addition, CAD/CAM and RP reduce chair time; however, the materials and techniques need further improvement. Both CAD/CAM and RP techniques seem promising for reducing chair time and allowing the patient to participate in esthetic design. Furthermore, the one-set aligned artificial tooth design may increase the acrylic's durability.

  14. Computationally efficient scalar nonparaxial modeling of optical wave propagation in the far-field.

    PubMed

    Nguyen, Giang-Nam; Heggarty, Kevin; Gérard, Philippe; Serio, Bruno; Meyrueis, Patrick

    2014-04-01

    We present a scalar model to overcome the computation time and sampling interval limitations of the traditional Rayleigh-Sommerfeld (RS) formula and angular spectrum method in computing wide-angle diffraction in the far-field. Numerical and experimental results show that our proposed method, based on an accurate nonparaxial diffraction step onto a hemisphere and a projection onto a plane, accurately predicts the observed nonparaxial far-field diffraction pattern, while its calculation time is much lower than that of the more rigorous RS integral. The results enable a fast and efficient way to compute far-field nonparaxial diffraction when the conventional Fraunhofer pattern fails to predict it correctly.
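
    For context, the conventional paraxial far-field computation that the proposed model improves upon is a single FFT; the hedged sketch below shows that baseline and where it breaks down (the small-angle mapping between spatial frequency and diffraction angle).

    ```python
    import numpy as np

    def fraunhofer_far_field(aperture, wavelength, dx):
        """Paraxial (Fraunhofer) far-field pattern via one FFT -- the
        baseline that fails at wide angles, which the paper's
        hemisphere-projection model corrects."""
        U = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(aperture)))
        n = aperture.shape[0]
        fx = np.fft.fftshift(np.fft.fftfreq(n, d=dx))  # spatial frequencies
        sin_theta = wavelength * fx   # angle map: valid only for |sin| << 1
        return np.abs(U) ** 2, sin_theta
    ```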

  15. From Oss CAD to Bim for Cultural Heritage Digital Representation

    NASA Astrophysics Data System (ADS)

    Logothetis, S.; Karachaliou, E.; Stylianidis, E.

    2017-02-01

    The paper illustrates the use of open-source computer-aided design (CAD) environments to develop Building Information Modelling (BIM) tools able to manage 3D models in the field of cultural heritage. Nowadays, the development of Free and Open Source Software (FOSS) has been growing rapidly, and its use is becoming consolidated. Although BIM technology is widely known and used, there is a lack of integrated open-source platforms able to support all stages of Historic Building Information Modelling (HBIM) processes. The present research aims to use a FOSS CAD environment to develop BIM plug-ins able to import and edit digital representations of cultural heritage models derived by photogrammetric methods.

  16. Tooth-colored CAD/CAM monolithic restorations.

    PubMed

    Reich, S

    2015-01-01

    A monolithic restoration (also known as a full contour restoration) is one that is manufactured from a single material for the fully anatomic replacement of lost tooth structure. Additional staining (followed by glaze firing if ceramic materials are used) may be performed to enhance the appearance of the restoration. For decades, monolithic restoration has been the standard for inlay and partial crown restorations manufactured by both pressing and computer-aided design and manufacturing (CAD/CAM) techniques. A limited selection of monolithic materials is now available for dental crown and bridge restorations. The IDS (2015) provided an opportunity to learn about and evaluate current trends in this field. In addition to new developments, established materials are also mentioned in this article to complete the picture. In line with the strategic focus of the IJCD, the focus here is naturally on CAD/CAM materials.

  17. Spin-neurons: A possible path to energy-efficient neuromorphic computers

    NASA Astrophysics Data System (ADS)

    Sharad, Mrigank; Fan, Deliang; Roy, Kaushik

    2013-12-01

    Recent years have witnessed growing interest in the field of brain-inspired computing based on neural-network architectures. In order to translate the related algorithmic models into powerful, yet energy-efficient cognitive-computing hardware, computing-devices beyond CMOS may need to be explored. The suitability of such devices to this field of computing would strongly depend upon how closely their physical characteristics match with the essential computing primitives employed in such models. In this work, we discuss the rationale of applying emerging spin-torque devices for bio-inspired computing. Recent spin-torque experiments have shown the path to low-current, low-voltage, and high-speed magnetization switching in nano-scale magnetic devices. Such magneto-metallic, current-mode spin-torque switches can mimic the analog summing and "thresholding" operation of an artificial neuron with high energy-efficiency. Comparison with a CMOS-based analog circuit model of a neuron shows that "spin-neurons" (spin-based circuit models of neurons) can achieve more than two orders of magnitude lower energy and beyond three orders of magnitude reduction in energy-delay product. The application of spin-neurons can therefore be an attractive option for neuromorphic computers of the future.

  18. Spin-neurons: A possible path to energy-efficient neuromorphic computers

    SciTech Connect

    Sharad, Mrigank; Fan, Deliang; Roy, Kaushik

    2013-12-21

    Recent years have witnessed growing interest in the field of brain-inspired computing based on neural-network architectures. In order to translate the related algorithmic models into powerful, yet energy-efficient cognitive-computing hardware, computing-devices beyond CMOS may need to be explored. The suitability of such devices to this field of computing would strongly depend upon how closely their physical characteristics match with the essential computing primitives employed in such models. In this work, we discuss the rationale of applying emerging spin-torque devices for bio-inspired computing. Recent spin-torque experiments have shown the path to low-current, low-voltage, and high-speed magnetization switching in nano-scale magnetic devices. Such magneto-metallic, current-mode spin-torque switches can mimic the analog summing and “thresholding” operation of an artificial neuron with high energy-efficiency. Comparison with a CMOS-based analog circuit model of a neuron shows that “spin-neurons” (spin-based circuit models of neurons) can achieve more than two orders of magnitude lower energy and beyond three orders of magnitude reduction in energy-delay product. The application of spin-neurons can therefore be an attractive option for neuromorphic computers of the future.

  19. CAD/CAM at a Distance: Assessing the Effectiveness of Web-Based Instruction To Meet Workforce Development Needs. AIR 2000 Annual Forum Paper.

    ERIC Educational Resources Information Center

    Wilkerson, Joyce A.; Elkins, Susan A.

    This qualitative case study assessed web-based instruction in a computer-aided design/computer-assisted manufacturing (CAD/CAM) course designed for workforce development. The study examined students' and instructors' experience in a CAD/CAM course delivered exclusively on the Internet, evaluating course content and delivery, clarity of…

  20. NREL's Building-Integrated Supercomputer Provides Heating and Efficient Computing (Fact Sheet)

    SciTech Connect

    Not Available

    2014-09-01

    NREL's Energy Systems Integration Facility (ESIF) is meant to investigate new ways to integrate energy sources so they work together efficiently, and one of the key tools to that investigation, a new supercomputer, is itself a prime example of energy systems integration. NREL teamed with Hewlett-Packard (HP) and Intel to develop the innovative warm-water, liquid-cooled Peregrine supercomputer, which not only operates efficiently but also serves as the primary source of building heat for ESIF offices and laboratories. This innovative high-performance computer (HPC) can perform more than a quadrillion calculations per second as part of the world's most energy-efficient HPC data center.

  1. Understanding dental CAD/CAM for restorations--accuracy from a mechanical engineering viewpoint.

    PubMed

    Tapie, Laurent; Lebon, Nicolas; Mawussi, Bernardin; Fron-Chabouis, Hélène; Duret, Francois; Attal, Jean-Pierre

    2015-01-01

    As is the case in the field of medicine, as well as in most areas of daily life, digital technology is increasingly being introduced into dental practice. Computer-aided design/computer-aided manufacturing (CAD/CAM) solutions are available not only for chairside practice but also for creating inlays, crowns, fixed partial dentures (FPDs), implant abutments, and other dental prostheses. CAD/CAM dental practice can be considered as the handling of devices and software processing for the almost automatic design and creation of dental restorations. However, dentists who want to use dental CAD/CAM systems often do not have enough information to understand the variations offered by this technology in practice. Knowledge of the random and systematic errors in accuracy with CAD/CAM systems can help to achieve successful restorations with this technology, and help with the purchasing of a CAD/CAM system that meets the clinical needs of restoration. This article provides a mechanical engineering viewpoint of the accuracy of CAD/CAM systems, to help dentists understand the impact of this technology on restoration accuracy.

  2. Rationale for the Use of CAD/CAM Technology in Implant Prosthodontics

    PubMed Central

    Abduo, Jaafar; Lyons, Karl

    2013-01-01

    Despite the predictable longevity of implant prostheses, there is ongoing interest in continuing to improve implant prosthodontic treatment and outcomes. One of the developments is the application of computer-aided design and computer-aided manufacturing (CAD/CAM) to produce implant abutments and frameworks from metal or ceramic materials. The aim of this narrative review is to critically evaluate the rationale of CAD/CAM utilization for implant prosthodontics. To date, CAD/CAM allows simplified production of precise and durable implant components. The precision of fit has been proven in several laboratory experiments and has been attributed to the design of implants. Milling also facilitates component fabrication from durable and aesthetic materials. With further development, it is expected that the CAD/CAM protocol will be further simplified. Although compelling clinical evidence supporting the superiority of CAD/CAM implant restorations is still lacking, it is envisioned that CAD/CAM may become the mainstream approach for implant component fabrication. PMID:23690778

  3. Improving the Efficiency and Effectiveness of Grading through the Use of Computer-Assisted Grading Rubrics

    ERIC Educational Resources Information Center

    Anglin, Linda; Anglin, Kenneth; Schumann, Paul L.; Kaliski, John A.

    2008-01-01

    This study tests the use of computer-assisted grading rubrics compared to other grading methods with respect to the efficiency and effectiveness of different grading processes for subjective assignments. The test was performed on a large Introduction to Business course. The students in this course were randomly assigned to four treatment groups…

  4. Framework for computationally efficient optimal irrigation scheduling using ant colony optimization

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A general optimization framework is introduced with the overall goal of reducing search space size and increasing the computational efficiency of evolutionary algorithm application for optimal irrigation scheduling. The framework achieves this goal by representing the problem in the form of a decisi...

  5. The Improvement of Efficiency in the Numerical Computation of Orbit Trajectories

    NASA Technical Reports Server (NTRS)

    Dyer, J.; Danchick, R.; Pierce, S.; Haney, R.

    1972-01-01

    An analysis, system design, programming, and evaluation of results are described for numerical computation of orbit trajectories. Evaluation of generalized methods, interaction of different formulations for satellite motion, transformation of equations of motion and integrator loads, and development of efficient integrators are also considered.

  6. Efficient shortest-path-tree computation in network routing based on pulse-coupled neural networks.

    PubMed

    Qu, Hong; Yi, Zhang; Yang, Simon X

    2013-06-01

    Shortest path tree (SPT) computation is a critical issue for routers using link-state routing protocols, such as the most commonly used open shortest path first and intermediate system to intermediate system. Each router needs to recompute a new SPT rooted at itself whenever a change happens in the link state. Most commercial routers do this computation by deleting the current SPT and building a new one from scratch using static algorithms such as the Dijkstra algorithm. Such recomputation of an entire SPT is inefficient, which may consume a considerable amount of CPU time and result in a time delay in the network. Some dynamic updating methods using the information in the updated SPT have been proposed in recent years. However, there are still many limitations in those dynamic algorithms. In this paper, a new modified model of pulse-coupled neural networks (M-PCNNs) is proposed for the SPT computation. It is rigorously proved that the proposed model is capable of solving some optimization problems, such as the SPT. A static algorithm is proposed based on the M-PCNNs to compute the SPT efficiently for large-scale problems. In addition, a dynamic algorithm that makes use of the structure of the previously computed SPT is proposed, which significantly improves the efficiency of the algorithm. Simulation results demonstrate the effective and efficient performance of the proposed approach.
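
    The static baseline that the paper argues is wasteful, recomputing the full SPT from scratch with Dijkstra's algorithm whenever a link state changes, looks like the following sketch (illustrative only; the M-PCNN algorithms themselves are not reproduced here).

    ```python
    import heapq

    def shortest_path_tree(graph, root):
        """Full SPT recomputation with Dijkstra's algorithm.

        graph: {node: [(neighbor, cost), ...]}
        """
        dist, parent = {root: 0.0}, {root: None}
        heap = [(0.0, root)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist[u]:
                continue                  # stale queue entry
            for v, w in graph.get(u, []):
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v], parent[v] = nd, u
                    heapq.heappush(heap, (nd, v))
        return parent, dist               # parent pointers encode the SPT
    ```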

  7. Flexible Concurrency Control for Legacy CAD to Construct Collaborative CAD Environment

    NASA Astrophysics Data System (ADS)

    Cai, Xiantao; Li, Xiaoxia; He, Fazhi; Han, Soonhung; Chen, Xiao

    Collaborative CAD (Co-CAD) systems can be constructed based on either a 3D kernel or legacy stand-alone CAD systems, which are typically commercial CAD systems such as CATIA, Pro/E, and so on. Most synchronous Co-CAD systems, especially those based on legacy stand-alone CAD systems, adopt the lock mechanism or floor control for concurrency control, both of which are very restrictive and inflexible. A flexible concurrency control method is proposed to support flexible concurrency control in Co-CAD systems based on legacy stand-alone CAD systems. First, a model of operation relationships is proposed with special consideration for the concurrency control of this kind of Co-CAD system. Then two types of data structures, the Collaborative Feature Dependent Graph (Co-FDG) and the Collaborative Feature Operational List (Co-FOL), are presented as the cornerstone of flexible concurrency control. Next, a Flexible Concurrency Control Algorithm (FCCA) is proposed. Finally, a Selective Undo/Redo Algorithm is proposed, which further improves the flexibility of Co-CAD.

  8. The efficient implementation of correction procedure via reconstruction with GPU computing

    NASA Astrophysics Data System (ADS)

    Zimmerman, Ben J.

    Computational fluid dynamics (CFD) has long been a useful tool to model fluid flow problems across many engineering disciplines, and while problem size, complexity, and difficulty continue to expand, the demands for robustness and accuracy grow. Furthermore, generating high-order accurate solutions has escalated the required computational resources, and as problems continue to increase in complexity, so will computational needs such as memory requirements and calculation time for accurate flow field prediction. To improve upon computational time, vast amounts of computational power and resources are employed, but even over dozens to hundreds of central processing units (CPUs), the required computational time to formulate solutions can be weeks, months, or longer, which is particularly true when generating high-order accurate solutions over large computational domains. One response to lower the computational time for CFD problems is to implement graphical processing units (GPUs) with current CFD solvers. GPUs have illustrated the ability to solve problems orders of magnitude faster than their CPU counterparts with identical accuracy. The goal of the presented work is to combine a CFD solver and GPU computing with the intent to solve complex problems at a high order of accuracy while lowering the computational time required to generate the solution. The CFD solver should have high-order spatial capabilities to evaluate small fluctuations and fluid structures not generally captured by lower-order methods and be efficient for the GPU architecture. This research combines the high-order Correction Procedure via Reconstruction (CPR) method with compute unified device architecture (CUDA) from NVIDIA to reach these goals. In addition, the study demonstrates the accuracy of the developed solver by comparing results with other solvers and exact solutions. Solving CFD problems accurately and quickly are two factors to consider for the next generation of solvers. GPU computing is a

  9. Computationally efficient measure of topological redundancy of biological and social networks

    NASA Astrophysics Data System (ADS)

    Albert, Réka; Dasgupta, Bhaskar; Hegde, Rashmi; Sivanathan, Gowri Sangeetha; Gitter, Anthony; Gürsoy, Gamze; Paul, Pradyut; Sontag, Eduardo

    2011-09-01

    It is well known that biological and social interaction networks have a varying degree of redundancy, though a consensus on the precise cause of this is so far lacking. In this paper, we introduce a topological redundancy measure for labeled directed networks that is formal, computationally efficient, and applicable to a variety of directed networks such as cellular signaling, metabolic, and social interaction networks. We demonstrate the computational efficiency of our measure by computing its value and statistical significance on a number of biological and social networks with up to several thousands of nodes and edges. Our results suggest a number of interesting observations: (1) social networks are more redundant than their biological counterparts, (2) transcriptional networks are less redundant than signaling networks, (3) the topological redundancy of the C. elegans metabolic network is largely due to its inclusion of currency metabolites, and (4) the redundancy of signaling networks is highly (negatively) correlated with the monotonicity of their dynamics.
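
    The paper's formal redundancy measure is information-theoretic; as a loose intuition-builder only, the hypothetical proxy below counts the fraction of edges whose individual removal changes no pairwise reachability in a directed network (using NetworkX). It is not the authors' measure.

    ```python
    import networkx as nx

    def reachability_redundancy(G):
        """Fraction of edges redundant for reachability (illustrative proxy)."""
        base = {u: nx.descendants(G, u) for u in G}
        redundant = 0
        for u, v in list(G.edges()):
            G.remove_edge(u, v)
            if all(nx.descendants(G, s) == base[s] for s in G):
                redundant += 1
            G.add_edge(u, v)              # restore the graph
        return redundant / G.number_of_edges()
    ```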

  10. Fabrication of lithium silicate ceramic veneers with a CAD/CAM approach: a clinical report of cleidocranial dysplasia.

    PubMed

    da Cunha, Leonardo Fernandes; Mukai, Eduardo; Hamerschmitt, Raphael Meneghetti; Correr, Gisele Maria

    2015-05-01

    The fabrication of minimally invasive ceramic veneers remains a challenge for dental restorations involving computer-aided design and computer-aided manufacturing (CAD/CAM). The application of an appropriate CAD/CAM protocol and correlation mode not only simplifies the fabrication of ceramic veneers but also improves the resulting esthetics. Ceramic veneers can restore tooth abnormalities caused by disorders such as cleidocranial dysplasia, enamel hypoplasia, or supernumerary teeth. This report illustrates the fabrication of dental veneers with a new lithium silicate ceramic and the CAD/CAM technique in a patient with cleidocranial dysplasia.

  11. Developing a computationally efficient dynamic multilevel hybrid optimization scheme using multifidelity model interactions.

    SciTech Connect

    Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Castro, Joseph Pete Jr.; Giunta, Anthony Andrew

    2006-01-01

    Many engineering application problems use optimization algorithms in conjunction with numerical simulators to search for solutions. The formulation of relevant objective functions and constraints dictate possible optimization algorithms. Often, a gradient based approach is not possible since objective functions and constraints can be nonlinear, nonconvex, non-differentiable, or even discontinuous and the simulations involved can be computationally expensive. Moreover, computational efficiency and accuracy are desirable and also influence the choice of solution method. With the advent and increasing availability of massively parallel computers, computational speed has increased tremendously. Unfortunately, the numerical and model complexities of many problems still demand significant computational resources. Moreover, in optimization, these expenses can be a limiting factor since obtaining solutions often requires the completion of numerous computationally intensive simulations. Therefore, we propose a multifidelity optimization algorithm (MFO) designed to improve the computational efficiency of an optimization method for a wide range of applications. In developing the MFO algorithm, we take advantage of the interactions between multi fidelity models to develop a dynamic and computational time saving optimization algorithm. First, a direct search method is applied to the high fidelity model over a reduced design space. In conjunction with this search, a specialized oracle is employed to map the design space of this high fidelity model to that of a computationally cheaper low fidelity model using space mapping techniques. Then, in the low fidelity space, an optimum is obtained using gradient or non-gradient based optimization, and it is mapped back to the high fidelity space. In this paper, we describe the theory and implementation details of our MFO algorithm. We also demonstrate our MFO method on some example problems and on two applications: earth penetrators and

  12. Efficient implementation for spherical flux computation and its application to vascular segmentation.

    PubMed

    Law, Max W K; Chung, Albert C S

    2009-03-01

    Spherical flux is the flux inside a spherical region, and it is very useful in the analysis of tubular structures in magnetic resonance angiography and computed tomographic angiography. The conventional approach is to estimate the spherical flux in the spatial domain. Its running time depends on the sphere radius quadratically, which leads to very slow spherical flux computation when the sphere size is large. This paper proposes a more efficient implementation for spherical flux computation in the Fourier domain. Our implementation is based on the reformulation of the spherical flux calculation using the divergence theorem, spherical step function, and the convolution operation. With this reformulation, most of the calculations are performed in the Fourier domain. We show how to select the frequency subband so that the computation accuracy can be maintained. It is experimentally demonstrated that, using the synthetic and clinical phase contrast magnetic resonance angiographic volumes, our implementation is more computationally efficient than the conventional spatial implementation. The accuracies of our implementation and that of the conventional spatial implementation are comparable. Finally, the proposed implementation can definitely benefit the computation of the multiscale spherical flux with a set of radii because, unlike the conventional spatial implementation, the time complexity of the proposed implementation does not depend on the sphere radius.
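
    A minimal sketch of the reformulation described above, assuming a vector field sampled on a regular grid: by the divergence theorem, the flux through a sphere centered at every voxel equals the divergence convolved with a ball indicator, and doing that convolution with FFTs makes the cost independent of the sphere radius (the paper's frequency subband selection and boundary handling are glossed over).

    ```python
    import numpy as np

    def spherical_flux_fft(vx, vy, vz, radius, spacing=1.0):
        """Flux through a sphere around every voxel, via FFT convolution."""
        # Divergence of the vector field on the grid.
        div = (np.gradient(vx, spacing, axis=0)
               + np.gradient(vy, spacing, axis=1)
               + np.gradient(vz, spacing, axis=2))
        # Ball indicator kernel with wrapped (circular) distances so it is
        # centered for FFT-based circular convolution.
        axes = [np.minimum(np.arange(s), s - np.arange(s)) * spacing
                for s in div.shape]
        X, Y, Z = np.meshgrid(*axes, indexing="ij")
        ball = (X**2 + Y**2 + Z**2 <= radius**2).astype(float)
        # One FFT pair regardless of the sphere radius.
        flux = np.fft.ifftn(np.fft.fftn(div) * np.fft.fftn(ball)).real
        return flux * spacing**3          # voxel volume -> volume integral
    ```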

  13. Education & Training for CAD/CAM: Results of a National Probability Survey. Krannert Institute Paper Series.

    ERIC Educational Resources Information Center

    Majchrzak, Ann

    A study was conducted of the training programs used by plants with Computer Automated Design/Computer Automated Manufacturing (CAD/CAM) to help their employees adapt to automated manufacturing. The study sought to determine the relative priorities of manufacturing establishments for training certain workers in certain skills; the status of…

  14. An accelerated technique for a ceramic-pressed-to-metal restoration with CAD/CAM technology.

    PubMed

    Lee, Ju-Hyoung

    2014-11-01

    The conventional fabrication of metal ceramic restorations depends on an experienced dental technician and requires a long processing time. However, complete-contour digital waxing and digital cutback with computer-aided design and computer-aided manufacturing (CAD/CAM) technology can overcome these disadvantages and provide a correct metal framework design and space for the ceramic material.

  15. The fabrication of a CAD/CAM ceramic crown to fit an existing partial removable dental prosthesis: a clinical report.

    PubMed

    Yoon, Tae-Ho; Chang, Won-Gun

    2012-09-01

    The application of computer-aided design/computer-assisted manufacturing (CAD/CAM) technology to fabricate a retrofit ceramic surveyed crown to an existing partial removable dental prosthesis (PRDP) is described. The fabrication of a surveyed crown by using CAD/CAM technology enables precise and easy replication of the shape and contours as well as the rest seat of the existing abutment tooth, ensuring excellent adaptation to the existing PRDP framework with minimal adjustment.

  16. A computationally efficient approach for hidden-Markov model-augmented fingerprint-based positioning

    NASA Astrophysics Data System (ADS)

    Roth, John; Tummala, Murali; McEachen, John

    2016-09-01

    This paper presents a computationally efficient approach for mobile subscriber position estimation in wireless networks. A method of data scaling assisted by timing adjust is introduced in fingerprint-based location estimation under a framework which allows for minimising computational cost. The proposed method maintains a comparable level of accuracy to the traditional case where no data scaling is used and is evaluated in a simulated environment under varying channel conditions. The proposed scheme is studied when it is augmented by a hidden-Markov model to match the internal parameters to the prevailing channel conditions, thus minimising computational cost while maximising accuracy. Furthermore, the timing adjust quantity, available in modern wireless signalling messages, is shown to further reduce computational cost and increase accuracy when available. The results may be seen as a significant step towards integrating advanced position-based modelling with power-sensitive mobile devices.
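
    As background for the fingerprinting step itself (the HMM augmentation and timing-adjust scaling are not reproduced), a generic weighted k-nearest-neighbour position estimate over a survey database might look like this sketch; all names here are illustrative.

    ```python
    import numpy as np

    def knn_fingerprint(rss, db_rss, db_pos, k=4):
        """Weighted kNN fingerprint positioning (generic baseline).

        rss: observed signal-strength vector; db_rss: (N, d) survey
        fingerprints; db_pos: (N, 2) survey coordinates.
        """
        d = np.linalg.norm(db_rss - rss, axis=1)   # distance in signal space
        idx = np.argsort(d)[:k]                    # k closest fingerprints
        w = 1.0 / (d[idx] + 1e-9)                  # inverse-distance weights
        return (w[:, None] * db_pos[idx]).sum(axis=0) / w.sum()
    ```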

  17. Use of Existing CAD Models for Radiation Shielding Analysis

    NASA Technical Reports Server (NTRS)

    Lee, K. T.; Barzilla, J. E.; Wilson, P.; Davis, A.; Zachman, J.

    2015-01-01

    The utility of a radiation exposure analysis depends not only on the accuracy of the underlying particle transport code, but also on the accuracy of the geometric representations of both the vehicle used as radiation shielding mass and the phantom representation of the human form. The current NASA/Space Radiation Analysis Group (SRAG) process to determine crew radiation exposure in a vehicle design incorporates both output from an analytic High Z and Energy Particle Transport (HZETRN) code and the properties (i.e., material thicknesses) of a previously processed drawing. This geometry pre-process can be time-consuming, and the results are less accurate than those determined using a Monte Carlo-based particle transport code. The current work aims to improve this process. Although several Monte Carlo programs (FLUKA, Geant4) are readily available, most use an internal geometry engine. The lack of an interface with the standard CAD formats used by the vehicle designers limits the ability of the user to communicate complex geometries. Translation of native CAD drawings into a format readable by these transport programs is time-consuming and prone to error. The Direct Accelerated Geometry-United (DAGU) project is intended to provide an interface between the native vehicle or phantom CAD geometry and multiple particle transport codes to minimize problem setup, computing time and analysis error.

  18. Centerline-based colon segmentation for CAD of CT colonography

    NASA Astrophysics Data System (ADS)

    Näppi, Janne; Frimmel, Hans; Yoshida, Hiroyuki

    2006-03-01

    We developed a fast centerline-based segmentation (CBS) algorithm for the extraction of colon in computer-aided detection (CAD) for CT colonography (CTC). CBS calculates local centerpoints along thresholded components of abdominal air, and connects the centerpoints iteratively to yield a colon centerline. A thick region encompassing the colonic wall is extracted by use of region-growing around the centerline. The resulting colonic wall is employed in our CAD scheme for the detection of polyps, in which polyps are detected within the wall by use of volumetric shape features. False-positive detections are reduced by use of a Bayesian neural network. The colon extraction accuracy of CBS was evaluated by use of 38 clinical CTC scans representing various preparation conditions. On average, CBS covered more than 96% of the visible region of colon with less than 1% extracolonic components in the extracted region. The polyp detection performance of the CAD scheme was evaluated by use of 121 clinical cases with 42 colonoscopy-confirmed polyps 5-25 mm. At a 93% by-polyp detection sensitivity for polyps >=5 mm, a leave-one-patient-out evaluation yielded 1.4 false-positive polyp detections per CT scan.

  19. AutoBioCAD: full biodesign automation of genetic circuits.

    PubMed

    Rodrigo, Guillermo; Jaramillo, Alfonso

    2013-05-17

    Synthetic regulatory networks with prescribed functions are engineered by assembling a reduced set of functional elements. We could also assemble them computationally if the mathematical models of those functional elements were predictive enough in different genetic contexts. Only after achieving this will we have libraries of models of biological parts able to provide predictive dynamical behaviors for most circuits constructed with them. We thus need tools that can automatically explore different genetic contexts, in addition to being able to use such libraries to design novel circuits with targeted dynamics. We have implemented a new tool, AutoBioCAD, aimed at the automated design of gene regulatory circuits. AutoBioCAD loads a library of models of genetic elements and implements evolutionary design strategies to produce (i) nucleotide sequences encoding circuits with targeted dynamics that can then be tested experimentally and (ii) circuit models for testing regulation principles in natural systems, providing a new tool for synthetic biology. AutoBioCAD can be used to model and design genetic circuits with dynamic behavior, thanks to the incorporation of stochastic effects, robustness, qualitative dynamics, multiobjective optimization, or degenerate nucleotide sequences, all facilitating the link with biological part/circuit engineering.

  20. Computationally efficient analysis of extraordinary optical transmission through infinite and truncated subwavelength hole arrays

    NASA Astrophysics Data System (ADS)

    Camacho, Miguel; Boix, Rafael R.; Medina, Francisco

    2016-06-01

    The authors present a computationally efficient technique for the analysis of extraordinary transmission through both infinite and truncated periodic arrays of slots in perfect conductor screens of negligible thickness. An integral equation is obtained for the tangential electric field in the slots both in the infinite case and in the truncated case. The unknown functions are expressed as linear combinations of known basis functions, and the unknown weight coefficients are determined by means of Galerkin's method. The coefficients of Galerkin's matrix are obtained in the spatial domain in terms of double finite integrals containing the Green's functions (which, in the infinite case, is efficiently computed by means of Ewald's method) times cross-correlations between both the basis functions and their divergences. The computation in the spatial domain is an efficient alternative to the direct computation in the spectral domain since this latter approach involves the determination of either slowly convergent double infinite summations (infinite case) or slowly convergent double infinite integrals (truncated case). The results obtained are validated by means of commercial software, and it is found that the integral equation technique presented in this paper is at least two orders of magnitude faster than commercial software for a similar accuracy. It is also shown that the phenomena related to periodicity such as extraordinary transmission and Wood's anomaly start to appear in the truncated case for arrays with more than 100 (10 ×10 ) slots.
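
    In outline, the method-of-moments machinery described above has the familiar Galerkin form; the equations below are a generic reconstruction from the abstract (the specific basis functions and the Ewald-accelerated Green's function are not spelled out here):

    ```latex
    % Expand the unknown tangential slot field in N known basis functions,
    % then test with the same functions (Galerkin) to get a linear system.
    \[
      \mathbf{E}_t(\mathbf{r}) \approx \sum_{n=1}^{N} c_n\,\mathbf{f}_n(\mathbf{r}),
      \qquad
      \sum_{n=1}^{N} Z_{mn}\, c_n = b_m , \quad m = 1,\dots,N,
    \]
    \[
      Z_{mn} = \int_{S_m}\!\!\int_{S_n}
        \mathbf{f}_m(\mathbf{r}) \cdot
        \bar{\bar{\mathbf{G}}}(\mathbf{r},\mathbf{r}')\,
        \mathbf{f}_n(\mathbf{r}')\; dS'\, dS .
    \]
    ```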

  1. CADS:Cantera Aerosol Dynamics Simulator.

    SciTech Connect

    Moffat, Harry K.

    2007-07-01

    This manual describes a library for aerosol kinetics and transport, called CADS (Cantera Aerosol Dynamics Simulator), which employs a section-based approach for describing the particle size distributions. CADS is based upon Cantera, a set of C++ libraries and applications that handles gas phase species transport and reactions. The method uses a discontinuous Galerkin formulation to represent the particle distributions within each section and to solve for changes to the aerosol particle distributions due to condensation, coagulation, and nucleation processes. CADS conserves particles, elements, and total enthalpy up to numerical round-off error, in all of its formulations. Both 0-D time-dependent and 1-D steady-state applications (an opposing-flow flame application) have been developed with CADS, with the initial emphasis on developing fundamental mechanisms for soot formation within fires. This report also describes the 0-D application, TDcads, which models a time-dependent perfectly stirred reactor.

  2. Reducing Vehicle Weight and Improving U.S. Energy Efficiency Using Integrated Computational Materials Engineering

    NASA Astrophysics Data System (ADS)

    Joost, William J.

    2012-09-01

    Transportation accounts for approximately 28% of U.S. energy consumption with the majority of transportation energy derived from petroleum sources. Many technologies such as vehicle electrification, advanced combustion, and advanced fuels can reduce transportation energy consumption by improving the efficiency of cars and trucks. Lightweight materials are another important technology that can improve passenger vehicle fuel efficiency by 6-8% for each 10% reduction in weight while also making electric and alternative vehicles more competitive. Despite the opportunities for improved efficiency, widespread deployment of lightweight materials for automotive structures is hampered by technology gaps most often associated with performance, manufacturability, and cost. In this report, the impact of reduced vehicle weight on energy efficiency is discussed with a particular emphasis on quantitative relationships determined by several researchers. The most promising lightweight materials systems are described along with a brief review of the most significant technical barriers to their implementation. For each material system, the development of accurate material models is critical to support simulation-intensive processing and structural design for vehicles; improved models also contribute to an integrated computational materials engineering (ICME) approach for addressing technical barriers and accelerating deployment. The value of computational techniques is described by considering recent ICME and computational materials science success stories with an emphasis on applying problem-specific methods.

  3. An efficient sparse matrix multiplication scheme for the CYBER 205 computer

    NASA Technical Reports Server (NTRS)

    Lambiotte, Jules J., Jr.

    1988-01-01

    This paper describes the development of an efficient algorithm for computing the product of a matrix and vector on a CYBER 205 vector computer. The desire to provide software which allows the user to choose between the often conflicting goals of minimizing central processing unit (CPU) time or storage requirements has led to a diagonal-based algorithm in which one of four types of storage is selected for each diagonal. The candidate storage types were chosen to be efficient on the CYBER 205 for diagonals whose nonzero structure is dense, moderately sparse, very sparse and short, or very sparse and long; however, for many densities, no storage type is most efficient with respect to both resource requirements, and a trade-off must be made. For each diagonal, an initialization subroutine estimates the CPU time and storage required for each storage type, based on results from previously performed numerical experimentation. These requirements are adjusted by weights provided by the user which reflect the relative importance the user places on the two resources. The adjusted resource requirements are then compared to select the most efficient storage and computational scheme.
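
    As a rough NumPy sketch of the diagonal-based product (the original is CYBER 205 vector code; the diags/offsets layout here is an assumed DIA-like format chosen for clarity):

      import numpy as np

      def matvec_by_diagonals(diags, offsets, x):
          # diags: one 1-D array per stored diagonal of A
          # offsets: offset k stores the elements A[i, i+k]
          n, y = len(x), np.zeros(len(x))
          for d, k in zip(diags, offsets):
              if k >= 0:
                  y[:n - k] += d * x[k:]    # superdiagonal: one long vector op
              else:
                  y[-k:] += d * x[:n + k]   # subdiagonal
          return y

    Each stored diagonal becomes a single long multiply-add, the kind of operation the CYBER 205 pipelines favored; the paper's scheme additionally switches among four storage types per diagonal according to its sparsity.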

  4. Efficient scatter model for simulation of ultrasound images from computed tomography data

    NASA Astrophysics Data System (ADS)

    D'Amato, J. P.; Lo Vercio, L.; Rubi, P.; Fernandez Vera, E.; Barbuzza, R.; Del Fresno, M.; Larrabide, I.

    2015-12-01

    Background and motivation: Real-time ultrasound simulation refers to the process of computationally creating fully synthetic ultrasound images instantly. Due to the high value of specialized low-cost training for healthcare professionals, there is a growing interest in the use of this technology and in the development of high-fidelity systems that simulate the acquisition of echographic images. The objective is to create an efficient and reproducible simulator that can run either on notebooks or desktops using low-cost devices. Materials and methods: We present an interactive ultrasound simulator based on CT data. The simulator is based on ray-casting and provides real-time interaction capabilities. The simulation of scattering that is coherent with the transducer position in real time is also introduced. Such noise is produced using a simplified model of multiplicative noise and convolution with point spread functions (PSF) tailored for this purpose. Results: The generation of scattering maps was revised for improved computational efficiency, allowing a more efficient simulation of coherent scattering in the synthetic echographic images while providing highly realistic results. We describe quality and performance metrics to validate these results; a performance of up to 55 fps was achieved. Conclusion: The proposed technique for real-time scattering modeling provides realistic yet computationally efficient scatter distributions. The error between the original image and the simulated scattering image was compared for the proposed method and the state of the art, showing negligible differences in its distribution.
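
    A minimal sketch of the multiplicative-noise-plus-PSF idea (the Gaussian PSF and all parameter values below are stand-in assumptions, not the paper's transducer-matched PSFs):

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def add_speckle(echo, noise_sigma=0.5, psf_sigma=(1.0, 3.0), seed=0):
          # echo: 2-D echogenicity map derived from the CT ray-casting
          rng = np.random.default_rng(seed)
          scatterers = 1.0 + noise_sigma * rng.standard_normal(echo.shape)
          speckled = echo * scatterers                       # multiplicative noise model
          return gaussian_filter(speckled, sigma=psf_sigma)  # PSF convolution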

  5. Efficient parallel algorithms for optical computing with the discrete Fourier transform (DFT) primitive

    NASA Astrophysics Data System (ADS)

    Reif, John H.; Tyagi, Akhilesh

    1997-10-01

    Optical-computing technology offers new challenges to algorithm designers since it can perform an n-point discrete Fourier transform (DFT) computation in only unit time. Note that the DFT is a nontrivial computation in the parallel random-access machine model, a model of computing commonly used by parallel-algorithm designers. We develop two new models, the DFT VLSIO (very-large-scale integrated optics) and the DFT circuit, to capture this characteristic of optical computing. We also provide two paradigms for developing parallel algorithms in these models. Efficient parallel algorithms for many problems, including polynomial and matrix computations, sorting, and string matching, are presented. The sorting and string-matching algorithms are particularly noteworthy. Almost all these algorithms are within a polylog factor of the optical-computing (VLSIO) lower bounds derived by Barakat and Reif [Appl. Opt. 26, 1015 (1987)] and by Tyagi and Reif [Proceedings of the Second IEEE Symposium on Parallel and Distributed Processing (Institute of Electrical and Electronics Engineers, New York, 1990), p. 14].

  6. Does computer-aided surgical simulation improve efficiency in bimaxillary orthognathic surgery?

    PubMed

    Schwartz, H C

    2014-05-01

    The purpose of this study was to compare the efficiency of bimaxillary orthognathic surgery using computer-aided surgical simulation (CASS) with that of cases planned using traditional methods. Total doctor time was used to measure efficiency. While costs vary widely in different localities and in different health schemes, time is a valuable and limited resource everywhere. For this reason, total doctor time is a more useful measure of efficiency than cost. Even though we use CASS primarily for planning more complex cases at the present time, this study showed an average saving of 60 min for each case. In the context of a department that performs 200 bimaxillary cases each year, this would represent a saving of 25 days of doctor time, if applied to every case. It is concluded that CASS offers great potential for improving efficiency when used in the planning of bimaxillary orthognathic surgery. It saves significant doctor time that can be applied to additional surgical work.

  7. Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.; Dyson, Rodger W.

    1999-01-01

    The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time-dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems are being done to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that

  8. Unified commutation-pruning technique for efficient computation of composite DFTs

    NASA Astrophysics Data System (ADS)

    Castro-Palazuelos, David E.; Medina-Melendrez, Modesto Gpe.; Torres-Roman, Deni L.; Shkvarko, Yuriy V.

    2015-12-01

    An efficient computation of a composite-length discrete Fourier transform (DFT), as well as a fast Fourier transform (FFT), of both time and space data sequences in uncertain (non-sparse or sparse) computational scenarios requires specific processing algorithms. Traditional algorithms typically employ pruning methods without any commutations, which prevents them from attaining the potential computational efficiency. In this paper, we propose an alternative unified approach with automatic commutations between three computational modalities aimed at efficient computations of the pruned DFTs adapted for variable composite lengths of the non-sparse input-output data. The first modality is an implementation of the direct computation of a composite-length DFT, the second employs the second-order recursive filtering method, and the third performs the new pruned decomposed transform. The pruned decomposed transform algorithm performs the decimation in time or space (DIT) in the data acquisition domain and then decimation in frequency (DIF). The unified combination of these three algorithms is referred to as the DFTCOMM technique. By treating the allocation among all feasible commuting-pruning modalities as a combinational hypothesis-testing optimization problem, we find the globally optimal solution to the pruning problem: DFTCOMM always requires fewer or, at most, the same number of arithmetic operations than any of the competing pruning methods reported in the literature. Finally, we provide the comparison of DFTCOMM with the recently developed sparse fast Fourier transform (SFFT) algorithmic family. We feature that, in the sensing scenarios with
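
    The benefit of output pruning is easy to see in isolation: when only a few DFT bins are needed, computing them directly beats a full FFT. A toy sketch of that single ingredient (DFTCOMM itself additionally commutes among the three modalities described above, which this does not show):

      import numpy as np

      def pruned_dft(x, bins):
          # Direct evaluation of selected DFT bins: O(N) per bin, cheaper
          # than a full O(N log N) FFT when len(bins) is small.
          N = len(x)
          n = np.arange(N)
          return np.array([x @ np.exp(-2j * np.pi * k * n / N) for k in bins])

      x = np.random.randn(4096)
      assert np.allclose(pruned_dft(x, [3, 17]), np.fft.fft(x)[[3, 17]])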

  9. Efficient O(N) recursive computation of the operational space inertial matrix

    SciTech Connect

    Lilly, K.W.; Orin, D.E.

    1993-09-01

    The operational space inertia matrix Λ reflects the dynamic properties of a robot manipulator to its tip. In the control domain, it may be used to decouple force and/or motion control about the manipulator workspace axes. The matrix Λ also plays an important role in the development of efficient algorithms for the dynamic simulation of closed-chain robotic mechanisms, including simple closed-chain mechanisms such as multiple manipulator systems and walking machines. The traditional approach used to compute Λ has a computational complexity of O(N³) for an N-degree-of-freedom manipulator. This paper presents the development of a recursive algorithm for computing the operational space inertia matrix (OSIM) that reduces the computational complexity to O(N). This algorithm, the inertia propagation method, is based on a single recursion that begins at the base of the manipulator and progresses out to the last link. Also applicable to redundant systems and mechanisms with multiple-degree-of-freedom joints, the inertia propagation method is the most efficient method known for computing Λ for N ≥ 6. The numerical accuracy of the algorithm is discussed for a PUMA 560 robot with a fixed base.

  10. An efficient parallel implementation of explicit multirate Runge–Kutta schemes for discontinuous Galerkin computations

    SciTech Connect

    Seny, Bruno; Lambrechts, Jonathan; Toulorge, Thomas; Legat, Vincent; Remacle, Jean-François

    2014-01-01

    Although explicit time integration schemes require small computational efforts per time step, their efficiency is severely restricted by their stability limits. Indeed, the multi-scale nature of some physical processes combined with highly unstructured meshes can lead some elements to impose a severely small stable time step for a global problem. Multirate methods offer a way to increase the global efficiency by gathering grid cells in appropriate groups under local stability conditions. These methods are well suited to the discontinuous Galerkin framework. The parallelization of the multirate strategy is challenging because grid cells have different workloads. The computational cost is different for each sub-time step depending on the elements involved and a classical partitioning strategy is not adequate any more. In this paper, we propose a solution that makes use of multi-constraint mesh partitioning. It tends to minimize the inter-processor communications, while ensuring that the workload is almost equally shared by every computer core at every stage of the algorithm. Particular attention is given to the simplicity of the parallel multirate algorithm while minimizing computational and communication overheads. Our implementation makes use of the MeTiS library for mesh partitioning and the Message Passing Interface for inter-processor communication. Performance analyses for two and three-dimensional practical applications confirm that multirate methods preserve important computational advantages of explicit methods up to a significant number of processors.

  11. Comprehensive BRL-CAD Primitive Database

    DTIC Science & Technology

    2015-03-01

    corrected by taking into account the sampling rate. Subject terms: BRL-CAD, primitives, CSG, rtweight, rtarea, hypersampling, raytracer. Compared with other approaches, such as polygonal mesh modeling, CSG not only decreases the file size but also increases the speed of the raytracer, the tool BRL-CAD uses to render images. CSG also increases the speed with which the raytracer calculates information about the primitives, such as their weight and thermal

  12. Phase diagrams and dynamics of a computationally efficient map-based neuron model

    PubMed Central

    Gonsalves, Jheniffer J.; Tragtenberg, Marcelo H. R.

    2017-01-01

    We introduce a new map-based neuron model derived from the dynamical perceptron family that offers the best compromise between computational efficiency, analytical tractability, a reduced parameter space, and many dynamical behaviors. We calculate bifurcation and phase diagrams, analytically and computationally, that underpin a rich repertoire of autonomous and excitable dynamical behaviors. We report the existence of a new regime of cardiac spikes corresponding to nonchaotic aperiodic behavior. We compare the features of our model to standard neuron models currently available in the literature. PMID:28358843
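
    For orientation, a minimal sketch of a three-variable map-based neuron of the KT/KTz family from which such models descend (the tanh variant, with illustrative parameter values; this is an assumption for flavor, not the paper's exact model, which is a logistic relative):

      import numpy as np

      def ktz_step(x, y, z, K=0.6, T=0.35, delta=0.001, lam=0.001, xR=-0.5, I=0.0):
          # x: fast (membrane-like) variable; y: recovery; z: slow current
          x_new = np.tanh((x - K * y + z + I) / T)
          y_new = x
          z_new = (1.0 - delta) * z - lam * (x - xR)
          return x_new, y_new, z_new

      x, y, z = -0.5, -0.5, 0.0
      trace = []
      for t in range(5000):            # iterate the map; spikes appear in x
          x, y, z = ktz_step(x, y, z)
          trace.append(x)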

  13. Efficient computation of stress and load distribution for external cylindrical gears

    SciTech Connect

    Zhang, J.J.; Esat, I.I.; Shi, Y.H.

    1996-12-31

    It is widely recognized that tooth flank correction is an effective technique for improving the load-carrying capacity and running behavior of gears. However, the existing analytical methods for load distribution are not very satisfactory: they are either too simplified to produce accurate results or computationally too expensive. In this paper, we propose a new approach which computes the load and stress distribution of external involute gears efficiently and accurately. It adopts the "thin-slice" model and a 2D FEA technique and takes into account the varying meshing stiffness.

  14. Do Computers Improve the Drawing of a Geometrical Figure for 10 Year-Old Children?

    ERIC Educational Resources Information Center

    Martin, Perrine; Velay, Jean-Luc

    2012-01-01

    Nowadays, computer-aided design (CAD) is widely used by designers. Would children learn to draw more easily and more efficiently if they were taught with computerised tools? To answer this question, we designed an experiment to compare two methods for children to produce the same drawing: the classical "pen and paper" method and a CAD…

  15. Efficient path-based computations on pedigree graphs with compact encodings.

    PubMed

    Yang, Lei; Cheng, En; Özsoyoğlu, Z Meral

    2012-03-21

    A pedigree is a diagram of family relationships, and it is often used to determine the mode of inheritance (dominant, recessive, etc.) of genetic diseases. Along with rapidly growing knowledge of genetics and the accumulation of genealogy information, pedigree data are becoming increasingly important. In large pedigree graphs, path-based methods for efficiently computing genealogical measurements, such as inbreeding and kinship coefficients of individuals, depend on efficient identification and processing of paths. In this paper, we propose a new compact path-encoding scheme for large pedigrees, accompanied by an efficient algorithm for identifying paths. We demonstrate the utility of our proposed method by applying it to the inbreeding coefficient computation. We present time and space complexity analysis, and demonstrate the efficiency of our method for evaluating inbreeding coefficients, as compared to previous methods, with experimental results using pedigree graphs with real and synthetic data. Both theoretical and experimental results demonstrate that our method is more scalable and efficient than previous methods in terms of time and space requirements.
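
    The genealogical measurement in question is classically computed from paths via Wright's formula, F_X = Σ (1/2)^(n_s + n_d + 1) (1 + F_A), summed over common ancestors A and non-overlapping path pairs. A tiny sketch (the tuple encoding of paths is an illustrative assumption, not the paper's compact encoding):

      def inbreeding_from_paths(paths):
          # paths: (n_sire, n_dam, F_A) per common ancestor A, where n_sire
          # and n_dam count the edges from sire/dam up to A
          return sum(0.5 ** (ns + nd + 1) * (1.0 + fa) for ns, nd, fa in paths)

      # Offspring of half siblings sharing one non-inbred parent: a single
      # path with n_sire = n_dam = 1 gives F = (1/2)^3 = 0.125
      print(inbreeding_from_paths([(1, 1, 0.0)]))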

  16. Information fusion for diabetic retinopathy CAD in digital color fundus photographs.

    PubMed

    Niemeijer, Meindert; Abramoff, Michael D; van Ginneken, Bram

    2009-05-01

    The purpose of computer-aided detection or diagnosis (CAD) technology has so far been to serve as a second reader. If, however, all relevant lesions in an image can be detected by CAD algorithms, use of CAD for automatic reading or prescreening may become feasible. This work addresses the question of how to fuse information from multiple CAD algorithms, operating on the multiple images that comprise an exam, to determine a likelihood that the exam is normal and would not require further inspection by human operators. We focus on retinal image screening for diabetic retinopathy, a common complication of diabetes. Current CAD systems are not designed to automatically evaluate complete exams consisting of multiple images for which several detection algorithm output sets are available. Information fusion will potentially play a crucial role in enabling the application of CAD technology to the automatic screening problem. Several different fusion methods are proposed and their effect on the performance of a complete comprehensive automatic diabetic retinopathy screening system is evaluated. Experiments show that the choice of fusion method can have a large impact on system performance. The complete system was evaluated on a set of 15,000 exams (60,000 images). The best-performing fusion method obtained an area under the receiver operating characteristic curve of 0.881. This indicates that automated prescreening could be applied in diabetic retinopathy screening programs.
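
    One plausible fusion rule of the kind being compared, sketched under stated assumptions (the max-then-min combination and all names are illustrative; the paper evaluates several alternatives):

      import numpy as np

      def exam_normality_score(lesion_probs_per_image):
          # lesion_probs_per_image: one array of detector outputs per image
          per_image = [1.0 - (np.max(p) if len(p) else 0.0)
                       for p in lesion_probs_per_image]
          # one suspicious image should flag the whole exam, so fuse the
          # image scores with the minimum rather than the mean
          return float(np.min(per_image))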

  17. Investigating the effects of majority voting on CAD systems: a LIDC case study

    NASA Astrophysics Data System (ADS)

    Carrazza, Miguel; Kennedy, Brendan; Rasin, Alexander; Furst, Jacob; Raicu, Daniela

    2016-03-01

    Computer-Aided Diagnosis (CAD) systems can provide a second opinion for either identifying suspicious regions on a medical image or predicting the degree of malignancy for a detected suspicious region. To develop a predictive model, CAD systems are trained on low-level image features extracted from image data and the class labels acquired through radiologists' interpretations or a gold standard (e.g., a biopsy). While the opinion of an expert radiologist is still an estimate of the answer, the ground truth may be extremely expensive to acquire. In such cases, CAD systems are trained on input data that contains multiple expert opinions per case with the expectation that the aggregate of labels will closely approximate the ground truth. Using multiple labels to solve this problem has its own challenges because of the inherent label uncertainty introduced by the variability in the radiologists' interpretations. Most CAD systems use majority voting (e.g., average, mode) to handle label uncertainty. This paper investigates the effects that majority voting can have on a CAD system by classifying and analyzing different semantic characteristics supplied with the Lung Image Database Consortium (LIDC) dataset. Using a decision tree based iterative predictive model, we show that majority voting with labels that exhibit certain types of skewed distribution can have a significant negative impact on the performance of a CAD system; therefore, alternative strategies for label integration are required when handling multiple interpretations.
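
    A toy illustration of the skew problem (the ratings are invented): with three readers at one end of the scale and a dissenter at the other, the two common voting rules disagree, and neither aggregated label carries the disagreement forward to the classifier.

      import numpy as np
      from collections import Counter

      ratings = np.array([1, 1, 1, 5])   # malignancy ratings from four readers
      mode_label = Counter(ratings.tolist()).most_common(1)[0][0]   # -> 1
      mean_label = float(ratings.mean())                            # -> 2.0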

  18. Diagnostic performance of radiologists with and without different CAD systems for mammography

    NASA Astrophysics Data System (ADS)

    Lauria, Adele; Fantacci, Maria E.; Bottigli, Ubaldo; Delogu, Pasquale; Fauci, Francesco; Golosio, Bruno; Indovina, Pietro L.; Masala, Giovanni L.; Oliva, Piernicola; Palmiero, Rosa; Raso, Giuseppe; Stumbo, Simone; Tangaro, Sabina

    2003-05-01

    The purpose of this study is the evaluation of the variation in performance, in terms of sensitivity and specificity, of two radiologists with different experience in mammography, with and without the assistance of two different CAD systems. The CAD systems considered are SecondLookTM (CADx Medical Systems, Canada) and CALMA (Computer Assisted Library in MAmmography). The first is a commercial system; the other is the result of a research project supported by INFN (Istituto Nazionale di Fisica Nucleare, Italy); their characteristics have already been reported in the literature. To compare the results with and without these tools, a dataset composed of 70 images of patients with cancer (biopsy proven) and 120 images of healthy breasts (with a three-year follow-up) was collected. All the images were digitized and analysed by the two CAD systems; then two radiologists, with respectively 6 and 2 years of experience in mammography, independently made their diagnoses without, and then with, the support of the two CAD systems. In this work the variation in sensitivity and specificity and the area Az under the ROC curve are reported. The results show that the use of a CAD allows for a substantial increment in sensitivity and a less pronounced decrement in specificity. The extent of these effects depends on the experience of the readers and is comparable for the two CAD systems considered.

  19. Convergence Acceleration of a Navier-Stokes Solver for Efficient Static Aeroelastic Computations

    NASA Technical Reports Server (NTRS)

    Obayashi, Shigeru; Guruswamy, Guru P.

    1995-01-01

    New capabilities have been developed for a Navier-Stokes solver to perform steady-state simulations more efficiently. The flow solver for solving the Navier-Stokes equations is based on a combination of the lower-upper factored symmetric Gauss-Seidel implicit method and the modified Harten-Lax-van Leer-Einfeldt upwind scheme. A numerically stable and efficient pseudo-time-marching method is also developed for computing steady flows over flexible wings. Results are demonstrated for transonic flows over rigid and flexible wings.

  20. Efficient solid state NMR powder simulations using SMP and MPP parallel computation

    NASA Astrophysics Data System (ADS)

    Kristensen, Jørgen Holm; Farnan, Ian

    2003-04-01

    Methods for parallel simulation of solid state NMR powder spectra are presented for both shared and distributed memory parallel supercomputers. For shared memory architectures the performance of simulation programs implementing the OpenMP application programming interface is evaluated. It is demonstrated that the design of correct and efficient shared memory parallel programs is difficult as the performance depends on data locality and cache memory effects. The distributed memory parallel programming model is examined for simulation programs using the MPI message passing interface. The results reveal that both shared and distributed memory parallel computation are very efficient with an almost perfect application speedup and may be applied to the most advanced powder simulations.

  1. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    NASA Astrophysics Data System (ADS)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems, conventional methods for inverse modeling can be computationally expensive because the number of measurements is often large and the model parameters are also numerous. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2-D and a random hydraulic conductivity field in 3-D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multicore computational environment. Therefore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate to large-scale problems.
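
    The paper's implementation is in Julia; purely to illustrate the damped-Krylov ingredient, here is a SciPy sketch of a single Levenberg-Marquardt step solved with LSQR instead of a direct QR/SVD inversion (subspace recycling across damping parameters, the paper's second trick, is not shown):

      import numpy as np
      from scipy.sparse.linalg import lsqr

      def lm_step(J, r, damping):
          # Solves min ||J dx + r||^2 + damping^2 ||dx||^2 by Krylov
          # iteration, never forming J^T J explicitly.
          return lsqr(J, -r, damp=damping)[0]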

  2. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    SciTech Connect

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems, conventional methods for inverse modeling can be computationally expensive because the number of measurements is often large and the model parameters are also numerous. We have developed a new, computationally-efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.

  3. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    DOE PAGES

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems, conventional methods for inverse modeling can be computationally expensive because the number of measurements is often large and the model parameters are also numerous. We have developed a new, computationally-efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.

  4. CAD-based Automatic Modeling Method for Geant4 geometry model Through MCAM

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Nie, Fanzhi; Wang, Guozhong; Long, Pengcheng; LV, Zhongliang

    2014-06-01

    Geant4 is a widely used Monte Carlo transport simulation package. Before a Geant4 calculation, the geometry model must be established, described either in the Geometry Description Markup Language (GDML) or in C++. However, describing models manually in GDML is time-consuming and error-prone. Automatic modeling methods have been developed recently, but most existing programs have problems: some are not accurate, or are adapted only to specific CAD formats. To convert CAD models into GDML accurately, a Geant4 computer-aided design (CAD) based modeling method was developed for automatically converting complex CAD geometry models into GDML geometry models. The essence of this method is the mapping between the CAD model, represented by boundary representation (B-REP), and the GDML model, represented by constructive solid geometry (CSG). First, the CAD model is decomposed into several simple solids, each having only one closed shell. Each simple solid is then decomposed into a set of convex shells. Corresponding GDML convex basic solids are then generated from the boundary surfaces obtained from the topological characteristics of each convex shell. After the generation of these solids, the GDML model is assembled through a series of Boolean operations. This method was adopted in the CAD/Image-based Automatic Modeling Program for Neutronics & Radiation Transport (MCAM) and tested with several models, including the examples in the Geant4 installation package. The results showed that this method can convert standard CAD models accurately and can be used for automatic Geant4 modeling.

  5. A Power Efficient Exaflop Computer Design for Global Cloud System Resolving Climate Models.

    NASA Astrophysics Data System (ADS)

    Wehner, M. F.; Oliker, L.; Shalf, J.

    2008-12-01

    Exascale computers would allow routine ensemble modeling of the global climate system at the cloud system resolving scale. Power and cost requirements of traditional architecture systems are likely to delay such capability for many years. We present an alternative route to the exascale using embedded processor technology to design a system optimized for ultra high resolution climate modeling. These power efficient processors, used in consumer electronic devices such as mobile phones, portable music players, cameras, etc., can be tailored to the specific needs of scientific computing. We project that a system capable of integrating a kilometer scale climate model a thousand times faster than real time could be designed and built in a five year time scale for US$75M with a power consumption of 3MW. This is cheaper, more power efficient and sooner than any other existing technology.

  6. Step-by-step magic state encoding for efficient fault-tolerant quantum computation.

    PubMed

    Goto, Hayato

    2014-12-16

    Quantum error correction allows one to make quantum computers fault-tolerant against unavoidable errors due to decoherence and imperfect physical gate operations. However, fault-tolerant quantum computation requires impractically large computational resources for useful applications. This is currently a major obstacle to the realization of a quantum computer. In particular, magic state distillation, which is a standard approach to universality, consumes the most resources in fault-tolerant quantum computation. To address this resource problem, here we propose step-by-step magic state encoding for concatenated quantum codes, where magic states are encoded step by step from the physical level to the logical one. To manage errors during the encoding, we carefully use error detection. Since the sizes of intermediate codes are small, it is expected that the resource overheads will become lower than in previous approaches based on distillation at the logical level. Our simulation results suggest that the resource requirements for a logical magic state will become comparable to those for a single logical controlled-NOT gate. Thus, the present method opens a new possibility for efficient fault-tolerant quantum computation.

  7. Computationally Efficient Adaptive Beamformer for Ultrasound Imaging Based on QR Decomposition.

    PubMed

    Park, Jongin; Wi, Seok-Min; Lee, Jin S

    2016-02-01

    Adaptive beamforming methods for ultrasound imaging have been studied to improve image resolution and contrast. The most common approach is the minimum variance (MV) beamformer, which minimizes the power of the beamformed output while keeping the response from the direction of interest constant. The method achieves higher resolution and better contrast than the delay-and-sum (DAS) beamformer, but it suffers from high computational cost. This cost is mainly due to the computation of the spatial covariance matrix and its inverse, which requires O(L³) computations, where L denotes the subarray size. In this study, we propose a computationally efficient MV beamformer based on QR decomposition. The idea behind our approach is to transform the spatial covariance matrix into a scalar matrix σI, from which we subsequently obtain the apodization weights and the beamformed output without computing a matrix inverse. To do this, a QR decomposition algorithm, which can itself be executed at low cost, is used, and the computational complexity is therefore reduced to O(L²). In addition, our approach is mathematically equivalent to the conventional MV beamformer, thereby showing equivalent performance. The simulation and experimental results support the validity of our approach.
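
    A generic NumPy/SciPy sketch of obtaining MV weights through a QR factorization instead of an explicit covariance inverse (this shows the textbook QR route for context; the paper's specific σI transformation goes further and is not reproduced here; shapes and names are assumptions):

      import numpy as np
      from scipy.linalg import solve_triangular

      def mv_weights_qr(X, a):
          # X: L x K snapshot matrix (K >= L); a: length-L steering vector.
          # With X^H = Q R, we have X X^H = R^H R, so (X X^H)^{-1} a costs
          # two triangular solves instead of a matrix inversion.
          _, R = np.linalg.qr(X.conj().T, mode="reduced")
          u = solve_triangular(R.conj().T, a, lower=True)
          v = solve_triangular(R, u, lower=False)   # v proportional to R_cov^{-1} a
          return v / (a.conj() @ v)                 # unit response toward a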

  8. Efficient Transition Probability Computation for Continuous-Time Branching Processes via Compressed Sensing

    PubMed Central

    Xu, Jason; Minin, Vladimir N.

    2016-01-01

    Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes. PMID:26949377

  9. Efficient Probability of Failure Calculations for QMU using Computational Geometry LDRD 13-0144 Final Report

    SciTech Connect

    Mitchell, Scott A.; Ebeida, Mohamed Salah; Romero, Vicente J.; Swiler, Laura Painton; Rushdi, Ahmad A.; Abdelkader, Ahmad

    2015-09-01

    This SAND report summarizes our work on the Sandia National Laboratory LDRD project titled "Efficient Probability of Failure Calculations for QMU using Computational Geometry" which was project #165617 and proposal #13-0144. This report merely summarizes our work. Those interested in the technical details are encouraged to read the full published results, and contact the report authors for the status of the software and follow-on projects.

  10. A High Resolution, Three-Dimensional, Computationally Efficient, Diagnostic Wind Model: Initial Development Report

    DTIC Science & Technology

    2003-10-01

    reasonably accurate in representing the flow physics and computationally efficient. The basic framework of the model is discussed in this document. Basically, this version of the model takes about 20 to 30 times more CPU time to run, compared with the latest model version implemented with the

  11. Computer-Aided Design in Further Education.

    ERIC Educational Resources Information Center

    Ingham, Peter, Ed.

    This publication updates the 1982 occasional paper that was intended to foster staff awareness and assist colleges in Great Britain considering the use of computer-aided design (CAD) material in engineering courses. The paper begins by defining CAD and its place in the Integrated Business System with a brief discussion of the effect of CAD on the…

  12. Computer-aided detection of early interstitial lung diseases using low-dose CT images.

    PubMed

    Park, Sang Cheol; Tan, Jun; Wang, Xingwei; Lederman, Dror; Leader, Joseph K; Kim, Soo Hyung; Zheng, Bin

    2011-02-21

    This study aims to develop a new computer-aided detection (CAD) scheme to detect early interstitial lung disease (ILD) using low-dose computed tomography (CT) examinations. The CAD scheme classifies each pixel depicted on the segmented lung areas into positive or negative groups for ILD using a mesh-grid-based region growth method and a multi-feature-based artificial neural network (ANN). A genetic algorithm was applied to select the optimal image features and the ANN structure. In testing each CT examination, only pixels selected by the mesh-grid region growth method were analyzed and classified by the ANN, to improve computational efficiency. All unselected pixels were classified as negative for ILD. After classifying all pixels into the positive and negative groups, CAD computed a detection score based on the ratio of the number of positive pixels to all pixels in the segmented lung areas, which indicates the likelihood of the test case being positive for ILD. When applied to an independent testing dataset of 15 positive and 15 negative cases, the CAD scheme yielded an area under the receiver operating characteristic curve (AUC) of 0.884 ± 0.064 and 80.0% sensitivity at 85.7% specificity. The results demonstrated the feasibility of applying the CAD scheme to automatically detect early ILD using low-dose CT examinations.
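
    The case-level score itself is simple enough to state in a few lines; a sketch under the assumption that the pixel classifications arrive as boolean masks:

      import numpy as np

      def ild_detection_score(positive_mask, lung_mask):
          # Fraction of segmented-lung pixels classified positive for ILD,
          # used as the likelihood that the case is positive.
          lung = np.count_nonzero(lung_mask)
          pos = np.count_nonzero(positive_mask & lung_mask)
          return pos / lung if lung else 0.0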

  13. Computer-aided detection of early interstitial lung diseases using low-dose CT images

    NASA Astrophysics Data System (ADS)

    Park, Sang Cheol; Tan, Jun; Wang, Xingwei; Lederman, Dror; Leader, Joseph K.; Kim, Soo Hyung; Zheng, Bin

    2011-02-01

    This study aims to develop a new computer-aided detection (CAD) scheme to detect early interstitial lung disease (ILD) using low-dose computed tomography (CT) examinations. The CAD scheme classifies each pixel depicted on the segmented lung areas into positive or negative groups for ILD using a mesh-grid-based region growth method and a multi-feature-based artificial neural network (ANN). A genetic algorithm was applied to select the optimal image features and the ANN structure. In testing each CT examination, only pixels selected by the mesh-grid region growth method were analyzed and classified by the ANN, to improve computational efficiency. All unselected pixels were classified as negative for ILD. After classifying all pixels into the positive and negative groups, CAD computed a detection score based on the ratio of the number of positive pixels to all pixels in the segmented lung areas, which indicates the likelihood of the test case being positive for ILD. When applied to an independent testing dataset of 15 positive and 15 negative cases, the CAD scheme yielded an area under the receiver operating characteristic curve (AUC) of 0.884 ± 0.064 and 80.0% sensitivity at 85.7% specificity. The results demonstrated the feasibility of applying the CAD scheme to automatically detect early ILD using low-dose CT examinations.

  14. Web-based CAD and CAM for optomechatronics

    NASA Astrophysics Data System (ADS)

    Han, Min; Zhou, Hai-Guang

    2001-10-01

    CAD & CAM technologies are used throughout design and manufacturing processes and are receiving increasing attention from industry and education. We have been developing a new kind of software for web-based CAD & CAM courses; it can be used either in industry or in training, and it runs in IE. First, we target CAD/CAM for optomechatronics: we have developed a CAD/CAM package that covers not only mechanics but also optics and electronics, a new kind of software in China. Second, we have developed web-course software for CAD & CAM that introduces the basics of CAD, the commands of CAD, programming, CAD/CAM for optomechatronics, and the joint application of CAD & CAM. We introduce the functions of MasterCAM and show the whole CAD/CAM/CNC process through examples; following the steps shown on the web, the trainee cannot go astray. CAD & CAM are widely used in many areas, and the development of web courses on CAD & CAM is necessary for long-distance education and public education. In 1992, China declared that CAD technique, as an important part of electronic technology, is a key new technique for improving the national economy and the modernization of national defence. So far, education in CAD & CAM in China has mainly involved the manufacturing industry, but with the rapid development of new technology, especially in optics and electronics, CAD & CAM will receive more attention from those areas.

  15. An efficient FPGA architecture for integer Nth root computation

    NASA Astrophysics Data System (ADS)

    Rangel-Valdez, Nelson; Barron-Zambrano, Jose Hugo; Torres-Huitzil, Cesar; Torres-Jimenez, Jose

    2015-10-01

    In embedded computing, it is common to find applications such as signal processing, image processing, computer graphics or data compression that might benefit from hardware implementation of the computation of integer roots of order N. However, the scientific literature lacks architectural designs that implement such operations for different values of N using a low amount of resources. This article presents a parameterisable field programmable gate array (FPGA) architecture for an efficient Nth root calculator that uses only adders/subtractors and N location memory elements. The architecture was tested for different values of N, using 64-bit number representation. The results show a consumption of up to 10% of the logical resources of a Xilinx XC6SLX45-CSG324C device, depending on the value of N. The hardware implementation improved the performance of its corresponding software implementations by one order of magnitude. The architecture performance varies from several thousand to seven million root operations per second.
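
    In software, the same operation reduces to an integer search; a minimal sketch (using exponentiation for clarity, whereas the FPGA design is built from adders/subtractors):

      def integer_nth_root(x, n):
          # Largest r with r**n <= x, for x >= 0 and n >= 1.
          lo, hi = 0, 1
          while hi ** n <= x:          # exponential search for an upper bound
              hi <<= 1
          while lo < hi - 1:           # binary search in (lo, hi)
              mid = (lo + hi) >> 1
              if mid ** n <= x:
                  lo = mid
              else:
                  hi = mid
          return lo

      assert integer_nth_root(10 ** 18, 3) == 10 ** 6
      assert integer_nth_root(80, 4) == 2      # 3**4 = 81 just exceeds 80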

  16. Efficient Solvability of Hamiltonians and Limits on the Power of Some Quantum Computational Models

    NASA Astrophysics Data System (ADS)

    Somma, Rolando; Barnum, Howard; Ortiz, Gerardo; Knill, Emanuel

    2006-11-01

    One way to specify a model of quantum computing is to give a set of control Hamiltonians acting on a quantum state space whose initial state and final measurement are specified in terms of the Hamiltonians. We formalize such models and show that they can be simulated classically in a time polynomial in the dimension of the Lie algebra generated by the Hamiltonians and logarithmic in the dimension of the state space. This leads to a definition of Lie-algebraic “generalized mean-field Hamiltonians.” We show that they are efficiently (exactly) solvable. Our results generalize the known weakness of fermionic linear optics computation and give conditions on control needed to exploit the full power of quantum computing.

  17. Redundancy management for efficient fault recovery in NASA's distributed computing system

    NASA Technical Reports Server (NTRS)

    Malek, Miroslaw; Pandya, Mihir; Yau, Kitty

    1991-01-01

    The management of redundancy in computer systems was studied and guidelines were provided for the development of NASA's fault-tolerant distributed systems. Fault recovery and reconfiguration mechanisms were examined. A theoretical foundation was laid for redundancy management by efficient reconfiguration methods and algorithmic diversity. Algorithms were developed to optimize the resources for embedding of computational graphs of tasks in the system architecture and reconfiguration of these tasks after a failure has occurred. The computational structure represented by a path and the complete binary tree was considered and the mesh and hypercube architectures were targeted for their embeddings. The innovative concept of Hybrid Algorithm Technique was introduced. This new technique provides a mechanism for obtaining fault tolerance while exhibiting improved performance.

  18. Random generation of periodic hard ellipsoids based on molecular dynamics: A computationally-efficient algorithm

    NASA Astrophysics Data System (ADS)

    Ghossein, Elias; Lévesque, Martin

    2013-11-01

    This paper presents a computationally-efficient algorithm for generating random periodic packings of hard ellipsoids. The algorithm is based on molecular dynamics, where the ellipsoids are set in translational and rotational motion and their volumes gradually increase. Binary collision times are computed by simply finding the roots of a non-linear function. In addition, an original and efficient method to compute the collision time between an ellipsoid and a cube face is proposed. The algorithm can generate all types of ellipsoids (prolate, oblate and scalene) with very high aspect ratios (i.e., >10). It is the first time that such packings have been reported in the literature. Orientation tensors were computed for the generated packings, and it has been shown that the ellipsoids had a uniform distribution of orientations. Moreover, it seems that for low aspect ratios (i.e., ⩽10) the volume fraction is the most influential parameter on the algorithm's CPU time; for higher aspect ratios, the aspect ratio itself becomes as important as the volume fraction. All necessary pseudo-codes are given so that the reader can easily implement the algorithm.
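
    The root-finding strategy is easy to illustrate with growing spheres standing in for ellipsoids (an intentional simplification: for actual ellipsoids the contact function involves orientations, but the pattern of bracketing a sign change and handing it to a scalar root finder is the same; all names and the growth rate are assumptions):

      import numpy as np
      from scipy.optimize import brentq

      def collision_time(r1, v1, R1, r2, v2, R2, growth, t_max):
          # gap > 0 while separated; a sign change brackets a contact time
          def gap(t):
              d = (r1 + t * v1) - (r2 + t * v2)
              return d @ d - ((1.0 + growth * t) * (R1 + R2)) ** 2
          if gap(0.0) <= 0.0 or gap(t_max) > 0.0:
              return None          # already overlapping, or no contact in window
          return brentq(gap, 0.0, t_max)   # sketch only: production code would
                                           # bracket the first crossing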

  19. CAD/CAM and scientific data management at Dassault

    NASA Technical Reports Server (NTRS)

    Bohn, P.

    1984-01-01

    The history of CAD/CAM and scientific data management at Dassault is presented. Emphasis is put on the targets of the now commercially available software CATIA. The links with scientific computations such as aerodynamics and structural analysis are presented. Comments are made on the principles followed within the company. The consequences of the approximate nature of scientific data are examined. The main consequence of the new history function is its protection against copying or alteration. Future plans at Dassault for scientific data appear to run in the opposite direction to some general tendencies.

  20. Computer-aided design for metabolic engineering.

    PubMed

    Fernández-Castané, Alfred; Fehér, Tamás; Carbonell, Pablo; Pauthenier, Cyrille; Faulon, Jean-Loup

    2014-12-20

    The development and application of biotechnology-based strategies has had a great socio-economic impact and is likely to play a crucial role in the foundation of more sustainable and efficient industrial processes. Within biotechnology, metabolic engineering aims at the directed improvement of cellular properties, often with the goal of synthesizing a target chemical compound. The use of computer-aided design (CAD) tools, along with continuously emerging advanced genetic engineering techniques, has allowed metabolic engineering to broaden and streamline the process of heterologous compound production. In this work, we review the CAD tools available for metabolic engineering with an emphasis on retrosynthesis methodologies. Recent advances in genetic engineering strategies for pathway implementation and optimization are also reviewed, as well as a range of bioanalytical tools to validate in silico predictions. A case study applying retrosynthesis is presented as an experimental verification of the output from RetroPath, the first complete automated computational pipeline applicable to metabolic engineering. Applying this CAD pipeline, together with genetic reassembly and optimization of culture conditions, led to improved production of the plant flavonoid pinocembrin. Coupling CAD tools with advanced genetic engineering strategies and bioprocess optimization is crucial for enhanced product yields and will be of great value for the development of non-natural products through sustainable biotechnological processes.

  1. Different CAD/CAM-processing routes for zirconia restorations: influence on fitting accuracy.

    PubMed

    Kohorst, Philipp; Junghanns, Janet; Dittmer, Marc P; Borchers, Lothar; Stiesch, Meike

    2011-08-01

    The aim of the present in vitro study was to evaluate the influence of different processing routes on the fitting accuracy of four-unit zirconia fixed dental prostheses (FDPs) fabricated by computer-aided design/computer-aided manufacturing (CAD/CAM). Three groups of zirconia frameworks with ten specimens each were fabricated. Frameworks of one group (CerconCAM) were produced by means of a laboratory CAM-only system. The other frameworks were made with different CAD/CAM systems; on the one hand by in-laboratory production (CerconCAD/CAM) and on the other hand by centralized production in a milling center (Compartis) after forwarding geometrical data. Frameworks were then veneered with the recommended ceramics, and marginal accuracy was determined using a replica technique. Horizontal marginal discrepancy, vertical marginal discrepancy, absolute marginal discrepancy, and marginal gap were evaluated. Statistical analyses were performed by one-way analysis of variance (ANOVA), with the level of significance chosen at 0.05. Mean horizontal discrepancies ranged between 22 μm (CerconCAM) and 58 μm (Compartis), vertical discrepancies ranged between 63 μm (CerconCAD/CAM) and 162 μm (CerconCAM), and absolute marginal discrepancies ranged between 94 μm (CerconCAD/CAM) and 181 μm (CerconCAM). The marginal gap varied between 72 μm (CerconCAD/CAM) and 112 μm (CerconCAM, Compartis). Statistical analysis revealed that, with all measurements, the marginal accuracy of the zirconia FDPs was significantly influenced by the processing route used (p < 0.05). Within the limitations of this study, all restorations showed a clinically acceptable marginal accuracy; however, the results suggest that the CAD/CAM systems are more precise than the CAM-only system for the manufacture of four-unit FDPs.

  2. From Artisanal to CAD-CAM Blocks: State of the Art of Indirect Composites.

    PubMed

    Mainjot, A K; Dupont, N M; Oudkerk, J C; Dewael, T Y; Sadoun, M J

    2016-05-01

    Indirect composites have been undergoing an impressive evolution over the last few years. Specifically, recent developments in computer-aided design-computer-aided manufacturing (CAD-CAM) blocks have been associated with new polymerization modes, innovative microstructures, and different compositions. All these recent breakthroughs have introduced important gaps among the properties of the different materials. This critical state-of-the-art review analyzes the strengths and weaknesses of the different varieties of CAD-CAM composite materials, especially as compared with direct and artisanal indirect composites. Indeed, new polymerization modes used for CAD-CAM blocks-especially high temperature (HT) and, most of all, high temperature-high pressure (HT-HP)-are shown to significantly increase the degree of conversion in comparison with light-cured composites. Industrial processes also allow for the augmentation of the filler content and for the realization of more homogeneous structures with fewer flaws. In addition, due to their increased degree of conversion and their different monomer composition, some CAD-CAM blocks are more advantageous in terms of toxicity and monomer release. Finally, materials with a polymer-infiltrated ceramic network (PICN) microstructure exhibit higher flexural strength and a more favorable elasticity modulus than materials with a dispersed filler microstructure. Consequently, some high-performance composite CAD-CAM blocks-particularly experimental PICNs-can now rival glass-ceramics, such as lithium-disilicate glass-ceramics, for use as bonded partial restorations and crowns on natural teeth and implants. Being able to be manufactured in very low thicknesses, they offer the possibility of developing innovative minimally invasive treatment strategies, such as "no prep" treatment of worn dentition. Current issues are related to the study of bonding and wear properties of the different varieties of CAD-CAM composites. There is also a crucial

  3. Fabricating CAD/CAM Implant-Retained Mandibular Bar Overdentures: A Clinical and Technical Overview

    PubMed Central

    Tan, Keson Beng Choon

    2017-01-01

    This report describes the clinical and technical aspects of the oral rehabilitation of an edentulous patient with a knife-edge ridge in the mandibular anterior edentulous region, using implant-retained overdentures. The application of computer-aided design and computer-aided manufacturing (CAD/CAM) in the fabrication of the overdenture framework simplifies the laboratory workflow for implant prostheses. The Nobel Procera CAD/CAM System was utilised to produce a lightweight titanium overdenture bar with locator attachments. It is proposed that the digital workflow of a CAD/CAM-milled implant overdenture bar avoids numerous technical steps, and the possibility of casting errors, involved in the conventional casting of such bars.

  4. Performance evaluation for volumetric segmentation of multiple sclerosis lesions using MATLAB and computing engine in the graphical processing unit (GPU)

    NASA Astrophysics Data System (ADS)

    Le, Anh H.; Park, Young W.; Ma, Kevin; Jacobs, Colin; Liu, Brent J.

    2010-03-01

    Multiple Sclerosis (MS) is a progressive neurological disease affecting myelin pathways in the brain. Multiple lesions in the white matter can cause paralysis and severe motor disabilities of the affected patient. To address the inconsistency and user-dependency of manual lesion measurement on MRI, we have proposed a 3-D automated lesion quantification algorithm to enable objective and efficient lesion volume tracking. The computer-aided detection (CAD) of MS, written in MATLAB, utilizes the K-Nearest Neighbors (KNN) method to compute the probability of lesions on a per-voxel basis. Despite the highly optimized image-processing algorithms used in CAD development, MS CAD integration and evaluation in clinical workflow is technically challenging due to the requirement of high computation rates and memory bandwidth in the recursive nature of the algorithm. In this paper, we present the development and evaluation of using a computing engine in the graphical processing unit (GPU) with MATLAB for segmentation of MS lesions. The paper investigates the utilization of a high-end GPU for parallel computing of KNN in the MATLAB environment to improve algorithm performance. The integration is accomplished using NVIDIA's CUDA developmental toolkit for MATLAB. The results of this study will validate the practicality and effectiveness of the prototype MS CAD in a clinical setting. The GPU method may allow MS CAD to integrate rapidly into an electronic patient record or any disease-centric health care system.
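
    As a rough illustration of the per-voxel KNN step described above, the sketch below (Python, assuming scikit-learn is available) classifies toy voxel feature vectors and reshapes the lesion probabilities into a probability map. The features, neighbour count, and threshold are illustrative stand-ins, not the authors' MATLAB/CUDA implementation.

```python
# Minimal sketch of per-voxel KNN lesion probability, assuming scikit-learn.
# The original CAD ran KNN in MATLAB with GPU (CUDA) acceleration; this
# illustrates the algorithm only, not the authors' implementation.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Hypothetical training data: one feature vector per voxel
# (e.g. T1, T2, FLAIR intensities) with lesion / non-lesion labels.
X_train = rng.normal(size=(5000, 3))
y_train = (X_train.sum(axis=1) > 2.0).astype(int)

knn = KNeighborsClassifier(n_neighbors=15)
knn.fit(X_train, y_train)

# Classify every voxel of a (toy) volume in one vectorized call.
volume_features = rng.normal(size=(64 * 64 * 32, 3))
lesion_prob = knn.predict_proba(volume_features)[:, 1].reshape(64, 64, 32)

# Threshold the probability map to obtain a candidate lesion mask.
lesion_mask = lesion_prob > 0.5
print(lesion_mask.sum(), "candidate lesion voxels")
```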

  5. Computationally-Efficient Minimum-Time Aircraft Routes in the Presence of Winds

    NASA Technical Reports Server (NTRS)

    Jardin, Matthew R.

    2004-01-01

    A computationally efficient algorithm for minimizing the flight time of an aircraft in a variable wind field has been invented. The algorithm, referred to as Neighboring Optimal Wind Routing (NOWR), is based upon neighboring-optimal-control (NOC) concepts and achieves minimum-time paths by adjusting aircraft heading according to wind conditions at an arbitrary number of wind measurement points along the flight route. The NOWR algorithm may either be used in a fast-time mode to compute minimum-time routes prior to flight, or may be used in a feedback mode to adjust aircraft heading in real time. By traveling minimum-time routes instead of great-circle (direct) routes, flights across the United States can save an average of about 7 minutes, and as much as one hour of flight time during periods of strong jet-stream winds. The neighboring optimal routes computed via the NOWR technique have been shown to be within 1.5 percent of the absolute minimum-time routes for flights across the continental United States. On a typical 450-MHz Sun Ultra workstation, the NOWR algorithm produces complete minimum-time routes in less than 40 milliseconds. This corresponds to a rate of 25 optimal routes per second. The closest comparable optimization technique runs approximately 10 times slower. Airlines currently use various trial-and-error search techniques to determine which of a set of commonly traveled routes will minimize flight time. These algorithms are too computationally expensive for use in real-time systems, or in systems where many optimal routes need to be computed in a short amount of time. Instead of operating in real time, airlines will typically plan a trajectory several hours in advance using wind forecasts. If winds change significantly from forecasts, the resulting flights will no longer be minimum-time. The need for a computationally efficient wind-optimal routing algorithm is even greater in the case of new air-traffic-control automation concepts. For air

  6. Automated knowledge base development from CAD/CAE databases

    NASA Technical Reports Server (NTRS)

    Wright, R. Glenn; Blanchard, Mary

    1988-01-01

    Knowledge base development requires a substantial investment in time, money, and resources in order to capture the knowledge and information necessary for anything other than trivial applications. This paper addresses a means to integrate the design and knowledge base development processes through automated knowledge base development from CAD/CAE databases and files. Benefits of this approach include a more efficient means of knowledge engineering, resulting in the timely creation of large knowledge-based systems that are inherently free of error.

  7. Efficient curve-skeleton computation for the analysis of biomedical 3d images - biomed 2010.

    PubMed

    Brun, Francesco; Dreossi, Diego

    2010-01-01

    Advances in three-dimensional (3D) biomedical imaging techniques, such as magnetic resonance (MR) and computed tomography (CT), make it easy to reconstruct high-quality 3D models of portions of the human body and other biological specimens. A major challenge lies in the quantitative analysis of the resulting models, which allows a more comprehensive characterization of the object under investigation. An interesting approach is based on curve-skeleton (or medial axis) extraction, which gives basic information concerning the topology and the geometry. Curve-skeletons have been applied in the analysis of vascular networks and the diagnosis of tracheal stenoses, as well as for computing 3D flight paths in virtual endoscopy. However, curve-skeleton computation is a crucial task. An effective skeletonization algorithm was introduced by N. Cornea in [1], but it lacks computational performance. Thanks to advances in imaging techniques, the resolution of 3D images is ever increasing, so efficient algorithms are needed to analyze significant Volumes of Interest (VOIs). In the present paper, an improved skeletonization algorithm based on the idea proposed in [1] is presented. A computational comparison between the original and the proposed method is also reported. The obtained results show that the proposed method yields a significant computational improvement, making the adoption of the skeleton representation in biomedical image analysis applications more appealing.
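
    For readers who want to experiment, a curve-skeleton of a 3D binary volume can be obtained with off-the-shelf thinning, as in the sketch below. Note that this uses scikit-image's 3-D thinning (Lee's method), a different algorithm from the potential-field approach of [1] discussed here, and assumes a recent scikit-image version.

```python
# Illustrative only: extracts a curve-skeleton from a 3D binary volume with
# scikit-image's 3-D thinning (Lee et al.), a different algorithm from the
# potential-field method of Cornea discussed in the abstract.
import numpy as np
from skimage.morphology import skeletonize  # older versions: skeletonize_3d

# Toy volume: a solid tube along the z-axis.
z, y, x = np.mgrid[0:64, 0:32, 0:32]
volume = (y - 16) ** 2 + (x - 16) ** 2 < 6 ** 2

skeleton = skeletonize(volume)  # recent scikit-image handles 3-D input
print("object voxels:", volume.sum(), "-> skeleton voxels:", skeleton.sum())
```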

  8. Computationally efficient characterization of potential energy surfaces based on fingerprint distances

    NASA Astrophysics Data System (ADS)

    Schaefer, Bastian; Goedecker, Stefan

    2016-07-01

    An analysis of the network defined by the potential energy minima of multi-atomic systems and their connectivity via reaction pathways that go through transition states allows us to understand important characteristics like thermodynamic, dynamic, and structural properties. Unfortunately, computing the transition states and reaction pathways in addition to the significant energetically low-lying local minima is a computationally demanding task. We here introduce a computationally efficient method that is based on a combination of the minima hopping global optimization method and the insight that uphill barriers tend to increase with increasing structural distance between the reactant and product states. This method allows us to replace the exact connectivity information and transition state energies with alternative and approximate concepts. Without adding any significant additional cost to the minima hopping global optimization approach, this method allows us to generate an approximate network of the minima, their connectivity, and a rough measure of the energy needed for their interconversion. This can be used to obtain a first qualitative picture of important physical and chemical properties by means of a disconnectivity graph analysis. Besides the physical insight obtained by such an analysis, the gained knowledge can be used to decide whether it is worthwhile to invest computational resources in an exact computation of the transition states and the reaction pathways. Furthermore, it is demonstrated that the method presented here can be used to find physically reasonable interconversion pathways that are promising inputs for methods like transition path sampling or discrete path sampling.
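
    The core bookkeeping of such an approximate network can be sketched compactly: connect minima whose fingerprint distance is below a cutoff and score each connection with a barrier estimate that grows with that distance. Everything in the sketch below (the fingerprints, the linear barrier model, the cutoff) is an illustrative assumption, not the authors' settings.

```python
# Sketch of the paper's key idea under simplifying assumptions: build an
# approximate network of minima in which the (unknown) uphill barrier between
# two minima is replaced by a monotone function of their fingerprint distance.
import numpy as np

rng = np.random.default_rng(1)
n_minima = 20
energies = rng.normal(size=n_minima)            # energies of local minima
fingerprints = rng.normal(size=(n_minima, 8))   # structural fingerprints

def fingerprint_distance(i, j):
    return np.linalg.norm(fingerprints[i] - fingerprints[j])

# Approximate network: connect pairs whose fingerprint distance is small and
# estimate the interconversion energy as max(E_i, E_j) + alpha * distance.
alpha, cutoff = 0.5, 3.0   # assumed model constants
edges = {}
for i in range(n_minima):
    for j in range(i + 1, n_minima):
        d = fingerprint_distance(i, j)
        if d < cutoff:
            edges[(i, j)] = max(energies[i], energies[j]) + alpha * d

print(len(edges), "approximate connections; e.g.", next(iter(edges.items())))
```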

  9. Enhancing simulation of efficiency with analytical tools. [combining computer simulation and analytical techniques for cost reduction]

    NASA Technical Reports Server (NTRS)

    Seltzer, S. M.

    1974-01-01

    Some means of combining both computer simulation and analytical techniques are indicated in order to mutually enhance their efficiency as design tools and to motivate those involved in engineering design to consider using such combinations. While the idea is not new, heavy reliance on computers often seems to overshadow the potential utility of analytical tools. Although the example used is drawn from the area of dynamics and control, the principles espoused are applicable to other fields. In the example, the parameter plane stability analysis technique is described briefly and extended beyond that reported in the literature to increase its utility (through a simple set of recursive formulas) and its applicability (through portrayal of the effect of varying the sampling period of the computer). The numerical values that were rapidly selected by analysis were found to be correct for the hybrid computer simulation for which they were needed. This obviated the need for cut-and-try methods to choose the numerical values, thereby saving both time and computer utilization.

  10. Efficient mapping algorithms for scheduling robot inverse dynamics computation on a multiprocessor system

    NASA Technical Reports Server (NTRS)

    Lee, C. S. G.; Chen, C. L.

    1989-01-01

    Two efficient mapping algorithms are presented for scheduling the robot inverse dynamics computation, consisting of m computational modules with precedence relationships, on a multiprocessor system of p identical homogeneous processors with processor and communication costs, so as to achieve minimum computation time. An objective function is defined in terms of the sum of the processor finishing time and the interprocessor communication time. A minimax optimization is performed on the objective function to obtain the best mapping. This mapping problem can be formulated as a combination of the graph partitioning and scheduling problems, both of which are known to be NP-complete. Thus, to speed up the search for a solution, two heuristic algorithms are proposed to obtain fast but suboptimal mapping solutions. The first algorithm utilizes the level and the communication intensity of the task modules to construct an ordered priority list of ready modules, and module assignment is performed by a weighted bipartite matching algorithm, as sketched below. The second obtains a near-optimal mapping solution by refining the heuristic solution with simulated annealing. These optimization algorithms can solve various large-scale problems within a reasonable time. Computer simulations were performed to evaluate and verify the performance and validity of the proposed mapping algorithms. Finally, experiments computing the inverse dynamics of a six-jointed PUMA-like manipulator based on the Newton-Euler dynamic equations were implemented on an NCUBE/ten hypercube computer to verify the proposed mapping algorithms. Computer simulation and experimental results are compared and discussed.
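
    The bipartite-matching step of the first algorithm can be illustrated with SciPy's Hungarian-algorithm solver. The cost model below, current processor finishing time plus execution and communication terms, is an assumed stand-in for the paper's objective.

```python
# Sketch of one scheduling round: assign ready task modules to processors by
# weighted bipartite matching (Hungarian algorithm). The cost model, which
# sums processor finishing time, execution time, and a communication penalty,
# is an illustrative stand-in for the weights used in the paper.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(2)
n_modules, n_procs = 4, 4
exec_time = rng.uniform(1.0, 5.0, size=(n_modules, n_procs))
comm_cost = rng.uniform(0.0, 2.0, size=(n_modules, n_procs))
proc_ready = rng.uniform(0.0, 3.0, size=n_procs)  # current finishing times

cost = proc_ready[None, :] + exec_time + comm_cost
rows, cols = linear_sum_assignment(cost)  # min-cost module -> processor match
for m, p in zip(rows, cols):
    print(f"module {m} -> processor {p}, cost {cost[m, p]:.2f}")
```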

  11. Capture Efficiency of Biocompatible Magnetic Nanoparticles in Arterial Flow: A Computer Simulation for Magnetic Drug Targeting

    NASA Astrophysics Data System (ADS)

    Lunnoo, Thodsaphon; Puangmali, Theerapong

    2015-10-01

    The primary limitation of magnetic drug targeting (MDT) is that the strength of an external magnetic field decreases with increasing distance. Small nanoparticles (NPs) displaying superparamagnetic behaviour are also required in order to reduce embolization in the blood vessel. Small NPs, however, are difficult to steer and retain at the desired location. The aims of this work were to investigate parameters influencing the capture efficiency of the drug carriers in mimicked arterial flow. In this work, we computationally modelled and evaluated capture efficiency in MDT with COMSOL Multiphysics 4.4. The studied parameters were (i) magnetic nanoparticle size, (ii) three classes of magnetic cores (Fe3O4, Fe2O3, and Fe), and (iii) the thickness of biocompatible coating materials (Au, SiO2, and PEG). It was found that the capture efficiency of small particles decreased with decreasing size and was less than 5% for magnetic particles in the superparamagnetic regime. The thickness of non-magnetic coating materials did not significantly influence the capture efficiency of MDT. It was difficult to capture small drug carriers (D < 200 nm) in the arterial flow. We suggest that MDT with high capture efficiency can be obtained in small vessels with low blood velocities, such as micro-capillary vessels.
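
    The size dependence reported above follows from a simple force balance: the magnetophoretic force scales with particle volume while Stokes drag scales linearly with radius. The sketch below compares the two for several core radii; all field, fluid, and material values are assumed for illustration and are not the paper's inputs.

```python
# Back-of-the-envelope force balance behind the size dependence: the
# magnetophoretic force scales with particle volume (R^3) while Stokes drag
# scales with radius (R), so capture becomes harder as particles shrink.
import numpy as np

mu0 = 4e-7 * np.pi     # vacuum permeability (T m/A)
chi = 1.0              # effective susceptibility of the core, assumed
grad_B2 = 50.0         # gradient of B^2 near the magnet (T^2/m), assumed
eta = 3.5e-3           # blood viscosity (Pa s), assumed
v = 0.1                # blood velocity (m/s), assumed

for R in (25e-9, 50e-9, 100e-9, 200e-9):            # core radius
    V = 4.0 / 3.0 * np.pi * R**3
    F_mag = V * chi / (2.0 * mu0) * grad_B2         # magnetophoretic force
    F_drag = 6.0 * np.pi * eta * R * v              # Stokes drag at velocity v
    print(f"R = {R * 1e9:5.0f} nm: F_mag/F_drag = {F_mag / F_drag:.2e}")
```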

  12. Efficiency Improvement Opportunities for Personal Computer Monitors. Implications for Market Transformation Programs

    SciTech Connect

    Park, Won Young; Phadke, Amol; Shah, Nihar

    2012-06-29

    Displays account for a significant portion of electricity consumed in personal computer (PC) use, and global PC monitor shipments are expected to continue to increase. We assess the market trends in the energy efficiency of PC monitors that are likely to occur without any additional policy intervention and estimate that display efficiency will likely improve by over 40% by 2015 compared to today’s technology. We evaluate the cost effectiveness of a key technology which further improves efficiency beyond this level by at least 20% and find that its adoption is cost effective. We assess the potential for further improving efficiency taking into account the recent development of universal serial bus (USB) powered liquid crystal display (LCD) monitors and find that the current technology available and deployed in USB powered monitors has the potential to deeply reduce energy consumption by as much as 50%. We provide insights for policies and programs that can be used to accelerate the adoption of efficient technologies to capture global energy saving potential from PC monitors which we estimate to be 9.2 terawatt-hours [TWh] per year in 2015.

  13. Reentry-Vehicle Shape Optimization Using a Cartesian Adjoint Method and CAD Geometry

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.

    2006-01-01

    Adjoint solutions of the governing flow equations are becoming increasingly important for the development of efficient analysis and optimization algorithms. A well-known use of the adjoint method is gradient-based shape optimization. Given an objective function that defines some measure of performance, such as the lift and drag functionals, its gradient is computed at a cost that is essentially independent of the number of design variables (e.g., geometric parameters that control the shape), as illustrated below. Classic aerodynamic applications of gradient-based optimization include the design of cruise configurations for transonic and supersonic flow, as well as the design of high-lift systems. Cartesian mesh methods are perhaps the most promising approach for addressing the issues of flow solution automation for aerodynamic design problems. In these methods, the discretization of the wetted surface is decoupled from that of the volume mesh. This not only enables fast and robust mesh generation for geometry of arbitrary complexity, but also facilitates access to geometry modeling and manipulation using parametric computer-aided design (CAD). In previous work on Cartesian adjoint solvers, Melvin et al. developed an adjoint formulation for the TRANAIR code, which is based on the full-potential equation with viscous corrections. More recently, Dadone and Grossman presented an adjoint formulation for the two-dimensional Euler equations using a ghost-cell method to enforce the wall boundary conditions. In Refs. 18 and 19, we presented an accurate and efficient algorithm for the solution of the adjoint Euler equations discretized on Cartesian meshes with embedded, cut-cell boundaries. Novel aspects of the algorithm were the computation of surface shape sensitivities for triangulations based on parametric-CAD models and the linearization of the coupling between the surface triangulation and the cut-cells. The accuracy of the gradient computation was verified using several three-dimensional test cases, which included design
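
    The cost argument above (one gradient for any number of design variables) can be seen in a toy linear model. The sketch below compares an adjoint gradient against finite differences for J = c^T u with A(p) u = b; the random matrices are illustrative stand-ins for a discretized flow problem, not the Euler equations of the paper.

```python
# Toy demonstration of why adjoints make gradients cheap: for A(p) u = b and
# objective J = c^T u, one extra (adjoint) solve A^T lam = c yields
# dJ/dp_i = -lam^T (dA/dp_i) u for every design variable, versus one extra
# solve per variable with finite differences. Matrices are random stand-ins.
import numpy as np

rng = np.random.default_rng(3)
n, n_params = 50, 10
A0 = np.eye(n) * 10.0 + rng.normal(size=(n, n))
dA = rng.normal(size=(n_params, n, n))   # dA/dp_i for each parameter
b, c = rng.normal(size=n), rng.normal(size=n)
p = rng.normal(scale=0.01, size=n_params)

def assemble(p):
    return A0 + np.einsum("i,ijk->jk", p, dA)

u = np.linalg.solve(assemble(p), b)         # one forward solve
lam = np.linalg.solve(assemble(p).T, c)     # one adjoint solve
grad_adjoint = np.array([-lam @ dA[i] @ u for i in range(n_params)])

# Finite-difference check (n_params extra solves).
eps = 1e-6
grad_fd = np.empty(n_params)
for i in range(n_params):
    dp = p.copy()
    dp[i] += eps
    grad_fd[i] = (c @ np.linalg.solve(assemble(dp), b) - c @ u) / eps

print(np.max(np.abs(grad_adjoint - grad_fd)))  # small: the two gradients agree
```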

  14. CAD system for automatic analysis of CT perfusion maps

    NASA Astrophysics Data System (ADS)

    Hachaj, T.; Ogiela, M. R.

    2011-03-01

    In this article, the authors present novel algorithms developed for a computer-assisted diagnosis (CAD) system for the analysis of dynamic brain perfusion computed tomography (CT) maps: cerebral blood flow (CBF) and cerebral blood volume (CBV). These methods perform both quantitative analysis [detection, measurement, and description, with a brain anatomy atlas (AA), of potential asymmetries/lesions] and qualitative analysis (semantic interpretation of visualized symptoms). The semantic interpretation (deciding whether a lesion is ischemic or hemorrhagic, and whether the brain tissue is at risk of infarction or not) of visualized symptoms is done by so-called cognitive inference processes, which allow reasoning about the character of pathological regions based on specialist image knowledge. The whole system is implemented on the .NET platform (C# programming language) and can be used on any standard PC with the .NET framework installed.
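
    One quantitative step described above, the detection of perfusion asymmetries, reduces to comparing a map with its mirror image about the brain midline. The sketch below does this for a toy CBF map; the midline position and the asymmetry threshold are assumed for illustration.

```python
# Minimal sketch of asymmetry detection: mirror a CBF map about the brain
# midline and flag voxels with a large left-right perfusion deficit.
import numpy as np

rng = np.random.default_rng(4)
cbf = rng.normal(50.0, 5.0, size=(128, 128))   # toy CBF map (ml/100 g/min)
cbf[40:60, 20:35] -= 25.0                      # synthetic hypoperfused region

mirrored = cbf[:, ::-1]                        # reflect about the midline
asymmetry = cbf - mirrored                     # left-right difference
lesion_candidates = asymmetry < -15.0          # assumed deficit threshold
print("flagged voxels:", lesion_candidates.sum())
```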

  15. Uncertainty in aspiration efficiency estimates from torso simplifications in computational fluid dynamics simulations.

    PubMed

    Anderson, Kimberly R; Anthony, T Renée

    2013-03-01

    Computational fluid dynamics (CFD) has been used to report particle inhalability in low velocity freestreams, where realistic faces but simplified, truncated, and cylindrical human torsos were used. When compared to wind tunnel velocity studies, the truncated models were found to underestimate the air's upward velocity near the humans, raising questions about aspiration estimation. This work compares aspiration efficiencies for particles ranging from 7 to 116 µm using three torso geometries: (i) a simplified truncated cylinder, (ii) a non-truncated cylinder, and (iii) an anthropometrically realistic humanoid body. The primary aim of this work is to (i) quantify the errors introduced by using a simplified geometry and (ii) determine the required level of detail to adequately represent a human form in CFD studies of aspiration efficiency. Fluid simulations used the standard k-epsilon turbulence models, with freestream velocities at 0.1, 0.2, and 0.4 m s(-1) and breathing velocities at 1.81 and 12.11 m s(-1) to represent at-rest and heavy breathing rates, respectively. Laminar particle trajectory simulations were used to determine the upstream area, also known as the critical area, where particles would be inhaled. These areas were used to compute aspiration efficiencies for facing the wind. Significant differences were found in both vertical velocity estimates and the location of the critical area between the three models. However, differences in aspiration efficiencies between the three forms were <8.8% over all particle sizes, indicating that there is little difference in aspiration efficiency between torso models.

  16. Uncertainty in Aspiration Efficiency Estimates from Torso Simplifications in Computational Fluid Dynamics Simulations

    PubMed Central

    Anthony, T. Renée

    2013-01-01

    Computational fluid dynamics (CFD) has been used to report particle inhalability in low velocity freestreams, where realistic faces but simplified, truncated, and cylindrical human torsos were used. When compared to wind tunnel velocity studies, the truncated models were found to underestimate the air’s upward velocity near the humans, raising questions about aspiration estimation. This work compares aspiration efficiencies for particles ranging from 7 to 116 µm using three torso geometries: (i) a simplified truncated cylinder, (ii) a non-truncated cylinder, and (iii) an anthropometrically realistic humanoid body. The primary aim of this work is to (i) quantify the errors introduced by using a simplified geometry and (ii) determine the required level of detail to adequately represent a human form in CFD studies of aspiration efficiency. Fluid simulations used the standard k-epsilon turbulence models, with freestream velocities at 0.1, 0.2, and 0.4 m s−1 and breathing velocities at 1.81 and 12.11 m s−1 to represent at-rest and heavy breathing rates, respectively. Laminar particle trajectory simulations were used to determine the upstream area, also known as the critical area, where particles would be inhaled. These areas were used to compute aspiration efficiencies for facing the wind. Significant differences were found in both vertical velocity estimates and the location of the critical area between the three models. However, differences in aspiration efficiencies between the three forms were <8.8% over all particle sizes, indicating that there is little difference in aspiration efficiency between torso models. PMID:23006817
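
    The critical-area bookkeeping used in both versions of this study can be written out directly: assuming aspiration efficiency is the ratio of the particle flux through the upstream critical area to the flux drawn through the mouth opening, it follows from four numbers. All values below are illustrative, not taken from the paper.

```python
# Sketch of the critical-area calculation, under the assumption that
# aspiration efficiency equals the flux of particles crossing the upstream
# critical area divided by the flux inhaled through the mouth opening.
a_c = 1.2e-4   # upstream critical area for one particle size (m^2), assumed
U0 = 0.2       # freestream velocity (m/s)
a_m = 1.5e-4   # mouth opening area (m^2), assumed
U_m = 1.81     # at-rest inhalation velocity (m/s)

aspiration_efficiency = (a_c * U0) / (a_m * U_m)
print(f"aspiration efficiency = {aspiration_efficiency:.2%}")
```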

  17. Efficient Adjoint Computation of Hybrid Systems of Differential Algebraic Equations with Applications in Power Systems

    SciTech Connect

    Abhyankar, Shrirang; Anitescu, Mihai; Constantinescu, Emil; Zhang, Hong

    2016-03-31

    Sensitivity analysis is an important tool to describe power system dynamic behavior in response to parameter variations. It is a central component in preventive and corrective control applications. The existing approaches for sensitivity calculations, namely, finite-difference and forward sensitivity analysis, require a computational effort that increases linearly with the number of sensitivity parameters. In this work, we investigate, implement, and test a discrete adjoint sensitivity approach whose computational effort is effectively independent of the number of sensitivity parameters. The proposed approach is highly efficient for calculating trajectory sensitivities of larger systems and is consistent, within machine precision, with the function whose sensitivity we are seeking. This is an essential feature for use in optimization applications. Moreover, our approach includes a consistent treatment of systems with switching, such as DC exciters, by deriving and implementing the adjoint jump conditions that arise from state and time-dependent discontinuities. The accuracy and the computational efficiency of the proposed approach are demonstrated in comparison with the forward sensitivity analysis approach.

  18. An efficient computational model for deep low-enthalpy geothermal systems

    NASA Astrophysics Data System (ADS)

    Saeid, Sanaz; Al-Khoury, Rafid; Barends, Frans

    2013-02-01

    In this paper, a computationally efficient finite element model for transient heat and fluid flow in a deep low-enthalpy geothermal system is formulated. Emphasis is placed on the coupling between the involved wellbores and a soil mass, represented by a geothermal reservoir and a surrounding soil. The finite element package COMSOL is utilized as a framework for implementing the model. Two main aspects contribute to the computational efficiency and accuracy: the wellbore model, and the 1D-2D coupling of COMSOL. In the first aspect, heat flow in the wellbore is modelled as pseudo-three-dimensional conductive-convective flow, using one-dimensional elements. In this model, thermal interactions between the wellbore components are included in the mathematical model, alleviating the need for typical 3D spatial discretization and thus reducing the mesh size significantly. In the second aspect, heat flow in the soil mass is coupled to the heat flow in the wellbores, giving an accurate description of heat loss and gain along the pathway of the injected and produced fluid. Heat flow in the geothermal reservoir is simulated as two-dimensional, fully saturated, nonlinear conductive-convective flow, accounting for the dependency of fluid density and viscosity on temperature, whereas in the surrounding soil, heat flow is simulated as linear conduction. Numerical and parametric examples describing the computational capabilities of the model and its suitability for utilization in engineering practice are presented.

  19. Sampling efficiency of modified 37-mm sampling cassettes using computational fluid dynamics.

    PubMed

    Anthony, T Renée; Sleeth, Darrah; Volckens, John

    2016-01-01

    In the U.S., most industrial hygiene practitioners continue to rely on the closed-face cassette (CFC) to assess worker exposures to hazardous dusts, primarily because of ease of use, cost, and familiarity. However, mass concentrations measured with this classic sampler underestimate exposures to larger particles throughout the inhalable particulate mass (IPM) size range (up to aerodynamic diameters of 100 μm). To investigate whether the current 37-mm inlet cap can be redesigned to better meet the IPM sampling criterion, computational fluid dynamics (CFD) models were developed, and particle sampling efficiencies associated with various modifications to the CFC inlet cap were determined. Simulations of fluid flow (standard k-epsilon turbulence model) and particle transport (laminar trajectories, 1-116 μm) were conducted using sampling flow rates of 10 L min(-1) in slow moving air (0.2 m s(-1)) in the facing-the-wind orientation. Combinations of seven inlet shapes and three inlet diameters were evaluated as candidates to replace the current 37-mm inlet cap. For a given inlet geometry, differences in sampler efficiency between inlet diameters averaged less than 1% for particles through 100 μm, but the largest opening was found to increase the efficiency for the 116 μm particles by 14% for the flat inlet cap. A substantial reduction in sampler efficiency was identified for sampler inlets with side walls extending beyond the dimension of the external lip of the current 37-mm CFC. The inlet cap based on the 37-mm CFC dimensions with an expanded 15-mm entry provided the best agreement with facing-the-wind human aspiration efficiency. Sampler efficiency increased with a flat entry or with a thin central lip adjacent to the new enlarged entry. This work provides a substantial body of sampling efficiency estimates as a function of particle size and inlet geometry for personal aerosol samplers.

  20. Improving Computational Efficiency of Prediction in Model-Based Prognostics Using the Unscented Transform

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew John; Goebel, Kai Frank

    2010-01-01

    Model-based prognostics captures system knowledge in the form of physics-based models of components, and how they fail, in order to obtain accurate predictions of end of life (EOL). EOL is predicted based on the estimated current state distribution of a component and expected profiles of future usage. In general, this requires simulations of the component using the underlying models. In this paper, we develop a simulation-based prediction methodology that achieves computational efficiency by performing only the minimal number of simulations needed in order to accurately approximate the mean and variance of the complete EOL distribution. This is performed through the use of the unscented transform, which predicts the means and covariances of a distribution passed through a nonlinear transformation. In this case, the EOL simulation acts as that nonlinear transformation. In this paper, we review the unscented transform, and describe how this concept is applied to efficient EOL prediction. As a case study, we develop a physics-based model of a solenoid valve, and perform simulation experiments to demonstrate improved computational efficiency without sacrificing prediction accuracy.
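
    The sigma-point mechanism the paper relies on can be written in a few lines. The sketch below, assuming a Gaussian state and a toy end-of-life function standing in for the component simulation, propagates 2n+1 sigma points through the nonlinearity to approximate the mean and variance of the EOL distribution.

```python
# Minimal unscented-transform sketch: approximate the mean and variance of a
# nonlinear function of a Gaussian state from 2n+1 sigma points, the same
# mechanism the paper uses with an EOL simulation as the nonlinearity.
import numpy as np

def unscented_transform(mean, cov, f, kappa=1.0):
    n = len(mean)
    L = np.linalg.cholesky((n + kappa) * cov)   # matrix square root
    sigma = [mean] + [mean + L[:, i] for i in range(n)] \
                   + [mean - L[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    y = np.array([f(s) for s in sigma])         # one simulation per point
    y_mean = w @ y
    y_var = w @ (y - y_mean) ** 2
    return y_mean, y_var

# Toy "EOL simulation": remaining life shrinks nonlinearly with damage state.
f = lambda x: 100.0 * np.exp(-x[0]) / (1.0 + x[1] ** 2)
mean, cov = np.array([0.5, 0.1]), np.diag([0.02, 0.01])
print(unscented_transform(mean, cov, f))
```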

  1. Efficient computational techniques for mistuning analysis of bladed discs: A review

    NASA Astrophysics Data System (ADS)

    Yuan, Jie; Scarpa, Fabrizio; Allegri, Giuliano; Titurus, Branislav; Patsias, Sophoclis; Rajasekaran, Ramesh

    2017-03-01

    This paper reviews the relevant literature on mistuning problems in bladed disc systems and their implications for the uncertainty propagation associated with the dynamics of aeroengine systems. Emphasis is placed on the development of multi-scale computational techniques to increase the computational efficiency of linear mistuning analysis, especially with respect to reduced order modeling techniques and uncertainty quantification methods. Nonlinear phenomena are not considered in this paper. The first two parts describe the fundamentals of the mechanics of tuned and mistuned bladed discs, followed by a review of critical research efforts on the development of reduced order rotor models. The focus of the fourth part is a review of efficient simulation methods for the stochastic analysis of mistuned bladed disc systems. Finally, we provide a view of the current state of the art in efficient inversion methods for stochastic analysis, followed by a summary.

  2. Computationally Efficient Upscaling Methodology for Predicting Thermal Conductivity of Nuclear Waste Forms

    SciTech Connect

    Li, Dongsheng; Sun, Xin; Khaleel, Mohammad A.

    2011-09-28

    This study evaluated different upscaling methods for predicting the thermal conductivity of loaded nuclear waste form, a heterogeneous material system, and compared their efficiency and accuracy. Thermal conductivity of loaded nuclear waste form is an important property for the waste form integrated performance and safety code (IPSC). The effective thermal conductivity obtained from microstructure information and the local thermal conductivities of the different components is critical in predicting the life and performance of waste form during storage. How quickly the heat generated during storage is dissipated depends directly on thermal conductivity, which in turn determines the mechanical deformation behavior, corrosion resistance, and aging performance. Several methods, including the Taylor model, Sachs model, self-consistent model, and statistical upscaling models, were developed and implemented. In the absence of experimental data, predictions from the finite element method (FEM) were used as a reference to determine the accuracy of the different upscaling models. Micrographs from different loadings of nuclear waste were used in the prediction of thermal conductivity. The results demonstrated that, in terms of efficiency, the bounding models (Taylor and Sachs) are better than the self-consistent model, the statistical upscaling method, and FEM. Balancing computational resources and accuracy, statistical upscaling is a computationally efficient method for predicting the effective thermal conductivity of nuclear waste form.
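
    Assuming the Taylor and Sachs models correspond, as in classical homogenization, to the arithmetic (parallel) and harmonic (series) means of the phase conductivities, the two bounding estimates can be sketched as follows; the phase values are illustrative, not the waste-form data.

```python
# Sketch of the two bounding estimates, assuming the Taylor model is the
# arithmetic (Voigt-like, upper-bound) mean and the Sachs model the harmonic
# (Reuss-like, lower-bound) mean of the phase conductivities.
import numpy as np

k = np.array([1.0, 20.0])   # phase thermal conductivities (W/(m K)), assumed
f = np.array([0.7, 0.3])    # volume fractions (must sum to 1)

k_taylor = np.sum(f * k)            # arithmetic mean: upper bound
k_sachs = 1.0 / np.sum(f / k)       # harmonic mean: lower bound
print(f"Taylor (upper) = {k_taylor:.2f}, Sachs (lower) = {k_sachs:.2f} W/(m K)")
```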

  3. Computationally efficient multidimensional analysis of complex flow cytometry data using second order polynomial histograms.

    PubMed

    Zaunders, John; Jing, Junmei; Leipold, Michael; Maecker, Holden; Kelleher, Anthony D; Koch, Inge

    2016-01-01

    Many methods have been described for automated clustering analysis of complex flow cytometry data, but so far the goal of efficiently estimating multivariate densities and their modes for a moderate number of dimensions and potentially millions of data points has not been attained. We have devised a novel approach to describing modes using second order polynomial histogram estimators (SOPHE). The method divides the data into multivariate bins and determines the shape of the data in each bin based on second order polynomials, which is an efficient computation. These calculations yield local maxima and allow joining of adjacent bins to identify clusters. The use of second order polynomials also optimally uses wide bins, such that in most cases each parameter (dimension) need only be divided into 4-8 bins, again reducing the computational load. We validated this method using defined mixtures of up to 17 fluorescent beads in 16 dimensions, correctly identifying all populations in data files of 100,000 beads in <10 s on a standard laptop. The method also correctly clustered granulocytes, lymphocytes, including standard T, B, and NK cell subsets, and monocytes in 9-color stained peripheral blood, within seconds. SOPHE successfully clustered up to 36 subsets of memory CD4 T cells using differentiation and trafficking markers in 14-color flow analysis, and up to 65 subpopulations of PBMC in 33-dimensional CyTOF data, showing its usefulness in discovery research. SOPHE has the potential to greatly increase the efficiency of analysing complex mixtures of cells in higher dimensions.

  4. Formal Management of CAD/CAM Processes

    NASA Astrophysics Data System (ADS)

    Kohlhase, Michael; Lemburg, Johannes; Schröder, Lutz; Schulz, Ewaryst

    Systematic engineering design processes have many aspects in common with software engineering, with CAD/CAM objects replacing program code as the implementation stage of the development. They are, however, currently considerably less formal. We propose to draw on the mentioned similarities and transfer methods from software engineering to engineering design in order to enhance in particular the reliability and reusability of engineering processes. We lay out a vision of a document-oriented design process that integrates CAD/CAM documents with requirement specifications; as a first step towards supporting such a process, we present a tool that interfaces a CAD system with program verification workflows, thus allowing for completely formalised development strands within a semi-formal methodology.

  5. Next Generation CAD/CAM/CAE Systems

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Compiler); Malone, John B. (Compiler)

    1997-01-01

    This document contains presentations from the joint UVA/NASA Workshop on Next Generation CAD/CAM/CAE Systems held at NASA Langley Research Center in Hampton, Virginia on March 18-19, 1997. The presentations focused on current capabilities and future directions of CAD/CAM/CAE systems, aerospace industry projects, and university activities related to simulation-based design. Workshop attendees represented NASA, commercial software developers, the aerospace industry, government labs, and academia. The workshop objectives were to assess the potential of emerging CAD/CAM/CAE technology for use in intelligent simulation-based design and to provide guidelines for focused future research leading to effective use of CAE systems for simulating the entire life cycle of aerospace systems.

  6. Improving CAD performance in pulmonary embolism detection: preliminary investigation

    NASA Astrophysics Data System (ADS)

    Park, Sang Cheol; Chapman, Brian; Deible, Christopher; Lee, Sean; Zheng, Bin

    2010-03-01

    In this preliminary study, a new computer-aided detection (CAD) scheme for pulmonary embolism (PE) detection was developed and tested. The scheme applies multiple steps, including lung segmentation, candidate extraction using an intensity mask and the tobogganing method, feature extraction, and false-positive reduction using a multifeature-based artificial neural network (ANN) and a k-nearest neighbor (KNN) classifier, to detect and classify suspicious PE lesions. In particular, a new method to define the surrounding background regions of interest (ROI) depicting PE candidates was proposed and tested in an attempt to reduce the detection of false-positive regions. The authors also investigated the following methods to improve CAD performance: a grouping and scoring method, feature selection using a genetic algorithm, and a limit on the number of suspicious lesions allowed to be cued in one examination. To test the scheme's performance, a set of 20 chest CT examinations was selected. Among them, 18 were positive cases depicting 44 verified PE lesions, and the remaining 2 were negative cases. The dataset was divided into a training subset (9 examinations) and a testing subset (11 examinations). The experimental results showed that, when applied to the testing dataset, the CAD scheme using the tobogganing method alone achieved a 2D region-based sensitivity of 72.1% (220/305) and a 3D lesion-based sensitivity of 83.3% (20/24), with a total of 19,653 2D false-positive (FP) PE regions (1,786.6 per case, or approximately 6.3 per CT slice). Applying the proposed new method to improve lung region segmentation and better define the surrounding background ROI, the scheme reduced the region-based sensitivity by 6.5% to 65.6%, or the lesion-based sensitivity by 4.1% to 79.2%, while reducing the FP rate by 65.6% to 6,752 regions (or 613.8 per case). After applying the methods of grouping, the maximum scoring, a genetic algorithm (GA) to delete "redundant" features, and limiting the maximum

  7. Computationally efficient gradient matrix of optical path length in axisymmetric optical systems.

    PubMed

    Hsueh, Chun-Che; Lin, Psang-Dain

    2009-02-10

    We develop a mathematical method for determining the optical path length (OPL) gradient matrix relative to all the system variables such that the effects of variable changes can be evaluated in a single pass. The approach developed avoids the requirement for multiple ray-tracing operations and is, therefore, more computationally efficient. By contrast, the effects of variable changes on the OPL of an optical system are generally evaluated by utilizing a ray-tracing approach to determine the OPL before and after the variable change and then applying a finite-difference (FD) approximation method to estimate the OPL gradient with respect to each individual variable. Utilizing a Petzval lens system for verification purposes, it is shown that the approach developed reduces the computational time by around 90% compared to that of the FD method.

  8. A computationally efficient strength model for textured HCP metals undergoing dynamic loading conditions: Application to Magnesium

    NASA Astrophysics Data System (ADS)

    Lloyd, Jeffrey; Becker, Richard

    2015-06-01

    Predicting the behavior of HCP metals presents challenges beyond those of FCC and BCC metals because several deformation mechanisms, each with their own distinct behavior, compete simultaneously. Understanding and capturing the competition of these mechanisms is essential for modeling the anisotropic and highly orientation-dependent behavior exhibited by most HCP metals, yet doing so in a computationally efficient manner has been elusive. In this work an orientation-dependent strength model is developed that captures the competition between basal slip, extension twinning, and non-basal slip at significantly lower computational cost than conventional crystal plasticity models. The model is applied to various textured Magnesium polycrystals, and where applicable, compared with experimental results. Although the model developed in this work is only applied to Magnesium, both the framework and model are applicable to other non-cubic crystal structures.

  9. Computationally Efficient 2D DOA Estimation with Uniform Rectangular Array in Low-Grazing Angle

    PubMed Central

    Shi, Junpeng; Hu, Guoping; Zhang, Xiaofei; Sun, Fenggang; Xiao, Yu

    2017-01-01

    In this paper, we propose a computationally efficient spatial differencing matrix set (SDMS) method for two-dimensional direction of arrival (2D DOA) estimation with uniform rectangular arrays (URAs) in a low-grazing angle (LGA) condition. By rearranging the auto-correlation and cross-correlation matrices in turn among different subarrays, the SDMS method can estimate the two parameters independently with one-dimensional (1D) subspace-based estimation techniques, where differencing is performed only on the auto-correlation matrices and the cross-correlation matrices are kept intact. Then, the pair-matching of the two parameters is achieved by extracting the diagonal elements of the URA. Thus, the proposed method decreases the computational complexity, suppresses the effect of additive noise, and incurs little information loss. Simulation results show that, in LGA conditions, the proposed methods achieve performance improvements over other methods in both white and colored noise conditions. PMID:28245634

  10. A computationally efficient description of heterogeneous freezing: A simplified version of the Soccer ball model

    NASA Astrophysics Data System (ADS)

    Niedermeier, Dennis; Ervens, Barbara; Clauss, Tina; Voigtländer, Jens; Wex, Heike; Hartmann, Susan; Stratmann, Frank

    2014-01-01

    In a recent study, the Soccer ball model (SBM) was introduced for modeling and/or parameterizing heterogeneous ice nucleation processes. The model applies classical nucleation theory. It allows for a consistent description of both apparently singular and stochastic ice nucleation behavior, by distributing contact angles over the nucleation sites of a particle population assuming a Gaussian probability density function. The original SBM utilizes the Monte Carlo technique, which hampers its usage in atmospheric models, as fairly time-consuming calculations must be performed to obtain statistically significant results. Thus, we have developed a simplified and computationally more efficient version of the SBM. We successfully used the new SBM to parameterize experimental nucleation data of, e.g., bacterial ice nucleation. Both SBMs give identical results; however, the new model is computationally less expensive as confirmed by cloud parcel simulations. Therefore, it is a suitable tool for describing heterogeneous ice nucleation processes in atmospheric models.
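
    The SBM's central ingredient can be illustrated with a schematic sketch: the frozen fraction of a particle population is the Gaussian-weighted average, over contact angles, of classical-nucleation survival probabilities. The rate expression j(theta) below is a schematic stand-in, not the CNT formula used in the paper, and all parameter values are assumed.

```python
# Illustrative sketch of the SBM idea: average per-site survival probabilities
# exp(-j(theta) * t) over a Gaussian contact-angle distribution (one
# nucleation site per particle assumed). The rate model j(theta) is schematic.
import numpy as np

def frozen_fraction(t, mu_theta=1.5, sigma_theta=0.2, n_samples=20000, seed=5):
    rng = np.random.default_rng(seed)
    theta = rng.normal(mu_theta, sigma_theta, n_samples)
    theta = np.clip(theta, 1e-3, np.pi)
    # Schematic contact-angle dependence: smaller angles nucleate faster.
    j = 1e3 * np.exp(-10.0 * (1.0 - np.cos(theta)))   # assumed rate (1/s)
    survival = np.exp(-j * t)
    return 1.0 - survival.mean()

for t in (0.1, 1.0, 10.0):
    print(f"t = {t:5.1f} s: frozen fraction = {frozen_fraction(t):.3f}")
```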

  11. Use of global functions for improvement in efficiency of nonlinear analysis. [in computer structural displacement estimation

    NASA Technical Reports Server (NTRS)

    Almroth, B. O.; Stehlin, P.; Brogan, F. A.

    1981-01-01

    A method for improving the efficiency of nonlinear structural analysis by the use of global displacement functions is presented. The computer programs include options to define the global functions as input or let the program automatically select and update these functions. The program was applied to a number of structures: (1) 'pear-shaped cylinder' in compression, (2) bending of a long cylinder, (3) spherical shell subjected to point force, (4) panel with initial imperfections, (5) cylinder with cutouts. The sample cases indicate the usefulness of the procedure in the solution of nonlinear structural shell problems by the finite element method. It is concluded that the use of global functions for extrapolation will lead to savings in computer time.

  12. Efficient algorithm for computing exact partition functions of lattice polymer models

    NASA Astrophysics Data System (ADS)

    Hsieh, Yu-Hsin; Chen, Chi-Ning; Hu, Chin-Kun

    2016-12-01

    Polymers are important macromolecules in many physical, chemical, biological and industrial problems. Studies on simple lattice polymer models are very helpful for understanding behaviors of polymers. We develop an efficient algorithm for computing exact partition functions of lattice polymer models, and we use this algorithm and personal computers to obtain exact partition functions of the interacting self-avoiding walks with N monomers on the simple cubic lattice up to N = 28 and on the square lattice up to N = 40. Our algorithm can be extended to study other lattice polymer models, such as the HP model for protein folding and the charged HP model for protein aggregation. It also provides references for checking accuracy of numerical partition functions obtained by simulations.
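
    For contrast with the efficient algorithm described above, a brute-force exact enumeration is easy to write and makes the combinatorial explosion obvious; it is feasible only for very small N. The sketch below sums Boltzmann weights over interacting self-avoiding walks on the square lattice, with an assumed energy of -1 per non-bonded nearest-neighbour contact.

```python
# Brute-force sketch of an exact ISAW partition function on the square
# lattice for small N (the paper's algorithm is far more efficient, reaching
# N = 40). Energy: -1 per non-bonded nearest-neighbour contact, so
# Z(beta) = sum over walks of exp(beta * contacts).
import math

STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def isaw_partition(n_monomers, beta):
    z = 0.0

    def contacts(walk):
        pts = set(walk)
        c = 0
        for i, (x, y) in enumerate(walk):
            for dx, dy in ((1, 0), (0, 1)):       # count each pair once
                q = (x + dx, y + dy)
                if q in pts and abs(i - walk.index(q)) > 1:
                    c += 1                        # non-bonded neighbours only
        return c

    def grow(walk):
        nonlocal z
        if len(walk) == n_monomers:
            z += math.exp(beta * contacts(walk))  # E = -contacts
            return
        x, y = walk[-1]
        for dx, dy in STEPS:
            nxt = (x + dx, y + dy)
            if nxt not in walk:                   # self-avoidance
                grow(walk + [nxt])

    grow([(0, 0)])
    return z

print(isaw_partition(8, beta=0.5))   # feasible only for small N
```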

  13. Efficient Computation of Closed-loop Frequency Response for Large Order Flexible Systems

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Giesy, Daniel P.

    1997-01-01

    An efficient and robust computational scheme is given for the calculation of the frequency response function of a large order, flexible system implemented with a linear, time invariant control system. Advantage is taken of the highly structured sparsity of the system matrix of the plant, based on a model of the structure using normal mode coordinates. The computational time per frequency point of the new computational scheme is a linear function of system size, a significant improvement over traditional, full-matrix techniques whose computational times per frequency point range from quadratic to cubic functions of system size. This permits the practical frequency domain analysis of systems of much larger order than by traditional, full-matrix techniques. Formulations are given for both open- and closed-loop systems, and the sparsity argument is sketched below. Numerical examples are presented showing the advantages of the present formulation over traditional approaches, both in speed and in accuracy. Using a model with 703 structural modes, a speed-up of almost two orders of magnitude was observed while accuracy improved by up to 5 decimal places.
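
    The sparsity argument can be sketched in a few lines: in normal-mode coordinates each mode contributes an independent second-order term to the frequency response, so each frequency point costs O(n) elementwise operations instead of a dense factorization. The modal data below are random stand-ins, and the uniform damping ratio is an assumption.

```python
# Sketch of the linear-in-size cost: in modal coordinates the frequency
# response is a sum of independent single-mode terms, so evaluating H(w)
# needs only elementwise divisions, not a dense LU solve per frequency.
import numpy as np

rng = np.random.default_rng(6)
n_modes = 703
omega_n = rng.uniform(1.0, 100.0, n_modes)   # modal frequencies (rad/s)
zeta = 0.01                                  # modal damping, assumed uniform
B = rng.normal(size=n_modes)                 # modal input participation
C = rng.normal(size=n_modes)                 # modal output shape

def freq_response(w):
    # Each mode contributes C_k * B_k / (-w^2 + 2j*zeta*w*omega_k + omega_k^2).
    den = -w**2 + 2j * zeta * w * omega_n + omega_n**2
    return np.sum(C * B / den)

freqs = np.linspace(0.1, 120.0, 2000)
H = np.array([freq_response(w) for w in freqs])
print(np.abs(H).max())
```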

  14. Algorithms for Efficient Computation of Transfer Functions for Large Order Flexible Systems

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Giesy, Daniel P.

    1998-01-01

    An efficient and robust computational scheme is given for the calculation of the frequency response function of a large order, flexible system implemented with a linear, time invariant control system. Advantage is taken of the highly structured sparsity of the system matrix of the plant, based on a model of the structure using normal mode coordinates. The computational time per frequency point of the new computational scheme is a linear function of system size, a significant improvement over traditional, full-matrix techniques whose computational times per frequency point range from quadratic to cubic functions of system size. This permits the practical frequency domain analysis of systems of much larger order than by traditional, full-matrix techniques. Formulations are given for both open- and closed-loop systems. Numerical examples are presented showing the advantages of the present formulation over traditional approaches, both in speed and in accuracy. Using a model with 703 structural modes, the present method was up to two orders of magnitude faster than a traditional method. The present method generally showed good to excellent accuracy throughout the range of test frequencies, while traditional methods gave adequate accuracy for lower frequencies, but generally deteriorated in performance at higher frequencies, with worst-case errors being many orders of magnitude times the correct values.

  15. Modeling weakly-ionized plasmas in magnetic field: A new computationally-efficient approach

    SciTech Connect

    Parent, Bernard; Macheret, Sergey O.; Shneider, Mikhail N.

    2015-11-01

    Despite its success at simulating accurately both non-neutral and quasi-neutral weakly-ionized plasmas, the drift-diffusion model has been observed to be a particularly stiff set of equations. Recently, it was demonstrated that the stiffness of the system could be relieved by rewriting the equations such that the potential is obtained from Ohm's law rather than Gauss's law while adding some source terms to the ion transport equation to ensure that Gauss's law is satisfied in non-neutral regions. Although the latter was applicable to multicomponent and multidimensional plasmas, it could not be used for plasmas in which the magnetic field was significant. This paper hence proposes a new computationally-efficient set of electron and ion transport equations that can be used not only for a plasma with multiple types of positive and negative ions, but also for a plasma in magnetic field. Because the proposed set of equations is obtained from the same physical model as the conventional drift-diffusion equations without introducing new assumptions or simplifications, it results in the same exact solution when the grid is refined sufficiently while being more computationally efficient: not only is the proposed approach considerably less stiff and hence requires fewer iterations to reach convergence but it yields a converged solution that exhibits a significantly higher resolution. The combined faster convergence and higher resolution is shown to result in a hundredfold increase in computational efficiency for some typical steady and unsteady plasma problems including non-neutral cathode and anode sheaths as well as quasi-neutral regions.

  16. Simple and Computationally Efficient Modeling of Surface Wind Speeds Over Heterogeneous Terrain

    NASA Astrophysics Data System (ADS)

    Winstral, A.; Marks, D.; Gurney, R.

    2007-12-01

    In mountain catchments, wind is frequently the dominant process controlling snow distribution. The spatial variability of winds over mountain landscapes is considerable, producing great spatial variability in mass and energy fluxes. Distributed models capable of capturing the variability of these mass and energy fluxes require time-series of distributed wind data at a compatibly fine spatial scale. Atmospheric and surface wind flow models in these regions have been limited by our ability to represent the inherent complexities of the processes being modeled in a computationally efficient manner. Simplified parameterized models, such as those based on terrain and vegetation, though not as explicit as a model of fluid flow, are computationally efficient for operational use, including in real time. Recent work described just such a model, which related a measure of topographic exposure to wind speed differences at proximal locations with varied exposures. The current work used a more expansive network of stations in the Reynolds Creek Experimental Watershed in southwestern Idaho, USA to test the extension of the previous findings to larger domains. The stations in the study have varying degrees of wind exposure and cover an area of approximately 125 km2 with an elevation range of 1200-2100 m a.s.l. Subsets of the site data were detrended to a selected standard exposure, based on the relationship derived in the prior work, to ascertain and model any elevation-based trends in the hourly observations. Hourly wind speeds at withheld stations were then predicted based on elevation and topographic exposure at each respective site. It was found that reasonable predictions of wind speed across this heterogeneous landscape, capturing both large-scale elevation trends and small-scale topographic variability, could be achieved in a computationally efficient manner.

  17. Computationally efficient modeling of the dynamic behavior of a portable PEM fuel cell stack

    NASA Astrophysics Data System (ADS)

    Philipps, S. P.; Ziegler, C.

    A numerically efficient mathematical model of a proton exchange membrane fuel cell (PEMFC) stack is presented. The aim of this model is to study the dynamic response of a PEMFC stack subjected to load changes under the restriction of short computing time. This restriction was imposed in order for the model to be applicable for nonlinear model predictive control (NMPC). The dynamic, non-isothermal model is based on mass and energy balance equations, which are reduced to ordinary differential equations in time. The reduced equations are solved for a single cell and the results are upscaled to describe the fuel cell stack. This approach makes our calculations computationally efficient. We study the feasibility of capturing water balance effects with such a reduced model. Mass balance equations for water vapor and liquid water including the phase change as well as a steady-state membrane model accounting for the electro-osmotic drag and diffusion of water through the membrane are included. Based on this approach the model is successfully used to predict critical operating conditions by monitoring the amount of liquid water in the stack and the stack impedance. The model and the overall calculation method are validated using two different load profiles on realistic time scales of up to 30 min. The simulation results are used to clarify the measured characteristics of the stack temperature and the stack voltage, which has rarely been done on such long time scales. In addition, a discussion of the influence of flooding and dry-out on the stack voltage is included. The modeling approach proves to be computationally efficient: an operating time of 0.5 h is simulated in less than 1 s, while still showing sufficient accuracy.

  18. Generation and use of human 3D-CAD models

    NASA Astrophysics Data System (ADS)

    Grotepass, Juergen; Speyer, Hartmut; Kaiser, Ralf

    2002-05-01

    Individualized products are one of the ten megatrends of the 21st century, with human modeling as the key issue for tomorrow's design and product development. The use of human modeling software for computer-based ergonomic simulations within the production process increases quality while reducing costs by 30-50 percent and shortening production time. This presentation focuses on the use of human 3D-CAD models for both the ergonomic design of working environments and made-to-measure garment production. Today, the entire production chain can be designed, and individualized models generated and analyzed, in 3D computer environments. Anthropometric design for ergonomics is matched to human needs, thus preserving health. Ergonomic simulation covers topics such as human vision, reachability, kinematics, force and comfort analysis, and international design capabilities. In Germany, more than 17 billion marks flow to other industries because clothes do not fit. Individual clothing tailored to the customer's preferences means added value, pleasure, and perfect fit. Body scanning technology is the key to the generation and use of human 3D-CAD models for both the ergonomic design of working environments and made-to-measure garment production.

  19. Hierarchy of Efficiently Computable and Faithful Lower Bounds to Quantum Discord.

    PubMed

    Piani, Marco

    2016-08-19

    Quantum discord expresses a fundamental nonclassicality of correlations that is more general than entanglement, but that, in its standard definition, is not easily evaluated. We derive a hierarchy of computationally efficient lower bounds to the standard quantum discord. Every nontrivial element of the hierarchy constitutes by itself a valid discordlike measure, based on a fundamental feature of quantum correlations: their lack of shareability. Our approach emphasizes how the difference between entanglement and discord depends on whether shareability is intended as a static property or as a dynamical process.

  20. Computationally efficient method for Fourier transform of highly chirped pulses for laser and parametric amplifier modeling.

    PubMed

    Andrianov, Alexey; Szabo, Aron; Sergeev, Alexander; Kim, Arkady; Chvykov, Vladimir; Kalashnikov, Mikhail

    2016-11-14

    We developed an improved approach to calculate the Fourier transform of signals with arbitrarily large quadratic phase, which can be efficiently implemented in numerical simulations utilizing the fast Fourier transform. The proposed algorithm significantly reduces the computational cost of the Fourier transform of a highly chirped and stretched pulse by splitting it into two separate transforms of almost transform-limited pulses, thereby reducing the required grid size roughly by the stretching factor of the pulse. The application of our improved Fourier transform algorithm in the split-step method for numerical modeling of CPA and OPCPA shows excellent agreement with standard algorithms.

  1. Computational efficiency and Amdahl’s law for the adaptive resolution simulation technique

    DOE PAGES

    Junghans, Christoph; Agarwal, Animesh; Delle Site, Luigi

    2017-06-01

    Here, we discuss the computational performance of the adaptive resolution technique in molecular simulation when it is compared with equivalent full coarse-grained and full atomistic simulations. We show that an estimate of its efficiency, within 10%-15% accuracy, is given by Amdahl's Law adapted to the specific quantities involved in the problem. The derivation of the predictive formula is general enough that it may be applied to the general case of molecular dynamics approaches where a reduction of degrees of freedom occurs in a multiscale fashion.
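
    The adapted Amdahl estimate has the familiar closed form. In the sketch below, p is the fraction of per-step work that the adaptive resolution reduces and s the speedup of that fraction; both values are illustrative, not the paper's measured quantities.

```python
# Minimal sketch of the Amdahl-type estimate: if a fraction p of the per-step
# work is subject to the coarse-graining speedup s and the rest (force mixing,
# bookkeeping in the hybrid region) is not, the overall speedup follows the
# classic Amdahl form. Values of p and s below are assumed.
def amdahl_speedup(p, s):
    """Overall speedup when a fraction p of the work is accelerated by s."""
    return 1.0 / ((1.0 - p) + p / s)

p = 0.85   # fraction of work reducible by coarse-graining, assumed
s = 6.0    # speedup of the coarse-grained force evaluation, assumed
print(f"estimated overall speedup: {amdahl_speedup(p, s):.2f}x")
```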

  2. Efficient quantum-classical method for computing thermal rate constant of recombination: application to ozone formation.

    PubMed

    Ivanov, Mikhail V; Babikov, Dmitri

    2012-05-14

    An efficient method is proposed for computing the thermal rate constant of a recombination reaction that proceeds according to the energy transfer mechanism, in which an energized molecule is formed from reactants first and is stabilized later by collision with a quencher. The mixed quantum-classical theory for the collisional energy transfer and the ro-vibrational energy flow [M. Ivanov and D. Babikov, J. Chem. Phys. 134, 144107 (2011)] is employed to treat the dynamics of the molecule + quencher collision. Efficiency is achieved by sampling simultaneously (i) the thermal collision energy, (ii) the impact parameter, and (iii) the incident direction of the quencher, as well as (iv) the rotational state of the energized molecule. This approach is applied to calculate the third-order rate constant of the recombination reaction that forms the (16)O(18)O(16)O isotopomer of ozone. A comparison of the predicted rate with the experimental result is presented.

  3. Classroom Experiences in an Engineering Design Graphics Course with a CAD/CAM Extension.

    ERIC Educational Resources Information Center

    Barr, Ronald E.; Juricic, Davor

    1997-01-01

    Reports on the development of a new CAD/CAM laboratory experience for an Engineering Design Graphics (EDG) course. The EDG curriculum included freehand sketching, introduction to Computer-Aided Design and Drafting (CADD), and emphasized 3-D solid modeling. Reviews the project and reports on the testing of the new laboratory components which were…

  4. Extending Engineering Design Graphics Laboratories to Have a CAD/CAM Component: Implementation Issues.

    ERIC Educational Resources Information Center

    Juricic, Davor; Barr, Ronald E.

    1996-01-01

    Reports on a project that extended the Engineering Design Graphics curriculum to include instruction and laboratory experience in computer-aided design, analysis, and manufacturing (CAD/CAM). Discusses issues in project implementation, including introduction of finite element analysis to lower-division students, feasibility of classroom prototype…

  5. Preparing for High Technology: CAD/CAM Programs. Research & Development Series No. 234.

    ERIC Educational Resources Information Center

    Abram, Robert; And Others

    This guide is one of three developed to provide information and resources to assist in planning and developing postsecondary technician training programs in high technology areas. It is specifically intended for vocational-technical educators and planners in the initial stages of planning a specialized training option in computer-aided design (CAD)…

  6. Bridging CAGD knowledge into CAD/CG applications: Mathematical theories as stepping stones of innovations

    NASA Astrophysics Data System (ADS)

    Gobithaasan, R. U.; Miura, Kenjiro T.; Hassan, Mohamad Nor

    2014-07-01

    Computer Aided Geometric Design (CAGD), which provides the mathematical underpinnings of Computer Aided Design (CAD) and Computer Graphics (CG), has been taught in a number of Malaysian universities under the umbrella of Mathematical Sciences faculties/departments. CAD/CG, on the other hand, is taught under either an Engineering or a Computer Science faculty. Even though CAGD researchers, educators, and students (denoted here as contributors) have been enriching this field of study through article and journal publication, many fail to convert their ideas into constructive innovation because of the gap between CAGD contributors and practitioners (engineers, product designers, architects, and artists). This paper addresses this issue by advocating a number of technologies that can be used to turn CAGD contributions into innovations whose practical impact is immediately felt by CAD/CG practitioners. The underlying principle of solving this issue is twofold: first, to expose CAGD contributors to ways of turning mathematical ideas into plug-ins; and second, to impart relevant CAGD theories to CAD/CG practitioners. Both cases are discussed in detail, and the final section shows examples illustrating the importance of turning mathematical knowledge into innovations.

  7. A distributed data base management facility for the CAD/CAM environment

    NASA Technical Reports Server (NTRS)

    Balza, R. M.; Beaudet, R. W.; Johnson, H. R.

    1984-01-01

    Current IPAD research in the area of distributed data base management considers facilities for supporting CAD/CAM data management in a heterogeneous network of computers encompassing multiple data base managers supporting a variety of data models. These facilities include coordinated execution of multiple DBMSs to provide for administration of and access to data distributed across them.

  8. A survey and taxonomy on energy efficient resource allocation techniques for cloud computing systems

    SciTech Connect

    Hameed, Abdul; Khoshkbarforoushha, Alireza; Ranjan, Rajiv; Jayaraman, Prem Prakash; Kolodziej, Joanna; Balaji, Pavan; Zeadally, Sherali; Malluhi, Qutaibah Marwan; Tziritas, Nikos; Vishnu, Abhinav; Khan, Samee U.; Zomaya, Albert

    2014-06-06

    In a cloud computing paradigm, energy-efficient allocation of different virtualized ICT resources (servers, storage disks, networks, and the like) is a complex problem due to the presence of heterogeneous application workloads (e.g., content delivery networks, MapReduce, web applications, and the like) having contentious allocation requirements in terms of ICT resource capacities (e.g., network bandwidth, processing speed, response time, etc.). Several recent papers have tried to address the issue of improving energy efficiency in allocating cloud resources to applications, with varying degrees of success. However, to the best of our knowledge there is no published literature on this subject that clearly articulates the research problem and provides a research taxonomy for succinct classification of existing techniques. Hence, the main aim of this paper is to identify open challenges associated with energy-efficient resource allocation. In this regard, the study first outlines the problem and the existing hardware- and software-based techniques available for this purpose. Furthermore, available techniques already presented in the literature are summarized based on the energy-efficient research dimension taxonomy. The advantages and disadvantages of the existing techniques are comprehensively analyzed against the proposed research dimension taxonomy, namely: resource adaption policy, objective function, allocation method, allocation operation, and interoperability.

  9. Efficient rendering and compression for full-parallax computer-generated holographic stereograms

    NASA Astrophysics Data System (ADS)

    Kartch, Daniel Aaron

    2000-10-01

    In the past decade, we have witnessed a quantum leap in rendering technology and a simultaneous increase in the usage of computer-generated images. Despite the advances made thus far, we are faced with an ever-increasing desire for technology which can provide a more realistic, more immersive experience. One fledgling technology which shows great promise is the electronic holographic display. Holograms are capable of producing a fully three-dimensional image, exhibiting all the depth cues of a real scene, including motion parallax, binocular disparity, and focal effects. Furthermore, they can be viewed simultaneously by any number of users, without the aid of special headgear or position trackers. However, to date, they have been limited in use because of their computational intractability. This thesis deals with the complex task of computing a hologram for use with such a device. Specifically, we will focus on one particular type of hologram: the holographic stereogram. A holographic stereogram is created by generating a large set of two-dimensional images of a scene as seen from multiple camera points, and then converting them to a holographic interference pattern. It is closely related to the light fields or lumigraphs used in image-based rendering. Most previous algorithms have treated the problem of rendering these images as independent computations, ignoring a great deal of coherency which could be used to our advantage. We present a new computationally efficient algorithm which operates on the image set as a whole, rather than on its individual elements. Scene polygons are mapped by perspective projection into a four-dimensional space, where they are scan-converted into 4D color and depth buffers. We use a set of very simple data structures and basic operations to form an algorithm which will lend itself well to future hardware implementation, so as to drive a real-time holographic display. We also examined issues related to the compression of stereograms

  10. Space crew radiation exposure analysis system based on a commercial stand-alone CAD system

    NASA Technical Reports Server (NTRS)

    Appleby, Matthew H.; Golightly, Michael J.; Hardy, Alva C.

    1992-01-01

    Major improvements have recently been completed in the approach to spacecraft shielding analysis. A Computer-Aided Design (CAD)-based system has been developed for determining the shielding provided to any point within or external to the spacecraft. Shielding analysis is performed using a commercially available stand-alone CAD system and a customized ray-tracing subroutine contained within a standard engineering modeling software package. This improved shielding analysis technique has been used in several vehicle design projects such as a Mars transfer habitat, pressurized lunar rover, and the redesigned Space Station. Results of these analyses are provided to demonstrate the applicability and versatility of the system.

  11. Development of CAD prototype system for Crohn's disease

    NASA Astrophysics Data System (ADS)

    Oda, Masahiro; Kitasaka, Takayuki; Furukawa, Kazuhiro; Watanabe, Osamu; Ando, Takafumi; Goto, Hidemi; Mori, Kensaku

    2010-03-01

    The purpose of this paper is to present a CAD prototype system for Crohn's disease. Crohn's disease causes inflammation or ulcers of the gastrointestinal tract, and the number of patients with Crohn's disease is increasing in Japan. Symptoms of Crohn's disease include intestinal stenosis, longitudinal ulcers, and fistulae. An optical endoscope cannot pass through an intestinal stenosis in some cases. We propose a new CAD system using abdominal fecal-tagging CT images for efficient diagnosis of Crohn's disease. The system displays virtual unfolded (VU), virtual endoscopic, curved planar reconstruction, multi-planar reconstruction, and outside views of both the small and large intestines. To generate the VU views, we employ a small and large intestine extraction method followed by a simple electronic cleansing method. The intestine extraction is based on a region growing process, which uses the characteristic that tagged fluid neighbors air in the intestine. The electronic cleansing enables observation of the intestinal wall under tagged fluid. We change the height of the VU views according to the perimeter of the intestine. In addition, we developed a method to enhance longitudinal ulcers in the system's views. We enhance concave parts of the intestinal wall, which are caused by longitudinal ulcers, based on local intensity structure analysis. We examined the small and large intestines of eleven CT images with the proposed system. The VU views enabled efficient observation of the intestinal wall. The height change of the VU views helps in finding intestinal stenosis on the VU views. The concave region enhancement made longitudinal ulcers clear in the views.

  12. SiO2-nanocomposite film coating of CAD/CAM composite resin blocks improves surface hardness and reduces susceptibility to bacterial adhesion.

    PubMed

    Kamonwanon, Pranithida; Hirose, Nanako; Yamaguchi, Satoshi; Sasaki, Jun-Ichi; Kitagawa, Haruaki; Kitagawa, Ranna; Thaweboon, Sroisiri; Srikhirin, Toemsak; Imazato, Satoshi

    2017-01-31

    Composite resin blocks for computer-aided design/computer-aided manufacturing (CAD/CAM) applications have recently become available. However, CAD/CAM composite resins have lower wear resistance and accumulate more plaque than CAD/CAM ceramic materials. We assessed the effects of a SiO2-nanocomposite film coating on the surface hardness and bacterial attachment of four types of CAD/CAM composite resin blocks: Cerasmart, Katana Avencia block, Lava Ultimate, and Block HC. All composite blocks with the coating demonstrated significantly greater Vickers hardness, reduced surface roughness, and greater hydrophobicity than those without it. Adhesion of Streptococcus mutans to the coated specimens was significantly less than to the uncoated specimens. These reduced levels of bacterial adherence on the coated surface were still evident after treatment with saliva. Surface modification by SiO2-nanocomposite film coating thus has the potential to improve the wear resistance and reduce the plaque susceptibility of CAD/CAM composite resin restorations.

  13. A computationally efficient depression-filling algorithm for digital elevation models, applied to proglacial lake drainage

    NASA Astrophysics Data System (ADS)

    Berends, Constantijn J.; van de Wal, Roderik S. W.

    2016-12-01

    Many processes govern the deglaciation of ice sheets. One of the processes that is usually ignored is the calving of ice in lakes that temporarily surround the ice sheet. In order to capture this process a "flood-fill algorithm" is needed. Here we present and evaluate several optimizations to a standard flood-fill algorithm in terms of computational efficiency. As an example, we determine the land-ocean mask for a 1 km resolution digital elevation model (DEM) of North America and Greenland, a geographical area of roughly 7000 by 5000 km (roughly 35 million elements), about half of which is covered by ocean. Determining the land-ocean mask with our improved flood-fill algorithm reduces computation time by 90 % relative to using a standard stack-based flood-fill algorithm. This implies that it is now feasible to include the calving of ice in lakes as a dynamical process inside an ice-sheet model. We demonstrate this by using bedrock elevation, ice thickness and geoid perturbation fields from the output of a coupled ice-sheet-sea-level equation model at 30 000 years before present and determine the extent of Lake Agassiz, using both the standard and improved versions of the flood-fill algorithm. We show that several optimizations to the flood-fill algorithm used for filling a depression up to a water level, which is not defined beforehand, decrease the computation time by up to 99 %. The resulting reduction in computation time allows determination of the extent and volume of depressions in a DEM over large geographical grids or repeatedly over long periods of time, where computation time might otherwise be a limiting factor. The algorithm can be used for all glaciological and hydrological models, which need to trace the evolution over time of lakes or drainage basins in general.
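
    For reference, the baseline that the paper's optimizations improve upon is a standard stack-based flood fill; a minimal sketch for a land-ocean mask follows (toy DEM, assumed sea level and seed; the paper's improved variants are not reproduced).

```python
import numpy as np

# Minimal sketch: mark all "ocean" cells, i.e. cells below sea level that are
# connected (4-neighborhood) to a seed cell on the domain boundary.
def flood_fill_ocean(bed: np.ndarray, sea_level: float, seed=(0, 0)) -> np.ndarray:
    """Return a boolean mask of ocean cells in a DEM of bedrock elevations."""
    ny, nx = bed.shape
    ocean = np.zeros((ny, nx), dtype=bool)
    stack = [seed]
    while stack:
        i, j = stack.pop()
        if ocean[i, j] or bed[i, j] >= sea_level:
            continue
        ocean[i, j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < ny and 0 <= nj < nx and not ocean[ni, nj]:
                stack.append((ni, nj))
    return ocean

# Toy DEM: a below-sea-level strip connected to the boundary becomes ocean.
bed = np.full((100, 100), 50.0)
bed[:, :10] = -100.0                          # "ocean" strip on the western edge
print(flood_fill_ocean(bed, sea_level=0.0).sum())   # -> 1000 cells
```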

  14. Implant-supported fixed dental prostheses with CAD/CAM-fabricated porcelain crown and zirconia-based framework.

    PubMed

    Takaba, Masayuki; Tanaka, Shinpei; Ishiura, Yuichi; Baba, Kazuyoshi

    2013-07-01

    Recently, fixed dental prostheses (FDPs) with a hybrid structure of CAD/CAM porcelain crowns adhered to a CAD/CAM zirconia framework (PAZ) have been developed. The aim of this report was to describe the clinical application of a newly developed implant-supported FDP fabrication system, which uses PAZ, and to evaluate the outcome after a maximum application period of 36 months. Implants were placed in three patients with edentulous areas in either the maxilla or mandible. After the implant fixtures had successfully integrated with bone, gold-platinum alloy or zirconia custom abutments were first fabricated. Zirconia framework wax-up was performed on the custom abutments, and the CAD/CAM zirconia framework was prepared using the CAD/CAM system. Next, wax-up was performed on working models for porcelain crown fabrication, and CAD/CAM porcelain crowns were fabricated. The CAD/CAM zirconia frameworks and CAD/CAM porcelain crowns were bonded using adhesive resin cement, and the PAZ was cemented. Cementation of the implant superstructure improved the esthetics and masticatory efficiency in all patients. No undesirable outcomes, such as superstructure chipping, stomatognathic dysfunction, or peri-implant bone resorption, were observed in any of the patients. PAZ may be a potential solution for ceramic-related clinical problems such as chipping and fracture and the associated complicated repair procedures in implant-supported FDPs.

  15. Switchgrass PviCAD1: Understanding residues important for substrate preferences and activity

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Lignin is a major component of plant cell walls and is a complex aromatic heteropolymer. Reducing lignin content improves conversion efficiency into liquid fuels, and enzymes involved in lignin biosynthesis are attractive targets for bioengineering. Cinnamyl alcohol dehydrogenase (CAD) catalyzes t...

  16. A Computationally-Efficient Inverse Approach to Probabilistic Strain-Based Damage Diagnosis

    NASA Technical Reports Server (NTRS)

    Warner, James E.; Hochhalter, Jacob D.; Leser, William P.; Leser, Patrick E.; Newman, John A

    2016-01-01

    This work presents a computationally-efficient inverse approach to probabilistic damage diagnosis. Given strain data at a limited number of measurement locations, Bayesian inference and Markov Chain Monte Carlo (MCMC) sampling are used to estimate probability distributions of the unknown location, size, and orientation of damage. Substantial computational speedup is obtained by replacing a three-dimensional finite element (FE) model with an efficient surrogate model. The approach is experimentally validated on cracked test specimens where full field strains are determined using digital image correlation (DIC). Access to full field DIC data allows for testing of different hypothetical sensor arrangements, facilitating the study of strain-based diagnosis effectiveness as the distance between damage and measurement locations increases. The ability of the framework to effectively perform both probabilistic damage localization and characterization in cracked plates is demonstrated and the impact of measurement location on uncertainty in the predictions is shown. Furthermore, the analysis time to produce these predictions is orders of magnitude less than a baseline Bayesian approach with the FE method by utilizing surrogate modeling and effective numerical sampling approaches.
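
    A minimal sketch of the surrogate-plus-MCMC pattern described here, with an invented analytic "surrogate" standing in for the finite element model and made-up sensor and noise values:

```python
import numpy as np

# Minimal sketch: Metropolis random walk over a 2-D damage location, with a
# cheap placeholder response function in place of the FE model or trained
# surrogate. Everything below (response, sensors, noise) is illustrative.
rng = np.random.default_rng(1)

def surrogate_strain(theta, sensors):
    # placeholder surrogate: measured strain decays with distance from damage
    d = np.hypot(sensors[:, 0] - theta[0], sensors[:, 1] - theta[1])
    return 1.0 / (1.0 + d)

sensors = rng.uniform(0.0, 10.0, size=(8, 2))       # strain-gauge locations
truth = np.array([4.0, 6.0])                        # "unknown" damage location
data = surrogate_strain(truth, sensors) + rng.normal(0.0, 0.01, size=8)

def log_post(theta):
    if np.any(theta < 0.0) or np.any(theta > 10.0): # uniform prior on the plate
        return -np.inf
    r = data - surrogate_strain(theta, sensors)     # Gaussian likelihood
    return -0.5 * np.dot(r, r) / 0.01 ** 2

theta = np.array([5.0, 5.0])
lp = log_post(theta)
samples = []
for _ in range(20_000):                             # Metropolis updates
    prop = theta + rng.normal(0.0, 0.3, size=2)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)
print(np.mean(samples[5_000:], axis=0))             # posterior mean ~ [4, 6]
```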

  17. Computationally Efficient Power Allocation Algorithm in Multicarrier-Based Cognitive Radio Networks: OFDM and FBMC Systems

    NASA Astrophysics Data System (ADS)

    Shaat, Musbah; Bader, Faouzi

    2010-12-01

    Cognitive Radio (CR) systems have been proposed to increase spectrum utilization by opportunistically accessing unused spectrum. Multicarrier communication systems are promising candidates for CR systems. Due to its high spectral efficiency, filter bank multicarrier (FBMC) can be considered an alternative to conventional orthogonal frequency division multiplexing (OFDM) for transmission over CR networks. This paper addresses the problem of resource allocation in multicarrier-based CR networks. The objective is to maximize the downlink capacity of the network under constraints on both total power and the interference introduced to the primary users (PUs). The optimal solution has high computational complexity, which makes it unsuitable for practical applications, and hence a low-complexity suboptimal solution is proposed. The proposed algorithm utilizes the spectrum holes in PU bands as well as active PU bands. The performance of the proposed algorithm is investigated for OFDM- and FBMC-based CR systems. Simulation results illustrate that the proposed low-complexity resource allocation algorithm achieves near-optimal performance and proves the efficiency of using FBMC in the CR context.
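
    As a baseline for this kind of constrained allocation, a textbook water-filling sketch with per-subcarrier caps standing in for the PU interference limits (not the paper's specific suboptimal algorithm; all numbers are toy values):

```python
import numpy as np

# Minimal sketch, assuming per-subcarrier caps p_cap model the interference
# limits and feasibility holds (p_cap.sum() >= p_total). Bisect on the water
# level mu so that the capped allocation exhausts the power budget.
def waterfill(gains, p_total, p_cap):
    lo, hi = 0.0, p_total + 1.0 / gains.min() + p_cap.max()
    for _ in range(100):                        # bisection on the water level
        mu = 0.5 * (lo + hi)
        p = np.clip(mu - 1.0 / gains, 0.0, p_cap)
        lo, hi = (mu, hi) if p.sum() < p_total else (lo, mu)
    return p

g = np.array([2.0, 1.0, 0.5, 0.25])             # toy subcarrier channel gains
caps = np.array([1.0, 1.0, 0.2, 0.2])           # toy interference-driven caps
p = waterfill(g, p_total=2.0, p_cap=caps)
print(p, p.sum())                               # strongest subcarriers fill first
```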

  18. An efficient algorithm to compute row and column counts for sparse Cholesky factorization

    SciTech Connect

    Gilbert, J.R.; Ng, E.G.; Peyton, B.W.

    1992-09-01

    Let an undirected graph G be given, along with a specified depth-first spanning tree T. We give almost-linear-time algorithms to solve the following two problems: First, for every vertex v, compute the number of descendants w of v for which some descendant of w is adjacent (in G) to v. Second, for every vertex v, compute the number of ancestors of v that are adjacent (in G) to at least one descendant of v. These problems arise in Cholesky and QR factorizations of sparse matrices. Our algorithms can be used to determine the number of nonzero entries in each row and column of the triangular factor of a matrix from the zero/nonzero structure of the matrix. Such a prediction makes storage allocation for sparse matrix factorizations more efficient. Our algorithms run in time linear in the size of the input times a slowly growing inverse of Ackermann's function. The best previously known algorithms for these problems ran in time linear in the sum of the nonzero counts, which is usually much larger. We give experimental results demonstrating the practical efficiency of the new algorithms.

  19. Design of Complete Dentures by Adopting CAD Developed for Fixed Prostheses.

    PubMed

    Li, Yanfeng; Han, Weili; Cao, Jing; Iv, Yuan; Zhang, Yue; Han, Yishi; Shen, Yi; Ma, Zheng; Liu, Huanyue

    2016-11-21

    The demand for complete dentures is expected to increase worldwide, but complete dentures are still designed and fabricated largely by hand through a broad series of clinical and laboratory procedures. The quality of complete dentures therefore depends largely on the skills of the dentist and technician, making quality control difficult. Computer-aided design and manufacturing (CAD/CAM) has been used to design and fabricate various dental restorations, including dental inlays, veneers, crowns, partial crowns, and fixed partial dentures (FPDs). It has been envisioned that the application of CAD/CAM technology could reduce the intensive clinical and laboratory work required to fabricate complete dentures; however, CAD/CAM is seldom used for complete dentures because, although the CAM techniques are at an advanced stage, suitable CAD software for designing virtual complete dentures is lacking. Here we report the successful design of virtual complete dentures using the CAD software 3Shape Dental System 2012, which was developed for designing fixed prostheses rather than complete dentures. Our results demonstrate that complete dentures can be successfully designed by combining two modeling processes, single coping and full anatomical FPD, available in 3Shape Dental System 2012.

  1. Study on the integration approaches to CAD/CAPP/FMS in garment CIMS

    NASA Astrophysics Data System (ADS)

    Wang, Xiankui; Tian, Wensheng; Liu, Chengying; Li, Zhizhong

    1995-08-01

    Computer integrated manufacturing systems (CIMS), as an advanced methodology, have been applied in many industrial fields. There is, however, little research on the application of CIMS in the garment industry, especially on integrated approaches to CAD, CAPP, and FMS in garment CIMS. In this paper, the current state of CAD, CAPP, and FMS in the garment industry is discussed, and the information requirements between them, as well as approaches to their integration, are investigated. Representation of garment product data by group technology coding is proposed. Based on group technology, a shared database can be constructed as an integration element, which leads to the integration of CAD/CAPP/FMS in garment CIMS.

  2. Rule-Based Design of Plant Expression Vectors Using GenoCAD.

    PubMed

    Coll, Anna; Wilson, Mandy L; Gruden, Kristina; Peccoud, Jean

    2015-01-01

    Plant synthetic biology requires software tools to assist in the design of complex multi-genic expression plasmids. Here a vector design strategy to express genes in plants is formalized and implemented as a grammar in GenoCAD, a computer-aided design tool for synthetic biology. It includes a library of plant biological parts organized into structural categories and a set of rules describing how to assemble these parts into large constructs. The rules developed here are organized and divided into three main subsections according to the aim of the final construct: protein localization studies, promoter analysis, and protein-protein interaction experiments. The GenoCAD plant grammar guides the user through the design while allowing users to customize vectors according to their needs. The plant grammar implemented in GenoCAD will therefore help plant biologists take advantage of methods from synthetic biology to design expression vectors supporting their research projects.

  3. On the Use of Parametric-CAD Systems and Cartesian Methods for Aerodynamic Design

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.; Pulliam, Thomas H.

    2004-01-01

    Automated, high-fidelity tools for aerodynamic design face critical issues in attempting to optimize real-life geometry and in permitting radical design changes. Success in these areas promises not only significantly shorter design-cycle times, but also superior and unconventional designs. To address these issues, we investigate the use of a parametric-CAD system in conjunction with an embedded-boundary Cartesian method. Our goal is to combine the modeling capabilities of feature-based CAD with the robustness and flexibility of component-based Cartesian volume-mesh generation for complex geometry problems. We present the development of an automated optimization framework with a focus on the deployment of such a CAD-based design approach in a heterogeneous parallel computing environment.

  4. CAD for 4-step braided fabric composites

    SciTech Connect

    Pandey, R.; Hahn, H.T.

    1994-12-31

    A general framework is provided to predict the thermoelastic properties of three-dimensional 4-step braided fabric composites. Three key steps are involved: (1) the development of a CAD model for yarn architecture, (2) the extraction of a unit cell, and (3) the prediction of the thermoelastic properties based on micromechanics. The main features of each step are summarized, and experimental correlations are provided in the paper.

  5. Power- and space-efficient image computation with compressive processing: I. Background and theory

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.

    2000-11-01

    Surveillance imaging applications on small autonomous imaging platforms present challenges of highly constrained power supply and form factor, with potentially demanding specifications for target detection and recognition. Absent significant advances in image processing hardware, such power and space restrictions can imply severely limited computational capabilities. This holds especially for compute-intensive algorithms with high-precision fixed- or floating-point operations in deep pipelines that process large data streams. Such algorithms tend not to be amenable to small or simplified architectures involving (for example) reduced precision, reconfigurable logic, low-power gates, or energy recycling schemes. In this series of two papers, a technique of reduced-power computing called compressive processing (CXP) is presented and applied to several low- and mid-level computer vision operations. CXP computes over compressed data without resorting to intermediate decompression steps. Because compression leaves fewer data, CXP requires fewer operations than computing over the corresponding uncompressed image. In several cases, CXP techniques yield speedups on the order of the compression ratio. Where lossy high-compression transforms are employed, it is often possible to use approximations to derive CXP operations that yield increased computational efficiency via a simplified mix of operations. The reduced work requirement, which follows directly from the presence of fewer data, also implies a reduced power requirement, especially if simpler operations are involved in compressive versus noncompressive operations. Several image processing algorithms (edge detection, morphological operations, and component labeling) are analyzed in the context of three compression transforms: vector quantization (VQ), visual pattern image coding (VPIC), and EBLAST. The latter is a lossy high-compression transformation developed for underwater

  6. Custom hip prostheses by integrating CAD and casting technology

    NASA Astrophysics Data System (ADS)

    Silva, Pedro F.; Leal, Nuno; Neto, Rui J.; Lino, F. Jorge; Reis, Ana

    2012-09-01

    Total Hip Arthroplasty (THA) is a surgical intervention that has been achieving high rates of success, leaving room for research on long-run durability, patient comfort, and cost reduction. Even so, up to the present, little research has been done to improve the method of manufacturing customized prostheses; common customized prostheses are made by full machining. This document presents a different methodology which combines the study of medical images through CAD (Computer Aided Design) software, SL additive manufacturing, ceramic shell manufacture, precision foundry with titanium alloys, and Computer Aided Manufacturing (CAM). The goal is to achieve the best patient comfort, stress distribution, and maximum lifetime for a prosthesis produced by this integrated methodology. The way to achieve this desideratum is to make custom hip prostheses adapted to each patient's needs and natural physiognomy. Not only is the process reliable, it also represents a cost reduction compared to conventional fully machined custom hip prostheses.

  7. Computationally Efficient Multiscale Reactive Molecular Dynamics to Describe Amino Acid Deprotonation in Proteins

    PubMed Central

    2016-01-01

    An important challenge in the simulation of biomolecular systems is a quantitative description of the protonation and deprotonation process of amino acid residues. Despite the seeming simplicity of adding or removing a positively charged hydrogen nucleus, simulating the actual protonation/deprotonation process is inherently difficult. It requires both the explicit treatment of the excess proton, including its charge defect delocalization and Grotthuss shuttling through inhomogeneous moieties (water and amino residues), and extensive sampling of coupled condensed phase motions. In a recent paper (J. Chem. Theory Comput. 2014, 10, 2729−2737), a multiscale approach was developed to map high-level quantum mechanics/molecular mechanics (QM/MM) data into a multiscale reactive molecular dynamics (MS-RMD) model in order to describe amino acid deprotonation in bulk water. In this article, we extend the fitting approach (called FitRMD) to create MS-RMD models for ionizable amino acids within proteins. The resulting models are shown to faithfully reproduce the free energy profiles of the reference QM/MM Hamiltonian for PT inside an example protein, the ClC-ec1 H+/Cl– antiporter. Moreover, we show that the resulting MS-RMD models are computationally efficient enough to then characterize more complex 2-dimensional free energy surfaces due to slow degrees of freedom such as water hydration of internal protein cavities that can be inherently coupled to the excess proton charge translocation. The FitRMD method is thus shown to be an effective way to map ab initio level accuracy into a much more computationally efficient reactive MD method in order to explicitly simulate and quantitatively describe amino acid protonation/deprotonation in proteins. PMID:26734942

  8. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems

    PubMed Central

    Ehsan, Shoaib; Clark, Adrian F.; ur Rehman, Naveed; McDonald-Maier, Klaus D.

    2015-01-01

    The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems. PMID:26184211
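
    The serial recurrence that the paper's hardware algorithms decompose is easy to state; a minimal software sketch follows (the row-parallel decompositions themselves are not reproduced here).

```python
import numpy as np

# Minimal sketch of the standard integral-image recurrence:
#   ii(x, y) = i(x, y) + ii(x-1, y) + ii(x, y-1) - ii(x-1, y-1)
def integral_image(img: np.ndarray) -> np.ndarray:
    h, w = img.shape
    ii = np.zeros((h, w), dtype=np.int64)
    for y in range(h):
        row_sum = 0                       # running sum along the current row
        for x in range(w):
            row_sum += int(img[y, x])
            ii[y, x] = row_sum + (ii[y - 1, x] if y > 0 else 0)
    return ii

# Any rectangle sum then costs four lookups, independent of its size:
def rect_sum(ii, x0, y0, x1, y1):         # inclusive corner coordinates
    total = ii[y1, x1]
    if x0 > 0: total -= ii[y1, x0 - 1]
    if y0 > 0: total -= ii[y0 - 1, x1]
    if x0 > 0 and y0 > 0: total += ii[y0 - 1, x0 - 1]
    return total

img = np.arange(16).reshape(4, 4)
assert rect_sum(integral_image(img), 1, 1, 3, 2) == img[1:3, 1:4].sum()
```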

  9. Virtual tomography: a new approach to efficient human-computer interaction for medical imaging

    NASA Astrophysics Data System (ADS)

    Teistler, Michael; Bott, Oliver J.; Dormeier, Jochen; Pretschner, Dietrich P.

    2003-05-01

    By utilizing virtual reality (VR) technologies, the computer system virtusMED implements the concept of virtual tomography for exploring medical volumetric image data. Photographic data from a virtual patient as well as CT or MRI data from real patients are visualized within a virtual scene. The view of this scene is determined either by a conventional computer mouse, a head-mounted display, or a freely movable flat panel. A virtual examination probe is used to generate oblique tomographic images which are computed from the given volume data. In addition, virtual models can be integrated into the scene, such as anatomical models of bones and inner organs. virtusMED has been shown to be a valuable tool for learning human anatomy and for understanding the principles of medical imaging such as sonography. Furthermore, its utilization to improve CT- and MRI-based diagnosis is very promising. Compared to VR systems of the past, the standard PC-based system virtusMED is a cost-efficient and easily maintained solution providing a highly intuitive, time-saving user interface for medical imaging.

  10. Dendritic nonlinearities are tuned for efficient spike-based computations in cortical circuits

    PubMed Central

    Ujfalussy, Balázs B; Makara, Judit K; Branco, Tiago; Lengyel, Máté

    2015-01-01

    Cortical neurons integrate thousands of synaptic inputs in their dendrites in highly nonlinear ways. It is unknown how these dendritic nonlinearities in individual cells contribute to computations at the level of neural circuits. Here, we show that dendritic nonlinearities are critical for the efficient integration of synaptic inputs in circuits performing analog computations with spiking neurons. We developed a theory that formalizes how a neuron's dendritic nonlinearity that is optimal for integrating synaptic inputs depends on the statistics of its presynaptic activity patterns. Based on their in vivo presynaptic population statistics (firing rates, membrane potential fluctuations, and correlations due to ensemble dynamics), our theory accurately predicted the responses of two different types of cortical pyramidal cells to patterned stimulation by two-photon glutamate uncaging. These results reveal a new computational principle underlying dendritic integration in cortical neurons by suggesting a functional link between cellular- and systems-level properties of cortical circuits. DOI: http://dx.doi.org/10.7554/eLife.10056.001 PMID:26705334

  11. Methods for Efficiently and Accurately Computing Quantum Mechanical Free Energies for Enzyme Catalysis.

    PubMed

    Kearns, F L; Hudson, P S; Boresch, S; Woodcock, H L

    2016-01-01

    Enzyme activity is inherently linked to free energies of transition states, ligand binding, protonation/deprotonation, etc.; these free energies, and thus enzyme function, can be affected by residue mutations, allosterically induced conformational changes, and much more. Therefore, being able to predict free energies associated with enzymatic processes is critical to understanding and predicting their function. Free energy simulation (FES) has historically been a computational challenge as it requires both the accurate description of inter- and intramolecular interactions and adequate sampling of all relevant conformational degrees of freedom. The hybrid quantum mechanical/molecular mechanical (QM/MM) framework is the current tool of choice when accurate computations of macromolecular systems are essential. Unfortunately, robust and efficient approaches that employ the high levels of computational theory needed to accurately describe many reactive processes (i.e., ab initio, DFT), while also including explicit solvation effects and accounting for extensive conformational sampling, are essentially nonexistent. In this chapter, we give a brief overview of two recently developed methods that mitigate several major challenges associated with QM/MM FES: the QM non-Boltzmann Bennett's acceptance ratio method and the QM nonequilibrium work method. We also describe the use of these methods to calculate free energies associated with (1) relative properties and (2) reaction paths, using simple test cases of relevance to enzymes.

  12. A Solution Methodology and Computer Program to Efficiently Model Thermodynamic and Transport Coefficients of Mixtures

    NASA Technical Reports Server (NTRS)

    Ferlemann, Paul G.

    2000-01-01

    A solution methodology has been developed to efficiently model multi-species, chemically frozen, thermally perfect gas mixtures. The method relies on the ability to generate a single (composite) set of thermodynamic and transport coefficients prior to beginning a CFD solution. While not fundamentally a new concept, many applied CFD users are not aware of this capability nor have a mechanism to easily and confidently generate new coefficients. A database of individual species property coefficients has been created for 48 species. The seven-coefficient form of the thermodynamic functions is currently used rather than the ten-coefficient form due to the similarity of the calculated properties, low-temperature behavior, and reduced CPU requirements. Sutherland laminar viscosity and thermal conductivity coefficients were computed in a consistent manner from available reference curves. A computer program has been written to provide CFD users with a convenient method to generate composite species coefficients for any mixture. Mach 7 forebody/inlet calculations demonstrated nearly equivalent results and significant CPU time savings compared to a multi-species solution approach. Results from high-speed combustor analysis also illustrate the ability to model inert test gas contaminants without additional computational expense.
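
    Because the thermodynamic polynomials are linear in their coefficients, fixed mass fractions allow the mixture coefficients to be formed once, up front; a minimal sketch follows (placeholder coefficients, assumed already converted to mass-specific form, not database values).

```python
import numpy as np

# Minimal sketch: cp is polynomial in T and linear in the coefficients, so a
# frozen mixture reduces to one composite coefficient set built ahead of time.
def cp(a, T):
    """Specific heat from the 5-term polynomial part of a 7-coefficient fit."""
    return a @ np.array([1.0, T, T ** 2, T ** 3, T ** 4])

a_N2 = np.array([1.04e3, -2.0e-2, 3.0e-4, -1.0e-7, 1.0e-11])  # placeholder, J/(kg K)
a_O2 = np.array([9.20e2,  1.0e-1, 1.0e-4, -5.0e-8, 5.0e-12])  # placeholder, J/(kg K)
Y = {"N2": 0.767, "O2": 0.233}               # fixed (frozen) mass fractions

a_mix = Y["N2"] * a_N2 + Y["O2"] * a_O2      # composite coefficients, built once

T = 1000.0
assert np.isclose(cp(a_mix, T), Y["N2"] * cp(a_N2, T) + Y["O2"] * cp(a_O2, T))
print(cp(a_mix, T))                          # one polynomial evaluation per cell
```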

  13. Cross-scale Efficient Tensor Contractions for Coupled Cluster Computations Through Multiple Programming Model Backends

    SciTech Connect

    Ibrahim, Khaled Z.; Epifanovsky, Evgeny; Williams, Samuel W.; Krylov, Anna I.

    2016-07-26

    Coupled-cluster methods provide highly accurate models of molecular structure by explicit numerical calculation of tensors representing the correlation between electrons. These calculations are dominated by a sequence of tensor contractions, motivating the development of numerical libraries for such operations. While based on matrix-matrix multiplication, these libraries are specialized to exploit symmetries in the molecular structure and in electronic interactions, and thus reduce the size of the tensor representation and the complexity of contractions. The resulting algorithms are irregular and their parallelization has been previously achieved via the use of dynamic scheduling or specialized data decompositions. We introduce our efforts to extend the Libtensor framework to work in the distributed memory environment in a scalable and energy-efficient manner. We achieve up to 240× speedup compared with the best optimized shared memory implementation. We attain scalability to hundreds of thousands of compute cores on three distributed-memory architectures (Cray XC30 & XC40, BlueGene/Q), and on a heterogeneous GPU-CPU system (Cray XK7). As the bottlenecks shift from compute-bound DGEMMs to communication-bound collectives as the size of the molecular system scales, we adopt two radically different parallelization approaches for handling load imbalance. Nevertheless, we preserve a unified interface to both programming models to maintain the productivity of computational quantum chemists.

  14. The efficient computation of the nonlinear dynamic response of a foil-air bearing rotor system

    NASA Astrophysics Data System (ADS)

    Bonello, P.; Pham, H. M.

    2014-07-01

    The foil-air bearing (FAB) enables the emergence of oil-free turbomachinery. However, its potential to introduce undesirable nonlinear effects necessitates a reliable means for calculating the dynamic response. The computational burden has hitherto been alleviated by simplifications that compromised the true nature of the dynamic interaction between the rotor, air film and foil structure, introducing the potential for significant error. The overall novel contribution of this research is the development of efficient algorithms for the simultaneous solution of the state equations. The equations are extracted using two alternative transformations: (i) Finite Difference (FD); and (ii) a novel arbitrary-order Galerkin Reduction (GR) which does not use a grid, considerably reducing the number of state variables. A vectorized formulation facilitates the solution in two alternative ways: (i) in the time domain for arbitrary response via implicit integration using readily available routines; and (ii) in the frequency domain for the direct computation of self-excited periodic response via a novel Harmonic Balance (HB) method. GR and FD are cross-verified by time domain simulations which confirm that GR significantly reduces the computation time. Simulations also cross-verify the time and frequency domain solutions applied to the reference FD model and demonstrate the unique ability of HB to correctly accommodate structural damping.

  15. Computing the energy of a water molecule using multideterminants: A simple, efficient algorithm

    SciTech Connect

    Clark, Bryan K.; Morales, Miguel A; Mcminis, Jeremy; Kim, Jeongnim; Scuseria, Gustavo E

    2011-01-01

    Quantum Monte Carlo (QMC) methods such as variational Monte Carlo and fixed-node diffusion Monte Carlo depend heavily on the quality of the trial wave function. Although Slater-Jastrow wave functions are the most commonly used variational ansatz in electronic structure, more sophisticated wave functions are critical to ascertaining new physics. One such wave function is the multi-Slater-Jastrow wave function, which consists of a Jastrow function multiplied by the sum of Slater determinants. In this paper we describe a method for working with these wave functions in QMC codes that is easy to implement, efficient in both computational speed and memory, and easily parallelized. The computational cost scales quadratically with particle number, making this scaling no worse than the single-determinant case, and linearly with the total number of excitations. Additionally, we implement this method and use it to compute the ground state energy of a water molecule. 2011 American Institute of Physics. [doi:10.1063/1.3665391]

  16. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems.

    PubMed

    Ehsan, Shoaib; Clark, Adrian F; Naveed ur Rehman; McDonald-Maier, Klaus D

    2015-07-10

    The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.

  17. Quantum propagation of electronic excitations in macromolecules: A computationally efficient multiscale approach

    NASA Astrophysics Data System (ADS)

    Schneider, E.; a Beccara, S.; Mascherpa, F.; Faccioli, P.

    2016-07-01

    We introduce a theoretical approach for studying the quantum-dissipative dynamics of electronic excitations in macromolecules that enables calculations on large systems over long time intervals. All the parameters of the underlying microscopic Hamiltonian are obtained from ab initio electronic structure calculations, ensuring chemical detail. In the short-time regime, the theory is solvable using a diagrammatic perturbation theory, enabling analytic insight. To compute the time evolution of the density matrix at intermediate times, typically ≲ps, we develop a Monte Carlo algorithm free from any sign or phase problem, hence computationally efficient. Finally, the dynamics in the long-time and large-distance limit can be studied by combining the microscopic calculations with renormalization group techniques to define a rigorous low-resolution effective theory. We benchmark our Monte Carlo algorithm against the results obtained in perturbation theory and using a semiclassical nonperturbative scheme. We then apply it to compute the intrachain charge mobility in a realistic conjugated polymer.

  18. Improving Computational Efficiency of Model Predictive Control Genetic Algorithms for Real-Time Decision Support

    NASA Astrophysics Data System (ADS)

    Minsker, B. S.; Zimmer, A. L.; Ostfeld, A.; Schmidt, A.

    2014-12-01

    Enabling real-time decision support, particularly under conditions of uncertainty, requires computationally efficient algorithms that can rapidly generate recommendations. In this paper, a suite of model predictive control (MPC) genetic algorithms are developed and tested offline to explore their value for reducing CSOs during real-time use in a deep-tunnel sewer system. MPC approaches include the micro-GA, the probability-based compact GA, and domain-specific GA methods that reduce the number of decision variable values analyzed within the sewer hydraulic model, thus reducing algorithm search space. Minimum fitness and constraint values achieved by all GA approaches, as well as computational times required to reach the minimum values, are compared to large population sizes with long convergence times. Optimization results for a subset of the Chicago combined sewer system indicate that genetic algorithm variations with coarse decision variable representation, eventually transitioning to the entire range of decision variable values, are most efficient at addressing the CSO control problem. Although diversity-enhancing micro-GAs evaluate a larger search space and exhibit shorter convergence times, these representations do not reach minimum fitness and constraint values. The domain-specific GAs prove to be the most efficient and are used to test CSO sensitivity to energy costs, CSO penalties, and pressurization constraint values. The results show that CSO volumes are highly dependent on the tunnel pressurization constraint, with reductions of 13% to 77% possible with less conservative operational strategies. Because current management practices may not account for varying costs at CSO locations and electricity rate changes in the summer and winter, the sensitivity of the results is evaluated for variable seasonal and diurnal CSO penalty costs and electricity-related system maintenance costs, as well as different sluice gate constraint levels. These findings indicate
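
    Of the variants named above, the probability-based compact GA is the simplest to sketch; the toy below optimizes a onemax objective standing in for the sewer-model fitness (bit count, virtual population size, and thresholds are illustrative).

```python
import numpy as np

# Minimal sketch of a textbook compact GA: a probability vector replaces an
# explicit population; two sampled candidates compete, and each disagreeing
# bit's probability shifts toward the winner by 1/virtual_pop.
rng = np.random.default_rng(2)

def compact_ga(n_bits=32, virtual_pop=50, max_iters=50_000,
               fitness=lambda x: int(x.sum())):
    p = np.full(n_bits, 0.5)
    for _ in range(max_iters):
        a = rng.random(n_bits) < p            # sample two competing candidates
        b = rng.random(n_bits) < p
        win, lose = (a, b) if fitness(a) >= fitness(b) else (b, a)
        p = np.clip(p + (win.astype(float) - lose.astype(float)) / virtual_pop,
                    0.0, 1.0)
        if np.all((p < 0.02) | (p > 0.98)):   # converged: population "decided"
            break
    return (p > 0.5).astype(int)

print(compact_ga())                           # onemax optimum: all ones
```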

  19. Computationally Efficient Numerical Model for the Evolution of Directional Ocean Surface Waves

    NASA Astrophysics Data System (ADS)

    Malej, M.; Choi, W.; Goullet, A.

    2011-12-01

    The main focus of this work has been the asymptotic and numerical modeling of weakly nonlinear ocean surface wave fields. In particular, the development of an efficient numerical model for the evolution of nonlinear ocean waves, including extreme waves known as Rogue/Freak waves, is of direct interest. Due to their elusive and destructive nature, the media often portrays Rogue waves as unimaginably huge and unpredictable monsters of the sea. To address some of these concerns, derivations of reduced phase-resolving numerical models, based on the small wave steepness assumption, are presented and their corresponding numerical simulations via Fourier pseudo-spectral methods are discussed. The simulations are initialized with a well-known JONSWAP wave spectrum and different angular distributions are employed. Both deterministic and Monte-Carlo ensemble average simulations were carried out. Furthermore, this work concerns the development of a new computationally efficient numerical model for the short-term prediction of evolving weakly nonlinear ocean surface waves. The derivations are originally based on the work of West et al. (1987) and, since waves in the ocean tend to travel primarily in one direction, the new numerical model is derived with an additional assumption of a weak transverse dependence. In turn, comparisons of the ensemble-averaged randomly initialized spectra, as well as deterministic surface-to-surface correlations, are presented. The new model is shown to behave well in various directional wave fields and can potentially be a candidate for computationally efficient prediction and propagation of extreme ocean surface waves, i.e., Rogue/Freak waves.

  20. A new data integration approach for AutoCAD and GIS

    NASA Astrophysics Data System (ADS)

    Ye, Hongmei; Li, Yuhong; Wang, Cheng; Li, Lijun

    2006-10-01

    GIS has advantages in both spatial data analysis and management, particularly in the management of geometric and attributive information, and has attracted much attention among researchers worldwide. AutoCAD plays an increasingly important role as one of the main data sources for GIS, and a variety of related work and achievements can be found in the literature. However, conventional data integration from AutoCAD to GIS is time-consuming and, for a large system, can cause information loss in both the geometric and the attributive aspects. A new approach and algorithm for efficient, high-quality data integration are therefore necessary and urgent. In this paper, a novel data integration approach from AutoCAD to GIS is introduced, based on spatial data mining techniques and an analysis of the data structures of both AutoCAD and GIS. A practicable algorithm for the data conversion from CAD to GIS is given as well. Using a designed evaluation scheme, the accuracy of the conversion of both geometric and attributive information is demonstrated. Finally, the validity and feasibility of the new approach are shown by an experimental analysis.

  1. The CENP-A NAC/CAD kinetochore complex controls chromosome congression and spindle bipolarity.

    PubMed

    McClelland, Sarah E; Borusu, Satyarebala; Amaro, Ana C; Winter, Jennifer R; Belwal, Mukta; McAinsh, Andrew D; Meraldi, Patrick

    2007-12-12

    Kinetochores are complex protein machines that link chromosomes to spindle microtubules and contain a structural core composed of two conserved protein-protein interaction networks: the well-characterized KMN (KNL1/MIND/NDC80) and the recently identified CENP-A NAC/CAD. Here we show that the CENP-A NAC/CAD subunits can be assigned to one of two different functional classes; depletion of Class I proteins (Mcm21R(CENP-O) and Fta1R(CENP-L)) causes a failure in bipolar spindle assembly. In contrast, depletion of Class II proteins (CENP-H, Chl4R(CENP-N), CENP-I and Sim4R(CENP-K)) prevents binding of Class I proteins and causes chromosome congression defects, but does not perturb spindle formation. Co-depletion of Class I and Class II proteins restores spindle bipolarity, suggesting that Class I proteins regulate or counteract the function of Class II proteins. We also demonstrate that CENP-A NAC/CAD and KMN regulate kinetochore-microtubule attachments independently, even though CENP-A NAC/CAD can modulate NDC80 levels at kinetochores. Based on our results, we propose that the cooperative action of CENP-A NAC/CAD subunits and the KMN network drives efficient chromosome segregation and bipolar spindle assembly during mitosis.

  2. CAD/CAM interface design of excimer laser micro-processing system

    NASA Astrophysics Data System (ADS)

    Jing, Liang; Chen, Tao; Zuo, Tiechuan

    2005-12-01

    Recently, CAD/CAM technology has gradually come into use in the field of laser processing. Before the CAD/CAM interface was designed, the excimer laser micro-processing system recognized only G-code instructions, and designing a part directly in G code is hard for users: efficiency is low and the probability of error is high. Using the secondary development technology of AutoCAD with Visual Basic, an application was developed to extract each entity's information from a graphic and convert it into that entity's processing parameters. An additional function was also added to the former controlling software to read the processing parameters of each entity and realize continuous processing of the graphic. Based on this CAD/CAM interface, users can design a part in AutoCAD instead of writing G code, which sharply shortens the design cycle for a part. This new design route also helps guarantee that the processing parameters of the part are correct and unambiguous. The processing of a complex novel bio-chip has been realized with this new function.

  3. Discoloration of various CAD/CAM blocks after immersion in coffee

    PubMed Central

    Lauvahutanon, Sasipin; Shiozawa, Maho; Iwasaki, Naohiko; Oki, Meiko; Finger, Werner J.; Arksornnukit, Mansuang

    2017-01-01

    Objectives: This study evaluated the color differences (ΔEs) and translucency parameter changes (ΔTPs) of various computer-aided design/computer-aided manufacturing (CAD/CAM) blocks after immersion in coffee. Materials and Methods: Eight CAD/CAM blocks and four restorative composite resins were evaluated. The CIE L*a*b* values of 2.0 mm thick disk-shaped specimens were measured with a spectrophotometer on white and black backgrounds (n = 6). The ΔEs and ΔTPs after one day, one week, and one month of immersion in coffee or water were calculated. The values of each material were analyzed by two-way ANOVA and Tukey's multiple comparisons (α = 0.05). The ΔEs after prophylaxis-paste polishing of the one-month coffee-immersion specimens, as well as water sorption and solubility, were also evaluated. Results: After one month in coffee, the ΔEs of CAD/CAM composite resin blocks and restorative composites ranged from 1.6 to 3.7 and from 2.1 to 7.9, respectively, and the ΔTPs decreased. The ANOVA of ΔEs and ΔTPs revealed significant differences in the two main factors, immersion period and medium, and their interaction, except for the ΔEs of TEL (Telio CAD, Ivoclar Vivadent). The ΔEs significantly decreased after prophylaxis polishing except for GRA (Gradia Block, GC). There was no significant correlation between ΔEs and water sorption or solubility in water. Conclusions: The ΔEs of CAD/CAM blocks after immersion in coffee varied among products and were comparable to those of restorative composite resins. The discoloration of CAD/CAM composite resin blocks could be effectively removed with prophylaxis-paste polishing, while that of some restorative composites could not. PMID:28194359

  4. Geometrical splitting technique to improve the computational efficiency in Monte Carlo calculations for proton therapy

    PubMed Central

    Ramos-Méndez, José; Perl, Joseph; Faddegon, Bruce; Schümann, Jan; Paganetti, Harald

    2013-01-01

    Purpose: To present the implementation and validation of a geometrical based variance reduction technique for the calculation of phase space data for proton therapy dose calculation. Methods: The treatment heads at the Francis H Burr Proton Therapy Center were modeled with a new Monte Carlo tool (TOPAS based on Geant4). For variance reduction purposes, two particle-splitting planes were implemented. First, the particles were split upstream of the second scatterer or at the second ionization chamber. Then, particles reaching another plane immediately upstream of the field specific aperture were split again. In each case, particles were split by a factor of 8. At the second ionization chamber and at the latter plane, the cylindrical symmetry of the proton beam was exploited to position the split particles at randomly spaced locations rotated around the beam axis. Phase space data in IAEA format were recorded at the treatment head exit and the computational efficiency was calculated. Depth–dose curves and beam profiles were analyzed. Dose distributions were compared for a voxelized water phantom for different treatment fields for both the reference and optimized simulations. In addition, dose in two patients was simulated with and without particle splitting to compare the efficiency and accuracy of the technique. Results: A normalized computational efficiency gain of a factor of 10–20.3 was reached for phase space calculations for the different treatment head options simulated. Depth–dose curves and beam profiles were in reasonable agreement with the simulation done without splitting: within 1% for depth–dose with an average difference of (0.2 ± 0.4)%, 1 standard deviation, and a 0.3% statistical uncertainty of the simulations in the high dose region; 1.6% for planar fluence with an average difference of (0.4 ± 0.5)% and a statistical uncertainty of 0.3% in the high fluence region. The percentage differences between dose distributions in water for
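
    The splitting step described above is straightforward to sketch. The illustrative Python fragment below (not TOPAS/Geant4 code) replaces one particle crossing a splitting plane with eight copies at one-eighth the statistical weight, rotated to evenly spaced azimuthal positions with a random phase; this preserves expectation values only because the beam is cylindrically symmetric at that plane:

```python
import math, random

def split_particle(x, y, px, py, weight, n_split=8):
    """Replace one particle by n_split copies of weight/n_split, each rotated
    about the beam (z) axis; offsets are evenly spaced with a random phase,
    which is valid only because the beam is cylindrically symmetric here."""
    copies = []
    phase = random.uniform(0.0, 2.0 * math.pi)
    for k in range(n_split):
        a = phase + 2.0 * math.pi * k / n_split
        c, s = math.cos(a), math.sin(a)
        copies.append({
            "x":  c * x - s * y,   "y":  s * x + c * y,    # rotate position
            "px": c * px - s * py, "py": s * px + c * py,  # rotate momentum
            "weight": weight / n_split,
        })
    return copies
```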

  5. Low-cost, high-performance and efficiency computational photometer design

    NASA Astrophysics Data System (ADS)

    Siewert, Sam B.; Shihadeh, Jeries; Myers, Randall; Khandhar, Jay; Ivanov, Vitaly

    2014-05-01

    Researchers at the University of Alaska Anchorage and the University of Colorado Boulder have built a low-cost, high-performance, high-efficiency drop-in-place Computational Photometer (CP) to test in field applications ranging from port security and safety monitoring to environmental compliance monitoring and surveying. The CP integrates off-the-shelf visible-spectrum cameras with near- to long-wavelength infrared detectors and high-resolution digital snapshots in a single device. The proof of concept combines three or more detectors into a single multichannel imaging system that can time-correlate read-out, capture, and image-process all of the channels concurrently with high performance and energy efficiency. The dual-channel continuous read-out is combined with a third high-definition digital snapshot capability and has been designed using an FPGA (Field Programmable Gate Array) to capture, decimate, down-convert, re-encode, and transform images from two standard-definition CCD (Charge Coupled Device) cameras at 30 Hz. The continuous stereo vision can be time-correlated to megapixel high-definition snapshots. This proof of concept has been fabricated as a four-layer PCB (Printed Circuit Board) suitable for use in education and research for low-cost, high-efficiency field-monitoring applications that need multispectral and three-dimensional imaging capabilities. Initial testing is in progress and includes field testing in ports, potential test flights in unmanned aerial systems, and future planned missions to image harsh environments in the arctic, including volcanic plumes, ice formation, and arctic marine life.

  6. [The CAD-S, an instrument for the assessment of adaptation to divorce - separation].

    PubMed

    Yárnoz Yaben, Sagrario; Comino González, Priscila

    2010-02-01

    This paper presents an instrument for the evaluation of adaptation to divorce-separation. The CAD-S is a questionnaire created in Spanish, made up of 20 items, whose aim is the evaluation of the family's adaptation to divorce-separation, using one of the parents as informant. Data were collected in four different samples of divorced persons and their children from different autonomous communities of the Spanish state. In total, 223 parents and 160 children from divorced families took part in this study. Four factors emerged, accounting for 52.39 percent of the total variance: psychological and emotional difficulties, conflict with the ex-partner, disposition to co-parentality, and negative outcomes of separation for children. The results suggest that the CAD-S is a reliable and valid instrument, with high internal consistency (Cronbach's alpha) and adequate construct validity supported by its relations with measures of satisfaction with life (SWLS) in the case of parents, and conduct problems (CBCL) for children.

  7. Genetic and biochemical characterization of CAD-1, a chromosomally encoded new class A penicillinase from Carnobacterium divergens.

    PubMed

    Meziane-Cherif, Djalal; Decré, Dominique; Høiby, E Arne; Courvalin, Patrice; Périchon, Bruno

    2008-02-01

    Carnobacterium divergens clinical isolates BM4489 and BM4490 were resistant to penicillins but remained susceptible to combinations of amoxicillin-clavulanic acid and piperacillin-tazobactam. Cloning and sequencing of the responsible determinant from BM4489 revealed a coding sequence of 912 bp encoding a class A beta-lactamase named CAD-1. The bla(CAD-1) gene was assigned to a chromosomal location in the two strains that had distinct pulsed-field gel electrophoresis patterns. CAD-1 shared 53% and 42% identity with beta-lactamases from Bacillus cereus and Staphylococcus aureus, respectively. Alignment of CAD-1 with other class A beta-lactamases indicated the presence of 25 out of the 26 isofunctional amino acids in class A beta-lactamases. Escherichia coli harboring bla(CAD-1) exhibited resistance to penams (benzylpenicillin and amoxicillin) and remained susceptible to amoxicillin in combination with clavulanic acid. Mature CAD-1 consisted of a 34.4-kDa polypeptide. Kinetic analysis indicated that CAD-1 exhibited a narrow substrate profile, hydrolyzing benzylpenicillin, ampicillin, and piperacillin with catalytic efficiencies of 6,600, 3,200, and 2,900 mM(-1) s(-1), respectively. The enzyme did not interact with oxyiminocephalosporins, imipenem, or aztreonam. CAD-1 was inhibited by tazobactam (50% inhibitory concentration [IC(50)] = 0.27 microM), clavulanic acid (IC(50) = 4.7 microM), and sulbactam (IC(50) = 43.5 microM). The bla(CAD-1) gene is likely to have been acquired by BM4489 and BM4490 as part of a mobile genetic element, since it was not found in the susceptible type strain CIP 101029 and was adjacent to a gene for a resolvase.

  8. Measuring and tuning energy efficiency on large scale high performance computing platforms.

    SciTech Connect

    Laros, James H., III

    2011-08-01

    Recognition of the importance of power in the field of High Performance Computing, whether it be as an obstacle, expense or design consideration, has never been greater and more pervasive. While research has been conducted on many related aspects, there is a stark absence of work focused on large scale High Performance Computing. Part of the reason is the lack of measurement capability currently available on small or large platforms. Typically, research is conducted using coarse methods of measurement such as inserting a power meter between the power source and the platform, or fine grained measurements using custom instrumented boards (with obvious limitations in scale). To collect the measurements necessary to analyze real scientific computing applications at large scale, an in-situ measurement capability must exist on a large scale capability class platform. In response to this challenge, we exploit the unique power measurement capabilities of the Cray XT architecture to gain an understanding of power use and the effects of tuning. We apply these capabilities at the operating system level by deterministically halting cores when idle. At the application level, we gain an understanding of the power requirements of a range of important DOE/NNSA production scientific computing applications running at large scale (thousands of nodes), while simultaneously collecting current and voltage measurements on the hosting nodes. We examine the effects of both CPU and network bandwidth tuning and demonstrate energy savings opportunities of up to 39% with little or no impact on run-time performance. Capturing scale effects in our experimental results was key. Our results provide strong evidence that next generation large-scale platforms should not only approach CPU frequency scaling differently, but could also benefit from the capability to tune other platform components, such as the network, to achieve energy efficient performance.

  9. Developing an Efficient Computational Method that Estimates the Ability of Students in a Web-Based Learning Environment

    ERIC Educational Resources Information Center

    Lee, Young-Jin

    2012-01-01

    This paper presents a computational method that can efficiently estimate the ability of students from the log files of a Web-based learning environment capturing their problem solving processes. The computational method developed in this study approximates the posterior distribution of the student's ability obtained from the conventional Bayes…

  10. CAD-RADS(TM) Coronary Artery Disease - Reporting and Data System. An expert consensus document of the Society of Cardiovascular Computed Tomography (SCCT), the American College of Radiology (ACR) and the North American Society for Cardiovascular Imaging (NASCI). Endorsed by the American College of Cardiology.

    PubMed

    Cury, Ricardo C; Abbara, Suhny; Achenbach, Stephan; Agatston, Arthur; Berman, Daniel S; Budoff, Matthew J; Dill, Karin E; Jacobs, Jill E; Maroules, Christopher D; Rubin, Geoffrey D; Rybicki, Frank J; Schoepf, U Joseph; Shaw, Leslee J; Stillman, Arthur E; White, Charles S; Woodard, Pamela K; Leipsic, Jonathon A

    2016-01-01

    The intent of CAD-RADS - Coronary Artery Disease Reporting and Data System is to create a standardized method to communicate findings of coronary CT angiography (coronary CTA) in order to facilitate decision-making regarding further patient management. The suggested CAD-RADS classification is applied on a per-patient basis and represents the highest-grade coronary artery lesion documented by coronary CTA. It ranges from CAD-RADS 0 (Zero) for the complete absence of stenosis and plaque to CAD-RADS 5 for the presence of at least one totally occluded coronary artery and should always be interpreted in conjunction with the impression found in the report. Specific recommendations are provided for further management of patients with stable or acute chest pain based on the CAD-RADS classification. The main goal of CAD-RADS is to standardize reporting of coronary CTA results and to facilitate communication of test results to referring physicians along with suggestions for subsequent patient management. In addition, CAD-RADS will provide a framework of standardization that may benefit education, research, peer-review and quality assurance with the potential to ultimately result in improved quality of care.

  11. Evaluation of Intradural Stimulation Efficiency and Selectivity in a Computational Model of Spinal Cord Stimulation

    PubMed Central

    Howell, Bryan; Lad, Shivanand P.; Grill, Warren M.

    2014-01-01

    Spinal cord stimulation (SCS) is an alternative or adjunct therapy to treat chronic pain, a prevalent and clinically challenging condition. Although SCS has substantial clinical success, the therapy is still prone to failures, including lead breakage, lead migration, and poor pain relief. The goal of this study was to develop a computational model of SCS and use the model to compare activation of neural elements during intradural and extradural electrode placement. We constructed five patient-specific models of SCS. Stimulation thresholds predicted by the model were compared to stimulation thresholds measured intraoperatively, and we used these models to quantify the efficiency and selectivity of intradural and extradural SCS. Intradural placement dramatically increased stimulation efficiency and reduced the power required to stimulate the dorsal columns by more than 90%. Intradural placement also increased selectivity, allowing activation of a greater proportion of dorsal column fibers before spread of activation to dorsal root fibers, as well as more selective activation of individual dermatomes at different lateral deviations from the midline. Further, the results suggest that current electrode designs used for extradural SCS are not optimal for intradural SCS, and a novel azimuthal tripolar design increased stimulation selectivity, even beyond that achieved with an intradural paddle array. Increased stimulation efficiency is expected to increase the battery life of implantable pulse generators, increase the recharge interval of rechargeable implantable pulse generators, and potentially reduce stimulator volume. The greater selectivity of intradural stimulation may improve the success rate of SCS by mitigating the sensitivity of pain relief to malpositioning of the electrode. The outcome of this effort is a better quantitative understanding of how intradural electrode placement can potentially increase the selectivity and efficiency of SCS, which, in turn

  12. Interoperation of heterogeneous CAD tools in Ptolemy II

    NASA Astrophysics Data System (ADS)

    Liu, Jie; Wu, Bicheng; Liu, Xiaojun; Lee, Edward A.

    1999-03-01

    Typical complex systems that involve microsensors and microactuators exhibit heterogeneity both at the implementation level and the problem level. For example, a system can be modeled using discrete events for digital circuits and SPICE-like analog descriptions for sensors. This heterogeneity exists not only across implementation domains, but also at different levels of abstraction. This naturally leads to a heterogeneous approach to system design that uses domain-specific models of computation (MoC) at various levels of abstraction to define a system, and leverages multiple CAD tools for simulation, verification and synthesis. As the size and scope of the system increase, the integration becomes too difficult and unmanageable if different tools are coordinated using simple scripts. In addition, for MEMS devices and mixed-signal circuits, it is essential to integrate tools with different MoCs to simulate the whole system. Ptolemy II, a heterogeneous system-level design tool, supports the interaction among different MoCs. This paper discusses heterogeneous CAD tool interoperability in the Ptolemy II framework. The key is to understand the semantic interface and to classify the tools by their MoC and their level of abstraction. Interfaces are designed for each domain so that external tools can be easily wrapped. The interoperability of the tools then becomes the interoperability of their semantics. Ptolemy II can act as the standard interface among different tools to achieve overall design modeling. A micro-accelerometer with digital feedback is studied as an example.
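
    The wrapping idea can be made concrete with a small sketch. Ptolemy II itself is a Java framework; the Python skeleton below, with invented class and method names, only illustrates the principle that each external tool is hidden behind a small domain interface so that tool interoperability reduces to interoperability of the semantics:

```python
from abc import ABC, abstractmethod

class DomainActor(ABC):
    """Wrapper interface an external CAD tool must implement so that a
    director for a given model of computation (MoC) can coordinate it.
    (Names are illustrative; Ptolemy II itself is a Java framework.)"""

    @abstractmethod
    def fire(self, t, inputs):          # produce outputs for time t
        ...

class SpiceSensorModel(DomainActor):
    """Continuous-time domain: wraps an analog (SPICE-like) sensor simulation."""
    def fire(self, t, inputs):
        return {"voltage": 0.001 * t}   # stand-in for a real co-simulation call

class DigitalController(DomainActor):
    """Discrete-event domain: wraps a digital-circuit simulator."""
    def fire(self, t, inputs):
        return {"feedback": inputs["voltage"] > 0.5}

# A trivial director: advance time, let each wrapped tool fire in turn.
actors = [SpiceSensorModel(), DigitalController()]
signals = {"voltage": 0.0}
for t in range(0, 1000, 10):
    for a in actors:
        signals.update(a.fire(t, signals))
```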

  13. Key Parameters of Hybrid Materials for CAD/CAM-Based Restorative Dentistry.

    PubMed

    Horvath, Sebastian D

    2016-10-01

    Hybrid materials are a recent addition to the dental armamentarium for computer-aided design/computer-aided manufacturing (CAD/CAM)-based restorative dentistry. They are intended to provide dentists with the capability of restoring single teeth in one appointment with a material that emulates the structure and physical properties of natural teeth. This article aims to provide an overview of currently available hybrid materials and offer the reader further understanding of their key clinical parameters and possible limitations.

  14. Multi Objective Optimization for Calibration and Efficient Uncertainty Analysis of Computationally Expensive Watershed Models

    NASA Astrophysics Data System (ADS)

    Akhtar, T.; Shoemaker, C. A.

    2011-12-01

    Assessing the sensitivity of calibration results to different calibration criteria can be done through multi-objective optimization that considers multiple calibration criteria. This analysis can be extended to uncertainty analysis by comparing the results of simulating the model with parameter sets from many points along a Pareto front. In this study we employ multi-objective optimization in order to understand which values should be used for the flow parameters of a SWAT (Soil and Water Assessment Tool) model designed to simulate flow into the Cannonsville Reservoir in upstate New York. The comprehensive analysis procedure encapsulates identification of suitable objectives, analysis of the trade-offs obtained through multi-objective optimization, and assessment of the impact of those trade-offs on uncertainty. Examples of multiple criteria include (a) quality of the fit in different seasons, (b) quality of the fit for high-flow and low-flow events, and (c) quality of the fit for different constituents (e.g., water versus nutrients). Many distributed watershed models are computationally expensive and include a large number of parameters that must be calibrated. Efficient optimization algorithms are hence needed to find good solutions to multi-criteria calibration problems in a feasible amount of time. We apply a new algorithm called Gap Optimized Multi-Objective Optimization using Response Surfaces (GOMORS) for efficient multi-criteria optimization of the Cannonsville SWAT watershed calibration problem. GOMORS is a stochastic optimization method that makes use of Radial Basis Functions to approximate the computationally expensive objectives. GOMORS performance is also compared against two other multi-objective algorithms, ParEGO and NSGA-II. ParEGO is a kriging-based efficient multi-objective optimization algorithm, whereas NSGA-II is a well-known multi-objective evolutionary optimization algorithm. GOMORS is more efficient than both ParEGO and NSGA-II in providing
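
    GOMORS itself is not reproduced here, but the core loop it shares with other surrogate-assisted multi-objective methods can be sketched: fit a Radial Basis Function surrogate to each expensive objective, search the cheap surrogates for promising candidates, and spend expensive evaluations only on those. A minimal Python sketch, with an invented stand-in for the SWAT simulation:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def pareto_mask(F):
    """True for rows of F (objectives, minimized) not dominated by any other."""
    keep = np.ones(len(F), dtype=bool)
    for i in range(len(F)):
        dominated = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        keep[i] = not dominated.any()
    return keep

def expensive_objectives(x):                    # stand-in for a SWAT run
    return np.array([np.sum((x - 0.3)**2), np.sum((x + 0.3)**2)])

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(20, 5))            # initial design
F = np.array([expensive_objectives(x) for x in X])

for it in range(10):                            # optimization iterations
    surrogates = [RBFInterpolator(X, F[:, j]) for j in range(F.shape[1])]
    cand = rng.uniform(-1, 1, size=(2000, 5))   # cheap search on the surrogates
    Fhat = np.column_stack([s(cand) for s in surrogates])
    best = cand[pareto_mask(Fhat)][:5]          # a few promising candidates
    X = np.vstack([X, best])
    F = np.vstack([F, [expensive_objectives(x) for x in best]])

print("final Pareto set size:", pareto_mask(F).sum())
```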

  15. Research related to improved computer aided design software package. [comparative efficiency of finite, boundary, and hybrid element methods in elastostatics]

    NASA Technical Reports Server (NTRS)

    Walston, W. H., Jr.

    1986-01-01

    The comparative computational efficiencies of the finite element (FEM), boundary element (BEM), and hybrid boundary element-finite element (HVFEM) analysis techniques are evaluated for representative bounded domain interior and unbounded domain exterior problems in elastostatics. Computational efficiency is carefully defined in this study as the computer time required to attain a specified level of solution accuracy. The study found the FEM superior to the BEM for the interior problem, while the reverse was true for the exterior problem. The hybrid analysis technique was found to be comparable or superior to both the FEM and BEM for both the interior and exterior problems.

  16. CAD/CAM of braided preforms for advanced composites

    NASA Astrophysics Data System (ADS)

    Yang, Gui; Pastore, Christopher; Tsai, Yung Jia; Soebroto, Heru; Ko, Frank

    A CAD/CAM system for braiding to produce preforms for advanced textile structural composites is presented in this paper. The CAD and CAM systems are illustrated in detail. The CAD system identifies the fiber placement and orientation needed to fabricate a braided structure over a mandrel, for subsequent composite formation. The CAM system uses the design parameters generated by the CAD system to control the braiding machine. Experimental evidence demonstrating the success of combining these two technologies to form a unified CAD/CAM system for the manufacture of braided fabric preforms with complex structural shapes is presented.

  17. Exploring Infiniband Hardware Virtualization in OpenNebula towards Efficient High-Performance Computing

    SciTech Connect

    Pais Pitta de Lacerda Ruivo, Tiago; Bernabeu Altayo, Gerard; Garzoglio, Gabriele; Timm, Steven; Kim, Hyun-Woo; Noh, Seo-Young; Raicu, Ioan

    2014-11-11

    It has been widely accepted that software virtualization has a big negative impact on high-performance computing (HPC) application performance. This work explores the potential use of Infiniband hardware virtualization in an OpenNebula cloud towards the efficient support of MPI-based workloads. We have implemented, deployed, and tested an Infiniband network on the FermiCloud private Infrastructure-as-a-Service (IaaS) cloud. To avoid software virtualization and thereby minimize the virtualization overhead, we employed a technique called Single Root Input/Output Virtualization (SRIOV). Our solution spanned modifications to the Linux hypervisor as well as the OpenNebula manager. We evaluated the performance of the hardware virtualization on up to 56 virtual machines connected by up to 8 DDR Infiniband network links, with micro-benchmarks (latency and bandwidth) as well as with an MPI-intensive application (the HPL Linpack benchmark).

  18. A novel class of highly efficient and accurate time-integrators in nonlinear computational mechanics

    NASA Astrophysics Data System (ADS)

    Wang, Xuechuan; Atluri, Satya N.

    2017-01-01

    A new class of time-integrators is presented for strongly nonlinear dynamical systems. These algorithms are far superior to the currently common time-integrators in computational efficiency and accuracy. The three algorithms in this class are based on a local variational iteration method applied over a finite interval of time. By using Chebyshev polynomials as trial functions and Dirac delta functions as test functions over the finite time interval, the three algorithms are developed into three different discrete time-integrators through the collocation method. These time-integrators are labeled as Chebyshev local iterative collocation methods. Through examples of the forced Duffing oscillator, the Lorenz system, and multiple coupled Duffing equations (which arise as semi-discrete equations for beams, plates and shells undergoing large deformations), it is shown that the new algorithms far outperform the 4th-order Runge-Kutta method and MATLAB's ode45 in predicting the chaotic responses of strongly nonlinear dynamical systems.
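
    As a point of reference for the method's structure, the sketch below implements the simplest member of this family: a Picard-type local iteration (variational iteration with multiplier λ = -1) on Chebyshev-Lobatto collocation nodes, stepped interval by interval over a forced Duffing oscillator. It is a toy illustration under those simplifying assumptions, not the authors' three algorithms:

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

def duffing(t, u):
    """Forced Duffing oscillator u'' + 0.1 u' + u + u**3 = 2 cos(t) as a system."""
    x, v = u
    return np.array([v, 2*np.cos(t) - 0.1*v - x - x**3])

def picard_cheb_step(f, t0, u0, h, n=8, iters=20):
    """One local iteration step (Picard, lambda = -1) on Chebyshev-Lobatto
    nodes: u <- u0 + integral of a Chebyshev fit of f along the interval."""
    t = t0 + 0.5*h*(1 - np.cos(np.pi*np.arange(n)/(n - 1)))  # Lobatto nodes
    u = np.tile(u0, (n, 1))
    for _ in range(iters):                      # fixed count; production code
        fv = np.array([f(tj, uj) for tj, uj in zip(t, u)])   # would test convergence
        for k in range(u.shape[1]):
            p = Chebyshev.fit(t, fv[:, k], deg=n - 1).integ()  # antiderivative
            u[:, k] = u0[k] + p(t) - p(t0)
    return u[-1]

t, u = 0.0, np.array([1.0, 0.0])
while t < 10.0:
    u = picard_cheb_step(duffing, t, u, 0.1)
    t += 0.1
print("u(10) ≈", u)
```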

  19. Efficient method for computing the maximum-likelihood quantum state from measurements with additive Gaussian noise.

    PubMed

    Smolin, John A; Gambetta, Jay M; Smith, Graeme

    2012-02-17

    We provide an efficient method for computing the maximum-likelihood mixed quantum state (with density matrix ρ) given a set of measurement outcomes in a complete orthonormal operator basis subject to Gaussian noise. Our method works by first changing basis, yielding a candidate density matrix μ which may have nonphysical (negative) eigenvalues, and then finding the nearest physical state under the 2-norm. Our algorithm takes at worst O(d^4) for the basis change plus O(d^3) for finding ρ, where d is the dimension of the quantum state. In the special case where the measurement basis is strings of Pauli operators, the basis change takes only O(d^3) as well. The workhorse of the algorithm is a new linear-time method for finding the closest probability distribution (in Euclidean distance) to a set of real numbers summing to one.
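
    The eigenvalue-space step of the described method can be sketched directly. The NumPy version below uses the standard sort-based simplex projection, which is O(d log d) because of the sort (the paper reports a linear-time subroutine); the example matrix is made up:

```python
import numpy as np

def nearest_density_matrix(mu):
    """Project a Hermitian, trace-one matrix mu (possibly with negative
    eigenvalues) onto the closest density matrix in Frobenius (2-)norm,
    per the construction in the abstract: keep the eigenvectors, move the
    eigenvalue vector to the nearest probability distribution."""
    vals, vecs = np.linalg.eigh(mu)          # eigenvalues in ascending order
    out = np.zeros_like(vals)
    acc = 0.0
    for i in range(len(vals)):
        rem = len(vals) - i
        if vals[i] + acc / rem < 0:          # still infeasible: zero it out
            acc += vals[i]                   # and carry its mass forward
        else:
            out[i:] = vals[i:] + acc / rem   # spread the deficit equally
            break
    return (vecs * out) @ vecs.conj().T

# Tiny check on a made-up non-physical candidate with unit trace.
mu = np.array([[0.8, 0.5], [0.5, 0.2]])
rho = nearest_density_matrix(mu)
print(np.linalg.eigvalsh(rho), np.trace(rho))
```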

  1. Schematic for efficient computation of GC, GC3, and AT3 bias spectra of genome.

    PubMed

    Rizvi, Ahsan Z; Venu Gopal, T; Bhattacharya, C

    2012-01-01

    Selection of synonymous codons for an amino acid is biased in the protein translation process. This biased selection causes repetition of synonymous codons in the structural parts of a genome, which appears as strong N/3 peaks in the DNA spectrum. The period-3 spectral property is utilized here to produce a 3-phase network model, based on polyphase filterbank concepts, for the derivation of codon bias spectra (CBS). Modifying the parameters of this model can produce GC, GC3, and AT3 bias spectra. A complete schematic on the LabVIEW platform is presented for efficient and parallel computation of the GC, GC3, and AT3 bias spectra of genomes, along with results for CBS patterns. We performed a correlation coefficient analysis of the GC, GC3, and AT3 bias spectra against the codon bias patterns of the CBS to establish the biological and statistical significance of this model.
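
    The period-3 property underlying the model is easy to demonstrate without the LabVIEW filterbank: map the sequence to a GC indicator signal and inspect its magnitude spectrum, where period-3 codon bias produces a peak near N/3. A toy NumPy sketch on a synthetic sequence, not the authors' polyphase implementation:

```python
import numpy as np

def gc_spectrum(seq):
    """Magnitude spectrum of the binary GC indicator of a DNA string.
    Coding (period-3 biased) regions show a peak near k = N/3."""
    x = np.array([1.0 if b in "GC" else 0.0 for b in seq.upper()])
    x -= x.mean()                      # remove the DC component
    return np.abs(np.fft.rfft(x))

# Toy sequence with an artificial period-3 bias (every third base is G).
rng = np.random.default_rng(1)
N = 900
seq = "".join("G" if i % 3 == 0 else rng.choice(list("ATGC")) for i in range(N))
S = gc_spectrum(seq)
print("peak at k =", int(np.argmax(S)), "(N/3 =", N // 3, ")")
```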

  2. Storage and computationally efficient permutations of factorized covariance and square-root information matrices

    NASA Technical Reports Server (NTRS)

    Muellerschoen, R. J.

    1988-01-01

    A unified method to permute vector-stored upper-triangular diagonal factorized covariance (UD) and vector stored upper-triangular square-root information filter (SRIF) arrays is presented. The method involves cyclical permutation of the rows and columns of the arrays and retriangularization with appropriate square-root-free fast Givens rotations or elementary slow Givens reflections. A minimal amount of computation is performed and only one scratch vector of size N is required, where N is the column dimension of the arrays. To make the method efficient for large SRIF arrays on a virtual memory machine, three additional scratch vectors each of size N are used to avoid expensive paging faults. The method discussed is compared with the methods and routines of Bierman's Estimation Subroutine Library (ESL).
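
    The operation can be illustrated compactly in NumPy. In the sketch below, np.linalg.qr stands in for the square-root-free fast Givens sweep of the specialized ESL routines; any orthogonal retriangularization preserves the information matrix R^T R, which the check at the end confirms:

```python
import numpy as np

def permute_srif(R, src, dst):
    """Move column `src` of an upper-triangular SRIF factor R to position
    `dst` (cyclically shifting the columns in between), then restore upper
    triangularity. np.linalg.qr stands in for the fast Givens sweep used in
    specialized codes; any orthogonal T preserves the information R.T @ R."""
    order = list(range(R.shape[1]))
    order.insert(dst, order.pop(src))         # cyclic permutation of columns
    _, R_new = np.linalg.qr(R[:, order])
    return R_new, order

# Check that the information matrix is preserved under the permutation.
rng = np.random.default_rng(0)
R = np.triu(rng.standard_normal((5, 5)))
R_new, order = permute_srif(R, src=1, dst=4)
info_old = (R.T @ R)[np.ix_(order, order)]
print(np.allclose(R_new.T @ R_new, info_old))
```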

  3. Comparing Two Opacity Models in Monte Carlo Radiative Heat Transfer: Computational Efficiency and Parallel Load Balancing

    NASA Astrophysics Data System (ADS)

    Cleveland, Mathew A.; Palmer, Todd S.

    2013-09-01

    Thermal heating from radiative heat transfer can have a significant effect on combustion systems. A variety of models have been developed to represent the strongly varying opacities found in combustion gases (Goutiere et al., 2000). This work evaluates the computational efficiency and load balance issues associated with two opacity models implemented in a 3D parallel Monte Carlo solver: the spectral-line-based weighted sum of gray gases (SLW) (Denison and Webb, 1993) and the spectral line-by-line (LBL) (Wang and Modest, 2007) opacity models. The parallel performance of the opacity models is evaluated using the Su and Olson (1999) frequency-dependent semi-analytic benchmark problem. Weak scaling, strong scaling, and history scaling studies were performed and comparisons were made for each opacity model. Comparisons of load balance sensitivities to these types of scaling were also evaluated. It was found that the SLW model has some attributes that might be valuable in a select set of parallel problems.

  4. A computational study of the effect of unstructured mesh quality on solution efficiency

    SciTech Connect

    Batdorf, M.; Freitag, L.A.; Ollivier-Gooch, C.

    1997-09-01

    It is well known that mesh quality affects both efficiency and accuracy of CFD solutions. Meshes with distorted elements make solutions both more difficult to compute and less accurate. We review a recently proposed technique for improving mesh quality as measured by element angle (dihedral angle in three dimensions) using a combination of optimization-based smoothing techniques and local reconnection schemes. Typical results that quantify mesh improvement for a number of application meshes are presented. We then examine effects of mesh quality as measured by the maximum angle in the mesh on the convergence rates of two commonly used CFD solution techniques. Numerical experiments are performed that quantify the cost and benefit of using mesh optimization schemes for incompressible flow over a cylinder and weakly compressible flow over a cylinder.

  5. Computational Intelligence and Wavelet Transform Based Metamodel for Efficient Generation of Not-Yet Simulated Waveforms.

    PubMed

    Oltean, Gabriel; Ivanciu, Laura-Nicoleta

    2016-01-01

    The design and verification of complex electronic systems, especially analog and mixed-signal ones, prove to be extremely time-consuming tasks if only circuit-level simulations are involved. A significant amount of time can be saved if a cost-effective solution is used for the extensive analysis of the system under all conceivable conditions. This paper proposes a data-driven method to build fast-to-evaluate but also accurate metamodels capable of generating not-yet-simulated waveforms as a function of different combinations of the parameters of the system. The necessary data are obtained by early-stage simulation of an electronic control system from the automotive industry. The metamodel development is based on three key elements: a wavelet transform for waveform characterization, a genetic-algorithm optimization to detect the optimal wavelet transform and to identify the most relevant decomposition coefficients, and an artificial neural network to derive the relevant coefficients of the wavelet transform for any new parameter combination. The resulting metamodels for three different waveform families are fully reliable. They satisfy the required key points: high accuracy (a maximum mean squared error of 7.1×10^-5 for the unity-based normalized waveforms), efficiency (fully affordable computational effort for metamodel build-up: at most 18 minutes on a general-purpose computer), and simplicity (less than 1 second to run the metamodel; the user only provides the parameter combination). The metamodels can be used for very efficient generation of new waveforms, for any possible combination of dependent parameters, offering the possibility to explore the entire design space. A wide range of possibilities becomes achievable for the user, such as: all design corners can be analyzed, possible worst-case situations can be investigated, extreme values of waveforms can be discovered, sensitivity analyses can be performed (the influence of each parameter on the
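
    The three-stage pipeline (wavelet characterization, coefficient selection, neural-network regression) can be sketched with PyWavelets and scikit-learn. In the toy version below the genetic-algorithm search is replaced by a fixed 'db4' wavelet and a simple largest-mean-magnitude coefficient selection; the waveform family and network size are invented for illustration:

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

# Toy training data: waveforms y(t; p) generated for known parameter settings.
t = np.linspace(0, 1, 256)
params = np.linspace(0.5, 2.0, 40).reshape(-1, 1)          # one parameter here
waves = np.array([np.exp(-5*p*t) * np.sin(12*p*t) for p in params[:, 0]])

# 1) Wavelet characterization (fixed 'db4'; the paper tunes this by GA).
coeffs = [pywt.wavedec(w, "db4", level=4) for w in waves]
_, slices = pywt.coeffs_to_array(coeffs[0])
C = np.array([pywt.coeffs_to_array(c)[0] for c in coeffs])

# 2) Keep only the most relevant coefficients (largest mean magnitude).
keep = np.argsort(np.abs(C).mean(axis=0))[-40:]

# 3) ANN maps parameter combination -> retained wavelet coefficients.
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
net.fit(params, C[:, keep])

# Metamodel use: predict coefficients for an unseen parameter, reconstruct.
p_new = np.array([[1.23]])
c_hat = np.zeros(C.shape[1])
c_hat[keep] = net.predict(p_new)[0]
y_hat = pywt.waverec(pywt.array_to_coeffs(c_hat, slices, output_format="wavedec"),
                     "db4")
print("reconstruction error:",
      np.abs(y_hat - np.exp(-5*1.23*t) * np.sin(12*1.23*t)).max())
```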

  6. Solving hard computational problems efficiently: asymptotic parametric complexity 3-coloring algorithm.

    PubMed

    Martín H, José Antonio

    2013-01-01

    Many practical problems in almost all scientific and technological disciplines have been classified as computationally hard (NP-hard or even NP-complete). In the life sciences, combinatorial optimization problems frequently arise in molecular biology, e.g., genome sequencing, global alignment of multiple genomes, and identifying siblings or discovering dysregulated pathways. In almost all of these problems there is the need to prove a hypothesis about a certain property of an object, which is present if and only if the object adopts some particular admissible structure (an NP-certificate) and absent otherwise (no admissible structure). However, none of the standard approaches can discard the hypothesis when no solution can be found, since none can provide a proof that there is no admissible structure. This article presents an algorithm that introduces a novel type of solution method to "efficiently" solve the graph 3-coloring problem, an NP-complete problem. The proposed method provides certificates (proofs) in both cases, presence and absence, so it is possible to accept or reject the hypothesis on the basis of a rigorous proof. It provides exact solutions and is polynomial-time (i.e., efficient), though parametric. The only requirement is sufficient computational power, which is controlled by the parameter α ∈ ℕ. Nevertheless, it is proved here that the probability of requiring a value of α > k to obtain a solution for a random graph decreases exponentially, P(α > k) ≤ 2^-(k+1), making almost all problem instances tractable. Thorough experimental analyses were performed. The algorithm was tested on random graphs, planar graphs and 4-regular planar graphs. The experimental results obtained are in accordance with the theoretical expectations.
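
    For orientation, a compact exact decider for the same problem is shown below. Like the article's method it returns a certificate in both directions (a coloring when one exists, exhaustive refutation otherwise), but by plain backtracking in exponential worst-case time; it is not the parametric polynomial-time algorithm proposed in the article:

```python
def three_color(adj):
    """Exact backtracking 3-coloring. Returns a color list (certificate of
    presence) or None after exhaustive search (certificate of absence).
    Worst-case exponential, unlike the parametric algorithm in the article."""
    n = len(adj)
    colors = [-1] * n

    def ok(v, c):
        return all(colors[u] != c for u in adj[v])

    def solve(v):
        if v == n:
            return True
        for c in range(3):
            if ok(v, c):
                colors[v] = c
                if solve(v + 1):
                    return True
                colors[v] = -1
        return False

    return colors if solve(0) else None

# K4 is not 3-colorable; a 5-cycle is.
k4 = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2]}
c5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(three_color(k4), three_color(c5))
```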

  7. Computationally efficient real-time interpolation algorithm for non-uniform sampled biosignals.

    PubMed

    Guven, Onur; Eftekhar, Amir; Kindt, Wilko; Constandinou, Timothy G

    2016-06-01

    This Letter presents a novel, computationally efficient interpolation method that has been optimised for use in electrocardiogram baseline drift removal. In the authors' previous Letter, three isoelectric baseline points per heartbeat are detected, and these are utilised here as interpolation points. As an extension of linear interpolation, the algorithm segments the interpolation interval and utilises different piecewise linear equations. Thus, the algorithm produces a linear curvature that is computationally efficient while interpolating non-uniform samples. The proposed algorithm is tested using sinusoids with different fundamental frequencies from 0.05 to 0.7 Hz and also validated with real baseline wander data acquired from the Massachusetts Institute of Technology and Boston's Beth Israel Hospital (MIT-BIH) Noise Stress Database. The synthetic data results show a root mean square (RMS) error of 0.9 μV (mean), 0.63 μV (median) and 0.6 μV (standard deviation) per heartbeat on a 1 mVp-p 0.1 Hz sinusoid. On real data, the authors obtain an RMS error of 10.9 μV (mean), 8.5 μV (median) and 9.0 μV (standard deviation) per heartbeat. Cubic spline interpolation and linear interpolation, on the other hand, show 10.7 μV, 11.6 μV (mean), 7.8 μV, 8.9 μV (median) and 9.8 μV, 9.3 μV (standard deviation) per heartbeat.
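
    The baseline-removal idea is simple to sketch: treat the detected isoelectric points (three per heartbeat, per the abstract) as non-uniformly spaced knots, draw a piecewise-linear baseline through them, and subtract it. A minimal NumPy illustration with synthetic drift and invented knot positions; np.interp handles the non-uniform piecewise-linear evaluation, whereas the authors' algorithm additionally segments each interval for efficiency:

```python
import numpy as np

def remove_baseline(ecg, fs, knot_idx):
    """Subtract a piecewise-linear baseline through isoelectric knots.
    `knot_idx` holds sample indices of detected isoelectric points;
    their spacing may be non-uniform."""
    t = np.arange(len(ecg)) / fs
    baseline = np.interp(t, t[knot_idx], ecg[knot_idx])
    return ecg - baseline, baseline

# Synthetic test: spikes on a 0.1 Hz drifting baseline (as in the abstract).
fs = 250
t = np.arange(0, 10, 1/fs)
drift = 0.5 * np.sin(2*np.pi*0.1*t)                 # baseline wander
ecg = drift.copy()
beats = np.arange(100, len(t), 200)                 # crude "R peaks"
ecg[beats] += 1.0
knots = np.concatenate([beats - 40, beats - 20, beats + 60])  # 3 per beat
knots = np.unique(np.clip(np.sort(knots), 0, len(t) - 1))
clean, base = remove_baseline(ecg, fs, knots)
print("residual drift RMS:", np.sqrt(np.mean((clean - (ecg - drift))**2)))
```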

  8. Computer-Assisted Dieting: Effects of a Randomized Nutrition Intervention

    ERIC Educational Resources Information Center

    Schroder, Kerstin E. E.

    2011-01-01

    Objectives: To compare the effects of a computer-assisted dieting intervention (CAD) with and without self-management training on dieting among 55 overweight and obese adults. Methods: Random assignment to a single-session nutrition intervention (CAD-only) or a combined CAD plus self-management group intervention (CADG). Dependent variables were…

  9. CBT Pilot Program Instructional Guide. Basic Drafting Skills Curriculum Delivered through CAD Workstations and Artificial Intelligence Software.

    ERIC Educational Resources Information Center

    Smith, Richard J.; Sauer, Mardelle A.

    This guide is intended to assist teachers in using computer-aided design (CAD) workstations and artificial intelligence software to teach basic drafting skills. The guide outlines a 7-unit shell program that may also be used as a generic authoring system capable of supporting computer-based training (CBT) in other subject areas. The first section…

  10. Biomimetic CAD/CAM restoration made of human enamel and dentin: case report at 4th year of clinical service.

    PubMed

    Magne, Pascal; Schlichting, Luís Henrique

    Currently, no dental material can exactly match the unique properties of dentin and enamel. Recently, a revolutionary approach was introduced in which a real tooth was utilized in combination with computer-aided design/computer-aided manufacturing (CAD/CAM) technology to obtain a natural CAD/CAM restoration. After 4 years of clinical service, the case was reevaluated, and the biomimetic restoration was found to be in optimal condition.

  11. A Hybrid Model for the Computationally-Efficient Simulation of the Cerebellar Granular Layer

    PubMed Central

    Cattani, Anna; Solinas, Sergio; Canuto, Claudio

    2016-01-01

    The aim of the present paper is to efficiently describe the membrane potential dynamics of neural populations formed by neuronal species whose densities differ greatly in specific brain areas. We propose a hybrid model whose main ingredients are a conductance-based model (ODE system) and its continuous counterpart (PDE system), obtained through a limit process in which the number of neurons confined in a bounded region of the brain tissue is sent to infinity. Specifically, in the discrete model each cell is described by a set of time-dependent variables, whereas in the continuum model cells are grouped into populations that are described by a set of continuous variables. Communications between populations, which translate into interactions between the discrete and the continuous models, are the essence of the hybrid model we present here. The cerebellum and cerebellum-like structures show in their granular layer a large difference in the relative density of neuronal species, making them a natural testing ground for our hybrid model. By reconstructing the ensemble activity of the cerebellar granular layer network and by comparing our results to a more realistic computational network, we demonstrate that our description of the network activity, even though it is not biophysically detailed, is still capable of reproducing salient features of neural network dynamics. Our modeling approach yields a significant computational cost reduction by increasing the simulation speed at least 270 times. The hybrid model reproduces interesting dynamics such as local microcircuit synchronization, traveling waves, center-surround, and time-windowing. PMID:27148027

  12. Computer Controlled Portable Greenhouse Climate Control System for Enhanced Energy Efficiency

    NASA Astrophysics Data System (ADS)

    Datsenko, Anthony; Myer, Steve; Petties, Albert; Hustek, Ryan; Thompson, Mark

    2010-04-01

    This paper discusses a student project at Kettering University focusing on the design and construction of an energy efficient greenhouse climate control system. In order to maintain acceptable temperatures and stabilize temperature fluctuations in a portable plastic greenhouse economically, a computer controlled climate control system was developed to capture and store thermal energy incident on the structure during daylight periods and release the stored thermal energy during dark periods. The thermal storage mass for the greenhouse system consisted of a water filled base unit. The heat exchanger consisted of a system of PVC tubing. The control system used a programmable LabView computer interface to meet functional specifications that minimized temperature fluctuations and recorded data during operation. The greenhouse was a portable sized unit with a 5' x 5' footprint. Control input sensors were temperature, water level, and humidity sensors and output control devices were fan actuating relays and water fill solenoid valves. A Graphical User Interface was developed to monitor the system, set control parameters, and to provide programmable data recording times and intervals.

  13. Approaches for the computationally efficient assessment of the plug-in HEV impact on the grid

    NASA Astrophysics Data System (ADS)

    Lee, Tae-Kyung; Filipi, Zoran S.

    2012-11-01

    Realistic duty cycles are critical for design and assessment of hybrid propulsion systems, in particular, plug-in hybrid electric vehicles. The analysis of the PHEV impact requires a large amount of data about daily missions for ensuring realism in predicted temporal loads on the grid. This paper presents two approaches for the reduction of the computational effort while assessing the large scale PHEV impact on the grid, namely 1) "response surface modelling" approach; and 2) "daily driving schedule modelling" approach. The response surface modelling approach replaces the time-consuming vehicle simulations by response surfaces constructed off-line with the consideration of the real-world driving. The daily driving modelling approach establishes a correlation between departure and arrival times, and it predicts representative driving patterns with a significantly reduced number of simulation cases. In both cases, representative synthetic driving cycles are used to capture the naturalistic driving characteristics for a given trip length. The proposed approaches enable construction of 24-hour missions, assessments of charging requirements at the time of plugging-in, and temporal distributions of the load on the grid with high computational efficiency.
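
    The response-surface idea can be sketched in a few lines: sample the expensive vehicle simulation offline on a design grid, fit an inexpensive surface, and query the surface for the thousands of daily missions in the grid study. Everything in the sketch below (the simulator stub, the feature choice, the quadratic form) is invented for illustration:

```python
import numpy as np

def phev_simulation(trip_km, aggressiveness):
    """Stand-in for a time-consuming PHEV simulation: returns the energy
    to recharge at plug-in (kWh). Purely illustrative physics."""
    return min(trip_km * (0.15 + 0.05 * aggressiveness), 12.0)

# Off-line: sample the expensive model on a design grid.
trips = np.linspace(2, 80, 15)
aggr = np.linspace(0.0, 1.0, 7)
G1, G2 = np.meshgrid(trips, aggr)
X = np.column_stack([G1.ravel(), G2.ravel()])
y = np.array([phev_simulation(a, b) for a, b in X])

# Fit a quadratic response surface E(trip, aggressiveness) by least squares.
def features(X):
    t, a = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(t), t, a, t*a, t**2, a**2])

coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)

# On-line: evaluate thousands of daily missions at negligible cost.
missions = np.column_stack([np.random.default_rng(0).uniform(2, 80, 10000),
                            np.random.default_rng(1).uniform(0, 1, 10000)])
grid_load_kwh = features(missions) @ coef
print("total fleet charging demand:", grid_load_kwh.sum().round(1), "kWh")
```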

  14. Method for Performing an Efficient Monte Carlo Simulation of Lipid Mixtures on a Concurrent Computer

    NASA Astrophysics Data System (ADS)

    Moore, Andrew; Huang, Juyang; Gibson, Thomas

    2003-10-01

    We are interested in performing extensive Monte Carlo simulations of lipid mixtures in cell membranes. These computations will be performed on a GNU/Linux Beowulf cluster using the industry-standard Message Passing Interface (MPI) for handling node-to-node communication and overall program management. Devising an efficient parallel decomposition of the simulation is crucial for success. The goal is to balance the load on the compute nodes so that each does the same amount of work, and to minimize the amount of (relatively slow) node-to-node communication. To this end, we report a method for performing simulations on a boundless three-dimensional surface. The surface is modeled by a two-dimensional array which can represent either a rectangular or triangular lattice. The array is distributed evenly across multiple processors in a block-row configuration. The sequence of calculations minimizes the delay from passing messages between nodes and uses the delay that does exist to perform local operations on each node.

  15. A computationally efficient method for simulating fluid flow in elastic pipes in three dimensions

    NASA Astrophysics Data System (ADS)

    Doctors, G. M.; Mazzeo, M. D.; Coveney, P. V.

    2010-09-01

    We propose a new method for carrying out lattice-Boltzmann simulations of pulsatile fluid flow in three-dimensional elastic pipes. It is based on estimating the distances from sites at the edge of the simulation box to the wall along the lattice directions from the displacement of the closest point on the wall and the curvature there, followed by application of a nonequilibrium extrapolation method. Viscous flow in an elastic pipe is studied in three dimensions at a wall displacement of 5% of the radius of the pipe, which is realistic for blood flow through large cerebral arteries. The numerical results for the pressure difference, wall displacement and flow velocity agree well with the analytical predictions. At all sites, the calculation depends only on information from nearest neighbours, so the method proposed is suitable for efficient computation on multicore machines. Compared to simulations with rigid walls, simulations with elastic walls require only 13% more computational effort at the parameters chosen in this study.

  16. Modeling the evolution of channel shape: Balancing computational efficiency with hydraulic fidelity

    USGS Publications Warehouse

    Wobus, C.W.; Kean, J.W.; Tucker, G.E.; Anderson, R. Scott

    2008-01-01

    The cross-sectional shape of a natural river channel controls the capacity of the system to carry water off a landscape, to convey sediment derived from hillslopes, and to erode its bed and banks. Numerical models that describe the response of a landscape to changes in climate or tectonics therefore require formulations that can accommodate evolution of channel cross-sectional geometry. However, fully two-dimensional (2-D) flow models are too computationally expensive to implement in large-scale landscape evolution models, while available simple empirical relationships between width and discharge do not adequately capture the dynamics of channel adjustment. We have developed a simplified 2-D numerical model of channel evolution in a cohesive, detachment-limited substrate subject to steady, unidirectional flow. Erosion is assumed to be proportional to boundary shear stress, which is calculated using an approximation of the flow field in which log-velocity profiles are assumed to apply along vectors that are perpendicular to the local channel bed. Model predictions of the velocity structure, peak boundary shear stress, and equilibrium channel shape compare well with predictions of a more sophisticated but more computationally demanding ray-isovel model. For example, the mean velocities computed by the two models are consistent to within ~3%, and the predicted peak shear stress is consistent to within ~7%. Furthermore, the shear stress distributions predicted by our model compare favorably with available laboratory measurements for prescribed channel shapes. A modification to our simplified code in which the flow includes a high-velocity core allows the model to be extended to estimate shear stress distributions in channels with large width-to-depth ratios. Our model is efficient enough to incorporate into large-scale landscape evolution codes and can be used to examine how channels adjust both cross-sectional shape and slope in response to tectonic and climatic

  17. A computationally efficient exact pseudopotential method. I. Analytic reformulation of the Phillips-Kleinman theory.

    PubMed

    Smallwood, C Jay; Larsen, Ross E; Glover, William J; Schwartz, Benjamin J

    2006-08-21

    Even with modern computers, it is still not possible to solve the Schrodinger equation exactly for systems with more than a handful of electrons. For many systems, the deeply bound core electrons serve merely as placeholders and only a few valence electrons participate in the chemical process of interest. Pseudopotential theory takes advantage of this fact to reduce the dimensionality of a multielectron chemical problem: the Schrodinger equation is solved only for the valence electrons, and the effects of the core electrons are included implicitly via an extra term in the Hamiltonian known as the pseudopotential. Phillips and Kleinman (PK) [Phys. Rev. 116, 287 (1959)] demonstrated that it is possible to derive a pseudopotential that guarantees that the valence electron wave function is orthogonal to the (implicitly included) core electron wave functions. The PK theory, however, is expensive to implement since the pseudopotential is nonlocal and its computation involves iterative evaluation of the full Hamiltonian. In this paper, we present an analytically exact reformulation of the PK pseudopotential theory. Our reformulation has the advantage that it greatly simplifies the expressions that need to be evaluated during the iterative determination of the pseudopotential, greatly increasing the computational efficiency. We demonstrate our new formalism by calculating the pseudopotential for the 3s valence electron of the Na atom, and in the subsequent paper, we show that pseudopotentials for molecules as complex as tetrahydrofuran can be calculated with our formalism in only a few seconds. Our reformulation also provides a clear geometric interpretation of how the constraint equations in the PK theory, which are required to obtain a unique solution, are themselves sufficient to calculate the pseudopotential.

  18. Separation efficiency of a hydrodynamic separator using a 3D computational fluid dynamics multiscale approach.

    PubMed

    Schmitt, Vivien; Dufresne, Matthieu; Vazquez, Jose; Fischer, Martin; Morin, Antoine

    2014-01-01

    The aim of this study is to investigate the use of computational fluid dynamics (CFD) to predict the solid separation efficiency of a hydrodynamic separator. The numerical difficulty concerns the discretization of the geometry to simulate both the global behavior and the local phenomena that occur near the screen. In this context, a CFD multiscale approach was used: a global model (at the scale of the device) is used to observe the hydrodynamic behavior within the device; a local model (portion of the screen) is used to determine the local phenomena that occur near the screen. The Eulerian-Lagrangian approach was used to model the particle trajectories in both models. The global model shows the influence of the particles' characteristics on the trapping efficiency. A high density favors the sedimentation. In contrast, particles with small densities (1,040 kg/m(3)) are steered by the hydrodynamic behavior and can potentially be trapped by the separator. The use of the local model allows us to observe the particle trajectories near the screen. A comparison between two types of screens (perforated plate vs expanded metal) highlights the turbulent effects created by the shape of the screen.

  19. Computationally-efficient stochastic cluster dynamics method for modeling damage accumulation in irradiated materials

    SciTech Connect

    Hoang, Tuan L.; Marian, Jaime; Bulatov, Vasily V.; Hosemann, Peter

    2015-11-01

    An improved version of a recently developed stochastic cluster dynamics (SCD) method (Marian and Bulatov, 2012) [6] is introduced as an alternative to rate theory (RT) methods for solving coupled ordinary differential equation (ODE) systems for irradiation damage simulations. SCD circumvents by design the curse of dimensionality of the variable space that renders traditional ODE-based RT approaches inefficient when handling complex defect populations comprising multiple (more than two) defect species. Several improvements introduced here enable efficient and accurate simulations of irradiated materials up to realistic (high) damage doses characteristic of next-generation nuclear systems. The first improvement is a procedure for efficiently updating the defect reaction network and event selection in the context of a dynamically expanding reaction network. Next is a novel implementation of the τ-leaping method that speeds up SCD simulations by advancing the state of the reaction network in large time increments when appropriate. Lastly, a volume rescaling procedure is introduced to control the computational complexity of the expanding reaction network through occasional reductions of the defect population while maintaining accurate statistics. The enhanced SCD method is then applied to model defect cluster accumulation in iron thin films subjected to triple ion-beam (Fe^3+, He^+ and H^+) irradiations, for which standard RT or spatially resolved kinetic Monte Carlo simulations are prohibitively expensive.
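
    The τ-leaping ingredient can be illustrated generically. The toy sketch below advances a single clustering reaction A + A -> B by Poisson-distributed batches of events per time increment instead of one event at a time; the SCD implementation applies the same idea to a dynamically expanding network of defect reactions, and all rates here are made up:

```python
import numpy as np

def tau_leap(n_a, n_b, rate, tau, steps, rng):
    """Generic tau-leaping for the toy clustering reaction A + A -> B:
    fire a Poisson-distributed number of events per large time increment
    instead of one event per step, as in the SCD speed-up described above."""
    for _ in range(steps):
        propensity = rate * n_a * (n_a - 1) / 2.0   # pair-combination kinetics
        k = rng.poisson(propensity * tau)           # events in this leap
        k = min(k, n_a // 2)                        # cannot consume more A than exists
        n_a -= 2 * k
        n_b += k
    return n_a, n_b

rng = np.random.default_rng(42)
print(tau_leap(n_a=10_000, n_b=0, rate=1e-5, tau=0.1, steps=100, rng=rng))
```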