Science.gov

Sample records for computationally efficient cad

  1. Computing Mass Properties From AutoCAD

    NASA Technical Reports Server (NTRS)

    Jones, A.

    1990-01-01

    Mass properties of structures computed from data in drawings. AutoCAD to Mass Properties (ACTOMP) computer program developed to facilitate quick calculations of mass properties of structures containing many simple elements in such complex configurations as trusses or sheet-metal containers. Mathematically modeled in AutoCAD or compatible computer-aided design (CAD) system in minutes by use of three-dimensional elements. Written in Microsoft Quick-Basic (Version 2.0).
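
    The abstract describes accumulating simple 3-D elements into overall mass properties. The sketch below only illustrates that accumulation step and is not ACTOMP itself; the Element fields, units, and example values are assumptions for illustration.

    ```python
    # Illustrative only (not ACTOMP): accumulate mass and center of mass
    # over a list of simple solid elements taken from a CAD model.
    from dataclasses import dataclass

    @dataclass
    class Element:
        volume: float      # cm^3, from the CAD solid
        density: float     # g/cm^3, material property
        centroid: tuple    # (x, y, z) in model coordinates, cm

    def total_mass_properties(elements):
        """Return (total mass, overall center of mass)."""
        total_mass = 0.0
        moment = [0.0, 0.0, 0.0]
        for e in elements:
            m = e.volume * e.density
            total_mass += m
            for i in range(3):
                moment[i] += m * e.centroid[i]
        return total_mass, tuple(mi / total_mass for mi in moment)

    # Two aluminum bars of a small truss (hypothetical numbers):
    bars = [Element(100.0, 2.7, (0.0, 0.0, 0.0)),
            Element(100.0, 2.7, (10.0, 0.0, 0.0))]
    print(total_mass_properties(bars))   # -> (540.0, (5.0, 0.0, 0.0))
    ```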

  2. A CAD (Classroom Assessment Design) of a Computer Programming Course

    ERIC Educational Resources Information Center

    Hawi, Nazir S.

    2012-01-01

    This paper presents a CAD (classroom assessment design) of an entry-level undergraduate computer programming course "Computer Programming I". CAD has been the product of a long experience in teaching computer programming courses including teaching "Computer Programming I" 22 times. Each semester, CAD is evaluated and modified for the subsequent…

  3. Computer-aided diagnosis (CAD) for colonoscopy

    NASA Astrophysics Data System (ADS)

    Gu, Jia; Poirson, Allen

    2007-03-01

    Colorectal cancer is the second leading cause of cancer deaths, and ranks third for new cancer cases and cancer mortality for both men and women. However, its death rate can be dramatically reduced by appropriate treatment when early detection is available. The purpose of colonoscopy is to identify and assess the severity of lesions, which may be flat or protruding. Due to the subjective nature of the examination, colonoscopic proficiency is highly variable and dependent upon the colonoscopist's knowledge and experience. An automated image processing system providing an objective, rapid, and inexpensive analysis of video from a standard colonoscope could provide a valuable tool for screening and diagnosis. In this paper, we present the design, functionality and preliminary results of our Computer-Aided-Diagnosis (CAD) system for colonoscopy - ColonoCAD™. ColonoCAD is a complex multi-sensor, multi-data and multi-algorithm image processing system, incorporating data management and visualization, video quality assessment and enhancement, calibration, multiple view based reconstruction, feature extraction and classification. As this is a new field in medical image processing, our hope is that this paper will provide the framework to encourage and facilitate collaboration and discussion between industry, academia, and medical practitioners.

  4. Computer-aided-diagnosis (CAD) for colposcopy

    NASA Astrophysics Data System (ADS)

    Lange, Holger; Ferris, Daron G.

    2005-04-01

    Uterine cervical cancer is the second most common cancer among women worldwide. Colposcopy is a diagnostic method, whereby a physician (colposcopist) visually inspects the lower genital tract (cervix, vulva and vagina), with special emphasis on the subjective appearance of metaplastic epithelium comprising the transformation zone on the cervix. Cervical cancer precursor lesions and invasive cancer exhibit certain distinctly abnormal morphologic features. Lesion characteristics such as margin; color or opacity; blood vessel caliber, intercapillary spacing and distribution; and contour are considered by colposcopists to derive a clinical diagnosis. Clinicians and academia have suggested and shown proof of concept that automated image analysis of cervical imagery can be used for cervical cancer screening and diagnosis, having the potential to have a direct impact on improving women's health care and reducing associated costs. STI Medical Systems is developing a Computer-Aided-Diagnosis (CAD) system for colposcopy -- ColpoCAD. At the heart of ColpoCAD is a complex multi-sensor, multi-data and multi-feature image analysis system. A functional description is presented of the envisioned ColpoCAD system, broken down into: Modality Data Management System, Image Enhancement, Feature Extraction, Reference Database, and Diagnosis and directed Biopsies. The system design and development process of the image analysis system is outlined. The system design provides a modular and open architecture built on feature based processing. The core feature set includes the visual features used by colposcopists. This feature set can be extended to include new features introduced by new instrument technologies, like fluorescence and impedance, and any other plausible feature that can be extracted from the cervical data. Preliminary results of our research on detecting the three most important features: blood vessel structures, acetowhite regions and lesion margins are shown. As this is a new

  5. Preparing Students for Computer Aided Drafting (CAD). A Conceptual Approach.

    ERIC Educational Resources Information Center

    Putnam, A. R.; Duelm, Brian

    This presentation outlines guidelines for developing and implementing an introductory course in computer-aided drafting (CAD) that is geared toward secondary-level students. The first section of the paper, which deals with content identification and selection, includes lists of mechanical drawing and CAD competencies and a list of rationales for…

  6. CAD-centric Computation Management System for a Virtual TBM

    SciTech Connect

    Ramakanth Munipalli; K.Y. Szema; P.Y. Huang; C.M. Rowell; A. Ying; M. Abdou

    2011-05-03

    HyPerComp Inc., in research collaboration with TEXCEL, has set out to build a Virtual Test Blanket Module (VTBM) computational system to address the need in contemporary fusion research for simulating the integrated behavior of the blanket, divertor and plasma facing components in a fusion environment. Physical phenomena to be considered in a VTBM will include fluid flow, heat transfer, mass transfer, neutronics, structural mechanics and electromagnetics. We seek to integrate well-established (third-party) simulation software in the various disciplines mentioned above. The integrated modeling process will enable user groups to interoperate using a common modeling platform at various stages of the analysis. Since CAD is at the core of the simulation (as opposed to computational meshes, which are different for each problem), VTBM will have a well-developed CAD interface governing CAD model editing, cleanup, parameter extraction, model deformation (based on simulation), and CAD-based data interpolation. In Phase-I, we built the CAD-hub of the proposed VTBM and demonstrated its use in modeling a liquid breeder blanket module with coupled MHD and structural mechanics using HIMAG and ANSYS. A complete graphical user interface of the VTBM was created, which will form the foundation of any future development. Conservative data interpolation via CAD (as opposed to mesh-based transfer) and the regeneration of CAD models based upon computed deflections are among the other highlights of Phase-I activity.

  7. Introduction to CAD/Computers. High-Technology Training Module.

    ERIC Educational Resources Information Center

    Lockerby, Hugh

    This learning module for an eighth-grade introductory technology course is designed to help teachers introduce students to computer-assisted design (CAD) in a communications unit on graphics. The module contains a module objective and five specific objectives, a content outline, suggested instructor methodology, student activities, a list of six…

  8. Role of computer aided detection (CAD) integration: case study with meniscal and articular cartilage CAD applications

    NASA Astrophysics Data System (ADS)

    Safdar, Nabile; Ramakrishna, Bharath; Saiprasad, Ganesh; Siddiqui, Khan; Siegel, Eliot

    2008-03-01

    Knee-related injuries involving the meniscal or articular cartilage are common and require accurate diagnosis and surgical intervention when appropriate. With proper techniques and experience, confidence in detection of meniscal tears and articular cartilage abnormalities can be quite high. However, for radiologists without musculoskeletal training, diagnosis of such abnormalities can be challenging. In this paper, the potential of improving diagnosis through integration of computer-aided detection (CAD) algorithms for automatic detection of meniscal tears and articular cartilage injuries of the knees is studied. An integrated approach in which the results of algorithms evaluating either meniscal tears or articular cartilage injuries provide feedback to each other is believed to improve the diagnostic accuracy of the individual CAD algorithms due to the known association between abnormalities in these distinct anatomic structures. The correlation between meniscal tears and articular cartilage injuries is exploited to improve the final diagnostic results of the individual algorithms. Preliminary results from the integrated application are encouraging and more comprehensive tests are being planned.

  9. Computer Aided Detection (CAD) Systems for Mammography and the Use of GRID in Medicine

    NASA Astrophysics Data System (ADS)

    Lauria, Adele

    It is well known that the most effective way to defeat breast cancer is early detection, as surgery and medical therapies are more efficient when the disease is diagnosed at an early stage. The principal diagnostic technique for breast cancer detection is X-ray mammography. Screening programs have been introduced in many European countries to invite women to have periodic radiological breast examinations. In such screenings, radiologists are often required to examine large numbers of mammograms with a double reading, that is, two radiologists examine the images independently and then compare their results. In this way an increment in sensitivity (the rate of correctly identified images with a lesion) of up to 15% is obtained [1,2]. In most radiological centres, it is a rarity to find two radiologists to examine each report. In recent years different Computer Aided Detection (CAD) systems have been developed as a support to radiologists working in mammography: one may hope that the "second opinion" provided by CAD might represent a lower cost alternative to improve the diagnosis. At present, four CAD systems have obtained FDA approval in the USA. Studies [3,4] show an increment in sensitivity when CAD systems are used. Freer and Ulissey in 2001 [5] demonstrated that the use of a commercial CAD system (ImageChecker M1000, R2 Technology) increases the number of cancers detected by up to 19.5% with little increment in recall rate. Ciatto et al. [5], in a study simulating a double reading with a commercial CAD system (SecondLook), showed a moderate increment in sensitivity while reducing specificity (the rate of correctly identified images without a lesion). Notwithstanding these optimistic results, there is an ongoing debate to define the advantages of the use of CAD as second reader: the main limits underlined, e.g., by Nishikawa [6] are that retrospective studies are considered much too optimistic and that clinical studies must be performed to demonstrate a statistically

  10. Creation of Anatomically Accurate Computer-Aided Design (CAD) Solid Models from Medical Images

    NASA Technical Reports Server (NTRS)

    Stewart, John E.; Graham, R. Scott; Samareh, Jamshid A.; Oberlander, Eric J.; Broaddus, William C.

    1999-01-01

    Most surgical instrumentation and implants used in the world today are designed with sophisticated Computer-Aided Design (CAD)/Computer-Aided Manufacturing (CAM) software. This software automates the mechanical development of a product from its conceptual design through manufacturing. CAD software also provides a means of manipulating solid models prior to Finite Element Modeling (FEM). Few surgical products are designed in conjunction with accurate CAD models of human anatomy because of the difficulty with which these models are created. We have developed a novel technique that creates anatomically accurate, patient specific CAD solids from medical images in a matter of minutes.

  11. Computer-aided design and computer-aided manufacture (CAD/CAM) system for construction of spinal orthosis for patients with adolescent idiopathic scoliosis.

    PubMed

    Wong, M S

    2011-01-01

    Spinal orthoses are commonly prescribed to patients with moderate adolescent idiopathic scoliosis (AIS) for prevention of further curve deterioration. In the conventional manufacturing method, plaster bandages are used to obtain the patient's body contour and then the plaster cast is rectified manually. With a computer-aided design and computer-aided manufacture (CAD/CAM) system, a series of automated processes from body scanning to digital rectification and milling of the positive model can be performed in a fast and accurate fashion. The purpose of this manuscript is to introduce the application of a CAD/CAM system to the construction of spinal orthoses for patients with AIS. Based on evidence within the literature, the CAD/CAM method can achieve similar clinical outcomes but with higher efficiency than the conventional fabrication method. Therefore, the CAD/CAM method should be considered a substitute for the conventional method in the fabrication of spinal orthoses for patients with AIS.

  12. Computer Use and CAD in Assisting Schools in the Creation of Facilities.

    ERIC Educational Resources Information Center

    Beach, Robert H.; Essex, Nathan

    1987-01-01

    Computer-aided design (CAD) programs are powerful drafting tools, but are also able to assist with many other facility planning functions. Describes the hardware, software, and the learning process that led to understanding the CAD software at the University of Alabama. (MLF)

  13. A Multidisciplinary Research Team Approach to Computer-Aided Drafting (CAD) System Selection. Final Report.

    ERIC Educational Resources Information Center

    Franken, Ken; And Others

    A multidisciplinary research team was assembled to review existing computer-aided drafting (CAD) systems for the purpose of enabling staff in the Design Drafting Department at Linn Technical College (Missouri) to select the best system out of the many CAD systems in existence. During the initial stage of the evaluation project, researchers…

  14. Role of Computer Aided Diagnosis (CAD) in the detection of pulmonary nodules on 64 row multi detector computed tomography

    PubMed Central

    Prakashini, K; Babu, Satish; Rajgopal, KV; Kokila, K Raja

    2016-01-01

    Aims and Objectives: To determine the overall performance of an existing CAD algorithm with thin-section computed tomography (CT) in the detection of pulmonary nodules and to evaluate detection sensitivity at a varying range of nodule density, size, and location. Materials and Methods: A cross-sectional prospective study was conducted on 20 patients with 322 suspected nodules who underwent diagnostic chest imaging using 64-row multi-detector CT. The examinations were evaluated on reconstructed images of 1.4 mm thickness and 0.7 mm interval. Detection of pulmonary nodules, initially by a radiologist with 2 years' experience (RAD) and later by CAD lung nodule software, was assessed. Then, CAD nodule candidates were accepted or rejected accordingly. Detected nodules were classified based on their size, density, and location. The performance of the RAD and CAD system was compared with the gold standard, that is, true nodules confirmed by consensus of senior RAD and CAD together. The overall sensitivity and false-positive (FP) rate of the CAD software was calculated. Observations and Results: Of the 322 suspected nodules, 221 were classified as true nodules on the consensus of senior RAD and CAD together. Of the true nodules, the RAD detected 206 (93.2%) and the CAD detected 202 (91.4%). CAD and RAD together picked up more nodules than either CAD or RAD alone. Overall sensitivity for nodule detection with the CAD program was 91.4%, and FP detection per patient was 5.5%. The CAD showed comparatively higher sensitivity for nodules of size 4–10 mm (93.4%) and nodules in hilar (100%) and central (96.5%) locations when compared to the RAD's performance. Conclusion: CAD performance was high in detecting pulmonary nodules, including small and low-density nodules. CAD, even with a relatively high FP rate, assists and improves the RAD's performance as a second reader, especially for nodules located in the central and hilar regions and for small nodules, while saving the RAD's time. PMID:27578931
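
    As a rough illustration of how sensitivity and false-positive figures like those above are obtained, the hypothetical sketch below scores CAD marks against a consensus reference standard; the data structures and matching criterion are assumptions, not taken from the paper.

    ```python
    # Hypothetical scoring of CAD marks against a consensus reference standard.
    def score_study(cad_marks, reference_nodules, match):
        """match(mark, nodule) -> True when a CAD mark hits a reference nodule."""
        hits = [any(match(m, n) for m in cad_marks) for n in reference_nodules]
        true_positives = sum(hits)
        false_positives = sum(1 for m in cad_marks
                              if not any(match(m, n) for n in reference_nodules))
        sensitivity = true_positives / len(reference_nodules)
        return sensitivity, false_positives

    # Per-patient FP rate is the mean of false_positives over all studies;
    # overall sensitivity pools true positives over all reference nodules.
    ```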

  15. Surgical retained foreign object (RFO) prevention by computer aided detection (CAD)

    NASA Astrophysics Data System (ADS)

    Marentis, Theodore C.; Hadjiiyski, Lubomir; Chaudhury, Amrita R.; Rondon, Lucas; Chronis, Nikolaos; Chan, Heang-Ping

    2014-03-01

    Surgical Retained Foreign Objects (RFOs) cause significant morbidity and mortality. They are associated with $1.5 billion annually in preventable medical costs. The detection accuracy of radiographs for RFOs is a mediocre 59%. We address the RFO problem with two complementary technologies: a three-dimensional (3D) Gossypiboma Micro Tag (μTag) that improves the visibility of RFOs on radiographs, and a Computer Aided Detection (CAD) system that detects the μTag. The 3D geometry of the μTag produces a similar 2D depiction on radiographs regardless of its orientation in the human body and ensures accurate detection by a radiologist and the CAD. We create a database of cadaveric radiographs with the μTag and other common man-made objects positioned randomly. We develop the CAD modules that include preprocessing, μTag enhancement, labeling, segmentation, feature analysis, classification and detection. The CAD can operate in a high specificity mode for the surgeon to allow for seamless workflow integration and function as a first reader. The CAD can also operate in a high sensitivity mode for the radiologist to ensure accurate detection. On a data set of 346 cadaveric radiographs, the CAD system performed at a high specificity (85.5% sensitivity, 0.02 FPs/image) for the OR and a high sensitivity (96% sensitivity, 0.73 FPs/image) for the radiologists.

  16. Adjoint Sensitivity Computations for an Embedded-Boundary Cartesian Mesh Method and CAD Geometry

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.

    2006-01-01

    Cartesian-mesh methods are perhaps the most promising approach for addressing the issues of flow solution automation for aerodynamic design problems. In these methods, the discretization of the wetted surface is decoupled from that of the volume mesh. This not only enables fast and robust mesh generation for geometry of arbitrary complexity, but also facilitates access to geometry modeling and manipulation using parametric Computer-Aided Design (CAD) tools. Our goal is to combine the automation capabilities of Cartesian methods with an efficient computation of design sensitivities. We address this issue using the adjoint method, where the computational cost of the design sensitivities, or objective function gradients, is essentially independent of the number of design variables. In previous work, we presented an accurate and efficient algorithm for the solution of the adjoint Euler equations discretized on Cartesian meshes with embedded, cut-cell boundaries. Novel aspects of the algorithm included the computation of surface shape sensitivities for triangulations based on parametric-CAD models and the linearization of the coupling between the surface triangulation and the cut-cells. The objective of the present work is to extend our adjoint formulation to problems involving general shape changes. Central to this development is the computation of volume-mesh sensitivities to obtain a reliable approximation of the objective function gradient. Motivated by the success of mesh-perturbation schemes commonly used in body-fitted unstructured formulations, we propose an approach based on a local linearization of a mesh-perturbation scheme similar to the spring analogy. This approach circumvents most of the difficulties that arise due to non-smooth changes in the cut-cell layer as the boundary shape evolves and provides a consistent approximation to the exact gradient of the discretized objective function. A detailed gradient accuracy study is presented to verify our approach.
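
    The spring-analogy-like mesh perturbation mentioned above (propagating boundary shape changes into the volume mesh) can be illustrated with a highly simplified sketch. The 2-D setting, uniform spring stiffness, and Jacobi relaxation below are assumptions for illustration only, not the authors' algorithm.

    ```python
    # Highly simplified spring-analogy mesh perturbation (illustration only):
    # interior nodes relax toward the average displacement of their neighbors,
    # with prescribed displacements held fixed on boundary nodes.
    import numpy as np

    def spring_perturb(coords, edges, boundary_disp, n_iters=200):
        """coords: (N, 2) node coordinates; edges: iterable of (i, j) pairs;
        boundary_disp: {node index: (dx, dy)} prescribed displacements."""
        disp = np.zeros_like(coords, dtype=float)
        for i, d in boundary_disp.items():
            disp[i] = d
        neighbors = {i: [] for i in range(len(coords))}
        for i, j in edges:
            neighbors[i].append(j)
            neighbors[j].append(i)
        for _ in range(n_iters):                     # Jacobi-style relaxation
            new = disp.copy()
            for i in range(len(coords)):
                if i in boundary_disp or not neighbors[i]:
                    continue
                new[i] = np.mean([disp[j] for j in neighbors[i]], axis=0)
            disp = new
        return coords + disp
    ```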

  17. An Analysis of Computer Aided Design (CAD) Packages Used at MSFC for the Recent Initiative to Integrate Engineering Activities

    NASA Technical Reports Server (NTRS)

    Smith, Leigh M.; Parker, Nelson C. (Technical Monitor)

    2002-01-01

    This paper analyzes the use of Computer Aided Design (CAD) packages at NASA's Marshall Space Flight Center (MSFC). It examines the effectiveness of recent efforts to standardize CAD practices across MSFC engineering activities. An assessment of the roles played by management, designers, analysts, and manufacturers in this initiative will be explored. Finally, solutions are presented for better integration of CAD across MSFC in the future.

  18. CAD: Designs on Business.

    ERIC Educational Resources Information Center

    Milburn, Ken

    1988-01-01

    Provides a general review of the field of Computer-Aided Design Software including specific reviews of "Autosketch," "Generic CADD," "Drafix 1 Plus," "FastCAD," and "Autocad Release 9." Brief articles include "Blueprint for Generation," "CAD for Every Department," "Ideas Sketched in Glass," "CAD on the MAC," and "A CAD Package Sampler." (CW)

  19. Project Integration Architecture (PIA) and Computational Analysis Programming Interface (CAPRI) for Accessing Geometry Data from CAD Files

    NASA Technical Reports Server (NTRS)

    Benyo, Theresa L.

    2002-01-01

    Integration of a supersonic inlet simulation with a computer aided design (CAD) system is demonstrated. The integration is performed using the Project Integration Architecture (PIA). PIA provides a common environment for wrapping many types of applications. Accessing geometry data from CAD files is accomplished by incorporating appropriate function calls from the Computational Analysis Programming Interface (CAPRI). CAPRI is a CAD vendor neutral programming interface that aids in acquiring geometry data directly from CAD files. The benefits of wrapping a supersonic inlet simulation into PIA using CAPRI are: direct access to geometry data, accurate capture of geometry data, automatic conversion of data units, CAD vendor neutral operation, and on-line interactive history capture. This paper describes the PIA and the CAPRI wrapper and details the supersonic inlet simulation demonstration.

  20. Teaching for CAD Expertise

    ERIC Educational Resources Information Center

    Chester, Ivan

    2007-01-01

    CAD (Computer Aided Design) has now become an integral part of Technology Education. The recent introduction of highly sophisticated, low-cost CAD software and CAM hardware capable of running on desktop computers has accelerated this trend. There is now quite widespread introduction of solid modeling CAD software into secondary schools but how…

  1. Computationally efficient control allocation

    NASA Technical Reports Server (NTRS)

    Durham, Wayne (Inventor)

    2001-01-01

    A computationally efficient method for calculating near-optimal solutions to the three-objective, linear control allocation problem is disclosed. The control allocation problem is that of distributing the effort of redundant control effectors to achieve some desired set of objectives. The problem is deemed linear if control effectiveness is affine with respect to the individual control effectors. The optimal solution is that which exploits the collective maximum capability of the effectors within their individual physical limits. Computational efficiency is measured by the number of floating-point operations required for solution. The method presented returned optimal solutions in more than 90% of the cases examined; non-optimal solutions returned by the method were typically much less than 1% different from optimal and the errors tended to become smaller than 0.01% as the number of controls was increased. The magnitude of the errors returned by the present method was much smaller than those that resulted from either pseudoinverse or cascaded generalized inverse solutions. The computational complexity of the method presented varied linearly with increasing numbers of controls; the number of required floating point operations increased from 5.5 to seven times faster than did the minimum-norm solution (the pseudoinverse), and at about the same rate as did the cascaded generalized inverse solution. The computational requirements of the method presented were much better than those of previously described facet-searching methods, which increase in proportion to the square of the number of controls.
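
    For context, the baseline the abstract compares against is the minimum-norm (pseudoinverse) allocator. The sketch below shows that baseline only, with clipping to effector limits; the effectiveness matrix and limits are made-up values, and this is not the patented method.

    ```python
    # Baseline minimum-norm allocator (for comparison; not the patented method).
    import numpy as np

    def pseudoinverse_allocation(B, d, u_min, u_max):
        """Solve B u = d in the minimum-norm sense, then clip to effector limits.
        B: (3, m) control-effectiveness matrix; d: desired 3-axis moments."""
        u = np.linalg.pinv(B) @ d        # minimum-norm (pseudoinverse) solution
        return np.clip(u, u_min, u_max)  # clipping loses optimality, which is why
                                         # cascaded and facet-search methods exist

    # Hypothetical example with four effectors:
    B = np.array([[1.0, 0.5, 0.0, -0.5],
                  [0.0, 1.0, 1.0,  0.0],
                  [0.2, 0.0, 0.3,  1.0]])
    d = np.array([0.4, 0.6, 0.2])
    print(pseudoinverse_allocation(B, d, -1.0, 1.0))
    ```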

  2. Selection and implementation of a computer aided design and drafting (CAD/D) system

    SciTech Connect

    Davis, J.P.

    1981-01-01

    Faced with very heavy workloads and limited engineering and graphics personnel, Transco opted for a computer-aided design and drafting system that can produce intelligent drawings, which have associated data bases that can be integrated with other graphical and nongraphical data bases to form comprehensive sets of data for construction projects. Because so much time was being spent in all phases of materials and inventory control, Transco decided to integrate materials-management capabilities into the CAD/D system. When a specific item of material is requested on the graphics equipment, the request triggers production of both the drawing and a materials list. Transco plans to extend its computer applications into mapping tasks as well.

  3. Computer-aided determination of occlusal contact points for dental 3-D CAD.

    PubMed

    Maruyama, Tomoaki; Nakamura, Yasuo; Hayashi, Toyohiko; Kato, Kazumasa

    2006-05-01

    Present dental CAD systems enable us to design functional occlusal tooth surfaces which harmonize with the patient's stomatognathic function. In order to avoid occlusal interferences during tooth excursions, currently available systems usually use the patient's functional occlusal impressions for the design of occlusal contact points. Previous interference-free design, however, has been done on a trial-and-error basis by using visual inspection. To improve this time-consuming procedure, this paper proposes a computer-aided system for assisting in the determination of the occlusal contact points by visualizing the appropriate regions of the opposing surface. The system can designate such regions from data on the opposing occlusal surfaces, and their relative movements can be simulated using a virtual articulator. Experiments for designing the crown of a lower first molar demonstrated that all contact points selected within the designated regions completely satisfied the required contact or separation during tooth excursions, confirming the effectiveness of our computer-aided procedure.

  4. Computer-Aided Design/Manufacturing (CAD/M) for high-speed interconnect

    NASA Astrophysics Data System (ADS)

    Santoski, N. F.

    1981-10-01

    The objective of the Computer-Aided Design/Manufacturing (CAD/M) for High-Speed Interconnect Program study was to assess techniques for design, analysis and fabrication of interconnect structures between high-speed logic ICs that are clocked in the 200 MHz to 5 GHz range. Interconnect structure models were investigated and integrated with existing device models. Design rules for interconnects were developed in terms of parameters that can be installed in software that is used for the design, analysis and fabrication of circuits. To implement these design rules in future software development, algorithms and software development techniques were defined. Major emphasis was on Printed Wiring Board and hybrid level circuits as opposed to monolithic chips. Various packaging schemes were considered, including controlled impedance lines in the 50 to 200 ohms range where needed. The design rules developed are generic in nature, in that various architecture classes and device technologies were considered.

  5. The computation of all plane/surface intersections for CAD/CAM applications

    NASA Technical Reports Server (NTRS)

    Hoitsma, D. H., Jr.; Roche, M.

    1984-01-01

    The problem of the computation and display of all intersections of a given plane with a rational bicubic surface patch for use on an interactive CAD/CAM system is examined. The general problem of calculating all intersections of a plane and a surface consisting of rational bicubic patches is reduced to the case of a single generic patch by applying a rejection algorithm which excludes all patches that do not intersect the plane. For each pertinent patch, the algorithm presented computes the intersection curves by locating an initial point on each curve and computing successive points on the curve using a tolerance step equation. A single cubic equation solver is used to compute the initial curve points lying on the boundary of a surface patch, and the method of resultants as applied to curve theory is used to determine critical points which, in turn, are used to locate initial points that lie on intersection curves which are in the interior of the patch. Examples are given to illustrate the ability of this algorithm to produce all intersection curves.
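
    The rejection step described above can be illustrated with a simple test based on the convex-hull property of a patch's control net: if every control point lies strictly on one side of the plane, the patch cannot intersect it. The sketch below assumes a non-rational (or positive-weight) bicubic patch given by 16 control points; it is an illustration, not the paper's implementation.

    ```python
    import numpy as np

    def patch_may_intersect(control_points, normal, offset, tol=1e-12):
        """Conservative rejection test for a bicubic patch against the plane
        n.x + c = 0. control_points: (16, 3) array of the patch control net.
        By the convex-hull property, if all signed distances share one sign,
        the patch cannot cross the plane."""
        s = np.asarray(control_points) @ np.asarray(normal) + offset
        return not (np.all(s > tol) or np.all(s < -tol))
    ```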

  6. Can computer-aided diagnosis (CAD) help radiologists find mammographically missed screening cancers?

    NASA Astrophysics Data System (ADS)

    Nishikawa, Robert M.; Giger, Maryellen L.; Schmidt, Robert A.; Papaioannou, John

    2001-06-01

    We present data from a pilot observer study whose goal is to design a study to test the hypothesis that computer-aided diagnosis (CAD) can improve radiologists' performance in reading screening mammograms. In a prospective evaluation of our computer detection schemes, we have analyzed over 12,000 clinical exams. Retrospective review of the negative screening mammograms for all cancer cases found an indication of the cancer in 23 of these negative cases. The computer found 54% of these in our prospective testing. We added normal exams to these cases to create a dataset of 75 cases. Four radiologists experienced in mammography read the cases and gave their BI-RADS assessment and their confidence that the patient should be called back for diagnostic mammography. They did so once reading the films only and a second time reading with the computer aid. Three radiologists had no change in area under the ROC curve (mean Az of 0.73) and one improved from 0.73 to 0.78, but this difference failed to reach statistical significance (p equals 0.23). These data are being used to plan a larger, more powerful study.

  7. Improvement of MS (multiple sclerosis) CAD (computer aided diagnosis) performance using C/C++ and computing engine in the graphical processing unit (GPU)

    NASA Astrophysics Data System (ADS)

    Suh, Joohyung; Ma, Kevin; Le, Anh

    2011-03-01

    Multiple Sclerosis (MS) is a disease which is caused by damaged myelin around axons of the brain and spinal cord. Currently, MR imaging is used for diagnosis, but it is highly variable and time-consuming since lesion detection and estimation of lesion volume are performed manually. For this reason, we developed a CAD (Computer Aided Diagnosis) system which would assist segmentation of MS lesions to facilitate the physician's diagnosis. The MS CAD system utilizes the K-NN (k-nearest neighbor) algorithm to detect and segment lesion volume on a voxel basis. The prototype MS CAD system was developed under the MATLAB environment. Currently, the MS CAD system consumes a huge amount of time to process data. In this paper we present the development of a second version of the MS CAD system which has been converted into C/C++ in order to take advantage of the GPU (Graphical Processing Unit), which will provide parallel computation. With the realization of C/C++ and utilization of the GPU, we expect to cut running time drastically. The paper investigates the conversion from MATLAB to C/C++ and the utilization of a high-end GPU for parallel computing of data to improve the algorithm performance of the MS CAD.
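
    A minimal sketch of the kind of voxel-wise K-NN classification described above follows; the feature choice, k value, and use of scikit-learn are assumptions for illustration, not the paper's MATLAB/C++ implementation.

    ```python
    # Illustrative K-NN voxel classifier in the spirit of the abstract
    # (feature choice and k are assumptions, not taken from the paper).
    from sklearn.neighbors import KNeighborsClassifier

    def train_knn(train_features, train_labels, k=5):
        """train_features: (n_voxels, n_features), e.g. multi-contrast MR
        intensities per voxel; train_labels: 1 = lesion, 0 = background."""
        clf = KNeighborsClassifier(n_neighbors=k)
        clf.fit(train_features, train_labels)
        return clf

    def segment(clf, volume_features, shape):
        """Classify every voxel and reshape back into a label volume."""
        labels = clf.predict(volume_features)
        return labels.reshape(shape)
    ```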

  8. Revision of Electro-Mechanical Drafting Program to Include CAD/D (Computer-Aided Drafting/Design). Final Report.

    ERIC Educational Resources Information Center

    Snyder, Nancy V.

    North Seattle Community College decided to integrate computer-aided design/drafting (CAD/D) into its Electro-Mechanical Drafting Program. This choice necessitated a redefinition of the program through new curriculum and course development. To initiate the project, a new industrial advisory council was formed. Major electronic and recruiting firms…

  9. Development of problem-oriented software packages for numerical studies and computer-aided design (CAD) of gyrotrons

    NASA Astrophysics Data System (ADS)

    Damyanova, M.; Sabchevski, S.; Zhelyazkov, I.; Vasileva, E.; Balabanova, E.; Dankov, P.; Malinov, P.

    2016-03-01

    Gyrotrons are the most powerful sources of coherent CW (continuous wave) radiation in the frequency range situated between the long-wavelength edge of the infrared light (far-infrared region) and the microwaves, i.e., in the region of the electromagnetic spectrum which is usually called the THz-gap (or T-gap), since the output power of other devices (e.g., solid-state oscillators) operating in this interval is by several orders of magnitude lower. In the recent years, the unique capabilities of the sub-THz and THz gyrotrons have opened the road to many novel and future prospective applications in various physical studies and advanced high-power terahertz technologies. In this paper, we present the current status and functionality of the problem-oriented software packages (most notably GYROSIM and GYREOSS) used for numerical studies, computer-aided design (CAD) and optimization of gyrotrons for diverse applications. They consist of a hierarchy of codes specialized to modelling and simulation of different subsystems of the gyrotrons (EOS, resonant cavity, etc.) and are based on adequate physical models, efficient numerical methods and algorithms.

  10. Efficient Universal Blind Quantum Computation

    NASA Astrophysics Data System (ADS)

    Giovannetti, Vittorio; Maccone, Lorenzo; Morimae, Tomoyuki; Rudolph, Terry G.

    2013-12-01

    We give a cheat sensitive protocol for blind universal quantum computation that is efficient in terms of computational and communication resources: it allows one party to perform an arbitrary computation on a second party's quantum computer without revealing either which computation is performed, or its input and output. The first party's computational capabilities can be extremely limited: she must only be able to create and measure single-qubit superposition states. The second party is not required to use measurement-based quantum computation. The protocol requires the (optimal) exchange of O(J log2(N)) single-qubit states, where J is the computational depth and N is the number of qubits needed for the computation.

  11. CAD/CAM/CNC.

    ERIC Educational Resources Information Center

    Domermuth, Dave; And Others

    1996-01-01

    Includes "Quick Start CNC (computer numerical control) with a Vacuum Filter and Laminated Plastic" (Domermuth); "School and Industry Cooperate for Mutual Benefit" (Buckler); and "CAD (computer-assisted drafting) Careers--What Professionals Have to Say" (Skinner). (JOW)

  12. CAD/CAE Integration Enhanced by New CAD Services Standard

    NASA Technical Reports Server (NTRS)

    Claus, Russell W.

    2002-01-01

    A Government-industry team led by the NASA Glenn Research Center has developed a computer interface standard for accessing data from computer-aided design (CAD) systems. The Object Management Group, an international computer standards organization, has adopted this CAD services standard. The new standard allows software (e.g., computer-aided engineering (CAE) and computer-aided manufacturing software) to access multiple CAD systems through one programming interface. The interface is built on top of a distributed computing system called the Common Object Request Broker Architecture (CORBA). CORBA allows the CAD services software to operate in a distributed, heterogeneous computing environment.

  13. Computer Graphic Design Using Auto-CAD and Plug Nozzle Research

    NASA Technical Reports Server (NTRS)

    Rogers, Rayna C.

    2004-01-01

    The purpose of creating computer-generated images varies widely. They can be used for computational fluid dynamics (CFD), or as a blueprint for designing parts. The schematic that I will be working on this summer will be used to create nozzles that are part of a larger system. At this phase in the project, the nozzles needed for the systems have been fabricated. One part of my mission is to create both three-dimensional and two-dimensional models of the nozzles in AutoCAD 2002. The research on plug nozzles will allow me to have a better understanding of how they assist in the thrust needed for a missile to take off. NASA and the United States military are working together to develop a new design concept. On most missiles a convergent-divergent nozzle is used to create thrust. However, the two are looking into different concepts for the nozzle. The standard convergent-divergent nozzle forces a mixture of combustible fluids and air through a smaller area in comparison to where the combination was mixed. Once it passes through the smaller area, known as A8, it comes out the end of the nozzle, which is larger than the first, known as area A9. This creates enough thrust for the mechanism, whether it is an F-18 fighter jet or a missile. The A9 section of the convergent-divergent nozzle has a mechanism that controls how large A9 can be. This is needed because the pressure of the air coming out of the nozzle must be equal to the ambient pressure; otherwise there will be a loss of performance in the machine. The plug nozzle, however, does not need to have an A9 that can vary. When the airflow comes out, it can automatically sense what the ambient pressure is and will adjust accordingly. The objective of this design is to create a plug nozzle that is not as complicated mechanically as its counterpart, the convergent-divergent nozzle.

  14. IGIS (Interactive Geologic Interpretation System) computer-aided photogeologic mapping with image processing, graphics and CAD/CAM capabilities

    SciTech Connect

    McGuffie, B.A.; Johnson, L.F.; Alley, R.E.; Lang, H.R.

    1989-10-01

    Advances in computer technology are changing the way geologists integrate and use data. Although many geoscience disciplines are absolutely dependent upon computer processing, photogeological and map interpretation computer procedures are just now being developed. Historically, geologists collected data in the field and mapped manually on a topographic map or aerial photographic base. New software called the Interactive Geologic Interpretation System (IGIS) is being developed at the Jet Propulsion Laboratory (JPL) within the National Aeronautics and Space Administration (NASA)-funded Multispectral Analysis of Sedimentary Basins Project. To complement conventional geological mapping techniques, Landsat Thematic Mapper (TM) or other digital remote sensing image data and co-registered digital elevation data are combined using computer imaging, graphics, and CAD/CAM techniques to provide tools for photogeologic interpretation, strike/dip determination, cross section construction, stratigraphic section measurement, topographic slope measurement, terrain profile generation, rotatable 3-D block diagram generation, and seismic analysis.

  15. Evaluation of Five Microcomputer CAD Packages.

    ERIC Educational Resources Information Center

    Leach, James A.

    1987-01-01

    Discusses the similarities, differences, advanced features, applications and number of users of five microcomputer computer-aided design (CAD) packages. Included are: "AutoCAD (V.2.17)"; "CADKEY (V.2.0)"; "CADVANCE (V.1.0)"; "Super MicroCAD"; and "VersaCAD Advanced (V.4.00)." Describes the evaluation of the packages and makes recommendations for…

  16. Assessment of the Incremental Benefit of Computer-Aided Detection (CAD) for Interpretation of CT Colonography by Experienced and Inexperienced Readers

    PubMed Central

    Boone, Darren; Mallett, Susan; McQuillan, Justine; Taylor, Stuart A.; Altman, Douglas G.; Halligan, Steve

    2015-01-01

    Objectives To quantify the incremental benefit of computer-assisted-detection (CAD) for polyps, for inexperienced readers versus experienced readers of CT colonography. Methods 10 inexperienced and 16 experienced radiologists interpreted 102 colonography studies unassisted and with CAD utilised in a concurrent paradigm. They indicated any polyps detected on a study sheet. Readers’ interpretations were compared against a ground-truth reference standard: 46 studies were normal and 56 had at least one polyp (132 polyps in total). The primary study outcome was the difference in CAD net benefit (a combination of change in sensitivity and change in specificity with CAD, weighted towards sensitivity) for detection of patients with polyps. Results Inexperienced readers’ per-patient sensitivity rose from 39.1% to 53.2% with CAD and specificity fell from 94.1% to 88.0%, both statistically significant. Experienced readers’ sensitivity rose from 57.5% to 62.1% and specificity fell from 91.0% to 88.3%, both non-significant. Net benefit with CAD assistance was significant for inexperienced readers but not for experienced readers: 11.2% (95%CI 3.1% to 18.9%) versus 3.2% (95%CI -1.9% to 8.3%) respectively. Conclusions Concurrent CAD resulted in a significant net benefit when used by inexperienced readers to identify patients with polyps by CT colonography. The net benefit was nearly four times the magnitude of that observed for experienced readers. Experienced readers did not benefit significantly from concurrent CAD. PMID:26355745
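
    The primary outcome above is a net benefit combining the change in sensitivity and the change in specificity, weighted toward sensitivity. The exact weighting is not stated in the abstract, so the 0.8/0.2 split in this sketch is an assumption used only to illustrate the calculation.

    ```python
    def net_benefit(sens_unaided, sens_cad, spec_unaided, spec_cad, weight=0.8):
        """Weighted combination of sensitivity and specificity changes.
        The 0.8/0.2 weighting toward sensitivity is an assumption for
        illustration; the paper's exact weighting is not given in the abstract."""
        return (weight * (sens_cad - sens_unaided)
                + (1.0 - weight) * (spec_cad - spec_unaided))

    # Inexperienced readers from the abstract:
    print(net_benefit(0.391, 0.532, 0.941, 0.880))
    # ~0.10 with this assumed weighting (the paper reports 11.2% with its own)
    ```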

  17. CAD for small hydro projects

    SciTech Connect

    Bishop, N.A. Jr. )

    1994-04-01

    Over the past decade, computer-aided design (CAD) has become a practical and economical design tool. Today, specifying CAD hardware and software is relatively easy once you know what the design requirements are. But finding experienced CAD professionals is often more difficult. Most CAD users have only two or three years of design experience; more experienced design personnel are frequently not CAD literate. However, effective use of CAD can be the key to lowering design costs and improving design quality--a quest familiar to every manager and designer. By emphasizing computer-aided design literacy at all levels of the firm, a Canadian joint-venture company that specializes in engineering small hydroelectric projects has cut costs, become more productive and improved design quality. This article describes how they did it.

  18. Efficient computation of optimal actions.

    PubMed

    Todorov, Emanuel

    2009-07-14

    Optimal choice of actions is a fundamental problem relevant to fields as diverse as neuroscience, psychology, economics, computer science, and control engineering. Despite this broad relevance the abstract setting is similar: we have an agent choosing actions over time, an uncertain dynamical system whose state is affected by those actions, and a performance criterion that the agent seeks to optimize. Solving problems of this kind remains hard, in part, because of overly generic formulations. Here, we propose a more structured formulation that greatly simplifies the construction of optimal control laws in both discrete and continuous domains. An exhaustive search over actions is avoided and the problem becomes linear. This yields algorithms that outperform Dynamic Programming and Reinforcement Learning, and thereby solve traditional problems more efficiently. Our framework also enables computations that were not possible before: composing optimal control laws by mixing primitives, applying deterministic methods to stochastic systems, quantifying the benefits of error tolerance, and inferring goals from behavioral data via convex optimization. Development of a general class of easily solvable problems tends to accelerate progress--as linear systems theory has done, for example. Our framework may have similar impact in fields where optimal choice of actions is relevant.
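
    A minimal sketch of the linear ("desirability") formulation described above, for a discrete first-exit problem, follows; the passive dynamics P, state costs q, and iteration count are assumptions, and this is only the simplest discrete instance of the framework.

    ```python
    # Sketch of the linear Bellman iteration for a discrete first-exit problem:
    # the desirability z = exp(-v) satisfies a linear relation, so no search
    # over actions is needed. P and q below are placeholders.
    import numpy as np

    def cost_to_go(P, q, terminal, n_iters=1000):
        """P: (n, n) passive transition probabilities; q: (n,) state costs;
        terminal: boolean mask of absorbing goal states.
        Iterates z <- exp(-q) * (P @ z) on non-terminal states; the optimal
        cost-to-go is v = -log z."""
        z = np.ones(len(q))
        z[terminal] = np.exp(-q[terminal])
        G = np.exp(-q)
        for _ in range(n_iters):
            z_new = G * (P @ z)
            z_new[terminal] = np.exp(-q[terminal])   # hold absorbing states fixed
            z = z_new
        return -np.log(z)
    ```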

  19. Efficient computation of optimal actions

    PubMed Central

    Todorov, Emanuel

    2009-01-01

    Optimal choice of actions is a fundamental problem relevant to fields as diverse as neuroscience, psychology, economics, computer science, and control engineering. Despite this broad relevance the abstract setting is similar: we have an agent choosing actions over time, an uncertain dynamical system whose state is affected by those actions, and a performance criterion that the agent seeks to optimize. Solving problems of this kind remains hard, in part, because of overly generic formulations. Here, we propose a more structured formulation that greatly simplifies the construction of optimal control laws in both discrete and continuous domains. An exhaustive search over actions is avoided and the problem becomes linear. This yields algorithms that outperform Dynamic Programming and Reinforcement Learning, and thereby solve traditional problems more efficiently. Our framework also enables computations that were not possible before: composing optimal control laws by mixing primitives, applying deterministic methods to stochastic systems, quantifying the benefits of error tolerance, and inferring goals from behavioral data via convex optimization. Development of a general class of easily solvable problems tends to accelerate progress—as linear systems theory has done, for example. Our framework may have similar impact in fields where optimal choice of actions is relevant. PMID:19574462

  20. Catheter detection and classification on chest radiographs: an automated prototype computer-aided detection (CAD) system for radiologists

    NASA Astrophysics Data System (ADS)

    Ramakrishna, Bharath; Brown, Matthew; Goldin, Jonathan; Cagnon, Chris; Enzmann, Dieter

    2011-03-01

    Chest radiographs are the quickest and safest method to check the placement of man-made medical devices in the body, such as catheters, stents and pacemakers, of which catheters are the most commonly used. The two most frequently used catheters, especially in the ICU, are the endotracheal (ET) tube, used to maintain the patient's airway, and the nasogastric (NG) tube, used for feeding and administering drugs. Tertiary ICUs typically generate over 250 chest radiographs per day to confirm tube placement. Incorrect tube placements can cause serious complications and can even be fatal. The task of identifying these tubes on chest radiographs is difficult for radiologists and ICU personnel given the high volume of cases. This motivates the need for an automatic detection system to aid radiologists in processing these critical cases in a timely fashion while maintaining patient safety. To date there has been very little research in this area. This paper develops a new fully automatic prototype computer-aided detection (CAD) system for detection and classification of catheters on chest radiographs using a combination of template matching, morphological processing and region growing. The preliminary evaluation was carried out on 25 cases. The prototype CAD system was able to detect ET and NG tubes with sensitivities of 73.7% and 76.5%, respectively, and with specificities of 91.3% and 84.0%, respectively. The results from the prototype system show that it is feasible to automatically detect both catheters on chest radiographs, with the potential to significantly speed the delivery of imaging services while maintaining high accuracy.
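
    Of the three stages named above, the template-matching step is the easiest to sketch; the normalized cross-correlation call, template, and threshold below are illustrative assumptions, not the prototype's actual parameters.

    ```python
    # Illustrative template-matching stage only; the prototype CAD also uses
    # morphological processing and region growing, which are not shown here.
    import numpy as np
    from skimage.feature import match_template

    def find_tube_candidates(radiograph, tube_template, threshold=0.6):
        """Return (row, col) peaks where the template correlates strongly."""
        response = match_template(radiograph, tube_template, pad_input=True)
        peaks = np.argwhere(response > threshold)
        return peaks, response
    ```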

  1. Computer-aided detection of lung cancer on chest radiographs: effect of machine CAD true positive/false negative detections on radiologists' confidence level

    NASA Astrophysics Data System (ADS)

    Freedman, Matthew T.; Osicka, Teresa; Lo, Shih-Chung Benedict; Lure, Fleming; Xu, Xin-Wei; Lin, Jesse; Zhao, Hui; Zhang, Ron

    2004-05-01

    This paper evaluates the effect of Computer-Aided Detection prompts on the confidence and detection of cancer on chest radiographs. Expected findings included an increase in confidence rating and a decrease in variance in confidence when radiologists interacted with a computer prompt that confirmed their initial decision or induced them to switch from an incorrect to a correct decision. Their confidence rating decreased and the variance of confidence rating increased when the computer failed to confirm a correct or incorrect decision. A population of cases was identified that changed among reading modalities. This unstable group of cases differed between the Independent and Sequential without CAD modalities in cancer detection by radiologists and cancer detection by machine. CAD prompts induced the radiologists to make two types of changes in cases: changes on the sequential modality with CAD that restored an initial diagnosis made in the Independent read and new changes that were not present in the Independent or Sequential reads without CAD. This has implications for double reading of cases. The effects of intra-observer variability and inter-observer variability are suggested as potential causes for differences in statistical significance of the Independent and Sequential Design approaches to ROC studies.

  2. TRAD or CAD? A Comparison.

    ERIC Educational Resources Information Center

    Resetarits, Paul J.

    1989-01-01

    Studies whether traditional drafting equipment (TRAD) or computer aided drafting equipment (CAD) is more effective. Proposes that students using only CAD can learn principles of drafting as well as students using only TRAD. Reports no significant difference either on achievement or attitude. (MVL)

  3. Comparison of standard and double reading and computer-aided detection (CAD) of interval cancers at prior negative screening mammograms: blind review.

    PubMed

    Ciatto, S; Rosselli Del Turco, M; Burke, P; Visioli, C; Paci, E; Zappa, M

    2003-11-01

    The study evaluates the role of computer-aided detection (CAD) in improving the detection of interval cancers as compared to conventional single (CONV) or double reading (DOUBLE). For this purpose, a set of 89 negative cases was seeded with 31 mammograms reported as negative and developing interval cancer in the following 2-year interval (false negative (FN)=11, minimal signs (MS)=20). A total of 19 radiologists read the set with CONV and then with CAD. Overall, there were 589 cancer and 1691 noncancer readings with both CONV and CAD. Double reading was simulated by combining conventional readings in all 171 possible combinations of 19 radiologists, resulting in a total of 5301 cancer and 15,219 noncancer readings. Conventional single, DOUBLE and CAD readings were compared in terms of sensitivity and recall rate. Considering all 19 readings, cancer was identified in 190 or 248 of 589 readings (32.2 vs 42.1%, chi(2)=11.80, df=1, P<0.01) and recalls were 287 or 405 of 1691 readings (16.9 vs 23.9%, chi(2)=24.87, df=1, P<0.01) at CONV or CAD, respectively. When considering FN and MS cases separately, sensitivity at CONV or CAD was 50.2 or 62.6% (chi(2)=6.98, df=1, P=0.01) for FN and 22.3 or 30.7% (chi(2)=6.47, df=1, P=0.01) for MS cases, respectively. Computer-aided detection (average of 19 readings) was slightly and not significantly less sensitive (sensitivity: 42.1 vs 46.1%, chi(2)=3.24, df=1, P=0.07) but more specific (recall rate 23.9 vs 26.1%, chi(2)=3.8, df=1, P=0.04) as compared to DOUBLE (average of 171 readings). Average sensitivity for FN cases only was 62.6% for CAD and 64.8% for DOUBLE (chi(2)=0.32, df=1, P=0.58). Corresponding values for MS cases were 30.7% for CAD and 35.7% for DOUBLE (chi(2)=3.53, df=1, P=0.06). Compared to CONV, CAD allowed for improved sensitivity, though with reduced specificity, both effects being statistically significant. Computer-aided detection was almost as sensitive as DOUBLE but significantly more specific. Computer
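
    The comparisons above are chi-square tests on 2x2 tables of identified versus missed readings. The sketch below reproduces the first comparison (190/589 with CONV vs 248/589 with CAD); with the default continuity correction it gives approximately the chi(2)=11.80 quoted in the abstract.

    ```python
    # 2x2 chi-square comparison of cancer readings, CONV vs CAD
    # (190/589 identified with CONV, 248/589 with CAD, as in the abstract).
    from scipy.stats import chi2_contingency

    table = [[190, 589 - 190],   # CONV: identified, missed
             [248, 589 - 248]]   # CAD:  identified, missed
    chi2, p, dof, expected = chi2_contingency(table)   # Yates-corrected by default
    print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.4g}")   # ~11.8, df = 1, p < 0.01
    ```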

  4. Statistical-techniques-based computer-aided diagnosis (CAD) using texture feature analysis: application in computed tomography (CT) imaging to fatty liver disease

    NASA Astrophysics Data System (ADS)

    Chung, Woon-Kwan; Park, Hyong-Hu; Im, In-Chul; Lee, Jae-Seung; Goo, Eun-Hoe; Dong, Kyung-Rae

    2012-09-01

    This paper proposes a computer-aided diagnosis (CAD) system based on texture feature analysis and statistical wavelet transformation technology to diagnose fatty liver disease with computed tomography (CT) imaging. In the target image, a wavelet transformation was performed for each lesion area to set the region of analysis (ROA, window size: 50 × 50 pixels) and define the texture feature of a pixel. Based on the extracted texture feature values, six parameters (average gray level, average contrast, relative smoothness, skewness, uniformity, and entropy) were determined to calculate the recognition rate for a fatty liver. In addition, a multivariate analysis of the variance (MANOVA) method was used to perform a discriminant analysis to verify the significance of the extracted texture feature values and the recognition rate for a fatty liver. According to the results, each texture feature value was significant for a comparison of the recognition rate for a fatty liver ( p < 0.05). Furthermore, the F-value, which was used as a scale for the difference in recognition rates, was highest in the average gray level, relatively high in the skewness and the entropy, and relatively low in the uniformity, the relative smoothness and the average contrast. The recognition rate for a fatty liver had the same scale as that for the F-value, showing 100% (average gray level) at the maximum and 80% (average contrast) at the minimum. Therefore, the recognition rate is believed to be a useful clinical value for the automatic detection and computer-aided diagnosis (CAD) using the texture feature value. Nevertheless, further study on various diseases and singular diseases will be needed in the future.
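
    The six descriptors listed above are standard first-order (histogram-based) texture statistics; a sketch of computing them for one region of analysis (ROA) follows. The normalization choices are assumptions, and the wavelet preprocessing and MANOVA step are not shown.

    ```python
    # Sketch of the six first-order texture descriptors named in the abstract,
    # computed from the gray-level histogram of a region of analysis (ROA).
    # Exact normalization choices here are assumptions, not from the paper.
    import numpy as np

    def texture_features(roa, levels=256):
        hist, _ = np.histogram(roa, bins=levels, range=(0, levels))
        p = hist / hist.sum()
        z = np.arange(levels)
        mean = np.sum(z * p)                                 # average gray level
        sigma = np.sqrt(np.sum((z - mean) ** 2 * p))         # average contrast
        smooth = 1.0 - 1.0 / (1.0 + (sigma / (levels - 1)) ** 2)  # relative smoothness
        skew = np.sum(((z - mean) / (levels - 1)) ** 3 * p)       # skewness
        uniformity = np.sum(p ** 2)
        entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
        return mean, sigma, smooth, skew, uniformity, entropy
    ```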

  5. Gathering Empirical Evidence Concerning Links between Computer Aided Design (CAD) and Creativity

    ERIC Educational Resources Information Center

    Musta'amal, Aede Hatib; Norman, Eddie; Hodgson, Tony

    2009-01-01

    Discussion is often reported concerning potential links between computer-aided designing and creativity, but there is a lack of systematic enquiry to gather empirical evidence concerning such links. This paper reports an indication of findings from other research studies carried out in contexts beyond general education that have sought evidence…

  6. CAD/CAM (Computer Aided Design/Computer Aided Manufacture). A Brief Guide to Materials in the Library of Congress.

    ERIC Educational Resources Information Center

    Havas, George D.

    This brief guide to materials in the Library of Congress (LC) on computer aided design and/or computer aided manufacturing lists reference materials and other information sources under 13 headings: (1) brief introductions; (2) LC subject headings used for such materials; (3) textbooks; (4) additional titles; (5) glossaries and handbooks; (6)…

  7. HistoCAD: Machine Facilitated Quantitative Histoimaging with Computer Assisted Diagnosis

    NASA Astrophysics Data System (ADS)

    Tomaszewski, John E.

    Prostatic adenocarcinoma (CAP) is the most common malignancy in American men. In 2010 there will be an estimated 217,730 new cases and 32,050 deaths from CAP in the US. The diagnosis of prostatic adenocarcinoma is made exclusively from the histological evaluation of prostate tissue. The sampling protocols used to obtain 18 gauge (1.5 mm diameter) needle cores are standard sampling templates consisting of 6-12 cores performed in the context of an elevated serum value for prostate specific antigen (PSA). In this context, the prior probability of cancer is somewhat increased. However, even in this screened population, the efficiency of finding cancer is low at only approximately 20%. Histopathologists are faced with the task of reviewing the 5-10 million cores of tissue resulting from approximately 1,000,000 biopsy procedures yearly, parsing all the benign scenes from the worrisome scenes, and deciding which of the worrisome images are cancer.

  8. Computing Efficiency Of Transfer Of Microwave Power

    NASA Technical Reports Server (NTRS)

    Pinero, L. R.; Acosta, R.

    1995-01-01

    BEAM computer program enables user to calculate microwave power-transfer efficiency between two circular apertures at arbitrary range. Power-transfer efficiency obtained numerically. Two apertures have generally different sizes and arbitrary taper illuminations. BEAM also analyzes effect of distance and taper illumination on transmission efficiency for two apertures of equal size. Written in FORTRAN.

  9. Immersive CAD

    SciTech Connect

    Ames, A.L.

    1999-02-01

    This paper documents development of a capability for performing shape-changing editing operations on solid model representations in an immersive environment. The capability includes part- and assembly-level operations, with part modeling supporting topology-invariant and topology-changing modifications. A discussion of various design considerations in developing an immersive capability is included, along with discussion of a prototype implementation we have developed and explored. The project investigated approaches to providing both topology-invariant and topology-changing editing. A prototype environment was developed to test the approaches and determine the usefulness of immersive editing. The prototype showed exciting potential in redefining the CAD interface. It is fun to use. Editing is much faster and friendlier than traditional feature-based CAD software. The prototype algorithms did not reliably provide a sufficient frame rate for complex geometries, but they have provided the necessary roadmap for development of a production capability.

  10. The polar phase response property of monopolar ECG voltages using a Computer-Aided Design and Drafting (CAD)-based data acquisition system.

    PubMed

    Goswami, B; Mitra, M; Nag, B; Mitra, T K

    1993-11-01

    The present paper discusses a Computer-Aided Design and Drafting (CAD) based data acquisition and polar phase response study of the ECG. The scalar ECG does not show vector properties although such properties are embedded in it. In the present paper the polar phase response property of monopolar chest lead (V1 to V6) ECG voltages has been studied. A software tool has been used to evaluate the relative phase response of ECG voltages. The data acquisition of monopolar ECG records of chest leads V1 to V6 from the chart recorder has been done with the help of the AutoCAD application package. The spin harmonic constituents of ECG voltages are evaluated at each harmonic plane and the polar phase responses are studied at each plane. Some interesting results have been observed in some typical cases which are discussed in the paper. PMID:8307653

  11. Comparison of sensitivity and reading time for the use of computer-aided detection (CAD) of pulmonary nodules at MDCT as concurrent or second reader.

    PubMed

    Beyer, F; Zierott, L; Fallenberg, E M; Juergens, K U; Stoeckel, J; Heindel, W; Wormanns, D

    2007-11-01

    The purpose of this study was to compare radiologists' sensitivity for detection of pulmonary nodules in MDCT scans and their reading time when using CAD as second reader (SR) and as concurrent reader (CR). Four radiologists analyzed 50 chest MDCT scans chosen from clinical routine two times and marked all detected pulmonary nodules: first with CAD as CR (display of CAD results immediately in the reading session) and later (median 14 weeks) with CAD as SR (display of CAD markers after completion of a first reading without CAD). A Siemens LungCAD prototype was used. Sensitivities for detection of nodules and reading times were recorded. Sensitivity of reading with CAD as SR was significantly higher than reading without CAD (p < 0.001) and CAD as CR (p < 0.001). For nodules of 1.75 mm or above, no significant sensitivity difference between CAD as CR and reading without CAD was observed; e.g., for nodules above 4 mm sensitivity was 68% without CAD, 68% with CAD as CR (p = 0.45) and 75% with CAD as SR (p < 0.001). Reading time was significantly shorter for CR (274 s) compared to reading without CAD (294 s; p = 0.04) and SR (337 s; p < 0.001). In our study CAD could either speed up reading of chest CT cases for pulmonary nodules without relevant loss of sensitivity when used as CR, or it could increase sensitivity at the cost of longer reading times when used as SR.

  12. Improvements in computer-aided detection/computer-aided classification (CAD/CAC) of bottom mines through post analysis of a diverse set of very shallow water (VSW) environmental test data

    NASA Astrophysics Data System (ADS)

    Ciany, Charles M.; Zurawski, William C.

    2004-09-01

    In 1999 Raytheon adapted its shallow-water Side-Looking Sonar (SLS) Computer Aided Detection/Computer Aided Classification (CAD/CAC) algorithm to process side-scan sonar data obtained with the Woods Hole Oceanographic Institution's Remote Environmental Monitoring Units (REMUS) autonomous underwater vehicle (AUV). To date, Raytheon has demonstrated the ability to effectively execute mine-hunting missions with the REMUS vehicle through the fusion of its CAD/CAC algorithm with several other CAD/CAC algorithms to achieve low false alarm rates while maintaining a high probability of correct detection/classification. Mine-hunting in the very shallow water (VSW) environment poses a host of difficulties, including a higher incidence of man-made clutter, significant interference due to biological sources (such as kelp or silt), the scouring of mines into the bottom, interference from surface/bottom bounce, and image distortion due to vehicle motion during image generation. These issues, coupled with highly variable bottom conditions and small bottom targets, make reliable hunting in the VSW environment very difficult. In order to be operationally viable, the individual CAD/CAC algorithms must demonstrate robustness over these very different mine-hunting environments. A higher normalized false alarm rate per algorithm is considered acceptable based on the false alarm reduction achieved through multi-algorithm fusion. Raytheon's recent CAD/CAC algorithm enhancements demonstrate a significant improvement in overall CAD/CAC performance across a diverse set of environments, from the relatively benign Gulf of Mexico environment to the more challenging areas off the coast of southern California containing significant biological and bottom clutter. The improvements are attributed to incorporating an image normalizer into the algorithm's pre-processing stage in conjunction with several other modifications. The algorithm enhancements resulted in an 11% increase in overall

  13. PC Board Layout and Electronic Drafting with CAD. Teacher Edition.

    ERIC Educational Resources Information Center

    Bryson, Jimmy

    This teacher's guide contains 11 units of instruction for a course on computer electronics and computer-assisted drafting (CAD) using a personal computer (PC). The course covers the following topics: introduction to electronic drafting with CAD; CAD system and software; basic electronic theory; component identification; basic integrated circuit…

  14. Fabricating a tooth- and implant-supported maxillary obturator for a patient after maxillectomy with computer-guided surgery and CAD/CAM technology: A clinical report.

    PubMed

    Noh, Kwantae; Pae, Ahran; Lee, Jung-Woo; Kwon, Yong-Dae

    2016-05-01

    An obturator prosthesis with insufficient retention and support may be improved with implant placement. However, implant surgery in patients after maxillary tumor resection can be complicated because of limited visibility and anatomic complexity. Therefore, computer-guided surgery can be advantageous even for experienced surgeons. In this clinical report, the use of computer-guided surgery is described for implant placement using a bone-supported surgical template for a patient with maxillary defects. The prosthetic procedure was facilitated and simplified by using computer-aided design/computer-aided manufacture (CAD/CAM) technology. Oral function and phonetics were restored using a tooth- and implant-supported obturator prosthesis. No clinical symptoms and no radiographic signs of significant bone loss around the implants were found at a 3-year follow-up. The treatment approach presented here can be a viable option for patients with insufficient remaining zygomatic bone after a hemimaxillectomy. PMID:26774316

  15. Improving the radiologist-CAD interaction: designing for appropriate trust.

    PubMed

    Jorritsma, W; Cnossen, F; van Ooijen, P M A

    2015-02-01

    Computer-aided diagnosis (CAD) has great potential to improve radiologists' diagnostic performance. However, the reported performance of the radiologist-CAD team is lower than what might be expected based on the performance of the radiologist and the CAD system in isolation. This indicates that the interaction between radiologists and the CAD system is not optimal. An important factor in the interaction between humans and automated aids (such as CAD) is trust. Suboptimal performance of the human-automation team is often caused by an inappropriate level of trust in the automation. In this review, we examine the role of trust in the radiologist-CAD interaction and suggest ways to improve the output of the CAD system so that it allows radiologists to calibrate their trust in the CAD system more effectively. Observer studies of the CAD systems show that radiologists often have an inappropriate level of trust in the CAD system. They sometimes under-trust CAD, thereby reducing its potential benefits, and sometimes over-trust it, leading to diagnostic errors they would not have made without CAD. Based on the literature on trust in human-automation interaction and the results of CAD observer studies, we have identified four ways to improve the output of CAD so that it allows radiologists to form a more appropriate level of trust in CAD. Designing CAD systems for appropriate trust is important and can improve the performance of the radiologist-CAD team. Future CAD research and development should acknowledge the importance of the radiologist-CAD interaction, and specifically the role of trust therein, in order to create the perfect artificial partner for the radiologist. This review focuses on the role of trust in the radiologist-CAD interaction. The aim of the review is to encourage CAD developers to design for appropriate trust and thereby improve the performance of the radiologist-CAD team. PMID:25459198

  16. Computationally efficient method to construct scar functions

    NASA Astrophysics Data System (ADS)

    Revuelta, F.; Vergini, E. G.; Benito, R. M.; Borondo, F.

    2012-02-01

    The performance of a simple method [E. L. Sibert III, E. Vergini, R. M. Benito, and F. Borondo, New J. Phys. 10, 053016 (2008), doi:10.1088/1367-2630/10/5/053016] to efficiently compute scar functions along unstable periodic orbits with complicated trajectories in configuration space is discussed, using a classically chaotic two-dimensional quartic oscillator as an illustration.

  17. Viewing CAD Drawings on the Internet

    ERIC Educational Resources Information Center

    Schwendau, Mark

    2004-01-01

    Computer aided design (CAD) has been producing 3-D models for years. AutoCAD software is frequently used to create sophisticated 3-D models. These CAD files can be exported as 3DS files for import into Autodesk's 3-D Studio Viz. In this program, the user can render and modify the 3-D model before exporting it out as a WRL (world file hyperlinked)…

  18. CAD/CAM. High-Technology Training Module.

    ERIC Educational Resources Information Center

    Zuleger, Robert

    This high technology training module is an advanced course on computer-assisted design/computer-assisted manufacturing (CAD/CAM) for grades 11 and 12. This unit, to be used with students in advanced drafting courses, introduces the concept of CAD/CAM. The content outline includes the following seven sections: (1) CAD/CAM software; (2) computer…

  19. CAD Services: an Industry Standard Interface for Mechanical CAD Interoperability

    NASA Technical Reports Server (NTRS)

    Claus, Russell; Weitzer, Ilan

    2002-01-01

    Most organizations seek to design and develop new products in increasingly shorter time periods. At the same time, increased performance demands require a team-based multidisciplinary design process that may span several organizations. One approach to meet these demands is to use 'Geometry Centric' design. In this approach, design engineers team their efforts through one unified representation of the design that is usually captured in a CAD system. Standards-based interfaces are critical to provide uniform, simple, distributed services that enable the 'Geometry Centric' design approach. This paper describes an industry-wide effort, under the Object Management Group's (OMG) Manufacturing Domain Task Force, to define interfaces that enable the interoperability of CAD, Computer Aided Manufacturing (CAM), and Computer Aided Engineering (CAE) tools. This critical link to enable 'Geometry Centric' design is called CAD Services V1.0. This paper discusses the features of this standard and its proposed application.

  20. Computerized design of CAD

    NASA Astrophysics Data System (ADS)

    Paul, B. E.; Pham, T. A.

    1982-11-01

    A computerized ballistic design technique for CAD/PAD is described by which a set of ballistic design parameters are determined, all of which satisfy a particular performance requirement. In addition, the program yields the remaining performance predictions, so that only a very few computer runs of the design program can quickly bring the ballistic design within the specification limits prescribed. An example is presented for a small propulsion device, such as a remover or actuator, for which the input specifications define a maximum allowable thrust and minimum end-of-stroke velocity. The resulting output automatically satisfies the input requirements, and will always yield an acceptable ballistic design.

  1. Changing computing paradigms towards power efficiency

    PubMed Central

    Klavík, Pavel; Malossi, A. Cristiano I.; Bekas, Costas; Curioni, Alessandro

    2014-01-01

    Power awareness is fast becoming immensely important in computing, ranging from the traditional high-performance computing applications to the new generation of data centric workloads. In this work, we describe our efforts towards a power-efficient computing paradigm that combines low- and high-precision arithmetic. We showcase our ideas for the widely used kernel of solving systems of linear equations that finds numerous applications in scientific and engineering disciplines as well as in large-scale data analytics, statistics and machine learning. Towards this goal, we developed tools for the seamless power profiling of applications at a fine-grain level. In addition, we verify here previous work on post-FLOPS/W metrics and show that these can shed much more light in the power/energy profile of important applications. PMID:24842033
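
    The low/high-precision combination described above is commonly realized, for linear systems, as mixed-precision iterative refinement: solve cheaply in single precision, then correct the solution using double-precision residuals. The NumPy sketch below illustrates that general pattern only; it is not the authors' implementation and says nothing about their power-profiling tools.

    ```python
    import numpy as np

    def mixed_precision_solve(A, b, iters=10, tol=1e-12):
        """Iterative refinement: cheap single-precision solves, double-precision
        residuals.  Sketch of the low/high-precision idea only."""
        A32, b32 = A.astype(np.float32), b.astype(np.float32)
        # A real implementation would factor A32 once (e.g. LU) and reuse it;
        # np.linalg.solve keeps this sketch short.
        x = np.linalg.solve(A32, b32).astype(np.float64)
        for _ in range(iters):
            r = b - A @ x                               # high-precision residual
            if np.linalg.norm(r) <= tol * np.linalg.norm(b):
                break
            d = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
            x += d                                      # correct in high precision
        return x

    # Example on a well-conditioned random system
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 200)) + 200 * np.eye(200)
    b = rng.standard_normal(200)
    x = mixed_precision_solve(A, b)
    print(np.linalg.norm(A @ x - b))
    ```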

  2. Changing computing paradigms towards power efficiency.

    PubMed

    Klavík, Pavel; Malossi, A Cristiano I; Bekas, Costas; Curioni, Alessandro

    2014-06-28

    Power awareness is fast becoming immensely important in computing, ranging from the traditional high-performance computing applications to the new generation of data centric workloads. In this work, we describe our efforts towards a power-efficient computing paradigm that combines low- and high-precision arithmetic. We showcase our ideas for the widely used kernel of solving systems of linear equations that finds numerous applications in scientific and engineering disciplines as well as in large-scale data analytics, statistics and machine learning. Towards this goal, we developed tools for the seamless power profiling of applications at a fine-grain level. In addition, we verify here previous work on post-FLOPS/W metrics and show that these can shed much more light in the power/energy profile of important applications.

  3. Efficient communication in massively parallel computers

    SciTech Connect

    Cypher, R.E.

    1989-01-01

    A fundamental operation in parallel computation is sorting. Sorting is important not only because it is required by many algorithms, but also because it can be used to implement irregular, pointer-based communication. The author studies two algorithms for sorting in massively parallel computers. First, he examines Shellsort. Shellsort is a sorting algorithm that is based on a sequence of parameters called increments. Shellsort can be used to create a parallel sorting device known as a sorting network. Researchers have suggested that if the correct increment sequence is used, an optimal size sorting network can be obtained. All published increment sequences have been monotonically decreasing. He shows that no monotonically decreasing increment sequence will yield an optimal size sorting network. Second, he presents a sorting algorithm called Cubesort. Cubesort is the fastest known sorting algorithm for a variety of parallel computers over a wide range of parameters. He also presents a paradigm for developing parallel algorithms that have efficient communication. The paradigm, called the data reduction paradigm, consists of using a divide-and-conquer strategy. Both the division and combination phases of the divide-and-conquer algorithm may require irregular, pointer-based communication between processors. However, the problem is divided so as to limit the amount of data that must be communicated. As a result the communication can be performed efficiently. He presents data reduction algorithms for the image component labeling problem, the closest pair problem and four versions of the parallel prefix problem.

  4. CAD systems simplify engineering drawings

    SciTech Connect

    Holt, J.

    1986-10-01

    Computer assisted drafting systems, with today's technology, provide high-quality, timely drawings that can be justified by the lower costs for the final product. The author describes Exxon Pipeline Co.'s experience in deciding on hardware and software for a CAD system installation and the benefits effected by this procedure and equipment.

  5. Computed tomography and CAD/CAE methods for the study of the osseus inner Ear bone of Greek quaternary endemic mammals

    NASA Astrophysics Data System (ADS)

    Provatidis, C. G.; Theodorou, E. G.; Theodorou, G. E.

    It is undisputed that the use of computed tomography gives the researcher an inside view of the internal morphology of precious findings. The main goal of this study is to take advantage of the huge possibilities that derive from the use of CT scans in the field of Vertebrate Palaeontology. Rare fossil skull parts (Os petrosum of Elephas tiliensis from Tilos, Phanourios minor from Cyprus and Candiacervus sp. from Crete) brought to light by excavations required further analysis of their inner structure by non-destructive methods. Selected specimens were scanned and exported into DICOM files. These were then imported into the MIMICS software in order to develop the required 3D digital CAD models. By using distinctive reference points on the bone geometry based on palaeontological criteria, section views were created, thus revealing the extremely complex inner structure and making it available for further palaeontological analysis.

  6. Comparison of sensitivity and reading time for the use of computer aided detection (CAD) of pulmonary nodules at MDCT as concurrent or second reader

    NASA Astrophysics Data System (ADS)

    Beyer, F.; Zierott, L.; Fallenberg, E. M.; Juergens, K.; Stoeckel, J.; Heindel, W.; Wormanns, D.

    2006-03-01

    Purpose: To compare sensitivity and reading time when using CAD as second reader and as concurrent reader, respectively. Materials and Methods: Fifty chest MDCT scans obtained for clinical indications were analysed independently by four radiologists two times: first with CAD as concurrent reader (display of CAD results simultaneously with the primary reading by the radiologist); then, after a median of 14 weeks, with CAD as second reader (CAD results were shown after completion of a reading session without CAD). A prototype version of Siemens LungCAD (Siemens, Malvern, USA) was used. Sensitivities and reading times for detecting nodules >= 4 mm with concurrent reading, reading without CAD and second reading were recorded. In a consensus conference false positive findings were eliminated. Student's t-test was used to compare sensitivities and reading times. Results: 108 true positive nodules were found. Mean sensitivity was .68 for reading without CAD, .68 for concurrent reading and .75 for second reading. Differences in sensitivity were significant between concurrent and second reading (p < .001) and between reading without CAD and second reading (p = .001). Mean reading time for concurrent reading (274 s) was significantly shorter compared to reading without CAD (294 s; p = .04) and second reading (337 s; p < .001). New work to be presented: To our knowledge this is the first study that compares sensitivities and reading times between use of CAD as concurrent and as second reader. Conclusion: CAD can either be used to speed up reading of chest CT cases for pulmonary nodules without loss of sensitivity as concurrent reader, or (but not both) to increase sensitivity at the cost of reading time as second reader.

  7. Education and Training Packages for CAD/CAM.

    ERIC Educational Resources Information Center

    Wright, I. C.

    1986-01-01

    Discusses educational efforts in the fields of Computer Assisted Design and Manufacturing (CAD/CAM). Describes two educational training initiatives underway in the United Kingdom, one of which is a resource materials package for teachers of CAD/CAM at the undergraduate level, and the other a training course for managers of CAD/CAM systems. (TW)

  8. Efficient computations with the likelihood ratio distribution.

    PubMed

    Kruijver, Maarten

    2015-01-01

    What is the probability that the likelihood ratio exceeds a threshold t, if a specified hypothesis is true? This question is asked, for instance, when performing power calculations for kinship testing, when computing true and false positive rates for familial searching and when computing the power of discrimination of a complex mixture. Answering this question is not straightforward, since there are a huge number of possible genotypic combinations to consider. Different solutions are found in the literature. Several authors estimate the threshold exceedance probability using simulation. Corradi and Ricciardi [1] propose a discrete approximation to the likelihood ratio distribution which yields a lower and upper bound on the probability. Nothnagel et al. [2] use the normal distribution as an approximation to the likelihood ratio distribution. Dørum et al. [3] introduce an algorithm that can be used for exact computation, but this algorithm is computationally intensive, unless the threshold t is very large. We present three new approaches to the problem. Firstly, we show how importance sampling can be used to make the simulation approach significantly more efficient. Importance sampling is a statistical technique that turns out to work well in the current context. Secondly, we present a novel algorithm for computing exceedance probabilities. The algorithm is exact, fast and can handle relatively large problems. Thirdly, we introduce an approach that combines the novel algorithm with the discrete approximation of Corradi and Ricciardi. This last approach can be applied to very large problems and yields a lower and upper bound on the exceedance probability. The use of the different approaches is illustrated with examples from forensic genetics, such as kinship testing, familial searching and mixture interpretation. The algorithms are implemented in an R-package called DNAprofiles, which is freely available from CRAN.
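
    For context, the importance-sampling idea rests on the identity E_Hd[1{LR > t}] = E_Hp[1{LR > t} / LR] (with LR = P(x | Hp) / P(x | Hd)), so draws can be taken under the hypothesis where large likelihood ratios are common and then reweighted. The toy sketch below illustrates this; the function arguments are hypothetical placeholders and this is not the DNAprofiles API.

    ```python
    import numpy as np

    def exceedance_prob_importance(log_lr, sample_under_hp, t, n=100_000, seed=0):
        """Estimate P(LR > t | H_d) via importance sampling.

        Draw samples under H_p (where large LRs are common) and reweight each
        draw by 1/LR.  `log_lr` and `sample_under_hp` are illustrative
        placeholders, not the DNAprofiles API.
        """
        rng = np.random.default_rng(seed)
        x = sample_under_hp(n, rng)          # samples drawn under H_p
        llr = log_lr(x)                      # natural-log LR per sample
        w = np.exp(-llr)                     # 1/LR reweights H_p draws to H_d
        return float(np.mean((llr > np.log(t)) * w))

    # Toy example: H_p: X ~ N(1, 1), H_d: X ~ N(0, 1), so LR(x) = exp(x - 0.5).
    log_lr = lambda x: x - 0.5
    sample_hp = lambda n, rng: rng.normal(1.0, 1.0, size=n)
    print(exceedance_prob_importance(log_lr, sample_hp, t=10.0))
    ```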

  9. A reciprocal allosteric mechanism for efficient transfer of labile intermediates between active sites in CAD, the mammalian pyrimidine-biosynthetic multienzyme polypeptide.

    PubMed

    Irvine, H S; Shaw, S M; Paton, A; Carrey, E A

    1997-08-01

    Carbamoyl phosphate is the product of carbamoyl phosphate synthetase (CPS II) activity and the substrate of the aspartate transcarbamoylase (ATCase) activity, each of which is found in CAD, a large 240-kDa multienzyme polypeptide in mammals that catalyses the first three steps in pyrimidine biosynthesis. In our study of the transfer of the labile intermediate between the two active sites, we have used assays that differentiate the synthesis of carbamoyl phosphate from the overall reaction of CPS II and ATCase that produces carbamoyl aspartate. We provided excess exogenous carbamoyl phosphate and monitored its access to the respective active sites through the production of carbamoyl phosphate and carbamoyl aspartate from radiolabelled bicarbonate. Three features indicate interactions between the folded CPS II and ATCase domains causing reciprocal conformational changes. First, even in the presence of approximately 1 mM unlabelled carbamoyl phosphate, when the aspartate concentration is high ATCase uses endogenous carbamoyl phosphate for the synthesis of radiolabelled carbamoyl aspartate. In contrast, the isolated CPS II forward reaction is inhibited by excess unlabelled carbamoyl phosphate. Secondly, the affinity of the ATCase for carbamoyl phosphate and aspartate is modulated when substrates bind to CPS II. Thirdly, the transition-state analogue phosphonacetyl-L-aspartate is a less efficient inhibitor of the ATCase when the substrates for CPS II are present. All these effects operate when CPS II is in the more active P state, which is induced by high concentrations of ATP and magnesium ions and when 5'-phosphoribosyl diphosphate (the allosteric activator) is present with low concentrations of ATP; these are conditions that would be met during active biosynthesis in the cell. We propose a phenomenon of reciprocal allostery that encourages the efficient transfer of the labile intermediate within the multienzyme polypeptide CAD. In this model, binding of aspartate to

  10. A primer on the energy efficiency of computing

    SciTech Connect

    Koomey, Jonathan G.

    2015-03-30

    The efficiency of computing at peak output has increased rapidly since the dawn of the computer age. This paper summarizes some of the key factors affecting the efficiency of computing in all usage modes. While there is still great potential for improving the efficiency of computing devices, we will need to alter how we do computing in the next few decades because we are finally approaching the limits of current technologies.

  11. Application of Fisher fusion techniques to improve the individual performance of sonar computer-aided detection/computer-aided classification (CAD/CAC) algorithms

    NASA Astrophysics Data System (ADS)

    Ciany, Charles M.; Zurawski, William C.

    2009-05-01

    Raytheon has extensively processed high-resolution sidescan sonar images with its CAD/CAC algorithms to provide classification of targets in a variety of shallow underwater environments. The Raytheon CAD/CAC algorithm is based on non-linear image segmentation into highlight, shadow, and background regions, followed by extraction, association, and scoring of features from candidate highlight and shadow regions of interest (ROIs). The targets are classified by thresholding an overall classification score, which is formed by summing the individual feature scores. The algorithm performance is measured in terms of probability of correct classification as a function of false alarm rate, and is determined by both the choice of classification features and the manner in which the classifier rates and combines these features to form its overall score. In general, the algorithm performs very reliably against targets that exhibit "strong" highlight and shadow regions in the sonar image, i.e., both the highlight echo and its associated shadow region from the target are distinct relative to the ambient background. However, many real-world undersea environments can produce sonar images in which a significant percentage of the targets exhibit either "weak" highlight or shadow regions in the sonar image. The challenge of achieving robust performance in these environments has traditionally been addressed by modifying the individual feature scoring algorithms to optimize the separation between the corresponding highlight or shadow feature scores of targets and non-targets. This study examines an alternate approach that employs principles of Fisher fusion to determine a set of optimal weighting coefficients that are applied to the individual feature scores before summing to form the overall classification score. The results demonstrate improved performance of the CAD/CAC algorithm on at-sea data sets.
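
    One standard way to obtain such weighting coefficients is Fisher's linear discriminant, which chooses the weight vector that best separates the weighted-sum scores of targets and non-targets. The sketch below shows that generic construction; it is an assumption-laden illustration, not Raytheon's specific algorithm.

    ```python
    import numpy as np

    def fisher_fusion_weights(target_scores, clutter_scores):
        """Fisher-discriminant weights for combining per-feature scores.

        target_scores, clutter_scores: (n_samples, n_features) arrays of the
        individual feature scores for known targets and non-targets.  The weights
        maximize the separation of the weighted-sum score between the two classes.
        Generic sketch only.
        """
        mu_t = target_scores.mean(axis=0)
        mu_c = clutter_scores.mean(axis=0)
        Sw = np.cov(target_scores, rowvar=False) + np.cov(clutter_scores, rowvar=False)
        w = np.linalg.solve(Sw, mu_t - mu_c)       # w = inv(Sw) (mu_t - mu_c)
        return w / np.abs(w).sum()                 # normalize for readability

    def overall_score(feature_scores, w):
        """Weighted sum of feature scores, replacing the plain sum."""
        return feature_scores @ w

    # Example with synthetic feature scores
    rng = np.random.default_rng(2)
    targets = rng.normal(1.0, 1.0, size=(200, 5))
    clutter = rng.normal(0.0, 1.0, size=(300, 5))
    w = fisher_fusion_weights(targets, clutter)
    print(w, overall_score(targets[:3], w))
    ```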

  12. Project CAD as of July 1978: CAD support project, situation in July 1978

    NASA Technical Reports Server (NTRS)

    Boesch, L.; Lang-Lendorff, G.; Rothenberg, R.; Stelzer, V.

    1979-01-01

    The structure of Computer Aided Design (CAD) and the requirements for program developments in past and future are described. The actual standard and the future aims of CAD programs are presented. The developed programs in: (1) civil engineering; (2) mechanical engineering; (3) chemical engineering/shipbuilding; (4) electrical engineering; and (5) general programs are discussed.

  13. Using AutoCAD for descriptive geometry exercises. in undergraduate structural geology

    NASA Astrophysics Data System (ADS)

    Jacobson, Carl E.

    2001-02-01

    The exercises in descriptive geometry typically utilized in undergraduate structural geology courses are quickly and easily solved using the computer drafting program AutoCAD. The key to efficient use of AutoCAD for descriptive geometry involves taking advantage of User Coordinate Systems, alternative angle conventions, relative coordinates, and other aspects of AutoCAD that may not be familiar to the beginning user. A summary of these features and an illustration of their application to the creation of structure contours for a planar dipping bed provides the background necessary to solve other problems in descriptive geometry with the computer. The ease of the computer constructions reduces frustration for the student and provides more time to think about the principles of the problems.
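
    As a simple numerical check on the graphical construction mentioned above, the map-view spacing of structure contours for a planar dipping bed follows directly from the dip angle. A short sketch (plain trigonometry, not an AutoCAD script):

    ```python
    import math

    def contour_spacing(dip_degrees, contour_interval):
        """Map-view spacing between structure contours of a planar dipping bed.

        A plane dipping at angle delta loses tan(delta) of elevation per unit of
        horizontal distance measured down-dip, so contours drawn every
        `contour_interval` of elevation are parallel to strike and spaced
        contour_interval / tan(delta) apart in map view.
        """
        return contour_interval / math.tan(math.radians(dip_degrees))

    # Bed dipping 30 degrees, contoured every 10 m of elevation:
    # contours plot about 17.3 m apart.
    print(round(contour_spacing(30.0, 10.0), 1))
    ```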

  14. Efficient Computational Screening of Organic Polymer Photovoltaics.

    PubMed

    Kanal, Ilana Y; Owens, Steven G; Bechtel, Jonathon S; Hutchison, Geoffrey R

    2013-05-16

    There has been increasing interest in rational, computationally driven design methods for materials, including organic photovoltaics (OPVs). Our approach focuses on a screening "pipeline", using a genetic algorithm for first stage screening and multiple filtering stages for further refinement. An important step forward is to expand our diversity of candidate compounds, including both synthetic and property-based measures of diversity. For example, top monomer pairs from our screening are all donor-donor (D-D) combinations, in contrast with the typical donor-acceptor (D-A) motif used in organic photovoltaics. We also find a strong "sequence effect", in which the average HOMO-LUMO gap of tetramers changes by ∼0.2 eV as a function of monomer sequence (e.g., ABBA versus BAAB); this has rarely been explored in conjugated polymers. Beyond such optoelectronic optimization, we discuss other properties needed for high-efficiency organic solar cells, and applications of screening methods to other areas, including non-fullerene n-type materials, tandem cells, and improving charge and exciton transport. PMID:26282968

  15. Train effectively for CAD/D

    SciTech Connect

    Not Available

    1983-04-01

    After failing with an unstructured computer-aided drafting/design (CAD/D) program, Bechtel changed to a structured training program. Five considerations are presented here: teach CAD/D to engineers, not engineering to CAD/D experts; keep the program flexible enough to avoid rewriting due to fast technology evolution; pace information delivery; and recognize that rote learning of sequences only works if the students have a conceptual model first. On-the-job training is necessary, and better monitoring systems to test the OJT are needed. One such test is presented.

  16. AutoCAD-To-NASTRAN Translator Program

    NASA Technical Reports Server (NTRS)

    Jones, A.

    1989-01-01

    Program facilitates creation of finite-element mathematical models from geometric entities. AutoCAD to NASTRAN translator (ACTON) computer program developed to facilitate quick generation of small finite-element mathematical models for use with NASTRAN finite-element modeling program. Reads geometric data of drawing from Data Exchange File (DXF) used in AutoCAD and other PC-based drafting programs. Written in Microsoft Quick-Basic (Version 2.0).

  17. Efficient quantum computing using coherent photon conversion.

    PubMed

    Langford, N K; Ramelow, S; Prevedel, R; Munro, W J; Milburn, G J; Zeilinger, A

    2011-10-12

    Single photons are excellent quantum information carriers: they were used in the earliest demonstrations of entanglement and in the production of the highest-quality entanglement reported so far. However, current schemes for preparing, processing and measuring them are inefficient. For example, down-conversion provides heralded, but randomly timed, single photons, and linear optics gates are inherently probabilistic. Here we introduce a deterministic process--coherent photon conversion (CPC)--that provides a new way to generate and process complex, multiquanta states for photonic quantum information applications. The technique uses classically pumped nonlinearities to induce coherent oscillations between orthogonal states of multiple quantum excitations. One example of CPC, based on a pumped four-wave-mixing interaction, is shown to yield a single, versatile process that provides a full set of photonic quantum processing tools. This set satisfies the DiVincenzo criteria for a scalable quantum computing architecture, including deterministic multiqubit entanglement gates (based on a novel form of photon-photon interaction), high-quality heralded single- and multiphoton states free from higher-order imperfections, and robust, high-efficiency detection. It can also be used to produce heralded multiphoton entanglement, create optically switchable quantum circuits and implement an improved form of down-conversion with reduced higher-order effects. Such tools are valuable building blocks for many quantum-enabled technologies. Finally, using photonic crystal fibres we experimentally demonstrate quantum correlations arising from a four-colour nonlinear process suitable for CPC and use these measurements to study the feasibility of reaching the deterministic regime with current technology. Our scheme, which is based on interacting bosonic fields, is not restricted to optical systems but could also be implemented in optomechanical, electromechanical and superconducting

  18. Aerodynamic Design of Complex Configurations Using Cartesian Methods and CAD Geometry

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.; Pulliam, Thomas H.

    2003-01-01

    The objective of this paper is to present the development of an optimization capability for the Cartesian inviscid-flow analysis package of Aftosmis et al. We evaluate and characterize the following modules within the new optimization framework: (1) a component-based geometry parameterization approach using a CAD solid representation and the CAPRI interface; (2) the use of Cartesian methods in the development of automated optimization tools; (3) optimization techniques using a genetic algorithm and a gradient-based algorithm. The discussion and investigations focus on several real-world problems of the optimization process. We examine the architectural issues associated with the deployment of a CAD-based design approach in a heterogeneous parallel computing environment that contains both CAD workstations and dedicated compute nodes. In addition, we study the influence of noise on the performance of optimization techniques, and the overall efficiency of the optimization process for aerodynamic design of complex three-dimensional configurations.

  19. Multi-site evaluation of a computer aided detection (CAD) algorithm for small acute intra-cranial hemorrhage and development of a stand-alone CAD system ready for deployment in a clinical environment

    NASA Astrophysics Data System (ADS)

    Deshpande, Ruchi R.; Fernandez, James; Lee, Joon K.; Chan, Tao; Liu, Brent J.; Huang, H. K.

    2010-03-01

    Timely detection of Acute Intra-cranial Hemorrhage (AIH) in an emergency environment is essential for the triage of patients suffering from Traumatic Brain Injury. Moreover, the small size of lesions and lack of experience on the reader's part could lead to difficulties in the detection of AIH. A CT based CAD algorithm for the detection of AIH has been developed in order to improve upon the current standard of identification and treatment of AIH. A retrospective analysis of the algorithm has already been carried out with 135 AIH CT studies with 135 matched normal head CT studies from the Los Angeles County General Hospital/ University of Southern California Hospital System (LAC/USC). In the next step, AIH studies have been collected from Walter Reed Army Medical Center, and are currently being processed using the AIH CAD system as part of implementing a multi-site assessment and evaluation of the performance of the algorithm. The sensitivity and specificity numbers from the Walter Reed study will be compared with the numbers from the LAC/USC study to determine if there are differences in the presentation and detection due to the difference in the nature of trauma between the two sites. Simultaneously, a stand-alone system with a user friendly GUI has been developed to facilitate implementation in a clinical setting.

  20. Training a CAD classifier with correlated data

    NASA Astrophysics Data System (ADS)

    Dundar, Murat; Krishnapuram, Balaji; Wolf, Matthias; Lakare, Sarang; Bogoni, Luca; Bi, Jinbo; Rao, R. Bharat

    2007-03-01

    Most methods for classifier design assume that the training samples are drawn independently and identically from an unknown data generating distribution (i.i.d.), although this assumption is violated in several real life problems. Relaxing this i.i.d. assumption, we develop training algorithms for the more realistic situation where batches or sub-groups of training samples may have internal correlations, although the samples from different batches may be considered to be uncorrelated; we also consider the extension to cases with hierarchical--i.e. higher order--correlation structure between batches of training samples. After describing efficient algorithms that scale well to large datasets, we provide some theoretical analysis to establish their validity. Experimental results from real-life Computer Aided Detection (CAD) problems indicate that relaxing the i.i.d. assumption leads to statistically significant improvements in the accuracy of the learned classifier.

  1. [An experimental research on the fabrication of the fused porcelain to CAD/CAM molar crown].

    PubMed

    Dai, Ning; Zhou, Yongyao; Liao, Wenhe; Yu, Qing; An, Tao; Jiao, Yiqun

    2007-02-01

    This paper introduces the fabrication process of a porcelain-fused molar crown with CAD/CAM technology. First, the prepared tooth data were retrieved by a 3D optical measuring system. Then, the inner surface was reconstructed and the outer surface shape was designed with computer-aided design software. Finally, a mini high-speed NC milling machine was used to produce the porcelain-fused CAD/CAM molar crown. The results prove that the fabrication process is reliable and efficient, and that the quality of the dental restoration is stable and precise. PMID:17333906

  2. CAD/CAM: Practical and Persuasive in Canadian Schools

    ERIC Educational Resources Information Center

    Willms, Ed

    2007-01-01

    Chances are that many high school students would not know how to use drafting instruments, but some might want to gain competence in computer-assisted design (CAD) and possibly computer-assisted manufacturing (CAM). These students are often attracted to tech courses by the availability of CAD/CAM instructions, and many go on to impress employers…

  3. An application protocol for CAD to CAD transfer of electronic information

    NASA Technical Reports Server (NTRS)

    Azu, Charles C., Jr.

    1993-01-01

    The exchange of Computer Aided Design (CAD) information between dissimilar CAD systems is a problem. This is especially true for transferring electronics CAD information such as multi-chip module (MCM), hybrid microcircuit assembly (HMA), and printed circuit board (PCB) designs. Currently, there exist several neutral data formats for transferring electronics CAD information. These include IGES, EDIF, and DXF formats. All these formats have limitations for use in exchanging electronic data. In an attempt to overcome these limitations, the Navy's MicroCIM program implemented a project to transfer hybrid microcircuit design information between dissimilar CAD systems. The IGES (Initial Graphics Exchange Specification) format is used since it is well established within the CAD industry. The goal of the project is to have a complete transfer of microelectronic CAD information, using IGES, without any data loss. An Application Protocol (AP) is being developed to specify how hybrid microcircuit CAD information will be represented by IGES entity constructs. The AP defines which IGES data items are appropriate for describing HMA geometry, connectivity, and processing as well as HMA material characteristics.

  4. Cone beam computed tomography imaging as a primary diagnostic tool for computer-guided surgery and CAD-CAM interim removable and fixed dental prostheses.

    PubMed

    Charette, Jyme R; Goldberg, Jack; Harris, Bryan T; Morton, Dean; Llop, Daniel R; Lin, Wei-Shao

    2016-08-01

    This article describes a digital workflow using cone beam computed tomography imaging as the primary diagnostic tool in the virtual planning of the computer-guided surgery and fabrication of a maxillary interim complete removable dental prosthesis and mandibular interim implant-supported complete fixed dental prosthesis with computer-aided design and computer-aided manufacturing technology. Diagnostic impressions (conventional or digital) and casts are unnecessary in this proposed digital workflow, providing clinicians with an alternative treatment in the indicated clinical scenario. PMID:27086108

  5. Efficient Computation Of Confidence Intervals Of Parameters

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.

    1992-01-01

    Study focuses on obtaining efficient algorithm for estimation of confidence intervals of ML estimates. Four algorithms selected to solve associated constrained optimization problem. Hybrid algorithms, following search and gradient approaches, prove best.

  6. Some Workplace Effects of CAD and CAM.

    ERIC Educational Resources Information Center

    Ebel, Karl-H.; Ulrich, Erhard

    1987-01-01

    Examines the impact of computer-aided design (CAD) and computer-aided manufacturing (CAM) on employment, work organization, working conditions, job content, training, and industrial relations in several countries. Finds little evidence of negative employment effects since productivity gains are offset by various compensatory factors. (Author/CH)

  7. Pipe Drafting with CAD. Teacher Edition.

    ERIC Educational Resources Information Center

    Smithson, Buddy

    This teacher's guide contains nine units of instruction for a course on computer-assisted pipe drafting. The course covers the following topics: introduction to pipe drafting with CAD (computer-assisted design); flow diagrams; pipe and pipe components; valves; piping plans and elevations; isometrics; equipment fabrication drawings; piping design…

  8. Efficient Parallel Engineering Computing on Linux Workstations

    NASA Technical Reports Server (NTRS)

    Lou, John Z.

    2010-01-01

    A C software module has been developed that creates lightweight processes (LWPs) dynamically to achieve parallel computing performance in a variety of engineering simulation and analysis applications to support NASA and DoD project tasks. The required interface between the module and the application it supports is simple, minimal and almost completely transparent to the user applications, and it can achieve nearly ideal computing speed-up on multi-CPU engineering workstations of all operating system platforms. The module can be integrated into an existing application (C, C++, Fortran and others) either as part of a compiled module or as a dynamically linked library (DLL).

  9. Efficient Computation Of Manipulator Inertia Matrix

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1991-01-01

    Improved method for computation of manipulator inertia matrix developed, based on concept of spatial inertia of composite rigid body. Required for implementation of advanced dynamic-control schemes as well as dynamic simulation of manipulator motion. Motivated by increasing demand for fast algorithms to provide real-time control and simulation capability and, particularly, need for faster-than-real-time simulation capability, required in many anticipated space teleoperation applications.

  10. A CAD System for Hemorrhagic Stroke.

    PubMed

    Nowinski, Wieslaw L; Qian, Guoyu; Hanley, Daniel F

    2014-09-01

    Computer-aided detection/diagnosis (CAD) is a key component of routine clinical practice, increasingly used for detection, interpretation, quantification and decision support. Despite a critical need, there is no clinically accepted CAD system for stroke yet. Here we introduce a CAD system for hemorrhagic stroke. This CAD system segments, quantifies, and displays hematoma in 2D/3D, and supports evacuation of hemorrhage by thrombolytic treatment, monitoring progression and quantifying clot removal. It supports a seven-step workflow: select patient, add a new study, process the patient's scans, show segmentation results, plot hematoma volumes, show 3D synchronized time-series hematomas, and generate a report. The system architecture contains four components: library, tools, application with user interface, and hematoma segmentation algorithm. The tools include a contour editor, 3D surface modeler, 3D volume measure, histogramming, hematoma volume plot, and 3D synchronized time-series hematoma display. The CAD system has been designed and implemented in C++. It has also been employed in the CLEAR and MISTIE phase-III, multicenter clinical trials. This stroke CAD system is potentially useful in research and clinical applications, particularly for clinical trials.

  11. Skyline View: Efficient Distributed Subspace Skyline Computation

    NASA Astrophysics Data System (ADS)

    Kim, Jinhan; Lee, Jongwuk; Hwang, Seung-Won

    Skyline queries have gained much attention as alternative query semantics with pros (e.g. low query formulation overhead) and cons (e.g. little control over result size). To overcome the cons, subspace skyline queries have been recently studied, where users iteratively specify relevant feature subspaces on the search space. However, existing works mainly focus on centralized databases. This paper aims to extend subspace skyline computation to distributed environments such as the Web, where the most important issue is to minimize the cost of accessing vertically distributed objects. Toward this goal, we exploit prior skylines that have overlapping subspaces with the given subspace. In particular, we develop algorithms for three scenarios: when the subspace of prior skylines is a superspace, a subspace, or neither. Our experimental results validate that our proposed algorithm shows significantly better performance than the state-of-the-art algorithms.
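
    For reference, the underlying query semantics can be stated in a few lines: a point belongs to the subspace skyline if no other point dominates it on the chosen dimensions. The naive sketch below (assuming smaller values are preferred) is only a baseline illustration of these semantics; the paper's contribution is avoiding such full scans in a distributed setting by reusing prior skylines.

    ```python
    def dominates(p, q, subspace):
        """p dominates q on `subspace` (smaller is better on every dimension)."""
        return (all(p[d] <= q[d] for d in subspace)
                and any(p[d] < q[d] for d in subspace))

    def subspace_skyline(points, subspace):
        """Naive subspace skyline: points not dominated by any other point."""
        return [p for p in points
                if not any(dominates(q, p, subspace) for q in points if q is not p)]

    # Hotels scored by (price, distance, noise); skyline on price and distance only.
    hotels = [(120, 2.0, 3), (90, 3.5, 5), (150, 1.0, 2), (130, 2.5, 1)]
    print(subspace_skyline(hotels, subspace=(0, 1)))   # (130, 2.5, 1) is dominated
    ```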

  12. Efficient computation of parameter confidence intervals

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.

    1987-01-01

    An important step in system identification of aircraft is the estimation of stability and control derivatives from flight data along with an assessment of parameter accuracy. When the maximum likelihood estimation technique is used, parameter accuracy is commonly assessed by the Cramer-Rao lower bound. It is known, however, that in some cases the lower bound can be substantially different from the parameter variance. Under these circumstances the Cramer-Rao bounds may be misleading as an accuracy measure. This paper discusses the confidence interval estimation problem based on likelihood ratios, which offers a more general estimate of the error bounds. Four approaches are considered for computing confidence intervals of maximum likelihood parameter estimates. Each approach is applied to real flight data and compared.
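
    In the scalar case, a likelihood-ratio confidence interval collects the parameter values whose log-likelihood lies within 0.5 * chi-squared(1, 1 - alpha) of the maximum. The sketch below illustrates that general construction on a toy Poisson example, using SciPy for the root finding and the chi-squared quantile; it is not the flight-data code, and the bracketing values are ad hoc assumptions.

    ```python
    import numpy as np
    from scipy.optimize import brentq
    from scipy.stats import chi2

    def lr_confidence_interval(loglik, theta_hat, lo, hi, alpha=0.05):
        """Likelihood-ratio (profile-likelihood) interval for a scalar parameter:
        the set of theta whose log-likelihood is within 0.5 * chi2(1, 1 - alpha)
        of the maximum.  `lo` and `hi` must bracket the two crossing points."""
        cutoff = loglik(theta_hat) - 0.5 * chi2.ppf(1 - alpha, df=1)
        g = lambda th: loglik(th) - cutoff
        return brentq(g, lo, theta_hat), brentq(g, theta_hat, hi)

    # Toy example: Poisson mean from observed counts (the MLE is the sample mean).
    counts = np.array([3, 5, 4, 6, 2])
    loglik = lambda lam: float(np.sum(counts * np.log(lam) - lam))
    mle = counts.mean()
    print(lr_confidence_interval(loglik, mle, lo=1.0, hi=10.0))
    ```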

  13. A computable expression of closure to efficient causation.

    PubMed

    Mossio, Matteo; Longo, Giuseppe; Stewart, John

    2009-04-01

    In this paper, we propose a mathematical expression of closure to efficient causation in terms of lambda-calculus; we argue that this opens up the perspective of developing principled computer simulations of systems closed to efficient causation in an appropriate programming language. An important implication of our formulation is that, by exhibiting an expression in lambda-calculus, which is a paradigmatic formalism for computability and programming, we show that there are no conceptual or principled problems in realizing a computer simulation or model of closure to efficient causation. We conclude with a brief discussion of the question whether closure to efficient causation captures all relevant properties of living systems. We suggest that it might not be the case, and that more complex definitions could indeed create some crucial obstacles to computability.

  14. Duality quantum computer and the efficient quantum simulations

    NASA Astrophysics Data System (ADS)

    Wei, Shi-Jie; Long, Gui-Lu

    2016-03-01

    Duality quantum computing is a new mode of a quantum computer to simulate a moving quantum computer passing through a multi-slit. It exploits the particle wave duality property for computing. A quantum computer with n qubits and a qudit simulates a moving quantum computer with n qubits passing through a d-slit. Duality quantum computing can realize an arbitrary sum of unitaries and therefore a general quantum operator, which is called a generalized quantum gate. All linear bounded operators can be realized by the generalized quantum gates, and unitary operators are just the extreme points of the set of generalized quantum gates. Duality quantum computing provides flexibility and a clear physical picture in designing quantum algorithms, and serves as a powerful bridge between quantum and classical algorithms. In this paper, after a brief review of the theory of duality quantum computing, we will concentrate on the applications of duality quantum computing in simulations of Hamiltonian systems. We will show that duality quantum computing can efficiently simulate quantum systems by providing descriptions of the recent efficient quantum simulation algorithm of Childs and Wiebe (Quantum Inf Comput 12(11-12):901-924, 2012) for the fast simulation of quantum systems with a sparse Hamiltonian, and the quantum simulation algorithm by Berry et al. (Phys Rev Lett 114:090502, 2015), which provides exponential improvement in precision for simulating systems with a sparse Hamiltonian.

  15. Texture functions in image analysis: A computationally efficient solution

    NASA Technical Reports Server (NTRS)

    Cox, S. C.; Rose, J. F.

    1983-01-01

    A computationally efficient means for calculating texture measurements from digital images by use of the co-occurrence technique is presented. The calculation of the statistical descriptors of image texture and a solution that circumvents the need for calculating and storing a co-occurrence matrix are discussed. The results show that existing efficient algorithms for calculating sums, sums of squares, and cross products can be used to compute complex co-occurrence relationships directly from the digital image input.
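
    The idea of avoiding an explicit co-occurrence matrix can be illustrated compactly: for a fixed displacement, the co-occurring pixel pairs are just two shifted views of the image, and sums, sums of squares, and cross products over those views yield measures such as contrast and correlation directly. The sketch below is a hedged illustration of that general idea, not the algorithm of the report.

    ```python
    import numpy as np

    def cooccurrence_stats(image, dx=1, dy=0):
        """Texture measures from co-occurring pixel pairs without forming the
        co-occurrence matrix (assumes dx, dy >= 0).  Sketch of the idea only."""
        img = image.astype(np.float64)
        h, w = img.shape
        a = img[0:h - dy, 0:w - dx]          # reference pixels
        b = img[dy:h, dx:w]                  # co-occurring neighbours
        n = a.size

        contrast = np.sum((a - b) ** 2) / n            # sum of (i - j)^2 p(i, j)
        mean_a, mean_b = a.mean(), b.mean()
        cov = np.sum(a * b) / n - mean_a * mean_b      # cross-product term
        correlation = cov / (a.std() * b.std() + 1e-12)

        return {"contrast": contrast, "correlation": correlation}

    # Example on a random 8-bit image, horizontal displacement of one pixel.
    img = np.random.default_rng(3).integers(0, 256, size=(64, 64))
    print(cooccurrence_stats(img, dx=1, dy=0))
    ```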

  16. An Evaluation of Internet-Based CAD Collaboration Tools

    ERIC Educational Resources Information Center

    Smith, Shana Shiang-Fong

    2004-01-01

    Due to the now widespread use of the Internet, most companies now require computer aided design (CAD) tools that support distributed collaborative design on the Internet. Such CAD tools should enable designers to share product models, as well as related data, from geographically distant locations. However, integrated collaborative design…

  17. Making a Case for CAD in the Curriculum.

    ERIC Educational Resources Information Center

    Threlfall, K. Denise

    1995-01-01

    Computer-assisted design (CAD) technology is transforming the apparel industry. Students of fashion merchandising and clothing design must be prepared on state-of-the-art equipment. ApparelCAD software is one example of courseware for instruction in pattern design and production. (SK)

  18. Computationally efficient Bayesian inference for inverse problems.

    SciTech Connect

    Marzouk, Youssef M.; Najm, Habib N.; Rahn, Larry A.

    2007-10-01

    Bayesian statistics provides a foundation for inference from noisy and incomplete data, a natural mechanism for regularization in the form of prior information, and a quantitative assessment of uncertainty in the inferred results. Inverse problems - representing indirect estimation of model parameters, inputs, or structural components - can be fruitfully cast in this framework. Complex and computationally intensive forward models arising in physical applications, however, can render a Bayesian approach prohibitive. This difficulty is compounded by high-dimensional model spaces, as when the unknown is a spatiotemporal field. We present new algorithmic developments for Bayesian inference in this context, showing strong connections with the forward propagation of uncertainty. In particular, we introduce a stochastic spectral formulation that dramatically accelerates the Bayesian solution of inverse problems via rapid evaluation of a surrogate posterior. We also explore dimensionality reduction for the inference of spatiotemporal fields, using truncated spectral representations of Gaussian process priors. These new approaches are demonstrated on scalar transport problems arising in contaminant source inversion and in the inference of inhomogeneous material or transport properties. We also present a Bayesian framework for parameter estimation in stochastic models, where intrinsic stochasticity may be intermingled with observational noise. Evaluation of a likelihood function may not be analytically tractable in these cases, and thus several alternative Markov chain Monte Carlo (MCMC) schemes, operating on the product space of the observations and the parameters, are introduced.

  19. On the Use of CAD and Cartesian Methods for Aerodynamic Optimization

    NASA Technical Reports Server (NTRS)

    Nemec, M.; Aftosmis, M. J.; Pulliam, T. H.

    2004-01-01

    The objective of this paper is to present the development of an optimization capability for Cart3D, a Cartesian inviscid-flow analysis package. We present the construction of a new optimization framework and we focus on the following issues: 1) a component-based geometry parameterization approach using parametric-CAD models and CAPRI. A novel geometry server is introduced that addresses the issue of parallel efficiency while only sparingly consuming CAD resources; 2) the use of genetic and gradient-based algorithms for three-dimensional aerodynamic design problems. The influence of noise on the optimization methods is studied. Our goal is to create a responsive and automated framework that efficiently identifies design modifications that result in substantial performance improvements. In addition, we examine the architectural issues associated with the deployment of a CAD-based approach in a heterogeneous parallel computing environment that contains both CAD workstations and dedicated compute engines. We demonstrate the effectiveness of the framework for a design problem that features topology changes and complex geometry.

  20. Earthquake detection through computationally efficient similarity search.

    PubMed

    Yoon, Clara E; O'Reilly, Ossian; Bergen, Karianne J; Beroza, Gregory C

    2015-12-01

    Seismology is experiencing rapid growth in the quantity of data, which has outpaced the development of processing algorithms. Earthquake detection-identification of seismic events in continuous data-is a fundamental operation for observational seismology. We developed an efficient method to detect earthquakes using waveform similarity that overcomes the disadvantages of existing detection methods. Our method, called Fingerprint And Similarity Thresholding (FAST), can analyze a week of continuous seismic waveform data in less than 2 hours, or 140 times faster than autocorrelation. FAST adapts a data mining algorithm, originally designed to identify similar audio clips within large databases; it first creates compact "fingerprints" of waveforms by extracting key discriminative features, then groups similar fingerprints together within a database to facilitate fast, scalable search for similar fingerprint pairs, and finally generates a list of earthquake detections. FAST detected most (21 of 24) cataloged earthquakes and 68 uncataloged earthquakes in 1 week of continuous data from a station located near the Calaveras Fault in central California, achieving detection performance comparable to that of autocorrelation, with some additional false detections. FAST is expected to realize its full potential when applied to extremely long duration data sets over a distributed network of seismic stations. The widespread application of FAST has the potential to aid in the discovery of unexpected seismic signals, improve seismic monitoring, and promote a greater understanding of a variety of earthquake processes. PMID:26665176
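
    As a rough, hypothetical illustration of the fingerprint-and-search pipeline described above (not the published FAST implementation), the sketch below turns overlapping waveform windows into short binary fingerprints, buckets them with a bit-sampling locality-sensitive hash, and verifies colliding pairs by Hamming distance. The synthetic data, window length, band count, and thresholds are all placeholder choices.

      import numpy as np
      from collections import defaultdict
      from itertools import combinations

      rng = np.random.default_rng(1)

      # Toy continuous record: background noise with two buried copies of the same broadband event.
      data = 0.1 * rng.standard_normal(20000)
      event = 1.5 * np.hanning(400) * rng.standard_normal(400)
      data[3000:3400] += event
      data[15000:15400] += event

      win, step, n_bands = 400, 100, 33

      def fingerprint(segment):
          # Compact binary fingerprint: signs of consecutive band-energy differences.
          spec = np.abs(np.fft.rfft(segment))
          edges = np.linspace(0, spec.size, n_bands, endpoint=False).astype(int)
          bands = np.add.reduceat(spec, edges)
          return (np.diff(bands) > 0).astype(np.int8)

      starts = np.arange(0, data.size - win, step)
      fps = np.array([fingerprint(data[s:s + win]) for s in starts])

      # Bit-sampling LSH: several hash tables, each keyed on a random subset of fingerprint bits.
      candidates = set()
      for _ in range(8):
          bits = rng.choice(fps.shape[1], size=10, replace=False)
          table = defaultdict(list)
          for i, fp in enumerate(fps):
              table[tuple(fp[bits])].append(i)
          for bucket in table.values():
              candidates.update(combinations(bucket, 2))

      # Verify candidate pairs with the full Hamming distance between fingerprints.
      for i, j in sorted(candidates):
          if starts[j] - starts[i] > win and np.sum(fps[i] != fps[j]) <= 6:
              print("similar windows near samples", starts[i], "and", starts[j])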

  1. Earthquake detection through computationally efficient similarity search

    PubMed Central

    Yoon, Clara E.; O’Reilly, Ossian; Bergen, Karianne J.; Beroza, Gregory C.

    2015-01-01

    Seismology is experiencing rapid growth in the quantity of data, which has outpaced the development of processing algorithms. Earthquake detection—identification of seismic events in continuous data—is a fundamental operation for observational seismology. We developed an efficient method to detect earthquakes using waveform similarity that overcomes the disadvantages of existing detection methods. Our method, called Fingerprint And Similarity Thresholding (FAST), can analyze a week of continuous seismic waveform data in less than 2 hours, or 140 times faster than autocorrelation. FAST adapts a data mining algorithm, originally designed to identify similar audio clips within large databases; it first creates compact “fingerprints” of waveforms by extracting key discriminative features, then groups similar fingerprints together within a database to facilitate fast, scalable search for similar fingerprint pairs, and finally generates a list of earthquake detections. FAST detected most (21 of 24) cataloged earthquakes and 68 uncataloged earthquakes in 1 week of continuous data from a station located near the Calaveras Fault in central California, achieving detection performance comparable to that of autocorrelation, with some additional false detections. FAST is expected to realize its full potential when applied to extremely long duration data sets over a distributed network of seismic stations. The widespread application of FAST has the potential to aid in the discovery of unexpected seismic signals, improve seismic monitoring, and promote a greater understanding of a variety of earthquake processes. PMID:26665176

  2. Earthquake detection through computationally efficient similarity search.

    PubMed

    Yoon, Clara E; O'Reilly, Ossian; Bergen, Karianne J; Beroza, Gregory C

    2015-12-01

    Seismology is experiencing rapid growth in the quantity of data, which has outpaced the development of processing algorithms. Earthquake detection-identification of seismic events in continuous data-is a fundamental operation for observational seismology. We developed an efficient method to detect earthquakes using waveform similarity that overcomes the disadvantages of existing detection methods. Our method, called Fingerprint And Similarity Thresholding (FAST), can analyze a week of continuous seismic waveform data in less than 2 hours, or 140 times faster than autocorrelation. FAST adapts a data mining algorithm, originally designed to identify similar audio clips within large databases; it first creates compact "fingerprints" of waveforms by extracting key discriminative features, then groups similar fingerprints together within a database to facilitate fast, scalable search for similar fingerprint pairs, and finally generates a list of earthquake detections. FAST detected most (21 of 24) cataloged earthquakes and 68 uncataloged earthquakes in 1 week of continuous data from a station located near the Calaveras Fault in central California, achieving detection performance comparable to that of autocorrelation, with some additional false detections. FAST is expected to realize its full potential when applied to extremely long duration data sets over a distributed network of seismic stations. The widespread application of FAST has the potential to aid in the discovery of unexpected seismic signals, improve seismic monitoring, and promote a greater understanding of a variety of earthquake processes.

  3. Advanced Geologic Modeling Using CAD and Unstructured Meshes

    NASA Astrophysics Data System (ADS)

    Melnikova, Y.; Jacquemyn, C.; Osman, H.; Gorman, G.; Hampson, G.; Jackson, M.

    2015-12-01

    Capturing complex, multiscale geologic heterogeneity in subsurface flow models is challenging. Surface-based modeling (SBM) offers an alternative approach to conventional grid-based methods. In SBM, all geologic features that impact the distribution of material properties, such as porosity and permeability, are modeled as volumes bounded by surfaces. Within these volumes, termed geologic domains, the material properties are constant. A typical model contains numerous such domains. The surfaces have parametric, grid-free representation which, in principle, allows for unlimited complexity, since no resolution is implied at the stage of modeling and features of any scale can be included. We demonstrate a method to create stochastic, surface-based models using computer aided design (CAD) and efficiently discretise them for flow simulation. The surfaces are represented using non-uniform, rational B-splines (NURBS), and processed in a CAD environment employing Boolean operations. We show examples of fluvial channels, fracture networks and scour events. Cartesian-like grids are not able to capture the complex geometries in these models without using excessively large numbers of grid blocks. Unstructured meshes can more efficiently approximate the geometries. However, high aspect ratio features and varying curvatures present challenges for algorithms to produce quality, unstructured meshes without excessive user interaction. We contribute an automated integrated workflow that processes the input geometry created in the CAD environment, creates the final model, and discretises it with a quality tetrahedral mesh. For computational efficiency, we use a geometry-adaptive mesh that distributes the element density and size in accordance with the geometrical complexity of the model. We show examples of finite-element flow simulations of the resulting geologic models. The new approach has broad application in modeling subsurface flow.

  4. Informatics infrastructure of CAD system.

    PubMed

    Pietka, Ewa; Gertych, Arkadiusz; Witko, Krzysztof

    2005-01-01

    A computer aided diagnosis (CAD) system requires several components which influence its effectiveness. An image processing methodology is responsible for the analysis; a database structure archives and distributes the patient demographics, clinical information, and image data. A graphical user interface is applied in order to enter the data and present it to the user. By designing dynamic Web pages, remote access to the entire system is granted. The computer aided diagnosis system includes three layers, which might be installed on various platforms. Elements of the application software are designed independently. Integration of all components is another issue discussed in the presented paper. Implementation of a computer aided diagnosis system improves and accelerates the analysis by giving the user objective measurement tools. It also standardizes the decision-making process and solves the problem of replicability. Finally, it permits a set of images and features to be collected and recognized as a medical standard and be applied in education and research. PMID:15755535

  5. Efficiently modeling neural networks on massively parallel computers

    SciTech Connect

    Farber, R.M.

    1992-01-01

    Neural networks are a very useful tool for analyzing and modeling complex real world systems. Applying neural network simulations to real world problems generally involves large amounts of data and massive amounts of computation. To efficiently handle the computational requirements of large problems, we have implemented at Los Alamos a highly efficient neural network compiler for serial computers, vector computers, vector parallel computers, and fine grain SIMD computers such as the CM-2 connection machine. This paper will describe the mapping used by the compiler to implement feed-forward backpropagation neural networks for a SIMD architecture parallel computer. Thinking Machines Corporation has benchmarked our code at 1.3 billion interconnects per second (approximately 3 gigaflops) on a 64,000 processor CM-2 connection machine (Singer 1990). This mapping is applicable to other SIMD computers and can be implemented on MIMD computers such as the CM-5 connection machine. Our mapping has virtually no communications overhead with the exception of the communications required for a global summation across the processors. We can efficiently model very large neural networks which have many neurons and interconnects and our mapping can be extended to arbitrarily large networks by merging the memory space of separate processors with fast adjacent processor inter-processor communications. This paper will consider the simulation of only feed-forward neural networks, although this method is extendible to recurrent networks.

  6. Efficiently modeling neural networks on massively parallel computers

    SciTech Connect

    Farber, R.M.

    1992-12-01

    Neural networks are a very useful tool for analyzing and modeling complex real world systems. Applying neural network simulations to real world problems generally involves large amounts of data and massive amounts of computation. To efficiently handle the computational requirements of large problems, we have implemented at Los Alamos a highly efficient neural network compiler for serial computers, vector computers, vector parallel computers, and fine grain SIMD computers such as the CM-2 connection machine. This paper will describe the mapping used by the compiler to implement feed-forward backpropagation neural networks for a SIMD architecture parallel computer. Thinking Machines Corporation has benchmarked our code at 1.3 billion interconnects per second (approximately 3 gigaflops) on a 64,000 processor CM-2 connection machine (Singer 1990). This mapping is applicable to other SIMD computers and can be implemented on MIMD computers such as the CM-5 connection machine. Our mapping has virtually no communications overhead with the exception of the communications required for a global summation across the processors. We can efficiently model very large neural networks which have many neurons and interconnects and our mapping can be extended to arbitrarily large networks by merging the memory space of separate processors with fast adjacent processor inter-processor communications. This paper will consider the simulation of only feed-forward neural networks, although this method is extendible to recurrent networks.

  7. Efficiently modeling neural networks on massively parallel computers

    NASA Technical Reports Server (NTRS)

    Farber, Robert M.

    1993-01-01

    Neural networks are a very useful tool for analyzing and modeling complex real world systems. Applying neural network simulations to real world problems generally involves large amounts of data and massive amounts of computation. To efficiently handle the computational requirements of large problems, we have implemented at Los Alamos a highly efficient neural network compiler for serial computers, vector computers, vector parallel computers, and fine grain SIMD computers such as the CM-2 connection machine. This paper describes the mapping used by the compiler to implement feed-forward backpropagation neural networks for a SIMD (Single Instruction Multiple Data) architecture parallel computer. Thinking Machines Corporation has benchmarked our code at 1.3 billion interconnects per second (approximately 3 gigaflops) on a 64,000 processor CM-2 connection machine (Singer 1990). This mapping is applicable to other SIMD computers and can be implemented on MIMD computers such as the CM-5 connection machine. Our mapping has virtually no communications overhead with the exception of the communications required for a global summation across the processors (which has a sub-linear runtime growth on the order of O(log(number of processors))). We can efficiently model very large neural networks which have many neurons and interconnects and our mapping can extend to arbitrarily large networks (within memory limitations) by merging the memory space of separate processors with fast adjacent processor interprocessor communications. This paper will consider the simulation of only feed-forward neural networks, although this method is extendable to recurrent networks.
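
    The CM-2 mapping itself is not reproduced here. As a loose modern analogy to the data-parallel layout these records describe, the sketch below runs feed-forward and backpropagation for a toy network as whole-array operations over a batch of patterns, so that the only cross-pattern communication is the batch-wide gradient summation, playing the role of the global summation mentioned above. Network sizes, data, and learning rate are arbitrary.

      import numpy as np

      rng = np.random.default_rng(0)

      # Toy batch of training patterns processed in lock-step, array-at-a-time.
      X = rng.standard_normal((256, 8))              # 256 patterns, 8 inputs
      T = (X.sum(axis=1, keepdims=True) > 0) * 1.0   # toy targets

      W1 = 0.1 * rng.standard_normal((8, 16))
      W2 = 0.1 * rng.standard_normal((16, 1))
      lr = 0.1
      sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

      for epoch in range(200):
          # Forward pass: every pattern advances through each layer simultaneously.
          H = sigmoid(X @ W1)
          Y = sigmoid(H @ W2)

          # Backward pass: elementwise error terms, still one array operation per layer.
          dY = (Y - T) * Y * (1.0 - Y)
          dH = (dY @ W2.T) * H * (1.0 - H)

          # The only cross-pattern communication: summing gradients over the batch,
          # analogous to the global summation across processors described above.
          W2 -= lr * (H.T @ dY) / len(X)
          W1 -= lr * (X.T @ dH) / len(X)

      print("final mean squared error:", float(np.mean((Y - T) ** 2)))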

  8. An efficient method for computation of the manipulator inertia matrix

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1989-01-01

    An efficient method of computation of the manipulator inertia matrix is presented. Using spatial notations, the method leads to the definition of the composite rigid-body spatial inertia, which is a spatial representation of the notion of augmented body. The previously proposed methods, the physical interpretations leading to their derivation, and their redundancies are analyzed. The proposed method achieves a greater efficiency by eliminating the redundancy in the intrinsic equations as well as by a better choice of coordinate frame for their projection. In this case, removing the redundancy leads to greater efficiency of the computation in both serial and parallel senses.
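
    The composite rigid-body formulation itself is not shown here. For contrast, the sketch below forms the joint-space inertia matrix of a planar chain of point masses directly from link Jacobians, M(q) = sum_k m_k J_k(q)^T J_k(q), a straightforward but redundant computation of the kind that methods such as the one in this record are designed to beat. The link lengths, masses, and finite-difference Jacobians are illustrative choices only.

      import numpy as np

      # Planar serial chain with a point mass at the tip of each link.
      lengths = np.array([1.0, 0.8, 0.5])
      masses = np.array([2.0, 1.5, 1.0])

      def link_positions(q):
          # Absolute link angles and 2-D positions of each point mass.
          angles = np.cumsum(q)
          steps = np.stack([lengths * np.cos(angles), lengths * np.sin(angles)], axis=1)
          return np.cumsum(steps, axis=0)

      def point_jacobian(q, k):
          # 2 x n Jacobian of point-mass k w.r.t. all joint angles (finite differences).
          n, eps = len(q), 1e-6
          J = np.zeros((2, n))
          base = link_positions(q)[k]
          for j in range(n):
              dq = q.copy()
              dq[j] += eps
              J[:, j] = (link_positions(dq)[k] - base) / eps
          return J

      def inertia_matrix(q):
          # M(q) = sum_k m_k J_k^T J_k -- correct, but recomputes shared quantities.
          M = np.zeros((len(q), len(q)))
          for k, m in enumerate(masses):
              J = point_jacobian(q, k)
              M += m * (J.T @ J)
          return M

      q = np.array([0.3, -0.5, 0.9])
      M = inertia_matrix(q)
      print(np.allclose(M, M.T), np.all(np.linalg.eigvalsh(M) > 0))  # symmetric, positive definite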

  9. A Case Study in CAD Design Automation

    ERIC Educational Resources Information Center

    Lowe, Andrew G.; Hartman, Nathan W.

    2011-01-01

    Computer-aided design (CAD) software and other product life-cycle management (PLM) tools have become ubiquitous in industry during the past 20 years. Over this time they have continuously evolved, becoming programs with enormous capabilities, but the companies that use them have not evolved their design practices at the same rate. Due to the…

  10. Mechanical Drafting with CAD. Teacher Edition.

    ERIC Educational Resources Information Center

    McClain, Gerald R.

    This instructor's manual contains 13 units of instruction for a course on mechanical drafting with options for using computer-aided drafting (CAD). Each unit includes some or all of the following basic components of a unit of instruction: objective sheet, suggested activities for the teacher, assignment sheets and answers to assignment sheets,…

  11. Positive Wigner Functions Render Classical Simulation of Quantum Computation Efficient

    NASA Astrophysics Data System (ADS)

    Mari, A.; Eisert, J.

    2012-12-01

    We show that quantum circuits where the initial state and all the following quantum operations can be represented by positive Wigner functions can be classically efficiently simulated. This is true both for continuous-variable as well as discrete variable systems in odd prime dimensions, two cases which will be treated on entirely the same footing. Noting the fact that Clifford and Gaussian operations preserve the positivity of the Wigner function, our result generalizes the Gottesman-Knill theorem. Our algorithm provides a way of sampling from the output distribution of a computation or a simulation, including the efficient sampling from an approximate output distribution in the case of sampling imperfections for initial states, gates, or measurements. In this sense, this work highlights the role of the positive Wigner function as separating classically efficiently simulable systems from those that are potentially universal for quantum computing and simulation, and it emphasizes the role of negativity of the Wigner function as a computational resource.

  12. A scheme for efficient quantum computation with linear optics

    NASA Astrophysics Data System (ADS)

    Knill, E.; Laflamme, R.; Milburn, G. J.

    2001-01-01

    Quantum computers promise to increase greatly the efficiency of solving problems such as factoring large integers, combinatorial optimization and quantum physics simulation. One of the greatest challenges now is to implement the basic quantum-computational elements in a physical system and to demonstrate that they can be reliably and scalably controlled. One of the earliest proposals for quantum computation is based on implementing a quantum bit with two optical modes containing one photon. The proposal is appealing because of the ease with which photon interference can be observed. Until now, it suffered from the requirement for non-linear couplings between optical modes containing few photons. Here we show that efficient quantum computation is possible using only beam splitters, phase shifters, single photon sources and photo-detectors. Our methods exploit feedback from photo-detectors and are robust against errors from photon loss and detector inefficiency. The basic elements are accessible to experimental investigation with current technology.

  13. I/O-Efficient Scientific Computation Using TPIE

    NASA Technical Reports Server (NTRS)

    Vengroff, Darren Erik; Vitter, Jeffrey Scott

    1996-01-01

    In recent years, input/output (I/O)-efficient algorithms for a wide variety of problems have appeared in the literature. However, systems specifically designed to assist programmers in implementing such algorithms have remained scarce. TPIE is a system designed to support I/O-efficient paradigms for problems from a variety of domains, including computational geometry, graph algorithms, and scientific computation. The TPIE interface frees programmers from having to deal not only with explicit read and write calls, but also with the complex memory management that must be performed for I/O-efficient computation. In this paper we discuss applications of TPIE to problems in scientific computation. We discuss algorithmic issues underlying the design and implementation of the relevant components of TPIE and present performance results of programs written to solve a series of benchmark problems using our current TPIE prototype. Some of the benchmarks we present are based on the NAS parallel benchmarks while others are of our own creation. We demonstrate that the central processing unit (CPU) overhead required to manage I/O is small and that even with just a single disk, the I/O overhead of I/O-efficient computation ranges from negligible to the same order of magnitude as CPU time. We conjecture that if we use a number of disks in parallel this overhead can be all but eliminated.

  14. CAD-CAE in Electrical Machines and Drives Teaching.

    ERIC Educational Resources Information Center

    Belmans, R.; Geysen, W.

    1988-01-01

    Describes the use of computer-aided design (CAD) techniques in teaching the design of electrical motors. Approaches described include three technical viewpoints, such as electromagnetics, thermal, and mechanical aspects. Provides three diagrams, a table, and conclusions. (YP)

  15. Overview of NASA MSFC IEC Multi-CAD Collaboration Capability

    NASA Technical Reports Server (NTRS)

    Moushon, Brian; McDuffee, Patrick

    2005-01-01

    This viewgraph presentation provides an overview of a Design and Data Management System (DDMS) for Computer Aided Design (CAD) collaboration in order to support the Integrated Engineering Capability (IEC) at Marshall Space Flight Center (MSFC).

  16. Do Computers Improve the Drawing of a Geometrical Figure for 10 Year-Old Children?

    ERIC Educational Resources Information Center

    Martin, Perrine; Velay, Jean-Luc

    2012-01-01

    Nowadays, computer aided design (CAD) is widely used by designers. Would children learn to draw more easily and more efficiently if they were taught with computerised tools? To answer this question, we made an experiment designed to compare two methods for children to do the same drawing: the classical "pen and paper" method and a CAD method. We…

  17. Efficient Turing-Universal Computation with DNA Polymers

    NASA Astrophysics Data System (ADS)

    Qian, Lulu; Soloveichik, David; Winfree, Erik

    Bennett's proposed chemical Turing machine is one of the most important thought experiments in the study of the thermodynamics of computation. Yet the sophistication of molecular engineering required to physically construct Bennett's hypothetical polymer substrate and enzymes has deterred experimental implementations. Here we propose a chemical implementation of stack machines - a Turing-universal model of computation similar to Turing machines - using DNA strand displacement cascades as the underlying chemical primitive. More specifically, the mechanism described herein is the addition and removal of monomers from the end of a DNA polymer, controlled by strand displacement logic. We capture the motivating feature of Bennett's scheme: that physical reversibility corresponds to logically reversible computation, and arbitrarily little energy per computation step is required. Further, as a method of embedding logic control into chemical and biological systems, polymer-based chemical computation is significantly more efficient than geometry-free chemical reaction networks.

  18. Popescu-Rohrlich correlations imply efficient instantaneous nonlocal quantum computation

    NASA Astrophysics Data System (ADS)

    Broadbent, Anne

    2016-08-01

    In instantaneous nonlocal quantum computation, two parties cooperate in order to perform a quantum computation on their joint inputs, while being restricted to a single round of simultaneous communication. Previous results showed that instantaneous nonlocal quantum computation is possible, at the cost of an exponential amount of prior shared entanglement (in the size of the input). Here, we show that a linear amount of entanglement (in the size of the computation) suffices, as long as the parties share nonlocal correlations as given by the Popescu-Rohrlich box. This means that communication is not required for efficient instantaneous nonlocal quantum computation. Exploiting the well-known relation to position-based cryptography, our result also implies the impossibility of secure position-based cryptography against adversaries with nonsignaling correlations. Furthermore, our construction establishes a quantum analog of the classical communication complexity collapse under nonsignaling correlations.

  19. CAD in the processing plant environment or managing the CAD revolution

    SciTech Connect

    Woolbert, M.A.; Bennett, R.S.; Haring, W.I.

    1985-10-01

    The author presents a case report on the use of a Computer Aided Design/Computer Aided Drafting (CAD) system. Illustrated is a four-workstation system, in addition to which there are two 70 megabyte disk drives, a check plotter and a 24-inch wide electrostatic plotter, a 300 megabyte disk for on-line storage, a tape drive for archive and backup, and a 1-megabyte network process server. It is a distributed logic system. The author states that CAD systems both inexpensive enough and powerful enough for the plant environment are relatively new on the market, made possible by the advent of super microcomputers. Also discussed is the impact the CAD system has had on productivity.

  20. Coupling Photon Monte Carlo Simulation and CAD Software. Application to X-ray Nondestructive Evaluation

    NASA Astrophysics Data System (ADS)

    Tabary, J.; Glière, A.

    A Monte Carlo radiation transport simulation program, EGS Nova, and a Computer Aided Design (CAD) package, BRL-CAD, have been coupled within the framework of Sindbad, a Nondestructive Evaluation (NDE) simulation system. In its current status, the program is very valuable in a NDE laboratory context, as it helps simulate the images due to the uncollided and scattered photon fluxes in a single NDE software environment, without having to switch to a separate Monte Carlo code and its parameter set. Numerical validations show a good agreement with EGS4 computed and published data. As the program's major drawback is the execution time, computational efficiency improvements are foreseen.

  1. Full-mouth rehabilitation with monolithic CAD/CAM-fabricated hybrid and all-ceramic materials: A case report and 3-year follow up.

    PubMed

    Selz, Christian F; Vuck, Alexander; Guess, Petra C

    2016-02-01

    Esthetic full-mouth rehabilitation represents a great challenge for clinicians and dental technicians. Computer-aided design/computer-assisted manufacture (CAD/CAM) technology and novel ceramic materials in combination with adhesive cementation provide a reliable, predictable, and economic workflow. Polychromatic feldspathic CAD/CAM ceramics that are specifically designed for anterior indications result in superior esthetics, whereas novel CAD/CAM hybrid ceramics provide sufficient fracture resistance and adsorption of the occlusal load in posterior areas. Screw-retained monolithic CAD/CAM lithium disilicate crowns (ie, hybrid abutment crowns) represent a reliable and time- and cost-efficient prosthetic implant solution. This case report details a CAD/CAM approach to the full-arch rehabilitation of a 65-year-old patient with tooth- and implant-supported restorations and provides an overview of the applied CAD/CAM materials and the utilized chairside intraoral scanner. The esthetics, functional occlusion, and gingival and peri-implant tissues remained stable over a follow-up period of 3 years. No signs of fractures within the restorations were observed.

  2. CAD-CAM at Bendix Kansas city: the BICAM system

    SciTech Connect

    Witte, D.R.

    1983-04-01

    Bendix Kansas City Division (BEKC) has been involved in Computer Aided Manufacturing (CAM) technology since the late 1950's when the numerical control (N/C) analysts installed computers to aid in N/C tape preparation for numerically controlled machines. Computer Aided Design (CAD) technology was introduced in 1976, when a number of 2D turnkey drafting stations were procured for printed wiring board (PWB) drawing definition and maintenance. In June, 1980, CAD-CAM Operations was formed to incorporate an integrated CAD-CAM capability into Bendix operations. In March 1982, a ninth division was added to the existing eight divisions at Bendix. Computer Integrated Manufacturing (CIM) is a small organization, reporting directly to the general manager, who has responsibility to coordinate the overall integration of computer aided systems at Bendix. As a long range plan, CIM has adopted a National Bureau of Standards (NBS) architecture titled Factory of the Future. Conceptually, the Bendix CAD-CAM system has a centrally located data base which can be accessed by both CAD and CAM tools, processes, and personnel thus forming an integrated Computer Aided Engineering (CAE) System. This is a key requirement of the Bendix CAD-CAM system that will be presented in more detail.

  3. A Computationally Efficient Algorithm for Aerosol Phase Equilibrium

    SciTech Connect

    Zaveri, Rahul A.; Easter, Richard C.; Peters, Len K.; Wexler, Anthony S.

    2004-10-04

    Three-dimensional models of atmospheric inorganic aerosols need an accurate yet computationally efficient thermodynamic module that is repeatedly used to compute internal aerosol phase state equilibrium. In this paper, we describe the development and evaluation of a computationally efficient numerical solver called MESA (Multicomponent Equilibrium Solver for Aerosols). The unique formulation of MESA allows iteration of all the equilibrium equations simultaneously while maintaining overall mass conservation and electroneutrality in both the solid and liquid phases. MESA is unconditionally stable, shows robust convergence, and typically requires only 10 to 20 single-level iterations (where all activity coefficients and aerosol water content are updated) per internal aerosol phase equilibrium calculation. Accuracy of MESA is comparable to that of the highly accurate Aerosol Inorganics Model (AIM), which uses a rigorous Gibbs free energy minimization approach. Performance evaluation will be presented for a number of complex multicomponent mixtures commonly found in urban and marine tropospheric aerosols.

  4. An overview of energy efficiency techniques in cluster computing systems

    SciTech Connect

    Valentini, Giorgio Luigi; Lassonde, Walter; Khan, Samee Ullah; Min-Allah, Nasro; Madani, Sajjad A.; Li, Juan; Zhang, Limin; Wang, Lizhe; Ghani, Nasir; Kolodziej, Joanna; Li, Hongxiang; Zomaya, Albert Y.; Xu, Cheng-Zhong; Balaji, Pavan; Vishnu, Abhinav; Pinel, Fredric; Pecero, Johnatan E.; Kliazovich, Dzmitry; Bouvry, Pascal

    2011-09-10

    Two major constraints demand more consideration for energy efficiency in cluster computing: (a) operational costs, and (b) system reliability. Increasing energy efficiency in cluster systems will reduce energy consumption, excess heat, lower operational costs, and improve system reliability. Based on the energy-power relationship, and the fact that energy consumption can be reduced with strategic power management, we focus in this survey on the characteristic of two main power management technologies: (a) static power management (SPM) systems that utilize low-power components to save the energy, and (b) dynamic power management (DPM) systems that utilize software and power-scalable components to optimize the energy consumption. We present the current state of the art in both of the SPM and DPM techniques, citing representative examples. The survey is concluded with a brief discussion and some assumptions about the possible future directions that could be explored to improve the energy efficiency in cluster computing.

  5. Computationally Efficient Clustering of Audio-Visual Meeting Data

    NASA Astrophysics Data System (ADS)

    Hung, Hayley; Friedland, Gerald; Yeo, Chuohao

    This chapter presents novel computationally efficient algorithms to extract semantically meaningful acoustic and visual events related to each of the participants in a group discussion using the example of business meeting recordings. The recording setup involves relatively few audio-visual sensors, comprising a limited number of cameras and microphones. We first demonstrate computationally efficient algorithms that can identify who spoke and when, a problem in speech processing known as speaker diarization. We also extract visual activity features efficiently from MPEG4 video by taking advantage of the processing that was already done for video compression. Then, we present a method of associating the audio-visual data together so that the content of each participant can be managed individually. The methods presented in this article can be used as a principal component that enables many higher-level semantic analysis tasks needed in search, retrieval, and navigation.

  6. Fast and Computationally Efficient Boundary Detection Technique for Medical Images

    NASA Astrophysics Data System (ADS)

    Das, Arpita; Goswami, Partha; Sen, Susanta

    2011-03-01

    Detection of edges is a fundamental procedure of image processing. Many edge detection algorithms have been developed based on computation of the intensity gradient. In medical images, boundaries of the objects are vague because of gradual changes in intensity. Therefore a need exists to develop a computationally efficient and accurate edge detection approach. We have presented such an algorithm using a modified global threshold technique. In our work, the boundaries are highlighted from the background by selecting a threshold (T) that separates object and background. Where an object-to-background transition (or vice versa) occurs in the image, the pixel intensity either rises to greater than or equal to T (background-to-object transition) or falls below T (object-to-background transition). We have marked these transition regions as object boundary and enhanced the corresponding intensity. The value of T may be specified heuristically or by following a specific algorithm. The conventional global threshold algorithm computes the value of T automatically, but this approach is not computationally efficient and requires a large amount of memory. In this study, we have proposed a parameter for which computation of T is very easy and fast. We have also proved that a fixed-size memory [256 × 4 bytes] is enough to compute this algorithm.
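
    As a minimal sketch of the transition-marking idea described above (not the authors' implementation), the code below thresholds an image at a fixed value T and flags pixels where the object/background label changes between 4-neighbours; the synthetic test image and the value of T are placeholders.

      import numpy as np

      def boundary_map(image, T):
          # Mark pixels where the intensity crosses threshold T between 4-neighbours.
          obj = image >= T                            # object/background segmentation
          edges = np.zeros_like(obj, dtype=bool)
          edges[:, 1:] |= obj[:, 1:] != obj[:, :-1]   # horizontal transitions
          edges[1:, :] |= obj[1:, :] != obj[:-1, :]   # vertical transitions
          return edges

      # Synthetic test image: a bright disc with a soft edge plus mild noise.
      yy, xx = np.mgrid[0:128, 0:128]
      radius = np.hypot(yy - 64, xx - 64)
      image = 200.0 / (1.0 + np.exp((radius - 30) / 4.0))
      image += 5.0 * np.random.default_rng(0).standard_normal((128, 128))

      T = 100.0                                       # threshold separating object and background
      edges = boundary_map(image, T)
      print("boundary pixels found:", int(edges.sum()))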

  7. CAD/CAM for optomechatronics

    NASA Astrophysics Data System (ADS)

    Zhou, Haiguang; Han, Min

    2003-10-01

    We focus on CAD/CAM for optomechatronics. We have developed a CAD/CAM package that covers not only mechanics but also optics and electronics. The software can be used for training and education. We introduce mechanical CAD, optical CAD and electrical CAD, and show how to draw circuit diagrams, mechanical diagrams and luminous-transmission diagrams, progressing from 2D to 3D drawing. We describe how to create 2D and 3D parts for optomechatronics, how to edit tool paths, how to select process parameters, how to run the post-processor, how to display the tool path dynamically, and how to generate the CNC program. We also introduce the joint application of CAD and CAM, aiming to meet the combined requirements of optics, mechanics and electronics.

  8. Efficient quantum circuits for one-way quantum computing.

    PubMed

    Tanamoto, Tetsufumi; Liu, Yu-Xi; Hu, Xuedong; Nori, Franco

    2009-03-13

    While Ising-type interactions are ideal for implementing controlled phase flip gates in one-way quantum computing, natural interactions between solid-state qubits are most often described by either the XY or the Heisenberg models. We show an efficient way of generating cluster states directly using either the imaginary SWAP (iSWAP) gate for the XY model, or the √SWAP gate for the Heisenberg model. Our approach thus makes one-way quantum computing more feasible for solid-state devices.

  9. The efficient computation of Fourier transforms on the symmetric group

    NASA Astrophysics Data System (ADS)

    Maslen, D. K.

    1998-07-01

    This paper introduces new techniques for the efficient computation of Fourier transforms on symmetric groups and their homogeneous spaces. We replace the matrix multiplications in Clausen's algorithm with sums indexed by combinatorial objects that generalize Young tableaux, and write the result in a form similar to Horner's rule. The algorithm we obtain computes the Fourier transform of a function on S_n in no more than (3/4)n(n - 1)|S_n| multiplications and the same number of additions. Analysis of our algorithm leads to several combinatorial problems that generalize path counting. We prove corresponding results for inverse transforms and transforms on homogeneous spaces.

  10. Efficient computations of quantum canonical Gibbs state in phase space

    NASA Astrophysics Data System (ADS)

    Bondar, Denys I.; Campos, Andre G.; Cabrera, Renan; Rabitz, Herschel A.

    2016-06-01

    The Gibbs canonical state, as a maximum entropy density matrix, represents a quantum system in equilibrium with a thermostat. This state plays an essential role in thermodynamics and serves as the initial condition for nonequilibrium dynamical simulations. We solve a long standing problem for computing the Gibbs state Wigner function with nearly machine accuracy by solving the Bloch equation directly in the phase space. Furthermore, the algorithms are provided yielding high quality Wigner distributions for pure stationary states as well as for Thomas-Fermi and Bose-Einstein distributions. The developed numerical methods furnish a long-sought efficient computation framework for nonequilibrium quantum simulations directly in the Wigner representation.

  11. Efficient computations of quantum canonical Gibbs state in phase space.

    PubMed

    Bondar, Denys I; Campos, Andre G; Cabrera, Renan; Rabitz, Herschel A

    2016-06-01

    The Gibbs canonical state, as a maximum entropy density matrix, represents a quantum system in equilibrium with a thermostat. This state plays an essential role in thermodynamics and serves as the initial condition for nonequilibrium dynamical simulations. We solve a long standing problem for computing the Gibbs state Wigner function with nearly machine accuracy by solving the Bloch equation directly in the phase space. Furthermore, the algorithms are provided yielding high quality Wigner distributions for pure stationary states as well as for Thomas-Fermi and Bose-Einstein distributions. The developed numerical methods furnish a long-sought efficient computation framework for nonequilibrium quantum simulations directly in the Wigner representation. PMID:27415384

  12. ProperCAD: A portable object-oriented parallel environment for VLSI CAD

    NASA Technical Reports Server (NTRS)

    Ramkumar, Balkrishna; Banerjee, Prithviraj

    1993-01-01

    Most parallel algorithms for VLSI CAD proposed to date have one important drawback: they work efficiently only on the machines for which they were designed. As a result, algorithms designed to date are dependent on the architecture for which they are developed and do not port easily to other parallel architectures. A new project under way to address this problem is described. A portable object-oriented parallel environment for CAD algorithms (ProperCAD) is being developed. The objectives of this research are (1) to develop new parallel algorithms that run in a portable object-oriented environment (CAD algorithms are being developed on a general-purpose platform for portable parallel programming called CARM, along with a C++ environment that is truly object-oriented and specialized for CAD applications); and (2) to design the parallel algorithms around a good sequential algorithm with a well-defined parallel-sequential interface (permitting the parallel algorithm to benefit from future developments in sequential algorithms). One CAD application that has been implemented as part of the ProperCAD project, flat VLSI circuit extraction, is described. The algorithm, its implementation, and its performance on a range of parallel machines are discussed in detail. It currently runs on an Encore Multimax, a Sequent Symmetry, Intel iPSC/2 and i860 hypercubes, an NCUBE 2 hypercube, and a network of Sun Sparc workstations. Performance data for other applications that were developed are provided, namely test pattern generation for sequential circuits, parallel logic synthesis, and standard cell placement.

  13. A compute-Efficient Bitmap Compression Index for Database Applications

    SciTech Connect

    Wu, Kesheng; Shoshani, Arie

    2006-01-01

    FastBit: A Compute-Efficient Bitmap Compression Index for Database Applications. The Word-Aligned Hybrid (WAH) bitmap compression method and data structure is highly efficient for performing search and retrieval operations on large datasets. The WAH technique is optimized for computational efficiency. The WAH-based bitmap indexing software, called FastBit, is particularly appropriate to infrequently varying databases, including those found in the on-line analytical processing (OLAP) industry. Some commercial database products already include some version of a bitmap index, which could possibly be replaced by the WAH bitmap compression techniques for potentially large operational speedup. Experimental results show performance improvements by an average factor of 10 over bitmap technology used by industry, as well as increased efficiencies in constructing compressed bitmaps. FastBit can be used as a stand-alone index or integrated into a database system. When integrated into a database system, this technique may be particularly useful for real-time business analysis applications. Additional FastBit applications may include efficient real-time exploration of scientific models, such as climate and combustion simulations, to minimize search time for analysis and subsequent data visualization. FastBit was proven theoretically to be time-optimal because it provides a search time proportional to the number of elements selected by the index.
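
    FastBit itself is not shown here. The sketch below only illustrates, in simplified form, the word-aligned idea behind WAH compression: the bitmap is cut into 31-bit groups, runs of all-zero or all-one groups collapse into counted "fill" words, and mixed groups are kept as "literal" words. The word layout and helper names are illustrative and do not match the actual FastBit format.

      GROUP = 31  # payload bits per word, as in 32-bit WAH

      def wah_compress(bits):
          # Compress a list of 0/1 bits into literal and fill words (simplified WAH).
          words = []
          for i in range(0, len(bits), GROUP):
              group = bits[i:i + GROUP]
              group += [0] * (GROUP - len(group))          # pad the final group
              if all(b == 0 for b in group) or all(b == 1 for b in group):
                  fill_bit = group[0]
                  if words and words[-1][0] == "fill" and words[-1][1] == fill_bit:
                      words[-1] = ("fill", fill_bit, words[-1][2] + 1)   # extend the run
                  else:
                      words.append(("fill", fill_bit, 1))
              else:
                  words.append(("literal", group))
          return words

      def wah_decompress(words):
          bits = []
          for w in words:
              if w[0] == "literal":
                  bits.extend(w[1])
              else:
                  bits.extend([w[1]] * (GROUP * w[2]))
          return bits

      # Sparse bitmap: long zero runs with a few set bits, typical of a bitmap index column.
      bitmap = [0] * 1000
      for pos in (3, 400, 401, 977):
          bitmap[pos] = 1

      compressed = wah_compress(bitmap)
      print(len(compressed), "words instead of", (len(bitmap) + GROUP - 1) // GROUP)
      assert wah_decompress(compressed)[:len(bitmap)] == bitmap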

  15. Ergonomics Perspective in Agricultural Research: A User-Centred Approach Using CAD and Digital Human Modeling (DHM) Technologies

    NASA Astrophysics Data System (ADS)

    Patel, Thaneswer; Sanjog, J.; Karmakar, Sougata

    2016-09-01

    Computer-aided Design (CAD) and Digital Human Modeling (DHM) (specialized CAD software for virtual human representation) technologies offer unique opportunities to incorporate human factors pro-actively in design development. Challenges of enhancing agricultural productivity through improvement of agricultural tools/machinery and better human-machine compatibility can be addressed by adoption of these modern technologies. The objectives of the present work are to provide a detailed scenario of CAD and DHM applications in the agricultural sector, and to identify means for wide adoption of these technologies for the design and development of cost-effective, user-friendly, efficient and safe agricultural tools/equipment and operators' workplaces. An extensive literature review has been conducted for systematic segregation and representation of the available information towards drawing inferences. Although applications of various CAD software have gained momentum in agricultural research, particularly for design and manufacturing of agricultural equipment/machinery, the use of DHM is still in its infancy in this sector. The current review discusses reasons for the limited adoption of these technologies in the agricultural sector and steps to be taken for their wider adoption. It also suggests possible future research directions to come up with better ergonomic design strategies for improvement of agricultural equipment/machines and workstations through application of CAD and DHM.

  16. Ergonomics Perspective in Agricultural Research: A User-Centred Approach Using CAD and Digital Human Modeling (DHM) Technologies

    NASA Astrophysics Data System (ADS)

    Patel, Thaneswer; Sanjog, J.; Karmakar, Sougata

    2016-06-01

    Computer-aided Design (CAD) and Digital Human Modeling (DHM) (specialized CAD software for virtual human representation) technologies offer unique opportunities to incorporate human factors pro-actively in design development. Challenges of enhancing agricultural productivity through improvement of agricultural tools/machinery and better human-machine compatibility can be addressed by adoption of these modern technologies. The objectives of the present work are to provide a detailed scenario of CAD and DHM applications in the agricultural sector, and to identify means for wide adoption of these technologies for the design and development of cost-effective, user-friendly, efficient and safe agricultural tools/equipment and operators' workplaces. An extensive literature review has been conducted for systematic segregation and representation of the available information towards drawing inferences. Although applications of various CAD software have gained momentum in agricultural research, particularly for design and manufacturing of agricultural equipment/machinery, the use of DHM is still in its infancy in this sector. The current review discusses reasons for the limited adoption of these technologies in the agricultural sector and steps to be taken for their wider adoption. It also suggests possible future research directions to come up with better ergonomic design strategies for improvement of agricultural equipment/machines and workstations through application of CAD and DHM.

  17. Cost reduction advantages of CAD/CAM

    NASA Astrophysics Data System (ADS)

    Parsons, G. T.

    1983-05-01

    Features of the CAD/CAM system implemented at the General Dynamics Convair division are summarized. CAD/CAM was initiated in 1976 to enhance engineering, manufacturing and quality assurance and thereby the company's competitive bidding position. Numerical models are substituted for hardware models wherever possible and numerical criteria are defined in design for guiding computer-controlled parts manufacturing machines. The system comprises multiple terminals, a data base, digitizer, printers, disk and tape drives, and graphics displays. The applications include the design and manufacture of parts and components for avionics, structures, scientific investigations, and aircraft structural components. Interfaces with other computers allow structural analyses by finite element codes. Although time savings have not been gained compared to manual drafting, components of greater complexity than could have been designed by hand have been designed and manufactured.

  18. Generating Composite Overlapping Grids on CAD Geometries

    SciTech Connect

    Henshaw, W.D.

    2002-02-07

    We describe some algorithms and tools that have been developed to generate composite overlapping grids on geometries that have been defined with computer aided design (CAD) programs. This process consists of five main steps. Starting from a description of the surfaces defining the computational domain we (1) correct errors in the CAD representation, (2) determine topology of the patched-surface, (3) build a global triangulation of the surface, (4) construct structured surface and volume grids using hyperbolic grid generation, and (5) generate the overlapping grid by determining the holes and the interpolation points. The overlapping grid generator which is used for the final step also supports the rapid generation of grids for block-structured adaptive mesh refinement and for moving grids. These algorithms have been implemented as part of the Overture object-oriented framework.

  19. Efficient Computational Techniques for Electromagnetic Propagation and Scattering.

    NASA Astrophysics Data System (ADS)

    Wagner, Robert Louis

    Electromagnetic propagation and scattering problems are important in many application areas such as communications, high-speed circuitry, medical imaging, geophysical remote sensing, nondestructive testing, and radar. This thesis develops several new techniques for the efficient computer solution of such problems. Most of this thesis deals with the efficient solution of electromagnetic scattering problems formulated as surface integral equations. A standard method of moments (MOM) formulation is used to reduce the problem to the solution of a dense, N × N matrix equation, where N is the number of surface current unknowns. An iterative solution technique is used, requiring the computation of many matrix-vector multiplications. Techniques developed for this problem include the ray-propagation fast multipole algorithm (RPFMA), which is a simple, non-nested, physically intuitive technique based on the fast multipole method (FMM). The RPFMA is implemented for two-dimensional surface integral equations, and reduces the cost of a matrix-vector multiplication from O(N^2) to O(N^(4/3)). The use of wavelets is also studied for the solution of two-dimensional surface integral equations. It is shown that the use of wavelets as basis functions produces a MOM matrix with substantial sparsity. However, unlike the RPFMA, the use of a wavelet basis does not reduce the computational complexity of the problem. In other words, the sparse MOM matrix in the wavelet basis still has O(N^2) significant entries. The fast multipole method-fast Fourier transform (FMM-FFT) method is developed to compute the scattering of an electromagnetic wave from a two-dimensional rough surface. The resulting algorithm computes a matrix-vector multiply in O(N log N) operations. This algorithm is shown to be more efficient than another O(N log N) algorithm, the multi-level fast multipole algorithm (MLFMA), for surfaces of small height. For surfaces with larger roughness, the MLFMA is found to be more

  20. Convolutional networks for fast, energy-efficient neuromorphic computing

    PubMed Central

    Esser, Steven K.; Merolla, Paul A.; Arthur, John V.; Cassidy, Andrew S.; Appuswamy, Rathinakumar; Andreopoulos, Alexander; Berg, David J.; McKinstry, Jeffrey L.; Melano, Timothy; Barch, Davis R.; di Nolfo, Carmelo; Datta, Pallab; Amir, Arnon; Taba, Brian; Flickner, Myron D.; Modha, Dharmendra S.

    2016-01-01

    Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware’s underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer. PMID:27651489

  1. Improving computational efficiency of Monte Carlo simulations with variance reduction

    SciTech Connect

    Turner, A.

    2013-07-01

    CCFE perform Monte-Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging since streaming and deep penetration effects are equally important. In order to make such simulations tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that the application of VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows. The weight window represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting being performed and a long history. When running on parallel clusters, a long history can have a detrimental effect on the parallel efficiency - if one process is computing the long history, the other CPUs complete their batch of histories and wait idle. Furthermore some long histories have been found to be effectively intractable. To combat this effect, CCFE has developed an adaptation of MCNP which dynamically adjusts the WW where a large weight deviation is encountered. The method effectively 'de-optimises' the WW, reducing the VR performance but this is offset by a significant increase in parallel efficiency. Testing with a simple geometry has shown the method does not bias the result. This 'long history method' has enabled CCFE to significantly improve the performance of MCNP calculations for ITER on parallel clusters, and will be beneficial for any geometry combining streaming and deep penetration effects. (authors)

  2. Efficient MATLAB computations with sparse and factored tensors.

    SciTech Connect

    Bader, Brett William; Kolda, Tamara Gibson (Sandia National Lab, Livermore, CA)

    2006-12-01

    In this paper, the term tensor refers simply to a multidimensional or N-way array, and we consider how specially structured tensors allow for efficient storage and computation. First, we study sparse tensors, which have the property that the vast majority of the elements are zero. We propose storing sparse tensors using coordinate format and describe the computational efficiency of this scheme for various mathematical operations, including those typical to tensor decomposition algorithms. Second, we study factored tensors, which have the property that they can be assembled from more basic components. We consider two specific types: a Tucker tensor can be expressed as the product of a core tensor (which itself may be dense, sparse, or factored) and a matrix along each mode, and a Kruskal tensor can be expressed as the sum of rank-1 tensors. We are interested in the case where the storage of the components is less than the storage of the full tensor, and we demonstrate that many elementary operations can be computed using only the components. All of the efficiencies described in this paper are implemented in the Tensor Toolbox for MATLAB.
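
    The Tensor Toolbox itself is not reproduced here; the sketch below is a minimal Python illustration of the coordinate storage idea described above, in which only the nonzero entries and their index tuples are kept and simple operations (scaling, Frobenius norm, summing out a mode) are computed directly from those components. The class and method names are invented for the example.

      import numpy as np

      class CooTensor:
          # Sparse N-way tensor stored as (indices, values) coordinate pairs.

          def __init__(self, indices, values, shape):
              self.indices = np.asarray(indices, dtype=int)   # nnz x N index tuples
              self.values = np.asarray(values, dtype=float)   # nnz nonzero entries
              self.shape = tuple(shape)

          def scale(self, alpha):
              return CooTensor(self.indices, alpha * self.values, self.shape)

          def norm(self):
              # Frobenius norm uses only the stored nonzeros.
              return float(np.sqrt(np.sum(self.values ** 2)))

          def mode_sum(self, mode):
              # Sum over one mode: accumulate values by the remaining indices.
              keep = [d for d in range(len(self.shape)) if d != mode]
              out = np.zeros(tuple(self.shape[d] for d in keep))
              np.add.at(out, tuple(self.indices[:, d] for d in keep), self.values)
              return out

      # A 100 x 80 x 60 tensor with only five nonzeros: storage is O(nnz), not O(prod(shape)).
      T = CooTensor(
          indices=[(0, 0, 0), (3, 7, 2), (99, 79, 59), (10, 5, 5), (10, 5, 6)],
          values=[1.0, -2.0, 0.5, 4.0, 3.0],
          shape=(100, 80, 60),
      )
      print(T.norm())
      print(T.scale(2.0).values)
      print(T.mode_sum(2)[10, 5])   # sums the two entries sharing indices (10, 5, :)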

  3. Resin-composite Blocks for Dental CAD/CAM Applications

    PubMed Central

    Ruse, N.D.; Sadoun, M.J.

    2014-01-01

    Advances in digital impression technology and manufacturing processes have led to a dramatic paradigm shift in dentistry and to the widespread use of computer-aided design/computer-aided manufacturing (CAD/CAM) in the fabrication of indirect dental restorations. Research and development in materials suitable for CAD/CAM applications are currently the most active field in dental materials. Two classes of materials are used in the production of CAD/CAM restorations: glass-ceramics/ceramics and resin composites. While glass-ceramics/ceramics have overall superior mechanical and esthetic properties, resin-composite materials may offer significant advantages related to their machinability and intra-oral reparability. This review summarizes recent developments in resin-composite materials for CAD/CAM applications, focusing on both commercial and experimental materials. PMID:25344335

  4. Resin-composite blocks for dental CAD/CAM applications.

    PubMed

    Ruse, N D; Sadoun, M J

    2014-12-01

    Advances in digital impression technology and manufacturing processes have led to a dramatic paradigm shift in dentistry and to the widespread use of computer-aided design/computer-aided manufacturing (CAD/CAM) in the fabrication of indirect dental restorations. Research and development in materials suitable for CAD/CAM applications are currently the most active field in dental materials. Two classes of materials are used in the production of CAD/CAM restorations: glass-ceramics/ceramics and resin composites. While glass-ceramics/ceramics have overall superior mechanical and esthetic properties, resin-composite materials may offer significant advantages related to their machinability and intra-oral reparability. This review summarizes recent developments in resin-composite materials for CAD/CAM applications, focusing on both commercial and experimental materials.

  5. An image database management system for conducting CAD research

    NASA Astrophysics Data System (ADS)

    Gruszauskas, Nicholas; Drukker, Karen; Giger, Maryellen L.

    2007-03-01

    The development of image databases for CAD research is not a trivial task. The collection and management of images and their related metadata from multiple sources is a time-consuming but necessary process. By standardizing and centralizing the methods by which these data are maintained, one can generate subsets of a larger database that match the specific criteria needed for a particular research project in a quick and efficient manner. A research-oriented management system of this type is highly desirable in a multi-modality CAD research environment. An online, web-based database system for the storage and management of research-specific medical image metadata was designed for use with four modalities of breast imaging: screen-film mammography, full-field digital mammography, breast ultrasound and breast MRI. The system was designed to consolidate data from multiple clinical sources and provide the user with the ability to anonymize the data. Input concerning the type of data to be stored as well as desired searchable parameters was solicited from researchers in each modality. The backbone of the database was created using MySQL. A robust and easy-to-use interface for entering, removing, modifying and searching information in the database was created using HTML and PHP. This standardized system can be accessed using any modern web-browsing software and is fundamental for our various research projects on computer-aided detection, diagnosis, cancer risk assessment, multimodality lesion assessment, and prognosis. Our CAD database system stores large amounts of research-related metadata and successfully generates subsets of cases that match the user's desired search criteria.

  6. A new CAD approach for improving efficacy of cancer screening

    NASA Astrophysics Data System (ADS)

    Zheng, Bin; Qian, Wei; Li, Lihua; Pu, Jiantao; Kang, Yan; Lure, Fleming; Tan, Maxine; Qiu, Yuchen

    2015-03-01

    Since the performance and clinical utility of current computer-aided detection (CAD) schemes for detecting and classifying soft tissue lesions (e.g., breast masses and lung nodules) are not satisfactory, many researchers in the CAD field call for new CAD research ideas and approaches. The purpose of this opinion paper is to share our vision and stimulate more discussion of how to overcome or compensate for the limitations of current lesion-detection-based CAD schemes in the CAD research community. Based on our observation that analyzing global image information plays an important role in radiologists' decision making, we hypothesized that targeted quantitative image features computed from global images could also provide highly discriminatory power, supplementary to the lesion-based information. To test our hypothesis, we recently performed a number of independent studies. Based on our published preliminary study results, we demonstrated that global mammographic image features and background parenchymal enhancement of breast MR images carry useful information to (1) predict near-term breast cancer risk from negative screening mammograms, (2) distinguish between true- and false-positive recalls in mammography screening examinations, and (3) classify between malignant and benign breast MR examinations. The global case-based CAD scheme only flags a risk level for each case, without cueing a large number of false-positive lesions. It can also be applied to guide lesion-based CAD cueing to reduce false positives while enhancing clinically relevant true-positive cueing. However, before such a new CAD approach is clinically acceptable, more work is needed to optimize not only the scheme performance but also how to integrate it with lesion-based CAD schemes in clinical practice.

  7. Energy Efficient Biomolecular Simulations with FPGA-based Reconfigurable Computing

    SciTech Connect

    Hampton, Scott S; Agarwal, Pratul K

    2010-05-01

    Reconfigurable computing (RC) is being investigated as a hardware solution for improving time-to-solution for biomolecular simulations. A number of popular molecular dynamics (MD) codes are used to study various aspects of biomolecules. These codes are now capable of simulating nanosecond time-scale trajectories per day on conventional microprocessor-based hardware, but biomolecular processes often occur at the microsecond time-scale or longer. A wide gap exists between the desired and achievable simulation capability; therefore, there is considerable interest in alternative algorithms and hardware for improving the time-to-solution of MD codes. The fine-grain parallelism provided by Field Programmable Gate Arrays (FPGA) combined with their low power consumption make them an attractive solution for improving the performance of MD simulations. In this work, we use an FPGA-based coprocessor to accelerate the compute-intensive calculations of LAMMPS, a popular MD code, achieving up to 5.5 fold speed-up on the non-bonded force computations of the particle mesh Ewald method and up to 2.2 fold speed-up in overall time-to-solution, and potentially an increase by a factor of 9 in power-performance efficiencies for the pair-wise computations. The results presented here provide an example of the multi-faceted benefits to an application in a heterogeneous computing environment.

  8. CAD/CAM data management

    NASA Technical Reports Server (NTRS)

    Bray, O. H.

    1984-01-01

    The role of data base management in CAD/CAM, particularly for geometric data is described. First, long term and short term objectives for CAD/CAM data management are identified. Second, the benefits of the data base management approach are explained. Third, some of the additional work needed in the data base area is discussed.

  9. Improving robustness and computational efficiency using modern C++

    NASA Astrophysics Data System (ADS)

    Paterno, M.; Kowalkowski, J.; Green, C.

    2014-06-01

    For nearly two decades, the C++ programming language has been the dominant programming language for experimental HEP. The publication of ISO/IEC 14882:2011, the current version of the international standard for the C++ programming language, makes available a variety of language and library facilities for improving the robustness, expressiveness, and computational efficiency of C++ code. However, much of the C++ written by the experimental HEP community does not take advantage of the features of the language to obtain these benefits, either due to lack of familiarity with these features or concern that these features must somehow be computationally inefficient. In this paper, we address some of the features of modern C++, and show how they can be used to make programs that are both robust and computationally efficient. We compare and contrast simple yet realistic examples of some common implementation patterns in C, currently-typical C++, and modern C++, and show (when necessary, down to the level of generated assembly language code) the quality of the executable code produced by recent C++ compilers, with the aim of allowing the HEP community to make informed decisions on the costs and benefits of the use of modern C++.

  10. Improving robustness and computational efficiency using modern C++

    SciTech Connect

    Paterno, M.; Kowalkowski, J.; Green, C.

    2014-01-01

    For nearly two decades, the C++ programming language has been the dominant programming language for experimental HEP. The publication of ISO/IEC 14882:2011, the current version of the international standard for the C++ programming language, makes available a variety of language and library facilities for improving the robustness, expressiveness, and computational efficiency of C++ code. However, much of the C++ written by the experimental HEP community does not take advantage of the features of the language to obtain these benefits, either due to lack of familiarity with these features or concern that these features must somehow be computationally inefficient. In this paper, we address some of the features of modern C++, and show how they can be used to make programs that are both robust and computationally efficient. We compare and contrast simple yet realistic examples of some common implementation patterns in C, currently-typical C++, and modern C++, and show (when necessary, down to the level of generated assembly language code) the quality of the executable code produced by recent C++ compilers, with the aim of allowing the HEP community to make informed decisions on the costs and benefits of the use of modern C++.

  11. CAD Model and Visual Assisted Control System for NIF Target Area Positioners

    SciTech Connect

    Tekle, E A; Wilson, E F; Paik, T S

    2007-10-03

    The National Ignition Facility (NIF) target chamber contains precision motion control systems that reach up to 6 meters into the target chamber for handling targets and diagnostics. Systems include the target positioner, an alignment sensor, and diagnostic manipulators (collectively called positioners). Target chamber shot experiments require a variety of positioner arrangements near the chamber center to be aligned to an accuracy of 10 micrometers. Positioners are some of the largest devices in NIF, and they require careful monitoring and control in 3 dimensions to prevent interferences. The Integrated Computer Control System provides efficient and flexible multi-positioner controls. This is accomplished through advanced video-control integration incorporating remote position sensing and real-time analysis of a CAD model of target chamber devices. The control system design, the method used to integrate existing mechanical CAD models, and the offline test laboratory used to verify proper operation of the control system are described.

  12. Improving the Efficiency of Abdominal Aortic Aneurysm Wall Stress Computations

    PubMed Central

    Zelaya, Jaime E.; Goenezen, Sevan; Dargon, Phong T.; Azarbal, Amir-Farzin; Rugonyi, Sandra

    2014-01-01

    An abdominal aortic aneurysm (AAA) is a pathological dilation of the abdominal aorta, which carries a high mortality rate if ruptured. The most commonly used surrogate marker of rupture risk is the maximal transverse diameter of the aneurysm. More recent studies suggest that wall stress from models of patient-specific aneurysm geometries extracted, for instance, from computed tomography images may be a more accurate predictor of rupture risk and an important factor in AAA size progression. However, quantification of wall stress is typically computationally intensive and time-consuming, mainly due to the nonlinear mechanical behavior of the abdominal aortic aneurysm walls. These difficulties have limited the potential of computational models in clinical practice. To facilitate computation of wall stresses, we propose to use a linear approach that ensures equilibrium of wall stresses in the aneurysms. This proposed linear model approach is easy to implement and eliminates the burden of nonlinear computations. To assess the accuracy of our proposed approach to compute wall stresses, results from idealized and patient-specific model simulations were compared to those obtained using conventional approaches and to those of a hypothetical, reference abdominal aortic aneurysm model. For the reference model, wall mechanical properties and the initial unloaded and unstressed configuration were assumed to be known, and the resulting wall stresses were used as reference for comparison. Our proposed linear approach accurately approximates wall stresses for varying model geometries and wall material properties. Our findings suggest that the proposed linear approach could be used as an effective, efficient, easy-to-use clinical tool to estimate patient-specific wall stresses. PMID:25007052

  13. Experiences With Efficient Methodologies for Teaching Computer Programming to Geoscientists

    NASA Astrophysics Data System (ADS)

    Jacobs, Christian T.; Gorman, Gerard J.; Rees, Huw E.; Craig, Lorraine E.

    2016-08-01

    Computer programming was once thought of as a skill required only by professional software developers. But today, given the ubiquitous nature of computation and data science it is quickly becoming necessary for all scientists and engineers to have at least a basic knowledge of how to program. Teaching how to program, particularly to those students with little or no computing background, is well-known to be a difficult task. However, there is also a wealth of evidence-based teaching practices for teaching programming skills which can be applied to greatly improve learning outcomes and the student experience. Adopting these practices naturally gives rise to greater learning efficiency - this is critical if programming is to be integrated into an already busy geoscience curriculum. This paper considers an undergraduate computer programming course, run during the last 5 years in the Department of Earth Science and Engineering at Imperial College London. The teaching methodologies that were used each year are discussed alongside the challenges that were encountered, and how the methodologies affected student performance. Anonymised student marks and feedback are used to highlight this, and also how the adjustments made to the course eventually resulted in a highly effective learning environment.

  14. Exploiting stoichiometric redundancies for computational efficiency and network reduction.

    PubMed

    Ingalls, Brian P; Bembenek, Eric

    2015-01-01

    Analysis of metabolic networks typically begins with construction of the stoichiometry matrix, which characterizes the network topology. This matrix provides, via the balance equation, a description of the potential steady-state flow distribution. This paper begins with the observation that the balance equation depends only on the structure of linear redundancies in the network, and so can be stated in a succinct manner, leading to computational efficiencies in steady-state analysis. This alternative description of steady-state behaviour is then used to provide a novel method for network reduction, which complements existing algorithms for describing intracellular networks in terms of input-output macro-reactions (to facilitate bioprocess optimization and control). Finally, it is demonstrated that this novel reduction method can be used to address elementary mode analysis of large networks: the modes supported by a reduced network can capture the input-output modes of a metabolic module with significantly reduced computational effort.
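
    The balance equation referred to above constrains the steady-state flux vector v by N v = 0, where N is the stoichiometry matrix. The following sketch simply extracts that steady-state flux space as the null space of N via the SVD; it illustrates the balance equation itself, not the paper's redundancy-based reduction algorithm, and the toy network is an assumption added for the example.

      import numpy as np

      def steady_state_flux_basis(N, tol=1e-10):
          """Orthonormal basis for the null space of the stoichiometry matrix N,
          i.e. all flux vectors v satisfying the balance equation N @ v = 0."""
          U, s, Vt = np.linalg.svd(N)
          rank = int(np.sum(s > tol))
          return Vt[rank:].T          # columns span {v : N v = 0}

      # toy network: A -> B -> C plus an exchange reaction converting C back to A
      N = np.array([
          [-1,  0,  1],   # species A
          [ 1, -1,  0],   # species B
          [ 0,  1, -1],   # species C
      ])
      basis = steady_state_flux_basis(N)   # single steady-state mode: all three fluxes equal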

  15. Exploiting stoichiometric redundancies for computational efficiency and network reduction.

    PubMed

    Ingalls, Brian P; Bembenek, Eric

    2015-01-01

    Analysis of metabolic networks typically begins with construction of the stoichiometry matrix, which characterizes the network topology. This matrix provides, via the balance equation, a description of the potential steady-state flow distribution. This paper begins with the observation that the balance equation depends only on the structure of linear redundancies in the network, and so can be stated in a succinct manner, leading to computational efficiencies in steady-state analysis. This alternative description of steady-state behaviour is then used to provide a novel method for network reduction, which complements existing algorithms for describing intracellular networks in terms of input-output macro-reactions (to facilitate bioprocess optimization and control). Finally, it is demonstrated that this novel reduction method can be used to address elementary mode analysis of large networks: the modes supported by a reduced network can capture the input-output modes of a metabolic module with significantly reduced computational effort. PMID:25547516

  16. Exploiting stoichiometric redundancies for computational efficiency and network reduction

    PubMed Central

    Ingalls, Brian P.; Bembenek, Eric

    2015-01-01

    Analysis of metabolic networks typically begins with construction of the stoichiometry matrix, which characterizes the network topology. This matrix provides, via the balance equation, a description of the potential steady-state flow distribution. This paper begins with the observation that the balance equation depends only on the structure of linear redundancies in the network, and so can be stated in a succinct manner, leading to computational efficiencies in steady-state analysis. This alternative description of steady-state behaviour is then used to provide a novel method for network reduction, which complements existing algorithms for describing intracellular networks in terms of input-output macro-reactions (to facilitate bioprocess optimization and control). Finally, it is demonstrated that this novel reduction method can be used to address elementary mode analysis of large networks: the modes supported by a reduced network can capture the input-output modes of a metabolic module with significantly reduced computational effort. PMID:25547516

  17. Adding computationally efficient realism to Monte Carlo turbulence simulation

    NASA Technical Reports Server (NTRS)

    Campbell, C. W.

    1985-01-01

    Frequently in aerospace vehicle flight simulation, random turbulence is generated using the assumption that the craft is small compared to the length scales of turbulence. The turbulence is presumed to vary only along the flight path of the vehicle but not across the vehicle span. The addition of the realism of three-dimensionality is a worthy goal, but any such attempt will not gain acceptance in the simulator community unless it is computationally efficient. A concept for adding three-dimensional realism with a minimum of computational complexity is presented. The concept involves the use of close rational approximations to irrational spectra and cross-spectra so that systems of stable, explicit difference equations can be used to generate the turbulence.
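
    As a hedged illustration of the kind of stable, explicit difference equation the abstract refers to, the sketch below drives a first-order autoregressive filter with white noise so that the output has an exponential correlation along the flight path. The function name and parameters are illustrative, and the paper itself concerns rational approximations to three-dimensional spectra and cross-spectra rather than this single-axis model.

      import numpy as np

      def simulate_gust(n_steps, dt, length_scale, airspeed, sigma, seed=None):
          """Longitudinal gust time series from a first-order difference equation
          (discrete AR(1) filter), a common low-order stand-in for a Dryden-type
          spectrum sampled along the flight path."""
          rng = np.random.default_rng(seed)
          tau = length_scale / airspeed          # correlation time seen by the vehicle
          a = np.exp(-dt / tau)                  # AR(1) coefficient
          b = sigma * np.sqrt(1.0 - a * a)       # keeps the output variance at sigma**2
          u = np.zeros(n_steps)
          for k in range(1, n_steps):
              u[k] = a * u[k - 1] + b * rng.standard_normal()
          return u

      gust = simulate_gust(n_steps=2000, dt=0.01, length_scale=300.0, airspeed=100.0, sigma=1.5)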

  18. Increasing computational efficiency of cochlear models using boundary layers

    NASA Astrophysics Data System (ADS)

    Alkhairy, Samiya A.; Shera, Christopher A.

    2015-12-01

    Our goal is to develop methods to improve the efficiency of computational models of the cochlea for applications that require the solution accurately only within a basal region of interest, specifically by decreasing the number of spatial sections needed for simulation of the problem with good accuracy. We design algebraic spatial and parametric transformations to computational models of the cochlea. These transformations are applied after the basal region of interest and allow for spatial preservation, driven by the natural characteristics of approximate spatial causality of cochlear models. The project is of a foundational nature and hence the goal is to design, characterize and develop an understanding and framework rather than optimization and globalization. Our scope is as follows: designing the transformations; understanding the mechanisms by which computational load is decreased for each transformation; development of performance criteria; characterization of the results of applying each transformation to a specific physical model and discretization and solution schemes. In this manuscript, we introduce one of the proposed methods (complex spatial transformation) for a case study physical model that is a linear, passive, transmission line model in which the various abstraction layers (electric parameters, filter parameters, wave parameters) are clearer than in other models. This is conducted in the frequency domain for multiple frequencies using a second-order finite difference scheme for discretization and direct elimination for solving the discrete system of equations. The performance is evaluated using two developed simulative criteria for each of the transformations. In conclusion, the developed methods serve to increase the efficiency of a computational traveling wave cochlear model when spatial preservation can hold, while maintaining good correspondence with the solution of interest and good accuracy, for applications in which the interest is in the solution

  19. Computationally efficient strategies to perform anomaly detection in hyperspectral images

    NASA Astrophysics Data System (ADS)

    Rossi, Alessandro; Acito, Nicola; Diani, Marco; Corsini, Giovanni

    2012-11-01

    In remote sensing, hyperspectral sensors are effectively used for target detection and recognition because of their high spectral resolution that allows discrimination of different materials in the sensed scene. When a priori information about the spectrum of the targets of interest is not available, target detection turns into anomaly detection (AD), i.e. searching for objects that are anomalous with respect to the scene background. In the field of AD, anomalies can generally be associated with observations that statistically deviate from background clutter, the latter being intended as a local neighborhood surrounding the observed pixel or as a large part of the image. In this context, much effort has been put into reducing the computational load of AD algorithms so as to furnish information for real-time decision making. In this work, a sub-class of AD methods is considered that aims at detecting small rare objects that are anomalous with respect to their local background. Such techniques not only are characterized by mathematical tractability but also allow the design of real-time strategies for AD. Within these methods, one of the most-established anomaly detectors is the RX algorithm which is based on a local Gaussian model for background modeling. In the literature, the RX decision rule has been employed to develop computationally efficient algorithms implemented in real-time systems. In this work, a survey of computationally efficient methods to implement the RX detector is presented where advanced algebraic strategies are exploited to speed up the estimate of the covariance matrix and of its inverse. The comparison of the overall number of operations required by the different implementations of the RX algorithm is given and discussed by varying the RX parameters in order to show the computational improvements achieved with the introduced algebraic strategy.
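
    For reference, the global form of the RX statistic that the abstract builds on is the Mahalanobis distance of each pixel spectrum from the background mean under an estimated covariance. The sketch below is a minimal NumPy version of that baseline; it includes none of the paper's algebraic speed-ups or local sliding-window variants, and the regularisation term is an added assumption.

      import numpy as np

      def rx_global(cube):
          """Global RX anomaly detector for a hyperspectral cube of shape
          (rows, cols, bands): Mahalanobis distance of every pixel to the
          scene-wide background statistics."""
          rows, cols, bands = cube.shape
          X = cube.reshape(-1, bands).astype(float)
          mu = X.mean(axis=0)
          Xc = X - mu
          cov = (Xc.T @ Xc) / (Xc.shape[0] - 1)
          cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(bands))   # regularised inverse
          scores = np.einsum('ij,jk,ik->i', Xc, cov_inv, Xc)    # per-pixel quadratic form
          return scores.reshape(rows, cols)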

  20. A dimension reduction strategy for improving the efficiency of computer-aided detection for CT colonography

    NASA Astrophysics Data System (ADS)

    Song, Bowen; Zhang, Guopeng; Wang, Huafeng; Zhu, Wei; Liang, Zhengrong

    2013-02-01

    Various types of features, e.g., geometric features, texture features, projection features, etc., have been introduced for polyp detection and differentiation tasks via computer-aided detection and diagnosis (CAD) for computed tomography colonography (CTC). Although these features together cover more information in the data, some of them are statistically highly related to others, which makes the feature set redundant and burdens the computation task of CAD. In this paper, we proposed a new dimension reduction method which combines hierarchical clustering and principal component analysis (PCA) for the false-positive (FP) reduction task. First, we group all the features based on their similarity using hierarchical clustering, and then PCA is employed within each group. Different numbers of principal components are selected from each group to form the final feature set. A support vector machine is used to perform the classification. The results show that when three principal components were chosen from each group, we achieved an area under the receiver operating characteristic curve of 0.905, which is as high as that of the original dataset. Meanwhile, the computation time is reduced by 70% and the feature set size is reduced by 77%. It can be concluded that the proposed method captures the most important information in the feature set and the classification accuracy is not affected by the dimension reduction. The result is promising, and further investigation, such as automatic threshold setting, is worthwhile and in progress.
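
    A minimal sketch of the reduction step described above, assuming hypothetical candidate data: correlated features are grouped by hierarchical clustering and a few principal components are kept per group before SVM classification. It uses SciPy/scikit-learn for brevity and omits details such as the choice of cluster count and the cross-validation used in the paper.

      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster
      from sklearn.decomposition import PCA
      from sklearn.svm import SVC

      def reduce_features(X, n_groups=10, pcs_per_group=3):
          """Group correlated features by hierarchical clustering, then keep a few
          principal components per group (sketch of the paper's reduction step)."""
          corr = np.corrcoef(X, rowvar=False)
          dist = 1.0 - np.abs(corr)                      # similar features -> small distance
          Z = linkage(dist[np.triu_indices_from(dist, k=1)], method='average')
          labels = fcluster(Z, t=n_groups, criterion='maxclust')
          blocks = []
          for g in np.unique(labels):
              cols = X[:, labels == g]
              k = min(pcs_per_group, cols.shape[1])
              blocks.append(PCA(n_components=k).fit_transform(cols))
          return np.hstack(blocks)

      # hypothetical candidate-polyp features and true/false-positive labels
      X = np.random.rand(200, 60)
      y = np.random.randint(0, 2, 200)
      clf = SVC().fit(reduce_features(X), y)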

  1. Efficient quantum algorithm for computing n-time correlation functions.

    PubMed

    Pedernales, J S; Di Candia, R; Egusquiza, I L; Casanova, J; Solano, E

    2014-07-11

    We propose a method for computing n-time correlation functions of arbitrary spinorial, fermionic, and bosonic operators, consisting of an efficient quantum algorithm that encodes these correlations in an initially added ancillary qubit for probe and control tasks. For spinorial and fermionic systems, the reconstruction of arbitrary n-time correlation functions requires the measurement of two ancilla observables, while for bosonic variables time derivatives of the same observables are needed. Finally, we provide examples applicable to different quantum platforms in the frame of the linear response theory.

  2. Efficient parallel global garbage collection on massively parallel computers

    SciTech Connect

    Kamada, Tomio; Matsuoka, Satoshi; Yonezawa, Akinori

    1994-12-31

    On distributed-memory high-performance MPPs where processors are interconnected by an asynchronous network, efficient Garbage Collection (GC) becomes difficult due to inter-node references and references within pending, unprocessed messages. The parallel global GC algorithm (1) takes advantage of reference locality, (2) efficiently traverses references over nodes, (3) admits minimum pause time of ongoing computations, and (4) has been shown to scale up to 1024 node MPPs. The algorithm employs a global weight counting scheme to substantially reduce message traffic. Two methods for confirming the arrival of pending messages are used: one counts the number of messages and the other uses network 'bulldozing'. Performance evaluation in actual implementations on a multicomputer with 32-1024 nodes, Fujitsu AP1000, reveals various favorable properties of the algorithm.

  3. Efficient Parallel Kernel Solvers for Computational Fluid Dynamics Applications

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He

    1997-01-01

    Distributed-memory parallel computers dominate today's parallel computing arena. These machines, such as Intel Paragon, IBM SP2, and Cray Origin2000, have successfully delivered high performance computing power for solving some of the so-called "grand-challenge" problems. Despite initial success, parallel machines have not been widely accepted in production engineering environments due to the complexity of parallel programming. On a parallel computing system, a task has to be partitioned and distributed appropriately among processors to reduce communication cost and to attain load balance. More importantly, even with careful partitioning and mapping, the performance of an algorithm may still be unsatisfactory, since conventional sequential algorithms may be serial in nature and may not be implemented efficiently on parallel machines. In many cases, new algorithms have to be introduced to increase parallel performance. In order to achieve optimal performance, in addition to partitioning and mapping, a careful performance study should be conducted for a given application to find a good algorithm-machine combination. This process, however, is usually painful and elusive. The goal of this project is to design and develop efficient parallel algorithms for highly accurate Computational Fluid Dynamics (CFD) simulations and other engineering applications. The work plan is 1) developing highly accurate parallel numerical algorithms, 2) conducting preliminary testing to verify the effectiveness and potential of these algorithms, and 3) incorporating newly developed algorithms into actual simulation packages. The work plan has been well achieved. Two highly accurate, efficient Poisson solvers have been developed and tested based on two different approaches: (1) Adopting a mathematical geometry which has a better capacity to describe the fluid, (2) Using a compact scheme to gain high-order accuracy in numerical discretization. The previously developed Parallel Diagonal Dominant (PDD) algorithm

  4. IMPROVING TACONITE PROCESSING PLANT EFFICIENCY BY COMPUTER SIMULATION, Final Report

    SciTech Connect

    William M. Bond; Salih Ersayin

    2007-03-30

    This project involved industrial scale testing of a mineral processing simulator to improve the efficiency of a taconite processing plant, namely the Minorca mine. The Concentrator Modeling Center at the Coleraine Minerals Research Laboratory, University of Minnesota Duluth, enhanced the capabilities of available software, Usim Pac, by developing mathematical models needed for accurate simulation of taconite plants. This project provided funding for this technology to prove itself in the industrial environment. As the first step, data representing existing plant conditions were collected by sampling and sample analysis. Data were then balanced and provided a basis for assessing the efficiency of individual devices and the plant, and also for performing simulations aimed at improving plant efficiency. Performance evaluation served as a guide in developing alternative process strategies for more efficient production. A large number of computer simulations were then performed to quantify the benefits and effects of implementing these alternative schemes. Modification of makeup ball size was selected as the most feasible option for the target performance improvement. This was combined with replacement of existing hydrocyclones with more efficient ones. After plant implementation of these modifications, plant sampling surveys were carried out to validate findings of the simulation-based study. Plant data showed very good agreement with the simulated data, confirming results of simulation. After the implementation of modifications in the plant, several upstream bottlenecks became visible. Despite these bottlenecks limiting full capacity, concentrator energy improvement of 7% was obtained. Further improvements in energy efficiency are expected in the near future. The success of this project demonstrated the feasibility of a simulation-based approach. Currently, the Center provides simulation-based service to all the iron ore mining companies operating in northern

  5. An Algorithm for Projecting Points onto a Patched CAD Model

    SciTech Connect

    Henshaw, W D

    2001-05-29

    We are interested in building structured overlapping grids for geometries defined by computer-aided-design (CAD) packages. Geometric information defining the boundary surfaces of a computation domain is often provided in the form of a collection of possibly hundreds of trimmed patches. The first step in building an overlapping volume grid on such a geometry is to build overlapping surface grids. A surface grid is typically built using hyperbolic grid generation; starting from a curve on the surface, a grid is grown by marching over the surface. A given hyperbolic grid will typically cover many of the underlying CAD surface patches. The fundamental operation needed for building surface grids is that of projecting a point in space onto the closest point on the CAD surface. We describe a fast algorithm for performing this projection; it makes use of a fairly coarse global triangulation of the CAD geometry. We describe how to build this global triangulation by first determining the connectivity of the CAD surface patches. This step is necessary since it is often the case that the CAD description will contain no information specifying how a given patch connects to other neighboring patches. Determining the connectivity is difficult since the surface patches may contain mistakes such as gaps or overlaps between neighboring patches.
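
    A hedged sketch of the underlying geometric primitive: brute-force projection of a point onto a triangulated surface via the standard closest-point-on-triangle test (Ericson, Real-Time Collision Detection). In the paper the coarse global triangulation only localises the search before the point is projected onto the exact trimmed CAD patches; the function names below are illustrative.

      import numpy as np

      def closest_point_on_triangle(p, a, b, c):
          """Closest point to p on triangle (a, b, c), all given as 3-vectors."""
          ab, ac, ap = b - a, c - a, p - a
          d1, d2 = ab @ ap, ac @ ap
          if d1 <= 0 and d2 <= 0:
              return a                                        # vertex region A
          bp = p - b
          d3, d4 = ab @ bp, ac @ bp
          if d3 >= 0 and d4 <= d3:
              return b                                        # vertex region B
          vc = d1 * d4 - d3 * d2
          if vc <= 0 and d1 >= 0 and d3 <= 0:
              return a + (d1 / (d1 - d3)) * ab                # edge AB
          cp = p - c
          d5, d6 = ab @ cp, ac @ cp
          if d6 >= 0 and d5 <= d6:
              return c                                        # vertex region C
          vb = d5 * d2 - d1 * d6
          if vb <= 0 and d2 >= 0 and d6 <= 0:
              return a + (d2 / (d2 - d6)) * ac                # edge AC
          va = d3 * d6 - d5 * d4
          if va <= 0 and (d4 - d3) >= 0 and (d5 - d6) >= 0:
              w = (d4 - d3) / ((d4 - d3) + (d5 - d6))
              return b + w * (c - b)                          # edge BC
          denom = 1.0 / (va + vb + vc)
          return a + ab * (vb * denom) + ac * (vc * denom)    # face interior

      def project_onto_triangulation(p, vertices, triangles):
          """Brute-force projection onto a triangulated surface; a real implementation
          would use a spatial search structure instead of testing every triangle."""
          candidates = [closest_point_on_triangle(p, *vertices[list(t)]) for t in triangles]
          return min(candidates, key=lambda q: np.linalg.norm(q - p))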

  6. A computationally efficient modelling of laminar separation bubbles

    NASA Technical Reports Server (NTRS)

    Dini, Paolo; Maughmer, Mark D.

    1990-01-01

    In predicting the aerodynamic characteristics of airfoils operating at low Reynolds numbers, it is often important to account for the effects of laminar (transitional) separation bubbles. Previous approaches to the modelling of this viscous phenomenon range from fast but sometimes unreliable empirical correlations for the length of the bubble and the associated increase in momentum thickness, to more accurate but significantly slower displacement-thickness iteration methods employing inverse boundary-layer formulations in the separated regions. Since the penalty in computational time associated with the more general methods is unacceptable for airfoil design applications, use of an accurate yet computationally efficient model is highly desirable. To this end, a semi-empirical bubble model was developed and incorporated into the Eppler and Somers airfoil design and analysis program. The generality and the efficiency were achieved by successfully approximating the local viscous/inviscid interaction, the transition location, and the turbulent reattachment process within the framework of an integral boundary-layer method. Comparisons of the predicted aerodynamic characteristics with experimental measurements for several airfoils show excellent and consistent agreement for Reynolds numbers from 2,000,000 down to 100,000.

  7. Efficient Computation of the Topology of Level Sets

    SciTech Connect

    Pascucci, V; Cole-McLaughlin, K

    2002-07-19

    This paper introduces two efficient algorithms that compute the Contour Tree of a 3D scalar field F and its augmented version with the Betti numbers of each isosurface. The Contour Tree is a fundamental data structure in scientific visualization that is used to pre-process the domain mesh to allow optimal computation of isosurfaces with minimal storage overhead. The Contour Tree can be also used to build user interfaces reporting the complete topological characterization of a scalar field, as shown in Figure 1. In the first part of the paper we present a new scheme that augments the Contour Tree with the Betti numbers of each isocontour in linear time. We show how to extend the scheme introduced in [3] with the Betti number computation without increasing its complexity. Thus we improve on the time complexity from our previous approach [8] from O(m log m) to O(n log n + m), where m is the number of tetrahedra and n is the number of vertices in the domain of F. In the second part of the paper we introduce a new divide and conquer algorithm that computes the Augmented Contour Tree for scalar fields defined on rectilinear grids. The central part of the scheme computes the output contour tree by merging two intermediate contour trees and is independent of the interpolant. In this way we confine any knowledge regarding a specific interpolant to an oracle that computes the tree for a single cell. We have implemented this oracle for the trilinear interpolant and plan to replace it with higher order interpolants when needed. The complexity of the scheme is O(n + t log n), where t is the number of critical points of F. This allows for the first time to compute the Contour Tree in linear time in many practical cases when t = O(n^(1-ε)). We report the running times for a parallel implementation of our algorithm, showing good scalability with the number of processors.

  8. Solving the Heterogeneous VHTR Core with Efficient Grid Computing

    NASA Astrophysics Data System (ADS)

    Connolly, Kevin John; Rahnema, Farzad

    2014-06-01

    This paper uses the coarse mesh transport method COMET to solve the eigenvalue and pin fission density distribution of the Very High Temperature Reactor (VHTR). It does this using the Boltzmann transport equation without such low-order approximations as diffusion, and it does not simplify the reactor core problem through homogenization techniques. This method is chosen as it makes highly efficient use of grid computing resources: it conducts a series of calculations at the block level using Monte Carlo to model the explicit geometry within the core without approximation, and compiles a compendium of data with the solution set. From there, it is able to solve the desired core configuration on a single processor in a fraction of the time necessary for whole-core deterministic or stochastic transport calculations. Thus, the method supplies a solution which has the accuracy of a whole-core Monte Carlo solution via the computing power available to the user. The core solved herein, the VHTR, was chosen due to its complexity. With a high level of detailed heterogeneity present from the core level to the pin level, and with asymmetric blocks and control material present outside of the fueled region of the core, this reactor geometry creates problems for methods which rely on homogenization or diffusion methods. Even transport methods find it challenging to solve. As it is desirable to reduce the number of assumptions necessary for a whole core calculation, this choice of reactor and solution method combination is an appropriate choice for a demonstration on an efficient use of grid computing.

  9. Computationally efficient implementation of combustion chemistry in parallel PDF calculations

    NASA Astrophysics Data System (ADS)

    Lu, Liuyan; Lantz, Steven R.; Ren, Zhuyin; Pope, Stephen B.

    2009-08-01

    In parallel calculations of combustion processes with realistic chemistry, the serial in situ adaptive tabulation (ISAT) algorithm [S.B. Pope, Computationally efficient implementation of combustion chemistry using in situ adaptive tabulation, Combustion Theory and Modelling, 1 (1997) 41-63; L. Lu, S.B. Pope, An improved algorithm for in situ adaptive tabulation, Journal of Computational Physics 228 (2009) 361-386] substantially speeds up the chemistry calculations on each processor. To improve the parallel efficiency of large ensembles of such calculations in parallel computations, in this work, the ISAT algorithm is extended to the multi-processor environment, with the aim of minimizing the wall clock time required for the whole ensemble. Parallel ISAT strategies are developed by combining the existing serial ISAT algorithm with different distribution strategies, namely purely local processing (PLP), uniformly random distribution (URAN), and preferential distribution (PREF). The distribution strategies enable the queued load redistribution of chemistry calculations among processors using message passing. They are implemented in the software x2f_mpi, which is a Fortran 95 library for facilitating many parallel evaluations of a general vector function. The relative performance of the parallel ISAT strategies is investigated in different computational regimes via the PDF calculations of multiple partially stirred reactors burning methane/air mixtures. The results show that the performance of ISAT with a fixed distribution strategy strongly depends on certain computational regimes, based on how much memory is available and how much overlap exists between tabulated information on different processors. No one fixed strategy consistently achieves good performance in all the regimes. Therefore, an adaptive distribution strategy, which blends PLP, URAN and PREF, is devised and implemented. It yields consistently good performance in all regimes. In the adaptive parallel

  10. Efficient Homotopy Continuation Algorithms with Application to Computational Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Brown, David A.

    New homotopy continuation algorithms are developed and applied to a parallel implicit finite-difference Newton-Krylov-Schur external aerodynamic flow solver for the compressible Euler, Navier-Stokes, and Reynolds-averaged Navier-Stokes equations with the Spalart-Allmaras one-equation turbulence model. Many new analysis tools, calculations, and numerical algorithms are presented for the study and design of efficient and robust homotopy continuation algorithms applicable to solving very large and sparse nonlinear systems of equations. Several specific homotopies are presented and studied and a methodology is presented for assessing the suitability of specific homotopies for homotopy continuation. A new class of homotopy continuation algorithms, referred to as monolithic homotopy continuation algorithms, is developed. These algorithms differ from classical predictor-corrector algorithms by combining the predictor and corrector stages into a single update, significantly reducing the amount of computation and avoiding wasted computational effort resulting from over-solving in the corrector phase. The new algorithms are also simpler from a user perspective, with fewer input parameters, which also improves the user's ability to choose effective parameters on the first flow solve attempt. Conditional convergence is proved analytically and studied numerically for the new algorithms. The performance of a fully-implicit monolithic homotopy continuation algorithm is evaluated for several inviscid, laminar, and turbulent flows over NACA 0012 airfoils and ONERA M6 wings. The monolithic algorithm is demonstrated to be more efficient than the predictor-corrector algorithm for all applications investigated. It is also demonstrated to be more efficient than the widely-used pseudo-transient continuation algorithm for all inviscid and laminar cases investigated, and good performance scaling with grid refinement is demonstrated for the inviscid cases. Performance is also demonstrated
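
    A minimal sketch of classical predictor-corrector homotopy continuation on a convex homotopy H(x, lambda) = lambda*F(x) + (1 - lambda)*G(x), traced from an easy start system G to the target system F. The monolithic algorithms described above fold the corrector into a single combined update; this sketch keeps the stages separate, and the example system and step counts are assumptions added for illustration.

      import numpy as np

      def numerical_jacobian(f, x, eps=1e-7):
          """Forward-difference Jacobian of f at x (for small illustrative systems only)."""
          n = x.size
          J = np.zeros((n, n))
          fx = f(x)
          for i in range(n):
              xp = x.copy()
              xp[i] += eps
              J[:, i] = (f(xp) - fx) / eps
          return J

      def continuation(F, G, x0, steps=50, newton_iters=5):
          """Trace H(x, lam) = lam*F(x) + (1-lam)*G(x) from lam=0 to lam=1,
          applying a few Newton corrector iterations at each lambda step."""
          x = np.asarray(x0, float)
          for lam in np.linspace(0.0, 1.0, steps + 1)[1:]:
              H = lambda z: lam * F(z) + (1.0 - lam) * G(z)
              for _ in range(newton_iters):
                  x = x - np.linalg.solve(numerical_jacobian(H, x), H(x))
          return x

      # target system F(x) = 0 and an easy start system G(x) = x - x0 with known root x0
      F = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]])
      G = lambda x: x - np.array([1.0, 1.0])
      root = continuation(F, G, x0=[1.0, 1.0])   # converges to (sqrt(2), sqrt(2))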

  11. Schools (Students) Exchanging CAD/CAM Files over the Internet.

    ERIC Educational Resources Information Center

    Mahoney, Gary S.; Smallwood, James E.

    This document discusses how students and schools can benefit from exchanging computer-aided design/computer-aided manufacturing (CAD/CAM) files over the Internet, explains how files are exchanged, and examines the problem of selected hardware/software incompatibility. Key terms associated with information search services are defined, and several…

  12. Efficient Universal Computing Architectures for Decoding Neural Activity

    PubMed Central

    Rapoport, Benjamin I.; Turicchia, Lorenzo; Wattanapanitch, Woradorn; Davidson, Thomas J.; Sarpeshkar, Rahul

    2012-01-01

    The ability to decode neural activity into meaningful control signals for prosthetic devices is critical to the development of clinically useful brain–machine interfaces (BMIs). Such systems require input from tens to hundreds of brain-implanted recording electrodes in order to deliver robust and accurate performance; in serving that primary function they should also minimize power dissipation in order to avoid damaging neural tissue; and they should transmit data wirelessly in order to minimize the risk of infection associated with chronic, transcutaneous implants. Electronic architectures for brain–machine interfaces must therefore minimize size and power consumption, while maximizing the ability to compress data to be transmitted over limited-bandwidth wireless channels. Here we present a system of extremely low computational complexity, designed for real-time decoding of neural signals, and suited for highly scalable implantable systems. Our programmable architecture is an explicit implementation of a universal computing machine emulating the dynamics of a network of integrate-and-fire neurons; it requires no arithmetic operations except for counting, and decodes neural signals using only computationally inexpensive logic operations. The simplicity of this architecture does not compromise its ability to compress raw neural data by factors greater than . We describe a set of decoding algorithms based on this computational architecture, one designed to operate within an implanted system, minimizing its power consumption and data transmission bandwidth; and a complementary set of algorithms for learning, programming the decoder, and postprocessing the decoded output, designed to operate in an external, nonimplanted unit. The implementation of the implantable portion is estimated to require fewer than 5000 operations per second. A proof-of-concept, 32-channel field-programmable gate array (FPGA) implementation of this portion is consequently energy efficient

  13. The Efficiency of Various Computers and Optimizations in Performing Finite Element Computations

    NASA Technical Reports Server (NTRS)

    Marcus, Martin H.; Broduer, Steve (Technical Monitor)

    2001-01-01

    With the advent of computers with many processors, it becomes unclear how to best exploit this advantage. For example, matrices can be inverted by applying several processors to each vector operation, or one processor can be applied to each matrix. The former approach has diminishing returns beyond a handful of processors, but how many processors depends on the computer architecture. Applying one processor to each matrix is feasible with enough RAM and scratch disk space, but the speed at which this is done is found to vary by a factor of three depending on how it is done. The cost of the computer must also be taken into account. A computer with many processors and fast interprocessor communication is much more expensive than the same computer and processors with slow interprocessor communication. Consequently, for problems that require several matrices to be inverted, the best speed per dollar for computers is found to be several small workstations that are networked together, such as in a Beowulf cluster. Since these machines typically have two processors per node, each matrix is most efficiently inverted with no more than two processors assigned to it.

  14. Computer Aided Drafting. Instructor's Guide.

    ERIC Educational Resources Information Center

    Henry, Michael A.

    This guide is intended for use in introducing students to the operation and applications of computer-aided drafting (CAD) systems. The following topics are covered in the individual lessons: understanding CAD (CAD versus traditional manual drafting and care of software and hardware); using the components of a CAD system (primary and other input…

  15. Optimization of computation efficiency in underwater acoustic navigation system.

    PubMed

    Lee, Hua

    2016-04-01

    This paper presents a technique for the estimation of the relative bearing angle between the unmanned underwater vehicle (UUV) and the base station for the homing and docking operations. The key requirements of this project include computation efficiency and estimation accuracy for direct implementation onto the UUV electronic hardware, subject to the extreme constraints of physical limitation of the hardware due to the size and dimension of the UUV housing, electric power consumption for the requirement of UUV survey duration and range coverage, and heat dissipation of the hardware. Subsequent to the design and development of the algorithm, two phases of experiments were conducted to illustrate the feasibility and capability of this technique. The presentation of this paper includes system modeling, mathematical analysis, and results from laboratory experiments and full-scale sea tests. PMID:27106337

  16. Direct composite resin layering techniques for creating lifelike CAD/CAM-fabricated composite resin veneers and crowns.

    PubMed

    LeSage, Brian

    2014-07-01

    Direct composite resin layering techniques preserve sound tooth structure and improve function and esthetics. However, intraoral placement techniques present challenges involving isolation, contamination, individual patient characteristics, and the predictability of restorative outcomes. Computer-aided design and computer-aided manufacturing (CAD/CAM) restorations enable dentists to better handle these variables and provide durable restorations in an efficient and timely manner; however, milled restorations may appear monochromatic and lack proper esthetic characteristics. For these reasons, an uncomplicated composite resin layering restoration technique can be used to combine the benefits of minimally invasive direct restorations and the ease and precision of indirect CAD/CAM restorations. Because most dentists are familiar with and skilled at composite resin layering, the use of such a technique can provide predictable and highly esthetic results. This article describes the layered composite resin restoration technique.

  17. Direct composite resin layering techniques for creating lifelike CAD/CAM-fabricated composite resin veneers and crowns.

    PubMed

    LeSage, Brian

    2014-07-01

    Direct composite resin layering techniques preserve sound tooth structure and improve function and esthetics. However, intraoral placement techniques present challenges involving isolation, contamination, individual patient characteristics, and the predictability of restorative outcomes. Computer-aided design and computer-aided manufacturing (CAD/CAM) restorations enable dentists to better handle these variables and provide durable restorations in an efficient and timely manner; however, milled restorations may appear monochromatic and lack proper esthetic characteristics. For these reasons, an uncomplicated composite resin layering restoration technique can be used to combine the benefits of minimally invasive direct restorations and the ease and precision of indirect CAD/CAM restorations. Because most dentists are familiar with and skilled at composite resin layering, the use of such a technique can provide predictable and highly esthetic results. This article describes the layered composite resin restoration technique. PMID:24680167

  18. Efficient Computer Network Anomaly Detection by Changepoint Detection Methods

    NASA Astrophysics Data System (ADS)

    Tartakovsky, Alexander G.; Polunchenko, Aleksey S.; Sokolov, Grigory

    2013-02-01

    We consider the problem of efficient on-line anomaly detection in computer network traffic. The problem is approached statistically, as that of sequential (quickest) changepoint detection. A multi-cyclic setting of quickest change detection is a natural fit for this problem. We propose a novel score-based multi-cyclic detection algorithm. The algorithm is based on the so-called Shiryaev-Roberts procedure. This procedure is as easy to employ in practice and as computationally inexpensive as the popular Cumulative Sum chart and the Exponentially Weighted Moving Average scheme. The likelihood ratio based Shiryaev-Roberts procedure has appealing optimality properties, particularly it is exactly optimal in a multi-cyclic setting geared to detect a change occurring at a far time horizon. It is therefore expected that an intrusion detection algorithm based on the Shiryaev-Roberts procedure will perform better than other detection schemes. This is confirmed experimentally for real traces. We also discuss the possibility of complementing our anomaly detection algorithm with a spectral-signature intrusion detection system with false alarm filtering and true attack confirmation capability, so as to obtain a synergistic system.
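
    As a concrete illustration of the recursion underlying the Shiryaev-Roberts procedure, the sketch below detects a shift in the mean of i.i.d. Gaussian observations: R_n = (1 + R_{n-1}) * L_n, with an alarm whenever R_n crosses a threshold and a restart from zero afterwards (the multi-cyclic setting). The pre- and post-change parameters and the threshold are illustrative assumptions, not values from the paper.

      import numpy as np

      def shiryaev_roberts(x, mu0=0.0, mu1=1.0, sigma=1.0, threshold=1e4):
          """Multi-cyclic Shiryaev-Roberts detection of a mean shift mu0 -> mu1
          in i.i.d. Gaussian data; returns the alarm times."""
          alarms, R = [], 0.0
          for n, xn in enumerate(x):
              # one-observation likelihood ratio for N(mu1, sigma^2) vs N(mu0, sigma^2)
              lr = np.exp((mu1 - mu0) * (xn - 0.5 * (mu0 + mu1)) / sigma**2)
              R = (1.0 + R) * lr
              if R >= threshold:
                  alarms.append(n)
                  R = 0.0          # restart the statistic after each alarm
          return alarms

      data = np.concatenate([np.random.normal(0, 1, 500), np.random.normal(1, 1, 50)])
      print(shiryaev_roberts(data))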

  19. Tensor scale: An analytic approach with efficient computation and applications

    PubMed Central

    Xu, Ziyue; Saha, Punam K.; Dasgupta, Soura

    2015-01-01

    Scale is a widely used notion in computer vision and image understanding that evolved in the form of scale-space theory where the key idea is to represent and analyze an image at various resolutions. Recently, we introduced a notion of local morphometric scale referred to as “tensor scale” using an ellipsoidal model that yields a unified representation of structure size, orientation and anisotropy. In the previous work, tensor scale was described using a 2-D algorithmic approach and a precise analytic definition was missing. Also, the application of tensor scale in 3-D using the previous framework is not practical due to high computational complexity. In this paper, an analytic definition of tensor scale is formulated for n-dimensional (n-D) images that captures local structure size, orientation and anisotropy. Also, an efficient computational solution in 2- and 3-D using several novel differential geometric approaches is presented and the accuracy of results is experimentally examined. Also, a matrix representation of tensor scale is derived facilitating several operations including tensor field smoothing to capture larger contextual knowledge. Finally, the applications of tensor scale in image filtering and n-linear interpolation are presented and the performance of their results is examined in comparison with respective state-of-the-art methods. Specifically, the performance of tensor scale based image filtering is compared with gradient and Weickert’s structure tensor based diffusive filtering algorithms. Also, the performance of tensor scale based n-linear interpolation is evaluated in comparison with standard n-linear and windowed-sinc interpolation methods. PMID:26236148

  20. An efficient parallel algorithm for accelerating computational protein design

    PubMed Central

    Zhou, Yichao; Xu, Wei; Donald, Bruce R.; Zeng, Jianyang

    2014-01-01

    Motivation: Structure-based computational protein design (SCPR) is an important topic in protein engineering. Under the assumption of a rigid backbone and a finite set of discrete conformations of side-chains, various methods have been proposed to address this problem. A popular method is to combine the dead-end elimination (DEE) and A* tree search algorithms, which provably finds the global minimum energy conformation (GMEC) solution. Results: In this article, we improve the efficiency of computing A* heuristic functions for protein design and propose a variant of A* algorithm in which the search process can be performed on a single GPU in a massively parallel fashion. In addition, we make some efforts to address the memory exceeding problem in A* search. As a result, our enhancements can achieve a significant speedup of the A*-based protein design algorithm by four orders of magnitude on large-scale test data through pre-computation and parallelization, while still maintaining an acceptable memory overhead. We also show that our parallel A* search algorithm could be successfully combined with iMinDEE, a state-of-the-art DEE criterion, for rotamer pruning to further improve SCPR with the consideration of continuous side-chain flexibility. Availability: Our software is available and distributed open-source under the GNU Lesser General License Version 2.1 (GNU, February 1999). The source code can be downloaded from http://www.cs.duke.edu/donaldlab/osprey.php or http://iiis.tsinghua.edu.cn/~compbio/software.html. Contact: zengjy321@tsinghua.edu.cn Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24931991

  1. Bio++: efficient extensible libraries and tools for computational molecular evolution.

    PubMed

    Guéguen, Laurent; Gaillard, Sylvain; Boussau, Bastien; Gouy, Manolo; Groussin, Mathieu; Rochette, Nicolas C; Bigot, Thomas; Fournier, David; Pouyet, Fanny; Cahais, Vincent; Bernard, Aurélien; Scornavacca, Céline; Nabholz, Benoît; Haudry, Annabelle; Dachary, Loïc; Galtier, Nicolas; Belkhir, Khalid; Dutheil, Julien Y

    2013-08-01

    Efficient algorithms and programs for the analysis of the ever-growing amount of biological sequence data are strongly needed in the genomics era. The pace at which new data and methodologies are generated calls for the use of pre-existing, optimized yet extensible code, typically distributed as libraries or packages. This motivated the Bio++ project, aiming at developing a set of C++ libraries for sequence analysis, phylogenetics, population genetics, and molecular evolution. The main attractiveness of Bio++ is the extensibility and reusability of its components through its object-oriented design, without compromising the computer-efficiency of the underlying methods. We present here the second major release of the libraries, which provides an extended set of classes and methods. These extensions notably provide built-in access to sequence databases and new data structures for handling and manipulating sequences from the omics era, such as multiple genome alignments and sequencing reads libraries. More complex models of sequence evolution, such as mixture models and generic n-tuples alphabets, are also included.

  2. Textbook Multigrid Efficiency for Computational Fluid Dynamics Simulations

    NASA Technical Reports Server (NTRS)

    Brandt, Achi; Thomas, James L.; Diskin, Boris

    2001-01-01

    Considerable progress over the past thirty years has been made in the development of large-scale computational fluid dynamics (CFD) solvers for the Euler and Navier-Stokes equations. Computations are used routinely to design the cruise shapes of transport aircraft through complex-geometry simulations involving the solution of 25-100 million equations; in this arena the number of wind-tunnel tests for a new design has been substantially reduced. However, simulations of the entire flight envelope of the vehicle, including maximum lift, buffet onset, flutter, and control effectiveness have not been as successful in eliminating the reliance on wind-tunnel testing. These simulations involve unsteady flows with more separation and stronger shock waves than at cruise. The main reasons limiting further inroads of CFD into the design process are: (1) the reliability of turbulence models; and (2) the time and expense of the numerical simulation. Because of the prohibitive resolution requirements of direct simulations at high Reynolds numbers, transition and turbulence modeling is expected to remain an issue for the near term. The focus of this paper addresses the latter problem by attempting to attain optimal efficiencies in solving the governing equations. Typically current CFD codes based on the use of multigrid acceleration techniques and multistage Runge-Kutta time-stepping schemes are able to converge lift and drag values for cruise configurations within approximately 1000 residual evaluations. An optimally convergent method is defined as having textbook multigrid efficiency (TME), meaning the solutions to the governing system of equations are attained in a computational work which is a small (less than 10) multiple of the operation count in the discretized system of equations (residual equations). In this paper, a distributed relaxation approach to achieving TME for Reynolds-averaged Navier-Stokes (RANS) equations is discussed along with the foundations that form the

  3. [Development of the dental CAD/CAM system].

    PubMed

    Kawanaka, M

    1990-06-01

    Studies have been undertaken to apply a CAD/CAM system to dentistry and to fabricate prosthetic appliances automatically with this system. The specimens are plaster models enlarged four times. For the inside of the crown, the plaster model of the prepared tooth is measured with a laser displacement meter to obtain numerical data. After this data is modified for concave cutting, the modeling machine mills from it. For the outside of the crown, typical coronal shape data (the CAD database) is prepared, and this data is modified by computer to fit the prepared tooth margin and the proximal and antagonist teeth (CAD). The CAD database was obtained with a three-dimensional point digitizer (3DPD). Because measurement with the 3DPD allows points to be selected, the CAD database can consist of characteristic points; when the database is actually used, it is interpolated with s-splines. Spline interpolation is indispensable to the CAD/CAM system. For a fuller understanding of the system, the explanation is divided into three parts: 3D measurement, CAD, and CAM. (3D measurement) Two types of 3D measurement are handled by this system, one for the CAD database and another for the prepared tooth model. 3D measurement of the prepared tooth model is equivalent to impression taking in the routine method. To obtain a clear marginal line and a uniform distribution of measuring points, the prepared tooth model is tilted and rotated on the working table while it is measured with the laser. (CAD) The CAD database can be extended, contracted, translated, and rotated with affine transformations. To place the individual margin data on the CAD database, the prepared tooth margin is re-digitized with the 3DPD. Occlusal data is taken from an F.G.P. core. (CAM) Applying spline interpolation to tool-offset theory, which is especially effective at grooves, makes the tool path easy to calculate. When the prepared tooth model is

  4. False positive reduction for lung nodule CAD

    NASA Astrophysics Data System (ADS)

    Zhao, Luyin; Boroczky, Lilla; Drysdale, Jeremy; Agnihotri, Lalitha; Lee, Michael C.

    2007-03-01

    Computer-aided detection (CAD) algorithms 'automatically' identify lung nodules on thoracic multi-slice CT scans (MSCT), thereby providing physicians with a computer-generated 'second opinion'. While CAD systems can achieve high sensitivity, their limited specificity has hindered clinical acceptance. To overcome this problem, we propose a false positive reduction (FPR) system based on image processing and machine learning to reduce the number of false positive lung nodules identified by CAD algorithms and thereby improve system specificity. To discriminate between true and false nodules, twenty-three 3D features were calculated from each candidate nodule's volume of interest (VOI). A genetic algorithm (GA) and support vector machine (SVM) were then used to select an optimal subset of features from this pool of candidate features. Using this feature subset, we trained an SVM classifier to eliminate as many false positives as possible while retaining all the true nodules. To overcome the imbalanced nature of typical datasets (significantly more false positives than true positives), an intelligent data selection algorithm was designed and integrated into the machine learning framework, further improving the FPR rate. Three independent datasets were used to train and validate the system. Using two datasets for training and the third for validation, we achieved a 59.4% FPR rate while removing one true nodule on the validation datasets. In a second experiment, 75% of the cases were randomly selected from each of the three datasets and the remaining cases were used for validation; a similar FPR rate and true-positive retention rate were achieved. Additional experiments showed that the GA feature selection process integrated with the proposed data selection algorithm outperforms the one without it by a 5%-10% FPR rate. The proposed methods can also be applied to other application areas, such as computer-aided diagnosis of lung nodules.
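
    For illustration, a minimal sketch of the SVM-based false-positive reduction step using scikit-learn. The data, labels, and the crude random search over feature subsets are placeholders; the study itself uses twenty-three VOI features, a genetic algorithm for feature selection, and an intelligent data selection scheme, none of which are reproduced here.

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)

        # Placeholder data: 500 candidate nodules x 23 features, imbalanced labels
        # (1 = true nodule, 0 = false positive), standing in for real VOI features.
        X = rng.normal(size=(500, 23))
        y = (rng.random(500) < 0.15).astype(int)

        def subset_score(mask):
            # Cross-validated score of an SVM trained on the selected feature subset;
            # class_weight="balanced" is one simple way to handle the class imbalance.
            if not mask.any():
                return 0.0
            clf = SVC(kernel="rbf", class_weight="balanced")
            return cross_val_score(clf, X[:, mask], y, cv=5).mean()

        # Crude random search over feature subsets, standing in for the paper's GA.
        best_mask, best_score = None, -1.0
        for _ in range(50):
            mask = rng.random(23) < 0.5
            score = subset_score(mask)
            if score > best_score:
                best_mask, best_score = mask, score

        print("selected features:", np.flatnonzero(best_mask), "cv score: %.3f" % best_score)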

  5. Computer-Aided Apparel Design in University Curricula.

    ERIC Educational Resources Information Center

    Belleau, Bonnie D.; Bourgeois, Elva B.

    1991-01-01

    As computer-assisted design (CAD) becomes an integral part of the fashion industry, universities must integrate CAD into the apparel curriculum. Louisiana State University's curriculum enables students to collaborate in CAD problem solving with industry personnel. (SK)

  6. Computer Aided Design of Computer Generated Holograms for electron beam fabrication

    NASA Technical Reports Server (NTRS)

    Urquhart, Kristopher S.; Lee, Sing H.; Guest, Clark C.; Feldman, Michael R.; Farhoosh, Hamid

    1989-01-01

    Computer Aided Design (CAD) systems that have been developed for electrical and mechanical design tasks are also effective tools for the process of designing Computer Generated Holograms (CGHs), particularly when these holograms are to be fabricated using electron beam lithography. CAD workstations provide efficient and convenient means of computing, storing, displaying, and preparing for fabrication many of the features that are common to CGH designs. Experience gained in the process of designing CGHs with various types of encoding methods is presented. Suggestions are made so that future workstations may further accommodate the CGH design process.

  7. Use of CAD output to guide the intelligent display of digital mammograms

    NASA Astrophysics Data System (ADS)

    Bloomquist, Aili K.; Yaffe, Martin J.; Mawdsley, Gordon E.; Morgan, Trevor; Rico, Dan; Jong, Roberta A.

    2003-05-01

    For digital mammography to be efficient, methods are needed to choose an initial default image presentation that maximizes the amount of relevant information perceived by the radiologist and minimizes the amount of time spent adjusting the image display parameters. The purpose of this work is to explore the possibility of using the output of computer aided detection (CAD) software to guide image enhancement and presentation. A set of 16 digital mammograms with lesions of known pathology was used to develop and evaluate an enhancement and display protocol to improve the initial softcopy presentation of digital mammograms. Lesions were identified by CAD and the DICOM structured report produced by the CAD program was used to determine what enhancement algorithm should be applied in the identified regions of the image. An improved version of contrast limited adaptive histogram equalization (CLAHE) is used to enhance calcifications. For masses, the image is first smoothed using a non-linear diffusion technique; subsequently, local contrast is enhanced with a method based on morphological operators. A non-linear lookup table is automatically created to optimize the contrast in the regions of interest (detected lesions) without losing the context of the periphery of the breast. The effectiveness of the enhancement will be compared with the default presentation of the images using a forced choice preference study.
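
    For illustration, a minimal sketch of the calcification-enhancement step using OpenCV's stock CLAHE implementation. The paper applies an improved CLAHE variant plus separate mass-specific processing; the file name and the clipLimit/tileGridSize values below are placeholders.

        import cv2

        # Placeholder input: an 8-bit grayscale region flagged by CAD as containing
        # calcifications (the file name is illustrative only).
        roi = cv2.imread("mammogram_roi.png", cv2.IMREAD_GRAYSCALE)

        # Contrast-limited adaptive histogram equalization: clipLimit caps local
        # contrast amplification, tileGridSize sets the local histogram regions.
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        enhanced = clahe.apply(roi)

        cv2.imwrite("mammogram_roi_clahe.png", enhanced)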

  8. A computationally efficient particle-simulation method suited to vector-computer architectures

    SciTech Connect

    McDonald, J.D.

    1990-01-01

    Recent interest in a National Aero-Space Plane (NASP) and various Aero-assisted Space Transfer Vehicles (ASTVs) presents the need for a greater understanding of high-speed rarefied flight conditions. Particle simulation techniques such as the Direct Simulation Monte Carlo (DSMC) method are well suited to such problems, but the high cost of computation limits the application of the methods to two-dimensional or very simple three-dimensional problems. This research re-examines the algorithmic structure of existing particle simulation methods and re-structures them to allow efficient implementation on vector-oriented supercomputers. A brief overview of the DSMC method and the Cray-2 vector computer architecture is provided, and the elements of the DSMC method that inhibit substantial vectorization are identified. One such element is the collision selection algorithm. A complete reformulation of the underlying kinetic theory shows that this may be efficiently vectorized for general gas mixtures. The mechanics of collisions are vectorizable in the DSMC method, but several optimizations are suggested that greatly enhance performance. This thesis also proposes a new mechanism for the exchange of energy between vibration and other energy modes. The developed scheme makes use of quantized vibrational states and is used in place of the Borgnakke-Larsen model. Finally, a simplified representation of physical space and boundary conditions is utilized to further reduce the computational cost of the developed method. Comparisons to solutions obtained from the DSMC method for the relaxation of internal energy modes in a homogeneous gas, as well as for single- and multiple-species shock wave profiles, are presented. Additionally, a large scale simulation of the flow about the proposed Aeroassisted Flight Experiment (AFE) vehicle is included as an example of the new computational capability of the developed particle simulation method.

  9. A computationally efficient Multicomponent Equilibrium Solver for Aerosols (MESA)

    NASA Astrophysics Data System (ADS)

    Zaveri, Rahul A.; Easter, Richard C.; Peters, Leonard K.

    2005-12-01

    Development and application of a new Multicomponent Equilibrium Solver for Aerosols (MESA) is described for systems containing H+, NH4+, Na+, Ca2+, SO42-, HSO4-, NO3-, and Cl- ions. The equilibrium solution is obtained by integrating a set of pseudo-transient ordinary differential equations describing the precipitation and dissolution reactions for all the possible salts to steady state. A comprehensive temperature dependent mutual deliquescence relative humidity (MDRH) parameterization is developed for all the possible salt mixtures, thereby eliminating the need for a rigorous numerical solution when ambient RH is less than MDRH(T). The solver is unconditionally stable, mass conserving, and shows robust convergence. Performance of MESA was evaluated against the Web-based AIM Model III, which served as a benchmark for accuracy, and the EQUISOLV II solver for speed. Important differences in the convergence and thermodynamic errors in MESA and EQUISOLV II are discussed. The average ratios of speeds of MESA over EQUISOLV II ranged between 1.4 and 5.8, with minimum and maximum ratios of 0.6 and 17, respectively. Because MESA directly diagnoses MDRH, it is significantly more efficient when RH < MDRH. MESA's superior performance is partially due to its "hard-wired" code for the present system as opposed to EQUISOLV II, which has a more generalized structure for solving any number and type of reactions at temperatures down to 190 K. These considerations suggest that MESA is highly attractive for use in 3-D aerosol/air-quality models for lower tropospheric applications (T > 240 K) in which both accuracy and computational efficiency are critical.

  10. 3D-CAD Effects on Creative Design Performance of Different Spatial Abilities Students

    ERIC Educational Resources Information Center

    Chang, Y.

    2014-01-01

    Students' creativity is an important focus globally and is interrelated with students' spatial abilities. Additionally, three-dimensional computer-assisted drawing (3D-CAD) overcomes barriers to spatial expression during the creative design process. Does 3D-CAD affect students' creative abilities? The purpose of this study was to…

  11. The Use of a Parametric Feature Based CAD System to Teach Introductory Engineering Graphics.

    ERIC Educational Resources Information Center

    Howell, Steven K.

    1995-01-01

    Describes the use of a parametric-feature-based computer-aided design (CAD) System, AutoCAD Designer, in teaching concepts of three dimensional geometrical modeling and design. Allows engineering graphics to go beyond the role of documentation and communication and allows an engineer to actually build a virtual prototype of a design idea and…

  12. Teaching an Introductory CAD Course with the System-Engineering Approach.

    ERIC Educational Resources Information Center

    Pao, Y. C.

    1985-01-01

    Advocates that introductory computer aided design (CAD) courses be incorporated into engineering curricula in close conjunction with the system dynamics course. Block diagram manipulation/Bode analysis and finite elementary analysis are used as examples to illustrate the interdisciplinary nature of CAD teaching. (JN)

  13. Computationally efficient finite element evaluation of natural patellofemoral mechanics.

    PubMed

    Fitzpatrick, Clare K; Baldwin, Mark A; Rullkoetter, Paul J

    2010-12-01

    pressures averaged 8.3%, 11.2%, and 5.7% between rigid and deformable analyses in the tibiofemoral joint. As statistical, probabilistic, and optimization techniques can require hundreds to thousands of analyses, a viable platform is crucial to component evaluation or clinical applications. The computationally efficient rigid body platform described in this study may be integrated with statistical and probabilistic methods and has potential clinical application in understanding in vivo joint mechanics on a subject-specific or population basis.

  14. CAD/CAM systems, materials, and clinical guidelines for all-ceramic crowns and fixed partial dentures.

    PubMed

    McLaren, Edward A; Terry, Douglas A

    2002-07-01

    Advances in dental ceramic materials and the development of computer-aided design/computer-aided manufacturing (CAD/CAM) and milling technology have facilitated the development and application of superior dental ceramics. CAD/CAM allows the use of materials that cannot be used with conventional dental processing techniques. This article reviews the main techniques and new materials used in dentistry for CAD/CAM-generated crowns and fixed partial dentures. Also covered are the clinical guidelines for using these systems.

  15. The Effect of Computer Automation on Institutional Review Board (IRB) Office Efficiency

    ERIC Educational Resources Information Center

    Oder, Karl; Pittman, Stephanie

    2015-01-01

    Companies purchase computer systems to make their processes more efficient through automation. Some academic medical centers (AMC) have purchased computer systems for their institutional review boards (IRB) to increase efficiency and compliance with regulations. IRB computer systems are expensive to purchase, deploy, and maintain. An AMC should…

  16. Mammogram CAD, hybrid registration and iconic analysis

    NASA Astrophysics Data System (ADS)

    Boucher, A.; Cloppet, F.; Vincent, N.

    2013-03-01

    This paper aims to develop a computer aided diagnosis (CAD) based on a two-step methodology to register and analyze pairs of temporal mammograms. The concept of "medical file", including all the previous medical information on a patient, enables joint analysis of different acquisitions taken at different times, and the detection of significant modifications. The developed registration method aims to superimpose the different anatomical structures of the breast as well as possible. The registration is designed to remove deformations introduced by the acquisition process while preserving those due to breast changes indicative of malignancy. In order to reach this goal, a referent image is computed from control points based on anatomical features that are extracted automatically. Then the second image of the pair is realigned on the referent image, using a coarse-to-fine approach according to expert knowledge that allows both rigid and non-rigid transforms. The joint analysis detects the evolution between two images representing the same scene. In order to achieve this, it is important to know the registration error limits so that the observation scale can be adapted. The approach used in this paper is based on a sparse image representation. Decomposed into regular patterns, the images are analyzed from a new angle. The evolution detection problem has many practical applications, especially in medical images. The CAD is evaluated using recall and precision of differences in mammograms.

  17. On the Use of CAD-Native Predicates and Geometry in Surface Meshing

    NASA Technical Reports Server (NTRS)

    Aftosmis, M. J.

    1999-01-01

    Several paradigms for accessing computer-aided design (CAD) geometry during surface meshing for computational fluid dynamics are discussed. File translation, inconsistent geometry engines, and nonnative point construction are all identified as sources of nonrobustness. The paper argues in favor of accessing CAD parts and assemblies in their native format, without translation, and for the use of CAD-native predicates and constructors in surface mesh generation. The discussion also emphasizes the importance of examining the computational requirements for exact evaluation of triangulation predicates during surface meshing.

  18. CAD May Not be Necessary for Microcalcifications in the Digital era, CAD May Benefit Radiologists for Masses

    PubMed Central

    Destounis, Stamatia V.; Arieno, Andrea L.; Morgan, Renee C.

    2012-01-01

    Objective: The aim of this study was to evaluate the effectiveness of computer-aided detection (CAD) to mark the cancer on digital mammograms at the time of breast cancer diagnosis and also review retrospectively whether CAD marked the cancer if visible on any available prior mammograms, thus potentially identifying breast cancer at an earlier stage. We sought to determine why breast lesions may or may not be marked by CAD. In particular, we analyzed factors such as breast density, mammographic views, and lesion characteristics. Materials and Methods: Retrospective review from 2004 to 2008 revealed 3445 diagnosed breast cancers in both symptomatic and asymptomatic patients; 1293 of these were imaged with full field digital mammography (FFDM). After cancer diagnosis, in a retrospective review held by the radiologist staff, 43 of these cancers were found to be visible on prior-year mammograms (false-negative cases); these breast cancer cases are the basis of this analysis. All cases had CAD evaluation available at the time of cancer diagnosis and on prior mammography studies. Data collected included patient demographics, breast density, palpability, lesion type, mammographic size, CAD marks on current- and prior-year mammograms, needle biopsy method, pathology results (core needle and/or surgical), surgery type, and lesion size. Results: On retrospective review of the mammograms by the staff radiologists, 43 cancers were discovered to be visible on prior-year mammograms. All 43 cancers were masses (mass classification included mass, mass with calcification, and mass with architectural distortion); no pure microcalcifications were identified in this cohort. Mammograms with CAD applied at the time of breast cancer diagnosis were able to detect 79% (34/43) of the cases and 56% (24/43) from mammograms with CAD applied during prior year(s). In heterogeneously dense/extremely dense tissue, CAD marked 79% (27/34) on mammograms taken at the time of diagnosis and 56% (19

  19. Present State of CAD Teaching in Spanish Universities

    ERIC Educational Resources Information Center

    Garcia, Ramon Rubio; Santos, Ramon Gallego; Quiros, Javier Suarez; Penin, Pedro I. Alvarez

    2005-01-01

    During the 1990s, all Spanish Universities updated the syllabuses of their courses as a result of the entry into force of the new Organic Law of Universities ("Ley Organica de Universidades") and, for the first time, "Computer Assisted Design" (CAD) appears in the list of core subjects (compulsory teaching content set by the government) in many of…

  20. Correlating Trainee Attributes to Performance in 3D CAD Training

    ERIC Educational Resources Information Center

    Hamade, Ramsey F.; Artail, Hassan A.; Sikstrom, Sverker

    2007-01-01

    Purpose: The purpose of this exploratory study is to identify trainee attributes relevant for development of skills in 3D computer-aided design (CAD). Design/methodology/approach: Participants were trained to perform cognitive tasks of comparable complexity over time. Performance data were collected on the time needed to construct test models, and…

  1. The design and construction of the CAD-1 airship

    NASA Technical Reports Server (NTRS)

    Kleiner, H. J.; Schneider, R.; Duncan, J. L.

    1975-01-01

    The background history, design philosophy, and computer applications related to the design of the envelope shape, stress calculations, and flight trajectories of the CAD-1 airship, now under construction by the Canadian Airship Development Corporation, are reported. A three-phase proposal for the future development of larger cargo-carrying airships is included.

  2. Program Evolves from Basic CAD to Total Manufacturing Experience

    ERIC Educational Resources Information Center

    Cassola, Joel

    2011-01-01

    Close to a decade ago, John Hersey High School (JHHS) in Arlington Heights, Illinois, made a transition from a traditional classroom-based pre-engineering program. The new program is geared towards helping students understand the entire manufacturing process. Previously, a JHHS student would design a project in computer-aided design (CAD) software…

  3. Building Efficient Wireless Infrastructures for Pervasive Computing Environments

    ERIC Educational Resources Information Center

    Sheng, Bo

    2010-01-01

    Pervasive computing is an emerging concept that thoroughly brings computing devices and the consequent technology into people's daily life and activities. Most of these computing devices are very small, sometimes even "invisible", and often embedded into the objects surrounding people. In addition, these devices usually are not isolated, but…

  4. Balancing Accuracy and Computational Efficiency for Ternary Gas Hydrate Systems

    NASA Astrophysics Data System (ADS)

    White, M. D.

    2011-12-01

    phase transitions. This paper describes and demonstrates a numerical solution scheme for ternary hydrate systems that seeks a balance between accuracy and computational efficiency. This scheme uses a generalized cubic equation of state, functional forms for the hydrate equilibria and cage occupancies, a variable switching scheme for phase transitions, and kinetic exchange of hydrate formers (i.e., CH4, CO2, and N2) between the mobile phases (i.e., aqueous, liquid CO2, and gas) and the hydrate phase. Accuracy of the scheme will be evaluated by comparing property values and phase equilibria against experimental data. Computational efficiency of the scheme will be evaluated by comparing the base scheme against variants. The application of interest will be the production of a natural gas hydrate deposit from a geologic formation using the guest-molecule exchange process, where a mixture of CO2 and N2 is injected into the formation. During the guest-molecule exchange, CO2 and N2 will predominantly replace CH4 in the large and small cages of the sI structure, respectively.
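
    For illustration, the 'generalized cubic equation of state' ingredient can be sketched with the standard Peng-Robinson form for a single component. This is a textbook expression with approximate CO2 critical constants, not the particular equation of state or parameterization used in the scheme above.

        import math

        R = 8.314462618  # universal gas constant, J/(mol K)

        def peng_robinson_pressure(T, Vm, Tc, Pc, omega):
            # Pressure (Pa) from the Peng-Robinson cubic EOS at temperature T (K) and
            # molar volume Vm (m^3/mol), given critical constants and acentric factor.
            a = 0.45724 * R ** 2 * Tc ** 2 / Pc
            b = 0.07780 * R * Tc / Pc
            kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega ** 2
            alpha = (1.0 + kappa * (1.0 - math.sqrt(T / Tc))) ** 2
            return R * T / (Vm - b) - a * alpha / (Vm ** 2 + 2.0 * b * Vm - b ** 2)

        # Approximate critical constants for CO2, one of the hydrate formers above.
        Tc, Pc, omega = 304.1, 7.38e6, 0.225
        print(peng_robinson_pressure(T=280.0, Vm=1.0e-3, Tc=Tc, Pc=Pc, omega=omega))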

  5. How to Quickly Import CAD Geometry into Thermal Desktop

    NASA Astrophysics Data System (ADS)

    Wright, Shonte; Beltran, Emilio

    2002-07-01

    There are several groups at JPL (Jet Propulsion Laboratory) that are committed to concurrent design efforts, two are featured here. Center for Space Mission Architecture and Design (CSMAD) enables the practical application of advanced process technologies in JPL's mission architecture process. Team I functions as an incubator for projects that are in the Discovery, and even pre-Discovery proposal stages. JPL's concurrent design environment is to a large extent centered on the CAD (Computer Aided Design) file. During concurrent design sessions CAD geometry is ported to other more specialized engineering design packages.

  6. Construction CAE; Integration of CAD, simulation, planning and cost control

    SciTech Connect

    Wickard, D.A. ); Bill, R.D.; Gates, K.H.; Yoshinaga, T.; Ohcoshi, S. )

    1989-01-01

    Construction CAE is a simulation, planning, scheduling, and cost control tool that is integrated with a computer aided design (CAD) system. The system uses a CAD model and allows the user to perform construction simulation on objects defined within the model. Initial cost/schedule reports as well as those required for project chronicling are supported through an interface to a work breakdown structure (WBS) and a client's existing schedule reporting system. By integrating currently available project control tools with a simulation system, Construction CAE is more effective than its individual components.

  7. CAD Skills Increased through Multicultural Design Project

    ERIC Educational Resources Information Center

    Clemons, Stephanie

    2006-01-01

    This article discusses how students in a college-entry-level CAD course researched four generations of their family histories and documented cultural and symbolic influences within their family backgrounds. AutoCAD software was then used to manipulate those cultural and symbolic images to create the design for a multicultural area rug. AutoCAD was…

  8. Cool-and Unusual-CAD Applications

    ERIC Educational Resources Information Center

    Calhoun, Ken

    2004-01-01

    This article describes several very useful applications of AutoCAD that may lie outside the normal scope of application. AutoCAD commands used in this article are based on AutoCAD 2000I. The author and his students used a Hewlett Packard 750C DesignJet plotter for plotting. (Contains 5 figures and 5 photos.)

  9. A Computationally Efficient Multicomponent Equilibrium Solver for Aerosols (MESA)

    SciTech Connect

    Zaveri, Rahul A.; Easter, Richard C.; Peters, Len K.

    2005-12-23

    deliquescence points as well as mass growth factors for the sulfate-rich systems. The MESA-MTEM configuration required only 5 to 10 single-level iterations to obtain the equilibrium solution for ~44% of the 328 multiphase problems solved in the 16 test cases at RH values ranging between 20% and 90%, while ~85% of the problems solved required less than 20 iterations. Based on the accuracy and computational efficiency considerations, the MESA-MTEM configuration is attractive for use in 3-D aerosol/air quality models.

  10. Comparative fracture strength analysis of Lava and Digident CAD/CAM zirconia ceramic crowns

    PubMed Central

    Kwon, Taek-Ka; Pak, Hyun-Soon; Han, Jung-Suk; Lee, Jai-Bong; Kim, Sung-Hun

    2013-01-01

    PURPOSE All-ceramic crowns are subject to fracture during function. To minimize this common clinical complication, zirconium oxide has been used as the framework for all-ceramic crowns. The aim of this study was to compare the fracture strengths of two computer-aided design/computer-aided manufacturing (CAD/CAM) zirconia crown systems: Lava and Digident. MATERIALS AND METHODS Twenty Lava CAD/CAM zirconia crowns and twenty Digident CAD/CAM zirconia crowns were fabricated. A metal die was also duplicated from the original prepared tooth for fracture testing. A universal testing machine was used to determine the fracture strength of the crowns. RESULTS The mean fracture strengths were as follows: 54.9 ± 15.6 N for the Lava CAD/CAM zirconia crowns and 87.0 ± 16.0 N for the Digident CAD/CAM zirconia crowns. The difference between the mean fracture strengths of the Lava and Digident crowns was statistically significant (P<.001). Lava CAD/CAM zirconia crowns showed a complete fracture of both the veneering porcelain and the core whereas the Digident CAD/CAM zirconia crowns showed fracture only of the veneering porcelain. CONCLUSION The fracture strengths of CAD/CAM zirconia crowns differ depending on the compatibility of the core material and the veneering porcelain. PMID:23755332

  11. Experiences with Efficient Methodologies for Teaching Computer Programming to Geoscientists

    ERIC Educational Resources Information Center

    Jacobs, Christian T.; Gorman, Gerard J.; Rees, Huw E.; Craig, Lorraine E.

    2016-01-01

    Computer programming was once thought of as a skill required only by professional software developers. But today, given the ubiquitous nature of computation and data science it is quickly becoming necessary for all scientists and engineers to have at least a basic knowledge of how to program. Teaching how to program, particularly to those students…

  12. IFEMS, an Interactive Finite Element Modeling System Using a CAD/CAM System

    NASA Technical Reports Server (NTRS)

    Mckellip, S.; Schuman, T.; Lauer, S.

    1980-01-01

    A method of coupling a CAD/CAM system with a general purpose finite element mesh generator is described. The three computer programs which make up the interactive finite element graphics system are discussed.

  13. Effectiveness of a Standard Computer Interface Paradigm on Computer Anxiety, Self-Direction, Efficiency, and Self-Confidence.

    ERIC Educational Resources Information Center

    Ward, Hugh C., Jr.

    A study was undertaken to explore whether students using an advance organizer-metacognitive learning strategy would be less anxious, more self-directing, more efficient, and more self-confident when learning unknown computer applications software than students using traditional computer software learning strategies. The first experiment was…

  14. Efficient reinforcement learning: computational theories, neuroscience and robotics.

    PubMed

    Kawato, Mitsuo; Samejima, Kazuyuki

    2007-04-01

    Reinforcement learning algorithms have provided some of the most influential computational theories for behavioral learning that depends on reward and penalty. After briefly reviewing supporting experimental data, this paper tackles three difficult theoretical issues that remain to be explored. First, plain reinforcement learning is much too slow to be considered a plausible brain model. Second, although the temporal-difference error has an important role both in theory and in experiments, how to compute it remains an enigma. Third, function of all brain areas, including the cerebral cortex, cerebellum, brainstem and basal ganglia, seems to necessitate a new computational framework. Computational studies that emphasize meta-parameters, hierarchy, modularity and supervised learning to resolve these issues are reviewed here, together with the related experimental data.
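
    For illustration, the temporal-difference error mentioned above has a compact standard form, delta = r + gamma * V(s') - V(s). The sketch below is generic tabular TD(0) on a toy random-walk task, not one of the brain models discussed in the paper.

        import numpy as np

        rng = np.random.default_rng(1)

        # Toy 5-state random walk; episodes start in the middle and end at either
        # boundary, with reward 1 only when the right boundary is reached.
        n_states, alpha, gamma = 5, 0.1, 1.0
        V = np.zeros(n_states + 2)  # value estimates; indices 0 and 6 are terminal

        for _ in range(2000):
            s = 3
            while s not in (0, n_states + 1):
                s_next = s + (1 if rng.random() < 0.5 else -1)
                r = 1.0 if s_next == n_states + 1 else 0.0
                td_error = r + gamma * V[s_next] - V[s]  # temporal-difference error
                V[s] += alpha * td_error                 # TD(0) update
                s = s_next

        print(np.round(V[1:-1], 2))  # true values are [1/6, 2/6, 3/6, 4/6, 5/6]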

  15. Efficient computation of root mean square deviations under rigid transformations.

    PubMed

    Hildebrandt, Anna K; Dietzen, Matthias; Lengauer, Thomas; Lenhof, Hans-Peter; Althaus, Ernst; Hildebrandt, Andreas

    2014-04-15

    The computation of root mean square deviations (RMSD) is an important step in many bioinformatics applications. If approached naively, each RMSD computation takes time linear in the number of atoms. In addition, a careful implementation is required to achieve numerical stability, which further increases runtimes. In practice, the structural variations under consideration are often induced by rigid transformations of the protein, or are at least dominated by a rigid component. In this work, we show how RMSD values resulting from rigid transformations can be computed in constant time from the protein's covariance matrix, which can be precomputed in linear time. As a typical application scenario is protein clustering, we will also show how the Ward-distance which is popular in this field can be reduced to RMSD evaluations, yielding a constant time approach for their computation.
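
    For illustration, a sketch of the constant-time idea under the assumption that the structural variation is a known rigid transformation (R, t): precompute the coordinate mean and second-moment matrix once in linear time, and each RMSD evaluation then reduces to a 3x3 computation. The formula and variable names below are a straightforward derivation, not the authors' implementation.

        import numpy as np

        def precompute_moments(X):
            # Linear-time precomputation: coordinate mean and second-moment matrix.
            return X.mean(axis=0), X.T @ X / len(X)

        def rmsd_rigid(R, t, mu, S):
            # RMSD between X and R @ X + t in O(1), using only the precomputed moments:
            # RMSD^2 = trace((R - I) S (R - I)^T) + 2 t . ((R - I) mu) + |t|^2
            A = R - np.eye(3)
            return np.sqrt(np.trace(A @ S @ A.T) + 2.0 * t @ (A @ mu) + t @ t)

        # Sanity check against the naive linear-time computation on random coordinates.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 3))
        theta = 0.3
        R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                      [np.sin(theta),  np.cos(theta), 0.0],
                      [0.0,            0.0,           1.0]])
        t = np.array([1.0, -0.5, 0.2])

        mu, S = precompute_moments(X)
        naive = np.sqrt(np.mean(np.sum((X @ R.T + t - X) ** 2, axis=1)))
        print(naive, rmsd_rigid(R, t, mu, S))  # the two values should agree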

  16. CAD programs: a tool for crime scene processing and reconstruction

    NASA Astrophysics Data System (ADS)

    Boggiano, Daniel; De Forest, Peter R.; Sheehan, Francis X.

    1997-02-01

    Computer aided drafting (CAD) programs have great potential for helping the forensic scientist. One of their most direct and useful applications is crime scene documentation, as an aid in rendering neat, unambiguous line drawings of crime scenes. Once the data has been entered, it can easily be displayed, printed, or plotted in a variety of formats. Final renditions from this initial data entry can take multiple forms and can have multiple uses. As a demonstrative aid, a CAD program can produce two dimensional (2-D) drawings of the scene from one's notes to scale. These 2-D renditions are court display quality and help to make the forensic scientist's testimony easily understood. Another use for CAD is as an analytical tool for scene reconstruction. More than just a drawing aid, CAD can generate useful information from the data input. It can help reconstruct bullet paths or locations of furniture in a room when it is critical to the reconstruction. Data entry at the scene, on a notebook computer, can assist in framing and answering questions so that the forensic scientist can test hypotheses while actively documenting the scene. Further, three dimensional (3-D) renditions of items can be viewed from many 'locations' by using the program to rotate the object and the observers' viewpoint.

  17. Computer-aided design development transition for IPAD environment

    NASA Technical Reports Server (NTRS)

    Owens, H. G.; Mock, W. D.; Mitchell, J. C.

    1980-01-01

    The relationship of federally sponsored computer-aided design/computer-aided manufacturing (CAD/CAM) programs to the aircraft life cycle design process, an overview of NAAD'S CAD development program, an evaluation of the CAD design process, a discussion of the current computing environment within which NAAD is developing its CAD system, some of the advantages/disadvantages of the NAAD-IPAD approach, and CAD developments during transition into the IPAD system are discussed.

  18. Limits on efficient computation in the physical world

    NASA Astrophysics Data System (ADS)

    Aaronson, Scott Joel

    More than a speculative technology, quantum computing seems to challenge our most basic intuitions about how the physical world should behave. In this thesis I show that, while some intuitions from classical computer science must be jettisoned in the light of modern physics, many others emerge nearly unscathed; and I use powerful tools from computational complexity theory to help determine which are which. In the first part of the thesis, I attack the common belief that quantum computing resembles classical exponential parallelism, by showing that quantum computers would face serious limitations on a wider range of problems than was previously known. In particular, any quantum algorithm that solves the collision problem---that of deciding whether a sequence of n integers is one-to-one or two-to-one---must query the sequence Ω(n^(1/5)) times. This resolves a question that was open for years; previously no lower bound better than constant was known. A corollary is that there is no "black-box" quantum algorithm to break cryptographic hash functions or solve the Graph Isomorphism problem in polynomial time. I also show that relative to an oracle, quantum computers could not solve NP-complete problems in polynomial time, even with the help of nonuniform "quantum advice states"; and that any quantum algorithm needs Ω(2^(n/4)/n) queries to find a local minimum of a black-box function on the n-dimensional hypercube. Surprisingly, the latter result also leads to new classical lower bounds for the local search problem. Finally, I give new lower bounds on quantum one-way communication complexity, and on the quantum query complexity of total Boolean functions and recursive Fourier sampling. The second part of the thesis studies the relationship of the quantum computing model to physical reality. I first examine the arguments of Leonid Levin, Stephen Wolfram, and others who believe quantum computing to be fundamentally impossible. I find their arguments unconvincing without a "Sure

  19. Model-Based Engineering and Manufacturing CAD/CAM Benchmark

    SciTech Connect

    Domm, T.D.; Underwood, R.S.

    1999-04-26

    The Benchmark Project was created from a desire to identify best practices and improve the overall efficiency and performance of the Y-12 Plant's systems and personnel supporting the manufacturing mission. The mission of the benchmark team was to search out industry leaders in manufacturing and evaluate their engineering practices and processes to determine direction and focus for Y-12 modernization efforts. The companies visited included several large established companies and a new, small, high-tech machining firm. As a result of this effort, changes are recommended that will enable Y-12 to become a more responsive, cost-effective manufacturing facility capable of supporting the needs of the Nuclear Weapons Complex (NWC) and Work For Others into the 21st century. The benchmark team identified key areas of interest, both focused and general. The focus areas included Human Resources, Information Management, Manufacturing Software Tools, and Standards/Policies and Practices. Areas of general interest included Infrastructure, Computer Platforms and Networking, and Organizational Structure. The method for obtaining the desired information in these areas centered on the creation of a benchmark questionnaire. The questionnaire was used throughout each of the visits as the basis for information gathering. The results of this benchmark showed that all companies are moving in the direction of model-based engineering and manufacturing. There was evidence that many companies are trying to grasp how to manage current and legacy data. In terms of engineering design software tools, the companies contacted were using both 3-D solid modeling and surfaced wire-frame models. The manufacturing computer tools were varied, with most companies using more than one software product to generate machining data and none currently performing model-based manufacturing (MBM) from a common model. The majority of companies were closer to identifying or using a single computer-aided design (CAD) system than a

  20. CAD-Based Aerodynamic Design of Complex Configurations using a Cartesian Method

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.; Pulliam, Thomas H.

    2003-01-01

    A modular framework for aerodynamic optimization of complex geometries is developed. By working directly with a parametric CAD system, complex-geometry models are modified and tessellated in an automatic fashion. The use of a component-based Cartesian method significantly reduces the demands on the CAD system, and also provides for robust and efficient flowfield analysis. The optimization is controlled using either a genetic or quasi-Newton algorithm. Parallel efficiency of the framework is maintained even when subject to limited CAD resources by dynamically re-allocating the processors of the flow solver. Overall, the resulting framework can explore designs incorporating large shape modifications and changes in topology.

  1. Parametric bicubic spline and CAD tools for complex targets shape modelling in physical optics radar cross section prediction

    NASA Astrophysics Data System (ADS)

    Delogu, A.; Furini, F.

    1991-09-01

    Increasing interest in radar cross section (RCS) reduction is placing new demands on theoretical, computational, and graphic techniques for calculating the scattering properties of complex targets. In particular, computer codes capable of predicting the RCS of an entire aircraft at high frequency and of achieving RCS control with modest structural changes are becoming of paramount importance in stealth design. A computer code, evaluating the RCS of arbitrarily shaped metallic objects that are computer aided design (CAD) generated, and its validation with measurements carried out using ALENIA RCS test facilities are presented. The code, based on the physical optics method, is characterized by an efficient integration algorithm with error control, in order to contain the computer time within acceptable limits, and by an accurate parametric representation of the target surface in terms of bicubic splines.

  2. Model-Based Engineering and Manufacturing CAD/CAM Benchmark.

    SciTech Connect

    Domm, T.C.; Underwood, R.S.

    1999-10-13

    The Benchmark Project was created from a desire to identify best practices and improve the overall efficiency and performance of the Y-12 Plant's systems and personnel supporting the manufacturing mission. The mission of the benchmark team was to search out industry leaders in manufacturing and evaluate their engineering practices and processes to determine direction and focus for Y-12 modernization efforts. The companies visited included several large established companies and a new, small, high-tech machining firm. As a result of this effort, changes are recommended that will enable Y-12 to become a more modern, responsive, cost-effective manufacturing facility capable of supporting the needs of the Nuclear Weapons Complex (NWC) into the 21st century. The benchmark team identified key areas of interest, both focused and general. The focus areas included Human Resources, Information Management, Manufacturing Software Tools, and Standards/Policies and Practices. Areas of general interest included Infrastructure, Computer Platforms and Networking, and Organizational Structure. The results of this benchmark showed that all companies are moving in the direction of model-based engineering and manufacturing. There was evidence that many companies are trying to grasp how to manage current and legacy data. In terms of engineering design software tools, the companies contacted were somewhere between 3-D solid modeling and surfaced wire-frame models. The manufacturing computer tools were varied, with most companies using more than one software product to generate machining data and none currently performing model-based manufacturing (MBM) from a common model. The majority of companies were closer to identifying or using a single computer-aided design (CAD) system than a single computer-aided manufacturing (CAM) system. The Internet was a technology that all companies were looking to either transport information more easily throughout the corporation or as a conduit for

  3. Methods for Computationally Efficient Structured CFD Simulations of Complex Turbomachinery Flows

    NASA Technical Reports Server (NTRS)

    Herrick, Gregory P.; Chen, Jen-Ping

    2012-01-01

    This research presents more efficient computational methods by which to perform multi-block structured Computational Fluid Dynamics (CFD) simulations of turbomachinery, thus facilitating higher-fidelity solutions of complicated geometries and their associated flows. This computational framework offers flexibility in allocating resources to balance process count and wall-clock computation time, while facilitating research interests of simulating axial compressor stall inception with more complete gridding of the flow passages and rotor tip clearance regions than is typically practiced with structured codes. The paradigm presented herein facilitates CFD simulation of previously impractical geometries and flows. These methods are validated and demonstrate improved computational efficiency when applied to complicated geometries and flows.

  4. Computationally efficient calibration of WATCLASS Hydrologic models using surrogate optimization

    NASA Astrophysics Data System (ADS)

    Kamali, M.; Ponnambalam, K.; Soulis, E. D.

    2007-07-01

    In this approach, exploration of the cost-function space was performed with an inexpensive surrogate function rather than the expensive original function. The Design and Analysis of Computer Experiments (DACE) surrogate, a type of approximate model that uses a correlation function for the error, was employed. Results for Monte Carlo sampling, Latin hypercube sampling, and the DACE approximate model were compared. The results show that the DACE model has good potential for predicting the trend of simulation results. The case study of this work was calibration of the WATCLASS hydrologic model on the Smokey River watershed.
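
    For illustration, a minimal sketch of surrogate-assisted exploration in the DACE spirit, using scikit-learn's Gaussian-process regressor as the cheap approximation of an expensive cost function. The toy one-parameter objective and the sampling plan are placeholders, not the WATCLASS calibration problem.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        rng = np.random.default_rng(0)

        def expensive_cost(p):
            # Placeholder for an expensive hydrologic-model cost function of one parameter.
            return np.sin(3.0 * p) + 0.5 * (p - 0.4) ** 2

        # Small design of computer experiments: evaluate the true model at a few points.
        P_train = rng.uniform(0.0, 2.0, size=(12, 1))
        c_train = expensive_cost(P_train).ravel()

        # DACE-style surrogate: a Gaussian process with a correlation (RBF) kernel.
        surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), normalize_y=True)
        surrogate.fit(P_train, c_train)

        # Explore the cost surface cheaply on a dense grid; the predictive standard
        # deviation could additionally be used to guide where to sample next.
        P_grid = np.linspace(0.0, 2.0, 400).reshape(-1, 1)
        c_pred, c_std = surrogate.predict(P_grid, return_std=True)
        print("surrogate minimum near p =", round(float(P_grid[np.argmin(c_pred), 0]), 3))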

  5. Efficient computational simulation of actin stress fiber remodeling.

    PubMed

    Ristori, T; Obbink-Huizer, C; Oomens, C W J; Baaijens, F P T; Loerakker, S

    2016-09-01

    Understanding collagen and stress fiber remodeling is essential for the development of engineered tissues with good functionality. These processes are complex, highly interrelated, and occur over different time scales. As a result, excessive computational costs are required to computationally predict the final organization of these fibers in response to dynamic mechanical conditions. In this study, an analytical approximation of a stress fiber remodeling evolution law was derived. A comparison of the developed technique with the direct numerical integration of the evolution law showed relatively small differences in results, and the proposed method is one to two orders of magnitude faster.

  6. Efficient computational simulation of actin stress fiber remodeling.

    PubMed

    Ristori, T; Obbink-Huizer, C; Oomens, C W J; Baaijens, F P T; Loerakker, S

    2016-09-01

    Understanding collagen and stress fiber remodeling is essential for the development of engineered tissues with good functionality. These processes are complex, highly interrelated, and occur over different time scales. As a result, excessive computational costs are required to computationally predict the final organization of these fibers in response to dynamic mechanical conditions. In this study, an analytical approximation of a stress fiber remodeling evolution law was derived. A comparison of the developed technique with the direct numerical integration of the evolution law showed relatively small differences in results, and the proposed method is one to two orders of magnitude faster. PMID:26823159

  7. Computer aided production engineering

    SciTech Connect

    Not Available

    1986-01-01

    This book presents the following contents: CIM in avionics; computer analysis of product designs for robot assembly; a simulation decision mould for manpower forecast and its application; development of flexible manufacturing system; advances in microcomputer applications in CAD/CAM; an automated interface between CAD and process planning; CAM and computer vision; low friction pneumatic actuators for accurate robot control; robot assembly of printed circuit boards; information systems design for computer integrated manufacture; and a CAD engineering language to aid manufacture.

  8. An Efficient Virtual Machine Consolidation Scheme for Multimedia Cloud Computing

    PubMed Central

    Han, Guangjie; Que, Wenhui; Jia, Gangyong; Shu, Lei

    2016-01-01

    Cloud computing has innovated the IT industry in recent years, as it can deliver subscription-based services to users in the pay-as-you-go model. Meanwhile, multimedia cloud computing is emerging based on cloud computing to provide a variety of media services on the Internet. However, with the growing popularity of multimedia cloud computing, its large energy consumption not only contributes to greenhouse gas emissions but also raises cloud users' costs. Therefore, multimedia cloud providers should try to minimize their energy consumption as much as possible while satisfying consumers' resource requirements and guaranteeing quality of service (QoS). In this paper, we have proposed a remaining utilization-aware (RUA) algorithm for virtual machine (VM) placement, and a power-aware algorithm (PA) is proposed to find proper hosts to shut down for energy saving. These two algorithms have been combined and applied to cloud data centers for completing the process of VM consolidation. Simulation results have shown that there exists a trade-off between the cloud data center's energy consumption and service-level agreement (SLA) violations. Besides, the RUA algorithm is able to deal with variable workload to prevent hosts from overloading after VM placement and to reduce the SLA violations dramatically. PMID:26901201

  9. An Efficient Virtual Machine Consolidation Scheme for Multimedia Cloud Computing.

    PubMed

    Han, Guangjie; Que, Wenhui; Jia, Gangyong; Shu, Lei

    2016-02-18

    Cloud computing has innovated the IT industry in recent years, as it can deliver subscription-based services to users in the pay-as-you-go model. Meanwhile, multimedia cloud computing is emerging based on cloud computing to provide a variety of media services on the Internet. However, with the growing popularity of multimedia cloud computing, its large energy consumption not only contributes to greenhouse gas emissions but also raises cloud users' costs. Therefore, multimedia cloud providers should try to minimize their energy consumption as much as possible while satisfying consumers' resource requirements and guaranteeing quality of service (QoS). In this paper, we have proposed a remaining utilization-aware (RUA) algorithm for virtual machine (VM) placement, and a power-aware algorithm (PA) is proposed to find proper hosts to shut down for energy saving. These two algorithms have been combined and applied to cloud data centers for completing the process of VM consolidation. Simulation results have shown that there exists a trade-off between the cloud data center's energy consumption and service-level agreement (SLA) violations. Besides, the RUA algorithm is able to deal with variable workload to prevent hosts from overloading after VM placement and to reduce the SLA violations dramatically.

  10. An Efficient Virtual Machine Consolidation Scheme for Multimedia Cloud Computing.

    PubMed

    Han, Guangjie; Que, Wenhui; Jia, Gangyong; Shu, Lei

    2016-01-01

    Cloud computing has innovated the IT industry in recent years, as it can deliver subscription-based services to users in the pay-as-you-go model. Meanwhile, multimedia cloud computing is emerging based on cloud computing to provide a variety of media services on the Internet. However, with the growing popularity of multimedia cloud computing, its large energy consumption not only contributes to greenhouse gas emissions but also raises cloud users' costs. Therefore, multimedia cloud providers should try to minimize their energy consumption as much as possible while satisfying consumers' resource requirements and guaranteeing quality of service (QoS). In this paper, we have proposed a remaining utilization-aware (RUA) algorithm for virtual machine (VM) placement, and a power-aware algorithm (PA) is proposed to find proper hosts to shut down for energy saving. These two algorithms have been combined and applied to cloud data centers for completing the process of VM consolidation. Simulation results have shown that there exists a trade-off between the cloud data center's energy consumption and service-level agreement (SLA) violations. Besides, the RUA algorithm is able to deal with variable workload to prevent hosts from overloading after VM placement and to reduce the SLA violations dramatically. PMID:26901201

  11. Comparing performance of the CADstream and the DynaCAD breast MRI CAD systems : CADstream vs. DynaCAD in breast MRI.

    PubMed

    Pan, Joann; Dogan, Basak E; Carkaci, Selin; Santiago, Lumarie; Arribas, Elsa; Cantor, Scott B; Wei, Wei; Stafford, R Jason; Whitman, Gary J

    2013-10-01

    Computer-aided diagnosis (CAD) systems are software programs that use algorithms to find patterns associated with breast cancer on breast magnetic resonance imaging (MRI). The most commonly used CAD systems in the USA are CADstream (CS) (Merge Healthcare Inc., Chicago, IL) and DynaCAD for Breast (DC) (Invivo, Gainesville, FL). Our primary objective in this study was to compare the CS and DC breast MRI CAD systems for diagnostic accuracy and postprocessed image quality. Our secondary objective was to compare the evaluation times of radiologists using each system. Three radiologists evaluated 30 biopsy-proven malignant lesions and 29 benign lesions on CS and DC and rated the lesions' malignancy status using the Breast Imaging Reporting and Data System. Image quality was ranked on a 0-5 scale, and mean reading times were also recorded. CS detected 70 % of the malignant and 32 % of the benign lesions while DC detected 81 % of the malignant lesions and 34 % of the benign lesions. Analysis of the area under the receiver operating characteristic curve revealed that the difference in diagnostic performance was not statistically significant. On image quality scores, CS had significantly higher volume rendering (VR) (p < 0.0001) and motion correction (MC) scores (p < 0.0001). There were no statistically significant differences in the remaining image quality scores. Differences in evaluation times between DC and CS were also not statistically significant. We conclude that both CS and DC perform similarly in aiding detection of breast cancer on MRI. MRI CAD selection will likely be based on other factors, such as user interface and image quality preferences, including MC and VR. PMID:23589186

  12. Efficient algorithm to compute mutually connected components in interdependent networks.

    PubMed

    Hwang, S; Choi, S; Lee, Deokjae; Kahng, B

    2015-02-01

    Mutually connected components (MCCs) play an important role as a measure of resilience in the study of interdependent networks. Despite their importance, an efficient algorithm to obtain the statistics of all MCCs during the removal of links has thus far been absent. Here, using a well-known fully dynamic graph algorithm, we propose an efficient algorithm to accomplish this task. We show that the time complexity of this algorithm is approximately O(N^1.2) for random graphs, which is more efficient than the O(N^2) of the brute-force algorithm. We confirm the correctness of our algorithm by comparing the behavior of the order parameter as links are removed with existing results for three types of double-layer multiplex networks. We anticipate that this algorithm will be used for simulations of large-size systems that have been previously inaccessible. PMID:25768559
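
    For illustration, a brute-force baseline for the quantity in question, assuming a two-layer multiplex: a mutually connected component is a maximal node set that stays connected within each layer, found here by repeatedly splitting groups by each layer's connected components. This is the kind of quadratic-cost approach the paper improves on, not its fully dynamic algorithm.

        import networkx as nx

        def mutually_connected_components(g1, g2):
            # Partition the shared node set of a two-layer multiplex into MCCs by
            # repeatedly splitting each group by the connected components of each layer.
            groups = [set(g1.nodes()) & set(g2.nodes())]
            changed = True
            while changed:
                changed = False
                refined = []
                for group in groups:
                    for g in (g1, g2):
                        parts = [set(c) for c in nx.connected_components(g.subgraph(group))]
                        if len(parts) > 1:
                            refined.extend(parts)
                            changed = True
                            break
                    else:
                        refined.append(group)  # connected within both layers: keep as is
                groups = refined
            return groups

        # Tiny example with two layers on nodes 0..5.
        layer1 = nx.Graph([(0, 1), (1, 2), (3, 4), (4, 5)])
        layer2 = nx.Graph([(0, 1), (2, 3), (4, 5)])
        print(mutually_connected_components(layer1, layer2))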

  13. Understanding dental CAD/CAM for restorations--the digital workflow from a mechanical engineering viewpoint.

    PubMed

    Tapie, L; Lebon, N; Mawussi, B; Fron Chabouis, H; Duret, F; Attal, J-P

    2015-01-01

    As digital technology infiltrates every area of daily life, including the field of medicine, so it is increasingly being introduced into dental practice. Apart from chairside practice, computer-aided design/computer-aided manufacturing (CAD/CAM) solutions are available for creating inlays, crowns, fixed partial dentures (FPDs), implant abutments, and other dental prostheses. CAD/CAM dental solutions can be considered a chain of digital devices and software for the almost automatic design and creation of dental restorations. However, dentists who want to use the technology often do not have the time or knowledge to understand it. A basic knowledge of the CAD/CAM digital workflow for dental restorations can help dentists to grasp the technology and purchase a CAD/CAM system that meets the needs of their office. This article provides a computer-science and mechanical-engineering approach to the CAD/CAM digital workflow to help dentists understand the technology.

  14. Understanding dental CAD/CAM for restorations--the digital workflow from a mechanical engineering viewpoint.

    PubMed

    Tapie, L; Lebon, N; Mawussi, B; Fron Chabouis, H; Duret, F; Attal, J-P

    2015-01-01

    As digital technology infiltrates every area of daily life, including the field of medicine, so it is increasingly being introduced into dental practice. Apart from chairside practice, computer-aided design/computer-aided manufacturing (CAD/CAM) solutions are available for creating inlays, crowns, fixed partial dentures (FPDs), implant abutments, and other dental prostheses. CAD/CAM dental solutions can be considered a chain of digital devices and software for the almost automatic design and creation of dental restorations. However, dentists who want to use the technology often do not have the time or knowledge to understand it. A basic knowledge of the CAD/CAM digital workflow for dental restorations can help dentists to grasp the technology and purchase a CAD/CAM system that meets the needs of their office. This article provides a computer-science and mechanical-engineering approach to the CAD/CAM digital workflow to help dentists understand the technology. PMID:25911827

  15. Probabilistic structural analysis algorithm development for computational efficiency

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.

    1991-01-01

    The PSAM (Probabilistic Structural Analysis Methods) program is developing a probabilistic structural risk assessment capability for the SSME components. An advanced probabilistic structural analysis software system, NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), is being developed as part of the PSAM effort to accurately simulate stochastic structures operating under severe random loading conditions. One of the challenges in developing the NESSUS system is the development of probabilistic algorithms that provide both efficiency and accuracy. The main probability algorithms developed and implemented in the NESSUS system are efficient, but approximate in nature. Over the last six years, these algorithms have been improved significantly.

  16. Learning with Computer-Based Multimedia: Gender Effects on Efficiency

    ERIC Educational Resources Information Center

    Pohnl, Sabine; Bogner, Franz X.

    2012-01-01

    Up to now, only a few studies in multimedia learning have focused on gender effects. While research has mostly focused on learning success, the effect of gender on instructional efficiency (IE) has not yet been considered. Consequently, we used a quasi-experimental design to examine possible gender differences in the learning success, mental…

  17. College Students' Reading Efficiency with Computer-Presented Text.

    ERIC Educational Resources Information Center

    Wepner, Shelley B.; Feeley, Joan T.

    Focusing on improving college students' reading efficiency, a study investigated whether a commercially-prepared computerized speed reading package, Speed Reader II, could be utilized as effectively as traditionally printed text. Subjects were 70 college freshmen from a college reading and rate improvement course with borderline scores on the…

  18. A New Stochastic Computing Methodology for Efficient Neural Network Implementation.

    PubMed

    Canals, Vincent; Morro, Antoni; Oliver, Antoni; Alomar, Miquel L; Rosselló, Josep L

    2016-03-01

    This paper presents a new methodology for the hardware implementation of neural networks (NNs) based on probabilistic laws. The proposed encoding scheme circumvents the limitations of classical stochastic computing (based on unipolar or bipolar encoding) by extending the representation range to any real number using the ratio of two bipolar-encoded pulsed signals. Furthermore, the novel approach exhibits practically total noise immunity owing to its specific codification. We introduce different designs for building the fundamental blocks needed to implement NNs. The validity of the present approach is demonstrated through a regression and a pattern recognition task. The low cost of the methodology in terms of hardware, along with its capacity to implement complex mathematical functions (such as the hyperbolic tangent), allows its use for building highly reliable systems and parallel computing.
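
    A minimal sketch of the encoding idea described above, under the assumption that a bipolar stream with bit probability p carries the value 2p - 1 and that an arbitrary real number is represented as the ratio of two such streams. The stream length and the fixed denominator are illustrative choices, not the authors' design.

    import numpy as np

    def bipolar_stream(value, length, rng):
        """Bipolar stochastic stream: P(bit = 1) = (value + 1) / 2, so mean(2*bit - 1) ~ value."""
        return rng.random(length) < (value + 1.0) / 2.0

    def encode_ratio(x, length, rng, denom=0.5):
        """Represent x as num / denom with both num and denom bipolar-encoded (requires |x * denom| <= 1)."""
        return bipolar_stream(x * denom, length, rng), bipolar_stream(denom, length, rng)

    def decode_ratio(num_bits, den_bits):
        num = 2.0 * num_bits.mean() - 1.0
        den = 2.0 * den_bits.mean() - 1.0
        return num / den

    rng = np.random.default_rng(0)
    num_bits, den_bits = encode_ratio(1.7, 100_000, rng)   # 1.7 lies outside the classic bipolar range
    print(decode_ratio(num_bits, den_bits))                 # ~1.7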

  19. Computationally efficient statistical differential equation modeling using homogenization

    USGS Publications Warehouse

    Hooten, Mevin B.; Garlick, Martha J.; Powell, James A.

    2013-01-01

    Statistical models using partial differential equations (PDEs) to describe dynamically evolving natural systems are appearing in the scientific literature with some regularity in recent years. Often such studies seek to characterize the dynamics of temporal or spatio-temporal phenomena such as invasive species, consumer-resource interactions, community evolution, and resource selection. Specifically, in the spatial setting, data are often available at varying spatial and temporal scales. Additionally, the necessary numerical integration of a PDE may be computationally infeasible over the spatial support of interest. We present an approach to impose computationally advantageous changes of support in statistical implementations of PDE models and demonstrate its utility through simulation using a form of PDE known as “ecological diffusion.” We also apply a statistical ecological diffusion model to a data set involving the spread of mountain pine beetle (Dendroctonus ponderosae) in Idaho, USA.

  20. [Efficiency of computed tomography in diagnosis of silicotuberculosis].

    PubMed

    Naumenko, E S; Gol'del'man, A G; Tikhotskaia, L I; Zhovtiak, E P; Iarina, A L; Ershov, V I; Larina, E N

    1998-01-01

    Routine X-ray examination and computed tomography (CT) were compared in a group of patients employed in the fireproofing industry. CT yields valuable additional data in early silicotuberculosis, making it possible to follow the extent of a silicotuberculous process more completely, to diagnose nodal and focal shadows more reliably, and to identify small decay cavities in the foci and infiltrates. CT is the method of choice in following up patients with silicotuberculosis.

  1. Chunking as the result of an efficiency computation trade-off

    PubMed Central

    Ramkumar, Pavan; Acuna, Daniel E.; Berniker, Max; Grafton, Scott T.; Turner, Robert S.; Kording, Konrad P.

    2016-01-01

    How to move efficiently is an optimal control problem, whose computational complexity grows exponentially with the horizon of the planned trajectory. Breaking a compound movement into a series of chunks, each planned over a shorter horizon can thus reduce the overall computational complexity and associated costs while limiting the achievable efficiency. This trade-off suggests a cost-effective learning strategy: to learn new movements we should start with many short chunks (to limit the cost of computation). As practice reduces the impediments to more complex computation, the chunking structure should evolve to allow progressively more efficient movements (to maximize efficiency). Here we show that monkeys learning a reaching sequence over an extended period of time adopt this strategy by performing movements that can be described as locally optimal trajectories. Chunking can thus be understood as a cost-effective strategy for producing and learning efficient movements. PMID:27397420

  2. Dental students' preferences and performance in crown design: conventional wax-added versus CAD.

    PubMed

    Douglas, R Duane; Hopp, Christa D; Augustin, Marcus A

    2014-12-01

    The purpose of this study was to evaluate dental students' perceptions of traditional waxing vs. computer-aided crown design and to determine the effectiveness of either technique through comparative grading of the final products. On one of two identical tooth preparations, second-year students at one dental school fabricated a wax pattern for a full contour crown; on the second tooth preparation, the same students designed and fabricated an all-ceramic crown using computer-aided design (CAD) and computer-aided manufacturing (CAM) technology. Projects were graded for occlusion and anatomic form by three faculty members. On completion of the projects, 100 percent of the students (n=50) completed an eight-question, five-point Likert scale survey, designed to assess their perceptions of and learning associated with the two design techniques. The average grades for the crown design projects were 78.3 (CAD) and 79.1 (wax design). The mean numbers of occlusal contacts were 3.8 (CAD) and 2.9 (wax design), which was significantly higher for CAD (p=0.02). The survey results indicated that students enjoyed designing a full contour crown using CAD as compared to using conventional wax techniques and spent less time designing the crown using CAD. From a learning perspective, students felt that they learned more about position and the size/strength of occlusal contacts using CAD. However, students recognized that CAD technology has limits in terms of representing anatomic contours and excursive occlusion compared to conventional wax techniques. The results suggest that crown design using CAD could be considered as an adjunct to conventional wax-added techniques in preclinical fixed prosthodontic curricula.

  3. Learning-based image preprocessing for robust computer-aided detection

    NASA Astrophysics Data System (ADS)

    Raghupathi, Laks; Devarakota, Pandu R.; Wolf, Matthias

    2013-03-01

    Recent studies have shown that low dose computed tomography (LDCT) can be an effective screening tool to reduce lung cancer mortality. Computer-aided detection (CAD) would be a beneficial second reader for radiologists in such cases. Studies demonstrate that while iterative reconstruction (IR) improves LDCT diagnostic quality, it significantly degrades CAD performance (increased false positives) when applied directly. For improving CAD performance, solutions such as retraining with newer data or applying a standard preprocessing technique may not suffice due to the high prevalence of CT scanners and non-uniform acquisition protocols. Here, we present a learning-based framework that can adaptively transform a wide variety of input data to boost an existing CAD performance. This not only enhances their robustness but also their applicability in clinical workflows. Our solution consists of applying a suitable pre-processing filter automatically on the given image based on its characteristics. This requires the preparation of ground truth (GT) for choosing an appropriate filter that results in improved CAD performance. Accordingly, we propose an efficient consolidation process with a novel metric. Using key anatomical landmarks, we then derive consistent feature descriptors for the classification scheme, which uses a priority mechanism to automatically choose an optimal preprocessing filter. We demonstrate CAD prototype performance improvement using hospital-scale datasets acquired from North America, Europe and Asia. Though we demonstrated our results for a lung nodule CAD, this scheme is straightforward to extend to other post-processing tools dedicated to other organs and modalities.

  4. Efficient Helicopter Aerodynamic and Aeroacoustic Predictions on Parallel Computers

    NASA Technical Reports Server (NTRS)

    Wissink, Andrew M.; Lyrintzis, Anastasios S.; Strawn, Roger C.; Oliker, Leonid; Biswas, Rupak

    1996-01-01

    This paper presents parallel implementations of two codes used in a combined CFD/Kirchhoff methodology to predict the aerodynamic and aeroacoustic properties of helicopters. The rotorcraft Navier-Stokes code, TURNS, computes the aerodynamic flowfield near the helicopter blades and the Kirchhoff acoustics code computes the noise in the far field, using the TURNS solution as input. The overall parallel strategy adds MPI message passing calls to the existing serial codes to allow for communication between processors. As a result, the total code modifications required for parallel execution are relatively small. The biggest bottleneck in running the TURNS code in parallel comes from the LU-SGS algorithm that solves the implicit system of equations. We use a new hybrid domain decomposition implementation of LU-SGS to obtain good parallel performance on the SP-2. TURNS demonstrates excellent parallel speedups for quasi-steady and unsteady three-dimensional calculations of a helicopter blade in forward flight. The execution rate attained by the code on 114 processors is six times faster than the same cases run on one processor of the Cray C-90. The parallel Kirchhoff code also shows excellent parallel speedups and fast execution rates. As a performance demonstration, unsteady acoustic pressures are computed at 1886 far-field observer locations for a sample acoustics problem. The calculation requires over two hundred hours of CPU time on one C-90 processor but takes only a few hours on 80 processors of the SP2. The resultant far-field acoustic field is analyzed with state-of-the-art audio and video rendering of the propagating acoustic signals.

  5. A computationally efficient QRS detection algorithm for wearable ECG sensors.

    PubMed

    Wang, Y; Deepu, C J; Lian, Y

    2011-01-01

    In this paper we present a novel Dual-Slope QRS detection algorithm with low computational complexity, suitable for wearable ECG devices. The Dual-Slope algorithm calculates the slopes on both sides of a peak in the ECG signal; based on these slopes, three criteria are developed for simultaneously checking (1) steepness, (2) shape, and (3) height of the signal, to locate the QRS complex. The algorithm, evaluated against the MIT/BIH Arrhythmia Database, achieves a very high detection rate of 99.45%, a sensitivity of 99.82% and a positive prediction of 99.63%. PMID:22255619
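
    A rough sketch of the dual-slope idea: for each candidate sample, take the best slope toward it from a short window on each side, then require steepness, a shape test (both sides fall away comparably), and a refractory period before declaring a QRS location. The window length and thresholds below are placeholders, not the published parameters, and the height criterion is only implied through the slope-times-window product.

    import numpy as np

    def dual_slope_qrs(ecg, fs, win_ms=32, steep_thresh=2.0, shape_thresh=0.5, refractory_ms=250):
        """Illustrative dual-slope detector on a 1-D numpy array 'ecg' sampled at 'fs' Hz."""
        win = max(1, int(win_ms * fs / 1000))
        refractory = int(refractory_ms * fs / 1000)
        peaks, last = [], -refractory
        for n in range(win, len(ecg) - win):
            left = (ecg[n] - ecg[n - win:n]) / np.arange(win, 0, -1)        # slopes from left neighbours
            right = (ecg[n] - ecg[n + 1:n + win + 1]) / np.arange(1, win + 1)
            s_left, s_right = left.max(), right.max()
            steep = max(s_left, s_right)                                    # criterion 1: steepness
            shape = min(s_left, s_right) / (steep + 1e-12)                  # criterion 2: shape symmetry
            if steep > steep_thresh and shape > shape_thresh and n - last > refractory:
                peaks.append(n)
                last = n
        return peaks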

  6. Probabilistic Damage Characterization Using the Computationally-Efficient Bayesian Approach

    NASA Technical Reports Server (NTRS)

    Warner, James E.; Hochhalter, Jacob D.

    2016-01-01

    This work presents a computationally-efficient approach for damage determination that quantifies uncertainty in the provided diagnosis. Given strain sensor data that are polluted with measurement errors, Bayesian inference is used to estimate the location, size, and orientation of damage. This approach uses Bayes' Theorem to combine any prior knowledge an analyst may have about the nature of the damage with information provided implicitly by the strain sensor data to form a posterior probability distribution over possible damage states. The unknown damage parameters are then estimated based on samples drawn numerically from this distribution using a Markov Chain Monte Carlo (MCMC) sampling algorithm. Several modifications are made to the traditional Bayesian inference approach to provide significant computational speedup. First, an efficient surrogate model is constructed using sparse grid interpolation to replace a costly finite element model that must otherwise be evaluated for each sample drawn with MCMC. Next, the standard Bayesian posterior distribution is modified using a weighted likelihood formulation, which is shown to improve the convergence of the sampling process. Finally, a robust MCMC algorithm, Delayed Rejection Adaptive Metropolis (DRAM), is adopted to sample the probability distribution more efficiently. Numerical examples demonstrate that the proposed framework effectively provides damage estimates with uncertainty quantification and can yield orders of magnitude speedup over standard Bayesian approaches.
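
    For orientation only, here is a bare-bones random-walk Metropolis sampler over the damage parameters, with a cheap surrogate standing in for the finite element model. The Gaussian likelihood, flat prior, and step size are assumptions for illustration; the weighted likelihood and DRAM refinements of the paper are not reproduced.

    import numpy as np

    def metropolis(surrogate, data, noise_std, theta0, steps=5000, prop_std=0.1, seed=0):
        """Sample damage parameters theta given strain 'data'; 'surrogate(theta)' predicts strains."""
        rng = np.random.default_rng(seed)

        def log_post(theta):
            resid = data - surrogate(theta)                 # strain residuals at the sensors
            return -0.5 * np.sum((resid / noise_std) ** 2)  # Gaussian likelihood, flat prior assumed

        theta = np.asarray(theta0, dtype=float)
        lp = log_post(theta)
        samples = []
        for _ in range(steps):
            prop = theta + prop_std * rng.standard_normal(theta.shape)
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:         # Metropolis accept/reject step
                theta, lp = prop, lp_prop
            samples.append(theta.copy())
        return np.array(samples)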

  7. Mixture of expert 3D massive-training ANNs for reduction of multiple types of false positives in CAD for detection of polyps in CT colonography.

    PubMed

    Suzuki, Kenji; Yoshida, Hiroyuki; Näppi, Janne; Armato, Samuel G; Dachman, Abraham H

    2008-02-01

    One of the major challenges in computer-aided detection (CAD) of polyps in CT colonography (CTC) is the reduction of false-positive detections (FPs) without a concomitant reduction in sensitivity. A large number of FPs is likely to confound the radiologist's task of image interpretation, lower the radiologist's efficiency, and cause radiologists to lose their confidence in CAD as a useful tool. Major sources of FPs generated by CAD schemes include haustral folds, residual stool, rectal tubes, the ileocecal valve, and extra-colonic structures such as the small bowel and stomach. Our purpose in this study was to develop a method for the removal of various types of FPs in CAD of polyps while maintaining a high sensitivity. To achieve this, we developed a "mixture of expert" three-dimensional (3D) massive-training artificial neural networks (MTANNs) consisting of four 3D MTANNs that were designed to differentiate between polyps and four categories of FPs: (1) rectal tubes, (2) stool with bubbles, (3) colonic walls with haustral folds, and (4) solid stool. Each expert 3D MTANN was trained with examples from a specific non-polyp category along with typical polyps. The four expert 3D MTANNs were combined with a mixing artificial neural network (ANN) such that different types of FPs could be removed. Our database consisted of 146 CTC datasets obtained from 73 patients whose colons were prepared by standard pre-colonoscopy cleansing. Each patient was scanned in both supine and prone positions. Radiologists established the locations of polyps through the use of optical-colonoscopy reports. Fifteen patients had 28 polyps, 15 of which were 5-9 mm and 13 were 10-25 mm in size. The CTC cases were subjected to our previously reported CAD method consisting of centerline-based extraction of the colon, shape-based detection of polyp candidates, and a Bayesian-ANN-based classification of polyps. The original CAD method yielded 96.4% (27/28) by-polyp sensitivity with an average of 3

  8. An efficient computational tool for ramjet combustor research

    SciTech Connect

    Vanka, S.P.; Krazinski, J.L.; Nejad, A.S.

    1988-01-01

    A multigrid based calculation procedure is presented for the efficient solution of the time-averaged equations of a turbulent elliptic reacting flow. The equations are solved on a non-orthogonal curvilinear coordinate system. The physical models currently incorporated are a two equation k-epsilon turbulence model, a four-step chemical kinetics mechanism, and a Lagrangian particle tracking procedure applicable for dilute sprays. Demonstration calculations are presented to illustrate the performance of the calculation procedure for a ramjet dump combustor configuration. 21 refs., 9 figs., 2 tabs.

  9. Integration of a CAD System Into an MDO Framework

    NASA Technical Reports Server (NTRS)

    Townsend, J. C.; Samareh, J. A.; Weston, R. P.; Zorumski, W. E.

    1998-01-01

    NASA Langley has developed a heterogeneous distributed computing environment, called the Framework for Inter-disciplinary Design Optimization, or FIDO. Its purpose has been to demonstrate framework technical feasibility and usefulness for optimizing the preliminary design of complex systems and to provide a working environment for testing optimization schemes. Its initial implementation has been for a simplified model of preliminary design of a high-speed civil transport. Upgrades being considered for the FIDO system include a more complete geometry description, required by high-fidelity aerodynamics and structures codes and based on a commercial Computer Aided Design (CAD) system. This report presents the philosophy behind some of the decisions that have shaped the FIDO system and gives a brief case study of the problems and successes encountered in integrating a CAD system into the FIDO framework.

  10. CYBERSECURITY AND USER ACCOUNTABILITY IN THE C-AD CONTROL SYSTEM

    SciTech Connect

    MORRIS,J.T.; BINELLO, S.; D OTTAVIO, T.; KATZ, R.A.

    2007-10-15

    A heightened awareness of cybersecurity has led to a review of the procedures that ensure user accountability for actions performed on the computers of the Collider-Accelerator Department (C-AD) Control System. Control system consoles are shared by multiple users in control rooms throughout the C-AD complex. A significant challenge has been the establishment of procedures that securely control and monitor access to these shared consoles without impeding accelerator operations. This paper provides an overview of C-AD cybersecurity strategies with an emphasis on recent enhancements in user authentication and tracking methods.

  11. A Software Demonstration of 'rap': Preparing CAD Geometries for Overlapping Grid Generation

    SciTech Connect

    Anders Petersson, N.

    2002-02-15

    We demonstrate the application code "rap", which is part of the "Overture" library. A CAD geometry imported from an IGES file is first cleaned up and simplified to suit the needs of mesh generation. Thereafter, the topology of the model is computed and a water-tight surface triangulation is created on the CAD surface. This triangulation is used to speed up the projection of points onto the CAD surface during the generation of overlapping surface grids. From each surface grid, volume grids are grown into the domain using a hyperbolic marching procedure. The final step is to fill any remaining parts of the interior with background meshes.

  12. An Educational Exercise Examining the Role of Model Attributes on the Creation and Alteration of CAD Models

    ERIC Educational Resources Information Center

    Johnson, Michael D.; Diwakaran, Ram Prasad

    2011-01-01

    Computer-aided design (CAD) is a ubiquitous tool that today's students will be expected to use proficiently for numerous engineering purposes. Taking full advantage of the features available in modern CAD programs requires that models are created in a manner that allows others to easily understand how they are organized and alter them in an…

  13. Westinghouse Idaho Nuclear Company, Inc. (WINCO) CAD activities at the Idaho Chemical Processing Plant (ICPP) (Idaho Engineering Laboratory)

    SciTech Connect

    Jensen, B.

    1989-04-18

    June 1985 -- The drafting manager obtained approval to implement a CAD system at the ICPP. He formed a committee to evaluate the various CAD systems and recommend a system that would most benefit the ICPP. A "PC" (personal computer) based system using AutoCAD software was recommended in lieu of the much more expensive mainframe-based systems.

  15. Performance evaluation of the NASA/KSC CAD/CAE and office automation LAN's

    NASA Technical Reports Server (NTRS)

    Zobrist, George W.

    1994-01-01

    This study's objective is the performance evaluation of the existing CAD/CAE (Computer Aided Design/Computer Aided Engineering) network at NASA/KSC. This evaluation also includes a similar study of the Office Automation network, since there are plans to integrate this network into the CAD/CAE network. The Microsoft mail facility which is presently on the CAD/CAE network was monitored to determine its present usage. This performance evaluation of the various networks will aid the NASA/KSC network managers in planning for the integration of future workload requirements into the CAD/CAE network and determining the effectiveness of the planned FDDI (Fiber Distributed Data Interface) migration.

  16. Aberrant caspase-activated DNase (CAD) transcripts in human hepatoma cells.

    PubMed

    Hsieh, S Y; Liaw, S F; Lee, S N; Hsieh, P S; Lin, K H; Chu, C M; Liaw, Y F

    2003-01-27

    The gene of caspase-activated DNase (CAD), the key enzyme for nucleosome cleavage during apoptosis, is mapped at chromosome 1p36, a region usually associated with hemizygous deletions in human cancers, particularly in hepatoma (HCC). It is tempting to speculate that CAD plays a tumour-suppressive role in hepatocarcinogenesis. To address this, we examined the CAD transcripts in six human HCC cell lines, one liver tissue from a non-HCC subject, and peripheral blood leukocytes (PBL) from three healthy individuals. Alternatively spliced CAD transcripts with fusion of exon 1 to exon 7 were isolated in most of the examined samples including HCC cells and normal controls. However, relatively abundant alternatively spliced CAD transcripts with fusion of exon 2 to exon 6 or 7, in which the corresponding domain directing CAD interaction with ICAD was preserved, were found only in poorly differentiated Mahlavu and SK-Hep1 cells. Interestingly, an abnormal CAD transcript with its exon 3 replaced by a truncated transposable Alu repeat was isolated in Hep3B cells, indicative of the implication of an Alu-mediated genomic mutation. Moreover, mis-sense mutations in the CAD genes were identified in all six HCC cell lines. Upon UV-induced apoptosis, DNA fragmentation efficiency was found to be intact, partially reduced and remarkably reduced in Huh7 and J328, Hep3B and HepG2, and Mahlavu cells, respectively. That mutations and aberrantly spliced transcripts for the CAD gene are frequently present in human HCC cells, especially in poorly differentiated HCC cells, suggests a significant role of CAD in human hepatocarcinogenesis.

  17. Component-based approach to robot vision for computational efficiency

    NASA Astrophysics Data System (ADS)

    Lee, Junhee; Kim, Dongsun; Park, Yeonchool; Park, Sooyong; Lee, Sukhan

    2007-12-01

    The purpose of this paper is to show the merit and feasibility of the component-based approach in robot system integration. Many methodologies, such as the 'component-based approach' and the 'middleware-based approach', have been suggested for efficiently integrating various complex functions on a robot system. However, these methodologies are not broadly used in robot function development, because such 'top-down' methodologies are modeled and researched in the software engineering field, which differs from robot function research, and so they are not trusted by function developers. Developers' main concern with these methodologies is the performance decrease that originates from framework overhead. This paper counters this perception by showing a time-performance increase in an experiment using the 'Self Healing, Adaptive and Growing softwarE (SHAGE)' framework, one of the component-based frameworks. Visual object recognition is chosen as an example of a real robot function for the experiment.

  18. An efficient computational approach for evaluating radiation flux for laser driven inertial confinement fusion targets

    NASA Astrophysics Data System (ADS)

    Li, Haiyan; Huang, Yunbao; Jiang, Shaoen; Jing, Longfei; Ding, Yongkun

    2015-08-01

    Radiation flux computation on the target is very important for laser driven Inertial Confinement Fusion, and view-factor based equation models (MacFarlane, 2003; Srivastava et al., 2000) are often used to compute this radiation flux on the capsule or samples inside the hohlraum. However, the equation models do not lead to sparse matrices and may involve an intensive solution process when discrete mesh elements become smaller and the number of equations increases. An efficient approach for the computation of radiation flux is proposed in this paper, in which (1) symmetric and positive definite properties are achieved by transformation, and (2) an efficient Cholesky factorization algorithm is applied to significantly accelerate the solution of such equation models. Finally, two targets on a laser facility built in China are considered to validate the computing efficiency of the present approach. The results show that the radiation flux computation can be accelerated by a factor of 2.
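
    The speedup rests on a standard linear-algebra fact: once the coupled view-factor equations are transformed into a symmetric positive definite system, a single Cholesky factorization solves them at roughly half the cost of a general LU solve and can be reused for several source terms. A generic SciPy sketch with a made-up stand-in matrix (not a hohlraum model):

    import numpy as np
    from scipy.linalg import cho_factor, cho_solve

    rng = np.random.default_rng(1)
    n = 500
    M = rng.random((n, n)) / n
    A = np.eye(n) + M @ M.T            # symmetric positive definite by construction
    b = rng.random(n)

    c, low = cho_factor(A)             # factor once...
    flux = cho_solve((c, low), b)      # ...then solve cheaply for each right-hand side
    print(np.allclose(A @ flux, b))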

  19. An accurate and efficient computation method of the hydration free energy of a large, complex molecule.

    PubMed

    Yoshidome, Takashi; Ekimoto, Toru; Matubayasi, Nobuyuki; Harano, Yuichi; Kinoshita, Masahiro; Ikeguchi, Mitsunori

    2015-05-01

    The hydration free energy (HFE) is a crucially important physical quantity to discuss various chemical processes in aqueous solutions. Although an explicit-solvent computation with molecular dynamics (MD) simulations is a preferable treatment of the HFE, huge computational load has been inevitable for large, complex solutes like proteins. In the present paper, we propose an efficient computation method for the HFE. In our method, the HFE is computed as a sum of 〈UUV〉/2 (〈UUV〉 is the ensemble average of the sum of pair interaction energy between solute and water molecule) and the water reorganization term mainly reflecting the excluded volume effect. Since 〈UUV〉 can readily be computed through a MD of the system composed of solute and water, an efficient computation of the latter term leads to a reduction of computational load. We demonstrate that the water reorganization term can quantitatively be calculated using the morphometric approach (MA) which expresses the term as the linear combinations of the four geometric measures of a solute and the corresponding coefficients determined with the energy representation (ER) method. Since the MA enables us to finish the computation of the solvent reorganization term in less than 0.1 s once the coefficients are determined, the use of the MA enables us to provide an efficient computation of the HFE even for large, complex solutes. Through the applications, we find that our method has almost the same quantitative performance as the ER method with substantial reduction of the computational load. PMID:25956125
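
    The bookkeeping of the proposed decomposition is simple once the pieces are in hand: the HFE is approximated as 〈UUV〉/2 plus a water-reorganization term written as a linear combination of four geometric measures of the solute. The sketch below only illustrates that combination; the measures, coefficients, and numbers are placeholders, not values from the paper.

    def hydration_free_energy(u_uv_mean, measures, coeffs):
        """HFE ~ <U_uv>/2 + sum_i c_i * m_i over the four morphometric measures
        (excluded volume, surface area, integrated mean and Gaussian curvatures)."""
        reorganization = sum(c * m for c, m in zip(coeffs, measures))
        return 0.5 * u_uv_mean + reorganization

    # Placeholder inputs: <U_uv> from an MD run, measures from the solute geometry,
    # coefficients from an energy-representation fit on small reference solutes.
    print(hydration_free_energy(-150.0, (2500.0, 900.0, 120.0, 12.6), (0.02, 0.05, -0.1, 0.3)))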

  20. Enabling Efficient Climate Science Workflows in High Performance Computing Environments

    NASA Astrophysics Data System (ADS)

    Krishnan, H.; Byna, S.; Wehner, M. F.; Gu, J.; O'Brien, T. A.; Loring, B.; Stone, D. A.; Collins, W.; Prabhat, M.; Liu, Y.; Johnson, J. N.; Paciorek, C. J.

    2015-12-01

    A typical climate science workflow often involves a combination of acquisition of data, modeling, simulation, analysis, visualization, publishing, and storage of results. Each of these tasks provides a myriad of challenges when running on a high performance computing environment such as Hopper or Edison at NERSC. Hurdles such as data transfer and management, job scheduling, parallel analysis routines, and publication require a lot of forethought and planning to ensure that proper quality control mechanisms are in place. These steps require effectively utilizing a combination of well tested and newly developed functionality to move data, perform analysis, apply statistical routines, and finally, serve results and tools to the greater scientific community. As part of the CAlibrated and Systematic Characterization, Attribution and Detection of Extremes (CASCADE) project we highlight a stack of tools our team utilizes and has developed to ensure that large scale simulation and analysis work are commonplace and provide operations that assist in everything from generation/procurement of data (HTAR/Globus) to automating publication of results to portals like the Earth Systems Grid Federation (ESGF), all while executing everything in between in a scalable environment in a task parallel way (MPI). We highlight the use and benefit of these tools by showing several climate science analysis use cases they have been applied to.

  1. Computational efficiencies for calculating rare earth f^n energies

    NASA Astrophysics Data System (ADS)

    Beck, Donald R.

    2009-05-01

    Recently [D. R. Beck and E. J. Domeier, Can. J. Phys., Walter Johnson issue, Jan. 2009], we have used new computational strategies to obtain wavefunctions and energies for Gd IV 4f^7 and 4f^6 5d levels. Here we extend one of these techniques to allow efficient inclusion of 4f^2 pair correlation effects using radial pair energies obtained from much simpler calculations [e.g. K. Jankowski et al., Int. J. Quant. Chem. XXVII, 665 (1985)] and angular factors which can be simply computed [D. R. Beck and C. A. Nicolaides, in Excited States in Quantum Chemistry, C. A. Nicolaides and D. R. Beck (editors), D. Reidel (1978), p. 105ff]. This is a revitalization of an older idea [I. Oksuz and O. Sinanoglu, Phys. Rev. 181, 54 (1969)]. We display relationships between angular factors involving the exchange of holes and electrons (e.g. f^6 vs f^8, f^13d vs fd^9). We apply the results to Tb IV and Gd IV, whose spectra are largely unknown, but which may play a role in MRI medicine as endohedral metallofullerenes (e.g. Gd3N-C80 [M. C. Qian and S. N. Khanna, J. Appl. Phys. 101, 09E105 (2007)]). Pr III results are in good agreement (910 cm^-1) with experiment. Pu I 5f^2 radial pair energies are also presented.

  2. Efficient computation of coherent synchrotron radiation in a rectangular chamber

    NASA Astrophysics Data System (ADS)

    Warnock, Robert L.; Bizzozero, David A.

    2016-09-01

    We study coherent synchrotron radiation (CSR) in a perfectly conducting vacuum chamber of rectangular cross section, in a formalism allowing an arbitrary sequence of bends and straight sections. We apply the paraxial method in the frequency domain, with a Fourier development in the vertical coordinate but with no other mode expansions. A line charge source is handled numerically by a new method that rids the equations of singularities through a change of dependent variable. The resulting algorithm is fast compared to earlier methods, works for short bunches with complicated structure, and yields all six field components at any space-time point. As an example we compute the tangential magnetic field at the walls. From that one can make a perturbative treatment of the Poynting flux to estimate the energy deposited in resistive walls. The calculation was motivated by a design issue for LCLS-II, the question of how much wall heating from CSR occurs in the last bend of a bunch compressor and the following straight section. Working with a realistic longitudinal bunch form of r.m.s. length 10.4 μm and a charge of 100 pC we conclude that the radiated power is quite small (28 W at a 1 MHz repetition rate), and all radiated energy is absorbed in the walls within 7 m along the straight section.

  3. An efficient network for interconnecting remote monitoring instruments and computers

    SciTech Connect

    Halbig, J.K.; Gainer, K.E.; Klosterbuer, S.F.

    1994-08-01

    Remote monitoring instrumentation must be connected with computers and other instruments. The cost and intrusiveness of installing cables in new and existing plants present problems for the facility and the International Atomic Energy Agency (IAEA). The authors have tested a network that could accomplish this interconnection using mass-produced commercial components developed for use in industrial applications. Unlike components in the hardware of most networks, the components--manufactured and distributed in North America, Europe, and Asia--lend themselves to small and low-powered applications. The heart of the network is a chip with three microprocessors and proprietary network software contained in Read Only Memory. In addition to all nonuser levels of protocol, the software also contains message authentication capabilities. This chip can be interfaced to a variety of transmission media, for example, RS-485 lines, fiber optic cables, rf waves, and standard ac power lines. The use of power lines as the transmission medium in a facility could significantly reduce cabling costs.

  4. Efficient computer algebra algorithms for polynomial matrices in control design

    NASA Technical Reports Server (NTRS)

    Baras, J. S.; Macenany, D. C.; Munach, R.

    1989-01-01

    The theory of polynomial matrices plays a key role in the design and analysis of multi-input multi-output control and communications systems using frequency domain methods. Examples include coprime factorizations of transfer functions, canonical realizations from matrix fraction descriptions, and the transfer function design of feedback compensators. Typically, such problems abstract in a natural way to the need to solve systems of Diophantine equations or systems of linear equations over polynomials. These and other problems involving polynomial matrices can in turn be reduced to polynomial matrix triangularization procedures, a result which is not surprising given the importance of matrix triangularization techniques in numerical linear algebra. Matrices with entries from a field and Gaussian elimination play a fundamental role in understanding the triangularization process. In the case of polynomial matrices (matrices with entries from a ring), Gaussian elimination is not defined, and triangularization is accomplished by what is quite properly called Euclidean elimination. Unfortunately, the numerical stability and sensitivity issues which accompany floating point approaches to Euclidean elimination are not very well understood. New algorithms are presented which circumvent such numerical issues entirely through the use of exact, symbolic methods in computer algebra. The use of such error-free algorithms guarantees that the results are accurate to within the precision of the model data--the best that can be hoped for. Care must be taken in the design of such algorithms due to the phenomenon of intermediate expression swell.
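
    The Euclidean elimination step referred to above can be carried out exactly in a computer algebra system: divide one polynomial entry by the pivot and subtract the quotient times the pivot row, so only the remainder survives. A one-step sketch with SymPy on a hypothetical 2x2 polynomial matrix (not code from the cited work):

    from sympy import symbols, div, Matrix, simplify

    x = symbols('x')
    A = Matrix([[x**2 + 1, x],
                [x**3 + x, x**2 - 1]])

    q, r = div(A[1, 0], A[0, 0], x)                      # A[1,0] = q*A[0,0] + r (here r = 0)
    A.row_op(1, lambda v, j: simplify(v - q * A[0, j]))  # eliminate the (1,0) entry exactly
    print(A)                                             # Matrix([[x**2 + 1, x], [0, -1]])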

  5. A determination of antioxidant efficiencies using ESR and computational methods

    NASA Astrophysics Data System (ADS)

    Rhodes, Christopher J.; Tran, Thuy T.; Morris, Harry

    2004-05-01

    Using Transition-State Theory, experimental rate constants, determined over a range of temperatures, for reactions of Vitamin E type antioxidants are analysed in terms of their enthalpies and entropies of activation. It is further shown that computational methods may be employed to calculate enthalpies and entropies, and hence Gibbs free energies, for the overall reactions. Within the linear free energy relationship (LFER) assumption, that the Gibbs free energy of activation is proportional to the overall Gibbs free energy change for the reaction, it is possible to rationalise, and even to predict, the relative contributions of enthalpy and entropy for reactions of interest, involving potential antioxidants. A method is devised, involving a competitive reaction between •CH3 radicals and both the spin-trap PBN and the antioxidant, which enables the relatively rapid determination of a relative ordering of activities for a series of potential antioxidant compounds, and also of their rate constants for scavenging •CH3 radicals (relative to the rate constant for addition of •CH3 to PBN).
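
    The transition-state-theory analysis above reduces to the Eyring relation k = (k_B*T/h) exp(-(ΔH‡ - TΔS‡)/(RT)), so activation enthalpy and entropy determine the rate constant and vice versa. A small numerical sketch with illustrative values, not data from the paper:

    import numpy as np

    K_B, H, R = 1.380649e-23, 6.62607015e-34, 8.314462618   # SI units

    def eyring_rate(dH_act, dS_act, T):
        """Rate constant (1/s) from activation enthalpy (J/mol) and entropy (J/mol/K) at T (K)."""
        dG_act = dH_act - T * dS_act
        return (K_B * T / H) * np.exp(-dG_act / (R * T))

    print(eyring_rate(40e3, -50.0, 298.15))   # illustrative activation parameters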

  6. Computational Efficiency through Visual Argument: Do Graphic Organizers Communicate Relations in Text Too Effectively?

    ERIC Educational Resources Information Center

    Robinson, Daniel H.; Schraw, Gregory

    1994-01-01

    Three experiments involving 138 college students investigated why one type of graphic organizer (a matrix) may communicate interconcept relations better than an outline or text. Results suggest that a matrix is more computationally efficient than either outline or text, allowing the easier computation of relationships. (SLD)

  7. An Efficient Objective Analysis System for Parallel Computers

    NASA Technical Reports Server (NTRS)

    Stobie, J.

    1999-01-01

    A new atmospheric objective analysis system designed for parallel computers will be described. The system can produce a global analysis (on a 1 x 1 lat-lon grid with 18 levels of heights and winds and 10 levels of moisture) using 120,000 observations in 17 minutes on 32 CPUs (SGI Origin 2000). No special parallel code is needed (e.g. MPI or multitasking) and the 32 CPUs do not have to be on the same platform. The system is totally portable and can run on several different architectures at once. In addition, the system can easily scale up to 100 or more CPUs. This will allow for much higher resolution and significant increases in input data. The system scales linearly as the number of observations and the number of grid points. The cost overhead in going from 1 to 32 CPUs is 18%. In addition, the analysis results are identical regardless of the number of processors used. This system has all the characteristics of optimal interpolation, combining detailed instrument and first guess error statistics to produce the best estimate of the atmospheric state. Static tests with a 2 x 2.5 resolution version of this system showed its analysis increments are comparable to the latest NASA operational system, including maintenance of mass-wind balance. Results from several months of cycling tests in the Goddard EOS Data Assimilation System (GEOS DAS) show this new analysis retains the same level of agreement between the first guess and observations (O-F statistics) as the current operational system.

  8. An Efficient Objective Analysis System for Parallel Computers

    NASA Technical Reports Server (NTRS)

    Stobie, James G.

    1999-01-01

    A new objective analysis system designed for parallel computers will be described. The system can produce a global analysis (on a 2 x 2.5 lat-lon grid with 20 levels of heights and winds and 10 levels of moisture) using 120,000 observations in less than 3 minutes on 32 CPUs (SGI Origin 2000). No special parallel code is needed (e.g. MPI or multitasking) and the 32 CPUs do not have to be on the same platform. The system is totally portable and can run on several different architectures at once. In addition, the system can easily scale up to 100 or more CPUs. This will allow for much higher resolution and significant increases in input data. The system scales linearly as the number of observations and the number of grid points. The cost overhead in going from 1 to 32 CPUs is 18%. In addition, the analysis results are identical regardless of the number of processors used. This system has all the characteristics of optimal interpolation, combining detailed instrument and first guess error statistics to produce the best estimate of the atmospheric state. It also includes a new quality control (buddy check) system. Static tests with the system showed its analysis increments are comparable to the latest NASA operational system, including maintenance of mass-wind balance. Results from a 2-month cycling test in the Goddard EOS Data Assimilation System (GEOS DAS) show this new analysis retains the same level of agreement between the first guess and observations (O-F statistics) throughout the entire two months.

  9. A more efficient anisotropic mesh adaptation for the computation of Lagrangian coherent structures

    NASA Astrophysics Data System (ADS)

    Fortin, A.; Briffard, T.; Garon, A.

    2015-03-01

    The computation of Lagrangian coherent structures is more and more used in fluid mechanics to determine subtle fluid flow structures. We present in this paper a new adaptive method for the efficient computation of Finite Time Lyapunov Exponent (FTLE) from which the coherent Lagrangian structures can be obtained. This new adaptive method considerably reduces the computational burden without any loss of accuracy on the FTLE field.
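
    As background for what is being adapted, the FTLE at a grid point follows from the flow-map gradient: form the right Cauchy-Green tensor C = F^T F and take the logarithm of its largest eigenvalue over twice the integration horizon. A uniform-grid finite-difference sketch (the anisotropic mesh adaptation that is the paper's contribution is not reproduced here):

    import numpy as np

    def ftle_field(phi_x, phi_y, x, y, T):
        """FTLE from flow-map components phi_x, phi_y of shape (len(y), len(x)) over horizon T."""
        dphix_dy, dphix_dx = np.gradient(phi_x, y, x)
        dphiy_dy, dphiy_dx = np.gradient(phi_y, y, x)
        ftle = np.zeros_like(phi_x)
        for i in range(phi_x.shape[0]):
            for j in range(phi_x.shape[1]):
                F = np.array([[dphix_dx[i, j], dphix_dy[i, j]],
                              [dphiy_dx[i, j], dphiy_dy[i, j]]])
                C = F.T @ F                                  # right Cauchy-Green tensor
                ftle[i, j] = np.log(np.linalg.eigvalsh(C)[-1]) / (2.0 * abs(T))
        return ftle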

  10. Measured energy savings of an energy-efficient office computer system

    SciTech Connect

    Lapujade, P.G.

    1995-12-01

    Recent surveys have shown that the use of personal computer systems in commercial office buildings is expanding rapidly. In warmer climates, office equipment energy use also has important implications for building cooling loads as well as those directly associated with computing tasks. The U.S. Environmental Protection Agency (EPA) has developed the Energy Star (ES) rating system, intended to endorse more efficient machines. To research the comparative performance of conventional and low-energy computer systems, a test was conducted with the substitution of an ES computer system for the main clerical computer used at a research institution. Separate data on power demand (watts), power factor for the computer/monitor, and power demand for the dedicated laser printer were recorded every 15 minutes to a multichannel datalogger. The current system, a 486DX, 66 MHz computer (8 MB of RAM, and 340 MB hard disk) with a laser printer was monitored for an 86-day period. An ES computer and an ES printer with virtually identical capabilities were then substituted and the changes to power demand and power factor were recorded for an additional 86 days. Computer and printer usage patterns remained essentially constant over the entire monitoring period. The computer user was also interviewed to learn of any perceived shortcomings of the more energy-efficient system. Based on the monitoring, the ES computer system is calculated to produce energy savings of 25.8% (121 kWh) over one year.

  11. Introduction: From Efficient Quantum Computation to Nonextensive Statistical Mechanics

    NASA Astrophysics Data System (ADS)

    Prosen, Tomaz

    These few pages will attempt to make a short comprehensive overview of several contributions to this volume which concern rather diverse topics. I shall review the following works, essentially reversing the sequence indicated in my title: • First, by C. Tsallis on the relation of nonextensive statistics to the stability of quantum motion on the edge of quantum chaos. • Second, the contribution by P. Jizba on information theoretic foundations of generalized (nonextensive) statistics. • Third, the contribution by J. Rafelski on a possible generalization of Boltzmann kinetics, again, formulated in terms of nonextensive statistics. • Fourth, the contribution by D.L. Stein on the state-of-the-art open problems in spin glasses and on the notion of complexity there. • Fifth, the contribution by F.T. Arecchi on the quantum-like uncertainty relations and decoherence appearing in the description of perceptual tasks of the brain. • Sixth, the contribution by G. Casati on the measurement and information extraction in the simulation of complex dynamics by a quantum computer. Immediately, the following question arises: What do the topics of these talks have in common? Apart from the variety of questions they address, it is quite obvious that the common denominator of these contributions is an approach to describe and control "the complexity" by simple means. One of the very useful tools to handle such problems, also often used or at least referred to in several of the works presented here, is the concept of Tsallis entropy and nonextensive statistics.

  12. The Challenging Academic Development (CAD) Collective

    ERIC Educational Resources Information Center

    Peseta, Tai

    2005-01-01

    This article discusses the Challenging Academic Development (CAD) Collective and describes how it came out of a symposium called "Liminality, identity, and hybridity: On the promise of new conceptual frameworks for theorising academic/faculty development." The CAD Collective is and represents a space where people can open up their contexts and…

  13. Dynamic MRI-based computer aided diagnostic systems for early detection of kidney transplant rejection: A survey

    NASA Astrophysics Data System (ADS)

    Mostapha, Mahmoud; Khalifa, Fahmi; Alansary, Amir; Soliman, Ahmed; Gimel'farb, Georgy; El-Baz, Ayman

    2013-10-01

    Early detection of renal transplant rejection is important to implement appropriate medical and immune therapy in patients with transplanted kidneys. In the literature, a large number of computer-aided diagnostic (CAD) systems using different image modalities, such as ultrasound (US), magnetic resonance imaging (MRI), computed tomography (CT), and radionuclide imaging, have been proposed for early detection of kidney diseases. A typical CAD system for kidney diagnosis consists of a set of processing steps including: motion correction, segmentation of the kidney and/or its internal structures (e.g., cortex, medulla), construction of agent kinetic curves, functional parameter estimation, diagnosis, and assessment of the kidney status. In this paper, we survey the current state-of-the-art CAD systems that have been developed for kidney disease diagnosis using dynamic MRI. In addition, the paper addresses several challenges that researchers face in developing efficient, fast and reliable CAD systems for the early detection of kidney diseases.

  14. Integrated Computer-Aided Drafting Instruction (ICADI).

    ERIC Educational Resources Information Center

    Chen, C. Y.; McCampbell, David H.

    Until recently, computer-aided drafting and design (CAD) systems were almost exclusively operated on mainframes or minicomputers and their cost prohibited many schools from offering CAD instruction. Today, many powerful personal computers are capable of performing the high-speed calculation and analysis required by the CAD application; however,…

  15. 3D CAD model retrieval method based on hierarchical multi-features

    NASA Astrophysics Data System (ADS)

    An, Ran; Wang, Qingwen

    2015-12-01

    The classical "Shape Distribution D2" algorithm takes the distance between two random points on a surface of CAD model as statistical features, and based on that it generates a feature vector to calculate the dissimilarity and achieve the retrieval goal. This algorithm has a simple principle, high computational efficiency and can get a better retrieval results for the simple shape models. Based on the analysis of D2 algorithm's shape distribution curve, this paper enhances the algorithm's descriptive ability for a model's overall shape through the statistics of the angle between two random points' normal vectors, especially for the distinctions between the model's plane features and curved surface features; meanwhile, introduce the ratio that a line between two random points cut off by the model's surface to enhance the algorithm's descriptive ability for a model's detailed features; finally, integrating the two shape describing methods with the original D2 algorithm, this paper proposes a new method based the hierarchical multi-features. Experimental results showed that this method has bigger improvements and could get a better retrieval results compared with the traditional 3D CAD model retrieval method.

  16. An efficient formulation of robot arm dynamics for control and computer simulation

    NASA Astrophysics Data System (ADS)

    Lee, C. S. G.; Nigam, R.

    This paper describes an efficient formulation of the dynamic equations of motion of industrial robots based on the Lagrange formulation of d'Alembert's principle. This formulation, as applied to a PUMA robot arm, results in a set of closed form second order differential equations with cross product terms. They are not as efficient in computation as those formulated by the Newton-Euler method, but provide a better analytical model for control analysis and computer simulation. Computational complexities of this dynamic model together with other models are tabulated for discussion.

  17. Computationally generated velocity taper for efficiency enhancement in a coupled-cavity traveling-wave tube

    NASA Technical Reports Server (NTRS)

    Wilson, Jeffrey D.

    1989-01-01

    A computational routine has been created to generate velocity tapers for efficiency enhancement in coupled-cavity TWTs. Programmed into the NASA multidimensional large-signal coupled-cavity TWT computer code, the routine generates the gradually decreasing cavity periods required to maintain a prescribed relationship between the circuit phase velocity and the electron-bunch velocity. Computational results for several computer-generated tapers are compared to those for an existing coupled-cavity TWT with a three-step taper. Guidelines are developed for prescribing the bunch-phase profile to produce a taper for efficiency. The resulting taper provides a calculated RF efficiency 45 percent higher than the step taper at center frequency and at least 37 percent higher over the bandwidth.

  18. Recovery Act - CAREER: Sustainable Silicon -- Energy-Efficient VLSI Interconnect for Extreme-Scale Computing

    SciTech Connect

    Chiang, Patrick

    2014-01-31

    The research goal of this CAREER proposal is to develop energy-efficient, VLSI interconnect circuits and systems that will facilitate future massively-parallel, high-performance computing. Extreme-scale computing will exhibit massive parallelism on multiple vertical levels, from thousands of computational units on a single processor to thousands of processors in a single data center. Unfortunately, the energy required to communicate between these units at every level (on-chip, off-chip, off-rack) will be the critical limitation to energy efficiency. Therefore, the PI's career goal is to become a leading researcher in the design of energy-efficient VLSI interconnect for future computing systems.

  19. Indications for Computer-Aided Design and Manufacturing in Congenital Craniofacial Reconstruction.

    PubMed

    Fisher, Mark; Medina, Miguel; Bojovic, Branko; Ahn, Edward; Dorafshar, Amir H

    2016-09-01

    The complex three-dimensional relationships in congenital craniofacial reconstruction uniquely lend themselves to the ability to accurately plan and model the result provided by computer-aided design and manufacturing (CAD/CAM). The goal of this study was to illustrate indications where CAD/CAM would be helpful in the treatment of congenital craniofacial anomalies reconstruction and to discuss the application of this technology and its outcomes. A retrospective review was performed of all congenital craniofacial cases performed by the senior author between 2010 and 2014. Cases where CAD/CAM was used were identified, and illustrative cases to demonstrate the benefits of CAD/CAM were selected. Preoperative appearance, computerized plan, intraoperative course, and final outcome were analyzed. Preoperative planning enabled efficient execution of the operative plan with predictable results. Risk factors which made these patients good candidates for CAD/CAM were identified and compiled. Several indications, including multisuture and revisional craniosynostosis, facial bipartition, four-wall box osteotomy, reduction cranioplasty, and distraction osteogenesis could benefit most from this technology. We illustrate the use of CAD/CAM for these applications and describe the decision-making process both before and during surgery. We explore why we believe that CAD/CAM is indicated in these scenarios as well as the disadvantages and risks. PMID:27516839

  1. Tooth-colored CAD/CAM monolithic restorations.

    PubMed

    Reich, S

    2015-01-01

    A monolithic restoration (also known as a full contour restoration) is one that is manufactured from a single material for the fully anatomic replacement of lost tooth structure. Additional staining (followed by glaze firing if ceramic materials are used) may be performed to enhance the appearance of the restoration. For decades, monolithic restoration has been the standard for inlay and partial crown restorations manufactured by both pressing and computer-aided design and manufacturing (CAD/CAM) techniques. A limited selection of monolithic materials is now available for dental crown and bridge restorations. The IDS (2015) provided an opportunity to learn about and evaluate current trends in this field. In addition to new developments, established materials are also mentioned in this article to complete the picture. In line with the strategic focus of the IJCD, the focus here is naturally on CAD/CAM materials. PMID:26110926

  2. Computational System For Rapid CFD Analysis In Engineering

    NASA Technical Reports Server (NTRS)

    Barson, Steven L.; Ascoli, Edward P.; Decroix, Michelle E.; Sindir, Munir M.

    1995-01-01

    Computational system comprising modular hardware and software sub-systems developed to accelerate and facilitate use of techniques of computational fluid dynamics (CFD) in engineering environment. Addresses integration of all aspects of CFD analysis process, including definition of hardware surfaces, generation of computational grids, CFD flow solution, and postprocessing. Incorporates interfaces for integration of all hardware and software tools needed to perform complete CFD analysis. Includes tools for efficient definition of flow geometry, generation of computational grids, computation of flows on grids, and postprocessing of flow data. System accepts geometric input from any of three basic sources: computer-aided design (CAD), computer-aided engineering (CAE), or definition by user.

  3. CAD/CAM at a Distance: Assessing the Effectiveness of Web-Based Instruction To Meet Workforce Development Needs. AIR 2000 Annual Forum Paper.

    ERIC Educational Resources Information Center

    Wilkerson, Joyce A.; Elkins, Susan A.

    This qualitative case study assessed web-based instruction in a computer-aided design/computer-assisted manufacturing (CAD/CAM) course designed for workforce development. The study examined students' and instructors' experience in a CAD/CAM course delivered exclusively on the Internet, evaluating course content and delivery, clarity of…

  4. Topology, Dimerization, and Stability of the Single-Span Membrane Protein CadC

    PubMed Central

    Lindner, Eric; White, Stephen H.

    2014-01-01

    Under acid stress, Escherichia coli induce expression of CadA (lysine decarboxylase) and CadB (lysine/cadaverine antiporter) in a lysine-rich environment. The ToxR-like transcriptional activator CadC controls expression of the cadBA operon. Using a novel Signal Peptidase I (SPase I) cleavage assay, we show that CadC is a Type II single-span membrane protein (MP) with a cytoplasmic DNA binding domain and a periplasmic sensor domain. We further show that, as long assumed, dimerization of the sensor domain is required for activating the cadBA operon. We prove this using a chimera in which the periplasmic domain of RodZ—a Type II MP involved in the maintenance of the rod shape of E. coli—replaces the CadC sensor domain. Because the RodZ periplasmic domain cannot dimerize, the chimera cannot activate the operon. However, replacement of the TM domain of the chimera with the glycophorin-A (GpA) TM domain causes intramembrane dimerization and consequently operon activation. Using a low-expression protocol that eliminates extraneous TM-helix dimerization signals arising from protein over-expression, we dramatically enhanced the dynamic range of the β-galactosidase assay for cadBA activation. Consequently, the strength of the intramembrane dimerization of the GpA domain could be compared quantitatively with the strength of the much stronger periplasmic dimerization of CadC. For the signal-peptidase assay, we inserted a SPase I cleavage site (AAA or AQA) at the periplasmic end of the TM helix. Cleavage occurred with high efficiency for all TM and periplasmic domains tested, thus eliminating the need for the cumbersome spheroplast-proteinase K method for topology determinations. PMID:24946151

  5. Understanding dental CAD/CAM for restorations--accuracy from a mechanical engineering viewpoint.

    PubMed

    Tapie, Laurent; Lebon, Nicolas; Mawussi, Bernardin; Fron-Chabouis, Hélène; Duret, Francois; Attal, Jean-Pierre

    2015-01-01

    As is the case in the field of medicine, as well as in most areas of daily life, digital technology is increasingly being introduced into dental practice. Computer-aided design/computer-aided manufacturing (CAD/CAM) solutions are available not only for chairside practice but also for creating inlays, crowns, fixed partial dentures (FPDs), implant abutments, and other dental prostheses. CAD/CAM dental practice can be considered as the handling of devices and software processing for the almost automatic design and creation of dental restorations. However, dentists who want to use dental CAD/CAM systems often do not have enough information to understand the variations offered by such technology in practice. Knowledge of the random and systematic errors in accuracy with CAD/CAM systems can help to achieve successful restorations with this technology, and help with the purchasing of a CAD/CAM system that meets the clinical needs of restoration. This article provides a mechanical engineering viewpoint of the accuracy of CAD/CAM systems, to help dentists understand the impact of this technology on restoration accuracy. PMID:26734668

  6. Rationale for the Use of CAD/CAM Technology in Implant Prosthodontics

    PubMed Central

    Abduo, Jaafar; Lyons, Karl

    2013-01-01

    Despite the predictable longevity of implant prostheses, there is an ongoing interest to continue to improve implant prosthodontic treatment and outcomes. One of the developments is the application of computer-aided design and computer-aided manufacturing (CAD/CAM) to produce implant abutments and frameworks from metal or ceramic materials. The aim of this narrative review is to critically evaluate the rationale of CAD/CAM utilization for implant prosthodontics. To date, CAD/CAM allows simplified production of precise and durable implant components. The precision of fit has been proven in several laboratory experiments and has been attributed to the design of implants. Milling also facilitates component fabrication from durable and aesthetic materials. With further development, it is expected that the CAD/CAM protocol will be further simplified. Although compelling clinical evidence supporting the superiority of CAD/CAM implant restorations is still lacking, it is envisioned that CAD/CAM may become the mainstream for implant component fabrication. PMID:23690778

  7. Impact of CAD-deficiency in flax on biogas production.

    PubMed

    Wróbel-Kwiatkowska, Magdalena; Jabłoński, Sławomir; Szperlik, Jakub; Dymińska, Lucyna; Łukaszewicz, Marcin; Rymowicz, Waldemar; Hanuza, Jerzy; Szopa, Jan

    2015-12-01

    Global warming and the reduction in our fossil fuel reservoir have forced humanity to look for new means of energy production. Agricultural waste remains a large source for biofuel and bioenergy production. Flax shives are a waste product obtained during the processing of flax fibers. We investigated the possibility of using low-lignin flax shives for biogas production, specifically by assessing the impact of CAD deficiency on the biochemical and structural properties of shives. The study used genetically modified flax plants with a silenced CAD gene, which encodes the key enzyme for lignin synthesis. Reducing the lignin content modified cellulose crystallinity, improved flax shive fermentation and optimized biogas production. Chemical pretreatment of the shive biomass further increased biogas production efficiency. PMID:26178244

  8. Flexible Concurrency Control for Legacy CAD to Construct Collaborative CAD Environment

    NASA Astrophysics Data System (ADS)

    Cai, Xiantao; Li, Xiaoxia; He, Fazhi; Han, Soonhung; Chen, Xiao

    Collaborative CAD (Co-CAD) systems can be constructed based on either a 3D kernel or legacy stand-alone CAD systems, which are typically commercial CAD systems such as CATIA, Pro/E and so on. Most synchronous Co-CAD systems, especially those based on legacy stand-alone CAD systems, adopt lock mechanisms or floor control for concurrency control, which is restrictive and inflexible. A flexible concurrency control method is proposed for Co-CAD systems built on legacy stand-alone CAD systems. First, a model of operation relationships is proposed with special consideration for the concurrency control of this kind of Co-CAD system. Then two types of data structure, the Collaborative Feature Dependent Graph (Co-FDG) and the Collaborative Feature Operational List (Co-FOL), are presented as the cornerstone of flexible concurrency control. Next, a Flexible Concurrency Control Algorithm (FCCA) is proposed. Finally, a Selective Undo/Redo Algorithm is proposed which further improves the flexibility of Co-CAD.

  9. Advances in noninvasive detection of CAD

    SciTech Connect

    Kahn, J.K.

    1991-04-01

    Advances in the noninvasive detection of myocardial ischemia are increasing our ability to diagnose coronary artery disease (CAD). Tomographic (SPECT) thallium imaging provides better identification of coronary arteries with atherosclerotic narrowing. Increased lung thallium uptake and transient ischemic dilatation of the heart are additional markers of severe CAD. Late thallium imaging, as well as reinjection imaging, provides more accurate identification of myocardial ischemia. Finally, new myocardial imaging agents, such as technetium Tc 99m sestamibi (Cardiolite), should improve detection of CAD by noninvasive methods. 10 references.

  10. A Computationally Efficient Parallel Levenberg-Marquardt Algorithm for Large-Scale Big-Data Inversion

    NASA Astrophysics Data System (ADS)

    Lin, Y.; O'Malley, D.; Vesselinov, V. V.

    2015-12-01

    Inverse modeling seeks model parameters given a set of observed state variables. However, for many practical problems, because the observed data sets are often large and the model parameters are often numerous, conventional methods for solving the inverse problem can be computationally expensive. We have developed a new, computationally efficient Levenberg-Marquardt method for solving large-scale inverse modeling problems. Levenberg-Marquardt methods require the solution of a dense linear system of equations which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, such that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed when solving the first damping parameter and recycle it for all the following damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by using these computational techniques. We apply this new inverse modeling method to invert for a random transmissivity field. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. By comparing with a Levenberg-Marquardt method using standard linear inversion techniques, our Levenberg-Marquardt method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment. Therefore, our new inverse modeling method is a
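
    The recycling idea in this abstract can be illustrated with a Golub-Kahan (Lanczos) bidiagonalization: the Jacobian is reduced to a small bidiagonal matrix once per Levenberg-Marquardt iteration, and every damping parameter then only requires a solve of the small projected problem. The sketch below is a minimal NumPy illustration of that idea under those assumptions; it is not the MADS/Julia implementation, and all names and sizes are illustrative.

      import numpy as np

      def golub_kahan(J, b, k):
          # k steps of Golub-Kahan bidiagonalization of J started from b, giving
          # J @ V ~= U @ B with B lower bidiagonal of shape (k+1, k).
          m, n = J.shape
          U = np.zeros((m, k + 1)); V = np.zeros((n, k)); B = np.zeros((k + 1, k))
          beta1 = np.linalg.norm(b)
          U[:, 0] = b / beta1
          for i in range(k):
              v = J.T @ U[:, i] - (B[i, i - 1] * V[:, i - 1] if i > 0 else 0.0)
              B[i, i] = np.linalg.norm(v)
              V[:, i] = v / B[i, i]
              u = J @ V[:, i] - B[i, i] * U[:, i]
              B[i + 1, i] = np.linalg.norm(u)
              U[:, i + 1] = u / B[i + 1, i]
          return B, V, beta1

      def lm_step(B, V, beta1, lam):
          # Damped step from the projected problem: min ||B y - beta1*e1||^2 + lam*||y||^2.
          k = B.shape[1]
          A = np.vstack([B, np.sqrt(lam) * np.eye(k)])
          rhs = np.zeros(A.shape[0]); rhs[0] = beta1
          y, *_ = np.linalg.lstsq(A, rhs, rcond=None)
          return V @ y

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          J = rng.standard_normal((200, 40))        # stand-in Jacobian
          r = rng.standard_normal(200)              # stand-in residual
          B, V, beta1 = golub_kahan(J, -r, k=20)    # one subspace, built once
          for lam in (1e2, 1e0, 1e-2):              # recycled for every damping parameter
              delta = lm_step(B, V, beta1, lam)
              print(lam, np.linalg.norm(J @ delta + r))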

  11. Fabrication of lithium silicate ceramic veneers with a CAD/CAM approach: a clinical report of cleidocranial dysplasia.

    PubMed

    da Cunha, Leonardo Fernandes; Mukai, Eduardo; Hamerschmitt, Raphael Meneghetti; Correr, Gisele Maria

    2015-05-01

    The fabrication of minimally invasive ceramic veneers remains a challenge for dental restorations involving computer-aided design and computer-aided manufacturing (CAD/CAM). The application of an appropriate CAD/CAM protocol and correlation mode not only simplifies the fabrication of ceramic veneers but also improves the resulting esthetics. Ceramic veneers can restore tooth abnormalities caused by disorders such as cleidocranial dysplasia, enamel hypoplasia, or supernumerary teeth. This report illustrates the fabrication of dental veneers with a new lithium silicate ceramic and the CAD/CAM technique in a patient with cleidocranial dysplasia.

  12. An approximate solution to improve computational efficiency of impedance-type payload load prediction

    NASA Technical Reports Server (NTRS)

    White, C. W.

    1981-01-01

    The computational efficiency of the impedance type loads prediction method was studied. Three goals were addressed: devise a method to make the impedance method operate more efficiently in the computer; assess the accuracy and convenience of the method for determining the effect of design changes; and investigate the use of the method to identify design changes for reduction of payload loads. The method is suitable for calculation of dynamic response in either the frequency or time domain. It is concluded that: the choice of an orthogonal coordinate system will allow the impedance method to operate more efficiently in the computer; the approximate mode impedance technique is adequate for determining the effect of design changes, and is applicable for both statically determinate and statically indeterminate payload attachments; and beneficial design changes to reduce payload loads can be identified by the combined application of impedance techniques and energy distribution review techniques.

  13. Computationally Efficient Use of Derivatives in Emulation of Complex Computational Models

    SciTech Connect

    Williams, Brian J.; Marcy, Peter W.

    2012-06-07

    We will investigate the use of derivative information in complex computer model emulation when the correlation function is of the compactly supported Bohman class. To this end, a Gaussian process model similar to that used by Kaufman et al. (2011) is extended to a situation where first partial derivatives in each dimension are calculated at each input site (i.e. using gradients). A simulation study in the ten-dimensional case is conducted to assess the utility of the Bohman correlation function against strictly positive correlation functions when a high degree of sparsity is induced.
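
    The compactly supported Bohman correlation mentioned above has a simple closed form, ρ(d) = (1 - d)cos(πd) + sin(πd)/π for scaled distance d ≤ 1 and ρ(d) = 0 beyond, which is what makes the resulting correlation matrices sparse. The sketch below is an illustrative NumPy implementation; the separable product over input dimensions is an assumption made here for brevity and is not necessarily the construction used in the cited work.

      import numpy as np

      def bohman(h, theta):
          # Bohman correlation: compact support, exactly zero once |h| exceeds theta.
          d = np.minimum(np.abs(h) / theta, 1.0)
          val = (1.0 - d) * np.cos(np.pi * d) + np.sin(np.pi * d) / np.pi
          return np.where(np.abs(h) >= theta, 0.0, val)

      def correlation_matrix(X, theta):
          # Product of one-dimensional Bohman correlations over each input dimension.
          n, p = X.shape
          R = np.ones((n, n))
          for j in range(p):
              h = X[:, j][:, None] - X[None, :, j]
              R *= bohman(h, theta[j])
          return R  # mostly exact zeros when theta is small relative to the point spacing

      if __name__ == "__main__":
          rng = np.random.default_rng(1)
          X = rng.uniform(size=(100, 10))                 # ten-dimensional design, as in the study
          R = correlation_matrix(X, theta=np.full(10, 0.5))
          print("fraction of exact zeros:", float(np.mean(R == 0.0)))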

  14. Efficient and Flexible Computation of Many-Electron Wave Function Overlaps

    PubMed Central

    2016-01-01

    A new algorithm for the computation of the overlap between many-electron wave functions is described. This algorithm allows for the extensive use of recurring intermediates and thus provides high computational efficiency. Because of the general formalism employed, overlaps can be computed for varying wave function types, molecular orbitals, basis sets, and molecular geometries. This paves the way for efficiently computing nonadiabatic interaction terms for dynamics simulations. In addition, other application areas can be envisaged, such as the comparison of wave functions constructed at different levels of theory. Aside from explaining the algorithm and evaluating the performance, a detailed analysis of the numerical stability of wave function overlaps is carried out, and strategies for overcoming potential severe pitfalls due to displaced atoms and truncated wave functions are presented. PMID:26854874

  15. Education & Training for CAD/CAM: Results of a National Probability Survey. Krannert Institute Paper Series.

    ERIC Educational Resources Information Center

    Majchrzak, Ann

    A study was conducted of the training programs used by plants with Computer Automated Design/Computer Automated Manufacturing (CAD/CAM) to help their employees adapt to automated manufacturing. The study sought to determine the relative priorities of manufacturing establishments for training certain workers in certain skills; the status of…

  16. Use of MathCAD in a Pharmacokinetics Course for PharmD Students.

    ERIC Educational Resources Information Center

    Sullivan, Timothy J.

    1992-01-01

    This paper describes the application of the Student Edition of MathCAD as a computational aid in an introductory graduate level pharmacokinetics course. The program allows the student to perform mathematical calculations and analysis on a computer screen. The advantages and disadvantages of this application are discussed. (GLR)

  17. Equilibrium gas flow computations. I - Accurate and efficient calculation of equilibrium gas properties

    NASA Technical Reports Server (NTRS)

    Liu, Yen; Vinokur, Marcel

    1989-01-01

    This paper treats the accurate and efficient calculation of thermodynamic properties of arbitrary gas mixtures for equilibrium flow computations. New improvements in the Stupochenko-Jaffe model for the calculation of thermodynamic properties of diatomic molecules are presented. A unified formulation of equilibrium calculations for gas mixtures in terms of irreversible entropy is given. Using a highly accurate thermo-chemical data base, a new, efficient and vectorizable search algorithm is used to construct piecewise interpolation procedures which generate accurate thermodynamic variables and their derivatives as required by modern computational algorithms. Results are presented for equilibrium air, and compared with those given by the Srinivasan program.

  1. Development of efficient computer program for dynamic simulation of telerobotic manipulation

    NASA Technical Reports Server (NTRS)

    Chen, J.; Ou, Y. J.

    1989-01-01

    Research in robot control has generated interest in computationally efficient forms of dynamic equations for multi-body systems. For a simply connected open-loop linkage, dynamic equations arranged in recursive form were found to be particularly efficient. A general computer program capable of simulating an open-loop manipulator with an arbitrary number of links has been developed based on an efficient recursive form of Kane's dynamic equations. Also included in the program are some of the important dynamics of the joint drive system, i.e., the rotational effect of the motor rotors. Further efficiency is achieved by the use of a symbolic manipulation program to generate the FORTRAN simulation program tailored for a specific manipulator based on the parameter values given. The formulations and the validation of the program are described, and some results are shown.

  2. Evaluation of Emerging Energy-Efficient Heterogeneous Computing Platforms for Biomolecular and Cellular Simulation Workloads

    PubMed Central

    Stone, John E.; Hallock, Michael J.; Phillips, James C.; Peterson, Joseph R.; Luthey-Schulten, Zaida; Schulten, Klaus

    2016-01-01

    Many of the continuing scientific advances achieved through computational biology are predicated on the availability of ongoing increases in computational power required for detailed simulation and analysis of cellular processes on biologically-relevant timescales. A critical challenge facing the development of future exascale supercomputer systems is the development of new computing hardware and associated scientific applications that dramatically improve upon the energy efficiency of existing solutions, while providing increased simulation, analysis, and visualization performance. Mobile computing platforms have recently become powerful enough to support interactive molecular visualization tasks that were previously only possible on laptops and workstations, creating future opportunities for their convenient use for meetings, remote collaboration, and as head mounted displays for immersive stereoscopic viewing. We describe early experiences adapting several biomolecular simulation and analysis applications for emerging heterogeneous computing platforms that combine power-efficient system-on-chip multi-core CPUs with high-performance massively parallel GPUs. We present low-cost power monitoring instrumentation that provides sufficient temporal resolution to evaluate the power consumption of individual CPU algorithms and GPU kernels. We compare the performance and energy efficiency of scientific applications running on emerging platforms with results obtained on traditional platforms, identify hardware and algorithmic performance bottlenecks that affect the usability of these platforms, and describe avenues for improving both the hardware and applications in pursuit of the needs of molecular modeling tasks on mobile devices and future exascale computers. PMID:27516922

  3. Energy-Efficient Computational Chemistry: Comparison of x86 and ARM Systems.

    PubMed

    Keipert, Kristopher; Mitra, Gaurav; Sunriyal, Vaibhav; Leang, Sarom S; Sosonkina, Masha; Rendell, Alistair P; Gordon, Mark S

    2015-11-10

    The computational efficiency and energy-to-solution of several applications using the GAMESS quantum chemistry suite of codes is evaluated for 32-bit and 64-bit ARM-based computers, and compared to an x86 machine. The x86 system completes all benchmark computations more quickly than either ARM system and is the best choice to minimize time to solution. The ARM64 and ARM32 computational performances are similar to each other for Hartree-Fock and density functional theory energy calculations. However, for memory-intensive second-order perturbation theory energy and gradient computations the lower ARM32 read/write memory bandwidth results in computation times as much as 86% longer than on the ARM64 system. The ARM32 system is more energy efficient than the x86 and ARM64 CPUs for all benchmarked methods, while the ARM64 CPU is more energy efficient than the x86 CPU for some core counts and molecular sizes.

  4. AutoBioCAD: full biodesign automation of genetic circuits.

    PubMed

    Rodrigo, Guillermo; Jaramillo, Alfonso

    2013-05-17

    Synthetic regulatory networks with prescribed functions are engineered by assembling a reduced set of functional elements. We could also assemble them computationally if the mathematical models of those functional elements were predictive enough in different genetic contexts. Only after achieving this will we have libraries of models of biological parts able to provide predictive dynamical behaviors for most circuits constructed with them. We thus need tools that can automatically explore different genetic contexts, in addition to being able to use such libraries to design novel circuits with targeted dynamics. We have implemented a new tool, AutoBioCAD, aimed at the automated design of gene regulatory circuits. AutoBioCAD loads a library of models of genetic elements and implements evolutionary design strategies to produce (i) nucleotide sequences encoding circuits with targeted dynamics that can then be tested experimentally and (ii) circuit models for testing regulation principles in natural systems, providing a new tool for synthetic biology. AutoBioCAD can be used to model and design genetic circuits with dynamic behavior, thanks to the incorporation of stochastic effects, robustness, qualitative dynamics, multiobjective optimization, or degenerate nucleotide sequences, all facilitating the link with biological part/circuit engineering.

  5. A CAD system based on spherical dual representations

    SciTech Connect

    Roach, J.W.; Paripati, P.K.; Wright, J.S.

    1987-08-01

    Computer-aided design (CAD) systems typically have many different functions: drafting, two-dimensional modeling, three-dimensional modeling, finite element analysis, and fit and tolerancing of parts. The authors report on the construction of a CAD system based on shape representation ideas used in the vision community to determine the shape of an object from its image. In the long term, they propose to construct a combined CAD and sensing system based on the same underlying object models. Considerable advantages follow from building a model-driven sensor fusion system that uses a common geometric model. In a manufacturing environment, for example, a library of objects can be built up and its models used in a vision and touch sensing system integrated into an automated assembly line to discriminate between objects and determine orientation and distance. If such a system could be made robust and highly reliable, then some of the most difficult problems that plague attempts to create a fully flexible automated environment would be solved.

  6. Use of Existing CAD Models for Radiation Shielding Analysis

    NASA Technical Reports Server (NTRS)

    Lee, K. T.; Barzilla, J. E.; Wilson, P.; Davis, A.; Zachman, J.

    2015-01-01

    The utility of a radiation exposure analysis depends not only on the accuracy of the underlying particle transport code, but also on the accuracy of the geometric representations of both the vehicle used as radiation shielding mass and the phantom representation of the human form. The current NASA/Space Radiation Analysis Group (SRAG) process to determine crew radiation exposure in a vehicle design incorporates both output from an analytic High Z and Energy Particle Transport (HZETRN) code and the properties (i.e., material thicknesses) of a previously processed drawing. This geometry pre-process can be time-consuming, and the results are less accurate than those determined using a Monte Carlo-based particle transport code. The current work aims to improve this process. Although several Monte Carlo programs (FLUKA, Geant4) are readily available, most use an internal geometry engine. The lack of an interface with the standard CAD formats used by the vehicle designers limits the ability of the user to communicate complex geometries. Translation of native CAD drawings into a format readable by these transport programs is time consuming and prone to error. The Direct Accelerated Geometry -United (DAGU) project is intended to provide an interface between the native vehicle or phantom CAD geometry and multiple particle transport codes to minimize problem setup, computing time and analysis error.

  7. CADS:Cantera Aerosol Dynamics Simulator.

    SciTech Connect

    Moffat, Harry K.

    2007-07-01

    This manual describes a library for aerosol kinetics and transport, called CADS (Cantera Aerosol Dynamics Simulator), which employs a section-based approach for describing the particle size distributions. CADS is based upon Cantera, a set of C++ libraries and applications that handles gas phase species transport and reactions. The method uses a discontinuous Galerkin formulation to represent the particle distributions within each section and to solve for changes to the aerosol particle distributions due to condensation, coagulation, and nucleation processes. CADS conserves particles, elements, and total enthalpy up to numerical round-off error, in all of its formulations. Both 0-D time dependent and 1-D steady state applications (an opposing-flow flame application) have been developed with CADS, with the initial emphasis on developing fundamental mechanisms for soot formation within fires. This report also describes the 0-D application, TDcads, which models a time-dependent perfectly stirred reactor.

  8. Computationally efficient scalar nonparaxial modeling of optical wave propagation in the far-field.

    PubMed

    Nguyen, Giang-Nam; Heggarty, Kevin; Gérard, Philippe; Serio, Bruno; Meyrueis, Patrick

    2014-04-01

    We present a scalar model to overcome the computation time and sampling interval limitations of the traditional Rayleigh-Sommerfeld (RS) formula and angular spectrum method in computing wide-angle diffraction in the far-field. Numerical and experimental results show that our proposed method based on an accurate nonparaxial diffraction step onto a hemisphere and a projection onto a plane accurately predicts the observed nonparaxial far-field diffraction pattern, while its calculation time is much lower than the more rigorous RS integral. The results enable a fast and efficient way to compute far-field nonparaxial diffraction when the conventional Fraunhofer pattern fails to predict correctly.
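
    For reference, the brute-force Rayleigh-Sommerfeld computation that such models aim to replace can be written in a few lines. The sketch below uses the common far-field form of the RS type-I integral, U(P) = (z / iλ) ∬ U0 exp(ikr) / r² dA, and is deliberately naive: its cost grows with the product of aperture and observation samples, which is exactly the limitation discussed above. Grid sizes and wavelength are illustrative.

      import numpy as np

      def rs_direct(u0, x0, y0, x, y, z, wavelength):
          # Direct Rayleigh-Sommerfeld type-I integral (far-field form, valid for r >> wavelength).
          k = 2.0 * np.pi / wavelength
          dA = (x0[1] - x0[0]) * (y0[1] - y0[0])
          X0, Y0 = np.meshgrid(x0, y0, indexing="ij")
          U = np.zeros((x.size, y.size), dtype=complex)
          for i, xi in enumerate(x):
              for j, yj in enumerate(y):
                  r = np.sqrt((xi - X0) ** 2 + (yj - Y0) ** 2 + z ** 2)
                  U[i, j] = (z / (1j * wavelength)) * np.sum(u0 * np.exp(1j * k * r) / r ** 2) * dA
          return U

      if __name__ == "__main__":
          lam = 0.633e-6                               # wavelength in metres (illustrative)
          x0 = y0 = np.linspace(-50e-6, 50e-6, 64)     # 100 um square aperture
          u0 = np.ones((64, 64))                       # uniform illumination
          x = y = np.linspace(-2e-3, 2e-3, 32)         # observation plane samples
          U = rs_direct(u0, x0, y0, x, y, z=5e-3, wavelength=lam)
          print(np.abs(U).max())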

  9. Efficient computation of aerodynamic influence coefficients for aeroelastic analysis on a transputer network

    NASA Technical Reports Server (NTRS)

    Janetzke, David C.; Murthy, Durbha V.

    1991-01-01

    Aeroelastic analysis is multi-disciplinary and computationally expensive. Hence, it can greatly benefit from parallel processing. As part of an effort to develop an aeroelastic capability on a distributed memory transputer network, a parallel algorithm for the computation of aerodynamic influence coefficients is implemented on a network of 32 transputers. The aerodynamic influence coefficients are calculated using a 3-D unsteady aerodynamic model and a parallel discretization. Efficiencies up to 85 percent were demonstrated using 32 processors. The effect of subtask ordering, problem size, and network topology are presented. A comparison to results on a shared memory computer indicates that higher speedup is achieved on the distributed memory system.

  10. The CAD-EGS Project: Using CAD Geometrics in EGS4

    SciTech Connect

    Langeveld, Willy G.J.

    2002-03-28

    The objective of the CAD-EGS project is to provide a way to use a CAD system to create 3D geometries for use within EGS4. In this report, we describe an approach based on an intermediate file, written out by the CAD system, that is read by an EGS4 user code designed for the purpose. A prototype solution was implemented using a commonly used CAD system and the Virtual Reality Modeling Language (VRML) as an intermediate file format. We report results from the prototype, and discuss various problems arising from both the approach and the particular choices made.

  11. Spin-neurons: A possible path to energy-efficient neuromorphic computers

    SciTech Connect

    Sharad, Mrigank; Fan, Deliang; Roy, Kaushik

    2013-12-21

    Recent years have witnessed growing interest in the field of brain-inspired computing based on neural-network architectures. In order to translate the related algorithmic models into powerful, yet energy-efficient cognitive-computing hardware, computing-devices beyond CMOS may need to be explored. The suitability of such devices to this field of computing would strongly depend upon how closely their physical characteristics match the essential computing primitives employed in such models. In this work, we discuss the rationale of applying emerging spin-torque devices for bio-inspired computing. Recent spin-torque experiments have shown the path to low-current, low-voltage, and high-speed magnetization switching in nano-scale magnetic devices. Such magneto-metallic, current-mode spin-torque switches can mimic the analog summing and “thresholding” operation of an artificial neuron with high energy-efficiency. Comparison with a CMOS-based analog circuit model of a neuron shows that “spin-neurons” (spin-based circuit models of neurons) can achieve more than two orders of magnitude lower energy and beyond three orders of magnitude reduction in energy-delay product. The application of spin-neurons can therefore be an attractive option for neuromorphic computers of the future.

  12. NREL's Building-Integrated Supercomputer Provides Heating and Efficient Computing (Fact Sheet)

    SciTech Connect

    Not Available

    2014-09-01

    NREL's Energy Systems Integration Facility (ESIF) is meant to investigate new ways to integrate energy sources so they work together efficiently, and one of the key tools to that investigation, a new supercomputer, is itself a prime example of energy systems integration. NREL teamed with Hewlett-Packard (HP) and Intel to develop the innovative warm-water, liquid-cooled Peregrine supercomputer, which not only operates efficiently but also serves as the primary source of building heat for ESIF offices and laboratories. This innovative high-performance computer (HPC) can perform more than a quadrillion calculations per second as part of the world's most energy-efficient HPC data center.

  13. Investigating the effects of majority voting on CAD systems: a LIDC case study

    NASA Astrophysics Data System (ADS)

    Carrazza, Miguel; Kennedy, Brendan; Rasin, Alexander; Furst, Jacob; Raicu, Daniela

    2016-03-01

    Computer-Aided Diagnosis (CAD) systems can provide a second opinion for either identifying suspicious regions on a medical image or predicting the degree of malignancy for a detected suspicious region. To develop a predictive model, CAD systems are trained on low-level image features extracted from image data and the class labels acquired through radiologists' interpretations or a gold standard (e.g., a biopsy). While the opinion of an expert radiologist is still an estimate of the answer, the ground truth may be extremely expensive to acquire. In such cases, CAD systems are trained on input data that contains multiple expert opinions per case with the expectation that the aggregate of labels will closely approximate the ground truth. Using multiple labels to solve this problem has its own challenges because of the inherent label uncertainty introduced by the variability in the radiologists' interpretations. Most CAD systems use majority voting (e.g., average, mode) to handle label uncertainty. This paper investigates the effects that majority voting can have on a CAD system by classifying and analyzing different semantic characteristics supplied with the Lung Image Database Consortium (LIDC) dataset. Using a decision tree based iterative predictive model, we show that majority voting with labels that exhibit certain types of skewed distribution can have a significant negative impact on the performance of a CAD system; therefore, alternative strategies for label integration are required when handling multiple interpretations.
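
    The two aggregation rules the abstract names for "majority voting" (the mean and the mode of the individual ratings) are easy to state precisely, and a skewed set of ratings is exactly the situation in which they disagree. The sketch below uses hypothetical four-reader ratings on a 1-5 scale, not LIDC data, purely to make the point concrete.

      import numpy as np
      from collections import Counter

      def aggregate_mean(ratings):
          # Average rating rounded to the nearest integer label.
          return int(round(float(np.mean(ratings))))

      def aggregate_mode(ratings):
          # Most frequent rating; ties broken toward the lower label (a design choice).
          counts = Counter(ratings)
          best = max(counts.values())
          return min(r for r, c in counts.items() if c == best)

      if __name__ == "__main__":
          cases = [[1, 1, 1, 5],   # skewed: one outlying reader
                   [2, 3, 3, 4],   # roughly symmetric
                   [5, 5, 4, 1]]   # skewed the other way
          for ratings in cases:
              print(ratings, "mean ->", aggregate_mean(ratings), "mode ->", aggregate_mode(ratings))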

  14. Diagnostic performance of radiologists with and without different CAD systems for mammography

    NASA Astrophysics Data System (ADS)

    Lauria, Adele; Fantacci, Maria E.; Bottigli, Ubaldo; Delogu, Pasquale; Fauci, Francesco; Golosio, Bruno; Indovina, Pietro L.; Masala, Giovanni L.; Oliva, Piernicola; Palmiero, Rosa; Raso, Giuseppe; Stumbo, Simone; Tangaro, Sabina

    2003-05-01

    The purpose of this study is the evaluation of the variation of performance, in terms of sensitivity and specificity, of two radiologists with different experience in mammography, with and without the assistance of two different CAD systems. The CAD systems considered are SecondLook (CADx Medical Systems, Canada) and CALMA (Computer Assisted Library in MAmmography). The first is a commercial system; the other is the result of a research project supported by INFN (Istituto Nazionale di Fisica Nucleare, Italy); their characteristics have already been reported in the literature. To compare the results with and without these tools, a dataset composed of 70 images of patients with cancer (biopsy proven) and 120 images of healthy breasts (with a three-year follow-up) has been collected. All the images were digitized and analysed by the two CAD systems; then two radiologists with 6 and 2 years of experience in mammography, respectively, independently made their diagnoses without and with the support of the two CAD systems. In this work the variations in sensitivity and specificity and the area Az under the ROC curve are reported. The results show that the use of a CAD allows for a substantial increment in sensitivity and a less pronounced decrement in specificity. The extent of these effects depends on the experience of the readers and is comparable for the two CAD systems considered.
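
    The figures reported in studies like this one (sensitivity, specificity, and the area Az under the ROC curve) can be computed directly from case-level suspicion scores. A minimal sketch follows, using the rank-based (Mann-Whitney) estimate of the ROC area and randomly generated scores for a 70-cancer / 120-healthy case mix as in the abstract; the scores are synthetic stand-ins, not reader data.

      import numpy as np

      def sensitivity_specificity(scores, labels, threshold):
          # labels: 1 = cancer (biopsy proven), 0 = healthy; scores: suspicion level.
          pred = scores >= threshold
          tp = np.sum(pred & (labels == 1)); fn = np.sum(~pred & (labels == 1))
          tn = np.sum(~pred & (labels == 0)); fp = np.sum(pred & (labels == 0))
          return tp / (tp + fn), tn / (tn + fp)

      def auc_mann_whitney(scores, labels):
          # Az estimated as the probability that a random cancer case outranks a random healthy case.
          pos = scores[labels == 1]; neg = scores[labels == 0]
          greater = (pos[:, None] > neg[None, :]).sum()
          ties = (pos[:, None] == neg[None, :]).sum()
          return (greater + 0.5 * ties) / (pos.size * neg.size)

      if __name__ == "__main__":
          rng = np.random.default_rng(2)
          labels = np.r_[np.ones(70, dtype=int), np.zeros(120, dtype=int)]
          scores = np.where(labels == 1, rng.normal(1.0, 1.0, 190), rng.normal(0.0, 1.0, 190))
          print("Az =", auc_mann_whitney(scores, labels))
          print("sensitivity, specificity at 0.5 =", sensitivity_specificity(scores, labels, 0.5))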

  15. Using Neural Net Technology To Enhance the Efficiency of a Computer Adaptive Testing Application.

    ERIC Educational Resources Information Center

    Van Nelson, C.; Henriksen, Larry W.

    The potential for computer adaptive testing (CAT) has been well documented. In order to improve the efficiency of this process, it may be possible to utilize a neural network, or more specifically, a back propagation neural network. The paper asserts that in order to accomplish this end, it must be shown that grouping examinees by ability as…

  16. Framework for computationally efficient optimal irrigation scheduling using ant colony optimization

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A general optimization framework is introduced with the overall goal of reducing search space size and increasing the computational efficiency of evolutionary algorithm application for optimal irrigation scheduling. The framework achieves this goal by representing the problem in the form of a decisi...

  17. The Improvement of Efficiency in the Numerical Computation of Orbit Trajectories

    NASA Technical Reports Server (NTRS)

    Dyer, J.; Danchick, R.; Pierce, S.; Haney, R.

    1972-01-01

    An analysis, system design, programming, and evaluation of results are described for numerical computation of orbit trajectories. Evaluation of generalized methods, interaction of different formulations for satellite motion, transformation of equations of motion and integrator loads, and development of efficient integrators are also considered.

  18. Efficient shortest-path-tree computation in network routing based on pulse-coupled neural networks.

    PubMed

    Qu, Hong; Yi, Zhang; Yang, Simon X

    2013-06-01

    Shortest path tree (SPT) computation is a critical issue for routers using link-state routing protocols, such as the most commonly used open shortest path first and intermediate system to intermediate system. Each router needs to recompute a new SPT rooted from itself whenever a change happens in the link state. Most commercial routers do this computation by deleting the current SPT and building a new one using static algorithms such as the Dijkstra algorithm at the beginning. Such recomputation of an entire SPT is inefficient, which may consume a considerable amount of CPU time and result in a time delay in the network. Some dynamic updating methods using the information in the updated SPT have been proposed in recent years. However, there are still many limitations in those dynamic algorithms. In this paper, a new modified model of pulse-coupled neural networks (M-PCNNs) is proposed for the SPT computation. It is rigorously proved that the proposed model is capable of solving some optimization problems, such as the SPT. A static algorithm is proposed based on the M-PCNNs to compute the SPT efficiently for large-scale problems. In addition, a dynamic algorithm that makes use of the structure of the previously computed SPT is proposed, which significantly improves the efficiency of the algorithm. Simulation results demonstrate the effective and efficient performance of the proposed approach. PMID:23144039
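
    For context, the static baseline the abstract refers to (recomputing the whole tree with Dijkstra's algorithm) fits in a few lines; the M-PCNN model and the dynamic-update algorithm of the paper are not reproduced here. The sketch keeps parent pointers, which form the SPT a router actually installs.

      import heapq

      def shortest_path_tree(graph, root):
          # Dijkstra's algorithm: graph is {node: {neighbor: cost}}; returns the parent
          # pointers of the SPT rooted at `root` and the distance to every reachable node.
          dist = {root: 0.0}
          parent = {root: None}
          heap = [(0.0, root)]
          while heap:
              d, u = heapq.heappop(heap)
              if d > dist.get(u, float("inf")):
                  continue                      # stale queue entry
              for v, w in graph[u].items():
                  nd = d + w
                  if nd < dist.get(v, float("inf")):
                      dist[v] = nd
                      parent[v] = u
                      heapq.heappush(heap, (nd, v))
          return parent, dist

      if __name__ == "__main__":
          topology = {"A": {"B": 1, "C": 4}, "B": {"C": 2, "D": 6}, "C": {"D": 3}, "D": {}}
          print(shortest_path_tree(topology, "A"))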

  19. A technique to improve the esthetic aspects of CAD/CAM composite resin restorations.

    PubMed

    Rocca, Giovanni Tommaso; Bonnafous, François; Rizcalla, Nicolas; Krejci, Ivo

    2010-10-01

    Bonded indirect computer-aided design/computer-aided manufacturing (CAD/CAM) restorations are increasingly gaining popularity for the restoration of large defects in posterior teeth. In addition to ceramic blocks, composite resin blocks have been developed. Composite resin blocks may have improved mechanical properties, but have poor esthetics. Thus, an esthetic modification of the restoration after machine milling may be necessary. A step-by-step procedure for the external esthetic layering of a composite CAD/CAM restoration is described. This technique can be used to repair or modify any composite resin restoration.

  1. Computationally efficient algorithm for Gaussian Process regression in case of structured samples

    NASA Astrophysics Data System (ADS)

    Belyaev, M.; Burnaev, E.; Kapushev, Y.

    2016-04-01

    Surrogate modeling is widely used in many engineering problems. Data sets often have a Cartesian product structure (for instance, a factorial design of experiments with missing points). In such cases the size of the data set can be very large. Therefore, one of the most popular algorithms for approximation, Gaussian Process regression, can hardly be applied due to its computational complexity. In this paper a computationally efficient approach for constructing Gaussian Process regression in the case of data sets with Cartesian product structure is presented. Efficiency is achieved by exploiting the special structure of the data set and operations with tensors. The proposed algorithm has low computational as well as memory complexity compared to existing algorithms. In this work we also introduce a regularization procedure that takes the anisotropy of the data set into account and avoids degeneracy of the regression model.
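
    The structure being exploited can be made concrete for the simplest case of a full two-factor grid with no missing points: the kernel matrix is then a Kronecker product, and the large solve reduces to two small per-axis eigendecompositions. The sketch below is a minimal NumPy illustration under those assumptions (squared-exponential kernel, fixed hyperparameters); the paper's handling of missing points, anisotropy, and regularization is not reproduced.

      import numpy as np

      def sqexp(x, length):
          # One-dimensional squared-exponential kernel matrix.
          d = x[:, None] - x[None, :]
          return np.exp(-0.5 * (d / length) ** 2)

      def kron_gp_fit(x1, x2, Y, lengths, noise):
          # GP regression on a full grid x1 x x2, where K is the Kronecker product of K1 and K2.
          K1, K2 = sqexp(x1, lengths[0]), sqexp(x2, lengths[1])
          w1, Q1 = np.linalg.eigh(K1)
          w2, Q2 = np.linalg.eigh(K2)
          S = Q1.T @ Y @ Q2                       # rotate observations into the eigenbasis
          S = S / (np.outer(w1, w2) + noise)      # diagonal solve of (K + noise * I)
          alpha = Q1 @ S @ Q2.T                   # alpha solves (K + noise * I) a = vec(Y)
          fit = K1 @ alpha @ K2.T                 # in-sample posterior mean on the grid
          return alpha, fit

      if __name__ == "__main__":
          x1, x2 = np.linspace(0, 1, 30), np.linspace(0, 1, 40)
          Y = np.sin(4 * x1)[:, None] * np.cos(3 * x2)[None, :]
          alpha, fit = kron_gp_fit(x1, x2, Y, lengths=(0.2, 0.2), noise=1e-6)
          print("max abs in-sample error:", np.abs(fit - Y).max())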

  2. Developing a computationally efficient dynamic multilevel hybrid optimization scheme using multifidelity model interactions.

    SciTech Connect

    Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Castro, Joseph Pete Jr.; Giunta, Anthony Andrew

    2006-01-01

    Many engineering application problems use optimization algorithms in conjunction with numerical simulators to search for solutions. The formulation of relevant objective functions and constraints dictate possible optimization algorithms. Often, a gradient based approach is not possible since objective functions and constraints can be nonlinear, nonconvex, non-differentiable, or even discontinuous and the simulations involved can be computationally expensive. Moreover, computational efficiency and accuracy are desirable and also influence the choice of solution method. With the advent and increasing availability of massively parallel computers, computational speed has increased tremendously. Unfortunately, the numerical and model complexities of many problems still demand significant computational resources. Moreover, in optimization, these expenses can be a limiting factor since obtaining solutions often requires the completion of numerous computationally intensive simulations. Therefore, we propose a multifidelity optimization algorithm (MFO) designed to improve the computational efficiency of an optimization method for a wide range of applications. In developing the MFO algorithm, we take advantage of the interactions between multi fidelity models to develop a dynamic and computational time saving optimization algorithm. First, a direct search method is applied to the high fidelity model over a reduced design space. In conjunction with this search, a specialized oracle is employed to map the design space of this high fidelity model to that of a computationally cheaper low fidelity model using space mapping techniques. Then, in the low fidelity space, an optimum is obtained using gradient or non-gradient based optimization, and it is mapped back to the high fidelity space. In this paper, we describe the theory and implementation details of our MFO algorithm. We also demonstrate our MFO method on some example problems and on two applications: earth penetrators and
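
    One common way to realize the kind of low-fidelity/high-fidelity interaction described above is a first-order additive correction: the cheap model is shifted and tilted so that it matches the expensive model's value and gradient at the current point, and the corrected surrogate is minimized inside a simple trust region. The sketch below illustrates that generic idea with made-up objective functions; it is an additive-correction variant, not the space-mapping-based MFO algorithm of the report.

      import numpy as np
      from scipy.optimize import minimize

      def grad_fd(f, x, h=1e-6):
          # Forward-difference gradient (the expensive model may not expose derivatives).
          g, fx = np.zeros_like(x), f(x)
          for i in range(x.size):
              xp = x.copy(); xp[i] += h
              g[i] = (f(xp) - fx) / h
          return g

      def mf_opt(f_hi, f_lo, x0, radius=1.0, iters=10):
          x = np.asarray(x0, dtype=float)
          for _ in range(iters):
              d_val = f_hi(x) - f_lo(x)
              d_grad = grad_fd(f_hi, x) - grad_fd(f_lo, x)
              surrogate = lambda z: f_lo(z) + d_val + d_grad @ (z - x)
              bounds = [(xi - radius, xi + radius) for xi in x]   # crude trust region
              x = minimize(surrogate, x, method="L-BFGS-B", bounds=bounds).x
          return x, f_hi(x)

      if __name__ == "__main__":
          f_hi = lambda x: (x[0] - 1) ** 2 + 2 * (x[1] + 0.5) ** 2 + 0.3 * np.sin(5 * x[0])
          f_lo = lambda x: (x[0] - 1) ** 2 + 2 * (x[1] + 0.5) ** 2
          print(mf_opt(f_hi, f_lo, x0=[3.0, 3.0]))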

  3. Parallel-META: efficient metagenomic data analysis based on high-performance computation

    PubMed Central

    2012-01-01

    Background: The metagenomics method directly sequences and analyses genome information from microbial communities. There are usually hundreds of genomes or more from different microbial species in the same community, and the main computational tasks for metagenomic data analyses include taxonomical and functional component examination of all genomes in the microbial community. Metagenomic data analysis is both data- and computation-intensive, which requires extensive computational power. Most current metagenomic data analysis software was designed to be used on a single computer or single computer cluster, which cannot match the computational requirements of the fast-increasing number of large metagenomic projects. Therefore, advanced computational methods and pipelines have to be developed to cope with this need for efficient analyses. Results: In this paper, we propose Parallel-META, a GPU- and multi-core-CPU-based open-source pipeline for metagenomic data analysis, which enables the efficient and parallel analysis of multiple metagenomic datasets and the visualization of the results for multiple samples. In Parallel-META, the similarity-based database search is parallelized based on GPU computing and multi-core CPU computing optimization. Experiments have shown that Parallel-META achieves at least a 15-fold speed-up compared to the traditional metagenomic data analysis method, with the same accuracy of the results (http://www.computationalbioenergy.org/parallel-meta.html). Conclusion: The parallel processing of current metagenomic data is very promising: with a speed-up of 15 times and above, binning is no longer a very time-consuming process. Therefore, some deeper analysis of the metagenomic data, such as the comparison of different samples, becomes feasible in the pipeline, and some of these functionalities have been included in the Parallel-META pipeline. PMID:23046922

  4. CAD system for the assistance of a comparative reading for lung cancer using retrospective helical CT images

    NASA Astrophysics Data System (ADS)

    Kubo, Mitsuru; Yamamoto, Takuya; Kawata, Yoshiki; Niki, Noboru; Eguchi, Kenji; Ohmatsu, Hironobu; Kakinuma, Ryutaro; Kaneko, Masahiro; Kusumoto, Masahiko; Moriyama, Noriyuki; Mori, Kiyoshi; Nishiyama, Hiroyuki

    2001-07-01

    The objective of our study is to develop a new computer-aided diagnosis (CAD) system to effectively support comparative reading using serial helical CT images for lung cancer screening without using film display. The position of pulmonary shadows sometimes differs between the serial helical CT images because the size and shape of the lung change with the amount of inspired air. We analyzed the motion of the pulmonary structures using 17 pairs of serial cases that differ in the amount of inspired air. The algorithm consists of an extraction process for regions of interest such as the lung, heart and blood vessel regions, using thresholding and the fuzzy c-means method, and a comparison process for each region in the serial CT images using template matching. We validated the efficiency of this algorithm by applying it to images of 60 subjects. The algorithm could compare the slice images correctly in most combinations from the physicians' point of view. The experimental results of the proposed algorithm indicate that our CAD system without film display is useful for increasing the efficiency of the mass screening process.

  5. A new computational scheme for the Dirac-Hartree-Fock method employing an efficient integral algorithm

    NASA Astrophysics Data System (ADS)

    Yanai, Takeshi; Nakajima, Takahito; Ishikawa, Yasuyuki; Hirao, Kimihiko

    2001-04-01

    A highly efficient computational scheme for four-component relativistic ab initio molecular orbital (MO) calculations over generally contracted spherical harmonic Gaussian-type spinors (GTSs) is presented. Benchmark calculations for the ground states of the group IB hydrides, MH, and dimers, M2 (M=Cu, Ag, and Au), by the Dirac-Hartree-Fock (DHF) method were performed with a new four-component relativistic ab initio MO program package oriented toward contracted GTSs. The relativistic electron repulsion integrals (ERIs), the major bottleneck in routine DHF calculations, are calculated efficiently employing the fast ERI routine SPHERICA, exploiting the general contraction scheme, and the accompanying coordinate expansion method developed by Ishida. Illustrative calculations clearly show the efficiency of our computational scheme.

  6. A uniform algebraically-based approach to computational physics and efficient programming

    NASA Astrophysics Data System (ADS)

    Raynolds, James; Mullin, Lenore

    2007-03-01

    We present an approach to computational physics in which a common formalism is used both to express the physical problem and to describe the underlying details of how the computation is realized on arbitrary multiprocessor/memory computer architectures. This formalism is the embodiment of a generalized algebra of multi-dimensional arrays (A Mathematics of Arrays), and an efficient computational implementation is obtained through the composition of array indices (the psi-calculus) of algorithms defined using matrices, tensors, and arrays in general. The power of this approach arises from the fact that multiple computational steps (e.g. Fourier Transform followed by convolution, etc.) can be algebraically composed and reduced to a simplified expression (i.e. Operational Normal Form) that, when directly translated into computer code, can be mathematically proven to be the most efficient implementation with the least number of temporary variables. This approach will be illustrated in the context of a cache-optimized FFT that outperforms or is competitive with established library routines: ESSL, FFTW, IMSL, NAG.

  7. Computer-Aided Design in Further Education.

    ERIC Educational Resources Information Center

    Ingham, Peter, Ed.

    This publication updates the 1982 occasional paper that was intended to foster staff awareness and assist colleges in Great Britain considering the use of computer-aided design (CAD) material in engineering courses. The paper begins by defining CAD and its place in the Integrated Business System with a brief discussion of the effect of CAD on the…

  8. Computer-aided detection of early interstitial lung diseases using low-dose CT images

    NASA Astrophysics Data System (ADS)

    Park, Sang Cheol; Tan, Jun; Wang, Xingwei; Lederman, Dror; Leader, Joseph K.; Kim, Soo Hyung; Zheng, Bin

    2011-02-01

    This study aims to develop a new computer-aided detection (CAD) scheme to detect early interstitial lung disease (ILD) using low-dose computed tomography (CT) examinations. The CAD scheme classifies each pixel depicted on the segmented lung areas into positive or negative groups for ILD using a mesh-grid-based region growth method and a multi-feature-based artificial neural network (ANN). A genetic algorithm was applied to select optimal image features and the ANN structure. In testing each CT examination, only pixels selected by the mesh-grid region growth method were analyzed and classified by the ANN to improve computational efficiency. All unselected pixels were classified as negative for ILD. After classifying all pixels into the positive and negative groups, CAD computed a detection score based on the ratio of the number of positive pixels to all pixels in the segmented lung areas, which indicates the likelihood of the test case being positive for ILD. When applied to an independent testing dataset of 15 positive and 15 negative cases, the CAD scheme yielded an area under the receiver operating characteristic curve (AUC) of 0.884 ± 0.064 and 80.0% sensitivity at 85.7% specificity. The results demonstrated the feasibility of applying the CAD scheme to automatically detect early ILD using low-dose CT examinations.
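
    The case-level decision described above reduces to a simple ratio once the per-pixel classification is available. The sketch below shows only that final step, with a random array standing in for the ANN output and a toy lung mask; the mesh-grid region growth, feature extraction, and genetic-algorithm optimization are not reproduced, and the case threshold is illustrative.

      import numpy as np

      def ild_detection_score(pixel_prob, lung_mask, pixel_threshold=0.5):
          # Fraction of segmented-lung pixels classified positive for ILD; this is the
          # detection score the abstract uses as the case-level likelihood of ILD.
          positive = (pixel_prob >= pixel_threshold) & lung_mask
          return positive.sum() / lung_mask.sum()

      if __name__ == "__main__":
          rng = np.random.default_rng(3)
          lung_mask = np.zeros((128, 128), dtype=bool)
          lung_mask[20:108, 10:118] = True                 # toy lung segmentation
          pixel_prob = rng.uniform(size=(128, 128))        # stand-in for the ANN output
          score = ild_detection_score(pixel_prob, lung_mask)
          print("detection score:", score, "case positive:", score > 0.4)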

  9. Getting into CAD at the Savannah River Plant

    SciTech Connect

    Scoggins, W.R.

    1984-01-01

    In 1978, the Savannah River Plant (SRP) Project Department was producing approximately 1100 new drawings and 3000 revisions per year, with a force of 30 draftsmen. Design services for the Plant were increasing due to changing programs, obsolescent equipment replacements and added security requirements. This increasing workload greatly increased the engineering drafting backlog. At the same time, many draftsmen were approaching retirement age and were to be replaced with unskilled draftsman trainees. A proposal was presented to management to acquire a Computer Aided Drafting (CAD) system to produce instrument and electrical drawings which comprised 30% of the work load.

  10. CAD/CAM and scientific data management at Dassault

    NASA Technical Reports Server (NTRS)

    Bohn, P.

    1984-01-01

    The history of CAD/CAM and scientific data management at Dassault is presented. Emphasis is put on the targets of the now commercially available software CATIA. The links with scientific computations such as aerodynamics and structural analysis are presented. Comments are made on the principles followed within the company. The consequences of the approximate nature of scientific data are examined. The main consequence of the new history function is its protection against copying or alteration. Future plans at Dassault for scientific data appear to run in the opposite direction to some general tendencies.

  11. A computationally efficient approach for hidden-Markov model-augmented fingerprint-based positioning

    NASA Astrophysics Data System (ADS)

    Roth, John; Tummala, Murali; McEachen, John

    2016-09-01

    This paper presents a computationally efficient approach for mobile subscriber position estimation in wireless networks. A method of data scaling assisted by timing adjust is introduced in fingerprint-based location estimation under a framework which allows for minimising computational cost. The proposed method maintains a comparable level of accuracy to the traditional case where no data scaling is used and is evaluated in a simulated environment under varying channel conditions. The proposed scheme is studied when it is augmented by a hidden-Markov model to match the internal parameters to the prevailing channel conditions, thus minimising computational cost while maximising accuracy. Furthermore, the timing adjust quantity, available in modern wireless signalling messages, is shown to further reduce computational cost and increase accuracy when available. The results may be seen as a significant step towards integrating advanced position-based modelling with power-sensitive mobile devices.
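
    An illustrative sketch, not the paper's algorithm: nearest-neighbour fingerprint matching in which the received-signal-strength vectors are coarsely quantised (a stand-in for the data scaling) and a timing-adjust-derived range is used to pre-filter candidate fingerprints; every name and threshold below is an assumption.

```python
# Illustrative sketch only: cheap fingerprint matching with (i) coarse quantisation
# of RSS values ("data scaling") and (ii) a timing-adjust range ring pre-filter.
import numpy as np

def quantise(rss, step=5.0):
    return np.round(rss / step)                      # coarse RSS levels

def locate(rss_obs, db_rss, db_pos, db_range, ta_range=None, tol=75.0, step=5.0):
    """db_rss: (M, A) fingerprint RSS, db_pos: (M, 2) positions,
    db_range: (M,) distance of each fingerprint from the serving base station."""
    cand = np.arange(len(db_rss))
    if ta_range is not None:                         # timing-adjust pre-filter
        cand = cand[np.abs(db_range - ta_range) < tol]
    q_obs = quantise(rss_obs, step)
    q_db = quantise(db_rss[cand], step)
    d = np.abs(q_db - q_obs).sum(axis=1)             # cheap L1 distance on scaled data
    return db_pos[cand[np.argmin(d)]]
```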

  12. Different CAD/CAM-processing routes for zirconia restorations: influence on fitting accuracy.

    PubMed

    Kohorst, Philipp; Junghanns, Janet; Dittmer, Marc P; Borchers, Lothar; Stiesch, Meike

    2011-08-01

    The aim of the present in vitro study was to evaluate the influence of different processing routes on the fitting accuracy of four-unit zirconia fixed dental prostheses (FDPs) fabricated by computer-aided design/computer-aided manufacturing (CAD/CAM). Three groups of zirconia frameworks with ten specimens each were fabricated. Frameworks of one group (CerconCAM) were produced by means of a laboratory CAM-only system. The other frameworks were made with different CAD/CAM systems; on the one hand by in-laboratory production (CerconCAD/CAM) and on the other hand by centralized production in a milling center (Compartis) after forwarding geometrical data. Frameworks were then veneered with the recommended ceramics, and marginal accuracy was determined using a replica technique. Horizontal marginal discrepancy, vertical marginal discrepancy, absolute marginal discrepancy, and marginal gap were evaluated. Statistical analyses were performed by one-way analysis of variance (ANOVA), with the level of significance chosen at 0.05. Mean horizontal discrepancies ranged between 22 μm (CerconCAM) and 58 μm (Compartis), vertical discrepancies ranged between 63 μm (CerconCAD/CAM) and 162 μm (CerconCAM), and absolute marginal discrepancies ranged between 94 μm (CerconCAD/CAM) and 181 μm (CerconCAM). The marginal gap varied between 72 μm (CerconCAD/CAM) and 112 μm (CerconCAM, Compartis). Statistical analysis revealed that, with all measurements, the marginal accuracy of the zirconia FDPs was significantly influenced by the processing route used (p < 0.05). Within the limitations of this study, all restorations showed a clinically acceptable marginal accuracy; however, the results suggest that the CAD/CAM systems are more precise than the CAM-only system for the manufacture of four-unit FDPs. PMID:20495937

  13. From Artisanal to CAD-CAM Blocks: State of the Art of Indirect Composites.

    PubMed

    Mainjot, A K; Dupont, N M; Oudkerk, J C; Dewael, T Y; Sadoun, M J

    2016-05-01

    Indirect composites have been undergoing an impressive evolution over the last few years. Specifically, recent developments in computer-aided design-computer-aided manufacturing (CAD-CAM) blocks have been associated with new polymerization modes, innovative microstructures, and different compositions. All these recent breakthroughs have introduced important gaps among the properties of the different materials. This critical state-of-the-art review analyzes the strengths and weaknesses of the different varieties of CAD-CAM composite materials, especially as compared with direct and artisanal indirect composites. Indeed, new polymerization modes used for CAD-CAM blocks, especially high temperature (HT) and, most of all, high temperature-high pressure (HT-HP), are shown to significantly increase the degree of conversion in comparison with light-cured composites. Industrial processes also allow for the augmentation of the filler content and for the realization of more homogeneous structures with fewer flaws. In addition, due to their increased degree of conversion and their different monomer composition, some CAD-CAM blocks are more advantageous in terms of toxicity and monomer release. Finally, materials with a polymer-infiltrated ceramic network (PICN) microstructure exhibit higher flexural strength and a more favorable elasticity modulus than materials with a dispersed filler microstructure. Consequently, some high-performance composite CAD-CAM blocks, particularly experimental PICNs, can now rival glass-ceramics, such as lithium-disilicate glass-ceramics, for use as bonded partial restorations and crowns on natural teeth and implants. Being able to be manufactured in very low thicknesses, they offer the possibility of developing innovative minimally invasive treatment strategies, such as "no prep" treatment of worn dentition. Current issues are related to the study of bonding and wear properties of the different varieties of CAD-CAM composites. There is also a crucial

  14. Different CAD/CAM-processing routes for zirconia restorations: influence on fitting accuracy.

    PubMed

    Kohorst, Philipp; Junghanns, Janet; Dittmer, Marc P; Borchers, Lothar; Stiesch, Meike

    2011-08-01

    The aim of the present in vitro study was to evaluate the influence of different processing routes on the fitting accuracy of four-unit zirconia fixed dental prostheses (FDPs) fabricated by computer-aided design/computer-aided manufacturing (CAD/CAM). Three groups of zirconia frameworks with ten specimens each were fabricated. Frameworks of one group (CerconCAM) were produced by means of a laboratory CAM-only system. The other frameworks were made with different CAD/CAM systems; on the one hand by in-laboratory production (CerconCAD/CAM) and on the other hand by centralized production in a milling center (Compartis) after forwarding geometrical data. Frameworks were then veneered with the recommended ceramics, and marginal accuracy was determined using a replica technique. Horizontal marginal discrepancy, vertical marginal discrepancy, absolute marginal discrepancy, and marginal gap were evaluated. Statistical analyses were performed by one-way analysis of variance (ANOVA), with the level of significance chosen at 0.05. Mean horizontal discrepancies ranged between 22 μm (CerconCAM) and 58 μm (Compartis), vertical discrepancies ranged between 63 μm (CerconCAD/CAM) and 162 μm (CerconCAM), and absolute marginal discrepancies ranged between 94 μm (CerconCAD/CAM) and 181 μm (CerconCAM). The marginal gap varied between 72 μm (CerconCAD/CAM) and 112 μm (CerconCAM, Compartis). Statistical analysis revealed that, with all measurements, the marginal accuracy of the zirconia FDPs was significantly influenced by the processing route used (p < 0.05). Within the limitations of this study, all restorations showed a clinically acceptable marginal accuracy; however, the results suggest that the CAD/CAM systems are more precise than the CAM-only system for the manufacture of four-unit FDPs.

  15. Computationally efficient analysis of extraordinary optical transmission through infinite and truncated subwavelength hole arrays

    NASA Astrophysics Data System (ADS)

    Camacho, Miguel; Boix, Rafael R.; Medina, Francisco

    2016-06-01

    The authors present a computationally efficient technique for the analysis of extraordinary transmission through both infinite and truncated periodic arrays of slots in perfect conductor screens of negligible thickness. An integral equation is obtained for the tangential electric field in the slots both in the infinite case and in the truncated case. The unknown functions are expressed as linear combinations of known basis functions, and the unknown weight coefficients are determined by means of Galerkin's method. The coefficients of Galerkin's matrix are obtained in the spatial domain in terms of double finite integrals containing the Green's functions (which, in the infinite case, are efficiently computed by means of Ewald's method) times cross-correlations between both the basis functions and their divergences. The computation in the spatial domain is an efficient alternative to the direct computation in the spectral domain since this latter approach involves the determination of either slowly convergent double infinite summations (infinite case) or slowly convergent double infinite integrals (truncated case). The results obtained are validated by means of commercial software, and it is found that the integral equation technique presented in this paper is at least two orders of magnitude faster than commercial software for a similar accuracy. It is also shown that phenomena related to periodicity, such as extraordinary transmission and Wood's anomaly, start to appear in the truncated case for arrays with more than 100 (10 × 10) slots.
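
    As a toy illustration of how Galerkin's method turns an integral equation into a small linear system for the weight coefficients, the sketch below solves a 1-D Fredholm equation of the second kind with pulse basis functions; it is not the authors' electromagnetic formulation, and the kernel and right-hand side are made up.

```python
# Toy 1-D Galerkin illustration (not the authors' formulation): solve
# phi(x) - int_0^1 K(x, y) phi(y) dy = f(x) with pulse (piecewise-constant) basis.
import numpy as np

N = 40
edges = np.linspace(0.0, 1.0, N + 1)
centers = 0.5 * (edges[:-1] + edges[1:])
h = 1.0 / N

K = lambda x, y: np.exp(-np.abs(x - y))     # example kernel
f = lambda x: np.sin(np.pi * x)             # example right-hand side

# Galerkin matrix with pulse basis: A_mn = <b_m, b_n> - <b_m, K b_n> (midpoint rule)
A = h * np.eye(N) - h * h * K(centers[:, None], centers[None, :])
b = h * f(centers)
coeffs = np.linalg.solve(A, b)              # weight coefficients of the expansion
```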

  16. Can computational efficiency alone drive the evolution of modularity in neural networks?

    PubMed Central

    Tosh, Colin R.

    2016-01-01

    Some biologists have abandoned the idea that computational efficiency in processing multipart tasks or input sets alone drives the evolution of modularity in biological networks. A recent study confirmed that small modular (neural) networks are relatively computationally inefficient but large modular networks are slightly more efficient than non-modular ones. The present study determines whether these efficiency advantages with network size can drive the evolution of modularity in networks whose connective architecture can evolve. The answer is no, but the reason why is interesting. All simulations (run in a wide variety of parameter states) involving gradualistic connective evolution end in non-modular local attractors. Thus, while a high-performance modular attractor exists, such regions cannot be reached by gradualistic evolution. Non-gradualistic evolutionary simulations in which multi-modularity is obtained through duplication of existing architecture appear viable. Fundamentally, this study indicates that computational efficiency alone does not drive the evolution of modularity, even in large biological networks, but it may still be a viable mechanism when networks evolve by non-gradualistic means. PMID:27573614

  17. Can computational efficiency alone drive the evolution of modularity in neural networks?

    PubMed

    Tosh, Colin R

    2016-08-30

    Some biologists have abandoned the idea that computational efficiency in processing multipart tasks or input sets alone drives the evolution of modularity in biological networks. A recent study confirmed that small modular (neural) networks are relatively computationally inefficient but large modular networks are slightly more efficient than non-modular ones. The present study determines whether these efficiency advantages with network size can drive the evolution of modularity in networks whose connective architecture can evolve. The answer is no, but the reason why is interesting. All simulations (run in a wide variety of parameter states) involving gradualistic connective evolution end in non-modular local attractors. Thus, while a high-performance modular attractor exists, such regions cannot be reached by gradualistic evolution. Non-gradualistic evolutionary simulations in which multi-modularity is obtained through duplication of existing architecture appear viable. Fundamentally, this study indicates that computational efficiency alone does not drive the evolution of modularity, even in large biological networks, but it may still be a viable mechanism when networks evolve by non-gradualistic means.

  18. Reducing Vehicle Weight and Improving U.S. Energy Efficiency Using Integrated Computational Materials Engineering

    NASA Astrophysics Data System (ADS)

    Joost, William J.

    2012-09-01

    Transportation accounts for approximately 28% of U.S. energy consumption with the majority of transportation energy derived from petroleum sources. Many technologies such as vehicle electrification, advanced combustion, and advanced fuels can reduce transportation energy consumption by improving the efficiency of cars and trucks. Lightweight materials are another important technology that can improve passenger vehicle fuel efficiency by 6-8% for each 10% reduction in weight while also making electric and alternative vehicles more competitive. Despite the opportunities for improved efficiency, widespread deployment of lightweight materials for automotive structures is hampered by technology gaps most often associated with performance, manufacturability, and cost. In this report, the impact of reduced vehicle weight on energy efficiency is discussed with a particular emphasis on quantitative relationships determined by several researchers. The most promising lightweight materials systems are described along with a brief review of the most significant technical barriers to their implementation. For each material system, the development of accurate material models is critical to support simulation-intensive processing and structural design for vehicles; improved models also contribute to an integrated computational materials engineering (ICME) approach for addressing technical barriers and accelerating deployment. The value of computational techniques is described by considering recent ICME and computational materials science success stories with an emphasis on applying problem-specific methods.

  19. An efficient sparse matrix multiplication scheme for the CYBER 205 computer

    NASA Technical Reports Server (NTRS)

    Lambiotte, Jules J., Jr.

    1988-01-01

    This paper describes the development of an efficient algorithm for computing the product of a matrix and vector on a CYBER 205 vector computer. The desire to provide software which allows the user to choose between the often conflicting goals of minimizing central processing unit (CPU) time or storage requirements has led to a diagonal-based algorithm in which one of four types of storage is selected for each diagonal. The candidate storage types employed were chosen to be efficient on the CYBER 205 for diagonals whose nonzero structure is dense, moderately sparse, very sparse and short, or very sparse and long; however, for many densities, no storage type is most efficient with respect to both resource requirements, and a trade-off must be made. For each diagonal, an initialization subroutine estimates the CPU time and storage required for each storage type based on results from previously performed numerical experimentation. These requirements are adjusted by weights provided by the user which reflect the relative importance the user places on the two resources. The adjusted resource requirements are then compared to select the most efficient storage and computational scheme.
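
    A conceptual sketch of the per-diagonal selection idea in a modern setting (NumPy rather than CYBER 205 vector code); the cost models and weights are illustrative placeholders, not the values derived from the paper's timing experiments.

```python
# Conceptual sketch (not the CYBER 205 code): for each nonzero diagonal, estimate
# CPU and storage cost for two candidate storage types, weight them by
# user-supplied importances, and keep the cheaper one; also a reference
# diagonal-based matrix-vector product.
import numpy as np

def choose_storage(diag, w_cpu=0.5, w_mem=0.5):
    n = diag.size
    nnz = np.count_nonzero(diag)
    dense  = {"cpu": n,       "mem": n}            # store every entry
    sparse = {"cpu": 3 * nnz, "mem": 2 * nnz}      # indices + values, gather/scatter
    cost = lambda c: w_cpu * c["cpu"] + w_mem * c["mem"]
    return "dense" if cost(dense) <= cost(sparse) else "sparse"

def matvec_by_diagonals(A, x):
    """Reference diagonal-based product y = A @ x for a square matrix A."""
    n = A.shape[0]
    y = np.zeros(n)
    for k in range(-n + 1, n):
        d = np.diagonal(A, offset=k)
        if not d.any():
            continue                               # skip all-zero diagonals
        if k >= 0:
            y[:n - k] += d * x[k:]
        else:
            y[-k:] += d * x[:n + k]
    return y
```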

  20. Efficient scatter model for simulation of ultrasound images from computed tomography data

    NASA Astrophysics Data System (ADS)

    D'Amato, J. P.; Lo Vercio, L.; Rubi, P.; Fernandez Vera, E.; Barbuzza, R.; Del Fresno, M.; Larrabide, I.

    2015-12-01

    Background and motivation: Real-time ultrasound simulation refers to the process of computationally creating fully synthetic ultrasound images instantly. Due to the high value of specialized low-cost training for healthcare professionals, there is a growing interest in the use of this technology and the development of high-fidelity systems that simulate the acquisition of echographic images. The objective is to create an efficient and reproducible simulator that can run either on notebooks or desktops using low-cost devices. Materials and methods: We present an interactive ultrasound simulator based on CT data. This simulator is based on ray-casting and provides real-time interaction capabilities. The simulation of scattering that is coherent with the transducer position in real time is also introduced. Such noise is produced using a simplified model of multiplicative noise and convolution with point spread functions (PSF) tailored for this purpose. Results: The computational efficiency of scattering map generation was revised with improved performance. This allowed a more efficient simulation of coherent scattering in the synthetic echographic images while providing highly realistic results. We describe some quality and performance metrics to validate these results, where a performance of up to 55 fps was achieved. Conclusion: The proposed technique for real-time scattering modeling provides realistic yet computationally efficient scatter distributions. The error between the original image and the simulated scattering image was compared for the proposed method and the state of the art, showing negligible differences in its distribution.
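
    A simplified sketch of the scatter model described above, assuming a CT-derived echogenicity map and using a separable Gaussian point spread function as a stand-in for the tailored PSF; parameter values are illustrative only.

```python
# Simplified sketch (assumed details, not the authors' implementation):
# multiplicative noise on a CT-derived echogenicity map, then convolution with a
# point spread function to mimic speckle texture.
import numpy as np
from scipy.signal import fftconvolve

def speckle_map(echogenicity, sigma_noise=0.4, psf_sigma=(1.0, 2.5), rng=None):
    rng = np.random.default_rng() if rng is None else rng
    noisy = echogenicity * (1.0 + sigma_noise * rng.standard_normal(echogenicity.shape))
    # separable Gaussian PSF as a stand-in for the tailored axial/lateral PSF
    ax = np.arange(-8, 9)
    psf = np.outer(np.exp(-ax**2 / (2 * psf_sigma[0]**2)),
                   np.exp(-ax**2 / (2 * psf_sigma[1]**2)))
    psf /= psf.sum()
    return fftconvolve(noisy, psf, mode="same")
```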

  1. Computer-aided design for metabolic engineering.

    PubMed

    Fernández-Castané, Alfred; Fehér, Tamás; Carbonell, Pablo; Pauthenier, Cyrille; Faulon, Jean-Loup

    2014-12-20

    The development and application of biotechnology-based strategies has had a great socio-economic impact and is likely to play a crucial role in the foundation of more sustainable and efficient industrial processes. Within biotechnology, metabolic engineering aims at the directed improvement of cellular properties, often with the goal of synthesizing a target chemical compound. The use of computer-aided design (CAD) tools, along with the continuously emerging advanced genetic engineering techniques, has allowed metabolic engineering to broaden and streamline the process of heterologous compound production. In this work, we review the CAD tools available for metabolic engineering with an emphasis on retrosynthesis methodologies. Recent advances in genetic engineering strategies for pathway implementation and optimization are also reviewed, as well as a range of bioanalytical tools to validate in silico predictions. A case study applying retrosynthesis is presented as an experimental verification of the output from Retropath, the first complete automated computational pipeline applicable to metabolic engineering. Applying this CAD pipeline, together with genetic reassembly and optimization of culture conditions, led to improved production of the plant flavonoid pinocembrin. Coupling CAD tools with advanced genetic engineering strategies and bioprocess optimization is crucial for enhanced product yields and will be of great value for the development of non-natural products through sustainable biotechnological processes.

  2. Computer-aided design for metabolic engineering.

    PubMed

    Fernández-Castané, Alfred; Fehér, Tamás; Carbonell, Pablo; Pauthenier, Cyrille; Faulon, Jean-Loup

    2014-12-20

    The development and application of biotechnology-based strategies has had a great socio-economic impact and is likely to play a crucial role in the foundation of more sustainable and efficient industrial processes. Within biotechnology, metabolic engineering aims at the directed improvement of cellular properties, often with the goal of synthesizing a target chemical compound. The use of computer-aided design (CAD) tools, along with the continuously emerging advanced genetic engineering techniques, has allowed metabolic engineering to broaden and streamline the process of heterologous compound production. In this work, we review the CAD tools available for metabolic engineering with an emphasis on retrosynthesis methodologies. Recent advances in genetic engineering strategies for pathway implementation and optimization are also reviewed, as well as a range of bioanalytical tools to validate in silico predictions. A case study applying retrosynthesis is presented as an experimental verification of the output from Retropath, the first complete automated computational pipeline applicable to metabolic engineering. Applying this CAD pipeline, together with genetic reassembly and optimization of culture conditions, led to improved production of the plant flavonoid pinocembrin. Coupling CAD tools with advanced genetic engineering strategies and bioprocess optimization is crucial for enhanced product yields and will be of great value for the development of non-natural products through sustainable biotechnological processes. PMID:24704607

  3. Does computer-aided surgical simulation improve efficiency in bimaxillary orthognathic surgery?

    PubMed

    Schwartz, H C

    2014-05-01

    The purpose of this study was to compare the efficiency of bimaxillary orthognathic surgery using computer-aided surgical simulation (CASS), with cases planned using traditional methods. Total doctor time was used to measure efficiency. While costs vary widely in different localities and in different health schemes, time is a valuable and limited resource everywhere. For this reason, total doctor time is a more useful measure of efficiency than is cost. Even though we use CASS primarily for planning more complex cases at the present time, this study showed an average saving of 60min for each case. In the context of a department that performs 200 bimaxillary cases each year, this would represent a saving of 25 days of doctor time, if applied to every case. It is concluded that CASS offers great potential for improving efficiency when used in the planning of bimaxillary orthognathic surgery. It saves significant doctor time that can be applied to additional surgical work.

  4. Automated knowledge base development from CAD/CAE databases

    NASA Technical Reports Server (NTRS)

    Wright, R. Glenn; Blanchard, Mary

    1988-01-01

    Knowledge base development requires a substantial investment in time, money, and resources in order to capture the knowledge and information necessary for anything other than trivial applications. This paper addresses a means to integrate the design and knowledge base development process through automated knowledge base development from CAD/CAE databases and files. Benefits of this approach include the development of a more efficient means of knowledge engineering, resulting in the timely creation of large knowledge based systems that are inherently free of error.

  5. An efficient parallel implementation of explicit multirate Runge–Kutta schemes for discontinuous Galerkin computations

    SciTech Connect

    Seny, Bruno; Lambrechts, Jonathan; Toulorge, Thomas; Legat, Vincent; Remacle, Jean-François

    2014-01-01

    Although explicit time integration schemes require small computational efforts per time step, their efficiency is severely restricted by their stability limits. Indeed, the multi-scale nature of some physical processes combined with highly unstructured meshes can lead some elements to impose a severely small stable time step for a global problem. Multirate methods offer a way to increase the global efficiency by gathering grid cells in appropriate groups under local stability conditions. These methods are well suited to the discontinuous Galerkin framework. The parallelization of the multirate strategy is challenging because grid cells have different workloads. The computational cost is different for each sub-time step depending on the elements involved and a classical partitioning strategy is not adequate any more. In this paper, we propose a solution that makes use of multi-constraint mesh partitioning. It tends to minimize the inter-processor communications, while ensuring that the workload is almost equally shared by every computer core at every stage of the algorithm. Particular attention is given to the simplicity of the parallel multirate algorithm while minimizing computational and communication overheads. Our implementation makes use of the MeTiS library for mesh partitioning and the Message Passing Interface for inter-processor communication. Performance analyses for two and three-dimensional practical applications confirm that multirate methods preserve important computational advantages of explicit methods up to a significant number of processors.

  6. A strategy for improved computational efficiency of the method of anchored distributions

    NASA Astrophysics Data System (ADS)

    Over, Matthew William; Yang, Yarong; Chen, Xingyuan; Rubin, Yoram

    2013-06-01

    This paper proposes a strategy for improving the computational efficiency of model inversion using the method of anchored distributions (MAD) by "bundling" similar model parametrizations in the likelihood function. Inferring the likelihood function typically requires a large number of forward model (FM) simulations for each possible model parametrization; as a result, the process is quite expensive. To ease this prohibitive cost, we present an approximation for the likelihood function called bundling that relaxes the requirement for high quantities of FM simulations. This approximation redefines the conditional statement of the likelihood function as the probability that a set of similar model parametrizations (a "bundle") replicates the field measurements, which we show is neither a model reduction nor a sampling approach to improving the computational efficiency of model inversion. To evaluate the effectiveness of these modifications, we compare the quality of predictions and computational cost of bundling relative to a baseline MAD inversion of 3-D flow and transport model parameters. Additionally, to aid understanding of the implementation, we provide a tutorial for bundling in the form of a sample data set and script for the R statistical computing language. For our synthetic experiment, bundling achieved a 35% reduction in overall computational cost and had a limited negative impact on predicted probability distributions of the model parameters. Strategies for minimizing error in the bundling approximation, for enforcing similarity among the sets of model parametrizations, and for identifying convergence of the likelihood function are also presented.

  7. Efficient computation of partial expected value of sample information using Bayesian approximation.

    PubMed

    Brennan, Alan; Kharroubi, Samer A

    2007-01-01

    We describe a novel process for transforming the efficiency of partial expected value of sample information (EVSI) computation in decision models. Traditional EVSI computation begins with Monte Carlo sampling to produce new simulated data-sets with a specified sample size. Each data-set is synthesised with prior information to give posterior distributions for model parameters, either via analytic formulae or a further Markov Chain Monte Carlo (MCMC) simulation. A further 'inner level' of Monte Carlo sampling then quantifies the effect of the simulated data on the decision. This paper describes a novel form of Bayesian Laplace approximation, which can replace both the Bayesian updating and the inner Monte Carlo sampling to compute the posterior expectation of a function. We compare the accuracy of EVSI estimates in two case study cost-effectiveness models using first- and second-order versions of our approximation formula, the approximation of Tierney and Kadane, and traditional Monte Carlo. Computational efficiency gains depend on the complexity of the net benefit functions, the number of inner-level Monte Carlo samples used, and the requirement or otherwise for MCMC methods to produce the posterior distributions. This methodology provides a new and valuable approach for EVSI computation in health economic decision models and potential wider benefits in many fields requiring Bayesian approximation. PMID:16945438
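
    For orientation, the sketch below implements the traditional two-level (nested) Monte Carlo EVSI computation that the paper seeks to accelerate, using a conjugate normal model so the Bayesian update is analytic; the Laplace approximation itself is not reproduced, and all numbers are illustrative.

```python
# Sketch of the *traditional* two-level EVSI computation described above, with a
# conjugate normal model so the posterior update is analytic. All values are toy.
import numpy as np

rng = np.random.default_rng(1)
mu0, sd0, sd_obs, n_new = 0.5, 0.3, 1.0, 50        # prior on theta, data noise, sample size
nb = lambda theta: np.column_stack([np.zeros_like(theta), 2000.0 * theta - 800.0])

# Expected net benefit with current information
theta_prior = rng.normal(mu0, sd0, 100_000)
env_current = nb(theta_prior).mean(axis=0).max()

# Outer loop: simulate future data sets; the inner expectation uses posterior sampling
evsi_samples = []
for _ in range(2_000):
    theta_true = rng.normal(mu0, sd0)
    xbar = rng.normal(theta_true, sd_obs / np.sqrt(n_new))
    prec = 1 / sd0**2 + n_new / sd_obs**2
    post_mean = (mu0 / sd0**2 + n_new * xbar / sd_obs**2) / prec
    post_sd = np.sqrt(1 / prec)
    theta_post = rng.normal(post_mean, post_sd, 2_000)   # "inner level" sampling
    evsi_samples.append(nb(theta_post).mean(axis=0).max())

evsi = np.mean(evsi_samples) - env_current
print(f"EVSI estimate: {evsi:.1f}")
```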

  8. Efficient O(N) recursive computation of the operational space inertial matrix

    SciTech Connect

    Lilly, K.W.; Orin, D.E.

    1993-09-01

    The operational space inertia matrix Λ reflects the dynamic properties of a robot manipulator to its tip. In the control domain, it may be used to decouple force and/or motion control about the manipulator workspace axes. The matrix Λ also plays an important role in the development of efficient algorithms for the dynamic simulation of closed-chain robotic mechanisms, including simple closed-chain mechanisms such as multiple manipulator systems and walking machines. The traditional approach used to compute Λ has a computational complexity of O(N^3) for an N-degree-of-freedom manipulator. This paper presents the development of a recursive algorithm for computing the operational space inertia matrix (OSIM) that reduces the computational complexity to O(N). This algorithm, the inertia propagation method, is based on a single recursion that begins at the base of the manipulator and progresses out to the last link. Also applicable to redundant systems and mechanisms with multiple-degree-of-freedom joints, the inertia propagation method is the most efficient method known for computing Λ for N ≥ 6. The numerical accuracy of the algorithm is discussed for a PUMA 560 robot with a fixed base.

  9. An efficient parallel implementation of explicit multirate Runge-Kutta schemes for discontinuous Galerkin computations

    NASA Astrophysics Data System (ADS)

    Seny, Bruno; Lambrechts, Jonathan; Toulorge, Thomas; Legat, Vincent; Remacle, Jean-François

    2014-01-01

    Although explicit time integration schemes require small computational efforts per time step, their efficiency is severely restricted by their stability limits. Indeed, the multi-scale nature of some physical processes combined with highly unstructured meshes can lead some elements to impose a severely small stable time step for a global problem. Multirate methods offer a way to increase the global efficiency by gathering grid cells in appropriate groups under local stability conditions. These methods are well suited to the discontinuous Galerkin framework. The parallelization of the multirate strategy is challenging because grid cells have different workloads. The computational cost is different for each sub-time step depending on the elements involved and a classical partitioning strategy is not adequate any more. In this paper, we propose a solution that makes use of multi-constraint mesh partitioning. It tends to minimize the inter-processor communications, while ensuring that the workload is almost equally shared by every computer core at every stage of the algorithm. Particular attention is given to the simplicity of the parallel multirate algorithm while minimizing computational and communication overheads. Our implementation makes use of the MeTiS library for mesh partitioning and the Message Passing Interface for inter-processor communication. Performance analyses for two and three-dimensional practical applications confirm that multirate methods preserve important computational advantages of explicit methods up to a significant number of processors.

  10. Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.; Dyson, Rodger W.

    1999-01-01

    The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time-dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems are being done to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that

  11. Unified commutation-pruning technique for efficient computation of composite DFTs

    NASA Astrophysics Data System (ADS)

    Castro-Palazuelos, David E.; Medina-Melendrez, Modesto Gpe.; Torres-Roman, Deni L.; Shkvarko, Yuriy V.

    2015-12-01

    An efficient computation of a composite length discrete Fourier transform (DFT), as well as a fast Fourier transform (FFT) of both time and space data sequences in uncertain (non-sparse or sparse) computational scenarios, requires specific processing algorithms. Traditional algorithms typically employ some pruning methods without any commutations, which prevents them from attaining the potential computational efficiency. In this paper, we propose an alternative unified approach with automatic commutations between three computational modalities aimed at efficient computations of the pruned DFTs adapted for variable composite lengths of the non-sparse input-output data. The first modality is an implementation of the direct computation of a composite length DFT, the second one employs the second-order recursive filtering method, and the third one performs the new pruned decomposed transform. The pruned decomposed transform algorithm performs the decimation in time or space (DIT) data acquisition domain and, then, decimation in frequency (DIF). The unified combination of these three algorithms is addressed as the DFTCOMM technique. Based on the treatment of the combinational-type hypotheses testing optimization problem of preferable allocations between all feasible commuting-pruning modalities, we have found the global optimal solution to the pruning problem that always requires a fewer or, at most, the same number of arithmetic operations than other feasible modalities. The DFTCOMM method outperforms the existing competing pruning techniques in the sense of attainable savings in the number of required arithmetic operations. It requires fewer or at most the same number of arithmetic operations for its execution than any other of the competing pruning methods reported in the literature. Finally, we provide the comparison of the DFTCOMM with the recently developed sparse fast Fourier transform (SFFT) algorithmic family. We feature that, in the sensing scenarios with
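
    A tiny illustration of output pruning, i.e. the direct-computation modality in which only the DFT bins actually needed are evaluated; the automatic commutation logic of DFTCOMM is not reproduced here.

```python
# Tiny illustration of output pruning: evaluate only the DFT bins that are needed.
# The full DFTCOMM logic (automatic commutation between modalities) is not shown.
import numpy as np

def pruned_dft(x, wanted_bins):
    """Compute X[k] = sum_n x[n] exp(-2j*pi*k*n/N) only for k in wanted_bins."""
    x = np.asarray(x, dtype=complex)
    n = np.arange(x.size)
    k = np.asarray(wanted_bins)[:, None]
    return np.exp(-2j * np.pi * k * n / x.size) @ x

x = np.random.default_rng(0).standard_normal(1024)
bins = [3, 17, 250]
np.testing.assert_allclose(pruned_dft(x, bins), np.fft.fft(x)[bins], atol=1e-8)
```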

  12. Performance evaluation for volumetric segmentation of multiple sclerosis lesions using MATLAB and computing engine in the graphical processing unit (GPU)

    NASA Astrophysics Data System (ADS)

    Le, Anh H.; Park, Young W.; Ma, Kevin; Jacobs, Colin; Liu, Brent J.

    2010-03-01

    Multiple Sclerosis (MS) is a progressive neurological disease affecting myelin pathways in the brain. Multiple lesions in the white matter can cause paralysis and severe motor disabilities of the affected patient. To solve the issue of inconsistency and user-dependency in manual lesion measurement of MRI, we have proposed a 3-D automated lesion quantification algorithm to enable objective and efficient lesion volume tracking. The computer-aided detection (CAD) of MS, written in MATLAB, utilizes the K-Nearest Neighbors (KNN) method to compute the probability of lesions on a per-voxel basis. Despite the highly optimized image-processing algorithm used in CAD development, MS CAD integration and evaluation in clinical workflow is technically challenging due to the requirement of high computation rates and memory bandwidth in the recursive nature of the algorithm. In this paper, we present the development and evaluation of using a computing engine in the graphical processing unit (GPU) with MATLAB for segmentation of MS lesions. The paper investigates the utilization of a high-end GPU for parallel computing of KNN in the MATLAB environment to improve algorithm performance. The integration is accomplished using NVIDIA's CUDA developmental toolkit for MATLAB. The results of this study will validate the practicality and effectiveness of the prototype MS CAD in a clinical setting. The GPU method may allow MS CAD to rapidly integrate in an electronic patient record or any disease-centric health care system.
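
    A CPU-only sketch of the per-voxel KNN probability step, using scikit-learn as a stand-in for the MATLAB/CUDA implementation described above; the feature choice (intensity plus voxel coordinates) is an assumption for illustration.

```python
# CPU sketch of per-voxel KNN lesion probability (scikit-learn stand-in for the
# MATLAB/CUDA implementation described above); feature choice is illustrative.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def voxel_lesion_probability(volume, labeled_voxels, labels, k=11):
    """volume: 3-D MRI array; labeled_voxels: (M, 3) integer training coordinates;
    labels: (M,) with 1 = lesion, 0 = normal. Returns a per-voxel probability map."""
    zz, yy, xx = np.indices(volume.shape)
    feats = lambda idx: np.column_stack([volume[tuple(idx.T)],              # intensity
                                         idx[:, 0], idx[:, 1], idx[:, 2]])  # position
    knn = KNeighborsClassifier(n_neighbors=k).fit(feats(labeled_voxels), labels)
    all_idx = np.column_stack([zz.ravel(), yy.ravel(), xx.ravel()])
    prob = knn.predict_proba(feats(all_idx))[:, 1]   # column 1 = class "lesion"
    return prob.reshape(volume.shape)
```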

  13. Storage and computationally efficient permutations of factorized covariance and square-root information arrays

    NASA Technical Reports Server (NTRS)

    Muellerschoen, R. J.

    1988-01-01

    A unified method to permute vector stored Upper triangular Diagonal factorized covariance and vector stored upper triangular Square Root Information arrays is presented. The method involves cyclic permutation of the rows and columns of the arrays and retriangularization with fast (slow) Givens rotations (reflections). Minimal computation is performed, and a one dimensional scratch array is required. To make the method efficient for large arrays on a virtual memory machine, computations are arranged so as to avoid expensive paging faults. This method is potentially important for processing large volumes of radio metric data in the Deep Space Network.
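
    A small dense-matrix sketch of the underlying idea: permuting the columns of an upper triangular square-root information factor and retriangularizing with an orthogonal transformation (QR here, Givens sweeps in the paper) leaves the information matrix, and hence the covariance, unchanged. The vector-stored, paging-aware implementation is not reproduced.

```python
# Dense-matrix sketch (not the vector-stored, Givens-based routine): permute the
# columns of an upper triangular square-root information factor R, retriangularize,
# and verify that R.T @ R transforms consistently with the permutation.
import numpy as np

def permute_srif(R, perm):
    Rp = R[:, perm]                    # cyclic or arbitrary column permutation
    _, R_new = np.linalg.qr(Rp)        # orthogonal retriangularization
    return R_new

rng = np.random.default_rng(0)
R = np.triu(rng.standard_normal((5, 5)) + 5 * np.eye(5))
perm = [1, 2, 3, 4, 0]                 # cyclic shift of the state ordering
R2 = permute_srif(R, perm)
P = np.eye(5)[:, perm]
np.testing.assert_allclose(R2.T @ R2, P.T @ R.T @ R @ P, atol=1e-10)
```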

  14. Efficient path-based computations on pedigree graphs with compact encodings

    PubMed Central

    2012-01-01

    A pedigree is a diagram of family relationships, and it is often used to determine the mode of inheritance (dominant, recessive, etc.) of genetic diseases. Along with rapidly growing knowledge of genetics and accumulation of genealogy information, pedigree data is becoming increasingly important. In large pedigree graphs, path-based methods for efficiently computing genealogical measurements, such as inbreeding and kinship coefficients of individuals, depend on efficient identification and processing of paths. In this paper, we propose a new compact path encoding scheme on large pedigrees, accompanied by an efficient algorithm for identifying paths. We demonstrate the utilization of our proposed method by applying it to the inbreeding coefficient computation. We present time and space complexity analysis, and also manifest the efficiency of our method for evaluating inbreeding coefficients as compared to previous methods by experimental results using pedigree graphs with real and synthetic data. Both theoretical and experimental results demonstrate that our method is more scalable and efficient than previous methods in terms of time and space requirements. PMID:22536898
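
    For reference, the sketch below implements the classical recursive kinship/inbreeding computation that such path-based methods aim to accelerate; the compact path-encoding scheme itself is not reproduced, and the toy pedigree is hypothetical.

```python
# Classical recursive kinship/inbreeding computation (the baseline that path-based
# methods accelerate). Pedigree is a dict child -> (father, mother); None = founder.
from functools import lru_cache

PED = {"C": ("A", "B"), "D": ("A", "B"), "X": ("C", "D")}  # X's parents are full sibs

def make_kinship(ped):
    depth = lru_cache(None)(lambda i: 0 if i is None or i not in ped
                            else 1 + max(depth(p) for p in ped[i]))

    @lru_cache(None)
    def kinship(a, b):
        if a is None or b is None:
            return 0.0
        if a == b:
            f, m = ped.get(a, (None, None))
            return 0.5 * (1.0 + kinship(f, m))
        if depth(a) < depth(b):        # recurse on the individual farther from founders
            a, b = b, a
        f, m = ped.get(a, (None, None))
        return 0.5 * (kinship(f, b) + kinship(m, b))
    return kinship

kin = make_kinship(PED)
f_x = kin(*PED["X"])      # inbreeding coefficient of X = kinship of its parents
print(f_x)                # 0.25 for the offspring of full siblings
```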

  15. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    DOE PAGESBeta

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-08-19

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.
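
    A sketch of a single damped Levenberg-Marquardt step solved with a Krylov method (SciPy's LSQR, which takes the damping parameter directly); the paper's key ingredient, recycling the Krylov subspace across damping parameters, is not shown here.

```python
# One damped Levenberg-Marquardt step via a Krylov solver (SciPy's LSQR). Subspace
# recycling across damping parameters, the paper's contribution, is not reproduced.
import numpy as np
from scipy.sparse.linalg import lsqr

def lm_step(jacobian, residual, lam):
    """Solve min_d ||J d + r||^2 + lam * ||d||^2 for the parameter update d."""
    return lsqr(jacobian, -residual, damp=np.sqrt(lam))[0]

# toy usage on a random overdetermined problem
rng = np.random.default_rng(0)
J = rng.standard_normal((200, 50))
r = rng.standard_normal(200)
for lam in (1.0, 0.1, 0.01):              # a sequence of damping parameters
    print(lam, np.linalg.norm(J @ lm_step(J, r, lam) + r))
```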

  16. An improvement to computational efficiency of the drain current model for double-gate MOSFET

    NASA Astrophysics Data System (ADS)

    Zhou, Xing-Ye; Zhang, Jian; Zhou, Zhi-Ze; Zhang, Li-Ning; Ma, Chen-Yue; Wu, Wen; Zhao, Wei; Zhang, Xing

    2011-09-01

    As a connection between the process and the circuit design, the device model is greatly desired for emerging devices, such as the double-gate MOSFET. Time efficiency is one of the most important requirements for device modeling. In this paper, an improvement to the computational efficiency of the drain current model for double-gate MOSFETs is presented, and different calculation methods are compared and discussed. The results show that the calculation speed of the improved model is substantially enhanced. A two-dimensional device simulation is performed to verify the improved model. Furthermore, the model is implemented into the HSPICE circuit simulator in Verilog-A for practical application.

  17. Computer Aided Drafting Workshop. Workshop Booklet.

    ERIC Educational Resources Information Center

    Goetsch, David L.

    This mini-course and article are presentations from a workshop on computer-aided drafting. The purpose of the mini-course is to assist drafting instructors in updating their occupational knowledge to include computer-aided drafting (CAD). Topics covered in the course include general computer information, the computer in drafting, CAD terminology,…

  18. CAD system for automatic analysis of CT perfusion maps

    NASA Astrophysics Data System (ADS)

    Hachaj, T.; Ogiela, M. R.

    2011-03-01

    In this article, the authors present novel algorithms developed for a computer-assisted diagnosis (CAD) system for the analysis of dynamic brain perfusion computed tomography (CT) maps: cerebral blood flow (CBF) and cerebral blood volume (CBV). Those methods perform both quantitative analysis [detection, measurement, and description with a brain anatomy atlas (AA) of potential asymmetries/lesions] and qualitative analysis (semantic interpretation of visualized symptoms). The semantic interpretation of visualized symptoms (deciding whether a lesion is ischemic or hemorrhagic and whether the brain tissue is at risk of infarction) is done by so-called cognitive inference processes, allowing for reasoning on the character of pathological regions based on specialist image knowledge. The whole system is implemented on the .NET platform (C# programming language) and can be used on any standard PC with the .NET framework installed.

  19. Reentry-Vehicle Shape Optimization Using a Cartesian Adjoint Method and CAD Geometry

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.

    2006-01-01

    Adjoint solutions of the governing flow equations are becoming increasingly important for the development of efficient analysis and optimization algorithms. A well-known use of the adjoint method is gradient-based shape optimization. Given an objective function that defines some measure of performance, such as the lift and drag functionals, its gradient is computed at a cost that is essentially independent of the number of design variables (e.g., geometric parameters that control the shape). Classic aerodynamic applications of gradient-based optimization include the design of cruise configurations for transonic and supersonic flow, as well as the design of high-lift systems. Cartesian mesh methods with embedded boundaries are perhaps the most promising approach for addressing the issues of flow solution automation for aerodynamic design problems. In these methods, the discretization of the wetted surface is decoupled from that of the volume mesh. This not only enables fast and robust mesh generation for geometry of arbitrary complexity, but also facilitates access to geometry modeling and manipulation using parametric computer-aided design (CAD). In previous work on Cartesian adjoint solvers, Melvin et al. developed an adjoint formulation for the TRANAIR code, which is based on the full-potential equation with viscous corrections. More recently, Dadone and Grossman presented an adjoint formulation for the two-dimensional Euler equations using a ghost-cell method to enforce the wall boundary conditions. In Refs. 18 and 19, we presented an accurate and efficient algorithm for the solution of the adjoint Euler equations discretized on Cartesian meshes with embedded, cut-cell boundaries. Novel aspects of the algorithm were the computation of surface shape sensitivities for triangulations based on parametric-CAD models and the linearization of the coupling between the surface triangulation and the cut-cells. The accuracy of the gradient computation was verified using several three-dimensional test cases, which included design

  20. A comparison of computational efficiencies of stochastic algorithms in terms of two infection models.

    PubMed

    Banks, H Thomas; Hu, Shuhua; Joyner, Michele; Broido, Anna; Canter, Brandi; Gayvert, Kaitlyn; Link, Kathryn

    2012-07-01

    In this paper, we investigate three particular algorithms: a stochastic simulation algorithm (SSA), and explicit and implicit tau-leaping algorithms. To compare these methods, we used them to analyze two infection models: a Vancomycin-resistant enterococcus (VRE) infection model at the population level, and a Human Immunodeficiency Virus (HIV) within-host infection model. While the first has a low species count and few transitions, the second is more complex with a comparable number of species involved. The relative efficiency of each algorithm is determined based on computational time and the degree of precision required. The numerical results suggest that all three algorithms have similar computational efficiency for the simpler VRE model, and the SSA is the best choice due to its simplicity and accuracy. In addition, we have found that with the larger and more complex HIV model, implementation and modification of tau-leaping methods are preferred.
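
    A minimal Gillespie SSA for a toy two-state colonization model in the spirit of the VRE example; the events and rate constants are illustrative assumptions, not those used in the paper.

```python
# Minimal stochastic simulation algorithm (SSA / Gillespie) for a toy two-state
# colonization model; rates and structure are illustrative only.
import numpy as np

def ssa_vre(S0=90, C0=10, beta=0.2, gamma=0.1, t_end=100.0, seed=0):
    rng = np.random.default_rng(seed)
    t, S, C = 0.0, S0, C0
    history = [(t, S, C)]
    while t < t_end:
        n = S + C
        rates = np.array([beta * S * C / n,   # colonization: S -> C
                          gamma * C])         # decolonization/discharge: C -> S
        total = rates.sum()
        if total == 0:
            break
        t += rng.exponential(1.0 / total)     # time to next event
        if rng.random() < rates[0] / total:   # choose which event fires
            S, C = S - 1, C + 1
        else:
            S, C = S + 1, C - 1
        history.append((t, S, C))
    return np.array(history)
```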

  1. A Power Efficient Exaflop Computer Design for Global Cloud System Resolving Climate Models.

    NASA Astrophysics Data System (ADS)

    Wehner, M. F.; Oliker, L.; Shalf, J.

    2008-12-01

    Exascale computers would allow routine ensemble modeling of the global climate system at the cloud system resolving scale. Power and cost requirements of traditional architecture systems are likely to delay such capability for many years. We present an alternative route to the exascale using embedded processor technology to design a system optimized for ultra high resolution climate modeling. These power efficient processors, used in consumer electronic devices such as mobile phones, portable music players, cameras, etc., can be tailored to the specific needs of scientific computing. We project that a system capable of integrating a kilometer scale climate model a thousand times faster than real time could be designed and built in a five year time scale for US$75M with a power consumption of 3MW. This is cheaper, more power efficient and sooner than any other existing technology.

  2. Efficient high-fidelity quantum computation using matter qubits and linear optics

    SciTech Connect

    Barrett, Sean D.; Kok, Pieter

    2005-06-15

    We propose a practical, scalable, and efficient scheme for quantum computation using spatially separated matter qubits and single-photon interference effects. The qubit systems can be nitrogen-vacancy centers in diamond, Pauli-blockade quantum dots with an excess electron, or trapped ions with optical transitions, which are each placed in a cavity and subsequently entangled using a double-heralded single-photon detection scheme. The fidelity of the resulting entanglement is extremely robust against the most important errors such as detector loss, spontaneous emission, and mismatch of cavity parameters. We demonstrate how this entangling operation can be used to efficiently generate cluster states of many qubits, which, together with single-qubit operations and readout, can be used to implement universal quantum computation. Existing experimental parameters indicate that high-fidelity clusters can be generated with a moderate constant overhead.

  3. An efficient method for computing the QTAIM topology of a scalar field: the electron density case.

    PubMed

    Rodríguez, Juan I

    2013-03-30

    An efficient method for computing the quantum theory of atoms in molecules (QTAIM) topology of the electron density (or another scalar field) is presented. A modified Newton-Raphson algorithm was implemented for finding the critical points (CP) of the electron density. Bond paths were constructed with the second-order Runge-Kutta method. Vectorization of the present algorithm makes it scale linearly with the system size. The parallel efficiency decreases with the number of processors (from 70% to 50%) with an average of 54%. The accuracy and performance of the method are demonstrated by computing the QTAIM topology of the electron density of a series of representative molecules. Our results show that our algorithm might allow QTAIM analysis to be applied to large systems (carbon nanotubes, polymers, fullerenes) considered unreachable until now.
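
    A sketch of a plain Newton-Raphson search for a critical point (where the gradient of the scalar field vanishes), using finite-difference derivatives on a toy two-Gaussian "density"; the paper's modified algorithm and the surrounding QTAIM machinery are not reproduced.

```python
# Plain Newton-Raphson search for a critical point (grad rho = 0) of a smooth
# scalar field, with finite-difference derivatives on a toy two-"atom" density.
import numpy as np

def rho(p):                                           # toy two-Gaussian density
    return np.exp(-np.sum((p - [-1, 0])**2)) + np.exp(-np.sum((p - [1, 0])**2))

def grad_hess(f, p, h=1e-5):
    n = p.size
    g, H = np.zeros(n), np.zeros((n, n))
    for i in range(n):
        e = np.zeros(n); e[i] = h
        g[i] = (f(p + e) - f(p - e)) / (2 * h)
        for j in range(n):
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (f(p + e + ej) - f(p + e - ej)
                       - f(p - e + ej) + f(p - e - ej)) / (4 * h * h)
    return g, H

def find_cp(f, p0, tol=1e-8, max_iter=50):
    p = np.asarray(p0, dtype=float)
    for _ in range(max_iter):
        g, H = grad_hess(f, p)
        if np.linalg.norm(g) < tol:
            break
        p = p - np.linalg.solve(H, g)                 # Newton step toward grad = 0
    return p

print(find_cp(rho, [0.1, 0.2]))          # should converge to the midpoint CP at (0, 0)
```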

  4. Next Generation CAD/CAM/CAE Systems

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Compiler); Malone, John B. (Compiler)

    1997-01-01

    This document contains presentations from the joint UVA/NASA Workshop on Next Generation CAD/CAM/CAE Systems held at NASA Langley Research Center in Hampton, Virginia on March 18-19, 1997. The presentations focused on current capabilities and future directions of CAD/CAM/CAE systems, aerospace industry projects, and university activities related to simulation-based design. Workshop attendees represented NASA, commercial software developers, the aerospace industry, government labs, and academia. The workshop objectives were to assess the potential of emerging CAD/CAM/CAE technology for use in intelligent simulation-based design and to provide guidelines for focused future research leading to effective use of CAE systems for simulating the entire life cycle of aerospace systems.

  5. cadDX Operon of Streptococcus salivarius 57.I▿

    PubMed Central

    Chen, Yi-Ywan M.; Feng, C. W.; Chiu, C. F.; Burne, Robert A.

    2008-01-01

    A CadDX system that confers resistance to Cd2+ and Zn2+ was identified in Streptococcus salivarius 57.I. Unlike with other CadDX systems, the expression of the cad promoter was negatively regulated by CadX, and the repression was inducible by Cd2+ and Zn2+, similar to what was found for CadCA systems. The lower G+C content of the S. salivarius cadDX genes suggests acquisition by horizontal gene transfer. PMID:18165364

  6. cadDX operon of Streptococcus salivarius 57.I.

    PubMed

    Chen, Yi-Ywan M; Feng, C W; Chiu, C F; Burne, Robert A

    2008-03-01

    A CadDX system that confers resistance to Cd(2+) and Zn(2+) was identified in Streptococcus salivarius 57.I. Unlike with other CadDX systems, the expression of the cad promoter was negatively regulated by CadX, and the repression was inducible by Cd(2+) and Zn(2+), similar to what was found for CadCA systems. The lower G+C content of the S. salivarius cadDX genes suggests acquisition by horizontal gene transfer. PMID:18165364

  7. Step-by-step magic state encoding for efficient fault-tolerant quantum computation.

    PubMed

    Goto, Hayato

    2014-12-16

    Quantum error correction allows one to make quantum computers fault-tolerant against unavoidable errors due to decoherence and imperfect physical gate operations. However, the fault-tolerant quantum computation requires impractically large computational resources for useful applications. This is a current major obstacle to the realization of a quantum computer. In particular, magic state distillation, which is a standard approach to universality, consumes the most resources in fault-tolerant quantum computation. For the resource problem, here we propose step-by-step magic state encoding for concatenated quantum codes, where magic states are encoded step by step from the physical level to the logical one. To manage errors during the encoding, we carefully use error detection. Since the sizes of intermediate codes are small, it is expected that the resource overheads will become lower than previous approaches based on the distillation at the logical level. Our simulation results suggest that the resource requirements for a logical magic state will become comparable to those for a single logical controlled-NOT gate. Thus, the present method opens a new possibility for efficient fault-tolerant quantum computation.

  8. Efficient Probability of Failure Calculations for QMU using Computational Geometry LDRD 13-0144 Final Report

    SciTech Connect

    Mitchell, Scott A.; Ebeida, Mohamed Salah; Romero, Vicente J.; Swiler, Laura Painton; Rushdi, Ahmad A.; Abdelkader, Ahmad

    2015-09-01

    This SAND report summarizes our work on the Sandia National Laboratory LDRD project titled "Efficient Probability of Failure Calculations for QMU using Computational Geometry" which was project #165617 and proposal #13-0144. This report merely summarizes our work. Those interested in the technical details are encouraged to read the full published results, and contact the report authors for the status of the software and follow-on projects.

  9. An efficient surrogate-based method for computing rare failure probability

    NASA Astrophysics Data System (ADS)

    Li, Jing; Li, Jinglai; Xiu, Dongbin

    2011-10-01

    In this paper, we present an efficient numerical method for evaluating rare failure probability. The method is based on a recently developed surrogate-based method from Li and Xiu [J. Li, D. Xiu, Evaluation of failure probability via surrogate models, J. Comput. Phys. 229 (2010) 8966-8980] for failure probability computation. The method by Li and Xiu is of hybrid nature, in the sense that samples of both the surrogate model and the true physical model are used, and its efficiency gain relies on using only very few samples of the true model. Here we extend the capability of the method to rare probability computation by using the idea of importance sampling (IS). In particular, we employ the cross-entropy (CE) method, which is an effective method to determine the biasing distribution in IS. We demonstrate that, by combining with the CE method, a surrogate-based IS algorithm can be constructed and is highly efficient for rare failure probability computation: it incurs much reduced simulation effort compared to the traditional CE-IS method. In many cases, the new method is capable of capturing failure probability as small as 10^-12 to 10^-6 with only several hundred samples.
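
    A compact sketch of the cross-entropy idea for rare failure probabilities: adaptively fit a Gaussian biasing density from elite samples, then form the importance-sampling estimate with likelihood ratios. The surrogate-model component of the paper's method is omitted, and the limit-state function is a toy stand-in.

```python
# Cross-entropy (CE) importance sampling for a rare failure probability; the
# surrogate-model part of the paper is omitted and the limit state is a toy example.
import numpy as np

rng = np.random.default_rng(0)
g = lambda x: x.sum(axis=1)                  # "failure" when g(x) >= gamma
gamma, dim, n, rho = 14.0, 4, 2_000, 0.1     # P(sum of 4 std normals >= 14) ~ 1.3e-12

mu, sigma = np.zeros(dim), np.ones(dim)      # start from the nominal density N(0, I)
level = -np.inf
while level < gamma:                         # CE iterations: raise the level gradually
    x = rng.normal(mu, sigma, size=(n, dim))
    s = g(x)
    level = min(gamma, np.quantile(s, 1 - rho))
    elite = x[s >= level]
    mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-12

# final importance-sampling estimate with likelihood ratios w = f(x) / q(x)
x = rng.normal(mu, sigma, size=(20 * n, dim))
logw = (-0.5 * (x**2).sum(axis=1)) - \
       (-0.5 * (((x - mu) / sigma)**2).sum(axis=1) - np.log(sigma).sum())
p_fail = np.mean(np.exp(logw) * (g(x) >= gamma))
print(f"estimated failure probability ~ {p_fail:.2e}")
```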

  10. Two-dimensional perfect reconstruction analysis/synthesis filter bank system with very high computational efficiency

    NASA Astrophysics Data System (ADS)

    Liu, C. P.

    1997-07-01

    An effective design structure for 2-D analysis/synthesis filter banks with high computational efficiency is proposed. The system involves a 2-D single-sideband (SSB) system, which is developed in terms of a 2-D separable weighted overlap-add (OLA) method of analysis/synthesis and enables overlap between adjacent spatial-domain windows. This implies that spatial-domain aliasing introduced in the analysis is canceled in the synthesis process, providing perfect reconstruction. To achieve perfect reconstruction, each individual analysis/synthesis filter with SSB modulation is constrained to be a cosine-modulated version of a common baseband filter. Since a cosine-modulated structure is imposed in the design procedure, the system can reduce the number of parameters required to achieve the best computational efficiency. It can be shown that the resulting cosine-modulated filters are very efficient in terms of computational complexity and are relatively easy to design. Moreover, the design approach can be imposed on the system with relatively low reconstruction delays.

  11. Redundancy management for efficient fault recovery in NASA's distributed computing system

    NASA Technical Reports Server (NTRS)

    Malek, Miroslaw; Pandya, Mihir; Yau, Kitty

    1991-01-01

    The management of redundancy in computer systems was studied and guidelines were provided for the development of NASA's fault-tolerant distributed systems. Fault recovery and reconfiguration mechanisms were examined. A theoretical foundation was laid for redundancy management by efficient reconfiguration methods and algorithmic diversity. Algorithms were developed to optimize the resources for embedding of computational graphs of tasks in the system architecture and reconfiguration of these tasks after a failure has occurred. The computational structure represented by a path and the complete binary tree was considered and the mesh and hypercube architectures were targeted for their embeddings. The innovative concept of Hybrid Algorithm Technique was introduced. This new technique provides a mechanism for obtaining fault tolerance while exhibiting improved performance.

  12. Dendritic nonlinearities are tuned for efficient spike-based computations in cortical circuits.

    PubMed

    Ujfalussy, Balázs B; Makara, Judit K; Branco, Tiago; Lengyel, Máté

    2015-01-01

    Cortical neurons integrate thousands of synaptic inputs in their dendrites in highly nonlinear ways. It is unknown how these dendritic nonlinearities in individual cells contribute to computations at the level of neural circuits. Here, we show that dendritic nonlinearities are critical for the efficient integration of synaptic inputs in circuits performing analog computations with spiking neurons. We developed a theory that formalizes how a neuron's dendritic nonlinearity that is optimal for integrating synaptic inputs depends on the statistics of its presynaptic activity patterns. Based on their in vivo presynaptic population statistics (firing rates, membrane potential fluctuations, and correlations due to ensemble dynamics), our theory accurately predicted the responses of two different types of cortical pyramidal cells to patterned stimulation by two-photon glutamate uncaging. These results reveal a new computational principle underlying dendritic integration in cortical neurons by suggesting a functional link between cellular and systems-level properties of cortical circuits. PMID:26705334

  13. Efficient computation of the stability of three-dimensional compressible boundary layers

    NASA Technical Reports Server (NTRS)

    Malik, M. R.; Orszag, S. A.

    1981-01-01

    Methods for the computer analysis of the stability of three-dimensional compressible boundary layers are discussed and the user-oriented Compressible Stability Analysis (COSAL) computer code is described. The COSAL code uses a matrix finite-difference method for local eigenvalue solution when a good guess for the eigenvalue is available and is significantly more computationally efficient than the commonly used initial-value approach. The local eigenvalue search procedure also results in eigenfunctions and, at little extra work, group velocities. A globally convergent eigenvalue procedure is also developed which may be used when no guess for the eigenvalue is available. The global problem is formulated in such a way that no unstable spurious modes appear so that the method is suitable for use in a black-box stability code. Sample stability calculations are presented for the boundary layer profiles of an LFC swept wing.

  14. An efficient FPGA architecture for integer Nth root computation

    NASA Astrophysics Data System (ADS)

    Rangel-Valdez, Nelson; Barron-Zambrano, Jose Hugo; Torres-Huitzil, Cesar; Torres-Jimenez, Jose

    2015-10-01

    In embedded computing, it is common to find applications such as signal processing, image processing, computer graphics, or data compression that might benefit from a hardware implementation of the computation of integer roots of order N. However, the scientific literature lacks architectural designs that implement such operations for different values of N using a low amount of resources. This article presents a parameterisable field programmable gate array (FPGA) architecture for an efficient Nth root calculator that uses only adders/subtractors and N-location memory elements. The architecture was tested for different values of N, using 64-bit number representation. The results show a consumption of up to 10% of the logical resources of a Xilinx XC6SLX45-CSG324C device, depending on the value of N. The hardware implementation improved the performance of its corresponding software implementations by one order of magnitude. The architecture performance varies from several thousand to seven million root operations per second.
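
    The abstract does not spell out the root-finding algorithm, so the sketch below is only a software-level analogue (not the paper's adders/subtractors-and-memory architecture): it computes floor(x**(1/N)) with pure integer arithmetic by bisecting over the answer's bit range.

    ```python
    def integer_nth_root(x: int, n: int) -> int:
        """Return floor(x ** (1/n)) for non-negative integer x and n >= 1,
        using only integer arithmetic (no floating point)."""
        if x < 0 or n < 1:
            raise ValueError("x must be >= 0 and n >= 1")
        if x < 2:
            return x
        # Bisection on the answer: the root has at most bit_length(x)//n + 1 bits,
        # so 2**(bit_length(x)//n + 1) is a safe upper bound.
        lo, hi = 1, 1 << (x.bit_length() // n + 1)
        while lo + 1 < hi:
            mid = (lo + hi) // 2
            if mid ** n <= x:
                lo = mid
            else:
                hi = mid
        return lo

    # Quick self-checks against known values.
    assert integer_nth_root(27, 3) == 3
    assert integer_nth_root(26, 3) == 2
    assert integer_nth_root(2**64 - 1, 5) == 7131
    ```

    A digit-recurrence hardware design would similarly resolve one result bit per iteration, although the actual FPGA architecture in the paper may differ.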

  15. Generation and use of human 3D-CAD models

    NASA Astrophysics Data System (ADS)

    Grotepass, Juergen; Speyer, Hartmut; Kaiser, Ralf

    2002-05-01

    Individualized products are one of the ten megatrends of the 21st century, with human modeling as the key issue for tomorrow's design and product development. The use of human modeling software for computer-based ergonomic simulations within the production process increases quality while reducing costs by 30-50 percent and shortening production time. This presentation focuses on the use of human 3D-CAD models both for the ergonomic design of working environments and for made-to-measure garment production. Today, the entire production chain can be designed, and individualized models generated and analyzed, in 3D computer environments. Anthropometric design for ergonomics is matched to human needs, thus preserving health. Ergonomic simulation includes topics such as human vision, reachability, kinematics, force and comfort analysis, and international design capabilities. In Germany, more than 17 billion marks are diverted to other industries because clothes do not fit. Individual clothing tailored to the customer's preference means surplus value, pleasure, and perfect fit. Body scanning technology is the key to the generation and use of human 3D-CAD models both for the ergonomic design of working environments and for made-to-measure garment production.

  16. Extending Engineering Design Graphics Laboratories to Have a CAD/CAM Component: Implementation Issues.

    ERIC Educational Resources Information Center

    Juricic, Davor; Barr, Ronald E.

    1996-01-01

    Reports on a project that extended the Engineering Design Graphics curriculum to include instruction and laboratory experience in computer-aided design, analysis, and manufacturing (CAD/CAM). Discusses issues in project implementation, including introduction of finite element analysis to lower-division students, feasibility of classroom prototype…

  17. Using Claris CAD To Develop a Floor Plan. High-Technology Training Module.

    ERIC Educational Resources Information Center

    Pawlowicz, Bruce; Johnson, Tom

    This learning module for a high school architectural drafting course introduces students to the use of Claris CAD (Computer Aided Drafting) to develop a floor plan. The six sections of the module are the following: module objectives, content outline, teaching methods, student activities, resource list, and evaluation (pretest, posttest). Student…

  18. Bridging CAGD knowledge into CAD/CG applications: Mathematical theories as stepping stones of innovations

    NASA Astrophysics Data System (ADS)

    Gobithaasan, R. U.; Miura, Kenjiro T.; Hassan, Mohamad Nor

    2014-07-01

    Computer Aided Geometric Design (CAGD), which provides the underlying theories of Computer Aided Design (CAD) and Computer Graphics (CG), has been taught in a number of Malaysian universities under the umbrella of Mathematical Sciences faculties/departments. On the other hand, CAD/CG is taught under either an Engineering or a Computer Science faculty. Even though CAGD researchers/educators/students (denoted as contributors) have been enriching this field of study by means of article/journal publication, many fail to convert their ideas into constructive innovation due to the gap between CAGD contributors and practitioners (engineers/product designers/architects/artists). This paper addresses this issue by advocating a number of technologies that can be used to transform CAGD contributors into innovators, so that immediate impact in terms of practical application can be experienced by CAD/CG practitioners. The underlying principle of solving this issue is twofold. The first is to expose CAGD contributors to ways of turning mathematical ideas into plug-ins, and the second is to impart relevant CAGD theories to CAD/CG practitioners. Both cases are discussed in detail, and the final section shows examples to illustrate the importance of turning mathematical knowledge into innovations.

  19. Preparing for High Technology: CAD/CAM Programs. Research & Development Series No. 234.

    ERIC Educational Resources Information Center

    Abram, Robert; And Others

    This guide is one of three developed to provide information and resources to assist in planning and developing postsecondary technican training programs in high technology areas. It is specifically intended for vocational-technical educators and planners in the initial stages of planning a specialized training option in computer-aided design (CAD)…

  20. Classroom Experiences in an Engineering Design Graphics Course with a CAD/CAM Extension.

    ERIC Educational Resources Information Center

    Barr, Ronald E.; Juricic, Davor

    1997-01-01

    Reports on the development of a new CAD/CAM laboratory experience for an Engineering Design Graphics (EDG) course. The EDG curriculum included freehand sketching, introduction to Computer-Aided Design and Drafting (CADD), and emphasized 3-D solid modeling. Reviews the project and reports on the testing of the new laboratory components which were…

  1. A distributed data base management facility for the CAD/CAM environment

    NASA Technical Reports Server (NTRS)

    Balza, R. M.; Beaudet, R. W.; Johnson, H. R.

    1984-01-01

    Current IPAD research in the area of distributed database management considers facilities for supporting CAD/CAM data management in a heterogeneous network of computers encompassing multiple database managers supporting a variety of data models. These facilities include coordinated execution of multiple DBMSs to provide for administration of and access to data distributed across them.

  2. Computationally Efficient Algorithms for Parameter Estimation and Uncertainty Propagation in Numerical Models of Groundwater Flow

    NASA Astrophysics Data System (ADS)

    Townley, Lloyd R.; Wilson, John L.

    1985-12-01

    Finite difference and finite element methods are frequently used to study aquifer flow; however, additional analysis is required when model parameters, and hence predicted heads are uncertain. Computational algorithms are presented for steady and transient models in which aquifer storage coefficients, transmissivities, distributed inputs, and boundary values may all be simultaneously uncertain. Innovative aspects of these algorithms include a new form of generalized boundary condition; a concise discrete derivation of the adjoint problem for transient models with variable time steps; an efficient technique for calculating the approximate second derivative during line searches in weighted least squares estimation; and a new efficient first-order second-moment algorithm for calculating the covariance of predicted heads due to a large number of uncertain parameter values. The techniques are presented in matrix form, and their efficiency depends on the structure of sparse matrices which occur repeatedly throughout the calculations. Details of matrix structures are provided for a two-dimensional linear triangular finite element model.
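
    The first-order second-moment step mentioned above can be illustrated with a small, self-contained sketch. This is the generic FOSM idea, not the paper's sparse-matrix implementation: the parameter covariance is propagated through a finite-difference Jacobian of a toy head model, Cov(h) ~= J C_p J^T. The model, parameter values, and covariances below are hypothetical.

    ```python
    import numpy as np

    def head_model(params):
        """Hypothetical forward model: heads as a nonlinear function of
        log-transmissivity and a boundary head (stand-in for a finite element solve)."""
        logT, h_bc = params
        x = np.linspace(0.0, 1.0, 5)              # observation locations
        return h_bc - 0.5 * x**2 / np.exp(logT)   # toy steady-state head profile

    def fosm_head_covariance(model, p0, cov_p, eps=1e-6):
        """First-order second-moment propagation: Cov(h) ~= J Cov(p) J^T,
        with J computed by finite differences about the mean parameters p0."""
        h0 = model(p0)
        J = np.zeros((h0.size, p0.size))
        for j in range(p0.size):
            dp = np.zeros_like(p0)
            dp[j] = eps
            J[:, j] = (model(p0 + dp) - h0) / eps
        return J @ cov_p @ J.T

    p_mean = np.array([0.0, 10.0])        # mean log-transmissivity and boundary head
    cov_p = np.diag([0.25, 0.01])         # assumed parameter covariance
    cov_h = fosm_head_covariance(head_model, p_mean, cov_p)
    print("head standard deviations:", np.sqrt(np.diag(cov_h)))
    ```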

  3. Cad Graphics in Facilities Planning.

    ERIC Educational Resources Information Center

    Collier, Linda M.

    1984-01-01

    By applying a computer-aided drafting system to a range of facilities layouts and plans, a division of Tektronix, Inc., Oregon, is maintaining staffing levels with an added workload. The tool is also being used in other areas of the company for illustration, design, and administration. (MLF)

  4. An Improved CAD System for Breast Cancer Diagnosis Based on Generalized Pseudo-Zernike Moment and Ada-DEWNN Classifier.

    PubMed

    Singh, Satya P; Urooj, Shabana

    2016-04-01

    In this paper, a novel framework for a computer-aided diagnosis (CAD) system is presented for the classification of benign/malignant breast tissues. The properties of the generalized pseudo-Zernike moments (GPZM) and pseudo-Zernike moments (PZM) are utilized as suitable texture descriptors of the suspicious region in the mammogram. An improved classifier, the adaptive differential evolution wavelet neural network (Ada-DEWNN), is proposed to improve the classification accuracy of the CAD system. The efficiency of the proposed system is tested on mammograms from the Mammographic Image Analysis Society (mini-MIAS) database using leave-one-out cross-validation, as well as on mammograms from the Digital Database for Screening Mammography (DDSM) using 10-fold cross-validation. On the MIAS database, the proposed method attains a fair accuracy of 0.8938 and an AUC of 0.935 (95% CI = 0.8213-0.9831). The proposed method is also tested for in-plane rotation and found to be highly rotation invariant. In addition, the proposed classifier is tested and compared with some well-known existing methods using receiver operating characteristic (ROC) analysis on the DDSM database. It is concluded that the proposed classifier has a better area under the curve (AUC) of 0.9289 and is highly precise, with a 95% CI of 0.8216 to 0.9834 and a standard error of 0.0384.

  6. Computationally-Efficient Minimum-Time Aircraft Routes in the Presence of Winds

    NASA Technical Reports Server (NTRS)

    Jardin, Matthew R.

    2004-01-01

    A computationally efficient algorithm for minimizing the flight time of an aircraft in a variable wind field has been invented. The algorithm, referred to as Neighboring Optimal Wind Routing (NOWR), is based upon neighboring-optimal-control (NOC) concepts and achieves minimum-time paths by adjusting aircraft heading according to wind conditions at an arbitrary number of wind measurement points along the flight route. The NOWR algorithm may either be used in a fast-time mode to compute minimum-time routes prior to flight, or may be used in a feedback mode to adjust aircraft heading in real time. By traveling minimum-time routes instead of direct great-circle routes, flights across the United States can save an average of about 7 minutes, and as much as one hour of flight time during periods of strong jet-stream winds. The neighboring optimal routes computed via the NOWR technique have been shown to be within 1.5 percent of the absolute minimum-time routes for flights across the continental United States. On a typical 450-MHz Sun Ultra workstation, the NOWR algorithm produces complete minimum-time routes in less than 40 milliseconds. This corresponds to a rate of 25 optimal routes per second. The closest comparable optimization technique runs approximately 10 times slower. Airlines currently use various trial-and-error search techniques to determine which of a set of commonly traveled routes will minimize flight time. These algorithms are too computationally expensive for use in real-time systems, or in systems where many optimal routes need to be computed in a short amount of time. Instead of operating in real time, airlines will typically plan a trajectory several hours in advance using wind forecasts. If winds change significantly from forecasts, the resulting flights will no longer be minimum-time. The need for a computationally efficient wind-optimal routing algorithm is even greater in the case of new air-traffic-control automation concepts. For air

  7. Computationally efficient characterization of potential energy surfaces based on fingerprint distances

    NASA Astrophysics Data System (ADS)

    Schaefer, Bastian; Goedecker, Stefan

    2016-07-01

    An analysis of the network defined by the potential energy minima of multi-atomic systems and their connectivity via reaction pathways that go through transition states allows us to understand important characteristics like thermodynamic, dynamic, and structural properties. Unfortunately, computing the transition states and reaction pathways in addition to the significant energetically low-lying local minima is a computationally demanding task. Here we introduce a computationally efficient method that is based on a combination of the minima hopping global optimization method and the insight that uphill barriers tend to increase with increasing structural distance between the educt and product states. This method allows us to replace the exact connectivity information and transition state energies with alternative and approximate concepts. Without adding any significant additional cost to the minima hopping global optimization approach, this method allows us to generate an approximate network of the minima, their connectivity, and a rough measure of the energy needed for their interconversion. This can be used to obtain a first qualitative idea of important physical and chemical properties by means of a disconnectivity graph analysis. Besides the physical insight obtained by such an analysis, the gained knowledge can be used to decide whether or not it is worthwhile to invest computational resources in an exact computation of the transition states and reaction pathways. Furthermore, it is demonstrated that the method presented here can be used to find physically reasonable interconversion pathways that are promising input pathways for methods like transition path sampling or discrete path sampling.

  8. An efficient method for computing high PT elasticity by first principles

    NASA Astrophysics Data System (ADS)

    Wu, Z.; Wentzcovitch, R. M.

    2007-12-01

    First-principles quasiharmonic (QHA) free energy computations play a very important role in mineral physics because they can accurately predict the structure and thermodynamic properties of materials at pressure and temperature conditions that are still challenging for experiments. They also enable calculations of thermoelastic properties by obtaining the second derivatives of the free energies with respect to Lagrangian strains. However, these are demanding computations requiring 100 to 1000 medium-size jobs. Here we introduce and test an approximate method that requires only calculations of static elastic constants, phonon VDOS, and mode Gruneisen parameters for unstrained configurations. This approach is computationally efficient and decreases the computational time by more than one order of magnitude. The human workload is also reduced substantially. We test this approach by computing the high PT elasticity of MgO and forsterite. We show that one can obtain very good agreement with full first-principles results and experimental data. Research supported by NSF/EAR, NSF/ITR (VLab), and MSI (U of MN).

  9. Enhancing simulation of efficiency with analytical tools. [combining computer simulation and analytical techniques for cost reduction

    NASA Technical Reports Server (NTRS)

    Seltzer, S. M.

    1974-01-01

    Some means of combining both computer simulation and analytical techniques are indicated in order to mutually enhance their efficiency as design tools and to motivate those involved in engineering design to consider using such combinations. While the idea is not new, heavy reliance on computers often seems to overshadow the potential utility of analytical tools. Although the example used is drawn from the area of dynamics and control, the principles espoused are applicable to other fields. In the example, the parameter-plane stability analysis technique is described briefly and extended beyond that reported in the literature to increase its utility (through a simple set of recursive formulas) and its applicability (through portrayal of the effect of varying the sampling period of the computer). The numerical values that were rapidly selected by analysis were found to be correct for the hybrid computer simulation for which they were needed. This obviated the need for cut-and-try methods to choose the numerical values, thereby saving both time and computer utilization.

  10. Development of CAD prototype system for Crohn's disease

    NASA Astrophysics Data System (ADS)

    Oda, Masahiro; Kitasaka, Takayuki; Furukawa, Kazuhiro; Watanabe, Osamu; Ando, Takafumi; Goto, Hidemi; Mori, Kensaku

    2010-03-01

    The purpose of this paper is to present a CAD prototype system for Crohn's disease. Crohn's disease causes inflammation or ulcers of the gastrointestinal tract. The number of patients with Crohn's disease is increasing in Japan. Symptoms of Crohn's disease include intestinal stenosis, longitudinal ulcers, and fistulae. An optical endoscope cannot pass through an intestinal stenosis in some cases. We propose a new CAD system using abdominal fecal-tagging CT images for efficient diagnosis of Crohn's disease. The system displays virtual unfolded (VU), virtual endoscopic, curved planar reconstruction, multi-planar reconstruction, and outside views of both the small and large intestines. To generate the VU views, we employ a small and large intestine extraction method followed by a simple electronic cleansing method. The intestine extraction is based on a region growing process, which uses the characteristic that tagged fluid neighbors air in the intestine. The electronic cleansing enables observation of the intestinal wall under tagged fluid. We change the height of the VU views according to the perimeter of the intestine. In addition, we developed a method to enhance longitudinal ulcers in the views of the system. We enhance concave parts of the intestinal wall, which are caused by longitudinal ulcers, based on local intensity structure analysis. We examined the small and large intestines of eleven CT images with the proposed system. The VU views enabled efficient observation of the intestinal wall. The height change of the VU views helps find intestinal stenoses in the VU views. The concave region enhancement made longitudinal ulcers clear in the views.

  11. Implant-supported fixed dental prostheses with CAD/CAM-fabricated porcelain crown and zirconia-based framework.

    PubMed

    Takaba, Masayuki; Tanaka, Shinpei; Ishiura, Yuichi; Baba, Kazuyoshi

    2013-07-01

    Recently, fixed dental prostheses (FDPs) with a hybrid structure of CAD/CAM porcelain crowns adhered to a CAD/CAM zirconia framework (PAZ) have been developed. The aim of this report was to describe the clinical application of a newly developed implant-supported FDP fabrication system, which uses PAZ, and to evaluate the outcome after a maximum application period of 36 months. Implants were placed in three patients with edentulous areas in either the maxilla or mandible. After the implant fixtures had successfully integrated with bone, gold-platinum alloy or zirconia custom abutments were first fabricated. Zirconia framework wax-up was performed on the custom abutments, and the CAD/CAM zirconia framework was prepared using the CAD/CAM system. Next, wax-up was performed on working models for porcelain crown fabrication, and CAD/CAM porcelain crowns were fabricated. The CAD/CAM zirconia frameworks and CAD/CAM porcelain crowns were bonded using adhesive resin cement, and the PAZ was cemented. Cementation of the implant superstructure improved the esthetics and masticatory efficiency in all patients. No undesirable outcomes, such as superstructure chipping, stomatognathic dysfunction, or periimplant bone resorption, were observed in any of the patients. PAZ may be a potential solution for ceramic-related clinical problems such as chipping and fracture and associated complicated repair procedures in implant-supported FDPs.

  12. Efficient mapping algorithms for scheduling robot inverse dynamics computation on a multiprocessor system

    NASA Technical Reports Server (NTRS)

    Lee, C. S. G.; Chen, C. L.

    1989-01-01

    Two efficient mapping algorithms for scheduling the robot inverse dynamics computation consisting of m computational modules with precedence relationship to be executed on a multiprocessor system consisting of p identical homogeneous processors with processor and communication costs to achieve minimum computation time are presented. An objective function is defined in terms of the sum of the processor finishing time and the interprocessor communication time. The minimax optimization is performed on the objective function to obtain the best mapping. This mapping problem can be formulated as a combination of the graph partitioning and the scheduling problems; both have been known to be NP-complete. Thus, to speed up the searching for a solution, two heuristic algorithms were proposed to obtain fast but suboptimal mapping solutions. The first algorithm utilizes the level and the communication intensity of the task modules to construct an ordered priority list of ready modules and the module assignment is performed by a weighted bipartite matching algorithm. For a near-optimal mapping solution, the problem can be solved by the heuristic algorithm with simulated annealing. These proposed optimization algorithms can solve various large-scale problems within a reasonable time. Computer simulations were performed to evaluate and verify the performance and the validity of the proposed mapping algorithms. Finally, experiments for computing the inverse dynamics of a six-jointed PUMA-like manipulator based on the Newton-Euler dynamic equations were implemented on an NCUBE/ten hypercube computer to verify the proposed mapping algorithms. Computer simulation and experimental results are compared and discussed.
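
    The flavor of the level-based priority-list heuristic can be sketched as follows. This is a generic list-scheduling illustration in the same spirit (levels order the modules, and each module goes to the processor that can finish it earliest, with a fixed penalty for interprocessor communication); it is not the paper's weighted bipartite matching or simulated annealing algorithm, and the task graph, costs, and communication penalty are invented.

    ```python
    # Hypothetical task graph: module -> (compute cost, list of predecessor modules)
    tasks = {
        "A": (3, []), "B": (2, ["A"]), "C": (4, ["A"]),
        "D": (2, ["B", "C"]), "E": (3, ["C"]), "F": (1, ["D", "E"]),
    }
    COMM = 1        # penalty when a predecessor ran on a different processor
    P = 2           # number of identical processors

    def level(t, memo={}):
        """Longest compute path from module t to a sink; higher levels schedule first."""
        if t not in memo:
            succs = [s for s, (_, preds) in tasks.items() if t in preds]
            memo[t] = tasks[t][0] + (max(level(s, memo) for s in succs) if succs else 0)
        return memo[t]

    placed_on, finish = {}, {}
    proc_free = [0.0] * P

    # Sorting by decreasing level also respects precedence, because a module's
    # level always exceeds that of every one of its successors.
    for t in sorted(tasks, key=level, reverse=True):
        cost, preds = tasks[t]
        best = None
        for p in range(P):
            start = max(proc_free[p],
                        max((finish[q] + (COMM if placed_on[q] != p else 0) for q in preds),
                            default=0.0))
            if best is None or start + cost < best[0]:
                best = (start + cost, p)
        end, p = best
        placed_on[t], finish[t], proc_free[p] = p, end, end

    print("makespan:", max(finish.values()))
    print("assignment:", placed_on)
    ```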

  13. Potential reasons for differences in CAD effectiveness evaluated using laboratory and clinical studies

    NASA Astrophysics Data System (ADS)

    He, Xin; Samuelson, Frank; Zeng, Rongping; Sahiner, Berkman

    2015-03-01

    Research studies have investigated a number of factors that may impact the performance assessment of computer aided detection (CAD) effectiveness, such as the inherent design of the CAD, the image and reader samples, and the assessment methods. In this study, we focused on the effect of prevalence on cue validity (co-occurrence of cue and signal) and learning as potentially important factors in CAD assessment. For example, the prevalence of cases with breast cancer is around 50% in laboratory CAD studies, which is 100 times higher than that in breast cancer screening. Although ROC is prevalence-independent, an observer's use of CAD involves tasks that are more complicated than binary classification, including: search, detection, classification, cueing and learning. We developed models to investigate the potential impact of prevalence on cue validity and the learning of cue validity tasks. We hope this work motivates new studies that investigate previously under-explored factors involved in image interpretation with a new modality in its assessment.

  14. Accuracy of machine milling and spark erosion with a CAD/CAM system.

    PubMed

    Andersson, M; Carlsson, L; Persson, M; Bergman, B

    1996-08-01

    A method for manufacturing crowns and fixed partial dentures based on CAD/CAM has been developed as an alternative to the lost wax technique and the casting of an alloy. In this process two steps are included: milling and spark erosion. The computer-assisted design (CAD) relies heavily on the accuracy of the milling and spark erosion processes to achieve a clinically acceptable restoration. These two processes must be able to produce the crown data generated in the CAD files. This study evaluated the accuracy of the Procera CAD/CAM system in creating specific geometric bodies that were compared with the known dimensions in the CAD files for these bodies. The manufacturing errors of milling (ellipse +/- 6.5 microm, square +/- 3.4 microm, and cylinder +/- 5.8 microm) and spark erosion (ellipse +/- 8.6 microm and square +/- 10.4 microm) were determined. The accuracy of this manufacturing process demonstrated that this system was capable of producing a crown with a clinically accepted range for marginal opening gap dimension of less than 100 microm.

  15. On the Use of Parametric-CAD Systems and Cartesian Methods for Aerodynamic Design

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.; Pulliam, Thomas H.

    2004-01-01

    Automated, high-fidelity tools for aerodynamic design face critical issues in attempting to optimize real-life geometry and in permitting radical design changes. Success in these areas promises not only significantly shorter design-cycle times, but also superior and unconventional designs. To address these issues, we investigate the use of a parametric-CAD system in conjunction with an embedded-boundary Cartesian method. Our goal is to combine the modeling capabilities of feature-based CAD with the robustness and flexibility of component-based Cartesian volume-mesh generation for complex geometry problems. We present the development of an automated optimization framework with a focus on the deployment of such a CAD-based design approach in a heterogeneous parallel computing environment.

  16. Study on the integration approaches to CAD/CAPP/FMS in garment CIMS

    NASA Astrophysics Data System (ADS)

    Wang, Xiankui; Tian, Wensheng; Liu, Chengying; Li, Zhizhong

    1995-08-01

    Computer integrated manufacturing system (CIMS), as an advanced methodology, has been applied in many industry fields. There is, however, little research on the application of CIMS in the garment industry, especially on the integrated approach to CAD, CAPP, and FMS in garment CIMS. In this paper, the current situations of CAD, CAPP, and FMS in the garment industry are discussed, and information requirements between them as well as the integrated approaches are also investigated. The representation of the garments' product data by the group technology coding is proposed. Based on the group technology, a shared data base as an integration element can be constructed, which leads to the integration of CAD/CAPP/FMS in garment CIMS.

  17. Mechanical design productivity using CAD graphics - A user's point of view

    NASA Astrophysics Data System (ADS)

    Boltz, R. J.; Avery, J. T., Jr.

    1985-02-01

    The present investigation is concerned with the mechanical design productivity resulting from the use of Computer-Aided Design (CAD) graphics as a design tool. The considered studies had been conducted by a company which is involved in the design, development, and manufacture of government and defense products. Attention is given to CAD graphics for mechanical design, productivity, an overall productivity assessment, the use of CAD graphics for basic mechanical design, productivity in engineering-related areas, and an overall engineering productivity assessment. The investigation shows that there was no appreciable improvement in productivity with respect to basic mechanical design. However, rather substantial increases could be realized in productivity for engineering-related activities.

  18. Switchgrass PviCAD1: Understanding residues important for substrate preferences and activity

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Lignin is a major component of plant cell walls and is a complex aromatic heteropolymer. Reducing lignin content improves conversion efficiency into liquid fuels, and enzymes involved in lignin biosynthesis are attractive targets for bioengineering. Cinnamyl alcohol dehydrogenase (CAD) catalyzes t...

  19. Efficiency Improvement Opportunities for Personal Computer Monitors. Implications for Market Transformation Programs

    SciTech Connect

    Park, Won Young; Phadke, Amol; Shah, Nihar

    2012-06-29

    Displays account for a significant portion of electricity consumed in personal computer (PC) use, and global PC monitor shipments are expected to continue to increase. We assess the market trends in the energy efficiency of PC monitors that are likely to occur without any additional policy intervention and estimate that display efficiency will likely improve by over 40% by 2015 compared to today’s technology. We evaluate the cost effectiveness of a key technology which further improves efficiency beyond this level by at least 20% and find that its adoption is cost effective. We assess the potential for further improving efficiency taking into account the recent development of universal serial bus (USB) powered liquid crystal display (LCD) monitors and find that the current technology available and deployed in USB powered monitors has the potential to deeply reduce energy consumption by as much as 50%. We provide insights for policies and programs that can be used to accelerate the adoption of efficient technologies to capture global energy saving potential from PC monitors which we estimate to be 9.2 terawatt-hours [TWh] per year in 2015.

  20. Capture Efficiency of Biocompatible Magnetic Nanoparticles in Arterial Flow: A Computer Simulation for Magnetic Drug Targeting.

    PubMed

    Lunnoo, Thodsaphon; Puangmali, Theerapong

    2015-12-01

    The primary limitation of magnetic drug targeting (MDT) relates to the strength of an external magnetic field which decreases with increasing distance. Small nanoparticles (NPs) displaying superparamagnetic behaviour are also required in order to reduce embolization in the blood vessel. The small NPs, however, make it difficult to vector NPs and keep them in the desired location. The aims of this work were to investigate parameters influencing the capture efficiency of the drug carriers in mimicked arterial flow. In this work, we computationally modelled and evaluated capture efficiency in MDT with COMSOL Multiphysics 4.4. The studied parameters were (i) magnetic nanoparticle size, (ii) three classes of magnetic cores (Fe3O4, Fe2O3, and Fe), and (iii) the thickness of biocompatible coating materials (Au, SiO2, and PEG). It was found that the capture efficiency of small particles decreased with decreasing size and was less than 5 % for magnetic particles in the superparamagnetic regime. The thickness of non-magnetic coating materials did not significantly influence the capture efficiency of MDT. It was difficult to capture small drug carriers (D<200 nm) in the arterial flow. We suggest that the MDT with high-capture efficiency can be obtained in small vessels and low-blood velocities such as micro-capillary vessels. PMID:26515074

  1. Custom hip prostheses by integrating CAD and casting technology

    NASA Astrophysics Data System (ADS)

    Silva, Pedro F.; Leal, Nuno; Neto, Rui J.; Lino, F. Jorge; Reis, Ana

    2012-09-01

    Total Hip Arthroplasty (THA) is a surgical intervention that has been achieving high rates of success, leaving room for research on long-run durability, patient comfort, and cost reduction. Even so, up to the present, little research has been done to improve the method of manufacturing customized prostheses. Common customized prostheses are made by full machining. This document presents a different methodology which combines the study of medical images through CAD (Computer Aided Design) software, SL additive manufacturing, ceramic shell manufacture, precision foundry with titanium alloys, and Computer Aided Manufacturing (CAM). The goal is to achieve the best comfort for the patient, the best stress distribution, and the maximum lifetime of the prosthesis produced by this integrated methodology. The way to achieve this is to make custom hip prostheses which are adapted to each patient's needs and natural physiognomy. Not only is the process reliable, but it also represents a cost reduction compared to the conventional fully machined custom hip prosthesis.

  2. Uncertainty in Aspiration Efficiency Estimates from Torso Simplifications in Computational Fluid Dynamics Simulations

    PubMed Central

    Anthony, T. Renée

    2013-01-01

    Computational fluid dynamics (CFD) has been used to report particle inhalability in low velocity freestreams, where realistic faces but simplified, truncated, and cylindrical human torsos were used. When compared to wind tunnel velocity studies, the truncated models were found to underestimate the air’s upward velocity near the humans, raising questions about aspiration estimation. This work compares aspiration efficiencies for particles ranging from 7 to 116 µm using three torso geometries: (i) a simplified truncated cylinder, (ii) a non-truncated cylinder, and (iii) an anthropometrically realistic humanoid body. The primary aim of this work is to (i) quantify the errors introduced by using a simplified geometry and (ii) determine the required level of detail to adequately represent a human form in CFD studies of aspiration efficiency. Fluid simulations used the standard k-epsilon turbulence models, with freestream velocities at 0.1, 0.2, and 0.4 m s−1 and breathing velocities at 1.81 and 12.11 m s−1 to represent at-rest and heavy breathing rates, respectively. Laminar particle trajectory simulations were used to determine the upstream area, also known as the critical area, where particles would be inhaled. These areas were used to compute aspiration efficiencies for facing the wind. Significant differences were found in both vertical velocity estimates and the location of the critical area between the three models. However, differences in aspiration efficiencies between the three forms were <8.8% over all particle sizes, indicating that there is little difference in aspiration efficiency between torso models. PMID:23006817

  3. Extension of the TDCR model to compute counting efficiencies for radionuclides with complex decay schemes.

    PubMed

    Kossert, K; Cassette, Ph; Carles, A Grau; Jörg, G; Gostomski, Christroph Lierse V; Nähle, O; Wolf, Ch

    2014-05-01

    The triple-to-double coincidence ratio (TDCR) method is frequently used to measure the activity of radionuclides decaying by pure β emission or electron capture (EC). Some radionuclides with more complex decays have also been studied, but accurate calculations of decay branches which are accompanied by many coincident γ transitions have not yet been investigated. This paper describes recent extensions of the model to make efficiency computations for more complex decay schemes possible. In particular, the MICELLE2 program that applies a stochastic approach of the free parameter model was extended. With an improved code, efficiencies for β(-), β(+) and EC branches with up to seven coincident γ transitions can be calculated. Moreover, a new parametrization for the computation of electron stopping powers has been implemented to compute the ionization quenching function of 10 commercial scintillation cocktails. In order to demonstrate the capabilities of the TDCR method, the following radionuclides are discussed: (166m)Ho (complex β(-)/γ), (59)Fe (complex β(-)/γ), (64)Cu (β(-), β(+), EC and EC/γ) and (229)Th in equilibrium with its progenies (decay chain with many α, β and complex β(-)/γ transitions).

  4. Computational Recipe for Efficient Description of Large-Scale Conformational Changes in Biomolecular Systems.

    PubMed

    Moradi, Mahmoud; Tajkhorshid, Emad

    2014-07-01

    Characterizing large-scale structural transitions in biomolecular systems poses major technical challenges to both experimental and computational approaches. On the computational side, efficient sampling of the configuration space along the transition pathway remains the most daunting challenge. Recognizing this issue, we introduce a knowledge-based computational approach toward describing large-scale conformational transitions using (i) nonequilibrium, driven simulations combined with work measurements and (ii) free energy calculations using empirically optimized biasing protocols. The first part is based on designing mechanistically relevant, system-specific reaction coordinates whose usefulness and applicability in inducing the transition of interest are examined using knowledge-based, qualitative assessments along with nonequilibrium work measurements, which provide an empirical framework for optimizing the biasing protocol. The second part employs the optimized biasing protocol resulting from the first part to initiate free energy calculations and characterize the transition quantitatively. Using a biasing protocol fine-tuned to a particular transition not only improves the accuracy of the resulting free energies but also speeds up the convergence. The efficiency of the sampling will be assessed by employing dimensionality reduction techniques to help detect possible flaws and provide potential improvements in the design of the biasing protocol. Structural transition of a membrane transporter will be used as an example to illustrate the workings of the proposed approach.

  5. Efficient Calibration of Computationally Intensive Groundwater Models through Surrogate Modelling with Lower Levels of Fidelity

    NASA Astrophysics Data System (ADS)

    Razavi, S.; Anderson, D.; Martin, P.; MacMillan, G.; Tolson, B.; Gabriel, C.; Zhang, B.

    2012-12-01

    Many sophisticated groundwater models tend to be computationally intensive as they rigorously represent detailed scientific knowledge about the groundwater systems. Calibration (model inversion), which is a vital step of groundwater model development, can require hundreds or thousands of model evaluations (runs) for different sets of parameters and as such demand prohibitively large computational time and resources. One common strategy to circumvent this computational burden is surrogate modelling which is concerned with developing and utilizing fast-to-run surrogates of the original computationally intensive models (also called fine models). Surrogates can be either based on statistical and data-driven models such as kriging and neural networks or simplified physically-based models with lower fidelity to the original system (also called coarse models). Fidelity in this context refers to the degree of the realism of a simulation model. This research initially investigates different strategies for developing lower-fidelity surrogates of a fine groundwater model and their combinations. These strategies include coarsening the fine model, relaxing the numerical convergence criteria, and simplifying the model geological conceptualisation. Trade-offs between model efficiency and fidelity (accuracy) are of special interest. A methodological framework is developed for coordinating the original fine model with its lower-fidelity surrogates with the objective of efficiently calibrating the parameters of the original model. This framework is capable of mapping the original model parameters to the corresponding surrogate model parameters and also mapping the surrogate model response for the given parameters to the original model response. This framework is general in that it can be used with different optimization and/or uncertainty analysis techniques available for groundwater model calibration and parameter/predictive uncertainty assessment. A real-world computationally

  6. CAD/CAM interface design of excimer laser micro-processing system

    NASA Astrophysics Data System (ADS)

    Jing, Liang; Chen, Tao; Zuo, Tiechuan

    2005-12-01

    Recently, CAD/CAM technology has been gradually adopted in the field of laser processing. Before the CAD/CAM interface was designed, the excimer laser micro-processing system recognized only G-code instructions, and designing a part directly with G-code was difficult for users: the efficiency was low and the probability of making errors was high. Using the secondary development capability of AutoCAD with Visual Basic, an application was developed to pick up each entity's information in a graphic and convert it into that entity's processing parameters. An additional function was also added to the former controlling software to identify the processing parameters of each entity and realize continuous processing of the graphic. With this CAD/CAM interface, users can design a part in AutoCAD instead of writing G-code instructions. The time needed to design a part is sharply shortened, and this new way of designing helps guarantee that the processing parameters of the part are correct and unambiguous. The processing of a complex novel bio-chip has been realized by this new function.
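
    The original application used Visual Basic inside AutoCAD; purely to illustrate the same idea (walking each drawing entity and turning its geometry into a per-entity processing record), here is a Python sketch that reads an exported DXF file with the third-party ezdxf library. The file name, the chosen entity types, the record format, and the idea of encoding laser parameters in layer names are all illustrative assumptions.

    ```python
    import ezdxf  # third-party DXF reader; the original work used VB inside AutoCAD

    def extract_processing_parameters(dxf_path):
        """Walk every entity in model space and emit a simple per-entity
        parameter record (type, layer, key geometry) for a laser controller."""
        doc = ezdxf.readfile(dxf_path)
        records = []
        for e in doc.modelspace():
            kind = e.dxftype()
            if kind == "LINE":
                rec = {"type": "line",
                       "start": tuple(e.dxf.start)[:2], "end": tuple(e.dxf.end)[:2]}
            elif kind == "CIRCLE":
                rec = {"type": "circle",
                       "center": tuple(e.dxf.center)[:2], "radius": e.dxf.radius}
            elif kind == "LWPOLYLINE":
                rec = {"type": "polyline",
                       "points": [tuple(p[:2]) for p in e.get_points()]}
            else:
                continue  # skip entity types the controller cannot process
            rec["layer"] = e.dxf.layer  # layer name could encode power, passes, etc.
            records.append(rec)
        return records

    if __name__ == "__main__":
        for r in extract_processing_parameters("biochip_pattern.dxf"):  # hypothetical file
            print(r)
    ```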

  7. A new data integration approach for AutoCAD and GIS

    NASA Astrophysics Data System (ADS)

    Ye, Hongmei; Li, Yuhong; Wang, Cheng; Li, Lijun

    2006-10-01

    GIS has advantages in both spatial data analysis and management, particularly in the management of geometric and attributive information, which has attracted much attention among researchers around the world. AutoCAD plays an increasingly important role as one of the main data sources for GIS, and a variety of related work and achievements can be found in the literature. However, conventional data integration from AutoCAD to GIS is time-consuming and can also cause loss of both geometric and attributive information in a large system. It is therefore necessary and urgent to develop a new approach and algorithm for efficient, high-quality data integration. In this paper, a novel data integration approach from AutoCAD to GIS is introduced, based on spatial data mining techniques and an analysis of the data structures of both AutoCAD and GIS. A practicable algorithm for the data conversion from CAD to GIS is given as well. Using a designed evaluation scheme, the accuracy of the conversion of both geometric and attributive information is demonstrated. Finally, the validity and feasibility of the new approach are shown by an experimental analysis.

  8. Improving Computational Efficiency of Prediction in Model-Based Prognostics Using the Unscented Transform

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew John; Goebel, Kai Frank

    2010-01-01

    Model-based prognostics captures system knowledge in the form of physics-based models of components, and how they fail, in order to obtain accurate predictions of end of life (EOL). EOL is predicted based on the estimated current state distribution of a component and expected profiles of future usage. In general, this requires simulations of the component using the underlying models. In this paper, we develop a simulation-based prediction methodology that achieves computational efficiency by performing only the minimal number of simulations needed in order to accurately approximate the mean and variance of the complete EOL distribution. This is performed through the use of the unscented transform, which predicts the means and covariances of a distribution passed through a nonlinear transformation. In this case, the EOL simulation acts as that nonlinear transformation. In this paper, we review the unscented transform, and describe how this concept is applied to efficient EOL prediction. As a case study, we develop a physics-based model of a solenoid valve, and perform simulation experiments to demonstrate improved computational efficiency without sacrificing prediction accuracy.
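
    A minimal sketch of the unscented-transform idea described above, with a made-up one-line degradation model standing in for the solenoid-valve physics: sigma points of the current state distribution are each simulated to end of life, and the weighted statistics of the resulting EOL times approximate the mean and variance using only 2n+1 simulations. All numbers below are hypothetical.

    ```python
    import numpy as np

    def sigma_points(mean, cov, kappa=0.0):
        """Standard unscented-transform sigma points and weights for an n-dim Gaussian."""
        n = mean.size
        S = np.linalg.cholesky((n + kappa) * cov)
        pts = [mean] + [mean + S[:, i] for i in range(n)] + [mean - S[:, i] for i in range(n)]
        w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
        w[0] = kappa / (n + kappa)
        return np.array(pts), w

    def eol_simulation(state, threshold=1.0, dt=1.0):
        """Hypothetical nonlinear transformation: simulate damage growth until failure."""
        damage, rate = state
        t = 0.0
        while damage < threshold:
            damage += rate * (1.0 + 0.5 * damage) * dt   # toy nonlinear growth law
            t += dt
        return t

    mean = np.array([0.2, 0.01])                   # current damage level and growth rate
    cov = np.diag([0.02**2, 0.002**2])             # current state covariance
    pts, w = sigma_points(mean, cov, kappa=1.0)
    eol = np.array([eol_simulation(p) for p in pts])
    eol_mean = np.dot(w, eol)
    eol_var = np.dot(w, (eol - eol_mean) ** 2)
    print(f"EOL mean ~ {eol_mean:.1f}, std ~ {np.sqrt(eol_var):.1f} "
          f"(from only {len(pts)} simulations)")
    ```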

  9. Computationally Efficient Upscaling Methodology for Predicting Thermal Conductivity of Nuclear Waste Forms

    SciTech Connect

    Li, Dongsheng; Sun, Xin; Khaleel, Mohammad A.

    2011-09-28

    This study evaluated different upscaling methods for predicting thermal conductivity in loaded nuclear waste forms, a heterogeneous material system, and compared the efficiency and accuracy of these methods. Thermal conductivity of the loaded nuclear waste form is an important property for the waste form integrated performance and safety code (IPSC). The effective thermal conductivity, obtained from microstructure information and the local thermal conductivity of the different components, is critical in predicting the life and performance of the waste form during storage: the heat generated during storage is directly related to thermal conductivity, which in turn determines the mechanical deformation behavior, corrosion resistance, and aging performance. Several methods, including the Taylor model, the Sachs model, a self-consistent model, and statistical upscaling models, were developed and implemented. Due to the absence of experimental data, prediction results from the finite element method (FEM) were used as a reference to determine the accuracy of the different upscaling models. Micrographs from different loadings of nuclear waste were used in the prediction of thermal conductivity. The prediction results demonstrated that, in terms of efficiency, the bounding models (Taylor and Sachs) are better than the self-consistent model, the statistical upscaling method, and FEM. Balancing computational resources and accuracy, statistical upscaling is a computationally efficient method for predicting the effective thermal conductivity of nuclear waste forms.
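
    As a rough illustration of the simplest upscaling models mentioned above, and under the usual reading that the Taylor- and Sachs-type models reduce to volume-fraction-weighted arithmetic and harmonic means for an effective conductivity, the sketch below brackets the effective value for a hypothetical two-phase waste form; the phase conductivities and volume fractions are invented.

    ```python
    import numpy as np

    def taylor_bound(k, f):
        """Upper (Taylor/Voigt-type) bound: volume-fraction-weighted arithmetic mean."""
        return np.dot(f, k)

    def sachs_bound(k, f):
        """Lower (Sachs/Reuss-type) bound: volume-fraction-weighted harmonic mean."""
        return 1.0 / np.dot(f, 1.0 / k)

    # Hypothetical two-phase waste form: glass matrix and a loaded ceramic phase.
    k = np.array([1.1, 3.0])     # phase thermal conductivities, W/(m K)
    f = np.array([0.7, 0.3])     # volume fractions (must sum to 1)

    print(f"Taylor (upper) bound: {taylor_bound(k, f):.2f} W/(m K)")
    print(f"Sachs  (lower) bound: {sachs_bound(k, f):.2f} W/(m K)")
    # A microstructure-resolved FEM estimate should fall between these two bounds.
    ```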

  10. A stitch in time: Efficient computation of genomic DNA melting bubbles

    PubMed Central

    Tøstesen, Eivind

    2008-01-01

    Background: It is of biological interest to make genome-wide predictions of the locations of DNA melting bubbles using statistical mechanics models. Computationally, this poses the challenge that a generic search through all combinations of bubble starts and ends is quadratic. Results: An efficient algorithm is described, which shows that the time complexity of the task is O(N log N) rather than quadratic. The algorithm exploits that bubble lengths may be limited, but without a prior assumption of a maximal bubble length. No approximations, such as windowing, have been introduced to reduce the time complexity. More than just finding the bubbles, the algorithm produces a stitch profile, which is a probabilistic graphical model of bubbles and helical regions. The algorithm applies a probability peak finding method based on a hierarchical analysis of the energy barriers in the Poland-Scheraga model. Conclusion: Exact and fast computation of genomic stitch profiles is thus feasible. Sequences of several megabases have been computed, limited only by computer memory. Possible applications are genome-wide comparisons of bubbles with promoters, TSS, viral integration sites, and other melting-related regions. PMID:18637171

  11. Accurate and computationally efficient mixing models for the simulation of turbulent mixing with PDF methods

    NASA Astrophysics Data System (ADS)

    Meyer, Daniel W.; Jenny, Patrick

    2013-08-01

    Different simulation methods are applicable to study turbulent mixing. When applying probability density function (PDF) methods, turbulent transport, and chemical reactions appear in closed form, which is not the case in second moment closure methods (RANS). Moreover, PDF methods provide the entire joint velocity-scalar PDF instead of a limited set of moments. In PDF methods, however, a mixing model is required to account for molecular diffusion. In joint velocity-scalar PDF methods, mixing models should also account for the joint velocity-scalar statistics, which is often under appreciated in applications. The interaction by exchange with the conditional mean (IECM) model accounts for these joint statistics, but requires velocity-conditional scalar means that are expensive to compute in spatially three dimensional settings. In this work, two alternative mixing models are presented that provide more accurate PDF predictions at reduced computational cost compared to the IECM model, since no conditional moments have to be computed. All models are tested for different mixing benchmark cases and their computational efficiencies are inspected thoroughly. The benchmark cases involve statistically homogeneous and inhomogeneous settings dealing with three streams that are characterized by two passive scalars. The inhomogeneous case clearly illustrates the importance of accounting for joint velocity-scalar statistics in the mixing model. Failure to do so leads to significant errors in the resulting scalar means, variances and other statistics.

  12. Algorithms for Efficient Computation of Transfer Functions for Large Order Flexible Systems

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Giesy, Daniel P.

    1998-01-01

    An efficient and robust computational scheme is given for the calculation of the frequency response function of a large order, flexible system implemented with a linear, time invariant control system. Advantage is taken of the highly structured sparsity of the system matrix of the plant based on a model of the structure using normal mode coordinates. The computational time per frequency point of the new computational scheme is a linear function of system size, a significant improvement over traditional, full-matrix techniques whose computational times per frequency point range from quadratic to cubic functions of system size. This permits the practical frequency domain analysis of systems of much larger order than by traditional, full-matrix techniques. Formulations are given for both open- and closed-loop systems. Numerical examples are presented showing the advantages of the present formulation over traditional approaches, both in speed and in accuracy. Using a model with 703 structural modes, the present method was up to two orders of magnitude faster than a traditional method. The present method generally showed good to excellent accuracy throughout the range of test frequencies, while traditional methods gave adequate accuracy for lower frequencies, but generally deteriorated in performance at higher frequencies, with worst case errors being many orders of magnitude times the correct values.
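
    As a sketch of why normal-mode (modal) structure makes the frequency response cheap to evaluate, the code below assembles H(jw) mode by mode, so the cost per frequency point grows linearly with the number of modes instead of requiring a full matrix inversion. The three-mode plant is invented, and unlike the paper's formulation this sketch covers only the open-loop case.

    ```python
    import numpy as np

    def modal_frf(omegas, wn, zeta, B_modal, C_modal):
        """Frequency response H(w) = sum_k C_k B_k / (wn_k^2 - w^2 + 2j*zeta_k*wn_k*w),
        exploiting the diagonal modal dynamics (work per frequency is linear in modes)."""
        H = np.zeros((len(omegas), C_modal.shape[0], B_modal.shape[1]), dtype=complex)
        for i, w in enumerate(omegas):
            den = wn**2 - w**2 + 2j * zeta * wn * w   # one scalar denominator per mode
            H[i] = (C_modal / den) @ B_modal          # weighted sum of rank-1 mode terms
        return H

    # Hypothetical three-mode structure with one input and one output.
    wn = np.array([2.0, 9.0, 25.0])          # natural frequencies, rad/s
    zeta = np.array([0.01, 0.02, 0.02])      # modal damping ratios
    B_modal = np.array([[0.8], [0.5], [0.2]])      # modal input gains
    C_modal = np.array([[1.0, -0.7, 0.3]])         # modal output gains

    omegas = np.linspace(0.1, 40.0, 400)
    H = modal_frf(omegas, wn, zeta, B_modal, C_modal)
    print("peak magnitude:", np.abs(H[:, 0, 0]).max())
    ```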

  13. Efficient Computation of Closed-loop Frequency Response for Large Order Flexible Systems

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Giesy, Daniel P.

    1997-01-01

    An efficient and robust computational scheme is given for the calculation of the frequency response function of a large order, flexible system implemented with a linear, time invariant control system. Advantage is taken of the highly structured sparsity of the system matrix of the plant based on a model of the structure using normal mode coordinates. The computational time per frequency point of the new computational scheme is a linear function of system size, a significant improvement over traditional, full-matrix techniques whose computational times per frequency point range from quadratic to cubic functions of system size. This permits the practical frequency domain analysis of systems of much larger order than by traditional, full-matrix techniques. Formulations are given for both open- and closed-loop systems. Numerical examples are presented showing the advantages of the present formulation over traditional approaches, both in speed and in accuracy. Using a model with 703 structural modes, a speed-up of almost two orders of magnitude was observed while accuracy improved by up to 5 decimal places.

  14. A computationally efficient description of heterogeneous freezing: A simplified version of the Soccer ball model

    NASA Astrophysics Data System (ADS)

    Niedermeier, Dennis; Ervens, Barbara; Clauss, Tina; Voigtländer, Jens; Wex, Heike; Hartmann, Susan; Stratmann, Frank

    2014-01-01

    In a recent study, the Soccer ball model (SBM) was introduced for modeling and/or parameterizing heterogeneous ice nucleation processes. The model applies classical nucleation theory. It allows for a consistent description of both apparently singular and stochastic ice nucleation behavior, by distributing contact angles over the nucleation sites of a particle population assuming a Gaussian probability density function. The original SBM utilizes the Monte Carlo technique, which hampers its usage in atmospheric models, as fairly time-consuming calculations must be performed to obtain statistically significant results. Thus, we have developed a simplified and computationally more efficient version of the SBM. We successfully used the new SBM to parameterize experimental nucleation data of, e.g., bacterial ice nucleation. Both SBMs give identical results; however, the new model is computationally less expensive as confirmed by cloud parcel simulations. Therefore, it is a suitable tool for describing heterogeneous ice nucleation processes in atmospheric models.
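
    For orientation, the original (Monte Carlo) SBM described above can be sketched as follows: contact angles for the nucleation sites of each particle are sampled from a Gaussian, each site nucleates at a rate that grows steeply as the contact angle decreases, and the frozen fraction follows from the per-particle survival probability. In the sketch below the rate function j_het is a purely illustrative placeholder rather than the classical-nucleation-theory expression used by the authors, and all numerical values are assumptions.

```python
import numpy as np

def frozen_fraction_mc(t, n_particles=20000, n_site=50,
                       mu_theta=np.deg2rad(70.0), sigma_theta=np.deg2rad(10.0),
                       seed=0):
    """Monte Carlo, Soccer-ball-model-style frozen fraction at time t (s)."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(mu_theta, sigma_theta, size=(n_particles, n_site))
    theta = np.clip(theta, 1e-3, np.pi)          # keep contact angles physical

    # placeholder heterogeneous nucleation rate per site (1/s):
    # smaller contact angle -> much faster nucleation
    j_het = 1e-4 * np.exp(8.0 * (np.pi / 2 - theta))

    # probability that a particle has not nucleated on any of its sites
    p_unfrozen = np.exp(-j_het.sum(axis=1) * t)
    return 1.0 - p_unfrozen.mean()

for t in (1.0, 10.0, 100.0):
    print(t, frozen_fraction_mc(t))
```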

  15. Recent improvements in efficiency, accuracy, and convergence for implicit approximate factorization algorithms. [computational fluid dynamics]

    NASA Technical Reports Server (NTRS)

    Pulliam, T. H.; Steger, J. L.

    1985-01-01

    In 1977 and 1978, general-purpose, centrally space-differenced implicit finite difference codes in two and three dimensions were introduced. These codes, now called ARC2D and ARC3D, can run in either inviscid or viscous mode for steady or unsteady flow. Since their introduction, overall computational efficiency has been improved through a number of algorithmic changes. These changes relate to the use of a spatially varying time step, the use of a sequence of mesh refinements to establish approximate solutions, various ways to reduce inversion work, improved numerical dissipation terms, and more implicit treatment of terms. The objective of the present investigation is to describe these improvements and to quantify their advantages and disadvantages. It is found that, using established and simple procedures, a general-purpose computer code can remain competitive with specialized codes.
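
    Of the algorithmic changes listed, the spatially varying time step is the simplest to illustrate: each cell advances with its own pseudo-time step chosen from a local CFL condition, which accelerates convergence to a steady state without changing the converged solution. The one-dimensional sketch below is illustrative only and is not the ARC2D/ARC3D implementation.

```python
import numpy as np

def local_time_step(u, c, dx, cfl=2.5):
    """Spatially varying time step from a local CFL condition.

    u   : local convective velocity in each cell (m/s)
    c   : local speed of sound in each cell (m/s)
    dx  : cell size (scalar or per-cell array, m)
    cfl : CFL number permitted by the implicit scheme (> 1 is typical)
    """
    return cfl * dx / (np.abs(u) + c)

u  = np.linspace(50.0, 600.0, 101)     # illustrative flow field (m/s)
c  = np.full_like(u, 340.0)            # illustrative sound speed (m/s)
dt = local_time_step(u, c, dx=0.01)
print(dt.min(), dt.max())              # cells with low wave speed take larger steps
```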

  16. An efficient method for computing genus expansions and counting numbers in the Hermitian matrix model

    NASA Astrophysics Data System (ADS)

    Álvarez, Gabriel; Martínez Alonso, Luis; Medina, Elena

    2011-07-01

    We present a method to compute the genus expansion of the free energy of Hermitian matrix models from the large N expansion of the recurrence coefficients of the associated family of orthogonal polynomials. The method is based on the Bleher-Its deformation of the model, on its associated integral representation of the free energy, and on a method for solving the string equation which uses the resolvent of the Lax operator of the underlying Toda hierarchy. As a byproduct we obtain an efficient algorithm to compute generating functions for the enumeration of labeled k-maps which does not require the explicit expressions of the coefficients of the topological expansion. Finally we discuss the regularization of singular one-cut models within this approach.

  17. Use of global functions for improvement in efficiency of nonlinear analysis. [in computer structural displacement estimation

    NASA Technical Reports Server (NTRS)

    Almroth, B. O.; Stehlin, P.; Brogan, F. A.

    1981-01-01

    A method for improving the efficiency of nonlinear structural analysis by the use of global displacement functions is presented. The computer programs include options to define the global functions as input or let the program automatically select and update these functions. The program was applied to a number of structures: (1) 'pear-shaped cylinder' in compression, (2) bending of a long cylinder, (3) spherical shell subjected to point force, (4) panel with initial imperfections, (5) cylinder with cutouts. The sample cases indicate the usefulness of the procedure in the solution of nonlinear structural shell problems by the finite element method. It is concluded that the use of global functions for extrapolation will lead to savings in computer time.

  18. A more efficient formulation for computation of the maximum loading points in electric power systems

    SciTech Connect

    Chiang, H.D.; Jean-Jumeau, R.

    1995-05-01

    This paper presents a more efficient formulation for computation of the maximum loading points. A distinguishing feature of the new formulation is that, for n-dimensional load flow equations, it is of dimension (n + 1) instead of the dimension (2n + 1) of the existing formulation. This feature makes computation of the maximum loading points very inexpensive in comparison with the existing formulation. A theoretical basis for the new formulation is provided. The new problem formulation is derived by using a simple reparameterization scheme and exploiting the special properties of the power flow model. Moreover, the proposed test function is shown to be monotonic in the vicinity of a maximum loading point. Therefore, it allows one to monitor the approach to maximum loading points during the solution search process. Simulation results on a 234-bus system are presented.

  19. Modeling weakly-ionized plasmas in magnetic field: A new computationally-efficient approach

    NASA Astrophysics Data System (ADS)

    Parent, Bernard; Macheret, Sergey O.; Shneider, Mikhail N.

    2015-11-01

    Despite its success at simulating accurately both non-neutral and quasi-neutral weakly-ionized plasmas, the drift-diffusion model has been observed to be a particularly stiff set of equations. Recently, it was demonstrated that the stiffness of the system could be relieved by rewriting the equations such that the potential is obtained from Ohm's law rather than Gauss's law while adding some source terms to the ion transport equation to ensure that Gauss's law is satisfied in non-neutral regions. Although the latter was applicable to multicomponent and multidimensional plasmas, it could not be used for plasmas in which the magnetic field was significant. This paper hence proposes a new computationally-efficient set of electron and ion transport equations that can be used not only for a plasma with multiple types of positive and negative ions, but also for a plasma in a magnetic field. Because the proposed set of equations is obtained from the same physical model as the conventional drift-diffusion equations without introducing new assumptions or simplifications, it results in the same exact solution when the grid is refined sufficiently while being more computationally efficient: not only is the proposed approach considerably less stiff, and hence requires fewer iterations to reach convergence, but it also yields a converged solution that exhibits significantly higher resolution. The combined faster convergence and higher resolution are shown to result in a hundredfold increase in computational efficiency for some typical steady and unsteady plasma problems including non-neutral cathode and anode sheaths as well as quasi-neutral regions.

  20. An Accurate and Computationally Efficient Model for Membrane-Type Circular-Symmetric Micro-Hotplates

    PubMed Central

    Khan, Usman; Falconi, Christian

    2014-01-01

    Ideally, the design of high-performance micro-hotplates would require a large number of simulations because of the existence of many important design parameters as well as the possibly crucial effects of both spread and drift. However, the computational cost of FEM simulations, which are the only available tool for accurately predicting the temperature in micro-hotplates, is very high. As a result, micro-hotplate designers generally have no effective simulation tools for optimization. In order to circumvent these issues, here, we propose a model for practical circular-symmetric micro-hotplates which takes advantage of modified Bessel functions, a computationally efficient matrix approach for the relevant boundary conditions, Taylor linearization for modeling the Joule heating and radiation losses, and an external-region-segmentation strategy in order to accurately take into account radiation losses in the entire micro-hotplate. The proposed model is almost as accurate as FEM simulations and two to three orders of magnitude more computationally efficient (e.g., 45 s versus more than 8 h). The residual errors, which are mainly associated with the undesired heating in the electrical contacts, are small (e.g., a few degrees Celsius for an 800 °C operating temperature) and, for important analyses, almost constant. Therefore, we also introduce a computationally-easy single-FEM-compensation strategy in order to reduce the residual errors to about 1 °C. As illustrative examples of the power of our approach, we report the systematic investigation of a spread in the membrane thermal conductivity and of combined variations of both ambient and bulk temperatures. Our model enables a much faster characterization of micro-hotplates and, thus, a much more effective optimization prior to fabrication. PMID:24763214
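
    The modified-Bessel-function ingredient of the model can be illustrated with the standard radial "fin-type" conduction equation for a thin annular membrane, d^2(theta)/dr^2 + (1/r) d(theta)/dr - m^2 theta = 0, whose general solution is theta(r) = c1*I0(m r) + c2*K0(m r); the two constants follow from the boundary conditions. The geometry, the lumped parameter m, and the boundary temperatures in the sketch below are illustrative assumptions, not the authors' micro-hotplate parameters.

```python
import numpy as np
from scipy.special import i0, k0

def annular_temperature(r, r_in, r_out, theta_in, theta_out, m):
    """Temperature rise theta(r) in a thin annular membrane.

    Solves c1*I0(m r) + c2*K0(m r) with theta(r_in) = theta_in and
    theta(r_out) = theta_out (m lumps conduction and heat-loss terms).
    """
    A = np.array([[i0(m * r_in),  k0(m * r_in)],
                  [i0(m * r_out), k0(m * r_out)]])
    c1, c2 = np.linalg.solve(A, [theta_in, theta_out])
    return c1 * i0(m * r) + c2 * k0(m * r)

r = np.linspace(50e-6, 500e-6, 200)                # radius (m), illustrative
theta = annular_temperature(r, 50e-6, 500e-6,
                            theta_in=800.0, theta_out=0.0, m=5e3)
print(theta[:3])
```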

  1. CAD-RADS(TM) Coronary Artery Disease - Reporting and Data System. An expert consensus document of the Society of Cardiovascular Computed Tomography (SCCT), the American College of Radiology (ACR) and the North American Society for Cardiovascular Imaging (NASCI). Endorsed by the American College of Cardiology.

    PubMed

    Cury, Ricardo C; Abbara, Suhny; Achenbach, Stephan; Agatston, Arthur; Berman, Daniel S; Budoff, Matthew J; Dill, Karin E; Jacobs, Jill E; Maroules, Christopher D; Rubin, Geoffrey D; Rybicki, Frank J; Schoepf, U Joseph; Shaw, Leslee J; Stillman, Arthur E; White, Charles S; Woodard, Pamela K; Leipsic, Jonathon A

    2016-01-01

    The intent of CAD-RADS - Coronary Artery Disease Reporting and Data System is to create a standardized method to communicate findings of coronary CT angiography (coronary CTA) in order to facilitate decision-making regarding further patient management. The suggested CAD-RADS classification is applied on a per-patient basis and represents the highest-grade coronary artery lesion documented by coronary CTA. It ranges from CAD-RADS 0 (Zero) for the complete absence of stenosis and plaque to CAD-RADS 5 for the presence of at least one totally occluded coronary artery and should always be interpreted in conjunction with the impression found in the report. Specific recommendations are provided for further management of patients with stable or acute chest pain based on the CAD-RADS classification. The main goal of CAD-RADS is to standardize reporting of coronary CTA results and to facilitate communication of test results to referring physicians along with suggestions for subsequent patient management. In addition, CAD-RADS will provide a framework of standardization that may benefit education, research, peer-review and quality assurance with the potential to ultimately result in improved quality of care. PMID:27318587

  2. Hierarchy of Efficiently Computable and Faithful Lower Bounds to Quantum Discord.

    PubMed

    Piani, Marco

    2016-08-19

    Quantum discord expresses a fundamental nonclassicality of correlations that is more general than entanglement, but that, in its standard definition, is not easily evaluated. We derive a hierarchy of computationally efficient lower bounds to the standard quantum discord. Every nontrivial element of the hierarchy constitutes by itself a valid discordlike measure, based on a fundamental feature of quantum correlations: their lack of shareability. Our approach emphasizes how the difference between entanglement and discord depends on whether shareability is intended as a static property or as a dynamical process. PMID:27588837

  3. Computationally efficient method for estimation of angle of arrival with non-uniform reconfigurable receiver arrays.

    PubMed

    Lee, Hua

    2016-04-01

    The main focus of this paper is the design and formulation of a computationally efficient approach to the estimation of the angle of arrival with non-uniform reconfigurable receiver arrays. Subsequent to demodulation and matched filtering, the main signal processing task is a double-integration operation. The simplicity of this algorithm enables the implementation of the estimation procedure with simple operational amplifier (op-amp) circuits for real-time realization. This technique does not require uniform and structured array configurations, and is most effective for the estimation of angle of arrival with dynamically reconfigurable receiver arrays.

  4. Hierarchy of Efficiently Computable and Faithful Lower Bounds to Quantum Discord

    NASA Astrophysics Data System (ADS)

    Piani, Marco

    2016-08-01

    Quantum discord expresses a fundamental nonclassicality of correlations that is more general than entanglement, but that, in its standard definition, is not easily evaluated. We derive a hierarchy of computationally efficient lower bounds to the standard quantum discord. Every nontrivial element of the hierarchy constitutes by itself a valid discordlike measure, based on a fundamental feature of quantum correlations: their lack of shareability. Our approach emphasizes how the difference between entanglement and discord depends on whether shareability is intended as a static property or as a dynamical process.

  5. Efficient quantum-classical method for computing thermal rate constant of recombination: application to ozone formation.

    PubMed

    Ivanov, Mikhail V; Babikov, Dmitri

    2012-05-14

    An efficient method is proposed for computing the thermal rate constant of a recombination reaction that proceeds according to the energy transfer mechanism, in which an energized molecule is formed from the reactants first and is stabilized later by collision with a quencher. The mixed quantum-classical theory for the collisional energy transfer and the ro-vibrational energy flow [M. Ivanov and D. Babikov, J. Chem. Phys. 134, 144107 (2011)] is employed to treat the dynamics of the molecule + quencher collision. Efficiency is achieved by sampling simultaneously (i) the thermal collision energy, (ii) the impact parameter, and (iii) the incident direction of the quencher, as well as (iv) the rotational state of the energized molecule. This approach is applied to calculate the third-order rate constant of the recombination reaction that forms the (16)O(18)O(16)O isotopomer of ozone. Comparison of the predicted rate vs. experimental result is presented.

  6. Interoperation of heterogeneous CAD tools in Ptolemy II

    NASA Astrophysics Data System (ADS)

    Liu, Jie; Wu, Bicheng; Liu, Xiaojun; Lee, Edward A.

    1999-03-01

    Typical complex systems that involve microsensors and microactuators exhibit heterogeneity both at the implementation level and the problem level. For example, a system can be modeled using discrete events for digital circuits and SPICE-like analog descriptions for sensors. This heterogeneity exists not only in different implementation domains, but also at different levels of abstraction. This naturally leads to a heterogeneous approach to system design that uses domain-specific models of computation (MoC) at various levels of abstraction to define a system, and leverages multiple CAD tools to do simulation, verification and synthesis. As the size and scope of the system increase, the integration becomes too difficult and unmanageable if different tools are coordinated using simple scripts. In addition, for MEMS devices and mixed-signal circuits, it is essential to integrate tools with different MoCs to simulate the whole system. Ptolemy II, a heterogeneous system-level design tool, supports the interaction among different MoCs. This paper discusses heterogeneous CAD tool interoperability in the Ptolemy II framework. The key is to understand the semantic interface and classify the tools by their MoC and their level of abstraction. Interfaces are designed for each domain so that the external tools can be easily wrapped. Then the interoperability of the tools becomes the interoperability of the semantics. Ptolemy II can act as the standard interface among different tools to achieve the overall design modeling. A micro-accelerometer with digital feedback is studied as an example.

  7. Computer-Assisted Dieting: Effects of a Randomized Nutrition Intervention

    ERIC Educational Resources Information Center

    Schroder, Kerstin E. E.

    2011-01-01

    Objectives: To compare the effects of a computer-assisted dieting intervention (CAD) with and without self-management training on dieting among 55 overweight and obese adults. Methods: Random assignment to a single-session nutrition intervention (CAD-only) or a combined CAD plus self-management group intervention (CADG). Dependent variables were…

  8. A Computationally-Efficient Inverse Approach to Probabilistic Strain-Based Damage Diagnosis

    NASA Technical Reports Server (NTRS)

    Warner, James E.; Hochhalter, Jacob D.; Leser, William P.; Leser, Patrick E.; Newman, John A

    2016-01-01

    This work presents a computationally-efficient inverse approach to probabilistic damage diagnosis. Given strain data at a limited number of measurement locations, Bayesian inference and Markov Chain Monte Carlo (MCMC) sampling are used to estimate probability distributions of the unknown location, size, and orientation of damage. Substantial computational speedup is obtained by replacing a three-dimensional finite element (FE) model with an efficient surrogate model. The approach is experimentally validated on cracked test specimens where full field strains are determined using digital image correlation (DIC). Access to full field DIC data allows for testing of different hypothetical sensor arrangements, facilitating the study of strain-based diagnosis effectiveness as the distance between damage and measurement locations increases. The ability of the framework to effectively perform both probabilistic damage localization and characterization in cracked plates is demonstrated and the impact of measurement location on uncertainty in the predictions is shown. Furthermore, the analysis time to produce these predictions is orders of magnitude less than a baseline Bayesian approach with the FE method by utilizing surrogate modeling and effective numerical sampling approaches.
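
    A minimal sketch of the Bayesian machinery described above -- Metropolis-Hastings sampling of damage parameters, with the expensive finite element strain prediction replaced by a cheap surrogate -- is given below. The surrogate function, prior bounds, noise level, and synthetic data are hypothetical stand-ins, not the paper's trained surrogate or DIC measurements.

```python
import numpy as np

def surrogate_strain(params, sensor_x):
    """Hypothetical surrogate: predicted strain at sensor locations for
    damage parameters (location, size)."""
    loc, size = params
    return size * np.exp(-((sensor_x - loc) ** 2) / 0.01)

def log_posterior(params, data, sensor_x, sigma=1e-5):
    loc, size = params
    if not (0.0 < loc < 1.0 and 0.0 < size < 1e-2):    # uniform prior bounds
        return -np.inf
    resid = data - surrogate_strain(params, sensor_x)
    return -0.5 * np.sum(resid**2) / sigma**2

def metropolis(data, sensor_x, n_steps=20000):
    rng = np.random.default_rng(1)
    step = np.array([0.02, 2e-4])                       # proposal widths
    x = np.array([0.5, 5e-3])                           # initial guess
    lp = log_posterior(x, data, sensor_x)
    chain = []
    for _ in range(n_steps):
        prop = x + step * rng.normal(size=2)
        lp_prop = log_posterior(prop, data, sensor_x)
        if np.log(rng.uniform()) < lp_prop - lp:        # accept/reject
            x, lp = prop, lp_prop
        chain.append(x.copy())
    return np.array(chain)

sensor_x = np.linspace(0.0, 1.0, 8)                     # 8 strain sensors
truth = np.array([0.3, 2e-3])
data = surrogate_strain(truth, sensor_x) + 1e-5 * np.random.default_rng(2).normal(size=8)
chain = metropolis(data, sensor_x)
print(chain[5000:].mean(axis=0))                        # posterior mean near truth
```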

  9. An efficient algorithm to compute row and column counts for sparse Cholesky factorization

    SciTech Connect

    Gilbert, J.R.; Ng, E.G.; Peyton, B.W.

    1992-09-01

    Let an undirected graph G be given, along with a specified depth-first spanning tree T. We give almost-linear-time algorithms to solve the following two problems: First, for every vertex v, compute the number of descendants w of v for which some descendant of w is adjacent (in G) to v. Second, for every vertex v, compute the number of ancestors of v that are adjacent (in G) to at least one descendant of v. These problems arise in Cholesky and QR factorizations of sparse matrices. Our algorithms can be used to determine the number of nonzero entries in each row and column of the triangular factor of a matrix from the zero/nonzero structure of the matrix. Such a prediction makes storage allocation for sparse matrix factorizations more efficient. Our algorithms run in time linear in the size of the input times a slowly-growing inverse of Ackermann's function. The best previously known algorithms for these problems ran in time linear in the sum of the nonzero counts, which is usually much larger. We give experimental results demonstrating the practical efficiency of the new algorithms.
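
    The quantities being counted are the per-row and per-column nonzero counts of the Cholesky factor L. Purely as a point of reference, the naive symbolic factorization below computes exactly those counts from the zero/nonzero structure of the matrix; it is deliberately simple and does not implement the almost-linear-time algorithm of the abstract.

```python
def cholesky_row_col_counts(lower_pattern, n):
    """Row/column nonzero counts of the Cholesky factor L of an SPD matrix.

    lower_pattern : iterable of (i, j) with i > j, strictly-lower nonzeros of A
    n             : matrix dimension
    Returns (row_counts, col_counts), each including the diagonal entry.
    """
    L = [set() for _ in range(n)]          # below-diagonal row indices per column
    for i, j in lower_pattern:
        L[j].add(i)
    for k in range(n):                     # simulate elimination, adding fill-in
        rows = sorted(L[k])
        for a, j in enumerate(rows):
            for i in rows[a + 1:]:
                L[j].add(i)                # eliminating column k fills entry (i, j)
    col_counts = [len(L[j]) + 1 for j in range(n)]
    row_counts = [1] * n
    for j in range(n):
        for i in L[j]:
            row_counts[i] += 1
    return row_counts, col_counts

# arrow-shaped matrix: dense last row/column, otherwise diagonal -> no fill-in
n = 5
pattern = [(n - 1, j) for j in range(n - 1)]
print(cholesky_row_col_counts(pattern, n))   # ([1, 1, 1, 1, 5], [2, 2, 2, 2, 1])
```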

  10. An efficient algorithm to compute row and column counts for sparse Cholesky factorization

    SciTech Connect

    Gilbert, J.R.; Ng, E.G.; Peyton, B.W.

    1992-09-01

    Let an undirected graph G be given, along with a specified depth-first spanning tree T. We give almost-linear-time algorithms to solve the following two problems: First, for every vertex v, compute the number of descendants w of v for which some descendant of w is adjacent (in G) to v. Second, for every vertex v, compute the number of ancestors of v that are adjacent (in G) to at least one descendant of v. These problems arise in Cholesky and QR factorizations of sparse matrices. Our algorithms can be used to determine the number of nonzero entries in each row and column of the triangular factor of a matrix from the zero/nonzero structure of the matrix. Such a prediction makes storage allocation for sparse matrix factorizations more efficient. Our algorithms run in time linear in the size of the input times a slowly-growing inverse of Ackermann's function. The best previously known algorithms for these problems ran in time linear in the sum of the nonzero counts, which is usually much larger. We give experimental results demonstrating the practical efficiency of the new algorithms.

  11. A Novel, Computationally Efficient Multipolar Model Employing Distributed Charges for Molecular Dynamics Simulations.

    PubMed

    Devereux, Mike; Raghunathan, Shampa; Fedorov, Dmitri G; Meuwly, Markus

    2014-10-14

    A truncated multipole expansion can be re-expressed exactly using an appropriate arrangement of point charges. This means that groups of point charges that are shifted away from nuclear coordinates can be used to achieve accurate electrostatics for molecular systems. We introduce a multipolar electrostatic model formulated in this way for use in computationally efficient multipolar molecular dynamics simulations with well-defined forces and energy conservation in NVE (constant number-volume-energy) simulations. A framework is introduced to distribute torques arising from multipole moments throughout a molecule, and a refined fitting approach is suggested to obtain atomic multipole moments that are optimized for accuracy and numerical stability in a force field context. The formulation of the charge model is outlined as it has been implemented into CHARMM, with application to test systems involving H2O and chlorobenzene. As well as ease of implementation and computational efficiency, the approach can be used to provide snapshots for multipolar QM/MM calculations in QM/MM-MD studies and easily combined with a standard point-charge force field to allow mixed multipolar/point charge simulations of large systems. PMID:26588121
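
    The core idea -- that a truncated multipole expansion can be reproduced exactly by a small arrangement of shifted point charges -- is easy to verify for the lowest nontrivial case: a point dipole p is reproduced in the far field by two charges +/-q separated by d = p/q along the dipole axis. The sketch below compares the two potentials numerically, using units in which 1/(4*pi*eps0) = 1 purely for illustration.

```python
import numpy as np

p = 1.0                          # dipole moment (arbitrary units), along +z
q, d = 10.0, 1.0 / 10.0          # two charges +/- q separated by d so that q*d = p

def potential_two_charges(r_vec):
    """Potential of +q at z = +d/2 and -q at z = -d/2."""
    r_plus  = np.linalg.norm(r_vec - np.array([0.0, 0.0, +d / 2]))
    r_minus = np.linalg.norm(r_vec - np.array([0.0, 0.0, -d / 2]))
    return q / r_plus - q / r_minus

def potential_point_dipole(r_vec):
    """Ideal point-dipole potential p.r_hat / r^2 with p along z."""
    r = np.linalg.norm(r_vec)
    return p * r_vec[2] / r**3

for r_vec in (np.array([0.0, 0.0, 2.0]), np.array([1.5, 0.5, 1.0])):
    print(potential_two_charges(r_vec), potential_point_dipole(r_vec))
# agreement improves as |r| grows relative to the charge separation d
```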

  12. CBT Pilot Program Instructional Guide. Basic Drafting Skills Curriculum Delivered through CAD Workstations and Artificial Intelligence Software.

    ERIC Educational Resources Information Center

    Smith, Richard J.; Sauer, Mardelle A.

    This guide is intended to assist teachers in using computer-aided design (CAD) workstations and artificial intelligence software to teach basic drafting skills. The guide outlines a 7-unit shell program that may also be used as a generic authoring system capable of supporting computer-based training (CBT) in other subject areas. The first section…

  13. Complete-mouth rehabilitation using a 3D printing technique and the CAD/CAM double scanning method: A clinical report.

    PubMed

    Joo, Han-Sung; Park, Sang-Won; Yun, Kwi-Dug; Lim, Hyun-Pil

    2016-07-01

    According to evolving computer-aided design/computer-aided manufacturing (CAD/CAM) technology, ceramic materials such as zirconia can be used to create fixed dental prostheses for partial removable dental prostheses. Since 3D printing technology was introduced a few years ago, dental applications of this technique have gradually increased. This clinical report presents a complete-mouth rehabilitation using 3D printing and the CAD/CAM double-scanning method.

  14. Complete-mouth rehabilitation using a 3D printing technique and the CAD/CAM double scanning method: A clinical report.

    PubMed

    Joo, Han-Sung; Park, Sang-Won; Yun, Kwi-Dug; Lim, Hyun-Pil

    2016-07-01

    According to evolving computer-aided design/computer-aided manufacturing (CAD/CAM) technology, ceramic materials such as zirconia can be used to create fixed dental prostheses for partial removable dental prostheses. Since 3D printing technology was introduced a few years ago, dental applications of this technique have gradually increased. This clinical report presents a complete-mouth rehabilitation using 3D printing and the CAD/CAM double-scanning method. PMID:26946918

  15. AutoCAD-To-GIFTS Translator Program

    NASA Technical Reports Server (NTRS)

    Jones, Andrew

    1989-01-01

    AutoCAD-to-GIFTS translator program, ACTOG, developed to facilitate quick generation of small finite-element models using CASA/GIFTS finite-element modeling program. Reads geometric data of drawing from Data Exchange File (DXF) used in AutoCAD and other PC-based drafting programs. Geometric entities recognized by ACTOG include points, lines, arcs, solids, three-dimensional lines, and three-dimensional faces. From this information, ACTOG creates a GIFTS SRC file, which is then read into the GIFTS preprocessor BULKM, or modified and read into EDITM, to create the finite-element model. SRC file used as is or edited for any number of uses. Written in Microsoft Quick-Basic (Version 2.0).
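
    An ASCII DXF file is a flat sequence of alternating group-code/value lines, so extracting the simple entities a translator like ACTOG works with is largely bookkeeping. The sketch below pulls LINE endpoints (group codes 10/20/30 and 11/21/31) out of the entity stream; it is a minimal illustration, not the ACTOG translator, and it ignores the many other entity types and sections a real DXF file contains.

```python
def read_dxf_lines(path):
    """Return a list of ((x1, y1, z1), (x2, y2, z2)) tuples for LINE entities."""
    with open(path) as f:
        tokens = [line.strip() for line in f]
    pairs = list(zip(tokens[0::2], tokens[1::2]))    # (group code, value) pairs

    lines, current = [], None
    for code, value in pairs:
        if code == "0":                               # start of a new entity
            if current and all(k in current for k in ("10", "20", "11", "21")):
                lines.append(((current["10"], current["20"], current.get("30", 0.0)),
                              (current["11"], current["21"], current.get("31", 0.0))))
            current = {} if value == "LINE" else None
        elif current is not None and code in ("10", "20", "30", "11", "21", "31"):
            current[code] = float(value)
    return lines

# print(read_dxf_lines("part.dxf"))   # hypothetical drawing exported from AutoCAD
```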

  16. DFE workbench: a CAD integrated DFE tool

    NASA Astrophysics Data System (ADS)

    Man, Elena; Diez-Campo, Juan E.; Roche, Thomas

    2002-02-01

    Because of emerging legislation (e.g. WEEE and EOLV), environmental standards (e.g. ISO 14000) and a shift in consumer opinion toward environmentally superior products, there is an increased need for CAD integrated Design for Environment tools to assist the designer in the development of environmentally superior products (ESP). Implementing Design for the Environment practices is an extremely effective strategy, as it is widely believed that 95% of development costs are determined at this stage. Many methodologies and tools have been developed to perform environmental analysis; however, many existing methodologies are inadequately integrated in the design process. This paper addresses these problems and presents a CAD integrated DFE tool that has been under development in the authors' institutes for a number of years.

  17. Management of CAD/CAM information: Key to improved manufacturing productivity

    NASA Technical Reports Server (NTRS)

    Fulton, R. E.; Brainin, J.

    1984-01-01

    A key element to improved industry productivity is effective management of CAD/CAM information. To stimulate advancements in this area, a joint NASA/Navy/Industry project designated Integrated Programs for Aerospace-Vehicle Design (IPAD) is underway with the goal of raising aerospace industry productivity through advancement of technology to integrate and manage information involved in the design and manufacturing process. The project complements traditional NASA/DOD research to develop aerospace design technology and the Air Force's Integrated Computer-Aided Manufacturing (ICAM) program to advance CAM technology. IPAD research is guided by an Industry Technical Advisory Board (ITAB) composed of over 100 representatives from aerospace and computer companies. The IPAD accomplishments to date in development of requirements and prototype software for various levels of company-wide CAD/CAM data management are summarized and plans for development of technology for management of distributed CAD/CAM data and information required to control future knowledge-based CAD/CAM systems are discussed.

  18. Comparison of CAD-CAM and hand made sockets for PTB prostheses.

    PubMed

    Köhler, P; Lindh, L; Netz, P

    1989-04-01

    The aim of the present study was to compare sockets for below-knee (BK) prostheses made by Computer Aided Design-Computer Aided Manufacture (CAD-CAM) to those made by hand. The patients in the study were provided with two prostheses each, which, apart from the sockets, were identical. One socket was made by the CAD-CAM technique developed at the Bioengineering Centre, Roehampton, University College London and one was made by hand at the OT-Centre, Stockholm, Sweden. The results were based on investigation of eight unilateral below-knee amputees evaluating their own sockets by a Visual Analogue Scale with respect to comfort, pressure, and pain. The sockets were evaluated on seven occasions, at two tests, on delivery, after use every second day for six days and every second week for two weeks. All CAD-CAM sockets except one had to be changed once, as compared to the hand-made sockets, of which only two had to be changed. With respect to comfort, no significant difference between the two types of sockets could be demonstrated, and both types were well accepted by all patients. Differences in pressure and pain were rarely reported. There were obvious differences between the two types of socket with respect to height, width, and inner surface configuration. The authors feel that CAD-CAM will in the near future be an excellent tool for design and manufacture of prosthetic sockets.

  19. Image preprocessing for improving computational efficiency in implementation of restoration and superresolution algorithms.

    PubMed

    Sundareshan, Malur K; Bhattacharjee, Supratik; Inampudi, Radhika; Pang, Ho-Yuen

    2002-12-10

    Computational complexity is a major impediment to the real-time implementation of image restoration and superresolution algorithms in many applications. Although powerful restoration algorithms have been developed within the past few years utilizing sophisticated mathematical machinery (based on statistical optimization and convex set theory), these algorithms are typically iterative in nature and require a sufficient number of iterations to be executed to achieve the desired resolution improvement that may be needed to meaningfully perform postprocessing image exploitation tasks in practice. Additionally, recent technological breakthroughs have facilitated novel sensor designs (focal plane arrays, for instance) that make it possible to capture megapixel imagery data at video frame rates. A major challenge in the processing of these large-format images is to complete the execution of the image processing steps within the frame capture times and to keep up with the output rate of the sensor so that all data captured by the sensor can be efficiently utilized. Consequently, development of novel methods that facilitate real-time implementation of image restoration and superresolution algorithms is of significant practical interest and is the primary focus of this study. The key to designing computationally efficient processing schemes lies in strategically introducing appropriate preprocessing steps together with the superresolution iterations to tailor optimized overall processing sequences for imagery data of specific formats. For substantiating this assertion, three distinct methods for tailoring a preprocessing filter and integrating it with the superresolution processing steps are outlined. These methods consist of a region-of-interest extraction scheme, a background-detail separation procedure, and a scene-derived information extraction step for implementing a set-theoretic restoration of the image that is less demanding in computation compared with the
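
    The first of the three preprocessing strategies, region-of-interest extraction, is straightforward to illustrate: the iterative restoration is run only on a cropped sub-image instead of the full frame, cutting the per-iteration cost roughly by the ratio of the areas. The sketch below uses a plain Landweber iteration with a Gaussian blur forward model as a stand-in for the restoration algorithm; the PSF width, step size, iteration count, and ROI are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def landweber_restore(observed, psf_sigma=2.0, step=0.5, n_iter=50):
    """Simple Landweber iteration x <- x + step * H^T (y - H x), with a
    (self-adjoint) Gaussian blur as the forward model H."""
    x = observed.copy()
    for _ in range(n_iter):
        residual = observed - gaussian_filter(x, psf_sigma)
        x += step * gaussian_filter(residual, psf_sigma)
    return x

rng = np.random.default_rng(0)
frame = rng.random((1024, 1024))                 # stand-in for a large sensor frame
r0, r1, c0, c1 = 400, 528, 400, 528              # 128x128 region of interest
roi_restored = landweber_restore(frame[r0:r1, c0:c1])
print(roi_restored.shape)                        # only the ROI pays the iteration cost
```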

  20. Computationally efficient image restoration and super-resolution algorithms for real-time implementation

    NASA Astrophysics Data System (ADS)

    Sundareshan, Malur K.

    2002-07-01

    Computational complexity is a major impediment to the real-time implementation of image restoration and super-resolution algorithms. Although powerful restoration algorithms have been developed within the last few years utilizing sophisticated mathematical machinery (based on statistical optimization and convex set theory), these algorithms are typically iterative in nature and require a sufficient number of iterations to be executed to achieve the desired resolution gains needed to meaningfully perform detection and recognition tasks in practice. Additionally, recent technological breakthroughs have facilitated novel sensor designs (focal plane arrays, for instance) that make it possible to capture mega-pixel imagery data at video frame rates. A major challenge in the processing of these large format images is to complete the execution of the image processing steps within the frame capture times and to keep up with the output rate of the sensor so that all data captured by the sensor can be efficiently utilized. Consequently, development of novel methods that facilitate real-time implementation of image restoration and super-resolution algorithms is of significant practical interest and will be the primary focus of this paper. The key to designing computationally efficient processing schemes lies in strategically introducing appropriate pre-processing and post-processing steps together with the super-resolution iterations in order to tailor optimized overall processing sequences for imagery data of specific formats. Three distinct methods for tailoring a pre-processing filter and integrating it with the super-resolution processing steps will be outlined in this paper. These methods consist of a Region-of-Interest (ROI) extraction scheme, a background-detail separation procedure, and a scene-derived information extraction step for implementing a set-theoretic restoration of the image that is less demanding in computation compared to the super-resolution iterations. A

  1. Image preprocessing for improving computational efficiency in implementation of restoration and superresolution algorithms

    NASA Astrophysics Data System (ADS)

    Sundareshan, Malur K.; Bhattacharjee, Supratik; Inampudi, Radhika; Pang, Ho-Yuen

    2002-12-01

    Computational complexity is a major impediment to the real-time implementation of image restoration and superresolution algorithms in many applications. Although powerful restoration algorithms have been developed within the past few years utilizing sophisticated mathematical machinery (based on statistical optimization and convex set theory), these algorithms are typically iterative in nature and require a sufficient number of iterations to be executed to achieve the desired resolution improvement that may be needed to meaningfully perform postprocessing image exploitation tasks in practice. Additionally, recent technological breakthroughs have facilitated novel sensor designs (focal plane arrays, for instance) that make it possible to capture megapixel imagery data at video frame rates. A major challenge in the processing of these large-format images is to complete the execution of the image processing steps within the frame capture times and to keep up with the output rate of the sensor so that all data captured by the sensor can be efficiently utilized. Consequently, development of novel methods that facilitate real-time implementation of image restoration and superresolution algorithms is of significant practical interest and is the primary focus of this study. The key to designing computationally efficient processing schemes lies in strategically introducing appropriate preprocessing steps together with the superresolution iterations to tailor optimized overall processing sequences for imagery data of specific formats. For substantiating this assertion, three distinct methods for tailoring a preprocessing filter and integrating it with the superresolution processing steps are outlined. These methods consist of a region-of-interest extraction scheme, a background-detail separation procedure, and a scene-derived information extraction step for implementing a set-theoretic restoration of the image that is less demanding in computation compared with the

  2. Computationally Efficient Multiscale Reactive Molecular Dynamics to Describe Amino Acid Deprotonation in Proteins

    PubMed Central

    2016-01-01

    An important challenge in the simulation of biomolecular systems is a quantitative description of the protonation and deprotonation process of amino acid residues. Despite the seeming simplicity of adding or removing a positively charged hydrogen nucleus, simulating the actual protonation/deprotonation process is inherently difficult. It requires both the explicit treatment of the excess proton, including its charge defect delocalization and Grotthuss shuttling through inhomogeneous moieties (water and amino residues), and extensive sampling of coupled condensed phase motions. In a recent paper (J. Chem. Theory Comput. 2014, 10, 2729-2737), a multiscale approach was developed to map high-level quantum mechanics/molecular mechanics (QM/MM) data into a multiscale reactive molecular dynamics (MS-RMD) model in order to describe amino acid deprotonation in bulk water. In this article, we extend the fitting approach (called FitRMD) to create MS-RMD models for ionizable amino acids within proteins. The resulting models are shown to faithfully reproduce the free energy profiles of the reference QM/MM Hamiltonian for proton transport (PT) inside an example protein, the ClC-ec1 H+/Cl– antiporter. Moreover, we show that the resulting MS-RMD models are computationally efficient enough to then characterize more complex 2-dimensional free energy surfaces due to slow degrees of freedom such as water hydration of internal protein cavities that can be inherently coupled to the excess proton charge translocation. The FitRMD method is thus shown to be an effective way to map ab initio level accuracy into a much more computationally efficient reactive MD method in order to explicitly simulate and quantitatively describe amino acid protonation/deprotonation in proteins. PMID:26734942

  3. A Solution Methodology and Computer Program to Efficiently Model Thermodynamic and Transport Coefficients of Mixtures

    NASA Technical Reports Server (NTRS)

    Ferlemann, Paul G.

    2000-01-01

    A solution methodology has been developed to efficiently model multi-specie, chemically frozen, thermally perfect gas mixtures. The method relies on the ability to generate a single (composite) set of thermodynamic and transport coefficients prior to beginning a CFD solution. While not fundamentally a new concept, many applied CFD users are not aware of this capability nor have a mechanism to easily and confidently generate new coefficients. A database of individual specie property coefficients has been created for 48 species. The seven-coefficient form of the thermodynamic functions is currently used rather than the ten-coefficient form due to the similarity of the calculated properties, low temperature behavior and reduced CPU requirements. Sutherland laminar viscosity and thermal conductivity coefficients were computed in a consistent manner from available reference curves. A computer program has been written to provide CFD users with a convenient method to generate composite specie coefficients for any mixture. Mach 7 forebody/inlet calculations demonstrated nearly equivalent results and significant CPU time savings compared to a multi-specie solution approach. Results from high-speed combustor analysis also illustrate the ability to model inert test gas contaminants without additional computational expense.
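
    The "seven coefficient form" referred to above is the standard NASA polynomial in which the first five coefficients give cp/R = a1 + a2*T + a3*T^2 + a4*T^3 + a5*T^4 (the sixth and seventh fix the enthalpy and entropy constants), and a frozen-mixture property is then a mass-weighted sum over the species. The coefficients in the sketch below are made-up placeholders, not entries from the 48-species database described in the abstract.

```python
R_UNIV = 8.31446  # universal gas constant, J/(mol K)

def cp_mass(T, a, molar_mass):
    """Specific heat (J/kg/K) from the first five NASA-7 coefficients."""
    a1, a2, a3, a4, a5 = a[:5]
    cp_molar = R_UNIV * (a1 + a2*T + a3*T**2 + a4*T**3 + a5*T**4)
    return cp_molar / molar_mass

def mixture_cp(T, species, mass_fractions):
    """Frozen-mixture cp as a mass-fraction-weighted sum of specie cp values."""
    return sum(Y * cp_mass(T, s["a"], s["M"])
               for s, Y in zip(species, mass_fractions))

# hypothetical placeholder coefficients (not real database values)
species = [{"a": [3.5, 1e-4, 0.0, 0.0, 0.0], "M": 0.028},   # "N2"-like specie
           {"a": [3.6, 2e-4, 0.0, 0.0, 0.0], "M": 0.032}]   # "O2"-like specie
print(mixture_cp(1000.0, species, mass_fractions=[0.77, 0.23]))
```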

  4. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems

    PubMed Central

    Ehsan, Shoaib; Clark, Adrian F.; ur Rehman, Naveed; McDonald-Maier, Klaus D.

    2015-01-01

    The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of the integral image present several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow a substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of the integral image in embedded vision systems, the paper presents two algorithms which allow a substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems. PMID:26184211
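
    For context, the integral image itself and the constant-time rectangle sum it enables take only a few lines; the recursive equation whose decomposition the paper exploits is ii(x, y) = i(x, y) + ii(x-1, y) + ii(x, y-1) - ii(x-1, y-1). A minimal serial reference version (not the row-parallel hardware algorithms of the paper) is:

```python
import numpy as np

def integral_image(img):
    """Integral image: ii[y, x] = sum of img[0:y+1, 0:x+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1+1, x0:x1+1] from at most four integral-image lookups."""
    total = ii[y1, x1]
    if y0 > 0:
        total -= ii[y0 - 1, x1]
    if x0 > 0:
        total -= ii[y1, x0 - 1]
    if y0 > 0 and x0 > 0:
        total += ii[y0 - 1, x0 - 1]
    return total

img = np.arange(36, dtype=np.int64).reshape(6, 6)
ii = integral_image(img)
assert box_sum(ii, 1, 1, 3, 4) == img[1:4, 1:5].sum()
```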

  5. Dendritic nonlinearities are tuned for efficient spike-based computations in cortical circuits

    PubMed Central

    Ujfalussy, Balázs B; Makara, Judit K; Branco, Tiago; Lengyel, Máté

    2015-01-01

    Cortical neurons integrate thousands of synaptic inputs in their dendrites in highly nonlinear ways. It is unknown how these dendritic nonlinearities in individual cells contribute to computations at the level of neural circuits. Here, we show that dendritic nonlinearities are critical for the efficient integration of synaptic inputs in circuits performing analog computations with spiking neurons. We developed a theory that formalizes how a neuron's dendritic nonlinearity that is optimal for integrating synaptic inputs depends on the statistics of its presynaptic activity patterns. Based on their in vivo presynaptic population statistics (firing rates, membrane potential fluctuations, and correlations due to ensemble dynamics), our theory accurately predicted the responses of two different types of cortical pyramidal cells to patterned stimulation by two-photon glutamate uncaging. These results reveal a new computational principle underlying dendritic integration in cortical neurons by suggesting a functional link between cellular and systems-level properties of cortical circuits. DOI: http://dx.doi.org/10.7554/eLife.10056.001 PMID:26705334

  6. Computationally Efficient Partial Crosstalk Cancellation in Fast Time-Varying DSL Crosstalk Environments

    NASA Astrophysics Data System (ADS)

    Forouzan, Amir R.; Garth, Lee M.

    2007-12-01

    Line selection (LS), tone selection (TS), and joint tone-line selection (JTLS) partial crosstalk cancellers have been proposed to reduce the online computational complexity of far-end crosstalk (FEXT) cancellers in digital subscriber lines (DSL). However, when the crosstalk profile changes rapidly over time, there is an additional requirement that the partial crosstalk cancellers, particularly the LS and JTLS schemes, should also provide a low preprocessing complexity. This is in contrast to the case for perfect crosstalk cancellers. In this paper, we propose two novel channel matrix inversion methods, the approximate inverse (AI) and reduced inverse (RI) schemes, which reduce the recurrent complexity of the LS and JTLS schemes. Moreover, we propose two new classes of JTLS algorithms, the subsort and Lagrange JTLS algorithms, with significantly lower computational complexity than the recently proposed optimal greedy JTLS scheme. The computational complexity analysis of our algorithms shows that they provide much lower recurrent complexities than the greedy JTLS algorithm, allowing them to work efficiently in very fast time-varying crosstalk environments. Moreover, the analytical and simulation results demonstrate that our techniques are close to the optimal solution from the crosstalk cancellation point of view. The results also reveal that partial crosstalk cancellation is more beneficial in upstream DSL, particularly for short loops.

  7. Methods for Efficiently and Accurately Computing Quantum Mechanical Free Energies for Enzyme Catalysis.

    PubMed

    Kearns, F L; Hudson, P S; Boresch, S; Woodcock, H L

    2016-01-01

    Enzyme activity is inherently linked to free energies of transition states, ligand binding, protonation/deprotonation, etc.; these free energies, and thus enzyme function, can be affected by residue mutations, allosterically induced conformational changes, and much more. Therefore, being able to predict free energies associated with enzymatic processes is critical to understanding and predicting their function. Free energy simulation (FES) has historically been a computational challenge as it requires both the accurate description of inter- and intramolecular interactions and adequate sampling of all relevant conformational degrees of freedom. The hybrid quantum mechanical molecular mechanical (QM/MM) framework is the current tool of choice when accurate computations of macromolecular systems are essential. Unfortunately, robust and efficient approaches that employ the high levels of computational theory needed to accurately describe many reactive processes (i.e., ab initio, DFT), while also including explicit solvation effects and accounting for extensive conformational sampling, are essentially nonexistent. In this chapter, we will give a brief overview of two recently developed methods that mitigate several major challenges associated with QM/MM FES: the QM non-Boltzmann Bennett's acceptance ratio method and the QM nonequilibrium work method. We will also describe usage of these methods to calculate free energies associated with (1) relative properties and (2) along reaction paths, using simple test cases with relevance to enzymes.
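
    As the simplest possible illustration of the kind of free-energy estimator these methods build on -- not the non-Boltzmann Bennett or nonequilibrium-work methods themselves -- the Zwanzig exponential-averaging formula dA = -kT ln <exp(-dU/kT)>_0 can be evaluated directly from energy differences sampled in the reference state. The synthetic dU samples and units below are placeholders.

```python
import numpy as np

K_B = 0.0019872041  # Boltzmann constant in kcal/(mol K), common MD units

def zwanzig_free_energy(delta_u, temperature=300.0):
    """Forward free-energy perturbation estimate (kcal/mol) from Delta U
    samples collected in the reference ensemble."""
    beta = 1.0 / (K_B * temperature)
    # log-sum-exp form for numerical stability
    m = (-beta * delta_u).max()
    log_avg = m + np.log(np.mean(np.exp(-beta * delta_u - m)))
    return -log_avg / beta

delta_u = np.random.default_rng(0).normal(2.0, 0.5, size=5000)  # placeholder samples
print(zwanzig_free_energy(delta_u))
```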

  8. A universal and efficient method to compute maps from image-based prediction models.

    PubMed

    Sabuncu, Mert R

    2014-01-01

    Discriminative supervised learning algorithms, such as Support Vector Machines, are becoming increasingly popular in biomedical image computing. One of their main uses is to construct image-based prediction models, e.g., for computer aided diagnosis or "mind reading." A major challenge in these applications is the biological interpretation of the machine learning models, which can be arbitrarily complex functions of the input features (e.g., as induced by kernel-based methods). Recent work has proposed several strategies for deriving maps that highlight regions relevant for accurate prediction. Yet most of these methods rely on strong assumptions about the prediction model (e.g., linearity, sparsity) and/or the data (e.g., Gaussianity), or fail to exploit the covariance structure in the data. In this work, we propose a computationally efficient and universal framework for quantifying associations captured by black box machine learning models. Furthermore, our theoretical perspective reveals that examining associations with predictions, in the absence of ground truth labels, can be very informative. We apply the proposed method to machine learning models trained to predict cognitive impairment from structural neuroimaging data. We demonstrate that our approach yields biologically meaningful maps of association. PMID:25320819
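
    In the spirit of the abstract's observation that associations with the model's own predictions are informative even without ground-truth labels, a minimal association map is simply the per-feature covariance between each input feature and the black-box prediction. The random data and "black box" below are placeholders, and this sketch is not the specific framework proposed in the paper.

```python
import numpy as np

def association_map(X, predict):
    """Per-feature covariance between input features and black-box predictions.

    X       : (n_samples, n_features) data matrix (e.g., voxel values)
    predict : callable mapping X -> (n_samples,) predictions
    """
    y_hat = predict(X)
    Xc = X - X.mean(axis=0)
    yc = y_hat - y_hat.mean()
    return Xc.T @ yc / (len(y_hat) - 1)        # (n_features,) association map

# placeholder data and "black box": only the first 5 features matter
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
black_box = lambda X: np.tanh(X[:, :5].sum(axis=1))
print(np.round(association_map(X, black_box), 2)[:10])
```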

  9. Methods for Efficiently and Accurately Computing Quantum Mechanical Free Energies for Enzyme Catalysis.

    PubMed

    Kearns, F L; Hudson, P S; Boresch, S; Woodcock, H L

    2016-01-01

    Enzyme activity is inherently linked to free energies of transition states, ligand binding, protonation/deprotonation, etc.; these free energies, and thus enzyme function, can be affected by residue mutations, allosterically induced conformational changes, and much more. Therefore, being able to predict free energies associated with enzymatic processes is critical to understanding and predicting their function. Free energy simulation (FES) has historically been a computational challenge as it requires both the accurate description of inter- and intramolecular interactions and adequate sampling of all relevant conformational degrees of freedom. The hybrid quantum mechanical molecular mechanical (QM/MM) framework is the current tool of choice when accurate computations of macromolecular systems are essential. Unfortunately, robust and efficient approaches that employ the high levels of computational theory needed to accurately describe many reactive processes (i.e., ab initio, DFT), while also including explicit solvation effects and accounting for extensive conformational sampling, are essentially nonexistent. In this chapter, we will give a brief overview of two recently developed methods that mitigate several major challenges associated with QM/MM FES: the QM non-Boltzmann Bennett's acceptance ratio method and the QM nonequilibrium work method. We will also describe usage of these methods to calculate free energies associated with (1) relative properties and (2) along reaction paths, using simple test cases with relevance to enzymes. PMID:27498635

  10. Quantum propagation of electronic excitations in macromolecules: A computationally efficient multiscale approach

    NASA Astrophysics Data System (ADS)

    Schneider, E.; a Beccara, S.; Mascherpa, F.; Faccioli, P.

    2016-07-01

    We introduce a theoretical approach to study the quantum-dissipative dynamics of electronic excitations in macromolecules, which enables calculations to be performed in large systems and over long time intervals. All the parameters of the underlying microscopic Hamiltonian are obtained from ab initio electronic structure calculations, ensuring chemical detail. In the short-time regime, the theory is solvable using a diagrammatic perturbation theory, enabling analytic insight. To compute the time evolution of the density matrix at intermediate times, typically ≲1 ps, we develop a Monte Carlo algorithm free from any sign or phase problem, hence computationally efficient. Finally, the dynamics in the long-time and large-distance limit can be studied by combining the microscopic calculations with renormalization group techniques to define a rigorous low-resolution effective theory. We benchmark our Monte Carlo algorithm against the results obtained in perturbation theory and using a semiclassical nonperturbative scheme. Then, we apply it to compute the intrachain charge mobility in a realistic conjugated polymer.

  11. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems.

    PubMed

    Ehsan, Shoaib; Clark, Adrian F; Naveed ur Rehman; McDonald-Maier, Klaus D

    2015-01-01

    The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of the integral image present several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow a substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of the integral image in embedded vision systems, the paper presents two algorithms which allow a substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems. PMID:26184211

  12. Computing the energy of a water molecule using multideterminants: A simple, efficient algorithm

    SciTech Connect

    Clark, Bryan K.; Morales, Miguel A; Mcminis, Jeremy; Kim, Jeongnim; Scuseria, Gustavo E

    2011-01-01

    Quantum Monte Carlo (QMC) methods such as variational Monte Carlo and fixed node diffusion Monte Carlo depend heavily on the quality of the trial wave function. Although Slater-Jastrow wave functions are the most commonly used variational ansatz in electronic structure, more sophisticated wave functions are critical to ascertaining new physics. One such wave function is the multi-Slater-Jastrow wave function, which consists of a Jastrow function multiplied by the sum of Slater determinants. In this paper we describe a method for working with these wave functions in QMC codes that is easy to implement, efficient both in computational speed as well as memory, and easily parallelized. The computational cost scales quadratically with particle number, no worse than the single-determinant case, and linearly with the total number of excitations. Additionally, we implement this method and use it to compute the ground state energy of a water molecule. © 2011 American Institute of Physics. [doi:10.1063/1.3665391]

  13. Efficient computation of net analyte signal vector in inverse multivariate calibration models.

    PubMed

    Faber, N K

    1998-12-01

    The net analyte signal vector has been defined by Lorber as the part of a mixture spectrum that is unique for the analyte of interest; i.e., it is orthogonal to the spectra of the interferences. It plays a key role in the development of multivariate analytical figures of merit. Applications have been reported that imply its utility for spectroscopic wavelength selection as well as calibration method comparison. Currently available methods for computing the net analyte signal vector in inverse multivariate calibration models are based on the evaluation of projection matrices. Due to the size of these matrices (p × p, with p the number of wavelengths) the computation may be highly memory- and time-consuming. This paper shows that the net analyte signal vector can be obtained in a highly efficient manner by a suitable scaling of the regression vector. Computing the scaling factor only requires the evaluation of an inner product (p multiplications and additions). The mathematical form of the newly derived expression is discussed, and the generalization to multiway calibration models is briefly outlined.
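
    One plausible reading of this result is that, once the regression vector b of the inverse calibration model is known, the net analyte signal vector of a measured spectrum x is the projection of x onto b, so that only the inner product b^T x must be evaluated per sample (with ||b||^2 precomputed once). The sketch below implements that projection; it is offered as an illustration consistent with the abstract, not as a verified transcription of the paper's derivation.

```python
import numpy as np

def net_analyte_signal(x, b, b_norm_sq=None):
    """Net-analyte-signal vector obtained by scaling the regression vector.

    x : measured spectrum (p,)
    b : regression vector of the inverse calibration model (p,)
    The per-sample cost is one inner product; ||b||^2 can be precomputed.
    """
    if b_norm_sq is None:
        b_norm_sq = b @ b
    return (x @ b / b_norm_sq) * b

rng = np.random.default_rng(0)
p = 200
b = rng.normal(size=p)        # placeholder regression vector
x = rng.normal(size=p)        # placeholder mixture spectrum
nas = net_analyte_signal(x, b)
print(nas @ b, x @ b)         # the NAS vector preserves the predicted response b^T x
```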

  14. CAD Integration: new optical design possibilities

    NASA Astrophysics Data System (ADS)

    Haumonte, Jean-Baptiste; Venturino, Jean-Claude

    2005-09-01

    The development of optical design and analysis tools in CAD software can help to optimise the design, size and performance of tomorrow's consumer products. While optics was still held back by software limitations, CAD programs were moving forward in leaps and bounds, improving manufacturing technologies and making it possible to design and produce highly innovative and sophisticated products. The problem was that in the past, 'traditional' optical design programs were only able to simulate spherical and aspherical lenses, meaning that the optical designers were limited to designing systems which were a series of imperfect lenses, each one correcting the last. That is why OPTIS has created the first optical design program to be fully integrated into a CAD program. The technology is available from OPTIS in an integrated SOLIDWORKS or CATIA V5 version. Users of this software can reduce the number of lenses needed in a system. Designers will now have access to complex surfaces such as NURBS, meaning they will be able to define free-shape progressive lenses and even improve on optical performances using fewer lenses. This revolutionary technology will allow mechanical designers to work on optical systems and to share information with optical designers for the first time. Previously not possible in a CAD program, users may now determine all the optical performances of any optical system, providing first order and third order performances, sequential and non-sequential ray-tracing, wavefront surfaces, point spread function, MTF, spot-diagram, using real optical surfaces and guaranteeing the mechanical precision necessary for an optical system.

  15. Confidence-based stratification of CAD recommendations with application to breast cancer detection

    NASA Astrophysics Data System (ADS)

    Habas, Piotr A.; Zurada, Jacek M.; Elmaghraby, Adel S.; Tourassi, Georgia D.

    2006-03-01

    We present a risk stratification methodology for predictions made by computer-assisted detection (CAD) systems. For each positive CAD prediction, the proposed technique assigns an individualized confidence measure as a function of the actual CAD output, the case-specific uncertainty of the prediction estimated from the system's performance for similar cases, and the value of the operating decision threshold. The study was performed using a mammographic database containing 1,337 regions of interest (ROIs) with known ground truth (681 with masses, 656 with normal parenchyma). Two types of decision models, (1) a support vector machine (SVM) with a radial basis function kernel and (2) a back-propagation neural network (BPNN), were developed to detect masses based on 8 morphological features automatically extracted from each ROI. The study shows that as the requirement on the minimum confidence value is tightened, the positive predictive value (PPV) for qualifying cases steadily improves (from PPV = 0.73 to PPV = 0.97 for the SVM, from PPV = 0.67 to PPV = 0.95 for the BPNN). The proposed confidence metric was successfully applied to stratify CAD recommendations into 3 categories of different expected reliability: HIGH (PPV = 0.90), LOW (PPV = 0.30) and MEDIUM (all remaining cases). Since radiologists often disregard accurate CAD cues, an individualized confidence measure should improve their ability to correctly process visual cues and thus reduce the interpretation error associated with the detection task. While keeping the clinically determined operating point satisfied, the proposed methodology draws the CAD users' attention to the cases/regions of highest risk while helping them confidently eliminate cases with low risk.
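
    The sketch below illustrates only the general idea of case-specific reliability and stratification: a confidence score is estimated as the positive predictive value among the most similar training cases, then mapped to HIGH/MEDIUM/LOW bins. The nearest-neighbour scheme and the cut-off values are illustrative assumptions, not the paper's actual confidence metric.

```python
import numpy as np

def local_ppv_confidence(feat, train_feats, train_labels, train_scores,
                         threshold, k=25):
    """Hypothetical confidence for one positive CAD call: the PPV of the
    system among the k most similar training cases (nearest neighbours in
    feature space) at the same operating threshold."""
    dists = np.linalg.norm(train_feats - feat, axis=1)
    nearest = np.argsort(dists)[:k]
    called = train_scores[nearest] >= threshold
    if not called.any():
        return 0.0
    return float((train_labels[nearest][called] == 1).mean())

def stratify(confidence, high_cut=0.9, low_cut=0.3):
    """Map a confidence value to HIGH / MEDIUM / LOW (illustrative cut-offs)."""
    if confidence >= high_cut:
        return "HIGH"
    if confidence <= low_cut:
        return "LOW"
    return "MEDIUM"

# Toy usage with random 8-feature ROIs.
rng = np.random.default_rng(0)
train_feats = rng.normal(size=(1000, 8))
train_labels = rng.integers(0, 2, 1000)
train_scores = rng.uniform(size=1000)
conf = local_ppv_confidence(rng.normal(size=8), train_feats,
                            train_labels, train_scores, threshold=0.5)
print(conf, stratify(conf))
```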

  16. On the Use of CAD-Native Predicates and Geometry in Surface Meshing

    NASA Technical Reports Server (NTRS)

    Aftosmis, M. J.

    1999-01-01

    Several paradigms for accessing CAD geometry during surface meshing for CFD are discussed. File translation, inconsistent geometry engines and non-native point construction are all identified as sources of non-robustness. The paper argues in favor of accessing CAD parts and assemblies in their native format, without translation, and for the use of CAD-native predicates and constructors in surface mesh generation. The discussion also emphasizes the importance of examining the computational requirements for exact evaluation of triangulation predicates during surface meshing. The native approach is demonstrated through an algorithm for the generation of closed manifold surface triangulations from CAD geometry. CAD parts and assemblies are used in their native format, and a part's native geometry engine is accessed through a modeler-independent application programming interface (API). In seeking a robust and fully automated procedure, the algorithm is based on a new physical space manifold triangulation technique specially developed to avoid robustness issues associated with poorly conditioned mappings. In addition, this approach avoids the usual ambiguities associated with floating-point predicate evaluation on constructed coordinate geometry in a mapped space. The technique is incremental, so that each new site improves the triangulation by some well defined quality measure. The algorithm terminates after achieving a prespecified measure of mesh quality and produces a triangulation such that no angle is less than a given angle bound alpha, or greater than pi - 2 alpha. This result also sets bounds on the maximum vertex degree, triangle aspect-ratio and maximum stretching rate for the triangulation. In addition to the output triangulations for a variety of CAD parts, the discussion presents related theoretical results which assert the existence of such an angle bound, and demonstrate that maximum bounds of between 25 deg and 30 deg may be achieved in practice.

  17. Computationally Efficient Numerical Model for the Evolution of Directional Ocean Surface Waves

    NASA Astrophysics Data System (ADS)

    Malej, M.; Choi, W.; Goullet, A.

    2011-12-01

    The main focus of this work has been the asymptotic and numerical modeling of weakly nonlinear ocean surface wave fields. In particular, a development of an efficient numerical model for the evolution of nonlinear ocean waves, including extreme waves known as Rogue/Freak waves, is of direct interest. Due to their elusive and destructive nature, the media often portrays Rogue waves as unimaginably huge and unpredictable monsters of the sea. To address some of these concerns, derivations of reduced phase-resolving numerical models, based on the small wave steepness assumption, are presented and their corresponding numerical simulations via Fourier pseudo-spectral methods are discussed. The simulations are initialized with a well-known JONSWAP wave spectrum and different angular distributions are employed. Both deterministic and Monte-Carlo ensemble average simulations were carried out. Furthermore, this work concerns the development of a new computationally efficient numerical model for the short-term prediction of evolving weakly nonlinear ocean surface waves. The derivations are originally based on the work of West et al. (1987) and since the waves in the ocean tend to travel primarily in one direction, the aforementioned new numerical model is derived with an additional assumption of a weak transverse dependence. In turn, comparisons of the ensemble averaged randomly initialized spectra, as well as deterministic surface-to-surface correlations are presented. The new model is shown to behave well in various directional wave fields and can potentially be a candidate for computationally efficient prediction and propagation of extreme ocean surface waves - Rogue/Freak waves.
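
    For reference, a minimal sketch of initializing a one-dimensional JONSWAP spectrum and a random-phase surface realization from it is shown below. The parameter values (alpha, gamma, sigma) are the commonly quoted defaults and are assumptions here; the directional spreading and the phase-resolving evolution model itself are not reproduced.

```python
import numpy as np

def jonswap(f, fp, alpha=0.0081, gamma=3.3, g=9.81):
    """One-dimensional JONSWAP spectrum S(f) [m^2 s].

    f     : array of frequencies [Hz] (must be > 0)
    fp    : peak frequency [Hz]
    alpha : Phillips constant (default value is an assumption)
    gamma : peak-enhancement factor
    """
    f = np.asarray(f, dtype=float)
    sigma = np.where(f <= fp, 0.07, 0.09)
    r = np.exp(-((f - fp) ** 2) / (2.0 * sigma ** 2 * fp ** 2))
    pm = alpha * g ** 2 * (2.0 * np.pi) ** -4 * f ** -5 * np.exp(-1.25 * (fp / f) ** 4)
    return pm * gamma ** r

# Random-phase (linear) surface realization from the spectrum.
rng = np.random.default_rng(0)
f = np.linspace(0.03, 0.5, 400)
S = jonswap(f, fp=0.1)
df = f[1] - f[0]
amps = np.sqrt(2.0 * S * df)                     # component amplitudes
phases = rng.uniform(0, 2 * np.pi, f.size)
t = np.linspace(0, 600, 2048)
eta = (amps[:, None] * np.cos(2 * np.pi * f[:, None] * t + phases[:, None])).sum(axis=0)
```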

  18. Geometrical splitting technique to improve the computational efficiency in Monte Carlo calculations for proton therapy

    PubMed Central

    Ramos-Méndez, José; Perl, Joseph; Faddegon, Bruce; Schümann, Jan; Paganetti, Harald

    2013-01-01

    Purpose: To present the implementation and validation of a geometry-based variance reduction technique for the calculation of phase space data for proton therapy dose calculation. Methods: The treatment heads at the Francis H Burr Proton Therapy Center were modeled with a new Monte Carlo tool (TOPAS based on Geant4). For variance reduction purposes, two particle-splitting planes were implemented. First, the particles were split upstream of the second scatterer or at the second ionization chamber. Then, particles reaching another plane immediately upstream of the field specific aperture were split again. In each case, particles were split by a factor of 8. At the second ionization chamber and at the latter plane, the cylindrical symmetry of the proton beam was exploited to position the split particles at randomly spaced locations rotated around the beam axis. Phase space data in IAEA format were recorded at the treatment head exit and the computational efficiency was calculated. Depth–dose curves and beam profiles were analyzed. Dose distributions were compared for a voxelized water phantom for different treatment fields for both the reference and optimized simulations. In addition, dose in two patients was simulated with and without particle splitting to compare the efficiency and accuracy of the technique. Results: A normalized computational efficiency gain of a factor of 10–20.3 was reached for phase space calculations for the different treatment head options simulated. Depth–dose curves and beam profiles were in reasonable agreement with the simulation done without splitting: within 1% for depth–dose with an average difference of (0.2 ± 0.4)%, 1 standard deviation, and a 0.3% statistical uncertainty of the simulations in the high dose region; 1.6% for planar fluence with an average difference of (0.4 ± 0.5)% and a statistical uncertainty of 0.3% in the high fluence region. The percentage differences between dose distributions in water for
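
    The splitting step can be illustrated with a short sketch: each particle crossing a splitting plane is replaced by N copies of weight w/N, with positions and transverse momenta rotated to random azimuthal angles about the beam axis, exploiting the beam's cylindrical symmetry. The splitting factor of 8 follows the abstract; the variable names and coordinate conventions are illustrative, not those of TOPAS/Geant4.

```python
import numpy as np

def split_particle(x, y, px, py, weight, n_split=8, rng=None):
    """Split one particle into n_split copies rotated about the beam (z) axis.

    x, y   : transverse position at the splitting plane
    px, py : transverse momentum components
    weight : statistical weight of the incoming particle
    Returns arrays of the split positions, momenta and weights.
    """
    rng = rng or np.random.default_rng()
    phi = rng.uniform(0.0, 2.0 * np.pi, n_split)   # random azimuthal angles
    c, s = np.cos(phi), np.sin(phi)
    xs, ys = c * x - s * y, s * x + c * y
    pxs, pys = c * px - s * py, s * px + c * py
    ws = np.full(n_split, weight / n_split)        # preserve total weight
    return xs, ys, pxs, pys, ws

xs, ys, pxs, pys, ws = split_particle(1.2, 0.0, 0.01, 0.0, weight=1.0)
print(ws.sum())   # 1.0: the split is unbiased on average
```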

  19. Geometrical splitting technique to improve the computational efficiency in Monte Carlo calculations for proton therapy

    SciTech Connect

    Ramos-Mendez, Jose; Perl, Joseph; Faddegon, Bruce; Schuemann, Jan; Paganetti, Harald

    2013-04-15

    Purpose: To present the implementation and validation of a geometry-based variance reduction technique for the calculation of phase space data for proton therapy dose calculation. Methods: The treatment heads at the Francis H Burr Proton Therapy Center were modeled with a new Monte Carlo tool (TOPAS based on Geant4). For variance reduction purposes, two particle-splitting planes were implemented. First, the particles were split upstream of the second scatterer or at the second ionization chamber. Then, particles reaching another plane immediately upstream of the field specific aperture were split again. In each case, particles were split by a factor of 8. At the second ionization chamber and at the latter plane, the cylindrical symmetry of the proton beam was exploited to position the split particles at randomly spaced locations rotated around the beam axis. Phase space data in IAEA format were recorded at the treatment head exit and the computational efficiency was calculated. Depth-dose curves and beam profiles were analyzed. Dose distributions were compared for a voxelized water phantom for different treatment fields for both the reference and optimized simulations. In addition, dose in two patients was simulated with and without particle splitting to compare the efficiency and accuracy of the technique. Results: A normalized computational efficiency gain of a factor of 10-20.3 was reached for phase space calculations for the different treatment head options simulated. Depth-dose curves and beam profiles were in reasonable agreement with the simulation done without splitting: within 1% for depth-dose with an average difference of (0.2 ± 0.4)%, 1 standard deviation, and a 0.3% statistical uncertainty of the simulations in the high dose region; 1.6% for planar fluence with an average difference of (0.4 ± 0.5)% and a statistical uncertainty of 0.3% in the high fluence region. The percentage differences between dose distributions in water for simulations

  20. Low-cost, high-performance and efficiency computational photometer design

    NASA Astrophysics Data System (ADS)

    Siewert, Sam B.; Shihadeh, Jeries; Myers, Randall; Khandhar, Jay; Ivanov, Vitaly

    2014-05-01

    Researchers at the University of Alaska Anchorage and University of Colorado Boulder have built a low-cost, high-performance, high-efficiency, drop-in-place Computational Photometer (CP) to test in field applications ranging from port security and safety monitoring to environmental compliance monitoring and surveying. The CP integrates off-the-shelf visible spectrum cameras with near to long wavelength infrared detectors and high resolution digital snapshots in a single device. The proof of concept combines three or more detectors into a single multichannel imaging system that can time-correlate read-out, capture, and image-process all of the channels concurrently with high performance and energy efficiency. The dual-channel continuous read-out is combined with a third high definition digital snapshot capability and has been designed using an FPGA (Field Programmable Gate Array) to capture, decimate, down-convert, re-encode, and transform images from two standard definition CCD (Charge Coupled Device) cameras at 30 Hz. The continuous stereo vision can be time-correlated to megapixel high definition snapshots. This proof of concept has been fabricated as a four-layer PCB (Printed Circuit Board) suitable for use in education and research for low-cost, high-efficiency field monitoring applications that need multispectral and three dimensional imaging capabilities. Initial testing is in progress and includes field testing in ports, potential test flights in unmanned aerial systems, and future planned missions to image harsh environments in the arctic including volcanic plumes, ice formation, and arctic marine life.

  1. A new computer-oriented approach with efficient variables for multibody dynamics with motion constraints

    NASA Astrophysics Data System (ADS)

    Hu, Quan; Jia, Yinghong; Xu, Shijie

    2012-12-01

    This paper presents a new formulation for the automatic generation of the motion equations of arbitrary multibody systems. The method is applicable to systems with rigid and flexible bodies. The number of degrees of freedom (DOF) of the bodies' interconnection joints is allowed to vary from 0 to 6, and the system may have a tree topology or closed structural loops. The formulation is based on Kane's method. Each rigid or flexible body's contribution to the system generalized inertia force is expressed in a similar manner; therefore, the formulation is quite amenable to computer solution. All the recursive kinematic relations are developed, and efficient motion variables describing the elastic motion and the hinge motion are adopted to improve modeling efficiency. Motion constraints are handled by the new form of Kane's equation. The final mathematical model has the same dimension as the generalized speeds of the system and involves no Lagrange multipliers, so it is useful for control system design. A worked example is given to illustrate several of the concepts involved, and numerical simulations are presented to validate the algorithm's accuracy and efficiency.

  2. Measuring and tuning energy efficiency on large scale high performance computing platforms.

    SciTech Connect

    Laros, James H., III

    2011-08-01

    Recognition of the importance of power in the field of High Performance Computing, whether it be as an obstacle, expense or design consideration, has never been greater and more pervasive. While research has been conducted on many related aspects, there is a stark absence of work focused on large scale High Performance Computing. Part of the reason is the lack of measurement capability currently available on small or large platforms. Typically, research is conducted using coarse methods of measurement such as inserting a power meter between the power source and the platform, or fine grained measurements using custom instrumented boards (with obvious limitations in scale). To collect the measurements necessary to analyze real scientific computing applications at large scale, an in-situ measurement capability must exist on a large scale capability class platform. In response to this challenge, we exploit the unique power measurement capabilities of the Cray XT architecture to gain an understanding of power use and the effects of tuning. We apply these capabilities at the operating system level by deterministically halting cores when idle. At the application level, we gain an understanding of the power requirements of a range of important DOE/NNSA production scientific computing applications running at large scale (thousands of nodes), while simultaneously collecting current and voltage measurements on the hosting nodes. We examine the effects of both CPU and network bandwidth tuning and demonstrate energy savings opportunities of up to 39% with little or no impact on run-time performance. Capturing scale effects in our experimental results was key. Our results provide strong evidence that next generation large-scale platforms should not only approach CPU frequency scaling differently, but could also benefit from the capability to tune other platform components, such as the network, to achieve energy efficient performance.

  3. Developing an Efficient Computational Method that Estimates the Ability of Students in a Web-Based Learning Environment

    ERIC Educational Resources Information Center

    Lee, Young-Jin

    2012-01-01

    This paper presents a computational method that can efficiently estimate the ability of students from the log files of a Web-based learning environment capturing their problem solving processes. The computational method developed in this study approximates the posterior distribution of the student's ability obtained from the conventional Bayes…

  4. Using CAD/CAM to improve productivity - The IPAD approach

    NASA Technical Reports Server (NTRS)

    Fulton, R. E.

    1981-01-01

    Progress in designing and implementing CAD/CAM systems as a result of the NASA Integrated Programs for Aerospace-Vehicle Design is discussed. Essential software packages have been identified as executive, data management, general user, and geometry and graphics software. Data communication, as a means to integrate data over a network of computers of different vendors, provides data management with the capability of meeting design and manufacturing requirements of the vendors. Geometry software is dependent on developmental success with solid geometry software, which is necessary for continual measurements of, for example, a block of metal while it is being machined. Applications in the aerospace industry, such as for design, analysis, tooling, testing, quality control, etc., are outlined.

  5. The development of a computationally efficient high-resolution viscous-plastic sea ice model

    NASA Astrophysics Data System (ADS)

    Lemieux, Jean Francois

    This thesis presents the development of a high-resolution viscous-plastic (VP) sea ice model. Because of the fine mesh and the size of the domain, an efficient and parallelizable numerical scheme is desirable. As a first step, we implemented the nonlinear solver used in existing VP models (referred to as the standard solver). It is based on a linear solver and an outer loop (OL) iteration. For the linear solver, we introduced the preconditioned Generalized Minimum RESidual (pGMRES) method. The preconditioner is a line successive overrelaxation (SOR) solver. When compared to the SOR and the line SOR (LSOR) methods, two solvers commonly used in the sea ice modeling community, pGMRES increases the computational efficiency by a factor of 16 and 3, respectively. For pGMRES, the symmetry of the system matrix is not a prerequisite. The Coriolis term and the off-diagonal part of the water drag can then be treated implicitly. Theoretical and simulation results show that this implicit treatment eliminates a numerical instability present with an explicit treatment. During this research, we also observed that the approximate nonlinear solution converges slowly with the number of OL iterations. Furthermore, simulation results reveal the existence of multiple solutions and occasional convergence failures of the nonlinear solver. For a time step comparable to the forcing time scale, a few OL iterations lead to errors in the velocity field that are of the same order of magnitude as the mean drift. The slow convergence is an issue at all spatial resolutions but is more severe as the grid is refined. It is attributed in part to the standard VP formulation, which leads to a momentum equation that is not continuously differentiable. To obtain a smooth formulation, we replaced the standard capped expression for the viscous coefficients with a hyperbolic tangent function. This provides a unique solution and reduces the computational time and failure rate. To further improve the
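
    A sketch of the smoothing idea is given below: the standard VP viscous coefficient zeta = P/(2*Delta), capped at a maximum value, has a kink, whereas replacing the capping by a hyperbolic tangent yields a continuously differentiable expression with the same small- and large-deformation limits. The exact form used in the thesis is not reproduced here; this is a commonly cited smooth replacement and the parameter values are illustrative.

```python
import numpy as np

def zeta_capped(P, Delta, zeta_max):
    """Standard capped bulk viscosity: not continuously differentiable."""
    return np.minimum(P / (2.0 * Delta), zeta_max)

def zeta_smooth(P, Delta, zeta_max):
    """Smooth replacement using tanh; same limits for small and large Delta."""
    return zeta_max * np.tanh(P / (2.0 * Delta * zeta_max))

P, zeta_max = 2.75e4, 1.0e12          # illustrative ice strength [N/m] and cap
Delta = np.logspace(-12, -4, 5)       # deformation measure [1/s]
print(zeta_capped(P, Delta, zeta_max))
print(zeta_smooth(P, Delta, zeta_max))
```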

  6. Evaluation of intradural stimulation efficiency and selectivity in a computational model of spinal cord stimulation.

    PubMed

    Howell, Bryan; Lad, Shivanand P; Grill, Warren M

    2014-01-01

    Spinal cord stimulation (SCS) is an alternative or adjunct therapy to treat chronic pain, a prevalent and clinically challenging condition. Although SCS has substantial clinical success, the therapy is still prone to failures, including lead breakage, lead migration, and poor pain relief. The goal of this study was to develop a computational model of SCS and use the model to compare activation of neural elements during intradural and extradural electrode placement. We constructed five patient-specific models of SCS. Stimulation thresholds predicted by the model were compared to stimulation thresholds measured intraoperatively, and we used these models to quantify the efficiency and selectivity of intradural and extradural SCS. Intradural placement dramatically increased stimulation efficiency and reduced the power required to stimulate the dorsal columns by more than 90%. Intradural placement also increased selectivity, allowing activation of a greater proportion of dorsal column fibers before spread of activation to dorsal root fibers, as well as more selective activation of individual dermatomes at different lateral deviations from the midline. Further, the results suggest that current electrode designs used for extradural SCS are not optimal for intradural SCS, and a novel azimuthal tripolar design increased stimulation selectivity, even beyond that achieved with an intradural paddle array. Increased stimulation efficiency is expected to increase the battery life of implantable pulse generators, increase the recharge interval of rechargeable implantable pulse generators, and potentially reduce stimulator volume. The greater selectivity of intradural stimulation may improve the success rate of SCS by mitigating the sensitivity of pain relief to malpositioning of the electrode. The outcome of this effort is a better quantitative understanding of how intradural electrode placement can potentially increase the selectivity and efficiency of SCS, which, in turn

  7. Fabrication of the mandibular implant-supported fixed restoration using CAD/CAM technology: a clinical report.

    PubMed

    Reshad, Mamaly; Cascione, Domenico; Aalam, Alexandre Amir

    2009-11-01

    The mandibular implant-supported fixed restoration is an appropriate treatment choice for patients with inadequate bone volume in the posterior mandible. Computer-aided design/computer-aided manufacturing (CAD/CAM) technology has broadened the scope and application for this treatment option. A milled titanium bar retaining individual all-ceramic zirconium oxide crowns, with composite resin replicating gingival tissues, is recommended as an acceptable variation for this type of prosthesis. An alternative method for fabricating a mandibular implant-supported fixed restoration using CAD/CAM technology is described.

  8. Research related to improved computer aided design software package. [comparative efficiency of finite, boundary, and hybrid element methods in elastostatics

    NASA Technical Reports Server (NTRS)

    Walston, W. H., Jr.

    1986-01-01

    The comparative computational efficiencies of the finite element (FEM), boundary element (BEM), and hybrid boundary element-finite element (HVFEM) analysis techniques are evaluated for representative bounded domain interior and unbounded domain exterior problems in elastostatics. Computational efficiency is carefully defined in this study as the computer time required to attain a specified level of solution accuracy. The study found the FEM superior to the BEM for the interior problem, while the reverse was true for the exterior problem. The hybrid analysis technique was found to be comparable or superior to both the FEM and BEM for both the interior and exterior problems.

  9. A Suggested Computer Aided Drafting Curriculum (Dacum Based).

    ERIC Educational Resources Information Center

    Pedras, Melvin J.; Hoggard, David

    Computer-aided drawing can bring new technology into the drafting classroom. One approach to computer-aided drafting (CAD) involves use of a personal computer and purchased software. Existing school computers could be shared to reduce costs. Following this narrative introduction, a suggested curriculum for the teaching of CAD is presented in…

  10. A computationally efficient alternative for the Liljencrants-Fant model and its perceptual evaluation.

    PubMed

    Veldhuis, R

    1998-01-01

    An alternative for the Liljencrants-Fant (LF) glottal-pulse model is presented. This alternative is derived from the Rosenberg model. Therefore, it is called the Rosenberg++ model. In the derivation a general framework is used for glottal-pulse models. The Rosenberg++ model is described by the same set of T or R parameters as the LF model but it has the advantage over the LF model that it is computationally more efficient. It is compared with the LF model in a psychoacoustic experiment, from which it is concluded that in a practical situation it is capable of producing synthetic speech which is perceptually equivalent to speech generated with the LF model. PMID:9440341
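
    For orientation, the sketch below implements the classic trigonometric Rosenberg glottal pulse from which the Rosenberg++ model is derived; the Rosenberg++ and LF parameterizations themselves are not reproduced, and the opening/closing fractions are illustrative values.

```python
import numpy as np

def rosenberg_pulse(t, T0, open_frac=0.4, close_frac=0.16):
    """Classic trigonometric Rosenberg glottal pulse, periodic with period T0.

    t          : time array [s], interpreted modulo T0
    T0         : fundamental period [s]
    open_frac  : opening phase duration as a fraction of T0 (illustrative)
    close_frac : closing phase duration as a fraction of T0 (illustrative)
    """
    tp, tn = open_frac * T0, close_frac * T0
    tau = np.mod(t, T0)
    g = np.zeros_like(tau)
    opening = tau <= tp
    closing = (tau > tp) & (tau <= tp + tn)
    g[opening] = 0.5 * (1.0 - np.cos(np.pi * tau[opening] / tp))
    g[closing] = np.cos(0.5 * np.pi * (tau[closing] - tp) / tn)
    return g

# 50 ms of a 100 Hz pulse train sampled at 16 kHz.
fs, f0 = 16000, 100.0
t = np.arange(0, 0.05, 1.0 / fs)
pulse_train = rosenberg_pulse(t, 1.0 / f0)
```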

  11. An efficient algorithm for mapping imaging data to 3D unstructured grids in computational biomechanics.

    PubMed

    Einstein, Daniel R; Kuprat, Andrew P; Jiao, Xiangmin; Carson, James P; Einstein, David M; Jacob, Richard E; Corley, Richard A

    2013-01-01

    Geometries for organ scale and multiscale simulations of organ function are now routinely derived from imaging data. However, medical images may also contain spatially heterogeneous information other than geometry that are relevant to such simulations either as initial conditions or in the form of model parameters. In this manuscript, we present an algorithm for the efficient and robust mapping of such data to imaging-based unstructured polyhedral grids in parallel. We then illustrate the application of our mapping algorithm to three different mapping problems: (i) the mapping of MRI diffusion tensor data to an unstructured ventricular grid; (ii) the mapping of serial cyrosection histology data to an unstructured mouse brain grid; and (iii) the mapping of computed tomography-derived volumetric strain data to an unstructured multiscale lung grid. Execution times and parallel performance are reported for each case. PMID:23293066

  12. Tempest - Efficient Computation of Atmospheric Flows Using High-Order Local Discretization Methods

    NASA Astrophysics Data System (ADS)

    Ullrich, P. A.; Guerra, J. E.

    2014-12-01

    The Tempest Framework composes several compact numerical methods to easily facilitate intercomparison of atmospheric flow calculations on the sphere and in rectangular domains. This framework includes the implementations of Spectral Elements, Discontinuous Galerkin, Flux Reconstruction, and Hybrid Finite Element methods with the goal of achieving optimal accuracy in the solution of atmospheric problems. Several advantages of this approach are discussed such as: improved pressure gradient calculation, numerical stability by vertical/horizontal splitting, arbitrary order of accuracy, etc. The local numerical discretization allows for high performance parallel computation and efficient inclusion of parameterizations. These techniques are used in conjunction with a non-conformal, locally refined, cubed-sphere grid for global simulations and standard Cartesian grids for simulations at the mesoscale. A complete implementation of the methods described is demonstrated in a non-hydrostatic setting.

  13. Exploring Infiniband Hardware Virtualization in OpenNebula towards Efficient High-Performance Computing

    SciTech Connect

    Pais Pitta de Lacerda Ruivo, Tiago; Bernabeu Altayo, Gerard; Garzoglio, Gabriele; Timm, Steven; Kim, Hyun-Woo; Noh, Seo-Young; Raicu, Ioan

    2014-11-11

    It has been widely accepted that software virtualization has a significant negative impact on high-performance computing (HPC) application performance. This work explores the potential use of Infiniband hardware virtualization in an OpenNebula cloud towards the efficient support of MPI-based workloads. We have implemented, deployed, and tested an Infiniband network on the FermiCloud private Infrastructure-as-a-Service (IaaS) cloud. To avoid software virtualization and minimize the virtualization overhead, we employed a technique called Single Root Input/Output Virtualization (SRIOV). Our solution spanned modifications to the Linux hypervisor as well as the OpenNebula manager. We evaluated the performance of the hardware virtualization on up to 56 virtual machines connected by up to 8 DDR Infiniband network links, with micro-benchmarks (latency and bandwidth) as well as with an MPI-intensive application (the HPL Linpack benchmark).

  14. Efficient method for computing the maximum-likelihood quantum state from measurements with additive Gaussian noise.

    PubMed

    Smolin, John A; Gambetta, Jay M; Smith, Graeme

    2012-02-17

    We provide an efficient method for computing the maximum-likelihood mixed quantum state (with density matrix ρ) given a set of measurement outcomes in a complete orthonormal operator basis subject to Gaussian noise. Our method works by first changing basis yielding a candidate density matrix μ which may have nonphysical (negative) eigenvalues, and then finding the nearest physical state under the 2-norm. Our algorithm takes at worst O(d^4) for the basis change plus O(d^3) for finding ρ where d is the dimension of the quantum state. In the special case where the measurement basis is strings of Pauli operators, the basis change takes only O(d^3) as well. The workhorse of the algorithm is a new linear-time method for finding the closest probability distribution (in Euclidean distance) to a set of real numbers summing to one.
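
    The "nearest physical state" step can be illustrated as follows: diagonalize the candidate matrix μ, project its eigenvalues onto the probability simplex (nonnegative, summing to one) in Euclidean distance using the standard sorting-based method, and rebuild ρ in the same eigenbasis. The sketch below follows that recipe; it is a reconstruction of the idea, not the authors' code.

```python
import numpy as np

def project_simplex(v):
    """Closest point (Euclidean distance) to v on the probability simplex."""
    u = np.sort(v)[::-1]                       # sort descending
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / np.arange(1, len(v) + 1) > 0)[0][-1]
    tau = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + tau, 0.0)

def nearest_physical_state(mu):
    """Given a Hermitian, unit-trace candidate matrix mu (possibly with
    negative eigenvalues), return the closest density matrix in the
    Hilbert-Schmidt (2-) norm."""
    w, V = np.linalg.eigh(mu)
    w_proj = project_simplex(w)                # fix the spectrum only
    return (V * w_proj) @ V.conj().T

# Toy example: a slightly unphysical 2x2 candidate matrix with trace 1.
mu = np.array([[1.1, 0.2], [0.2, -0.1]])
rho = nearest_physical_state(mu)
print(np.linalg.eigvalsh(rho), np.trace(rho))  # nonnegative spectrum, trace 1
```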

  15. Towards an efficient computational mining approach to identify EST-SSR markers.

    PubMed

    Sahu, Jagajjit; Sen, Priyabrata; Choudhury, Manabendra Dutta; Barooah, Madhumita; Modi, Mahendra Kumar; Talukdar, Anupam Das

    2012-01-01

    Microsatellites are the markers of choice due to their high abundance, reproducibility, degree of polymorphism and co-dominant nature. They are mainly used for studying genetic variability in different species and for marker-assisted selection. Expressed Sequence Tags (ESTs) serve as the main resource for Simple Sequence Repeats (SSRs). The computational approach for detecting SSRs and developing SSR markers from EST-SSRs is preferred over the conventional methods as it reduces time and cost to a great extent. The available EST sequence databases, various web interfaces and standalone tools provide the platform for an easy analysis of the EST sequences leading to the development of potential EST-SSR markers. This paper is an overview of the in silico approach to developing SSR markers from EST sequences using some of the most efficient tools that are freely available for academic purposes.

  16. Bouc-Wen model parameter identification for a MR fluid damper using computationally efficient GA.

    PubMed

    Kwok, N M; Ha, Q P; Nguyen, M T; Li, J; Samali, B

    2007-04-01

    A non-symmetrical Bouc-Wen model is proposed in this paper for magnetorheological (MR) fluid dampers. The model considers the effect of non-symmetrical hysteresis, which was not taken into account in the original Bouc-Wen model. The model parameters are identified with a Genetic Algorithm (GA), exploiting its flexibility in identifying complex dynamics. The computational efficiency of the proposed GA is improved by absorbing the selection stage into the crossover and mutation operations. Crossover and mutation are also made adaptive to the fitness values so that their probabilities need not be user-specified. Instead of using a sufficient number of generations or a pre-determined fitness value, the algorithm termination criterion is formulated on the basis of a statistical hypothesis test, thus enhancing the performance of the parameter identification. Experimental test data of the damper displacement and force are used to verify the proposed approach, with satisfactory parameter identification results. PMID:17349644
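
    For context, the sketch below integrates the standard symmetric Bouc-Wen hysteresis model for an MR damper with a simple explicit Euler step; the non-symmetric extension and the adaptive GA identification proposed in the paper are not reproduced, and all parameter values are illustrative rather than identified values.

```python
import numpy as np

def bouc_wen_force(x, dt, c0=50.0, k0=25.0, alpha=900.0,
                   gamma=100.0, beta=100.0, A=120.0, n=2):
    """Symmetric Bouc-Wen MR damper model, explicit Euler integration.

    x  : array of damper displacements sampled at interval dt
    dt : sampling interval [s]
    Returns the damper force time series.
    """
    xdot = np.gradient(x, dt)
    z = 0.0
    F = np.zeros_like(x)
    for i in range(len(x)):
        # Evolution of the hysteretic internal variable z
        zdot = (A * xdot[i]
                - gamma * abs(xdot[i]) * z * abs(z) ** (n - 1)
                - beta * xdot[i] * abs(z) ** n)
        z += dt * zdot
        F[i] = c0 * xdot[i] + k0 * x[i] + alpha * z
    return F

# 1 Hz, 10 mm amplitude sinusoidal excitation.
dt = 1e-3
t = np.arange(0, 2.0, dt)
x = 0.01 * np.sin(2 * np.pi * 1.0 * t)
F = bouc_wen_force(x, dt)
```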

  17. Storage and computationally efficient permutations of factorized covariance and square-root information matrices

    NASA Technical Reports Server (NTRS)

    Muellerschoen, R. J.

    1988-01-01

    A unified method to permute vector-stored upper-triangular diagonal factorized covariance (UD) and vector stored upper-triangular square-root information filter (SRIF) arrays is presented. The method involves cyclical permutation of the rows and columns of the arrays and retriangularization with appropriate square-root-free fast Givens rotations or elementary slow Givens reflections. A minimal amount of computation is performed and only one scratch vector of size N is required, where N is the column dimension of the arrays. To make the method efficient for large SRIF arrays on a virtual memory machine, three additional scratch vectors each of size N are used to avoid expensive paging faults. The method discussed is compared with the methods and routines of Bierman's Estimation Subroutine Library (ESL).

  18. Computational Intelligence and Wavelet Transform Based Metamodel for Efficient Generation of Not-Yet Simulated Waveforms

    PubMed Central

    Oltean, Gabriel; Ivanciu, Laura-Nicoleta

    2016-01-01

    The design and verification of complex electronic systems, especially analog and mixed-signal ones, prove to be extremely time-consuming tasks if only circuit-level simulations are involved. A significant amount of time can be saved if a cost-effective solution is used for the extensive analysis of the system under all conceivable conditions. This paper proposes a data-driven method to build fast-to-evaluate, but also accurate, metamodels capable of generating not-yet simulated waveforms as a function of different combinations of the parameters of the system. The necessary data are obtained by early-stage simulation of an electronic control system from the automotive industry. The metamodel development is based on three key elements: a wavelet transform for waveform characterization, a genetic algorithm optimization to detect the optimal wavelet transform and to identify the most relevant decomposition coefficients, and an artificial neural network to derive the relevant coefficients of the wavelet transform for any new parameter combination. The resulting metamodels for three different waveform families are fully reliable. They satisfy the required key points: high accuracy (a maximum mean squared error of 7.1x10^-5 for the unity-based normalized waveforms), efficiency (fully affordable computational effort for metamodel build-up: maximum 18 minutes on a general purpose computer), and simplicity (less than 1 second to run the metamodel, with the user providing only the parameter combination). The metamodels can be used for very efficient generation of new waveforms, for any possible combination of dependent parameters, offering the possibility to explore the entire design space. A wide range of possibilities becomes achievable for the user, such as: all design corners can be analyzed, possible worst-case situations can be investigated, extreme values of waveforms can be discovered, sensitivity analyses can be performed (the influence of each parameter on the
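
    A highly simplified sketch of the metamodel pipeline is shown below: decompose training waveforms with a discrete wavelet transform, keep only the largest-magnitude coefficients, fit a small neural network from parameter combinations to those coefficients, and reconstruct new waveforms by inverse transform. The GA-based selection of the wavelet and of the relevant coefficients described in the abstract is replaced here by a fixed wavelet and a simple magnitude criterion, so this is an assumption-laden illustration rather than the authors' method.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

WAVELET, LEVEL, N_KEEP = "db4", 4, 32   # fixed choices; the paper uses a GA here

def decompose(waves):
    """Flatten the wavelet coefficients of each waveform into one vector."""
    coeffs = [pywt.wavedec(w, WAVELET, level=LEVEL) for w in waves]
    flat = np.array([np.concatenate(c) for c in coeffs])
    sizes = [len(c) for c in coeffs[0]]
    return flat, sizes

def reconstruct(flat_vec, sizes):
    """Inverse wavelet transform from a flattened coefficient vector."""
    parts, start = [], 0
    for s in sizes:
        parts.append(flat_vec[start:start + s])
        start += s
    return pywt.waverec(parts, WAVELET)

# Toy training data: waveforms parameterized by (amplitude, frequency).
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
params = rng.uniform([0.5, 1.0], [2.0, 5.0], size=(200, 2))
waves = np.array([a * np.sin(2 * np.pi * f * t) for a, f in params])

flat, sizes = decompose(waves)
keep = np.argsort(np.abs(flat).mean(axis=0))[-N_KEEP:]   # most relevant coefficients
model = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000).fit(params, flat[:, keep])

# Metamodel prediction of a not-yet simulated waveform.
new_param = np.array([[1.3, 2.7]])
pred = np.zeros(flat.shape[1])
pred[keep] = model.predict(new_param)[0]
new_wave = reconstruct(pred, sizes)
```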

  19. Computationally efficient real-time interpolation algorithm for non-uniform sampled biosignals

    PubMed Central

    Eftekhar, Amir; Kindt, Wilko; Constandinou, Timothy G.

    2016-01-01

    This Letter presents a novel, computationally efficient interpolation method that has been optimised for use in electrocardiogram baseline drift removal. In the authors’ previous Letter, three isoelectric baseline points per heartbeat are detected, and these are here utilised as interpolation points. As an extension of linear interpolation, their algorithm segments the interpolation interval and utilises different piecewise linear equations. Thus, the algorithm produces a linear curvature that is computationally efficient while interpolating non-uniform samples. The proposed algorithm is tested using sinusoids with different fundamental frequencies from 0.05 to 0.7 Hz and also validated with real baseline wander data acquired from the Massachusetts Institute of Technology University and Boston's Beth Israel Hospital (MIT-BIH) Noise Stress Database. The synthetic data results show a root mean square (RMS) error of 0.9 μV (mean), 0.63 μV (median) and 0.6 μV (standard deviation) per heartbeat on a 1 mVp–p 0.1 Hz sinusoid. On real data, they obtain an RMS error of 10.9 μV (mean), 8.5 μV (median) and 9.0 μV (standard deviation) per heartbeat. Cubic spline interpolation and linear interpolation, on the other hand, show 10.7 and 11.6 μV (mean), 7.8 and 8.9 μV (median), and 9.8 and 9.3 μV (standard deviation) per heartbeat, respectively. PMID:27382478
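
    The idea of interpolating the baseline through detected isoelectric points and subtracting it can be sketched as below. Plain linear interpolation with NumPy is used here, whereas the Letter's algorithm uses a segmented piecewise-linear scheme tuned for computational efficiency, which is not reproduced; the toy data and the one-point-per-second isoelectric indices are illustrative assumptions.

```python
import numpy as np

def remove_baseline(ecg, iso_idx):
    """Estimate and subtract baseline wander from an ECG record.

    ecg     : 1-D array of ECG samples
    iso_idx : indices of detected isoelectric points (non-uniformly spaced)
    Returns the corrected signal and the estimated baseline.
    """
    idx = np.arange(len(ecg))
    baseline = np.interp(idx, iso_idx, ecg[iso_idx])  # linear interpolation
    return ecg - baseline, baseline

# Toy example: a stand-in "ECG" plus 0.1 Hz drift, sampled at 360 Hz.
fs = 360
t = np.arange(0, 10, 1 / fs)
drift = 0.5 * np.sin(2 * np.pi * 0.1 * t)
ecg = drift + 0.05 * np.sin(2 * np.pi * 1.0 * t)
iso_idx = np.arange(0, len(t), fs)        # assume one isoelectric point per second
clean, est = remove_baseline(ecg, iso_idx)
```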

  20. On-board computational efficiency in real time UAV embedded terrain reconstruction

    NASA Astrophysics Data System (ADS)

    Partsinevelos, Panagiotis; Agadakos, Ioannis; Athanasiou, Vasilis; Papaefstathiou, Ioannis; Mertikas, Stylianos; Kyritsis, Sarantis; Tripolitsiotis, Achilles; Zervos, Panagiotis

    2014-05-01

    In the last few years, there has been a surge of applications for object recognition, interpretation and mapping using unmanned aerial vehicles (UAVs). Specifications for constructing those UAVs are highly diverse, with contradictory characteristics including cost-efficiency, carrying weight, flight time, mapping precision, real-time processing capabilities, etc. In this work, a hexacopter UAV is employed for near real-time terrain mapping. The main challenge addressed is to retain a low-cost flying platform with real-time processing capabilities. The UAV weight limitation, affecting the overall flight time, makes the selection of the on-board processing components particularly critical. On the other hand, surface reconstruction, as a computationally demanding task, calls for a highly capable processing unit on board. To merge these two contradicting aspects along with customized development, a System on a Chip (SoC) integrated circuit is proposed as a low-power, low-cost processor, which natively supports camera sensors and positioning and navigation systems. Modern SoCs, such as Omap3530 or Zynq, are classified as heterogeneous devices and provide a versatile platform, allowing access to both general purpose processors, such as the ARM11, as well as specialized processors, such as a digital signal processor and a field-programmable gate array. A UAV equipped with the proposed embedded processors allows on-board terrain reconstruction using stereo vision in near real time. Furthermore, according to the frame rate required, additional image processing may concurrently take place, such as image rectification and object detection. Lastly, the onboard positioning and navigation (e.g., GNSS) chip may further improve the quality of the generated map. The resulting terrain maps are compared to ground truth geodetic measurements in order to assess the accuracy limitations of the overall process. It is shown that with our proposed novel system, there is much potential in

  1. Computational Intelligence and Wavelet Transform Based Metamodel for Efficient Generation of Not-Yet Simulated Waveforms.

    PubMed

    Oltean, Gabriel; Ivanciu, Laura-Nicoleta

    2016-01-01

    The design and verification of complex electronic systems, especially analog and mixed-signal ones, prove to be extremely time-consuming tasks if only circuit-level simulations are involved. A significant amount of time can be saved if a cost-effective solution is used for the extensive analysis of the system under all conceivable conditions. This paper proposes a data-driven method to build fast-to-evaluate, but also accurate, metamodels capable of generating not-yet simulated waveforms as a function of different combinations of the parameters of the system. The necessary data are obtained by early-stage simulation of an electronic control system from the automotive industry. The metamodel development is based on three key elements: a wavelet transform for waveform characterization, a genetic algorithm optimization to detect the optimal wavelet transform and to identify the most relevant decomposition coefficients, and an artificial neural network to derive the relevant coefficients of the wavelet transform for any new parameter combination. The resulting metamodels for three different waveform families are fully reliable. They satisfy the required key points: high accuracy (a maximum mean squared error of 7.1x10^-5 for the unity-based normalized waveforms), efficiency (fully affordable computational effort for metamodel build-up: maximum 18 minutes on a general purpose computer), and simplicity (less than 1 second to run the metamodel, with the user providing only the parameter combination). The metamodels can be used for very efficient generation of new waveforms, for any possible combination of dependent parameters, offering the possibility to explore the entire design space. A wide range of possibilities becomes achievable for the user, such as: all design corners can be analyzed, possible worst-case situations can be investigated, extreme values of waveforms can be discovered, sensitivity analyses can be performed (the influence of each parameter on the

  2. Manipulating cinnamyl alcohol dehydrogenase (CAD) expression in flax affects fibre composition and properties

    PubMed Central

    2014-01-01

    Background In recent decades the cultivation of flax and its applications have dramatically decreased. One of the reasons for this is the unpredictable quality and properties of flax fibre, because they depend on environmental factors, retting duration and growing conditions. These factors contribute to the fibre composition, which consists of cellulose, hemicelluloses, lignin and pectin. It is largely established that in flax, lignin reduces the accessibility of enzymes either to pectin, hemicelluloses or cellulose (during retting or in biofuel synthesis and paper production). Therefore, in this study we evaluated the composition and properties of flax fibre from plants with a silenced CAD (cinnamyl alcohol dehydrogenase) gene, which is a key gene in lignin biosynthesis. There is evidence that CAD is a useful target for improving lignin digestibility and/or lowering lignin levels in plants. Results The two studied lines responded differently to the introduced modification, depending on the efficiency of the CAD silencing. Phylogenetic analysis revealed that flax CAD belongs to the “bona-fide” CAD family. CAD down-regulation resulted in a reduced lignin amount in the flax fibre cell wall and, as the FT-IR results suggest, disturbed lignin composition and structure. Moreover, the introduced modification activated a compensatory mechanism which was manifested in the accumulation of cellulose and/or pectin. These changes showed a putative correlation with the observed improvement in fibre tensile strength. Moreover, CAD down-regulation did not disturb, or had only a slight effect on, flax plant development in vivo; however, resistance against the major flax pathogen Fusarium oxysporum decreased slightly. The modification positively affected fibre processing; it resulted in more uniform retting. Conclusion The major finding of our paper is that the modification targeted directly to block lignin synthesis caused not only a reduced lignin level in the fibre, but also affected the amount and

  3. A Hybrid Model for the Computationally-Efficient Simulation of the Cerebellar Granular Layer.

    PubMed

    Cattani, Anna; Solinas, Sergio; Canuto, Claudio

    2016-01-01

    The aim of the present paper is to efficiently describe the membrane potential dynamics of neural populations formed by species having a high density difference in specific brain areas. We propose a hybrid model whose main ingredients are a conductance-based model (ODE system) and its continuous counterpart (PDE system) obtained through a limit process in which the number of neurons confined in a bounded region of the brain tissue is sent to infinity. Specifically, in the discrete model, each cell is described by a set of time-dependent variables, whereas in the continuum model, cells are grouped into populations that are described by a set of continuous variables. Communications between populations, which translate into interactions among the discrete and the continuous models, are the essence of the hybrid model we present here. The cerebellum and cerebellum-like structures show in their granular layer a large difference in the relative density of neuronal species making them a natural testing ground for our hybrid model. By reconstructing the ensemble activity of the cerebellar granular layer network and by comparing our results to a more realistic computational network, we demonstrate that our description of the network activity, even though it is not biophysically detailed, is still capable of reproducing salient features of neural network dynamics. Our modeling approach yields a significant computational cost reduction by increasing the simulation speed at least 270 times. The hybrid model reproduces interesting dynamics such as local microcircuit synchronization, traveling waves, center-surround, and time-windowing. PMID:27148027

  4. A Hybrid Model for the Computationally-Efficient Simulation of the Cerebellar Granular Layer

    PubMed Central

    Cattani, Anna; Solinas, Sergio; Canuto, Claudio

    2016-01-01

    The aim of the present paper is to efficiently describe the membrane potential dynamics of neural populations formed by species having a high density difference in specific brain areas. We propose a hybrid model whose main ingredients are a conductance-based model (ODE system) and its continuous counterpart (PDE system) obtained through a limit process in which the number of neurons confined in a bounded region of the brain tissue is sent to infinity. Specifically, in the discrete model, each cell is described by a set of time-dependent variables, whereas in the continuum model, cells are grouped into populations that are described by a set of continuous variables. Communications between populations, which translate into interactions among the discrete and the continuous models, are the essence of the hybrid model we present here. The cerebellum and cerebellum-like structures show in their granular layer a large difference in the relative density of neuronal species making them a natural testing ground for our hybrid model. By reconstructing the ensemble activity of the cerebellar granular layer network and by comparing our results to a more realistic computational network, we demonstrate that our description of the network activity, even though it is not biophysically detailed, is still capable of reproducing salient features of neural network dynamics. Our modeling approach yields a significant computational cost reduction by increasing the simulation speed at least 270 times. The hybrid model reproduces interesting dynamics such as local microcircuit synchronization, traveling waves, center-surround, and time-windowing. PMID:27148027

  5. A computationally efficient method for simulating fluid flow in elastic pipes in three dimensions

    NASA Astrophysics Data System (ADS)

    Doctors, G. M.; Mazzeo, M. D.; Coveney, P. V.

    2010-09-01

    We propose a new method for carrying out lattice-Boltzmann simulations of pulsatile fluid flow in three-dimensional elastic pipes. It is based on estimating the distances from sites at the edge of the simulation box to the wall along the lattice directions from the displacement of the closest point on the wall and the curvature there, followed by application of a nonequilibrium extrapolation method. Viscous flow in an elastic pipe is studied in three dimensions at a wall displacement of 5% of the radius of the pipe, which is realistic for blood flow through large cerebral arteries. The numerical results for the pressure difference, wall displacement and flow velocity agree well with the analytical predictions. At all sites, the calculation depends only on information from nearest neighbours, so the method proposed is suitable for efficient computation on multicore machines. Compared to simulations with rigid walls, simulations with elastic walls require only 13% more computational effort at the parameters chosen in this study.

  6. Computer Controlled Portable Greenhouse Climate Control System for Enhanced Energy Efficiency

    NASA Astrophysics Data System (ADS)

    Datsenko, Anthony; Myer, Steve; Petties, Albert; Hustek, Ryan; Thompson, Mark

    2010-04-01

    This paper discusses a student project at Kettering University focusing on the design and construction of an energy efficient greenhouse climate control system. In order to maintain acceptable temperatures and stabilize temperature fluctuations in a portable plastic greenhouse economically, a computer controlled climate control system was developed to capture and store thermal energy incident on the structure during daylight periods and release the stored thermal energy during dark periods. The thermal storage mass for the greenhouse system consisted of a water filled base unit. The heat exchanger consisted of a system of PVC tubing. The control system used a programmable LabView computer interface to meet functional specifications that minimized temperature fluctuations and recorded data during operation. The greenhouse was a portable sized unit with a 5' x 5' footprint. Control input sensors were temperature, water level, and humidity sensors and output control devices were fan actuating relays and water fill solenoid valves. A Graphical User Interface was developed to monitor the system, set control parameters, and to provide programmable data recording times and intervals.

  7. Approaches for the computationally efficient assessment of the plug-in HEV impact on the grid

    NASA Astrophysics Data System (ADS)

    Lee, Tae-Kyung; Filipi, Zoran S.

    2012-11-01

    Realistic duty cycles are critical for design and assessment of hybrid propulsion systems, in particular, plug-in hybrid electric vehicles. The analysis of the PHEV impact requires a large amount of data about daily missions for ensuring realism in predicted temporal loads on the grid. This paper presents two approaches for the reduction of the computational effort while assessing the large scale PHEV impact on the grid, namely 1) "response surface modelling" approach; and 2) "daily driving schedule modelling" approach. The response surface modelling approach replaces the time-consuming vehicle simulations by response surfaces constructed off-line with the consideration of the real-world driving. The daily driving modelling approach establishes a correlation between departure and arrival times, and it predicts representative driving patterns with a significantly reduced number of simulation cases. In both cases, representative synthetic driving cycles are used to capture the naturalistic driving characteristics for a given trip length. The proposed approaches enable construction of 24-hour missions, assessments of charging requirements at the time of plugging-in, and temporal distributions of the load on the grid with high computational efficiency.

  8. Modeling the evolution of channel shape: Balancing computational efficiency with hydraulic fidelity

    USGS Publications Warehouse

    Wobus, C.W.; Kean, J.W.; Tucker, G.E.; Anderson, R. Scott

    2008-01-01

    The cross-sectional shape of a natural river channel controls the capacity of the system to carry water off a landscape, to convey sediment derived from hillslopes, and to erode its bed and banks. Numerical models that describe the response of a landscape to changes in climate or tectonics therefore require formulations that can accommodate evolution of channel cross-sectional geometry. However, fully two-dimensional (2-D) flow models are too computationally expensive to implement in large-scale landscape evolution models, while available simple empirical relationships between width and discharge do not adequately capture the dynamics of channel adjustment. We have developed a simplified 2-D numerical model of channel evolution in a cohesive, detachment-limited substrate subject to steady, unidirectional flow. Erosion is assumed to be proportional to boundary shear stress, which is calculated using an approximation of the flow field in which log-velocity profiles are assumed to apply along vectors that are perpendicular to the local channel bed. Model predictions of the velocity structure, peak boundary shear stress, and equilibrium channel shape compare well with predictions of a more sophisticated but more computationally demanding ray-isovel model. For example, the mean velocities computed by the two models are consistent to within ~3%, and the predicted peak shear stress is consistent to within ~7%. Furthermore, the shear stress distributions predicted by our model compare favorably with available laboratory measurements for prescribed channel shapes. A modification to our simplified code in which the flow includes a high-velocity core allows the model to be extended to estimate shear stress distributions in channels with large width-to-depth ratios. Our model is efficient enough to incorporate into large-scale landscape evolution codes and can be used to examine how channels adjust both cross-sectional shape and slope in response to tectonic and climatic

  9. Separation efficiency of a hydrodynamic separator using a 3D computational fluid dynamics multiscale approach.

    PubMed

    Schmitt, Vivien; Dufresne, Matthieu; Vazquez, Jose; Fischer, Martin; Morin, Antoine

    2014-01-01

    The aim of this study is to investigate the use of computational fluid dynamics (CFD) to predict the solid separation efficiency of a hydrodynamic separator. The numerical difficulty concerns the discretization of the geometry to simulate both the global behavior and the local phenomena that occur near the screen. In this context, a CFD multiscale approach was used: a global model (at the scale of the device) is used to observe the hydrodynamic behavior within the device; a local model (portion of the screen) is used to determine the local phenomena that occur near the screen. The Eulerian-Lagrangian approach was used to model the particle trajectories in both models. The global model shows the influence of the particles' characteristics on the trapping efficiency. A high density favors sedimentation. In contrast, particles with small densities (1,040 kg/m³) are steered by the hydrodynamic behavior and can potentially be trapped by the separator. The use of the local model allows us to observe the particle trajectories near the screen. A comparison between two types of screens (perforated plate vs expanded metal) highlights the turbulent effects created by the shape of the screen.

  10. Computationally-efficient finite-element-based thermal and electromagnetic models of electric machines

    NASA Astrophysics Data System (ADS)

    Zhou, Kan

    With the modern trend of transportation electrification, electric machines are a key component of electric/hybrid electric vehicle (EV/HEV) powertrains. It is therefore important that vehicle powertrain-level and system-level designers and control engineers have access to accurate yet computationally-efficient (CE), physics-based modeling tools of the thermal and electromagnetic (EM) behavior of electric machines. In this dissertation, CE yet sufficiently-accurate thermal and EM models for electric machines, which are suitable for use in vehicle powertrain design, optimization, and control, are developed. This includes not only creating fast and accurate thermal and EM models for specific machine designs, but also the ability to quickly generate and determine the performance of new machine designs through the application of scaling techniques to existing designs. With the developed techniques, the thermal and EM performance can be accurately and efficiently estimated. Furthermore, powertrain or system designers can easily and quickly adjust the characteristics and the performance of the machine in ways that are favorable to the overall vehicle performance.

  11. Efficient real gas Navier-Stokes computations of high speed flows using an LU scheme

    NASA Technical Reports Server (NTRS)

    Coirier, William J.

    1990-01-01

    An efficient method to account for the chemically frozen thermodynamic and transport properties of air in three dimensional Navier-Stokes calculations was demonstrated. This approach uses an explicitly specified equation of state (EOS) so that the fluid pressure, temperature and transport properties are directly related to the flow variables. Since the pressure is explicitly known as a general function of the flow variables no assumptions are made regarding the pressure derivatives in the construction of the flux Jacobians. The method is efficient since no sub-iterations are required to deduce the pressure and temperature from the flux variables and allows different equations of state to be easily supplied to the code. The flexibility of the EOS approach is demonstrated by implementing a high order TVD upwinding scheme based upon flux differencing and Van Leer's flux vector splitting. The EOS approach is demonstrated by computing the hypersonic flow through the corner region of two mutually perpendicular flat plates and through a simplified model of a scramjet module gap-seal configuration.

  12. Using partial least squares to compute efficient channels for the Bayesian ideal observer

    NASA Astrophysics Data System (ADS)

    Witten, Joel M.; Park, Subok; Myers, Kyle J.

    2009-02-01

    We define image quality by how accurately an observer, human or otherwise, can perform a given task, such as determining to which class an image belongs. For detection tasks, the Bayesian ideal observer is the best observer, in that it sets an upper bound for observer performance, summarized by the area under the receiver operating characteristic curve. However, the use of this observer is frequently infeasible because of unknown image statistics, whose estimation is computationally costly. As a result, a channelized ideal observer (CIO) was investigated to reduce the dimensionality of the data, yet approximate the performance of the ideal observer. Previously investigated channels include Laguerre-Gauss (LG) channels and channels obtained via the singular value decomposition (SVD) of the given linear system. Though both types are highly efficient for the ideal observer, they nevertheless have the weakness that they may not be as efficient for general detection tasks involving complex/realistic images; the former is particular to the signal and background shape, and the latter is particular to the system operator. In this work, we attempt to develop channels that can be applied to a system with any signal and background type and without knowledge of any characteristics of the system. The method used is a partial least squares (PLS) algorithm, in which channels are chosen to maximize the squared covariance between images and their classes. Preliminary results show that the CIO with PLS channels outperforms one with either the LG or SVD channels and very closely approximates ideal-observer performance.
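
    A minimal sketch of the PLS-channel idea on synthetic data (assumed signal shape, noise level, and channel count, using scikit-learn's PLSRegression rather than the authors' implementation): the channels are the PLS weight vectors that maximize image/label covariance, and a linear observer is then applied to the channel outputs.

        # Illustrative sketch (synthetic data, not the authors' experiments): learn PLS
        # channels, then score images through a channelized linear observer.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        n_train, side = 400, 16                      # 16x16-pixel toy images
        bump = np.exp(-((np.arange(side) - side / 2) ** 2) / 8.0)
        signal = np.outer(bump, bump).ravel()        # Gaussian bump as the "lesion"

        labels = rng.integers(0, 2, n_train)         # 0 = signal absent, 1 = signal present
        images = rng.normal(0.0, 1.0, (n_train, side * side)) + np.outer(labels, signal)

        # Channels = PLS weight vectors: 5 numbers per image instead of 256 pixels.
        pls = PLSRegression(n_components=5).fit(images, labels.astype(float))
        channels = pls.x_weights_                    # shape (n_pixels, n_channels)

        observer = LinearDiscriminantAnalysis().fit(images @ channels, labels)

        test_labels = rng.integers(0, 2, 200)
        test = rng.normal(0.0, 1.0, (200, side * side)) + np.outer(test_labels, signal)
        scores = observer.decision_function(test @ channels)
        print("AUC of the channelized observer:", round(roc_auc_score(test_labels, scores), 3))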

  13. Competency Reference for Computer Assisted Drafting.

    ERIC Educational Resources Information Center

    Oregon State Dept. of Education, Salem. Div. of Vocational Technical Education.

    This guide, developed in Oregon, lists competencies essential for students in computer-assisted drafting (CAD). Competencies are organized in eight categories: computer hardware, file usage and manipulation, basic drafting techniques, mechanical drafting, specialty disciplines, three-dimensional drawing/design, plotting/printing, and advanced CAD.…

  14. Computer Aided Design in Engineering Education.

    ERIC Educational Resources Information Center

    Gobin, R.

    1986-01-01

    Discusses the use of Computer Aided Design (CAD) and Computer Aided Manufacturing (CAM) systems in an undergraduate engineering education program. Provides a rationale for CAD/CAM use in the already existing engineering program. Describes the methods used in choosing the systems, some initial results, and warnings for first-time users. (TW)

  15. From Artisanal to CAD-CAM Blocks: State of the Art of Indirect Composites.

    PubMed

    Mainjot, A K; Dupont, N M; Oudkerk, J C; Dewael, T Y; Sadoun, M J

    2016-05-01

    Indirect composites have been undergoing an impressive evolution over the last few years. Specifically, recent developments in computer-aided design-computer-aided manufacturing (CAD-CAM) blocks have been associated with new polymerization modes, innovative microstructures, and different compositions. All these recent breakthroughs have introduced important gaps among the properties of the different materials. This critical state-of-the-art review analyzes the strengths and weaknesses of the different varieties of CAD-CAM composite materials, especially as compared with direct and artisanal indirect composites. Indeed, the new polymerization modes used for CAD-CAM blocks, especially high temperature (HT) and, most of all, high temperature-high pressure (HT-HP), are shown to significantly increase the degree of conversion in comparison with light-cured composites. Industrial processes also allow for the augmentation of the filler content and for the realization of more homogeneous structures with fewer flaws. In addition, due to their increased degree of conversion and their different monomer composition, some CAD-CAM blocks are more advantageous in terms of toxicity and monomer release. Finally, materials with a polymer-infiltrated ceramic network (PICN) microstructure exhibit higher flexural strength and a more favorable elastic modulus than materials with a dispersed filler microstructure. Consequently, some high-performance composite CAD-CAM blocks, particularly experimental PICNs, can now rival glass-ceramics, such as lithium-disilicate glass-ceramics, for use as bonded partial restorations and crowns on natural teeth and implants. Because they can be manufactured in very thin sections, they offer the possibility of developing innovative, minimally invasive treatment strategies, such as "no prep" treatment of worn dentition. Current issues are related to the study of bonding and wear properties of the different varieties of CAD-CAM composites. There is also a crucial

  16. True Concurrent Thermal Engineering Integrating CAD Model Building with Finite Element and Finite Difference Methods

    NASA Technical Reports Server (NTRS)

    Panczak, Tim; Ring, Steve; Welch, Mark

    1999-01-01

    Thermal engineering has long been left out of the concurrent engineering environment dominated by CAD (computer aided design) and FEM (finite element method) software. Current tools attempt to force the thermal design process into an environment primarily created to support structural analysis, which results in inappropriate thermal models. As a result, many thermal engineers either build models "by hand" or use geometric user interfaces that are separate from and have little useful connection, if any, to CAD and FEM systems. This paper describes the development of a new thermal design environment called the Thermal Desktop. This system is fully integrated into a neutral, low-cost CAD system, utilizes both FEM and finite difference (FD) methods, and does not compromise the needs of the thermal engineer. Rather, the features needed for concurrent thermal analysis are specifically addressed by combining traditional parametric surface-based radiation and FD-based conduction modeling with CAD and FEM methods. The use of flexible and familiar temperature solvers such as SINDA/FLUINT (Systems Improved Numerical Differencing Analyzer/Fluid Integrator) is retained.
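
    For readers unfamiliar with the node/conductor thermal networks that SINDA-style solvers operate on, the sketch below (assumed node count, conductances, and loads; this is not SINDA/FLUINT) assembles a steady-state conduction network into a linear system and solves it directly.

        # Minimal sketch of a conductor/node thermal network: assemble the
        # steady-state conduction problem G*T = Q and solve it.  All values assumed.
        import numpy as np

        n = 5                                                  # diffusion nodes along a strip
        conductors = [(i, i + 1, 2.0) for i in range(n - 1)]   # (node i, node j, G [W/K])
        boundary = {0: 300.0}                                  # node 0 held at 300 K (heat sink)
        heat_load = np.zeros(n)
        heat_load[-1] = 10.0                                   # 10 W dissipated at the far end

        G = np.zeros((n, n))
        Q = heat_load.copy()
        for i, j, g in conductors:                             # standard conductance assembly
            G[i, i] += g; G[j, j] += g
            G[i, j] -= g; G[j, i] -= g
        for node, T_fixed in boundary.items():                 # impose the fixed-temperature node
            G[node, :] = 0.0
            G[node, node] = 1.0
            Q[node] = T_fixed

        T = np.linalg.solve(G, Q)
        print("steady-state node temperatures [K]:", np.round(T, 1))   # 300, 305, ..., 320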

  17. C.A.D. representation of ternary and quaternary phase diagrams

    NASA Technical Reports Server (NTRS)

    Delao, James D.

    1986-01-01

    This work is concerned with the utilization of C.A.D. solid-modeling software for the computer representation of three-dimensional phase diagrams. The work was undertaken in two parts. First, the C.A.D. software (I-DEAS, by Structural Dynamics Research Corp.) was integrated with a variety of auxiliary Fortran 77 and I-DEAS language programs written specifically for phase diagram representation. The capabilities of the resulting suite of software for three-dimensional phase diagram representation were developed and illustrated by the construction, display, and manipulation of solid-model phase diagrams for a hypothetical quaternary eutectic system. The results of this work are discussed in some detail in the attached publication ('Solid-modeling: a C.A.D. Alternative for Three-dimensional Phase Diagram Representation'). Such a technique is of general applicability, having utility in both research and education. Second, using the C.A.D. technique, data from the literature (gleaned from some 70 separate publications), which represent experimentally determined phase boundaries, were combined to form solid-model representations of the CMS2-M2S-S ternary space diagram and the CMS2-CAS2-M2S-S quaternary liquidus projection (where C=CaO, M=MgO, A=Al2O3, and S=SiO2). These diagrams were utilized in a concurrent study of solidification in the CMAS system.
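
    A minimal sketch of the coordinate bookkeeping behind such space diagrams (illustrative values only, not the I-DEAS workflow): ternary compositions are placed on a composition triangle and temperature is used as the prism height, giving the 3-D points from which liquidus surfaces can be lofted in a CAD system.

        # Illustrative sketch (hypothetical data, not the I-DEAS workflow): map ternary
        # compositions plus temperature into triangular-prism coordinates.
        import numpy as np

        # Vertices of the composition triangle in the x-y plane (components A, B, C).
        TRI = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3.0) / 2.0]])

        def prism_point(x_a, x_b, x_c, temperature):
            """Barycentric composition (fractions summing to 1) -> (x, y, T) in the prism."""
            assert abs(x_a + x_b + x_c - 1.0) < 1e-9
            xy = x_a * TRI[0] + x_b * TRI[1] + x_c * TRI[2]
            return np.array([xy[0], xy[1], temperature])

        # Hypothetical digitized liquidus points (composition fractions, temperature in K).
        liquidus = np.vstack([prism_point(0.6, 0.3, 0.1, 1750.0),
                              prism_point(0.5, 0.3, 0.2, 1710.0),
                              prism_point(0.4, 0.4, 0.2, 1685.0)])
        print(liquidus)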

  18. A novel approach to CAD system for the detection of lung nodules in CT images.

    PubMed

    Javaid, Muzzamil; Javid, Moazzam; Rehman, Muhammad Zia Ur; Shah, Syed Irtiza Ali

    2016-10-01

    Detection of pulmonary nodules plays a significant role in the early diagnosis of lung cancer, which improves an individual's chances of survival. In this paper, a computer-aided nodule detection method is proposed for the segmentation and detection of challenging nodules such as juxtavascular and juxtapleural nodules. Lungs are segmented from computed tomography (CT) images using intensity thresholding; a brief analysis of the CT image histogram is performed to select a suitable threshold value for better segmentation results. Simple morphological closing is used to include juxtapleural nodules in the segmented lung regions. K-means clustering is applied for the initial detection and segmentation of potential nodules; shape-specific morphological opening is implemented to refine the segmentation outcomes. These segmented potential nodules are then divided into six groups on the basis of their thickness and percentage connectivity with the lung walls. Grouping not only improved the system's efficiency but also reduced the computational time otherwise consumed in calculating and analyzing unnecessary features for all nodules. Different sets of 2D and 3D features are extracted from the nodules in each group to eliminate false positives. Small nodules are differentiated from false positives (FPs) on the basis of their salient features; the sensitivity of the system for small nodules is 83.33%. An SVM classifier is used for the classification of large nodules, for which the sensitivity of the proposed system is 93.8% using 10-fold cross-validation. A receiver operating characteristic (ROC) curve is used for the analysis of the CAD system. The overall sensitivity of the system is 91.65% with 3.19 FPs per case, and the accuracy is 96.22%. The system took 3.8 seconds to analyze each image. PMID:27586486
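
    A compressed sketch of the pipeline's main stages on a single 2-D slice (placeholder thresholds, structuring-element sizes, and feature set; not the authors' tuned values), using scikit-image and scikit-learn:

        # Compressed sketch of thresholding -> closing -> k-means -> opening ->
        # feature extraction -> SVM, on one 2-D CT slice.  Parameters are assumptions.
        import numpy as np
        from skimage import measure, morphology
        from sklearn.cluster import KMeans

        def candidate_features(slice_hu):
            """slice_hu: 2-D CT slice in Hounsfield units -> one feature row per candidate."""
            # 1. Lung segmentation by intensity thresholding, then closing so
            #    juxtapleural nodules at the lung wall stay inside the mask.
            lung = slice_hu < -400                              # assumed HU threshold
            lung = morphology.binary_closing(lung, morphology.disk(10))

            # 2. K-means on intensities inside the lung separates dense candidate pixels.
            vals = slice_hu[lung].reshape(-1, 1)
            km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(vals)
            cand = np.zeros_like(lung)
            cand[lung] = km.labels_ == int(np.argmax(km.cluster_centers_))

            # 3. Shape-specific opening detaches vessels; extract simple 2-D features.
            cand = morphology.binary_opening(cand, morphology.disk(2))
            regions = measure.regionprops(measure.label(cand), intensity_image=slice_hu)
            return np.array([(r.area, r.eccentricity, r.solidity, r.mean_intensity)
                             for r in regions])

        # 4. An SVM trained on labelled candidates then separates nodules from false
        #    positives (training data not shown here):
        #     from sklearn.svm import SVC
        #     clf = SVC(kernel="rbf").fit(train_features, train_labels)
        #     predictions = clf.predict(candidate_features(ct_slice))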

  20. Library-based statistical reproduction as a tool for computationally efficient climate model emulation

    NASA Astrophysics Data System (ADS)

    Castruccio, S.; McInerney, D.; Stein, M. L.; Moyer, E. J.

    2011-12-01

    downscaling of the model (pattern scaling). For a monotonic forcing scenario, the results suggest that statistical emulation can be used to produce computationally efficient tools based on pre-computed libraries of model output that can aid in basic science, in model intercomparison studies, and in policy analysis.
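
    A minimal sketch of the simplest library-based emulator, pattern scaling, on a synthetic "library" run (the paper's statistical emulator is more sophisticated than this): each grid cell's response is regressed on the global-mean temperature from pre-computed model output, and a new scenario is then reconstructed from its global-mean trajectory alone.

        # Minimal pattern-scaling sketch on a synthetic "library" run (illustration only).
        import numpy as np

        rng = np.random.default_rng(1)
        n_years, n_cells = 100, 50

        # Pre-computed library output: global-mean temperature anomaly and per-cell
        # anomalies for one forcing scenario.
        global_mean = np.linspace(0.0, 3.0, n_years)                 # [K]
        true_pattern = rng.uniform(0.5, 2.0, n_cells)                # local warming per K of global mean
        library_fields = np.outer(global_mean, true_pattern) + rng.normal(0.0, 0.1, (n_years, n_cells))

        # Fit the pattern: least-squares slope of each cell against the global mean.
        pattern = np.linalg.lstsq(global_mean[:, None], library_fields, rcond=None)[0][0]

        # Emulate a new scenario from its (cheaply obtained) global-mean trajectory alone.
        new_global_mean = np.linspace(0.0, 4.5, n_years)
        emulated_fields = np.outer(new_global_mean, pattern)
        print("max pattern-recovery error [K/K]:", float(np.max(np.abs(pattern - true_pattern))))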