Science.gov

Sample records for computationally efficient cad

  1. A new computationally efficient CAD system for pulmonary nodule detection in CT imagery.

    PubMed

    Messay, Temesguen; Hardie, Russell C; Rogers, Steven K

    2010-06-01

    Early detection of lung nodules is extremely important for the diagnosis and clinical management of lung cancer. In this paper, a novel computer-aided detection (CAD) system for the detection of pulmonary nodules in thoracic computed tomography (CT) imagery is presented. The paper describes the architecture of the CAD system and assesses its performance on a publicly available database to serve as a benchmark for future research efforts. Training and tuning of all modules in our CAD system are done using a separate and independent dataset provided courtesy of the University of Texas Medical Branch (UTMB). The publicly available testing dataset is that created by the Lung Image Database Consortium (LIDC). The LIDC data used here comprise 84 CT scans containing 143 nodules ranging from 3 to 30 mm in effective size that were manually segmented by at least one of four radiologists. The CAD system uses a fully automated lung segmentation algorithm to define the boundaries of the lung regions. It combines intensity thresholding with morphological processing to detect and segment nodule candidates simultaneously. A set of 245 features is computed for each segmented nodule candidate. A sequential forward selection process is used to determine the optimum subset of features for two distinct classifiers, a Fisher Linear Discriminant (FLD) classifier and a quadratic classifier. A performance comparison between the two classifiers is presented, and based on this, the FLD classifier is selected for the CAD system. With an average of 517.5 nodule candidates per case/scan (517.5 +/- 72.9), the proposed front-end detector/segmentor is able to detect 92.8% of all the nodules in the LIDC/testing dataset (based on merged ground truth). The mean overlap between the nodule regions delineated by three or more radiologists and the ones segmented by the proposed segmentation algorithm is approximately 63%. Overall, with a specificity of 3 false positives (FPs) per case/patient on
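
    A minimal sketch of this kind of front-end candidate detector, combining an intensity threshold with morphological opening and connected-component labeling; the threshold value and structuring element below are illustrative assumptions, not the paper's tuned parameters:

    ```python
    import numpy as np
    from scipy import ndimage

    def detect_nodule_candidates(ct_slice_hu, lung_mask, threshold_hu=-400):
        """Threshold lung voxels, clean up with morphological opening, and
        label connected components as nodule candidates (toy parameters)."""
        binary = (ct_slice_hu > threshold_hu) & lung_mask
        # Opening suppresses structures thinner than the structuring element.
        binary = ndimage.binary_opening(binary, structure=np.ones((3, 3)))
        labels, n = ndimage.label(binary)
        # One (centroid, area-in-pixels) pair per candidate region.
        return [(ndimage.center_of_mass(binary, labels, i), int((labels == i).sum()))
                for i in range(1, n + 1)]
    ```

    Each candidate would then feed the feature computation and FLD classification stages described in the abstract.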

  2. Computing Mass Properties From AutoCAD

    NASA Technical Reports Server (NTRS)

    Jones, A.

    1990-01-01

    Mass properties of structures computed from data in drawings. AutoCAD to Mass Properties (ACTOMP) computer program developed to facilitate quick calculations of mass properties of structures containing many simple elements in such complex configurations as trusses or sheet-metal containers. Mathematically modeled in AutoCAD or compatible computer-aided design (CAD) system in minutes by use of three-dimensional elements. Written in Microsoft Quick-Basic (Version 2.0).

  3. Impact of a computer-aided detection (CAD) system integrated into a picture archiving and communication system (PACS) on reader sensitivity and efficiency for the detection of lung nodules in thoracic CT exams.

    PubMed

    Bogoni, Luca; Ko, Jane P; Alpert, Jeffrey; Anand, Vikram; Fantauzzi, John; Florin, Charles H; Koo, Chi Wan; Mason, Derek; Rom, William; Shiau, Maria; Salganicoff, Marcos; Naidich, David P

    2012-12-01

    The objective of this study is to assess the impact on nodule detection and efficiency of using a computer-aided detection (CAD) device seamlessly integrated into a commercially available picture archiving and communication system (PACS). Forty-eight consecutive low-dose thoracic computed tomography studies were retrospectively included from an ongoing multi-institutional screening study. CAD results were sent to PACS as a separate image series for each study. Five fellowship-trained thoracic radiologists interpreted each case first on contiguous 5 mm sections, then evaluated the CAD output series (with CAD marks on corresponding axial sections). The standard of reference was based on three-reader agreement with expert adjudication. The time to interpret CAD markings was automatically recorded. A total of 134 true-positive nodules measuring 3 mm and larger were included in our study, with 85 ≥4 mm and 50 ≥5 mm in size. Reader detection improved significantly in each size category when using CAD: from 44 to 57% for ≥3 mm, 48 to 61% for ≥4 mm, and 44 to 60% for ≥5 mm. CAD stand-alone sensitivity was 65, 68, and 66% for nodules ≥3, ≥4, and ≥5 mm, respectively, with CAD significantly increasing the false positives for only two readers. The average time to interpret and annotate a CAD mark was 15.1 s after localizing it in the original image series. The integration of CAD into PACS increases reader sensitivity with minimal impact on interpretation time and supports such implementation in daily clinical practice. PMID:22710985

  4. A CAD (Classroom Assessment Design) of a Computer Programming Course

    ERIC Educational Resources Information Center

    Hawi, Nazir S.

    2012-01-01

    This paper presents a CAD (classroom assessment design) of an entry-level undergraduate computer programming course "Computer Programming I". CAD has been the product of a long experience in teaching computer programming courses including teaching "Computer Programming I" 22 times. Each semester, CAD is evaluated and modified for the subsequent…

  5. Computer-Aided Design (CAD).

    ERIC Educational Resources Information Center

    Burns, William E.

    1986-01-01

    Discusses the field of computer-aided design, which combines the skills and creativity of the architect, designer, drafter, and engineer with the power of the computer. Reports on job tasks, applications, background of the field, job outlook, and necessary training. (CH)

  6. Computer-aided diagnosis (CAD) for colonoscopy

    NASA Astrophysics Data System (ADS)

    Gu, Jia; Poirson, Allen

    2007-03-01

    Colorectal cancer is the second leading cause of cancer deaths, and ranks third for new cancer cases and cancer mortality for both men and women. However, its death rate can be dramatically reduced by appropriate treatment when early detection is available. The purpose of colonoscopy is to identify and assess the severity of lesions, which may be flat or protruding. Due to the subjective nature of the examination, colonoscopic proficiency is highly variable and dependent upon the colonoscopist's knowledge and experience. An automated image processing system providing an objective, rapid, and inexpensive analysis of video from a standard colonoscope could provide a valuable tool for screening and diagnosis. In this paper, we present the design, functionality, and preliminary results of our Computer-Aided-Diagnosis (CAD) system for colonoscopy, ColonoCAD™. ColonoCAD is a complex multi-sensor, multi-data and multi-algorithm image processing system, incorporating data management and visualization, video quality assessment and enhancement, calibration, multiple-view-based reconstruction, feature extraction, and classification. As this is a new field in medical image processing, our hope is that this paper will provide the framework to encourage and facilitate collaboration and discussion between industry, academia, and medical practitioners.

  7. Computer-aided-diagnosis (CAD) for colposcopy

    NASA Astrophysics Data System (ADS)

    Lange, Holger; Ferris, Daron G.

    2005-04-01

    Uterine cervical cancer is the second most common cancer among women worldwide. Colposcopy is a diagnostic method, whereby a physician (colposcopist) visually inspects the lower genital tract (cervix, vulva and vagina), with special emphasis on the subjective appearance of metaplastic epithelium comprising the transformation zone on the cervix. Cervical cancer precursor lesions and invasive cancer exhibit certain distinctly abnormal morphologic features. Lesion characteristics such as margin; color or opacity; blood vessel caliber, intercapillary spacing and distribution; and contour are considered by colposcopists to derive a clinical diagnosis. Clinicians and academia have suggested and shown proof of concept that automated image analysis of cervical imagery can be used for cervical cancer screening and diagnosis, having the potential to have a direct impact on improving women's health care and reducing associated costs. STI Medical Systems is developing a Computer-Aided-Diagnosis (CAD) system for colposcopy -- ColpoCAD. At the heart of ColpoCAD is a complex multi-sensor, multi-data and multi-feature image analysis system. A functional description is presented of the envisioned ColpoCAD system, broken down into: Modality Data Management System, Image Enhancement, Feature Extraction, Reference Database, and Diagnosis and directed Biopsies. The system design and development process of the image analysis system is outlined. The system design provides a modular and open architecture built on feature based processing. The core feature set includes the visual features used by colposcopists. This feature set can be extended to include new features introduced by new instrument technologies, like fluorescence and impedance, and any other plausible feature that can be extracted from the cervical data. Preliminary results of our research on detecting the three most important features: blood vessel structures, acetowhite regions and lesion margins are shown. As this is a new

  8. Preparing Students for Computer Aided Drafting (CAD). A Conceptual Approach.

    ERIC Educational Resources Information Center

    Putnam, A. R.; Duelm, Brian

    This presentation outlines guidelines for developing and implementing an introductory course in computer-aided drafting (CAD) that is geared toward secondary-level students. The first section of the paper, which deals with content identification and selection, includes lists of mechanical drawing and CAD competencies and a list of rationales for…

  9. CAD-centric Computation Management System for a Virtual TBM

    SciTech Connect

    Ramakanth Munipalli; K.Y. Szema; P.Y. Huang; C.M. Rowell; A.Ying; M. Abdou

    2011-05-03

    HyPerComp Inc., in research collaboration with TEXCEL, has set out to build a Virtual Test Blanket Module (VTBM) computational system to address the need in contemporary fusion research for simulating the integrated behavior of the blanket, divertor and plasma-facing components in a fusion environment. Physical phenomena to be considered in a VTBM will include fluid flow, heat transfer, mass transfer, neutronics, structural mechanics and electromagnetics. We seek to integrate well-established (third-party) simulation software in the various disciplines mentioned above. The integrated modeling process will enable user groups to interoperate using a common modeling platform at various stages of the analysis. Since CAD is at the core of the simulation (as opposed to computational meshes, which are different for each problem), the VTBM will have a well-developed CAD interface governing CAD model editing, cleanup, parameter extraction, model deformation (based on simulation), and CAD-based data interpolation. In Phase I, we built the CAD hub of the proposed VTBM and demonstrated its use in modeling a liquid breeder blanket module with coupled MHD and structural mechanics using HIMAG and ANSYS. A complete graphical user interface of the VTBM was created, which will form the foundation of any future development. Conservative data interpolation via CAD (as opposed to mesh-based transfer) and the regeneration of CAD models based upon computed deflections are among the other highlights of the Phase-I activity.

  10. Introduction to CAD/Computers. High-Technology Training Module.

    ERIC Educational Resources Information Center

    Lockerby, Hugh

    This learning module for an eighth-grade introductory technology course is designed to help teachers introduce students to computer-assisted design (CAD) in a communications unit on graphics. The module contains a module objective and five specific objectives, a content outline, suggested instructor methodology, student activities, a list of six…

  11. Role of computer aided detection (CAD) integration: case study with meniscal and articular cartilage CAD applications

    NASA Astrophysics Data System (ADS)

    Safdar, Nabile; Ramakrishna, Bharath; Saiprasad, Ganesh; Siddiqui, Khan; Siegel, Eliot

    2008-03-01

    Knee-related injuries involving the meniscal or articular cartilage are common and require accurate diagnosis and surgical intervention when appropriate. With proper techniques and experience, confidence in detection of meniscal tears and articular cartilage abnormalities can be quite high. However, for radiologists without musculoskeletal training, diagnosis of such abnormalities can be challenging. In this paper, the potential of improving diagnosis through integration of computer-aided detection (CAD) algorithms for automatic detection of meniscal tears and articular cartilage injuries of the knees is studied. An integrated approach in which the results of algorithms evaluating either meniscal tears or articular cartilage injuries provide feedback to each other is believed to improve the diagnostic accuracy of the individual CAD algorithms due to the known association between abnormalities in these distinct anatomic structures. The correlation between meniscal tears and articular cartilage injuries is exploited to improve the final diagnostic results of the individual algorithms. Preliminary results from the integrated application are encouraging and more comprehensive tests are being planned.

  12. Differences in computer exposure between university administrators and CAD draftsmen.

    PubMed

    Wu, Hsin-Chieh; Liu, Yung-Ping; Chen, Hsieh-Ching

    2010-10-01

    This study utilized an external logger system for onsite measurements of the computer activities of two professional groups: twelve university administrators and twelve computer-aided design (CAD) draftsmen. Computer use of each participant was recorded for 10 consecutive days, an average of 7.9 +/- 1.8 workdays and 7.8 +/- 1.5 workdays for administrators and draftsmen, respectively. Quantitative parameters computed from the recorded data were daily dynamic duration (DD) and static duration, daily keystrokes, mouse clicks, wheel scrolling counts, mouse movement and dragged distance, average typing and clicking rates, and average time holding down keys and mouse buttons. Significant group differences existed in the number of daily keystrokes (p<0.0005) and mouse clicks (p<0.0005), mouse distance moved (p<0.0005), typing rate (p<0.0001), daily mouse DD (p<0.0001), and keyboard DD (p<0.005). Both groups had significantly longer mouse DD than keyboard DD (p<0.0001). Statistical analysis indicates that the duration of computer use for different computer tasks cannot be represented by a single formula with the same set of quantitative parameters as those associated with mouse and keyboard activities. Results of this study demonstrate that computer exposure during different tasks cannot be estimated solely from computer use duration. Quantification of onsite computer activities is necessary when determining the computer-associated risk of musculoskeletal disorders. Other significant findings are discussed. PMID:20392434
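
    For illustration, exposure metrics of this kind can be aggregated from a logged event stream roughly as follows; the (timestamp, kind) event format and the 5-second idle cutoff are assumptions of this sketch, not the study's actual instrumentation:

    ```python
    from collections import Counter

    def summarize_day(events, idle_cutoff_s=5.0):
        """Aggregate one day's input events into simple exposure metrics.
        Each event is assumed to be a (timestamp_s, kind) tuple with kind in
        {'key', 'click', 'scroll'} -- a hypothetical log format."""
        counts = Counter(kind for _, kind in events)
        times = sorted(t for t, _ in events)
        # Approximate "dynamic duration": time spent in activity bursts,
        # where gaps longer than the idle cutoff do not count as active time.
        dynamic_s = sum(min(b - a, idle_cutoff_s) for a, b in zip(times, times[1:]))
        return {"keystrokes": counts["key"], "clicks": counts["click"],
                "scrolls": counts["scroll"], "dynamic_duration_s": dynamic_s}
    ```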

  13. Computer Aided Detection (CAD) Systems for Mammography and the Use of GRID in Medicine

    NASA Astrophysics Data System (ADS)

    Lauria, Adele

    It is well known that the most effective way to defeat breast cancer is early detection, as surgery and medical therapies are more efficient when the disease is diagnosed at an early stage. The principal diagnostic technique for breast cancer detection is X-ray mammography. Screening programs have been introduced in many European countries to invite women to have periodic radiological breast examinations. In such screenings, radiologists are often required to examine large numbers of mammograms with a double reading, that is, two radiologists examine the images independently and then compare their results. In this way an increment in sensitivity (the rate of correctly identified images with a lesion) of up to 15% is obtained [1,2]. In most radiological centres, it is a rarity to find two radiologists to examine each report. In recent years different Computer Aided Detection (CAD) systems have been developed as a support to radiologists working in mammography: one may hope that the "second opinion" provided by CAD might represent a lower-cost alternative to improve the diagnosis. At present, four CAD systems have obtained FDA approval in the USA. Studies [3,4] show an increment in sensitivity when CAD systems are used. Freer and Ulissey [5] demonstrated in 2001 that the use of a commercial CAD system (ImageChecker M1000, R2 Technology) increases the number of cancers detected by up to 19.5% with little increment in recall rate. Ciatto et al. [5], in a study simulating a double reading with a commercial CAD system (SecondLook), showed a moderate increment in sensitivity while reducing specificity (the rate of correctly identified images without a lesion). Notwithstanding these optimistic results, there is an ongoing debate to define the advantages of the use of CAD as a second reader: the main limits underlined, e.g., by Nishikawa [6] are that retrospective studies are considered much too optimistic and that clinical studies must be performed to demonstrate a statistically

  14. Traz - An Interactive Ray-Tracing Computer Program Integrated With A Solid-Modeling CAD System

    NASA Astrophysics Data System (ADS)

    Dolan, Ariel

    1986-02-01

    The combination of an optical ray-tracing program with a solid-modeling CAD (computer-aided design) system creates a very flexible tool for optical system analysis and evaluation. The program uses the CAD data structure and user-friendly menus for creation, manipulation and visualization of the optical system. Furthermore, it is capable of dealing with problems which are impossible or difficult to handle with existing optical design programs, such as calculations of three-dimensional sensitivities, multiple reflections, multiple-surface apertures, specular stray radiation, image rotation and complex-prism design. It can also be used as an efficient tool for error budgeting and error analysis, and can be fully interfaced with a finite-element analysis program, thus enabling the evaluation of the effects of mechanical or thermal loads on the optical performance.

  15. Creation of Anatomically Accurate Computer-Aided Design (CAD) Solid Models from Medical Images

    NASA Technical Reports Server (NTRS)

    Stewart, John E.; Graham, R. Scott; Samareh, Jamshid A.; Oberlander, Eric J.; Broaddus, William C.

    1999-01-01

    Most surgical instrumentation and implants used in the world today are designed with sophisticated Computer-Aided Design (CAD)/Computer-Aided Manufacturing (CAM) software. This software automates the mechanical development of a product from its conceptual design through manufacturing. CAD software also provides a means of manipulating solid models prior to Finite Element Modeling (FEM). Few surgical products are designed in conjunction with accurate CAD models of human anatomy because of the difficulty with which these models are created. We have developed a novel technique that creates anatomically accurate, patient specific CAD solids from medical images in a matter of minutes.

  16. Performance evaluation of computer aided diagnostic tool (CAD) for detection of ultrasonic based liver disease.

    PubMed

    Sriraam, N; Roopa, J; Saranya, M; Dhanalakshmi, M

    2009-08-01

    Recent advances in digital imaging technology have greatly enhanced the interpretation of critical/pathology conditions from 2-dimensional medical images. This has become realistic due to the existence of the computer-aided diagnostic tool. A computer-aided diagnostic (CAD) tool generally possesses components like preprocessing, identification/selection of a region of interest, extraction of typical features, and finally an efficient classification system. This paper enumerates the development of a CAD tool for classification of chronic liver disease from 2-D images acquired from an ultrasonic device. Characterization of tissue through quantitative treatment leads to the detection of abnormality that is not feasible through qualitative visual inspection by the radiologist. Common liver diseases are indicators of changes in tissue elasticity. One can show the detection of a normal, fatty or malignant condition based on the application of the CAD tool; thereby, further investigation by the radiologist can be avoided. The proposed work involves an optimal block analysis (64 x 64) of the liver image of actual size 256 x 256, incorporating the Gabor wavelet transform, which performs the texture classification in an automated mode. Statistical features such as gray-level mean and variance values are estimated after this preprocessing mode. A non-linear back propagation neural network (BPNN) is applied for classifying normal vs. fatty and normal vs. malignant liver, which yields a classification accuracy of 96.8%. Further, multi-classification is also performed, and a classification accuracy of 94% is obtained. It can be concluded that the proposed CAD can be used as an expert system to aid the automated diagnosis of liver diseases. PMID:19697693
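
    A rough sketch of the block-based texture features described above, using a single Gabor filter from scikit-image in place of the paper's wavelet transform (the filter frequency and single orientation are simplifying assumptions):

    ```python
    import numpy as np
    from skimage.filters import gabor

    def block_gabor_features(image, block=64, frequency=0.2):
        """Filter a 256x256 ultrasound image with a Gabor kernel, then compute
        (mean, variance) of the response within each 64x64 block."""
        response, _ = gabor(image.astype(float), frequency=frequency)
        feats = []
        for r in range(0, image.shape[0], block):
            for c in range(0, image.shape[1], block):
                patch = response[r:r + block, c:c + block]
                feats.append((patch.mean(), patch.var()))
        return np.array(feats)  # one feature row per block, ready for a classifier
    ```

    Feature rows like these would then train the BPNN classifier mentioned in the abstract.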

  17. Computer Use and CAD in Assisting Schools in the Creation of Facilities.

    ERIC Educational Resources Information Center

    Beach, Robert H.; Essex, Nathan

    1987-01-01

    Computer-aided design (CAD) programs are powerful drafting tools, but are also able to assist with many other facility planning functions. Describes the hardware, software, and the learning process that led to understanding the CAD software at the University of Alabama. (MLF)

  18. A Multidisciplinary Research Team Approach to Computer-Aided Drafting (CAD) System Selection. Final Report.

    ERIC Educational Resources Information Center

    Franken, Ken; And Others

    A multidisciplinary research team was assembled to review existing computer-aided drafting (CAD) systems for the purpose of enabling staff in the Design Drafting Department at Linn Technical College (Missouri) to select the best system out of the many CAD systems in existence. During the initial stage of the evaluation project, researchers…

  19. Role of Computer Aided Diagnosis (CAD) in the detection of pulmonary nodules on 64 row multi detector computed tomography

    PubMed Central

    Prakashini, K; Babu, Satish; Rajgopal, KV; Kokila, K Raja

    2016-01-01

    Aims and Objectives: To determine the overall performance of an existing CAD algorithm with thin-section computed tomography (CT) in the detection of pulmonary nodules and to evaluate detection sensitivity over a varying range of nodule density, size, and location. Materials and Methods: A cross-sectional prospective study was conducted on 20 patients with 322 suspected nodules who underwent diagnostic chest imaging using 64-row multi-detector CT. The examinations were evaluated on reconstructed images of 1.4 mm thickness and 0.7 mm interval. Detection of pulmonary nodules, initially by a radiologist with 2 years' experience (RAD) and later by CAD lung nodule software, was assessed. Then, CAD nodule candidates were accepted or rejected accordingly. Detected nodules were classified based on their size, density, and location. The performance of the RAD and the CAD system was compared with the gold standard, that is, true nodules confirmed by consensus of the senior RAD and CAD together. The overall sensitivity and false-positive (FP) rate of the CAD software were calculated. Observations and Results: Of the 322 suspected nodules, 221 were classified as true nodules on the consensus of the senior RAD and CAD together. Of the true nodules, the RAD detected 206 (93.2%) and the CAD detected 202 (91.4%). CAD and RAD together picked up more nodules than either CAD or RAD alone. Overall sensitivity for nodule detection with the CAD program was 91.4%, and FP detection per patient was 5.5%. The CAD showed comparatively higher sensitivity for nodules of size 4–10 mm (93.4%) and nodules in hilar (100%) and central (96.5%) locations when compared to the RAD's performance. Conclusion: CAD performance was high in detecting pulmonary nodules, including small and low-density nodules. CAD, even with a relatively high FP rate, assists and improves the RAD's performance as a second reader, especially for nodules located in the central and hilar regions and for small nodules, saving the RAD's time.

  20. Longitudinal Study of Factors Impacting the Implementation of Notebook Computer Based CAD Instruction

    ERIC Educational Resources Information Center

    Goosen, Richard F.

    2009-01-01

    This study provides information for higher education leaders that have or are considering conducting Computer Aided Design (CAD) instruction using student owned notebook computers. Survey data were collected during the first 8 years of a pilot program requiring engineering technology students at a four year public university to acquire a notebook…

  1. Surgical retained foreign object (RFO) prevention by computer aided detection (CAD)

    NASA Astrophysics Data System (ADS)

    Marentis, Theodore C.; Hadjiiyski, Lubomir; Chaudhury, Amrita R.; Rondon, Lucas; Chronis, Nikolaos; Chan, Heang-Ping

    2014-03-01

    Surgical Retained Foreign Objects (RFOs) cause significant morbidity and mortality. They are associated with $1.5 billion annually in preventable medical costs. The detection accuracy of radiographs for RFOs is a mediocre 59%. We address the RFO problem with two complementary technologies: a three-dimensional (3D) Gossypiboma Micro Tag (μTag) that improves the visibility of RFOs on radiographs, and a Computer Aided Detection (CAD) system that detects the μTag. The 3D geometry of the μTag produces a similar 2D depiction on radiographs regardless of its orientation in the human body and ensures accurate detection by a radiologist and the CAD. We create a database of cadaveric radiographs with the μTag and other common man-made objects positioned randomly. We develop the CAD modules, which include preprocessing, μTag enhancement, labeling, segmentation, feature analysis, classification and detection. The CAD can operate in a high-specificity mode for the surgeon, allowing seamless workflow integration and functioning as a first reader. The CAD can also operate in a high-sensitivity mode for the radiologist to ensure accurate detection. On a data set of 346 cadaveric radiographs, the CAD system performed with high specificity (85.5% sensitivity, 0.02 FPs/image) for the OR and with high sensitivity (96% sensitivity, 0.73 FPs/image) for the radiologists.
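
    The two operating modes amount to running one detector at two decision thresholds. A generic way to pick those thresholds from validation scores might look like the following sketch; the function and target values are illustrative, not the authors' procedure:

    ```python
    import numpy as np

    def pick_threshold(scores, labels, target_sensitivity):
        """Return the largest decision threshold whose sensitivity on the
        validation set still meets the target."""
        pos = np.sort(scores[labels == 1])[::-1]          # positive scores, descending
        k = int(np.ceil(target_sensitivity * len(pos)))   # positives we must keep
        return pos[k - 1]

    # Hypothetical usage -- one trained detector, two operating modes:
    # t_surgeon     = pick_threshold(val_scores, val_labels, 0.855)  # high specificity
    # t_radiologist = pick_threshold(val_scores, val_labels, 0.960)  # high sensitivity
    ```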

  2. Computationally efficient multibody simulations

    NASA Technical Reports Server (NTRS)

    Ramakrishnan, Jayant; Kumar, Manoj

    1994-01-01

    Computationally efficient approaches to the solution of the dynamics of multibody systems are presented in this work. The computational efficiency is derived from both the algorithmic and implementational standpoint. Order(n) approaches provide a new formulation of the equations of motion eliminating the assembly and numerical inversion of a system mass matrix as required by conventional algorithms. Computational efficiency is also gained in the implementation phase by the symbolic processing and parallel implementation of these equations. Comparison of this algorithm with existing multibody simulation programs illustrates the increased computational efficiency.

  3. Adjoint Sensitivity Computations for an Embedded-Boundary Cartesian Mesh Method and CAD Geometry

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.

    2006-01-01

    Cartesian-mesh methods are perhaps the most promising approach for addressing the issues of flow solution automation for aerodynamic design problems. In these methods, the discretization of the wetted surface is decoupled from that of the volume mesh. This not only enables fast and robust mesh generation for geometry of arbitrary complexity, but also facilitates access to geometry modeling and manipulation using parametric Computer-Aided Design (CAD) tools. Our goal is to combine the automation capabilities of Cartesian methods with an efficient computation of design sensitivities. We address this issue using the adjoint method, where the computational cost of the design sensitivities, or objective function gradients, is essentially independent of the number of design variables. In previous work, we presented an accurate and efficient algorithm for the solution of the adjoint Euler equations discretized on Cartesian meshes with embedded, cut-cell boundaries. Novel aspects of the algorithm included the computation of surface shape sensitivities for triangulations based on parametric-CAD models and the linearization of the coupling between the surface triangulation and the cut-cells. The objective of the present work is to extend our adjoint formulation to problems involving general shape changes. Central to this development is the computation of volume-mesh sensitivities to obtain a reliable approximation of the objective function gradient. Motivated by the success of mesh-perturbation schemes commonly used in body-fitted unstructured formulations, we propose an approach based on a local linearization of a mesh-perturbation scheme similar to the spring analogy. This approach circumvents most of the difficulties that arise due to non-smooth changes in the cut-cell layer as the boundary shape evolves and provides a consistent approximation to the exact gradient of the discretized objective function. A detailed gradient accuracy study is presented to verify our approach.
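
    A toy version of such a spring-analogy mesh perturbation, with uniform spring stiffness and simple Jacobi relaxation (both simplifying assumptions relative to the paper's linearized scheme):

    ```python
    import numpy as np

    def spring_perturb(nodes, edges, boundary_disp, n_iter=200):
        """Propagate prescribed boundary displacements into the volume mesh by
        relaxing a uniform-stiffness spring network: each free node moves to
        the average displacement of its neighbors."""
        disp = np.zeros_like(nodes, dtype=float)
        fixed = np.zeros(len(nodes), dtype=bool)
        for i, d in boundary_disp.items():        # {node index: displacement vector}
            disp[i], fixed[i] = d, True
        nbrs = [[] for _ in nodes]
        for a, b in edges:
            nbrs[a].append(b)
            nbrs[b].append(a)
        for _ in range(n_iter):                   # Jacobi iterations
            new = np.array([disp[nb].mean(axis=0) if nb else disp[i]
                            for i, nb in enumerate(nbrs)])
            new[fixed] = disp[fixed]              # boundary nodes stay prescribed
            disp = new
        return nodes + disp
    ```

    Linearizing a smoother of this kind with respect to the boundary displacements is what yields the volume-mesh sensitivities the abstract refers to.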

  4. Web-based computer-aided-diagnosis (CAD) system for bone age assessment (BAA) of children

    NASA Astrophysics Data System (ADS)

    Zhang, Aifeng; Uyeda, Joshua; Tsao, Sinchai; Ma, Kevin; Vachon, Linda A.; Liu, Brent J.; Huang, H. K.

    2008-03-01

    Bone age assessment (BAA) of children is a clinical procedure frequently performed in pediatric radiology to evaluate the stage of skeletal maturation based on a left hand and wrist radiograph. The most commonly used standard, the Greulich and Pyle (G&P) Hand Atlas, was developed 50 years ago and is based exclusively on a Caucasian population. Moreover, inter- and intra-observer discrepancies in using this method create the need for an objective and automatic BAA method. A digital hand atlas (DHA) has been collected, with 1,400 hand images of normal children of Asian, African American, Caucasian and Hispanic descent. Based on the DHA, a fully automatic, objective computer-aided-diagnosis (CAD) method was developed, and it was adapted to specific populations. To bring the DHA and CAD method to the clinical environment as a useful tool in assisting radiologists to achieve higher accuracy in BAA, a web-based system with a direct connection to a clinical site is designed as a novel clinical implementation approach for online and real-time BAA. The core of the system, a CAD server, receives the image from the clinical site, processes it with the CAD method and, finally, generates a report. A web service publishes the results, and radiologists at the clinical site can review them online within minutes. This prototype can be easily extended to multiple clinical sites and will provide the foundation for broader use of the CAD system for BAA.

  5. An Analysis of Computer Aided Design (CAD) Packages Used at MSFC for the Recent Initiative to Integrate Engineering Activities

    NASA Technical Reports Server (NTRS)

    Smith, Leigh M.; Parker, Nelson C. (Technical Monitor)

    2002-01-01

    This paper analyzes the use of Computer Aided Design (CAD) packages at NASA's Marshall Space Flight Center (MSFC). It examines the effectiveness of recent efforts to standardize CAD practices across MSFC engineering activities. An assessment of the roles played by management, designers, analysts, and manufacturers in this initiative will be explored. Finally, solutions are presented for better integration of CAD across MSFC in the future.

  6. 3D object optonumerical acquisition methods for CAD/CAM and computer graphics systems

    NASA Astrophysics Data System (ADS)

    Sitnik, Robert; Kujawinska, Malgorzata; Pawlowski, Michal E.; Woznicki, Jerzy M.

    1999-08-01

    The creation of a virtual object for CAD/CAM and computer graphics on the basis of data gathered by full-field optical measurement of a 3D object is presented. The experimental coordinates are alternatively obtained by a combined fringe projection/photogrammetry-based system or a fringe projection/virtual markers setup. A new and fully automatic procedure that processes the cloud of measured points into a triangular mesh accepted by CAD/CAM and computer graphics systems is presented. Its applicability to various classes of objects is tested, including error analysis of the generated virtual objects. The usefulness of the method is proved by applying the virtual object in a rapid prototyping system and in a computer graphics environment.

  7. Computer-aided detection (CAD) of breast masses in mammography: combined detection and ensemble classification

    NASA Astrophysics Data System (ADS)

    Choi, Jae Young; Kim, Dae Hoe; Plataniotis, Konstantinos N.; Ro, Yong Man

    2014-07-01

    We propose a novel computer-aided detection (CAD) framework for breast masses in mammography. To increase detection sensitivity for various types of mammographic masses, we propose the combined use of different detection algorithms. In particular, we develop a region-of-interest combination mechanism that integrates detection information gained from unsupervised and supervised detection algorithms. Also, to significantly reduce the number of false-positive (FP) detections, a new ensemble classification algorithm is developed. Extensive experiments have been conducted on a benchmark mammogram database. Results show that our combined detection approach can considerably improve the detection sensitivity with a small loss in FP rate, compared to representative detection algorithms previously developed for mammographic CAD systems. The proposed ensemble classification solution also has a dramatic impact on the reduction of FP detections: as much as 70% (from 15 to 4.5 per image) at a cost of only a 4.6% loss in sensitivity (from 90.0% to 85.4%). Moreover, our proposed CAD method performs as well as or better than mammography CAD algorithms previously reported in the literature (70.7% and 80.0% sensitivity at 1.5 and 3.5 FPs per image, respectively).
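
    One plausible reading of the ROI combination step is sketched below: take the union of the two detectors' regions, adding a supervised detection only when no unsupervised ROI already covers the same area. The IoU criterion and threshold are assumptions of this sketch, not the paper's actual rule:

    ```python
    def iou(a, b):
        """Intersection-over-union of two boxes (x0, y0, x1, y1)."""
        ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
        ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        return inter / (area(a) + area(b) - inter)

    def combine_rois(unsupervised, supervised, overlap=0.3):
        """Union the two detectors' ROI lists, merging overlapping detections."""
        merged = list(unsupervised)
        for s in supervised:
            if all(iou(s, u) < overlap for u in unsupervised):
                merged.append(s)
        return merged
    ```

    The ensemble classifier would then score each combined ROI to prune false positives.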

  8. Computer-aided detection (CAD) of breast masses in mammography: combined detection and ensemble classification.

    PubMed

    Choi, Jae Young; Kim, Dae Hoe; Plataniotis, Konstantinos N; Ro, Yong Man

    2014-07-21

    We propose a novel computer-aided detection (CAD) framework for breast masses in mammography. To increase detection sensitivity for various types of mammographic masses, we propose the combined use of different detection algorithms. In particular, we develop a region-of-interest combination mechanism that integrates detection information gained from unsupervised and supervised detection algorithms. Also, to significantly reduce the number of false-positive (FP) detections, a new ensemble classification algorithm is developed. Extensive experiments have been conducted on a benchmark mammogram database. Results show that our combined detection approach can considerably improve the detection sensitivity with a small loss in FP rate, compared to representative detection algorithms previously developed for mammographic CAD systems. The proposed ensemble classification solution also has a dramatic impact on the reduction of FP detections: as much as 70% (from 15 to 4.5 per image) at a cost of only a 4.6% loss in sensitivity (from 90.0% to 85.4%). Moreover, our proposed CAD method performs as well as or better than mammography CAD algorithms previously reported in the literature (70.7% and 80.0% sensitivity at 1.5 and 3.5 FPs per image, respectively). PMID:24923292

  9. Project Integration Architecture (PIA) and Computational Analysis Programming Interface (CAPRI) for Accessing Geometry Data from CAD Files

    NASA Technical Reports Server (NTRS)

    Benyo, Theresa L.

    2002-01-01

    Integration of a supersonic inlet simulation with a computer-aided design (CAD) system is demonstrated. The integration is performed using the Project Integration Architecture (PIA). PIA provides a common environment for wrapping many types of applications. Accessing geometry data from CAD files is accomplished by incorporating appropriate function calls from the Computational Analysis Programming Interface (CAPRI). CAPRI is a CAD-vendor-neutral programming interface that aids in acquiring geometry data directly from CAD files. The benefits of wrapping a supersonic inlet simulation into PIA using CAPRI are: direct access to geometry data, accurate capture of geometry data, automatic conversion of data units, CAD-vendor-neutral operation, and on-line interactive history capture. This paper describes PIA and the CAPRI wrapper and details the supersonic inlet simulation demonstration.

  10. CAD-based graphical computer simulation in endoscopic surgery.

    PubMed

    Kuehnapfel, U G; Neisius, B

    1993-06-01

    This article presents new techniques for three-dimensional, kinematic, real-time simulation of dextrous endoscopic instruments. The integrated simulation package KISMET is used for engineering design verification and evaluation. Geometric and kinematic computer models of the mechanisms and the laparoscopic workspace were created. Using the advanced capabilities of high-performance graphical workstations combined with state-of-the-art simulation software, it is possible to generate displays of the surgical instruments acting realistically on the organs of the digestive system. The organ geometry is modelled in a high degree of detail. Apart from discussing the use of KISMET for the development of MFM-II (Modular Flexible MIS Instrument, Release II), the paper indicates further applications of real-time 3D graphical simulation methods in endoscopic surgery. PMID:8055320

  11. Task analysis for computer-aided design (CAD) at a keystroke level.

    PubMed

    Chi, C F; Chung, K L

    1996-08-01

    The purpose of this research was to develop a new model to describe and predict a computerized task. AutoCAD was utilized as the experimental tool to collect operating procedure and time data at a keystroke level for a computer aided design (CAD) task. Six undergraduate students participated in the experiment. They were required to complete one simple and one complex engineering drawing. A model which characterized the task performance by software commands and predicted task execution time using keystroke-level model operators was proposed and applied to the analysis of the dialogue data. This task parameter model adopted software commands, e.g. LINE, OFFSET in AutoCAD, to describe the function of a task unit and used up to five parameters to indicate the number of keystrokes, chosen function for a command and ways of starting and ending a command. Each task unit in the task parameter model can be replaced by a number of primitive operators as in the keystroke level model to predict the task execution time. The observed task execution times of all task units were found to be highly correlated with the task execution times predicted by the keystroke level model. Therefore, the task parameter model was proved to be a usable analytical tool for evaluating the human-computer interface (HCI). PMID:15677066
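
    The keystroke-level prediction itself is a simple sum of operator times. A worked example using the classic published operator values (the study's own calibrated values may differ):

    ```python
    # Classic keystroke-level-model operator times (Card, Moran & Newell);
    # the operator set and any values recalibrated in this study may differ.
    KLM_S = {"K": 0.28,   # press a key (average typist)
             "P": 1.10,   # point with the mouse
             "H": 0.40,   # move hands between keyboard and mouse
             "M": 1.35}   # mental preparation

    def klm_time(ops):
        """Predicted execution time for an operator string such as 'MHPK'."""
        return sum(KLM_S[op] for op in ops)

    # Hypothetical task unit for an AutoCAD LINE command: think, point, then
    # type a 4-character coordinate plus Enter.
    print(klm_time("MP") + 5 * KLM_S["K"])   # -> 3.85 seconds
    ```

    The task parameter model described above essentially maps each software command to such an operator string via its five parameters.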

  12. Computer-Aided Diagnostic (CAD) Scheme by Use of Contralateral Subtraction Technique

    NASA Astrophysics Data System (ADS)

    Nagashima, Hiroyuki; Harakawa, Tetsumi

    We developed a computer-aided diagnostic (CAD) scheme for the detection of subtle image findings of acute cerebral infarction in brain computed tomography (CT) using a contralateral subtraction technique. In our computerized scheme, the lateral inclination of the image was first corrected automatically by rotating and shifting. The contralateral subtraction image was then derived by subtracting the reversed image from the original image. Initial candidates for acute cerebral infarctions were identified using multiple-thresholding and image filtering techniques. As the first step in removing false-positive candidates, fourteen image features were extracted for each of the initial candidates. Halfway candidates were detected by applying a rule-based test to these image features. In the second step, five image features were extracted using the overlap between halfway candidates in the slice of interest and the upper/lower slice images. Finally, acute cerebral infarction candidates were detected by applying a rule-based test to these five image features. The sensitivity of detection for 74 training cases was 97.4%, with 3.7 false positives per image. The performance of the CAD scheme for 44 testing cases was similar to that for the training cases. Our CAD scheme using the contralateral subtraction technique can reveal suspected image findings of acute cerebral infarction in CT images.
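
    The core contralateral subtraction step can be sketched in a few lines; the tilt-correction interface and the thresholding hint below are simplified assumptions:

    ```python
    from scipy import ndimage

    def contralateral_subtraction(ct_slice, tilt_deg=0.0):
        """Correct the lateral tilt, mirror the slice about the vertical
        midline, and subtract, so symmetric anatomy cancels and asymmetric
        (potentially infarcted) regions stand out."""
        aligned = ndimage.rotate(ct_slice, -tilt_deg, reshape=False, order=1)
        mirrored = aligned[:, ::-1]        # left-right reversal
        return aligned - mirrored

    # Initial candidates could then come from thresholding the difference
    # image, e.g. regions where attenuation drops by more than a chosen margin.
    ```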

  13. Computationally efficient Bayesian tracking

    NASA Astrophysics Data System (ADS)

    Aughenbaugh, Jason; La Cour, Brian

    2012-06-01

    In this paper, we describe the progress we have achieved in developing a computationally efficient, grid-based Bayesian fusion tracking system. In our approach, the probability surface is represented by a collection of multidimensional polynomials, each computed adaptively on a grid of cells representing state space. Time evolution is performed using a hybrid particle/grid approach and knowledge of the grid structure, while sensor updates use a measurement-based sampling method with a Delaunay triangulation. We present an application of this system to the problem of tracking a submarine target using a field of active and passive sonar buoys.
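
    A dense-grid caricature of one predict/update cycle of such a tracker; the paper's adaptive cells and per-cell polynomial representation are far more elaborate than this sketch:

    ```python
    from scipy.ndimage import gaussian_filter

    def grid_bayes_step(prior, likelihood, motion_sigma=1.0):
        """One predict/update cycle of a grid-based Bayesian tracker: diffuse
        the prior with a Gaussian motion model, multiply by the sensor
        likelihood evaluated on the same grid, and renormalize."""
        predicted = gaussian_filter(prior, sigma=motion_sigma)   # time evolution
        posterior = predicted * likelihood                       # measurement update
        return posterior / posterior.sum()
    ```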

  14. Teaching for CAD Expertise

    ERIC Educational Resources Information Center

    Chester, Ivan

    2007-01-01

    CAD (Computer Aided Design) has now become an integral part of Technology Education. The recent introduction of highly sophisticated, low-cost CAD software and CAM hardware capable of running on desktop computers has accelerated this trend. There is now quite widespread introduction of solid modeling CAD software into secondary schools but how…

  15. Computationally efficient control allocation

    NASA Technical Reports Server (NTRS)

    Durham, Wayne (Inventor)

    2001-01-01

    A computationally efficient method for calculating near-optimal solutions to the three-objective, linear control allocation problem is disclosed. The control allocation problem is that of distributing the effort of redundant control effectors to achieve some desired set of objectives. The problem is deemed linear if control effectiveness is affine with respect to the individual control effectors. The optimal solution is that which exploits the collective maximum capability of the effectors within their individual physical limits. Computational efficiency is measured by the number of floating-point operations required for solution. The method presented returned optimal solutions in more than 90% of the cases examined; non-optimal solutions returned by the method were typically much less than 1% different from optimal, and the errors tended to become smaller than 0.01% as the number of controls was increased. The magnitude of the errors returned by the present method was much smaller than those that resulted from either pseudoinverse or cascaded generalized inverse solutions. The computational complexity of the method presented varied linearly with increasing numbers of controls; the number of required floating-point operations increased from 5.5 to seven times faster than did the minimum-norm solution (the pseudoinverse), and at about the same rate as did the cascaded generalized inverse solution. The computational requirements of the method presented were much better than those of previously described facet-searching methods, which increase in proportion to the square of the number of controls.
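
    For context, the pseudoinverse baseline that the patent compares against is easy to state; this sketch shows the minimum-norm allocation with naive clipping (the clipping is exactly what forfeits attainable moments):

    ```python
    import numpy as np

    def pseudoinverse_allocate(B, d, u_min, u_max):
        """Minimum-norm (pseudoinverse) solution of B @ u = d, clipped to the
        effector limits. Clipping can give up attainable moments, which is the
        shortfall the patented method is designed to avoid."""
        u = np.linalg.pinv(B) @ d
        return np.clip(u, u_min, u_max)

    # Hypothetical example: 3 desired moments, 5 redundant control effectors.
    B = np.random.default_rng(0).standard_normal((3, 5))
    u = pseudoinverse_allocate(B, np.array([0.10, -0.20, 0.05]), -1.0, 1.0)
    ```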

  16. Selection and implementation of a computer aided design and drafting (CAD/D) system

    SciTech Connect

    Davis, J.P.

    1981-01-01

    Faced with very heavy workloads and limited engineering and graphics personnel, Transco opted for a computer-aided design and drafting system that can produce intelligent drawings, which have associated data bases that can be integrated with other graphical and nongraphical data bases to form comprehensive sets of data for construction projects. Because so much time was being spent in all phases of materials and inventory control, Transco decided to integrate materials-management capabilities into the CAD/D system. When a specific item of material is requested on the graphics equipment, the request triggers production of both the drawing and a materials list. Transco plans to extend its computer applications into mapping tasks as well.

  17. Computer-Aided Design/Manufacturing (CAD/M) for high-speed interconnect

    NASA Astrophysics Data System (ADS)

    Santoski, N. F.

    1981-10-01

    The objective of the Computer-Aided Design/Manufacturing (CAD/M) for High-Speed Interconnect Program study was to assess techniques for design, analysis and fabrication of interconnect structures between high-speed logic ICs that are clocked in the 200 MHz to 5 GHz range. Interconnect structure models were investigated and integrated with existing device models. Design rules for interconnects were developed in terms of parameters that can be installed in software that is used for the design, analysis and fabrication of circuits. To implement these design rules in future software development, algorithms and software development techniques were defined. Major emphasis was on Printed Wiring Board and hybrid level circuits as opposed to monolithic chips. Various packaging schemes were considered, including controlled impedance lines in the 50 to 200 ohms range where needed. The design rules developed are generic in nature, in that various architecture classes and device technologies were considered.

  18. Improvement of MS (multiple sclerosis) CAD (computer aided diagnosis) performance using C/C++ and computing engine in the graphical processing unit (GPU)

    NASA Astrophysics Data System (ADS)

    Suh, Joohyung; Ma, Kevin; Le, Anh

    2011-03-01

    Multiple Sclerosis (MS) is a disease caused by damage to the myelin around axons of the brain and spinal cord. Currently, MR imaging is used for diagnosis, but it is highly variable and time-consuming, since lesion detection and estimation of lesion volume are performed manually. For this reason, we developed a CAD (Computer-Aided Diagnosis) system to assist segmentation of MS and facilitate the physician's diagnosis. The MS CAD system utilizes the k-NN (k-nearest neighbor) algorithm to detect and segment the lesion volume voxel by voxel. The prototype MS CAD system was developed under the MATLAB environment. Currently, the MS CAD system consumes a huge amount of time to process data. In this paper we present the development of a second version of the MS CAD system, which has been converted into C/C++ in order to take advantage of the GPU (Graphical Processing Unit) for parallel computation. With the realization in C/C++ and utilization of the GPU, we expect to cut running time drastically. The paper investigates the conversion from MATLAB to C/C++ and the utilization of a high-end GPU for parallel computing on data to improve the algorithm performance of the MS CAD.
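
    A minimal sketch of the voxel-wise k-NN segmentation step (the feature layout and k are assumptions; the MATLAB, C/C++, and GPU versions differ in machinery, not in this logic):

    ```python
    from sklearn.neighbors import KNeighborsClassifier

    def knn_lesion_mask(train_features, train_labels, volume_features, k=5):
        """Voxel-wise k-NN segmentation: train on labeled voxel feature
        vectors (e.g. intensities from several MR sequences), then classify
        every voxel of a new volume. The per-voxel independence of this step
        is what makes it amenable to GPU parallelization."""
        clf = KNeighborsClassifier(n_neighbors=k).fit(train_features, train_labels)
        flat = volume_features.reshape(-1, volume_features.shape[-1])
        return clf.predict(flat).reshape(volume_features.shape[:-1])
    ```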

  19. The computation of all plane/surface intersections for CAD/CAM applications

    NASA Technical Reports Server (NTRS)

    Hoitsma, D. H., Jr.; Roche, M.

    1984-01-01

    The problem of the computation and display of all intersections of a given plane with a rational bicubic surface patch for use on an interactive CAD/CAM system is examined. The general problem of calculating all intersections of a plane and a surface consisting of rational bicubic patches is reduced to the case of a single generic patch by applying a rejection algorithm which excludes all patches that do not intersect the plane. For each pertinent patch, the algorithm presented computes the intersection curves by locating an initial point on each curve and computing successive points on the curve using a tolerance step equation. A single cubic equation solver is used to compute the initial curve points lying on the boundary of a surface patch, and the method of resultants, as applied to curve theory, is used to determine critical points which, in turn, are used to locate initial points that lie on intersection curves in the interior of the patch. Examples are given to illustrate the ability of this algorithm to produce all intersection curves.
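
    The boundary-point step reduces to solving one cubic per boundary curve. A sketch for a cubic Bezier boundary curve against the plane n·x = d (representing the rational patch boundary as a polynomial Bezier curve is a simplification of this sketch):

    ```python
    import numpy as np

    def plane_boundary_hits(ctrl, normal, d, eps=1e-9):
        """Parameter values where a cubic Bezier boundary curve crosses the
        plane n.x = d. ctrl is a (4, 3) array of control points."""
        p0, p1, p2, p3 = ctrl
        # Bernstein form rewritten in the power basis: P(t) = c3 t^3 + ... + c0.
        c3 = -p0 + 3 * p1 - 3 * p2 + p3
        c2 = 3 * p0 - 6 * p1 + 3 * p2
        c1 = -3 * p0 + 3 * p1
        c0 = p0
        coeffs = [normal @ c3, normal @ c2, normal @ c1, normal @ c0 - d]
        roots = np.roots(coeffs)
        # Keep real roots inside the parameter interval [0, 1].
        return sorted(t.real for t in roots
                      if abs(t.imag) < eps and -eps <= t.real <= 1 + eps)
    ```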

  20. Revision of Electro-Mechanical Drafting Program to Include CAD/D (Computer-Aided Drafting/Design). Final Report.

    ERIC Educational Resources Information Center

    Snyder, Nancy V.

    North Seattle Community College decided to integrate computer-aided design/drafting (CAD/D) into its Electro-Mechanical Drafting Program. This choice necessitated a redefinition of the program through new curriculum and course development. To initiate the project, a new industrial advisory council was formed. Major electronic and recruiting firms…

  1. A computer-aided detection (CAD) system with a 3D algorithm for small acute intracranial hemorrhage

    NASA Astrophysics Data System (ADS)

    Wang, Ximing; Fernandez, James; Deshpande, Ruchi; Lee, Joon K.; Chan, Tao; Liu, Brent

    2012-02-01

    Acute intracranial hemorrhage (AIH) requires urgent diagnosis in the emergency setting to mitigate eventual sequelae. However, experienced radiologists may not always be available to make a timely diagnosis. This is especially true for small AIH, defined as lesions smaller than 10 mm in size. A computer-aided detection (CAD) system for the detection of small AIH would facilitate timely diagnosis. A previously developed 2D algorithm shows high false-positive rates in the evaluation based on LAC/USC cases, due to the limitation of setting up a correct coordinate system for the knowledge-based classification system. To achieve higher sensitivity and specificity, a new 3D algorithm was developed. The algorithm utilizes a top-hat transformation and a dynamic threshold map to detect small AIH lesions. Several key structures of the brain are detected and used to set up a 3D anatomical coordinate system. A rule-based classification of the detected lesions is applied based on the anatomical coordinate system. For convenient evaluation in the clinical environment, the CAD module is integrated with a stand-alone system. The CAD is evaluated on small AIH cases and matched normal cases collected at LAC/USC. The results of the 3D CAD and the previous 2D CAD are compared.
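
    A sketch of the detection front end named above: a white top-hat transform followed by a spatially varying threshold. The local-mean threshold map below is an illustrative stand-in, since the abstract does not spell out the paper's map construction:

    ```python
    from scipy import ndimage

    def aih_candidates(volume, tophat_size=5, local_size=15, margin=20.0):
        """Enhance small bright lesions with a white top-hat transform, then
        keep voxels whose response exceeds a spatially varying threshold
        (local mean of the response plus a margin)."""
        tophat = ndimage.white_tophat(volume, size=(tophat_size,) * 3)
        threshold_map = ndimage.uniform_filter(tophat, size=local_size) + margin
        return tophat > threshold_map
    ```

    Surviving candidates would then be classified with the rule-based tests in the anatomical coordinate system.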

  2. Development of problem-oriented software packages for numerical studies and computer-aided design (CAD) of gyrotrons

    NASA Astrophysics Data System (ADS)

    Damyanova, M.; Sabchevski, S.; Zhelyazkov, I.; Vasileva, E.; Balabanova, E.; Dankov, P.; Malinov, P.

    2016-03-01

    Gyrotrons are the most powerful sources of coherent CW (continuous wave) radiation in the frequency range situated between the long-wavelength edge of infrared light (the far-infrared region) and the microwaves, i.e., in the region of the electromagnetic spectrum usually called the THz gap (or T gap), since the output power of other devices (e.g., solid-state oscillators) operating in this interval is several orders of magnitude lower. In recent years, the unique capabilities of sub-THz and THz gyrotrons have opened the road to many novel and prospective applications in various physical studies and advanced high-power terahertz technologies. In this paper, we present the current status and functionality of the problem-oriented software packages (most notably GYROSIM and GYREOSS) used for numerical studies, computer-aided design (CAD) and optimization of gyrotrons for diverse applications. They consist of a hierarchy of codes specialized for modelling and simulation of different subsystems of the gyrotrons (EOS, resonant cavity, etc.) and are based on adequate physical models, efficient numerical methods and algorithms.

  3. CAD/CAM/CNC.

    ERIC Educational Resources Information Center

    Domermuth, Dave; And Others

    1996-01-01

    Includes "Quick Start CNC (computer numerical control) with a Vacuum Filter and Laminated Plastic" (Domermuth); "School and Industry Cooperate for Mutual Benefit" (Buckler); and "CAD (computer-assisted drafting) Careers--What Professionals Have to Say" (Skinner). (JOW)

  4. Efficient universal blind quantum computation.

    PubMed

    Giovannetti, Vittorio; Maccone, Lorenzo; Morimae, Tomoyuki; Rudolph, Terry G

    2013-12-01

    We give a cheat sensitive protocol for blind universal quantum computation that is efficient in terms of computational and communication resources: it allows one party to perform an arbitrary computation on a second party's quantum computer without revealing either which computation is performed, or its input and output. The first party's computational capabilities can be extremely limited: she must only be able to create and measure single-qubit superposition states. The second party is not required to use measurement-based quantum computation. The protocol requires the (optimal) exchange of O(J log2(N)) single-qubit states, where J is the computational depth and N is the number of qubits needed for the computation. PMID:24476238

  5. Efficient Universal Blind Quantum Computation

    NASA Astrophysics Data System (ADS)

    Giovannetti, Vittorio; Maccone, Lorenzo; Morimae, Tomoyuki; Rudolph, Terry G.

    2013-12-01

    We give a cheat sensitive protocol for blind universal quantum computation that is efficient in terms of computational and communication resources: it allows one party to perform an arbitrary computation on a second party's quantum computer without revealing either which computation is performed, or its input and output. The first party's computational capabilities can be extremely limited: she must only be able to create and measure single-qubit superposition states. The second party is not required to use measurement-based quantum computation. The protocol requires the (optimal) exchange of O(J log2(N)) single-qubit states, where J is the computational depth and N is the number of qubits needed for the computation.

  6. Shape optimization and CAD

    NASA Technical Reports Server (NTRS)

    Rasmussen, John

    1990-01-01

    Structural optimization has attracted attention since the days of Galileo. Olhoff and Taylor have produced an excellent overview of the classical research within this field. However, interest in structural optimization has increased greatly during the last decade due to the advent of reliable general numerical analysis methods and the computer power necessary to use them efficiently. This has created the possibility of developing general numerical systems for shape optimization. Several authors, e.g., Esping; Braibant & Fleury; Bennet & Botkin; Botkin, Yang, and Bennet; and Stanton, have published practical and successful applications of general optimization systems. Ding and Homlein have produced extensive overviews of available systems. Furthermore, a number of commercial optimization systems based on well-established finite element codes have been introduced. Systems like ANSYS, IDEAS, OASIS, and NISAOPT are widely known examples. In parallel with this development, the technology of computer-aided design (CAD) has gained a large influence on the design process of mechanical engineering. CAD technology has already lived through a rapid development driven by the drastically growing capabilities of digital computers. However, the systems of today are still considered to be only the first generation of a long line of computer-integrated manufacturing (CIM) systems. The systems to come will offer an integrated environment for design, analysis, and fabrication of products of almost any character. Thus, the CAD system could be regarded as simply a database for geometrical information, equipped with a number of tools to help the user in the design process. Among these tools are facilities for structural analysis and optimization, as well as present standard CAD features like drawing, modeling, and visualization tools. The state of the art of structural optimization is that a large amount of mathematical and mechanical techniques are

  7. CAD/CAE Integration Enhanced by New CAD Services Standard

    NASA Technical Reports Server (NTRS)

    Claus, Russell W.

    2002-01-01

    A Government-industry team led by the NASA Glenn Research Center has developed a computer interface standard for accessing data from computer-aided design (CAD) systems. The Object Management Group, an international computer standards organization, has adopted this CAD services standard. The new standard allows software (e.g., computer-aided engineering (CAE) and computer-aided manufacturing (CAM) software) to access multiple CAD systems through one programming interface. The interface is built on top of a distributed computing system called the Common Object Request Broker Architecture (CORBA). CORBA allows the CAD services software to operate in a distributed, heterogeneous computing environment.

  8. Development of a full body CAD dataset for computational modeling: a multi-modality approach.

    PubMed

    Gayzik, F S; Moreno, D P; Geer, C P; Wuertzer, S D; Martin, R S; Stitzel, J D

    2011-10-01

    The objective of this study was to develop full body CAD geometry of a seated 50th percentile male. Model development was based on medical image data acquired for this study, in conjunction with extensive data from the open literature. An individual (height 174.9 cm, weight 78.6 ± 0.77 kg, age 26 years) was enrolled in the study for a period of 4 months. 72 scans across three imaging modalities (CT, MRI, and upright MRI) were collected. The whole-body dataset contains 15,622 images. Over 300 individual components representing human anatomy were generated through segmentation. While the enrolled individual served as a template, segmented data were verified against, or augmented with, data from over 75 literature sources on the average morphology of the human body. Non-Uniform Rational B-Spline (NURBS) surfaces with tangential (G1) continuity were constructed over all the segmented data. The sagittally symmetric model consists of 418 individual components representing bones, muscles, organs, blood vessels, ligaments, tendons, cartilaginous structures, and skin. Length, surface area, and volumes of components germane to crash injury prediction are presented. The total volume (75.7 × 10³ cm³) and surface area (1.86 × 10² cm²) of the model closely agree with the literature data. The geometry is intended for subsequent use in nonlinear dynamics solvers, and serves as the foundation of a global effort to develop the next-generation computational human body model for injury prediction and prevention. PMID:21785882

  9. Computer Graphic Design Using Auto-CAD and Plug Nozzle Research

    NASA Technical Reports Server (NTRS)

    Rogers, Rayna C.

    2004-01-01

    The purpose of creating computer-generated images varies widely. They can be used for computational fluid dynamics (CFD), or as a blueprint for designing parts. The schematic that I will be working on this summer will be used to create nozzles that are part of a larger system. At this phase in the project, the nozzles needed for the systems have been fabricated. One part of my mission is to create both three-dimensional and two-dimensional models of the nozzles in Auto-CAD 2002. The research on plug nozzles will allow me to have a better understanding of how they provide the thrust needed for a missile to take off. NASA and the United States military are working together to develop a new design concept. On most missiles, a convergent-divergent nozzle is used to create thrust. However, the two are looking into different concepts for the nozzle. The standard convergent-divergent nozzle forces a mixture of combustible fluids and air through an area smaller than the one in which the combination was mixed. Once it passes through this smaller area, known as A8, the flow exits the end of the nozzle through a larger area, A9. This creates enough thrust for the mechanism, whether it is an F-18 fighter jet or a missile. The A9 section of the convergent-divergent nozzle has a mechanism that controls how large A9 can be. This is needed because the pressure of the air coming out of the nozzle must be equal to the ambient pressure; otherwise there will be a loss of performance in the machine. The plug nozzle, however, does not need a variable A9. When the airflow comes out, it automatically adjusts to the ambient pressure. The objective of this design is to create a plug nozzle that is not as mechanically complicated as its counterpart, the convergent-divergent nozzle.

  10. The VE/CAD synergism

    SciTech Connect

    Sperling, R.B.

    1993-03-19

    Value Engineering (VE) and Computer-Aided Design (CAD) can be used synergistically to reduce costs and improve facilities designs. The cost and schedule impacts of implementing alternative design ideas developed by VE teams can be greatly reduced when the drawings have been produced with interactive CAD systems. To better understand the interrelationship between VE and CAD, the fundamentals of the VE process are explained; an example of a VE proposal is described; and the way CAD drawings facilitated its implementation is illustrated.

  11. A computational investigation on radiation damage and activation of structural material for C-ADS

    NASA Astrophysics Data System (ADS)

    Liang, Tairan; Shen, Fei; Yin, Wen; Yu, Quanzhi; Liang, Tianjiao

    2015-11-01

    The C-ADS (China Accelerator-Driven Subcritical System) project, which aims at transmuting high-level radiotoxic waste (HLW) and generating power, is now in the research and development stage. In this paper, a simplified ADS model is set up based on the IAEA Th-ADS benchmark calculation model, and the radiation damage as well as the residual radioactivity of the structural material is estimated using the Monte Carlo simulation method. The peak displacement production rate, gas production, activity, and residual dose rate of structural components such as the beam window and the outer casing of the subcritical reactor core are calculated. The calculation methods and the corresponding results provide a basic reference for making reasonable predictions of the lifetime and maintenance operations of the structural material of C-ADS.

  12. Modification to the Monte Carlo N-Particle (MCNP) Visual Editor (MCNPVised) to Read in Computer Aided Design (CAD) Files

    SciTech Connect

    Randolph Schwarz; Leland L. Carter; Alysia Schwarz

    2005-08-23

    Monte Carlo N-Particle Transport Code (MCNP) is the code of choice for doing complex neutron/photon/electron transport calculations for the nuclear industry and research institutions. The Visual Editor for Monte Carlo N-Particle is internationally recognized as the best code for visually creating and graphically displaying input files for MCNP. The work performed under this grant enhanced the capabilities of the MCNP Visual Editor to allow it to read in both 2D and 3D Computer Aided Design (CAD) files, allowing the user to electronically generate a valid MCNP input geometry.

  13. Implementation and display of Computer Aided Design (CAD) models in Monte Carlo radiation transport and shielding applications

    SciTech Connect

    Burns, T.J.

    1994-03-01

    An Xwindow application capable of importing geometric information directly from two Computer Aided Design (CAD) based formats for use in radiation transport and shielding analyses is being developed at ORNL. The application permits the user to graphically view the geometric models imported from the two formats for verification and debugging. Previous models, specifically formatted for the radiation transport and shielding codes, can also be imported. Required extensions to the existing combinatorial geometry analysis routines are discussed. Examples illustrating the various options and features which will be implemented in the application are presented. The use of the application as a visualization tool for the output of the radiation transport codes is also discussed.

  14. Evaluation of Five Microcomputer CAD Packages.

    ERIC Educational Resources Information Center

    Leach, James A.

    1987-01-01

    Discusses the similarities, differences, advanced features, applications and number of users of five microcomputer computer-aided design (CAD) packages. Included are: "AutoCAD (V.2.17)"; "CADKEY (V.2.0)"; "CADVANCE (V.1.0)"; "Super MicroCAD"; and "VersaCAD Advanced (V.4.00)." Describes the evaluation of the packages and makes recommendations for…

  15. Efficient computation of NACT seismograms

    NASA Astrophysics Data System (ADS)

    Zheng, Z.; Romanowicz, B. A.

    2009-12-01

    We present a modification to the NACT formalism (Li and Romanowicz, 1995) for computing synthetic seismograms and sensitivity kernels in global seismology. In the NACT theory, the perturbed seismogram consists of an along-branch coupling term, which is computed under the well-known PAVA approximation (e.g. Woodhouse and Dziewonski, 1984), and an across-branch coupling term, which is computed under the linear Born approximation. In the classical formalism, the Born part is obtained by a double summation over all pairs of coupling modes, where the numerical cost grows as (number of sources * number of receivers) * (corner frequency)^4. Here, however, by adapting the approach of Capdeville (2005), we are able to separate the computation into two single summations, which are responsible for the “source to scatterer” and the “scatterer to receiver” contributions, respectively. As a result, the numerical cost of the new scheme grows as (number of sources + number of receivers) * (corner frequency)^2. Moreover, by expanding eigenfunctions on a wavelet basis, a compression factor of at least 3 (larger at lower frequency) is achieved, leading to a factor of ~10 saving in disk storage. Numerical experiments show that the synthetic seismograms computed from the new approach agree well with those from the classical mode coupling method. The new formalism is significantly more efficient when approaching higher frequencies and in cases of large numbers of sources and receivers, while the across-branch mode coupling feature is still preserved, though not explicitly.
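
    The saving comes from a generic pattern: a double sum over (source, receiver) pairs that factorizes through a common set of scatterers can be evaluated as two independent single passes. A toy sketch of that pattern only (the kernel here is a stand-in, not the actual mode-coupling terms):

    ```python
    import numpy as np

    # Toy illustration of the cost reduction described above: when a
    # double sum over (source, receiver) pairs factorizes through a set
    # of scatterers, precomputing per-source and per-receiver terms
    # replaces O(S * R) work per scatterer with O(S + R).

    rng = np.random.default_rng(0)
    S, R, K = 40, 50, 300            # sources, receivers, scatterers (toy)
    a = rng.normal(size=(S, K))      # "source to scatterer" contributions
    b = rng.normal(size=(K, R))      # "scatterer to receiver" contributions

    # Naive pairwise evaluation: loop over every (source, receiver) pair.
    naive = np.empty((S, R))
    for s in range(S):
        for r in range(R):
            naive[s, r] = np.sum(a[s, :] * b[:, r])

    # Factorized evaluation: two single passes (here a matrix product).
    factorized = a @ b

    assert np.allclose(naive, factorized)
    ```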

  16. Computer-assisted detection (CAD) methodology for early detection of response to pharmaceutical therapy in tuberculosis patients

    NASA Astrophysics Data System (ADS)

    Lieberman, Robert; Kwong, Heston; Liu, Brent; Huang, H. K.

    2009-02-01

    The chest x-ray radiological features of tuberculosis patients are well documented, and the radiological features that change in response to successful pharmaceutical therapy can be followed with longitudinal studies over time. The patients can also be classified as either responsive or resistant to pharmaceutical therapy based on clinical improvement. We have retrospectively collected time series chest x-ray images of 200 patients diagnosed with tuberculosis receiving the standard pharmaceutical treatment. Computer algorithms can be created to utilize image texture features to assess the temporal changes in the chest x-rays of the tuberculosis patients. This methodology provides a framework for a computer-assisted detection (CAD) system that may provide physicians with the ability to detect poor treatment response earlier in pharmaceutical therapy. Early detection allows physicians to respond with more timely treatment alternatives and improved outcomes. Such a system has the potential to increase treatment efficacy for millions of patients each year.

  17. CAD for small hydro projects

    SciTech Connect

    Bishop, N.A. Jr. )

    1994-04-01

    Over the past decade, computer-aided design (CAD) has become a practical and economical design tool. Today, specifying CAD hardware and software is relatively easy once you know what the design requirements are. But finding experienced CAD professionals is often more difficult. Most CAD users have only two or three years of design experience; more experienced design personnel are frequently not CAD literate. However, effective use of CAD can be the key to lowering design costs and improving design quality--a quest familiar to every manager and designer. By emphasizing computer-aided design literacy at all levels of the firm, a Canadian joint-venture company that specializes in engineering small hydroelectric projects has cut costs, become more productive and improved design quality. This article describes how they did it.

  18. Efficient computation of optimal actions

    PubMed Central

    Todorov, Emanuel

    2009-01-01

    Optimal choice of actions is a fundamental problem relevant to fields as diverse as neuroscience, psychology, economics, computer science, and control engineering. Despite this broad relevance the abstract setting is similar: we have an agent choosing actions over time, an uncertain dynamical system whose state is affected by those actions, and a performance criterion that the agent seeks to optimize. Solving problems of this kind remains hard, in part, because of overly generic formulations. Here, we propose a more structured formulation that greatly simplifies the construction of optimal control laws in both discrete and continuous domains. An exhaustive search over actions is avoided and the problem becomes linear. This yields algorithms that outperform Dynamic Programming and Reinforcement Learning, and thereby solve traditional problems more efficiently. Our framework also enables computations that were not possible before: composing optimal control laws by mixing primitives, applying deterministic methods to stochastic systems, quantifying the benefits of error tolerance, and inferring goals from behavioral data via convex optimization. Development of a general class of easily solvable problems tends to accelerate progress—as linear systems theory has done, for example. Our framework may have similar impact in fields where optimal choice of actions is relevant. PMID:19574462
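
    The "more structured formulation" is the class of linearly-solvable Markov decision problems, in which the exponentiated cost-to-go (the desirability function) satisfies a linear equation. A minimal first-exit sketch under that framework; the notation (state cost q, passive dynamics P) and the function name are assumed here, not quoted from the paper:

    ```python
    import numpy as np

    # Linearly-solvable first-exit problem: the desirability z = exp(-v)
    # satisfies the *linear* fixed point z = exp(-q) * (P @ z) on
    # non-terminal states, so no exhaustive minimization over actions is
    # needed. Optimal transitions are proportional to P(i, j) * z(j).

    def desirability(P, q, terminal, iters=2000, tol=1e-12):
        n = len(q)
        term = set(terminal)
        interior = [i for i in range(n) if i not in term]
        z = np.ones(n)
        z[terminal] = np.exp(-q[terminal])        # boundary condition
        for _ in range(iters):
            z_new = z.copy()
            z_new[interior] = np.exp(-q[interior]) * (P @ z)[interior]
            if np.max(np.abs(z_new - z)) < tol:
                break
            z = z_new
        return z

    # Toy chain: drift randomly left/right, exit at the right end (state 4).
    P = np.array([[0.5, 0.5, 0.0, 0.0, 0.0],
                  [0.5, 0.0, 0.5, 0.0, 0.0],
                  [0.0, 0.5, 0.0, 0.5, 0.0],
                  [0.0, 0.0, 0.5, 0.0, 0.5],
                  [0.0, 0.0, 0.0, 0.0, 1.0]])
    q = np.array([1.0, 1.0, 1.0, 1.0, 0.0])
    z = desirability(P, q, terminal=[4])
    v = -np.log(z)                                # optimal cost-to-go
    print(v)
    ```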

  19. Computationally efficient lossless image coder

    NASA Astrophysics Data System (ADS)

    Sriram, Parthasarathy; Sudharsanan, Subramania I.

    1999-12-01

    Lossless coding of image data has been a very active area of research in the fields of medical imaging, remote sensing, and document processing/delivery. While several lossless image coders such as JPEG and JBIG have been in existence for a while, their compression performance for encoding continuous-tone images was rather poor. Recently, several state-of-the-art techniques like CALIC and LOCO were introduced with significant improvement in compression performance over traditional coders. However, these coders are very difficult to implement using dedicated hardware or in software using media processors due to the inherently serial nature of their encoding process. In this work, we propose a lossless image coding technique with a compression performance that is very close to that of CALIC and LOCO while being very efficient to implement both in hardware and software. Comparisons for encoding the JPEG-2000 image set show that the compression performance of the proposed coder is within 2-5% of the more complex coders while being computationally very efficient. In addition, the encoder is shown to be parallelizable at a hierarchy of levels. The execution time of the proposed encoder is smaller than that required by LOCO, while the decoder is 2-3 times faster than the LOCO decoder.
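
    The abstract does not specify the proposed coder's predictor, so purely as background, here is the median edge detector (MED) predictor used by LOCO/JPEG-LS, the comparison coder: a minimal sketch of the predict-then-entropy-code structure this family of coders shares.

    ```python
    import numpy as np

    # MED predictor from JPEG-LS/LOCO-I: predict each pixel from its
    # west (a), north (b), and northwest (c) neighbors. A lossless coder
    # then entropy-codes the residual x - pred; the decoder reverses the
    # prediction exactly, hence "lossless".

    def med_predict(a, b, c):
        if c >= max(a, b):
            return min(a, b)      # edge above or to the left
        if c <= min(a, b):
            return max(a, b)
        return a + b - c          # smooth region: planar prediction

    def residuals(img):
        img = img.astype(int)
        res = np.zeros_like(img)
        h, w = img.shape
        for y in range(h):
            for x in range(w):
                a = img[y, x - 1] if x > 0 else 0
                b = img[y - 1, x] if y > 0 else 0
                c = img[y - 1, x - 1] if x > 0 and y > 0 else 0
                res[y, x] = img[y, x] - med_predict(a, b, c)
        return res
    ```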

  20. Use of CAD systems in design of Space Station and space robots

    NASA Technical Reports Server (NTRS)

    Dwivedi, Suren N.; Yadav, P.; Jones, Gary; Travis, Elmer W.

    1988-01-01

    The evolution of CAD systems is traced. State-of-the-art CAD systems are reviewed and various advanced CAD facilities and supplementing systems being used at NASA-Goddard are described. CAD hardware, computer software, and protocols are detailed.

  1. TRAD or CAD? A Comparison.

    ERIC Educational Resources Information Center

    Resetarits, Paul J.

    1989-01-01

    Studies whether traditional drafting equipment (TRAD) or computer aided drafting equipment (CAD) is more effective. Proposes that students using only CAD can learn principles of drafting as well as students using only TRAD. Reports no significant difference either on achievement or attitude. (MVL)

  2. Gathering Empirical Evidence Concerning Links between Computer Aided Design (CAD) and Creativity

    ERIC Educational Resources Information Center

    Musta'amal, Aede Hatib; Norman, Eddie; Hodgson, Tony

    2009-01-01

    Discussion is often reported concerning potential links between computer-aided designing and creativity, but there is a lack of systematic enquiry to gather empirical evidence concerning such links. This paper reports an indication of findings from other research studies carried out in contexts beyond general education that have sought evidence…

  3. Incorporating CAD Instruction into the Drafting Curriculum.

    ERIC Educational Resources Information Center

    Yuen, Steve Chi-Yin

    1990-01-01

    If education is to meet the challenge posed by the U.S. productivity crisis and the large number of computer-assisted design (CAD) workstations forecast as necessary in the future, schools must integrate CAD into the drafting curriculum and become aggressive in providing CAD training. Teachers need to maintain close contact with local industries…

  4. Coronary artery computed tomography as the first-choice imaging diagnostics in patients with high pre-test probability of coronary artery disease (CAT-CAD)

    PubMed Central

    Rudziński, Piotr N.; Demkow, Marcin; Dzielińska, Zofia; Pręgowski, Jerzy; Witkowski, Adam; Rużyłło, Witold; Kępka, Cezary

    2015-01-01

    Introduction The primary diagnostic examination performed in patients with a high pre-test probability of coronary artery disease (CAD) is invasive coronary angiography. Currently, approximately 50% of all invasive coronary angiographies do not end with percutaneous coronary intervention (PCI) because of the absence of significant coronary artery lesions. It is desirable to eliminate such situations. There is an alternative, non-invasive method useful for exclusion of significant CAD: coronary computed tomography angiography (CCTA). Aim We hypothesize that use of CCTA as the first-choice method in the diagnosis of patients with high pre-test probability of CAD may reduce the number of invasive coronary angiographies not followed by interventional treatment. Coronary computed tomography angiography also does not appear to add risk or cost to the diagnosis. Confirmation of these assumptions may impact cardiology guidelines. Material and methods One hundred and twenty patients with indications for invasive coronary angiography determined by current ESC guidelines regarding stable CAD are randomized 1:1 to a classic invasive coronary angiography group or a CCTA group. Results All patients included in the study are monitored for the occurrence of possible end points during the diagnostic and therapeutic cycle (from the first imaging examination to either complete revascularization or disqualification from invasive treatment), or during the follow-up period. Conclusions Based on the literature, it appears that the use of modern CT systems in patients with high pre-test probability of CAD, as well as appropriate clinical interpretation of the imaging study by invasive cardiologists, enables precise planning of invasive therapeutic procedures. Our randomized study will provide data to verify these assumptions. PMID:26677376

  5. Leadless chip carrier packaging and CAD/CAM (Computer-Aided Design/Computer-Aided Manufacturing) supported wire wrap interconnect technology for subnanosecond ECL (Emitter Coupled Logic)

    NASA Astrophysics Data System (ADS)

    Gilbert, B. K.

    1982-12-01

    This document is the third-year interim report for a four-year program to refine and develop computer-aided design protocols for implementation of subnanosecond Emitter Coupled Logic in high-speed computer modules using a wire wrap interconnection medium. The software and user manuals for the implementation guides are not part of this report. This report describes the results of work conducted in the third year of a four-year program to develop rapid methods for designing and prototyping high-speed digital processor systems using subnanosecond emitter coupled logic (ECL). The third-year effort was divided into two separate sets of tasks. In Task 1, described in Sections 3-7 of this report, we have nearly completed development of new sets of design rules, interconnection protocols, special components, and logic panels for a technology based upon specially designed leadless ceramic chip carriers developed at Mayo Foundation. Task 2, described in Sections 8 and 9 of this report, continued the development of a comprehensive computer-aided design/computer-aided manufacturing (CAD/CAM) software package which is specifically tailored to support the peculiar design requirements of processors operating in a high clock rate, transmission line environment, either with subnanosecond ECL components or with any other families of subnanosecond devices.

  6. CAD/CAM (Computer Aided Design/Computer Aided Manufacture). A Brief Guide to Materials in the Library of Congress.

    ERIC Educational Resources Information Center

    Havas, George D.

    This brief guide to materials in the Library of Congress (LC) on computer aided design and/or computer aided manufacturing lists reference materials and other information sources under 13 headings: (1) brief introductions; (2) LC subject headings used for such materials; (3) textbooks; (4) additional titles; (5) glossaries and handbooks; (6)…

  7. HistoCAD: Machine Facilitated Quantitative Histoimaging with Computer Assisted Diagnosis

    NASA Astrophysics Data System (ADS)

    Tomaszewski, John E.

    Prostatic adenocarcinoma (CAP) is the most common malignancy in American men. In 2010 there will be an estimated 217,730 new cases and 32,050 deaths from CAP in the US. The diagnosis of prostatic adenocarcinoma is made exclusively from the histological evaluation of prostate tissue. The sampling protocols used to obtain 18 gauge (1.5 mm diameter) needle cores are standard sampling templates consisting of 6-12 cores performed in the context of an elevated serum value for prostate specific antigen (PSA). In this context, the prior probability of cancer is somewhat increased. However, even in this screened population, the efficiency of finding cancer is low at only approximately 20%. Histopathologists are faced with the task of reviewing the 5-10 million cores of tissue resulting from approximately 1,000,000 biopsy procedures yearly, parsing all the benign scenes from the worrisome scenes, and deciding which of the worrisome images are cancer.

  8. Efficient Computational Model of Hysteresis

    NASA Technical Reports Server (NTRS)

    Shields, Joel

    2005-01-01

    A recently developed mathematical model of the output (displacement) versus the input (applied voltage) of a piezoelectric transducer accounts for hysteresis. For the sake of computational speed, the model is kept simple by neglecting the dynamic behavior of the transducer. Hence, the model applies to static and quasistatic displacements only. A piezoelectric transducer of the type to which the model applies is used as an actuator in a computer-based control system to effect fine position adjustments. Because the response time of the rest of such a system is usually much greater than that of a piezoelectric transducer, the model remains an acceptably close approximation for the purpose of control computations, even though the dynamics are neglected. The model (see Figure 1) represents an electrically parallel, mechanically series combination of backlash elements, each having a unique deadband width and output gain. The zeroth element in the parallel combination has zero deadband width and, hence, represents a linear component of the input/output relationship. The other elements, which have nonzero deadband widths, are used to model the nonlinear components of the hysteresis loop. The deadband widths and output gains of the elements are computed from experimental displacement-versus-voltage data. The hysteresis curve calculated by use of this model is piecewise linear beyond deadband limits.
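
    The parallel-backlash structure described above is straightforward to evaluate sample by sample. A minimal sketch, with deadband widths and gains chosen arbitrarily here rather than fitted to displacement-versus-voltage data as in the article:

    ```python
    import numpy as np

    # Quasistatic hysteresis as a parallel bank of backlash (play)
    # operators. Element k has deadband half-width r[k] and gain w[k];
    # element 0 has r = 0, i.e., a purely linear term. The output is the
    # weighted sum of the element states.

    def backlash_bank(u, r, w, y0=None):
        y = np.zeros(len(r)) if y0 is None else y0.copy()
        out = []
        for u_t in u:
            # play operator: state follows input once outside the deadband
            y = np.clip(y, u_t - r, u_t + r)
            out.append(np.dot(w, y))
        return np.array(out)

    # Example: a triangle-wave voltage sweep traces out a hysteresis loop.
    r = np.array([0.0, 0.2, 0.5, 0.9])    # deadband widths (element 0 linear)
    w = np.array([0.6, 0.2, 0.15, 0.05])  # output gains (illustrative values)
    u = np.concatenate([np.linspace(0, 1, 50),
                        np.linspace(1, -1, 100),
                        np.linspace(-1, 1, 100)])
    displacement = backlash_bank(u, r, w)
    ```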

  9. Immersive CAD

    SciTech Connect

    Ames, A.L.

    1999-02-01

    This paper documents development of a capability for performing shape-changing editing operations on solid model representations in an immersive environment. The capability includes part- and assembly-level operations, with part modeling supporting topology-invariant and topology-changing modifications. A discussion of various design considerations in developing an immersive capability is included, along with discussion of a prototype implementation we have developed and explored. The project investigated approaches to providing both topology-invariant and topology-changing editing. A prototype environment was developed to test the approaches and determine the usefulness of immersive editing. The prototype showed exciting potential in redefining the CAD interface. It is fun to use. Editing is much faster and friendlier than traditional feature-based CAD software. The prototype algorithms did not reliably provide a sufficient frame rate for complex geometries, but provided the necessary roadmap for development of a production capability.

  10. Computing Efficiency Of Transfer Of Microwave Power

    NASA Technical Reports Server (NTRS)

    Pinero, L. R.; Acosta, R.

    1995-01-01

    BEAM computer program enables user to calculate microwave power-transfer efficiency between two circular apertures at arbitrary range. Power-transfer efficiency obtained numerically. Two apertures have generally different sizes and arbitrary taper illuminations. BEAM also analyzes effect of distance and taper illumination on transmission efficiency for two apertures of equal size. Written in FORTRAN.

  11. A supervised 'lesion-enhancement' filter by use of a massive-training artificial neural network (MTANN) in computer-aided diagnosis (CAD)

    NASA Astrophysics Data System (ADS)

    Suzuki, Kenji

    2009-09-01

    Computer-aided diagnosis (CAD) has been an active area of study in medical image analysis. A filter for the enhancement of lesions plays an important role for improving the sensitivity and specificity in CAD schemes. The filter enhances objects similar to a model employed in the filter; e.g. a blob-enhancement filter based on the Hessian matrix enhances sphere-like objects. Actual lesions, however, often differ from a simple model; e.g. a lung nodule is generally modeled as a solid sphere, but there are nodules of various shapes and with internal inhomogeneities such as a nodule with spiculations and ground-glass opacity. Thus, conventional filters often fail to enhance actual lesions. Our purpose in this study was to develop a supervised filter for the enhancement of actual lesions (as opposed to a lesion model) by use of a massive-training artificial neural network (MTANN) in a CAD scheme for detection of lung nodules in CT. The MTANN filter was trained with actual nodules in CT images to enhance actual patterns of nodules. By use of the MTANN filter, the sensitivity and specificity of our CAD scheme were improved substantially. With a database of 69 lung cancers, nodule candidate detection by the MTANN filter achieved a 97% sensitivity with 6.7 false positives (FPs) per section, whereas nodule candidate detection by a difference-image technique achieved a 96% sensitivity with 19.3 FPs per section. Classification-MTANNs were applied for further reduction of the FPs. The classification-MTANNs removed 60% of the FPs with a loss of one true positive; thus, it achieved a 96% sensitivity with 2.7 FPs per section. Overall, with our CAD scheme based on the MTANN filter and classification-MTANNs, an 84% sensitivity with 0.5 FPs per section was achieved. First presented at the Seventh International Conference on Machine Learning and Applications, San Diego, CA, USA, 11-13 December 2008.

  12. Efficient, massively parallel eigenvalue computation

    NASA Technical Reports Server (NTRS)

    Huo, Yan; Schreiber, Robert

    1993-01-01

    In numerical simulations of disordered electronic systems, one of the most common approaches is to diagonalize random Hamiltonian matrices and to study the eigenvalues and eigenfunctions of a single electron in the presence of a random potential. An effort to implement a matrix diagonalization routine for real symmetric dense matrices on massively parallel SIMD computers, the Maspar MP-1 and MP-2 systems, is described. Results of numerical tests and timings are also presented.

  13. Use of CAD Geometry in MDO

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid A.

    1996-01-01

    The purpose of this paper is to discuss the use of Computer-Aided Design (CAD) geometry in a Multi-Disciplinary Design Optimization (MDO) environment. Two techniques are presented to facilitate the use of CAD geometry by different disciplines, such as Computational Fluid Dynamics (CFD) and Computational Structural Mechanics (CSM). One method is to transfer the load from a CFD grid to a CSM grid. The second method is to update the CAD geometry for CSM deflection.
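
    As a toy illustration of the first technique (transferring loads between non-matching grids), the sketch below lumps each CFD surface load onto the nearest CSM node. The function name and grids are hypothetical; production methods use consistent interpolation that also conserves work, which this simple version does not.

    ```python
    import numpy as np

    # Toy load transfer from a CFD surface grid to a CSM grid: each CFD
    # nodal force is lumped onto the nearest CSM node. Every load is
    # assigned exactly once, so the total force vector is conserved.

    def transfer_loads(cfd_xyz, cfd_forces, csm_xyz):
        csm_forces = np.zeros_like(csm_xyz)
        for x, f in zip(cfd_xyz, cfd_forces):
            nearest = np.argmin(np.linalg.norm(csm_xyz - x, axis=1))
            csm_forces[nearest] += f
        return csm_forces

    rng = np.random.default_rng(0)
    cfd_xyz = rng.random((500, 3))        # dense aero surface points
    cfd_forces = rng.random((500, 3))     # pressure-integrated nodal loads
    csm_xyz = rng.random((60, 3))         # coarser structural nodes
    loads = transfer_loads(cfd_xyz, cfd_forces, csm_xyz)
    assert np.allclose(loads.sum(axis=0), cfd_forces.sum(axis=0))
    ```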

  14. Lower bounds on the computational efficiency of optical computing systems

    NASA Astrophysics Data System (ADS)

    Barakat, Richard; Reif, John

    1987-03-01

    A general model for determining the computational efficiency of optical computing systems, termed the VLSIO model, is described. It is a 3-dimensional generalization of the wire model of a 2-dimensional VLSI with optical beams (via Gabor's theorem) replacing the wires as communication channels. Lower bounds (in terms of simultaneous volume and time) on the computational resources of the VLSIO are obtained for computing various problems such as matrix multiplication.

  15. The polar phase response property of monopolar ECG voltages using a Computer-Aided Design and Drafting (CAD)-based data acquisition system.

    PubMed

    Goswami, B; Mitra, M; Nag, B; Mitra, T K

    1993-11-01

    The present paper discusses a Computer-Aided Design and Drafting (CAD) based data acquisition and polar phase response study of the ECG. The scalar ECG does not show vector properties although such properties are embedded in it. In the present paper the polar phase response property of monopolar chest lead (V1 to V6) ECG voltages has been studied. A software tool has been used to evaluate the relative phase response of ECG voltages. The data acquisition of monopolar ECG records of chest leads V1 to V6 from the chart recorder has been done with the help of the AutoCAD application package. The spin harmonic constituents of ECG voltages are evaluated at each harmonic plane and the polar phase responses are studied at each plane. Some interesting results have been observed in some typical cases which are discussed in the paper. PMID:8307653

  16. A Computationally Efficient Bedrock Model

    NASA Astrophysics Data System (ADS)

    Fastook, J. L.

    2002-05-01

    Full treatments of the Earth's crust, mantle, and core for ice sheet modeling are often computationally overwhelming, in that the requirements to calculate a full self-gravitating spherical Earth model for the time-varying load history of an ice sheet are considerably greater than the computational requirements for the ice dynamics and thermodynamics combined. For this reason, we adopt a "reasonable" approximation for the behavior of the deforming bedrock beneath the ice sheet. This simpler model of the Earth treats the crust as an elastic plate supported from below by a hydrostatic fluid. Conservation of linear and angular momentum for an elastic plate leads to the classical Poisson-Kirchhoff fourth-order differential equation in the crustal displacement. By adding a time-dependent term, this treatment allows for an exponentially decaying response of the bed to loading and unloading events. This component of the ice sheet model (along with the ice dynamics and thermodynamics) is solved using the Finite Element Method (FEM). C1 FEMs are difficult to implement in more than one dimension, and as such the engineering community has turned away from classical Poisson-Kirchhoff plate theory to treatments such as Reissner-Mindlin plate theory, which are able to accommodate transverse shear and hence require only C0 continuity of basis functions (only the function, and not the derivative, is required to be continuous at the element boundary) (Hughes 1987). This method reduces the complexity of the C1 formulation by adding additional degrees of freedom (the transverse shear in x and y) at each node. This "reasonable" solution is compared with two self-gravitating spherical Earth models (1. Ivins et al. (1997) and James and Ivins (1998), and 2. Tushingham and Peltier (1991) ICE3G, run by Jim Davis and Glenn Milne), as well as with preliminary results of residual rebound rates measured with GPS by the BIFROST project. Modeled responses of a simulated ice sheet experiencing a
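
    For reference, a hedged reconstruction of the governing equations such a model typically uses (the notation D, ρ_m, τ, w, q is assumed here, not quoted from the abstract): a thin elastic plate over a hydrostatic fluid gives the equilibrium deflection, and a relaxation term supplies the exponentially decaying response.

    ```latex
    % Classical Poisson-Kirchhoff plate over a hydrostatic fluid:
    % flexural rigidity D, fluid density rho_m, load q, deflection w.
    \[
      D\,\nabla^{4} w_{\mathrm{eq}} + \rho_{m} g\, w_{\mathrm{eq}} = q(x, y, t)
    \]
    % Time-dependent term: the bed relaxes toward equilibrium with time
    % constant tau, giving the exponentially decaying response to loading:
    \[
      \frac{\partial w}{\partial t} = -\frac{1}{\tau}\,\bigl(w - w_{\mathrm{eq}}\bigr)
    \]
    ```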

  17. CAD/CAM systems in machine construction

    NASA Astrophysics Data System (ADS)

    Hellwig, H.-E.; Paulus, M.

    1985-09-01

    A description is provided of the present status of Computer-Aided Design (CAD) and Computer-Aided Manufacturing (CAM) technology, taking into account applications and risks related to the introduction and employment of CAD/CAM methods. The employment of CAD/CAM systems in the area of machine construction is discussed, giving attention to the situation in West Germany. With respect to the system component 'hardware', a transition to a new hardware generation is taking place. In addition to computer centers with large-scale computers, minicomputers and superminicomputers designed especially for technical applications have become available. However, existing CAD software does not yet permit the full exploitation of the changes in hardware technology. Attention is given to CAD potential and current utilization in various application areas, developments related to graphics and geometry, advantages of a suitable macro language, the current employment of CAD/CAM technology, and cost considerations.

  18. PC Board Layout and Electronic Drafting with CAD. Teacher Edition.

    ERIC Educational Resources Information Center

    Bryson, Jimmy

    This teacher's guide contains 11 units of instruction for a course on computer electronics and computer-assisted drafting (CAD) using a personal computer (PC). The course covers the following topics: introduction to electronic drafting with CAD; CAD system and software; basic electronic theory; component identification; basic integrated circuit…

  19. Fabricating a tooth- and implant-supported maxillary obturator for a patient after maxillectomy with computer-guided surgery and CAD/CAM technology: A clinical report.

    PubMed

    Noh, Kwantae; Pae, Ahran; Lee, Jung-Woo; Kwon, Yong-Dae

    2016-05-01

    An obturator prosthesis with insufficient retention and support may be improved with implant placement. However, implant surgery in patients after maxillary tumor resection can be complicated because of limited visibility and anatomic complexity. Therefore, computer-guided surgery can be advantageous even for experienced surgeons. In this clinical report, the use of computer-guided surgery is described for implant placement using a bone-supported surgical template for a patient with maxillary defects. The prosthetic procedure was facilitated and simplified by using computer-aided design/computer-aided manufacture (CAD/CAM) technology. Oral function and phonetics were restored using a tooth- and implant-supported obturator prosthesis. No clinical symptoms and no radiographic signs of significant bone loss around the implants were found at a 3-year follow-up. The treatment approach presented here can be a viable option for patients with insufficient remaining zygomatic bone after a hemimaxillectomy. PMID:26774316

  20. Improving the radiologist-CAD interaction: designing for appropriate trust.

    PubMed

    Jorritsma, W; Cnossen, F; van Ooijen, P M A

    2015-02-01

    Computer-aided diagnosis (CAD) has great potential to improve radiologists' diagnostic performance. However, the reported performance of the radiologist-CAD team is lower than what might be expected based on the performance of the radiologist and the CAD system in isolation. This indicates that the interaction between radiologists and the CAD system is not optimal. An important factor in the interaction between humans and automated aids (such as CAD) is trust. Suboptimal performance of the human-automation team is often caused by an inappropriate level of trust in the automation. In this review, we examine the role of trust in the radiologist-CAD interaction and suggest ways to improve the output of the CAD system so that it allows radiologists to calibrate their trust in the CAD system more effectively. Observer studies of CAD systems show that radiologists often have an inappropriate level of trust in the CAD system. They sometimes under-trust CAD, thereby reducing its potential benefits, and sometimes over-trust it, leading to diagnostic errors they would not have made without CAD. Based on the literature on trust in human-automation interaction and the results of CAD observer studies, we have identified four ways to improve the output of CAD so that it allows radiologists to form a more appropriate level of trust in CAD. Designing CAD systems for appropriate trust is important and can improve the performance of the radiologist-CAD team. Future CAD research and development should acknowledge the importance of the radiologist-CAD interaction, and specifically the role of trust therein, in order to create the perfect artificial partner for the radiologist. This review focuses on the role of trust in the radiologist-CAD interaction. The aim of the review is to encourage CAD developers to design for appropriate trust and thereby improve the performance of the radiologist-CAD team. PMID:25459198

  1. A Survey of CAD Software.

    ERIC Educational Resources Information Center

    Sisk, Alan

    1987-01-01

    Computer-aided design (CAD) has been around for a number of years. An overview is provided of a number of major computer-aided design programs. A short analysis of each program includes the addresses of the software producers. (MLF)

  2. Efficient Methods to Compute Genomic Predictions

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Efficient methods for processing genomic data were developed to increase reliability of estimated breeding values and simultaneously estimate thousands of marker effects. Algorithms were derived and computer programs tested on simulated data for 50,000 markers and 2,967 bulls. Accurate estimates of ...

  3. Viewing CAD Drawings on the Internet

    ERIC Educational Resources Information Center

    Schwendau, Mark

    2004-01-01

    Computer aided design (CAD) has been producing 3-D models for years. AutoCAD software is frequently used to create sophisticated 3-D models. These CAD files can be exported as 3DS files for import into Autodesk's 3-D Studio Viz. In this program, the user can render and modify the 3-D model before exporting it out as a WRL (world file hyperlinked)…

  4. Efficient computation of Lorentzian 6J symbols

    NASA Astrophysics Data System (ADS)

    Willis, Joshua

    2007-04-01

    Spin foam models are a proposal for a quantum theory of gravity, and an important open question is whether they reproduce classical general relativity in the low energy limit. One approach to tackling that problem is to simulate spin-foam models on the computer, but this is hampered by the high computational cost of evaluating the basic building block of these models, the so-called 10J symbol. For Euclidean models, Christensen and Egan have developed an efficient algorithm, but for Lorentzian models this problem remains open. In this talk we describe an efficient method developed for Lorentzian 6J symbols, and we also report on recent work in progress to use this efficient algorithm in calculating the 10J symbols that are of real interest.

  5. CAD/CAM. High-Technology Training Module.

    ERIC Educational Resources Information Center

    Zuleger, Robert

    This high technology training module is an advanced course on computer-assisted design/computer-assisted manufacturing (CAD/CAM) for grades 11 and 12. This unit, to be used with students in advanced drafting courses, introduces the concept of CAD/CAM. The content outline includes the following seven sections: (1) CAD/CAM software; (2) computer…

  6. An Efficient Method for Computing All Reducts

    NASA Astrophysics Data System (ADS)

    Bao, Yongguang; Du, Xiaoyong; Deng, Mingrong; Ishii, Naohiro

    In the process of data mining of decision tables using the Rough Sets methodology, the main computational effort is associated with the determination of the reducts. Computing all reducts is a combinatorial NP-hard problem. Therefore the only way to achieve faster execution is to provide an algorithm, with a better constant factor, which may solve this problem in reasonable time for real-life data sets. The purpose of this presentation is to propose two new efficient algorithms to compute reducts in information systems. The proposed algorithms are based on properties of reducts and the relation between reducts and the discernibility matrix. Experiments measuring execution time have been conducted on some real-world domains. The results show improved execution time when compared with other methods. In real applications, the two proposed algorithms can be combined.
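
    A brief sketch of the discernibility-matrix construction these algorithms build on, using a toy decision table and standard rough-set definitions; this is background for the problem, not the paper's algorithms, and the brute-force search at the end is exactly what efficient reduct algorithms avoid.

    ```python
    from itertools import combinations

    # Discernibility matrix of a decision table: for each pair of objects
    # with different decisions, record the set of condition attributes on
    # which they differ. An attribute set is a super-reduct iff it hits
    # every nonempty entry; reducts are the minimal such hitting sets.

    table = [  # (condition attribute values, decision)
        ((0, 1, 0), 'no'),
        ((1, 1, 0), 'yes'),
        ((0, 0, 1), 'no'),
        ((1, 0, 1), 'yes'),
    ]
    n_attrs = 3

    def discernibility(table):
        entries = []
        for (x, dx), (y, dy) in combinations(table, 2):
            if dx != dy:
                entries.append({a for a in range(n_attrs) if x[a] != y[a]})
        return entries

    def is_super_reduct(attrs, entries):
        return all(entry & attrs for entry in entries)

    entries = discernibility(table)
    # Brute-force minimal hitting sets (fine for tiny tables only).
    reducts = [set(c) for k in range(1, n_attrs + 1)
               for c in combinations(range(n_attrs), k)
               if is_super_reduct(set(c), entries)
               and not any(is_super_reduct(set(c) - {a}, entries) for a in c)]
    print(reducts)   # [{0}] here: attribute 0 alone determines the decision
    ```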

  7. CAD Services: an Industry Standard Interface for Mechanical CAD Interoperability

    NASA Technical Reports Server (NTRS)

    Claus, Russell; Weitzer, Ilan

    2002-01-01

    Most organizations seek to design and develop new products in increasingly shorter time periods. At the same time, increased performance demands require a team-based multidisciplinary design process that may span several organizations. One approach to meeting these demands is to use 'Geometry Centric' design. In this approach, design engineers team their efforts through one unified representation of the design that is usually captured in a CAD system. Standards-based interfaces are critical to provide uniform, simple, distributed services that enable the 'Geometry Centric' design approach. This paper describes an industry-wide effort, under the Object Management Group's (OMG) Manufacturing Domain Task Force, to define interfaces that enable the interoperability of CAD, Computer Aided Manufacturing (CAM), and Computer Aided Engineering (CAE) tools. This critical link enabling 'Geometry Centric' design is called CAD Services V1.0. This paper discusses the features of this standard and its proposed application.

  8. Computerized design of CAD

    NASA Astrophysics Data System (ADS)

    Paul, B. E.; Pham, T. A.

    1982-11-01

    A computerized ballistic design technique for CAD/PAD is described by which a set of ballistic design parameters is determined, all of which satisfy a particular performance requirement. In addition, the program yields the remaining performance predictions, so that only a very few runs of the design program can quickly bring the ballistic design within the prescribed specification limits. An example is presented for a small propulsion device, such as a remover or actuator, for which the input specifications define a maximum allowable thrust and a minimum end-of-stroke velocity. The resulting output automatically satisfies the input requirements and will always yield an acceptable ballistic design.

  9. Impact of image normalization and quantization on the performance of sonar computer-aided detection/computer-aided classification (CAD/CAC) algorithms

    NASA Astrophysics Data System (ADS)

    Ciany, Charles M.; Zurawski, William C.

    2007-04-01

    Raytheon has extensively processed high-resolution sonar images with its CAD/CAC algorithms to provide real-time classification of mine-like bottom objects in a wide range of shallow-water environments. The algorithm performance is measured in terms of probability of correct classification (Pcc) as a function of false alarm rate, and is impacted by variables associated with both the physics of the problem and the signal processing design choices. Some examples of prominent variables pertaining to the choices of signal processing parameters are image resolution (i.e., pixel dimensions), image normalization scheme, and pixel intensity quantization level (i.e., number of bits used to represent the intensity of each image pixel). Improvements in image resolution associated with the technology transition from sidescan to synthetic aperture sonars have prompted the use of image decimation algorithms to reduce the number of pixels per image that are processed by the CAD/CAC algorithms, in order to meet real-time processor throughput requirements. Additional improvements in digital signal processing hardware have also facilitated the use of an increased quantization level in converting the image data from analog to digital format. This study evaluates modifications to the normalization algorithm and image pixel quantization level within the image processing prior to CAD/CAC processing, and examines their impact on the resulting CAD/CAC algorithm performance. The study utilizes a set of at-sea data from multiple test exercises in varying shallow water environments.

  10. Changing computing paradigms towards power efficiency.

    PubMed

    Klavík, Pavel; Malossi, A Cristiano I; Bekas, Costas; Curioni, Alessandro

    2014-06-28

    Power awareness is fast becoming immensely important in computing, ranging from the traditional high-performance computing applications to the new generation of data centric workloads. In this work, we describe our efforts towards a power-efficient computing paradigm that combines low- and high-precision arithmetic. We showcase our ideas for the widely used kernel of solving systems of linear equations that finds numerous applications in scientific and engineering disciplines as well as in large-scale data analytics, statistics and machine learning. Towards this goal, we developed tools for the seamless power profiling of applications at a fine-grain level. In addition, we verify here previous work on post-FLOPS/W metrics and show that these can shed much more light in the power/energy profile of important applications. PMID:24842033
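
    A common concrete instance of combining low- and high-precision arithmetic for linear systems is mixed-precision iterative refinement: solve in fast, energy-cheap low precision, then correct with residuals computed in high precision. A minimal sketch of the generic textbook scheme, not the authors' implementation:

    ```python
    import numpy as np

    # Mixed-precision iterative refinement: the O(n^3) solve runs in
    # float32 (cheap, low energy), while the O(n^2) residual is computed
    # in float64 to recover high-precision accuracy. A real implementation
    # would factorize A once in float32 and reuse the LU factors.

    def mixed_precision_solve(A, b, iters=10):
        A32 = A.astype(np.float32)
        x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
        for _ in range(iters):
            r = b - A @ x                                   # float64 residual
            d = np.linalg.solve(A32, r.astype(np.float32))  # float32 correction
            x = x + d.astype(np.float64)
            if np.linalg.norm(r) <= 1e-14 * np.linalg.norm(b):
                break
        return x

    rng = np.random.default_rng(1)
    n = 200
    A = rng.standard_normal((n, n)) + n * np.eye(n)  # well conditioned
    b = rng.standard_normal(n)
    x = mixed_precision_solve(A, b)
    print(np.linalg.norm(A @ x - b))                 # near float64 accuracy
    ```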

  11. CAD systems simplify engineering drawings

    SciTech Connect

    Holt, J.

    1986-10-01

    Computer assisted drafting systems, with today's technology, provide high-quality, timely drawings that can be justified by the lower costs for the final product. The author describes Exxon Pipeline Co.'s experience in deciding on hardware and software for a CAD system installation and the benefits effected by this procedure and equipment.

  12. Efficient communication in massively parallel computers

    SciTech Connect

    Cypher, R.E.

    1989-01-01

    A fundamental operation in parallel computation is sorting. Sorting is important not only because it is required by many algorithms, but also because it can be used to implement irregular, pointer-based communication. The author studies two algorithms for sorting in massively parallel computers. First, he examines Shellsort. Shellsort is a sorting algorithm that is based on a sequence of parameters called increments. Shellsort can be used to create a parallel sorting device known as a sorting network. Researchers have suggested that if the correct increment sequence is used, an optimal size sorting network can be obtained. All published increment sequences have been monotonically decreasing. He shows that no monotonically decreasing increment sequence will yield an optimal size sorting network. Second, he presents a sorting algorithm called Cubesort. Cubesort is the fastest known sorting algorithm for a variety of parallel computers over a wide range of parameters. He also presents a paradigm for developing parallel algorithms that have efficient communication. The paradigm, called the data reduction paradigm, consists of using a divide-and-conquer strategy. Both the division and combination phases of the divide-and-conquer algorithm may require irregular, pointer-based communication between processors. However, the problem is divided so as to limit the amount of data that must be communicated. As a result the communication can be performed efficiently. He presents data reduction algorithms for the image component labeling problem, the closest pair problem and four versions of the parallel prefix problem.
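
    For reference, sequential Shellsort with a generic decreasing increment sequence looks like the sketch below; the dissertation's result concerns which increment sequences can yield optimal-size sorting networks, which this sketch does not address.

    ```python
    # Shellsort: repeatedly insertion-sort elements that are `gap` apart,
    # for a decreasing sequence of gaps (increments). Correctness requires
    # the final increment to be 1; the earlier increments govern the cost.

    def shellsort(a, increments=None):
        a = list(a)
        gaps = increments or [g for g in (701, 301, 132, 57, 23, 10, 4, 1)
                              if g < len(a)] or [1]
        for gap in gaps:
            for i in range(gap, len(a)):
                item, j = a[i], i
                while j >= gap and a[j - gap] > item:
                    a[j] = a[j - gap]     # shift larger elements right
                    j -= gap
                a[j] = item
        return a

    print(shellsort([5, 2, 9, 1, 7, 3, 8, 6, 4, 0]))
    ```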

  13. Computational efficiency improvements for image colorization

    NASA Astrophysics Data System (ADS)

    Yu, Chao; Sharma, Gaurav; Aly, Hussein

    2013-03-01

    We propose an efficient algorithm for colorization of greyscale images. As in prior work, colorization is posed as an optimization problem: a user specifies the color for a few scribbles drawn on the greyscale image and the color image is obtained by propagating color information from the scribbles to surrounding regions, while maximizing the local smoothness of colors. In this formulation, colorization is obtained by solving a large sparse linear system, which normally requires substantial computation and memory resources. Our algorithm improves the computational performance through three innovations over prior colorization implementations. First, the linear system is solved iteratively without explicitly constructing the sparse matrix, which significantly reduces the required memory. Second, we formulate each iteration in terms of integral images obtained by dynamic programming, reducing repetitive computation. Third, we use a coarse-to-fine framework, where a lower resolution subsampled image is first colorized and this low resolution color image is upsampled to initialize the colorization process for the fine level. The improvements we develop provide significant speedup and memory savings compared to the conventional approach of solving the linear system directly using off-the-shelf sparse solvers, and allow us to colorize images with typical sizes encountered in realistic applications on typical commodity computing platforms.
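
    The matrix-free iteration amounts to repeatedly replacing each unscribbled pixel's color with a weighted average of its neighbors' colors while holding scribbled pixels fixed. A minimal sketch with uniform weights and wrap-around boundaries for brevity (the paper uses intensity-based affinities plus the integral-image and coarse-to-fine accelerations; the function name is ours):

    ```python
    import numpy as np

    # Matrix-free colorization: pixels with user scribbles keep their
    # color; every other pixel is repeatedly set to the average color of
    # its 4-neighbors. This is Jacobi iteration on the smoothness system
    # without ever forming the sparse matrix. np.roll wraps at the image
    # border, which is acceptable for this toy demonstration.

    def colorize(scribble_rgb, scribble_mask, iters=500):
        c = scribble_rgb.astype(np.float64).copy()
        for _ in range(iters):
            avg = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
                   np.roll(c, 1, 1) + np.roll(c, -1, 1)) / 4.0
            c = np.where(scribble_mask[..., None], scribble_rgb, avg)
        return c

    h, w = 64, 64
    scribbles = np.zeros((h, w, 3))
    mask = np.zeros((h, w), dtype=bool)
    scribbles[5, 5] = (1.0, 0.0, 0.0); mask[5, 5] = True      # red scribble
    scribbles[60, 60] = (0.0, 0.0, 1.0); mask[60, 60] = True  # blue scribble
    result = colorize(scribbles, mask)
    ```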

  14. Education and Training Packages for CAD/CAM.

    ERIC Educational Resources Information Center

    Wright, I. C.

    1986-01-01

    Discusses educational efforts in the fields of Computer Assisted Design and Manufacturing (CAD/CAM). Describes two educational training initiatives underway in the United Kingdom, one of which is a resource materials package for teachers of CAD/CAM at the undergraduate level, and the other a training course for managers of CAD/CAM systems. (TW)

  15. A primer on the energy efficiency of computing

    SciTech Connect

    Koomey, Jonathan G.

    2015-03-30

    The efficiency of computing at peak output has increased rapidly since the dawn of the computer age. This paper summarizes some of the key factors affecting the efficiency of computing in all usage modes. While there is still great potential for improving the efficiency of computing devices, we will need to alter how we do computing in the next few decades because we are finally approaching the limits of current technologies.

  16. A primer on the energy efficiency of computing

    NASA Astrophysics Data System (ADS)

    Koomey, Jonathan G.

    2015-03-01

    The efficiency of computing at peak output has increased rapidly since the dawn of the computer age. This paper summarizes some of the key factors affecting the efficiency of computing in all usage modes. While there is still great potential for improving the efficiency of computing devices, we will need to alter how we do computing in the next few decades because we are finally approaching the limits of current technologies.

  17. Project CAD as of July 1978: CAD support project, situation in July 1978

    NASA Technical Reports Server (NTRS)

    Boesch, L.; Lang-Lendorff, G.; Rothenberg, R.; Stelzer, V.

    1979-01-01

    The structure of Computer Aided Design (CAD) and the requirements for program development, past and future, are described. The current standard and the future aims of CAD programs are presented. Program developments in (1) civil engineering, (2) mechanical engineering, (3) chemical engineering/shipbuilding, and (4) electrical engineering, as well as (5) general programs, are discussed.

  18. Efficient computation of Wigner-Eisenbud functions

    NASA Astrophysics Data System (ADS)

    Raffah, Bahaaudin M.; Abbott, Paul C.

    2013-06-01

    The R-matrix method, introduced by Wigner and Eisenbud (1947) [1], has been applied to a broad range of electron transport problems in nanoscale quantum devices. With the rapid increase in the development and modeling of nanodevices, efficient, accurate, and general computation of Wigner-Eisenbud functions is required. This paper presents the Mathematica package WignerEisenbud, which uses the Fourier discrete cosine transform to compute the Wigner-Eisenbud functions in dimensionless units for an arbitrary potential in one dimension, and in two dimensions in cylindrical coordinates.

    Program summary:
    Program title: WignerEisenbud
    Catalogue identifier: AEOU_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOU_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    Distribution format: tar.gz
    Programming language: Mathematica
    Operating system: Any platform supporting Mathematica 7.0 and above
    Keywords: Wigner-Eisenbud functions, discrete cosine transform (DCT), cylindrical nanowires
    Classification: 7.3, 7.9, 4.6, 5
    Nature of problem: Computing the 1D and 2D Wigner-Eisenbud functions for arbitrary potentials using the DCT.
    Solution method: The R-matrix method is applied to the physical problem. Separation of variables is used for eigenfunction expansion of the 2D Wigner-Eisenbud functions. Eigenfunction computation is performed using the DCT to convert the Schrödinger equation with Neumann boundary conditions to a generalized matrix eigenproblem.
    Limitations: Restricted to uniform (rectangular grid) sampling of the potential. In 1D the number of sample points, n, results in matrix computations involving n×n matrices.
    Unusual features: Eigenfunction expansion using the DCT is fast and accurate. Users can specify scattering potentials using functions, or interactively using mouse input. Use of dimensionless units permits application to a
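
    To illustrate the DCT-to-eigenproblem idea in 1D (a simplified stand-in, not the WignerEisenbud package itself): with Neumann boundary conditions the natural basis is cosines, every potential matrix element reduces to a cosine-transform coefficient of V, and a single DCT call supplies them all. Units, scaling, and the potential are assumed here for illustration.

    ```python
    import numpy as np
    from scipy.fft import dct

    # Sketch: -psi'' + V psi = E psi on [0, 1] with Neumann boundary
    # conditions, expanded in the orthonormal cosine basis
    # {1, sqrt(2) cos(n pi x)}. Matrix elements of V follow from
    # F[k] ~ integral of V(x) cos(k pi x) dx via one type-II DCT.

    M, N = 40, 512                      # basis size, potential samples
    x = (np.arange(N) + 0.5) / N
    V = 500.0 * (x - 0.5) ** 2          # example potential (assumed)

    # scipy's type-II DCT returns 2 * sum_j V_j cos(pi k (2j+1) / (2N)),
    # so dividing by 2N approximates the integral of V(x) cos(k pi x).
    F = dct(V, type=2)[: 2 * M + 1] / (2.0 * N)

    H = np.zeros((M + 1, M + 1))
    for m in range(M + 1):
        for n in range(M + 1):
            if m == 0 and n == 0:
                H[m, n] = F[0]
            elif m == 0 or n == 0:
                H[m, n] = np.sqrt(2.0) * F[m + n]   # one index is zero
            else:
                H[m, n] = F[abs(m - n)] + F[m + n]
    H += np.diag([(n * np.pi) ** 2 for n in range(M + 1)])  # kinetic term

    energies, coeffs = np.linalg.eigh(H)
    print(energies[:5])                 # lowest Neumann eigenvalues
    ```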

  19. Using AutoCAD for descriptive geometry exercises in undergraduate structural geology

    NASA Astrophysics Data System (ADS)

    Jacobson, Carl E.

    2001-02-01

    The exercises in descriptive geometry typically utilized in undergraduate structural geology courses are quickly and easily solved using the computer drafting program AutoCAD. The key to efficient use of AutoCAD for descriptive geometry involves taking advantage of User Coordinate Systems, alternative angle conventions, relative coordinates, and other aspects of AutoCAD that may not be familiar to the beginning user. A summary of these features and an illustration of their application to the creation of structure contours for a planar dipping bed provides the background necessary to solve other problems in descriptive geometry with the computer. The ease of the computer constructions reduces frustration for the student and provides more time to think about the principles of the problems.
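
    The structure-contour construction mentioned above is also easy to check numerically: for a planar bed, elevation drops linearly in the dip direction, so contour positions follow from strike, dip, and one known point. A small sketch of the standard geology formula (the function name and the right-hand-rule strike convention are assumptions, and this is independent of AutoCAD):

    ```python
    import math

    # Structure contours of a planar dipping bed: given a known point on
    # the bed (x0, y0, z0), strike azimuth (degrees clockwise from north,
    # right-hand rule), and dip angle, the bed elevation at map point
    # (x, y) follows from the horizontal distance in the dip direction.
    # Coordinates: x = east, y = north.

    def bed_elevation(x, y, x0, y0, z0, strike_deg, dip_deg):
        dip_az = math.radians(strike_deg + 90.0)   # dip direction azimuth
        d = (x - x0) * math.sin(dip_az) + (y - y0) * math.cos(dip_az)
        return z0 - d * math.tan(math.radians(dip_deg))

    # Bed through (0, 0, 1000 m), striking due north, dipping 30 deg east:
    # the 900 m contour lies 100 / tan(30) ~ 173.2 m east of the point.
    print(bed_elevation(173.2, 0.0, 0, 0, 1000, strike_deg=0, dip_deg=30))
    ```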

  20. Productivity increase through implementation of CAD/CAE workstation

    NASA Technical Reports Server (NTRS)

    Bromley, L. K.

    1985-01-01

    The Tracking and Communication Division's computer-aided design/computer-aided engineering (CAD/CAE) system is now operational. The system is utilized to automate certain tasks that were previously performed manually. These tasks include detailed test configuration diagrams of systems under certification test in the ESTL, floorplan layouts of future planned laboratory reconfigurations, and other graphical documentation of division activities. The significant time savings achieved with this CAD/CAE system are examined: (1) input of drawings and diagrams; (2) editing of initial drawings; (3) accessibility of the data; and (4) added versatility. It is shown that the Applicon CAD/CAE system, with its ease of input and editing, the accessibility of data, and its added versatility, has made many of the necessary but often time-consuming tasks associated with engineering design and testing more efficient.

  1. Efficient gradient computation for dynamical models

    PubMed Central

    Sengupta, B.; Friston, K.J.; Penny, W.D.

    2014-01-01

    Data assimilation is a fundamental issue that arises across many scales in neuroscience — ranging from the study of single neurons using single electrode recordings to the interaction of thousands of neurons using fMRI. Data assimilation involves inverting a generative model that can not only explain observed data but also generate predictions. Typically, the model is inverted or fitted using conventional tools of (convex) optimization that invariably extremise some functional — norms, minimum descriptive length, variational free energy, etc. Generally, optimisation rests on evaluating the local gradients of the functional to be optimized. In this paper, we compare three different gradient estimation techniques that could be used for extremising any functional in time — (i) finite differences, (ii) forward sensitivities and a method based on (iii) the adjoint of the dynamical system. We demonstrate that the first-order gradients of a dynamical system, linear or non-linear, can be computed most efficiently using the adjoint method. This is particularly true for systems where the number of parameters is greater than the number of states. For such systems, integrating several sensitivity equations – as required with forward sensitivities – proves to be most expensive, while finite-difference approximations have an intermediate efficiency. In the context of neuroimaging, adjoint based inversion of dynamical causal models (DCMs) can, in principle, enable the study of models with large numbers of nodes and parameters. PMID:24769182
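
    A minimal illustration of the paper's central point, using a toy scalar system rather than a DCM: for dx/dt = -theta*x discretized by explicit Euler, an adjoint (reverse) sweep returns the gradient of a squared-error functional in one backward pass, independent of the number of parameters, and matches a finite-difference estimate. All model details here are assumptions made for the demonstration.

    import numpy as np

    def simulate(theta, x0, dt, T):
        # Forward pass of dx/dt = -theta * x (explicit Euler)
        xs = [x0]
        for _ in range(T):
            xs.append(xs[-1] * (1.0 - theta * dt))
        return np.array(xs)

    def loss_and_adjoint_grad(theta, x0, ys, dt):
        T = len(ys)
        xs = simulate(theta, x0, dt, T)
        J = np.sum((xs[1:] - ys) ** 2)
        # Backward (adjoint) sweep: one extra pass, cost independent of
        # the number of parameters
        lam, grad = 0.0, 0.0
        for t in range(T, 0, -1):
            lam += 2.0 * (xs[t] - ys[t - 1])    # inject residual at step t
            grad += lam * (-xs[t - 1] * dt)     # direct d x_t / d theta term
            lam *= (1.0 - theta * dt)           # propagate adjoint through f
        return J, grad

    rng = np.random.default_rng(0)
    dt, T, theta_true = 0.01, 200, 1.5
    ys = simulate(theta_true, 1.0, dt, T)[1:] + 0.01 * rng.standard_normal(T)

    theta = 1.0
    J, g_adj = loss_and_adjoint_grad(theta, 1.0, ys, dt)
    eps = 1e-6
    g_fd = (loss_and_adjoint_grad(theta + eps, 1.0, ys, dt)[0] - J) / eps
    print(g_adj, g_fd)     # the two gradient estimates should agree closely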

  2. Dimensioning storage and computing clusters for efficient high throughput computing

    NASA Astrophysics Data System (ADS)

    Accion, E.; Bria, A.; Bernabeu, G.; Caubet, M.; Delfino, M.; Espinal, X.; Merino, G.; Lopez, F.; Martinez, F.; Planas, E.

    2012-12-01

    Scientific experiments are producing huge amounts of data, and the size of their datasets and the total volume of data continue to increase. These data are then processed by researchers belonging to large scientific collaborations, with the Large Hadron Collider being a good example. The focal point of scientific data centers has shifted from efficiently coping with PetaByte-scale storage to delivering quality data-processing throughput. The dimensioning of the internal components in High Throughput Computing (HTC) data centers is of crucial importance to cope with all the activities demanded by the experiments, both online (data acceptance) and offline (data processing, simulation and user analysis). This requires a precise setup involving disk and tape storage services, a computing cluster and the internal networking to prevent bottlenecks, overloads and undesired slowness that lead to loss of CPU cycles and batch job failures. In this paper we point out relevant features for running a successful data storage and processing service in an intensive HTC environment.
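
    The flavor of such dimensioning exercises can be captured in a few lines: compare the aggregate I/O rate demanded by the batch farm with the bandwidth the storage layer can deliver. All figures below are illustrative assumptions, not numbers from the paper.

    # Back-of-the-envelope check that storage can feed the compute farm
    cores = 4000                    # batch slots running concurrently
    io_per_job_MBps = 5.0           # average read rate of one analysis job
    disk_servers = 60
    per_server_MBps = 400.0         # deliverable bandwidth per disk server

    demand = cores * io_per_job_MBps
    capacity = disk_servers * per_server_MBps
    print(f"demand {demand / 1e3:.1f} GB/s vs capacity {capacity / 1e3:.1f} GB/s")
    if demand > capacity:
        print("I/O-bound: jobs will waste CPU cycles waiting on storage")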

  3. Train effectively for CAD/D

    SciTech Connect

    Not Available

    1983-04-01

    After failing with an unstructured computer-aided drafting/design (CAD/D) program, Bechtel changed to a structured training program. Five considerations are presented here: teach CAD/D to engineers, not engineering to CAD/D experts; keep the program flexible enough to avoid rewriting due to fast technology evolution; pace information delivery; give students a conceptual model first, since rote learning of sequences only works once they have one; and provide on-the-job training. Better monitoring systems to test the OJT are needed; one such test is presented.

  4. AutoCAD-To-NASTRAN Translator Program

    NASA Technical Reports Server (NTRS)

    Jones, A.

    1989-01-01

    Program facilitates creation of finite-element mathematical models from geometric entities. AutoCAD to NASTRAN translator (ACTON) computer program developed to facilitate quick generation of small finite-element mathematical models for use with NASTRAN finite-element modeling program. Reads geometric data of drawing from Data Exchange File (DXF) used in AutoCAD and other PC-based drafting programs. Written in Microsoft Quick-Basic (Version 2.0).

  5. CAD/CAM/CAE reshapes engineering processes

    NASA Astrophysics Data System (ADS)

    Ludwinski, Thomas A.

    1993-06-01

    A development history and an evaluation of development trends are undertaken for computer-aided design/manufacturing/engineering techniques. Attention is drawn to the failure of the Initial Graphics Exchange Specification, used for standardized transfer of CAD/CAM data among different databases, to support information concerning solids; it is anticipated that the ability to transfer data transparently among CAD/CAM systems will result in major savings to all users, but this directly impinges on company relations.

  6. CAD/CAM-coupled image processing systems

    NASA Astrophysics Data System (ADS)

    Ahlers, Rolf-Juergen; Rauh, W.

    1990-08-01

    Image processing systems have found wide application in industry. For most computer-integrated manufacturing facilities it is necessary to adapt these systems so that they can automate the interaction with, and the integration of, CAD and CAM systems. In this paper new approaches are described that make use of the coupling of CAD and image processing, as well as the automatic generation of programs for the machining of products.

  7. Aerodynamic Design of Complex Configurations Using Cartesian Methods and CAD Geometry

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.; Pulliam, Thomas H.

    2003-01-01

    The objective for this paper is to present the development of an optimization capability for the Cartesian inviscid-flow analysis package of Aftosmis et al. We evaluate and characterize the following modules within the new optimization framework: (1) a component-based geometry parameterization approach using a CAD solid representation and the CAPRI interface; (2) the use of Cartesian methods in the development of automated optimization tools, using a genetic algorithm and a gradient-based algorithm. The discussion and investigations focus on several real-world problems of the optimization process. We examine the architectural issues associated with the deployment of a CAD-based design approach in a heterogeneous parallel computing environment that contains both CAD workstations and dedicated compute nodes. In addition, we study the influence of noise on the performance of optimization techniques, and the overall efficiency of the optimization process for aerodynamic design of complex three-dimensional configurations.

  8. Multi-site evaluation of a computer aided detection (CAD) algorithm for small acute intra-cranial hemorrhage and development of a stand-alone CAD system ready for deployment in a clinical environment

    NASA Astrophysics Data System (ADS)

    Deshpande, Ruchi R.; Fernandez, James; Lee, Joon K.; Chan, Tao; Liu, Brent J.; Huang, H. K.

    2010-03-01

    Timely detection of Acute Intra-cranial Hemorrhage (AIH) in an emergency environment is essential for the triage of patients suffering from Traumatic Brain Injury. Moreover, the small size of lesions and lack of experience on the reader's part could lead to difficulties in the detection of AIH. A CT based CAD algorithm for the detection of AIH has been developed in order to improve upon the current standard of identification and treatment of AIH. A retrospective analysis of the algorithm has already been carried out with 135 AIH CT studies with 135 matched normal head CT studies from the Los Angeles County General Hospital/University of Southern California Hospital System (LAC/USC). In the next step, AIH studies have been collected from Walter Reed Army Medical Center, and are currently being processed using the AIH CAD system as part of implementing a multi-site assessment and evaluation of the performance of the algorithm. The sensitivity and specificity numbers from the Walter Reed study will be compared with the numbers from the LAC/USC study to determine if there are differences in the presentation and detection due to the difference in the nature of trauma between the two sites. Simultaneously, a stand-alone system with a user friendly GUI has been developed to facilitate implementation in a clinical setting.

  9. CAD/CAM: Practical and Persuasive in Canadian Schools

    ERIC Educational Resources Information Center

    Willms, Ed

    2007-01-01

    Chances are that many high school students would not know how to use drafting instruments, but some might want to gain competence in computer-assisted design (CAD) and possibly computer-assisted manufacturing (CAM). These students are often attracted to tech courses by the availability of CAD/CAM instructions, and many go on to impress employers…

  10. An application protocol for CAD to CAD transfer of electronic information

    NASA Technical Reports Server (NTRS)

    Azu, Charles C., Jr.

    1993-01-01

    The exchange of Computer Aided Design (CAD) information between dissimilar CAD systems is a problem. This is especially true for transferring electronics CAD information such as multi-chip module (MCM), hybrid microcircuit assembly (HMA), and printed circuit board (PCB) designs. Currently, there exist several neutral data formats for transferring electronics CAD information. These include the IGES, EDIF, and DXF formats. All these formats have limitations for use in exchanging electronic data. In an attempt to overcome these limitations, the Navy's MicroCIM program implemented a project to transfer hybrid microcircuit design information between dissimilar CAD systems. The IGES (Initial Graphics Exchange Specification) format is used since it is well established within the CAD industry. The goal of the project is to have a complete transfer of microelectronic CAD information, using IGES, without any data loss. An Application Protocol (AP) is being developed to specify how hybrid microcircuit CAD information will be represented by IGES entity constructs. The AP defines which IGES data items are appropriate for describing HMA geometry, connectivity, and processing as well as HMA material characteristics.

  11. Efficient Computation Of Behavior Of Aircraft Tires

    NASA Technical Reports Server (NTRS)

    Tanner, John A.; Noor, Ahmed K.; Andersen, Carl M.

    1989-01-01

    NASA technical paper discusses challenging application of computational structural mechanics to numerical simulation of responses of aircraft tires during taxiing, takeoff, and landing. Presents details of three main elements of computational strategy: use of special three-field, mixed-finite-element models; use of operator splitting; and application of technique reducing substantially number of degrees of freedom. Proposed computational strategy applied to two quasi-symmetric problems: linear analysis of anisotropic tires through use of two-dimensional-shell finite elements and nonlinear analysis of orthotropic tires subjected to unsymmetric loading. Three basic types of symmetry, and their combinations, exhibited by response of tire identified.

  12. The Effect of Preparation Design on the Fracture Resistance of Zirconia Crown Copings (Computer-Aided Design/Computer-Aided Machining, CAD/CAM System)

    PubMed Central

    Jalalian, E.; Atashkar, B.; Rostami, R.

    2011-01-01

    Objective: One of the major problems of all-ceramic restorations is their probable fracture under occlusal force. The aim of the present in-vitro study was to compare the effect of two marginal designs (chamfer and shoulder) on the fracture resistance of zirconia copings, CERCON (CAD/CAM). Materials and Methods: This in-vitro study used a single-blind experimental technique. One stainless steel die with a 50° chamfer finish line design (0.8 mm depth) was prepared using a milling machine, and ten epoxy resin dies were made from it. The same die was retrieved and the 50° chamfer was converted into a shoulder (1 mm); again, ten epoxy resin dies were made from the shoulder die. Zirconia cores with 0.4 mm thickness and 35 μm cement space were fabricated on the 20 epoxy resin dies (10 chamfer samples and 10 shoulder samples) in a dental laboratory. The zirconia cores were then cemented on the epoxy resin dies and underwent a fracture test with a universal testing machine (GOTECH AI-700LAC, Arson, USA), and samples were examined to determine the origin of failure. Results: The mean fracture resistance was 788.90±99.56 N for shoulder margins and 991.75±112.00 N for chamfer margins. The Student's t-test revealed a statistically significant difference between groups (P=0.001). Conclusion: The results of this study indicate that the marginal design of zirconia cores affects their fracture resistance. A chamfer margin could improve the biomechanical performance of posterior single zirconia crown restorations, possibly because of the strong unity and rounded internal angle of the chamfer margin. PMID:22457839
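
    The reported comparison can be reproduced from the summary statistics alone; a minimal sketch using scipy (assuming equal variances, as the Student's t-test does):

    from scipy.stats import ttest_ind_from_stats

    # Group statistics as reported: chamfer 991.75 +/- 112.00 N,
    # shoulder 788.90 +/- 99.56 N, n = 10 dies per group
    t, p = ttest_ind_from_stats(991.75, 112.00, 10, 788.90, 99.56, 10)
    print(f"t = {t:.2f}, p = {p:.4f}")   # a significant difference, as reported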

  13. Cone beam computed tomography imaging as a primary diagnostic tool for computer-guided surgery and CAD-CAM interim removable and fixed dental prostheses.

    PubMed

    Charette, Jyme R; Goldberg, Jack; Harris, Bryan T; Morton, Dean; Llop, Daniel R; Lin, Wei-Shao

    2016-08-01

    This article describes a digital workflow using cone beam computed tomography imaging as the primary diagnostic tool in the virtual planning of the computer-guided surgery and fabrication of a maxillary interim complete removable dental prosthesis and mandibular interim implant-supported complete fixed dental prosthesis with computer-aided design and computer-aided manufacturing technology. Diagnostic impressions (conventional or digital) and casts are unnecessary in this proposed digital workflow, providing clinicians with an alternative treatment in the indicated clinical scenario. PMID:27086108

  14. Some Workplace Effects of CAD and CAM.

    ERIC Educational Resources Information Center

    Ebel, Karl-H.; Ulrich, Erhard

    1987-01-01

    Examines the impact of computer-aided design (CAD) and computer-aided manufacturing (CAM) on employment, work organization, working conditions, job content, training, and industrial relations in several countries. Finds little evidence of negative employment effects since productivity gains are offset by various compensatory factors. (Author/CH)

  15. Pipe Drafting with CAD. Teacher Edition.

    ERIC Educational Resources Information Center

    Smithson, Buddy

    This teacher's guide contains nine units of instruction for a course on computer-assisted pipe drafting. The course covers the following topics: introduction to pipe drafting with CAD (computer-assisted design); flow diagrams; pipe and pipe components; valves; piping plans and elevations; isometrics; equipment fabrication drawings; piping design…

  16. Computationally efficient prediction of area per lipid

    NASA Astrophysics Data System (ADS)

    Chaban, Vitaly

    2014-11-01

    Area per lipid (APL) is an important property of biological and artificial membranes. Newly constructed bilayers are characterized by their APL, and newly elaborated force fields must reproduce APL. Computer simulations of APL are very expensive due to slow conformational dynamics. The speed of the simulated dynamics increases exponentially with temperature, while the dependence of APL on temperature is linear over the entire temperature range. I provide numerical evidence that the thermal expansion coefficient of a lipid bilayer can be computed at elevated temperatures and extrapolated to the temperature of interest. Thus, sampling times to predict accurate APL are reduced by a factor of ∼10.
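
    The proposed shortcut amounts to a linear fit at temperatures where sampling is cheap, extrapolated down to the temperature of interest. A sketch with made-up APL values (all numbers below are illustrative assumptions, not data from the paper):

    import numpy as np

    # Illustrative high-temperature APL samples (nm^2), linear in T
    T = np.array([330.0, 350.0, 370.0, 390.0])
    apl = np.array([0.655, 0.671, 0.688, 0.704])

    slope, intercept = np.polyfit(T, apl, 1)   # thermal-expansion-style fit
    T_target = 310.0                           # temperature of interest
    print(f"extrapolated APL at {T_target} K: "
          f"{slope * T_target + intercept:.3f} nm^2")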

  17. Efficient Parallel Engineering Computing on Linux Workstations

    NASA Technical Reports Server (NTRS)

    Lou, John Z.

    2010-01-01

    A C software module has been developed that creates lightweight processes (LWPs) dynamically to achieve parallel computing performance in a variety of engineering simulation and analysis applications to support NASA and DoD project tasks. The required interface between the module and the application it supports is simple, minimal and almost completely transparent to the user applications, and it can achieve nearly ideal computing speed-up on multi-CPU engineering workstations of all operating system platforms. The module can be integrated into an existing application (C, C++, Fortran and others) either as part of a compiled module or as a dynamically linked library (DLL).

  18. CAD software lights up the environmental scene

    SciTech Connect

    Basta, N.

    1996-01-01

    There seems to be a natural affinity between the data requirements of environmental work and computer-aided design (CAD) software. Perhaps the best example of this is the famous shots of the ozone hole produced by computer-enhanced satellite imagery in the mid-1980s. Once this image was published, the highly abstract discussion of ozone concentrations and arctic wind patterns suddenly became very real. At ground level, in the day-to-day work of environmental managers and site restorers, CAD software is proving its value over and over. Graphic images are a convenient, readily understandable way of presenting the large volumes of data produced by environmental projects. With the latest CAD systems, the work of specifying process equipment or subsurface conditions can be reused again and again as projects move from the study and design phase to the construction or remediation phases. An important subset of CAD is geographic information systems (GIS), which are used to organize data on a site-specific basis. Like CAD itself, GIS reaches out beyond the borders of the computer screen or printout, for example by making use of the Global Positioning System (a satellite-based method of locating position precisely) and by matching current with historical data. Good GIS software can also make use of the large database of geological data produced by government and industry, thus saving on surveying costs and exploratory well construction.

  19. Efficient Computation Of Manipulator Inertia Matrix

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1991-01-01

    Improved method for computation of manipulator inertia matrix developed, based on concept of spatial inertia of composite rigid body. Required for implementation of advanced dynamic-control schemes as well as dynamic simulation of manipulator motion. Motivated by increasing demand for fast algorithms to provide real-time control and simulation capability and, particularly, need for faster-than-real-time simulation capability, required in many anticipated space teleoperation applications.

  20. Experimental Realization of High-Efficiency Counterfactual Computation

    NASA Astrophysics Data System (ADS)

    Kong, Fei; Ju, Chenyong; Huang, Pu; Wang, Pengfei; Kong, Xi; Shi, Fazhan; Jiang, Liang; Du, Jiangfeng

    2015-08-01

    Counterfactual computation (CFC) exemplifies the fascinating quantum process by which the result of a computation may be learned without actually running the computer. In previous experimental studies, the counterfactual efficiency is limited to below 50%. Here we report an experimental realization of the generalized CFC protocol, in which the counterfactual efficiency can break the 50% limit and even approach unity in principle. The experiment is performed with the spins of a negatively charged nitrogen-vacancy color center in diamond. Taking advantage of the quantum Zeno effect, the computer can remain in the not-running subspace due to the frequent projection by the environment, while the computation result can be revealed by final detection. A counterfactual efficiency of up to 85% has been demonstrated in our experiment, which opens the possibility of many exciting applications of CFC, such as high-efficiency quantum integration and imaging.

  1. Efficient Associative Computation with Discrete Synapses.

    PubMed

    Knoblauch, Andreas

    2016-01-01

    Neural associative networks are a promising computational paradigm for both modeling neural circuits of the brain and implementing associative memory and Hebbian cell assemblies in parallel VLSI or nanoscale hardware. Previous work has extensively investigated synaptic learning in linear models of the Hopfield type and simple nonlinear models of the Steinbuch/Willshaw type. Optimized Hopfield networks of size n can store a large number of about n²/k memories of size k (or associations between them) but require real-valued synapses, which are expensive to implement and can store at most C = 0.72 bits per synapse. Willshaw networks can store a much smaller number of about n²/k² memories but get along with much cheaper binary synapses. Here I present a learning model employing synapses with discrete synaptic weights. For optimal discretization parameters, this model can store, up to a factor ζ close to one, the same number of memories as for optimized Hopfield-type learning: for example, ζ = 0.64 for binary synapses, ζ = 0.88 for 2-bit (four-state) synapses, ζ = 0.96 for 3-bit (eight-state) synapses, and ζ > 0.99 for 4-bit (16-state) synapses. The model also provides the theoretical framework to determine optimal discretization parameters for computer implementations or brainlike parallel hardware including structural plasticity. In particular, as recently shown for the Willshaw network, it is possible to store C^I = 1 bit per computer bit and up to C^S = log n bits per nonsilent synapse, whereas the absolute number of stored memories can be much larger than for the Willshaw model. PMID:26599711
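
    The binary-synapse (Willshaw) limiting case that the paper generalizes can be illustrated in a few lines: clipped Hebbian storage followed by winner-take-all recall. Network size, pattern activity and memory count below are illustrative assumptions.

    import numpy as np

    def store(patterns_x, patterns_y):
        # Willshaw-style binary synapses: OR of Hebbian outer products
        W = np.zeros((patterns_y.shape[1], patterns_x.shape[1]), dtype=np.uint8)
        for x, y in zip(patterns_x, patterns_y):
            W |= np.outer(y, x)
        return W

    def recall(W, x, k):
        # Winner-take-all: the k units with the highest dendritic sums fire
        s = W.astype(np.int32) @ x
        y = np.zeros(W.shape[0], dtype=np.uint8)
        y[np.argsort(s)[-k:]] = 1
        return y

    rng = np.random.default_rng(1)
    n, k, m = 1000, 10, 200            # units, pattern activity, memory count
    X = np.zeros((m, n), dtype=np.uint8)
    for row in X:
        row[rng.choice(n, k, replace=False)] = 1
    W = store(X, X)                    # auto-association
    print(np.array_equal(recall(W, X[0], k), X[0]))   # noiseless cue recovered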

  2. Reliability and Efficiency of a DNA-Based Computation

    NASA Astrophysics Data System (ADS)

    Deaton, R.; Garzon, M.; Murphy, R. C.; Rose, J. A.; Franceschetti, D. R.; Stevens, S. E., Jr.

    1998-01-01

    DNA-based computing uses the tendency of nucleotide bases to bind (hybridize) in preferred combinations to do computation. Depending on reaction conditions, oligonucleotides can bind despite noncomplementary base pairs. These mismatched hybridizations are a source of false positives and negatives, which limit the efficiency and scalability of DNA-based computing. The ability of specific base sequences to support error-tolerant Adleman-style computation is analyzed, and criteria are proposed to increase reliability and efficiency. A method is given to calculate reaction conditions from estimates of DNA melting.

  3. Efficient computation of parameter confidence intervals

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.

    1987-01-01

    An important step in system identification of aircraft is the estimation of stability and control derivatives from flight data along with an assessment of parameter accuracy. When the maximum likelihood estimation technique is used, parameter accuracy is commonly assessed by the Cramer-Rao lower bound. It is known, however, that in some cases the lower bound can be substantially different from the parameter variance. Under these circumstances the Cramer-Rao bounds may be misleading as an accuracy measure. This paper discusses the confidence interval estimation problem based on likelihood ratios, which offers a more general estimate of the error bounds. Four approaches are considered for computing confidence intervals of maximum likelihood parameter estimates. Each approach is applied to real flight data and compared.
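
    A minimal sketch of a likelihood-ratio interval for a single parameter, using a Gaussian mean on synthetic data as a stand-in for a stability derivative. For this linear case the interval coincides with the asymptotic (Cramer-Rao-style) bound; for nonlinear flight-dynamics models the two can differ, which is the paper's point.

    import numpy as np

    rng = np.random.default_rng(2)
    data = rng.normal(3.0, 2.0, size=40)   # synthetic stand-in for flight data

    def loglik(mu, sigma, y):
        return -0.5 * np.sum(((y - mu) / sigma) ** 2) - len(y) * np.log(sigma)

    mu_hat, sigma_hat = data.mean(), data.std()
    ll_max = loglik(mu_hat, sigma_hat, data)

    # 95% likelihood-ratio interval: mu with 2*(ll_max - ll(mu)) <= 3.84
    # (chi-squared critical value, one degree of freedom)
    grid = np.linspace(mu_hat - 3.0, mu_hat + 3.0, 2001)
    kept = [mu for mu in grid
            if 2.0 * (ll_max - loglik(mu, sigma_hat, data)) <= 3.84]
    print(f"likelihood-ratio interval: [{min(kept):.3f}, {max(kept):.3f}]")
    print(f"asymptotic interval: {mu_hat:.3f} "
          f"+/- {1.96 * sigma_hat / np.sqrt(len(data)):.3f}")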

  4. Efficient tree codes on SIMD computer architectures

    NASA Astrophysics Data System (ADS)

    Olson, Kevin M.

    1996-11-01

    This paper describes changes made to a previous implementation of an N -body tree code developed for a fine-grained, SIMD computer architecture. These changes include (1) switching from a balanced binary tree to a balanced oct tree, (2) addition of quadrupole corrections, and (3) having the particles search the tree in groups rather than individually. An algorithm for limiting errors is also discussed. In aggregate, these changes have led to a performance increase of over a factor of 10 compared to the previous code. For problems several times larger than the processor array, the code now achieves performance levels of ~ 1 Gflop on the Maspar MP-2 or roughly 20% of the quoted peak performance of this machine. This percentage is competitive with other parallel implementations of tree codes on MIMD architectures. This is significant, considering the low relative cost of SIMD architectures.

  5. Efficient algorithm to compute the Berry conductivity

    NASA Astrophysics Data System (ADS)

    Dauphin, A.; Müller, M.; Martin-Delgado, M. A.

    2014-07-01

    We propose and construct a numerical algorithm to calculate the Berry conductivity in topological band insulators. The method is applicable to cold atom systems as well as solid state setups, both for the insulating case where the Fermi energy lies in the gap between two bulk bands as well as in the metallic regime. This method interpolates smoothly between both regimes. The algorithm is gauge-invariant by construction, efficient, and yields the Berry conductivity with known and controllable statistical error bars. We apply the algorithm to several paradigmatic models in the field of topological insulators, including Haldane's model on the honeycomb lattice, the multi-band Hofstadter model, and the BHZ model, which describes the 2D spin Hall effect observed in CdTe/HgTe/CdTe quantum well heterostructures.

  6. An Evaluation of Internet-Based CAD Collaboration Tools

    ERIC Educational Resources Information Center

    Smith, Shana Shiang-Fong

    2004-01-01

    Due to the now widespread use of the Internet, most companies now require computer aided design (CAD) tools that support distributed collaborative design on the Internet. Such CAD tools should enable designers to share product models, as well as related data, from geographically distant locations. However, integrated collaborative design…

  7. Making a Case for CAD in the Curriculum.

    ERIC Educational Resources Information Center

    Threlfall, K. Denise

    1995-01-01

    Computer-assisted design (CAD) technology is transforming the apparel industry. Students of fashion merchandising and clothing design must be prepared on state-of-the-art equipment. ApparelCAD software is one example of courseware for instruction in pattern design and production. (SK)

  8. From Bad to CAD: Maintaining Records of Maintenance Projects.

    ERIC Educational Resources Information Center

    Shea, Diane C.

    1994-01-01

    A computer-assisted design (CAD) software program is used in a Connecticut school district to graphically provide information on maintenance projects by school and category of project over time. CAD supplies a computerized building plan, a "foot-print," used as a recordkeeping system for maintenance projects. (MLF)

  9. On the Use of CAD and Cartesian Methods for Aerodynamic Optimization

    NASA Technical Reports Server (NTRS)

    Nemec, M.; Aftosmis, M. J.; Pulliam, T. H.

    2004-01-01

    The objective for this paper is to present the development of an optimization capability for Cart3D, a Cartesian inviscid-flow analysis package. We present the construction of a new optimization framework and we focus on the following issues: 1) Component-based geometry parameterization approach using parametric-CAD models and CAPRI. A novel geometry server is introduced that addresses the issue of parallel efficiency while only sparingly consuming CAD resources; 2) The use of genetic and gradient-based algorithms for three-dimensional aerodynamic design problems. The influence of noise on the optimization methods is studied. Our goal is to create a responsive and automated framework that efficiently identifies design modifications that result in substantial performance improvements. In addition, we examine the architectural issues associated with the deployment of a CAD-based approach in a heterogeneous parallel computing environment that contains both CAD workstations and dedicated compute engines. We demonstrate the effectiveness of the framework for a design problem that features topology changes and complex geometry.

  10. Texture functions in image analysis: A computationally efficient solution

    NASA Technical Reports Server (NTRS)

    Cox, S. C.; Rose, J. F.

    1983-01-01

    A computationally efficient means for calculating texture measurements from digital images by use of the co-occurrence technique is presented. The calculation of the statistical descriptors of image texture and a solution that circumvents the need for calculating and storing a co-occurrence matrix are discussed. The results show that existing efficient algorithms for calculating sums, sums of squares, and cross products can be used to compute complex co-occurrence relationships directly from the digital image input.
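
    The trick is that statistics of the form sum over (i, j) of P(i, j) f(i, j) are expectations over displaced pixel pairs, so they can be accumulated from sums, sums of squares, and cross products without materializing the co-occurrence matrix. A sketch for two common co-occurrence-based measures (the non-negative displacement convention is an assumption):

    import numpy as np

    def cooccurrence_stats(img, dy, dx):
        """Texture measures for a non-negative displacement (dy, dx),
        computed directly from pixel pairs rather than from an explicit
        co-occurrence matrix."""
        h, w = img.shape
        a = img[:h - dy, :w - dx].astype(np.float64)   # reference pixels
        b = img[dy:, dx:].astype(np.float64)           # co-occurring pixels
        contrast = np.mean((a - b) ** 2)               # sum_ij P(i,j)(i-j)^2
        correlation = (np.mean((a - a.mean()) * (b - b.mean()))
                       / (a.std() * b.std()))
        return contrast, correlation

    img = (np.random.default_rng(3).random((128, 128)) * 256).astype(np.uint8)
    print(cooccurrence_stats(img, 0, 1))   # horizontal neighbor displacement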

  11. Computationally efficient Bayesian inference for inverse problems.

    SciTech Connect

    Marzouk, Youssef M.; Najm, Habib N.; Rahn, Larry A.

    2007-10-01

    Bayesian statistics provides a foundation for inference from noisy and incomplete data, a natural mechanism for regularization in the form of prior information, and a quantitative assessment of uncertainty in the inferred results. Inverse problems - representing indirect estimation of model parameters, inputs, or structural components - can be fruitfully cast in this framework. Complex and computationally intensive forward models arising in physical applications, however, can render a Bayesian approach prohibitive. This difficulty is compounded by high-dimensional model spaces, as when the unknown is a spatiotemporal field. We present new algorithmic developments for Bayesian inference in this context, showing strong connections with the forward propagation of uncertainty. In particular, we introduce a stochastic spectral formulation that dramatically accelerates the Bayesian solution of inverse problems via rapid evaluation of a surrogate posterior. We also explore dimensionality reduction for the inference of spatiotemporal fields, using truncated spectral representations of Gaussian process priors. These new approaches are demonstrated on scalar transport problems arising in contaminant source inversion and in the inference of inhomogeneous material or transport properties. We also present a Bayesian framework for parameter estimation in stochastic models, where intrinsic stochasticity may be intermingled with observational noise. Evaluation of a likelihood function may not be analytically tractable in these cases, and thus several alternative Markov chain Monte Carlo (MCMC) schemes, operating on the product space of the observations and the parameters, are introduced.

  12. Duality quantum computer and the efficient quantum simulations

    NASA Astrophysics Data System (ADS)

    Wei, Shi-Jie; Long, Gui-Lu

    2016-03-01

    Duality quantum computing is a new mode of a quantum computer to simulate a moving quantum computer passing through a multi-slit. It exploits the particle wave duality property for computing. A quantum computer with n qubits and a qudit simulates a moving quantum computer with n qubits passing through a d-slit. Duality quantum computing can realize an arbitrary sum of unitaries and therefore a general quantum operator, which is called a generalized quantum gate. All linear bounded operators can be realized by the generalized quantum gates, and unitary operators are just the extreme points of the set of generalized quantum gates. Duality quantum computing provides flexibility and a clear physical picture in designing quantum algorithms, and serves as a powerful bridge between quantum and classical algorithms. In this paper, after a brief review of the theory of duality quantum computing, we will concentrate on the applications of duality quantum computing in simulations of Hamiltonian systems. We will show that duality quantum computing can efficiently simulate quantum systems by providing descriptions of the recent efficient quantum simulation algorithm of Childs and Wiebe (Quantum Inf Comput 12(11-12):901-924, 2012) for the fast simulation of quantum systems with a sparse Hamiltonian, and the quantum simulation algorithm by Berry et al. (Phys Rev Lett 114:090502, 2015), which provides exponential improvement in precision for simulating systems with a sparse Hamiltonian.

  13. Earthquake detection through computationally efficient similarity search

    PubMed Central

    Yoon, Clara E.; O’Reilly, Ossian; Bergen, Karianne J.; Beroza, Gregory C.

    2015-01-01

    Seismology is experiencing rapid growth in the quantity of data, which has outpaced the development of processing algorithms. Earthquake detection—identification of seismic events in continuous data—is a fundamental operation for observational seismology. We developed an efficient method to detect earthquakes using waveform similarity that overcomes the disadvantages of existing detection methods. Our method, called Fingerprint And Similarity Thresholding (FAST), can analyze a week of continuous seismic waveform data in less than 2 hours, or 140 times faster than autocorrelation. FAST adapts a data mining algorithm, originally designed to identify similar audio clips within large databases; it first creates compact “fingerprints” of waveforms by extracting key discriminative features, then groups similar fingerprints together within a database to facilitate fast, scalable search for similar fingerprint pairs, and finally generates a list of earthquake detections. FAST detected most (21 of 24) cataloged earthquakes and 68 uncataloged earthquakes in 1 week of continuous data from a station located near the Calaveras Fault in central California, achieving detection performance comparable to that of autocorrelation, with some additional false detections. FAST is expected to realize its full potential when applied to extremely long duration data sets over a distributed network of seismic stations. The widespread application of FAST has the potential to aid in the discovery of unexpected seismic signals, improve seismic monitoring, and promote a greater understanding of a variety of earthquake processes. PMID:26665176
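
    A toy version of the fingerprint-and-group idea, drastically simplified relative to FAST (the real feature extraction, hashing, and thresholds differ): windows sharing many strong spectral bins are reported as candidate similar pairs, without the all-pairs comparison that autocorrelation needs. All window and threshold parameters are illustrative assumptions.

    import numpy as np
    from collections import defaultdict

    def fingerprint(window, top_k=8):
        # Compact "fingerprint": indices of the strongest spectral bins
        spec = np.abs(np.fft.rfft(window * np.hanning(len(window))))
        return np.argsort(spec)[-top_k:]

    def find_similar_pairs(trace, win=256, hop=64, min_common=4):
        buckets = defaultdict(list)    # spectral bin -> windows containing it
        n_windows = 0
        for i in range(0, len(trace) - win, hop):
            for feat in fingerprint(trace[i:i + win]):
                buckets[int(feat)].append(n_windows)
            n_windows += 1
        pairs = defaultdict(int)
        for members in buckets.values():
            for a in members:
                for b in members:
                    if a + 4 <= b:     # skip heavily overlapping neighbors
                        pairs[(a, b)] += 1
        return sorted((a, b) for (a, b), c in pairs.items() if c >= min_common)

    rng = np.random.default_rng(4)
    trace = 0.1 * rng.standard_normal(5000)
    t = np.linspace(0.0, 1.0, 256)
    pulse = np.sin(40 * t) + np.sin(90 * t) + np.sin(150 * t)   # repeated event
    trace[1000:1256] += pulse
    trace[3000:3256] += pulse
    print(find_similar_pairs(trace))   # pairs of similar-looking windows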

  15. Practical CAD/CAM aspects of CNC prepreg cutting

    NASA Astrophysics Data System (ADS)

    Connolly, Michael L.

    1991-01-01

    The use of CAD/CAM in cutting large quantities of complex shapes out of prepregs is addressed. The advantages of CAD/CAM include reduction of prototype cycles from weeks to days, improved part quality resulting from accurately cut plies, safer and more efficient cutting operations, and lower direct labor costs.

  16. Efficiently modeling neural networks on massively parallel computers

    NASA Technical Reports Server (NTRS)

    Farber, Robert M.

    1993-01-01

    Neural networks are a very useful tool for analyzing and modeling complex real world systems. Applying neural network simulations to real world problems generally involves large amounts of data and massive amounts of computation. To efficiently handle the computational requirements of large problems, we have implemented at Los Alamos a highly efficient neural network compiler for serial computers, vector computers, vector parallel computers, and fine grain SIMD computers such as the CM-2 connection machine. This paper describes the mapping used by the compiler to implement feed-forward backpropagation neural networks for a SIMD (Single Instruction Multiple Data) architecture parallel computer. Thinking Machines Corporation has benchmarked our code at 1.3 billion interconnects per second (approximately 3 gigaflops) on a 64,000-processor CM-2 connection machine (Singer 1990). This mapping is applicable to other SIMD computers and can be implemented on MIMD computers such as the CM-5 connection machine. Our mapping has virtually no communications overhead with the exception of the communications required for a global summation across the processors (which has a sub-linear runtime growth on the order of O(log(number of processors))). We can efficiently model very large neural networks which have many neurons and interconnects, and our mapping can extend to arbitrarily large networks (within memory limitations) by merging the memory space of separate processors with fast adjacent-processor communications. This paper considers the simulation of only feed-forward neural networks, although the method is extendable to recurrent networks.
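
    The essence of the mapping is that a feed-forward sweep and its backpropagation are batched matrix operations, which data-parallel hardware executes with no communication beyond global reductions. A minimal numpy stand-in for the processor array (network sizes, data, and learning rate are illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(6)
    n_in, n_hid, n_out, batch = 8, 16, 4, 256
    W1 = rng.standard_normal((n_in, n_hid)) * 0.1
    W2 = rng.standard_normal((n_hid, n_out)) * 0.1
    X = rng.standard_normal((batch, n_in))
    Y = rng.standard_normal((batch, n_out))

    for step in range(100):
        # Forward: the whole batch as one matrix product (data-parallel)
        H = np.tanh(X @ W1)
        out = H @ W2
        err = out - Y
        # Backward: again matrix products; the reductions inside them play
        # the role of the global summation mentioned in the abstract
        gW2 = H.T @ err / batch
        gH = err @ W2.T * (1.0 - H ** 2)
        gW1 = X.T @ gH / batch
        W1 -= 0.1 * gW1
        W2 -= 0.1 * gW2
    print(float(np.mean(err ** 2)))    # training error decreases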

  17. Mechanical Drafting with CAD. Teacher Edition.

    ERIC Educational Resources Information Center

    McClain, Gerald R.

    This instructor's manual contains 13 units of instruction for a course on mechanical drafting with options for using computer-aided drafting (CAD). Each unit includes some or all of the following basic components of a unit of instruction: objective sheet, suggested activities for the teacher, assignment sheets and answers to assignment sheets,…

  18. A Case Study in CAD Design Automation

    ERIC Educational Resources Information Center

    Lowe, Andrew G.; Hartman, Nathan W.

    2011-01-01

    Computer-aided design (CAD) software and other product life-cycle management (PLM) tools have become ubiquitous in industry during the past 20 years. Over this time they have continuously evolved, becoming programs with enormous capabilities, but the companies that use them have not evolved their design practices at the same rate. Due to the…

  19. An efficient method for computation of the manipulator inertia matrix

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1989-01-01

    An efficient method of computation of the manipulator inertia matrix is presented. Using spatial notations, the method leads to the definition of the composite rigid-body spatial inertia, which is a spatial representation of the notion of augmented body. The previously proposed methods, the physical interpretations leading to their derivation, and their redundancies are analyzed. The proposed method achieves a greater efficiency by eliminating the redundancy in the intrinsic equations as well as by a better choice of coordinate frame for their projection. In this case, removing the redundancy leads to greater efficiency of the computation in both serial and parallel senses.
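
    As a much-simplified stand-in for the composite rigid-body idea, the inertia matrix of a planar two-link arm with point masses at the link tips can be assembled from link Jacobians; all kinematic parameters below are illustrative assumptions, not the paper's spatial formulation.

    import numpy as np

    def inertia_matrix(q, l=(1.0, 0.8), m=(2.0, 1.5)):
        """Inertia matrix M(q) of a planar 2R arm with point masses at the
        link tips, via M = sum_i m_i J_i^T J_i (translational terms only)."""
        q1, q2 = q
        s1, c1 = np.sin(q1), np.cos(q1)
        s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
        J1 = np.array([[-l[0] * s1, 0.0],
                       [ l[0] * c1, 0.0]])
        J2 = np.array([[-l[0] * s1 - l[1] * s12, -l[1] * s12],
                       [ l[0] * c1 + l[1] * c12,  l[1] * c12]])
        return m[0] * J1.T @ J1 + m[1] * J2.T @ J2

    M = inertia_matrix((0.3, 0.7))
    print(M)                       # symmetric positive definite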

  20. Refocusing CAD and CAE on O and M

    SciTech Connect

    Podczerwinski, C.A.; Wittenauer, J.P.; Irish, J.D.

    1995-09-01

    In the late 1980s, computer-aided design (CAD) software started to migrate from larger computer equipment to personal computers. Since then, competition in the desktop computer market transformed the personal computer (PC) into an office equipment commodity. The technological improvements accompanying that change transformed CAD from an expensive, specialized tool to an office software commodity that is a graphical counterpart to the word processor. The cost reductions and performance improvements have made many application concepts, previously too cumbersome to apply, cost effective and helpful. Applying these ideas has dramatically increased the level of CAD usage in the authors' offices. Part of that growth has been an increasing number of projects directly aimed at helping reduce operation and maintenance (O and M) costs. This paper describes those projects and discusses the application of CAD to O and M work.

  1. Revisiting the Efficiency of Malicious Two-Party Computation

    NASA Astrophysics Data System (ADS)

    Woodruff, David P.

    In a recent paper Mohassel and Franklin study the efficiency of secure two-party computation in the presence of malicious behavior. Their aim is to make classical solutions to this problem, such as zero-knowledge compilation, more efficient. The authors provide several schemes which are the most efficient to date. We propose a modification to their main scheme using expanders. Our modification asymptotically improves at least one measure of efficiency of all known schemes. We also point out an error, and improve the analysis of one of their schemes.

  2. Effect of image variation on computer-aided detection systems

    NASA Astrophysics Data System (ADS)

    Rabbani, S. P.; Maduskar, P.; Philipsen, R. H. H. M.; Hogeweg, L.; van Ginneken, B.

    2014-03-01

    As the application of Computer Aided Detection (CAD) systems becomes increasingly important in the medical imaging field due to the advantages they offer, it is essential to know their weaknesses and to find proper solutions for them. A common practical problem that affects CAD system performance is that dissimilarity between training and testing datasets degrades the efficiency of CAD systems. In this paper normalizing images is proposed; three different normalization methods are applied to chest radiographs, namely (1) simple normalization, (2) local normalization, and (3) multi-band local normalization. The performance of a supervised lung segmentation CAD system is evaluated on chest radiographs normalized with these three methods, in terms of the Jaccard index. In conclusion, normalization enhances the performance of the CAD system, and among the three methods, local normalization and multi-band local normalization improve performance more significantly than simple normalization.
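
    One plausible reading of the local normalization step, sketched with scipy: subtract the local mean and divide by the local standard deviation estimated in a sliding window. The window size and epsilon are illustrative assumptions, and the multi-band variant would repeat this at several window scales.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def local_normalize(img, size=65, eps=1e-6):
        # Local mean and variance from box filters over a sliding window
        img = img.astype(np.float64)
        mean = uniform_filter(img, size)
        var = uniform_filter(img ** 2, size) - mean ** 2
        return (img - mean) / np.sqrt(np.maximum(var, eps))

    chest = np.random.default_rng(5).random((512, 512))   # radiograph stand-in
    print(local_normalize(chest).std())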

  3. A scheme for efficient quantum computation with linear optics

    NASA Astrophysics Data System (ADS)

    Knill, E.; Laflamme, R.; Milburn, G. J.

    2001-01-01

    Quantum computers promise to increase greatly the efficiency of solving problems such as factoring large integers, combinatorial optimization and quantum physics simulation. One of the greatest challenges now is to implement the basic quantum-computational elements in a physical system and to demonstrate that they can be reliably and scalably controlled. One of the earliest proposals for quantum computation is based on implementing a quantum bit with two optical modes containing one photon. The proposal is appealing because of the ease with which photon interference can be observed. Until now, it suffered from the requirement for non-linear couplings between optical modes containing few photons. Here we show that efficient quantum computation is possible using only beam splitters, phase shifters, single photon sources and photo-detectors. Our methods exploit feedback from photo-detectors and are robust against errors from photon loss and detector inefficiency. The basic elements are accessible to experimental investigation with current technology.

  4. Do Computers Improve the Drawing of a Geometrical Figure for 10 Year-Old Children?

    ERIC Educational Resources Information Center

    Martin, Perrine; Velay, Jean-Luc

    2012-01-01

    Nowadays, computer aided design (CAD) is widely used by designers. Would children learn to draw more easily and more efficiently if they were taught with computerised tools? To answer this question, we made an experiment designed to compare two methods for children to do the same drawing: the classical "pen and paper" method and a CAD method. We…

  5. Overview of NASA MSFC IEC Multi-CAD Collaboration Capability

    NASA Technical Reports Server (NTRS)

    Moushon, Brian; McDuffee, Patrick

    2005-01-01

    This viewgraph presentation provides an overview of a Design and Data Management System (DDMS) for Computer Aided Design (CAD) collaboration in order to support the Integrated Engineering Capability (IEC) at Marshall Space Flight Center (MSFC).

  6. CAD-CAE in Electrical Machines and Drives Teaching.

    ERIC Educational Resources Information Center

    Belmans, R.; Geysen, W.

    1988-01-01

    Describes the use of computer-aided design (CAD) techniques in teaching the design of electrical motors. Approaches described include three technical viewpoints, such as electromagnetics, thermal, and mechanical aspects. Provides three diagrams, a table, and conclusions. (YP)

  7. CAD/CAM ceramic restorations in the operatory and laboratory.

    PubMed

    Fasbinder, Dennis J

    2003-08-01

    Computer assisted design/computer assisted machining (CAD/CAM) technology has received considerable clinical and research interest from modern dental practices as a means of delivering all-ceramic restorations. The CEREC System offers CAD/CAM dental technology designed for clinical use by dentists, as well as a separate system designed for dental laboratory technicians. The CEREC 3 system is indicated for dental operatory applications, and the CEREC inLab system is indicated for dental laboratory applications. Although both systems rely on similar CAD/CAM technology, several significant differences exist in the processing techniques involved, restorative materials used, and types of restoration provided. PMID:14692164

  8. I/O-Efficient Scientific Computation Using TPIE

    NASA Technical Reports Server (NTRS)

    Vengroff, Darren Erik; Vitter, Jeffrey Scott

    1996-01-01

    In recent years, input/output (I/O)-efficient algorithms for a wide variety of problems have appeared in the literature. However, systems specifically designed to assist programmers in implementing such algorithms have remained scarce. TPIE is a system designed to support I/O-efficient paradigms for problems from a variety of domains, including computational geometry, graph algorithms, and scientific computation. The TPIE interface frees programmers from having to deal not only with explicit read and write calls, but also the complex memory management that must be performed for I/O-efficient computation. In this paper we discuss applications of TPIE to problems in scientific computation. We discuss algorithmic issues underlying the design and implementation of the relevant components of TPIE and present performance results of programs written to solve a series of benchmark problems using our current TPIE prototype. Some of the benchmarks we present are based on the NAS parallel benchmarks while others are of our own creation. We demonstrate that the central processing unit (CPU) overhead required to manage I/O is small and that even with just a single disk, the I/O overhead of I/O-efficient computation ranges from negligible to the same order of magnitude as CPU time. We conjecture that if we use a number of disks in parallel this overhead can be all but eliminated.
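
    TPIE itself is a C++ system; the access pattern it supports can be suggested by an external merge sort in Python, which reads and writes each record only twice regardless of how little memory is available. The run size below is an illustrative assumption.

    import heapq
    import random
    import tempfile
    from itertools import islice

    def external_sort(path, out_path, run_size=50_000):
        # Phase 1: sort memory-sized runs and spill them to temporary files
        runs = []
        with open(path) as f:
            while True:
                lines = list(islice(f, run_size))
                if not lines:
                    break
                lines.sort()
                run = tempfile.TemporaryFile(mode="w+")
                run.writelines(lines)
                run.seek(0)
                runs.append(run)
        # Phase 2: stream a k-way merge of the sorted runs to the output
        with open(out_path, "w") as out:
            out.writelines(heapq.merge(*runs))
        for run in runs:
            run.close()

    with open("numbers.txt", "w") as f:
        f.writelines(f"{random.random():.6f}\n" for _ in range(200_000))
    external_sort("numbers.txt", "sorted.txt")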

  9. Equilibrium analysis of the efficiency of an autonomous molecular computer

    NASA Astrophysics Data System (ADS)

    Rose, John A.; Deaton, Russell J.; Hagiya, Masami; Suyama, Akira

    2002-02-01

    In the whiplash polymerase chain reaction (WPCR), autonomous molecular computation is implemented in vitro by the recursive, self-directed polymerase extension of a mixture of DNA hairpins. Although computational efficiency is known to be reduced by a tendency for DNAs to self-inhibit by backhybridization, both the magnitude of this effect and its dependence on the reaction conditions have remained open questions. In this paper, the impact of backhybridization on WPCR efficiency is addressed by modeling the recursive extension of each strand as a Markov chain. The extension efficiency per effective polymerase-DNA encounter is then estimated within the framework of a statistical thermodynamic model. Model predictions are shown to provide close agreement with the premature halting of computation reported in a recent in vitro WPCR implementation, a particularly significant result, given that backhybridization had been discounted as the dominant error process. The scaling behavior further indicates completion times to be sufficiently long to render WPCR-based massive parallelism infeasible. A modified architecture, PNA-mediated WPCR (PWPCR) is then proposed in which the occupancy of backhybridized hairpins is reduced by targeted PNA2/DNA triplex formation. The efficiency of PWPCR is discussed using a modified form of the model developed for WPCR. Predictions indicate the PWPCR efficiency is sufficient to allow the implementation of autonomous molecular computation on a massive scale.

  10. A SINDA thermal model using CAD/CAE technologies

    NASA Technical Reports Server (NTRS)

    Rodriguez, Jose A.; Spencer, Steve

    1992-01-01

    The approach to thermal analysis described by this paper is a technique that incorporates Computer Aided Design (CAD) and Computer Aided Engineering (CAE) to develop a thermal model that has the advantages of Finite Element Methods (FEM) without abandoning the unique advantages of Finite Difference Methods (FDM) in the analysis of thermal systems. The incorporation of existing CAD geometry, the powerful use of a pre- and post-processor, and the ability to do interdisciplinary analysis will be described.

  11. Complete denture fabrication with CAD/CAM record bases.

    PubMed

    McLaughlin, J Bryan; Ramos, Van

    2015-10-01

    One of the primary goals of new materials and processes for complete denture fabrication has been to reduce polymerization shrinkage. The introduction of computer-aided design and computer-aided manufacturing (CAD/CAM) technology into complete denture fabrication has eliminated polymerization shrinkage in the definitive denture. The use of CAD/CAM record bases for complete denture fabrication can provide a better-fitting denture with fewer postprocessing occlusal errors. PMID:26139040

  12. CAD in the processing plant environment or managing the CAD revolution

    SciTech Connect

    Woolbert, M.A.; Bennett, R.S.; Haring, W.I.

    1985-10-01

    The author presents a case report on the use of a Computer Aided Design/Computer Aided Drafting (CAD) system. Illustrated is a four-workstation system, in addition to which there are two 70-megabyte disk drives, a check plotter and a 24-inch-wide electrostatic plotter, a 300-megabyte disk for on-line storage, a tape drive for archive and backup, and a 1-megabyte network process server. It is a distributed-logic system. The author states that CAD systems both inexpensive enough and powerful enough for the plant environment are relatively new on the market, made possible by the advent of super microcomputers. Also discussed is the impact the CAD system has had on productivity.

  13. Volume-averaged SAR in adult and child head models when using mobile phones: a computational study with detailed CAD-based models of commercial mobile phones.

    PubMed

    Keshvari, Jafar; Heikkilä, Teemu

    2011-12-01

    Previous studies comparing SAR difference in the head of children and adults used highly simplified generic models or half-wave dipole antennas. The objective of this study was to investigate the SAR difference in the head of children and adults using realistic EMF sources based on CAD models of commercial mobile phones. Four MRI-based head phantoms were used in the study. CAD models of Nokia 8310 and 6630 mobile phones were used as exposure sources. Commercially available FDTD software was used for the SAR calculations. SAR values were simulated at frequencies 900 MHz and 1747 MHz for Nokia 8310, and 900 MHz, 1747 MHz and 1950 MHz for Nokia 6630. The main finding of this study was that the SAR distribution/variation in the head models highly depends on the structure of the antenna and phone model, which suggests that the type of the exposure source is the main parameter in EMF exposure studies to be focused on. Although the previous findings regarding significant role of the anatomy of the head, phone position, frequency, local tissue inhomogeneity and tissue composition specifically in the exposed area on SAR difference were confirmed, the SAR values and SAR distributions caused by generic source models cannot be extrapolated to the real device exposures. The general conclusion is that from a volume averaged SAR point of view, no systematic differences between child and adult heads were found. PMID:22005524

  14. Full-mouth rehabilitation with monolithic CAD/CAM-fabricated hybrid and all-ceramic materials: A case report and 3-year follow up.

    PubMed

    Selz, Christian F; Vuck, Alexander; Guess, Petra C

    2016-02-01

    Esthetic full-mouth rehabilitation represents a great challenge for clinicians and dental technicians. Computer-aided design/computer-assisted manufacture (CAD/CAM) technology and novel ceramic materials in combination with adhesive cementation provide a reliable, predictable, and economic workflow. Polychromatic feldspathic CAD/CAM ceramics that are specifically designed for anterior indications result in superior esthetics, whereas novel CAD/CAM hybrid ceramics provide sufficient fracture resistance and absorption of the occlusal load in posterior areas. Screw-retained monolithic CAD/CAM lithium disilicate crowns (ie, hybrid abutment crowns) represent a reliable and time- and cost-efficient prosthetic implant solution. This case report details a CAD/CAM approach to the full-arch rehabilitation of a 65-year-old patient with tooth- and implant-supported restorations and provides an overview of the applied CAD/CAM materials and the utilized chairside intraoral scanner. The esthetics, functional occlusion, and gingival and peri-implant tissues remained stable over a follow-up period of 3 years. No signs of fractures within the restorations were observed. PMID:26417616

  15. Popescu-Rohrlich correlations imply efficient instantaneous nonlocal quantum computation

    NASA Astrophysics Data System (ADS)

    Broadbent, Anne

    2016-08-01

    In instantaneous nonlocal quantum computation, two parties cooperate in order to perform a quantum computation on their joint inputs, while being restricted to a single round of simultaneous communication. Previous results showed that instantaneous nonlocal quantum computation is possible, at the cost of an exponential amount of prior shared entanglement (in the size of the input). Here, we show that a linear amount of entanglement suffices, (in the size of the computation), as long as the parties share nonlocal correlations as given by the Popescu-Rohrlich box. This means that communication is not required for efficient instantaneous nonlocal quantum computation. Exploiting the well-known relation to position-based cryptography, our result also implies the impossibility of secure position-based cryptography against adversaries with nonsignaling correlations. Furthermore, our construction establishes a quantum analog of the classical communication complexity collapse under nonsignaling correlations.

  16. Efficient Turing-Universal Computation with DNA Polymers

    NASA Astrophysics Data System (ADS)

    Qian, Lulu; Soloveichik, David; Winfree, Erik

    Bennett's proposed chemical Turing machine is one of the most important thought experiments in the study of the thermodynamics of computation. Yet the sophistication of molecular engineering required to physically construct Bennett's hypothetical polymer substrate and enzymes has deterred experimental implementations. Here we propose a chemical implementation of stack machines - a Turing-universal model of computation similar to Turing machines - using DNA strand displacement cascades as the underlying chemical primitive. More specifically, the mechanism described herein is the addition and removal of monomers from the end of a DNA polymer, controlled by strand displacement logic. We capture the motivating feature of Bennett's scheme: that physical reversibility corresponds to logically reversible computation, and arbitrarily little energy per computation step is required. Further, as a method of embedding logic control into chemical and biological systems, polymer-based chemical computation is significantly more efficient than geometry-free chemical reaction networks.

  17. Coupling Photon Monte Carlo Simulation and CAD Software. Application to X-ray Nondestructive Evaluation

    NASA Astrophysics Data System (ADS)

    Tabary, J.; Glière, A.

    A Monte Carlo radiation transport simulation program, EGS Nova, and a Computer Aided Design software, BRL-CAD, have been coupled within the framework of Sindbad, a Nondestructive Evaluation (NDE) simulation system. In its current status, the program is very valuable in an NDE laboratory context, as it helps simulate the images due to the uncollided and scattered photon fluxes in a single NDE software environment, without having to switch to a separate Monte Carlo code and its parameter set. Numerical validations show good agreement with EGS4-computed and published data. As the program's major drawback is its execution time, computational efficiency improvements are foreseen.

  18. Communication-efficient parallel architectures and algorithms for image computations

    SciTech Connect

    Alnuweiri, H.M.

    1989-01-01

    The main purpose of this dissertation is the design of efficient parallel techniques for image computations which require global operations on image pixels, as well as the development of parallel architectures with special communication features which can support global data movement efficiently. The class of image problems considered in this dissertation involves global operations on image pixels, and irregular (data-dependent) data movement operations. Such problems include histogramming, component labeling, proximity computations, computing the Hough Transform, computing convexity of regions and related properties such as computing the diameter and a smallest area enclosing rectangle for each region. Images with multiple figures and multiple labeled-sets of pixels are also considered. Efficient solutions to such problems involve integer sorting, graph theoretic techniques, and techniques from computational geometry. Although such solutions are not computationally intensive (they all require O(n^2) operations to be performed on an n × n image), they require global communications. The emphasis here is on developing parallel techniques for data movement, reduction, and distribution, which lead to processor-time optimal solutions for such problems on the proposed organizations. The proposed parallel architectures are based on a memory array which can be viewed as an arrangement of memory modules in a k-dimensional space such that the modules are connected to buses placed parallel to the orthogonal axes of the space, and each bus is connected to one processor or a group of processors. It will be shown that such organizations are communication-efficient and are thus highly suited to the image problems considered here, and also to several other classes of problems. The proposed organizations have p processors and O(n^2) words of memory to process n × n images.

  19. CAD-CAM at Bendix Kansas city: the BICAM system

    SciTech Connect

    Witte, D.R.

    1983-04-01

    Bendix Kansas City Division (BEKC) has been involved in Computer Aided Manufacturing (CAM) technology since the late 1950s, when numerical control (N/C) analysts installed computers to aid in N/C tape preparation for numerically controlled machines. Computer Aided Design (CAD) technology was introduced in 1976, when a number of 2D turnkey drafting stations were procured for printed wiring board (PWB) drawing definition and maintenance. In June 1980, CAD-CAM Operations was formed to incorporate an integrated CAD-CAM capability into Bendix operations. In March 1982, a ninth division, Computer Integrated Manufacturing (CIM), was added to the existing eight divisions at Bendix. CIM is a small organization, reporting directly to the general manager, with responsibility for coordinating the overall integration of computer-aided systems at Bendix. As a long-range plan, CIM has adopted a National Bureau of Standards (NBS) architecture titled Factory of the Future. Conceptually, the Bendix CAD-CAM system has a centrally located database which can be accessed by both CAD and CAM tools, processes, and personnel, thus forming an integrated Computer Aided Engineering (CAE) system. This is a key requirement of the Bendix CAD-CAM system that will be presented in more detail.

  20. Put Your Computers in the Most Efficient Environment.

    ERIC Educational Resources Information Center

    Yeaman, Andrew R. J.

    1984-01-01

    Discusses factors that should be considered in selecting video display screens and furniture and designing work spaces for computerized instruction that will provide optimal conditions for student health and learning efficiency. Use of work patterns found to be least stressful by computer workers is also suggested. (MBR)

  1. A Computationally Efficient Algorithm for Aerosol Phase Equilibrium

    SciTech Connect

    Zaveri, Rahul A.; Easter, Richard C.; Peters, Len K.; Wexler, Anthony S.

    2004-10-04

    Three-dimensional models of atmospheric inorganic aerosols need an accurate yet computationally efficient thermodynamic module that is repeatedly used to compute internal aerosol phase state equilibrium. In this paper, we describe the development and evaluation of a computationally efficient numerical solver called MESA (Multicomponent Equilibrium Solver for Aerosols). The unique formulation of MESA allows iteration of all the equilibrium equations simultaneously while maintaining overall mass conservation and electroneutrality in both the solid and liquid phases. MESA is unconditionally stable, shows robust convergence, and typically requires only 10 to 20 single-level iterations (where all activity coefficients and aerosol water content are updated) per internal aerosol phase equilibrium calculation. Accuracy of MESA is comparable to that of the highly accurate Aerosol Inorganics Model (AIM), which uses a rigorous Gibbs free energy minimization approach. Performance evaluation will be presented for a number of complex multicomponent mixtures commonly found in urban and marine tropospheric aerosols.
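    For intuition about what such an equilibrium iteration does, the toy below solves a single dissociation equilibrium AB <-> A + B by damped fixed-point iteration, with mass conservation built into the update. It is a drastic simplification of MESA, which iterates all equilibrium equations of a multicomponent mixture simultaneously while updating activity coefficients and water content; the reaction, constant, and damping factor here are invented.

      import numpy as np

      def equilibrate(K, total, x=0.0, tol=1e-12, max_iter=200):
          # Solve K = [A][B]/[AB] with [A] == [B] == x and
          # [AB] = total - x (mass conservation).
          for _ in range(max_iter):
              x_new = np.sqrt(K * (total - x))  # mass-action update
              x_new = 0.5 * (x + x_new)         # damping aids convergence
              if abs(x_new - x) < tol:
                  break
              x = x_new
          return x

      x = equilibrate(K=1e-2, total=1.0)
      print(x, x * x / (1.0 - x))  # the second value recovers K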

  2. An overview of energy efficiency techniques in cluster computing systems

    SciTech Connect

    Valentini, Giorgio Luigi; Lassonde, Walter; Khan, Samee Ullah; Min-Allah, Nasro; Madani, Sajjad A.; Li, Juan; Zhang, Limin; Wang, Lizhe; Ghani, Nasir; Kolodziej, Joanna; Li, Hongxiang; Zomaya, Albert Y.; Xu, Cheng-Zhong; Balaji, Pavan; Vishnu, Abhinav; Pinel, Fredric; Pecero, Johnatan E.; Kliazovich, Dzmitry; Bouvry, Pascal

    2011-09-10

    Two major constraints demand more consideration for energy efficiency in cluster computing: (a) operational costs, and (b) system reliability. Increasing energy efficiency in cluster systems will reduce energy consumption and excess heat, lower operational costs, and improve system reliability. Based on the energy-power relationship, and the fact that energy consumption can be reduced with strategic power management, we focus in this survey on the characteristics of two main power management technologies: (a) static power management (SPM) systems that utilize low-power components to save energy, and (b) dynamic power management (DPM) systems that utilize software and power-scalable components to optimize energy consumption. We present the current state of the art in both SPM and DPM techniques, citing representative examples. The survey is concluded with a brief discussion and some assumptions about possible future directions that could be explored to improve energy efficiency in cluster computing.

  3. CAD/CAM for optomechatronics

    NASA Astrophysics Data System (ADS)

    Zhou, Haiguang; Han, Min

    2003-10-01

    We focus on CAD/CAM for optomechatronics. We have developed a CAD/CAM package that covers not only mechanics but also optics and electronics. The software can be used for training and education. We introduce mechanical CAD, optical CAD, and electrical CAD, and show how to draw circuit diagrams, mechanical diagrams, and luminous-transmission diagrams, progressing from 2D to 3D drawing. We describe how to create 2D and 3D parts for optomechatronics, edit tool paths, select process parameters, run the post-processor, dynamically display the tool path, and generate the CNC program. We also introduce the joint application of CAD and CAM, aiming to meet the combined requirements of optics, mechanics, and electronics.

  4. ProperCAD: A portable object-oriented parallel environment for VLSI CAD

    NASA Technical Reports Server (NTRS)

    Ramkumar, Balkrishna; Banerjee, Prithviraj

    1993-01-01

    Most parallel algorithms for VLSI CAD proposed to date have one important drawback: they work efficiently only on the machines for which they were designed. As a result, algorithms designed to date are dependent on the architecture for which they were developed and do not port easily to other parallel architectures. A new project under way to address this problem is described. A Portable object-oriented parallel environment for CAD algorithms (ProperCAD) is being developed. The objectives of this research are (1) to develop new parallel algorithms that run in a portable object-oriented environment (CAD algorithms are being developed on a general-purpose platform for portable parallel programming called CARM, together with a truly object-oriented C++ environment specialized for CAD applications); and (2) to design the parallel algorithms around a good sequential algorithm with a well-defined parallel-sequential interface (permitting the parallel algorithm to benefit from future developments in sequential algorithms). One CAD application that has been implemented as part of the ProperCAD project, flat VLSI circuit extraction, is described. The algorithm, its implementation, and its performance on a range of parallel machines are discussed in detail. It currently runs on an Encore Multimax, a Sequent Symmetry, Intel iPSC/2 and i860 hypercubes, an NCUBE 2 hypercube, and a network of Sun Sparc workstations. Performance data are also provided for other applications that were developed: test pattern generation for sequential circuits, parallel logic synthesis, and standard cell placement.

  5. On the Use of Electrooculogram for Efficient Human Computer Interfaces

    PubMed Central

    Usakli, A. B.; Gurkan, S.; Aloise, F.; Vecchiato, G.; Babiloni, F.

    2010-01-01

    The aim of this study is to present electrooculogram (EOG) signals that can be used efficiently for a human computer interface. Establishing an efficient alternative channel for communication without overt speech and hand movements is important for increasing the quality of life of patients suffering from Amyotrophic Lateral Sclerosis or other illnesses that prevent correct limb and facial muscular responses. We have performed several experiments comparing the P300-based BCI speller and the new EOG-based system. A five-letter word can be written in 25 seconds on average with the new system, versus 105 seconds with the EEG-based device. A message such as “clean-up” can be given in 3 seconds with the new system. The new system is more efficient than the P300-based BCI system in terms of accuracy, speed, applicability, and cost efficiency. Using EOG signals, it is possible to improve the communication abilities of those patients who can move their eyes. PMID:19841687

  6. DeviceEditor visual biological CAD canvas

    PubMed Central

    2012-01-01

    Background Biological Computer Aided Design (bioCAD) assists the de novo design and selection of existing genetic components to achieve a desired biological activity, as part of an integrated design-build-test cycle. To meet the emerging needs of Synthetic Biology, bioCAD tools must address the increasing prevalence of combinatorial library design, design rule specification, and scar-less multi-part DNA assembly. Results We report the development and deployment of web-based bioCAD software, DeviceEditor, which provides a graphical design environment that mimics the intuitive visual whiteboard design process practiced in biological laboratories. The key innovations of DeviceEditor include visual combinatorial library design, direct integration with scar-less multi-part DNA assembly design automation, and a graphical user interface for the creation and modification of design specification rules. We demonstrate how biological designs are rendered on the DeviceEditor canvas, and we present effective visualizations of genetic component ordering and combinatorial variations within complex designs. Conclusions DeviceEditor liberates researchers from DNA base-pair manipulation, and enables users to create successful prototypes using standardized, functional, and visual abstractions. Open and documented software interfaces support further integration of DeviceEditor with other bioCAD tools and software platforms. DeviceEditor saves researcher time and institutional resources through correct-by-construction design, the automation of tedious tasks, design reuse, and the minimization of DNA assembly costs. PMID:22373390

  7. Ergonomics Perspective in Agricultural Research: A User-Centred Approach Using CAD and Digital Human Modeling (DHM) Technologies

    NASA Astrophysics Data System (ADS)

    Patel, Thaneswer; Sanjog, J.; Karmakar, Sougata

    2016-06-01

    Computer-aided Design (CAD) and Digital Human Modeling (DHM) (specialized CAD software for virtual human representation) technologies offer unique opportunities to incorporate human factors proactively in design development. The challenges of enhancing agricultural productivity through improvement of agricultural tools/machinery and better human-machine compatibility can be met by adopting these modern technologies. The objectives of the present work are to provide a detailed scenario of CAD and DHM applications in the agricultural sector, and to identify means for the wide adoption of these technologies for the design and development of cost-effective, user-friendly, efficient, and safe agricultural tools/equipment and operator workplaces. An extensive literature review has been conducted for systematic segregation and representation of the available information towards drawing inferences. Although the application of various CAD software packages has gained momentum in agricultural research, particularly for the design and manufacturing of agricultural equipment/machinery, the use of DHM is still in its infancy in this sector. The current review discusses reasons for the limited adoption of these technologies in the agricultural sector and steps to be taken for their wide adoption. It also suggests possible future research directions to arrive at better ergonomic design strategies for the improvement of agricultural equipment/machines and workstations through the application of CAD and DHM.

  8. Efficient computations of quantum canonical Gibbs state in phase space

    NASA Astrophysics Data System (ADS)

    Bondar, Denys I.; Campos, Andre G.; Cabrera, Renan; Rabitz, Herschel A.

    2016-06-01

    The Gibbs canonical state, as a maximum entropy density matrix, represents a quantum system in equilibrium with a thermostat. This state plays an essential role in thermodynamics and serves as the initial condition for nonequilibrium dynamical simulations. We solve a long-standing problem of computing the Gibbs state Wigner function with nearly machine accuracy by solving the Bloch equation directly in phase space. Furthermore, algorithms are provided that yield high-quality Wigner distributions for pure stationary states as well as for Thomas-Fermi and Bose-Einstein distributions. The developed numerical methods furnish a long-sought efficient computational framework for nonequilibrium quantum simulations directly in the Wigner representation.
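    For a small Hilbert space the Gibbs state can be formed directly as rho = exp(-beta*H)/Z, as in the sketch below (the truncated harmonic-oscillator Hamiltonian is invented for illustration). The paper's phase-space Bloch-equation solver targets exactly the regimes where such dense matrix exponentiation becomes impractical.

      import numpy as np
      from scipy.linalg import expm

      def gibbs_state(H, beta):
          # Canonical density matrix rho = exp(-beta H) / Tr exp(-beta H).
          rho = expm(-beta * H)
          return rho / np.trace(rho)

      n = np.arange(10)            # truncated oscillator levels
      H = np.diag(n + 0.5)         # energies (n + 1/2), in units hbar*omega = 1
      rho = gibbs_state(H, beta=1.0)
      print(np.trace(rho))         # 1.0: properly normalized
      print(np.diag(rho)[:3])      # Boltzmann-weighted populations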

  9. A Compute-Efficient Bitmap Compression Index for Database Applications

    SciTech Connect

    Wu, Kesheng; Shoshani, Arie

    2006-01-01

    The Word-Aligned Hybrid (WAH) bitmap compression method and data structure is highly efficient for performing search and retrieval operations on large datasets. The WAH technique is optimized for computational efficiency. The WAH-based bitmap indexing software, called FastBit, is particularly appropriate to infrequently varying databases, including those found in the on-line analytical processing (OLAP) industry. Some commercial database products already include some version of a bitmap index, which could possibly be replaced by the WAH bitmap compression techniques for potentially large operational speedup. Experimental results show performance improvements by an average factor of 10 over bitmap technology used by industry, as well as increased efficiencies in constructing compressed bitmaps. FastBit can be used as a stand-alone index or integrated into a database system. When integrated into a database system, this technique may be particularly useful for real-time business analysis applications. Additional FastBit applications may include efficient real-time exploration of scientific models, such as climate and combustion simulations, to minimize search time for analysis and subsequent data visualization. FastBit was proven theoretically to be time-optimal because it provides a search time proportional to the number of elements selected by the index.
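    The word-aligned run-length idea behind WAH can be illustrated on a toy scale: split the bit vector into fixed-width groups, collapse runs of all-zero or all-one groups into a single fill word, and keep mixed groups as literals. The sketch below (group width, tuple encoding, and example bitmap are ours) is not FastBit's actual implementation or word layout.

      def wah_compress(bits, w=31):
          # Toy WAH-style compression of a bit string: runs of identical
          # all-0 or all-1 groups become one ('fill', bit, run) word,
          # other groups stay ('literal', group).
          groups = [bits[i:i + w].ljust(w, '0') for i in range(0, len(bits), w)]
          words, i = [], 0
          while i < len(groups):
              g = groups[i]
              if g == '0' * w or g == '1' * w:
                  j = i
                  while j < len(groups) and groups[j] == g:
                      j += 1
                  words.append(('fill', g[0], j - i))  # one word, j-i groups
                  i = j
              else:
                  words.append(('literal', g))
                  i += 1
          return words

      # A sparse bitmap compresses to a handful of words:
      bitmap = '1' + '0' * 93 + '1'
      print(wah_compress(bitmap))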

  10. A Compute-Efficient Bitmap Compression Index for Database Applications

    Energy Science and Technology Software Center (ESTSC)

    2006-01-01

    The Word-Aligned Hybrid (WAH) bitmap compression method and data structure is highly efficient for performing search and retrieval operations on large datasets. The WAH technique is optimized for computational efficiency. The WAH-based bitmap indexing software, called FastBit, is particularly appropriate to infrequently varying databases, including those found in the on-line analytical processing (OLAP) industry. Some commercial database products already include some version of a bitmap index, which could possibly be replaced by the WAH bitmap compression techniques for potentially large operational speedup. Experimental results show performance improvements by an average factor of 10 over bitmap technology used by industry, as well as increased efficiencies in constructing compressed bitmaps. FastBit can be used as a stand-alone index or integrated into a database system. When integrated into a database system, this technique may be particularly useful for real-time business analysis applications. Additional FastBit applications may include efficient real-time exploration of scientific models, such as climate and combustion simulations, to minimize search time for analysis and subsequent data visualization. FastBit was proven theoretically to be time-optimal because it provides a search time proportional to the number of elements selected by the index.

  11. A Novel Green Cloud Computing Framework for Improving System Efficiency

    NASA Astrophysics Data System (ADS)

    Lin, Chen

    As the prevalence of Cloud computing continues to rise, the need for power-saving mechanisms within the Cloud also increases. In this paper we have presented a novel Green Cloud framework for improving system efficiency in a data center. To demonstrate the potential of our framework, we have presented new energy-efficient scheduling, VM system image, and image management components that explore new ways to conserve power. Through the research presented in this paper, we have found new ways to save vast amounts of energy while minimally impacting performance.

  12. Cost reduction advantages of CAD/CAM

    NASA Astrophysics Data System (ADS)

    Parsons, G. T.

    1983-05-01

    Features of the CAD/CAM system implemented at the General Dynamics Convair division are summarized. CAD/CAM was initiated in 1976 to enhance engineering, manufacturing and quality assurance and thereby the company's competitive bidding position. Numerical models are substituted for hardware models wherever possible and numerical criteria are defined in design for guiding computer-controlled parts manufacturing machines. The system comprises multiple terminals, a data base, digitizer, printers, disk and tape drives, and graphics displays. The applications include the design and manufacture of parts and components for avionics, structures, scientific investigations, and aircraft structural components. Interfaces with other computers allow structural analyses by finite element codes. Although time savings have not been gained compared to manual drafting, components of greater complexity than could have been designed by hand have been designed and manufactured.

  13. Generating Composite Overlapping Grids on CAD Geometries

    SciTech Connect

    Henshaw, W.D.

    2002-02-07

    We describe some algorithms and tools that have been developed to generate composite overlapping grids on geometries that have been defined with computer aided design (CAD) programs. This process consists of five main steps. Starting from a description of the surfaces defining the computational domain we (1) correct errors in the CAD representation, (2) determine topology of the patched-surface, (3) build a global triangulation of the surface, (4) construct structured surface and volume grids using hyperbolic grid generation, and (5) generate the overlapping grid by determining the holes and the interpolation points. The overlapping grid generator which is used for the final step also supports the rapid generation of grids for block-structured adaptive mesh refinement and for moving grids. These algorithms have been implemented as part of the Overture object-oriented framework.

  14. A Computationally Efficient Method for Polyphonic Pitch Estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Ruohua; Reiss, Joshua D.; Mattavelli, Marco; Zoia, Giorgio

    2009-12-01

    This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimation is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. This spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then, incorrect estimations are removed according to spectral irregularity and knowledge of the harmonic structures of the music notes played on commonly used musical instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and the results demonstrate the high performance and computational efficiency of the approach.
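    A bare-bones version of the first stage can be sketched with a plain FFT standing in for the RTFI (all parameters and signals below are invented): sum the spectral magnitude at the first few harmonics of each candidate pitch, then peak-pick. Running it also shows why the second stage is needed, since a common-divisor pitch collects the harmonics of both notes and scores highest.

      import numpy as np

      def pitch_energy(x, fs, f0_grid, n_harm=5):
          # Harmonic grouping: for each candidate f0, sum FFT magnitude
          # at its first n_harm harmonics. (Illustrative stand-in for
          # the paper's RTFI-based pitch energy spectrum.)
          spec = np.abs(np.fft.rfft(x))
          freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
          return np.array([
              sum(spec[np.argmin(np.abs(freqs - k * f0))]
                  for k in range(1, n_harm + 1))
              for f0 in f0_grid])

      fs = 8000
      t = np.arange(fs) / fs
      x = np.sin(2*np.pi*220*t) + 0.5*np.sin(2*np.pi*330*t)  # two notes
      f0_grid = np.arange(100.0, 500.0, 5.0)
      e = pitch_energy(x, fs, f0_grid)
      print(f0_grid[np.argmax(e)])  # 110.0: a spurious common divisor of
                                    # 220 and 330, pruned in stage two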

  15. Computational methods for efficient structural reliability and reliability sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.

    1993-01-01

    This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
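    The core of importance sampling for reliability fits in a few lines: draw samples from a density shifted toward the failure region and reweight each by the likelihood ratio. The sketch below uses a fixed Gaussian shift and an invented linear limit state with a known exact answer; the paper's AIS instead adapts the sampling domain incrementally rather than fixing it in advance.

      import numpy as np

      rng = np.random.default_rng(0)

      def failure_prob_is(g, shift, n=100_000):
          # Estimate P(g(X) < 0), X ~ N(0, I), by sampling Z ~ N(shift, I)
          # and weighting by the likelihood ratio phi(z) / q(z).
          d = len(shift)
          z = rng.normal(size=(n, d)) + shift
          lr = np.exp(-z @ shift + 0.5 * shift @ shift)
          return np.mean((g(z) < 0) * lr)

      # Limit state g(x) = beta*sqrt(2) - x1 - x2; exact Pf = Phi(-beta).
      beta = 3.0
      g = lambda z: beta * np.sqrt(2) - z[:, 0] - z[:, 1]
      shift = np.array([beta / np.sqrt(2), beta / np.sqrt(2)])
      print(failure_prob_is(g, shift))  # ~1.35e-3, i.e. about Phi(-3)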

  16. Resin-composite blocks for dental CAD/CAM applications.

    PubMed

    Ruse, N D; Sadoun, M J

    2014-12-01

    Advances in digital impression technology and manufacturing processes have led to a dramatic paradigm shift in dentistry and to the widespread use of computer-aided design/computer-aided manufacturing (CAD/CAM) in the fabrication of indirect dental restorations. Research and development in materials suitable for CAD/CAM applications are currently the most active field in dental materials. Two classes of materials are used in the production of CAD/CAM restorations: glass-ceramics/ceramics and resin composites. While glass-ceramics/ceramics have overall superior mechanical and esthetic properties, resin-composite materials may offer significant advantages related to their machinability and intra-oral reparability. This review summarizes recent developments in resin-composite materials for CAD/CAM applications, focusing on both commercial and experimental materials. PMID:25344335

  17. CAD/CAM improves productivity in nonaerospace job shops

    NASA Astrophysics Data System (ADS)

    Koenig, D. T.

    1982-12-01

    Business cost improvements that can result from Computer Aided Design/Computer Aided Manufacturing (CAD/CAM), when properly applied, are discussed. Emphasis is placed on the use of CAD/CAM for machine and process control, design and planning control, and production and measurement control. It is pointed out that the implementation of CAD/CAM should be based on the following priorities: (1) recognize interrelationships between the principal functions of CAD/CAM; (2) establish a Systems Council to determine overall strategy and specify the communications/decision-making system; (3) implement the communications/decision-making system to improve productivity; and (4) implement interactive graphics and other additions to further improve productivity.

  18. Resin-composite Blocks for Dental CAD/CAM Applications

    PubMed Central

    Ruse, N.D.; Sadoun, M.J.

    2014-01-01

    Advances in digital impression technology and manufacturing processes have led to a dramatic paradigm shift in dentistry and to the widespread use of computer-aided design/computer-aided manufacturing (CAD/CAM) in the fabrication of indirect dental restorations. Research and development in materials suitable for CAD/CAM applications are currently the most active field in dental materials. Two classes of materials are used in the production of CAD/CAM restorations: glass-ceramics/ceramics and resin composites. While glass-ceramics/ceramics have overall superior mechanical and esthetic properties, resin-composite materials may offer significant advantages related to their machinability and intra-oral reparability. This review summarizes recent developments in resin-composite materials for CAD/CAM applications, focusing on both commercial and experimental materials. PMID:25344335

  19. Computationally efficient, rotational nonequilibrium CW chemical laser model

    SciTech Connect

    Sentman, L.H.; Rushmore, W.

    1981-10-01

    The essential fluid dynamic and kinetic phenomena required for a quantitative, computationally efficient, rotational nonequilibrium model of a CW HF chemical laser are identified. It is shown that, in addition to the pumping, collisional deactivation, and rotational relaxation reactions, F-atom wall recombination, the hot pumping reaction, and multiquantum deactivation reactions play a significant role in determining laser performance. Several problems with the HF kinetics package are identified. The effect of various parameters on run time is discussed.

  20. An image database management system for conducting CAD research

    NASA Astrophysics Data System (ADS)

    Gruszauskas, Nicholas; Drukker, Karen; Giger, Maryellen L.

    2007-03-01

    The development of image databases for CAD research is not a trivial task. The collection and management of images and their related metadata from multiple sources is a time-consuming but necessary process. By standardizing and centralizing the methods by which these data are maintained, one can generate subsets of a larger database that match the specific criteria needed for a particular research project in a quick and efficient manner. A research-oriented management system of this type is highly desirable in a multi-modality CAD research environment. An online, web-based database system for the storage and management of research-specific medical image metadata was designed for use with four modalities of breast imaging: screen-film mammography, full-field digital mammography, breast ultrasound, and breast MRI. The system was designed to consolidate data from multiple clinical sources and provide the user with the ability to anonymize the data. Input concerning the type of data to be stored as well as desired searchable parameters was solicited from researchers in each modality. The backbone of the database was created using MySQL. A robust and easy-to-use interface for entering, removing, modifying, and searching information in the database was created using HTML and PHP. This standardized system can be accessed using any modern web-browsing software and is fundamental for our various research projects on computer-aided detection, diagnosis, cancer risk assessment, multimodality lesion assessment, and prognosis. Our CAD database system stores large amounts of research-related metadata and successfully generates subsets of cases that match the user's desired search criteria.
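    The pattern described, centralized metadata plus criteria-based subset generation, can be sketched minimally as below; SQLite is used only to keep the example self-contained (the actual system is MySQL with an HTML/PHP front end), and the table and fields are invented stand-ins for the much richer per-modality schema.

      import sqlite3

      con = sqlite3.connect(":memory:")
      con.execute("""CREATE TABLE cases (
          id INTEGER PRIMARY KEY,
          modality TEXT,       -- e.g. 'FFDM', 'ultrasound', 'MRI'
          anonymized INTEGER,  -- 1 once identifiers are stripped
          finding TEXT)""")
      con.executemany(
          "INSERT INTO cases (modality, anonymized, finding) VALUES (?, ?, ?)",
          [("FFDM", 1, "mass"), ("ultrasound", 1, "cyst"), ("MRI", 0, "mass")])

      # Generate the subset matching a study's criteria:
      subset = con.execute(
          "SELECT id, modality FROM cases "
          "WHERE anonymized = 1 AND finding = 'mass'").fetchall()
      print(subset)  # [(1, 'FFDM')]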

  1. Efficient MATLAB computations with sparse and factored tensors.

    SciTech Connect

    Bader, Brett William; Kolda, Tamara Gibson (Sandia National Lab, Livermore, CA)

    2006-12-01

    In this paper, the term tensor refers simply to a multidimensional or N-way array, and we consider how specially structured tensors allow for efficient storage and computation. First, we study sparse tensors, which have the property that the vast majority of the elements are zero. We propose storing sparse tensors using coordinate format and describe the computational efficiency of this scheme for various mathematical operations, including those typical to tensor decomposition algorithms. Second, we study factored tensors, which have the property that they can be assembled from more basic components. We consider two specific types: a Tucker tensor can be expressed as the product of a core tensor (which itself may be dense, sparse, or factored) and a matrix along each mode, and a Kruskal tensor can be expressed as the sum of rank-1 tensors. We are interested in the case where the storage of the components is less than the storage of the full tensor, and we demonstrate that many elementary operations can be computed using only the components. All of the efficiencies described in this paper are implemented in the Tensor Toolbox for MATLAB.

  2. Improving computational efficiency of Monte Carlo simulations with variance reduction

    SciTech Connect

    Turner, A.

    2013-07-01

    CCFE perform Monte-Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging since streaming and deep penetration effects are equally important. In order to make such simulations tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that the application of VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows. The weight window represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting being performed and a long history. When running on parallel clusters, a long history can have a detrimental effect on the parallel efficiency - if one process is computing the long history, the other CPUs complete their batch of histories and wait idle. Furthermore some long histories have been found to be effectively intractable. To combat this effect, CCFE has developed an adaptation of MCNP which dynamically adjusts the WW where a large weight deviation is encountered. The method effectively 'de-optimises' the WW, reducing the VR performance but this is offset by a significant increase in parallel efficiency. Testing with a simple geometry has shown the method does not bias the result. This 'long history method' has enabled CCFE to significantly improve the performance of MCNP calculations for ITER on parallel clusters, and will be beneficial for any geometry combining streaming and deep penetration effects. (authors)

  3. A new CAD approach for improving efficacy of cancer screening

    NASA Astrophysics Data System (ADS)

    Zheng, Bin; Qian, Wei; Li, Lihua; Pu, Jiantao; Kang, Yan; Lure, Fleming; Tan, Maxine; Qiu, Yuchen

    2015-03-01

    Since the performance and clinical utility of current computer-aided detection (CAD) schemes for detecting and classifying soft tissue lesions (e.g., breast masses and lung nodules) are not satisfactory, many researchers in the CAD field have called for new CAD research ideas and approaches. The purpose of this opinion paper is to share our vision and stimulate more discussion in the CAD research community of how to overcome or compensate for the limitations of current lesion-detection based CAD schemes. Based on our observation that analyzing global image information plays an important role in radiologists' decision making, we hypothesized that targeted quantitative image features computed from global images could also provide highly discriminatory power, supplementary to the lesion-based information. To test our hypothesis, we recently performed a number of independent studies. Based on our published preliminary study results, we demonstrated that global mammographic image features and background parenchymal enhancement of breast MR images carry useful information to (1) predict near-term breast cancer risk based on negative screening mammograms, (2) distinguish between true- and false-positive recalls in mammography screening examinations, and (3) classify between malignant and benign breast MR examinations. The global case-based CAD scheme only warns of the risk level of a case, without cueing a large number of false-positive lesions. It can also be applied to guide lesion-based CAD cueing to reduce false positives while enhancing clinically relevant true-positive cueing. However, before such a new CAD approach is clinically acceptable, more work is needed to optimize not only the scheme's performance but also its integration with lesion-based CAD schemes in clinical practice.

  4. CAD of control systems: Application of nonlinear programming to a linear quadratic formulation

    NASA Technical Reports Server (NTRS)

    Fleming, P.

    1983-01-01

    The familiar suboptimal regulator design approach is recast as a constrained optimization problem and incorporated in a Computer Aided Design (CAD) package where both design objective and constraints are quadratic cost functions. This formulation permits the separate consideration of, for example, model following errors, sensitivity measures and control energy as objectives to be minimized or limits to be observed. Efficient techniques for computing the interrelated cost functions and their gradients are utilized in conjunction with a nonlinear programming algorithm. The effectiveness of the approach and the degree of insight into the problem which it affords is illustrated in a helicopter regulation design example.
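    The recasting can be sketched as follows: treat the feedback gains as decision variables, take one quadratic cost as the objective, and impose another (here control energy) as a constraint for a nonlinear-programming solver. Everything below, plant, horizon, costs, and bound, is invented for illustration, and SciPy's general-purpose solver stands in for the paper's CAD package.

      import numpy as np
      from scipy.optimize import minimize

      A = np.array([[1.0, 0.1], [0.0, 1.0]])   # discrete-time plant
      B = np.array([[0.0], [0.1]])
      x0, N = np.array([1.0, 0.0]), 50          # initial state, horizon

      def costs(k):
          # Simulate x+ = (A - B K) x; return (state cost, control energy).
          K = k.reshape(1, 2)
          x, Jx, Ju = x0.copy(), 0.0, 0.0
          for _ in range(N):
              u = -K @ x
              Jx += float(x @ x)
              Ju += float(u @ u)
              x = A @ x + (B @ u).ravel()
          return Jx, Ju

      res = minimize(lambda k: costs(k)[0], np.array([1.0, 1.0]),
                     constraints=[{"type": "ineq",
                                   "fun": lambda k: 5.0 - costs(k)[1]}])
      print(res.x, costs(res.x))  # gains, (state cost, control energy <= 5)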

  5. Purchasing Computer-Aided Design Software.

    ERIC Educational Resources Information Center

    Smith, Roger A.

    1992-01-01

    Presents a model for the purchase of computer-aided design (CAD) software: collect general information, observe CAD in use, arrange onsite demonstrations, select CAD software and hardware, and choose a vendor. (JOW)

  6. CAD/CAM data management

    NASA Technical Reports Server (NTRS)

    Bray, O. H.

    1984-01-01

    The role of data base management in CAD/CAM, particularly for geometric data is described. First, long term and short term objectives for CAD/CAM data management are identified. Second, the benefits of the data base management approach are explained. Third, some of the additional work needed in the data base area is discussed.

  7. Energy Efficient Biomolecular Simulations with FPGA-based Reconfigurable Computing

    SciTech Connect

    Hampton, Scott S; Agarwal, Pratul K

    2010-05-01

    Reconfigurable computing (RC) is being investigated as a hardware solution for improving time-to-solution for biomolecular simulations. A number of popular molecular dynamics (MD) codes are used to study various aspects of biomolecules. These codes are now capable of simulating nanosecond time-scale trajectories per day on conventional microprocessor-based hardware, but biomolecular processes often occur at the microsecond time-scale or longer. A wide gap exists between the desired and achievable simulation capability; therefore, there is considerable interest in alternative algorithms and hardware for improving the time-to-solution of MD codes. The fine-grain parallelism provided by Field Programmable Gate Arrays (FPGA) combined with their low power consumption make them an attractive solution for improving the performance of MD simulations. In this work, we use an FPGA-based coprocessor to accelerate the compute-intensive calculations of LAMMPS, a popular MD code, achieving up to 5.5-fold speed-up on the non-bonded force computations of the particle mesh Ewald method and up to 2.2-fold speed-up in overall time-to-solution, and potentially an increase by a factor of 9 in power-performance efficiencies for the pair-wise computations. The results presented here provide an example of the multi-faceted benefits to an application in a heterogeneous computing environment.

  8. Efficient and accurate computation of generalized singular-value decompositions

    NASA Astrophysics Data System (ADS)

    Drmac, Zlatko

    2001-11-01

    We present a new family of algorithms for accurate floating-point computation of the singular value decomposition (SVD) of various forms of products (quotients) of two or three matrices. The main goal of such an algorithm is to compute all singular values to high relative accuracy. This means that we seek a guaranteed number of accurate digits even in the smallest singular values. We also want to achieve computational efficiency while maintaining high accuracy. To illustrate, consider the SVD of the product A = B^T S C. The new algorithm uses certain preconditioning (based on diagonal scalings and the LU and QR factorizations) to replace A with A' = (B')^T S' C', where A and A' have the same singular values and the matrix A' is computed explicitly. Theoretical analysis and numerical evidence show that, in the case of full-rank B, C, S, the accuracy of the new algorithm is unaffected by replacing B, S, C with, respectively, D_1 B, D_2 S D_3, D_4 C, where D_i, i = 1, ..., 4, are arbitrary diagonal matrices. As an application, the paper proposes new accurate algorithms for computing the (H,K)-SVD and (H_1,K)-SVD of S.

  9. Energy efficient hybrid computing systems using spin devices

    NASA Astrophysics Data System (ADS)

    Sharad, Mrigank

    Emerging spin devices like magnetic tunnel junctions (MTJs), spin valves, and domain wall magnets (DWM) have opened new avenues for spin-based logic design. This work explored potential computing applications which can exploit such devices for higher energy efficiency and performance. The proposed applications involve hybrid design schemes, where charge-based devices supplement the spin devices to gain large benefits at the system level. As an example, lateral spin valves (LSV) involve switching of nanomagnets using spin-polarized current injection through a metallic channel such as Cu. Such spin-torque based devices possess several interesting properties that can be exploited for ultra-low power computation. The analog characteristics of spin currents facilitate non-Boolean computation like majority evaluation that can be used to model a neuron. The magneto-metallic neurons can operate at an ultra-low terminal voltage of ~20 mV, resulting in small computation power. Moreover, since nanomagnets inherently act as memory elements, these devices can facilitate the integration of logic and memory in interesting ways. The spin-based neurons can be integrated with CMOS and other emerging devices, leading to different classes of neuromorphic/non-Von-Neumann architectures. The spin-based designs involve 'mixed-mode' processing and hence can provide very compact and ultra-low energy solutions for complex computation blocks, both digital and analog. Such low-power hybrid designs can be suitable for various data processing applications like cognitive computing, associative memory, and current-mode on-chip global interconnects. Simulation results for these applications, based on a device-circuit co-simulation framework, predict more than ~100x improvement in computation energy as compared to state-of-the-art CMOS designs, for optimal spin-device parameters.

  10. CAD Model and Visual Assisted Control System for NIF Target Area Positioners

    SciTech Connect

    Tekle, E A; Wilson, E F; Paik, T S

    2007-10-03

    The National Ignition Facility (NIF) target chamber contains precision motion control systems that reach up to 6 meters into the target chamber for handling targets and diagnostics. Systems include the target positioner, an alignment sensor, and diagnostic manipulators (collectively called positioners). Target chamber shot experiments require a variety of positioner arrangements near the chamber center to be aligned to an accuracy of 10 micrometers. Positioners are some of the largest devices in NIF, and they require careful monitoring and control in 3 dimensions to prevent interferences. The Integrated Computer Control System provides efficient and flexible multi-positioner controls. This is accomplished through advanced video-control integration incorporating remote position sensing and real-time analysis of a CAD model of target chamber devices. The control system design, the method used to integrate existing mechanical CAD models, and the offline test laboratory used to verify proper operation of the control system are described.

  11. Improving robustness and computational efficiency using modern C++

    NASA Astrophysics Data System (ADS)

    Paterno, M.; Kowalkowski, J.; Green, C.

    2014-06-01

    For nearly two decades, the C++ programming language has been the dominant programming language for experimental HEP. The publication of ISO/IEC 14882:2011, the current version of the international standard for the C++ programming language, makes available a variety of language and library facilities for improving the robustness, expressiveness, and computational efficiency of C++ code. However, much of the C++ written by the experimental HEP community does not take advantage of the features of the language to obtain these benefits, either due to lack of familiarity with these features or concern that these features must somehow be computationally inefficient. In this paper, we address some of the features of modern C++, and show how they can be used to make programs that are both robust and computationally efficient. We compare and contrast simple yet realistic examples of some common implementation patterns in C, currently-typical C++, and modern C++, and show (when necessary, down to the level of generated assembly language code) the quality of the executable code produced by recent C++ compilers, with the aim of allowing the HEP community to make informed decisions on the costs and benefits of the use of modern C++.

  12. Improving robustness and computational efficiency using modern C++

    SciTech Connect

    Paterno, M.; Kowalkowski, J.; Green, C.

    2014-01-01

    For nearly two decades, the C++ programming language has been the dominant programming language for experimental HEP. The publication of ISO/IEC 14882:2011, the current version of the international standard for the C++ programming language, makes available a variety of language and library facilities for improving the robustness, expressiveness, and computational efficiency of C++ code. However, much of the C++ written by the experimental HEP community does not take advantage of the features of the language to obtain these benefits, either due to lack of familiarity with these features or concern that these features must somehow be computationally inefficient. In this paper, we address some of the features of modern C++, and show how they can be used to make programs that are both robust and computationally efficient. We compare and contrast simple yet realistic examples of some common implementation patterns in C, currently-typical C++, and modern C++, and show (when necessary, down to the level of generated assembly language code) the quality of the executable code produced by recent C++ compilers, with the aim of allowing the HEP community to make informed decisions on the costs and benefits of the use of modern C++.

  13. Methods for increased computational efficiency of multibody simulations

    NASA Astrophysics Data System (ADS)

    Epple, Alexander

    This thesis is concerned with the efficient numerical simulation of finite element based flexible multibody systems. Scaling operations are systematically applied to the governing index-3 differential algebraic equations in order to solve the problem of ill conditioning for small time step sizes. The importance of augmented Lagrangian terms is demonstrated. The use of fast sparse solvers is justified for the solution of the linearized equations of motion resulting in significant savings of computational costs. Three time stepping schemes for the integration of the governing equations of flexible multibody systems are discussed in detail. These schemes are the two-stage Radau IIA scheme, the energy decaying scheme, and the generalized-a method. Their formulations are adapted to the specific structure of the governing equations of flexible multibody systems. The efficiency of the time integration schemes is comprehensively evaluated on a series of test problems. Formulations for structural and constraint elements are reviewed and the problem of interpolation of finite rotations in geometrically exact structural elements is revisited. This results in the development of a new improved interpolation algorithm, which preserves the objectivity of the strain field and guarantees stable simulations in the presence of arbitrarily large rotations. Finally, strategies for the spatial discretization of beams in the presence of steep variations in cross-sectional properties are developed. These strategies reduce the number of degrees of freedom needed to accurately analyze beams with discontinuous properties, resulting in improved computational efficiency.

  14. Improving the Efficiency of Abdominal Aortic Aneurysm Wall Stress Computations

    PubMed Central

    Zelaya, Jaime E.; Goenezen, Sevan; Dargon, Phong T.; Azarbal, Amir-Farzin; Rugonyi, Sandra

    2014-01-01

    An abdominal aortic aneurysm is a pathological dilation of the abdominal aorta, which carries a high mortality rate if ruptured. The most commonly used surrogate marker of rupture risk is the maximal transverse diameter of the aneurysm. More recent studies suggest that wall stress from models of patient-specific aneurysm geometries extracted, for instance, from computed tomography images may be a more accurate predictor of rupture risk and an important factor in AAA size progression. However, quantification of wall stress is typically computationally intensive and time-consuming, mainly due to the nonlinear mechanical behavior of the abdominal aortic aneurysm walls. These difficulties have limited the potential of computational models in clinical practice. To facilitate computation of wall stresses, we propose to use a linear approach that ensures equilibrium of wall stresses in the aneurysms. This proposed linear model approach is easy to implement and eliminates the burden of nonlinear computations. To assess the accuracy of our proposed approach to compute wall stresses, results from idealized and patient-specific model simulations were compared to those obtained using conventional approaches and to those of a hypothetical, reference abdominal aortic aneurysm model. For the reference model, wall mechanical properties and the initial unloaded and unstressed configuration were assumed to be known, and the resulting wall stresses were used as reference for comparison. Our proposed linear approach accurately approximates wall stresses for varying model geometries and wall material properties. Our findings suggest that the proposed linear approach could be used as an effective, efficient, easy-to-use clinical tool to estimate patient-specific wall stresses. PMID:25007052

  15. Exploiting stoichiometric redundancies for computational efficiency and network reduction

    PubMed Central

    Ingalls, Brian P.; Bembenek, Eric

    2015-01-01

    Analysis of metabolic networks typically begins with construction of the stoichiometry matrix, which characterizes the network topology. This matrix provides, via the balance equation, a description of the potential steady-state flow distribution. This paper begins with the observation that the balance equation depends only on the structure of linear redundancies in the network, and so can be stated in a succinct manner, leading to computational efficiencies in steady-state analysis. This alternative description of steady-state behaviour is then used to provide a novel method for network reduction, which complements existing algorithms for describing intracellular networks in terms of input-output macro-reactions (to facilitate bioprocess optimization and control). Finally, it is demonstrated that this novel reduction method can be used to address elementary mode analysis of large networks: the modes supported by a reduced network can capture the input-output modes of a metabolic module with significantly reduced computational effort. PMID:25547516
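    Concretely, the balance equation is S v = 0 for stoichiometry matrix S and flux vector v, and the steady-state flux space is its nullspace. The sketch below computes a basis by SVD for an invented linear toy pathway; the paper's contribution is the succinct restatement via the network's linear redundancies, not this generic computation.

      import numpy as np

      def nullspace(S, tol=1e-10):
          # Basis for {v : S v = 0} from the right singular vectors
          # whose singular values vanish.
          _, sv, Vt = np.linalg.svd(S)
          rank = int(np.sum(sv > tol))
          return Vt[rank:].T

      # Toy pathway: -> A -> B -> C ->  (one inflow, two conversions,
      # one outflow); rows are species A, B, C, columns are reactions.
      S = np.array([[1, -1,  0,  0],
                    [0,  1, -1,  0],
                    [0,  0,  1, -1]])
      V = nullspace(S)
      print(V[:, 0] / V[0, 0])  # [1. 1. 1. 1.]: all fluxes equal at steady state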

  16. Exploiting stoichiometric redundancies for computational efficiency and network reduction.

    PubMed

    Ingalls, Brian P; Bembenek, Eric

    2015-01-01

    Analysis of metabolic networks typically begins with construction of the stoichiometry matrix, which characterizes the network topology. This matrix provides, via the balance equation, a description of the potential steady-state flow distribution. This paper begins with the observation that the balance equation depends only on the structure of linear redundancies in the network, and so can be stated in a succinct manner, leading to computational efficiencies in steady-state analysis. This alternative description of steady-state behaviour is then used to provide a novel method for network reduction, which complements existing algorithms for describing intracellular networks in terms of input-output macro-reactions (to facilitate bioprocess optimization and control). Finally, it is demonstrated that this novel reduction method can be used to address elementary mode analysis of large networks: the modes supported by a reduced network can capture the input-output modes of a metabolic module with significantly reduced computational effort. PMID:25547516

  17. Differential area profiles: decomposition properties and efficient computation.

    PubMed

    Ouzounis, Georgios K; Pesaresi, Martino; Soille, Pierre

    2012-08-01

    Differential area profiles (DAPs) are point-based multiscale descriptors used in pattern analysis and image segmentation. They are defined through sets of size-based connected morphological filters that constitute a joint area opening top-hat and area closing bottom-hat scale-space of the input image. The work presented in this paper explores the properties of this image decomposition through sets of area zones. An area zone defines a single plane of the DAP vector field and contains all the peak components of the input image, whose size is between the zone's attribute extrema. Area zones can be computed efficiently from hierarchical image representation structures, in a way similar to regular attribute filters. Operations on the DAP vector field can then be computed without the need for exporting it first, and an example with the leveling-like convex/concave segmentation scheme is given. This is referred to as the one-pass method and it is demonstrated on the Max-Tree structure. Its computational performance is tested and compared against conventional means for computing differential profiles, relying on iterative application of area openings and closings. Applications making use of the area zone decomposition are demonstrated in problems related to remote sensing and medical image analysis. PMID:22184259

  18. Efficient parallel global garbage collection on massively parallel computers

    SciTech Connect

    Kamada, Tomio; Matsuoka, Satoshi; Yonezawa, Akinori

    1994-12-31

    On distributed-memory high-performance MPPs where processors are interconnected by an asynchronous network, efficient Garbage Collection (GC) becomes difficult due to inter-node references and references within pending, unprocessed messages. The parallel global GC algorithm (1) takes advantage of reference locality, (2) efficiently traverses references over nodes, (3) admits minimum pause time of ongoing computations, and (4) has been shown to scale up to 1024-node MPPs. The algorithm employs a global weight counting scheme to substantially reduce message traffic. Two methods are used for confirming the arrival of pending messages: one counts the number of messages and the other uses network 'bulldozing.' Performance evaluation in actual implementations on a multicomputer with 32-1024 nodes, the Fujitsu AP1000, reveals various favorable properties of the algorithm.

  19. Increasing computational efficiency of cochlear models using boundary layers

    NASA Astrophysics Data System (ADS)

    Alkhairy, Samiya A.; Shera, Christopher A.

    2015-12-01

    Our goal is to develop methods to improve the efficiency of computational models of the cochlea for applications that require the solution accurately only within a basal region of interest, specifically by decreasing the number of spatial sections needed for simulation of the problem with good accuracy. We design algebraic spatial and parametric transformations for computational models of the cochlea. These transformations are applied after the basal region of interest and allow for spatial preservation, driven by the natural characteristics of approximate spatial causality of cochlear models. The project is of a foundational nature, and hence the goal is to design, characterize, and develop an understanding and framework rather than optimization and globalization. Our scope is as follows: designing the transformations; understanding the mechanisms by which each transformation decreases the computational load; developing performance criteria; and characterizing the results of applying each transformation to a specific physical model and to discretization and solution schemes. In this manuscript, we introduce one of the proposed methods (complex spatial transformation) for a case-study physical model that is a linear, passive, transmission line model in which the various abstraction layers (electric parameters, filter parameters, wave parameters) are clearer than in other models. This is conducted in the frequency domain for multiple frequencies, using a second-order finite difference scheme for discretization and direct elimination for solving the discrete system of equations. The performance is evaluated using two developed simulative criteria for each of the transformations. In conclusion, the developed methods serve to increase the efficiency of a computational traveling-wave cochlear model when spatial preservation can hold, while maintaining good correspondence with the solution of interest and good accuracy, for applications in which the interest is in the solution within the basal region of interest.

  20. Computationally efficient strategies to perform anomaly detection in hyperspectral images

    NASA Astrophysics Data System (ADS)

    Rossi, Alessandro; Acito, Nicola; Diani, Marco; Corsini, Giovanni

    2012-11-01

    In remote sensing, hyperspectral sensors are effectively used for target detection and recognition because of their high spectral resolution, which allows discrimination of different materials in the sensed scene. When a priori information about the spectrum of the targets of interest is not available, target detection turns into anomaly detection (AD), i.e. searching for objects that are anomalous with respect to the scene background. In the field of AD, anomalies can generally be associated with observations that deviate statistically from the background clutter, the latter being intended either as a local neighborhood surrounding the observed pixel or as a large part of the image. In this context, much effort has been devoted to reducing the computational load of AD algorithms so as to furnish information for real-time decision making. In this work, a sub-class of AD methods is considered that aims at detecting small rare objects that are anomalous with respect to their local background. Such techniques not only are characterized by mathematical tractability but also allow the design of real-time strategies for AD. Within these methods, one of the most established anomaly detectors is the RX algorithm, which is based on a local Gaussian model of the background. In the literature, the RX decision rule has been employed to develop computationally efficient algorithms implemented in real-time systems. In this work, a survey of computationally efficient methods to implement the RX detector is presented, in which advanced algebraic strategies are exploited to speed up the estimation of the covariance matrix and of its inverse. A comparison of the overall number of operations required by the different implementations of the RX algorithm is given and discussed, varying the RX parameters in order to show the computational improvements achieved with the introduced algebraic strategy.
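    For reference, the RX rule scores a pixel spectrum x by the Mahalanobis distance (x - mu)^T Sigma^(-1) (x - mu) under a Gaussian background model. The sketch below implements the simplest global variant on synthetic data with one implanted anomaly; the paper's subject, fast algebraic updates of the local covariance and its inverse, is precisely what this naive version omits.

      import numpy as np

      def rx_scores(cube):
          # Global RX: Mahalanobis distance of every pixel spectrum from
          # the scene mean, using one covariance for the whole image.
          h, w, b = cube.shape
          X = cube.reshape(-1, b)
          d = X - X.mean(axis=0)
          Sigma_inv = np.linalg.inv(np.cov(X, rowvar=False))
          scores = np.einsum('ij,jk,ik->i', d, Sigma_inv, d)
          return scores.reshape(h, w)

      rng = np.random.default_rng(1)
      cube = rng.normal(size=(64, 64, 20))   # synthetic background
      cube[10, 10] += 4.0                    # implant one anomalous pixel
      idx = np.unravel_index(rx_scores(cube).argmax(), (64, 64))
      print(idx)  # (10, 10)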

  1. Efficient Parallel Kernel Solvers for Computational Fluid Dynamics Applications

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He

    1997-01-01

    Distributed-memory parallel computers dominate today's parallel computing arena. These machines, such as the Intel Paragon, IBM SP2, and Cray Origin 2000, have successfully delivered high-performance computing power for solving some of the so-called "grand-challenge" problems. Despite initial success, parallel machines have not been widely accepted in production engineering environments due to the complexity of parallel programming. On a parallel computing system, a task has to be partitioned and distributed appropriately among processors to reduce communication cost and to attain load balance. More importantly, even with careful partitioning and mapping, the performance of an algorithm may still be unsatisfactory, since conventional sequential algorithms may be serial in nature and may not be implemented efficiently on parallel machines. In many cases, new algorithms have to be introduced to increase parallel performance. In order to achieve optimal performance, in addition to partitioning and mapping, a careful performance study should be conducted for a given application to find a good algorithm-machine combination. This process, however, is usually painful and elusive. The goal of this project is to design and develop efficient parallel algorithms for highly accurate Computational Fluid Dynamics (CFD) simulations and other engineering applications. The work plan is 1) to develop highly accurate parallel numerical algorithms, 2) to conduct preliminary testing to verify the effectiveness and potential of these algorithms, and 3) to incorporate the newly developed algorithms into actual simulation packages. The work plan has been well achieved. Two highly accurate, efficient Poisson solvers have been developed and tested based on two different approaches: (1) adopting a mathematical geometry which has a better capacity to describe the fluid, and (2) using a compact scheme to gain high-order accuracy in the numerical discretization. The previously developed Parallel Diagonal Dominant (PDD) algorithm
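
    The PDD algorithm named above targets diagonally dominant tridiagonal systems such as those arising from compact discretizations. For orientation, here is a minimal sketch of the serial O(n) kernel (the Thomas algorithm) that a PDD-style solver partitions across processors; this is illustrative only and is not code from the project.

        import numpy as np

        def thomas(a, b, c, d):
            """Solve a tridiagonal system: a = sub-diagonal (a[0] unused),
            b = diagonal, c = super-diagonal (c[-1] unused), d = right-hand
            side.  Stable without pivoting when diagonally dominant."""
            n = len(d)
            cp, dp = np.empty(n), np.empty(n)
            cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
            for i in range(1, n):                     # forward elimination
                m = b[i] - a[i] * cp[i - 1]
                cp[i] = c[i] / m if i < n - 1 else 0.0
                dp[i] = (d[i] - a[i] * dp[i - 1]) / m
            x = np.empty(n)
            x[-1] = dp[-1]
            for i in range(n - 2, -1, -1):            # back substitution
                x[i] = dp[i] - cp[i] * x[i + 1]
            return x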

  2. IMPROVING TACONITE PROCESSING PLANT EFFICIENCY BY COMPUTER SIMULATION, Final Report

    SciTech Connect

    William M. Bond; Salih Ersayin

    2007-03-30

    This project involved industrial-scale testing of a mineral processing simulator to improve the efficiency of a taconite processing plant, namely the Minorca mine. The Concentrator Modeling Center at the Coleraine Minerals Research Laboratory, University of Minnesota Duluth, enhanced the capabilities of available software, Usim Pac, by developing the mathematical models needed for accurate simulation of taconite plants. This project provided funding for this technology to prove itself in the industrial environment. As the first step, data representing existing plant conditions were collected by sampling and sample analysis. Data were then balanced and provided a basis for assessing the efficiency of individual devices and of the plant as a whole, and for performing simulations aimed at improving plant efficiency. Performance evaluation served as a guide in developing alternative process strategies for more efficient production. A large number of computer simulations were then performed to quantify the benefits and effects of implementing these alternative schemes. Modification of makeup ball size was selected as the most feasible option for the target performance improvement. This was combined with replacement of existing hydrocyclones with more efficient ones. After plant implementation of these modifications, plant sampling surveys were carried out to validate the findings of the simulation-based study. Plant data showed very good agreement with the simulated data, confirming the results of the simulation. After the implementation of the modifications in the plant, several upstream bottlenecks became visible. Despite these bottlenecks limiting full capacity, a 7% improvement in concentrator energy efficiency was obtained. Further improvements in energy efficiency are expected in the near future. The success of this project demonstrated the feasibility of a simulation-based approach. Currently, the Center provides simulation-based service to all the iron ore mining companies operating in northern

  3. An Algorithm for Projecting Points onto a Patched CAD Model

    SciTech Connect

    Henshaw, W D

    2001-05-29

    We are interested in building structured overlapping grids for geometries defined by computer-aided-design (CAD) packages. Geometric information defining the boundary surfaces of a computational domain is often provided in the form of a collection of possibly hundreds of trimmed patches. The first step in building an overlapping volume grid on such a geometry is to build overlapping surface grids. A surface grid is typically built using hyperbolic grid generation; starting from a curve on the surface, a grid is grown by marching over the surface. A given hyperbolic grid will typically cover many of the underlying CAD surface patches. The fundamental operation needed for building surface grids is that of projecting a point in space onto the closest point on the CAD surface. We describe a fast algorithm for performing this projection; it makes use of a fairly coarse global triangulation of the CAD geometry. We describe how to build this global triangulation by first determining the connectivity of the CAD surface patches. This step is necessary since it is often the case that the CAD description contains no information specifying how a given patch connects to other neighboring patches. Determining the connectivity is difficult since the surface patches may contain mistakes such as gaps or overlaps between neighboring patches.
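
    Once the coarse global triangulation has narrowed the search to a few candidate triangles, the geometric primitive at the bottom of such a projection is the closest-point-on-triangle test. The sketch below is the standard barycentric region walk; it is a generic illustration, not the paper's algorithm, and the function name is an assumption.

        import numpy as np

        def closest_point_on_triangle(p, a, b, c):
            """Closest point to p on triangle (a, b, c), all 3-vectors."""
            ab, ac, ap = b - a, c - a, p - a
            d1, d2 = ab @ ap, ac @ ap
            if d1 <= 0 and d2 <= 0:
                return a                                  # vertex region A
            bp = p - b
            d3, d4 = ab @ bp, ac @ bp
            if d3 >= 0 and d4 <= d3:
                return b                                  # vertex region B
            vc = d1 * d4 - d3 * d2
            if vc <= 0 and d1 >= 0 and d3 <= 0:
                return a + (d1 / (d1 - d3)) * ab          # edge AB
            cp = p - c
            d5, d6 = ab @ cp, ac @ cp
            if d6 >= 0 and d5 <= d6:
                return c                                  # vertex region C
            vb = d5 * d2 - d1 * d6
            if vb <= 0 and d2 >= 0 and d6 <= 0:
                return a + (d2 / (d2 - d6)) * ac          # edge AC
            va = d3 * d6 - d5 * d4
            if va <= 0 and d4 - d3 >= 0 and d5 - d6 >= 0:
                w = (d4 - d3) / ((d4 - d3) + (d5 - d6))
                return b + w * (c - b)                    # edge BC
            denom = va + vb + vc                          # interior
            return a + ab * (vb / denom) + ac * (vc / denom)

    Projecting onto the CAD surface then amounts to finding the nearest triangle in the coarse triangulation and refining the result on the exact patch.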

  4. A computationally efficient modelling of laminar separation bubbles

    NASA Astrophysics Data System (ADS)

    Dini, Paolo; Maughmer, Mark D.

    1990-07-01

    In predicting the aerodynamic characteristics of airfoils operating at low Reynolds numbers, it is often important to account for the effects of laminar (transitional) separation bubbles. Previous approaches to the modelling of this viscous phenomenon range from fast but sometimes unreliable empirical correlations for the length of the bubble and the associated increase in momentum thickness, to more accurate but significantly slower displacement-thickness iteration methods employing inverse boundary-layer formulations in the separated regions. Since the penalty in computational time associated with the more general methods is unacceptable for airfoil design applications, use of an accurate yet computationally efficient model is highly desirable. To this end, a semi-empirical bubble model was developed and incorporated into the Eppler and Somers airfoil design and analysis program. The generality and the efficiency were achieved by successfully approximating the local viscous/inviscid interaction, the transition location, and the turbulent reattachment process within the framework of an integral boundary-layer method. Comparisons of the predicted aerodynamic characteristics with experimental measurements for several airfoils show excellent and consistent agreement for Reynolds numbers from 2,000,000 down to 100,000.

  5. A computationally efficient modelling of laminar separation bubbles

    NASA Astrophysics Data System (ADS)

    Dini, Paolo

    1990-08-01

    In predicting the aerodynamic characteristics of airfoils operating at low Reynolds numbers, it is often important to account for the effects of laminar (transitional) separation bubbles. Previous approaches to the modeling of this viscous phenomenon range from fast but sometimes unreliable empirical correlations for the length of the bubble and the associated increase in momentum thickness, to more accurate but significantly slower displacement thickness iteration methods employing inverse boundary layer formulations in the separated regions. Since the penalty in computational time associated with the more general methods is unacceptable for airfoil design applications, use of an accurate yet computationally efficient model is highly desirable. To this end, a semi-empirical bubble model was developed and incorporated into the Eppler and Somers airfoil design and analysis program. The generality and the efficiency were achieved by successfully approximating the local viscous/inviscid interaction, the transition location, and the turbulent reattachment process within the framework of an integral boundary-layer method. Comparisons of the predicted aerodynamic characteristics with experimental measurements for several airfoils show excellent and consistent agreement for Reynolds numbers from 2,000,000 down to 100,000.

  6. A computationally efficient modelling of laminar separation bubbles

    NASA Technical Reports Server (NTRS)

    Dini, Paolo; Maughmer, Mark D.

    1990-01-01

    In predicting the aerodynamic characteristics of airfoils operating at low Reynolds numbers, it is often important to account for the effects of laminar (transitional) separation bubbles. Previous approaches to the modelling of this viscous phenomenon range from fast but sometimes unreliable empirical correlations for the length of the bubble and the associated increase in momentum thickness, to more accurate but significantly slower displacement-thickness iteration methods employing inverse boundary-layer formulations in the separated regions. Since the penalty in computational time associated with the more general methods is unacceptable for airfoil design applications, use of an accurate yet computationally efficient model is highly desirable. To this end, a semi-empirical bubble model was developed and incorporated into the Eppler and Somers airfoil design and analysis program. The generality and the efficiency were achieved by successfully approximating the local viscous/inviscid interaction, the transition location, and the turbulent reattachment process within the framework of an integral boundary-layer method. Comparisons of the predicted aerodynamic characteristics with experimental measurements for several airfoils show excellent and consistent agreement for Reynolds numbers from 2,000,000 down to 100,000.

  7. Computationally efficient sub-band coding of ECG signals.

    PubMed

    Husøy, J H; Gjerde, T

    1996-03-01

    A data compression technique is presented for the compression of discrete-time electrocardiogram (ECG) signals. The compression system is based on sub-band coding, a technique traditionally used for compressing speech and images. The sub-band coder employs quadrature mirror filter banks (QMF) with up to 32 critically sampled sub-bands. Both finite impulse response (FIR) and the more computationally efficient infinite impulse response (IIR) filter banks are considered as candidates in a complete ECG coding system. The sub-bands are thresholded, quantized using uniform quantizers, and run-length coded. The output of the run-length coder is further compressed by a Huffman coder. Extensive simulations indicate that 16 sub-bands are a suitable choice for this application. Furthermore, IIR filter banks are preferable due to their superiority in terms of computational efficiency. We conclude that the present scheme, which is suitable for real-time implementation on a PC, can provide compression ratios between 5 and 15 without loss of clinical information. PMID:8673319
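
    The back end of the coder (thresholding, uniform quantization, run-length coding) is easy to sketch; the QMF analysis bank and the final Huffman stage are omitted. The dead-zone parameterization and names below are assumptions for illustration.

        import numpy as np

        def quantize_subband(x, threshold, step):
            """Zero out small coefficients, then quantize uniformly."""
            return np.where(np.abs(x) < threshold, 0,
                            np.round(x / step)).astype(int)

        def run_length_encode(q):
            """(value, run) pairs; the long zero runs produced by
            thresholding are what make this stage pay off."""
            runs, i = [], 0
            while i < len(q):
                j = i
                while j < len(q) and q[j] == q[i]:
                    j += 1
                runs.append((int(q[i]), j - i))
                i = j
            return runs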

  8. Efficient Computation of the Topology of Level Sets

    SciTech Connect

    Pascucci, V; Cole-McLaughlin, K

    2002-07-19

    This paper introduces two efficient algorithms that compute the Contour Tree of a 3D scalar field F and its augmented version with the Betti numbers of each isosurface. The Contour Tree is a fundamental data structure in scientific visualization that is used to pre-process the domain mesh to allow optimal computation of isosurfaces with minimal storage overhead. The Contour Tree can also be used to build user interfaces reporting the complete topological characterization of a scalar field, as shown in Figure 1. In the first part of the paper we present a new scheme that augments the Contour Tree with the Betti numbers of each isocontour in linear time. We show how to extend the scheme introduced in [3] with the Betti number computation without increasing its complexity. Thus we improve on the time complexity of our previous approach [8] from O(m log m) to O(n log n + m), where m is the number of tetrahedra and n is the number of vertices in the domain of F. In the second part of the paper we introduce a new divide-and-conquer algorithm that computes the Augmented Contour Tree for scalar fields defined on rectilinear grids. The central part of the scheme computes the output contour tree by merging two intermediate contour trees and is independent of the interpolant. In this way we confine any knowledge regarding a specific interpolant to an oracle that computes the tree for a single cell. We have implemented this oracle for the trilinear interpolant and plan to replace it with higher-order interpolants when needed. The complexity of the scheme is O(n + t log n), where t is the number of critical points of F. This allows, for the first time, computation of the Contour Tree in linear time in many practical cases, when t = O(n^(1-ε)). We report the running times for a parallel implementation of our algorithm, showing good scalability with the number of processors.
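
    For orientation, the sketch below builds one half of a contour tree, the join tree, for a scalar field on an arbitrary graph, using the classic union-find sweep from high to low values. The paper's divide-and-conquer merge, Betti-number augmentation, and interpolant oracle are well beyond this illustration, and degree-2 arc contraction is omitted.

        def join_tree(values, neighbors):
            """values: scalar per vertex; neighbors: adjacency list.
            Sweeps vertices by decreasing value; union-find tracks
            superlevel-set components and an arc is recorded whenever a
            vertex extends or merges components."""
            order = sorted(range(len(values)), key=lambda v: -values[v])
            parent, lowest, arcs = {}, {}, []

            def find(v):
                while parent[v] != v:
                    parent[v] = parent[parent[v]]     # path halving
                    v = parent[v]
                return v

            for v in order:
                parent[v] = v
                lowest[v] = v
                for u in neighbors[v]:
                    if u not in parent:               # u lies below v
                        continue
                    ru, rv = find(u), find(v)
                    if ru != rv:
                        arcs.append((lowest[ru], v))  # components meet at v
                        parent[ru] = rv
                        lowest[rv] = v
            return arcs

    The split tree is obtained by sweeping in the opposite direction, and the two are then merged into the contour tree.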

  9. Computationally efficient implementation of combustion chemistry in parallel PDF calculations

    NASA Astrophysics Data System (ADS)

    Lu, Liuyan; Lantz, Steven R.; Ren, Zhuyin; Pope, Stephen B.

    2009-08-01

    In parallel calculations of combustion processes with realistic chemistry, the serial in situ adaptive tabulation (ISAT) algorithm [S.B. Pope, Computationally efficient implementation of combustion chemistry using in situ adaptive tabulation, Combustion Theory and Modelling, 1 (1997) 41-63; L. Lu, S.B. Pope, An improved algorithm for in situ adaptive tabulation, Journal of Computational Physics 228 (2009) 361-386] substantially speeds up the chemistry calculations on each processor. To improve the parallel efficiency of large ensembles of such calculations in parallel computations, in this work the ISAT algorithm is extended to the multi-processor environment, with the aim of minimizing the wall-clock time required for the whole ensemble. Parallel ISAT strategies are developed by combining the existing serial ISAT algorithm with different distribution strategies, namely purely local processing (PLP), uniformly random distribution (URAN), and preferential distribution (PREF). The distribution strategies enable the queued load redistribution of chemistry calculations among processors using message passing. They are implemented in the software x2f_mpi, which is a Fortran 95 library for facilitating many parallel evaluations of a general vector function. The relative performance of the parallel ISAT strategies is investigated in different computational regimes via the PDF calculations of multiple partially stirred reactors burning methane/air mixtures. The results show that the performance of ISAT with a fixed distribution strategy depends strongly on the computational regime, that is, on how much memory is available and how much overlap exists between the tabulated information on different processors. No single fixed strategy consistently achieves good performance in all the regimes. Therefore, an adaptive distribution strategy, which blends PLP, URAN and PREF, is devised and implemented. It yields consistently good performance in all regimes. In the adaptive parallel
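
    A toy sketch of two of the distribution strategies named above: PLP keeps each chemistry work item on the rank that owns the particle, while URAN spreads the global work list uniformly at random to balance load. PREF, which favors ranks whose ISAT tables already cover an item, needs table contents and is omitted, as is all message passing; the names are assumptions.

        import random

        def distribute(tasks_per_rank, strategy, seed=0):
            """tasks_per_rank: list, indexed by owning rank, of lists of
            work items.  Returns the new per-rank work lists."""
            n_ranks = len(tasks_per_rank)
            if strategy == 'PLP':                 # purely local processing
                return [list(t) for t in tasks_per_rank]
            if strategy == 'URAN':                # uniform random spread
                pool = [t for rank in tasks_per_rank for t in rank]
                random.Random(seed).shuffle(pool)
                return [pool[r::n_ranks] for r in range(n_ranks)]
            raise ValueError(strategy)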

  10. Recent Algorithmic and Computational Efficiency Improvements in the NIMROD Code

    NASA Astrophysics Data System (ADS)

    Plimpton, S. J.; Sovinec, C. R.; Gianakon, T. A.; Parker, S. E.

    1999-11-01

    Extreme anisotropy and temporal stiffness impose severe challenges to simulating low-frequency, nonlinear behavior in magnetized fusion plasmas. To address these challenges in computations of realistic experiment configurations, NIMROD [Glasser, et al., Plasma Phys. Control. Fusion 41 (1999) A747] uses a time-split, semi-implicit advance of the two-fluid equations for magnetized plasmas with a finite element/Fourier series spatial representation. The stiffness and anisotropy lead to ill-conditioned linear systems of equations, and they emphasize any truncation errors that may couple different modes of the continuous system. Recent work significantly improves NIMROD's performance in these areas. Implementing a parallel global preconditioning scheme in structured-grid regions permits scaling to large problems and large time steps, which are critical for achieving realistic S-values. In addition, coupling to the AZTEC parallel linear solver package now permits efficient computation with regions of unstructured grid. Changes in the time-splitting scheme improve numerical behavior in simulations with strong flow, and quadratic basis elements are being explored for accuracy. Different numerical forms of anisotropic thermal conduction, critical for slow island evolution, are compared. Algorithms for including gyrokinetic ions in the finite element computations are discussed.

  11. Efficient Homotopy Continuation Algorithms with Application to Computational Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Brown, David A.

    New homotopy continuation algorithms are developed and applied to a parallel implicit finite-difference Newton-Krylov-Schur external aerodynamic flow solver for the compressible Euler, Navier-Stokes, and Reynolds-averaged Navier-Stokes equations with the Spalart-Allmaras one-equation turbulence model. Many new analysis tools, calculations, and numerical algorithms are presented for the study and design of efficient and robust homotopy continuation algorithms applicable to solving very large and sparse nonlinear systems of equations. Several specific homotopies are presented and studied, and a methodology is presented for assessing the suitability of specific homotopies for homotopy continuation. A new class of homotopy continuation algorithms, referred to as monolithic homotopy continuation algorithms, is developed. These algorithms differ from classical predictor-corrector algorithms by combining the predictor and corrector stages into a single update, significantly reducing the amount of computation and avoiding wasted computational effort resulting from over-solving in the corrector phase. The new algorithms are also simpler from a user perspective, with fewer input parameters, which also improves the user's ability to choose effective parameters on the first flow solve attempt. Conditional convergence is proved analytically and studied numerically for the new algorithms. The performance of a fully-implicit monolithic homotopy continuation algorithm is evaluated for several inviscid, laminar, and turbulent flows over NACA 0012 airfoils and ONERA M6 wings. The monolithic algorithm is demonstrated to be more efficient than the predictor-corrector algorithm for all applications investigated. It is also demonstrated to be more efficient than the widely-used pseudo-transient continuation algorithm for all inviscid and laminar cases investigated, and good performance scaling with grid refinement is demonstrated for the inviscid cases. Performance is also demonstrated
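
    For context, classical continuation on the convex homotopy H(x, λ) = λF(x) + (1 − λ)G(x) steps λ from 0 to 1 and corrects with Newton at each step; the monolithic algorithms developed here fuse the predictor and corrector into a single update. The sketch below is only the textbook baseline being improved upon, written for a small dense system; all names are illustrative.

        import numpy as np

        def continuation(F, JF, G, JG, x0, steps=50, iters=20, tol=1e-10):
            """Trace H(x, lam) = lam*F(x) + (1 - lam)*G(x) = 0 from lam = 0
            to lam = 1.  G must vanish at x0 (e.g. G(x) = x - x0)."""
            x = np.asarray(x0, dtype=float)
            for lam in np.linspace(0.0, 1.0, steps + 1)[1:]:
                for _ in range(iters):                    # Newton corrector
                    H = lam * F(x) + (1 - lam) * G(x)
                    if np.linalg.norm(H) < tol:
                        break
                    J = lam * JF(x) + (1 - lam) * JG(x)
                    x = x - np.linalg.solve(J, H)
            return x

    Over-solving in the inner Newton loop is precisely the waste the monolithic update is designed to avoid.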

  12. Schools (Students) Exchanging CAD/CAM Files over the Internet.

    ERIC Educational Resources Information Center

    Mahoney, Gary S.; Smallwood, James E.

    This document discusses how students and schools can benefit from exchanging computer-aided design/computer-aided manufacturing (CAD/CAM) files over the Internet, explains how files are exchanged, and examines the problem of selected hardware/software incompatibility. Key terms associated with information search services are defined, and several…

  13. Efficient Universal Computing Architectures for Decoding Neural Activity

    PubMed Central

    Rapoport, Benjamin I.; Turicchia, Lorenzo; Wattanapanitch, Woradorn; Davidson, Thomas J.; Sarpeshkar, Rahul

    2012-01-01

    The ability to decode neural activity into meaningful control signals for prosthetic devices is critical to the development of clinically useful brain–machine interfaces (BMIs). Such systems require input from tens to hundreds of brain-implanted recording electrodes in order to deliver robust and accurate performance; in serving that primary function they should also minimize power dissipation in order to avoid damaging neural tissue; and they should transmit data wirelessly in order to minimize the risk of infection associated with chronic, transcutaneous implants. Electronic architectures for brain–machine interfaces must therefore minimize size and power consumption, while maximizing the ability to compress data to be transmitted over limited-bandwidth wireless channels. Here we present a system of extremely low computational complexity, designed for real-time decoding of neural signals, and suited for highly scalable implantable systems. Our programmable architecture is an explicit implementation of a universal computing machine emulating the dynamics of a network of integrate-and-fire neurons; it requires no arithmetic operations except for counting, and decodes neural signals using only computationally inexpensive logic operations. The simplicity of this architecture does not compromise its ability to compress raw neural data by factors greater than . We describe a set of decoding algorithms based on this computational architecture, one designed to operate within an implanted system, minimizing its power consumption and data transmission bandwidth; and a complementary set of algorithms for learning, programming the decoder, and postprocessing the decoded output, designed to operate in an external, nonimplanted unit. The implementation of the implantable portion is estimated to require fewer than 5000 operations per second. A proof-of-concept, 32-channel field-programmable gate array (FPGA) implementation of this portion is consequently energy efficient
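
    A toy flavor of the counting-only computation described above: integrate-and-fire units that accumulate integer spike counts, leak, and emit an event on a threshold crossing. This is not the paper's architecture, only a minimal illustration of decoding with additions and comparisons; the names and the leak/reset scheme are assumptions.

        def if_decode(spike_bins, weights, threshold, leak=1):
            """spike_bins: one integer spike-count vector per time bin.
            weights: one integer weight vector per output unit (with 0/1
            weights the inner sum reduces to pure counting).  Returns the
            indices of the units that fire in each bin."""
            v = [0] * len(weights)
            events = []
            for counts in spike_bins:
                fired = []
                for k, w in enumerate(weights):
                    v[k] += sum(wi * ci for wi, ci in zip(w, counts))
                    v[k] = max(v[k] - leak, 0)            # integer leak
                    if v[k] >= threshold:
                        v[k] = 0                          # reset on fire
                        fired.append(k)
                events.append(fired)
            return events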

  14. Computer Aided Drafting. Instructor's Guide.

    ERIC Educational Resources Information Center

    Henry, Michael A.

    This guide is intended for use in introducing students to the operation and applications of computer-aided drafting (CAD) systems. The following topics are covered in the individual lessons: understanding CAD (CAD versus traditional manual drafting and care of software and hardware); using the components of a CAD system (primary and other input…

  15. The Efficiency of Various Computers and Optimizations in Performing Finite Element Computations

    NASA Technical Reports Server (NTRS)

    Marcus, Martin H.; Broduer, Steve (Technical Monitor)

    2001-01-01

    With the advent of computers with many processors, it becomes unclear how to best exploit this advantage. For example, matrices can be inverted by applying several processors to each vector operation, or one processor can be applied to each matrix. The former approach has diminishing returns beyond a handful of processors, but how many depends on the computer architecture. Applying one processor to each matrix is feasible with enough RAM and scratch disk space, but the speed at which this is done is found to vary by a factor of three depending on how it is done. The cost of the computer must also be taken into account. A computer with many processors and fast interprocessor communication is much more expensive than the same computer and processors with slow interprocessor communication. Consequently, for problems that require several matrices to be inverted, the best speed per dollar is found to come from several small workstations networked together, such as in a Beowulf cluster. Since these machines typically have two processors per node, each matrix is most efficiently inverted with no more than two processors assigned to it.

  16. Optimization of computation efficiency in underwater acoustic navigation system.

    PubMed

    Lee, Hua

    2016-04-01

    This paper presents a technique for the estimation of the relative bearing angle between an unmanned underwater vehicle (UUV) and the base station for homing and docking operations. The key requirements of this project include computational efficiency and estimation accuracy for direct implementation on the UUV electronic hardware, subject to severe constraints: the physical limits of the hardware imposed by the size and dimensions of the UUV housing, the electric power budget set by the required UUV survey duration and range coverage, and the heat dissipation of the hardware. Subsequent to the design and development of the algorithm, two phases of experiments were conducted to illustrate the feasibility and capability of this technique. The presentation of this paper includes system modeling, mathematical analysis, and results from laboratory experiments and full-scale sea tests. PMID:27106337
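
    The record does not give the estimator itself, so the following is only a textbook two-element baseline for the same task: estimate the inter-element delay from the cross-correlation peak and convert it to a bearing with the far-field relation sin θ = cτ/d. The sampling rate, element spacing, and sound speed are assumed parameters.

        import numpy as np

        def bearing_two_elements(s1, s2, fs, spacing, c=1500.0):
            """Bearing (radians) from two hydrophone channels sampled
            at fs Hz, a distance `spacing` metres apart."""
            xc = np.correlate(s1, s2, mode='full')
            lag = np.argmax(xc) - (len(s2) - 1)   # delay in samples
            tau = lag / fs
            return np.arcsin(np.clip(c * tau / spacing, -1.0, 1.0))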

  17. Productivity improvements through the use of CAD/CAM

    NASA Astrophysics Data System (ADS)

    Wehrman, M. D.

    This paper focuses on Computer Aided Design/Computer Aided Manufacturing (CAD/CAM) productivity improvements that occurred in the Boeing Commercial Airplane Company (BCAC) between 1979 and 1983, with a look at future direction. Since the introduction of numerically controlled machinery in the 1950s, a wide range of engineering and manufacturing applications has evolved. The main portion of this paper includes a summarized and illustrated cross-section of these applications, touching on benefits such as reduced tooling, shortened flow time, increased accuracy, and reduced labor hours. The current CAD/CAM integration activity, directed toward capitalizing on this productivity in the future, is addressed.

  18. Efficient Computer Network Anomaly Detection by Changepoint Detection Methods

    NASA Astrophysics Data System (ADS)

    Tartakovsky, Alexander G.; Polunchenko, Aleksey S.; Sokolov, Grigory

    2013-02-01

    We consider the problem of efficient on-line anomaly detection in computer network traffic. The problem is approached statistically, as one of sequential (quickest) changepoint detection. A multi-cyclic setting of quickest change detection is a natural fit for this problem. We propose a novel score-based multi-cyclic detection algorithm. The algorithm is based on the so-called Shiryaev-Roberts procedure. This procedure is as easy to employ in practice and as computationally inexpensive as the popular Cumulative Sum chart and the Exponentially Weighted Moving Average scheme. The likelihood-ratio-based Shiryaev-Roberts procedure has appealing optimality properties; in particular, it is exactly optimal in a multi-cyclic setting geared to detect a change occurring at a far time horizon. It is therefore expected that an intrusion detection algorithm based on the Shiryaev-Roberts procedure will perform better than other detection schemes. This is confirmed experimentally for real traces. We also discuss the possibility of complementing our anomaly detection algorithm with a spectral-signature intrusion detection system with false alarm filtering and true attack confirmation capability, so as to obtain a synergistic system.
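
    The Shiryaev-Roberts statistic admits a one-line recursion, R_n = (1 + R_{n-1}) Λ_n, with an alarm whenever R_n reaches a threshold A; restarting from zero after each alarm yields the multi-cyclic scheme discussed above. A minimal sketch for a Gaussian mean shift with known pre- and post-change parameters (an assumed instantiation, not the paper's traffic model):

        import numpy as np

        def shiryaev_roberts(x, mu0, mu1, sigma, A):
            """Return alarm indices for the repeated SR procedure applied
            to observations x, testing N(mu0, s^2) vs N(mu1, s^2)."""
            alarms, R = [], 0.0
            for n, xn in enumerate(x):
                # likelihood ratio of a single observation
                lam = np.exp((mu1 - mu0) * (2 * xn - mu0 - mu1)
                             / (2 * sigma ** 2))
                R = (1.0 + R) * lam
                if R >= A:
                    alarms.append(n)
                    R = 0.0                       # restart the cycle
            return alarms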

  19. Efficient computation of spontaneous emission dynamics in arbitrary photonic structures

    NASA Astrophysics Data System (ADS)

    Teimourpour, M. H.; El-Ganainy, R.

    2015-12-01

    Defining a quantum mechanical wavefunction for photons is one of the remaining open problems in quantum physics. Thus quantum states of light are usually treated within the realm of second quantization. Consequently, spontaneous emission (SE) in arbitrary photonic media is often described by Fock space Hamiltonians. Here, we present a real space formulation of the SE process that can capture the physics of the problem accurately under different coupling conditions. Starting from first principles, we map the unitary evolution of a dressed two-level quantum emitter onto the problem of electromagnetic radiation from a self-interacting complex harmonic oscillator. Our formalism naturally leads to an efficient computational scheme of SE dynamics using finite difference time domain method without the need for calculating the photonic eigenmodes of the surrounding environment. In contrast to earlier investigations, our computational framework provides a unified numerical treatment for both weak and strong coupling regimes alike. We illustrate the versatility of our scheme by considering several different examples.

  20. An efficient parallel algorithm for accelerating computational protein design

    PubMed Central

    Zhou, Yichao; Xu, Wei; Donald, Bruce R.; Zeng, Jianyang

    2014-01-01

    Motivation: Structure-based computational protein design (SCPR) is an important topic in protein engineering. Under the assumption of a rigid backbone and a finite set of discrete conformations of side-chains, various methods have been proposed to address this problem. A popular method is to combine the dead-end elimination (DEE) and A* tree search algorithms, which provably finds the global minimum energy conformation (GMEC) solution. Results: In this article, we improve the efficiency of computing A* heuristic functions for protein design and propose a variant of the A* algorithm in which the search process can be performed on a single GPU in a massively parallel fashion. In addition, we address the problem of excessive memory use in A* search. As a result, our enhancements can achieve a significant speedup of the A*-based protein design algorithm by four orders of magnitude on large-scale test data through pre-computation and parallelization, while still maintaining an acceptable memory overhead. We also show that our parallel A* search algorithm could be successfully combined with iMinDEE, a state-of-the-art DEE criterion, for rotamer pruning to further improve SCPR with the consideration of continuous side-chain flexibility. Availability: Our software is available and distributed open-source under the GNU Lesser General Public License Version 2.1 (GNU, February 1999). The source code can be downloaded from http://www.cs.duke.edu/donaldlab/osprey.php or http://iiis.tsinghua.edu.cn/~compbio/software.html. Contact: zengjy321@tsinghua.edu.cn Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24931991
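
    For orientation, the search component referred to above is plain A*; the paper's contributions lie in computing the heuristic efficiently and in the massively parallel GPU variant, neither of which this textbook sketch attempts. Admissibility of the heuristic is what guarantees that the first goal popped is the global minimum energy conformation.

        import heapq
        import itertools

        def astar(start, is_goal, successors, heuristic):
            """successors(s) yields (next_state, edge_cost) pairs; states
            must be hashable.  Returns (path, cost) or (None, inf)."""
            tie = itertools.count()               # heap tie-breaker
            frontier = [(heuristic(start), next(tie), 0.0, start, [start])]
            best_g = {start: 0.0}
            while frontier:
                _, _, g, s, path = heapq.heappop(frontier)
                if is_goal(s):
                    return path, g
                for nxt, cost in successors(s):
                    g2 = g + cost
                    if g2 < best_g.get(nxt, float('inf')):
                        best_g[nxt] = g2
                        heapq.heappush(frontier, (g2 + heuristic(nxt),
                                                  next(tie), g2, nxt,
                                                  path + [nxt]))
            return None, float('inf')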

  1. A method for assurance of image integrity in CAD-PACS integration

    NASA Astrophysics Data System (ADS)

    Zhou, Zheng

    2007-03-01

    Computer Aided Detection/Diagnosis (CAD) can greatly assist in the clinical decision making process, and therefore, has drawn tremendous research efforts. However, integrating independent CAD workstation results with the clinical diagnostic workflow still remains challenging. We have presented a CAD-PACS integration toolkit that complies with DICOM standard and IHE profiles. One major issue in CAD-PACS integration is the security of the images used in CAD post-processing and the corresponding CAD result images. In this paper, we present a method for assuring the integrity of both DICOM images used in CAD post-processing and the CAD image results that are in BMP or JPEG format. The method is evaluated in a PACS simulator that simulates clinical PACS workflow. It can also be applied to multiple CAD applications that are integrated with the PACS simulator. The successful development and evaluation of this method will provide a useful approach for assuring image integrity of the CAD-PACS integration in clinical diagnosis.
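
    The paper's exact mechanism is not spelled out in this abstract, but a common building block for assuring image integrity is a cryptographic digest recorded when the image is produced and re-checked on use (often wrapped in a digital signature). A minimal sketch with Python's hashlib, applicable to DICOM, BMP, and JPEG files alike:

        import hashlib

        def file_digest(path, algo='sha256'):
            """Hex digest of a file's bytes, streamed in 1 MiB chunks."""
            h = hashlib.new(algo)
            with open(path, 'rb') as f:
                for chunk in iter(lambda: f.read(1 << 20), b''):
                    h.update(chunk)
            return h.hexdigest()

        def verify(path, expected):
            """True if the file is bit-identical to when `expected`
            was recorded, i.e. its integrity is intact."""
            return file_digest(path) == expected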

  2. Textbook Multigrid Efficiency for Computational Fluid Dynamics Simulations

    NASA Technical Reports Server (NTRS)

    Brandt, Achi; Thomas, James L.; Diskin, Boris

    2001-01-01

    Considerable progress over the past thirty years has been made in the development of large-scale computational fluid dynamics (CFD) solvers for the Euler and Navier-Stokes equations. Computations are used routinely to design the cruise shapes of transport aircraft through complex-geometry simulations involving the solution of 25-100 million equations; in this arena the number of wind-tunnel tests for a new design has been substantially reduced. However, simulations of the entire flight envelope of the vehicle, including maximum lift, buffet onset, flutter, and control effectiveness have not been as successful in eliminating the reliance on wind-tunnel testing. These simulations involve unsteady flows with more separation and stronger shock waves than at cruise. The main reasons limiting further inroads of CFD into the design process are: (1) the reliability of turbulence models; and (2) the time and expense of the numerical simulation. Because of the prohibitive resolution requirements of direct simulations at high Reynolds numbers, transition and turbulence modeling is expected to remain an issue for the near term. The focus of this paper addresses the latter problem by attempting to attain optimal efficiencies in solving the governing equations. Typically current CFD codes based on the use of multigrid acceleration techniques and multistage Runge-Kutta time-stepping schemes are able to converge lift and drag values for cruise configurations within approximately 1000 residual evaluations. An optimally convergent method is defined as having textbook multigrid efficiency (TME), meaning the solutions to the governing system of equations are attained in a computational work which is a small (less than 10) multiple of the operation count in the discretized system of equations (residual equations). In this paper, a distributed relaxation approach to achieving TME for the Reynolds-averaged Navier-Stokes (RANS) equations is discussed along with the foundations that form the
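
    To make the notion concrete, here is a minimal 1-D Poisson V-cycle (weighted-Jacobi smoothing, full-weighting restriction, linear interpolation). TME means that a handful of such cycles, each costing a small multiple of one residual evaluation, solves to discretization accuracy. The sketch assumes a grid of 2^k + 1 points with zero Dirichlet boundaries and is, of course, far from the distributed-relaxation RANS solver discussed above.

        import numpy as np

        def v_cycle(u, f, h, nu=3):
            """One V-cycle for -u'' = f; u and f include the boundary
            points, h is the mesh width.  Returns the improved u."""
            def smooth(u, f, h, iters):
                for _ in range(iters):            # weighted Jacobi, w = 2/3
                    u[1:-1] += (2.0 / 3.0) * 0.5 * (h * h * f[1:-1]
                                + u[:-2] + u[2:] - 2.0 * u[1:-1])
                return u

            u = smooth(u, f, h, nu)
            if len(u) <= 3:
                return u
            r = np.zeros_like(u)                  # fine-grid residual
            r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
            rc = np.zeros((len(u) - 1) // 2 + 1)  # full-weighting restriction
            rc[1:-1] = 0.25 * (r[1:-3:2] + 2.0 * r[2:-2:2] + r[3:-1:2])
            ec = v_cycle(np.zeros_like(rc), rc, 2.0 * h, nu)
            e = np.zeros_like(u)                  # linear interpolation
            e[2:-2:2] = ec[1:-1]
            e[1:-1:2] = 0.5 * (ec[:-1] + ec[1:])
            return smooth(u + e, f, h, nu)

    Iterating v_cycle until the residual stops decreasing typically takes only a few cycles when the components are well tuned, which is the TME regime.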

  3. CAD data exchange with Martin Marietta Energy Systems, Inc., Oak Ridge, TN

    SciTech Connect

    Smith, K.L.

    1994-10-01

    This document has been developed to provide guidance in the interchange of electronic CAD data with Martin Marietta Energy Systems, Inc., Oak Ridge, Tennessee. It is not meant to be as comprehensive as the existing standards and specifications, but to provide a minimum set of practices that will enhance the success of the CAD data exchange. It is now a Department of Energy (DOE) Oak Ridge Field Office requirement that Architect-Engineering (A-E) firms prepare all new drawings using a Computer Aided Design (CAD) system that is compatible with the Facility Manager's (FM) CAD system. For Oak Ridge facilities, the CAD system used for facility design by the FM, Martin Marietta Energy Systems, Inc., is Intergraph. The format for interchange of CAD data for Oak Ridge facilities will be the Intergraph MicroStation/IGDS format.

  4. Computer-Aided Apparel Design in University Curricula.

    ERIC Educational Resources Information Center

    Belleau, Bonnie D.; Bourgeois, Elva B.

    1991-01-01

    As computer-assisted design (CAD) becomes an integral part of the fashion industry, universities must integrate CAD into the apparel curriculum. Louisiana State University's curriculum enables students to collaborate in CAD problem solving with industry personnel. (SK)

  5. Use of CAD output to guide the intelligent display of digital mammograms

    NASA Astrophysics Data System (ADS)

    Bloomquist, Aili K.; Yaffe, Martin J.; Mawdsley, Gordon E.; Morgan, Trevor; Rico, Dan; Jong, Roberta A.

    2003-05-01

    For digital mammography to be efficient, methods are needed to choose an initial default image presentation that maximizes the amount of relevant information perceived by the radiologist and minimizes the amount of time spent adjusting the image display parameters. The purpose of this work is to explore the possibility of using the output of computer aided detection (CAD) software to guide image enhancement and presentation. A set of 16 digital mammograms with lesions of known pathology was used to develop and evaluate an enhancement and display protocol to improve the initial softcopy presentation of digital mammograms. Lesions were identified by CAD and the DICOM structured report produced by the CAD program was used to determine what enhancement algorithm should be applied in the identified regions of the image. An improved version of contrast limited adaptive histogram equalization (CLAHE) is used to enhance calcifications. For masses, the image is first smoothed using a non-linear diffusion technique; subsequently, local contrast is enhanced with a method based on morphological operators. A non-linear lookup table is automatically created to optimize the contrast in the regions of interest (detected lesions) without losing the context of the periphery of the breast. The effectiveness of the enhancement will be compared with the default presentation of the images using a forced choice preference study.
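
    A sketch of the overall idea of CAD-guided enhancement: scikit-image provides a CLAHE implementation (exposure.equalize_adapthist), and the (row, col, radius) marks below stand in for lesion locations parsed from the CAD DICOM structured report (parsing not shown). The clip limit, padding, and mark format are assumptions, and the paper's improved CLAHE and mass-specific pipeline are not reproduced.

        import numpy as np
        from skimage import exposure

        def enhance_cad_regions(image, cad_marks, pad=32):
            """image: 2-D float array scaled to [0, 1].
            cad_marks: iterable of (row, col, radius) lesion marks.
            Applies CLAHE only in a window around each CAD mark."""
            out = image.copy()
            for r, c, rad in cad_marks:
                r0 = max(r - rad - pad, 0)
                r1 = min(r + rad + pad, image.shape[0])
                c0 = max(c - rad - pad, 0)
                c1 = min(c + rad + pad, image.shape[1])
                out[r0:r1, c0:c1] = exposure.equalize_adapthist(
                    image[r0:r1, c0:c1], clip_limit=0.02)
            return out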

  6. Computer aided design of computer generated holograms for electron beam fabrication.

    PubMed

    Urquhart, K S; Lee, S H; Guest, C C; Feldman, M R; Farhoosh, H

    1989-08-15

    Computer Aided Design (CAD) systems that have been developed for electrical and mechanical design tasks are also effective tools for the process of designing Computer Generated Holograms (CGHs), particularly when these holograms are to be fabricated using electron beam lithography. CAD workstations provide efficient and convenient means of computing, storing, displaying, and preparing for fabrication many of the features that are common to CGH designs. Experience gained in the process of designing CGHs with various types of encoding methods is presented. Suggestions are made so that future workstations may further accommodate the CGH design process. PMID:20555710

  7. Computer Aided Design of Computer Generated Holograms for electron beam fabrication

    NASA Technical Reports Server (NTRS)

    Urquhart, Kristopher S.; Lee, Sing H.; Guest, Clark C.; Feldman, Michael R.; Farhoosh, Hamid

    1989-01-01

    Computer Aided Design (CAD) systems that have been developed for electrical and mechanical design tasks are also effective tools for the process of designing Computer Generated Holograms (CGHs), particularly when these holograms are to be fabricated using electron beam lithography. CAD workstations provide efficient and convenient means of computing, storing, displaying, and preparing for fabrication many of the features that are common to CGH designs. Experience gained in the process of designing CGHs with various types of encoding methods is presented. Suggestions are made so that future workstations may further accommodate the CGH design process.

  8. The Use of a Parametric Feature Based CAD System to Teach Introductory Engineering Graphics.

    ERIC Educational Resources Information Center

    Howell, Steven K.

    1995-01-01

    Describes the use of a parametric-feature-based computer-aided design (CAD) system, AutoCAD Designer, in teaching concepts of three-dimensional geometric modeling and design. Allows engineering graphics to go beyond the role of documentation and communication and allows an engineer to actually build a virtual prototype of a design idea and…

  9. Teaching an Introductory CAD Course with the System-Engineering Approach.

    ERIC Educational Resources Information Center

    Pao, Y. C.

    1985-01-01

    Advocates that introductory computer aided design (CAD) courses be incorporated into engineering curricula in close conjunction with the system dynamics course. Block diagram manipulation/Bode analysis and finite elementary analysis are used as examples to illustrate the interdisciplinary nature of CAD teaching. (JN)

  10. 3D-CAD Effects on Creative Design Performance of Different Spatial Abilities Students

    ERIC Educational Resources Information Center

    Chang, Y.

    2014-01-01

    Students' creativity is an important focus globally and is interrelated with students' spatial abilities. Additionally, three-dimensional computer-assisted drawing (3D-CAD) overcomes barriers to spatial expression during the creative design process. Does 3D-CAD affect students' creative abilities? The purpose of this study was to…

  11. A computationally efficient particle-simulation method suited to vector-computer architectures

    SciTech Connect

    McDonald, J.D.

    1990-01-01

    Recent interest in a National Aero-Space Plane (NASP) and various Aero-assisted Space Transfer Vehicles (ASTVs) presents the need for a greater understanding of high-speed rarefied flight conditions. Particle simulation techniques such as the Direct Simulation Monte Carlo (DSMC) method are well suited to such problems, but the high cost of computation limits the application of the methods to two-dimensional or very simple three-dimensional problems. This research re-examines the algorithmic structure of existing particle simulation methods and re-structures them to allow efficient implementation on vector-oriented supercomputers. A brief overview of the DSMC method and the Cray-2 vector computer architecture is provided, and the elements of the DSMC method that inhibit substantial vectorization are identified. One such element is the collision selection algorithm. A complete reformulation of the underlying kinetic theory shows that this may be efficiently vectorized for general gas mixtures. The mechanics of collisions are vectorizable in the DSMC method, but several optimizations are suggested that greatly enhance performance. This thesis also proposes a new mechanism for the exchange of energy between vibration and other energy modes. The developed scheme makes use of quantized vibrational states and is used in place of the Borgnakke-Larsen model. Finally, a simplified representation of physical space and boundary conditions is utilized to further reduce the computational cost of the developed method. Comparisons to solutions obtained from the DSMC method for the relaxation of internal energy modes in a homogeneous gas, as well as for single- and multiple-species shock wave profiles, are presented. Additionally, a large-scale simulation of the flow about the proposed Aeroassisted Flight Experiment (AFE) vehicle is included as an example of the new computational capability of the developed particle simulation method.

  12. On the Use of CAD-Native Predicates and Geometry in Surface Meshing

    NASA Technical Reports Server (NTRS)

    Aftosmis, M. J.

    1999-01-01

    Several paradigms for accessing computer-aided design (CAD) geometry during surface meshing for computational fluid dynamics are discussed. File translation, inconsistent geometry engines, and nonnative point construction are all identified as sources of nonrobustness. The paper argues in favor of accessing CAD parts and assemblies in their native format, without translation, and for the use of CAD-native predicates and constructors in surface mesh generation. The discussion also emphasizes the importance of examining the computational requirements for exact evaluation of triangulation predicates during surface meshing.

  13. Mammogram CAD, hybrid registration and iconic analysis

    NASA Astrophysics Data System (ADS)

    Boucher, A.; Cloppet, F.; Vincent, N.

    2013-03-01

    This paper aims to develop a computer-aided diagnosis (CAD) system based on a two-step methodology to register and analyze pairs of temporal mammograms. The concept of a "medical file", including all the previous medical information on a patient, enables joint analysis of different acquisitions taken at different times and the detection of significant modifications. The developed registration method aims to superimpose the different anatomical structures of the breast as well as possible. The registration is designed to remove the deformations caused by the acquisition process while preserving those due to breast changes indicative of malignancy. To reach this goal, a referent image is computed from control points based on anatomical features that are extracted automatically. The second image of the pair is then realigned on the referent image, using a coarse-to-fine approach, guided by expert knowledge, that allows both rigid and non-rigid transforms. The joint analysis detects the evolution between two images representing the same scene. To achieve this, it is important to know the limits of the registration error in order to adapt the observation scale. The approach used in this paper is based on a sparse image representation. Decomposed into regular patterns, the images are analyzed from a new angle. The evolution detection problem has many practical applications, especially in medical imaging. The CAD is evaluated using the recall and precision of detected differences in mammograms.

  14. The Effect of Computer Automation on Institutional Review Board (IRB) Office Efficiency

    ERIC Educational Resources Information Center

    Oder, Karl; Pittman, Stephanie

    2015-01-01

    Companies purchase computer systems to make their processes more efficient through automation. Some academic medical centers (AMC) have purchased computer systems for their institutional review boards (IRB) to increase efficiency and compliance with regulations. IRB computer systems are expensive to purchase, deploy, and maintain. An AMC should…

  15. New orthopaedic implant management tool for computer-assisted planning, navigation, and simulation: from implant CAD files to a standardized XML-based implant database.

    PubMed

    Sagbo, S; Blochaou, F; Langlotz, F; Vangenot, C; Nolte, L-P; Zheng, G

    2005-01-01

    Computer-Assisted Orthopaedic Surgery (CAOS) has made much progress over the last 10 years. Navigation systems have been recognized as important tools that help surgeons, and various such systems have been developed. A disadvantage of these systems is that they use non-standard formalisms and techniques. As a result, there are no standard concepts for implant and tool management or data formats to store information for use in 3D planning and navigation. We addressed these limitations and developed a practical and generic solution that offers benefits for surgeons, implant manufacturers, and CAS application developers. We developed a virtual implant database containing geometrical as well as calibration information for orthopedic implants and instruments, with a focus on trauma. This database has been successfully tested for various applications in the client/server mode. The implant information is not static, however, because manufacturers periodically revise their implants, resulting in the deletion of some implants and the introduction of new ones. Tracking these continuous changes and keeping CAS systems up to date is a tedious task if done manually. This leads to additional costs for system development, and some errors are inevitably generated due to the huge amount of information that has to be processed. To ease management with respect to implant life cycle, we developed a tool to assist end-users (surgeons, hospitals, CAS system providers, and implant manufacturers) in managing their implants. Our system can be used for pre-operative planning and intra-operative navigation, and also for any surgical simulation involving orthopedic implants. Currently, this tool allows addition of new implants, modification of existing ones, deletion of obsolete implants, export of a given implant, and also creation of backups. Our implant management system has been successfully tested in the laboratory with very promising results. It makes it possible to fill the current gap

  16. Correlating Trainee Attributes to Performance in 3D CAD Training

    ERIC Educational Resources Information Center

    Hamade, Ramsey F.; Artail, Hassan A.; Sikstrom, Sverker

    2007-01-01

    Purpose: The purpose of this exploratory study is to identify trainee attributes relevant for development of skills in 3D computer-aided design (CAD). Design/methodology/approach: Participants were trained to perform cognitive tasks of comparable complexity over time. Performance data were collected on the time needed to construct test models, and…

  17. The design and construction of the CAD-1 airship

    NASA Technical Reports Server (NTRS)

    Kleiner, H. J.; Schneider, R.; Duncan, J. L.

    1975-01-01

    The background history, design philosophy, and computer applications related to the design of the envelope shape, stress calculations, and flight trajectories of the CAD-1 airship, now under construction by Canadian Airship Development Corporation, are reported. A three-phase proposal for future development of larger cargo-carrying airships is included.

  18. Present State of CAD Teaching in Spanish Universities

    ERIC Educational Resources Information Center

    Garcia, Ramon Rubio; Santos, Ramon Gallego; Quiros, Javier Suarez; Penin, Pedro I. Alvarez

    2005-01-01

    During the 1990s, all Spanish Universities updated the syllabuses of their courses as a result of the entry into force of the new Organic Law of Universities ("Ley Organica de Universidades") and, for the first time, "Computer Assisted Design" (CAD) appears in the list of core subjects (compulsory teaching content set by the government) in many of…

  19. Alternatives for Saving and Viewing CAD Graphics for the Web.

    ERIC Educational Resources Information Center

    Harris, La Verne Abe; Sadowski, Mary A.

    2001-01-01

    Introduces some alternatives for preparing and viewing computer aided design (CAD) graphics for Internet output on a budget, without the fear of copyright infringement, and without having to go back to college to learn a complex graphic application. (Author/YDS)

  20. Program Evolves from Basic CAD to Total Manufacturing Experience

    ERIC Educational Resources Information Center

    Cassola, Joel

    2011-01-01

    Close to a decade ago, John Hersey High School (JHHS) in Arlington Heights, Illinois, made a transition from a traditional classroom-based pre-engineering program. The new program is geared towards helping students understand the entire manufacturing process. Previously, a JHHS student would design a project in computer-aided design (CAD) software…

  1. How to Quickly Import CAD Geometry into Thermal Desktop

    NASA Technical Reports Server (NTRS)

    Wright, Shonte; Beltran, Emilio

    2002-01-01

    There are several groups at JPL (Jet Propulsion Laboratory) that are committed to concurrent design efforts, two are featured here. Center for Space Mission Architecture and Design (CSMAD) enables the practical application of advanced process technologies in JPL's mission architecture process. Team I functions as an incubator for projects that are in the Discovery, and even pre-Discovery proposal stages. JPL's concurrent design environment is to a large extent centered on the CAD (Computer Aided Design) file. During concurrent design sessions CAD geometry is ported to other more specialized engineering design packages.

  2. How to Quickly Import CAD Geometry into Thermal Desktop

    NASA Astrophysics Data System (ADS)

    Wright, Shonte; Beltran, Emilio

    2002-07-01

    There are several groups at JPL (Jet Propulsion Laboratory) that are committed to concurrent design efforts, two are featured here. Center for Space Mission Architecture and Design (CSMAD) enables the practical application of advanced process technologies in JPL's mission architecture process. Team I functions as an incubator for projects that are in the Discovery, and even pre-Discovery proposal stages. JPL's concurrent design environment is to a large extent centered on the CAD (Computer Aided Design) file. During concurrent design sessions CAD geometry is ported to other more specialized engineering design packages.

  3. Building Efficient Wireless Infrastructures for Pervasive Computing Environments

    ERIC Educational Resources Information Center

    Sheng, Bo

    2010-01-01

    Pervasive computing is an emerging concept that thoroughly brings computing devices and the consequent technology into people's daily life and activities. Most of these computing devices are very small, sometimes even "invisible", and often embedded into the objects surrounding people. In addition, these devices usually are not isolated, but…

  4. Bendix CAD-CAM site plan

    SciTech Connect

    Smith, M.L.

    1982-12-01

    The Bendix Site Plan for CAD-CAM encompasses the development and integration of interactive graphics systems, factory data management systems, robotics, direct numerical control, automated inspection, factory automation, and shared data bases to achieve significant plant-wide gains in productivity. This plan does not address all current or planned computerization projects within our facility. A summary of planning proposals and rationale is presented in the following paragraphs. Interactive Graphics System (IGS) capability presently consists of two Applicon CAD systems and the CD-2000 software program processing on a time-shared CYBER 174 computer and a dedicated CYBER 173. Proposed plans include phased procurement through FY85 of additional computers and sufficient graphics terminals to support projected needs in drafting, tool/gage design, N/C programming, and process engineering. Planned procurement of additional computer equipment in FY86 and FY87 will provide the capacity necessary for a comprehensive graphics data base management system, computer-aided process planning graphics, and special graphics requirements in facilities and test equipment design. The overall IGS plan, designated BICAM (Bendix Integrated Computer Aided Manufacturing), will provide the capability and capacity to integrate manufacturing activities through a shared product data base and standardized data exchange format. Planned efforts in robotics will result in productive applications of low- to medium-technology robots beginning in FY82, and extending by FY85 to work-cell capabilities utilizing higher-technology robots with sensors such as vision and instrumented remote compliance devices. A number of robots are projected to be in service by 1990.

  5. Balancing Accuracy and Computational Efficiency for Ternary Gas Hydrate Systems

    NASA Astrophysics Data System (ADS)

    White, M. D.

    2011-12-01

    phase transitions. This paper describes and demonstrates a numerical solution scheme for ternary hydrate systems that seeks a balance between accuracy and computational efficiency. The scheme uses a generalized cubic equation of state, functional forms for the hydrate equilibria and cage occupancies, a variable-switching scheme for phase transitions, and kinetic exchange of hydrate formers (i.e., CH4, CO2, and N2) between the mobile phases (i.e., aqueous, liquid CO2, and gas) and the hydrate phase. Accuracy of the scheme will be evaluated by comparing property values and phase equilibria against experimental data. Computational efficiency of the scheme will be evaluated by comparing the base scheme against variants. The application of interest will be production from a natural gas hydrate deposit in a geologic formation, using the guest-molecule exchange process, where a mixture of CO2 and N2 is injected into the formation. During the guest-molecule exchange, CO2 and N2 will predominantly replace CH4 in the large and small cages of the sI structure, respectively.
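
    As an illustration of the generalized-cubic-equation-of-state ingredient, the sketch below solves the standard Peng-Robinson cubic for the compressibility factor Z of a pure component via numpy.roots; the scheme described above generalizes this to CH4/CO2/N2 mixtures with hydrate equilibria layered on top. The constants are the textbook Peng-Robinson values; this is not the paper's implementation.

        import numpy as np

        R = 8.314462618  # J/(mol K)

        def pr_z_factors(T, P, Tc, Pc, omega):
            """Real roots of the Peng-Robinson cubic in Z; the smallest and
            largest correspond to liquid-like and vapor-like phases."""
            kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega ** 2
            alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc))) ** 2
            a = 0.45724 * R ** 2 * Tc ** 2 / Pc * alpha
            b = 0.07780 * R * Tc / Pc
            A = a * P / (R * T) ** 2
            B = b * P / (R * T)
            coeffs = [1.0, -(1.0 - B), A - 3.0 * B ** 2 - 2.0 * B,
                      -(A * B - B ** 2 - B ** 3)]
            roots = np.roots(coeffs)
            return np.sort(roots[np.abs(roots.imag) < 1e-9].real)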

  6. CAD Skills Increased through Multicultural Design Project

    ERIC Educational Resources Information Center

    Clemons, Stephanie

    2006-01-01

    This article discusses how students in a college-entry-level CAD course researched four generations of their family histories and documented cultural and symbolic influences within their family backgrounds. AutoCAD software was then used to manipulate those cultural and symbolic images to create the design for a multicultural area rug. AutoCAD was…

  7. Cool-and Unusual-CAD Applications

    ERIC Educational Resources Information Center

    Calhoun, Ken

    2004-01-01

    This article describes several very useful applications of AutoCAD that may lie outside the normal scope of application. AutoCAD commands used in this article are based on AutoCAD 2000I. The author and his students used a Hewlett Packard 750C DesignJet plotter for plotting. (Contains 5 figures and 5 photos.)

  8. Comparative fracture strength analysis of Lava and Digident CAD/CAM zirconia ceramic crowns

    PubMed Central

    Kwon, Taek-Ka; Pak, Hyun-Soon; Han, Jung-Suk; Lee, Jai-Bong; Kim, Sung-Hun

    2013-01-01

    PURPOSE: All-ceramic crowns are subject to fracture during function. To minimize this common clinical complication, zirconium oxide has been used as the framework for all-ceramic crowns. The aim of this study was to compare the fracture strengths of two computer-aided design/computer-aided manufacturing (CAD/CAM) zirconia crown systems: Lava and Digident. MATERIALS AND METHODS: Twenty Lava CAD/CAM zirconia crowns and twenty Digident CAD/CAM zirconia crowns were fabricated. A metal die was also duplicated from the original prepared tooth for fracture testing. A universal testing machine was used to determine the fracture strength of the crowns. RESULTS: The mean fracture strengths were as follows: 54.9 ± 15.6 N for the Lava CAD/CAM zirconia crowns and 87.0 ± 16.0 N for the Digident CAD/CAM zirconia crowns. The difference between the mean fracture strengths of the Lava and Digident crowns was statistically significant (P<.001). Lava CAD/CAM zirconia crowns showed a complete fracture of both the veneering porcelain and the core whereas the Digident CAD/CAM zirconia crowns showed fracture only of the veneering porcelain. CONCLUSION: The fracture strengths of CAD/CAM zirconia crowns differ depending on the compatibility of the core material and the veneering porcelain. PMID:23755332

  9. Quality assurance and training procedures for computer-aided detection and diagnosis systems in clinical use.

    PubMed

    Huo, Zhimin; Summers, Ronald M; Paquerault, Sophie; Lo, Joseph; Hoffmeister, Jeffrey; Armato, Samuel G; Freedman, Matthew T; Lin, Jesse; Lo, Shih-Chung Ben; Petrick, Nicholas; Sahiner, Berkman; Fryd, David; Yoshida, Hiroyuki; Chan, Heang-Ping

    2013-07-01

    Computer-aided detection/diagnosis (CAD) is increasingly used for decision support by clinicians for detection and interpretation of diseases. However, there are no quality assurance (QA) requirements for CAD in clinical use at present. QA of CAD is important so that end users can be made aware of changes in CAD performance due to both intentional and unintentional causes. In addition, end-user training is critical to prevent improper use of CAD, which could potentially result in lower overall clinical performance. Research on QA of CAD and user training are limited to date. The purpose of this paper is to bring attention to these issues, inform the readers of the opinions of the members of the American Association of Physicists in Medicine (AAPM) CAD subcommittee, and thus stimulate further discussion in the CAD community on these topics. The recommendations in this paper are intended to be work items for AAPM task groups that will be formed to address QA and user training issues on CAD in the future. The work items may serve as a framework for the discussion and eventual design of detailed QA and training procedures for physicists and users of CAD. Some of the recommendations are considered by the subcommittee to be reasonably easy and practical and can be implemented immediately by the end users; others are considered to be "best practice" approaches, which may require significant effort, additional tools, and proper training to implement. The eventual standardization of the requirements of QA procedures for CAD will have to be determined through consensus from members of the CAD community, and user training may require support of professional societies. It is expected that high-quality CAD and proper use of CAD could allow these systems to achieve their true potential, thus benefiting both the patients and the clinicians, and may bring about more widespread clinical use of CAD for many other diseases and applications. It is hoped that the awareness of the need

  10. A Computationally Efficient Multicomponent Equilibrium Solver for Aerosols (MESA)

    SciTech Connect

    Zaveri, Rahul A.; Easter, Richard C.; Peters, Len K.

    2005-12-23

    deliquescence points as well as mass growth factors for the sulfate-rich systems. The MESA-MTEM configuration required only 5 to 10 single-level iterations to obtain the equilibrium solution for ~44% of the 328 multiphase problems solved in the 16 test cases at RH values ranging between 20% and 90%, while ~85% of the problems solved required less than 20 iterations. Based on the accuracy and computational efficiency considerations, the MESA-MTEM configuration is attractive for use in 3-D aerosol/air quality models.

  11. CAD/CAM-assisted breast reconstruction.

    PubMed

    Melchels, Ferry; Wiggenhauser, Paul Severin; Warne, David; Barry, Mark; Ong, Fook Rhu; Chong, Woon Shin; Hutmacher, Dietmar Werner; Schantz, Jan-Thorsten

    2011-09-01

    The application of computer-aided design and manufacturing (CAD/CAM) techniques in the clinic is growing slowly but steadily. The ability to build patient-specific models based on medical imaging data offers major potential. In this work we report on the feasibility of employing laser scanning with CAD/CAM techniques to aid in breast reconstruction. A patient was imaged with laser scanning, an economical and facile method for creating an accurate digital representation of the breasts and surrounding tissues. The obtained model was used to fabricate a customized mould that was employed as an intra-operative aid for the surgeon performing autologous tissue reconstruction of the breast removed due to cancer. Furthermore, a solid breast model was derived from the imaged data and digitally processed for the fabrication of customized scaffolds for breast tissue engineering. To this end, a novel generic algorithm for creating porosity within a solid model was developed, using a finite element model as an intermediate. PMID:21900731

  12. Using High-Performance Graphics Machines in an Undergraduate CAD Course.

    ERIC Educational Resources Information Center

    Kirkpatrick, Allan; And Others

    1987-01-01

    Explains the approach taken at Colorado State University in a collegewide undergraduate computer-aided design (CAD) course. Reviews the topic areas covered, project requirements, and assesses the use of high-performance graphics devices. (ML)

  13. IFEMS, an Interactive Finite Element Modeling System Using a CAD/CAM System

    NASA Technical Reports Server (NTRS)

    Mckellip, S.; Schuman, T.; Lauer, S.

    1980-01-01

    A method of coupling a CAD/CAM system with a general purpose finite element mesh generator is described. The three computer programs which make up the interactive finite element graphics system are discussed.

  14. Efficient Hilbert transform-based alternative to Tofts physiological models for representing MRI dynamic contrast-enhanced images in computer-aided diagnosis of prostate cancer

    NASA Astrophysics Data System (ADS)

    Boehm, Kevin M.; Wang, Shijun; Burtt, Karen E.; Turkbey, Baris; Weisenthal, Samuel; Pinto, Peter; Choyke, Peter; Wood, Bradford J.; Petrick, Nicholas; Sahiner, Berkman; Summers, Ronald M.

    2015-03-01

    In computer-aided diagnosis (CAD) systems for prostate cancer, dynamic contrast enhanced (DCE) magnetic resonance imaging is useful for distinguishing cancerous and benign tissue. The Tofts physiological model is a commonly used representation of the DCE image data, but the parameters require extensive computation. Hence, we developed an alternative representation based on the Hilbert transform of the DCE images. The time maximum of the Hilbert transform, a binary metric of early enhancement, and a pre-DCE value were assigned to each voxel and appended to a standard feature set derived from T2-weighted images and apparent diffusion coefficient maps. A cohort of 40 patients was used for training the classifier, and 20 patients were used for testing. The AUC was calculated by pooling the voxel-wise prediction values and comparing with the ground truth. The resulting AUC of 0.92 (95% CI [0.87 0.97]) is not significantly different from an AUC calculated using Tofts physiological models of 0.92 (95% CI [0.87 0.97]), as validated by a Wilcoxon signed rank test on each patient's AUC (p = 0.19). The time required for calculation and feature extraction is 11.39 seconds (95% CI [10.95 11.82]) per patient using the Hilbert-based feature set, two orders of magnitude faster than the 1319 seconds (95% CI [1233 1404]) required for the Tofts parameter-based feature set (p<0.001). Hence, the features proposed herein appear useful for CAD systems integrated into clinical workflows where efficiency is important.
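
    A minimal sketch of the per-voxel features the abstract describes (time of the Hilbert-envelope maximum, a binary early-enhancement flag, and the pre-contrast value); the array layout, early-frame index, and enhancement threshold here are illustrative assumptions, not the paper's values.

        # Sketch: Hilbert-based DCE features per voxel (layout: time x voxels).
        import numpy as np
        from scipy.signal import hilbert

        def hilbert_features(dce, early_idx=5, enh_thresh=1.2):
            """dce: (T, V) array of signal intensities for T time points, V voxels."""
            envelope = np.abs(hilbert(dce, axis=0))      # analytic-signal amplitude
            t_max = envelope.argmax(axis=0)              # time of envelope maximum
            pre = dce[0]                                 # pre-contrast value
            early = (dce[early_idx] > enh_thresh * pre)  # binary early enhancement
            return np.stack([t_max, early.astype(float), pre], axis=1)  # (V, 3)

    Because this avoids voxel-wise nonlinear fitting of Tofts parameters, it is plausible as the source of the reported two-orders-of-magnitude speedup.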

  15. Complete denture fabrication supported by CAD/CAM.

    PubMed

    Wimmer, Timea; Gallus, Korbinian; Eichberger, Marlis; Stawarczyk, Bogna

    2016-05-01

    The inclusion of computer-aided design/computer-aided manufacturing (CAD/CAM) technology into complete denture fabrication facilitates the procedures. The presented workflow for complete denture fabrication combines conventional and digitally supported treatment steps for improving dental care. With the presented technique, the registration of the occlusal plane, the determination of the ideal lip support, and the verification of the maxillomandibular relationship record are considered. PMID:26774323

  16. Computationally efficient algorithms for real-time attitude estimation

    NASA Technical Reports Server (NTRS)

    Pringle, Steven R.

    1993-01-01

    For many practical spacecraft applications, algorithms for determining spacecraft attitude must combine inputs from diverse sensors and provide redundancy in the event of sensor failure. A Kalman filter is suitable for this task; however, it may impose a computational burden that can be avoided by suboptimal methods. A suboptimal estimator is presented that was implemented successfully on the Delta Star spacecraft, which performed a 9-month SDI flight experiment in 1989. The design sought to minimize algorithm complexity to accommodate the limitations of an 8K guidance computer. The algorithm used is interpreted in the framework of Kalman filtering, and a derivation is given for the computation.
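
    As a hedged illustration of the trade described here, the toy update below replaces a computed Kalman gain with a constant blending gain in a one-dimensional propagate-and-correct loop; it is not the Delta Star algorithm, just the generic fixed-gain (suboptimal) pattern.

        # Toy fixed-gain estimator: gyro propagation plus constant-gain correction.
        def fixed_gain_update(angle_est, gyro_rate, dt, angle_meas, k=0.02):
            """1-D toy: propagate with the gyro, then correct with constant gain k."""
            pred = angle_est + gyro_rate * dt        # prediction step
            return pred + k * (angle_meas - pred)    # fixed-gain correction

        angle = 0.0
        for _ in range(100):
            angle = fixed_gain_update(angle, gyro_rate=0.01, dt=0.1, angle_meas=0.05)
        print(angle)  # a constant gain avoids covariance propagation entirely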

  17. CAD-Based Aerodynamic Design of Complex Configurations using a Cartesian Method

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.; Pulliam, Thomas H.

    2003-01-01

    A modular framework for aerodynamic optimization of complex geometries is developed. By working directly with a parametric CAD system, complex-geometry models are modified and tessellated in an automatic fashion. The use of a component-based Cartesian method significantly reduces the demands on the CAD system, and also provides for robust and efficient flowfield analysis. The optimization is controlled using either a genetic or quasi-Newton algorithm. Parallel efficiency of the framework is maintained even when subject to limited CAD resources by dynamically re-allocating the processors of the flow solver. Overall, the resulting framework can explore designs incorporating large shape modifications and changes in topology.

  18. CAD programs: a tool for crime scene processing and reconstruction

    NASA Astrophysics Data System (ADS)

    Boggiano, Daniel; De Forest, Peter R.; Sheehan, Francis X.

    1997-02-01

    Computer aided drafting (CAD) programs have great potential for helping the forensic scientist. One of their most direct and useful applications is crime scene documentation, as an aid in rendering neat, unambiguous line drawings of crime scenes. Once the data have been entered, they can easily be displayed, printed, or plotted in a variety of formats. Final renditions from this initial data entry can take multiple forms and can have multiple uses. As a demonstrative aid, a CAD program can produce two-dimensional (2-D) drawings of the scene, to scale, from one's notes. These 2-D renditions are of court-display quality and help make the forensic scientist's testimony easily understood. Another use for CAD is as an analytical tool for scene reconstruction. More than just a drawing aid, CAD can generate useful information from the data input. It can help reconstruct bullet paths or locations of furniture in a room when these are critical to the reconstruction. Data entry at the scene, on a notebook computer, can assist in framing and answering questions so that the forensic scientist can test hypotheses while actively documenting the scene. Further, three-dimensional (3-D) renditions of items can be viewed from many 'locations' by using the program to rotate the object and the observer's viewpoint.

  19. Selective reduction of CAD false-positive findings

    NASA Astrophysics Data System (ADS)

    Camarlinghi, N.; Gori, I.; Retico, A.; Bagagli, F.

    2010-03-01

    Computer-Aided Detection (CAD) systems are becoming widespread supporting tools for radiologists' diagnoses, especially in screening contexts. However, a large number of false positive (FP) alarms would inevitably lead both to an undesired increase in diagnosis time and to a reduction in radiologists' confidence in CAD as a useful tool. Most CAD systems implement, as the final step of the analysis, a classifier which assigns a score to each entry of a list of findings; by thresholding this score it is possible to define the system performance on an annotated validation dataset in terms of a FROC curve (sensitivity vs. FP per scan). To use a CAD system as a supportive tool for most clinical activities, an operating point has to be chosen on the system FROC curve, according to the obvious criterion of keeping the sensitivity as high as possible while maintaining an acceptable number of FP alarms. The strategy proposed in this study is to choose an operating point with high sensitivity on the CAD FROC curve, then to implement in cascade a further classification step constituted by a smarter classifier. The key feature of this approach is that the smarter classifier is actually a meta-classifier of more than one decision system, each specialized in rejecting a particular type of FP finding generated by the CAD. The application of this approach to a dataset of 16 lung CT scans previously processed by the VBNACAD system is presented. The lung CT VBNACAD performance of 87.1% sensitivity to juxtapleural nodules is improved from 18.5 down to 10.1 FP per scan while maintaining the same sensitivity. This work has been carried out in the framework of the MAGIC-V collaboration.
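
    A minimal sketch of the two-stage idea described here: fix a high-sensitivity operating point on the first-stage scores, then let a second, "smarter" classifier veto the surviving candidates. The sklearn-style fp_rejector object and the 95% target are illustrative assumptions, not the paper's settings.

        # Stage 1: threshold CAD scores at a high-sensitivity operating point.
        # Stage 2: a trained FP-rejecting classifier vetoes surviving candidates.
        import numpy as np

        def high_sens_threshold(scores, labels, target_sens=0.95):
            """Largest threshold that still retains target_sens of true findings."""
            pos = np.sort(scores[labels == 1])   # assumes at least one true finding
            return pos[int((1.0 - target_sens) * len(pos))]

        def cascade_predict(scores, feats, thr, fp_rejector):
            keep = scores >= thr                 # stage-1 operating point
            final = keep.copy()
            final[keep] = fp_rejector.predict(feats[keep]) == 1  # stage-2 veto
            return final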

  20. Model-Based Engineering and Manufacturing CAD/CAM Benchmark

    SciTech Connect

    Domm, T.D.; Underwood, R.S.

    1999-04-26

    The Benchmark Project was created from a desire to identify best practices and improve the overall efficiency and performance of the Y-12 Plant's systems and personnel supporting the manufacturing mission. The mission of the benchmark team was to search out industry leaders in manufacturing and evaluate their engineering practices and processes to determine direction and focus for Y-12 modernization efforts. The companies visited included several large established companies and a new, small, high-tech machining firm. As a result of this effort, changes are recommended that will enable Y-12 to become a more responsive, cost-effective manufacturing facility capable of supporting the needs of the Nuclear Weapons Complex (NWC) and Work For Others into the 21st century. The benchmark team identified key areas of interest, both focused and general. The focus areas included Human Resources, Information Management, Manufacturing Software Tools, and Standards/Policies and Practices. Areas of general interest included Infrastructure, Computer Platforms and Networking, and Organizational Structure. The method for obtaining the desired information in these areas centered on the creation of a benchmark questionnaire. The questionnaire was used throughout each of the visits as the basis for information gathering. The results of this benchmark showed that all companies are moving in the direction of model-based engineering and manufacturing. There was evidence that many companies are trying to grasp how to manage current and legacy data. In terms of engineering design software tools, the companies contacted were using both 3-D solid modeling and surfaced wire-frame models. The manufacturing computer tools were varied, with most companies using more than one software product to generate machining data and none currently performing model-based manufacturing (MBM) from a common model. The majority of companies were closer to identifying or using a single computer-aided design (CAD) system than a single computer-aided manufacturing (CAM) system.

  1. Computer-aided design development transition for IPAD environment

    NASA Technical Reports Server (NTRS)

    Owens, H. G.; Mock, W. D.; Mitchell, J. C.

    1980-01-01

    The relationship of federally sponsored computer-aided design/computer-aided manufacturing (CAD/CAM) programs to the aircraft life cycle design process, an overview of NAAD's CAD development program, an evaluation of the CAD design process, a discussion of the current computing environment within which NAAD is developing its CAD system, some of the advantages/disadvantages of the NAAD-IPAD approach, and CAD developments during transition into the IPAD system are discussed.

  2. Contemporary dental CAD/CAM: modern chairside/lab applications and the future of computerized dentistry.

    PubMed

    Patel, Neal

    2014-01-01

    CAD/CAM in dentistry has been particularly useful in enabling the fabrication of custom, patient-specific restorations and prosthetics without the need for traditional analog dental laboratory methods. While the optimal use of CAD/CAM technology must be determined on a case-by-case basis, it is important for clinicians to recognize the opportunity to utilize computerized technology in patient therapy to provide more highly efficient, accurate, and potentially ideal outcomes. This article will discuss and evaluate the state-of-the-art of CAD/CAM dentistry for both chairside and laboratory-based solutions. Current options for CAD/CAM technology in the treatment of patients for comprehensive dentistry along with the most common uses of chairside and laboratory-based applications will be explored. The discussion will also identify recent and future trends in CAD/CAM applications in dentistry. PMID:25454527

  3. Parametric bicubic spline and CAD tools for complex targets shape modelling in physical optics radar cross section prediction

    NASA Astrophysics Data System (ADS)

    Delogu, A.; Furini, F.

    1991-09-01

    Increasing interest in radar cross section (RCS) reduction is placing new demands on theoretical, computational, and graphic techniques for calculating the scattering properties of complex targets. In particular, computer codes capable of predicting the RCS of an entire aircraft at high frequency and of achieving RCS control with modest structural changes are becoming of paramount importance in stealth design. A computer code, evaluating the RCS of arbitrarily shaped metallic objects that are computer aided design (CAD) generated, and its validation with measurements carried out using ALENIA RCS test facilities are presented. The code, based on the physical optics method, is characterized by an efficient integration algorithm with error control, in order to contain the computer time within acceptable limits, and by an accurate parametric representation of the target surface in terms of bicubic splines.

  4. Model-Based Engineering and Manufacturing CAD/CAM Benchmark.

    SciTech Connect

    Domm, T.C.; Underwood, R.S.

    1999-10-13

    The Benchmark Project was created from a desire to identify best practices and improve the overall efficiency and performance of the Y-12 Plant's systems and personnel supporting the manufacturing mission. The mission of the benchmark team was to search out industry leaders in manufacturing and evaluate their engineering practices and processes to determine direction and focus for Y-12 modernization efforts. The companies visited included several large established companies and a new, small, high-tech machining firm. As a result of this effort, changes are recommended that will enable Y-12 to become a more modern, responsive, cost-effective manufacturing facility capable of supporting the needs of the Nuclear Weapons Complex (NWC) into the 21st century. The benchmark team identified key areas of interest, both focused and general. The focus areas included Human Resources, Information Management, Manufacturing Software Tools, and Standards/Policies and Practices. Areas of general interest included Infrastructure, Computer Platforms and Networking, and Organizational Structure. The results of this benchmark showed that all companies are moving in the direction of model-based engineering and manufacturing. There was evidence that many companies are trying to grasp how to manage current and legacy data. In terms of engineering design software tools, the companies contacted were somewhere between 3-D solid modeling and surfaced wire-frame models. The manufacturing computer tools were varied, with most companies using more than one software product to generate machining data and none currently performing model-based manufacturing (MBM) from a common model. The majority of companies were closer to identifying or using a single computer-aided design (CAD) system than a single computer-aided manufacturing (CAM) system. The Internet was a technology that all companies were looking to either transport information more easily throughout the corporation or as a conduit for

  5. Limits on efficient computation in the physical world

    NASA Astrophysics Data System (ADS)

    Aaronson, Scott Joel

    More than a speculative technology, quantum computing seems to challenge our most basic intuitions about how the physical world should behave. In this thesis I show that, while some intuitions from classical computer science must be jettisoned in the light of modern physics, many others emerge nearly unscathed; and I use powerful tools from computational complexity theory to help determine which are which. In the first part of the thesis, I attack the common belief that quantum computing resembles classical exponential parallelism, by showing that quantum computers would face serious limitations on a wider range of problems than was previously known. In particular, any quantum algorithm that solves the collision problem---that of deciding whether a sequence of n integers is one-to-one or two-to-one---must query the sequence Ω(n^(1/5)) times. This resolves a question that was open for years; previously no lower bound better than constant was known. A corollary is that there is no "black-box" quantum algorithm to break cryptographic hash functions or solve the Graph Isomorphism problem in polynomial time. I also show that relative to an oracle, quantum computers could not solve NP-complete problems in polynomial time, even with the help of nonuniform "quantum advice states"; and that any quantum algorithm needs Ω(2^(n/4)/n) queries to find a local minimum of a black-box function on the n-dimensional hypercube. Surprisingly, the latter result also leads to new classical lower bounds for the local search problem. Finally, I give new lower bounds on quantum one-way communication complexity, and on the quantum query complexity of total Boolean functions and recursive Fourier sampling. The second part of the thesis studies the relationship of the quantum computing model to physical reality. I first examine the arguments of Leonid Levin, Stephen Wolfram, and others who believe quantum computing to be fundamentally impossible. I find their arguments unconvincing without a "Sure

  6. CAD-CAM printed circuit board design

    NASA Astrophysics Data System (ADS)

    Agy, W. E.

    A step-by-step procedure for printed circuit design achieved by CAD is presented. The operator at the interactive CRT station moves a stylus across a graphics tablet and intersperses commands, which result in computer-generated pictorial forms on the screen corresponding to what was drawn on the pad. Standard symbols are used for commands, allowing, for instance, connections of specific types to be made in certain locations, which can be automatically edited from a materials list. An entire network of drawn lines can be referenced by a signal name for recall, and a finished circuit schematic can be checked for design-rules compliance, including fault reporting in terms of designator/pin number. A map may be present delineating the boundaries of the circuitry area, and previously completed circuitry segments can be recalled for piece-by-piece assembly of the circuit board.

  7. Computationally efficient calibration of WATCLASS Hydrologic models using surrogate optimization

    NASA Astrophysics Data System (ADS)

    Kamali, M.; Ponnambalam, K.; Soulis, E. D.

    2007-07-01

    In this approach, exploration of the cost-function space was performed with an inexpensive surrogate function rather than the expensive original function. The Design and Analysis of Computer Experiments (DACE) surrogate, a type of approximate model that uses a correlation function for the error term, was employed. The results for Monte Carlo sampling, Latin hypercube sampling, and the DACE approximate model were compared. The results show that the DACE model has good potential for predicting the trend of simulation results. The case study of this work was WATCLASS hydrologic model calibration on the Smokey-River watershed.
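
    A hedged sketch of the surrogate-calibration loop in its generic form: fit a Gaussian-process (kriging/DACE-style) surrogate to a handful of expensive runs, then search the cheap surrogate for a promising parameter set. Here expensive_model() is a stand-in for a WATCLASS run, and all settings are illustrative.

        # Sketch: surrogate-assisted calibration with a GP (kriging-style) model.
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        def expensive_model(x):              # placeholder for the real simulator
            return np.sum((x - 0.3)**2, axis=-1)

        rng = np.random.default_rng(0)
        X = rng.uniform(0, 1, size=(20, 2))  # design points (LHS would also work)
        y = np.array([expensive_model(x) for x in X])

        gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2)).fit(X, y)

        cand = rng.uniform(0, 1, size=(5000, 2))    # dense search, cheap to evaluate
        best = cand[gp.predict(cand).argmin()]      # minimize the surrogate instead
        print("surrogate suggests:", best, "true cost:", expensive_model(best))

    In practice the suggested point would be evaluated with the expensive model and added to the design, and the fit-and-search loop repeated.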

  8. Efficient computational simulation of actin stress fiber remodeling.

    PubMed

    Ristori, T; Obbink-Huizer, C; Oomens, C W J; Baaijens, F P T; Loerakker, S

    2016-09-01

    Understanding collagen and stress fiber remodeling is essential for the development of engineered tissues with good functionality. These processes are complex, highly interrelated, and occur over different time scales. As a result, excessive computational costs are required to computationally predict the final organization of these fibers in response to dynamic mechanical conditions. In this study, an analytical approximation of a stress fiber remodeling evolution law was derived. A comparison of the developed technique with the direct numerical integration of the evolution law showed relatively small differences in results, and the proposed method is one to two orders of magnitude faster. PMID:26823159

  9. Methods for Computationally Efficient Structured CFD Simulations of Complex Turbomachinery Flows

    NASA Technical Reports Server (NTRS)

    Herrick, Gregory P.; Chen, Jen-Ping

    2012-01-01

    This research presents more efficient computational methods by which to perform multi-block structured Computational Fluid Dynamics (CFD) simulations of turbomachinery, thus enabling higher-fidelity solutions of complicated geometries and their associated flows. This computational framework offers flexibility in allocating resources to balance process count and wall-clock computation time, while supporting research interests such as simulating axial compressor stall inception with more complete gridding of the flow passages and rotor tip clearance regions than is typically practiced with structured codes. The paradigm presented herein facilitates CFD simulation of previously impractical geometries and flows. These methods are validated and demonstrate improved computational efficiency when applied to complicated geometries and flows.

  10. Understanding dental CAD/CAM for restorations--the digital workflow from a mechanical engineering viewpoint.

    PubMed

    Tapie, L; Lebon, N; Mawussi, B; Fron Chabouis, H; Duret, F; Attal, J-P

    2015-01-01

    As digital technology infiltrates every area of daily life, including the field of medicine, it is increasingly being introduced into dental practice. Apart from chairside practice, computer-aided design/computer-aided manufacturing (CAD/CAM) solutions are available for creating inlays, crowns, fixed partial dentures (FPDs), implant abutments, and other dental prostheses. CAD/CAM dental solutions can be considered a chain of digital devices and software for the almost automatic design and creation of dental restorations. However, dentists who want to use the technology often do not have the time or knowledge to understand it. A basic knowledge of the CAD/CAM digital workflow for dental restorations can help dentists to grasp the technology and purchase a CAD/CAM system that meets the needs of their office. This article provides a computer-science and mechanical-engineering approach to the CAD/CAM digital workflow to help dentists understand the technology. PMID:25911827

  11. Efficient algorithm to compute mutually connected components in interdependent networks.

    PubMed

    Hwang, S; Choi, S; Lee, Deokjae; Kahng, B

    2015-02-01

    Mutually connected components (MCCs) play an important role as a measure of resilience in the study of interdependent networks. Despite their importance, an efficient algorithm to obtain the statistics of all MCCs during the removal of links has thus far been absent. Here, using a well-known fully dynamic graph algorithm, we propose an efficient algorithm to accomplish this task. We show that the time complexity of this algorithm is approximately O(N^1.2) for random graphs, which is more efficient than the O(N^2) of the brute-force algorithm. We confirm the correctness of our algorithm by comparing the behavior of the order parameter as links are removed with existing results for three types of double-layer multiplex networks. We anticipate that this algorithm will be used for simulations of large-size systems that have been previously inaccessible. PMID:25768559
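
    For contrast with the paper's fully dynamic algorithm, the brute-force baseline it improves on can be written in a few lines: repeatedly split candidate node sets by connectivity in each layer until every set is connected in both. A sketch using networkx (the two toy graphs are illustrative):

        # Brute-force mutually connected components of a two-layer multiplex.
        import networkx as nx

        def mutually_connected_components(g1, g2):
            stack = [set(g1) & set(g2)]     # start from the shared node set
            done = []
            while stack:
                nodes = stack.pop()
                if not nodes:
                    continue
                comps1 = list(nx.connected_components(g1.subgraph(nodes)))
                if len(comps1) > 1:         # split by layer-1 connectivity
                    stack.extend(comps1)
                    continue
                comps2 = list(nx.connected_components(g2.subgraph(nodes)))
                if len(comps2) > 1:         # split by layer-2 connectivity
                    stack.extend(comps2)
                else:
                    done.append(nodes)      # connected in both layers: an MCC
            return done

        g1 = nx.path_graph(4)               # layer 1: 0-1-2-3
        g2 = nx.Graph([(0, 1), (2, 3)])     # layer 2: two separate pairs
        print(mutually_connected_components(g1, g2))  # [{0, 1}, {2, 3}], order may vary

    Rerunning this from scratch after every link removal is what drives the O(N^2)-style cost the paper's dynamic bookkeeping avoids.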

  12. Computer aided production engineering

    SciTech Connect

    Not Available

    1986-01-01

    This book presents the following contents: CIM in avionics; computer analysis of product designs for robot assembly; a simulation decision model for manpower forecasting and its application; development of a flexible manufacturing system; advances in microcomputer applications in CAD/CAM; an automated interface between CAD and process planning; CAM and computer vision; low-friction pneumatic actuators for accurate robot control; robot assembly of printed circuit boards; information systems design for computer-integrated manufacture; and a CAD engineering language to aid manufacture.

  13. An Efficient Virtual Machine Consolidation Scheme for Multimedia Cloud Computing

    PubMed Central

    Han, Guangjie; Que, Wenhui; Jia, Gangyong; Shu, Lei

    2016-01-01

    Cloud computing has innovated the IT industry in recent years, as it can deliver subscription-based services to users in the pay-as-you-go model. Meanwhile, multimedia cloud computing is emerging based on cloud computing to provide a variety of media services on the Internet. However, with the growing popularity of multimedia cloud computing, its large energy consumption not only contributes to greenhouse gas emissions but also raises cloud users' costs. Therefore, multimedia cloud providers should try to minimize energy consumption as much as possible while satisfying consumers' resource requirements and guaranteeing quality of service (QoS). In this paper, we propose a remaining utilization-aware (RUA) algorithm for virtual machine (VM) placement and a power-aware (PA) algorithm to find proper hosts to shut down for energy saving. These two algorithms are combined and applied to cloud data centers to complete the process of VM consolidation. Simulation results show that there exists a trade-off between the cloud data center's energy consumption and service-level agreement (SLA) violations. Besides, the RUA algorithm is able to deal with variable workloads to prevent hosts from overloading after VM placement and to reduce SLA violations dramatically. PMID:26901201

  14. An Efficient Virtual Machine Consolidation Scheme for Multimedia Cloud Computing.

    PubMed

    Han, Guangjie; Que, Wenhui; Jia, Gangyong; Shu, Lei

    2016-01-01

    Cloud computing has innovated the IT industry in recent years, as it can deliver subscription-based services to users in the pay-as-you-go model. Meanwhile, multimedia cloud computing is emerging based on cloud computing to provide a variety of media services on the Internet. However, with the growing popularity of multimedia cloud computing, its large energy consumption not only contributes to greenhouse gas emissions but also raises cloud users' costs. Therefore, multimedia cloud providers should try to minimize energy consumption as much as possible while satisfying consumers' resource requirements and guaranteeing quality of service (QoS). In this paper, we propose a remaining utilization-aware (RUA) algorithm for virtual machine (VM) placement and a power-aware (PA) algorithm to find proper hosts to shut down for energy saving. These two algorithms are combined and applied to cloud data centers to complete the process of VM consolidation. Simulation results show that there exists a trade-off between the cloud data center's energy consumption and service-level agreement (SLA) violations. Besides, the RUA algorithm is able to deal with variable workloads to prevent hosts from overloading after VM placement and to reduce SLA violations dramatically. PMID:26901201
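
    A simplified sketch in the spirit of utilization-aware placement (not the paper's RUA/PA algorithms): best-fit placement by remaining CPU capacity with a headroom margin against overload, after which empty hosts become candidates to power down. All structures and numbers are illustrative.

        # Sketch: best-fit VM placement with overload headroom; idle hosts power off.
        def place_vms(vms, hosts, headroom=0.1):
            """vms: list of CPU demands in [0, 1]; hosts: list of dicts."""
            for demand in sorted(vms, reverse=True):          # largest VMs first
                fits = [h for h in hosts
                        if h["used"] + demand <= h["cap"] * (1.0 - headroom)]
                if not fits:
                    raise RuntimeError("no host can take this VM")
                target = min(fits, key=lambda h: h["cap"] - h["used"] - demand)
                target["used"] += demand                      # best fit wins
                target["vms"].append(demand)
            return [h for h in hosts if not h["vms"]]         # hosts safe to shut down

        hosts = [{"cap": 1.0, "used": 0.0, "vms": []} for _ in range(4)]
        idle = place_vms([0.5, 0.3, 0.2, 0.4, 0.1], hosts)
        print(len(idle), "host(s) can be powered off")

    The headroom term is one simple way to express the overload-prevention goal the abstract attributes to RUA; tightening it trades energy savings against SLA risk.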

  15. Learning with Computer-Based Multimedia: Gender Effects on Efficiency

    ERIC Educational Resources Information Center

    Pohnl, Sabine; Bogner, Franz X.

    2012-01-01

    Up to now, only a few studies in multimedia learning have focused on gender effects. While research has mostly focused on learning success, the effect of gender on instructional efficiency (IE) has not yet been considered. Consequently, we used a quasi-experimental design to examine possible gender differences in the learning success, mental…

  16. Computationally efficient statistical differential equation modeling using homogenization

    USGS Publications Warehouse

    Hooten, Mevin B.; Garlick, Martha J.; Powell, James A.

    2013-01-01

    Statistical models using partial differential equations (PDEs) to describe dynamically evolving natural systems are appearing in the scientific literature with some regularity in recent years. Often such studies seek to characterize the dynamics of temporal or spatio-temporal phenomena such as invasive species, consumer-resource interactions, community evolution, and resource selection. Specifically, in the spatial setting, data are often available at varying spatial and temporal scales. Additionally, the necessary numerical integration of a PDE may be computationally infeasible over the spatial support of interest. We present an approach to impose computationally advantageous changes of support in statistical implementations of PDE models and demonstrate its utility through simulation using a form of PDE known as “ecological diffusion.” We also apply a statistical ecological diffusion model to a data set involving the spread of mountain pine beetle (Dendroctonus ponderosae) in Idaho, USA.

  17. Labeled trees and the efficient computation of derivations

    NASA Technical Reports Server (NTRS)

    Grossman, Robert; Larson, Richard G.

    1989-01-01

    The effective parallel symbolic computation of operators under composition is discussed. Examples include differential operators under composition and vector fields under the Lie bracket. Data structures consisting of formal linear combinations of rooted labeled trees are discussed. A multiplication on rooted labeled trees is defined, thereby making the set of these data structures into an associative algebra. An algebra homomorphism is defined from the original algebra of operators into this algebra of trees. An algebra homomorphism from the algebra of trees into the algebra of differential operators is then described. The cancellation which occurs when noncommuting operators are expressed in terms of commuting ones occurs naturally when the operators are represented using this data structure. This leads to an algorithm which, for operators which are derivations, speeds up the computation exponentially in the degree of the operator. It is shown that the algebra of trees leads naturally to a parallel version of the algorithm.

  18. Algorithmic and architectural optimizations for computationally efficient particle filtering.

    PubMed

    Sankaranarayanan, Aswin C; Srivastava, Ankur; Chellappa, Rama

    2008-05-01

    In this paper, we analyze the computational challenges in implementing particle filtering, especially for video sequences. Particle filtering is a technique used for filtering nonlinear dynamical systems driven by non-Gaussian noise processes. It has found widespread applications in detection, navigation, and tracking problems. Although, in general, particle filtering methods yield improved results, it is difficult to achieve real-time performance. In this paper, we analyze the computational drawbacks of traditional particle filtering algorithms and present a method for implementing the particle filter using the Independent Metropolis-Hastings sampler, which is highly amenable to pipelined implementations and parallelization. We analyze implementations of the proposed algorithm and, in particular, concentrate on implementations that have minimum processing times. It is shown that the design parameters for the fastest implementation can be chosen by solving a set of convex programs. The proposed computational methodology was verified using a cluster of PCs for the application of visual tracking. We demonstrate a linear speed-up of the algorithm using the methodology proposed in the paper. PMID:18390378
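
    A minimal sketch of the resampling idea named here: an Independent Metropolis-Hastings chain over the weighted particle set, whose proposals do not depend on the current state (the property that favors pipelining). The 1-D model and chain length are illustrative assumptions, and weights are assumed strictly positive.

        # Sketch: IMH-based resampling step of a particle filter (1-D toy).
        import numpy as np

        rng = np.random.default_rng(1)

        def imh_resample(particles, weights, n_steps=None):
            """Equally weighted draws via an Independent Metropolis-Hastings chain."""
            n = len(particles)
            n_steps = n_steps or n
            probs = weights / weights.sum()   # target distribution over indices
            current = rng.integers(n)         # chain state is a particle index
            out = np.empty(n_steps)
            for t in range(n_steps):
                prop = rng.integers(n)        # proposal independent of current state
                # uniform proposal => acceptance ratio w(prop) / w(current)
                if rng.random() < min(1.0, probs[prop] / probs[current]):
                    current = prop
                out[t] = particles[current]
            return out

        parts = rng.standard_normal(500)      # predicted particles (prior draws)
        w = np.exp(-0.5 * (1.0 - parts)**2)   # likelihood weights for observation 1.0
        print(imh_resample(parts, w).mean())  # approximate posterior mean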

  19. Dental students' preferences and performance in crown design: conventional wax-added versus CAD.

    PubMed

    Douglas, R Duane; Hopp, Christa D; Augustin, Marcus A

    2014-12-01

    The purpose of this study was to evaluate dental students' perceptions of traditional waxing vs. computer-aided crown design and to determine the effectiveness of either technique through comparative grading of the final products. On one of two identical tooth preparations, second-year students at one dental school fabricated a wax pattern for a full contour crown; on the second tooth preparation, the same students designed and fabricated an all-ceramic crown using computer-aided design (CAD) and computer-aided manufacturing (CAM) technology. Projects were graded for occlusion and anatomic form by three faculty members. On completion of the projects, 100 percent of the students (n=50) completed an eight-question, five-point Likert scale survey, designed to assess their perceptions of and learning associated with the two design techniques. The average grades for the crown design projects were 78.3 (CAD) and 79.1 (wax design). The mean numbers of occlusal contacts were 3.8 (CAD) and 2.9 (wax design), which was significantly higher for CAD (p=0.02). The survey results indicated that students enjoyed designing a full contour crown using CAD as compared to using conventional wax techniques and spent less time designing the crown using CAD. From a learning perspective, students felt that they learned more about position and the size/strength of occlusal contacts using CAD. However, students recognized that CAD technology has limits in terms of representing anatomic contours and excursive occlusion compared to conventional wax techniques. The results suggest that crown design using CAD could be considered as an adjunct to conventional wax-added techniques in preclinical fixed prosthodontic curricula. PMID:25480282

  20. Improving CAD performance by fusion of the bilateral mammographic tissue asymmetry information

    NASA Astrophysics Data System (ADS)

    Wang, Xingwei; Li, Lihua; Liu, Wei; Xu, Weidong; Lederman, Dror; Zheng, Bin

    2012-03-01

    Bilateral mammographic tissue density asymmetry could be an important factor in assessing the risk of developing breast cancer and improving the detection of suspicious lesions. This study aims to assess whether fusing bilateral mammographic density asymmetry information into a computer-aided detection (CAD) scheme could improve CAD performance in detecting mass-like breast cancers. A testing dataset involving 1352 full-field digital mammograms (FFDM) acquired from 338 cases was used. In this dataset, half (169) of the cases are positive, containing malignant masses, and half are negative. Two computerized schemes were first independently applied to process the FFDM images of each case. The first, single-image-based CAD scheme detected suspicious mass regions on each image. The second scheme detected and computed the bilateral mammographic tissue density asymmetry for each case. A fusion method was then applied to combine the output scores of the two schemes. The CAD performance levels using the original CAD-generated detection scores and the new fusion scores were evaluated and compared using a free-response receiver operating characteristic (FROC) type data analysis method. By fusing with the bilateral mammographic density asymmetry scores, the case-based CAD sensitivity was increased from 79.2% to 84.6% at a false-positive rate of 0.3 per image. CAD also cued more "difficult" masses with lower CAD-generated detection scores while discarding some "easy" cases. The study indicated that fusion between the scores generated by a single-image-based CAD scheme and the computed bilateral mammographic density asymmetry scores makes it possible to increase mass detection sensitivity, in particular to detect more subtle masses.

  1. Learning-based image preprocessing for robust computer-aided detection

    NASA Astrophysics Data System (ADS)

    Raghupathi, Laks; Devarakota, Pandu R.; Wolf, Matthias

    2013-03-01

    Recent studies have shown that low-dose computed tomography (LDCT) can be an effective screening tool to reduce lung cancer mortality. Computer-aided detection (CAD) would be a beneficial second reader for radiologists in such cases. Studies demonstrate that while iterative reconstructions (IR) improve LDCT diagnostic quality, they degrade CAD performance significantly (increased false positives) when applied directly. For improving CAD performance, solutions such as retraining with newer data or applying a standard preprocessing technique may not suffice due to the high prevalence of CT scanners and non-uniform acquisition protocols. Here, we present a learning-based framework that can adaptively transform a wide variety of input data to boost an existing CAD's performance. This not only enhances robustness but also applicability in clinical workflows. Our solution consists of applying a suitable preprocessing filter automatically on the given image based on its characteristics. This requires the preparation of ground truth (GT) for choosing an appropriate filter that results in improved CAD performance. Accordingly, we propose an efficient consolidation process with a novel metric. Using key anatomical landmarks, we then derive consistent feature descriptors for the classification scheme, which uses a priority mechanism to automatically choose an optimal preprocessing filter. We demonstrate CAD prototype performance improvement using hospital-scale datasets acquired from North America, Europe and Asia. Though we demonstrate our results for a lung nodule CAD, this scheme is straightforward to extend to other post-processing tools dedicated to other organs and modalities.

  2. Chunking as the result of an efficiency computation trade-off.

    PubMed

    Ramkumar, Pavan; Acuna, Daniel E; Berniker, Max; Grafton, Scott T; Turner, Robert S; Kording, Konrad P

    2016-01-01

    How to move efficiently is an optimal control problem, whose computational complexity grows exponentially with the horizon of the planned trajectory. Breaking a compound movement into a series of chunks, each planned over a shorter horizon, can thus reduce the overall computational complexity and associated costs while limiting the achievable efficiency. This trade-off suggests a cost-effective learning strategy: to learn new movements we should start with many short chunks (to limit the cost of computation). As practice reduces the impediments to more complex computation, the chunking structure should evolve to allow progressively more efficient movements (to maximize efficiency). Here we show that monkeys learning a reaching sequence over an extended period of time adopt this strategy by performing movements that can be described as locally optimal trajectories. Chunking can thus be understood as a cost-effective strategy for producing and learning efficient movements. PMID:27397420

  3. Parametric Design Optimization By Integrating CAD Systems And Optimization Tools

    NASA Astrophysics Data System (ADS)

    Rehan, M.; Olabi, A. G.

    2009-11-01

    Designing a cost-effective product in minimum time is a complex process. In order to achieve this goal, the requirement for optimum designs is becoming more important. One of the time-consuming factors in the design optimization cycle is the modification of the Computer Aided Design (CAD) model after optimization. In conventional design optimization techniques, the design engineer has to update the CAD model after receiving the optimum design from the optimization tools. It is worthwhile using a parametric design optimization process to minimize the optimization cycle time. This paper presents a comprehensive study to integrate the optimization parameters between the CAD system and the optimization tools, driven from a single user environment. Finally, design optimization of a Compressed Natural Gas (CNG) cylinder was implemented as a case study. In this case study the optimization tools were fully integrated with the CAD system; therefore, all the deliverables, including part design, drawings, and assembly, can be automatically updated after achieving the optimum geometry having minimum volume and satisfying all imposed constraints.

  4. Efficient Helicopter Aerodynamic and Aeroacoustic Predictions on Parallel Computers

    NASA Technical Reports Server (NTRS)

    Wissink, Andrew M.; Lyrintzis, Anastasios S.; Strawn, Roger C.; Oliker, Leonid; Biswas, Rupak

    1996-01-01

    This paper presents parallel implementations of two codes used in a combined CFD/Kirchhoff methodology to predict the aerodynamic and aeroacoustic properties of helicopters. The rotorcraft Navier-Stokes code, TURNS, computes the aerodynamic flowfield near the helicopter blades and the Kirchhoff acoustics code computes the noise in the far field, using the TURNS solution as input. The overall parallel strategy adds MPI message passing calls to the existing serial codes to allow for communication between processors. As a result, the total code modifications required for parallel execution are relatively small. The biggest bottleneck in running the TURNS code in parallel comes from the LU-SGS algorithm that solves the implicit system of equations. We use a new hybrid domain decomposition implementation of LU-SGS to obtain good parallel performance on the SP-2. TURNS demonstrates excellent parallel speedups for quasi-steady and unsteady three-dimensional calculations of a helicopter blade in forward flight. The execution rate attained by the code on 114 processors is six times faster than the same cases run on one processor of the Cray C-90. The parallel Kirchhoff code also shows excellent parallel speedups and fast execution rates. As a performance demonstration, unsteady acoustic pressures are computed at 1886 far-field observer locations for a sample acoustics problem. The calculation requires over two hundred hours of CPU time on one C-90 processor but takes only a few hours on 80 processors of the SP-2. The resultant far-field acoustic field is analyzed with state-of-the-art audio and video rendering of the propagating acoustic signals.

  5. CYBERSECURITY AND USER ACCOUNTABILITY IN THE C-AD CONTROL SYSTEM

    SciTech Connect

    Morris, J.T.; Binello, S.; D'Ottavio, T.; Katz, R.A.

    2007-10-15

    A heightened awareness of cybersecurity has led to a review of the procedures that ensure user accountability for actions performed on the computers of the Collider-Accelerator Department (C-AD) Control System. Control system consoles are shared by multiple users in control rooms throughout the C-AD complex. A significant challenge has been the establishment of procedures that securely control and monitor access to these shared consoles without impeding accelerator operations. This paper provides an overview of C-AD cybersecurity strategies with an emphasis on recent enhancements in user authentication and tracking methods.

  6. A Software Demonstration of 'rap': Preparing CAD Geometries for Overlapping Grid Generation

    SciTech Connect

    Anders Petersson, N.

    2002-02-15

    We demonstrate the application code "rap", which is part of the "Overture" library. A CAD geometry imported from an IGES file is first cleaned up and simplified to suit the needs of mesh generation. Thereafter, the topology of the model is computed and a water-tight surface triangulation is created on the CAD surface. This triangulation is used to speed up the projection of points onto the CAD surface during the generation of overlapping surface grids. From each surface grid, volume grids are grown into the domain using a hyperbolic marching procedure. The final step is to fill any remaining parts of the interior with background meshes.

  7. Integration of a CAD System Into an MDO Framework

    NASA Technical Reports Server (NTRS)

    Townsend, J. C.; Samareh, J. A.; Weston, R. P.; Zorumski, W. E.

    1998-01-01

    NASA Langley has developed a heterogeneous distributed computing environment, called the Framework for Interdisciplinary Design Optimization, or FIDO. Its purpose has been to demonstrate framework technical feasibility and usefulness for optimizing the preliminary design of complex systems and to provide a working environment for testing optimization schemes. Its initial implementation has been for a simplified model of preliminary design of a high-speed civil transport. Upgrades being considered for the FIDO system include a more complete geometry description, required by high-fidelity aerodynamics and structures codes and based on a commercial Computer Aided Design (CAD) system. This report presents the philosophy behind some of the decisions that have shaped the FIDO system and gives a brief case study of the problems and successes encountered in integrating a CAD system into the FIDO framework.

  8. Probabilistic Damage Characterization Using the Computationally-Efficient Bayesian Approach

    NASA Technical Reports Server (NTRS)

    Warner, James E.; Hochhalter, Jacob D.

    2016-01-01

    This work presents a computationally-efficient approach for damage determination that quantifies uncertainty in the provided diagnosis. Given strain sensor data that are polluted with measurement errors, Bayesian inference is used to estimate the location, size, and orientation of damage. This approach uses Bayes' Theorem to combine any prior knowledge an analyst may have about the nature of the damage with information provided implicitly by the strain sensor data to form a posterior probability distribution over possible damage states. The unknown damage parameters are then estimated based on samples drawn numerically from this distribution using a Markov Chain Monte Carlo (MCMC) sampling algorithm. Several modifications are made to the traditional Bayesian inference approach to provide significant computational speedup. First, an efficient surrogate model is constructed using sparse grid interpolation to replace a costly finite element model that must otherwise be evaluated for each sample drawn with MCMC. Next, the standard Bayesian posterior distribution is modified using a weighted likelihood formulation, which is shown to improve the convergence of the sampling process. Finally, a robust MCMC algorithm, Delayed Rejection Adaptive Metropolis (DRAM), is adopted to sample the probability distribution more efficiently. Numerical examples demonstrate that the proposed framework effectively provides damage estimates with uncertainty quantification and can yield orders of magnitude speedup over standard Bayesian approaches.
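
    A minimal sketch of the core speedup idea only: a plain Metropolis sampler whose likelihood calls a cheap surrogate instead of the finite element model. The surrogate, data, and noise level below are invented for illustration; the paper's sparse-grid surrogate, weighted likelihood, and DRAM sampler are more elaborate.

        # Sketch: Metropolis sampling of damage parameters via a cheap surrogate.
        import numpy as np

        rng = np.random.default_rng(2)

        def surrogate_strain(theta):              # cheap stand-in for the FE model
            return np.array([np.sin(theta[0]) + theta[1],
                             np.cos(theta[1]) - theta[0]])

        data = np.array([0.9, -0.2])              # invented "measured" strains
        sigma = 0.05                              # assumed measurement noise std

        def log_post(theta):                      # flat prior + Gaussian likelihood
            r = data - surrogate_strain(theta)
            return -0.5 * np.sum(r**2) / sigma**2

        theta = np.zeros(2)
        samples = []
        for _ in range(5000):
            prop = theta + 0.1 * rng.standard_normal(2)   # random-walk proposal
            if np.log(rng.random()) < log_post(prop) - log_post(theta):
                theta = prop                              # accept
            samples.append(theta)
        print(np.mean(samples, axis=0))           # posterior mean parameter estimate

    Since every MCMC step costs one surrogate call rather than one finite element solve, the speedup scales directly with the cost ratio of the two models.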

  9. Efficient design of direct-binary-search computer-generated holograms

    SciTech Connect

    Jennison, B.K.; Allebach, J.P.; Sweeney, D.W.

    1991-04-01

    Computer-generated holograms (CGH's) synthesized by the iterative direct-binary-search (DBS) algorithm yield lower reconstruction error and higher diffraction efficiency than do CGH's designed by conventional methods, but the DBS algorithm is computationally intensive. A fast algorithm for DBS is developed that recursively computes the error measure to be minimized. For complex amplitude-based error, the required computation for an L-point and modifications are considered in order to make the algorithm more efficient. An acceleration technique that attempts to increase the rate of convergence of the DBS algorithm is also investigated.
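
    A toy 1-D version of the direct-binary-search loop for context: flip one hologram element at a time and keep flips that reduce reconstruction error. The recursive error update that makes the paper's algorithm fast is deliberately omitted; this naive version recomputes the full transform per trial, and the target pattern is invented.

        # Naive direct binary search for a 1-D binary hologram.
        import numpy as np

        rng = np.random.default_rng(3)
        target = np.abs(np.fft.fft(rng.standard_normal(64)))   # toy target magnitude

        def err(holo):
            return np.sum((np.abs(np.fft.fft(holo)) - target)**2)

        holo = rng.integers(0, 2, 64).astype(float)            # binary hologram
        best = err(holo)
        improved = True
        while improved:                     # sweep until no flip helps (local optimum)
            improved = False
            for i in range(holo.size):
                holo[i] = 1.0 - holo[i]     # trial flip
                e = err(holo)
                if e < best:
                    best, improved = e, True    # keep the flip
                else:
                    holo[i] = 1.0 - holo[i]     # revert
        print("final reconstruction error:", best)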

  10. Westinghouse Idaho Nuclear Company, Inc. (WINCO) CAD activities at the Idaho Chemical Processing Plant (ICPP) (Idaho Engineering Laboratory)

    SciTech Connect

    Jensen, B.

    1989-04-18

    June 1985 -- The drafting manager obtained approval to implement a CAD system at the ICPP. He formed a committee to evaluate the various CAD systems and recommend a system that would most benefit the ICPP. A "PC" (personal computer) based system using AutoCAD software was recommended in lieu of the much more expensive mainframe-based systems.

  11. Westinghouse Idaho Nuclear Company, Inc. (WINCO) CAD activities at the Idaho Chemical Processing Plant (ICPP) (Idaho Engineering Laboratory)

    SciTech Connect

    Jensen, B.

    1989-04-18

    June 1985 -- The drafting manager obtained approval to implement a CAD system at the ICPP. He formed a committee to evaluate the various CAD systems and recommend a system that would most benefit the ICPP. A "PC" (personal computer) based system using AutoCAD software was recommended in lieu of the much more expensive mainframe-based systems.

  12. An Educational Exercise Examining the Role of Model Attributes on the Creation and Alteration of CAD Models

    ERIC Educational Resources Information Center

    Johnson, Michael D.; Diwakaran, Ram Prasad

    2011-01-01

    Computer-aided design (CAD) is a ubiquitous tool that today's students will be expected to use proficiently for numerous engineering purposes. Taking full advantage of the features available in modern CAD programs requires that models are created in a manner that allows others to easily understand how they are organized and alter them in an…

  13. Performance evaluation of the NASA/KSC CAD/CAE and office automation LAN's

    NASA Technical Reports Server (NTRS)

    Zobrist, George W.

    1994-01-01

    This study's objective is the performance evaluation of the existing CAD/CAE (Computer Aided Design/Computer Aided Engineering) network at NASA/KSC. This evaluation also includes a similar study of the Office Automation network, since it is being planned to integrate this network into the CAD/CAE network. The Microsoft mail facility which is presently on the CAD/CAE network was monitored to determine its present usage. This performance evaluation of the various networks will aid the NASA/KSC network managers in planning for the integration of future workload requirements into the CAD/CAE network and determining the effectiveness of the planned FDDI (Fiber Distributed Data Interface) migration.

  14. A computationally efficient modelling of laminar separation bubbles

    NASA Technical Reports Server (NTRS)

    Maughmer, Mark D.

    1988-01-01

    The goal of this research is to accurately predict the characteristics of the laminar separation bubble and its effects on airfoil performance. To this end, a model of the bubble is under development and will be incorporated in the analysis section of the Eppler and Somers program. As a first step in this direction, an existing bubble model was inserted into the program. It was decided to address the problem of the short bubble before attempting the prediction of the long bubble. Second, an integral boundary-layer method is believed to be more desirable than a finite-difference approach. While these two methods achieve similar prediction accuracy, finite-difference methods tend to involve significantly longer computer run times than the integral methods. Finally, as the boundary-layer analysis in the Eppler and Somers program employs the momentum and kinetic energy integral equations, a short-bubble model compatible with these equations is most preferable.

  15. A computationally efficient modelling of laminar separation bubbles

    NASA Astrophysics Data System (ADS)

    Dini, Paolo; Maughmer, Mark D.

    1989-02-01

    The goal is to accurately predict the characteristics of the laminar separation bubble and its effects on airfoil performance. Toward this end, a computational model of the separation bubble was developed and incorporated into the Eppler and Somers airfoil design and analysis program. Thus far, the focus of the research was limited to the development of a model which can accurately predict situations in which the interaction between the bubble and the inviscid velocity distribution is weak, the so-called short bubble. A summary of the research performed in the past nine months is presented. The bubble model in its present form is then described. Lastly, the performance of this model in predicting bubble characteristics is shown for a few cases.

  16. A computationally efficient modelling of laminar separation bubbles

    NASA Astrophysics Data System (ADS)

    Maughmer, Mark D.

    1988-02-01

    The goal of this research is to accurately predict the characteristics of the laminar separation bubble and its effects on airfoil performance. To this end, a model of the bubble is under development and will be incorporated in the analysis section of the Eppler and Somers program. As a first step in this direction, an existing bubble model was inserted into the program. Several considerations shaped this choice. First, it was decided to address the problem of the short bubble before attempting the prediction of the long bubble. Second, an integral boundary-layer method is considered more desirable than a finite-difference approach: while the two achieve similar prediction accuracy, finite-difference methods tend to involve significantly longer computer run times than integral methods. Finally, because the boundary-layer analysis in the Eppler and Somers program employs the momentum and kinetic-energy integral equations, a short-bubble model compatible with these equations is most preferable.

  17. A computationally efficient modelling of laminar separation bubbles

    NASA Technical Reports Server (NTRS)

    Dini, Paolo; Maughmer, Mark D.

    1989-01-01

    The goal is to accurately predict the characteristics of the laminar separation bubble and its effects on airfoil performance. Toward this end, a computational model of the separation bubble was developed and incorporated into the Eppler and Somers airfoil design and analysis program. Thus far, the focus of the research was limited to the development of a model which can accurately predict situations in which the interaction between the bubble and the inviscid velocity distribution is weak, the so-called short bubble. A summary of the research performed in the past nine months is presented. The bubble model in its present form is then described. Lastly, the performance of this model in predicting bubble characteristics is shown for a few cases.

  18. Computer modeling of high-efficiency solar cells

    NASA Technical Reports Server (NTRS)

    Schartz, R. J.; Lundstrom, M. S.

    1980-01-01

    Transport equations which describe the flow of holes and electrons in the heavily doped regions of a solar cell are presented in a form that is suitable for device modeling. Two experimentally determinable parameters, the effective bandgap shrinkage and the effective asymmetry factor, are required to completely model the cell in these regions. Nevertheless, a knowledge of only the effective bandgap shrinkage is sufficient to model the terminal characteristics of the cell. The results of computer simulations of the effects of heavy doping are presented. The insensitivity of the terminal characteristics to the choice of effective asymmetry factor is shown, along with the sensitivity of the electric and quasielectric fields to this parameter. The dependence of the terminal characteristics on the effective bandgap shrinkage is also presented.

  19. Efficient computation of the spectrum of viscoelastic flows

    NASA Astrophysics Data System (ADS)

    Valério, J. V.; Carvalho, M. S.; Tomei, C.

    2009-03-01

    The understanding of viscoelastic flows in many situations requires not only the steady state solution of the governing equations, but also its sensitivity to small perturbations. Linear stability analysis leads to a generalized eigenvalue problem (GEVP), whose numerical analysis may be challenging, even for Newtonian liquids, because the incompressibility constraint creates singularities that lead to non-physical eigenvalues at infinity. For viscoelastic flows, the difficulties increase due to the presence of continuous spectrum, related to the constitutive equations. The Couette flow of upper convected Maxwell (UCM) liquids has been used as a case study of the stability of viscoelastic flows. The spectrum consists of two discrete eigenvalues and a continuous segment with real part equal to -1/We (We is the Weissenberg number). Most of the approximations in the literature were obtained using spectral expansions. The eigenvalues close to the continuous part of the spectrum show very slow convergence. In this work, the linear stability of Couette flow of a UCM liquid is studied using a finite element method. A new procedure to eliminate the eigenvalues at infinity from the GEVP is proposed. The procedure takes advantage of the structure of the matrices involved and avoids the computational overhead of the usual mapping techniques. The GEVP is transformed into a non-degenerate GEVP of dimension five times smaller. The computed eigenfunctions related to the continuous spectrum are in good agreement with the analytic solutions obtained by Graham [M.D. Graham, Effect of axial flow on viscoelastic Taylor-Couette instability, J. Fluid Mech. 360 (1998) 341].
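
    As a minimal illustration of why eigenvalues "at infinity" arise (this sketches the phenomenon only, not the reduction procedure proposed in the paper), a singular right-hand matrix B in the generalized problem Ax = lambda Bx, analogous to the rows contributed by the incompressibility constraint, produces infinite eigenvalues that a naive solver must filter:

    ```python
    import numpy as np
    from scipy.linalg import eig

    # Toy GEVP A x = lambda B x with a singular B; the two zeroed rows of B
    # play the role of constraint equations and yield eigenvalues at infinity.
    n = 6
    rng = np.random.default_rng(0)
    A = rng.standard_normal((n, n))
    B = rng.standard_normal((n, n))
    B[-2:, :] = 0.0            # degenerate rows -> two infinite eigenvalues

    w, _ = eig(A, B)           # LAPACK reports inf/nan for degenerate directions
    print("finite eigenvalues:", np.sort_complex(w[np.isfinite(w)]))
    ```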

  20. Adjoint Algorithm for CAD-Based Shape Optimization Using a Cartesian Method

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.

    2004-01-01

    Adjoint solutions of the governing flow equations are becoming increasingly important for the development of efficient analysis and optimization algorithms. A well-known use of the adjoint method is gradient-based shape optimization. Given an objective function that defines some measure of performance, such as the lift and drag functionals, its gradient is computed at a cost that is essentially independent of the number of design variables (geometric parameters that control the shape). More recently, emerging adjoint applications focus on the analysis problem, where the adjoint solution is used to drive mesh adaptation, as well as to provide estimates of functional error bounds and corrections. The attractive feature of this approach is that the mesh-adaptation procedure targets a specific functional, thereby localizing the mesh refinement and reducing computational cost. Our focus is on the development of adjoint-based optimization techniques for a Cartesian method with embedded boundaries. In contrast to implementations on structured and unstructured grids, Cartesian methods decouple the surface discretization from the volume mesh. This feature makes Cartesian methods well suited for the automated analysis of complex geometry problems, and consequently a promising approach to aerodynamic optimization. Melvin et al. developed an adjoint formulation for the TRANAIR code, which is based on the full-potential equation with viscous corrections. More recently, Dadone and Grossman presented an adjoint formulation for the Euler equations. In both approaches, a boundary condition is introduced to approximate the effects of the evolving surface shape, resulting in accurate gradient computation. Central to automated shape optimization algorithms is the issue of geometry modeling and control. The need to optimize complex, "real-life" geometry provides a strong incentive for the use of parametric-CAD systems within the optimization procedure. In previous work, we presented
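
    For orientation, the standard discrete-adjoint relations behind that cost claim, with flow residual R(q, \alpha) = 0, objective J(q, \alpha), state q, design variables \alpha, and adjoint vector \psi (notation assumed here):

    \[ \left(\frac{\partial R}{\partial q}\right)^{T}\psi = -\left(\frac{\partial J}{\partial q}\right)^{T}, \qquad \frac{dJ}{d\alpha} = \frac{\partial J}{\partial \alpha} + \psi^{T}\frac{\partial R}{\partial \alpha} \]

    One flow solve plus one adjoint solve yields the entire gradient; each extra design variable adds only cheap partial-derivative terms, which is why the cost is essentially independent of the number of design variables.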

  1. An efficient network for interconnecting remote monitoring instruments and computers

    SciTech Connect

    Halbig, J.K.; Gainer, K.E.; Klosterbuer, S.F.

    1994-08-01

    Remote monitoring instrumentation must be connected with computers and other instruments. The cost and intrusiveness of installing cables in new and existing plants present problems for the facility and the International Atomic Energy Agency (IAEA). The authors have tested a network that could accomplish this interconnection using mass-produced commercial components developed for use in industrial applications. Unlike components in the hardware of most networks, the components--manufactured and distributed in North America, Europe, and Asia--lend themselves to small and low-powered applications. The heart of the network is a chip with three microprocessors and proprietary network software contained in Read Only Memory. In addition to all nonuser levels of protocol, the software also contains message authentication capabilities. This chip can be interfaced to a variety of transmission media, for example, RS-485 lines, fiber optic cables, rf waves, and standard ac power lines. The use of power lines as the transmission medium in a facility could significantly reduce cabling costs.

  2. Enabling Efficient Climate Science Workflows in High Performance Computing Environments

    NASA Astrophysics Data System (ADS)

    Krishnan, H.; Byna, S.; Wehner, M. F.; Gu, J.; O'Brien, T. A.; Loring, B.; Stone, D. A.; Collins, W.; Prabhat, M.; Liu, Y.; Johnson, J. N.; Paciorek, C. J.

    2015-12-01

    A typical climate science workflow often involves a combination of acquisition of data, modeling, simulation, analysis, visualization, publishing, and storage of results. Each of these tasks provides a myriad of challenges when running in a high performance computing environment such as Hopper or Edison at NERSC. Hurdles such as data transfer and management, job scheduling, parallel analysis routines, and publication require a lot of forethought and planning to ensure that proper quality control mechanisms are in place. These steps require effectively utilizing a combination of well tested and newly developed functionality to move data, perform analysis, apply statistical routines, and finally, serve results and tools to the greater scientific community. As part of the CAlibrated and Systematic Characterization, Attribution and Detection of Extremes (CASCADE) project, we highlight a stack of tools our team utilizes and has developed to make large-scale simulation and analysis work routine, with operations that assist in everything from the generation/procurement of data (HTAR/Globus) to automating the publication of results to portals like the Earth Systems Grid Federation (ESGF), all while executing everything in between in a scalable, task-parallel way (MPI). We highlight the use and benefit of these tools by showing several climate science analysis use cases to which they have been applied.

  3. Efficient computer algebra algorithms for polynomial matrices in control design

    NASA Technical Reports Server (NTRS)

    Baras, J. S.; Macenany, D. C.; Munach, R.

    1989-01-01

    The theory of polynomial matrices plays a key role in the design and analysis of multi-input multi-output control and communications systems using frequency domain methods. Examples include coprime factorizations of transfer functions, canonical realizations from matrix fraction descriptions, and the transfer function design of feedback compensators. Typically, such problems abstract in a natural way to the need to solve systems of Diophantine equations or systems of linear equations over polynomials. These and other problems involving polynomial matrices can in turn be reduced to polynomial matrix triangularization procedures, a result which is not surprising given the importance of matrix triangularization techniques in numerical linear algebra. Matrices with entries from a field and Gaussian elimination play a fundamental role in understanding the triangularization process. In the case of polynomial matrices (matrices with entries from a ring), Gaussian elimination is not defined, and triangularization is accomplished by what is quite properly called Euclidean elimination. Unfortunately, the numerical stability and sensitivity issues which accompany floating point approaches to Euclidean elimination are not very well understood. New algorithms are presented which circumvent entirely such numerical issues through the use of exact, symbolic methods in computer algebra. The use of such error-free algorithms guarantees that the results are accurate to within the precision of the model data--the best that can be hoped for. Care must be taken in the design of such algorithms due to the phenomenon of intermediate expression swell.
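
    To make the distinction concrete, the following is a minimal SymPy sketch of exact Euclidean elimination over Q[x] (an illustration of the idea using exact rational arithmetic, not the authors' algorithms; euclidean_triangularize is a name invented here):

    ```python
    import sympy as sp

    x = sp.symbols('x')

    def euclidean_triangularize(M):
        """Upper-triangularize a matrix over Q[x]: in each column, repeatedly
        replace the higher-degree entry by its remainder modulo the
        lower-degree entry (the polynomial analogue of a Gaussian step)."""
        M = M.applyfunc(sp.expand)
        m, n = M.shape
        for j in range(min(m, n)):
            while True:
                rows = [i for i in range(j, m) if M[i, j] != 0]
                if len(rows) <= 1:
                    break
                rows.sort(key=lambda i: sp.degree(M[i, j], x))
                p, q = rows[0], rows[1]
                quo, _ = sp.div(M[q, j], M[p, j], x)   # exact, no rounding
                M[q, :] = (M[q, :] - quo * M[p, :]).expand()
            rows = [i for i in range(j, m) if M[i, j] != 0]
            if rows and rows[0] != j:                  # move pivot to row j
                M[j, :], M[rows[0], :] = M[rows[0], :], M[j, :]
        return M

    M = sp.Matrix([[x**2 + 1, x], [x**3 - x, x + 2]])
    print(euclidean_triangularize(M))
    ```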

  4. An accurate and efficient computation method of the hydration free energy of a large, complex molecule

    NASA Astrophysics Data System (ADS)

    Yoshidome, Takashi; Ekimoto, Toru; Matubayasi, Nobuyuki; Harano, Yuichi; Kinoshita, Masahiro; Ikeguchi, Mitsunori

    2015-05-01

    The hydration free energy (HFE) is a crucially important physical quantity for discussing various chemical processes in aqueous solutions. Although an explicit-solvent computation with molecular dynamics (MD) simulations is a preferable treatment of the HFE, a huge computational load has been inevitable for large, complex solutes like proteins. In the present paper, we propose an efficient computation method for the HFE. In our method, the HFE is computed as a sum of ⟨u⟩/2 (⟨u⟩ is the ensemble average of the sum of the pair interaction energies between the solute and the water molecules) and the water reorganization term mainly reflecting the excluded volume effect. Since ⟨u⟩ can readily be computed through an MD simulation of the system composed of the solute and water, an efficient computation of the latter term leads to a reduction of the computational load. We demonstrate that the water reorganization term can quantitatively be calculated using the morphometric approach (MA), which expresses the term as a linear combination of the four geometric measures of a solute, with the corresponding coefficients determined with the energy representation (ER) method. Since the MA enables us to finish the computation of the solvent reorganization term in less than 0.1 s once the coefficients are determined, its use provides an efficient computation of the HFE even for large, complex solutes. Through the applications, we find that our method has almost the same quantitative performance as the ER method with a substantial reduction of the computational load.
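
    A sketch of the resulting decomposition, with notation assumed here (⟨u⟩ as above; V the excluded volume, A the water-accessible surface area, X and C the integrated mean and Gaussian curvatures of the solute surface; c_1, ..., c_4 the coefficients determined with the ER method):

    \[ \mu_{\mathrm{hyd}} \approx \frac{\langle u \rangle}{2} + c_1 V + c_2 A + c_3 X + c_4 C \]

    The linear combination of the four geometric measures is the morphometric ansatz; once the coefficients are fit, evaluating the reorganization term requires only the solute geometry, which is what makes the sub-0.1 s evaluation possible.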

  5. An Efficient Objective Analysis System for Parallel Computers

    NASA Technical Reports Server (NTRS)

    Stobie, J.

    1999-01-01

    A new atmospheric objective analysis system designed for parallel computers will be described. The system can produce a global analysis (on a 1 x 1 lat-lon grid with 18 levels of heights and winds and 10 levels of moisture) using 120,000 observations in 17 minutes on 32 CPUs (SGI Origin 2000). No special parallel code is needed (e.g., MPI or multitasking) and the 32 CPUs do not have to be on the same platform. The system is totally portable and can run on several different architectures at once. In addition, the system can easily scale up to 100 or more CPUs. This will allow for much higher resolution and significant increases in input data. The system scales linearly with the number of observations and the number of grid points. The cost overhead in going from 1 to 32 CPUs is 18%. In addition, the analysis results are identical regardless of the number of processors used. This system has all the characteristics of optimal interpolation, combining detailed instrument and first-guess error statistics to produce the best estimate of the atmospheric state. Static tests with a 2 x 2.5 resolution version of this system showed its analysis increments are comparable to those of the latest NASA operational system, including maintenance of mass-wind balance. Results from several months of cycling tests in the Goddard EOS Data Assimilation System (GEOS DAS) show this new analysis retains the same level of agreement between the first guess and observations (O-F statistics) as the current operational system.
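
    The record does not reproduce the equations; the standard optimal-interpolation analysis step it refers to has the form (background state x_b, observations y, observation operator H, background and observation error covariances B and R):

    \[ x_a = x_b + K\,(y - H x_b), \qquad K = B H^{T}\left(H B H^{T} + R\right)^{-1} \]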

  6. An Efficient Objective Analysis System for Parallel Computers

    NASA Technical Reports Server (NTRS)

    Stobie, James G.

    1999-01-01

    A new objective analysis system designed for parallel computers will be described. The system can produce a global analysis (on a 2 x 2.5 lat-lon grid with 20 levels of heights and winds and 10 levels of moisture) using 120,000 observations in less than 3 minutes on 32 CPUs (SGI Origin 2000). No special parallel code is needed (e.g., MPI or multitasking) and the 32 CPUs do not have to be on the same platform. The system is totally portable and can run on several different architectures at once. In addition, the system can easily scale up to 100 or more CPUs. This will allow for much higher resolution and significant increases in input data. The system scales linearly with the number of observations and the number of grid points. The cost overhead in going from 1 to 32 CPUs is 18%. In addition, the analysis results are identical regardless of the number of processors used. This system has all the characteristics of optimal interpolation, combining detailed instrument and first-guess error statistics to produce the best estimate of the atmospheric state. It also includes a new quality control (buddy check) system. Static tests with the system showed its analysis increments are comparable to those of the latest NASA operational system, including maintenance of mass-wind balance. Results from a 2-month cycling test in the Goddard EOS Data Assimilation System (GEOS DAS) show this new analysis retains the same level of agreement between the first guess and observations (O-F statistics) throughout the entire two months.

  7. The Challenging Academic Development (CAD) Collective

    ERIC Educational Resources Information Center

    Peseta, Tai

    2005-01-01

    This article discusses the Challenging Academic Development (CAD) Collective and describes how it came out of a symposium called "Liminality, identity, and hybridity: On the promise of new conceptual frameworks for theorising academic/faculty development." The CAD Collective is and represents a space where people can open up their contexts and…

  8. Probabilistic framework for reliability analysis of information-theoretic CAD systems in mammography.

    PubMed

    Habas, Piotr A; Zurada, Jacek M; Elmaghraby, Adel S; Tourassi, Georgia D

    2006-01-01

    The purpose of this study is to develop and evaluate a probabilistic framework for reliability analysis of information-theoretic computer-assisted detection (IT-CAD) systems in mammography. The study builds upon our previous work on a feature-based reliability analysis technique tailored to traditional CAD systems developed with a supervised learning scheme. The present study proposes a probabilistic framework to facilitate application of the reliability analysis technique for knowledge-based CAD systems that are not feature-based. The study was based on an information-theoretic CAD system developed for detection of masses in screening mammograms from the Digital Database for Screening Mammography (DDSM). The experimental results reveal that the query-specific reliability estimate provided by the proposed probabilistic framework is an accurate predictor of CAD performance for the query case. It can also be successfully applied as a base for stratification of CAD predictions into clinically meaningful reliability groups (i.e., HIGH, MEDIUM, and LOW). Based on a leave-one-out sampling scheme and ROC analysis, the study demonstrated that the diagnostic performance of the IT-CAD is significantly higher for cases with HIGH reliability (A(z) = 0.92 +/- 0.03) than for those stratified as MEDIUM (A(z) = 0.84 +/- 0.02) or LOW reliability predictions (A(z) = 0.78 +/- 0.02). PMID:17946741
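
    As a schematic of reliability-stratified ROC analysis of this kind (all arrays and cut points below are synthetic placeholders, not the study's data; scikit-learn assumed):

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)
    y_true = rng.integers(0, 2, 300)            # hypothetical truth labels
    scores = y_true * 0.5 + rng.random(300)     # hypothetical CAD scores
    reliability = rng.random(300)               # hypothetical reliability estimates

    strata = {"LOW": reliability < 0.33,
              "MEDIUM": (reliability >= 0.33) & (reliability < 0.66),
              "HIGH": reliability >= 0.66}
    for name, mask in strata.items():           # Az per reliability group
        print(name, round(roc_auc_score(y_true[mask], scores[mask]), 3))
    ```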

  9. Time efficient 3-D electromagnetic modeling on massively parallel computers

    SciTech Connect

    Alumbaugh, D.L.; Newman, G.A.

    1995-08-01

    A numerical modeling algorithm has been developed to simulate the electromagnetic response of a three-dimensional earth to a dipole source for frequencies ranging from 100 Hz to 100 MHz. The numerical problem is formulated in terms of a frequency-domain modified vector Helmholtz equation for the scattered electric fields. The resulting differential equation is approximated using a staggered finite-difference grid, which results in a linear system of equations for which the matrix is sparse and complex symmetric. The system of equations is solved using a preconditioned quasi-minimum-residual method. Dirichlet boundary conditions are employed at the edges of the mesh by setting the tangential electric fields equal to zero. At frequencies less than 1 MHz, normal grid stretching is employed to mitigate unwanted reflections off the grid boundaries. For frequencies greater than this, absorbing boundary conditions must be employed by making the stretching parameters of the modified vector Helmholtz equation complex, which introduces loss at the boundaries. To allow for faster calculation of realistic models, the original serial version of the code has been modified to run on a massively parallel architecture. This modification involves three distinct tasks: (1) mapping the finite-difference stencil to a processor stencil which allows the necessary information to be exchanged between processors that contain adjacent nodes in the model, (2) determining the most efficient method to input the model, which is accomplished by dividing the input into "global" and "local" data and then reading the two sets in differently, and (3) deciding how to output the data, which is an inherently nonparallel process.
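
    A toy SciPy sketch of this kind of sparse, complex-symmetric solve (a generic 1-D stand-in with complex diagonal loss terms imitating absorbing boundaries, not the actual 3-D staggered-grid operator; convergence behavior will differ):

    ```python
    import numpy as np
    import scipy.sparse as sparse
    from scipy.sparse.linalg import qmr

    n = 200
    k2 = (2 * np.pi / 50.0) ** 2
    main = (-2.0 + k2) * np.ones(n, dtype=complex)
    main[:10] += 0.3j                   # complex entries near the ends mimic
    main[-10:] += 0.3j                  # boundary loss
    off = np.ones(n - 1, dtype=complex)
    A = sparse.diags([off, main, off], [-1, 0, 1], format='csr')
    b = np.zeros(n, dtype=complex)
    b[n // 2] = 1.0                     # point (dipole-like) source

    x, info = qmr(A, b)                 # QMR suits sparse complex-symmetric systems
    print(info, np.linalg.norm(A @ x - b))
    ```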

  10. Dynamic MRI-based computer aided diagnostic systems for early detection of kidney transplant rejection: A survey

    NASA Astrophysics Data System (ADS)

    Mostapha, Mahmoud; Khalifa, Fahmi; Alansary, Amir; Soliman, Ahmed; Gimel'farb, Georgy; El-Baz, Ayman

    2013-10-01

    Early detection of renal transplant rejection is important to implement appropriate medical and immune therapy in patients with transplanted kidneys. In literature, a large number of computer-aided diagnostic (CAD) systems using different image modalities, such as ultrasound (US), magnetic resonance imaging (MRI), computed tomography (CT), and radionuclide imaging, have been proposed for early detection of kidney diseases. A typical CAD system for kidney diagnosis consists of a set of processing steps including: motion correction, segmentation of the kidney and/or its internal structures (e.g., cortex, medulla), construction of agent kinetic curves, functional parameter estimation, diagnosis, and assessment of the kidney status. In this paper, we survey the current state-of-the-art CAD systems that have been developed for kidney disease diagnosis using dynamic MRI. In addition, the paper addresses several challenges that researchers face in developing efficient, fast and reliable CAD systems for the early detection of kidney diseases.

  11. 3D CAD model retrieval method based on hierarchical multi-features

    NASA Astrophysics Data System (ADS)

    An, Ran; Wang, Qingwen

    2015-12-01

    The classical "Shape Distribution D2" algorithm takes the distance between two random points on a surface of CAD model as statistical features, and based on that it generates a feature vector to calculate the dissimilarity and achieve the retrieval goal. This algorithm has a simple principle, high computational efficiency and can get a better retrieval results for the simple shape models. Based on the analysis of D2 algorithm's shape distribution curve, this paper enhances the algorithm's descriptive ability for a model's overall shape through the statistics of the angle between two random points' normal vectors, especially for the distinctions between the model's plane features and curved surface features; meanwhile, introduce the ratio that a line between two random points cut off by the model's surface to enhance the algorithm's descriptive ability for a model's detailed features; finally, integrating the two shape describing methods with the original D2 algorithm, this paper proposes a new method based the hierarchical multi-features. Experimental results showed that this method has bigger improvements and could get a better retrieval results compared with the traditional 3D CAD model retrieval method.

  12. A NURBS enhanced extended finite element approach for unfitted CAD analysis

    NASA Astrophysics Data System (ADS)

    Legrain, Grégory

    2013-10-01

    A NURBS enhanced extended finite element approach is proposed for the unfitted simulation of structures defined by means of CAD parametric surfaces. In contrast to classical X-FEM, which uses level sets to define the geometry of the computational domain, an exact CAD description is considered here. Following the ideas developed in the context of the NURBS-enhanced finite element method, NURBS-enhanced subelements are defined to take into account the exact geometry of the interface inside an element. In addition, a high-order approximation is considered to allow for large elements compared to the size of the geometrical details (without loss of accuracy). Finally, a geometrically implicit/explicit approach is proposed for efficiency purposes in the context of fracture mechanics. In this paper, only 2D examples are considered: it is shown that optimal rates of convergence are obtained without the need to consider shape functions defined in the physical space. Moreover, thanks to the flexibility given by the Partition of Unity, it is possible to recover optimal convergence rates in the case of re-entrant corners, cracks and embedded material interfaces.

  13. Introduction: From Efficient Quantum Computation to Nonextensive Statistical Mechanics

    NASA Astrophysics Data System (ADS)

    Prosen, Tomaz

    These few pages will attempt to make a short comprehensive overview of several contributions to this volume which concern rather diverse topics. I shall review the following works, essentially reversing the sequence indicated in my title: • First, by C. Tsallis on the relation of nonextensive statistics to the stability of quantum motion on the edge of quantum chaos. • Second, the contribution by P. Jizba on information theoretic foundations of generalized (nonextensive) statistics. • Third, the contribution by J. Rafelski on a possible generalization of Boltzmann kinetics, again, formulated in terms of nonextensive statistics. • Fourth, the contribution by D.L. Stein on the state-of-the-art open problems in spin glasses and on the notion of complexity there. • Fifth, the contribution by F.T. Arecchi on the quantum-like uncertainty relations and decoherence appearing in the description of perceptual tasks of the brain. • Sixth, the contribution by G. Casati on the measurement and information extraction in the simulation of complex dynamics by a quantum computer. Immediately, the following question arises: What do the topics of these talks have in common? Apart from the variety of questions they address, it is quite obvious that the common denominator of these contributions is an approach to describe and control "the complexity" by simple means. One of the very useful tools to handle such problems, also often used or at least referred to in several of the works presented here, is the concept of Tsallis entropy and nonextensive statistics.

  14. Instructional Efficiency of Integrated and Separated Text with Animated Presentations in Computer-Based Science Instruction

    ERIC Educational Resources Information Center

    Kablan, Z.; Erden, M.

    2008-01-01

    This study deals with the instructional efficiency of integrating text and animation into computer-based science instruction. The participants were 84 seventh-grade students in a private primary school in Istanbul. The efficiency of instruction was measured by mental effort and performance level of the learners. The results of the study showed…

  15. A CAD interface for GEANT4.

    PubMed

    Poole, C M; Cornelius, I; Trapp, J V; Langton, C M

    2012-09-01

    CAD models often already exist for parts of a geometry being simulated using GEANT4. Direct import of these CAD models into GEANT4, however, may not be possible, and complex components may be difficult to define via other means. Solutions that allow users to work around the limited support in the GEANT4 toolkit for loading predefined CAD geometries have been presented by others; however, these solutions require intermediate file format conversion using commercial software. Herein we describe a technique that allows CAD models to be loaded directly as geometry, without the need for commercial software and intermediate file format conversion. Robustness of the interface was tested using a set of CAD models of various complexity; for the models used in testing, no import errors were reported and all geometry was found to be navigable by GEANT4. PMID:22956356

  16. Integrated Computer-Aided Drafting Instruction (ICADI).

    ERIC Educational Resources Information Center

    Chen, C. Y.; McCampbell, David H.

    Until recently, computer-aided drafting and design (CAD) systems were almost exclusively operated on mainframes or minicomputers and their cost prohibited many schools from offering CAD instruction. Today, many powerful personal computers are capable of performing the high-speed calculation and analysis required by the CAD application; however,…

  17. Efficient conjugate gradient algorithms for computation of the manipulator forward dynamics

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Scheid, Robert E.

    1989-01-01

    The applicability of conjugate gradient algorithms for computation of the manipulator forward dynamics is investigated. The redundancies in the previously proposed conjugate gradient algorithm are analyzed. A new version is developed which, by avoiding these redundancies, achieves a significantly greater efficiency. A preconditioned conjugate gradient algorithm is also presented. A diagonal matrix whose elements are the diagonal elements of the inertia matrix is proposed as the preconditioner. In order to increase the computational efficiency, an algorithm is developed which exploits the synergism between the computation of the diagonal elements of the inertia matrix and that required by the conjugate gradient algorithm.
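
    A minimal sketch of the diagonal (Jacobi) preconditioning idea applied to a generic symmetric positive-definite system M a = b, such as the joint-space inertia equation of the forward dynamics problem (illustrative only; this is not the paper's recursive formulation):

    ```python
    import numpy as np

    def jacobi_pcg(M, b, tol=1e-10, max_iter=200):
        """Conjugate gradient with diag(M) as preconditioner."""
        d = np.diag(M)                       # the proposed preconditioner
        x = np.zeros_like(b)
        r = b - M @ x
        z = r / d
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Mp = M @ p
            alpha = rz / (p @ Mp)
            x += alpha * p
            r -= alpha * Mp
            if np.linalg.norm(r) < tol * np.linalg.norm(b):
                break
            z = r / d                        # re-apply preconditioner
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x

    # example: random SPD stand-in for an inertia matrix
    rng = np.random.default_rng(0)
    G = rng.standard_normal((7, 7))
    M = G @ G.T + 7 * np.eye(7)
    b = rng.standard_normal(7)
    print(np.allclose(jacobi_pcg(M, b), np.linalg.solve(M, b)))
    ```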

  18. Indications for Computer-Aided Design and Manufacturing in Congenital Craniofacial Reconstruction.

    PubMed

    Fisher, Mark; Medina, Miguel; Bojovic, Branko; Ahn, Edward; Dorafshar, Amir H

    2016-09-01

    The complex three-dimensional relationships in congenital craniofacial reconstruction lend themselves uniquely to the accurate planning and modeling provided by computer-aided design and manufacturing (CAD/CAM). The goal of this study was to illustrate indications where CAD/CAM would be helpful in the reconstruction of congenital craniofacial anomalies and to discuss the application of this technology and its outcomes. A retrospective review was performed of all congenital craniofacial cases performed by the senior author between 2010 and 2014. Cases where CAD/CAM was used were identified, and illustrative cases demonstrating the benefits of CAD/CAM were selected. Preoperative appearance, the computerized plan, the intraoperative course, and the final outcome were analyzed. Preoperative planning enabled efficient execution of the operative plan with predictable results. Risk factors that made these patients good candidates for CAD/CAM were identified and compiled. Several indications, including multisuture and revisional craniosynostosis, facial bipartition, four-wall box osteotomy, reduction cranioplasty, and distraction osteogenesis, could benefit most from this technology. We illustrate the use of CAD/CAM for these applications and describe the decision-making process both before and during surgery. We explore why we believe that CAD/CAM is indicated in these scenarios as well as the disadvantages and risks. PMID:27516839

  19. On the Design of a CADS for Shoulder Pain Pathology

    NASA Astrophysics Data System (ADS)

    de Ipiña, K. López; Hernández, M. C.; Martínez, E.; Vaquero, C.

    A musculoskeletal disorder is a condition in which part of the musculoskeletal system is injured continuously over time. Shoulder disorders are among the most common musculoskeletal cases attended in primary health care services. Shoulder disorders cause pain and limit the ability to perform many routine activities, affecting about 15-25% of the general population. Several clinical tests have been described to aid diagnosis of shoulder disorders. However, the current literature acknowledges a lack of concordance in clinical assessment, even among musculoskeletal specialists. We are working on the design of a Computer-Aided Decision Support (CADS) system for Shoulder Pain Pathology. The paper presents the results of our efforts to build a CADS system, testing several classical classification paradigms, feature reduction methods (PCA) and K-means unsupervised clustering. The small database size imposes the use of robust covariance matrix estimation methods to improve system performance. Finally, the system was evaluated by a medical specialist.
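
    As a schematic of one building block mentioned above, PCA feature reduction followed by K-means clustering, on synthetic data (scikit-learn assumed; nothing here reproduces the study's features or parameters):

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans
    from sklearn.pipeline import make_pipeline

    X = np.random.default_rng(2).standard_normal((40, 12))   # small database
    pipeline = make_pipeline(PCA(n_components=3), KMeans(n_clusters=2, n_init=10))
    print(pipeline.fit_predict(X))                           # cluster labels
    ```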

  20. Tooth-colored CAD/CAM monolithic restorations.

    PubMed

    Reich, S

    2015-01-01

    A monolithic restoration (also known as a full contour restoration) is one that is manufactured from a single material for the fully anatomic replacement of lost tooth structure. Additional staining (followed by glaze firing if ceramic materials are used) may be performed to enhance the appearance of the restoration. For decades, monolithic restoration has been the standard for inlay and partial crown restorations manufactured by both pressing and computer-aided design and manufacturing (CAD/CAM) techniques. A limited selection of monolithic materials is now available for dental crown and bridge restorations. The IDS (2015) provided an opportunity to learn about and evaluate current trends in this field. In addition to new developments, established materials are also mentioned in this article to complete the picture. In line with the strategic focus of the IJCD, the focus here is naturally on CAD/CAM materials. PMID:26110926

  1. A single user efficiency measure for evaluation of parallel or pipeline computer architectures

    NASA Technical Reports Server (NTRS)

    Jones, W. P.

    1978-01-01

    A precise statement of the relationship between sequential computation at one rate, parallel or pipeline computation at a much higher rate, the data movement rate between levels of memory, the fraction of inherently sequential operations or data that must be processed sequentially, the fraction of data to be moved that cannot be overlapped with computation, and the relative computational complexity of the algorithms for the two processes, scalar and vector, was developed. The relationship should be applied to the multirate processes that obtain in the employment of various new or proposed computer architectures for computational aerodynamics. The relationship, an efficiency measure that the single user of the computer system perceives, argues strongly in favor of separating scalar and vector processes, sometimes referred to as loosely coupled processes, to achieve optimum use of hardware.
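
    The record does not reproduce the expression itself; the classical single-rate bound that such measures generalize is Amdahl's, with f the inherently sequential fraction and k the vector-to-scalar speed ratio:

    \[ S(k) = \frac{1}{f + (1 - f)/k} \]

    The measure described above extends this form by additionally charging for the fraction of data movement that cannot be overlapped with computation and for the differing computational complexity of the scalar and vector algorithms.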

  2. CAD/CAM at a Distance: Assessing the Effectiveness of Web-Based Instruction To Meet Workforce Development Needs. AIR 2000 Annual Forum Paper.

    ERIC Educational Resources Information Center

    Wilkerson, Joyce A.; Elkins, Susan A.

    This qualitative case study assessed web-based instruction in a computer-aided design/computer-assisted manufacturing (CAD/CAM) course designed for workforce development. The study examined students' and instructors' experience in a CAD/CAM course delivered exclusively on the Internet, evaluating course content and delivery, clarity of…

  3. Impact of CAD-deficiency in flax on biogas production.

    PubMed

    Wróbel-Kwiatkowska, Magdalena; Jabłoński, Sławomir; Szperlik, Jakub; Dymińska, Lucyna; Łukaszewicz, Marcin; Rymowicz, Waldemar; Hanuza, Jerzy; Szopa, Jan

    2015-12-01

    Global warming and the reduction in our fossil fuel reservoir have forced humanity to look for new means of energy production. Agricultural waste remains a large source for biofuel and bioenergy production. Flax shives are a waste product obtained during the processing of flax fibers. We investigated the possibility of using low-lignin flax shives for biogas production, specifically by assessing the impact of CAD deficiency on the biochemical and structural properties of shives. The study used genetically modified flax plants with a silenced CAD gene, which encodes the key enzyme for lignin synthesis. Reducing the lignin content modified cellulose crystallinity, improved flax shive fermentation and optimized biogas production. Chemical pretreatment of the shive biomass further increased biogas production efficiency. PMID:26178244

  4. Understanding dental CAD/CAM for restorations--accuracy from a mechanical engineering viewpoint.

    PubMed

    Tapie, Laurent; Lebon, Nicolas; Mawussi, Bernardin; Fron-Chabouis, Hélène; Duret, Francois; Attal, Jean-Pierre

    2015-01-01

    As is the case in the field of medicine, as well as in most areas of daily life, digital technology is increasingly being introduced into dental practice. Computer-aided design/computer-aided manufacturing (CAD/CAM) solutions are available not only for chairside practice but also for creating inlays, crowns, fixed partial dentures (FPDs), implant abutments, and other dental prostheses. CAD/CAM dental practice can be considered the use of devices and software for the largely automated design and fabrication of dental restorations. However, dentists who want to use dental CAD/CAM systems often do not have enough information to understand the variations offered by this technology. Knowledge of the random and systematic errors in accuracy with CAD/CAM systems can help to achieve successful restorations with this technology, and can help with the purchase of a CAD/CAM system that meets the clinical needs of restoration. This article provides a mechanical engineering viewpoint on the accuracy of CAD/CAM systems, to help dentists understand the impact of this technology on restoration accuracy. PMID:26734668

  5. Rationale for the Use of CAD/CAM Technology in Implant Prosthodontics

    PubMed Central

    Abduo, Jaafar; Lyons, Karl

    2013-01-01

    Despite the predictable longevity of implant prostheses, there is ongoing interest in continuing to improve implant prosthodontic treatment and outcomes. One of the developments is the application of computer-aided design and computer-aided manufacturing (CAD/CAM) to produce implant abutments and frameworks from metal or ceramic materials. The aim of this narrative review is to critically evaluate the rationale of CAD/CAM utilization for implant prosthodontics. To date, CAD/CAM allows simplified production of precise and durable implant components. The precision of fit has been proven in several laboratory experiments and has been attributed to the design of implants. Milling also facilitates component fabrication from durable and aesthetic materials. With continued development, it is expected that the CAD/CAM protocol will be further simplified. Although compelling clinical evidence supporting the superiority of CAD/CAM implant restorations is still lacking, it is envisioned that CAD/CAM may become the mainstream for implant component fabrication. PMID:23690778

  6. An approximate solution to improve computational efficiency of impedance-type payload load prediction

    NASA Technical Reports Server (NTRS)

    White, C. W.

    1981-01-01

    The computational efficiency of the impedance type loads prediction method was studied. Three goals were addressed: devise a method to make the impedance method operate more efficiently in the computer; assess the accuracy and convenience of the method for determining the effect of design changes; and investigate the use of the method to identify design changes for reduction of payload loads. The method is suitable for calculation of dynamic response in either the frequency or time domain. It is concluded that: the choice of an orthogonal coordinate system will allow the impedance method to operate more efficiently in the computer; the approximate mode impedance technique is adequate for determining the effect of design changes, and is applicable for both statically determinate and statically indeterminate payload attachments; and beneficial design changes to reduce payload loads can be identified by the combined application of impedance techniques and energy distribution review techniques.

  7. A Computationally Efficient Parallel Levenberg-Marquardt Algorithm for Large-Scale Big-Data Inversion

    NASA Astrophysics Data System (ADS)

    Lin, Y.; O'Malley, D.; Vesselinov, V. V.

    2015-12-01

    Inverse modeling seeks model parameters given a set of observed state variables. However, for many practical problems, because the observed data sets are often large and the model parameters numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient Levenberg-Marquardt method for solving large-scale inverse modeling problems. Levenberg-Marquardt methods require the solution of a dense linear system of equations which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, such that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed when solving for the first damping parameter and recycle it for all the following damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for a random transmissivity field. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Compared with a Levenberg-Marquardt method using standard linear inversion techniques, our Levenberg-Marquardt method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment. Therefore, our new inverse modeling method is a
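
    The reuse idea can be illustrated compactly with an SVD in place of the recycled Krylov subspace: factor the Jacobian once, then sweep the damping parameter almost for free (a sketch of the same principle, not the MADS implementation):

    ```python
    import numpy as np

    def lm_steps_shared_svd(J, r, lambdas):
        """Solve (J^T J + lam*I) dx = -J^T r for many damping values lam,
        reusing a single SVD of J instead of refactoring per lam."""
        U, s, Vt = np.linalg.svd(J, full_matrices=False)   # one-time cost
        Utr = U.T @ r
        return [-(Vt.T @ (s / (s**2 + lam) * Utr)) for lam in lambdas]

    # example: one Jacobian, three damping parameters
    rng = np.random.default_rng(3)
    J, r = rng.standard_normal((100, 8)), rng.standard_normal(100)
    print([np.linalg.norm(dx) for dx in lm_steps_shared_svd(J, r, [1e-2, 1e-1, 1.0])])
    ```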

  8. Some computational challenges of developing efficient parallel algorithms for data-dependent computations in thermal-hydraulics supercomputer applications

    SciTech Connect

    Woodruff, S.B.

    1992-05-01

    The Transient Reactor Analysis Code (TRAC), which features a two-fluid treatment of thermal-hydraulics, is designed to model transients in water reactors and related facilities. One of the major computational costs associated with TRAC and similar codes is calculating constitutive coefficients. Although the formulations for these coefficients are local, the costs are flow-regime- or data-dependent; i.e., the computations needed for a given spatial node often vary widely as a function of time. Consequently, poor load balancing will degrade efficiency on either vector or data-parallel architectures when the data are organized according to spatial location. Unfortunately, a general automatic solution to the load-balancing problem associated with data-dependent computations is not yet available for massively parallel architectures. This document discusses why developers should consider alternative algorithms, such as a neural net representation, that do not exhibit load-balancing problems.

  9. Computationally Efficient Use of Derivatives in Emulation of Complex Computational Models

    SciTech Connect

    Williams, Brian J.; Marcy, Peter W.

    2012-06-07

    We will investigate the use of derivative information in complex computer model emulation when the correlation function is of the compactly supported Bohman class. To this end, a Gaussian process model similar to that used by Kaufman et al. (2011) is extended to a situation where first partial derivatives in each dimension are calculated at each input site (i.e. using gradients). A simulation study in the ten-dimensional case is conducted to assess the utility of the Bohman correlation function against strictly positive correlation functions when a high degree of sparsity is induced.

  10. Efficient and Flexible Computation of Many-Electron Wave Function Overlaps

    PubMed Central

    2016-01-01

    A new algorithm for the computation of the overlap between many-electron wave functions is described. This algorithm allows for the extensive use of recurring intermediates and thus provides high computational efficiency. Because of the general formalism employed, overlaps can be computed for varying wave function types, molecular orbitals, basis sets, and molecular geometries. This paves the way for efficiently computing nonadiabatic interaction terms for dynamics simulations. In addition, other application areas can be envisaged, such as the comparison of wave functions constructed at different levels of theory. Aside from explaining the algorithm and evaluating the performance, a detailed analysis of the numerical stability of wave function overlaps is carried out, and strategies for overcoming potential severe pitfalls due to displaced atoms and truncated wave functions are presented. PMID:26854874

  11. An Efficient Algorithm for Stiffness Identification of Truss Structures Through Distributed Local Computation

    NASA Astrophysics Data System (ADS)

    Zhang, G.; Burgueño, R.; Elvin, N. G.

    2010-02-01

    This paper presents an efficient stiffness identification technique for truss structures based on distributed local computation. Sensor nodes on each element are assumed to collect strain data and communicate only with sensors on neighboring elements. This can significantly reduce the energy demand for data transmission and the complexity of transmission protocols, thus enabling a simplified wireless implementation. Element stiffness parameters are identified by simple low order matrix inversion at a local level, which reduces the computational energy, allows for distributed computation and makes parallel data processing possible. The proposed method also permits addressing the problem of missing data or faulty sensors. Numerical examples, with and without missing data, are presented and the element stiffness parameters are accurately identified. The computation efficiency of the proposed method is n^2 times higher than previously proposed global damage identification methods.

  12. Use of MathCAD in a Pharmacokinetics Course for PharmD Students.

    ERIC Educational Resources Information Center

    Sullivan, Timothy J.

    1992-01-01

    This paper describes the application of the Student Edition of MathCAD as a computational aid in an introductory graduate level pharmacokinetics course. The program allows the student to perform mathematical calculations and analysis on a computer screen. The advantages and disadvantages of this application are discussed. (GLR)

  13. An accelerated technique for a ceramic-pressed-to-metal restoration with CAD/CAM technology.

    PubMed

    Lee, Ju-Hyoung

    2014-11-01

    The conventional fabrication of metal ceramic restorations depends on an experienced dental technician and requires a long processing time. However, complete-contour digital waxing and digital cutback with computer-aided design and computer-aided manufacturing (CAD/CAM) technology can overcome these disadvantages and provide a correct metal framework design and space for the ceramic material. PMID:24952883

  14. Education & Training for CAD/CAM: Results of a National Probability Survey. Krannert Institute Paper Series.

    ERIC Educational Resources Information Center

    Majchrzak, Ann

    A study was conducted of the training programs used by plants with Computer Automated Design/Computer Automated Manufacturing (CAD/CAM) to help their employees adapt to automated manufacturing. The study sought to determine the relative priorities of manufacturing establishments for training certain workers in certain skills; the status of…

  15. Challenges facing developers of CAD/CAM models that seek to predict human working postures

    NASA Astrophysics Data System (ADS)

    Wiker, Steven F.

    2005-11-01

    This paper outlines the need for development of human posture prediction models for Computer Aided Design (CAD) and Computer Aided Manufacturing (CAM) design applications in product, facility and work design. Challenges facing developers of posture prediction algorithms are presented and discussed.

  16. Development of efficient computer program for dynamic simulation of telerobotic manipulation

    NASA Technical Reports Server (NTRS)

    Chen, J.; Ou, Y. J.

    1989-01-01

    Research in robot control has generated interest in computationally efficient forms of dynamic equations for multi-body systems. For a simply connected open-loop linkage, dynamic equations arranged in recursive form were found to be particularly efficient. A general computer program capable of simulating an open-loop manipulator with an arbitrary number of links has been developed based on an efficient recursive form of Kane's dynamic equations. Also included in the program are some of the important dynamics of the joint drive system, i.e., the rotational effects of the motor rotors. Further efficiency is achieved by the use of a symbolic manipulation program to generate the FORTRAN simulation program tailored for a specific manipulator based on the parameter values given. The formulations and the validation of the program are described, and some results are shown.

  17. A new computationally-efficient computer program for simulating spectral gamma-ray logs

    SciTech Connect

    Conaway, J.G.

    1995-12-31

    Several techniques to improve the accuracy of radionuclide concentration estimates as a function of depth from gamma-ray logs have appeared in the literature. Much of that work was driven by interest in uranium as an economic mineral. More recently, the problem of mapping and monitoring artificial gamma-emitting contaminants in the ground has rekindled interest in improving the accuracy of radioelement concentration estimates from gamma-ray logs. We are looking at new approaches to accomplishing such improvements. The first step in this effort has been to develop a new computational model of a spectral gamma-ray logging sonde in a borehole environment. The model supports attenuation in any combination of materials arranged in 2-D cylindrical geometry, including any combination of attenuating materials in the borehole, formation, and logging sonde. The model can also handle any distribution of sources in the formation. The model considers unscattered radiation only, as represented by the background-corrected area under a given spectral photopeak as a function of depth. Benchmark calculations using the standard Monte Carlo model MCNP show excellent agreement for total gamma-flux estimates, with a computation time of about 0.01% of that required for the MCNP calculations. This model lacks the flexibility of MCNP, although for this application a great deal can be accomplished without that flexibility.

  18. Bubbles, Clusters and Denaturation in Genomic Dna: Modeling, Parametrization, Efficient Computation

    NASA Astrophysics Data System (ADS)

    Theodorakopoulos, Nikos

    2011-08-01

    The paper uses mesoscopic modeling based on non-linear lattice dynamics (the Peyrard-Bishop-Dauxois, PBD, model) to describe thermal properties of DNA below and near the denaturation temperature. Computationally efficient notation is introduced for the relevant statistical mechanics. Computed melting profiles of long and short heterogeneous sequences are presented, using a recently introduced reparametrization of the PBD model, and critically discussed. The statistics of extended open bubbles and bound clusters is formulated and results are presented for selected examples.
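
    For reference, the PBD energy in its standard form (base-pair stretch y_n; Morse depth D_n and width a_n carry the AT/GC sequence heterogeneity; k, \rho, \alpha set the anharmonic stacking; notation assumed here):

    \[ H = \sum_n \left[ \frac{p_n^2}{2m} + D_n\left(e^{-a_n y_n} - 1\right)^2 + \frac{k}{2}\left(1 + \rho\, e^{-\alpha (y_n + y_{n-1})}\right)\left(y_n - y_{n-1}\right)^2 \right] \]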

  19. Equilibrium gas flow computations. I - Accurate and efficient calculation of equilibrium gas properties

    NASA Technical Reports Server (NTRS)

    Liu, Yen; Vinokur, Marcel

    1989-01-01

    This paper treats the accurate and efficient calculation of thermodynamic properties of arbitrary gas mixtures for equilibrium flow computations. New improvements in the Stupochenko-Jaffe model for the calculation of thermodynamic properties of diatomic molecules are presented. A unified formulation of equilibrium calculations for gas mixtures in terms of irreversible entropy is given. Using a highly accurate thermo-chemical data base, a new, efficient and vectorizable search algorithm is used to construct piecewise interpolation procedures which generate the accurate thermodynamic variables and their derivatives required by modern computational algorithms. Results are presented for equilibrium air, and compared with those given by the Srinivasan program.

  20. Use of Existing CAD Models for Radiation Shielding Analysis

    NASA Technical Reports Server (NTRS)

    Lee, K. T.; Barzilla, J. E.; Wilson, P.; Davis, A.; Zachman, J.

    2015-01-01

    The utility of a radiation exposure analysis depends not only on the accuracy of the underlying particle transport code, but also on the accuracy of the geometric representations of both the vehicle used as radiation shielding mass and the phantom representation of the human form. The current NASA/Space Radiation Analysis Group (SRAG) process to determine crew radiation exposure in a vehicle design incorporates both output from an analytic High Z and Energy Particle Transport (HZETRN) code and the properties (i.e., material thicknesses) of a previously processed drawing. This geometry pre-process can be time-consuming, and the results are less accurate than those determined using a Monte Carlo-based particle transport code. The current work aims to improve this process. Although several Monte Carlo programs (FLUKA, Geant4) are readily available, most use an internal geometry engine. The lack of an interface with the standard CAD formats used by the vehicle designers limits the ability of the user to communicate complex geometries. Translation of native CAD drawings into a format readable by these transport programs is time consuming and prone to error. The Direct Accelerated Geometry-United (DAGU) project is intended to provide an interface between the native vehicle or phantom CAD geometry and multiple particle transport codes to minimize problem setup, computing time and analysis error.

  1. A CAD system based on spherical dual representations

    SciTech Connect

    Roach, J.W.; Paripati, P.K.; Wright, J.S.

    1987-08-01

    Computer-aided design (CAD) systems typically have many different functions: drafting, two-dimensional modeling, three-dimensional modeling, finite element analysis, and fit and tolerancing of parts. The authors report on the construction of a CAD system based on shape representation ideas used in the vision community to determine the shape of an object from its image. In the long term, they propose to construct a combined CAD and sensing system based on the same underlying object models. Considerable advantages follow from building a model-driven sensor fusion system that uses a common geometric model. In a manufacturing environment, for example, a library of objects can be built up and its models used in a vision and touch sensing system integrated into an automated assembly line to discriminate between objects and determine orientation and distance. If such a system could be made robust and highly reliable, then some of the most difficult problems that plague attempts to create a fully flexible automated environment would be solved.

  2. Centerline-based colon segmentation for CAD of CT colonography

    NASA Astrophysics Data System (ADS)

    Näppi, Janne; Frimmel, Hans; Yoshida, Hiroyuki

    2006-03-01

    We developed a fast centerline-based segmentation (CBS) algorithm for the extraction of colon in computer-aided detection (CAD) for CT colonography (CTC). CBS calculates local centerpoints along thresholded components of abdominal air, and connects the centerpoints iteratively to yield a colon centerline. A thick region encompassing the colonic wall is extracted by use of region-growing around the centerline. The resulting colonic wall is employed in our CAD scheme for the detection of polyps, in which polyps are detected within the wall by use of volumetric shape features. False-positive detections are reduced by use of a Bayesian neural network. The colon extraction accuracy of CBS was evaluated by use of 38 clinical CTC scans representing various preparation conditions. On average, CBS covered more than 96% of the visible region of colon with less than 1% extracolonic components in the extracted region. The polyp detection performance of the CAD scheme was evaluated by use of 121 clinical cases with 42 colonoscopy-confirmed polyps 5-25 mm. At a 93% by-polyp detection sensitivity for polyps >=5 mm, a leave-one-patient-out evaluation yielded 1.4 false-positive polyp detections per CT scan.

  3. CADS: Cantera Aerosol Dynamics Simulator.

    SciTech Connect

    Moffat, Harry K.

    2007-07-01

    This manual describes a library for aerosol kinetics and transport, called CADS (Cantera Aerosol Dynamics Simulator), which employs a section-based approach for describing the particle size distributions. CADS is based upon Cantera, a set of C++ libraries and applications that handles gas-phase species transport and reactions. The method uses a discontinuous Galerkin formulation to represent the particle distributions within each section and to solve for changes to the aerosol particle distributions due to condensation, coagulation, and nucleation processes. CADS conserves particles, elements, and total enthalpy up to numerical round-off error, in all of its formulations. Both 0-D time-dependent and 1-D steady-state applications (an opposing-flow flame application) have been developed with CADS, with the initial emphasis on developing fundamental mechanisms for soot formation within fires. This report also describes the 0-D application, TDcads, which models a time-dependent perfectly stirred reactor.

  4. The unstoppable progress of CAD/CAM - Results and prospects

    NASA Astrophysics Data System (ADS)

    Seifert, H.

    1982-08-01

    The state of CAD/CAM technology in the construction and machinery industries is clarified by means of a few examples using PROREN software. The use of CAD in both two-dimensional and three-dimensional design is discussed, and CAD/CAM's potential for determining control information for numerically controlled milling is assessed. The design of parts by CAD is illustrated pictorially.

  5. The CAD-EGS Project: Using CAD Geometrics in EGS4

    SciTech Connect

    Langeveld, Willy G.J.

    2002-03-28

    The objective of the CAD-EGS project is to provide a way to use a CAD system to create 3D geometries for use within EGS4. In this report, we describe an approach based on an intermediate file, written out by the CAD system, that is read by an EGS4 user code designed for the purpose. A prototype solution was implemented using a commonly used CAD system and the Virtual Reality Modeling Language (VRML) as an intermediate file format. We report results from the prototype, and discuss various problems arising from both the approach and the particular choices made.

  6. Energy-Efficient Computational Chemistry: Comparison of x86 and ARM Systems.

    PubMed

    Keipert, Kristopher; Mitra, Gaurav; Sunriyal, Vaibhav; Leang, Sarom S; Sosonkina, Masha; Rendell, Alistair P; Gordon, Mark S

    2015-11-10

    The computational efficiency and energy-to-solution of several applications from the GAMESS quantum chemistry suite of codes are evaluated for 32-bit and 64-bit ARM-based computers and compared to an x86 machine. The x86 system completes all benchmark computations more quickly than either ARM system and is the best choice to minimize time to solution. The ARM64 and ARM32 computational performances are similar to each other for Hartree-Fock and density functional theory energy calculations. However, for memory-intensive second-order perturbation theory energy and gradient computations the lower ARM32 read/write memory bandwidth results in computation times as much as 86% longer than on the ARM64 system. The ARM32 system is more energy efficient than the x86 and ARM64 CPUs for all benchmarked methods, while the ARM64 CPU is more energy efficient than the x86 CPU for some core counts and molecular sizes. PMID:26574303
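
    As a concrete illustration, the energy-to-solution metric compared above is simply average power draw multiplied by wall-clock time. The Python sketch below uses made-up power and runtime figures, not values from the benchmark:

        # Energy-to-solution = average power (W) * wall-clock time (s).
        # All figures below are illustrative placeholders only.
        machines = {
            "x86":   {"power_w": 95.0, "time_s": 120.0},
            "arm64": {"power_w": 15.0, "time_s": 540.0},
            "arm32": {"power_w":  5.0, "time_s": 980.0},
        }
        for name, m in machines.items():
            energy_kj = m["power_w"] * m["time_s"] / 1000.0  # joules -> kJ
            print(f"{name}: {energy_kj:.1f} kJ to solution")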

  7. Evaluation of Emerging Energy-Efficient Heterogeneous Computing Platforms for Biomolecular and Cellular Simulation Workloads

    PubMed Central

    Stone, John E.; Hallock, Michael J.; Phillips, James C.; Peterson, Joseph R.; Luthey-Schulten, Zaida; Schulten, Klaus

    2016-01-01

    Many of the continuing scientific advances achieved through computational biology are predicated on the availability of ongoing increases in computational power required for detailed simulation and analysis of cellular processes on biologically relevant timescales. A critical challenge facing the development of future exascale supercomputer systems is the development of new computing hardware and associated scientific applications that dramatically improve upon the energy efficiency of existing solutions, while providing increased simulation, analysis, and visualization performance. Mobile computing platforms have recently become powerful enough to support interactive molecular visualization tasks that were previously only possible on laptops and workstations, creating future opportunities for their convenient use for meetings, remote collaboration, and as head-mounted displays for immersive stereoscopic viewing. We describe early experiences adapting several biomolecular simulation and analysis applications for emerging heterogeneous computing platforms that combine power-efficient system-on-chip multi-core CPUs with high-performance massively parallel GPUs. We present low-cost power monitoring instrumentation that provides sufficient temporal resolution to evaluate the power consumption of individual CPU algorithms and GPU kernels. We compare the performance and energy efficiency of scientific applications running on emerging platforms with results obtained on traditional platforms, identify hardware and algorithmic performance bottlenecks that affect the usability of these platforms, and describe avenues for improving both the hardware and applications in pursuit of the needs of molecular modeling tasks on mobile devices and future exascale computers. PMID:27516922

  8. Diagnostic performance of radiologists with and without different CAD systems for mammography

    NASA Astrophysics Data System (ADS)

    Lauria, Adele; Fantacci, Maria E.; Bottigli, Ubaldo; Delogu, Pasquale; Fauci, Francesco; Golosio, Bruno; Indovina, Pietro L.; Masala, Giovanni L.; Oliva, Piernicola; Palmiero, Rosa; Raso, Giuseppe; Stumbo, Simone; Tangaro, Sabina

    2003-05-01

    The purpose of this study is to evaluate the variation in performance, in terms of sensitivity and specificity, of two radiologists with different experience in mammography, with and without the assistance of two different CAD systems. The CAD systems considered are SecondLookTM (CADx Medical Systems, Canada) and CALMA (Computer Assisted Library in MAmmography). The first is a commercial system; the other is the result of a research project supported by INFN (Istituto Nazionale di Fisica Nucleare, Italy); their characteristics have already been reported in the literature. To compare the results with and without these tools, a dataset composed of 70 images of patients with cancer (biopsy proven) and 120 images of healthy breasts (with a three-year follow-up) was collected. All the images were digitized and analysed by the two CAD systems; then two radiologists, with respectively 6 and 2 years of experience in mammography, independently made their diagnoses without and with the support of the two CAD systems. In this work, the variation in sensitivity and specificity and the area Az under the ROC curve are reported. The results show that the use of a CAD allows for a substantial increment in sensitivity and a less pronounced decrement in specificity. The extent of these effects depends on the experience of the readers and is comparable for the two CAD systems considered.

  9. Spin-neurons: A possible path to energy-efficient neuromorphic computers

    NASA Astrophysics Data System (ADS)

    Sharad, Mrigank; Fan, Deliang; Roy, Kaushik

    2013-12-01

    Recent years have witnessed growing interest in the field of brain-inspired computing based on neural-network architectures. In order to translate the related algorithmic models into powerful, yet energy-efficient cognitive-computing hardware, computing devices beyond CMOS may need to be explored. The suitability of such devices to this field of computing would strongly depend upon how closely their physical characteristics match the essential computing primitives employed in such models. In this work, we discuss the rationale of applying emerging spin-torque devices for bio-inspired computing. Recent spin-torque experiments have shown the path to low-current, low-voltage, and high-speed magnetization switching in nano-scale magnetic devices. Such magneto-metallic, current-mode spin-torque switches can mimic the analog summing and "thresholding" operation of an artificial neuron with high energy efficiency. Comparison with a CMOS-based analog circuit model of a neuron shows that "spin-neurons" (spin-based circuit models of neurons) can achieve more than two orders of magnitude lower energy and beyond three orders of magnitude reduction in energy-delay product. The application of spin-neurons can therefore be an attractive option for neuromorphic computers of the future.

  10. Spin-neurons: A possible path to energy-efficient neuromorphic computers

    SciTech Connect

    Sharad, Mrigank; Fan, Deliang; Roy, Kaushik

    2013-12-21

    Recent years have witnessed growing interest in the field of brain-inspired computing based on neural-network architectures. In order to translate the related algorithmic models into powerful, yet energy-efficient cognitive-computing hardware, computing devices beyond CMOS may need to be explored. The suitability of such devices to this field of computing would strongly depend upon how closely their physical characteristics match the essential computing primitives employed in such models. In this work, we discuss the rationale of applying emerging spin-torque devices for bio-inspired computing. Recent spin-torque experiments have shown the path to low-current, low-voltage, and high-speed magnetization switching in nano-scale magnetic devices. Such magneto-metallic, current-mode spin-torque switches can mimic the analog summing and “thresholding” operation of an artificial neuron with high energy efficiency. Comparison with a CMOS-based analog circuit model of a neuron shows that “spin-neurons” (spin-based circuit models of neurons) can achieve more than two orders of magnitude lower energy and beyond three orders of magnitude reduction in energy-delay product. The application of spin-neurons can therefore be an attractive option for neuromorphic computers of the future.
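
    The analog summing and thresholding primitive that both records above describe corresponds to the classical artificial-neuron abstraction. A minimal Python sketch of that abstraction follows; it models the computing primitive only, not the spin-torque device physics, and all weights and currents are hypothetical:

        import numpy as np

        def threshold_neuron(currents, weights, threshold):
            """Sum weighted input currents and fire when the total crosses
            a threshold -- the summing/'thresholding' primitive that a
            magneto-metallic spin-torque switch is said to mimic."""
            total = float(np.dot(currents, weights))  # analog summation
            return 1 if total >= threshold else 0

        # Three hypothetical input currents, fixed weights, unit threshold.
        print(threshold_neuron(np.array([0.4, 0.9, 0.1]),
                               np.array([1.0, 0.5, -0.3]), 0.8))  # -> 1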

  11. NREL's Building-Integrated Supercomputer Provides Heating and Efficient Computing (Fact Sheet)

    SciTech Connect

    Not Available

    2014-09-01

    NREL's Energy Systems Integration Facility (ESIF) is meant to investigate new ways to integrate energy sources so they work together efficiently, and one of the key tools to that investigation, a new supercomputer, is itself a prime example of energy systems integration. NREL teamed with Hewlett-Packard (HP) and Intel to develop the innovative warm-water, liquid-cooled Peregrine supercomputer, which not only operates efficiently but also serves as the primary source of building heat for ESIF offices and laboratories. This innovative high-performance computer (HPC) can perform more than a quadrillion calculations per second as part of the world's most energy-efficient HPC data center.

  12. CAD system for the assistance of a comparative reading for lung cancer using retrospective helical CT images

    NASA Astrophysics Data System (ADS)

    Kubo, Mitsuru; Yamamoto, Takuya; Kawata, Yoshiki; Niki, Noboru; Eguchi, Kenji; Ohmatsu, Hironobu; Kakinuma, Ryutaro; Kaneko, Masahiro; Kusumoto, Masahiko; Moriyama, Noriyuki; Mori, Kiyoshi; Nishiyama, Hiroyuki

    2001-07-01

    The objective of our study is to develop a new computer-aided diagnosis (CAD) system to effectively support comparative reading of serial helical CT images for lung cancer screening without using film display. The position of pulmonary shadows can differ between serial helical CT images because the size and shape of the lung change with the amount of inspired air. We analyzed the motion of the pulmonary structure using 17 pairs of serial cases that differ in inspired air. The algorithm consists of an extraction process for regions of interest, such as the lung, heart, and blood vessel regions, using thresholding and the fuzzy c-means method, and a comparison process for each region in serial CT images using template matching. We validated the efficiency of this algorithm by applying it to images of 60 subjects. The algorithm could match the slice images correctly in most combinations, as judged from the physicians' point of view. The experimental results of the proposed algorithm indicate that our CAD system without film display is useful for increasing the efficiency of the mass screening process.
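
    A minimal sketch of the template-matching step in the comparison process, written in Python as normalized cross-correlation between a region of interest and a search image. The exact matching criterion the authors used is not specified in the abstract, so this is an assumption-laden stand-in:

        import numpy as np

        def match_region(template, search_image):
            """Exhaustively slide `template` over `search_image` and return
            the offset with the highest normalized cross-correlation."""
            th, tw = template.shape
            t = (template - template.mean()) / (template.std() + 1e-9)
            best_score, best_pos = -np.inf, (0, 0)
            rows, cols = search_image.shape
            for i in range(rows - th + 1):
                for j in range(cols - tw + 1):
                    win = search_image[i:i + th, j:j + tw]
                    w = (win - win.mean()) / (win.std() + 1e-9)
                    score = float((t * w).mean())
                    if score > best_score:
                        best_score, best_pos = score, (i, j)
            return best_pos, best_score

        img = np.random.default_rng(0).standard_normal((40, 40))
        print(match_region(img[10:18, 12:20], img))  # expect position (10, 12)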

  13. Framework for computationally efficient optimal irrigation scheduling using ant colony optimization

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A general optimization framework is introduced with the overall goal of reducing search space size and increasing the computational efficiency of evolutionary algorithm application for optimal irrigation scheduling. The framework achieves this goal by representing the problem in the form of a decisi...

  14. Using Neural Net Technology To Enhance the Efficiency of a Computer Adaptive Testing Application.

    ERIC Educational Resources Information Center

    Van Nelson, C.; Henriksen, Larry W.

    The potential for computer adaptive testing (CAT) has been well documented. In order to improve the efficiency of this process, it may be possible to utilize a neural network, or more specifically, a back propagation neural network. The paper asserts that in order to accomplish this end, it must be shown that grouping examinees by ability as…

  15. The Improvement of Efficiency in the Numerical Computation of Orbit Trajectories

    NASA Technical Reports Server (NTRS)

    Dyer, J.; Danchick, R.; Pierce, S.; Haney, R.

    1972-01-01

    An analysis, system design, programming, and evaluation of results are described for numerical computation of orbit trajectories. Evaluation of generalized methods, interaction of different formulations for satellite motion, transformation of equations of motion and integrator loads, and development of efficient integrators are also considered.

  16. Improving the Efficiency and Effectiveness of Grading through the Use of Computer-Assisted Grading Rubrics

    ERIC Educational Resources Information Center

    Anglin, Linda; Anglin, Kenneth; Schumann, Paul L.; Kaliski, John A.

    2008-01-01

    This study tests the use of computer-assisted grading rubrics compared to other grading methods with respect to the efficiency and effectiveness of different grading processes for subjective assignments. The test was performed on a large Introduction to Business course. The students in this course were randomly assigned to four treatment groups…

  17. Web-based CAD and CAM for optomechatronics

    NASA Astrophysics Data System (ADS)

    Han, Min; Zhou, Hai-Guang

    2001-10-01

    CAD & CAM technologies are being used in design and manufacturing processes and are receiving increasing attention from industry and education. We have been developing a new kind of software for web-based CAD & CAM courses, usable either in industry or in training and running in Internet Explorer. Firstly, we aim at CAD/CAM for optomechatronics. We have developed a kind of CAD/CAM that covers not only mechanics but also optics and electronics, which is new in China. Secondly, we have developed software for a web-based CAD & CAM course: we introduce the basics of CAD, the commands of CAD, programming, CAD/CAM for optomechatronics, and the joint application of CAD & CAM. We introduce the functions of MasterCAM and show the whole CAD/CAM/CNC process by examples. Following the steps shown on the web, the trainee cannot go wrong. CAD & CAM are widely used in many areas, and the development of web-based CAD & CAM courses is necessary for long-distance education and public education. In 1992, China declared that CAD technology, as an important part of electronic technology, is a key technique for improving the national economy and modernizing national defence. So far, education in CAD & CAM in China has mainly been confined to the manufacturing industry. But with the rapid development of new technology, especially in optics and electronics, CAD & CAM will receive more attention in those areas.

  18. Efficient shortest-path-tree computation in network routing based on pulse-coupled neural networks.

    PubMed

    Qu, Hong; Yi, Zhang; Yang, Simon X

    2013-06-01

    Shortest path tree (SPT) computation is a critical issue for routers using link-state routing protocols, such as the widely used Open Shortest Path First (OSPF) and Intermediate System to Intermediate System (IS-IS). Each router needs to recompute a new SPT rooted at itself whenever a change happens in the link state. Most commercial routers do this computation by deleting the current SPT and building a new one from scratch using static algorithms such as Dijkstra's algorithm. Such recomputation of an entire SPT is inefficient, as it may consume a considerable amount of CPU time and result in a time delay in the network. Some dynamic updating methods using the information in the updated SPT have been proposed in recent years. However, there are still many limitations in those dynamic algorithms. In this paper, a new modified model of pulse-coupled neural networks (M-PCNNs) is proposed for the SPT computation. It is rigorously proved that the proposed model is capable of solving some optimization problems, such as the SPT. A static algorithm is proposed based on the M-PCNNs to compute the SPT efficiently for large-scale problems. In addition, a dynamic algorithm that makes use of the structure of the previously computed SPT is proposed, which significantly improves the efficiency of the algorithm. Simulation results demonstrate the effective and efficient performance of the proposed approach. PMID:23144039
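
    For reference, the static baseline this work improves upon is Dijkstra's algorithm, which rebuilds the whole tree from scratch after every link-state change. A minimal Python sketch on a toy topology:

        import heapq

        def shortest_path_tree(graph, root):
            """Dijkstra's algorithm; `graph` maps node -> {neighbor: cost}.
            Parent pointers encode the resulting SPT."""
            dist, parent = {root: 0.0}, {root: None}
            heap = [(0.0, root)]
            while heap:
                d, u = heapq.heappop(heap)
                if d > dist.get(u, float("inf")):
                    continue  # stale heap entry
                for v, w in graph[u].items():
                    nd = d + w
                    if nd < dist.get(v, float("inf")):
                        dist[v], parent[v] = nd, u
                        heapq.heappush(heap, (nd, v))
            return dist, parent

        g = {"a": {"b": 1, "c": 4}, "b": {"a": 1, "c": 2}, "c": {"a": 4, "b": 2}}
        print(shortest_path_tree(g, "a"))  # recomputed in full after every change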

  19. A computationally efficient OMP-based compressed sensing reconstruction for dynamic MRI

    NASA Astrophysics Data System (ADS)

    Usman, M.; Prieto, C.; Odille, F.; Atkinson, D.; Schaeffter, T.; Batchelor, P. G.

    2011-04-01

    Compressed sensing (CS) methods in MRI are computationally intensive. Thus, designing novel CS algorithms that can perform faster reconstructions is crucial for everyday applications. We propose a computationally efficient orthogonal matching pursuit (OMP)-based reconstruction, specifically suited to cardiac MR data. According to the energy distribution of a y-f space obtained from a sliding window reconstruction, we label the y-f space as static or dynamic. For static y-f space images, a computationally efficient masked OMP reconstruction is performed, whereas for dynamic y-f space images, standard OMP reconstruction is used. The proposed method was tested on a dynamic numerical phantom and two cardiac MR datasets. Depending on the field of view composition of the imaging data, compared to the standard OMP method, reconstruction speedup factors ranging from 1.5 to 2.5 are achieved.
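
    For context, the baseline OMP loop greedily selects the dictionary atom most correlated with the current residual and then re-fits by least squares over the selected support. A minimal Python sketch of standard OMP (not the authors' masked variant):

        import numpy as np

        def omp(A, y, k):
            """Standard orthogonal matching pursuit for y ~ A @ x, x k-sparse."""
            residual, support = y.copy(), []
            x = np.zeros(A.shape[1])
            for _ in range(k):
                support.append(int(np.argmax(np.abs(A.T @ residual))))
                coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                residual = y - A[:, support] @ coef  # re-fit, update residual
            x[support] = coef
            return x

        rng = np.random.default_rng(0)
        A = rng.standard_normal((40, 100))
        A /= np.linalg.norm(A, axis=0)          # unit-norm atoms
        x_true = np.zeros(100)
        x_true[[5, 17, 60]] = [1.0, -0.7, 0.4]
        x_hat = omp(A, A @ x_true, 3)
        print(np.round(x_hat[[5, 17, 60]], 3))  # recovers the three coefficients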

  20. A new computational scheme for the Dirac-Hartree-Fock method employing an efficient integral algorithm

    NASA Astrophysics Data System (ADS)

    Yanai, Takeshi; Nakajima, Takahito; Ishikawa, Yasuyuki; Hirao, Kimihiko

    2001-04-01

    A highly efficient computational scheme for four-component relativistic ab initio molecular orbital (MO) calculations over generally contracted spherical harmonic Gaussian-type spinors (GTSs) is presented. Benchmark calculations for the ground states of the group IB hydrides, MH, and dimers, M2 (M=Cu, Ag, and Au), by the Dirac-Hartree-Fock (DHF) method were performed with a new four-component relativistic ab initio MO program package oriented toward contracted GTSs. The relativistic electron repulsion integrals (ERIs), the major bottleneck in routine DHF calculations, are calculated efficiently employing the fast ERI routine SPHERICA, exploiting the general contraction scheme, and the accompanying coordinate expansion method developed by Ishida. Illustrative calculations clearly show the efficiency of our computational scheme.

  1. Computer-Aided Design in Further Education.

    ERIC Educational Resources Information Center

    Ingham, Peter, Ed.

    This publication updates the 1982 occasional paper that was intended to foster staff awareness and assist colleges in Great Britain considering the use of computer-aided design (CAD) material in engineering courses. The paper begins by defining CAD and its place in the Integrated Business System with a brief discussion of the effect of CAD on the…

  2. Developing a computationally efficient dynamic multilevel hybrid optimization scheme using multifidelity model interactions.

    SciTech Connect

    Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Castro, Joseph Pete Jr.; Giunta, Anthony Andrew

    2006-01-01

    Many engineering application problems use optimization algorithms in conjunction with numerical simulators to search for solutions. The formulation of relevant objective functions and constraints dictates the possible optimization algorithms. Often, a gradient-based approach is not possible since objective functions and constraints can be nonlinear, nonconvex, non-differentiable, or even discontinuous, and the simulations involved can be computationally expensive. Moreover, computational efficiency and accuracy are desirable and also influence the choice of solution method. With the advent and increasing availability of massively parallel computers, computational speed has increased tremendously. Unfortunately, the numerical and model complexities of many problems still demand significant computational resources. Moreover, in optimization, these expenses can be a limiting factor since obtaining solutions often requires the completion of numerous computationally intensive simulations. Therefore, we propose a multifidelity optimization algorithm (MFO) designed to improve the computational efficiency of an optimization method for a wide range of applications. In developing the MFO algorithm, we take advantage of the interactions between multifidelity models to develop a dynamic, computation-time-saving optimization algorithm. First, a direct search method is applied to the high-fidelity model over a reduced design space. In conjunction with this search, a specialized oracle is employed to map the design space of this high-fidelity model to that of a computationally cheaper low-fidelity model using space mapping techniques. Then, in the low-fidelity space, an optimum is obtained using gradient or non-gradient based optimization, and it is mapped back to the high-fidelity space. In this paper, we describe the theory and implementation details of our MFO algorithm. We also demonstrate our MFO method on some example problems and on two applications: earth penetrators and

  3. A uniform algebraically-based approach to computational physics and efficient programming

    NASA Astrophysics Data System (ADS)

    Raynolds, James; Mullin, Lenore

    2007-03-01

    We present an approach to computational physics in which a common formalism is used both to express the physical problem and to describe the underlying details of how computation is realized on arbitrary multiprocessor/memory computer architectures. This formalism is the embodiment of a generalized algebra of multi-dimensional arrays (A Mathematics of Arrays), and an efficient computational implementation is obtained through the composition of array indices (the psi-calculus) of algorithms defined using matrices, tensors, and arrays in general. The power of this approach arises from the fact that multiple computational steps (e.g. Fourier Transform followed by convolution, etc.) can be algebraically composed and reduced to a simplified expression (i.e. Operational Normal Form) that, when directly translated into computer code, can be mathematically proven to be the most efficient implementation with the least number of temporary variables, etc. This approach will be illustrated in the context of a cache-optimized FFT that outperforms or is competitive with established library routines: ESSL, FFTW, IMSL, NAG.

  4. Getting into CAD at the Savannah River Plant

    SciTech Connect

    Scoggins, W.R.

    1984-01-01

    In 1978, the Savannah River Plant (SRP) Project Department was producing approximately 1100 new drawings and 3000 revisions per year, with a force of 30 draftsmen. Design services for the Plant were increasing due to changing programs, obsolescent equipment replacements, and added security requirements. This increasing workload greatly increased the engineering drafting backlog. At the same time, many draftsmen were approaching retirement age and were to be replaced with unskilled draftsman trainees. A proposal was presented to management to acquire a Computer Aided Drafting (CAD) system to produce instrument and electrical drawings, which comprised 30% of the workload.

  5. CAD/CAM and scientific data management at Dassault

    NASA Technical Reports Server (NTRS)

    Bohn, P.

    1984-01-01

    The history of CAD/CAM and scientific data management at Dassault is presented. Emphasis is placed on the targets of the now commercially available software CATIA. The links with scientific computations such as aerodynamics and structural analysis are presented. Comments are made on the principles followed within the company. The consequences of the approximate nature of scientific data are examined. The main consequence of the new history function is its protection against copying or alteration. Future plans at Dassault for scientific data appear to be in the opposite direction compared to some general tendencies.

  6. A computationally efficient model for turbulent droplet dispersion in spray combustion

    NASA Technical Reports Server (NTRS)

    Litchford, Ron J.; Jeng, San-Mou

    1990-01-01

    A novel model for turbulent droplet dispersion is formulated having significantly improved computational efficiency in comparison to the conventional point-source stochastic sampling methodology. In the proposed model, a computational parcel representing a group of physical particles is considered to have a normal (Gaussian) probability density function (PDF) in three-dimensional space. The mean of each PDF is determined by Lagrangian tracking of each computational parcel, either deterministically or stochastically. The variance is represented by a turbulence-induced mean squared dispersion which is based on statistical inferences from the linearized direct modeling formulation for particle/eddy interactions. Convolution of the computational parcel PDFs produces a single PDF for the physical particle distribution profile. The validity of the new model is established by comparison with the conventional stochastic sampling method, wherein each parcel is represented by a delta function distribution, for non-evaporating particles injected into simple turbulent air flows.
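
    The central idea, one Gaussian per computational parcel instead of many stochastic point samples, reduces in one dimension to the following Python sketch; the means, variances, and weights below are hypothetical:

        import numpy as np

        def particle_density(x, means, variances, weights):
            """Superpose one Gaussian PDF per computational parcel. The mean
            comes from Lagrangian tracking; the variance stands in for the
            turbulence-induced mean squared dispersion."""
            rho = np.zeros_like(x)
            for mu, var, w in zip(means, variances, weights):
                rho += w * np.exp(-(x - mu) ** 2 / (2.0 * var)) \
                       / np.sqrt(2.0 * np.pi * var)
            return rho

        x = np.linspace(0.0, 10.0, 201)
        rho = particle_density(x, [3.0, 5.5], [0.4, 1.2], [100.0, 250.0])
        print(float(rho.max()))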

  7. A computationally efficient approach for hidden-Markov model-augmented fingerprint-based positioning

    NASA Astrophysics Data System (ADS)

    Roth, John; Tummala, Murali; McEachen, John

    2016-09-01

    This paper presents a computationally efficient approach for mobile subscriber position estimation in wireless networks. A method of data scaling assisted by timing adjust is introduced in fingerprint-based location estimation under a framework which allows for minimising computational cost. The proposed method maintains a comparable level of accuracy to the traditional case where no data scaling is used and is evaluated in a simulated environment under varying channel conditions. The proposed scheme is studied when it is augmented by a hidden Markov model to match the internal parameters to the prevailing channel conditions, thus minimising computational cost while maximising accuracy. Furthermore, the timing adjust quantity, available in modern wireless signalling messages, is shown to be able to further reduce computational cost and increase accuracy when available. The results may be seen as a significant step towards integrating advanced position-based modelling with power-sensitive mobile devices.

  8. Automated knowledge base development from CAD/CAE databases

    NASA Technical Reports Server (NTRS)

    Wright, R. Glenn; Blanchard, Mary

    1988-01-01

    Knowledge base development requires a substantial investment in time, money, and resources in order to capture the knowledge and information necessary for anything other than trivial applications. This paper addresses a means to integrate the design and knowledge base development process through automated knowledge base development from CAD/CAE databases and files. Benefits of this approach include a more efficient means of knowledge engineering, resulting in the timely creation of large knowledge-based systems that are inherently free of error.

  9. Computationally efficient analysis of extraordinary optical transmission through infinite and truncated subwavelength hole arrays

    NASA Astrophysics Data System (ADS)

    Camacho, Miguel; Boix, Rafael R.; Medina, Francisco

    2016-06-01

    The authors present a computationally efficient technique for the analysis of extraordinary transmission through both infinite and truncated periodic arrays of slots in perfect conductor screens of negligible thickness. An integral equation is obtained for the tangential electric field in the slots both in the infinite case and in the truncated case. The unknown functions are expressed as linear combinations of known basis functions, and the unknown weight coefficients are determined by means of Galerkin's method. The coefficients of Galerkin's matrix are obtained in the spatial domain in terms of double finite integrals containing the Green's functions (which, in the infinite case, are efficiently computed by means of Ewald's method) times cross-correlations between both the basis functions and their divergences. The computation in the spatial domain is an efficient alternative to the direct computation in the spectral domain since this latter approach involves the determination of either slowly convergent double infinite summations (infinite case) or slowly convergent double infinite integrals (truncated case). The results obtained are validated by means of commercial software, and it is found that the integral equation technique presented in this paper is at least two orders of magnitude faster than commercial software for a similar accuracy. It is also shown that the phenomena related to periodicity such as extraordinary transmission and Wood's anomaly start to appear in the truncated case for arrays with more than 100 (10 × 10) slots.

  10. Computer-aided design for metabolic engineering.

    PubMed

    Fernández-Castané, Alfred; Fehér, Tamás; Carbonell, Pablo; Pauthenier, Cyrille; Faulon, Jean-Loup

    2014-12-20

    The development and application of biotechnology-based strategies has had a great socio-economic impact and is likely to play a crucial role in the foundation of more sustainable and efficient industrial processes. Within biotechnology, metabolic engineering aims at the directed improvement of cellular properties, often with the goal of synthesizing a target chemical compound. The use of computer-aided design (CAD) tools, along with continuously emerging advanced genetic engineering techniques, has allowed metabolic engineering to broaden and streamline the process of heterologous compound production. In this work, we review the CAD tools available for metabolic engineering with an emphasis on retrosynthesis methodologies. Recent advances in genetic engineering strategies for pathway implementation and optimization are also reviewed, as well as a range of bioanalytical tools to validate in silico predictions. A case study applying retrosynthesis is presented as an experimental verification of the output from RetroPath, the first complete automated computational pipeline applicable to metabolic engineering. Applying this CAD pipeline, together with genetic reassembly and optimization of culture conditions, led to improved production of the plant flavonoid pinocembrin. Coupling CAD tools with advanced genetic engineering strategies and bioprocess optimization is crucial for enhanced product yields and will be of great value for the development of non-natural products through sustainable biotechnological processes. PMID:24704607

  11. An efficient sparse matrix multiplication scheme for the CYBER 205 computer

    NASA Technical Reports Server (NTRS)

    Lambiotte, Jules J., Jr.

    1988-01-01

    This paper describes the development of an efficient algorithm for computing the product of a matrix and vector on a CYBER 205 vector computer. The desire to provide software which allows the user to choose between the often conflicting goals of minimizing central processing unit (CPU) time or storage requirements has led to a diagonal-based algorithm in which one of four types of storage is selected for each diagonal. The candidate storage types employed were chosen to be efficient on the CYBER 205 for diagonals whose nonzero structure is dense, moderately sparse, very sparse and short, or very sparse and long; however, for many densities, no single storage type is most efficient with respect to both resources, and a trade-off must be made. For each diagonal, an initialization subroutine estimates the CPU time and storage required for each storage type based on results from previously performed numerical experimentation. These requirements are adjusted by weights provided by the user which reflect the relative importance the user places on the two resources. The adjusted resource requirements are then compared to select the most efficient storage and computational scheme.
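
    The same diagonal-wise organization survives today as the DIA sparse format. A small Python/SciPy sketch of a matrix-vector product over stored diagonals follows; note that SciPy applies one storage layout to all diagonals, whereas the per-diagonal storage-type selection described above was specific to the CYBER 205 work:

        import numpy as np
        from scipy.sparse import dia_matrix

        # Store a tridiagonal matrix by its diagonals: offsets 0, +1, -1.
        n = 6
        data = np.array([
            2.0 * np.ones(n),    # main diagonal
            -1.0 * np.ones(n),   # superdiagonal
            -1.0 * np.ones(n),   # subdiagonal
        ])
        A = dia_matrix((data, [0, 1, -1]), shape=(n, n))
        x = np.arange(1.0, n + 1.0)
        print(A @ x)             # product accumulated diagonal by diagonal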

  12. Efficient scatter model for simulation of ultrasound images from computed tomography data

    NASA Astrophysics Data System (ADS)

    D'Amato, J. P.; Lo Vercio, L.; Rubi, P.; Fernandez Vera, E.; Barbuzza, R.; Del Fresno, M.; Larrabide, I.

    2015-12-01

    Background and motivation: Real-time ultrasound simulation refers to the process of computationally creating fully synthetic ultrasound images instantly. Due to the high value of specialized low-cost training for healthcare professionals, there is a growing interest in the use of this technology and in the development of high-fidelity systems that simulate the acquisition of echographic images. The objective is to create an efficient and reproducible simulator that can run on either notebooks or desktops using low-cost devices. Materials and methods: We present an interactive ultrasound simulator based on CT data. This simulator is based on ray-casting and provides real-time interaction capabilities. The simulation of scattering that is coherent with the transducer position in real time is also introduced. Such noise is produced using a simplified model of multiplicative noise and convolution with point spread functions (PSFs) tailored for this purpose. Results: The computational efficiency of scattering-map generation was revised, with improved performance. This allowed a more efficient simulation of coherent scattering in the synthetic echographic images while providing highly realistic results. We describe some quality and performance metrics to validate these results, where a performance of up to 55 fps was achieved. Conclusion: The proposed technique for real-time scattering modeling provides realistic yet computationally efficient scatter distributions. The error between the original image and the simulated scattering image was compared for the proposed method and the state of the art, showing negligible differences in its distribution.
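
    The scatter model described, multiplicative noise followed by convolution with a point spread function, can be sketched in a few lines of Python; the echo map and the separable blur kernel below are placeholders, not the transducer-specific PSFs of the simulator:

        import numpy as np
        from scipy.signal import convolve2d

        def add_scatter(echo, psf, rng, noise_std=0.3):
            """Simplified scattering: multiplicative speckle noise on the
            echo map, then convolution with a point spread function."""
            speckle = 1.0 + noise_std * rng.standard_normal(echo.shape)
            return convolve2d(echo * speckle, psf, mode="same", boundary="symm")

        rng = np.random.default_rng(1)
        echo = np.ones((64, 64))                  # stand-in for a CT-derived map
        psf = np.outer([0.25, 0.5, 0.25], [0.25, 0.5, 0.25])
        print(add_scatter(echo, psf, rng).shape)  # (64, 64)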

  13. Reducing Vehicle Weight and Improving U.S. Energy Efficiency Using Integrated Computational Materials Engineering

    NASA Astrophysics Data System (ADS)

    Joost, William J.

    2012-09-01

    Transportation accounts for approximately 28% of U.S. energy consumption with the majority of transportation energy derived from petroleum sources. Many technologies such as vehicle electrification, advanced combustion, and advanced fuels can reduce transportation energy consumption by improving the efficiency of cars and trucks. Lightweight materials are another important technology that can improve passenger vehicle fuel efficiency by 6-8% for each 10% reduction in weight while also making electric and alternative vehicles more competitive. Despite the opportunities for improved efficiency, widespread deployment of lightweight materials for automotive structures is hampered by technology gaps most often associated with performance, manufacturability, and cost. In this report, the impact of reduced vehicle weight on energy efficiency is discussed with a particular emphasis on quantitative relationships determined by several researchers. The most promising lightweight materials systems are described along with a brief review of the most significant technical barriers to their implementation. For each material system, the development of accurate material models is critical to support simulation-intensive processing and structural design for vehicles; improved models also contribute to an integrated computational materials engineering (ICME) approach for addressing technical barriers and accelerating deployment. The value of computational techniques is described by considering recent ICME and computational materials science success stories with an emphasis on applying problem-specific methods.
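
    As a worked example of the quoted rule of thumb (6-8% fuel-efficiency gain per 10% weight reduction), the Python snippet below takes the 7% midpoint and, as an additional assumption of this sketch, compounds the gain across successive 10% reductions:

        def efficiency_gain(weight_reduction, gain_per_10pct=0.07):
            """Compound the 6-8% per 10% rule (7% midpoint assumed here)."""
            steps = weight_reduction / 0.10
            return (1.0 + gain_per_10pct) ** steps - 1.0

        # A vehicle made 20% lighter:
        print(f"{efficiency_gain(0.20):.1%}")  # about 14.5%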

  14. Can computational efficiency alone drive the evolution of modularity in neural networks?

    PubMed Central

    Tosh, Colin R.

    2016-01-01

    Some biologists have abandoned the idea that computational efficiency in processing multipart tasks or input sets alone drives the evolution of modularity in biological networks. A recent study confirmed that small modular (neural) networks are relatively computationally inefficient but that large modular networks are slightly more efficient than non-modular ones. The present study determines whether these efficiency advantages with network size can drive the evolution of modularity in networks whose connective architecture can evolve. The answer is no, but the reason why is interesting. All simulations (run in a wide variety of parameter states) involving gradualistic connective evolution end in non-modular local attractors. Thus, while a high-performance modular attractor exists, such regions cannot be reached by gradualistic evolution. Non-gradualistic evolutionary simulations in which multi-modularity is obtained through duplication of existing architecture appear viable. Fundamentally, this study indicates that computational efficiency alone does not drive the evolution of modularity, even in large biological networks, but it may still be a viable mechanism when networks evolve by non-gradualistic means. PMID:27573614

  15. Performance evaluation for volumetric segmentation of multiple sclerosis lesions using MATLAB and computing engine in the graphical processing unit (GPU)

    NASA Astrophysics Data System (ADS)

    Le, Anh H.; Park, Young W.; Ma, Kevin; Jacobs, Colin; Liu, Brent J.

    2010-03-01

    Multiple Sclerosis (MS) is a progressive neurological disease affecting myelin pathways in the brain. Multiple lesions in the white matter can cause paralysis and severe motor disabilities in the affected patient. To address the inconsistency and user-dependency of manual lesion measurement in MRI, we have proposed a 3-D automated lesion quantification algorithm to enable objective and efficient lesion volume tracking. The computer-aided detection (CAD) of MS, written in MATLAB, utilizes the K-Nearest Neighbors (KNN) method to compute the probability of lesions on a per-voxel basis. Despite the highly optimized image-processing algorithms used in CAD development, MS CAD integration and evaluation in the clinical workflow is technically challenging due to the high computation rates and memory bandwidth required by the recursive nature of the algorithm. In this paper, we present the development and evaluation of a computing engine in the graphical processing unit (GPU) with MATLAB for segmentation of MS lesions. The paper investigates the utilization of a high-end GPU for parallel computation of KNN in the MATLAB environment to improve algorithm performance. The integration is accomplished using NVIDIA's CUDA development toolkit for MATLAB. The results of this study will validate the practicality and effectiveness of the prototype MS CAD in a clinical setting. The GPU method may allow MS CAD to be rapidly integrated into an electronic patient record or any disease-centric health care system.
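
    The per-voxel KNN classification at the core of this CAD scheme can be sketched with scikit-learn in Python; the four-component feature vectors and labels below are synthetic stand-ins for the MRI-derived features the authors used:

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(0)
        X_train = rng.standard_normal((500, 4))      # per-voxel feature vectors
        y_train = (X_train[:, 0] > 1.0).astype(int)  # toy lesion labels

        knn = KNeighborsClassifier(n_neighbors=15)
        knn.fit(X_train, y_train)

        X_voxels = rng.standard_normal((10, 4))          # voxels to score
        lesion_prob = knn.predict_proba(X_voxels)[:, 1]  # P(lesion) per voxel
        print(np.round(lesion_prob, 2))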

  16. Unified commutation-pruning technique for efficient computation of composite DFTs

    NASA Astrophysics Data System (ADS)

    Castro-Palazuelos, David E.; Medina-Melendrez, Modesto Gpe.; Torres-Roman, Deni L.; Shkvarko, Yuriy V.

    2015-12-01

    An efficient computation of a composite-length discrete Fourier transform (DFT), as well as a fast Fourier transform (FFT) of both time and space data sequences in uncertain (non-sparse or sparse) computational scenarios, requires specific processing algorithms. Traditional algorithms typically employ some pruning methods without any commutations, which prevents them from attaining the potential computational efficiency. In this paper, we propose an alternative unified approach with automatic commutations between three computational modalities aimed at efficient computations of the pruned DFTs adapted for variable composite lengths of the non-sparse input-output data. The first modality is an implementation of the direct computation of a composite-length DFT, the second employs the second-order recursive filtering method, and the third performs the new pruned decomposed transform. The pruned decomposed transform algorithm performs the decimation in time or space (DIT) in the data acquisition domain and, then, decimation in frequency (DIF). The unified combination of these three algorithms is addressed as the DFTCOMM technique. Based on the treatment of the combinational-type hypothesis-testing optimization problem of preferable allocations between all feasible commuting-pruning modalities, we have found the global optimal solution to the pruning problem, one that always requires fewer or, at most, the same number of arithmetic operations than any other feasible modality. The DFTCOMM method thereby outperforms the existing competing pruning techniques in the attainable savings in the number of required arithmetic operations. Finally, we provide a comparison of DFTCOMM with the recently developed sparse fast Fourier transform (SFFT) algorithmic family. We feature that, in the sensing scenarios with

  17. Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.; Dyson, Rodger W.

    1999-01-01

    The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time-dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems are being pursued to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open-domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that

  18. Efficient O(N) recursive computation of the operational space inertial matrix

    SciTech Connect

    Lilly, K.W.; Orin, D.E.

    1993-09-01

    The operational space inertia matrix Λ reflects the dynamic properties of a robot manipulator to its tip. In the control domain, it may be used to decouple force and/or motion control about the manipulator workspace axes. The matrix Λ also plays an important role in the development of efficient algorithms for the dynamic simulation of closed-chain robotic mechanisms, including simple closed-chain mechanisms such as multiple manipulator systems and walking machines. The traditional approach used to compute Λ has a computational complexity of O(N^3) for an N degree-of-freedom manipulator. This paper presents the development of a recursive algorithm for computing the operational space inertia matrix (OSIM) that reduces the computational complexity to O(N). This algorithm, the inertia propagation method, is based on a single recursion that begins at the base of the manipulator and progresses out to the last link. Also applicable to redundant systems and mechanisms with multiple-degree-of-freedom joints, the inertia propagation method is the most efficient method known for computing Λ for N ≥ 6. The numerical accuracy of the algorithm is discussed for a PUMA 560 robot with a fixed base.

  19. An efficient parallel implementation of explicit multirate Runge–Kutta schemes for discontinuous Galerkin computations

    SciTech Connect

    Seny, Bruno; Lambrechts, Jonathan; Toulorge, Thomas; Legat, Vincent; Remacle, Jean-François

    2014-01-01

    Although explicit time integration schemes require small computational efforts per time step, their efficiency is severely restricted by their stability limits. Indeed, the multi-scale nature of some physical processes combined with highly unstructured meshes can lead some elements to impose a severely small stable time step for a global problem. Multirate methods offer a way to increase the global efficiency by gathering grid cells in appropriate groups under local stability conditions. These methods are well suited to the discontinuous Galerkin framework. The parallelization of the multirate strategy is challenging because grid cells have different workloads. The computational cost is different for each sub-time step depending on the elements involved and a classical partitioning strategy is not adequate any more. In this paper, we propose a solution that makes use of multi-constraint mesh partitioning. It tends to minimize the inter-processor communications, while ensuring that the workload is almost equally shared by every computer core at every stage of the algorithm. Particular attention is given to the simplicity of the parallel multirate algorithm while minimizing computational and communication overheads. Our implementation makes use of the MeTiS library for mesh partitioning and the Message Passing Interface for inter-processor communication. Performance analyses for two and three-dimensional practical applications confirm that multirate methods preserve important computational advantages of explicit methods up to a significant number of processors.

  20. Efficient marginalization to compute protein posterior probabilities from shotgun mass spectrometry data

    PubMed Central

    Serang, Oliver; MacCoss, Michael J.; Noble, William Stafford

    2010-01-01

    The problem of identifying proteins from a shotgun proteomics experiment has not been definitively solved. Identifying the proteins in a sample requires ranking them, ideally with interpretable scores. In particular, “degenerate” peptides, which map to multiple proteins, have made such a ranking difficult to compute. The problem of computing posterior probabilities for the proteins, which can be interpreted as confidence in a protein’s presence, has been especially daunting. Previous approaches have either ignored the peptide degeneracy problem completely, addressed it by computing a heuristic set of proteins or heuristic posterior probabilities, or by estimating the posterior probabilities with sampling methods. We present a probabilistic model for protein identification in tandem mass spectrometry that recognizes peptide degeneracy. We then introduce graph-transforming algorithms that facilitate efficient computation of protein probabilities, even for large data sets. We evaluate our identification procedure on five different well-characterized data sets and demonstrate our ability to efficiently compute high-quality protein posteriors. PMID:20712337

  1. Efficient computation of PDF-based characteristics from diffusion MR signal.

    PubMed

    Assemlal, Haz-Edine; Tschumperlé, David; Brun, Luc

    2008-01-01

    We present a general method for the computation of PDF-based characteristics of the tissue micro-architecture in MR imaging. The approach relies on the approximation of the MR signal by a series expansion based on Spherical Harmonics and Laguerre-Gaussian functions, followed by a simple projection step that is efficiently done in a finite dimensional space. The resulting algorithm is generic, flexible and is able to compute a large set of useful characteristics of the local tissues structure. We illustrate the effectiveness of this approach by showing results on synthetic and real MR datasets acquired in a clinical time-frame. PMID:18982591

  2. Efficient implementation of block-matching motion estimation algorithms for video compression on custom computers

    NASA Astrophysics Data System (ADS)

    Chung, Vera Y. Y.; Bergmann, Neil W.

    1998-12-01

    This paper presents how to implement the block-matching motion estimation algorithm for video compression efficiently on a Field Programmable Gate Array (FPGA)-based Custom Computing Machine (CCM). The SPACE2 Custom Computer board consists of up to eight Xilinx XC6216 fine-grain, sea-of-gates FPGA chips. The results show that two Xilinx XC6216 FPGAs can perform at 960 MOPS; hence, a real-time full-search motion estimation encoder can easily be implemented on our SPACE2 CCM system.
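
    The kernel being accelerated is full-search block matching with a sum-of-absolute-differences (SAD) cost. A minimal software reference in Python of the kind of loop nest such an FPGA design parallelizes:

        import numpy as np

        def full_search(block, ref_frame, top, left, radius=4):
            """Find the motion vector minimizing SAD within +/- radius."""
            bh, bw = block.shape
            best_sad, best_mv = np.inf, (0, 0)
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    y, x = top + dy, left + dx
                    if (y < 0 or x < 0 or y + bh > ref_frame.shape[0]
                            or x + bw > ref_frame.shape[1]):
                        continue  # candidate block falls outside the frame
                    sad = np.abs(block - ref_frame[y:y + bh, x:x + bw]).sum()
                    if sad < best_sad:
                        best_sad, best_mv = sad, (dy, dx)
            return best_mv, best_sad

        rng = np.random.default_rng(2)
        ref = rng.integers(0, 256, (64, 64)).astype(np.int32)
        print(full_search(ref[20:36, 24:40], ref, 20, 24))  # ((0, 0), 0)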

  3. Efficient path-based computations on pedigree graphs with compact encodings

    PubMed Central

    2012-01-01

    A pedigree is a diagram of family relationships, and it is often used to determine the mode of inheritance (dominant, recessive, etc.) of genetic diseases. Along with rapidly growing knowledge of genetics and accumulation of genealogy information, pedigree data is becoming increasingly important. In large pedigree graphs, path-based methods for efficiently computing genealogical measurements, such as inbreeding and kinship coefficients of individuals, depend on efficient identification and processing of paths. In this paper, we propose a new compact path encoding scheme on large pedigrees, accompanied by an efficient algorithm for identifying paths. We demonstrate the utilization of our proposed method by applying it to the inbreeding coefficient computation. We present time and space complexity analysis, and also manifest the efficiency of our method for evaluating inbreeding coefficients as compared to previous methods by experimental results using pedigree graphs with real and synthetic data. Both theoretical and experimental results demonstrate that our method is more scalable and efficient than previous methods in terms of time and space requirements. PMID:22536898

  4. cadDX Operon of Streptococcus salivarius 57.I

    PubMed Central

    Chen, Yi-Ywan M.; Feng, C. W.; Chiu, C. F.; Burne, Robert A.

    2008-01-01

    A CadDX system that confers resistance to Cd2+ and Zn2+ was identified in Streptococcus salivarius 57.I. Unlike with other CadDX systems, the expression of the cad promoter was negatively regulated by CadX, and the repression was inducible by Cd2+ and Zn2+, similar to what was found for CadCA systems. The lower G+C content of the S. salivarius cadDX genes suggests acquisition by horizontal gene transfer. PMID:18165364

  5. cadDX operon of Streptococcus salivarius 57.I.

    PubMed

    Chen, Yi-Ywan M; Feng, C W; Chiu, C F; Burne, Robert A

    2008-03-01

    A CadDX system that confers resistance to Cd(2+) and Zn(2+) was identified in Streptococcus salivarius 57.I. Unlike with other CadDX systems, the expression of the cad promoter was negatively regulated by CadX, and the repression was inducible by Cd(2+) and Zn(2+), similar to what was found for CadCA systems. The lower G+C content of the S. salivarius cadDX genes suggests acquisition by horizontal gene transfer. PMID:18165364

  6. CAD system for automatic analysis of CT perfusion maps

    NASA Astrophysics Data System (ADS)

    Hachaj, T.; Ogiela, M. R.

    2011-03-01

    In this article, the authors present novel algorithms developed for a computer-assisted diagnosis (CAD) system for the analysis of dynamic brain perfusion computed tomography (CT) maps: cerebral blood flow (CBF) and cerebral blood volume (CBV). These methods perform both quantitative analysis (detection, measurement, and description, with a brain anatomy atlas (AA), of potential asymmetries/lesions) and qualitative analysis (semantic interpretation of visualized symptoms). The semantic interpretation of visualized symptoms (deciding whether a lesion is ischemic or hemorrhagic and whether the brain tissue is at risk of infarction) is done by so-called cognitive inference processes allowing for reasoning on the character of pathological regions based on specialist image knowledge. The whole system is implemented on the .NET platform (C# programming language) and can be used on any standard PC with the .NET Framework installed.

  7. Next Generation CAD/CAM/CAE Systems

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Compiler); Malone, John B. (Compiler)

    1997-01-01

    This document contains presentations from the joint UVA/NASA Workshop on Next Generation CAD/CAM/CAE Systems held at NASA Langley Research Center in Hampton, Virginia on March 18-19, 1997. The presentations focused on current capabilities and future directions of CAD/CAM/CAE systems, aerospace industry projects, and university activities related to simulation-based design. Workshop attendees represented NASA, commercial software developers, the aerospace industry, government labs, and academia. The workshop objectives were to assess the potential of emerging CAD/CAM/CAE technology for use in intelligent simulation-based design and to provide guidelines for focused future research leading to effective use of CAE systems for simulating the entire life cycle of aerospace systems.

  8. CAD/CAM integration - The imperatives

    NASA Astrophysics Data System (ADS)

    Jackson, R. H.

    1985-01-01

    The assimilation of CAD/CAM capabilities into the organizational/production aspects of a manufacturing company is discussed. One company has begun the process by automating the engineering department, with all final design products subject to one vice-president's approval before being sent to the production line, which will eventually become an integrated part of the automated process. Another firm has established a CAE team within the engineering department to refine preliminary work and recommendations from other sources and fit them into manufacturing specifications. It is recommended that all managers and users be familiarized with CAD/CAM systems and that defect tendencies induced in people working under production pressure be anticipated and eliminated. It is emphasized that incorporating CAD/CAM into a company is as much a matter of the people involved as of the technical considerations.

  9. Formal Management of CAD/CAM Processes

    NASA Astrophysics Data System (ADS)

    Kohlhase, Michael; Lemburg, Johannes; Schröder, Lutz; Schulz, Ewaryst

    Systematic engineering design processes have many aspects in common with software engineering, with CAD/CAM objects replacing program code as the implementation stage of the development. They are, however, currently considerably less formal. We propose to draw on the mentioned similarities and transfer methods from software engineering to engineering design in order to enhance in particular the reliability and reusability of engineering processes. We lay out a vision of a document-oriented design process that integrates CAD/CAM documents with requirement specifications; as a first step towards supporting such a process, we present a tool that interfaces a CAD system with program verification workflows, thus allowing for completely formalised development strands within a semi-formal methodology.

  10. Reentry-Vehicle Shape Optimization Using a Cartesian Adjoint Method and CAD Geometry

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.

    2006-01-01

    Adjoint solutions of the governing flow equations are becoming increasingly important for the development of efficient analysis and optimization algorithms. A well-known use of the adjoint method is gradient-based shape optimization. Given an objective function that defines some measure of performance, such as the lift and drag functionals, its gradient is computed at a cost that is essentially independent of the number of design variables (e.g., geometric parameters that control the shape). Classic aerodynamic applications of gradient-based optimization include the design of cruise configurations for transonic and supersonic flow, as well as the design of high-lift systems. Cartesian mesh methods are perhaps the most promising approach for addressing the issues of flow solution automation for aerodynamic design problems. In these methods, the discretization of the wetted surface is decoupled from that of the volume mesh. This not only enables fast and robust mesh generation for geometry of arbitrary complexity, but also facilitates access to geometry modeling and manipulation using parametric computer-aided design (CAD). In previous work on Cartesian adjoint solvers, Melvin et al. developed an adjoint formulation for the TRANAIR code, which is based on the full-potential equation with viscous corrections. More recently, Dadone and Grossman presented an adjoint formulation for the two-dimensional Euler equations using a ghost-cell method to enforce the wall boundary conditions. In Refs. 18 and 19, we presented an accurate and efficient algorithm for the solution of the adjoint Euler equations discretized on Cartesian meshes with embedded, cut-cell boundaries. Novel aspects of the algorithm were the computation of surface shape sensitivities for triangulations based on parametric-CAD models and the linearization of the coupling between the surface triangulation and the cut-cells. The accuracy of the gradient computation was verified using several three-dimensional test cases, which included design
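
    For readers unfamiliar with why the adjoint gradient cost is independent of the number of design variables, the standard derivation (textbook form, not specific to this paper) can be summarized as:

      \min_{x} \; J(w, x) \quad \text{subject to} \quad R(w, x) = 0,
      \qquad
      \frac{dJ}{dx} = \frac{\partial J}{\partial x} + \psi^{T}\,\frac{\partial R}{\partial x},
      \qquad
      \left(\frac{\partial R}{\partial w}\right)^{T}\psi = -\left(\frac{\partial J}{\partial w}\right)^{T}.

    One flow solve for w and one adjoint solve for psi yield the full gradient, no matter how many components the design vector x has.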

  11. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    DOE PAGESBeta

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-08-19

    Inverse modeling seeks model parameters given a set of observations. For practical problems, however, the number of measurements is often large and the model parameters are numerous, so conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate- to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Compared with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Our new inverse modeling method is thus a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.
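
    The subspace-recycling idea can be illustrated in a few dozen lines. The sketch below (dense NumPy, written for clarity and assuming no breakdown in the recursion; the paper's implementation is in Julia inside MADS and targets large sparse systems) builds one Golub-Kahan basis for the Jacobian and reuses it to solve the damped Levenberg-Marquardt step for several damping parameters.

      import numpy as np

      def golub_kahan(J, r, k):
          """Partial Golub-Kahan bidiagonalization (no reorthogonalization):
          J V_k = U_{k+1} B_k, with B_k lower bidiagonal of shape (k+1, k)."""
          m, n = J.shape
          U = np.zeros((m, k + 1)); V = np.zeros((n, k)); B = np.zeros((k + 1, k))
          beta = np.linalg.norm(r)
          U[:, 0] = r / beta
          for j in range(k):
              v = J.T @ U[:, j] - (B[j, j - 1] * V[:, j - 1] if j > 0 else 0.0)
              B[j, j] = np.linalg.norm(v)
              V[:, j] = v / B[j, j]
              u = J @ V[:, j] - B[j, j] * U[:, j]
              B[j + 1, j] = np.linalg.norm(u)
              U[:, j + 1] = u / B[j + 1, j]
          return B, V, beta

      def lm_steps_recycled(J, r, dampings, k=30):
          """Levenberg-Marquardt steps dx solving (J^T J + lam I) dx = J^T r
          for several damping values lam, all from one recycled Krylov basis."""
          B, V, beta = golub_kahan(J, r, k)
          e1 = np.zeros(k + 1); e1[0] = beta
          steps = []
          for lam in dampings:
              # Projected damped least squares: min ||B y - beta e1||^2 + lam ||y||^2
              A = np.vstack([B, np.sqrt(lam) * np.eye(k)])
              b = np.concatenate([e1, np.zeros(k)])
              y, *_ = np.linalg.lstsq(A, b, rcond=None)
              steps.append(V @ y)
          return steps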

  12. Computer Aided Drafting Workshop. Workshop Booklet.

    ERIC Educational Resources Information Center

    Goetsch, David L.

    This mini-course and article are presentations from a workshop on computer-aided drafting. The purpose of the mini-course is to assist drafting instructors in updating their occupational knowledge to include computer-aided drafting (CAD). Topics covered in the course include general computer information, the computer in drafting, CAD terminology,…

  13. An efficient method for computing the QTAIM topology of a scalar field: the electron density case.

    PubMed

    Rodríguez, Juan I

    2013-03-30

    An efficient method for computing the quantum theory of atoms in molecules (QTAIM) topology of the electron density (or another scalar field) is presented. A modified Newton-Raphson algorithm was implemented for finding the critical points (CPs) of the electron density. Bond paths were constructed with the second-order Runge-Kutta method. Vectorization makes the present algorithm scale linearly with the system size. The parallel efficiency decreases with the number of processors (from 70% to 50%), with an average of 54%. The accuracy and performance of the method are demonstrated by computing the QTAIM topology of the electron density of a series of representative molecules. Our results show that the algorithm may allow QTAIM analysis to be applied to large systems (carbon nanotubes, polymers, fullerenes) considered unreachable until now. PMID:23175458
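
    The core of such a CP search is a plain Newton iteration on the gradient field. The sketch below is a generic version (not the paper's modified algorithm); grad and hess are assumed callables returning the density gradient and Hessian at a point.

      import numpy as np

      def find_critical_point(grad, hess, x0, tol=1e-10, max_iter=50):
          """Newton-Raphson search for a point where the gradient of a scalar
          field (e.g., the electron density) vanishes, then classify it by the
          Hessian signature; (rank, signature) = (3, -1) labels a bond CP."""
          x = np.asarray(x0, dtype=float)
          for _ in range(max_iter):
              g = grad(x)
              if np.linalg.norm(g) < tol:
                  ev = np.linalg.eigvalsh(hess(x))
                  nonzero = ev[np.abs(ev) > 1e-8]
                  return x, (len(nonzero), int(np.sum(np.sign(nonzero))))
              x = x - np.linalg.solve(hess(x), g)  # Newton step
          raise RuntimeError("Newton iteration did not converge")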

  14. A Power Efficient Exaflop Computer Design for Global Cloud System Resolving Climate Models.

    NASA Astrophysics Data System (ADS)

    Wehner, M. F.; Oliker, L.; Shalf, J.

    2008-12-01

    Exascale computers would allow routine ensemble modeling of the global climate system at the cloud system resolving scale. Power and cost requirements of traditional architecture systems are likely to delay such capability for many years. We present an alternative route to the exascale using embedded processor technology to design a system optimized for ultra high resolution climate modeling. These power efficient processors, used in consumer electronic devices such as mobile phones, portable music players, cameras, etc., can be tailored to the specific needs of scientific computing. We project that a system capable of integrating a kilometer scale climate model a thousand times faster than real time could be designed and built in a five year time scale for US$75M with a power consumption of 3MW. This is cheaper, more power efficient and sooner than any other existing technology.

  15. Virtual detector methods for efficiently computing momentum-resolved dissociation and ionization spectra

    NASA Astrophysics Data System (ADS)

    Kramer, Alex; Thumm, Uwe

    2016-05-01

    We discuss a class of window-transform-based "virtual detector" methods for computing momentum-resolved dissociation and ionization spectra by numerically analyzing the motion of nuclear or electronic quantum-mechanical wavepackets at the periphery of their numerical grids. While prior applications of such surface-flux methods considered semi-classical limits to derive ionization and dissociation spectra, we systematically include quantum-mechanical corrections and extensions to higher dimensions, discussing numerical convergence properties and the computational efficiency of our method in comparison with alternative schemes for obtaining momentum distributions. Using the example of atomic ionization by co- and counter-rotating circularly polarized laser pulses, we scrutinize the efficiency of common finite-difference schemes for solving the time-dependent Schrödinger equation in virtual detection and standard Fourier-transformation methods for extracting momentum spectra. Supported by the DoE, NSF, and the Alexander von Humboldt Foundation.
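
    In the semi-classical limit that the abstract says earlier applications used, a 1D virtual detector reduces to reading off a local momentum and probability current at a fixed grid point and histogramming the flux. A minimal sketch (hbar = m = 1; psi_t, i_det, and bins are illustrative inputs, not the authors' code):

      import numpy as np

      def virtual_detector_spectrum(psi_t, dx, dt, i_det, bins):
          """Semi-classical 1D virtual detector: at grid index i_det, read the
          local momentum k(t) = Im(psi* dpsi/dx)/|psi|^2 and the probability
          current j(t) = k |psi|^2, then accumulate the flux per momentum bin."""
          spectrum = np.zeros(len(bins) - 1)
          for psi in psi_t:  # one wavefunction snapshot per time step
              dpsi = (psi[i_det + 1] - psi[i_det - 1]) / (2 * dx)
              dens = abs(psi[i_det]) ** 2
              if dens < 1e-30:
                  continue
              k = (np.conj(psi[i_det]) * dpsi).imag / dens
              idx = np.searchsorted(bins, k) - 1
              if 0 <= idx < len(spectrum):
                  spectrum[idx] += k * dens * dt
          return spectrum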

  16. Automated CAD design for sculptured airfoil surfaces

    NASA Astrophysics Data System (ADS)

    Murphy, S. D.; Yeagley, S. R.

    1990-11-01

    The design of tightly toleranced sculptured surfaces such as those for airfoils requires a significant design effort in order to machine the tools that create these surfaces. Because of the quantity of numerical data required to describe the airfoil surfaces, a CAD approach is required. Although this approach will result in productivity gains, much larger gains can be achieved by automating the design process. This paper discusses an application in which automating the design process on the CAD system resulted in an eightfold improvement in productivity.

  17. Computationally efficient segmentation of cell nuclei in 2D and 3D fluorescent micrographs

    NASA Astrophysics Data System (ADS)

    De Vylder, Jonas; Philips, Wilfried

    2011-02-01

    This paper proposes a new segmentation technique developed for the segmentation of cell nuclei in both 2D and 3D fluorescent micrographs. The proposed method can deal with both blurred edges and touching nuclei. Thanks to a dual scan line algorithm, it is both memory- and computationally efficient, making it interesting for the analysis of images coming from high-throughput systems or the analysis of 3D microscopic images. Experiments show good results, i.e., a recall of over 0.98.

  18. Efficient Probability of Failure Calculations for QMU using Computational Geometry LDRD 13-0144 Final Report

    SciTech Connect

    Mitchell, Scott A.; Ebeida, Mohamed Salah; Romero, Vicente J.; Swiler, Laura Painton; Rushdi, Ahmad A.; Abdelkader, Ahmad

    2015-09-01

    This SAND report summarizes our work on the Sandia National Laboratory LDRD project titled "Efficient Probability of Failure Calculations for QMU using Computational Geometry" which was project #165617 and proposal #13-0144. This report merely summarizes our work. Those interested in the technical details are encouraged to read the full published results, and contact the report authors for the status of the software and follow-on projects.

  19. Step-by-step magic state encoding for efficient fault-tolerant quantum computation

    PubMed Central

    Goto, Hayato

    2014-01-01

    Quantum error correction allows one to make quantum computers fault-tolerant against unavoidable errors due to decoherence and imperfect physical gate operations. However, fault-tolerant quantum computation requires impractically large computational resources for useful applications. This is a current major obstacle to the realization of a quantum computer. In particular, magic state distillation, which is a standard approach to universality, consumes the most resources in fault-tolerant quantum computation. To address this resource problem, here we propose step-by-step magic state encoding for concatenated quantum codes, where magic states are encoded step by step from the physical level to the logical one. To manage errors during the encoding, we carefully use error detection. Since the sizes of intermediate codes are small, it is expected that the resource overheads will become lower than in previous approaches based on distillation at the logical level. Our simulation results suggest that the resource requirements for a logical magic state will become comparable to those for a single logical controlled-NOT gate. Thus, the present method opens a new possibility for efficient fault-tolerant quantum computation. PMID:25511387

  20. Step-by-step magic state encoding for efficient fault-tolerant quantum computation

    NASA Astrophysics Data System (ADS)

    Goto, Hayato

    2014-12-01

    Quantum error correction allows one to make quantum computers fault-tolerant against unavoidable errors due to decoherence and imperfect physical gate operations. However, fault-tolerant quantum computation requires impractically large computational resources for useful applications. This is a current major obstacle to the realization of a quantum computer. In particular, magic state distillation, which is a standard approach to universality, consumes the most resources in fault-tolerant quantum computation. To address this resource problem, here we propose step-by-step magic state encoding for concatenated quantum codes, where magic states are encoded step by step from the physical level to the logical one. To manage errors during the encoding, we carefully use error detection. Since the sizes of intermediate codes are small, it is expected that the resource overheads will become lower than in previous approaches based on distillation at the logical level. Our simulation results suggest that the resource requirements for a logical magic state will become comparable to those for a single logical controlled-NOT gate. Thus, the present method opens a new possibility for efficient fault-tolerant quantum computation.

  1. An open source implementation of colon CAD in 3D slicer

    NASA Astrophysics Data System (ADS)

    Xu, Haiyong; Gage, H. Donald; Santago, Pete

    2010-03-01

    Most colon CAD (computer aided detection) software products, especially commercial products, are designed for use by radiologists in a clinical environment. Therefore, those features that effectively assist radiologists in finding polyps are emphasized in those tools. However, colon CAD researchers, many of whom are engineers or computer scientists, work with CT studies in which polyps have already been identified using CT colonography (CTC) and/or optical colonoscopy (OC). Their goal is to utilize that data to design a computer system that will identify all true polyps with no false positive detections. Therefore, they are more concerned with how to reduce false positives and how to understand the behavior of the system than with how to find polyps. Thus, colon CAD researchers have requirements for tools not found in current CAD software. We have implemented a module in 3D Slicer to assist these researchers. As with clinical colon CAD implementations, the ability to promptly locate a polyp candidate in a 2D slice image and on a 3D colon surface is essential for researchers. Our software provides this capability, and, uniquely, for each polyp candidate the prediction value from a classifier is shown next to the 3D view of the polyp candidate, as well as its CTC/OC finding. This capability makes it easier to study each false positive detection and identify its causes. We describe features in our colon CAD system that meet researchers' specific requirements. Our system uses an open source implementation of a 3D Slicer module, and the software is available to the public for use and for extension (http://www2.wfubmc.edu/ctc/download/).

  2. Computationally efficient design of optimal output feedback strategies for controllable passive damping devices

    NASA Astrophysics Data System (ADS)

    Kamalzare, Mahmoud; Johnson, Erik A.; Wojtkiewicz, Steven F.

    2014-05-01

    Designing control strategies for smart structures, such as those with semiactive devices, is complicated by the nonlinear nature of the feedback control, secondary clipping control and other additional requirements such as device saturation. The usual design approach resorts to large-scale simulation parameter studies that are computationally expensive. The authors have previously developed an approach for state-feedback semiactive clipped-optimal control design, based on a nonlinear Volterra integral equation that provides for the computationally efficient simulation of such systems. This paper expands the applicability of the approach by demonstrating that it can also be adapted to accommodate more realistic cases when, instead of full state feedback, only a limited set of noisy response measurements is available to the controller. This extension requires incorporating a Kalman filter (KF) estimator, which is linear, into the nominal model of the uncontrolled system. The efficacy of the approach is demonstrated by a numerical study of a 100-degree-of-freedom frame model, excited by a filtered Gaussian random excitation, with noisy acceleration sensor measurements to determine the semiactive control commands. The results show that the proposed method can improve computational efficiency by more than two orders of magnitude relative to a conventional solver, while retaining a comparable level of accuracy. Further, the proposed approach is shown to be similarly efficient for an extensive Monte Carlo simulation to evaluate the effects of sensor noise levels and KF tuning on the accuracy of the response.
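
    The extension hinges on a standard linear estimator. As a reference point, one discrete-time Kalman filter predict/update step looks as follows (a generic textbook sketch with illustrative names, not the authors' code):

      import numpy as np

      def kalman_step(x, P, y, A, C, Q, R):
          """One Kalman filter step for x_{k+1} = A x_k + w (cov Q),
          y_k = C x_k + v (cov R): predict the state, then correct it
          with the new noisy measurement y."""
          x_pred = A @ x
          P_pred = A @ P @ A.T + Q
          S = C @ P_pred @ C.T + R             # innovation covariance
          K = P_pred @ C.T @ np.linalg.inv(S)  # Kalman gain
          x_new = x_pred + K @ (y - C @ x_pred)
          P_new = (np.eye(len(x_new)) - K @ C) @ P_pred
          return x_new, P_new

    Because the filter itself is linear, it can be absorbed into the nominal uncontrolled model, which is what lets the authors' Volterra-integral-equation machinery carry over unchanged.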

  3. An efficient surrogate-based method for computing rare failure probability

    NASA Astrophysics Data System (ADS)

    Li, Jing; Li, Jinglai; Xiu, Dongbin

    2011-10-01

    In this paper, we present an efficient numerical method for evaluating rare failure probability. The method is based on a recently developed surrogate-based method from Li and Xiu [J. Li, D. Xiu, Evaluation of failure probability via surrogate models, J. Comput. Phys. 229 (2010) 8966-8980] for failure probability computation. The method by Li and Xiu is of hybrid nature, in the sense that samples of both the surrogate model and the true physical model are used, and its efficiency gain relies on using only very few samples of the true model. Here we extend the capability of the method to rare probability computation by using the idea of importance sampling (IS). In particular, we employ the cross-entropy (CE) method, which is an effective method for determining the biasing distribution in IS. We demonstrate that, by combining with the CE method, a surrogate-based IS algorithm can be constructed that is highly efficient for rare failure probability computation; it requires much less simulation effort than the traditional CE-IS method. In many cases, the new method is capable of capturing failure probabilities as small as 10^-12 to 10^-6 with only several hundred samples.
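
    The CE-IS backbone is compact enough to sketch. Below is a plain cross-entropy importance sampler for P[g(X) > gamma] with X standard normal (illustrative parameter names; the paper's contribution, replacing most evaluations of g with a cheap surrogate, is deliberately left out):

      import numpy as np

      def rare_failure_prob(g, dim, gamma, n=2000, n_ce=10, rho=0.1, seed=0):
          """Cross-entropy importance sampling: adapt a Gaussian biasing
          density toward the failure region, then reweight by the
          likelihood ratio to get an unbiased probability estimate."""
          rng = np.random.default_rng(seed)
          mu, sigma = np.zeros(dim), np.ones(dim)
          for _ in range(n_ce):  # CE iterations: move toward the elite samples
              x = mu + sigma * rng.standard_normal((n, dim))
              vals = np.apply_along_axis(g, 1, x)
              level = np.quantile(vals, 1.0 - rho)
              if level >= gamma:
                  break
              elite = x[vals >= level]
              mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-12
          x = mu + sigma * rng.standard_normal((n, dim))
          vals = np.apply_along_axis(g, 1, x)
          log_w = (-0.5 * (x ** 2).sum(axis=1)                    # log N(0, I)
                   + 0.5 * (((x - mu) / sigma) ** 2).sum(axis=1)  # minus log q
                   + np.log(sigma).sum())
          return float(np.mean((vals > gamma) * np.exp(log_w)))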

  4. Generation and use of human 3D-CAD models

    NASA Astrophysics Data System (ADS)

    Grotepass, Juergen; Speyer, Hartmut; Kaiser, Ralf

    2002-05-01

    Individualized products are one of the ten mega trends of the 21st century, with human modeling as the key issue for tomorrow's design and product development. The use of human modeling software for computer-based ergonomic simulations within the production process increases quality while reducing costs by 30-50 percent and shortening production time. This presentation focuses on the use of human 3D-CAD models both for the ergonomic design of working environments and for made-to-measure garment production. Today, the entire production chain can be designed, and individualized models generated and analyzed, in 3D computer environments. Anthropometric design for ergonomics is matched to human needs, thus preserving health. Ergonomic simulation covers topics such as human vision, reachability, kinematics, force and comfort analysis, and international design capabilities. In Germany, more than 17 billion marks shift to other industries because clothes do not fit. Individual clothing tailored to the customer's preference means added value, pleasure, and a perfect fit. Body scanning technology is the key to the generation and use of human 3D-CAD models both for the ergonomic design of working environments and for made-to-measure garment production.

  5. Computing the energy of a water molecule using multideterminants: A simple, efficient algorithm

    NASA Astrophysics Data System (ADS)

    Clark, Bryan K.; Morales, Miguel A.; McMinis, Jeremy; Kim, Jeongnim; Scuseria, Gustavo E.

    2011-12-01

    Quantum Monte Carlo (QMC) methods such as variational Monte Carlo and fixed node diffusion Monte Carlo depend heavily on the quality of the trial wave function. Although Slater-Jastrow wave functions are the most commonly used variational ansatz in electronic structure, more sophisticated wave functions are critical to ascertaining new physics. One such wave function is the multi-Slater-Jastrow wave function which consists of a Jastrow function multiplied by the sum of Slater determinants. In this paper we describe a method for working with these wave functions in QMC codes that is easy to implement, efficient both in computational speed as well as memory, and easily parallelized. The computational cost scales quadratically with particle number making this scaling no worse than the single determinant case and linear with the total number of excitations. Additionally, we implement this method and use it to compute the ground state energy of a water molecule.
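
    The quadratic-scaling claim rests on a standard determinant-ratio identity: replacing column j of the reference Slater matrix A by a new orbital column u changes the determinant by the factor (A^{-1} u)_j, so all single-excitation determinants follow from one inverse. A minimal sketch of that building block (not the authors' full algorithm):

      import numpy as np

      def single_excitation_ratios(A, new_cols):
          """ratios[j, k] = det(A with column j replaced by new_cols[:, k]) / det(A).
          One O(n^3) inverse yields every single-excitation ratio in O(n) each."""
          return np.linalg.inv(A) @ new_cols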

  6. Computing the energy of a water molecule using multideterminants: a simple, efficient algorithm.

    PubMed

    Clark, Bryan K; Morales, Miguel A; McMinis, Jeremy; Kim, Jeongnim; Scuseria, Gustavo E

    2011-12-28

    Quantum Monte Carlo (QMC) methods such as variational Monte Carlo and fixed node diffusion Monte Carlo depend heavily on the quality of the trial wave function. Although Slater-Jastrow wave functions are the most commonly used variational ansatz in electronic structure, more sophisticated wave functions are critical to ascertaining new physics. One such wave function is the multi-Slater-Jastrow wave function which consists of a Jastrow function multiplied by the sum of Slater determinants. In this paper we describe a method for working with these wave functions in QMC codes that is easy to implement, efficient both in computational speed as well as memory, and easily parallelized. The computational cost scales quadratically with particle number making this scaling no worse than the single determinant case and linear with the total number of excitations. Additionally, we implement this method and use it to compute the ground state energy of a water molecule. PMID:22225142

  7. An efficient FPGA architecture for integer Nth root computation

    NASA Astrophysics Data System (ADS)

    Rangel-Valdez, Nelson; Barron-Zambrano, Jose Hugo; Torres-Huitzil, Cesar; Torres-Jimenez, Jose

    2015-10-01

    In embedded computing, it is common to find applications such as signal processing, image processing, computer graphics or data compression that might benefit from hardware implementation of the computation of integer roots of order N. However, the scientific literature lacks architectural designs that implement such operations for different values of N using a low amount of resources. This article presents a parameterisable field programmable gate array (FPGA) architecture for an efficient Nth root calculator that uses only adders/subtractors and N location memory elements. The architecture was tested for different values of N, using 64-bit number representation. The results show a consumption of up to 10% of the logical resources of a Xilinx XC6SLX45-CSG324C device, depending on the value of N. The hardware implementation improved the performance of its corresponding software implementations by one order of magnitude. The architecture performance varies from several thousand to seven million root operations per second.
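
    In software, the same operation is a bitwise binary search that needs only additions, shifts, and comparisons. The sketch below shows the classic algorithm (a generic reference implementation, not the paper's FPGA architecture):

      def integer_nth_root(x: int, n: int) -> int:
          """Largest r with r**n <= x, found by setting one bit of r at a
          time from the most significant downward."""
          if x < 0 or n < 1:
              raise ValueError("x must be >= 0 and n >= 1")
          r = 0
          for bit in reversed(range((x.bit_length() + n - 1) // n + 1)):
              candidate = r | (1 << bit)
              if candidate ** n <= x:
                  r = candidate
          return r

      assert integer_nth_root(2 ** 64 - 1, 3) == 2642245  # 64-bit cube root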

  8. Dendritic nonlinearities are tuned for efficient spike-based computations in cortical circuits.

    PubMed

    Ujfalussy, Balázs B; Makara, Judit K; Branco, Tiago; Lengyel, Máté

    2015-01-01

    Cortical neurons integrate thousands of synaptic inputs in their dendrites in highly nonlinear ways. It is unknown how these dendritic nonlinearities in individual cells contribute to computations at the level of neural circuits. Here, we show that dendritic nonlinearities are critical for the efficient integration of synaptic inputs in circuits performing analog computations with spiking neurons. We developed a theory that formalizes how a neuron's dendritic nonlinearity that is optimal for integrating synaptic inputs depends on the statistics of its presynaptic activity patterns. Based on their in vivo preynaptic population statistics (firing rates, membrane potential fluctuations, and correlations due to ensemble dynamics), our theory accurately predicted the responses of two different types of cortical pyramidal cells to patterned stimulation by two-photon glutamate uncaging. These results reveal a new computational principle underlying dendritic integration in cortical neurons by suggesting a functional link between cellular and systems--level properties of cortical circuits. PMID:26705334

  9. Redundancy management for efficient fault recovery in NASA's distributed computing system

    NASA Technical Reports Server (NTRS)

    Malek, Miroslaw; Pandya, Mihir; Yau, Kitty

    1991-01-01

    The management of redundancy in computer systems was studied and guidelines were provided for the development of NASA's fault-tolerant distributed systems. Fault recovery and reconfiguration mechanisms were examined. A theoretical foundation was laid for redundancy management by efficient reconfiguration methods and algorithmic diversity. Algorithms were developed to optimize the resources for embedding of computational graphs of tasks in the system architecture and reconfiguration of these tasks after a failure has occurred. The computational structure represented by a path and the complete binary tree was considered and the mesh and hypercube architectures were targeted for their embeddings. The innovative concept of Hybrid Algorithm Technique was introduced. This new technique provides a mechanism for obtaining fault tolerance while exhibiting improved performance.

  10. Efficient computation of the stability of three-dimensional compressible boundary layers

    NASA Technical Reports Server (NTRS)

    Malik, M. R.; Orszag, S. A.

    1981-01-01

    Methods for the computer analysis of the stability of three-dimensional compressible boundary layers are discussed and the user-oriented Compressible Stability Analysis (COSAL) computer code is described. The COSAL code uses a matrix finite-difference method for local eigenvalue solution when a good guess for the eigenvalue is available and is significantly more computationally efficient than the commonly used initial-value approach. The local eigenvalue search procedure also results in eigenfunctions and, at little extra work, group velocities. A globally convergent eigenvalue procedure is also developed which may be used when no guess for the eigenvalue is available. The global problem is formulated in such a way that no unstable spurious modes appear so that the method is suitable for use in a black-box stability code. Sample stability calculations are presented for the boundary layer profiles of an LFC swept wing.

  11. Efficient computation of root numbers and class numbers of parametrized families of real abelian number fields

    NASA Astrophysics Data System (ADS)

    Louboutin, Stephane R.

    2007-03-01

    Let {K_m} be a parametrized family of simplest real cyclic cubic, quartic, quintic or sextic number fields of known regulators, e.g., the so-called simplest cubic and quartic fields associated with the polynomials P_m(x) = x^3 - mx^2 - (m+3)x + 1 and P_m(x) = x^4 - mx^3 - 6x^2 + mx + 1. We give explicit formulas for powers of the Gaussian sums attached to the characters associated with these simplest number fields. We deduce a method for computing the exact values of these Gaussian sums. These values are then used to efficiently compute class numbers of simplest fields. Finally, such class number computations yield many examples of real cyclotomic fields Q(zeta_p)^+ of prime conductor p >= 3 and class number h_p^+ >= p. However, in accordance with Vandiver's conjecture, we found no example of p for which p divides h_p^+.

  12. A distributed data base management facility for the CAD/CAM environment

    NASA Technical Reports Server (NTRS)

    Balza, R. M.; Beaudet, R. W.; Johnson, H. R.

    1984-01-01

    Current IPAD research in the area of distributed data base management considers facilities for supporting CAD/CAM data management in a heterogeneous network of computers encompassing multiple data base managers supporting a variety of data models. These facilities include coordinated execution of multiple DBMSs to provide for administration of and access to data distributed across them.

  13. Extending Engineering Design Graphics Laboratories to Have a CAD/CAM Component: Implementation Issues.

    ERIC Educational Resources Information Center

    Juricic, Davor; Barr, Ronald E.

    1996-01-01

    Reports on a project that extended the Engineering Design Graphics curriculum to include instruction and laboratory experience in computer-aided design, analysis, and manufacturing (CAD/CAM). Discusses issues in project implementation, including introduction of finite element analysis to lower-division students, feasibility of classroom prototype…

  14. Preparing for High Technology: CAD/CAM Programs. Research & Development Series No. 234.

    ERIC Educational Resources Information Center

    Abram, Robert; And Others

    This guide is one of three developed to provide information and resources to assist in planning and developing postsecondary technican training programs in high technology areas. It is specifically intended for vocational-technical educators and planners in the initial stages of planning a specialized training option in computer-aided design (CAD)…

  15. Classroom Experiences in an Engineering Design Graphics Course with a CAD/CAM Extension.

    ERIC Educational Resources Information Center

    Barr, Ronald E.; Juricic, Davor

    1997-01-01

    Reports on the development of a new CAD/CAM laboratory experience for an Engineering Design Graphics (EDG) course. The EDG curriculum included freehand sketching, introduction to Computer-Aided Design and Drafting (CADD), and emphasized 3-D solid modeling. Reviews the project and reports on the testing of the new laboratory components which were…

  16. Using Claris CAD To Develop a Floor Plan. High-Technology Training Module.

    ERIC Educational Resources Information Center

    Pawlowicz, Bruce; Johnson, Tom

    This learning module for a high school architectural drafting course introduces students to the use of Claris CAD (Computer Aided Drafting) to develop a floor plan. The six sections of the module are the following: module objectives, content outline, teaching methods, student activities, resource list, and evaluation (pretest, posttest). Student…

  17. Bridging CAGD knowledge into CAD/CG applications: Mathematical theories as stepping stones of innovations

    NASA Astrophysics Data System (ADS)

    Gobithaasan, R. U.; Miura, Kenjiro T.; Hassan, Mohamad Nor

    2014-07-01

    Computer Aided Geometric Design (CAGD), which underpins the theories of Computer Aided Design (CAD) and Computer Graphics (CG), has been taught in a number of Malaysian universities under the umbrella of Mathematical Sciences faculties/departments. On the other hand, CAD/CG is taught under either the Engineering or the Computer Science faculty. Even though CAGD researchers/educators/students (denoted here as contributors) have been enriching this field of study through article/journal publication, many fail to convert ideas into constructive innovations due to the gap between CAGD contributors and practitioners (engineers/product designers/architects/artists). This paper addresses this issue by advocating a number of technologies that can be used to transform CAGD contributors into innovators whose work has immediate practical impact for CAD/CG practitioners. The underlying principle of solving this issue is twofold: first, to expose CAGD contributors to ways of turning mathematical ideas into plug-ins, and second, to impart relevant CAGD theories to CAD/CG practitioners. Both cases are discussed in detail, and the final section shows examples illustrating the importance of turning mathematical knowledge into innovations.

  18. An iso-deviant approach for acoustic computations using efficient adaptive gridder for littoral environments

    NASA Astrophysics Data System (ADS)

    Rike, Erik R.; Delbalzo, Donald R.

    2005-04-01

    Transmission loss (TL) computations in littoral areas require a dense spatial and azimuthal grid to achieve acceptable accuracy and detail. The computational cost of accurate predictions led to a new concept, OGRES (Objective Grid/Radials using Environmentally-sensitive Selection), which produces sparse, irregular acoustic grids with controlled accuracy. Recent work to further increase accuracy and efficiency with better metrics and interpolation led to EAGLE (Efficient Adaptive Gridder for Littoral Environments). On each iteration, EAGLE produces grids with approximately constant spatial uncertainty (hence, iso-deviance), yielding predictions with ever-increasing resolution and accuracy. The EAGLE point-selection mechanism is tested using the predictive error metric and 1-D synthetic data sets created from combinations of simple signal functions (e.g., polynomials, sines, cosines, exponentials), along with white and chromatic noise. The speed, efficiency, fidelity, and iso-deviance of EAGLE are determined for each combination of signal, noise, and interpolator. The results show significant efficiency enhancements compared to uniform grids of the same accuracy. [Work sponsored by ONR under the LADC project.]
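
    The iso-deviant idea (refine wherever the quantity still changes most between samples) can be illustrated in one dimension. A greedy sketch in that spirit (not the OGRES/EAGLE algorithm itself):

      import numpy as np

      def iso_deviant_grid(f, a, b, n_points):
          """Bisect the interval with the largest variation until the sample
          budget is spent, so all intervals end up with roughly equal
          deviation between neighboring samples."""
          xs = list(np.linspace(a, b, 5))
          ys = [f(x) for x in xs]
          while len(xs) < n_points:
              var = [abs(ys[i + 1] - ys[i]) for i in range(len(xs) - 1)]
              i = int(np.argmax(var))
              xm = 0.5 * (xs[i] + xs[i + 1])
              xs.insert(i + 1, xm)
              ys.insert(i + 1, f(xm))
          return np.array(xs), np.array(ys)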

  19. Development of CAD prototype system for Crohn's disease

    NASA Astrophysics Data System (ADS)

    Oda, Masahiro; Kitasaka, Takayuki; Furukawa, Kazuhiro; Watanabe, Osamu; Ando, Takafumi; Goto, Hidemi; Mori, Kensaku

    2010-03-01

    The purpose of this paper is to present a CAD prototype system for Crohn's disease. Crohn's disease causes inflammation or ulcers of the gastrointestinal tract. The number of patients with Crohn's disease is increasing in Japan. Symptoms of Crohn's disease include intestinal stenosis, longitudinal ulcers, and fistulae. An optical endoscope cannot pass through an intestinal stenosis in some cases. We propose a new CAD system using abdominal fecal-tagging CT images for efficient diagnosis of Crohn's disease. The system displays virtual unfolded (VU), virtual endoscopic, curved planar reconstruction, multi-planar reconstruction, and outside views of both the small and large intestines. To generate the VU views, we employ a small and large intestine extraction method followed by a simple electronic cleansing method. The intestine extraction is based on a region growing process, which uses the characteristic that tagged fluid neighbors air in the intestine. The electronic cleansing enables observation of the intestinal wall under tagged fluid. We change the height of the VU views according to the perimeter of the intestine. In addition, we developed a method to enhance longitudinal ulcers in the views of the system. We enhance concave parts of the intestinal wall, which are caused by longitudinal ulcers, based on local intensity structure analysis. We examined the small and large intestines of eleven CT images with the proposed system. The VU views enabled efficient observation of the intestinal wall. The height change of the VU views helps in finding intestinal stenoses in the VU views. The concave region enhancement made longitudinal ulcers clear in the views.

  20. Space crew radiation exposure analysis system based on a commercial stand-alone CAD system

    NASA Astrophysics Data System (ADS)

    Appleby, Matthew H.; Golightly, Michael J.; Hardy, Alva C.

    1992-07-01

    Major improvements have recently been completed in the approach to spacecraft shielding analysis. A Computer-Aided Design (CAD)-based system has been developed for determining the shielding provided to any point within or external to the spacecraft. Shielding analysis is performed using a commercially available stand-alone CAD system and a customized ray-tracing subroutine contained within a standard engineering modeling software package. This improved shielding analysis technique has been used in several vehicle design projects such as a Mars transfer habitat, pressurized lunar rover, and the redesigned Space Station. Results of these analyses are provided to demonstrate the applicability and versatility of the system.

  1. Space crew radiation exposure analysis system based on a commercial stand-alone CAD system

    NASA Technical Reports Server (NTRS)

    Appleby, Matthew H.; Golightly, Michael J.; Hardy, Alva C.

    1992-01-01

    Major improvements have recently been completed in the approach to spacecraft shielding analysis. A Computer-Aided Design (CAD)-based system has been developed for determining the shielding provided to any point within or external to the spacecraft. Shielding analysis is performed using a commercially available stand-alone CAD system and a customized ray-tracing subroutine contained within a standard engineering modeling software package. This improved shielding analysis technique has been used in several vehicle design projects such as a Mars transfer habitat, pressurized lunar rover, and the redesigned Space Station. Results of these analyses are provided to demonstrate the applicability and versatility of the system.
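
    In such systems, the shielding seen from a dose point is typically characterized by casting rays over the full solid angle and recording the material traversed along each. A minimal sketch of that sampling loop (thickness_along is a hypothetical stand-in for the customized ray-tracing subroutine described above):

      import numpy as np

      def shielding_distribution(dose_point, thickness_along, n_rays=1000):
          """Estimate the distribution of areal density (e.g., g/cm^2) around a
          dose point by tracing rays in uniformly random directions over 4*pi sr."""
          rng = np.random.default_rng(0)
          directions = rng.normal(size=(n_rays, 3))
          directions /= np.linalg.norm(directions, axis=1, keepdims=True)
          return np.array([thickness_along(dose_point, d) for d in directions])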

  2. Online mammographic images database for development and comparison of CAD schemes.

    PubMed

    Matheus, Bruno Roberto Nepomuceno; Schiabel, Homero

    2011-06-01

    Considering the difficulty of finding good-quality images for the development and testing of computer-aided diagnosis (CAD) schemes, this paper presents a public online mammographic image database, free to all interested users and intended to help develop and evaluate CAD schemes. The digitization of the mammographic images is done with contrast and spatial resolution suitable for processing purposes. A broad retrieval system allows the user to search for different image, exam, or patient characteristics. Comparison with other currently available databases has shown that the presented database has a sufficient number of images, is of high quality, and is the only one to include a functional search system. PMID:20480383

  3. Cad Graphics in Facilities Planning.

    ERIC Educational Resources Information Center

    Collier, Linda M.

    1984-01-01

    By applying a computer-aided drafting system to a range of facilities layouts and plans, a division of Tektronix, Inc., Oregon, is maintaining staffing levels with an added workload. The tool is also being used in other areas of the company for illustration, design, and administration. (MLF)

  4. Switchgrass PviCAD1: Understanding residues important for substrate preferences and activity

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Lignin is a major component of plant cell walls and is a complex aromatic heteropolymer. Reducing lignin content improves conversion efficiency into liquid fuels, and enzymes involved in lignin biosynthesis are attractive targets for bioengineering. Cinnamyl alcohol dehydrogenase (CAD) catalyzes t...

  5. Potential reasons for differences in CAD effectiveness evaluated using laboratory and clinical studies

    NASA Astrophysics Data System (ADS)

    He, Xin; Samuelson, Frank; Zeng, Rongping; Sahiner, Berkman

    2015-03-01

    Research studies have investigated a number of factors that may impact the performance assessment of computer aided detection (CAD) effectiveness, such as the inherent design of the CAD, the image and reader samples, and the assessment methods. In this study, we focused on the effect of prevalence on cue validity (co-occurrence of cue and signal) and learning as potentially important factors in CAD assessment. For example, the prevalence of cases with breast cancer is around 50% in laboratory CAD studies, which is 100 times higher than that in breast cancer screening. Although ROC is prevalence-independent, an observer's use of CAD involves tasks that are more complicated than binary classification, including: search, detection, classification, cueing and learning. We developed models to investigate the potential impact of prevalence on cue validity and the learning of cue validity tasks. We hope this work motivates new studies that investigate previously under-explored factors involved in image interpretation with a new modality in its assessment.
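
    The prevalence effect on cue validity follows directly from Bayes' rule: with fixed per-case sensitivity and false-mark rate, the probability that a CAD mark co-occurs with a true lesion collapses as prevalence drops. A toy calculation (illustrative numbers, not the authors' observer model):

      def cue_validity(sensitivity, fp_rate, prevalence):
          """P(signal | cue): fraction of CAD cues that mark a true signal."""
          tp = sensitivity * prevalence
          fp = fp_rate * (1.0 - prevalence)
          return tp / (tp + fp)

      print(cue_validity(0.9, 0.1, 0.5))    # ~0.90 at 50% laboratory prevalence
      print(cue_validity(0.9, 0.1, 0.005))  # ~0.04 at screening prevalence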

  6. CAD for 4-step braided fabric composites

    SciTech Connect

    Pandey, R.; Hahn, H.T.

    1994-12-31

    A general framework is provided to predict thermoelastic properties of three-dimensional 4-step braided fabric composites. The three key steps involved are (1) the development of a CAD model for the yarn architecture, (2) the extraction of a unit cell, and (3) the prediction of the thermoelastic properties based on micromechanics. Main features of each step are summarized, and experimental correlations are provided in the paper.

  7. Mechanical design productivity using CAD graphics - A user's point of view

    NASA Astrophysics Data System (ADS)

    Boltz, R. J.; Avery, J. T., Jr.

    1985-02-01

    The present investigation is concerned with the mechanical design productivity resulting from the use of Computer-Aided Design (CAD) graphics as a design tool. The studies considered were conducted by a company involved in the design, development, and manufacture of government and defense products. Attention is given to CAD graphics for mechanical design, productivity, an overall productivity assessment, the use of CAD graphics for basic mechanical design, productivity in engineering-related areas, and an overall engineering productivity assessment. The investigation shows that there was no appreciable improvement in productivity with respect to basic mechanical design. However, rather substantial increases in productivity could be realized for engineering-related activities.

  8. On the Use of Parametric-CAD Systems and Cartesian Methods for Aerodynamic Design

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.; Pulliam, Thomas H.

    2004-01-01

    Automated, high-fidelity tools for aerodynamic design face critical issues in attempting to optimize real-life geometry and in permitting radical design changes. Success in these areas promises not only significantly shorter design-cycle times, but also superior and unconventional designs. To address these issues, we investigate the use of a parametric-CAD system in conjunction with an embedded-boundary Cartesian method. Our goal is to combine the modeling capabilities of feature-based CAD with the robustness and flexibility of component-based Cartesian volume-mesh generation for complex geometry problems. We present the development of an automated optimization framework with a focus on the deployment of such a CAD-based design approach in a heterogeneous parallel computing environment.

  9. Study on the integration approaches to CAD/CAPP/FMS in garment CIMS

    NASA Astrophysics Data System (ADS)

    Wang, Xiankui; Tian, Wensheng; Liu, Chengying; Li, Zhizhong

    1995-08-01

    Computer integrated manufacturing system (CIMS), as an advanced methodology, has been applied in many industrial fields. There is, however, little research on the application of CIMS in the garment industry, especially on integrated approaches to CAD, CAPP, and FMS in garment CIMS. In this paper, the current state of CAD, CAPP, and FMS in the garment industry is discussed, and the information requirements between them, as well as the integration approaches, are investigated. Representation of garment product data by group technology coding is proposed. Based on group technology, a shared database can be constructed as an integration element, which leads to the integration of CAD/CAPP/FMS in garment CIMS.

  10. Enhancing simulation of efficiency with analytical tools. [combining computer simulation and analytical techniques for cost reduction

    NASA Technical Reports Server (NTRS)

    Seltzer, S. M.

    1974-01-01

    Some means of combining both computer simulation and analytical techniques are indicated in order to mutually enhance their efficiency as design tools and to motivate those involved in engineering design to consider using such combinations. While the idea is not new, heavy reliance on computers often seems to overshadow the potential utility of analytical tools. Although the example used is drawn from the area of dynamics and control, the principles espoused are applicable to other fields. In the example, the parameter plane stability analysis technique is described briefly and extended beyond that reported in the literature to increase its utility (through a simple set of recursive formulas) and its applicability (through the portrayal of the effect of varying the sampling period of the computer). The numerical values that were rapidly selected by analysis were found to be correct for the hybrid computer simulation for which they were needed. This obviated the need for cut-and-try methods to choose the numerical values, thereby saving both time and computer utilization.

  11. Efficient curve-skeleton computation for the analysis of biomedical 3d images - biomed 2010.

    PubMed

    Brun, Francesco; Dreossi, Diego

    2010-01-01

    Advances in three-dimensional (3D) biomedical imaging techniques, such as magnetic resonance (MR) and computed tomography (CT), make it easy to reconstruct high-quality 3D models of portions of the human body and other biological specimens. A major challenge lies in the quantitative analysis of the resulting models, which would allow a more comprehensive characterization of the object under investigation. An interesting approach is based on curve-skeleton (or medial axis) extraction, which gives basic information concerning the topology and the geometry. Curve-skeletons have been applied in the analysis of vascular networks and the diagnosis of tracheal stenoses, as well as in computing 3D flight paths for virtual endoscopy. However, curve-skeleton computation is a demanding task. An effective skeletonization algorithm was introduced by N. Cornea in [1], but it lacks computational performance. Thanks to advances in imaging techniques, the resolution of 3D images is increasing steadily, so efficient algorithms are needed to analyze significant Volumes of Interest (VOIs). In the present paper, an improved skeletonization algorithm based on the idea proposed in [1] is presented, together with a computational comparison between the original and the proposed method. The obtained results show that the proposed method yields a significant computational improvement, making the skeleton representation more appealing for biomedical image analysis applications. PMID:20467122

  12. Toward Efficient Computation of the Dempster-Shafer Belief Theoretic Conditionals.

    PubMed

    Wickramarathne, Thanuka L; Premaratne, Kamal; Murthi, Manohar N

    2013-04-01

    Dempster-Shafer (DS) belief theory provides a convenient framework for the development of powerful data fusion engines by allowing for a convenient representation of a wide variety of data imperfections. The recent work on the DS theoretic (DST) conditional approach, which is based on the Fagin-Halpern (FH) DST conditionals, appears to demonstrate the suitability of DS theory for incorporating both soft (generated by human-based sensors) and hard (generated by physics-based sources) evidence into the fusion process. However, the computation of the FH conditionals imposes a significant computational burden. One reason for this is the difficulty in identifying the FH conditional core, i.e., the set of propositions receiving nonzero support after conditioning. The conditional core theorem (CCT) in this paper redresses this shortcoming by explicitly identifying the conditional focal elements with no recourse to numerical computations, thereby providing a complete characterization of the conditional core. In addition, we derive explicit results to identify those conditioning propositions that may have generated a given conditional core. This "converse" to the CCT is of significant practical value for studying the sensitivity of the updated knowledge base with respect to the evidence received. Based on the CCT, we also develop an algorithm to efficiently compute the conditional masses (generated by FH conditionals), provide bounds on its computational complexity, and employ extensive simulations to analyze its behavior. PMID:23033433
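
    For concreteness, the Fagin-Halpern conditional that the paper builds on can be evaluated directly by enumerating focal elements; it is exactly this brute-force cost that the conditional core theorem helps avoid. A small sketch on a toy frame (not the authors' algorithm):

      def bel(mass, event):
          """Belief: total mass of nonempty focal elements contained in event."""
          return sum(m for s, m in mass.items() if s and s <= event)

      def pl(mass, event):
          """Plausibility: total mass of focal elements intersecting event."""
          return sum(m for s, m in mass.items() if s & event)

      def fh_conditional_bel(mass, B, A):
          """Fagin-Halpern conditional: Bel(B||A) = Bel(A&B) / (Bel(A&B) + Pl(A-B))."""
          num = bel(mass, A & B)
          den = num + pl(mass, A - B)
          return num / den if den else 0.0

      # Toy frame {a, b, c} with three focal elements
      mass = {frozenset("a"): 0.3, frozenset("ab"): 0.4, frozenset("abc"): 0.3}
      print(fh_conditional_bel(mass, frozenset("a"), frozenset("ab")))  # 0.3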

  13. Computationally efficient characterization of potential energy surfaces based on fingerprint distances

    NASA Astrophysics Data System (ADS)

    Schaefer, Bastian; Goedecker, Stefan

    2016-07-01

    An analysis of the network defined by the potential energy minima of multi-atomic systems, and their connectivity via reaction pathways that go through transition states, allows us to understand important characteristics like thermodynamic, dynamic, and structural properties. Unfortunately, computing the transition states and reaction pathways, in addition to the significant energetically low-lying local minima, is a computationally demanding task. We here introduce a computationally efficient method that is based on a combination of the minima hopping global optimization method and the insight that uphill barriers tend to increase with increasing structural distance between the educt and product states. This method allows us to replace the exact connectivity information and transition state energies with alternative and approximate concepts. Without adding any significant additional cost to the minima hopping global optimization approach, this method allows us to generate an approximate network of the minima, their connectivity, and a rough measure of the energy needed for their interconversion. This can be used to obtain a first qualitative idea of important physical and chemical properties by means of a disconnectivity graph analysis. Besides the physical insight obtained by such an analysis, the gained knowledge can be used to decide whether it is worthwhile to invest computational resources in an exact computation of the transition states and the reaction pathways. Furthermore, it is demonstrated that the method presented here can be used for finding physically reasonable interconversion pathways that are promising input pathways for methods like transition path sampling or discrete path sampling.

  14. CAD/CAM interface design of excimer laser micro-processing system

    NASA Astrophysics Data System (ADS)

    Jing, Liang; Chen, Tao; Zuo, Tiechuan

    2005-12-01

    Recently, CAD/CAM technology has gradually come into use in the field of laser processing. The excimer laser micro-processing system recognized only G-code instructions before the CAD/CAM interface was designed, and designing a part directly in G code is hard for users: efficiency is low and the probability of making errors is high. Using the secondary development interface of AutoCAD with Visual Basic, an application was developed to extract each entity's information from a drawing and convert it into per-entity processing parameters. An additional function was also added to the existing control software to read these processing parameters for each entity and realize continuous processing of the drawing. Based on this CAD/CAM interface, users can design a part in AutoCAD instead of writing G instructions. The time needed to design a part is sharply shortened, and this way of designing helps guarantee that the processing parameters of the part are correct and unambiguous. The processing of a complex novel biochip has been realized with this new function.

  15. A new data integration approach for AutoCAD and GIS

    NASA Astrophysics Data System (ADS)

    Ye, Hongmei; Li, Yuhong; Wang, Cheng; Li, Lijun

    2006-10-01

    GIS has advantages in both spatial data analysis and management, particularly in managing geometric and attributive information, and has attracted much attention among researchers worldwide. AutoCAD plays an increasingly important role as one of the main data sources for GIS, and various work and achievements can be found in the related literature. However, conventional data integration from AutoCAD to GIS is time-consuming and, for a large system, can cause loss of both geometric and attributive information. A new approach and algorithm for efficient, high-quality data integration are therefore needed. In this paper, a novel AutoCAD-to-GIS data integration approach is introduced, based on spatial data mining and an analysis of the data structures of both AutoCAD and GIS. A practicable algorithm for the data conversion from CAD to GIS is given as well. Using a designed evaluation scheme, the accuracy of the conversion of both geometric and attributive information is demonstrated. Finally, the validity and feasibility of the new approach are shown by an experimental analysis.
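
    Once entity geometry and attributes have been extracted from the CAD side, the conversion itself amounts to mapping each entity to a GIS feature while preserving its attributes. A minimal library-free sketch (illustrative entity tuples, not the paper's algorithm), targeting GeoJSON as the GIS-side representation:

      def cad_entities_to_geojson(entities):
          """Map extracted CAD entities, given as (layer, kind, coords) tuples
          with kind in {"POINT", "POLYLINE", "POLYGON"}, to GeoJSON features,
          carrying the layer name along as an attribute."""
          kind_map = {"POINT": "Point", "POLYLINE": "LineString", "POLYGON": "Polygon"}
          features = []
          for layer, kind, coords in entities:
              geom = {"type": kind_map[kind],
                      "coordinates": coords if kind != "POLYGON" else [coords]}
              features.append({"type": "Feature", "geometry": geom,
                               "properties": {"layer": layer}})
          return {"type": "FeatureCollection", "features": features}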

  16. Efficient mapping algorithms for scheduling robot inverse dynamics computation on a multiprocessor system

    NASA Technical Reports Server (NTRS)

    Lee, C. S. G.; Chen, C. L.

    1989-01-01

    Two efficient mapping algorithms are presented for scheduling the robot inverse dynamics computation, consisting of m computational modules with precedence relationships, on a multiprocessor system of p identical homogeneous processors with processor and communication costs, so as to achieve minimum computation time. An objective function is defined in terms of the sum of the processor finishing time and the interprocessor communication time. The minimax optimization is performed on the objective function to obtain the best mapping. This mapping problem can be formulated as a combination of the graph partitioning and the scheduling problems; both have been known to be NP-complete. Thus, to speed up the search for a solution, two heuristic algorithms were proposed to obtain fast but suboptimal mapping solutions. The first algorithm utilizes the level and the communication intensity of the task modules to construct an ordered priority list of ready modules, and the module assignment is performed by a weighted bipartite matching algorithm. For a near-optimal mapping solution, the problem can be solved by the heuristic algorithm with simulated annealing. These proposed optimization algorithms can solve various large-scale problems within a reasonable time. Computer simulations were performed to evaluate and verify the performance and the validity of the proposed mapping algorithms. Finally, experiments for computing the inverse dynamics of a six-jointed PUMA-like manipulator based on the Newton-Euler dynamic equations were implemented on an NCUBE/ten hypercube computer to verify the proposed mapping algorithms. Computer simulation and experimental results are compared and discussed.

  17. Capture Efficiency of Biocompatible Magnetic Nanoparticles in Arterial Flow: A Computer Simulation for Magnetic Drug Targeting

    NASA Astrophysics Data System (ADS)

    Lunnoo, Thodsaphon; Puangmali, Theerapong

    2015-10-01

    The primary limitation of magnetic drug targeting (MDT) relates to the strength of an external magnetic field which decreases with increasing distance. Small nanoparticles (NPs) displaying superparamagnetic behaviour are also required in order to reduce embolization in the blood vessel. The small NPs, however, make it difficult to vector NPs and keep them in the desired location. The aims of this work were to investigate parameters influencing the capture efficiency of the drug carriers in mimicked arterial flow. In this work, we computationally modelled and evaluated capture efficiency in MDT with COMSOL Multiphysics 4.4. The studied parameters were (i) magnetic nanoparticle size, (ii) three classes of magnetic cores (Fe3O4, Fe2O3, and Fe), and (iii) the thickness of biocompatible coating materials (Au, SiO2, and PEG). It was found that the capture efficiency of small particles decreased with decreasing size and was less than 5 % for magnetic particles in the superparamagnetic regime. The thickness of non-magnetic coating materials did not significantly influence the capture efficiency of MDT. It was difficult to capture small drug carriers (D < 200 nm) in the arterial flow. We suggest that MDT with high capture efficiency can be obtained in small vessels and low blood velocities such as micro-capillary vessels.

  18. Efficiency Improvement Opportunities for Personal Computer Monitors. Implications for Market Transformation Programs

    SciTech Connect

    Park, Won Young; Phadke, Amol; Shah, Nihar

    2012-06-29

    Displays account for a significant portion of electricity consumed in personal computer (PC) use, and global PC monitor shipments are expected to continue to increase. We assess the market trends in the energy efficiency of PC monitors that are likely to occur without any additional policy intervention and estimate that display efficiency will likely improve by over 40% by 2015 compared to today's technology. We evaluate the cost effectiveness of a key technology that further improves efficiency beyond this level by at least 20% and find that its adoption is cost effective. We assess the potential for further improving efficiency, taking into account the recent development of universal serial bus (USB) powered liquid crystal display (LCD) monitors, and find that the technology currently available and deployed in USB-powered monitors has the potential to reduce energy consumption by as much as 50%. We provide insights for policies and programs that can be used to accelerate the adoption of efficient technologies to capture the global energy saving potential from PC monitors, which we estimate to be 9.2 terawatt-hours [TWh] per year in 2015.

  19. Capture Efficiency of Biocompatible Magnetic Nanoparticles in Arterial Flow: A Computer Simulation for Magnetic Drug Targeting.

    PubMed

    Lunnoo, Thodsaphon; Puangmali, Theerapong

    2015-12-01

    The primary limitation of magnetic drug targeting (MDT) relates to the strength of an external magnetic field which decreases with increasing distance. Small nanoparticles (NPs) displaying superparamagnetic behaviour are also required in order to reduce embolization in the blood vessel. The small NPs, however, make it difficult to vector NPs and keep them in the desired location. The aims of this work were to investigate parameters influencing the capture efficiency of the drug carriers in mimicked arterial flow. In this work, we computationally modelled and evaluated capture efficiency in MDT with COMSOL Multiphysics 4.4. The studied parameters were (i) magnetic nanoparticle size, (ii) three classes of magnetic cores (Fe3O4, Fe2O3, and Fe), and (iii) the thickness of biocompatible coating materials (Au, SiO2, and PEG). It was found that the capture efficiency of small particles decreased with decreasing size and was less than 5 % for magnetic particles in the superparamagnetic regime. The thickness of non-magnetic coating materials did not significantly influence the capture efficiency of MDT. It was difficult to capture small drug carriers (D<200 nm) in the arterial flow. We suggest that the MDT with high-capture efficiency can be obtained in small vessels and low-blood velocities such as micro-capillary vessels. PMID:26515074

  20. Custom hip prostheses by integrating CAD and casting technology

    NASA Astrophysics Data System (ADS)

    Silva, Pedro F.; Leal, Nuno; Neto, Rui J.; Lino, F. Jorge; Reis, Ana

    2012-09-01

    Total Hip Arthroplasty (THA) is a surgical intervention that has been achieving high rates of success, leaving room for research on long-run durability, patient comfort, and cost reduction. Even so, up to the present, little research has been done to improve the method of manufacturing customized prostheses, which are commonly made by full machining. This document presents a different methodology that combines the study of medical images through CAD (Computer Aided Design) software, SL (stereolithography) additive manufacturing, ceramic shell manufacture, precision casting of titanium alloys, and Computer Aided Manufacturing (CAM). The goal is to achieve the best comfort for the patient, the best stress distribution, and the maximum lifetime of the prosthesis produced by this integrated methodology, by making custom hip prostheses adapted to each patient's needs and natural physiognomy. Not only is the process reliable, it also represents a cost reduction compared to the conventional fully machined custom hip prosthesis.

  1. Uncertainty in Aspiration Efficiency Estimates from Torso Simplifications in Computational Fluid Dynamics Simulations

    PubMed Central

    Anthony, T. Renée

    2013-01-01

    Computational fluid dynamics (CFD) has been used to report particle inhalability in low velocity freestreams, where realistic faces but simplified, truncated, and cylindrical human torsos were used. When compared to wind tunnel velocity studies, the truncated models were found to underestimate the air’s upward velocity near the humans, raising questions about aspiration estimation. This work compares aspiration efficiencies for particles ranging from 7 to 116 µm using three torso geometries: (i) a simplified truncated cylinder, (ii) a non-truncated cylinder, and (iii) an anthropometrically realistic humanoid body. The primary aim of this work is to (i) quantify the errors introduced by using a simplified geometry and (ii) determine the required level of detail to adequately represent a human form in CFD studies of aspiration efficiency. Fluid simulations used the standard k-epsilon turbulence models, with freestream velocities at 0.1, 0.2, and 0.4 m s−1 and breathing velocities at 1.81 and 12.11 m s−1 to represent at-rest and heavy breathing rates, respectively. Laminar particle trajectory simulations were used to determine the upstream area, also known as the critical area, where particles would be inhaled. These areas were used to compute aspiration efficiencies for facing the wind. Significant differences were found in both vertical velocity estimates and the location of the critical area between the three models. However, differences in aspiration efficiencies between the three forms were <8.8% over all particle sizes, indicating that there is little difference in aspiration efficiency between torso models. PMID:23006817
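
    The bookkeeping behind "aspiration efficiency" is compact enough to state in code. The sketch below is a minimal illustration under stated assumptions (hypothetical numbers; in the study itself the critical area is derived from the CFD particle trajectories rather than supplied as an input): efficiency is the particle flux through the upstream critical area divided by the flux through the mouth opening.

      def aspiration_efficiency(a_critical, u_freestream, a_mouth, u_breathing):
          """Fraction of particles aspirated: flux through the upstream
          critical area over flux through the mouth opening (areas in m^2,
          velocities in m/s)."""
          return (a_critical * u_freestream) / (a_mouth * u_breathing)

      # e.g. a 1.2 cm^2 critical area in a 0.4 m/s freestream, with a
      # 1.5 cm^2 mouth opening breathing at 1.81 m/s (at-rest rate):
      print(aspiration_efficiency(1.2e-4, 0.4, 1.5e-4, 1.81))  # ~0.18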

  2. An efficient and general numerical method to compute steady uniform vortices

    NASA Astrophysics Data System (ADS)

    Luzzatto-Fegiz, Paolo; Williamson, Charles H. K.

    2011-07-01

    Steady uniform vortices are widely used to represent high Reynolds number flows, yet their efficient computation still presents some challenges. Existing Newton iteration methods become inefficient as the vortices develop fine-scale features; in addition, these methods cannot, in general, find solutions with specified Casimir invariants. On the other hand, available relaxation approaches are computationally inexpensive, but can fail to converge to a solution. In this paper, we overcome these limitations by introducing a new discretization, based on an inverse-velocity map, which radically increases the efficiency of Newton iteration methods. In addition, we introduce a procedure to prescribe Casimirs and remove the degeneracies in the steady vorticity equation, thus ensuring convergence for general vortex configurations. We illustrate our methodology by considering several unbounded flows involving one or two vortices. Our method enables the computation, for the first time, of steady vortices that do not exhibit any geometric symmetry. In addition, we discover that, as the limiting vortex state for each flow is approached, each family of solutions traces a clockwise spiral in a bifurcation plot consisting of a velocity-impulse diagram. By the recently introduced "IVI diagram" stability approach [Phys. Rev. Lett. 104 (2010) 044504], each turn of this spiral is associated with a loss of stability for the steady flows. Such spiral structure is suggested to be a universal feature of steady, uniform-vorticity flows.

  3. Efficient Calibration of Computationally Intensive Groundwater Models through Surrogate Modelling with Lower Levels of Fidelity

    NASA Astrophysics Data System (ADS)

    Razavi, S.; Anderson, D.; Martin, P.; MacMillan, G.; Tolson, B.; Gabriel, C.; Zhang, B.

    2012-12-01

    Many sophisticated groundwater models tend to be computationally intensive as they rigorously represent detailed scientific knowledge about the groundwater systems. Calibration (model inversion), which is a vital step of groundwater model development, can require hundreds or thousands of model evaluations (runs) for different sets of parameters and as such demand prohibitively large computational time and resources. One common strategy to circumvent this computational burden is surrogate modelling which is concerned with developing and utilizing fast-to-run surrogates of the original computationally intensive models (also called fine models). Surrogates can be either based on statistical and data-driven models such as kriging and neural networks or simplified physically-based models with lower fidelity to the original system (also called coarse models). Fidelity in this context refers to the degree of the realism of a simulation model. This research initially investigates different strategies for developing lower-fidelity surrogates of a fine groundwater model and their combinations. These strategies include coarsening the fine model, relaxing the numerical convergence criteria, and simplifying the model geological conceptualisation. Trade-offs between model efficiency and fidelity (accuracy) are of special interest. A methodological framework is developed for coordinating the original fine model with its lower-fidelity surrogates with the objective of efficiently calibrating the parameters of the original model. This framework is capable of mapping the original model parameters to the corresponding surrogate model parameters and also mapping the surrogate model response for the given parameters to the original model response. This framework is general in that it can be used with different optimization and/or uncertainty analysis techniques available for groundwater model calibration and parameter/predictive uncertainty assessment. A real-world computationally
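
    One simple way to picture the fine/coarse coordination is an additive-correction loop: optimize against the cheap coarse model, then re-anchor the correction with an occasional fine-model run. The Python sketch below is a generic illustration under that assumption (hypothetical function names; the framework described above additionally maps between fine and coarse parameter spaces and works with arbitrary optimizers or uncertainty analyses).

      import numpy as np
      from scipy.optimize import minimize

      def calibrate_with_surrogate(fine, coarse, obs, theta0, n_outer=5):
          """fine, coarse: models mapping parameters to simulated observations;
          obs: field observations. Each outer iteration costs one fine run."""
          theta = np.asarray(theta0, dtype=float)
          obs = np.asarray(obs, dtype=float)
          delta = np.zeros_like(obs)               # coarse-to-fine discrepancy
          for _ in range(n_outer):
              res = minimize(lambda t: np.sum((coarse(t) + delta - obs) ** 2), theta)
              theta = res.x                        # cheap inner optimization
              delta = fine(theta) - coarse(theta)  # one expensive evaluation
          return theta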

  4. Sampling efficiency of modified 37-mm sampling cassettes using computational fluid dynamics.

    PubMed

    Anthony, T Renée; Sleeth, Darrah; Volckens, John

    2016-01-01

    In the U.S., most industrial hygiene practitioners continue to rely on the closed-face cassette (CFC) to assess worker exposures to hazardous dusts, primarily because of ease of use, cost, and familiarity. However, mass concentrations measured with this classic sampler underestimate exposures to larger particles throughout the inhalable particulate mass (IPM) size range (up to aerodynamic diameters of 100 μm). To investigate whether the current 37-mm inlet cap can be redesigned to better meet the IPM sampling criterion, computational fluid dynamics (CFD) models were developed, and particle sampling efficiencies associated with various modifications to the CFC inlet cap were determined. Simulations of fluid flow (standard k-epsilon turbulence model) and particle transport (laminar trajectories, 1-116 μm) were conducted using sampling flow rates of 10 L min−1 in slow moving air (0.2 m s−1) in the facing-the-wind orientation. Combinations of seven inlet shapes and three inlet diameters were evaluated as candidates to replace the current 37-mm inlet cap. For a given inlet geometry, differences in sampler efficiency between inlet diameters averaged less than 1% for particles through 100 μm, but the largest opening was found to increase the efficiency for the 116 μm particles by 14% for the flat inlet cap. A substantial reduction in sampler efficiency was identified for sampler inlets with side walls extending beyond the dimension of the external lip of the current 37-mm CFC. The inlet cap based on the 37-mm CFC dimensions with an expanded 15-mm entry provided the best agreement with facing-the-wind human aspiration efficiency. The sampler efficiency was increased with a flat entry or with a thin central lip adjacent to the new enlarged entry. This work provides a substantial body of sampling efficiency estimates as a function of particle size and inlet geometry for personal aerosol samplers. PMID:26513395

  5. Computationally efficient multidimensional analysis of complex flow cytometry data using second order polynomial histograms.

    PubMed

    Zaunders, John; Jing, Junmei; Leipold, Michael; Maecker, Holden; Kelleher, Anthony D; Koch, Inge

    2016-01-01

    Many methods have been described for automated clustering analysis of complex flow cytometry data, but so far the goal to efficiently estimate multivariate densities and their modes for a moderate number of dimensions and potentially millions of data points has not been attained. We have devised a novel approach to describing modes using second order polynomial histogram estimators (SOPHE). The method divides the data into multivariate bins and determines the shape of the data in each bin based on second order polynomials, which is an efficient computation. These calculations yield local maxima and allow joining of adjacent bins to identify clusters. The use of second order polynomials also optimally uses wide bins, such that in most cases each parameter (dimension) need only be divided into 4-8 bins, again reducing computational load. We have validated this method using defined mixtures of up to 17 fluorescent beads in 16 dimensions, correctly identifying all populations in data files of 100,000 beads in <10 s, on a standard laptop. The method also correctly clustered granulocytes, lymphocytes, including standard T, B, and NK cell subsets, and monocytes in 9-color stained peripheral blood, within seconds. SOPHE successfully clustered up to 36 subsets of memory CD4 T cells using differentiation and trafficking markers, in 14-color flow analysis, and up to 65 subpopulations of PBMC in 33-dimensional CyTOF data, showing its usefulness in discovery research. SOPHE has the potential to greatly increase efficiency of analysing complex mixtures of cells in higher dimensions. PMID:26097104
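
    A stripped-down picture of the binned mode-seeking step is sketched below in Python. It is an illustration, not SOPHE itself: where SOPHE fits a second-order polynomial inside each wide bin, this toy version simply hill-climbs on raw bin counts, keeping only the coarse-binning and join-adjacent-bins ideas.

      import numpy as np
      from itertools import product

      def histogram_clusters(data, bins=6):
          """data: (n, d) array. Assigns each point to the local maximum of a
          coarse d-dimensional histogram reached by uphill steps."""
          hist, edges = np.histogramdd(data, bins=bins)

          # bin index of each point along each dimension
          idx = np.stack([np.clip(np.searchsorted(e, data[:, j], side="right") - 1,
                                  0, bins - 1)
                          for j, e in enumerate(edges)], axis=1)

          def uphill(b):
              """Step to the densest neighbouring bin until a local maximum."""
              while True:
                  nbrs = [tuple(np.clip(np.add(b, off), 0, bins - 1))
                          for off in product((-1, 0, 1), repeat=hist.ndim)]
                  best = max(nbrs, key=lambda u: hist[u])
                  if hist[best] <= hist[tuple(b)]:
                      return tuple(b)
                  b = best

          modes, labels = {}, np.empty(len(data), dtype=int)
          for i, b in enumerate(map(tuple, idx)):
              m = uphill(b)
              labels[i] = modes.setdefault(m, len(modes))
          return labels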

  6. Improving Computational Efficiency of Prediction in Model-Based Prognostics Using the Unscented Transform

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew John; Goebel, Kai Frank

    2010-01-01

    Model-based prognostics captures system knowledge in the form of physics-based models of components, and how they fail, in order to obtain accurate predictions of end of life (EOL). EOL is predicted based on the estimated current state distribution of a component and expected profiles of future usage. In general, this requires simulations of the component using the underlying models. In this paper, we develop a simulation-based prediction methodology that achieves computational efficiency by performing only the minimal number of simulations needed in order to accurately approximate the mean and variance of the complete EOL distribution. This is performed through the use of the unscented transform, which predicts the means and covariances of a distribution passed through a nonlinear transformation. In this case, the EOL simulation acts as that nonlinear transformation. In this paper, we review the unscented transform, and describe how this concept is applied to efficient EOL prediction. As a case study, we develop a physics-based model of a solenoid valve, and perform simulation experiments to demonstrate improved computational efficiency without sacrificing prediction accuracy.
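
    The unscented-transform step admits a compact sketch. In the Python fragment below, a generic black-box simulate_eol function stands in for the paper's physics-based valve model (an assumption for illustration): 2n+1 sigma points are propagated through the simulation and reweighted to approximate the mean and variance of the EOL distribution.

      import numpy as np

      def unscented_eol(mean, cov, simulate_eol, kappa=1.0):
          """Propagate a state distribution through an EOL simulation using
          2n+1 sigma points; returns (EOL mean, EOL variance)."""
          n = len(mean)
          S = np.linalg.cholesky((n + kappa) * cov)       # matrix square root
          sigma = [mean] + [mean + S[:, i] for i in range(n)] \
                         + [mean - S[:, i] for i in range(n)]
          w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
          w[0] = kappa / (n + kappa)
          eol = np.array([simulate_eol(x) for x in sigma])  # the expensive part
          m = np.dot(w, eol)
          return m, np.dot(w, (eol - m) ** 2)

      # toy usage: EOL shrinks nonlinearly with a damage-rate parameter
      m, v = unscented_eol(np.array([0.2, 1.0]), np.diag([0.01, 0.04]),
                           lambda x: 100.0 * np.exp(-x[0]) / x[1])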

  7. Computational Efficient Upscaling Methodology for Predicting Thermal Conductivity of Nuclear Waste forms

    SciTech Connect

    Li, Dongsheng; Sun, Xin; Khaleel, Mohammad A.

    2011-09-28

    This study evaluated different upscaling methods for predicting the thermal conductivity of loaded nuclear waste forms, a heterogeneous material system, and compared the efficiency and accuracy of these methods. The thermal conductivity of a loaded nuclear waste form is an important property for the waste form integrated performance and safety code (IPSC). The effective thermal conductivity obtained from microstructure information and the local thermal conductivities of the different components is critical in predicting the life and performance of a waste form during storage: how the heat generated during storage dissipates is directly related to thermal conductivity, which in turn determines the mechanical deformation behavior, corrosion resistance, and aging performance. Several methods, including the Taylor model, the Sachs model, the self-consistent model, and statistical upscaling models, were developed and implemented. In the absence of experimental data, prediction results from the finite element method (FEM) were used as a reference to determine the accuracy of the different upscaling models. Micrographs from different loadings of nuclear waste were used in the prediction of thermal conductivity. The results demonstrated that, in terms of efficiency, the boundary models (Taylor and Sachs) are better than the self-consistent model, the statistical upscaling method, and FEM. Balancing computational resources against accuracy, statistical upscaling is a computationally efficient method for predicting the effective thermal conductivity of nuclear waste forms.
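
    For orientation, the two boundary models named above reduce, for thermal conductivity, to simple volume averages: a Taylor-type upper bound (arithmetic mean, uniform temperature gradient) and a Sachs-type lower bound (harmonic mean, uniform heat flux). The Python sketch below uses illustrative phase fractions and conductivities, not data from the study.

      import numpy as np

      def taylor_bound(frac, k):
          """Upper bound: volume-weighted arithmetic mean."""
          return float(np.dot(frac, k))

      def sachs_bound(frac, k):
          """Lower bound: volume-weighted harmonic mean."""
          return float(1.0 / np.dot(frac, 1.0 / np.asarray(k, dtype=float)))

      frac = np.array([0.7, 0.3])      # e.g. glass matrix and waste loading
      k = np.array([1.1, 5.0])         # phase conductivities, W/(m K)
      print(taylor_bound(frac, k), sachs_bound(frac, k))   # 2.27 and ~1.44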

  8. TRI2SOLID: an application of reverse engineering methods to the creation of CAD models of bone segments.

    PubMed

    Viceconti, M; Zannoni, C; Pierotti, L

    1998-06-01

    For many biomechanical engineering activities it would be useful to have the three dimensional (3D) geometry of bone segments available in form of vectorial models within computer aided design (CAD) environments. In this paper a new method for the semi-automatic conversion of a stack of CT images of a femur into a CAD solid model is described. This method is relatively simple, accurate, and requires only a 3D CAD plus a few additional programs available in the public domain. The proposed method was used to convert the CT scan data set of a human femur into a valid CAD model; the resulting solid was two times more accurate than that obtained using the commonly used procedure based on 2D segmentation. PMID:9725647

  9. CAD and CAE Analysis for Siphon Jet Toilet

    NASA Astrophysics Data System (ADS)

    Wang, Yuhua; Xiu, Guoji; Tan, Haishu

    A high-precision 3D laser scanner with dual-CCD technology was used to measure the original design sample of a siphon jet toilet. The digital toilet model was constructed from the measured point-cloud data using curve- and surface-fitting technology and CAD/CAE systems. The realizable k-ɛ two-equation eddy-viscosity turbulence model and the VOF multiphase flow model were used to simulate the flushing flow in the digital toilet model. By simulating and analyzing the distribution of the flushing flow's total pressure and the flow speed at the toilet-basin surface and in the siphoning bent tube, the toilet's performance can be evaluated efficiently and conveniently. The workflow of "establishing the digital model, simulating the flushing flow, evaluating performance, modifying the functional shape" provides a high-efficiency approach to developing new water-saving toilets.

  10. CAD-RADS(TM) Coronary Artery Disease - Reporting and Data System. An expert consensus document of the Society of Cardiovascular Computed Tomography (SCCT), the American College of Radiology (ACR) and the North American Society for Cardiovascular Imaging (NASCI). Endorsed by the American College of Cardiology.

    PubMed

    Cury, Ricardo C; Abbara, Suhny; Achenbach, Stephan; Agatston, Arthur; Berman, Daniel S; Budoff, Matthew J; Dill, Karin E; Jacobs, Jill E; Maroules, Christopher D; Rubin, Geoffrey D; Rybicki, Frank J; Schoepf, U Joseph; Shaw, Leslee J; Stillman, Arthur E; White, Charles S; Woodard, Pamela K; Leipsic, Jonathon A

    2016-01-01

    The intent of CAD-RADS - Coronary Artery Disease Reporting and Data System is to create a standardized method to communicate findings of coronary CT angiography (coronary CTA) in order to facilitate decision-making regarding further patient management. The suggested CAD-RADS classification is applied on a per-patient basis and represents the highest-grade coronary artery lesion documented by coronary CTA. It ranges from CAD-RADS 0 (Zero) for the complete absence of stenosis and plaque to CAD-RADS 5 for the presence of at least one totally occluded coronary artery and should always be interpreted in conjunction with the impression found in the report. Specific recommendations are provided for further management of patients with stable or acute chest pain based on the CAD-RADS classification. The main goal of CAD-RADS is to standardize reporting of coronary CTA results and to facilitate communication of test results to referring physicians along with suggestions for subsequent patient management. In addition, CAD-RADS will provide a framework of standardization that may benefit education, research, peer-review and quality assurance with the potential to ultimately result in improved quality of care. PMID:27318587

  11. Recent improvements in efficiency, accuracy, and convergence for implicit approximate factorization algorithms. [computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Pulliam, T. H.; Steger, J. L.

    1985-01-01

    General-purpose, centrally space-differenced implicit finite difference codes in two and three dimensions were introduced in 1977 and 1978. These codes, now called ARC2D and ARC3D, can run in either inviscid or viscous mode for steady or unsteady flow. Since the introduction of the ARC2D and ARC3D codes, overall computational efficiency has been improved through a number of algorithmic changes: the use of a spatially varying time step, the use of a sequence of mesh refinements to establish approximate solutions, various ways of reducing inversion work, improved numerical dissipation terms, and more implicit treatment of terms. The present investigation describes these improvements and quantifies their advantages and disadvantages. It is found that, using established and simple procedures, a computer code can be maintained that is competitive with specialized codes.

  12. A computationally efficient description of heterogeneous freezing: A simplified version of the Soccer ball model

    NASA Astrophysics Data System (ADS)

    Niedermeier, Dennis; Ervens, Barbara; Clauss, Tina; Voigtländer, Jens; Wex, Heike; Hartmann, Susan; Stratmann, Frank

    2014-01-01

    In a recent study, the Soccer ball model (SBM) was introduced for modeling and/or parameterizing heterogeneous ice nucleation processes. The model applies classical nucleation theory. It allows for a consistent description of both apparently singular and stochastic ice nucleation behavior, by distributing contact angles over the nucleation sites of a particle population assuming a Gaussian probability density function. The original SBM utilizes the Monte Carlo technique, which hampers its usage in atmospheric models, as fairly time-consuming calculations must be performed to obtain statistically significant results. Thus, we have developed a simplified and computationally more efficient version of the SBM. We successfully used the new SBM to parameterize experimental nucleation data of, e.g., bacterial ice nucleation. Both SBMs give identical results; however, the new model is computationally less expensive as confirmed by cloud parcel simulations. Therefore, it is a suitable tool for describing heterogeneous ice nucleation processes in atmospheric models.
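
    To make the simplification concrete, the Python sketch below computes a frozen fraction in the spirit of the simplified model: instead of Monte Carlo sampling, the single-site survival probability is averaged over the Gaussian contact-angle distribution and raised to the number of sites per particle. The rate prefactor, site area, and distribution parameters are illustrative assumptions, not the paper's fitted values.

      import numpy as np

      def f_geom(theta):
          """CNT geometric factor reducing the homogeneous nucleation barrier."""
          c = np.cos(theta)
          return (2.0 + c) * (1.0 - c) ** 2 / 4.0

      def frozen_fraction(t, n_sites=10, mu=1.5, sigma=0.2,
                          dG_hom_over_kT=80.0, j0=1e10, a_site=1e-14):
          """Fraction of particles frozen after time t (s): the single-site
          survival probability, averaged over a Gaussian contact-angle pdf,
          raised to the power n_sites (independent sites per particle)."""
          theta = np.linspace(1e-3, np.pi, 2000)
          dth = theta[1] - theta[0]
          pdf = np.exp(-0.5 * ((theta - mu) / sigma) ** 2)
          pdf /= pdf.sum() * dth                            # normalize on grid
          j = j0 * np.exp(-dG_hom_over_kT * f_geom(theta))  # per-site rate
          survive_site = np.sum(pdf * np.exp(-j * a_site * t)) * dth
          return 1.0 - survive_site ** n_sites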

  13. An efficient computational method for use in structural studies of crystals with substitutional disorder.

    PubMed

    Poloni, Roberta; Íñiguez, Jorge; García, Alberto; Canadell, Enric

    2010-10-20

    We present a computationally efficient semi-empirical method, based on standard first-principles techniques and the so-called virtual crystal approximation, for determining the average atomic structure of crystals with substitutional disorder. We show that, making use of a minimal amount of experimental information, it is possible to define convenient figures of merit that allow us to recast the determination of the average atomic ordering within the unit cell as a minimization problem. We have tested our approach by applying it to a wide variety of materials, ranging from oxynitrides to borocarbides and transition-metal perovskite oxides. In all the cases we were able to reproduce the experimental solution, when it exists, or the first-principles result obtained by means of much more computationally intensive approaches. PMID:21386597

  14. Use of global functions for improvement in efficiency of nonlinear analysis. [in computer structural displacement estimation

    NASA Technical Reports Server (NTRS)

    Almroth, B. O.; Stehlin, P.; Brogan, F. A.

    1981-01-01

    A method for improving the efficiency of nonlinear structural analysis by the use of global displacement functions is presented. The computer programs include options to define the global functions as input or let the program automatically select and update these functions. The program was applied to a number of structures: (1) 'pear-shaped cylinder' in compression, (2) bending of a long cylinder, (3) spherical shell subjected to point force, (4) panel with initial imperfections, (5) cylinder with cutouts. The sample cases indicate the usefulness of the procedure in the solution of nonlinear structural shell problems by the finite element method. It is concluded that the use of global functions for extrapolation will lead to savings in computer time.

  15. Efficient computation of Hamiltonian matrix elements between non-orthogonal Slater determinants

    NASA Astrophysics Data System (ADS)

    Utsuno, Yutaka; Shimizu, Noritaka; Otsuka, Takaharu; Abe, Takashi

    2013-01-01

    We present an efficient numerical method for computing Hamiltonian matrix elements between non-orthogonal Slater determinants, focusing on the most time-consuming component of the calculation that involves a sparse array. In the usual case where many matrix elements should be calculated, this computation can be transformed into a multiplication of dense matrices. It is demonstrated that the present method based on the matrix-matrix multiplication attains ~80% of the theoretical peak performance measured on systems equipped with modern microprocessors, a factor of 5-10 better than the normal method using indirectly indexed arrays to treat a sparse array. The reason for such different performances is discussed from the viewpoint of memory access.
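
    The core trick — trading indirectly indexed accumulation for one dense matrix-matrix multiplication — fits in a few lines. The NumPy sketch below uses toy random data (an assumption; in the paper the sparse array arises from non-orthogonal Slater-determinant overlaps) and verifies that both forms produce identical results.

      import numpy as np

      rng = np.random.default_rng(0)
      n, m, nnz, k = 200, 300, 1000, 50
      rows = rng.integers(0, n, nnz)           # sparse array in coordinate form
      cols = rng.integers(0, m, nnz)
      vals = rng.standard_normal(nnz)
      X = rng.standard_normal((m, k))          # many elements evaluated at once

      # indirectly indexed version: gather/scatter through index arrays
      Y1 = np.zeros((n, k))
      for r, c, v in zip(rows, cols, vals):
          Y1[r] += v * X[c]

      # dense version: scatter once into a dense matrix, then one GEMM,
      # which runs near the processor's peak floating-point rate
      A = np.zeros((n, m))
      np.add.at(A, (rows, cols), vals)
      Y2 = A @ X
      assert np.allclose(Y1, Y2)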

  16. A network of spiking neurons for computing sparse representations in an energy efficient way

    PubMed Central

    Hu, Tao; Genkin, Alexander; Chklovskii, Dmitri B.

    2013-01-01

    Computing sparse redundant representations is an important problem both in applied mathematics and neuroscience. In many applications, this problem must be solved in an energy efficient way. Here, we propose a hybrid distributed algorithm (HDA), which solves this problem on a network of simple nodes communicating via low-bandwidth channels. HDA nodes perform both gradient-descent-like steps on analog internal variables and coordinate-descent-like steps via quantized external variables communicated to each other. Interestingly, such operation is equivalent to a network of integrate-and-fire neurons, suggesting that HDA may serve as a model of neural computation. We compare the numerical performance of HDA with existing algorithms and show that in the asymptotic regime the representation error of HDA decays with time, t, as 1/t. We show that HDA is stable against time-varying noise, specifically, the representation error decays as 1/t for Gaussian white noise. PMID:22920853
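
    A widely known relative of this scheme, the locally competitive algorithm (LCA), makes the gradient-plus-threshold structure easy to see. The Python sketch below implements plain LCA (an illustration of the general idea only; the HDA described above additionally quantizes the messages exchanged between nodes into spike-like events).

      import numpy as np

      def lca(x, Phi, lam=0.1, dt=0.05, steps=2000):
          """Sparse-code x against dictionary Phi ((d, k), unit-norm columns).
          u: analog internal (membrane-like) potentials; a: thresholded
          activations that inhibit competing units."""
          k = Phi.shape[1]
          u = np.zeros(k)
          G = Phi.T @ Phi - np.eye(k)          # lateral inhibition weights
          b = Phi.T @ x                        # feedforward drive
          for _ in range(steps):
              a = np.where(np.abs(u) > lam, u, 0.0)
              u += dt * (b - u - G @ a)        # leaky integration
          return np.where(np.abs(u) > lam, u, 0.0)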

  17. Modeling weakly-ionized plasmas in magnetic field: A new computationally-efficient approach

    NASA Astrophysics Data System (ADS)

    Parent, Bernard; Macheret, Sergey O.; Shneider, Mikhail N.

    2015-11-01

    Despite its success at simulating accurately both non-neutral and quasi-neutral weakly-ionized plasmas, the drift-diffusion model has been observed to be a particularly stiff set of equations. Recently, it was demonstrated that the stiffness of the system could be relieved by rewriting the equations such that the potential is obtained from Ohm's law rather than Gauss's law while adding some source terms to the ion transport equation to ensure that Gauss's law is satisfied in non-neutral regions. Although the latter was applicable to multicomponent and multidimensional plasmas, it could not be used for plasmas in which the magnetic field was significant. This paper hence proposes a new computationally-efficient set of electron and ion transport equations that can be used not only for a plasma with multiple types of positive and negative ions, but also for a plasma in magnetic field. Because the proposed set of equations is obtained from the same physical model as the conventional drift-diffusion equations without introducing new assumptions or simplifications, it results in the same exact solution when the grid is refined sufficiently while being more computationally efficient: not only is the proposed approach considerably less stiff and hence requires fewer iterations to reach convergence but it yields a converged solution that exhibits a significantly higher resolution. The combined faster convergence and higher resolution is shown to result in a hundredfold increase in computational efficiency for some typical steady and unsteady plasma problems including non-neutral cathode and anode sheaths as well as quasi-neutral regions.

  18. Efficient Computation of Closed-loop Frequency Response for Large Order Flexible Systems

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Giesy, Daniel P.

    1997-01-01

    An efficient and robust computational scheme is given for the calculation of the frequency response function of a large order, flexible system implemented with a linear, time invariant control system. Advantage is taken of the highly structured sparsity of the system matrix of the plant based on a model of the structure using normal mode coordinates. The computational time per frequency point of the new computational scheme is a linear function of system size, a significant improvement over traditional, full-matrix techniques whose computational times per frequency point range from quadratic to cubic functions of system size. This permits the practical frequency domain analysis of systems of much larger order than by traditional, full-matrix techniques. Formulations are given for both open- and closed-loop systems. Numerical examples are presented showing the advantages of the present formulation over traditional approaches, both in speed and in accuracy. Using a model with 703 structural modes, a speed-up of almost two orders of magnitude was observed while accuracy improved by up to 5 decimal places.
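
    The sparsity being exploited is easiest to see in modal coordinates, where the plant decouples mode by mode. The Python sketch below (a generic modal-form illustration with made-up data, not the authors' exact formulation, and without the feedback loop) evaluates the open-loop frequency response in O(n) work per frequency point rather than via a dense resolvent solve.

      import numpy as np

      def modal_freq_response(wn, zeta, B, C, w):
          """H(jw) for decoupled modes q_i'' + 2 zeta_i wn_i q_i' + wn_i^2 q_i
          = (B u)_i, output y = C q; wn, zeta: (n,); B: (n, nu); C: (ny, n)."""
          den = wn ** 2 - w ** 2 + 2j * zeta * wn * w     # (n,) mode denominators
          return C @ (B / den[:, None])                   # (ny, nu), O(n) work

      n = 703                                             # as in the example above
      rng = np.random.default_rng(1)
      wn = np.sort(rng.uniform(1.0, 500.0, n)); zeta = np.full(n, 0.01)
      B = rng.standard_normal((n, 1)); C = rng.standard_normal((1, n))
      H = [modal_freq_response(wn, zeta, B, C, w) for w in np.linspace(1, 600, 500)]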

  19. Algorithms for Efficient Computation of Transfer Functions for Large Order Flexible Systems

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Giesy, Daniel P.

    1998-01-01

    An efficient and robust computational scheme is given for the calculation of the frequency response function of a large order, flexible system implemented with a linear, time invariant control system. Advantage is taken of the highly structured sparsity of the system matrix of the plant based on a model of the structure using normal mode coordinates. The computational time per frequency point of the new computational scheme is a linear function of system size, a significant improvement over traditional, full-matrix techniques whose computational times per frequency point range from quadratic to cubic functions of system size. This permits the practical frequency domain analysis of systems of much larger order than by traditional, full-matrix techniques. Formulations are given for both open- and closed-loop systems. Numerical examples are presented showing the advantages of the present formulation over traditional approaches, both in speed and in accuracy. Using a model with 703 structural modes, the present method was up to two orders of magnitude faster than a traditional method. The present method generally showed good to excellent accuracy throughout the range of test frequencies, while traditional methods gave adequate accuracy for lower frequencies, but generally deteriorated in performance at higher frequencies with worst case errors being many orders of magnitude times the correct values.

  20. Interoperation of heterogeneous CAD tools in Ptolemy II

    NASA Astrophysics Data System (ADS)

    Liu, Jie; Wu, Bicheng; Liu, Xiaojun; Lee, Edward A.

    1999-03-01

    Typical complex systems that involve microsensors and microactuators exhibit heterogeneity both at the implementation level and at the problem level. For example, a system can be modeled using discrete events for digital circuits and SPICE-like analog descriptions for sensors. This heterogeneity exists not only across different implementation domains, but also at different levels of abstraction. This naturally leads to a heterogeneous approach to system design that uses domain-specific models of computation (MoC) at various levels of abstraction to define a system, and leverages multiple CAD tools for simulation, verification, and synthesis. As the size and scope of the system increase, the integration becomes too difficult and unmanageable if different tools are coordinated using simple scripts. In addition, for MEMS devices and mixed-signal circuits, it is essential to integrate tools with different MoCs to simulate the whole system. Ptolemy II, a heterogeneous system-level design tool, supports the interaction among different MoCs. This paper discusses heterogeneous CAD tool interoperability in the Ptolemy II framework. The key is to understand the semantic interface and classify the tools by their MoC and their level of abstraction. Interfaces are designed for each domain so that external tools can be easily wrapped. The interoperability of the tools then becomes the interoperability of the semantics. Ptolemy II can act as the standard interface among different tools to achieve the overall design modeling. A micro-accelerometer with digital feedback is studied as an example.

  1. Efficient solid state NMR powder simulations using SMP and MPP parallel computation.

    PubMed

    Kristensen, Jørgen Holm; Farnan, Ian

    2003-04-01

    Methods for parallel simulation of solid state NMR powder spectra are presented for both shared and distributed memory parallel supercomputers. For shared memory architectures the performance of simulation programs implementing the OpenMP application programming interface is evaluated. It is demonstrated that the design of correct and efficient shared memory parallel programs is difficult as the performance depends on data locality and cache memory effects. The distributed memory parallel programming model is examined for simulation programs using the MPI message passing interface. The results reveal that both shared and distributed memory parallel computation are very efficient with an almost perfect application speedup and may be applied to the most advanced powder simulations. PMID:12713968

  2. An accurate and computationally efficient model for membrane-type circular-symmetric micro-hotplates.

    PubMed

    Khan, Usman; Falconi, Christian

    2014-01-01

    Ideally, the design of high-performance micro-hotplates would require a large number of simulations because of the existence of many important design parameters as well as the possibly crucial effects of both spread and drift. However, the computational cost of FEM simulations, which are the only available tool for accurately predicting the temperature in micro-hotplates, is very high. As a result, micro-hotplate designers generally have no effective simulation-tools for the optimization. In order to circumvent these issues, here, we propose a model for practical circular-symmetric micro-hot-plates which takes advantage of modified Bessel functions, computationally efficient matrix-approach for considering the relevant boundary conditions, Taylor linearization for modeling the Joule heating and radiation losses, and external-region-segmentation strategy in order to accurately take into account radiation losses in the entire micro-hotplate. The proposed model is almost as accurate as FEM simulations and two to three orders of magnitude more computationally efficient (e.g., 45 s versus more than 8 h). The residual errors, which are mainly associated to the undesired heating in the electrical contacts, are small (e.g., few degrees Celsius for an 800 °C operating temperature) and, for important analyses, almost constant. Therefore, we also introduce a computationally-easy single-FEM-compensation strategy in order to reduce the residual errors to about 1 °C. As illustrative examples of the power of our approach, we report the systematic investigation of a spread in the membrane thermal conductivity and of combined variations of both ambient and bulk temperatures. Our model enables a much faster characterization of micro-hotplates and, thus, a much more effective optimization prior to fabrication. PMID:24763214
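
    The modified-Bessel building block at the heart of such models can be written down directly. The Python sketch below is an illustration under strong assumptions — a single annular membrane region with prescribed temperature rises at its two edges and all losses lumped into one coefficient m — rather than the paper's full multi-region model with Joule heating and radiation.

      import numpy as np
      from scipy.special import iv, kv

      def annulus_temperature(r, r1, r2, T1, T2, m):
          """Steady temperature rise T(r) above ambient on r1 <= r <= r2,
          solving T'' + T'/r - m^2 T = 0 (thin-membrane radial conduction
          with linearized losses); T(r1) = T1, T(r2) = T2."""
          M = np.array([[iv(0, m * r1), kv(0, m * r1)],
                        [iv(0, m * r2), kv(0, m * r2)]])
          a, b = np.linalg.solve(M, [T1, T2])   # fit the two Bessel amplitudes
          return a * iv(0, m * r) + b * kv(0, m * r)

      r = np.linspace(50e-6, 500e-6, 200)       # heater edge to membrane rim (m)
      T = annulus_temperature(r, 50e-6, 500e-6, 800.0, 0.0, m=2e3)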

  3. An Accurate and Computationally Efficient Model for Membrane-Type Circular-Symmetric Micro-Hotplates

    PubMed Central

    Khan, Usman; Falconi, Christian

    2014-01-01

    Ideally, the design of high-performance micro-hotplates would require a large number of simulations because of the existence of many important design parameters as well as the possibly crucial effects of both spread and drift. However, the computational cost of FEM simulations, which are the only available tool for accurately predicting the temperature in micro-hotplates, is very high. As a result, micro-hotplate designers generally have no effective simulation-tools for the optimization. In order to circumvent these issues, here, we propose a model for practical circular-symmetric micro-hot-plates which takes advantage of modified Bessel functions, computationally efficient matrix-approach for considering the relevant boundary conditions, Taylor linearization for modeling the Joule heating and radiation losses, and external-region-segmentation strategy in order to accurately take into account radiation losses in the entire micro-hotplate. The proposed model is almost as accurate as FEM simulations and two to three orders of magnitude more computationally efficient (e.g., 45 s versus more than 8 h). The residual errors, which are mainly associated to the undesired heating in the electrical contacts, are small (e.g., few degrees Celsius for an 800 °C operating temperature) and, for important analyses, almost constant. Therefore, we also introduce a computationally-easy single-FEM-compensation strategy in order to reduce the residual errors to about 1 °C. As illustrative examples of the power of our approach, we report the systematic investigation of a spread in the membrane thermal conductivity and of combined variations of both ambient and bulk temperatures. Our model enables a much faster characterization of micro-hotplates and, thus, a much more effective optimization prior to fabrication. PMID:24763214

  4. Hierarchy of Efficiently Computable and Faithful Lower Bounds to Quantum Discord

    NASA Astrophysics Data System (ADS)

    Piani, Marco

    2016-08-01

    Quantum discord expresses a fundamental nonclassicality of correlations that is more general than entanglement, but that, in its standard definition, is not easily evaluated. We derive a hierarchy of computationally efficient lower bounds to the standard quantum discord. Every nontrivial element of the hierarchy constitutes by itself a valid discordlike measure, based on a fundamental feature of quantum correlations: their lack of shareability. Our approach emphasizes how the difference between entanglement and discord depends on whether shareability is intended as a static property or as a dynamical process.

  5. Geometrical splitting technique to improve the computational efficiency in Monte Carlo calculations for proton therapy

    NASA Astrophysics Data System (ADS)

    Ramos-Mendez, J. A.; Perl, J.; Faddegon, B.; Paganetti, H.

    2012-10-01

    In this work, the well accepted particle splitting technique has been adapted to proton therapy and implemented in a new Monte Carlo simulation tool (TOPAS) for modeling the gantry mounted treatment nozzles at the Northeast Proton Therapy Center (NPTC) at Massachusetts General Hospital (MGH). Gains up to a factor of 14.5 in computational efficiency were reached with respect to a reference simulation in the generation of the phase space data in the cylindrically symmetric region of the nozzle. Comparisons between dose profiles in a water tank for several configurations show agreement between the simulations done with and without particle splitting within the statistical precision.
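
    The mechanics of particle splitting are simple to illustrate. The Python sketch below is generic (hypothetical names and a stand-in transport function, not TOPAS code): each of the N copies carries 1/N of the parent's statistical weight, so tally expectations are unchanged while the variance per simulated history drops.

      import random

      def transport(particle):
          """Stand-in for expensive downstream transport; returns a tally."""
          return particle["weight"] * random.random()

      def split_and_transport(particle, n_split=8):
          """Replace one particle by n_split statistically identical copies,
          each with weight/n_split, at the surface where splitting occurs."""
          w = particle["weight"] / n_split
          return sum(transport({**particle, "weight": w}) for _ in range(n_split))

      random.seed(0)
      print(split_and_transport({"weight": 1.0, "z_cm": 50.0}))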

  6. Hierarchy of Efficiently Computable and Faithful Lower Bounds to Quantum Discord.

    PubMed

    Piani, Marco

    2016-08-19

    Quantum discord expresses a fundamental nonclassicality of correlations that is more general than entanglement, but that, in its standard definition, is not easily evaluated. We derive a hierarchy of computationally efficient lower bounds to the standard quantum discord. Every nontrivial element of the hierarchy constitutes by itself a valid discordlike measure, based on a fundamental feature of quantum correlations: their lack of shareability. Our approach emphasizes how the difference between entanglement and discord depends on whether shareability is intended as a static property or as a dynamical process. PMID:27588837

  7. CAD/CAM of braided preforms for advanced composites

    NASA Astrophysics Data System (ADS)

    Yang, Gui; Pastore, Christopher; Tsai, Yung Jia; Soebroto, Heru; Ko, Frank

    A CAD/CAM system for braiding to produce preforms for advanced textile structural composites is presented in this paper. The CAD and CAM systems are illustrated in detail. The CAD system identifies the fiber placement and orientation needed to fabricate a braided structure over a mandrel, for subsequent composite formation. The CAM system uses the design parameters generated by the CAD system to control the braiding machine. Experimental evidence demonstrating the success of combining these two technologies to form a unified CAD/CAM system for the manufacture of braided fabric preforms with complex structural shapes is presented.

  8. PVT: An Efficient Computational Procedure to Speed up Next-generation Sequence Analysis

    PubMed Central

    2014-01-01

    Background: High-throughput Next-Generation Sequencing (NGS) techniques are advancing genomics and molecular biology research. This technology generates substantially large data, which poses a major challenge to scientists seeking efficient, cost- and time-effective solutions for analysing such data. Further, across the different types of NGS data, certain common and challenging analysis steps are involved. Spliced alignment is one such fundamental step in NGS data analysis, and it is extremely computationally intensive as well as time-consuming. Serious problems exist even with the most widely used spliced alignment tools. TopHat is one such widely used spliced alignment tool which, although it supports multithreading, does not efficiently utilize computational resources in terms of CPU utilization and memory. Here we introduce PVT (Pipelined Version of TopHat), in which we take a modular approach by breaking TopHat's serial execution into a pipeline of multiple stages, thereby increasing the degree of parallelization and computational resource utilization. We thus address the shortcomings of TopHat so as to analyse large NGS data efficiently. Results: We analysed the SRA datasets SRX026839 and SRX026838, consisting of single-end reads, and the SRA dataset SRR1027730, consisting of paired-end reads. We used TopHat v2.0.8 to analyse these datasets and noted the CPU usage, memory footprint, and execution time during spliced alignment. With this basic information, we designed PVT, a pipelined version of TopHat that removes the redundant computational steps during spliced alignment and breaks the job into a pipeline of multiple stages (each comprising different steps) to improve its resource utilization, thus reducing the execution time. Conclusions: PVT provides an improvement over TopHat for spliced alignment of NGS data. PVT resulted in a reduction of the execution time to ~23% for the single-end read dataset. Further, PVT designed

  9. Use of cadA-Specific Primers and DNA Probes as Tools to Select Cadmium Biosorbents with Potential in Remediation Strategies.

    PubMed

    Icgen, Bulent; Yilmaz, Fadime

    2016-05-01

    Biosorption, using cadmium-resistant bacterial isolates, is often regarded as a relatively inexpensive and efficient way of cleaning up wastes, sediments, or soils polluted with cadmium. Therefore, many efforts have been devoted to the isolation of cadmium-resistant isolates for the efficient management of cadmium remediation processes. However, the isolation, identification, and in situ screening of efficient cadmium-resistant isolates are primary challenges. To overcome these challenges, in this study, primers and DNA probes specific for cadA, the cadmium resistance coding gene, were used to identify and screen cadmium-resistant bacteria in cadmium-polluted river waters through polymerase chain reaction (PCR) and fluorescence in situ hybridization (FISH). PCR amplification of the cadA amplicon coupled with 16S rRNA sequencing revealed various gram-positive and -negative bacterial isolates harboring cadA. Accordingly, a cadA-mediated DNA probe was prepared and used for in situ screening of cadmium-resistant isolates from water samples collected from cadmium-polluted river waters. The FISH analyses of the cadA probe showed highly specific and efficient hybridization with cadA-harboring isolates. The use of primers and DNA probes specific for the cadA gene appears to be a very helpful tool for the selection and screening of cadmium biosorbents with potential to be used in the remediation of cadmium-polluted sites. PMID:26969609

  10. Computationally efficient characterization of potential energy surfaces based on fingerprint distances.

    PubMed

    Schaefer, Bastian; Goedecker, Stefan

    2016-07-21

    An analysis of the network defined by the potential energy minima of multi-atomic systems and their connectivity via reaction pathways that go through transition states allows us to understand important characteristics like thermodynamic, dynamic, and structural properties. Unfortunately computing the transition states and reaction pathways in addition to the significant energetically low-lying local minima is a computationally demanding task. We here introduce a computationally efficient method that is based on a combination of the minima hopping global optimization method and the insight that uphill barriers tend to increase with increasing structural distances of the educt and product states. This method allows us to replace the exact connectivity information and transition state energies with alternative and approximate concepts. Without adding any significant additional cost to the minima hopping global optimization approach, this method allows us to generate an approximate network of the minima, their connectivity, and a rough measure for the energy needed for their interconversion. This can be used to obtain a first qualitative idea on important physical and chemical properties by means of a disconnectivity graph analysis. Besides the physical insight obtained by such an analysis, the gained knowledge can be used to make a decision if it is worthwhile or not to invest computational resources for an exact computation of the transition states and the reaction pathways. Furthermore it is demonstrated that the here presented method can be used for finding physically reasonable interconversion pathways that are promising input pathways for methods like transition path sampling or discrete path sampling. PMID:27448868

  11. Computer-Assisted Dieting: Effects of a Randomized Nutrition Intervention

    ERIC Educational Resources Information Center

    Schroder, Kerstin E. E.

    2011-01-01

    Objectives: To compare the effects of a computer-assisted dieting intervention (CAD) with and without self-management training on dieting among 55 overweight and obese adults. Methods: Random assignment to a single-session nutrition intervention (CAD-only) or a combined CAD plus self-management group intervention (CADG). Dependent variables were…

  12. Complete-mouth rehabilitation using a 3D printing technique and the CAD/CAM double scanning method: A clinical report.

    PubMed

    Joo, Han-Sung; Park, Sang-Won; Yun, Kwi-Dug; Lim, Hyun-Pil

    2016-07-01

    According to evolving computer-aided design/computer-aided manufacturing (CAD/CAM) technology, ceramic materials such as zirconia can be used to create fixed dental prostheses for partial removable dental prostheses. Since 3D printing technology was introduced a few years ago, dental applications of this technique have gradually increased. This clinical report presents a complete-mouth rehabilitation using 3D printing and the CAD/CAM double-scanning method. PMID:26946918

  13. CBT Pilot Program Instructional Guide. Basic Drafting Skills Curriculum Delivered through CAD Workstations and Artificial Intelligence Software.

    ERIC Educational Resources Information Center

    Smith, Richard J.; Sauer, Mardelle A.

    This guide is intended to assist teachers in using computer-aided design (CAD) workstations and artificial intelligence software to teach basic drafting skills. The guide outlines a 7-unit shell program that may also be used as a generic authoring system capable of supporting computer-based training (CBT) in other subject areas. The first section…

  14. AutoCAD-To-GIFTS Translator Program

    NASA Technical Reports Server (NTRS)

    Jones, Andrew

    1989-01-01

    AutoCAD-to-GIFTS translator program, ACTOG, developed to facilitate quick generation of small finite-element models using CASA/GIFTS finite-element modeling program. Reads geometric data of drawing from Data Exchange File (DXF) used in AutoCAD and other PC-based drafting programs. Geometric entities recognized by ACTOG include points, lines, arcs, solids, three-dimensional lines, and three-dimensional faces. From this information, ACTOG creates GIFTS SRC file, which is then read into GIFTS preprocessor BULKM, or modified and read into EDITM, to create finite-element model. SRC file used as is or edited for any number of uses. Written in Microsoft Quick-Basic (Version 2.0).
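
    For readers unfamiliar with DXF, the Python sketch below shows the flavor of the parsing involved. It is a minimal, hypothetical reader that handles only LINE entities; a real translator like ACTOG also handles points, arcs, solids, 3D lines, and 3D faces, as well as DXF's section structure. ASCII DXF files are sequences of alternating group-code and value lines.

      def read_dxf_lines(path):
          """Return [(start_xyz, end_xyz), ...] for LINE entities in an
          ASCII DXF file (assumes code/value pairs aligned from the start)."""
          with open(path) as f:
              raw = [ln.strip() for ln in f]
          pairs = zip(raw[0::2], raw[1::2])       # (group code, value) pairs
          lines, cur = [], None
          for code, value in pairs:
              if code == "0":                     # start of a new entity
                  if cur:
                      lines.append(cur)
                  cur = {} if value == "LINE" else None
              elif cur is not None and code in ("10", "20", "30", "11", "21", "31"):
                  cur[code] = float(value)        # 10/20/30: start x,y,z;
                                                  # 11/21/31: end x,y,z
          if cur:
              lines.append(cur)
          return [((d["10"], d["20"], d.get("30", 0.0)),
                   (d["11"], d["21"], d.get("31", 0.0))) for d in lines]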

  15. A Novel, Computationally Efficient Multipolar Model Employing Distributed Charges for Molecular Dynamics Simulations.

    PubMed

    Devereux, Mike; Raghunathan, Shampa; Fedorov, Dmitri G; Meuwly, Markus

    2014-10-14

    A truncated multipole expansion can be re-expressed exactly using an appropriate arrangement of point charges. This means that groups of point charges that are shifted away from nuclear coordinates can be used to achieve accurate electrostatics for molecular systems. We introduce a multipolar electrostatic model formulated in this way for use in computationally efficient multipolar molecular dynamics simulations with well-defined forces and energy conservation in NVE (constant number-volume-energy) simulations. A framework is introduced to distribute torques arising from multipole moments throughout a molecule, and a refined fitting approach is suggested to obtain atomic multipole moments that are optimized for accuracy and numerical stability in a force field context. The formulation of the charge model is outlined as it has been implemented into CHARMM, with application to test systems involving H2O and chlorobenzene. As well as ease of implementation and computational efficiency, the approach can be used to provide snapshots for multipolar QM/MM calculations in QM/MM-MD studies and easily combined with a standard point-charge force field to allow mixed multipolar/point charge simulations of large systems. PMID:26588121
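
    The opening claim — that a truncated multipole expansion can be reproduced by suitably shifted point charges — is easy to verify numerically for the lowest nontrivial moment. The Python sketch below (illustrative values only, unrelated to the CHARMM implementation) checks that two opposite charges separated by d, with q·d = |p|, reproduce a point dipole's potential at distances much larger than d.

      import numpy as np

      K = 1.0 / (4 * np.pi * 8.8541878128e-12)    # Coulomb constant (SI)

      def phi_point_charges(r, charges):
          """Potential at r from (q, position) point charges."""
          return K * sum(q / np.linalg.norm(r - pos) for q, pos in charges)

      def phi_point_dipole(r, p):
          """Potential at r from an ideal point dipole p at the origin."""
          return K * np.dot(p, r) / np.linalg.norm(r) ** 3

      p = np.array([0.0, 0.0, 3.336e-30])     # ~1 Debye along z (C m)
      d = 1e-12                               # charge separation (m), << |r|
      q = p[2] / d
      pair = [(q, np.array([0.0, 0.0, d / 2])),
              (-q, np.array([0.0, 0.0, -d / 2]))]
      r = np.array([1.0e-9, 0.5e-9, 0.8e-9])  # observation point ~1.4 nm out
      print(phi_point_charges(r, pair), phi_point_dipole(r, p))  # agree ~1e-7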

  16. Poisson Green's function method for increased computational efficiency in numerical calculations of Coulomb coupling elements

    NASA Astrophysics Data System (ADS)

    Zimmermann, Anke; Kuhn, Sandra; Richter, Marten

    2016-01-01

    Often, the calculation of Coulomb coupling elements for quantum dynamical treatments, e.g., in cluster or correlation expansion schemes, requires the evaluation of a six dimensional spatial integral. Therefore, it represents a significant limiting factor in quantum mechanical calculations. If the size or the complexity of the investigated system increases, many coupling elements need to be determined. The resulting computational constraints require an efficient method for a fast numerical calculation of the Coulomb coupling. We present a computational method to reduce the numerical complexity by decreasing the number of spatial integrals for arbitrary geometries. We use a Green's function formulation of the Coulomb coupling and introduce a generalized scalar potential as solution of a generalized Poisson equation with a generalized charge density as the inhomogeneity. That enables a fast calculation of Coulomb coupling elements and, additionally, a straightforward inclusion of boundary conditions and arbitrarily spatially dependent dielectrics through the Coulomb Green's function. Particularly, if many coupling elements are included, the presented method, which is not restricted to specific symmetries of the model, presents a promising approach for increasing the efficiency of numerical calculations of the Coulomb interaction. To demonstrate the wide range of applications, we calculate internanostructure couplings, such as the Förster coupling, and illustrate the inclusion of symmetry considerations in the method for the Coulomb coupling between bound quantum dot states and unbound continuum states.
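
    A periodic toy version of the reduction makes the saving explicit: solve one Poisson equation for the generalized potential, then contract it with the second charge density, so the six-dimensional integral becomes one 3D solve plus one 3D quadrature. The NumPy sketch below assumes a cubic periodic box and Gaussian-like units; the Green's-function treatment described above additionally handles open boundaries and spatially varying dielectrics.

      import numpy as np

      def coulomb_coupling(rho1, rho2, L):
          """<rho2|phi1> on an n^3 periodic box of side L, where phi1 solves
          laplacian(phi1) = -4 pi rho1 (rho1 assumed charge-neutral)."""
          n = rho1.shape[0]
          k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
          kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
          k2 = kx ** 2 + ky ** 2 + kz ** 2
          k2[0, 0, 0] = 1.0                    # avoid 0/0 for the mean mode
          phi1_k = 4 * np.pi * np.fft.fftn(rho1) / k2
          phi1_k[0, 0, 0] = 0.0
          phi1 = np.fft.ifftn(phi1_k).real
          return np.sum(rho2 * phi1) * (L / n) ** 3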

  17. An efficient algorithm to compute row and column counts for sparse Cholesky factorization

    SciTech Connect

    Gilbert, J.R. ); Ng, E.G.; Peyton, B.W. )

    1992-09-01

    Let an undirected graph G be given, along with a specified depth-first spanning tree T. We give almost-linear-time algorithms to solve the following two problems: First, for every vertex v, compute the number of descendants w of v for which some descendant of w is adjacent (in G) to v. Second, for every vertex v, compute the number of ancestors of v that are adjacent (in G) to at least one descendant of v. These problems arise in Cholesky and QR factorizations of sparse matrices. Our algorithms can be used to determine the number of nonzero entries in each row and column of the triangular factor of a matrix from the zero/nonzero structure of the matrix. Such a prediction makes storage allocation for sparse matrix factorizations more efficient. Our algorithms run in time linear in the size of the input times a slowly-growing inverse of Ackermann's function. The best previously known algorithms for these problems ran in time linear in the sum of the nonzero counts, which is usually much larger. We give experimental results demonstrating the practical efficiency of the new algorithms.
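
    To pin down the two definitions, a brute-force Python reference is sketched below (quadratic or worse on purpose, for clarity; the algorithms described above compute the same counts in almost-linear time).

      def descendants(parent, v):
          """All descendants of v (v included) in the tree parent[] encodes."""
          kids = [[] for _ in parent]
          for c, p in enumerate(parent):
              if p is not None:
                  kids[p].append(c)
          out, stack = set(), [v]
          while stack:
              u = stack.pop()
              out.add(u)
              stack.extend(kids[u])
          return out

      def tree_counts(parent, adj):
          """parent[v]: tree parent or None at the root; adj: vertex -> set of
          G-neighbours. Returns, for each v: (a) descendants w of v with some
          descendant of w adjacent to v; (b) ancestors of v adjacent to at
          least one descendant of v."""
          n = len(parent)
          desc = [descendants(parent, v) for v in range(n)]
          a = [sum(1 for w in desc[v] if any(x in adj[v] for x in desc[w]))
               for v in range(n)]
          b = [sum(1 for u in range(n) if u != v and v in desc[u]
                   and any(x in adj[u] for x in desc[v]))
               for v in range(n)]
          return a, b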

  18. An efficient algorithm to compute row and column counts for sparse Cholesky factorization

    SciTech Connect

    Gilbert, J.R.; Ng, E.G.; Peyton, B.W.

    1992-09-01

    Let an undirected graph G be given, along with a specified depth-first spanning tree T. We give almost-linear-time algorithms to solve the following two problems: First, for every vertex v, compute the number of descendants w of v for which some descendant of w is adjacent (in G) to v. Second, for every vertex v, compute the number of ancestors of v that are adjacent (in G) to at least one descendant of v. These problems arise in Cholesky and QR factorizations of sparse matrices. Our algorithms can be used to determine the number of nonzero entries in each row and column of the triangular factor of a matrix from the zero/nonzero structure of the matrix. Such a prediction makes storage allocation for sparse matrix factorizations more efficient. Our algorithms run in time linear in the size of the input times a slowly-growing inverse of Ackermann's function. The best previously known algorithms for these problems ran in time linear in the sum of the nonzero counts, which is usually much larger. We give experimental results demonstrating the practical efficiency of the new algorithms.

  19. Management of CAD/CAM information: Key to improved manufacturing productivity

    NASA Technical Reports Server (NTRS)

    Fulton, R. E.; Brainin, J.

    1984-01-01

    A key element to improved industry productivity is effective management of CAD/CAM information. To stimulate advancements in this area, a joint NASA/Navy/Industry project designated Integrated Programs for Aerospace-Vehicle Design (IPAD) is underway with the goal of raising aerospace industry productivity through advancement of technology to integrate and manage information involved in the design and manufacturing process. The project complements traditional NASA/DOD research to develop aerospace design technology and the Air Force's Integrated Computer-Aided Manufacturing (ICAM) program to advance CAM technology. IPAD research is guided by an Industry Technical Advisory Board (ITAB) composed of over 100 representatives from aerospace and computer companies. The IPAD accomplishments to date in development of requirements and prototype software for various levels of company-wide CAD/CAM data management are summarized, and plans for development of technology for management of distributed CAD/CAM data and information required to control future knowledge-based CAD/CAM systems are discussed.

  20. Integrating conventional and CAD/CAM digital techniques for establishing canine protected articulation: A clinical report.

    PubMed

    El Kerdani, Tarek; Nimmo, Arthur

    2016-05-01

    Canine protected articulation is widely accepted for patients requiring extensive oral rehabilitation. Computer-aided design and computer-aided manufacturing (CAD/CAM) restorations have been primarily designed in occlusion at the maximum intercuspal position. Designing a virtual articulator that is capable of accepting excursive occlusal records and duplicating the mandibular movements is a challenge for CAD/CAM technology. Modifying tooth shape using composite resin trial restorations to produce esthetic results and later scanning the modified teeth to create milled crowns is becoming a popular use of CAD/CAM technology. This report describes a technique that combines conventional and CAD/CAM prosthodontic techniques for milling crowns for canine teeth that are designed to establish or improve canine protected articulation. This technique involves designing and fabricating interim restorations based on diagnostic waxing, scanning the designs intraorally, and storing them in software as pretreatment digital records. The scanned designs are then applied to the digital representation of the prepared teeth to fabricate the definitive restorations. PMID:26774319

  1. On the Use of CAD-Native Predicates and Geometry in Surface Meshing

    NASA Technical Reports Server (NTRS)

    Aftosmis, M. J.

    1999-01-01

    Several paradigms for accessing CAD geometry during surface meshing for CFD are discussed. File translation, inconsistent geometry engines and non-native point construction are all identified as sources of non-robustness. The paper argues in favor of accessing CAD parts and assemblies in their native format, without translation, and for the use of CAD-native predicates and constructors in surface mesh generation. The discussion also emphasizes the importance of examining the computational requirements for exact evaluation of triangulation predicates during surface meshing. The native approach is demonstrated through an algorithm for the generation of closed manifold surface triangulations from CAD geometry. CAD parts and assemblies are used in their native format, and a part's native geometry engine is accessed through a modeler-independent application programming interface (API). In seeking a robust and fully automated procedure, the algorithm is based on a new physical space manifold triangulation technique specially developed to avoid robustness issues associated with poorly conditioned mappings. In addition, this approach avoids the usual ambiguities associated with floating-point predicate evaluation on constructed coordinate geometry in a mapped space. The technique is incremental, so that each new site improves the triangulation by some well-defined quality measure. The algorithm terminates after achieving a prespecified measure of mesh quality and produces a triangulation such that no angle is less than a given angle bound alpha or greater than pi - 2alpha. This result also sets bounds on the maximum vertex degree, triangle aspect ratio and maximum stretching rate for the triangulation. In addition to the output triangulations for a variety of CAD parts, the discussion presents related theoretical results which assert the existence of such an angle bound, and demonstrate that maximum bounds of between 25 deg and 30 deg may be achieved in practice.

  2. Intelligent CAD approach for modular design

    NASA Astrophysics Data System (ADS)

    Ouyang, Miao-an; Li, Chenggang; Zhong, Yifang; Yu, Jun; Zhou, Ji

    1996-03-01

    In this paper, artificial intelligence technology is introduced into the modular design and manufacturing of machine tools. The authors present a methodology for realizing modular conceptual design combined with traditional CAD, and develop an intelligent modular conceptual design system for machine tools. The problem-solving strategies are described in detail, and the design model and system architecture are set up. The techniques of expert systems, case-based reasoning, and artificial neural networks, and their incorporation into the system, are clarified.

  3. Efficient computation of the Grünwald-Letnikov fractional diffusion derivative using adaptive time step memory

    NASA Astrophysics Data System (ADS)

    MacDonald, Christopher L.; Bhattacharya, Nirupama; Sprouse, Brian P.; Silva, Gabriel A.

    2015-09-01

    Computing numerical solutions to fractional differential equations can be computationally intensive due to the effect of non-local derivatives, in which all previous time points contribute to the current iteration. In general, numerical approaches that truncate part of the system history, while efficient, can suffer from high degrees of error and inaccuracy. Here we present an adaptive time step memory method for smooth functions applied to the Grünwald-Letnikov fractional diffusion derivative. This method is computationally efficient and results in smaller errors during numerical simulations. Sampled points along the system's history at progressively longer intervals are assumed to reflect the values of neighboring time points. By including progressively fewer points backward in time, a temporally 'weighted' history is computed that includes contributions from the entire past of the system, maintaining accuracy, but with fewer points actually calculated, greatly improving computational efficiency.
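
    The idea can be sketched as follows (Python; the base window length and the block-doubling schedule are our illustrative choices, not the paper's exact scheme). Beyond a window of recent points kept at full resolution, the history is sampled at strides that double block by block, and each sampled weight stands in for its skipped neighbors:

    ```python
    import numpy as np

    def gl_weights(alpha, n):
        """Grunwald-Letnikov weights w_k = (-1)^k C(alpha, k), by recurrence."""
        w = np.empty(n + 1)
        w[0] = 1.0
        for k in range(1, n + 1):
            w[k] = w[k - 1] * (k - 1 - alpha) / k
        return w

    def gl_derivative_full(f_hist, alpha, h):
        """Full-memory GL derivative at the latest point; f_hist[-1] = f(t_n)."""
        n = len(f_hist) - 1
        w = gl_weights(alpha, n)
        return np.dot(w, f_hist[::-1]) / h**alpha

    def gl_derivative_adaptive(f_hist, alpha, h, base=32):
        """Adaptive-memory variant: full resolution for the `base` most
        recent points, then strides that double per block, each sampled
        point weighted by its stride to stand in for skipped neighbors
        (the smoothness assumption). The full weight table is still built
        here for clarity; the saving is in history-term evaluations."""
        n = len(f_hist) - 1
        w = gl_weights(alpha, n)
        acc, k, stride, block_end = 0.0, 0, 1, base
        while k <= n:
            acc += stride * w[k] * f_hist[n - k]
            k += stride
            if k >= block_end:          # next block: double stride and length
                stride *= 2
                block_end += base * stride
        return acc / h**alpha
    ```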

  4. Computationally Efficient Multiscale Reactive Molecular Dynamics to Describe Amino Acid Deprotonation in Proteins.

    PubMed

    Lee, Sangyun; Liang, Ruibin; Voth, Gregory A; Swanson, Jessica M J

    2016-02-01

    An important challenge in the simulation of biomolecular systems is a quantitative description of the protonation and deprotonation process of amino acid residues. Despite the seeming simplicity of adding or removing a positively charged hydrogen nucleus, simulating the actual protonation/deprotonation process is inherently difficult. It requires both the explicit treatment of the excess proton, including its charge defect delocalization and Grotthuss shuttling through inhomogeneous moieties (water and amino residues), and extensive sampling of coupled condensed phase motions. In a recent paper (J. Chem. Theory Comput. 2014, 10, 2729-2737), a multiscale approach was developed to map high-level quantum mechanics/molecular mechanics (QM/MM) data into a multiscale reactive molecular dynamics (MS-RMD) model in order to describe amino acid deprotonation in bulk water. In this article, we extend the fitting approach (called FitRMD) to create MS-RMD models for ionizable amino acids within proteins. The resulting models are shown to faithfully reproduce the free energy profiles of the reference QM/MM Hamiltonian for proton transport (PT) inside an example protein, the ClC-ec1 H(+)/Cl(-) antiporter. Moreover, we show that the resulting MS-RMD models are computationally efficient enough to then characterize more complex 2-dimensional free energy surfaces due to slow degrees of freedom such as water hydration of internal protein cavities that can be inherently coupled to the excess proton charge translocation. The FitRMD method is thus shown to be an effective way to map ab initio level accuracy into a much more computationally efficient reactive MD method in order to explicitly simulate and quantitatively describe amino acid protonation/deprotonation in proteins. PMID:26734942

  5. Image preprocessing for improving computational efficiency in implementation of restoration and superresolution algorithms.

    PubMed

    Sundareshan, Malur K; Bhattacharjee, Supratik; Inampudi, Radhika; Pang, Ho-Yuen

    2002-12-10

    Computational complexity is a major impediment to the real-time implementation of image restoration and superresolution algorithms in many applications. Although powerful restoration algorithms have been developed within the past few years utilizing sophisticated mathematical machinery (based on statistical optimization and convex set theory), these algorithms are typically iterative in nature and require a sufficient number of iterations to be executed to achieve the desired resolution improvement that may be needed to meaningfully perform postprocessing image exploitation tasks in practice. Additionally, recent technological breakthroughs have facilitated novel sensor designs (focal plane arrays, for instance) that make it possible to capture megapixel imagery data at video frame rates. A major challenge in the processing of these large-format images is to complete the execution of the image processing steps within the frame capture times and to keep up with the output rate of the sensor so that all data captured by the sensor can be efficiently utilized. Consequently, development of novel methods that facilitate real-time implementation of image restoration and superresolution algorithms is of significant practical interest and is the primary focus of this study. The key to designing computationally efficient processing schemes lies in strategically introducing appropriate preprocessing steps together with the superresolution iterations to tailor optimized overall processing sequences for imagery data of specific formats. For substantiating this assertion, three distinct methods for tailoring a preprocessing filter and integrating it with the superresolution processing steps are outlined. These methods consist of a region-of-interest extraction scheme, a background-detail separation procedure, and a scene-derived information extraction step for implementing a set-theoretic restoration of the image that is less demanding in computation compared with the

  6. Image preprocessing for improving computational efficiency in implementation of restoration and superresolution algorithms

    NASA Astrophysics Data System (ADS)

    Sundareshan, Malur K.; Bhattacharjee, Supratik; Inampudi, Radhika; Pang, Ho-Yuen

    2002-12-01

    Computational complexity is a major impediment to the real-time implementation of image restoration and superresolution algorithms in many applications. Although powerful restoration algorithms have been developed within the past few years utilizing sophisticated mathematical machinery (based on statistical optimization and convex set theory), these algorithms are typically iterative in nature and require a sufficient number of iterations to be executed to achieve the desired resolution improvement that may be needed to meaningfully perform postprocessing image exploitation tasks in practice. Additionally, recent technological breakthroughs have facilitated novel sensor designs (focal plane arrays, for instance) that make it possible to capture megapixel imagery data at video frame rates. A major challenge in the processing of these large-format images is to complete the execution of the image processing steps within the frame capture times and to keep up with the output rate of the sensor so that all data captured by the sensor can be efficiently utilized. Consequently, development of novel methods that facilitate real-time implementation of image restoration and superresolution algorithms is of significant practical interest and is the primary focus of this study. The key to designing computationally efficient processing schemes lies in strategically introducing appropriate preprocessing steps together with the superresolution iterations to tailor optimized overall processing sequences for imagery data of specific formats. For substantiating this assertion, three distinct methods for tailoring a preprocessing filter and integrating it with the superresolution processing steps are outlined. These methods consist of a region-of-interest extraction scheme, a background-detail separation procedure, and a scene-derived information extraction step for implementing a set-theoretic restoration of the image that is less demanding in computation compared with the

  7. Computationally Efficient Multiscale Reactive Molecular Dynamics to Describe Amino Acid Deprotonation in Proteins

    PubMed Central

    2016-01-01

    An important challenge in the simulation of biomolecular systems is a quantitative description of the protonation and deprotonation process of amino acid residues. Despite the seeming simplicity of adding or removing a positively charged hydrogen nucleus, simulating the actual protonation/deprotonation process is inherently difficult. It requires both the explicit treatment of the excess proton, including its charge defect delocalization and Grotthuss shuttling through inhomogeneous moieties (water and amino residues), and extensive sampling of coupled condensed phase motions. In a recent paper (J. Chem. Theory Comput. 2014, 10, 2729-2737), a multiscale approach was developed to map high-level quantum mechanics/molecular mechanics (QM/MM) data into a multiscale reactive molecular dynamics (MS-RMD) model in order to describe amino acid deprotonation in bulk water. In this article, we extend the fitting approach (called FitRMD) to create MS-RMD models for ionizable amino acids within proteins. The resulting models are shown to faithfully reproduce the free energy profiles of the reference QM/MM Hamiltonian for proton transport (PT) inside an example protein, the ClC-ec1 H+/Cl- antiporter. Moreover, we show that the resulting MS-RMD models are computationally efficient enough to then characterize more complex 2-dimensional free energy surfaces due to slow degrees of freedom such as water hydration of internal protein cavities that can be inherently coupled to the excess proton charge translocation. The FitRMD method is thus shown to be an effective way to map ab initio level accuracy into a much more computationally efficient reactive MD method in order to explicitly simulate and quantitatively describe amino acid protonation/deprotonation in proteins. PMID:26734942

  8. A computationally-efficient secondary organic aerosol module for three-dimensional air quality models

    NASA Astrophysics Data System (ADS)

    Liu, P.; Zhang, Y.

    2008-04-01

    Accurately simulating secondary organic aerosols (SOA) in three-dimensional (3-D) air quality models is challenging due to the complexity of the physics and chemistry involved and the high computational demand required. A computationally-efficient yet accurate SOA module is necessary in 3-D applications for long-term simulations and real-time air quality forecasting. A coupled gas and aerosol box model (i.e., 0-D CMAQ-MADRID 2) is used to optimize relevant processes in order to develop such a SOA module. Solving the partitioning equations for condensable volatile organic compounds (VOCs) and calculating their activity coefficients in the multicomponent mixtures are identified to be the most computationally-expensive processes. The two processes can be sped up by relaxing the error tolerance levels and reducing the maximum number of iterations of the numerical solver for the partitioning equations for organic species; turning on organic-inorganic interactions only when the water content associated with organic compounds is significant; and parameterizing the calculation of activity coefficients for organic mixtures in the hydrophilic module. The optimal speed-up method can reduce the total CPU cost by up to a factor of 29.7 with ±15% deviation from benchmark results. These speedup methods are applicable to other SOA modules that are based on partitioning theories.
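
    The partitioning step being tuned can be pictured with a minimal fixed-point solver for absorptive (Pankow-type) gas/particle partitioning, in which `tol` and `max_iter` are precisely the knobs the authors relax (an illustrative stand-in, not CMAQ-MADRID code):

    ```python
    import numpy as np

    def partition_soa(C_tot, K, M0, tol=1e-4, max_iter=50):
        """Fixed-point solve of A_i = C_i * K_i * M / (1 + K_i * M) with
        absorbing organic mass M = M0 + sum_i A_i (consistent units
        assumed, e.g. ug/m3 and m3/ug). A looser `tol` or a smaller
        `max_iter` trades accuracy for CPU time."""
        M = M0 + C_tot.sum()                    # initial guess: all condensed
        A = np.zeros_like(C_tot)
        for _ in range(max_iter):
            A = C_tot * K * M / (1.0 + K * M)   # equilibrium aerosol phase
            M_new = M0 + A.sum()
            if abs(M_new - M) <= tol * M_new:
                return A, M_new
            M = M_new
        return A, M                             # iteration cap hit
    ```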

  9. A computationally-efficient secondary organic aerosol module for three-dimensional air quality models

    NASA Astrophysics Data System (ADS)

    Liu, P.; Zhang, Y.

    2008-07-01

    Accurately simulating secondary organic aerosols (SOA) in three-dimensional (3-D) air quality models is challenging due to the complexity of the physics and chemistry involved and the high computational demand required. A computationally-efficient yet accurate SOA module is necessary in 3-D applications for long-term simulations and real-time air quality forecasting. A coupled gas and aerosol box model (i.e., 0-D CMAQ-MADRID 2) is used to optimize relevant processes in order to develop such a SOA module. Solving the partitioning equations for condensable volatile organic compounds (VOCs) and calculating their activity coefficients in the multicomponent mixtures are identified to be the most computationally-expensive processes. The two processes can be sped up by relaxing the error tolerance levels and reducing the maximum number of iterations of the numerical solver for the partitioning equations for organic species; conditionally activating organic-inorganic interactions; and parameterizing the calculation of activity coefficients for organic mixtures in the hydrophilic module. The optimal speed-up method can reduce the total CPU cost by up to a factor of 31.4 relative to the benchmark under rural conditions with 2 ppb isoprene, and by factors of 10-71 under various test conditions with 2-10 ppb isoprene and >40% relative humidity, while maintaining ±15% deviation. These speed-up methods are applicable to other SOA modules that are based on partitioning theories.

  10. Multiple copy sampling in protein loop modeling: computational efficiency and sensitivity to dihedral angle perturbations.

    PubMed Central

    Zheng, Q.; Rosenfeld, R.; DeLisi, C.; Kyle, D. J.

    1994-01-01

    Multiple copy sampling and the bond scaling-relaxation technique are combined to generate 3-dimensional conformations of protein loop segments. The computational efficiency and sensitivity to initial loop copy dispersion are analyzed. The multicopy loop modeling method requires approximately 20-50% of the computational time required by the single-copy method for the various protein segments tested. An analytical formula is proposed to estimate the computational gain prior to carrying out a multicopy simulation. When 7-residue loops within flexible proteins are modeled, each multicopy simulation can sample a set of loop conformations with initial dispersions up to +/- 15 degrees for backbone and +/- 30 degrees for side-chain rotatable dihedral angles. The allowable dispersions are larger for shorter loops and smaller for longer and/or surface loops. The degree of convergence of loop copies during a simulation can be used to complement commonly used target functions (such as potential energy) for distinguishing between native and misfolded conformations. Furthermore, this convergence also reflects the conformational flexibility of the modeled protein segment. Application to simultaneously building all 6 hypervariable loops of an antibody is discussed. PMID:8019420

  11. A universal and efficient method to compute maps from image-based prediction models.

    PubMed

    Sabuncu, Mert R

    2014-01-01

    Discriminative supervised learning algorithms, such as Support Vector Machines, are becoming increasingly popular in biomedical image computing. One of their main uses is to construct image-based prediction models, e.g., for computer aided diagnosis or "mind reading." A major challenge in these applications is the biological interpretation of the machine learning models, which can be arbitrarily complex functions of the input features (e.g., as induced by kernel-based methods). Recent work has proposed several strategies for deriving maps that highlight regions relevant for accurate prediction. Yet most of these methods rely on strong assumptions about the prediction model (e.g., linearity, sparsity) and/or data (e.g., Gaussianity), or fail to exploit the covariance structure in the data. In this work, we propose a computationally efficient and universal framework for quantifying associations captured by black box machine learning models. Furthermore, our theoretical perspective reveals that examining associations with predictions, in the absence of ground truth labels, can be very informative. We apply the proposed method to machine learning models trained to predict cognitive impairment from structural neuroimaging data. We demonstrate that our approach yields biologically meaningful maps of association. PMID:25320819
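
    One simple way to picture the kind of map in question: the association of each input feature (e.g., voxel) with the predictions of an arbitrary black-box model can be estimated as a covariance, which needs no assumptions about the model's internals. The sketch below is a generic illustration of that idea, not necessarily the paper's exact estimator:

    ```python
    import numpy as np

    def association_map(X, predict):
        """Per-feature association with black-box predictions:
        cov(x_j, f(x)) across samples. `predict` is any callable mapping
        an (n_samples, n_features) array to n_samples scalar outputs."""
        yhat = np.asarray(predict(X), dtype=float)
        Xc = X - X.mean(axis=0)
        yc = yhat - yhat.mean()
        return Xc.T @ yc / (len(yhat) - 1)   # one value per feature
    ```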

  12. Computing the energy of a water molecule using multideterminants: A simple, efficient algorithm

    SciTech Connect

    Clark, Bryan K.; Morales, Miguel A; Mcminis, Jeremy; Kim, Jeongnim; Scuseria, Gustavo E

    2011-01-01

    Quantum Monte Carlo (QMC) methods such as variational Monte Carlo and fixed node diffusion Monte Carlo depend heavily on the quality of the trial wave function. Although Slater-Jastrow wave functions are the most commonly used variational ansatz in electronic structure, more sophisticated wave functions are critical to ascertaining new physics. One such wave function is the multi-Slater-Jastrow wave function which consists of a Jastrow function multiplied by the sum of Slater determinants. In this paper we describe a method for working with these wave functions in QMC codes that is easy to implement, efficient both in computational speed as well as memory, and easily parallelized. The computational cost scales quadratically with particle number, making this scaling no worse than the single determinant case, and linearly with the total number of excitations. Additionally, we implement this method and use it to compute the ground state energy of a water molecule. © 2011 American Institute of Physics. [doi:10.1063/1.3665391]
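
    The standard identity behind cheap multideterminant evaluation is that an excited determinant can be computed as a ratio against the reference: replacing one column (orbital) of the Slater matrix needs only one row of the reference inverse. A minimal sketch of that textbook building block (not the paper's full algorithm) follows:

    ```python
    import numpy as np

    def single_excitation_ratio(A_inv, j, u):
        """det(A with column j replaced by u) / det(A) = (A^{-1} u)_j,
        by the matrix determinant lemma: one O(N) dot product per
        excitation once the reference inverse (or an LU) is in hand."""
        return A_inv[j] @ u

    # quick self-check against brute force
    rng = np.random.default_rng(0)
    A = rng.standard_normal((6, 6))
    u = rng.standard_normal(6)
    B = A.copy()
    B[:, 2] = u
    ratio = single_excitation_ratio(np.linalg.inv(A), 2, u)
    assert np.isclose(ratio, np.linalg.det(B) / np.linalg.det(A))
    ```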

  13. An efficient computational method for predicting rotational diffusion tensors of globular proteins using an ellipsoid representation.

    PubMed

    Ryabov, Yaroslav E; Geraghty, Charles; Varshney, Amitabh; Fushman, David

    2006-12-01

    We propose a new computational method for predicting rotational diffusion properties of proteins in solution. The method is based on the idea of representing the protein surface as an ellipsoid shell. In contrast to other existing approaches, this method uses principal component analysis of protein surface coordinates, which results in a substantial increase in the computational efficiency of the method. Direct comparison with the experimental data as well as with the recent computational approach (Garcia de la Torre et al., J. Magn. Reson. 2000, B147, 138-146), based on representation of the protein surface as a set of small spherical friction elements, shows that the method proposed here reproduces experimental data with at least the same level of accuracy and precision as the other approach, while being approximately 500 times faster. Using the new method we investigated the effect of the hydration layer and protein surface topography on the rotational diffusion properties of a protein. We found that a hydration layer constructed of approximately one monolayer of water molecules smoothens the protein surface and effectively doubles the overall tumbling time. We also calculated the rotational diffusion tensors for a set of 841 protein structures representing the known protein folds. Our analysis suggests that an anisotropic rotational diffusion model is generally required for NMR relaxation data analysis in single-domain proteins, and that the axially symmetric model could be sufficient for these purposes in approximately half of the proteins. PMID:17132010
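
    The geometric step, PCA of the surface point cloud to obtain ellipsoid axes, fits in a few lines (Python; the variance-to-semiaxis factor below assumes a uniform solid ellipsoid and is our illustrative choice, and converting semiaxes to a rotational diffusion tensor via Perrin-type relations is a separate step):

    ```python
    import numpy as np

    def ellipsoid_from_surface(points):
        """Principal axes of an ellipsoid fitted to an (N, 3) cloud of
        protein surface points. Eigenvectors give the axis directions;
        eigenvalues (coordinate variances) set the relative lengths."""
        X = points - points.mean(axis=0)
        cov = X.T @ X / len(X)
        evals, evecs = np.linalg.eigh(cov)    # ascending order
        semiaxes = np.sqrt(5.0 * evals)       # var = a^2/5 for a uniform
                                              # solid ellipsoid (assumption)
        return semiaxes[::-1], evecs[:, ::-1]    # longest axis first
    ```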

  14. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems.

    PubMed

    Ehsan, Shoaib; Clark, Adrian F; Naveed ur Rehman; McDonald-Maier, Klaus D

    2015-01-01

    The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of the integral image present several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow a substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow a substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems. PMID:26184211
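
    For reference, the recursion being decomposed, and the constant-time rectangle sum it enables, look like this (Python, with numpy's cumulative sums standing in for the serial recursion; function names are ours):

    ```python
    import numpy as np

    def integral_image(img):
        """ii[y, x] = sum of img[0..y, 0..x]; equivalent to the recursion
        s(y, x) = s(y, x-1) + i(y, x), ii(y, x) = ii(y-1, x) + s(y, x)."""
        return img.cumsum(axis=0).cumsum(axis=1)

    def box_sum(ii, y0, x0, y1, x1):
        """Sum over img[y0:y1+1, x0:x1+1] from at most four lookups, the
        constant-time rectangular feature used by SURF-style detectors."""
        total = ii[y1, x1]
        if y0 > 0:
            total -= ii[y0 - 1, x1]
        if x0 > 0:
            total -= ii[y1, x0 - 1]
        if y0 > 0 and x0 > 0:
            total += ii[y0 - 1, x0 - 1]
        return total
    ```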

  15. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems

    PubMed Central

    Ehsan, Shoaib; Clark, Adrian F.; ur Rehman, Naveed; McDonald-Maier, Klaus D.

    2015-01-01

    The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of the integral image present several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow a substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow a substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems. PMID:26184211

  16. Quantum propagation of electronic excitations in macromolecules: A computationally efficient multiscale approach

    NASA Astrophysics Data System (ADS)

    Schneider, E.; a Beccara, S.; Mascherpa, F.; Faccioli, P.

    2016-07-01

    We introduce a theoretical approach to study the quantum-dissipative dynamics of electronic excitations in macromolecules, which makes it possible to perform calculations on large systems and cover long time intervals. All the parameters of the underlying microscopic Hamiltonian are obtained from ab initio electronic structure calculations, ensuring chemical detail. In the short-time regime, the theory is solvable using a diagrammatic perturbation theory, enabling analytic insight. To compute the time evolution of the density matrix at intermediate times, typically ≲ 1 ps, we develop a Monte Carlo algorithm free from any sign or phase problem, hence computationally efficient. Finally, the dynamics in the long-time and large-distance limit can be studied by combining the microscopic calculations with renormalization group techniques to define a rigorous low-resolution effective theory. We benchmark our Monte Carlo algorithm against the results obtained in perturbation theory and using a semiclassical nonperturbative scheme. Then, we apply it to compute the intrachain charge mobility in a realistic conjugated polymer.

  17. Dendritic nonlinearities are tuned for efficient spike-based computations in cortical circuits

    PubMed Central

    Ujfalussy, Balázs B; Makara, Judit K; Branco, Tiago; Lengyel, Máté

    2015-01-01

    Cortical neurons integrate thousands of synaptic inputs in their dendrites in highly nonlinear ways. It is unknown how these dendritic nonlinearities in individual cells contribute to computations at the level of neural circuits. Here, we show that dendritic nonlinearities are critical for the efficient integration of synaptic inputs in circuits performing analog computations with spiking neurons. We developed a theory that formalizes how a neuron's dendritic nonlinearity that is optimal for integrating synaptic inputs depends on the statistics of its presynaptic activity patterns. Based on their in vivo presynaptic population statistics (firing rates, membrane potential fluctuations, and correlations due to ensemble dynamics), our theory accurately predicted the responses of two different types of cortical pyramidal cells to patterned stimulation by two-photon glutamate uncaging. These results reveal a new computational principle underlying dendritic integration in cortical neurons by suggesting a functional link between cellular and systems-level properties of cortical circuits. DOI: http://dx.doi.org/10.7554/eLife.10056.001 PMID:26705334

  18. A Solution Methodology and Computer Program to Efficiently Model Thermodynamic and Transport Coefficients of Mixtures

    NASA Technical Reports Server (NTRS)

    Ferlemann, Paul G.

    2000-01-01

    A solution methodology has been developed to efficiently model multi-species, chemically frozen, thermally perfect gas mixtures. The method relies on the ability to generate a single (composite) set of thermodynamic and transport coefficients prior to beginning a CFD solution. While not fundamentally a new concept, many applied CFD users are not aware of this capability nor have a mechanism to easily and confidently generate new coefficients. A database of individual species property coefficients has been created for 48 species. The seven-coefficient form of the thermodynamic functions is currently used rather than the ten-coefficient form due to the similarity of the calculated properties, low-temperature behavior and reduced CPU requirements. Sutherland laminar viscosity and thermal conductivity coefficients were computed in a consistent manner from available reference curves. A computer program has been written to provide CFD users with a convenient method to generate composite species coefficients for any mixture. Mach 7 forebody/inlet calculations demonstrated nearly equivalent results and significant CPU time savings compared to a multi-species solution approach. Results from high-speed combustor analysis also illustrate the ability to model inert test gas contaminants without additional computational expense.
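
    The composite-coefficient idea rests on cp being linear in the polynomial coefficients, so a frozen mixture's mass-fraction-weighted cp collapses into a single coefficient set. A minimal sketch for the five cp terms of the seven-coefficient form (our array layout, not the program's file format; the enthalpy and entropy constants combine the same way):

    ```python
    import numpy as np

    R_UNIV = 8314.462618  # universal gas constant, J/(kmol*K)

    def composite_cp_coeffs(coeffs, mol_wts, mass_frac):
        """cp_i(T) = (R/M_i) * (a1 + a2*T + a3*T^2 + a4*T^3 + a5*T^4), so
        for fixed mass fractions Y_i the mixture cp has coefficients
        b_k = sum_i Y_i * (R/M_i) * a_{k,i}.  coeffs: (n_species, 5)."""
        w = np.asarray(mass_frac) * R_UNIV / np.asarray(mol_wts)
        return w @ np.asarray(coeffs)       # composite b_k, cp in J/(kg*K)

    def cp_mixture(b, T):
        """Evaluate the composite cp polynomial at temperature T (K)."""
        return b @ np.array([1.0, T, T**2, T**3, T**4])
    ```

    Because the mixture is chemically frozen, the weights Y_i never change, which is why a multi-species flow can then be run at essentially single-species cost.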

  19. Virtual tomography: a new approach to efficient human-computer interaction for medical imaging

    NASA Astrophysics Data System (ADS)

    Teistler, Michael; Bott, Oliver J.; Dormeier, Jochen; Pretschner, Dietrich P.

    2003-05-01

    By utilizing virtual reality (VR) technologies the computer system virtusMED implements the concept of virtual tomography for exploring medical volumetric image data. Photographic data from a virtual patient as well as CT or MRI data from real patients are visualized within a virtual scene. The view of this scene is determined either by a conventional computer mouse, a head-mounted display or a freely movable flat panel. A virtual examination probe is used to generate oblique tomographic images which are computed from the given volume data. In addition, virtual models can be integrated into the scene, such as anatomical models of bones and inner organs. virtusMED has proven to be a valuable tool for learning human anatomy and for understanding the principles of medical imaging such as sonography. Furthermore, its utilization to improve CT and MRI based diagnosis is very promising. Compared to VR systems of the past, the standard PC-based system virtusMED is a cost-efficient and easily maintained solution providing a highly intuitive, time-saving user interface for medical imaging.
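
    The probe's key operation, resampling an oblique tomographic plane from the volume, can be sketched with trilinear interpolation (Python/scipy; `center`, `u`, `v`, `size`, and `spacing` are hypothetical parameters of this sketch):

    ```python
    import numpy as np
    from scipy.ndimage import map_coordinates

    def oblique_slice(volume, center, u, v, size=256, spacing=1.0):
        """Sample a size x size oblique plane from a 3-D volume. `u` and
        `v` are orthonormal in-plane direction vectors and `center` is
        the plane origin, all given in voxel coordinates."""
        u = np.asarray(u, dtype=float)
        v = np.asarray(v, dtype=float)
        s = (np.arange(size) - size / 2.0) * spacing
        gu, gv = np.meshgrid(s, s, indexing="ij")
        pts = (np.asarray(center, dtype=float)[:, None, None]
               + u[:, None, None] * gu + v[:, None, None] * gv)
        return map_coordinates(volume, pts, order=1, mode="nearest")
    ```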

  20. Efficient control schemes with limited computation complexity for Tomographic AO systems on VLTs and ELTs

    NASA Astrophysics Data System (ADS)

    Petit, C.; Le Louarn, M.; Fusco, T.; Madec, P.-Y.

    2011-09-01

    Various tomographic control solutions have been proposed during the last decades to ensure efficient or even optimal closed-loop correction for tomographic Adaptive Optics (AO) concepts such as Laser Tomographic AO (LTAO) and Multi-Conjugate AO (MCAO). The optimal solution, based on the Linear Quadratic Gaussian (LQG) approach, as well as suboptimal but efficient solutions such as Pseudo-Open Loop Control (POLC), require multiple Matrix Vector Multiplications (MVMs). Regardless of their respective performance, these efficient control solutions thus exhibit a strong increase in on-line complexity, and their implementation may become difficult in demanding cases. Among these, two cases are of particular interest. First, the system's Real-Time Computer (RTC) architecture and implementation may be derived from past or present solutions that do not support multiple MVMs. This is the case of the AO Facility, whose RTC architecture is derived from the SPARTA platform and inherits its simple single-MVM structure, which does not fit LTAO control solutions, for instance. Second, in future systems such as Extremely Large Telescopes, the number of degrees of freedom is twenty to one hundred times larger than in present systems. Under these conditions, tomographic control solutions can hardly be used in their standard form, and optimized implementations must be considered. Single-MVM tomographic control solutions are a potential answer, and straightforward approaches such as Virtual Deformable Mirrors have already been proposed for LTAO, though with tuning issues. We investigate in this paper the possibility of deriving from tomographic control solutions, such as POLC or LQG, simplified control solutions that preserve a single-MVM architecture and could thus be implemented on present systems or on future complex systems. We theoretically derive various solutions and analyze their respective performance on various systems through numerical simulation. We discuss the optimization of their performance and