A new computationally efficient CAD system for pulmonary nodule detection in CT imagery.
Messay, Temesguen; Hardie, Russell C; Rogers, Steven K
2010-06-01
Early detection of lung nodules is extremely important for the diagnosis and clinical management of lung cancer. In this paper, a novel computer-aided detection (CAD) system for the detection of pulmonary nodules in thoracic computed tomography (CT) imagery is presented. The paper describes the architecture of the CAD system and assesses its performance on a publicly available database to serve as a benchmark for future research efforts. Training and tuning of all modules in our CAD system are done using a separate and independent dataset provided courtesy of the University of Texas Medical Branch (UTMB). The publicly available testing dataset is the one created by the Lung Image Database Consortium (LIDC). The LIDC data used here comprise 84 CT scans containing 143 nodules, ranging from 3 to 30 mm in effective size, each manually segmented by at least one of four radiologists. The CAD system uses a fully automated lung segmentation algorithm to define the boundaries of the lung regions. It combines intensity thresholding with morphological processing to detect and segment nodule candidates simultaneously. A set of 245 features is computed for each segmented nodule candidate. A sequential forward selection process is used to determine the optimum subset of features for two distinct classifiers: a Fisher Linear Discriminant (FLD) classifier and a quadratic classifier. A performance comparison between the two classifiers is presented, and based on this, the FLD classifier is selected for the CAD system. With an average of 517.5 nodule candidates per case/scan (517.5+/-72.9), the proposed front-end detector/segmentor detects 92.8% of all the nodules in the LIDC/testing dataset (based on merged ground truth). The mean overlap between the nodule regions delineated by three or more radiologists and those segmented by the proposed segmentation algorithm is approximately 63%. Overall, with a specificity of 3 false positives (FPs) per case/patient on
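The sequential forward selection step described above can be sketched as a greedy wrapper around a Fisher Linear Discriminant. This is an illustrative reconstruction, not the authors' code; the synthetic data, feature counts, and cross-validation folds are all assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def sequential_forward_selection(X, y, n_features):
    """Greedy wrapper selection: at each step, add the feature that
    most improves cross-validated FLD accuracy."""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(n_features):
        best_score, best_f = -np.inf, None
        for f in remaining:
            score = cross_val_score(
                LinearDiscriminantAnalysis(), X[:, selected + [f]], y, cv=3
            ).mean()
            if score > best_score:
                best_score, best_f = score, f
        selected.append(best_f)
        remaining.remove(best_f)
    return selected

# Toy data: 10 candidate features, only feature 3 is informative.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 120)
X = rng.normal(size=(120, 10))
X[:, 3] += 2.0 * y
picked = sequential_forward_selection(X, y, 2)
print(picked)
```

In the real system the same loop would run over the 245 candidate features, stopping when the cross-validated score no longer improves.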
Computing Mass Properties From AutoCAD
NASA Technical Reports Server (NTRS)
Jones, A.
1990-01-01
Mass properties of structures computed from data in drawings. AutoCAD to Mass Properties (ACTOMP) computer program developed to facilitate quick calculations of mass properties of structures containing many simple elements in such complex configurations as trusses or sheet-metal containers. Mathematically modeled in AutoCAD or compatible computer-aided design (CAD) system in minutes by use of three-dimensional elements. Written in Microsoft Quick-Basic (Version 2.0).
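The element-summation idea behind ACTOMP can be illustrated in a few lines (Python here for brevity; the original program was written in QuickBasic). Treating each simple element as a (mass, centroid) pair, the assembly's total mass and center of gravity follow from weighted sums; the data below are hypothetical.

```python
def mass_properties(elements):
    """Aggregate mass and center of mass for an assembly of simple
    elements, each given as (mass, (x, y, z) centroid)."""
    total_m = sum(m for m, _ in elements)
    cg = tuple(
        sum(m * c[i] for m, c in elements) / total_m for i in range(3)
    )
    return total_m, cg

# Two equal point masses on the x-axis: CG lands at the midpoint.
elems = [(2.0, (0.0, 0.0, 0.0)), (2.0, (4.0, 0.0, 0.0))]
m, cg = mass_properties(elems)
print(m, cg)   # 4.0 (2.0, 0.0, 0.0)
```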
Bogoni, Luca; Ko, Jane P; Alpert, Jeffrey; Anand, Vikram; Fantauzzi, John; Florin, Charles H; Koo, Chi Wan; Mason, Derek; Rom, William; Shiau, Maria; Salganicoff, Marcos; Naidich, David P
2012-12-01
The objective of this study is to assess the impact on nodule detection and efficiency of using a computer-aided detection (CAD) device seamlessly integrated into a commercially available picture archiving and communication system (PACS). Forty-eight consecutive low-dose thoracic computed tomography studies were retrospectively included from an ongoing multi-institutional screening study. CAD results were sent to PACS as a separate image series for each study. Five fellowship-trained thoracic radiologists interpreted each case first on contiguous 5 mm sections, then evaluated the CAD output series (with CAD marks on corresponding axial sections). The standard of reference was based on three-reader agreement with expert adjudication. The time to interpret CAD markings was automatically recorded. A total of 134 true-positive nodules measuring 3 mm and larger were included in our study, with 85 ≥4 mm and 50 ≥5 mm in size. Readers' detection improved significantly in each size category when using CAD: from 44 to 57% for ≥3 mm, 48 to 61% for ≥4 mm, and 44 to 60% for ≥5 mm, respectively. CAD stand-alone sensitivity was 65, 68, and 66% for nodules ≥3, ≥4, and ≥5 mm, respectively, with CAD significantly increasing the false positives for only two readers. The average time to interpret and annotate a CAD mark was 15.1 s after localizing it in the original image series. The integration of CAD into PACS increases reader sensitivity with minimal impact on interpretation time and supports such implementation in daily clinical practice. PMID:22710985
A CAD (Classroom Assessment Design) of a Computer Programming Course
ERIC Educational Resources Information Center
Hawi, Nazir S.
2012-01-01
This paper presents a CAD (classroom assessment design) of an entry-level undergraduate computer programming course "Computer Programming I". CAD has been the product of a long experience in teaching computer programming courses including teaching "Computer Programming I" 22 times. Each semester, CAD is evaluated and modified for the subsequent…
ERIC Educational Resources Information Center
Burns, William E.
1986-01-01
Discusses the field of computer-aided design, which combines the skills and creativity of the architect, designer, drafter, and engineer with the power of the computer. Reports on job tasks, applications, background of the field, job outlook, and necessary training. (CH)
Computer-aided diagnosis (CAD) for colonoscopy
NASA Astrophysics Data System (ADS)
Gu, Jia; Poirson, Allen
2007-03-01
Colorectal cancer is the second leading cause of cancer deaths, and ranks third for new cancer cases and cancer mortality for both men and women. However, its death rate can be dramatically reduced by appropriate treatment when early detection is available. The purpose of colonoscopy is to identify and assess the severity of lesions, which may be flat or protruding. Due to the subjective nature of the examination, colonoscopic proficiency is highly variable and dependent upon the colonoscopist's knowledge and experience. An automated image processing system providing an objective, rapid, and inexpensive analysis of video from a standard colonoscope could provide a valuable tool for screening and diagnosis. In this paper, we present the design, functionality, and preliminary results of our Computer-Aided-Diagnosis (CAD) system for colonoscopy, ColonoCAD™. ColonoCAD is a complex multi-sensor, multi-data and multi-algorithm image processing system, incorporating data management and visualization, video quality assessment and enhancement, calibration, multiple-view-based reconstruction, feature extraction, and classification. As this is a new field in medical image processing, our hope is that this paper will provide the framework to encourage and facilitate collaboration and discussion between industry, academia, and medical practitioners.
Computer-aided-diagnosis (CAD) for colposcopy
NASA Astrophysics Data System (ADS)
Lange, Holger; Ferris, Daron G.
2005-04-01
Uterine cervical cancer is the second most common cancer among women worldwide. Colposcopy is a diagnostic method whereby a physician (colposcopist) visually inspects the lower genital tract (cervix, vulva and vagina), with special emphasis on the subjective appearance of metaplastic epithelium comprising the transformation zone on the cervix. Cervical cancer precursor lesions and invasive cancer exhibit certain distinctly abnormal morphologic features. Lesion characteristics such as margin; color or opacity; blood vessel caliber, intercapillary spacing and distribution; and contour are considered by colposcopists to derive a clinical diagnosis. Clinicians and academia have suggested and shown proof of concept that automated image analysis of cervical imagery can be used for cervical cancer screening and diagnosis, with the potential to directly improve women's health care and reduce associated costs. STI Medical Systems is developing a Computer-Aided-Diagnosis (CAD) system for colposcopy -- ColpoCAD. At the heart of ColpoCAD is a complex multi-sensor, multi-data and multi-feature image analysis system. A functional description of the envisioned ColpoCAD system is presented, broken down into: Modality Data Management System, Image Enhancement, Feature Extraction, Reference Database, and Diagnosis and Directed Biopsies. The system design and development process of the image analysis system is outlined. The system design provides a modular and open architecture built on feature-based processing. The core feature set includes the visual features used by colposcopists. This feature set can be extended to include new features introduced by new instrument technologies, like fluorescence and impedance, and any other plausible feature that can be extracted from the cervical data. Preliminary results of our research on detecting the three most important features: blood vessel structures, acetowhite regions, and lesion margins are shown. As this is a new
Preparing Students for Computer Aided Drafting (CAD). A Conceptual Approach.
ERIC Educational Resources Information Center
Putnam, A. R.; Duelm, Brian
This presentation outlines guidelines for developing and implementing an introductory course in computer-aided drafting (CAD) that is geared toward secondary-level students. The first section of the paper, which deals with content identification and selection, includes lists of mechanical drawing and CAD competencies and a list of rationales for…
CAD-centric Computation Management System for a Virtual TBM
Ramakanth Munipalli; K.Y. Szema; P.Y. Huang; C.M. Rowell; A.Ying; M. Abdou
2011-05-03
HyPerComp Inc., in research collaboration with TEXCEL, has set out to build a Virtual Test Blanket Module (VTBM) computational system to address the need in contemporary fusion research for simulating the integrated behavior of the blanket, divertor, and plasma-facing components in a fusion environment. Physical phenomena to be considered in a VTBM will include fluid flow, heat transfer, mass transfer, neutronics, structural mechanics, and electromagnetics. We seek to integrate well-established (third-party) simulation software in the various disciplines mentioned above. The integrated modeling process will enable user groups to interoperate using a common modeling platform at various stages of the analysis. Since CAD is at the core of the simulation (as opposed to computational meshes, which are different for each problem), the VTBM will have a well-developed CAD interface governing CAD model editing, cleanup, parameter extraction, model deformation (based on simulation), and CAD-based data interpolation. In Phase I, we built the CAD hub of the proposed VTBM and demonstrated its use in modeling a liquid breeder blanket module with coupled MHD and structural mechanics using HIMAG and ANSYS. A complete graphical user interface of the VTBM was created, which will form the foundation of any future development. Conservative data interpolation via CAD (as opposed to mesh-based transfer) and the regeneration of CAD models based upon computed deflections are among the other highlights of Phase-I activity.
Introduction to CAD/Computers. High-Technology Training Module.
ERIC Educational Resources Information Center
Lockerby, Hugh
This learning module for an eighth-grade introductory technology course is designed to help teachers introduce students to computer-assisted design (CAD) in a communications unit on graphics. The module contains a module objective and five specific objectives, a content outline, suggested instructor methodology, student activities, a list of six…
NASA Astrophysics Data System (ADS)
Safdar, Nabile; Ramakrishna, Bharath; Saiprasad, Ganesh; Siddiqui, Khan; Siegel, Eliot
2008-03-01
Knee-related injuries involving the meniscal or articular cartilage are common and require accurate diagnosis and surgical intervention when appropriate. With proper techniques and experience, confidence in detection of meniscal tears and articular cartilage abnormalities can be quite high. However, for radiologists without musculoskeletal training, diagnosis of such abnormalities can be challenging. In this paper, the potential of improving diagnosis through integration of computer-aided detection (CAD) algorithms for automatic detection of meniscal tears and articular cartilage injuries of the knees is studied. An integrated approach in which the results of algorithms evaluating either meniscal tears or articular cartilage injuries provide feedback to each other is believed to improve the diagnostic accuracy of the individual CAD algorithms due to the known association between abnormalities in these distinct anatomic structures. The correlation between meniscal tears and articular cartilage injuries is exploited to improve the final diagnostic results of the individual algorithms. Preliminary results from the integrated application are encouraging and more comprehensive tests are being planned.
Differences in computer exposure between university administrators and CAD draftsmen.
Wu, Hsin-Chieh; Liu, Yung-Ping; Chen, Hsieh-Ching
2010-10-01
This study utilized an external logger system for onsite measurements of the computer activities of two professional groups: twelve university administrators and twelve computer-aided design (CAD) draftsmen. Computer use of each participant was recorded for 10 consecutive days, an average of 7.9+/-1.8 workdays and 7.8+/-1.5 workdays for administrators and draftsmen, respectively. Quantitative parameters computed from the recorded data were daily dynamic duration (DD) and static duration, daily keystrokes, mouse clicks, wheel scrolling counts, mouse movement and dragged distance, average typing and clicking rates, and average time holding down keys and mouse buttons. Significant group differences existed in the number of daily keystrokes (p<0.0005) and mouse clicks (p<0.0005), mouse distance moved (p<0.0005), typing rate (p<0.0001), daily mouse DD (p<0.0001), and keyboard DD (p<0.005). Both groups had significantly longer mouse DD than keyboard DD (p<0.0001). Statistical analysis indicates that the duration of computer use for different computer tasks cannot be represented by a single formula with the same set of quantitative parameters as those associated with mouse and keyboard activities. Results of this study demonstrate that computer exposure during different tasks cannot be estimated solely from computer use duration. Quantification of onsite computer activities is necessary when determining the computer-associated risk of musculoskeletal disorders. Other significant findings are discussed. PMID:20392434
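As a hedged sketch of how a "dynamic duration" style metric might be computed from logged event timestamps: sum only the time spanned by bursts of input activity, discarding gaps longer than an idle threshold. The study's exact definition and threshold are not given in the abstract, so both are assumptions here.

```python
def dynamic_duration(timestamps, idle_gap=5.0):
    """Sum the time spanned by bursts of input events; gaps longer
    than `idle_gap` seconds count as static/idle time and are
    excluded. (Illustrative definition and threshold.)"""
    total = 0.0
    for prev, cur in zip(timestamps, timestamps[1:]):
        gap = cur - prev
        if gap <= idle_gap:
            total += gap
    return total

events = [0.0, 1.0, 2.5, 30.0, 31.0]   # event times in seconds
print(dynamic_duration(events))        # 3.5 (the 2.5->30.0 gap is idle)
```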
Computer Aided Detection (CAD) Systems for Mammography and the Use of GRID in Medicine
NASA Astrophysics Data System (ADS)
Lauria, Adele
It is well known that the most effective way to defeat breast cancer is early detection, as surgery and medical therapies are more efficient when the disease is diagnosed at an early stage. The principal diagnostic technique for breast cancer detection is X-ray mammography. Screening programs have been introduced in many European countries to invite women to have periodic radiological breast examinations. In such screenings, radiologists are often required to examine large numbers of mammograms with a double reading, that is, two radiologists examine the images independently and then compare their results. In this way an increment in sensitivity (the rate of correctly identified images with a lesion) of up to 15% is obtained.1,2 In most radiological centres, it is a rarity to find two radiologists to examine each report. In recent years different Computer Aided Detection (CAD) systems have been developed as a support to radiologists working in mammography: one may hope that the "second opinion" provided by CAD might represent a lower-cost alternative to improve the diagnosis. At present, four CAD systems have obtained FDA approval in the USA.† Studies3,4 show an increment in sensitivity when CAD systems are used. Freer and Ulissey5 demonstrated in 2001 that the use of a commercial CAD system (ImageChecker M1000, R2 Technology) increases the number of cancers detected by up to 19.5% with little increment in recall rate. Ciatto et al.,5 in a study simulating a double reading with a commercial CAD system (SecondLook‡), showed a moderate increment in sensitivity while reducing specificity (the rate of correctly identified images without a lesion). Notwithstanding these optimistic results, there is an ongoing debate to define the advantages of the use of CAD as second reader: the main limits underlined, e.g., by Nishikawa6 are that retrospective studies are considered much too optimistic and that clinical studies must be performed to demonstrate a statistically
Traz - An Interactive Ray-Tracing Computer Program Integrated With A Solid-Modeling CAD System
NASA Astrophysics Data System (ADS)
Dolan, Ariel
1986-02-01
The combination of an optical ray-tracing program with a solid modeling C.A.D. (computer-aided-design) system creates a very flexible tool for optical system analysis and evaluation. The program uses the CAD data-structure and user-friendly menus for creation, manipulation and visualization of the optical system. Furthermore, it is capable of dealing with problems which are impossible or difficult to handle by existing optical design programs, such as calculations of three-dimensional sensitivities, multiple reflections, multiple-surface apertures, specular stray radiation, image rotation and complex-prism design. It can also be used as an efficient tool for error-budget and error-analysis, and can be fully interfaced with a finite-elements analysis program, thus enabling the evaluation of the effects of mechanical or thermal loads on the optical performance.
Creation of Anatomically Accurate Computer-Aided Design (CAD) Solid Models from Medical Images
NASA Technical Reports Server (NTRS)
Stewart, John E.; Graham, R. Scott; Samareh, Jamshid A.; Oberlander, Eric J.; Broaddus, William C.
1999-01-01
Most surgical instrumentation and implants used in the world today are designed with sophisticated Computer-Aided Design (CAD)/Computer-Aided Manufacturing (CAM) software. This software automates the mechanical development of a product from its conceptual design through manufacturing. CAD software also provides a means of manipulating solid models prior to Finite Element Modeling (FEM). Few surgical products are designed in conjunction with accurate CAD models of human anatomy because of the difficulty with which these models are created. We have developed a novel technique that creates anatomically accurate, patient specific CAD solids from medical images in a matter of minutes.
Sriraam, N; Roopa, J; Saranya, M; Dhanalakshmi, M
2009-08-01
Recent advances in digital imaging technology have greatly enhanced the interpretation of critical/pathological conditions from 2-dimensional medical images. This has become realistic due to the existence of computer-aided diagnostic tools. A computer-aided diagnostic (CAD) tool generally possesses components such as preprocessing, identification/selection of a region of interest, extraction of typical features, and finally an efficient classification system. This paper describes the development of a CAD tool for the classification of chronic liver disease from 2-D images acquired with an ultrasonic device. Characterization of tissue through quantitative treatment leads to the detection of abnormalities that are not discernible through qualitative visual inspection by the radiologist. Common liver diseases are indicated by changes in tissue elasticity. Normal, fatty, or malignant conditions can be detected by applying the CAD tool, so that further investigation by the radiologist can be avoided. The proposed work involves an optimal block analysis (64 x 64) of the liver image of actual size 256 x 256, incorporating the Gabor wavelet transform, which performs the texture classification in an automated mode. Statistical features such as the gray-level mean and variance are estimated after this preprocessing stage. A non-linear back-propagation neural network (BPNN) is applied for classifying normal vs. fatty and normal vs. malignant liver, which yields a classification accuracy of 96.8%. Further multi-classification is also performed, and a classification accuracy of 94% is obtained. It can be concluded that the proposed CAD can be used as an expert system to aid the automated diagnosis of liver diseases. PMID:19697693
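A minimal sketch of the block-wise Gabor feature extraction described above, assuming a single filter orientation and illustrative kernel parameters (the paper's actual filter bank and settings are not specified in the abstract):

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(size=9, theta=0.0, wavelength=4.0, sigma=2.0):
    """Real part of a 2-D Gabor filter: Gaussian envelope times an
    oriented cosine carrier. Parameters are illustrative."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(
        2 * np.pi * xr / wavelength)

def block_features(image, block=64):
    """Per-block texture features: mean and variance of the Gabor
    response over each non-overlapping block-sized tile."""
    resp = fftconvolve(image, gabor_kernel(), mode="same")
    feats = []
    for i in range(0, image.shape[0], block):
        for j in range(0, image.shape[1], block):
            tile = resp[i:i + block, j:j + block]
            feats.append((tile.mean(), tile.var()))
    return feats

img = np.random.default_rng(1).random((256, 256))  # stand-in for a scan
feats = block_features(img)
print(len(feats))   # 4 x 4 = 16 blocks for a 256 x 256 image
```

In the paper, feature vectors like these would then be fed to the BPNN classifier; here a random image stands in for the ultrasound data.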
Computer Use and CAD in Assisting Schools in the Creation of Facilities.
ERIC Educational Resources Information Center
Beach, Robert H.; Essex, Nathan
1987-01-01
Computer-aided design (CAD) programs are powerful drafting tools, but are also able to assist with many other facility planning functions. Describes the hardware, software, and the learning process that led to understanding the CAD software at the University of Alabama. (MLF)
ERIC Educational Resources Information Center
Franken, Ken; And Others
A multidisciplinary research team was assembled to review existing computer-aided drafting (CAD) systems for the purpose of enabling staff in the Design Drafting Department at Linn Technical College (Missouri) to select the best system out of the many CAD systems in existence. During the initial stage of the evaluation project, researchers…
Prakashini, K; Babu, Satish; Rajgopal, KV; Kokila, K Raja
2016-01-01
Aims and Objectives: To determine the overall performance of an existing CAD algorithm with thin-section computed tomography (CT) in the detection of pulmonary nodules and to evaluate detection sensitivity across a range of nodule density, size, and location. Materials and Methods: A cross-sectional prospective study was conducted on 20 patients with 322 suspected nodules who underwent diagnostic chest imaging using 64-row multi-detector CT. The examinations were evaluated on reconstructed images of 1.4 mm thickness and 0.7 mm interval. Detection of pulmonary nodules, initially by a radiologist with 2 years of experience (RAD) and later by CAD lung nodule software, was assessed. Then, CAD nodule candidates were accepted or rejected accordingly. Detected nodules were classified based on their size, density, and location. The performance of the RAD and the CAD system was compared with the gold standard, that is, true nodules confirmed by consensus of the senior RAD and CAD together. The overall sensitivity and false-positive (FP) rate of the CAD software were calculated. Observations and Results: Of the 322 suspected nodules, 221 were classified as true nodules by the consensus of the senior RAD and CAD together. Of the true nodules, 206 (93.2%) were detected by the RAD and 202 (91.4%) by the CAD. CAD and RAD together picked up more nodules than either CAD or RAD alone. Overall sensitivity for nodule detection with the CAD program was 91.4%, and FP detection per patient was 5.5%. The CAD showed comparatively higher sensitivity for nodules of size 4–10 mm (93.4%) and for nodules in hilar (100%) and central (96.5%) locations when compared to the RAD's performance. Conclusion: CAD performance was high in detecting pulmonary nodules, including small and low-density nodules. CAD, even with a relatively high FP rate, assists and improves the RAD's performance as a second reader, especially for nodules located in the central and hilar regions and for small nodules, by saving the RAD's time.
ERIC Educational Resources Information Center
Goosen, Richard F.
2009-01-01
This study provides information for higher education leaders that have or are considering conducting Computer Aided Design (CAD) instruction using student owned notebook computers. Survey data were collected during the first 8 years of a pilot program requiring engineering technology students at a four year public university to acquire a notebook…
Surgical retained foreign object (RFO) prevention by computer aided detection (CAD)
NASA Astrophysics Data System (ADS)
Marentis, Theodore C.; Hadjiiyski, Lubomir; Chaudhury, Amrita R.; Rondon, Lucas; Chronis, Nikolaos; Chan, Heang-Ping
2014-03-01
Surgical Retained Foreign Objects (RFOs) cause significant morbidity and mortality. They are associated with $1.5 billion annually in preventable medical costs. The detection accuracy of radiographs for RFOs is a mediocre 59%. We address the RFO problem with two complementary technologies: a three-dimensional (3D) Gossypiboma Micro Tag (μTag) that improves the visibility of RFOs on radiographs, and a Computer Aided Detection (CAD) system that detects the μTag. The 3D geometry of the μTag produces a similar 2D depiction on radiographs regardless of its orientation in the human body and ensures accurate detection by a radiologist and the CAD. We create a database of cadaveric radiographs with the μTag and other common man-made objects positioned randomly. We develop the CAD modules that include preprocessing, μTag enhancement, labeling, segmentation, feature analysis, classification and detection. The CAD can operate in a high-specificity mode for the surgeon to allow for seamless workflow integration and function as a first reader. The CAD can also operate in a high-sensitivity mode for the radiologist to ensure accurate detection. On a data set of 346 cadaveric radiographs, the CAD system performed at a high specificity (85.5% sensitivity, 0.02 FPs/image) for the OR and a high sensitivity (96% sensitivity, 0.73 FPs/image) for the radiologists.
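The two operating modes amount to choosing different decision thresholds on the CAD classifier's scores. The sketch below uses synthetic scores and assumed target sensitivities to show how a high-specificity (OR) cutoff and a high-sensitivity (radiologist) cutoff could be derived from the same classifier; it is not the authors' method.

```python
import numpy as np

def pick_threshold(scores, labels, min_sensitivity):
    """Highest score cutoff that still detects the requested fraction
    of true positives; a stricter cutoff yields fewer false positives."""
    pos = np.sort(scores[labels == 1])[::-1]          # descending
    k = int(np.ceil(min_sensitivity * len(pos)))
    return pos[k - 1]

# Synthetic classifier scores: negatives near 0, positives near 3.
rng = np.random.default_rng(2)
scores = np.concatenate([rng.normal(0.0, 1.0, 500),
                         rng.normal(3.0, 1.0, 100)])
labels = np.concatenate([np.zeros(500), np.ones(100)])

t_or = pick_threshold(scores, labels, 0.85)    # high-specificity mode
t_rad = pick_threshold(scores, labels, 0.96)   # high-sensitivity mode
print(t_or > t_rad)   # the stricter OR mode uses the higher cutoff
```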
Computationally efficient multibody simulations
NASA Technical Reports Server (NTRS)
Ramakrishnan, Jayant; Kumar, Manoj
1994-01-01
Computationally efficient approaches to the solution of the dynamics of multibody systems are presented in this work. The computational efficiency is derived from both the algorithmic and implementational standpoint. Order(n) approaches provide a new formulation of the equations of motion eliminating the assembly and numerical inversion of a system mass matrix as required by conventional algorithms. Computational efficiency is also gained in the implementation phase by the symbolic processing and parallel implementation of these equations. Comparison of this algorithm with existing multibody simulation programs illustrates the increased computational efficiency.
Adjoint Sensitivity Computations for an Embedded-Boundary Cartesian Mesh Method and CAD Geometry
NASA Technical Reports Server (NTRS)
Nemec, Marian; Aftosmis, Michael J.
2006-01-01
Cartesian-mesh methods are perhaps the most promising approach for addressing the issues of flow solution automation for aerodynamic design problems. In these methods, the discretization of the wetted surface is decoupled from that of the volume mesh. This not only enables fast and robust mesh generation for geometry of arbitrary complexity, but also facilitates access to geometry modeling and manipulation using parametric Computer-Aided Design (CAD) tools. Our goal is to combine the automation capabilities of Cartesian methods with an efficient computation of design sensitivities. We address this issue using the adjoint method, where the computational cost of the design sensitivities, or objective function gradients, is essentially independent of the number of design variables. In previous work, we presented an accurate and efficient algorithm for the solution of the adjoint Euler equations discretized on Cartesian meshes with embedded, cut-cell boundaries. Novel aspects of the algorithm included the computation of surface shape sensitivities for triangulations based on parametric-CAD models and the linearization of the coupling between the surface triangulation and the cut-cells. The objective of the present work is to extend our adjoint formulation to problems involving general shape changes. Central to this development is the computation of volume-mesh sensitivities to obtain a reliable approximation of the objective function gradient. Motivated by the success of mesh-perturbation schemes commonly used in body-fitted unstructured formulations, we propose an approach based on a local linearization of a mesh-perturbation scheme similar to the spring analogy. This approach circumvents most of the difficulties that arise due to non-smooth changes in the cut-cell layer as the boundary shape evolves and provides a consistent approximation to the exact gradient of the discretized objective function. A detailed gradient accuracy study is presented to verify our approach.
Web-based computer-aided-diagnosis (CAD) system for bone age assessment (BAA) of children
NASA Astrophysics Data System (ADS)
Zhang, Aifeng; Uyeda, Joshua; Tsao, Sinchai; Ma, Kevin; Vachon, Linda A.; Liu, Brent J.; Huang, H. K.
2008-03-01
Bone age assessment (BAA) of children is a clinical procedure frequently performed in pediatric radiology to evaluate the stage of skeletal maturation based on a left hand and wrist radiograph. The most commonly used standard, the Greulich and Pyle (G&P) Hand Atlas, was developed 50 years ago and is based exclusively on a Caucasian population. Moreover, inter- and intra-observer discrepancies using this method create the need for an objective and automatic BAA method. A digital hand atlas (DHA) has been collected with 1,400 hand images of normal children of Asian, African American, Caucasian, and Hispanic descent. Based on the DHA, a fully automatic, objective computer-aided-diagnosis (CAD) method was developed and adapted to specific populations. To bring the DHA and CAD method to the clinical environment as a useful tool for assisting radiologists to achieve higher accuracy in BAA, a web-based system with a direct connection to a clinical site is designed as a novel clinical implementation approach for online and real-time BAA. The core of the system, a CAD server, receives the image from the clinical site, processes it with the CAD method, and finally generates a report. A web service publishes the results, and radiologists at the clinical site can review them online within minutes. This prototype can be easily extended to multiple clinical sites and will provide the foundation for broader use of the CAD system for BAA.
NASA Technical Reports Server (NTRS)
Smith, Leigh M.; Parker, Nelson C. (Technical Monitor)
2002-01-01
This paper analyzes the use of Computer Aided Design (CAD) packages at NASA's Marshall Space Flight Center (MSFC). It examines the effectiveness of recent efforts to standardize CAD practices across MSFC engineering activities. An assessment of the roles played by management, designers, analysts, and manufacturers in this initiative will be explored. Finally, solutions are presented for better integration of CAD across MSFC in the future.
3D object optonumerical acquisition methods for CAD/CAM and computer graphics systems
NASA Astrophysics Data System (ADS)
Sitnik, Robert; Kujawinska, Malgorzata; Pawlowski, Michal E.; Woznicki, Jerzy M.
1999-08-01
The creation of a virtual object for CAD/CAM and computer graphics on the basis of data gathered by full-field optical measurement of a 3D object is presented. The experimental coordinates are alternatively obtained by a combined fringe projection/photogrammetry based system or a fringe projection/virtual markers setup. A new and fully automatic procedure that processes the cloud of measured points into a triangular mesh accepted by CAD/CAM and computer graphics systems is presented. Its applicability to various classes of objects is tested, including an error analysis of the generated virtual objects. The usefulness of the method is proved by applying the virtual object in a rapid prototyping system and in a computer graphics environment.
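The point-cloud-to-mesh step can be illustrated with an off-the-shelf Delaunay triangulation (a 2-D stand-in here; the paper's procedure handles full 3-D surface data and is not reproduced by this sketch):

```python
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical measured point cloud (2-D projection for simplicity).
points = np.random.default_rng(3).random((30, 2))
mesh = Delaunay(points)           # connect the points into triangles
print(mesh.simplices.shape[1])    # 3 vertex indices per triangle
```

The resulting vertex/triangle lists are the same primitives that CAD/CAM and graphics formats such as STL consume.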
NASA Astrophysics Data System (ADS)
Choi, Jae Young; Kim, Dae Hoe; Plataniotis, Konstantinos N.; Ro, Yong Man
2014-07-01
We propose a novel computer-aided detection (CAD) framework for breast masses in mammography. To increase detection sensitivity for various types of mammographic masses, we propose the combined use of different detection algorithms. In particular, we develop a region-of-interest combination mechanism that integrates detection information gained from unsupervised and supervised detection algorithms. Also, to significantly reduce the number of false-positive (FP) detections, a new ensemble classification algorithm is developed. Extensive experiments have been conducted on a benchmark mammogram database. Results show that our combined detection approach can considerably improve the detection sensitivity with a small loss in FP rate, compared to representative detection algorithms previously developed for mammographic CAD systems. The proposed ensemble classification solution also has a dramatic impact on the reduction of FP detections: as much as 70% (from 15 to 4.5 per image) at a cost of only a 4.6% loss in sensitivity (from 90.0% to 85.4%). Moreover, our proposed CAD method performs as well as or better than mammography CAD algorithms previously reported in the literature (70.7% and 80.0% sensitivity at 1.5 and 3.5 FPs per image, respectively).
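The region-of-interest combination mechanism is not detailed in the abstract; a plausible sketch is to take the union of both detectors' candidates while merging boxes that overlap above an intersection-over-union threshold (the threshold and box format here are assumptions, not the authors' design):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def combine_rois(unsup, sup, thresh=0.3):
    """Union of both detectors' candidates, merging pairs that overlap
    above `thresh` so each finding is reported only once."""
    merged = list(unsup)
    for box in sup:
        if all(iou(box, m) < thresh for m in merged):
            merged.append(box)
    return merged

u = [(10, 10, 50, 50)]                              # unsupervised hits
s = [(12, 12, 48, 52), (100, 100, 140, 140)]        # supervised hits
print(len(combine_rois(u, s)))                      # 2 distinct findings
```

The merged candidate list would then go to the ensemble classifier for FP reduction.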
NASA Technical Reports Server (NTRS)
Benyo, Theresa L.
2002-01-01
Integration of a supersonic inlet simulation with a computer-aided design (CAD) system is demonstrated. The integration is performed using the Project Integration Architecture (PIA). PIA provides a common environment for wrapping many types of applications. Accessing geometry data from CAD files is accomplished by incorporating appropriate function calls from the Computational Analysis Programming Interface (CAPRI). CAPRI is a CAD-vendor-neutral programming interface that aids in acquiring geometry data directly from CAD files. The benefits of wrapping a supersonic inlet simulation into PIA using CAPRI are: direct access to geometry data, accurate capture of geometry data, automatic conversion of data units, CAD-vendor-neutral operation, and on-line interactive history capture. This paper describes the PIA and the CAPRI wrapper and details the supersonic inlet simulation demonstration.
CAD-based graphical computer simulation in endoscopic surgery.
Kuehnapfel, U G; Neisius, B
1993-06-01
This article presents new techniques for three-dimensional, kinematic realtime simulation of dextrous endoscopic instruments. The integrated simulation package KISMET is used for engineering design verification and evaluation. Geometric and kinematic computer models of the mechanisms and the laparoscopic workspace were created. Using the advanced capabilities of high-performance graphical workstations combined with state-of-the-art simulation software, it is possible to generate displays of the surgical instruments acting realistically on the organs of the digestive system. The organ geometry is modelled with a high degree of detail. Apart from discussing the use of KISMET for the development of MFM-II (Modular Flexible MIS Instrument, Release II), the paper indicates further applications of realtime 3D graphical simulation methods in endoscopic surgery. PMID:8055320
Task analysis for computer-aided design (CAD) at a keystroke level.
Chi, C F; Chung, K L
1996-08-01
The purpose of this research was to develop a new model to describe and predict a computerized task. AutoCAD was utilized as the experimental tool to collect operating procedure and time data at a keystroke level for a computer aided design (CAD) task. Six undergraduate students participated in the experiment. They were required to complete one simple and one complex engineering drawing. A model which characterized the task performance by software commands and predicted task execution time using keystroke-level model operators was proposed and applied to the analysis of the dialogue data. This task parameter model adopted software commands, e.g. LINE, OFFSET in AutoCAD, to describe the function of a task unit and used up to five parameters to indicate the number of keystrokes, chosen function for a command and ways of starting and ending a command. Each task unit in the task parameter model can be replaced by a number of primitive operators as in the keystroke level model to predict the task execution time. The observed task execution times of all task units were found to be highly correlated with the task execution times predicted by the keystroke level model. Therefore, the task parameter model was proved to be a usable analytical tool for evaluating the human-computer interface (HCI). PMID:15677066
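The prediction step of the keystroke-level model described above can be sketched as a simple sum of primitive operator times. The operator times below are the classic published KLM estimates, and the per-command operator sequence for an AutoCAD LINE command is a hypothetical example, not the study's coded dialogue data.

```python
# Classic keystroke-level model (KLM) operator times in seconds
# (Card/Moran/Newell estimates; illustrative, not the study's values).
KLM_TIMES = {
    "K": 0.28,  # keystroke (average-skill typist)
    "P": 1.10,  # point with the mouse
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def predict_time(operators):
    """Predicted execution time for a sequence of KLM operators."""
    return sum(KLM_TIMES[op] for op in operators)

# Hypothetical LINE command: mental prep, type "LINE" + Enter (5 keystrokes),
# home to the mouse, then point and click two endpoints.
line_cmd = ["M", "K", "K", "K", "K", "K", "H", "P", "K", "P", "K"]
print(round(predict_time(line_cmd), 2))
```

Each task unit in the paper's task parameter model would be expanded into such an operator sequence before summing.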
Computer-Aided Diagnostic (CAD) Scheme by Use of Contralateral Subtraction Technique
NASA Astrophysics Data System (ADS)
Nagashima, Hiroyuki; Harakawa, Tetsumi
We developed a computer-aided diagnostic (CAD) scheme for the detection of subtle image findings of acute cerebral infarction in brain computed tomography (CT) using a contralateral subtraction technique. In our computerized scheme, the lateral inclination of the image was first corrected automatically by rotating and shifting. The contralateral subtraction image was then derived by subtracting the reversed image from the original image. Initial candidates for acute cerebral infarctions were identified using multiple-thresholding and image-filtering techniques. As the first step in removing false-positive candidates, fourteen image features were extracted for each of the initial candidates. Halfway candidates were detected by applying a rule-based test with these image features. In the second step, five image features were extracted using the overlapping scale between halfway candidates in the slice of interest and the upper/lower slice images. Finally, acute cerebral infarction candidates were detected by applying a rule-based test with these five image features. The sensitivity of detection for the 74 training cases was 97.4%, with 3.7 false positives per image. The performance of the CAD scheme on the 44 testing cases was comparable to that on the training cases. Our CAD scheme using the contralateral subtraction technique can reveal suspected image findings of acute cerebral infarctions in CT images.
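The contralateral subtraction step itself is simple to sketch: mirror the inclination-corrected slice about its midline, subtract, and threshold the asymmetry. The toy array and the scalar threshold below are illustrative assumptions, not the scheme's actual parameters.

```python
import numpy as np

def contralateral_subtraction(slice_img):
    """Subtract the left-right mirrored image from the original slice."""
    return slice_img - slice_img[:, ::-1]

def initial_candidates(slice_img, threshold=10.0):
    """Binary mask of pixels whose left-right asymmetry exceeds a
    threshold (a scalar stands in for the paper's multiple thresholds)."""
    return np.abs(contralateral_subtraction(slice_img)) > threshold

# Toy 1x6 "slice": a dense region on the left with no mirror counterpart.
img = np.array([[40.0, 40.0, 20.0, 20.0, 20.0, 20.0]])
print(initial_candidates(img).astype(int))
```

Note that an asymmetric region flags both itself and its mirror location; the paper's rule-based feature tests are what winnow such raw candidates down.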
ERIC Educational Resources Information Center
Chester, Ivan
2007-01-01
CAD (Computer Aided Design) has now become an integral part of Technology Education. The recent introduction of highly sophisticated, low-cost CAD software and CAM hardware capable of running on desktop computers has accelerated this trend. There is now quite widespread introduction of solid modeling CAD software into secondary schools but how…
Computationally efficient Bayesian tracking
NASA Astrophysics Data System (ADS)
Aughenbaugh, Jason; La Cour, Brian
2012-06-01
In this paper, we describe the progress we have achieved in developing a computationally efficient, grid-based Bayesian fusion tracking system. In our approach, the probability surface is represented by a collection of multidimensional polynomials, each computed adaptively on a grid of cells representing state space. Time evolution is performed using a hybrid particle/grid approach and knowledge of the grid structure, while sensor updates use a measurement-based sampling method with a Delaunay triangulation. We present an application of this system to the problem of tracking a submarine target using a field of active and passive sonar buoys.
Computationally efficient control allocation
NASA Technical Reports Server (NTRS)
Durham, Wayne (Inventor)
2001-01-01
A computationally efficient method for calculating near-optimal solutions to the three-objective, linear control allocation problem is disclosed. The control allocation problem is that of distributing the effort of redundant control effectors to achieve some desired set of objectives. The problem is deemed linear if control effectiveness is affine with respect to the individual control effectors. The optimal solution is that which exploits the collective maximum capability of the effectors within their individual physical limits. Computational efficiency is measured by the number of floating-point operations required for solution. The method presented returned optimal solutions in more than 90% of the cases examined; non-optimal solutions returned by the method were typically much less than 1% different from optimal, and the errors tended to become smaller than 0.01% as the number of controls was increased. The magnitude of the errors returned by the present method was much smaller than those that resulted from either pseudoinverse or cascaded generalized inverse solutions. The computational complexity of the method presented varied linearly with increasing numbers of controls; the number of required floating-point operations increased from 5.5 to seven times faster than did the minimum-norm solution (the pseudoinverse), and at about the same rate as did the cascaded generalized inverse solution. The computational requirements of the method presented were much better than those of previously described facet-searching methods, which increase in proportion to the square of the number of controls.
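The patented method itself is not reproduced here, but the pseudoinverse baseline it is compared against is easy to sketch: find the minimum-norm effector deflections u satisfying B u = m, then saturate them to the physical limits. The effectiveness matrix B, the commanded moments, and the limits below are made-up illustrative values.

```python
import numpy as np

def pseudoinverse_allocate(B, m_desired, u_min, u_max):
    """Minimum-norm allocation via the Moore-Penrose pseudoinverse,
    clipped to the individual effector limits."""
    u = np.linalg.pinv(B) @ m_desired
    return np.clip(u, u_min, u_max)

# 3 moment axes, 4 redundant effectors (hypothetical effectiveness matrix).
B = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0],
              [1.0, 1.0, -1.0, -1.0]])
m = np.array([0.5, 0.2, 0.1])
u = pseudoinverse_allocate(B, m, u_min=-1.0, u_max=1.0)
print(np.allclose(B @ u, m))  # this small command is attainable, so True
```

As the abstract notes, such minimum-norm solutions do not exploit the full attainable moment set near the limits, which is where facet-searching and the disclosed near-optimal method differ from them.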
Selection and implementation of a computer aided design and drafting (CAD/D) system
Davis, J.P.
1981-01-01
Faced with very heavy workloads and limited engineering and graphics personnel, Transco opted for a computer-aided design and drafting system that can produce intelligent drawings, which have associated data bases that can be integrated with other graphical and nongraphical data bases to form comprehensive sets of data for construction projects. Because so much time was being spent in all phases of materials and inventory control, Transco decided to integrate materials-management capabilities into the CAD/D system. When a specific item of material is requested on the graphics equipment, the request triggers production of both the drawing and a materials list. Transco plans to extend its computer applications into mapping tasks as well.
Computer-Aided Design/Manufacturing (CAD/M) for high-speed interconnect
NASA Astrophysics Data System (ADS)
Santoski, N. F.
1981-10-01
The objective of the Computer-Aided Design/Manufacturing (CAD/M) for High-Speed Interconnect Program study was to assess techniques for design, analysis and fabrication of interconnect structures between high-speed logic ICs that are clocked in the 200 MHz to 5 GHz range. Interconnect structure models were investigated and integrated with existing device models. Design rules for interconnects were developed in terms of parameters that can be installed in software that is used for the design, analysis and fabrication of circuits. To implement these design rules in future software development, algorithms and software development techniques were defined. Major emphasis was on Printed Wiring Board and hybrid level circuits as opposed to monolithic chips. Various packaging schemes were considered, including controlled impedance lines in the 50 to 200 ohms range where needed. The design rules developed are generic in nature, in that various architecture classes and device technologies were considered.
NASA Astrophysics Data System (ADS)
Suh, Joohyung; Ma, Kevin; Le, Anh
2011-03-01
Multiple sclerosis (MS) is a disease caused by damage to the myelin around axons of the brain and spinal cord. Currently, MR imaging is used for diagnosis, but it is highly variable and time-consuming since lesion detection and estimation of lesion volume are performed manually. For this reason, we developed a CAD (computer-aided diagnosis) system to assist segmentation of MS lesions and facilitate the physician's diagnosis. The MS CAD system utilizes a k-NN (k-nearest neighbor) algorithm to detect and segment lesion volumes on a per-voxel basis. The prototype MS CAD system was developed in the MATLAB environment. Currently, the MS CAD system consumes a huge amount of time to process data. In this paper we present the development of a second version of the MS CAD system, converted into C/C++ in order to take advantage of the GPU (graphics processing unit), which provides parallel computation. With the C/C++ implementation and GPU utilization, we expect to cut running time drastically. The paper investigates the conversion from MATLAB to C/C++ and the utilization of a high-end GPU for parallel computation to improve the performance of the MS CAD algorithm.
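The voxel-wise k-NN classification at the heart of the system can be sketched as below: each voxel gets a small feature vector and is labeled lesion or non-lesion by majority vote of its k nearest labeled training voxels. The two-component feature vector and the toy data are assumptions for illustration, not the system's actual features or parameters.

```python
import numpy as np

def knn_classify(train_X, train_y, query_X, k=3):
    """Majority-vote k-NN on Euclidean distance (labels are 0/1,
    so the rounded mean of the k nearest labels is the majority)."""
    labels = []
    for q in query_X:
        d = np.linalg.norm(train_X - q, axis=1)
        nearest = train_y[np.argsort(d)[:k]]
        labels.append(int(np.round(nearest.mean())))
    return np.array(labels)

# Training voxels: [intensity, local mean]; 1 = lesion, 0 = normal tissue.
train_X = np.array([[0.90, 0.80], [0.85, 0.90], [0.80, 0.85],
                    [0.20, 0.10], [0.10, 0.20], [0.15, 0.15]])
train_y = np.array([1, 1, 1, 0, 0, 0])
query = np.array([[0.88, 0.82], [0.12, 0.18]])
print(knn_classify(train_X, train_y, query))
```

The brute-force distance loop here is exactly the kind of per-voxel work that parallelizes well on a GPU, which motivates the C/C++ port described in the abstract.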
The computation of all plane/surface intersections for CAD/CAM applications
NASA Technical Reports Server (NTRS)
Hoitsma, D. H., Jr.; Roche, M.
1984-01-01
The problem of the computation and display of all intersections of a given plane with a rational bicubic surface patch for use on an interactive CAD/CAM system is examined. The general problem of calculating all intersections of a plane and a surface consisting of rational bicubic patches is reduced to the case of a single generic patch by applying a rejection algorithm which excludes all patches that do not intersect the plane. For each pertinent patch, the algorithm presented computes the intersection curves by locating an initial point on each curve and computing successive points on the curve using a tolerance step equation. A single cubic equation solver is used to compute the initial curve points lying on the boundary of a surface patch, and the method of resultants as applied to curve theory is used to determine critical points which, in turn, are used to locate initial points that lie on intersection curves in the interior of the patch. Examples are given to illustrate the ability of this algorithm to produce all intersection curves.
ERIC Educational Resources Information Center
Snyder, Nancy V.
North Seattle Community College decided to integrate computer-aided design/drafting (CAD/D) into its Electro-Mechanical Drafting Program. This choice necessitated a redefinition of the program through new curriculum and course development. To initiate the project, a new industrial advisory council was formed. Major electronic and recruiting firms…
A computer-aided detection (CAD) system with a 3D algorithm for small acute intracranial hemorrhage
NASA Astrophysics Data System (ADS)
Wang, Ximing; Fernandez, James; Deshpande, Ruchi; Lee, Joon K.; Chan, Tao; Liu, Brent
2012-02-01
Acute intracranial hemorrhage (AIH) requires urgent diagnosis in the emergency setting to mitigate eventual sequelae. However, experienced radiologists may not always be available to make a timely diagnosis. This is especially true for small AIH, defined as lesions smaller than 10 mm in size. A computer-aided detection (CAD) system for the detection of small AIH would facilitate timely diagnosis. A previously developed 2D algorithm showed high false-positive rates in an evaluation based on LAC/USC cases, due to the limitation of setting up a correct coordinate system for the knowledge-based classification system. To achieve higher sensitivity and specificity, a new 3D algorithm was developed. The algorithm utilizes a top-hat transformation and a dynamic threshold map to detect small AIH lesions. Several key structures of the brain are detected and used to set up a 3D anatomical coordinate system. A rule-based classification of the detected lesions is applied based on the anatomical coordinate system. For convenient evaluation in the clinical environment, the CAD module is integrated with a stand-alone system. The CAD is evaluated with small AIH cases and matched normal cases collected at LAC/USC. The results of the 3D CAD and the previous 2D CAD are compared.
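The white top-hat transformation the algorithm relies on subtracts a morphological opening from the image, so that only structures narrower than the structuring element survive, which is well suited to small, bright lesions. A minimal 1D sketch with a hand-rolled opening is shown below; the structuring-element size, the fixed threshold (standing in for the paper's dynamic threshold map), and the toy data are illustrative assumptions.

```python
import numpy as np

def grey_opening_1d(x, size):
    """Morphological opening: sliding-window minimum (erosion)
    followed by sliding-window maximum (dilation)."""
    pad = size // 2
    xp = np.pad(x, pad, mode="edge")
    eroded = np.array([xp[i:i + size].min() for i in range(len(x))])
    ep = np.pad(eroded, pad, mode="edge")
    return np.array([ep[i:i + size].max() for i in range(len(x))])

def white_tophat_1d(x, size=3):
    """Original minus its opening: keeps only structures narrower
    than the structuring element."""
    return x - grey_opening_1d(x, size)

narrow = np.array([0.0, 0.0, 0.0, 10.0, 0.0, 0.0, 0.0])  # small bright lesion
broad = np.array([0.0, 8.0, 8.0, 8.0, 8.0, 8.0, 0.0])    # wide bright structure
print((white_tophat_1d(narrow) > 5).astype(int))
print((white_tophat_1d(broad) > 5).astype(int))
```

The narrow bump passes the threshold while the broad structure is suppressed entirely, which is the selectivity that makes the transform useful for sub-10-mm lesions.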
NASA Astrophysics Data System (ADS)
Damyanova, M.; Sabchevski, S.; Zhelyazkov, I.; Vasileva, E.; Balabanova, E.; Dankov, P.; Malinov, P.
2016-03-01
Gyrotrons are the most powerful sources of coherent CW (continuous-wave) radiation in the frequency range between the long-wavelength edge of the infrared light (far-infrared region) and the microwaves, i.e., in the region of the electromagnetic spectrum usually called the THz gap (or T gap), since the output power of other devices (e.g., solid-state oscillators) operating in this interval is several orders of magnitude lower. In recent years, the unique capabilities of sub-THz and THz gyrotrons have opened the way to many novel and prospective future applications in various physical studies and advanced high-power terahertz technologies. In this paper, we present the current status and functionality of the problem-oriented software packages (most notably GYROSIM and GYREOSS) used for numerical studies, computer-aided design (CAD) and optimization of gyrotrons for diverse applications. They consist of a hierarchy of codes specialized for the modelling and simulation of different subsystems of the gyrotrons (EOS, resonant cavity, etc.) and are based on adequate physical models, efficient numerical methods and algorithms.
ERIC Educational Resources Information Center
Domermuth, Dave; And Others
1996-01-01
Includes "Quick Start CNC (computer numerical control) with a Vacuum Filter and Laminated Plastic" (Domermuth); "School and Industry Cooperate for Mutual Benefit" (Buckler); and "CAD (computer-assisted drafting) Careers--What Professionals Have to Say" (Skinner). (JOW)
Efficient universal blind quantum computation.
Giovannetti, Vittorio; Maccone, Lorenzo; Morimae, Tomoyuki; Rudolph, Terry G
2013-12-01
We give a cheat sensitive protocol for blind universal quantum computation that is efficient in terms of computational and communication resources: it allows one party to perform an arbitrary computation on a second party's quantum computer without revealing either which computation is performed, or its input and output. The first party's computational capabilities can be extremely limited: she must only be able to create and measure single-qubit superposition states. The second party is not required to use measurement-based quantum computation. The protocol requires the (optimal) exchange of O(Jlog2(N)) single-qubit states, where J is the computational depth and N is the number of qubits needed for the computation. PMID:24476238
NASA Technical Reports Server (NTRS)
Rasmussen, John
1990-01-01
Structural optimization has attracted attention since the days of Galileo. Olhoff and Taylor have produced an excellent overview of the classical research within this field. However, interest in structural optimization has increased greatly during the last decade due to the advent of reliable general numerical analysis methods and the computer power necessary to use them efficiently. This has created the possibility of developing general numerical systems for shape optimization. Several authors, e.g., Esping; Braibant and Fleury; Bennet and Botkin; Botkin, Yang, and Bennet; and Stanton, have published practical and successful applications of general optimization systems. Ding and Homlein have produced extensive overviews of available systems. Furthermore, a number of commercial optimization systems based on well-established finite element codes have been introduced. Systems like ANSYS, IDEAS, OASIS, and NISAOPT are widely known examples. In parallel to this development, the technology of computer-aided design (CAD) has gained a large influence on the design process in mechanical engineering. CAD technology has already undergone rapid development driven by the drastically growing capabilities of digital computers. However, the systems of today are still considered only the first generation of a long line of computer-integrated manufacturing (CIM) systems. These systems to come will offer an integrated environment for design, analysis, and fabrication of products of almost any character. Thus, the CAD system can be regarded as simply a database for geometrical information equipped with a number of tools to help the user in the design process. Among these tools are facilities for structural analysis and optimization, as well as present standard CAD features like drawing, modeling, and visualization tools. The state of the art of structural optimization is that a large amount of mathematical and mechanical techniques are
CAD/CAE Integration Enhanced by New CAD Services Standard
NASA Technical Reports Server (NTRS)
Claus, Russell W.
2002-01-01
A Government-industry team led by the NASA Glenn Research Center has developed a computer interface standard for accessing data from computer-aided design (CAD) systems. The Object Management Group, an international computer standards organization, has adopted this CAD services standard. The new standard allows software (e.g., computer-aided engineering (CAE) and computer-aided manufacturing (CAM) software) to access multiple CAD systems through one programming interface. The interface is built on top of a distributed computing system called the Common Object Request Broker Architecture (CORBA). CORBA allows the CAD services software to operate in a distributed, heterogeneous computing environment.
Development of a full body CAD dataset for computational modeling: a multi-modality approach.
Gayzik, F S; Moreno, D P; Geer, C P; Wuertzer, S D; Martin, R S; Stitzel, J D
2011-10-01
The objective of this study was to develop full-body CAD geometry of a seated 50th-percentile male. Model development was based on medical image data acquired for this study, in conjunction with extensive data from the open literature. An individual (height 174.9 cm, weight 78.6 ± 0.77 kg, age 26 years) was enrolled in the study for a period of 4 months. 72 scans across three imaging modalities (CT, MRI, and upright MRI) were collected. The whole-body dataset contains 15,622 images. Over 300 individual components representing human anatomy were generated through segmentation. While the enrolled individual served as a template, segmented data were verified against, or augmented with, data from over 75 literature sources on the average morphology of the human body. Non-Uniform Rational B-Spline (NURBS) surfaces with tangential (G1) continuity were constructed over all the segmented data. The sagittally symmetric model consists of 418 individual components representing bones, muscles, organs, blood vessels, ligaments, tendons, cartilaginous structures, and skin. Lengths, surface areas, and volumes of components germane to crash injury prediction are presented. The total volume (75.7 × 10^3 cm^3) and surface area (1.86 × 10^4 cm^2) of the model closely agree with the literature data. The geometry is intended for subsequent use in nonlinear dynamics solvers, and serves as the foundation of a global effort to develop the next-generation computational human body model for injury prediction and prevention. PMID:21785882
Computer Graphic Design Using Auto-CAD and Plug Nozzle Research
NASA Technical Reports Server (NTRS)
Rogers, Rayna C.
2004-01-01
The purposes of creating computer-generated images vary widely. They can be used for computational fluid dynamics (CFD) or as blueprints for designing parts. The schematic I will be working on this summer will be used to create nozzles that are part of a larger system. At this phase of the project, the nozzles needed for the systems have been fabricated. One part of my mission is to create both three-dimensional and two-dimensional models of the nozzles in Auto-CAD 2002. The research on plug nozzles will allow me to better understand how they provide the thrust needed for a missile to take off. NASA and the United States military are working together to develop a new design concept. On most missiles a convergent-divergent nozzle is used to create thrust; however, the two are looking into different concepts for the nozzle. The standard convergent-divergent nozzle forces a mixture of combustible fluids and air through an area smaller than the one in which the combination was mixed. Once it passes through this smaller area, known as A8, it exits the end of the nozzle through a larger area, A9. This creates enough thrust for the mechanism, whether it is an F-18 fighter jet or a missile. The A9 section of the convergent-divergent nozzle has a mechanism that controls how large A9 can be. This is needed because the pressure of the air coming out of the nozzle must equal the ambient pressure; otherwise there will be a loss of performance in the machine. The plug nozzle, however, does not need an A9 that can vary. When the airflow comes out, it automatically adjusts to the ambient pressure. The objective of this design is to create a plug nozzle that is not as mechanically complicated as its counterpart, the convergent-divergent nozzle.
Sperling, R.B.
1993-03-19
Value Engineering (VE) and Computer-Aided Design (CAD) can be used synergistically to reduce costs and improve facility designs. The cost and schedule impacts of implementing alternative design ideas developed by VE teams can be greatly reduced when the drawings have been produced with interactive CAD systems. To better understand the interrelationship between VE and CAD, the fundamentals of the VE process are explained; an example of a VE proposal is described, and the way CAD drawings facilitated its implementation is illustrated.
A computational investigation on radiation damage and activation of structural material for C-ADS
NASA Astrophysics Data System (ADS)
Liang, Tairan; Shen, Fei; Yin, Wen; Yu, Quanzhi; Liang, Tianjiao
2015-11-01
The C-ADS (China Accelerator-Driven Subcritical System) project, which aims at transmuting high-level radiotoxic waste (HLW) and power generation, is now in the research and development stage. In this paper, a simplified ADS model is set up based on the IAEA Th-ADS benchmark calculation model, then the radiation damage as well as the residual radioactivity of the structural material are estimated using the Monte Carlo simulation method. The peak displacement production rate, gas productions, activity and residual dose rate of the structural components like beam window and outer casing of subcritical reactor core are calculated. The calculation methods and the corresponding results provide the basic reference for making reasonable predictions for the lifetime and maintenance operations of the structural material of C-ADS.
Randolph Schwarz; Leland L. Carter; Alysia Schwarz
2005-08-23
Monte Carlo N-Particle Transport Code (MCNP) is the code of choice for doing complex neutron/photon/electron transport calculations for the nuclear industry and research institutions. The Visual Editor for Monte Carlo N-Particle is internationally recognized as the best code for visually creating and graphically displaying input files for MCNP. The work performed in this grant was used to enhance the capabilities of the MCNP Visual Editor to allow it to read in both 2D and 3D Computer Aided Design (CAD) files, allowing the user to electronically generate a valid MCNP input geometry.
Burns, T.J.
1994-03-01
An X Window application capable of importing geometric information directly from two Computer Aided Design (CAD) based formats for use in radiation transport and shielding analyses is being developed at ORNL. The application permits the user to graphically view the geometric models imported from the two formats for verification and debugging. Previous models, specifically formatted for the radiation transport and shielding codes, can also be imported. Required extensions to the existing combinatorial geometry analysis routines are discussed. Examples illustrating the various options and features that will be implemented in the application are presented. The use of the application as a visualization tool for the output of the radiation transport codes is also discussed.
Evaluation of Five Microcomputer CAD Packages.
ERIC Educational Resources Information Center
Leach, James A.
1987-01-01
Discusses the similarities, differences, advanced features, applications and number of users of five microcomputer computer-aided design (CAD) packages. Included are: "AutoCAD (V.2.17)"; "CADKEY (V.2.0)"; "CADVANCE (V.1.0)"; "Super MicroCAD"; and "VersaCAD Advanced (V.4.00)." Describes the evaluation of the packages and makes recommendations for…
Efficient computation of NACT seismograms
NASA Astrophysics Data System (ADS)
Zheng, Z.; Romanowicz, B. A.
2009-12-01
We present a modification to the NACT formalism (Li and Romanowicz, 1995) for computing synthetic seismograms and sensitivity kernels in global seismology. In the NACT theory, the perturbed seismogram consists of an along-branch coupling term, which is computed under the well-known PAVA approximation (e.g. Woodhouse and Dziewonski, 1984), and an across-branch coupling term, which is computed under the linear Born approximation. In the classical formalism, the Born part is obtained by a double summation over all pairs of coupling modes, where the numerical cost grows as (number of sources * number of receivers) * (corner frequency)^4. Here, however, by adapting the approach of Capdeville (2005), we are able to separate the computation into two single summations, responsible for the “source to scatterer” and the “scatterer to receiver” contributions, respectively. As a result, the numerical cost of the new scheme grows as (number of sources + number of receivers) * (corner frequency)^2. Moreover, by expanding eigenfunctions in a wavelet basis, a compression factor of at least 3 (larger at lower frequency) is achieved, leading to a factor of ~10 saving in disk storage. Numerical experiments show that the synthetic seismograms computed from the new approach agree well with those from the classical mode coupling method. The new formalism is significantly more efficient when approaching higher frequencies and in cases of large numbers of sources and receivers, while the across-branch mode coupling feature is still preserved, though not explicitly.
NASA Astrophysics Data System (ADS)
Lieberman, Robert; Kwong, Heston; Liu, Brent; Huang, H. K.
2009-02-01
The chest x-ray radiological features of tuberculosis patients are well documented, and the radiological features that change in response to successful pharmaceutical therapy can be followed with longitudinal studies over time. The patients can also be classified as either responsive or resistant to pharmaceutical therapy based on clinical improvement. We have retrospectively collected time series chest x-ray images of 200 patients diagnosed with tuberculosis receiving the standard pharmaceutical treatment. Computer algorithms can be created to utilize image texture features to assess the temporal changes in the chest x-rays of the tuberculosis patients. This methodology provides a framework for a computer-assisted detection (CAD) system that may provide physicians with the ability to detect poor treatment response earlier in pharmaceutical therapy. Early detection allows physicians to respond with more timely treatment alternatives and improved outcomes. Such a system has the potential to increase treatment efficacy for millions of patients each year.
Bishop, N.A. Jr.
1994-04-01
Over the past decade, computer-aided design (CAD) has become a practical and economical design tool. Today, specifying CAD hardware and software is relatively easy once you know what the design requirements are. But finding experienced CAD professionals is often more difficult. Most CAD users have only two or three years of design experience; more experienced design personnel are frequently not CAD literate. However, effective use of CAD can be the key to lowering design costs and improving design quality--a quest familiar to every manager and designer. By emphasizing computer-aided design literacy at all levels of the firm, a Canadian joint-venture company that specializes in engineering small hydroelectric projects has cut costs, become more productive and improved design quality. This article describes how they did it.
Efficient computation of optimal actions
Todorov, Emanuel
2009-01-01
Optimal choice of actions is a fundamental problem relevant to fields as diverse as neuroscience, psychology, economics, computer science, and control engineering. Despite this broad relevance the abstract setting is similar: we have an agent choosing actions over time, an uncertain dynamical system whose state is affected by those actions, and a performance criterion that the agent seeks to optimize. Solving problems of this kind remains hard, in part, because of overly generic formulations. Here, we propose a more structured formulation that greatly simplifies the construction of optimal control laws in both discrete and continuous domains. An exhaustive search over actions is avoided and the problem becomes linear. This yields algorithms that outperform Dynamic Programming and Reinforcement Learning, and thereby solve traditional problems more efficiently. Our framework also enables computations that were not possible before: composing optimal control laws by mixing primitives, applying deterministic methods to stochastic systems, quantifying the benefits of error tolerance, and inferring goals from behavioral data via convex optimization. Development of a general class of easily solvable problems tends to accelerate progress—as linear systems theory has done, for example. Our framework may have similar impact in fields where optimal choice of actions is relevant. PMID:19574462
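In the linearly solvable setting the abstract describes, the exponentiated negative value function (the "desirability" z) satisfies a linear equation, so the exhaustive search over actions disappears. A minimal sketch in the spirit of that framework, assuming a random passive dynamics P and state-cost vector q (both invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6                                   # number of states (toy example)
q = rng.uniform(0.1, 1.0, n)            # state costs
P = rng.uniform(size=(n, n))
P /= P.sum(axis=1, keepdims=True)       # passive (uncontrolled) dynamics

# The desirability z is the principal eigenvector of diag(exp(-q)) @ P,
# found here by power iteration -- a linear computation, with no
# exhaustive minimization over actions.
G = np.diag(np.exp(-q)) @ P
z = np.ones(n)
for _ in range(500):
    z = G @ z
    z /= np.linalg.norm(z)

# Optimal controlled transitions: reweight the passive dynamics by the
# desirability of the successor state, then renormalize.
u = P * z[None, :]
u /= u.sum(axis=1, keepdims=True)
print(u.sum(axis=1))                    # each row is a distribution
```

The reweighting step is also what enables composition: mixing the desirability functions of primitive tasks mixes their optimal control laws.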
Computationally efficient lossless image coder
NASA Astrophysics Data System (ADS)
Sriram, Parthasarathy; Sudharsanan, Subramania I.
1999-12-01
Lossless coding of image data has been a very active area of research in the fields of medical imaging, remote sensing and document processing/delivery. While several lossless image coders such as JPEG and JBIG have been in existence for a while, their compression performance for encoding continuous-tone images was rather poor. Recently, several state-of-the-art techniques like CALIC and LOCO were introduced with significant improvement in compression performance over traditional coders. However, these coders are very difficult to implement using dedicated hardware or in software using media processors due to the inherently serial nature of their encoding process. In this work, we propose a lossless image coding technique with a compression performance that is very close to that of CALIC and LOCO while being very efficient to implement both in hardware and software. Comparisons for encoding the JPEG-2000 image set show that the compression performance of the proposed coder is within 2 - 5% of the more complex coders while being computationally very efficient. In addition, the encoder is shown to be parallelizable at a hierarchy of levels. The execution time of the proposed encoder is smaller than that of LOCO, while the decoder is 2 - 3 times faster than the LOCO decoder.
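For context, the serial dependency the abstract refers to comes from predictors such as LOCO-I/JPEG-LS's median edge detector (MED), in which each pixel's prediction depends on already-decoded neighbors. A sketch of that standard predictor (textbook JPEG-LS logic, not the proposed coder):

```python
import numpy as np

def med_predict(a, b, c):
    """Median edge detector from LOCO-I/JPEG-LS.
    a = left, b = above, c = upper-left neighbor."""
    if c >= max(a, b):
        return min(a, b)
    if c <= min(a, b):
        return max(a, b)
    return a + b - c

def residuals(img):
    """Prediction residuals for a grayscale image; out-of-image
    neighbors are taken as 0.  These residuals are what the entropy
    coder would then encode."""
    h, w = img.shape
    res = np.zeros_like(img, dtype=np.int32)
    for y in range(h):
        for x in range(w):
            a = int(img[y, x - 1]) if x > 0 else 0
            b = int(img[y - 1, x]) if y > 0 else 0
            c = int(img[y - 1, x - 1]) if x > 0 and y > 0 else 0
            res[y, x] = int(img[y, x]) - med_predict(a, b, c)
    return res
```

Because each prediction reads previously coded pixels, a raster-order implementation cannot simply be split across processors, which is the bottleneck the proposed parallelizable coder targets.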
Use of CAD systems in design of Space Station and space robots
NASA Technical Reports Server (NTRS)
Dwivedi, Suren N.; Yadav, P.; Jones, Gary; Travis, Elmer W.
1988-01-01
The evolution of CAD systems is traced. State-of-the-art CAD systems are reviewed and various advanced CAD facilities and supplementing systems being used at NASA-Goddard are described. CAD hardware, computer software, and protocols are detailed.
ERIC Educational Resources Information Center
Resetarits, Paul J.
1989-01-01
Studies whether traditional drafting equipment (TRAD) or computer aided drafting equipment (CAD) is more effective. Proposes that students using only CAD can learn principles of drafting as well as students using only TRAD. Reports no significant difference either on achievement or attitude. (MVL)
Incorporating CAD Instruction into the Drafting Curriculum.
ERIC Educational Resources Information Center
Yuen, Steve Chi-Yin
1990-01-01
If education is to meet the challenge posed by the U.S. productivity crisis and the large number of computer-assisted design (CAD) workstations forecast as necessary in the future, schools must integrate CAD into the drafting curriculum and become aggressive in providing CAD training. Teachers need to maintain close contact with local industries…
Gathering Empirical Evidence Concerning Links between Computer Aided Design (CAD) and Creativity
ERIC Educational Resources Information Center
Musta'amal, Aede Hatib; Norman, Eddie; Hodgson, Tony
2009-01-01
Discussion is often reported concerning potential links between computer-aided designing and creativity, but there is a lack of systematic enquiry to gather empirical evidence concerning such links. This paper reports an indication of findings from other research studies carried out in contexts beyond general education that have sought evidence…
Rudziński, Piotr N.; Demkow, Marcin; Dzielińska, Zofia; Pręgowski, Jerzy; Witkowski, Adam; Rużyłło, Witold; Kępka, Cezary
2015-01-01
Introduction: The primary diagnostic examination performed in patients with a high pre-test probability of coronary artery disease (CAD) is invasive coronary angiography. Currently, approximately 50% of all invasive coronary angiographies do not end with percutaneous coronary intervention (PCI) because of the absence of significant coronary artery lesions. It is desirable to eliminate such situations. An alternative, non-invasive method useful for excluding significant CAD is coronary computed tomography angiography (CCTA). Aim: We hypothesize that use of CCTA as the first-choice method in the diagnosis of patients with a high pre-test probability of CAD may reduce the number of invasive coronary angiographies not followed by interventional treatment, without adding risk or cost to the diagnostic work-up. Confirmation of these assumptions may influence cardiology guidelines. Material and methods: One hundred and twenty patients with indications for invasive coronary angiography, determined by the current ESC guidelines on stable CAD, are randomized 1:1 to a classic invasive coronary angiography group or a CCTA group. Results: All patients included in the study are monitored for the occurrence of possible end points during the diagnostic and therapeutic cycle (from the first imaging examination to either complete revascularization or disqualification from invasive treatment) and during the follow-up period. Conclusions: Based on the literature, the use of modern CT systems in patients with a high pre-test probability of CAD, together with appropriate clinical interpretation of the imaging study by invasive cardiologists, enables precise planning of invasive therapeutic procedures. Our randomized study will provide data to verify these assumptions. PMID:26677376
NASA Astrophysics Data System (ADS)
Gilbert, B. K.
1982-12-01
This document is the third year interim report for a four-year program to refine and develop Computer-Aided Design protocols for implementation of subnanosecond Emitter Coupled Logic in High-Speed Computer Modules using a wire wrap interconnection medium. The software and user manual for implementation guides are not part of the actual report. This report describes the results of work conducted in the third year of a four year program to develop rapid methods for designing and prototyping high-speed digital processor systems using subnanosecond emitter coupled logic (ECL). The third year effort was divided into two separate sets of tasks. In Task 1, described in Sections 3 - 7 of this report, we have nearly completed development of new sets of design rules, interconnection protocols, special components, and logic panels, for a technology based upon specially designed leadless ceramic chip carriers developed at Mayo Foundation. Task 2, described in Sections 8 and 9 of this report, continued the development of a comprehensive computer-aided design/computer-aided manufacturing (CAD/CAM) software package which is specifically tailored to support the peculiar design requirements of processors operating in a high clock rate, transmission line environment, either with subnanosecond ECL components or with any other families of subnanosecond devices.
ERIC Educational Resources Information Center
Havas, George D.
This brief guide to materials in the Library of Congress (LC) on computer aided design and/or computer aided manufacturing lists reference materials and other information sources under 13 headings: (1) brief introductions; (2) LC subject headings used for such materials; (3) textbooks; (4) additional titles; (5) glossaries and handbooks; (6)…
HistoCAD: Machine Facilitated Quantitative Histoimaging with Computer Assisted Diagnosis
NASA Astrophysics Data System (ADS)
Tomaszewski, John E.
Prostatic adenocarcinoma (CAP) is the most common malignancy in American men. In 2010 there will be an estimated 217,730 new cases and 32,050 deaths from CAP in the US. The diagnosis of prostatic adenocarcinoma is made exclusively from the histological evaluation of prostate tissue. The sampling protocols used to obtain 18 gauge (1.5 mm diameter) needle cores are standard sampling templates consisting of 6-12 cores performed in the context of an elevated serum value for prostate specific antigen (PSA). In this context, the prior probability of cancer is somewhat increased. However, even in this screened population, the efficiency of finding cancer is low at only approximately 20%. Histopathologists are faced with the task of reviewing the 5-10 million cores of tissue resulting from approximately 1,000,000 biopsy procedures yearly, parsing all the benign scenes from the worrisome scenes, and deciding which of the worrisome images are cancer.
Ames, A.L.
1999-02-01
This paper documents development of a capability for performing shape-changing editing operations on solid model representations in an immersive environment. The capability includes part- and assembly-level operations, with part modeling supporting topology-invariant and topology-changing modifications. A discussion of various design considerations in developing an immersive capability is included, along with discussion of a prototype implementation we have developed and explored. The project investigated approaches to providing both topology-invariant and topology-changing editing. A prototype environment was developed to test the approaches and determine the usefulness of immersive editing. The prototype showed exciting potential in redefining the CAD interface. It is fun to use. Editing is much faster and friendlier than in traditional feature-based CAD software. The prototype algorithms did not reliably provide a sufficient frame rate for complex geometries, but they provided the necessary roadmap for development of a production capability.
Efficient Computational Model of Hysteresis
NASA Technical Reports Server (NTRS)
Shields, Joel
2005-01-01
A recently developed mathematical model of the output (displacement) versus the input (applied voltage) of a piezoelectric transducer accounts for hysteresis. For the sake of computational speed, the model is kept simple by neglecting the dynamic behavior of the transducer. Hence, the model applies to static and quasistatic displacements only. A piezoelectric transducer of the type to which the model applies is used as an actuator in a computer-based control system to effect fine position adjustments. Because the response time of the rest of such a system is usually much greater than that of a piezoelectric transducer, the model remains an acceptably close approximation for the purpose of control computations, even though the dynamics are neglected. The model (see Figure 1) represents an electrically parallel, mechanically series combination of backlash elements, each having a unique deadband width and output gain. The zeroth element in the parallel combination has zero deadband width and, hence, represents a linear component of the input/output relationship. The other elements, which have nonzero deadband widths, are used to model the nonlinear components of the hysteresis loop. The deadband widths and output gains of the elements are computed from experimental displacement-versus-voltage data. The hysteresis curve calculated by use of this model is piecewise linear beyond deadband limits.
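The parallel combination of backlash elements described above is, in essence, a Prandtl-Ishlinskii construction: a weighted sum of play operators, with the zero-deadband element supplying the linear component. A minimal sketch with illustrative widths and gains (in practice these would be fit to measured displacement-versus-voltage data, as the article states):

```python
import numpy as np

def play_operator(u, r, y0=0.0):
    """Backlash (play) operator with deadband half-width r: the output
    tracks the input only once the input leaves the deadband."""
    y = np.empty_like(u)
    prev = y0
    for k, uk in enumerate(u):
        prev = max(uk - r, min(uk + r, prev))
        y[k] = prev
    return y

def hysteresis(u, widths, gains):
    """Parallel sum of play operators; widths[0] = 0 gives the linear
    (zero-deadband) component of the input/output relationship."""
    return sum(g * play_operator(u, r) for r, g in zip(widths, gains))

# Illustrative deadband widths and output gains (assumed values).
widths = [0.0, 0.5, 1.0, 2.0]
gains  = [1.0, 0.4, 0.3, 0.2]
u = np.concatenate([np.linspace(0, 5, 50), np.linspace(5, -5, 100)])
y = hysteresis(u, widths, gains)
```

Driving `u` up and back down traces the two branches of the hysteresis loop; the loop is piecewise linear, with breakpoints at the deadband limits, matching the model's description.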
Computing Efficiency Of Transfer Of Microwave Power
NASA Technical Reports Server (NTRS)
Pinero, L. R.; Acosta, R.
1995-01-01
BEAM computer program enables user to calculate microwave power-transfer efficiency between two circular apertures at arbitrary range. Power-transfer efficiency obtained numerically. Two apertures have generally different sizes and arbitrary taper illuminations. BEAM also analyzes effect of distance and taper illumination on transmission efficiency for two apertures of equal size. Written in FORTRAN.
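BEAM itself is a FORTRAN code and its numerical method is not reproduced here, but a classic closed-form estimate for aperture-to-aperture power transfer (Goubau's result for optimally tapered beams) gives a feel for the quantities involved. Treat this as a textbook approximation, not BEAM's actual algorithm:

```python
import math

def beam_efficiency(area_t, area_r, wavelength, distance):
    """Goubau's estimate of power-transfer efficiency between two
    apertures with an optimally tapered illumination:
        eta = 1 - exp(-tau**2),  tau = sqrt(At * Ar) / (lambda * D)."""
    tau = math.sqrt(area_t * area_r) / (wavelength * distance)
    return 1.0 - math.exp(-tau * tau)

# Example: two 1 m^2 apertures, 3 cm wavelength (10 GHz), 10 m apart.
print(beam_efficiency(1.0, 1.0, 0.03, 10.0))
```

The formula makes the qualitative behavior BEAM analyzes explicit: efficiency rises toward unity as the apertures grow or the range shrinks, and falls off rapidly once tau drops below about one.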
NASA Astrophysics Data System (ADS)
Suzuki, Kenji
2009-09-01
Computer-aided diagnosis (CAD) has been an active area of study in medical image analysis. A filter for the enhancement of lesions plays an important role for improving the sensitivity and specificity in CAD schemes. The filter enhances objects similar to a model employed in the filter; e.g. a blob-enhancement filter based on the Hessian matrix enhances sphere-like objects. Actual lesions, however, often differ from a simple model; e.g. a lung nodule is generally modeled as a solid sphere, but there are nodules of various shapes and with internal inhomogeneities such as a nodule with spiculations and ground-glass opacity. Thus, conventional filters often fail to enhance actual lesions. Our purpose in this study was to develop a supervised filter for the enhancement of actual lesions (as opposed to a lesion model) by use of a massive-training artificial neural network (MTANN) in a CAD scheme for detection of lung nodules in CT. The MTANN filter was trained with actual nodules in CT images to enhance actual patterns of nodules. By use of the MTANN filter, the sensitivity and specificity of our CAD scheme were improved substantially. With a database of 69 lung cancers, nodule candidate detection by the MTANN filter achieved a 97% sensitivity with 6.7 false positives (FPs) per section, whereas nodule candidate detection by a difference-image technique achieved a 96% sensitivity with 19.3 FPs per section. Classification-MTANNs were applied for further reduction of the FPs. The classification-MTANNs removed 60% of the FPs with a loss of one true positive; thus, it achieved a 96% sensitivity with 2.7 FPs per section. Overall, with our CAD scheme based on the MTANN filter and classification-MTANNs, an 84% sensitivity with 0.5 FPs per section was achieved. First presented at the Seventh International Conference on Machine Learning and Applications, San Diego, CA, USA, 11-13 December 2008.
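The MTANN idea, learning a pixel-wise mapping from overlapping subregions to a teacher image (for nodule cases, typically a Gaussian centered on the nodule), can be sketched as follows. A linear least-squares model stands in for the neural network, and all names are illustrative:

```python
import numpy as np

def extract_patches(img, k):
    """All k x k overlapping subregions, flattened, at valid positions."""
    h, w = img.shape
    return np.array([img[y:y + k, x:x + k].ravel()
                     for y in range(h - k + 1) for x in range(w - k + 1)])

def train_filter(images, teachers, k=5):
    """Fit a mapping from each k x k subregion to the teacher value at
    its center pixel (k assumed odd).  The MTANN uses a small neural
    network here; linear least squares stands in for it."""
    X = np.vstack([extract_patches(im, k) for im in images])
    t = np.concatenate([te[k // 2:-(k // 2), k // 2:-(k // 2)].ravel()
                        for te in teachers])
    weights, *_ = np.linalg.lstsq(X, t, rcond=None)
    return weights

def apply_filter(img, weights, k=5):
    """Slide the trained mapping over the image to get an enhancement
    (score) map for the valid interior region."""
    out = extract_patches(img, k) @ weights
    h, w = img.shape
    return out.reshape(h - k + 1, w - k + 1)
```

Because the filter is trained on actual nodule patterns rather than an idealized solid-sphere model, it can enhance spiculated or ground-glass nodules that a Hessian-based blob filter would miss.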
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
1996-01-01
The purpose of this paper is to discuss the use of Computer-Aided Design (CAD) geometry in a Multi-Disciplinary Design Optimization (MDO) environment. Two techniques are presented to facilitate the use of CAD geometry by different disciplines, such as Computational Fluid Dynamics (CFD) and Computational Structural Mechanics (CSM). One method is to transfer the load from a CFD grid to a CSM grid. The second method is to update the CAD geometry for CSM deflection.
Efficient, massively parallel eigenvalue computation
NASA Technical Reports Server (NTRS)
Huo, Yan; Schreiber, Robert
1993-01-01
In numerical simulations of disordered electronic systems, one of the most common approaches is to diagonalize random Hamiltonian matrices and to study the eigenvalues and eigenfunctions of a single electron in the presence of a random potential. An effort to implement a matrix diagonalization routine for real symmetric dense matrices on massively parallel SIMD computers, the Maspar MP-1 and MP-2 systems, is described. Results of numerical tests and timings are also presented.
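The serial core of such a study, diagonalizing a random real symmetric matrix and inspecting its spectrum, can be sketched with NumPy standing in for the MasPar routine:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
A = rng.normal(size=(n, n))
H = (A + A.T) / 2                  # random real symmetric "Hamiltonian"

# Dense symmetric eigensolver (Householder tridiagonalization plus an
# implicit QL/QR sweep -- the family of algorithms parallelized in the
# paper, here run serially).
evals, evecs = np.linalg.eigh(H)

resid = np.linalg.norm(H @ evecs[:, 0] - evals[0] * evecs[:, 0])
print(f"residual of extreme eigenpair: {resid:.2e}")
```

On a SIMD machine the expensive steps are the Householder updates, which are rank-one matrix operations that map naturally onto a processor grid; the serial call above hides exactly that work.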
Lower bounds on the computational efficiency of optical computing systems
NASA Astrophysics Data System (ADS)
Barakat, Richard; Reif, John
1987-03-01
A general model for determining the computational efficiency of optical computing systems, termed the VLSIO model, is described. It is a 3-dimensional generalization of the wire model of a 2-dimensional VLSI with optical beams (via Gabor's theorem) replacing the wires as communication channels. Lower bounds (in terms of simultaneous volume and time) on the computational resources of the VLSIO are obtained for computing various problems such as matrix multiplication.
Goswami, B; Mitra, M; Nag, B; Mitra, T K
1993-11-01
The present paper discusses a Computer-Aided Design and Drafting (CAD) based data acquisition and polar phase response study of the ECG. The scalar ECG does not show vector properties although such properties are embedded in it. In the present paper the polar phase response property of monopolar chest lead (V1 to V6) ECG voltages has been studied. A software tool has been used to evaluate the relative phase response of ECG voltages. The data acquisition of monopolar ECG records of chest leads V1 to V6 from the chart recorder has been done with the help of the AutoCAD application package. The spin harmonic constituents of ECG voltages are evaluated at each harmonic plane and the polar phase responses are studied at each plane. Some interesting results have been observed in some typical cases which are discussed in the paper. PMID:8307653
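The per-harmonic relative phase comparison across chest leads can be sketched with an FFT. The lead voltages below are synthetic stand-ins for the chart-recorder traces the paper digitizes via AutoCAD:

```python
import numpy as np

fs = 250.0                          # sampling rate, Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)
# Synthetic stand-ins for digitized V1..V6 lead voltages, with a
# lead-dependent phase offset for illustration.
leads = {f"V{i}": np.sin(2 * np.pi * 1.2 * t + 0.3 * i)
         for i in range(1, 7)}

def harmonic_phases(x, n_harmonics=8):
    """Phase (radians) of the first n harmonics of the record."""
    spectrum = np.fft.rfft(x)
    return np.angle(spectrum[1:n_harmonics + 1])

# Relative phase response of each lead against V1, harmonic by harmonic.
ref = harmonic_phases(leads["V1"])
for name, x in leads.items():
    rel = np.unwrap(harmonic_phases(x) - ref)
    print(name, np.round(rel[:3], 2))
```

Comparing the phase of each harmonic plane across leads is the kind of polar phase response study the paper performs; real records would of course replace the synthetic sinusoids.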
A Computationally Efficient Bedrock Model
NASA Astrophysics Data System (ADS)
Fastook, J. L.
2002-05-01
Full treatments of the Earth's crust, mantle, and core for ice sheet modeling are often computationally overwhelming, in that the requirements to calculate a full self-gravitating spherical Earth model for the time-varying load history of an ice sheet are considerably greater than the computational requirements for the ice dynamics and thermodynamics combined. For this reason, we adopt a ``reasonable'' approximation for the behavior of the deforming bedrock beneath the ice sheet. This simpler model of the Earth treats the crust as an elastic plate supported from below by a hydrostatic fluid. Conservation of linear and angular momentum for an elastic plate leads to the classical Poisson-Kirchhoff fourth order differential equation in the crustal displacement. By adding a time-dependent term this treatment allows for an exponentially-decaying response of the bed to loading and unloading events. This component of the ice sheet model (along with the ice dynamics and thermodynamics) is solved using the Finite Element Method (FEM). C1 FEMs are difficult to implement in more than one dimension, and as such the engineering community has turned away from classical Poisson-Kirchhoff plate theory to treatments such as Reissner-Mindlin plate theory, which are able to accommodate transverse shear and hence require only C0 continuity of basis functions (only the function, and not the derivative, is required to be continuous at the element boundary) (Hughes 1987). This method reduces the complexity of the C1 formulation by adding additional degrees of freedom (the transverse shear in x and y) at each node. This ``reasonable'' solution is compared with two self-gravitating spherical Earth models (1: Ivins et al. (1997) and James and Ivins (1998); 2: Tushingham and Peltier (1991) ICE3G, run by Jim Davis and Glenn Milne), as well as with preliminary results of residual rebound rates measured with GPS by the BIFROST project. Modeled responses of a simulated ice sheet experiencing a
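A one-dimensional sketch of this ``reasonable'' bedrock model: the Poisson-Kirchhoff flexure equation over a hydrostatic (buoyant) foundation, D w'''' + rho g w = q, with the exponentially decaying time response added afterward. Finite differences stand in for the FEM, and all parameter values are illustrative:

```python
import numpy as np

# Elastic plate over a fluid substrate (1-D sketch):
#   D * w'''' + rho * g * w = q(x),  w = deflection, q = applied load.
n, dx = 401, 5e3                  # grid points, 5 km spacing
D = 1e23                          # flexural rigidity (N m), illustrative
rho, g = 3300.0, 9.81             # substrate density, gravity
q = np.zeros(n)
q[180:221] = 917.0 * 9.81 * 1000  # load of ~1 km of ice in the middle

A = np.zeros((n, n))
c4 = D / dx**4
for i in range(2, n - 2):
    A[i, i - 2:i + 3] = c4 * np.array([1.0, -4.0, 6.0, -4.0, 1.0])
    A[i, i] += rho * g
A[0, 0] = A[1, 1] = A[-1, -1] = A[-2, -2] = 1.0   # w = 0 at far ends
w_eq = np.linalg.solve(A, q)      # equilibrium (isostatic) deflection

# Exponentially decaying response toward equilibrium after loading:
#   dw/dt = (w_eq - w) / tau
tau = 4000.0                      # relaxation time in years (assumed)
w = np.zeros(n)
for _ in range(100):              # 100 explicit steps of 100 years
    w += (100.0 / tau) * (w_eq - w)
```

Under the broad load the center approaches local isostasy (w ~ q / (rho g)), while the plate's rigidity spreads the deflection beyond the load edges; the relaxation loop reproduces the exponentially decaying bed response described above.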
CAD/CAM systems in machine construction
NASA Astrophysics Data System (ADS)
Hellwig, H.-E.; Paulus, M.
1985-09-01
A description is provided of the present status of Computer-Aided Design (CAD) and Computer-Aided Manufacturing (CAM) technology, taking into account applications, and risks related to the introduction and employment of CAD/CAM methods. The employment of CAD/CAM systems in the area of machine construction is discussed, giving attention to the situation in West Germany. With respect to the system component 'hardware', the transition to a new hardware generation is taking place. In addition to computer centers with large-scale computers, minicomputers and superminicomputers, designed especially for technical applications, have become available. However, existing CAD software does not yet permit the full exploitation of the changes in hardware technology. Attention is given to CAD potential and current utilization in various application areas, developments related to graphics and geometry, advantages of a suitable macro language, the current employment of CAD/CAM technology, and cost considerations.
PC Board Layout and Electronic Drafting with CAD. Teacher Edition.
ERIC Educational Resources Information Center
Bryson, Jimmy
This teacher's guide contains 11 units of instruction for a course on computer electronics and computer-assisted drafting (CAD) using a personal computer (PC). The course covers the following topics: introduction to electronic drafting with CAD; CAD system and software; basic electronic theory; component identification; basic integrated circuit…
Noh, Kwantae; Pae, Ahran; Lee, Jung-Woo; Kwon, Yong-Dae
2016-05-01
An obturator prosthesis with insufficient retention and support may be improved with implant placement. However, implant surgery in patients after maxillary tumor resection can be complicated because of limited visibility and anatomic complexity. Therefore, computer-guided surgery can be advantageous even for experienced surgeons. In this clinical report, the use of computer-guided surgery is described for implant placement using a bone-supported surgical template for a patient with maxillary defects. The prosthetic procedure was facilitated and simplified by using computer-aided design/computer-aided manufacture (CAD/CAM) technology. Oral function and phonetics were restored using a tooth- and implant-supported obturator prosthesis. No clinical symptoms and no radiographic signs of significant bone loss around the implants were found at a 3-year follow-up. The treatment approach presented here can be a viable option for patients with insufficient remaining zygomatic bone after a hemimaxillectomy. PMID:26774316
Improving the radiologist-CAD interaction: designing for appropriate trust.
Jorritsma, W; Cnossen, F; van Ooijen, P M A
2015-02-01
Computer-aided diagnosis (CAD) has great potential to improve radiologists' diagnostic performance. However, the reported performance of the radiologist-CAD team is lower than what might be expected based on the performance of the radiologist and the CAD system in isolation. This indicates that the interaction between radiologists and the CAD system is not optimal. An important factor in the interaction between humans and automated aids (such as CAD) is trust. Suboptimal performance of the human-automation team is often caused by an inappropriate level of trust in the automation. In this review, we examine the role of trust in the radiologist-CAD interaction and suggest ways to improve the output of the CAD system so that it allows radiologists to calibrate their trust in the CAD system more effectively. Observer studies of the CAD systems show that radiologists often have an inappropriate level of trust in the CAD system. They sometimes under-trust CAD, thereby reducing its potential benefits, and sometimes over-trust it, leading to diagnostic errors they would not have made without CAD. Based on the literature on trust in human-automation interaction and the results of CAD observer studies, we have identified four ways to improve the output of CAD so that it allows radiologists to form a more appropriate level of trust in CAD. Designing CAD systems for appropriate trust is important and can improve the performance of the radiologist-CAD team. Future CAD research and development should acknowledge the importance of the radiologist-CAD interaction, and specifically the role of trust therein, in order to create the perfect artificial partner for the radiologist. This review focuses on the role of trust in the radiologist-CAD interaction. The aim of the review is to encourage CAD developers to design for appropriate trust and thereby improve the performance of the radiologist-CAD team. PMID:25459198
ERIC Educational Resources Information Center
Sisk, Alan
1987-01-01
Computer-aided design (CAD) has been around for a number of years. An overview is provided of a number of major computer-aided design programs. A short analysis of each program includes the addresses of the software producers. (MLF)
Viewing CAD Drawings on the Internet
ERIC Educational Resources Information Center
Schwendau, Mark
2004-01-01
Computer aided design (CAD) has been producing 3-D models for years. AutoCAD software is frequently used to create sophisticated 3-D models. These CAD files can be exported as 3DS files for import into Autodesk's 3-D Studio Viz. In this program, the user can render and modify the 3-D model before exporting it out as a WRL (world file hyperlinked)…
Efficient Methods to Compute Genomic Predictions
Technology Transfer Automated Retrieval System (TEKTRAN)
Efficient methods for processing genomic data were developed to increase reliability of estimated breeding values and simultaneously estimate thousands of marker effects. Algorithms were derived and computer programs tested on simulated data for 50,000 markers and 2,967 bulls. Accurate estimates of ...
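The core computation in genomic prediction of this kind is typically ridge regression of phenotypes on marker genotypes (SNP-BLUP). A small simulated sketch, far below the 50,000-marker scale, showing the n x n algebraic rearrangement that keeps large marker sets tractable; the shrinkage value and data are assumed:

```python
import numpy as np

rng = np.random.default_rng(1)
n_animals, n_markers = 300, 1000
Z = rng.integers(0, 3, size=(n_animals, n_markers)).astype(float)
Z -= Z.mean(axis=0)                      # center 0/1/2 genotype codes

beta_true = rng.normal(0, 0.05, n_markers)   # simulated marker effects
y = Z @ beta_true + rng.normal(0, 1.0, n_animals)

lam = 10.0                               # shrinkage ~ var_e / var_marker
# With far more markers than animals, solve the n x n system
# (Z Z' + lam I) a = y and back-map to marker effects beta = Z' a,
# instead of forming the p x p marker-by-marker system.
a = np.linalg.solve(Z @ Z.T + lam * np.eye(n_animals), y)
beta_hat = Z.T @ a
gebv = Z @ beta_hat                      # genomic estimated breeding values
```

The same solution could be written as beta = (Z'Z + lam I)^-1 Z'y; rearranging to the animal-by-animal system is what makes tens of thousands of marker effects estimable simultaneously.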
CAD/CAM. High-Technology Training Module.
ERIC Educational Resources Information Center
Zuleger, Robert
This high technology training module is an advanced course on computer-assisted design/computer-assisted manufacturing (CAD/CAM) for grades 11 and 12. This unit, to be used with students in advanced drafting courses, introduces the concept of CAD/CAM. The content outline includes the following seven sections: (1) CAD/CAM software; (2) computer…
Efficient computation of Lorentzian 6J symbols
NASA Astrophysics Data System (ADS)
Willis, Joshua
2007-04-01
Spin foam models are a proposal for a quantum theory of gravity, and an important open question is whether they reproduce classical general relativity in the low energy limit. One approach to tackling that problem is to simulate spin-foam models on the computer, but this is hampered by the high computational cost of evaluating the basic building block of these models, the so-called 10J symbol. For Euclidean models, Christensen and Egan have developed an efficient algorithm, but for Lorentzian models this problem remains open. In this talk we describe an efficient method developed for Lorentzian 6J symbols, and we also report on recent work in progress to use this efficient algorithm in calculating the 10J symbols that are of real interest.
CAD Services: an Industry Standard Interface for Mechanical CAD Interoperability
NASA Technical Reports Server (NTRS)
Claus, Russell; Weitzer, Ilan
2002-01-01
Most organizations seek to design and develop new products in increasingly shorter time periods. At the same time, increased performance demands require a team-based multidisciplinary design process that may span several organizations. One approach to meet these demands is to use 'Geometry Centric' design. In this approach, design engineers team their efforts through one unified representation of the design that is usually captured in a CAD system. Standards-based interfaces are critical to provide uniform, simple, distributed services that enable the 'Geometry Centric' design approach. This paper describes an industry-wide effort, under the Object Management Group's (OMG) Manufacturing Domain Task Force, to define interfaces that enable the interoperability of CAD, Computer Aided Manufacturing (CAM), and Computer Aided Engineering (CAE) tools. This critical link to enable 'Geometry Centric' design is called CAD Services V1.0. This paper discusses the features of this standard and its proposed application.
An Efficient Method for Computing All Reducts
NASA Astrophysics Data System (ADS)
Bao, Yongguang; Du, Xiaoyong; Deng, Mingrong; Ishii, Naohiro
In the process of data mining of a decision table using Rough Sets methodology, the main computational effort is associated with the determination of the reducts. Computing all reducts is a combinatorial NP-hard computational problem. Therefore the only way to achieve its faster execution is by providing an algorithm, with a better constant factor, which may solve this problem in reasonable time for real-life data sets. The purpose of this presentation is to propose two new efficient algorithms to compute reducts in information systems. The proposed algorithms are based on propositions about reducts and the relation between the reduct and the discernibility matrix. Experiments measuring execution time have been conducted on several real-world domains. The results show improved execution time compared with the other methods. In real applications, the two proposed algorithms can be combined.
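The relation between reducts and the discernibility matrix that such algorithms build on can be illustrated with a brute-force toy example (the paper's algorithms are far more efficient than this exhaustive search, which is only feasible for tiny tables):

```python
from itertools import combinations

# Toy decision table: rows are objects, columns are condition
# attributes a0..a3, and the last element is the decision.
table = [
    (0, 1, 0, 1, "yes"),
    (1, 1, 0, 0, "no"),
    (0, 0, 1, 1, "yes"),
    (1, 0, 1, 0, "no"),
    (0, 1, 1, 0, "no"),
]
n_attrs = 4

# Discernibility sets: for each pair of objects with different
# decisions, the condition attributes on which the two objects differ.
disc = []
for (i, r1), (j, r2) in combinations(enumerate(table), 2):
    if r1[-1] != r2[-1]:
        disc.append({a for a in range(n_attrs) if r1[a] != r2[a]})

def preserves_discernibility(subset):
    """A subset keeps all decisions distinguishable iff it intersects
    every discernibility set (a 'hitting set')."""
    return all(subset & d for d in disc)

# A reduct is a minimal such subset; enumerate by increasing size so
# supersets of already-found reducts are skipped.
reducts = []
for k in range(1, n_attrs + 1):
    for subset in map(set, combinations(range(n_attrs), k)):
        if preserves_discernibility(subset) and \
           not any(r <= subset for r in reducts):
            reducts.append(subset)
print(reducts)
```

For this table the reducts are {a3} alone (the decision is fully determined by a3) and {a0, a1, a2}; any efficient algorithm must recover exactly these minimal hitting sets.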
NASA Astrophysics Data System (ADS)
Paul, B. E.; Pham, T. A.
1982-11-01
A computerized ballistic design technique for CAD/PAD is described by which a set of ballistic design parameters are determined, all of which satisfy a particular performance requirement. In addition, the program yields the remaining performance predictions, so that only a very few computer runs of the design program can quickly bring the ballistic design within the specification limits prescribed. An example is presented for a small propulsion device, such as a remover or actuator, for which the input specifications define a maximum allowable thrust and minimum end-of-stroke velocity. The resulting output automatically satisfies the input requirements, and will always yield an acceptable ballistic design.
NASA Astrophysics Data System (ADS)
Ciany, Charles M.; Zurawski, William C.
2007-04-01
Raytheon has extensively processed high-resolution sonar images with its CAD/CAC algorithms to provide real-time classification of mine-like bottom objects in a wide range of shallow-water environments. The algorithm performance is measured in terms of probability of correct classification (Pcc) as a function of false alarm rate, and is impacted by variables associated with both the physics of the problem and the signal processing design choices. Some examples of prominent variables pertaining to the choices of signal processing parameters are image resolution (i.e., pixel dimensions), image normalization scheme, and pixel intensity quantization level (i.e., number of bits used to represent the intensity of each image pixel). Improvements in image resolution associated with the technology transition from sidescan to synthetic aperture sonars have prompted the use of image decimation algorithms to reduce the number of pixels per image that are processed by the CAD/CAC algorithms, in order to meet real-time processor throughput requirements. Additional improvements in digital signal processing hardware have also facilitated the use of an increased quantization level in converting the image data from analog to digital format. This study evaluates modifications to the normalization algorithm and image pixel quantization level within the image processing prior to CAD/CAC processing, and examines their impact on the resulting CAD/CAC algorithm performance. The study utilizes a set of at-sea data from multiple test exercises in varying shallow water environments.
Changing computing paradigms towards power efficiency.
Klavík, Pavel; Malossi, A Cristiano I; Bekas, Costas; Curioni, Alessandro
2014-06-28
Power awareness is fast becoming immensely important in computing, ranging from traditional high-performance computing applications to the new generation of data-centric workloads. In this work, we describe our efforts towards a power-efficient computing paradigm that combines low- and high-precision arithmetic. We showcase our ideas for the widely used kernel of solving systems of linear equations, which finds numerous applications in scientific and engineering disciplines as well as in large-scale data analytics, statistics and machine learning. Towards this goal, we developed tools for the seamless power profiling of applications at a fine-grain level. In addition, we verify here previous work on post-FLOPS/W metrics and show that these can shed much more light on the power/energy profile of important applications. PMID:24842033
CAD systems simplify engineering drawings
Holt, J.
1986-10-01
Computer assisted drafting systems, with today's technology, provide high-quality, timely drawings that can be justified by the lower costs for the final product. The author describes Exxon Pipeline Co.'s experience in deciding on hardware and software for a CAD system installation and the benefits effected by this procedure and equipment.
Efficient communication in massively parallel computers
Cypher, R.E.
1989-01-01
A fundamental operation in parallel computation is sorting. Sorting is important not only because it is required by many algorithms, but also because it can be used to implement irregular, pointer-based communication. The author studies two algorithms for sorting in massively parallel computers. First, he examines Shellsort. Shellsort is a sorting algorithm that is based on a sequence of parameters called increments. Shellsort can be used to create a parallel sorting device known as a sorting network. Researchers have suggested that if the correct increment sequence is used, an optimal size sorting network can be obtained. All published increment sequences have been monotonically decreasing. He shows that no monotonically decreasing increment sequence will yield an optimal size sorting network. Second, he presents a sorting algorithm called Cubesort. Cubesort is the fastest known sorting algorithm for a variety of parallel computers over a wide range of parameters. He also presents a paradigm for developing parallel algorithms that have efficient communication. The paradigm, called the data reduction paradigm, consists of using a divide-and-conquer strategy. Both the division and combination phases of the divide-and-conquer algorithm may require irregular, pointer-based communication between processors. However, the problem is divided so as to limit the amount of data that must be communicated. As a result the communication can be performed efficiently. He presents data reduction algorithms for the image component labeling problem, the closest pair problem and four versions of the parallel prefix problem.
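For reference, Shellsort's dependence on its increment sequence can be sketched as follows. The halving increments below are just one illustrative example of the (monotonically decreasing) sequences the abstract discusses, not an optimal choice:

```python
def shellsort(items, increments=None):
    """Shellsort: insertion sort applied over interleaved subsequences.

    The increment sequence is the key design parameter discussed in the
    abstract; the default halving sequence here is merely illustrative.
    """
    a = list(items)
    n = len(a)
    if increments is None:
        increments = []
        h = n // 2
        while h > 0:
            increments.append(h)
            h //= 2
    for h in increments:
        # h-sort: insertion sort within each subsequence of stride h
        for i in range(h, n):
            v = a[i]
            j = i
            while j >= h and a[j - h] > v:
                a[j] = a[j - h]
                j -= h
            a[j] = v
    return a

print(shellsort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```

Each pass with increment h leaves the array h-sorted, so later passes with smaller increments do progressively less work.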
Education and Training Packages for CAD/CAM.
ERIC Educational Resources Information Center
Wright, I. C.
1986-01-01
Discusses educational efforts in the fields of Computer Assisted Design and Manufacturing (CAD/CAM). Describes two educational training initiatives underway in the United Kingdom, one of which is a resource materials package for teachers of CAD/CAM at the undergraduate level, and the other a training course for managers of CAD/CAM systems. (TW)
Computational efficiency improvements for image colorization
NASA Astrophysics Data System (ADS)
Yu, Chao; Sharma, Gaurav; Aly, Hussein
2013-03-01
We propose an efficient algorithm for colorization of greyscale images. As in prior work, colorization is posed as an optimization problem: a user specifies the color for a few scribbles drawn on the greyscale image and the color image is obtained by propagating color information from the scribbles to surrounding regions, while maximizing the local smoothness of colors. In this formulation, colorization is obtained by solving a large sparse linear system, which normally requires substantial computation and memory resources. Our algorithm improves the computational performance through three innovations over prior colorization implementations. First, the linear system is solved iteratively without explicitly constructing the sparse matrix, which significantly reduces the required memory. Second, we formulate each iteration in terms of integral images obtained by dynamic programming, reducing repetitive computation. Third, we use a coarse-to-fine framework, where a lower resolution subsampled image is first colorized and this low resolution color image is upsampled to initialize the colorization process for the fine level. The improvements we develop provide significant speedup and memory savings compared to the conventional approach of solving the linear system directly using off-the-shelf sparse solvers, and allow us to colorize images with typical sizes encountered in realistic applications on typical commodity computing platforms.
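The first innovation, solving the system iteratively without constructing the sparse matrix, can be sketched in one dimension. This toy Jacobi sweep is my own simplification (not the authors' integral-image formulation): it propagates scribble colors by intensity-weighted neighbor averaging, which is exactly one matrix-free iteration of the underlying sparse system.

```python
def colorize_1d(gray, scribbles, iters=500):
    """Matrix-free Jacobi iteration for scribble-based colorization,
    1-D toy version (the paper works on 2-D images).

    Each unknown pixel's chroma is repeatedly replaced by a weighted
    average of its neighbours' chroma, with weights derived from
    greyscale similarity -- one Jacobi sweep of the sparse linear
    system, performed without ever building the matrix.
    """
    n = len(gray)
    chroma = [scribbles.get(i, 0.0) for i in range(n)]
    for _ in range(iters):
        new = chroma[:]
        for i in range(n):
            if i in scribbles:        # scribbled pixels stay fixed
                continue
            wsum = vsum = 0.0
            for j in (i - 1, i + 1):
                if 0 <= j < n:
                    w = 1.0 / (1e-6 + abs(gray[i] - gray[j]))
                    wsum += w
                    vsum += w * chroma[j]
            new[i] = vsum / wsum
        chroma = new
    return chroma

# Two regions of constant intensity, one scribble in each; color should
# propagate within regions but not across the intensity edge:
gray = [0.2, 0.2, 0.2, 0.8, 0.8, 0.8]
result = colorize_1d(gray, {0: 1.0, 5: -1.0})
```

Because the weights are large within a region and small across the intensity edge, the left region converges toward the first scribble's chroma and the right region toward the second's.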
A primer on the energy efficiency of computing
NASA Astrophysics Data System (ADS)
Koomey, Jonathan G.
2015-03-01
The efficiency of computing at peak output has increased rapidly since the dawn of the computer age. This paper summarizes some of the key factors affecting the efficiency of computing in all usage modes. While there is still great potential for improving the efficiency of computing devices, we will need to alter how we do computing in the next few decades because we are finally approaching the limits of current technologies.
Project CAD as of July 1978: CAD support project, situation in July 1978
NASA Technical Reports Server (NTRS)
Boesch, L.; Lang-Lendorff, G.; Rothenberg, R.; Stelzer, V.
1979-01-01
The structure of Computer Aided Design (CAD) and the requirements for program development, past and future, are described. The actual standard and the future aims of CAD programs are presented. Developed programs are discussed in five areas: (1) civil engineering; (2) mechanical engineering; (3) chemical engineering/shipbuilding; (4) electrical engineering; and (5) general programs.
Using AutoCAD for descriptive geometry exercises in undergraduate structural geology
NASA Astrophysics Data System (ADS)
Jacobson, Carl E.
2001-02-01
The exercises in descriptive geometry typically utilized in undergraduate structural geology courses are quickly and easily solved using the computer drafting program AutoCAD. The key to efficient use of AutoCAD for descriptive geometry involves taking advantage of User Coordinate Systems, alternative angle conventions, relative coordinates, and other aspects of AutoCAD that may not be familiar to the beginning user. A summary of these features and an illustration of their application to the creation of structure contours for a planar dipping bed provides the background necessary to solve other problems in descriptive geometry with the computer. The ease of the computer constructions reduces frustration for the student and provides more time to think about the principles of the problems.
Productivity increase through implementation of CAD/CAE workstation
NASA Technical Reports Server (NTRS)
Bromley, L. K.
1985-01-01
The Tracking and Communication Division's computer-aided design/computer-aided engineering (CAD/CAE) system is now operational. The system is used to automate certain tasks that were previously performed manually. These tasks include detailed test-configuration diagrams of systems under certification test in the ESTL, floor-plan layouts of future planned laboratory reconfigurations, and other graphical documentation of division activities. The significant time savings achieved with this CAD/CAE system are examined: (1) input of drawings and diagrams; (2) editing of initial drawings; (3) accessibility of the data; and (4) added versatility. It is shown that the Applicon CAD/CAE system, with its ease of input and editing, the accessibility of its data, and its added versatility, has made many of the necessary but often time-consuming tasks associated with engineering design and testing more efficient.
Efficient computation of Wigner-Eisenbud functions
NASA Astrophysics Data System (ADS)
Raffah, Bahaaudin M.; Abbott, Paul C.
2013-06-01
The R-matrix method, introduced by Wigner and Eisenbud (1947) [1], has been applied to a broad range of electron transport problems in nanoscale quantum devices. With the rapid increase in the development and modeling of nanodevices, efficient, accurate, and general computation of Wigner-Eisenbud functions is required. This paper presents the Mathematica package WignerEisenbud, which uses the Fourier discrete cosine transform to compute the Wigner-Eisenbud functions in dimensionless units for an arbitrary potential in one dimension, and in two dimensions in cylindrical coordinates. Program summary: Program title: WignerEisenbud. Catalogue identifier: AEOU_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOU_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. Distribution format: tar.gz. Programming language: Mathematica. Operating system: Any platform supporting Mathematica 7.0 and above. Keywords: Wigner-Eisenbud functions, discrete cosine transform (DCT), cylindrical nanowires. Classification: 7.3, 7.9, 4.6, 5. Nature of problem: Computing the 1D and 2D Wigner-Eisenbud functions for arbitrary potentials using the DCT. Solution method: The R-matrix method is applied to the physical problem. Separation of variables is used for eigenfunction expansion of the 2D Wigner-Eisenbud functions. Eigenfunction computation is performed using the DCT to convert the Schrödinger equation with Neumann boundary conditions to a generalized matrix eigenproblem. Limitations: Restricted to uniform (rectangular grid) sampling of the potential. In 1D the number of sample points, n, results in matrix computations involving n×n matrices. Unusual features: Eigenfunction expansion using the DCT is fast and accurate. Users can specify scattering potentials using functions, or interactively using mouse input. Use of dimensionless units permits application to a
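The solution method hinges on a standard fact: DCT-II basis vectors diagonalize the discretized 1-D operator with Neumann boundary conditions. A minimal pure-Python check of that property (illustrative only; the package itself is written in Mathematica):

```python
import math

def dct_basis(n, k):
    """k-th DCT-II basis vector on n points."""
    return [math.cos(math.pi * k * (2 * i + 1) / (2 * n)) for i in range(n)]

def neumann_laplacian(v):
    """3-point second difference with reflecting (Neumann) ends."""
    n = len(v)
    out = []
    for i in range(n):
        left = v[i - 1] if i > 0 else v[i]      # ghost point = boundary value
        right = v[i + 1] if i < n - 1 else v[i]
        out.append(left - 2.0 * v[i] + right)
    return out

# Each DCT-II vector is an eigenvector with eigenvalue 2*cos(pi*k/n) - 2:
n, k = 8, 3
v = dct_basis(n, k)
lam = 2.0 * math.cos(math.pi * k / n) - 2.0
residual = max(abs(a - lam * b) for a, b in zip(neumann_laplacian(v), v))
```

Because the DCT diagonalizes this operator exactly, expanding in this basis turns the Neumann-boundary Schrödinger equation into an ordinary matrix eigenproblem, which is the mechanism the program summary describes.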
Dimensioning storage and computing clusters for efficient high throughput computing
NASA Astrophysics Data System (ADS)
Accion, E.; Bria, A.; Bernabeu, G.; Caubet, M.; Delfino, M.; Espinal, X.; Merino, G.; Lopez, F.; Martinez, F.; Planas, E.
2012-12-01
Scientific experiments are producing huge amounts of data, and the size of their datasets and total volume of data continue to increase. These data are then processed by researchers belonging to large scientific collaborations, the Large Hadron Collider being a good example. The focal point of scientific data centers has shifted from efficiently coping with PetaByte-scale storage to delivering quality data-processing throughput. The dimensioning of the internal components in High Throughput Computing (HTC) data centers is of crucial importance to cope with all the activities demanded by the experiments, both online (data acceptance) and offline (data processing, simulation and user analysis). This requires a precise setup involving disk and tape storage services, a computing cluster and the internal networking, to prevent bottlenecks, overloads and undesired slowness that lead to loss of CPU cycles and batch-job failures. In this paper we point out relevant features for running a successful data storage and processing service in an intensive HTC environment.
Efficient gradient computation for dynamical models
Sengupta, B.; Friston, K.J.; Penny, W.D.
2014-01-01
Data assimilation is a fundamental issue that arises across many scales in neuroscience — ranging from the study of single neurons using single electrode recordings to the interaction of thousands of neurons using fMRI. Data assimilation involves inverting a generative model that can not only explain observed data but also generate predictions. Typically, the model is inverted or fitted using conventional tools of (convex) optimization that invariably extremise some functional — norms, minimum descriptive length, variational free energy, etc. Generally, optimisation rests on evaluating the local gradients of the functional to be optimized. In this paper, we compare three different gradient estimation techniques that could be used for extremising any functional in time — (i) finite differences, (ii) forward sensitivities and a method based on (iii) the adjoint of the dynamical system. We demonstrate that the first-order gradients of a dynamical system, linear or non-linear, can be computed most efficiently using the adjoint method. This is particularly true for systems where the number of parameters is greater than the number of states. For such systems, integrating several sensitivity equations – as required with forward sensitivities – proves to be most expensive, while finite-difference approximations have an intermediate efficiency. In the context of neuroimaging, adjoint based inversion of dynamical causal models (DCMs) can, in principle, enable the study of models with large numbers of nodes and parameters. PMID:24769182
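The trade-off among the gradient estimators can be illustrated on a one-parameter linear system. This is my own toy example, not the DCM implementation: it compares forward sensitivities (one extra equation integrated alongside the state) with central finite differences (two full re-integrations) for dx/dt = -a*x.

```python
def simulate(a, x0=1.0, dt=0.01, steps=100):
    """Forward-Euler integration of dx/dt = -a*x together with its
    forward sensitivity s = dx/da, which obeys ds/dt = -x - a*s."""
    x, s = x0, 0.0
    for _ in range(steps):
        # tuple assignment: both updates use the previous x and s
        x, s = x + dt * (-a * x), s + dt * (-x - a * s)
    return x, s

def loss(a, target=0.5):
    x, _ = simulate(a)
    return (x - target) ** 2

a = 2.0

# Gradient via forward sensitivity (exact for the discretized system):
x, s = simulate(a)
g_sens = 2.0 * (x - 0.5) * s

# Gradient via central finite differences (approximate, needs 2 extra runs):
h = 1e-5
g_fd = (loss(a + h) - loss(a - h)) / (2.0 * h)
```

With one parameter the two costs are comparable; the abstract's point is that forward sensitivities need one extra integration per parameter, while the adjoint method needs only one backward integration regardless of the parameter count.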
Not Available
1983-04-01
After failing with an unstructured computer-aided drafting/design (CAD/D) program, Bechtel changed to a structured training program. Several considerations are presented here: teach CAD/D to engineers, not engineering to CAD/D experts; keep the program flexible enough to avoid rewriting due to fast technology evolution; pace information delivery; and recognize that rote learning of sequences only works if the students have a conceptual model first. On-the-job training is necessary, and better monitoring systems to test the OJT are needed. One such test is presented.
CAD/CAM/CAE reshapes engineering processes
NASA Astrophysics Data System (ADS)
Ludwinski, Thomas A.
1993-06-01
A development history and development trends evaluation is undertaken for computer-aided design/manufacturing/engineering techniques. Attention is drawn to the failure of the Initial Graphics Exchange Specification for standardized transfer of CAD/CAM data among different data bases to support information concerning solids; it is anticipated that the ability to transfer data transparently among CAD/CAM systems will result in major savings to all users, but this directly impinges on company relations.
CAD/CAM-coupled image processing systems
NASA Astrophysics Data System (ADS)
Ahlers, Rolf-Juergen; Rauh, W.
1990-08-01
Image processing systems have found wide application in industry. For most computer-integrated manufacturing facilities it is necessary to adapt these systems so that they can automate the interaction with, and the integration of, CAD and CAM systems. In this paper new approaches are described that make use of the coupling of CAD and image processing, as well as the automatic generation of programmes for the machining of products.
AutoCAD-To-NASTRAN Translator Program
NASA Technical Reports Server (NTRS)
Jones, A.
1989-01-01
Program facilitates creation of finite-element mathematical models from geometric entities. AutoCAD to NASTRAN translator (ACTON) computer program developed to facilitate quick generation of small finite-element mathematical models for use with NASTRAN finite-element modeling program. Reads geometric data of drawing from Data Exchange File (DXF) used in AutoCAD and other PC-based drafting programs. Written in Microsoft Quick-Basic (Version 2.0).
Aerodynamic Design of Complex Configurations Using Cartesian Methods and CAD Geometry
NASA Technical Reports Server (NTRS)
Nemec, Marian; Aftosmis, Michael J.; Pulliam, Thomas H.
2003-01-01
The objective of this paper is to present the development of an optimization capability for the Cartesian inviscid-flow analysis package of Aftosmis et al. We evaluate and characterize the following modules within the new optimization framework: (1) a component-based geometry parameterization approach using a CAD solid representation and the CAPRI interface; and (2) the use of Cartesian methods in the development of optimization techniques using a genetic algorithm and a gradient-based algorithm. The discussion and investigations focus on several real-world problems of the optimization process. We examine the architectural issues associated with the deployment of a CAD-based design approach in a heterogeneous parallel computing environment that contains both CAD workstations and dedicated compute nodes. In addition, we study the influence of noise on the performance of optimization techniques, and the overall efficiency of the optimization process for aerodynamic design of complex three-dimensional configurations.
NASA Astrophysics Data System (ADS)
Deshpande, Ruchi R.; Fernandez, James; Lee, Joon K.; Chan, Tao; Liu, Brent J.; Huang, H. K.
2010-03-01
Timely detection of Acute Intra-cranial Hemorrhage (AIH) in an emergency environment is essential for the triage of patients suffering from Traumatic Brain Injury. Moreover, the small size of lesions and lack of experience on the reader's part could lead to difficulties in the detection of AIH. A CT-based CAD algorithm for the detection of AIH has been developed in order to improve upon the current standard of identification and treatment of AIH. A retrospective analysis of the algorithm has already been carried out with 135 AIH CT studies and 135 matched normal head CT studies from the Los Angeles County General Hospital/University of Southern California Hospital System (LAC/USC). In the next step, AIH studies have been collected from Walter Reed Army Medical Center, and are currently being processed using the AIH CAD system as part of implementing a multi-site assessment and evaluation of the performance of the algorithm. The sensitivity and specificity numbers from the Walter Reed study will be compared with the numbers from the LAC/USC study to determine if there are differences in the presentation and detection due to the difference in the nature of trauma between the two sites. Simultaneously, a stand-alone system with a user-friendly GUI has been developed to facilitate implementation in a clinical setting.
CAD/CAM: Practical and Persuasive in Canadian Schools
ERIC Educational Resources Information Center
Willms, Ed
2007-01-01
Chances are that many high school students would not know how to use drafting instruments, but some might want to gain competence in computer-assisted design (CAD) and possibly computer-assisted manufacturing (CAM). These students are often attracted to tech courses by the availability of CAD/CAM instructions, and many go on to impress employers…
An application protocol for CAD to CAD transfer of electronic information
NASA Technical Reports Server (NTRS)
Azu, Charles C., Jr.
1993-01-01
The exchange of Computer Aided Design (CAD) information between dissimilar CAD systems is a problem. This is especially true for transferring electronics CAD information such as multi-chip module (MCM), hybrid microcircuit assembly (HMA), and printed circuit board (PCB) designs. Currently, there exists several neutral data formats for transferring electronics CAD information. These include IGES, EDIF, and DXF formats. All these formats have limitations for use in exchanging electronic data. In an attempt to overcome these limitations, the Navy's MicroCIM program implemented a project to transfer hybrid microcircuit design information between dissimilar CAD systems. The IGES (Initial Graphics Exchange Specification) format is used since it is well established within the CAD industry. The goal of the project is to have a complete transfer of microelectronic CAD information, using IGES, without any data loss. An Application Protocol (AP) is being developed to specify how hybrid microcircuit CAD information will be represented by IGES entity constructs. The AP defines which IGES data items are appropriate for describing HMA geometry, connectivity, and processing as well as HMA material characteristics.
Jalalian, E.; Atashkar, B.; Rostami, R.
2011-01-01
OBJECTIVE: One of the major problems of all-ceramic restorations is their probable fracture under occlusal force. The aim of the present in-vitro study was to compare the effect of two marginal designs (chamfer and shoulder) on the fracture resistance of zirconia copings, CERCON (CAD/CAM). MATERIALS AND METHODS: This in-vitro study was done with a single-blind experimental technique. One stainless steel die with a 50° chamfer finish-line design (0.8 mm depth) was prepared using a milling machine, and ten epoxy resin dies were made from it. The same die was retrieved, the 50° chamfer was converted into a shoulder (1 mm), and again ten epoxy resin dies were made from the shoulder die. Zirconia cores with 0.4 mm thickness and 35 μm cement space were fabricated on the 20 epoxy resin dies (10 chamfer and 10 shoulder) in a dental laboratory. The zirconia cores were then cemented on the epoxy resin dies and underwent a fracture test with a universal testing machine (GOTECH AI-700LAC, Arson, USA), and the samples were examined to determine the origin of failure. RESULTS: The mean fracture resistance was 788.90±99.56 N for the shoulder margins and 991.75±112.00 N for the chamfer margins. Student's t-test revealed a statistically significant difference between groups (P=0.001). CONCLUSION: The results of this study indicate that the marginal design of zirconia cores affects their fracture resistance. A chamfer margin could improve the biomechanical performance of posterior single zirconia crown restorations, possibly because of the strong unity and rounded internal angle of the chamfer margin. PMID:22457839
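The reported group comparison can be sanity-checked from the summary statistics alone. This sketch uses the Welch (unpooled) form of the two-sample t statistic, a close variant of the Student's t-test the abstract reports, and assumes n = 10 per group with the ± values read as standard deviations:

```python
import math

def t_statistic(m1, s1, n1, m2, s2, n2):
    """Two-sample t statistic from summary statistics (Welch form,
    i.e. without pooling the variances)."""
    return (m1 - m2) / math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)

# Reported values: chamfer 991.75 +/- 112.00 N, shoulder 788.90 +/- 99.56 N.
t = t_statistic(991.75, 112.00, 10, 788.90, 99.56, 10)
```

A t value of this magnitude on roughly 18 degrees of freedom is consistent with the small P value reported.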
Efficient Computation Of Behavior Of Aircraft Tires
NASA Technical Reports Server (NTRS)
Tanner, John A.; Noor, Ahmed K.; Andersen, Carl M.
1989-01-01
NASA technical paper discusses challenging application of computational structural mechanics to numerical simulation of responses of aircraft tires during taxiing, takeoff, and landing. Presents details of three main elements of computational strategy: use of special three-field, mixed-finite-element models; use of operator splitting; and application of technique reducing substantially the number of degrees of freedom. Proposed computational strategy applied to two quasi-symmetric problems: linear analysis of anisotropic tires through use of two-dimensional-shell finite elements and nonlinear analysis of orthotropic tires subjected to unsymmetric loading. Three basic types of symmetry, and their combinations, exhibited by response of tire are identified.
Charette, Jyme R; Goldberg, Jack; Harris, Bryan T; Morton, Dean; Llop, Daniel R; Lin, Wei-Shao
2016-08-01
This article describes a digital workflow using cone beam computed tomography imaging as the primary diagnostic tool in the virtual planning of the computer-guided surgery and fabrication of a maxillary interim complete removable dental prosthesis and mandibular interim implant-supported complete fixed dental prosthesis with computer-aided design and computer-aided manufacturing technology. Diagnostic impressions (conventional or digital) and casts are unnecessary in this proposed digital workflow, providing clinicians with an alternative treatment in the indicated clinical scenario. PMID:27086108
Some Workplace Effects of CAD and CAM.
ERIC Educational Resources Information Center
Ebel, Karl-H.; Ulrich, Erhard
1987-01-01
Examines the impact of computer-aided design (CAD) and computer-aided manufacturing (CAM) on employment, work organization, working conditions, job content, training, and industrial relations in several countries. Finds little evidence of negative employment effects since productivity gains are offset by various compensatory factors. (Author/CH)
Pipe Drafting with CAD. Teacher Edition.
ERIC Educational Resources Information Center
Smithson, Buddy
This teacher's guide contains nine units of instruction for a course on computer-assisted pipe drafting. The course covers the following topics: introduction to pipe drafting with CAD (computer-assisted design); flow diagrams; pipe and pipe components; valves; piping plans and elevations; isometrics; equipment fabrication drawings; piping design…
Computationally efficient prediction of area per lipid
NASA Astrophysics Data System (ADS)
Chaban, Vitaly
2014-11-01
Area per lipid (APL) is an important property of biological and artificial membranes. Newly constructed bilayers are characterized by their APL and newly elaborated force fields must reproduce APL. Computer simulations of APL are very expensive due to slow conformational dynamics. The simulated dynamics increases exponentially with respect to temperature. APL dependence on temperature is linear over an entire temperature range. I provide numerical evidence that thermal expansion coefficient of a lipid bilayer can be computed at elevated temperatures and extrapolated to the temperature of interest. Thus, sampling times to predict accurate APL are reduced by a factor of ∼10.
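The extrapolation scheme described above amounts to fitting a line to APL values simulated at elevated temperatures and evaluating it at the temperature of interest. A minimal sketch with hypothetical numbers (illustrative only, not data from the paper):

```python
def fit_line(points):
    """Ordinary least-squares fit of y = a + b*x (pure Python)."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

# Hypothetical APL samples (nm^2) from fast high-temperature runs:
samples = [(360.0, 0.655), (380.0, 0.665), (400.0, 0.675)]
a, b = fit_line(samples)
apl_at_310 = a + b * 310.0   # extrapolate down to the temperature of interest
```

The slope b plays the role of the thermal expansion term: it is estimated where the dynamics is fast, then the line is evaluated at the slow, low-temperature point instead of simulating there directly.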
Efficient Parallel Engineering Computing on Linux Workstations
NASA Technical Reports Server (NTRS)
Lou, John Z.
2010-01-01
A C software module has been developed that creates lightweight processes (LWPs) dynamically to achieve parallel computing performance in a variety of engineering simulation and analysis applications to support NASA and DoD project tasks. The required interface between the module and the application it supports is simple, minimal and almost completely transparent to the user applications, and it can achieve nearly ideal computing speed-up on multi-CPU engineering workstations of all operating system platforms. The module can be integrated into an existing application (C, C++, Fortran and others) either as part of a compiled module or as a dynamically linked library (DLL).
CAD software lights up the environmental scene
Basta, N.
1996-01-01
There seems to be a natural affinity between the data requirements of environmental work and computer-aided design (CAD) software. Perhaps the best example of this is the famous shots of the ozone hole produced by computer-enhanced satellite imagery in the mid-1980s. Once this image was published, the highly abstract discussion of ozone concentrations and arctic wind patterns suddenly became very real. At ground level, in the day-to-day work of environmental managers and site restorers, CAD software is proving its value over and over. Graphic images are a convenient, readily understandable way of presenting the large volumes of data produced by environmental projects. With the latest CAD systems, the work of specifying process equipment or subsurface conditions can be reused again and again as projects move from the study and design phase to the construction or remediation phases. An important subset of CAD is geographic information systems (GIS), which are used to organize data on a site-specific basis. Like CAD itself, GIS reaches beyond the borders of the computer screen or printout, for example by making use of the Global Positioning System (a satellite-based method of locating position precisely) and by matching current with historical data. Good GIS software can also make use of the large database of geological data produced by government and industry, thus saving on surveying costs and exploratory well construction.
Efficient Computation Of Manipulator Inertia Matrix
NASA Technical Reports Server (NTRS)
Fijany, Amir; Bejczy, Antal K.
1991-01-01
Improved method for computation of manipulator inertia matrix developed, based on concept of spatial inertia of composite rigid body. Required for implementation of advanced dynamic-control schemes as well as dynamic simulation of manipulator motion. Motivated by increasing demand for fast algorithms to provide real-time control and simulation capability and, particularly, need for faster-than-real-time simulation capability, required in many anticipated space teleoperation applications.
Experimental Realization of High-Efficiency Counterfactual Computation
NASA Astrophysics Data System (ADS)
Kong, Fei; Ju, Chenyong; Huang, Pu; Wang, Pengfei; Kong, Xi; Shi, Fazhan; Jiang, Liang; Du, Jiangfeng
2015-08-01
Counterfactual computation (CFC) exemplifies the fascinating quantum process by which the result of a computation may be learned without actually running the computer. In previous experimental studies, the counterfactual efficiency is limited to below 50%. Here we report an experimental realization of the generalized CFC protocol, in which the counterfactual efficiency can break the 50% limit and even approach unity in principle. The experiment is performed with the spins of a negatively charged nitrogen-vacancy color center in diamond. Taking advantage of the quantum Zeno effect, the computer can remain in the not-running subspace due to the frequent projection by the environment, while the computation result can be revealed by final detection. The counterfactual efficiency up to 85% has been demonstrated in our experiment, which opens the possibility of many exciting applications of CFC, such as high-efficiency quantum integration and imaging.
Efficient Associative Computation with Discrete Synapses.
Knoblauch, Andreas
2016-01-01
Neural associative networks are a promising computational paradigm for both modeling neural circuits of the brain and implementing associative memory and Hebbian cell assemblies in parallel VLSI or nanoscale hardware. Previous work has extensively investigated synaptic learning in linear models of the Hopfield type and simple nonlinear models of the Steinbuch/Willshaw type. Optimized Hopfield networks of size n can store a large number of about n²/k memories of size k (or associations between them) but require real-valued synapses, which are expensive to implement and can store at most C = 0.72 bits per synapse. Willshaw networks can store a much smaller number of about n²/k² memories but get along with much cheaper binary synapses. Here I present a learning model employing synapses with discrete synaptic weights. For optimal discretization parameters, this model can store, up to a factor ζ close to one, the same number of memories as for optimized Hopfield-type learning--for example, ζ = 0.64 for binary synapses, ζ = 0.88 for 2-bit (4-state) synapses, ζ = 0.96 for 3-bit (8-state) synapses, and ζ > 0.99 for 4-bit (16-state) synapses. The model also provides the theoretical framework to determine optimal discretization parameters for computer implementations or brainlike parallel hardware including structural plasticity. In particular, as recently shown for the Willshaw network, it is possible to store C_I = 1 bit per computer bit and up to C_S = log n bits per nonsilent synapse, whereas the absolute number of stored memories can be much larger than for the Willshaw model. PMID:26599711
Reliability and Efficiency of a DNA-Based Computation
NASA Astrophysics Data System (ADS)
Deaton, R.; Garzon, M.; Murphy, R. C.; Rose, J. A.; Franceschetti, D. R.; Stevens, S. E., Jr.
1998-01-01
DNA-based computing uses the tendency of nucleotide bases to bind (hybridize) in preferred combinations to do computation. Depending on reaction conditions, oligonucleotides can bind despite noncomplementary base pairs. These mismatched hybridizations are a source of false positives and negatives, which limit the efficiency and scalability of DNA-based computing. The ability of specific base sequences to support error-tolerant Adleman-style computation is analyzed, and criteria are proposed to increase reliability and efficiency. A method is given to calculate reaction conditions from estimates of DNA melting.
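Hybridization strength, and hence the reaction conditions discussed above, depends on base composition. As the simplest illustration, the Wallace rule-of-thumb estimates the melting temperature of a short oligonucleotide from its base counts; the paper's melting estimates are more detailed, so this is only a minimal sketch:

```python
def wallace_tm(seq):
    """Rule-of-thumb melting temperature for short oligonucleotides
    (Wallace rule): Tm = 2*(A+T) + 4*(G+C) degrees C. G-C pairs bind
    more strongly (three hydrogen bonds) than A-T pairs (two)."""
    seq = seq.upper()
    return 2 * (seq.count("A") + seq.count("T")) + \
           4 * (seq.count("G") + seq.count("C"))

print(wallace_tm("ACGTACGT"))  # 24
```

Sequences with higher melting temperatures tolerate more stringent reaction conditions, which is one lever for suppressing the mismatched hybridizations the abstract identifies as a source of errors.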
From Bad to CAD: Maintaining Records of Maintenance Projects.
ERIC Educational Resources Information Center
Shea, Diane C.
1994-01-01
A computer-assisted design (CAD) software program is used in a Connecticut school district to graphically provide information on maintenance projects by school and category of project over time. CAD supplies a computerized building plan, a "foot-print," used as a recordkeeping system for maintenance projects. (MLF)
An Evaluation of Internet-Based CAD Collaboration Tools
ERIC Educational Resources Information Center
Smith, Shana Shiang-Fong
2004-01-01
Due to the now widespread use of the Internet, most companies require computer aided design (CAD) tools that support distributed collaborative design on the Internet. Such CAD tools should enable designers to share product models, as well as related data, from geographically distant locations. However, integrated collaborative design…
Making a Case for CAD in the Curriculum.
ERIC Educational Resources Information Center
Threlfall, K. Denise
1995-01-01
Computer-assisted design (CAD) technology is transforming the apparel industry. Students of fashion merchandising and clothing design must be prepared on state-of-the-art equipment. ApparelCAD software is one example of courseware for instruction in pattern design and production. (SK)
Efficient computation of parameter confidence intervals
NASA Technical Reports Server (NTRS)
Murphy, Patrick C.
1987-01-01
An important step in system identification of aircraft is the estimation of stability and control derivatives from flight data along with an assessment of parameter accuracy. When the maximum likelihood estimation technique is used, parameter accuracy is commonly assessed by the Cramer-Rao lower bound. It is known, however, that in some cases the lower bound can be substantially different from the parameter variance. Under these circumstances the Cramer-Rao bounds may be misleading as an accuracy measure. This paper discusses the confidence interval estimation problem based on likelihood ratios, which offers a more general estimate of the error bounds. Four approaches are considered for computing confidence intervals of maximum likelihood parameter estimates. Each approach is applied to real flight data and compared.
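The likelihood-ratio interval described above can be sketched numerically: collect every parameter value whose log-likelihood falls within half the chi-square critical value of the maximum. This is a generic illustration under a Gaussian example of my choosing, not the paper's flight-data procedure; the function name and grid-scan approach are assumptions.

```python
import numpy as np

def lr_confidence_interval(loglik, theta_hat, grid, crit=3.841):
    # Likelihood-ratio CI: all theta with 2*(l(theta_hat) - l(theta)) <= crit,
    # where crit = 3.841 is the 95% quantile of chi-square with 1 dof.
    l_hat = loglik(theta_hat)
    inside = [t for t in grid if 2.0 * (l_hat - loglik(t)) <= crit]
    return min(inside), max(inside)
```

For a Gaussian mean with known variance the LR interval coincides with the Wald (Cramer-Rao-based) interval; the two diverge exactly in the non-quadratic-likelihood cases the abstract warns about.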
Efficient tree codes on SIMD computer architectures
NASA Astrophysics Data System (ADS)
Olson, Kevin M.
1996-11-01
This paper describes changes made to a previous implementation of an N -body tree code developed for a fine-grained, SIMD computer architecture. These changes include (1) switching from a balanced binary tree to a balanced oct tree, (2) addition of quadrupole corrections, and (3) having the particles search the tree in groups rather than individually. An algorithm for limiting errors is also discussed. In aggregate, these changes have led to a performance increase of over a factor of 10 compared to the previous code. For problems several times larger than the processor array, the code now achieves performance levels of ~ 1 Gflop on the Maspar MP-2 or roughly 20% of the quoted peak performance of this machine. This percentage is competitive with other parallel implementations of tree codes on MIMD architectures. This is significant, considering the low relative cost of SIMD architectures.
On the Use of CAD and Cartesian Methods for Aerodynamic Optimization
NASA Technical Reports Server (NTRS)
Nemec, M.; Aftosmis, M. J.; Pulliam, T. H.
2004-01-01
The objective of this paper is to present the development of an optimization capability for Cart3D, a Cartesian inviscid-flow analysis package. We present the construction of a new optimization framework and we focus on the following issues: 1) Component-based geometry parameterization approach using parametric-CAD models and CAPRI. A novel geometry server is introduced that addresses the issue of parallel efficiency while only sparingly consuming CAD resources; 2) The use of genetic and gradient-based algorithms for three-dimensional aerodynamic design problems. The influence of noise on the optimization methods is studied. Our goal is to create a responsive and automated framework that efficiently identifies design modifications that result in substantial performance improvements. In addition, we examine the architectural issues associated with the deployment of a CAD-based approach in a heterogeneous parallel computing environment that contains both CAD workstations and dedicated compute engines. We demonstrate the effectiveness of the framework for a design problem that features topology changes and complex geometry.
Efficient algorithm to compute the Berry conductivity
NASA Astrophysics Data System (ADS)
Dauphin, A.; Müller, M.; Martin-Delgado, M. A.
2014-07-01
We propose and construct a numerical algorithm to calculate the Berry conductivity in topological band insulators. The method is applicable to cold atom systems as well as solid state setups, both for the insulating case where the Fermi energy lies in the gap between two bulk bands as well as in the metallic regime. This method interpolates smoothly between both regimes. The algorithm is gauge-invariant by construction, efficient, and yields the Berry conductivity with known and controllable statistical error bars. We apply the algorithm to several paradigmatic models in the field of topological insulators, including Haldane's model on the honeycomb lattice, the multi-band Hofstadter model, and the BHZ model, which describes the 2D spin Hall effect observed in CdTe/HgTe/CdTe quantum well heterostructures.
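For the gapped (insulating) case mentioned above, a standard gauge-invariant lattice computation of the related topological invariant is the Fukui-Hatsugai method; the sketch below applies it to a toy two-band model. This is a well-known standard algorithm offered for illustration, not the Berry-conductivity algorithm of the paper, and the model choice (Qi-Wu-Zhang) is mine.

```python
import numpy as np

def chern_number(h, nk=24):
    # Fukui-Hatsugai lattice method: Chern number of the lowest band from
    # U(1) link variables on a discretized Brillouin zone. Gauge-invariant
    # per plaquette, so it yields an exact integer even on coarse grids.
    ks = np.linspace(0.0, 2.0 * np.pi, nk, endpoint=False)
    u = np.empty((nk, nk, 2), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            _, v = np.linalg.eigh(h(kx, ky))
            u[i, j] = v[:, 0]          # lowest-band eigenvector
    total = 0.0
    for i in range(nk):
        for j in range(nk):
            a, b = u[i, j], u[(i + 1) % nk, j]
            c, d = u[(i + 1) % nk, (j + 1) % nk], u[i, (j + 1) % nk]
            # Berry flux through one plaquette: phase of the Wilson loop.
            loop = np.vdot(a, b) * np.vdot(b, c) * np.vdot(c, d) * np.vdot(d, a)
            total += np.angle(loop)
    return round(total / (2.0 * np.pi))

def qwz(m):
    # Two-band Qi-Wu-Zhang toy model; topological for 0 < |m| < 2.
    def h(kx, ky):
        d = m + np.cos(kx) + np.cos(ky)
        return np.array([[d, np.sin(kx) - 1j * np.sin(ky)],
                         [np.sin(kx) + 1j * np.sin(ky), -d]])
    return h
```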
Practical CAD/CAM aspects of CNC prepreg cutting
NASA Astrophysics Data System (ADS)
Connolly, Michael L.
1991-01-01
The use of CAD/CAM in cutting large quantities of complex shapes out of prepregs is addressed. The advantages of CAD/CAM include reduction of prototype cycles from weeks to days, improved part quality resulting from accurately cut plies, safer and more efficient cutting operations, and lower direct labor costs.
Texture functions in image analysis: A computationally efficient solution
NASA Technical Reports Server (NTRS)
Cox, S. C.; Rose, J. F.
1983-01-01
A computationally efficient means for calculating texture measurements from digital images by use of the co-occurrence technique is presented. The calculation of the statistical descriptors of image texture and a solution that circumvents the need for calculating and storing a co-occurrence matrix are discussed. The results show that existing efficient algorithms for calculating sums, sums of squares, and cross products can be used to compute complex co-occurrence relationships directly from the digital image input.
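The central trick, computing co-occurrence statistics from running sums over pixel pairs without ever forming the co-occurrence matrix, can be demonstrated for the contrast descriptor with a fixed (0, 1) offset. Function names are mine; the matrix-based version is included only as a reference to verify the shortcut.

```python
import numpy as np

def contrast_via_matrix(img, levels):
    # Reference: build the levels x levels co-occurrence matrix for the
    # (0, 1) offset, then compute contrast = sum P(i,j) * (i - j)^2.
    a, b = img[:, :-1].ravel(), img[:, 1:].ravel()
    P = np.zeros((levels, levels))
    np.add.at(P, (a, b), 1.0)
    P /= P.sum()
    i, j = np.indices(P.shape)
    return float((P * (i - j) ** 2).sum())

def contrast_direct(img):
    # Same statistic from sums over pixel pairs: no matrix is stored.
    a = img[:, :-1].astype(float)
    b = img[:, 1:].astype(float)
    return float(np.mean((a - b) ** 2))
```

The direct form needs O(1) extra memory regardless of the number of gray levels, which is the storage saving the abstract describes.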
Duality quantum computer and the efficient quantum simulations
NASA Astrophysics Data System (ADS)
Wei, Shi-Jie; Long, Gui-Lu
2016-03-01
Duality quantum computing is a new mode of quantum computation that simulates a moving quantum computer passing through a multi-slit. It exploits the particle-wave duality property for computing. A quantum computer with n qubits and a qudit simulates a moving quantum computer with n qubits passing through a d-slit. Duality quantum computing can realize an arbitrary sum of unitaries and therefore a general quantum operator, which is called a generalized quantum gate. All linear bounded operators can be realized by the generalized quantum gates, and unitary operators are just the extreme points of the set of generalized quantum gates. Duality quantum computing provides flexibility and a clear physical picture in designing quantum algorithms, and serves as a powerful bridge between quantum and classical algorithms. In this paper, after a brief review of the theory of duality quantum computing, we will concentrate on the applications of duality quantum computing in simulations of Hamiltonian systems. We will show that duality quantum computing can efficiently simulate quantum systems by providing descriptions of the recent efficient quantum simulation algorithm of Childs and Wiebe (Quantum Inf Comput 12(11-12):901-924, 2012) for the fast simulation of quantum systems with a sparse Hamiltonian, and the quantum simulation algorithm by Berry et al. (Phys Rev Lett 114:090502, 2015), which provides exponential improvement in precision for simulating systems with a sparse Hamiltonian.
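The notion of a generalized quantum gate, a normalized linear combination of unitaries applied to a state, can be sketched numerically. This is a classical simulation for intuition only, not a physical duality-computer implementation; the function name and the "weight" return value are my conventions.

```python
import numpy as np

def generalized_gate(unitaries, coeffs, psi):
    # Apply a linear combination of unitaries sum_k c_k U_k to state psi.
    # The squared norm of the (unnormalized) result tracks the weight with
    # which the combination acts; the combination is generally non-unitary.
    out = sum(c * (U @ psi) for c, U in zip(coeffs, unitaries))
    w = float(np.linalg.norm(out) ** 2)   # assumes the combination is nonzero
    return out / np.sqrt(w), w
```

For example, (I + X)/2 applied to |0> yields the uniform superposition, a map no single unitary on one qubit performs together with that weight.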
Computationally efficient Bayesian inference for inverse problems.
Marzouk, Youssef M.; Najm, Habib N.; Rahn, Larry A.
2007-10-01
Bayesian statistics provides a foundation for inference from noisy and incomplete data, a natural mechanism for regularization in the form of prior information, and a quantitative assessment of uncertainty in the inferred results. Inverse problems - representing indirect estimation of model parameters, inputs, or structural components - can be fruitfully cast in this framework. Complex and computationally intensive forward models arising in physical applications, however, can render a Bayesian approach prohibitive. This difficulty is compounded by high-dimensional model spaces, as when the unknown is a spatiotemporal field. We present new algorithmic developments for Bayesian inference in this context, showing strong connections with the forward propagation of uncertainty. In particular, we introduce a stochastic spectral formulation that dramatically accelerates the Bayesian solution of inverse problems via rapid evaluation of a surrogate posterior. We also explore dimensionality reduction for the inference of spatiotemporal fields, using truncated spectral representations of Gaussian process priors. These new approaches are demonstrated on scalar transport problems arising in contaminant source inversion and in the inference of inhomogeneous material or transport properties. We also present a Bayesian framework for parameter estimation in stochastic models, where intrinsic stochasticity may be intermingled with observational noise. Evaluation of a likelihood function may not be analytically tractable in these cases, and thus several alternative Markov chain Monte Carlo (MCMC) schemes, operating on the product space of the observations and the parameters, are introduced.
Earthquake detection through computationally efficient similarity search
Yoon, Clara E.; O’Reilly, Ossian; Bergen, Karianne J.; Beroza, Gregory C.
2015-01-01
Seismology is experiencing rapid growth in the quantity of data, which has outpaced the development of processing algorithms. Earthquake detection—identification of seismic events in continuous data—is a fundamental operation for observational seismology. We developed an efficient method to detect earthquakes using waveform similarity that overcomes the disadvantages of existing detection methods. Our method, called Fingerprint And Similarity Thresholding (FAST), can analyze a week of continuous seismic waveform data in less than 2 hours, or 140 times faster than autocorrelation. FAST adapts a data mining algorithm, originally designed to identify similar audio clips within large databases; it first creates compact “fingerprints” of waveforms by extracting key discriminative features, then groups similar fingerprints together within a database to facilitate fast, scalable search for similar fingerprint pairs, and finally generates a list of earthquake detections. FAST detected most (21 of 24) cataloged earthquakes and 68 uncataloged earthquakes in 1 week of continuous data from a station located near the Calaveras Fault in central California, achieving detection performance comparable to that of autocorrelation, with some additional false detections. FAST is expected to realize its full potential when applied to extremely long duration data sets over a distributed network of seismic stations. The widespread application of FAST has the potential to aid in the discovery of unexpected seismic signals, improve seismic monitoring, and promote a greater understanding of a variety of earthquake processes. PMID:26665176
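The scalable grouping step that a FAST-like pipeline relies on is locality-sensitive hashing by "banding": split each binary fingerprint into bands and bucket fingerprints by the exact contents of each band, so near-duplicates collide without an all-pairs comparison. This is a simplified generic illustration, not the published FAST implementation; names are mine.

```python
import numpy as np
from collections import defaultdict

def band_lsh_pairs(fingerprints, bands=8):
    # fingerprints: (n, d) binary array. Fingerprints sharing the exact
    # contents of any band land in the same bucket and become candidate
    # similar pairs; similar fingerprints collide with high probability.
    n, d = fingerprints.shape
    width = d // bands
    buckets = defaultdict(list)
    for idx in range(n):
        for b in range(bands):
            key = (b, fingerprints[idx, b * width:(b + 1) * width].tobytes())
            buckets[key].append(idx)
    pairs = set()
    for members in buckets.values():
        for i in range(len(members)):
            for j in range(i + 1, len(members)):
                pairs.add((members[i], members[j]))
    return pairs
```

A fingerprint differing in a single bit still shares most bands and is found; a dissimilar fingerprint shares none, which is why runtime scales with the number of true near-duplicates rather than with n².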
Mechanical Drafting with CAD. Teacher Edition.
ERIC Educational Resources Information Center
McClain, Gerald R.
This instructor's manual contains 13 units of instruction for a course on mechanical drafting with options for using computer-aided drafting (CAD). Each unit includes some or all of the following basic components of a unit of instruction: objective sheet, suggested activities for the teacher, assignment sheets and answers to assignment sheets,…
A Case Study in CAD Design Automation
ERIC Educational Resources Information Center
Lowe, Andrew G.; Hartman, Nathan W.
2011-01-01
Computer-aided design (CAD) software and other product life-cycle management (PLM) tools have become ubiquitous in industry during the past 20 years. Over this time they have continuously evolved, becoming programs with enormous capabilities, but the companies that use them have not evolved their design practices at the same rate. Due to the…
Efficiently modeling neural networks on massively parallel computers
NASA Technical Reports Server (NTRS)
Farber, Robert M.
1993-01-01
Neural networks are a very useful tool for analyzing and modeling complex real world systems. Applying neural network simulations to real world problems generally involves large amounts of data and massive amounts of computation. To efficiently handle the computational requirements of large problems, we have implemented at Los Alamos a highly efficient neural network compiler for serial computers, vector computers, vector parallel computers, and fine grain SIMD computers such as the CM-2 connection machine. This paper describes the mapping used by the compiler to implement feed-forward backpropagation neural networks for a SIMD (Single Instruction Multiple Data) architecture parallel computer. Thinking Machines Corporation has benchmarked our code at 1.3 billion interconnects per second (approximately 3 gigaflops) on a 64,000 processor CM-2 connection machine (Singer 1990). This mapping is applicable to other SIMD computers and can be implemented on MIMD computers such as the CM-5 connection machine. Our mapping has virtually no communications overhead with the exception of the communications required for a global summation across the processors (which has sub-linear runtime growth on the order of O(log(number of processors))). We can efficiently model very large neural networks which have many neurons and interconnects, and our mapping can extend to arbitrarily large networks (within memory limitations) by merging the memory space of separate processors with fast adjacent-processor communications. This paper will consider the simulation of only feed-forward neural networks, although this method is extendable to recurrent networks.
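The data-parallel formulation that maps feed-forward networks onto processor arrays can be illustrated in vectorized form, where every example advances through the layers in lockstep. This is a schematic sketch of the mapping idea, not the Los Alamos compiler; the layer structure and names are assumptions.

```python
import numpy as np

def forward(X, layers):
    # Batch feed-forward pass: each row of X is one example. All examples
    # are processed in lockstep by the same matrix operations, which is the
    # SIMD-friendly formulation (one virtual processor per example/weight).
    a = X
    for W, b in layers:
        a = np.tanh(a @ W + b)
    return a
```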
An efficient method for computation of the manipulator inertia matrix
NASA Technical Reports Server (NTRS)
Fijany, Amir; Bejczy, Antal K.
1989-01-01
An efficient method of computation of the manipulator inertia matrix is presented. Using spatial notations, the method leads to the definition of the composite rigid-body spatial inertia, which is a spatial representation of the notion of augmented body. The previously proposed methods, the physical interpretations leading to their derivation, and their redundancies are analyzed. The proposed method achieves a greater efficiency by eliminating the redundancy in the intrinsic equations as well as by a better choice of coordinate frame for their projection. In this case, removing the redundancy leads to greater efficiency of the computation in both serial and parallel senses.
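The composite rigid-body idea, accumulating inertia from the most distal link inward so each joint-space entry is read off the composite of everything beyond it, can be shown on a deliberately degenerate arm: collinear prismatic joints carrying point-mass links. This toy avoids the paper's spatial-algebra machinery but exercises the same recursion; all names are mine.

```python
import numpy as np

def inertia_matrix_prismatic_chain(masses):
    # For collinear prismatic joints, the velocity of link k is the sum of
    # joint rates 0..k, so the kinetic energy gives
    #   H[i, j] = total (composite) mass distal to joint max(i, j).
    # comp[i] is accumulated once, from the tip inward, as in the
    # composite-rigid-body method.
    masses = np.asarray(masses, dtype=float)
    comp = np.cumsum(masses[::-1])[::-1]   # mass of links i..n-1
    n = len(masses)
    H = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            H[i, j] = comp[max(i, j)]
    return H
```

The single inward sweep over composites replaces recomputing distal sums for every (i, j), which is the redundancy elimination the abstract refers to.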
Refocusing CAD and CAE on O and M
Podczerwinski, C.A.; Wittenauer, J.P.; Irish, J.D.
1995-09-01
In the late 1980s, computer-aided design (CAD) software started to migrate from larger computer equipment to personal computers. Since then, competition in the desktop computer market transformed the personal computer (PC) into an office equipment commodity. The technological improvements accompanying that change transformed CAD from an expensive, specialized tool to an office software commodity that is a graphical counterpart to the word processor. The cost reductions and performance improvements have made many application concepts, previously too cumbersome to apply, cost effective and helpful. Applying these ideas has dramatically increased the level of CAD usage in the authors' offices. Part of that growth has been an increasing number of projects directly aimed at helping reduce operation and maintenance (O and M) costs. This paper describes those projects and discusses the application of CAD to O and M work.
Effect of image variation on computer-aided detection systems
NASA Astrophysics Data System (ADS)
Rabbani, S. P.; Maduskar, P.; Philipsen, R. H. H. M.; Hogeweg, L.; van Ginneken, B.
2014-03-01
As computer aided detection (CAD) systems become increasingly important in the medical imaging field due to the advantages they provide, it is essential to know their weaknesses and to find proper solutions for them. A common practical problem that affects CAD system performance is that dissimilar training and testing datasets degrade the efficiency of CAD systems. In this paper, normalizing images is proposed: three different normalization methods are applied to chest radiographs, namely (1) simple normalization, (2) local normalization, and (3) multi-band local normalization. The performance of a supervised lung segmentation CAD system is evaluated on chest radiographs normalized with these three methods, in terms of the Jaccard index. In conclusion, normalization enhances the performance of the CAD system, and among the three methods, local normalization and multi-band local normalization improve performance more significantly than simple normalization.
Revisiting the Efficiency of Malicious Two-Party Computation
NASA Astrophysics Data System (ADS)
Woodruff, David P.
In a recent paper Mohassel and Franklin study the efficiency of secure two-party computation in the presence of malicious behavior. Their aim is to make classical solutions to this problem, such as zero-knowledge compilation, more efficient. The authors provide several schemes which are the most efficient to date. We propose a modification to their main scheme using expanders. Our modification asymptotically improves at least one measure of efficiency of all known schemes. We also point out an error, and improve the analysis of one of their schemes.
Overview of NASA MSFC IEC Multi-CAD Collaboration Capability
NASA Technical Reports Server (NTRS)
Moushon, Brian; McDuffee, Patrick
2005-01-01
This viewgraph presentation provides an overview of a Design and Data Management System (DDMS) for Computer Aided Design (CAD) collaboration in order to support the Integrated Engineering Capability (IEC) at Marshall Space Flight Center (MSFC).
CAD-CAE in Electrical Machines and Drives Teaching.
ERIC Educational Resources Information Center
Belmans, R.; Geysen, W.
1988-01-01
Describes the use of computer-aided design (CAD) techniques in teaching the design of electrical motors. The approaches described cover three technical viewpoints: electromagnetic, thermal, and mechanical aspects. Provides three diagrams, a table, and conclusions. (YP)
Do Computers Improve the Drawing of a Geometrical Figure for 10 Year-Old Children?
ERIC Educational Resources Information Center
Martin, Perrine; Velay, Jean-Luc
2012-01-01
Nowadays, computer aided design (CAD) is widely used by designers. Would children learn to draw more easily and more efficiently if they were taught with computerised tools? To answer this question, we conducted an experiment designed to compare two methods for children to produce the same drawing: the classical "pen and paper" method and a CAD method. We…
CAD/CAM ceramic restorations in the operatory and laboratory.
Fasbinder, Dennis J
2003-08-01
Computer assisted design/computer assisted machining (CAD/CAM) technology has received considerable clinical and research interest from modern dental practices as a means of delivering all-ceramic restorations. The CEREC System offers CAD/CAM dental technology designed for clinical use by dentists, as well as a separate system designed for dental laboratory technicians. The CEREC 3 system is indicated for dental operatory applications, and the CEREC inLab system is indicated for dental laboratory applications. Although both systems rely on similar CAD/CAM technology, several significant differences exist in the processing techniques involved, restorative materials used, and types of restoration provided. PMID:14692164
A scheme for efficient quantum computation with linear optics
NASA Astrophysics Data System (ADS)
Knill, E.; Laflamme, R.; Milburn, G. J.
2001-01-01
Quantum computers promise to increase greatly the efficiency of solving problems such as factoring large integers, combinatorial optimization and quantum physics simulation. One of the greatest challenges now is to implement the basic quantum-computational elements in a physical system and to demonstrate that they can be reliably and scalably controlled. One of the earliest proposals for quantum computation is based on implementing a quantum bit with two optical modes containing one photon. The proposal is appealing because of the ease with which photon interference can be observed. Until now, it suffered from the requirement for non-linear couplings between optical modes containing few photons. Here we show that efficient quantum computation is possible using only beam splitters, phase shifters, single photon sources and photo-detectors. Our methods exploit feedback from photo-detectors and are robust against errors from photon loss and detector inefficiency. The basic elements are accessible to experimental investigation with current technology.
I/O-Efficient Scientific Computation Using TPIE
NASA Technical Reports Server (NTRS)
Vengroff, Darren Erik; Vitter, Jeffrey Scott
1996-01-01
In recent years, input/output (I/O)-efficient algorithms for a wide variety of problems have appeared in the literature. However, systems specifically designed to assist programmers in implementing such algorithms have remained scarce. TPIE is a system designed to support I/O-efficient paradigms for problems from a variety of domains, including computational geometry, graph algorithms, and scientific computation. The TPIE interface frees programmers from having to deal not only with explicit read and write calls, but also with the complex memory management that must be performed for I/O-efficient computation. In this paper we discuss applications of TPIE to problems in scientific computation. We discuss algorithmic issues underlying the design and implementation of the relevant components of TPIE and present performance results of programs written to solve a series of benchmark problems using our current TPIE prototype. Some of the benchmarks we present are based on the NAS parallel benchmarks while others are of our own creation. We demonstrate that the central processing unit (CPU) overhead required to manage I/O is small and that even with just a single disk, the I/O overhead of I/O-efficient computation ranges from negligible to the same order of magnitude as CPU time. We conjecture that if we use a number of disks in parallel this overhead can be all but eliminated.
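The canonical I/O-efficient paradigm that systems like TPIE support is two-phase external merge sort: sort memory-sized runs, spill each run to disk, then stream a k-way merge holding only one record per run in memory. The sketch below is a minimal Python illustration of the paradigm, not TPIE's C++ interface.

```python
import heapq
import os
import tempfile

def external_sort(values, chunk_size=4):
    # Phase 1: sort memory-sized chunks and spill each run to its own file.
    paths = []
    for start in range(0, len(values), chunk_size):
        run = sorted(values[start:start + chunk_size])
        fd, path = tempfile.mkstemp(suffix=".run")
        with os.fdopen(fd, "w") as f:
            f.write("\n".join(map(str, run)))
        paths.append(path)
    # Phase 2: k-way merge of the runs; only one line per run is buffered.
    files = [open(p) for p in paths]
    merged = list(heapq.merge(*(map(int, f) for f in files)))
    for f in files:
        f.close()
    for p in paths:
        os.remove(p)
    return merged
```

Each value is read and written O(1) times per merge pass, which is the disk-access pattern that makes the approach I/O-efficient.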
Equilibrium analysis of the efficiency of an autonomous molecular computer
NASA Astrophysics Data System (ADS)
Rose, John A.; Deaton, Russell J.; Hagiya, Masami; Suyama, Akira
2002-02-01
In the whiplash polymerase chain reaction (WPCR), autonomous molecular computation is implemented in vitro by the recursive, self-directed polymerase extension of a mixture of DNA hairpins. Although computational efficiency is known to be reduced by a tendency for DNAs to self-inhibit by backhybridization, both the magnitude of this effect and its dependence on the reaction conditions have remained open questions. In this paper, the impact of backhybridization on WPCR efficiency is addressed by modeling the recursive extension of each strand as a Markov chain. The extension efficiency per effective polymerase-DNA encounter is then estimated within the framework of a statistical thermodynamic model. Model predictions are shown to provide close agreement with the premature halting of computation reported in a recent in vitro WPCR implementation, a particularly significant result, given that backhybridization had been discounted as the dominant error process. The scaling behavior further indicates completion times to be sufficiently long to render WPCR-based massive parallelism infeasible. A modified architecture, PNA-mediated WPCR (PWPCR) is then proposed in which the occupancy of backhybridized hairpins is reduced by targeted PNA2/DNA triplex formation. The efficiency of PWPCR is discussed using a modified form of the model developed for WPCR. Predictions indicate the PWPCR efficiency is sufficient to allow the implementation of autonomous molecular computation on a massive scale.
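The Markov-chain view of recursive extension can be caricatured with a single per-step success probability: at each stage the strand either extends or backhybridizes and halts. This toy model (my simplification, not the paper's statistical-thermodynamic estimate) already reproduces the qualitative conclusion that completion probability decays geometrically with program length.

```python
import numpy as np

def halting_distribution(p_extend, n_steps):
    # Toy Markov chain for whiplash-PCR-like extension: at each of n_steps
    # stages the strand extends with probability p_extend, otherwise it
    # halts permanently. Returns P(halt after exactly k steps) for
    # k = 0..n_steps-1, with the last entry the full-completion probability.
    probs = [(1.0 - p_extend) * p_extend ** k for k in range(n_steps)]
    probs.append(p_extend ** n_steps)
    return np.array(probs)
```

Even a generous per-step efficiency of 0.9 completes a 10-step program less than 35% of the time, illustrating why backhybridization can halt computation prematurely at scale.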
Complete denture fabrication with CAD/CAM record bases.
McLaughlin, J Bryan; Ramos, Van
2015-10-01
One of the primary goals of new materials and processes for complete denture fabrication has been to reduce polymerization shrinkage. The introduction of computer-aided design and computer-aided manufacturing (CAD/CAM) technology into complete denture fabrication has eliminated polymerization shrinkage in the definitive denture. The use of CAD/CAM record bases for complete denture fabrication can provide a better-fitting denture with fewer postprocessing occlusal errors. PMID:26139040
A SINDA thermal model using CAD/CAE technologies
NASA Technical Reports Server (NTRS)
Rodriguez, Jose A.; Spencer, Steve
1992-01-01
The approach to thermal analysis described by this paper is a technique that incorporates Computer Aided Design (CAD) and Computer Aided Engineering (CAE) to develop a thermal model that has the advantages of Finite Element Methods (FEM) without abandoning the unique advantages of Finite Difference Methods (FDM) in the analysis of thermal systems. The incorporation of existing CAD geometry, the powerful use of a pre- and post-processor, and the ability to do interdisciplinary analysis will be described.
CAD in the processing plant environment or managing the CAD revolution
Woolbert, M.A.; Bennett, R.S.; Haring, W.I.
1985-10-01
The author presents a case report on the use of a Computer Aided Design/Computer Aided Drafting (CAD) system. Illustrated is a four-workstation system, in addition to which there are two 70 megabyte disk drives, a check plotter and a 24-inch wide electrostatic plotter, a 300 megabyte disk for on-line storage, a tape drive for archive and backup, and a 1-megabyte network process server. It is a distributed logic system. The author states that a CAD system both inexpensive enough and powerful enough for the plant environment is relatively new on the market, made possible by the advent of super microcomputers. Also discussed is the impact the CAD system has had on productivity.
Keshvari, Jafar; Heikkilä, Teemu
2011-12-01
Previous studies comparing the SAR difference in the heads of children and adults used highly simplified generic models or half-wave dipole antennas. The objective of this study was to investigate the SAR difference in the heads of children and adults using realistic EMF sources based on CAD models of commercial mobile phones. Four MRI-based head phantoms were used in the study. CAD models of Nokia 8310 and 6630 mobile phones were used as exposure sources. Commercially available FDTD software was used for the SAR calculations. SAR values were simulated at frequencies of 900 MHz and 1747 MHz for the Nokia 8310, and 900 MHz, 1747 MHz and 1950 MHz for the Nokia 6630. The main finding of this study was that the SAR distribution/variation in the head models highly depends on the structure of the antenna and phone model, which suggests that the type of exposure source is the main parameter to focus on in EMF exposure studies. Although the previous findings regarding the significant role of the anatomy of the head, phone position, frequency, local tissue inhomogeneity, and tissue composition specifically in the exposed area were confirmed, the SAR values and SAR distributions caused by generic source models cannot be extrapolated to real device exposures. The general conclusion is that, from a volume-averaged SAR point of view, no systematic differences between child and adult heads were found. PMID:22005524
Selz, Christian F; Vuck, Alexander; Guess, Petra C
2016-02-01
Esthetic full-mouth rehabilitation represents a great challenge for clinicians and dental technicians. Computer-aided design/computer-assisted manufacture (CAD/CAM) technology and novel ceramic materials in combination with adhesive cementation provide a reliable, predictable, and economic workflow. Polychromatic feldspathic CAD/CAM ceramics that are specifically designed for anterior indications result in superior esthetics, whereas novel CAD/CAM hybrid ceramics provide sufficient fracture resistance and adsorption of the occlusal load in posterior areas. Screw-retained monolithic CAD/CAM lithium disilicate crowns (ie, hybrid abutment crowns) represent a reliable and time- and cost-efficient prosthetic implant solution. This case report details a CAD/CAM approach to the full-arch rehabilitation of a 65-year-old patient with tooth- and implant-supported restorations and provides an overview of the applied CAD/CAM materials and the utilized chairside intraoral scanner. The esthetics, functional occlusion, and gingival and peri-implant tissues remained stable over a follow-up period of 3 years. No signs of fractures within the restorations were observed. PMID:26417616
NASA Astrophysics Data System (ADS)
Tabary, J.; Glière, A.
A Monte Carlo radiation transport simulation program, EGS Nova, and a Computer Aided Design software, BRL-CAD, have been coupled within the framework of Sindbad, a Nondestructive Evaluation (NDE) simulation system. In its current state, the program is very valuable in an NDE laboratory context, as it helps simulate the images due to the uncollided and scattered photon fluxes in a single NDE software environment, without having to switch to a Monte Carlo code parameter set. Numerical validations show good agreement with EGS4-computed and published data. As the program's major drawback is the execution time, computational efficiency improvements are foreseen.
CAD-CAM at Bendix Kansas City: the BICAM system
Witte, D.R.
1983-04-01
Bendix Kansas City Division (BEKC) has been involved in computer-aided manufacturing (CAM) technology since the late 1950s, when numerical control (N/C) analysts installed computers to aid in N/C tape preparation for numerically controlled machines. Computer-aided design (CAD) technology was introduced in 1976, when a number of 2D turnkey drafting stations were procured for printed wiring board (PWB) drawing definition and maintenance. In June 1980, CAD-CAM Operations was formed to incorporate an integrated CAD-CAM capability into Bendix operations. In March 1982, a ninth division was added to the existing eight divisions at Bendix. Computer Integrated Manufacturing (CIM) is a small organization, reporting directly to the general manager, with responsibility for coordinating the overall integration of computer-aided systems at Bendix. As a long-range plan, CIM has adopted a National Bureau of Standards (NBS) architecture titled Factory of the Future. Conceptually, the Bendix CAD-CAM system has a centrally located database which can be accessed by CAD and CAM tools, processes, and personnel alike, thus forming an integrated computer-aided engineering (CAE) system. This key requirement of the Bendix CAD-CAM system will be presented in more detail.
Popescu-Rohrlich correlations imply efficient instantaneous nonlocal quantum computation
NASA Astrophysics Data System (ADS)
Broadbent, Anne
2016-08-01
In instantaneous nonlocal quantum computation, two parties cooperate in order to perform a quantum computation on their joint inputs, while being restricted to a single round of simultaneous communication. Previous results showed that instantaneous nonlocal quantum computation is possible, at the cost of an exponential amount of prior shared entanglement (in the size of the input). Here, we show that a linear amount of entanglement (in the size of the computation) suffices, as long as the parties share nonlocal correlations as given by the Popescu-Rohrlich box. This means that communication is not required for efficient instantaneous nonlocal quantum computation. Exploiting the well-known relation to position-based cryptography, our result also implies the impossibility of secure position-based cryptography against adversaries with nonsignaling correlations. Furthermore, our construction establishes a quantum analog of the classical communication-complexity collapse under nonsignaling correlations.
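The Popescu-Rohrlich box invoked above is fully specified by one correlation: on input bits x and y, the two parties receive output bits a and b with a XOR b = x AND y, while each party's own output remains uniformly random. A toy sketch of this correlation (the function and parameter names are ours, not the paper's):

```python
import itertools

def pr_box(x, y, shared_bit=0):
    """Toy Popescu-Rohrlich box: on inputs x, y in {0,1}, return bits (a, b)
    satisfying a XOR b = x AND y. The shared_bit models the local randomness
    that keeps each party's marginal output uniform (hence nonsignaling)."""
    a = shared_bit
    b = shared_bit ^ (x & y)
    return a, b

# The defining correlation holds for every input pair and shared bit:
for x, y in itertools.product((0, 1), repeat=2):
    for r in (0, 1):
        a, b = pr_box(x, y, shared_bit=r)
        assert a ^ b == x & y
```

Because a alone is a fair coin regardless of y, neither party can read the other's input from its own output; the box is nonsignaling even though it wins the CHSH game with certainty.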
Efficient Turing-Universal Computation with DNA Polymers
NASA Astrophysics Data System (ADS)
Qian, Lulu; Soloveichik, David; Winfree, Erik
Bennett's proposed chemical Turing machine is one of the most important thought experiments in the study of the thermodynamics of computation. Yet the sophistication of molecular engineering required to physically construct Bennett's hypothetical polymer substrate and enzymes has deterred experimental implementations. Here we propose a chemical implementation of stack machines - a Turing-universal model of computation similar to Turing machines - using DNA strand displacement cascades as the underlying chemical primitive. More specifically, the mechanism described herein is the addition and removal of monomers from the end of a DNA polymer, controlled by strand displacement logic. We capture the motivating feature of Bennett's scheme: that physical reversibility corresponds to logically reversible computation, and arbitrarily little energy per computation step is required. Further, as a method of embedding logic control into chemical and biological systems, polymer-based chemical computation is significantly more efficient than geometry-free chemical reaction networks.
Communication-efficient parallel architectures and algorithms for image computations
Alnuweiri, H.M.
1989-01-01
The main purpose of this dissertation is the design of efficient parallel techniques for image computations which require global operations on image pixels, as well as the development of parallel architectures with special communication features which can support global data movement efficiently. The class of image problems considered in this dissertation involves global operations on image pixels and irregular (data-dependent) data movement operations. Such problems include histogramming, component labeling, proximity computations, computing the Hough transform, and computing convexity of regions and related properties such as the diameter and a smallest-area enclosing rectangle for each region. Images with multiple figures and multiple labeled sets of pixels are also considered. Efficient solutions to such problems involve integer sorting, graph-theoretic techniques, and techniques from computational geometry. Although such solutions are not computationally intensive (they all require O(n^2) operations to be performed on an n × n image), they require global communication. The emphasis here is on developing parallel techniques for data movement, reduction, and distribution, which lead to processor-time-optimal solutions for such problems on the proposed organizations. The proposed parallel architectures are based on a memory array which can be viewed as an arrangement of memory modules in a k-dimensional space such that the modules are connected to buses placed parallel to the orthogonal axes of the space, and each bus is connected to one processor or a group of processors. It will be shown that such organizations are communication-efficient and are thus highly suited to the image problems considered here, as well as to several other classes of problems. The proposed organizations have p processors and O(n^2) words of memory to process n × n images.
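Several of the problems listed above, such as component labeling, have simple sequential baselines against which parallel organizations are measured. A minimal 4-connected labeling sketch (the dissertation's parallel algorithms are not reproduced here; this is only the sequential reference version):

```python
from collections import deque

def label_components(image):
    """Label 4-connected components of a binary image (list of rows of 0/1).
    Sequential BFS baseline for the parallel labeling problem described above."""
    n, m = len(image), len(image[0])
    labels = [[0] * m for _ in range(n)]
    next_label = 0
    for i in range(n):
        for j in range(m):
            if image[i][j] and not labels[i][j]:
                next_label += 1                      # start a new component
                q = deque([(i, j)])
                labels[i][j] = next_label
                while q:                             # flood-fill the component
                    r, c = q.popleft()
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < n and 0 <= cc < m and image[rr][cc] and not labels[rr][cc]:
                            labels[rr][cc] = next_label
                            q.append((rr, cc))
    return labels, next_label
```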
Put Your Computers in the Most Efficient Environment.
ERIC Educational Resources Information Center
Yeaman, Andrew R. J.
1984-01-01
Discusses factors that should be considered in selecting video display screens and furniture and designing work spaces for computerized instruction that will provide optimal conditions for student health and learning efficiency. Use of work patterns found to be least stressful by computer workers is also suggested. (MBR)
A Computationally Efficient Algorithm for Aerosol Phase Equilibrium
Zaveri, Rahul A.; Easter, Richard C.; Peters, Len K.; Wexler, Anthony S.
2004-10-04
Three-dimensional models of atmospheric inorganic aerosols need an accurate yet computationally efficient thermodynamic module that is repeatedly used to compute internal aerosol phase state equilibrium. In this paper, we describe the development and evaluation of a computationally efficient numerical solver called MESA (Multicomponent Equilibrium Solver for Aerosols). The unique formulation of MESA allows iteration of all the equilibrium equations simultaneously while maintaining overall mass conservation and electroneutrality in both the solid and liquid phases. MESA is unconditionally stable, shows robust convergence, and typically requires only 10 to 20 single-level iterations (where all activity coefficients and aerosol water content are updated) per internal aerosol phase equilibrium calculation. Accuracy of MESA is comparable to that of the highly accurate Aerosol Inorganics Model (AIM), which uses a rigorous Gibbs free energy minimization approach. Performance evaluation will be presented for a number of complex multicomponent mixtures commonly found in urban and marine tropospheric aerosols.
An overview of energy efficiency techniques in cluster computing systems
Valentini, Giorgio Luigi; Lassonde, Walter; Khan, Samee Ullah; Min-Allah, Nasro; Madani, Sajjad A.; Li, Juan; Zhang, Limin; Wang, Lizhe; Ghani, Nasir; Kolodziej, Joanna; Li, Hongxiang; Zomaya, Albert Y.; Xu, Cheng-Zhong; Balaji, Pavan; Vishnu, Abhinav; Pinel, Fredric; Pecero, Johnatan E.; Kliazovich, Dzmitry; Bouvry, Pascal
2011-09-10
Two major constraints demand more consideration for energy efficiency in cluster computing: (a) operational costs, and (b) system reliability. Increasing energy efficiency in cluster systems will reduce energy consumption and excess heat, lower operational costs, and improve system reliability. Based on the energy-power relationship, and the fact that energy consumption can be reduced with strategic power management, this survey focuses on the characteristics of two main power management technologies: (a) static power management (SPM) systems, which utilize low-power components to save energy, and (b) dynamic power management (DPM) systems, which utilize software and power-scalable components to optimize energy consumption. We present the current state of the art in both SPM and DPM techniques, citing representative examples. The survey concludes with a brief discussion and some assumptions about possible future directions that could be explored to improve energy efficiency in cluster computing.
NASA Astrophysics Data System (ADS)
Zhou, Haiguang; Han, Min
2003-10-01
We focus on CAD/CAM for optomechatronics. We have developed a CAD/CAM system that covers not only mechanics but also optics and electronics. The software can be used for training and education. We introduce mechanical CAD, optical CAD, and electrical CAD, and show how to draw circuit diagrams, mechanical diagrams, and luminous-transmission diagrams, from 2D to 3D drawing. We describe how to create 2D and 3D parts for optomechatronics, how to edit tool paths, how to select process parameters, how to run the post-processor, how to dynamically display the tool path, and how to generate the CNC program. We introduce the joint application of CAD and CAM, aiming to meet the combined requirements of optics, mechanics, and electronics.
ProperCAD: A portable object-oriented parallel environment for VLSI CAD
NASA Technical Reports Server (NTRS)
Ramkumar, Balkrishna; Banerjee, Prithviraj
1993-01-01
Most parallel algorithms for VLSI CAD proposed to date have one important drawback: they work efficiently only on the machines for which they were designed. As a result, such algorithms are dependent on the architecture for which they were developed and do not port easily to other parallel architectures. A new project under way to address this problem is described: a portable object-oriented parallel environment for CAD algorithms (ProperCAD). The objectives of this research are (1) to develop new parallel algorithms that run in a portable object-oriented environment (CAD algorithms are being developed on a general-purpose platform for portable parallel programming called CARM, along with a truly object-oriented C++ environment specialized for CAD applications); and (2) to design the parallel algorithms around a good sequential algorithm with a well-defined parallel-sequential interface (permitting the parallel algorithm to benefit from future developments in sequential algorithms). One CAD application that has been implemented as part of the ProperCAD project, flat VLSI circuit extraction, is described. The algorithm, its implementation, and its performance on a range of parallel machines are discussed in detail. It currently runs on an Encore Multimax, a Sequent Symmetry, Intel iPSC/2 and i860 hypercubes, an NCUBE 2 hypercube, and a network of Sun Sparc workstations. Performance data are also provided for other applications that were developed: test pattern generation for sequential circuits, parallel logic synthesis, and standard cell placement.
DeviceEditor visual biological CAD canvas
2012-01-01
Background Biological Computer Aided Design (bioCAD) assists the de novo design and selection of existing genetic components to achieve a desired biological activity, as part of an integrated design-build-test cycle. To meet the emerging needs of Synthetic Biology, bioCAD tools must address the increasing prevalence of combinatorial library design, design rule specification, and scar-less multi-part DNA assembly. Results We report the development and deployment of web-based bioCAD software, DeviceEditor, which provides a graphical design environment that mimics the intuitive visual whiteboard design process practiced in biological laboratories. The key innovations of DeviceEditor include visual combinatorial library design, direct integration with scar-less multi-part DNA assembly design automation, and a graphical user interface for the creation and modification of design specification rules. We demonstrate how biological designs are rendered on the DeviceEditor canvas, and we present effective visualizations of genetic component ordering and combinatorial variations within complex designs. Conclusions DeviceEditor liberates researchers from DNA base-pair manipulation, and enables users to create successful prototypes using standardized, functional, and visual abstractions. Open and documented software interfaces support further integration of DeviceEditor with other bioCAD tools and software platforms. DeviceEditor saves researcher time and institutional resources through correct-by-construction design, the automation of tedious tasks, design reuse, and the minimization of DNA assembly costs. PMID:22373390
NASA Astrophysics Data System (ADS)
Patel, Thaneswer; Sanjog, J.; Karmakar, Sougata
2016-06-01
Computer-aided design (CAD) and digital human modeling (DHM; specialized CAD software for virtual human representation) technologies offer unique opportunities to incorporate human factors proactively in design development. The challenges of enhancing agricultural productivity through improvement of agricultural tools/machinery and better human-machine compatibility can be addressed by adoption of these modern technologies. The objectives of the present work are to provide a detailed scenario of CAD and DHM applications in the agricultural sector, and to identify means for wide adoption of these technologies for the design and development of cost-effective, user-friendly, efficient, and safe agricultural tools/equipment and operator workplaces. An extensive literature review has been conducted for systematic segregation and representation of the available information toward drawing inferences. Although the application of various CAD software packages has momentum in agricultural research, particularly for the design and manufacture of agricultural equipment/machinery, the use of DHM is still in its infancy in this sector. The current review discusses the reasons for the limited adoption of these technologies in the agricultural sector and the steps to be taken for their wide adoption. It also suggests possible future research directions toward better ergonomic design strategies for the improvement of agricultural equipment/machines and workstations through the application of CAD and DHM.
On the Use of Electrooculogram for Efficient Human Computer Interfaces
Usakli, A. B.; Gurkan, S.; Aloise, F.; Vecchiato, G.; Babiloni, F.
2010-01-01
The aim of this study is to present electrooculogram (EOG) signals that can be used efficiently for a human-computer interface. Establishing an efficient alternative channel for communication without overt speech and hand movements is important to increase the quality of life for patients suffering from amyotrophic lateral sclerosis or other illnesses that prevent correct limb and facial muscular responses. We have performed several experiments to compare the P300-based BCI speller and the new EOG-based system. A five-letter word can be written in 25 seconds on average with the new system, versus 105 seconds with the EEG-based device. Giving a message such as “clean-up” could be performed in 3 seconds with the new system. The new system is more efficient than the P300-based BCI system in terms of accuracy, speed, applicability, and cost efficiency. Using EOG signals, it is possible to improve the communication abilities of those patients who can move their eyes. PMID:19841687
Generating Composite Overlapping Grids on CAD Geometries
Henshaw, W.D.
2002-02-07
We describe algorithms and tools that have been developed to generate composite overlapping grids on geometries that have been defined with computer-aided design (CAD) programs. This process consists of five main steps. Starting from a description of the surfaces defining the computational domain, we (1) correct errors in the CAD representation, (2) determine the topology of the patched surface, (3) build a global triangulation of the surface, (4) construct structured surface and volume grids using hyperbolic grid generation, and (5) generate the overlapping grid by determining the holes and the interpolation points. The overlapping grid generator used for the final step also supports the rapid generation of grids for block-structured adaptive mesh refinement and for moving grids. These algorithms have been implemented as part of the Overture object-oriented framework.
Cost reduction advantages of CAD/CAM
NASA Astrophysics Data System (ADS)
Parsons, G. T.
1983-05-01
Features of the CAD/CAM system implemented at the General Dynamics Convair division are summarized. CAD/CAM was initiated in 1976 to enhance engineering, manufacturing, and quality assurance, and thereby the company's competitive bidding position. Numerical models are substituted for hardware models wherever possible, and numerical criteria are defined in design for guiding computer-controlled parts-manufacturing machines. The system comprises multiple terminals, a database, a digitizer, printers, disk and tape drives, and graphics displays. Applications include the design and manufacture of parts and components for avionics, structures, scientific investigations, and aircraft structural components. Interfaces with other computers allow structural analyses by finite element codes. Although no time savings have been gained compared to manual drafting, components of greater complexity than could have been designed by hand have been designed and manufactured.
Efficient computations of quantum canonical Gibbs state in phase space
NASA Astrophysics Data System (ADS)
Bondar, Denys I.; Campos, Andre G.; Cabrera, Renan; Rabitz, Herschel A.
2016-06-01
The Gibbs canonical state, as a maximum-entropy density matrix, represents a quantum system in equilibrium with a thermostat. This state plays an essential role in thermodynamics and serves as the initial condition for nonequilibrium dynamical simulations. We solve a long-standing problem of computing the Gibbs state Wigner function with nearly machine accuracy by solving the Bloch equation directly in phase space. Furthermore, algorithms are provided that yield high-quality Wigner distributions for pure stationary states as well as for Thomas-Fermi and Bose-Einstein distributions. The developed numerical methods furnish a long-sought efficient computational framework for nonequilibrium quantum simulations directly in the Wigner representation.
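Stripped of the phase-space machinery described above, the Gibbs canonical state assigns level populations p_n = exp(-E_n/kT)/Z, with Z the partition function. A zero-dimensional warm-up (the level count and units below are illustrative assumptions, not from the paper):

```python
import math

def gibbs_populations(energies, kT):
    """Canonical (Gibbs) populations p_n = exp(-E_n/kT) / Z for a finite set
    of energy levels. A zero-dimensional warm-up for the phase-space (Wigner)
    computation discussed above, which solves the Bloch equation for the full
    density matrix."""
    weights = [math.exp(-e / kT) for e in energies]
    Z = sum(weights)                      # partition function
    return [w / Z for w in weights]

# Truncated harmonic-oscillator levels E_n = n + 1/2 (units of hbar*omega):
levels = [n + 0.5 for n in range(50)]
pops = gibbs_populations(levels, kT=1.0)
```

For equally spaced levels the populations form a geometric sequence with ratio exp(-1/kT), which is a quick sanity check on the computation.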
A Compute-Efficient Bitmap Compression Index for Database Applications
Energy Science and Technology Software Center (ESTSC)
2006-01-01
FastBit: A Compute-Efficient Bitmap Compression Index for Database Applications. The Word-Aligned Hybrid (WAH) bitmap compression method and data structure is highly efficient for performing search and retrieval operations on large datasets. The WAH technique is optimized for computational efficiency. The WAH-based bitmap indexing software, called FastBit, is particularly appropriate for infrequently varying databases, including those found in the on-line analytical processing (OLAP) industry. Some commercial database products already include some version of a bitmap index, which could possibly be replaced by the WAH bitmap compression techniques for a potentially large operational speedup. Experimental results show performance improvements by an average factor of 10 over bitmap technology used by industry, as well as increased efficiency in constructing compressed bitmaps. FastBit can be used as a stand-alone index or integrated into a database system. When integrated into a database system, this technique may be particularly useful for real-time business analysis applications. Additional FastBit applications may include efficient real-time exploration of scientific models, such as climate and combustion simulations, to minimize search time for analysis and subsequent data visualization. FastBit was proven theoretically to be time-optimal because it provides a search time proportional to the number of elements selected by the index.
A Compute-Efficient Bitmap Compression Index for Database Applications
Wu, Kesheng; Shoshani, Arie
2006-01-01
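The fill/literal structure at the heart of WAH can be sketched in a few lines. This is a deliberately simplified model: real WAH packs (w-1)-bit groups and run counts into 32-bit machine words at the bit level, while this sketch keeps Python tuples and an 8-bit word size.

```python
def wah_compress(bits, w=8):
    """Simplified Word-Aligned Hybrid (WAH) style compression sketch.
    The bitmap is cut into (w-1)-bit groups; a maximal run of identical
    all-zero or all-one groups becomes a single 'fill' word (bit value,
    run length), and every other group is stored as a 'literal' word."""
    g = w - 1
    groups = [bits[i:i + g] for i in range(0, len(bits), g)]
    out, i = [], 0
    while i < len(groups):
        grp = groups[i]
        if len(grp) == g and len(set(grp)) == 1:    # uniform full group: fill
            run = 1
            while i + run < len(groups) and groups[i + run] == grp:
                run += 1
            out.append(('fill', grp[0], run))
            i += run
        else:                                       # mixed (or short) group
            out.append(('literal', tuple(grp)))
            i += 1
    return out

def wah_decompress(words, w=8):
    """Inverse of wah_compress: expand fill and literal words back to bits."""
    g = w - 1
    bits = []
    for word in words:
        if word[0] == 'fill':
            bits.extend([word[1]] * (g * word[2]))
        else:
            bits.extend(word[1])
    return bits
```

The compressed form stays word-aligned, which is what lets real WAH perform bitwise AND/OR directly on compressed words; that operation is not sketched here.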
A Novel Green Cloud Computing Framework for Improving System Efficiency
NASA Astrophysics Data System (ADS)
Lin, Chen
As the prevalence of Cloud computing continues to rise, the need for power-saving mechanisms within the Cloud also increases. In this paper we have presented a novel Green Cloud framework for improving system efficiency in a data center. To demonstrate the potential of our framework, we have presented new energy-efficient scheduling, VM system image, and image management components that explore new ways to conserve power. Through our research presented in this paper, we have found new ways to save vast amounts of energy while minimally impacting performance.
CAD/CAM improves productivity in nonaerospace job shops
NASA Astrophysics Data System (ADS)
Koenig, D. T.
1982-12-01
Business cost improvements that can result from Computer Aided Design/Computer Aided Manufacturing (CAD/CAM), when properly applied, are discussed. Emphasis is placed on the use of CAD/CAM for machine and process control, design and planning control, and production and measurement control. It is pointed out that the implementation of CAD/CAM should be based on the following priorities: (1) recognize interrelationships between the principal functions of CAD/CAM; (2) establish a Systems Council to determine overall strategy and specify the communications/decision-making system; (3) implement the communications/decision-making system to improve productivity; and (4) implement interactive graphics and other additions to further improve productivity.
Resin-composite Blocks for Dental CAD/CAM Applications
Ruse, N.D.; Sadoun, M.J.
2014-01-01
Advances in digital impression technology and manufacturing processes have led to a dramatic paradigm shift in dentistry and to the widespread use of computer-aided design/computer-aided manufacturing (CAD/CAM) in the fabrication of indirect dental restorations. Research and development in materials suitable for CAD/CAM applications are currently the most active field in dental materials. Two classes of materials are used in the production of CAD/CAM restorations: glass-ceramics/ceramics and resin composites. While glass-ceramics/ceramics have overall superior mechanical and esthetic properties, resin-composite materials may offer significant advantages related to their machinability and intra-oral reparability. This review summarizes recent developments in resin-composite materials for CAD/CAM applications, focusing on both commercial and experimental materials. PMID:25344335
Resin-composite blocks for dental CAD/CAM applications.
Ruse, N D; Sadoun, M J
2014-12-01
Computational methods for efficient structural reliability and reliability sensitivity analysis
NASA Technical Reports Server (NTRS)
Wu, Y.-T.
1993-01-01
This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
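The core idea behind importance sampling for small failure probabilities, concentrating samples near the failure domain and reweighting by the density ratio, can be illustrated in one dimension. This is a fixed (non-adaptive) sketch, not the paper's AIS scheme; the function names and the choice of proposal density are our own assumptions.

```python
import math
import random

def failure_prob_is(beta, n=200_000, seed=1):
    """Importance-sampling estimate of p_f = P(Z > beta) for Z ~ N(0, 1).
    Samples are drawn from N(beta, 1), centered on the failure boundary,
    and reweighted by the density ratio phi(z) / phi(z - beta). This is a
    one-variable sketch of the idea; the paper's AIS method instead adapts
    the sampling domain incrementally in many dimensions."""
    rng = random.Random(seed)
    phi = lambda z: math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    total = 0.0
    for _ in range(n):
        z = rng.gauss(beta, 1.0)          # sample near the failure region
        if z > beta:                      # indicator of failure
            total += phi(z) / phi(z - beta)
    return total / n

# Exact Gaussian tail probability, for comparison:
exact = lambda beta: 0.5 * math.erfc(beta / math.sqrt(2.0))
```

With crude Monte Carlo, estimating a tail probability of order 1e-3 to a few percent would require millions of samples; the shifted proposal achieves it with far fewer because roughly half the samples land in the failure region.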
A Computationally Efficient Method for Polyphonic Pitch Estimation
NASA Astrophysics Data System (ADS)
Zhou, Ruohua; Reiss, Joshua D.; Mattavelli, Marco; Zoia, Giorgio
2009-12-01
This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the fast Resonator Time-Frequency Image (RTFI) as its basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimation is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. This spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then, incorrect estimates are removed according to spectral irregularity and knowledge of the harmonic structures of notes played on commonly used musical instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and the results demonstrate the high performance and computational efficiency of the approach.
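The first-stage peak picking can be sketched as a local-maximum search over the pitch energy spectrum. The relative threshold below is an illustrative assumption, and the harmonic-grouping and irregularity-pruning stages of the full method are not shown.

```python
def pick_peaks(spectrum, rel_threshold=0.1):
    """Preliminary pitch-candidate selection by peak picking, as in the
    first stage described above: keep indices of local maxima whose energy
    exceeds a fraction of the global maximum."""
    if not spectrum:
        return []
    floor = rel_threshold * max(spectrum)
    peaks = []
    for i in range(1, len(spectrum) - 1):
        # strict rise into the bin, non-strict fall out of it
        if spectrum[i] > floor and spectrum[i - 1] < spectrum[i] >= spectrum[i + 1]:
            peaks.append(i)
    return peaks
```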
An image database management system for conducting CAD research
NASA Astrophysics Data System (ADS)
Gruszauskas, Nicholas; Drukker, Karen; Giger, Maryellen L.
2007-03-01
The development of image databases for CAD research is not a trivial task. The collection and management of images and their related metadata from multiple sources is a time-consuming but necessary process. By standardizing and centralizing the methods by which these data are maintained, one can generate subsets of a larger database that match the specific criteria needed for a particular research project in a quick and efficient manner. A research-oriented management system of this type is highly desirable in a multi-modality CAD research environment. An online, web-based database system for the storage and management of research-specific medical image metadata was designed for use with four modalities of breast imaging: screen-film mammography, full-field digital mammography, breast ultrasound, and breast MRI. The system was designed to consolidate data from multiple clinical sources and provide the user with the ability to anonymize the data. Input concerning the type of data to be stored, as well as the desired searchable parameters, was solicited from researchers in each modality. The backbone of the database was created using MySQL. A robust and easy-to-use interface for entering, removing, modifying, and searching information in the database was created using HTML and PHP. This standardized system can be accessed using any modern web browser and is fundamental to our various research projects on computer-aided detection, diagnosis, cancer risk assessment, multimodality lesion assessment, and prognosis. Our CAD database system stores large amounts of research-related metadata and successfully generates subsets of cases that match the user's desired search criteria.
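The flavor of such a metadata store can be sketched with Python's stdlib sqlite3 in place of the MySQL/PHP stack the paper describes. The table and column names below are illustrative assumptions, since the actual schema is not given.

```python
import sqlite3

# Minimal sketch of a research image-metadata store. The system described
# above uses MySQL with an HTML/PHP front end; this in-memory SQLite schema
# only illustrates the subset-generation idea.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE images (
    id INTEGER PRIMARY KEY,
    modality TEXT,          -- e.g. 'FFDM', 'ultrasound', 'MRI'
    patient_code TEXT,      -- anonymized identifier
    finding TEXT)""")
rows = [
    ("FFDM", "P001", "mass"),
    ("ultrasound", "P001", "cyst"),
    ("MRI", "P002", "mass"),
]
conn.executemany(
    "INSERT INTO images (modality, patient_code, finding) VALUES (?, ?, ?)",
    rows)

# Generate a research subset matching a search criterion, as the system does:
subset = conn.execute(
    "SELECT modality, patient_code FROM images WHERE finding = ? ORDER BY id",
    ("mass",)).fetchall()
```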
A new CAD approach for improving efficacy of cancer screening
NASA Astrophysics Data System (ADS)
Zheng, Bin; Qian, Wei; Li, Lihua; Pu, Jiantao; Kang, Yan; Lure, Fleming; Tan, Maxine; Qiu, Yuchen
2015-03-01
Since the performance and clinical utility of current computer-aided detection (CAD) schemes for detecting and classifying soft-tissue lesions (e.g., breast masses and lung nodules) are not satisfactory, many researchers in the CAD field have called for new CAD research ideas and approaches. The purpose of presenting this opinion paper is to share our vision and stimulate more discussion in the CAD research community of how to overcome or compensate for the limitations of current lesion-detection-based CAD schemes. Based on our observation that analyzing global image information plays an important role in radiologists' decision making, we hypothesized that targeted quantitative image features computed from global images could also provide highly discriminatory power, supplementary to the lesion-based information. To test our hypothesis, we recently performed a number of independent studies. Based on our published preliminary study results, we demonstrated that global mammographic image features and background parenchymal enhancement of breast MR images carry useful information to (1) predict near-term breast cancer risk based on negative screening mammograms, (2) distinguish between true- and false-positive recalls in mammography screening examinations, and (3) classify between malignant and benign breast MR examinations. The global case-based CAD scheme only warns of the risk level of a case, without cueing a large number of false-positive lesions. It can also be applied to guide lesion-based CAD cueing to reduce false positives while enhancing clinically relevant true-positive cueing. However, before such a new CAD approach is clinically acceptable, more work is needed to optimize not only the scheme's performance but also how it integrates with lesion-based CAD schemes in clinical practice.
Computationally efficient, rotational nonequilibrium CW chemical laser model
Sentman, L.H.; Rushmore, W.
1981-10-01
The essential fluid dynamic and kinetic phenomena required for a quantitative, computationally efficient, rotational nonequilibrium model of a CW HF chemical laser are identified. It is shown that, in addition to the pumping, collisional deactivation, and rotational relaxation reactions, F-atom wall recombination, the hot pumping reaction, and multiquantum deactivation reactions play a significant role in determining laser performance. Several problems with the HF kinetics package are identified. The effect of various parameters on run time is discussed.
Efficient MATLAB computations with sparse and factored tensors.
Bader, Brett William; Kolda, Tamara Gibson (Sandia National Lab, Livermore, CA)
2006-12-01
In this paper, the term tensor refers simply to a multidimensional or N-way array, and we consider how specially structured tensors allow for efficient storage and computation. First, we study sparse tensors, which have the property that the vast majority of the elements are zero. We propose storing sparse tensors using coordinate format and describe the computational efficiency of this scheme for various mathematical operations, including those typical to tensor decomposition algorithms. Second, we study factored tensors, which have the property that they can be assembled from more basic components. We consider two specific types: a Tucker tensor can be expressed as the product of a core tensor (which itself may be dense, sparse, or factored) and a matrix along each mode, and a Kruskal tensor can be expressed as the sum of rank-1 tensors. We are interested in the case where the storage of the components is less than the storage of the full tensor, and we demonstrate that many elementary operations can be computed using only the components. All of the efficiencies described in this paper are implemented in the Tensor Toolbox for MATLAB.
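The coordinate storage scheme described in this abstract can be sketched in a few lines. The following Python/NumPy fragment is an illustrative sketch only, not the Tensor Toolbox implementation: it stores a sparse 3-way tensor in coordinate (COO) format and computes a tensor-times-vector contraction touching only the stored nonzeros. The example tensor and all names are hypothetical.

```python
import numpy as np

# Hypothetical COO storage for a sparse 3-way tensor: parallel arrays of
# subscript tuples and nonzero values, plus the overall shape.
subs = np.array([[0, 1, 2],
                 [1, 0, 2],
                 [2, 2, 0]])        # one row of indices per nonzero
vals = np.array([3.0, 5.0, 2.0])    # corresponding nonzero values
shape = (3, 3, 3)

def ttv(subs, vals, shape, v, mode):
    """Tensor-times-vector along `mode`, visiting only the nonzeros."""
    out_modes = [m for m in range(len(shape)) if m != mode]
    out = np.zeros([shape[m] for m in out_modes])
    for idx, x in zip(subs, vals):
        out[tuple(idx[m] for m in out_modes)] += x * v[idx[mode]]
    return out

v = np.array([1.0, 2.0, 3.0])
y = ttv(subs, vals, shape, v, mode=2)   # contracts the third mode
```

The cost of `ttv` here is proportional to the number of nonzeros rather than to the full tensor size, which is the efficiency property the abstract describes.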
Improving computational efficiency of Monte Carlo simulations with variance reduction
Turner, A.
2013-07-01
CCFE perform Monte-Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging since streaming and deep penetration effects are equally important. In order to make such simulations tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that the application of VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows. The weight window represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting being performed and a long history. When running on parallel clusters, a long history can have a detrimental effect on the parallel efficiency - if one process is computing the long history, the other CPUs complete their batch of histories and wait idle. Furthermore some long histories have been found to be effectively intractable. To combat this effect, CCFE has developed an adaptation of MCNP which dynamically adjusts the WW where a large weight deviation is encountered. The method effectively 'de-optimises' the WW, reducing the VR performance but this is offset by a significant increase in parallel efficiency. Testing with a simple geometry has shown the method does not bias the result. This 'long history method' has enabled CCFE to significantly improve the performance of MCNP calculations for ITER on parallel clusters, and will be beneficial for any geometry combining streaming and deep penetration effects. (authors)
CAD of control systems: Application of nonlinear programming to a linear quadratic formulation
NASA Technical Reports Server (NTRS)
Fleming, P.
1983-01-01
The familiar suboptimal regulator design approach is recast as a constrained optimization problem and incorporated in a Computer Aided Design (CAD) package where both design objective and constraints are quadratic cost functions. This formulation permits the separate consideration of, for example, model following errors, sensitivity measures and control energy as objectives to be minimized or limits to be observed. Efficient techniques for computing the interrelated cost functions and their gradients are utilized in conjunction with a nonlinear programming algorithm. The effectiveness of the approach and the degree of insight into the problem which it affords is illustrated in a helicopter regulation design example.
Purchasing Computer-Aided Design Software.
ERIC Educational Resources Information Center
Smith, Roger A.
1992-01-01
Presents a model for the purchase of computer-aided design (CAD) software: collect general information, observe CAD in use, arrange onsite demonstrations, select CAD software and hardware, and choose a vendor. (JOW)
NASA Technical Reports Server (NTRS)
Bray, O. H.
1984-01-01
The role of data base management in CAD/CAM, particularly for geometric data, is described. First, long-term and short-term objectives for CAD/CAM data management are identified. Second, the benefits of the data base management approach are explained. Third, some of the additional work needed in the data base area is discussed.
Efficient and accurate computation of generalized singular-value decompositions
NASA Astrophysics Data System (ADS)
Drmac, Zlatko
2001-11-01
We present a new family of algorithms for accurate floating-point computation of the singular value decomposition (SVD) of various forms of products (quotients) of two or three matrices. The main goal of such an algorithm is to compute all singular values to high relative accuracy: we seek a guaranteed number of accurate digits even in the smallest singular values. We also want to achieve computational efficiency while maintaining high accuracy. To illustrate, consider the SVD of the product A = B^T S C. The new algorithm uses certain preconditioning (based on diagonal scalings and the LU and QR factorizations) to replace A with A' = (B')^T S' C', where A and A' have the same singular values and the matrix A' is computed explicitly. Theoretical analysis and numerical evidence show that, in the case of full-rank B, C, S, the accuracy of the new algorithm is unaffected by replacing B, S, C with, respectively, D_1 B, D_2 S D_3, D_4 C, where D_i, i = 1, ..., 4, are arbitrary diagonal matrices. As an application, the paper proposes new accurate algorithms for computing the (H,K)-SVD and (H_1,K)-SVD of S.
Energy Efficient Biomolecular Simulations with FPGA-based Reconfigurable Computing
Hampton, Scott S; Agarwal, Pratul K
2010-05-01
Reconfigurable computing (RC) is being investigated as a hardware solution for improving time-to-solution for biomolecular simulations. A number of popular molecular dynamics (MD) codes are used to study various aspects of biomolecules. These codes are now capable of simulating nanosecond time-scale trajectories per day on conventional microprocessor-based hardware, but biomolecular processes often occur at the microsecond time-scale or longer. A wide gap exists between the desired and achievable simulation capability; therefore, there is considerable interest in alternative algorithms and hardware for improving the time-to-solution of MD codes. The fine-grain parallelism provided by Field Programmable Gate Arrays (FPGA) combined with their low power consumption make them an attractive solution for improving the performance of MD simulations. In this work, we use an FPGA-based coprocessor to accelerate the compute-intensive calculations of LAMMPS, a popular MD code, achieving up to 5.5 fold speed-up on the non-bonded force computations of the particle mesh Ewald method and up to 2.2 fold speed-up in overall time-to-solution, and potentially an increase by a factor of 9 in power-performance efficiencies for the pair-wise computations. The results presented here provide an example of the multi-faceted benefits to an application in a heterogeneous computing environment.
CAD Model and Visual Assisted Control System for NIF Target Area Positioners
Tekle, E A; Wilson, E F; Paik, T S
2007-10-03
The National Ignition Facility (NIF) target chamber contains precision motion control systems that reach up to 6 meters into the target chamber for handling targets and diagnostics. Systems include the target positioner, an alignment sensor, and diagnostic manipulators (collectively called positioners). Target chamber shot experiments require a variety of positioner arrangements near the chamber center to be aligned to an accuracy of 10 micrometers. Positioners are some of the largest devices in NIF, and they require careful monitoring and control in 3 dimensions to prevent interferences. The Integrated Computer Control System provides efficient and flexible multi-positioner controls. This is accomplished through advanced video-control integration incorporating remote position sensing and real-time analysis of a CAD model of target chamber devices. The control system design, the method used to integrate existing mechanical CAD models, and the offline test laboratory used to verify proper operation of the control system are described.
Energy efficient hybrid computing systems using spin devices
NASA Astrophysics Data System (ADS)
Sharad, Mrigank
Emerging spin devices like magnetic tunnel junctions (MTJs), spin valves, and domain wall magnets (DWM) have opened new avenues for spin-based logic design. This work explored potential computing applications which can exploit such devices for higher energy efficiency and performance. The proposed applications involve hybrid design schemes, where charge-based devices supplement the spin devices to gain large benefits at the system level. As an example, lateral spin valves (LSV) involve switching of nanomagnets using spin-polarized current injection through a metallic channel such as Cu. Such spin-torque-based devices possess several interesting properties that can be exploited for ultra-low-power computation. The analog characteristics of spin currents facilitate non-Boolean computation, such as majority evaluation, that can be used to model a neuron. The magneto-metallic neurons can operate at an ultra-low terminal voltage of ~20 mV, resulting in small computation power. Moreover, since nanomagnets inherently act as memory elements, these devices can facilitate the integration of logic and memory in interesting ways. The spin-based neurons can be integrated with CMOS and other emerging devices, leading to different classes of neuromorphic/non-von Neumann architectures. The spin-based designs involve 'mixed-mode' processing and hence can provide very compact and ultra-low-energy solutions for complex computation blocks, both digital and analog. Such low-power hybrid designs are suitable for various data processing applications like cognitive computing, associative memory, and current-mode on-chip global interconnects. Simulation results for these applications, based on a device-circuit co-simulation framework, predict more than ~100x improvement in computation energy compared to state-of-the-art CMOS designs, for optimal spin-device parameters.
Improving robustness and computational efficiency using modern C++
NASA Astrophysics Data System (ADS)
Paterno, M.; Kowalkowski, J.; Green, C.
2014-06-01
For nearly two decades, the C++ programming language has been the dominant programming language for experimental HEP. The publication of ISO/IEC 14882:2011, the current version of the international standard for the C++ programming language, makes available a variety of language and library facilities for improving the robustness, expressiveness, and computational efficiency of C++ code. However, much of the C++ written by the experimental HEP community does not take advantage of the features of the language to obtain these benefits, either due to lack of familiarity with these features or concern that these features must somehow be computationally inefficient. In this paper, we address some of the features of modern C++, and show how they can be used to make programs that are both robust and computationally efficient. We compare and contrast simple yet realistic examples of some common implementation patterns in C, currently-typical C++, and modern C++, and show (when necessary, down to the level of generated assembly language code) the quality of the executable code produced by recent C++ compilers, with the aim of allowing the HEP community to make informed decisions on the costs and benefits of the use of modern C++.
Methods for increased computational efficiency of multibody simulations
NASA Astrophysics Data System (ADS)
Epple, Alexander
This thesis is concerned with the efficient numerical simulation of finite element based flexible multibody systems. Scaling operations are systematically applied to the governing index-3 differential algebraic equations in order to solve the problem of ill conditioning for small time step sizes. The importance of augmented Lagrangian terms is demonstrated. The use of fast sparse solvers is justified for the solution of the linearized equations of motion, resulting in significant savings of computational costs. Three time stepping schemes for the integration of the governing equations of flexible multibody systems are discussed in detail: the two-stage Radau IIA scheme, the energy decaying scheme, and the generalized-α method. Their formulations are adapted to the specific structure of the governing equations of flexible multibody systems. The efficiency of the time integration schemes is comprehensively evaluated on a series of test problems. Formulations for structural and constraint elements are reviewed, and the problem of interpolation of finite rotations in geometrically exact structural elements is revisited. This results in the development of a new improved interpolation algorithm, which preserves the objectivity of the strain field and guarantees stable simulations in the presence of arbitrarily large rotations. Finally, strategies for the spatial discretization of beams in the presence of steep variations in cross-sectional properties are developed. These strategies reduce the number of degrees of freedom needed to accurately analyze beams with discontinuous properties, resulting in improved computational efficiency.
Improving the Efficiency of Abdominal Aortic Aneurysm Wall Stress Computations
Zelaya, Jaime E.; Goenezen, Sevan; Dargon, Phong T.; Azarbal, Amir-Farzin; Rugonyi, Sandra
2014-01-01
An abdominal aortic aneurysm is a pathological dilation of the abdominal aorta, which carries a high mortality rate if ruptured. The most commonly used surrogate marker of rupture risk is the maximal transverse diameter of the aneurysm. More recent studies suggest that wall stress from models of patient-specific aneurysm geometries extracted, for instance, from computed tomography images may be a more accurate predictor of rupture risk and an important factor in AAA size progression. However, quantification of wall stress is typically computationally intensive and time-consuming, mainly due to the nonlinear mechanical behavior of the abdominal aortic aneurysm walls. These difficulties have limited the potential of computational models in clinical practice. To facilitate computation of wall stresses, we propose to use a linear approach that ensures equilibrium of wall stresses in the aneurysms. This proposed linear model approach is easy to implement and eliminates the burden of nonlinear computations. To assess the accuracy of our proposed approach to compute wall stresses, results from idealized and patient-specific model simulations were compared to those obtained using conventional approaches and to those of a hypothetical, reference abdominal aortic aneurysm model. For the reference model, wall mechanical properties and the initial unloaded and unstressed configuration were assumed to be known, and the resulting wall stresses were used as reference for comparison. Our proposed linear approach accurately approximates wall stresses for varying model geometries and wall material properties. Our findings suggest that the proposed linear approach could be used as an effective, efficient, easy-to-use clinical tool to estimate patient-specific wall stresses. PMID:25007052
Exploiting stoichiometric redundancies for computational efficiency and network reduction.
Ingalls, Brian P; Bembenek, Eric
2015-01-01
Analysis of metabolic networks typically begins with construction of the stoichiometry matrix, which characterizes the network topology. This matrix provides, via the balance equation, a description of the potential steady-state flow distribution. This paper begins with the observation that the balance equation depends only on the structure of linear redundancies in the network, and so can be stated in a succinct manner, leading to computational efficiencies in steady-state analysis. This alternative description of steady-state behaviour is then used to provide a novel method for network reduction, which complements existing algorithms for describing intracellular networks in terms of input-output macro-reactions (to facilitate bioprocess optimization and control). Finally, it is demonstrated that this novel reduction method can be used to address elementary mode analysis of large networks: the modes supported by a reduced network can capture the input-output modes of a metabolic module with significantly reduced computational effort. PMID:25547516
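The balance equation referred to in this abstract is S v = 0, where S is the stoichiometry matrix and v the flux vector: admissible steady-state flux distributions form the null space of S. A minimal Python/NumPy illustration follows; the tiny example network is invented for demonstration and is not from the paper.

```python
import numpy as np

# Illustrative 3-metabolite, 4-reaction network (hypothetical).
# Rows = metabolites, columns = reactions; S[i, j] is the stoichiometric
# coefficient of metabolite i in reaction j.
S = np.array([[ 1, -1,  0,  0],
              [ 0,  1, -1, -1],
              [ 0,  0,  1, -1]], dtype=float)

# Steady-state balance equation: S @ v = 0. We extract a basis for the
# null space of S from the trailing right singular vectors of its SVD.
_, sigma, Vt = np.linalg.svd(S)
null_dim = S.shape[1] - np.sum(sigma > 1e-10)
null_basis = Vt[-null_dim:].T          # columns span {v : S v = 0}

v = null_basis[:, 0]
residual = np.linalg.norm(S @ v)       # ~0 up to floating-point error
```

For this network the null space is one-dimensional, so every steady-state flux distribution is a scalar multiple of a single mode; the paper's contribution is exploiting the linear redundancies in S so that such analyses scale to large networks.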
An Algorithm for Projecting Points onto a Patched CAD Model
Henshaw, W D
2001-05-29
We are interested in building structured overlapping grids for geometries defined by computer-aided-design (CAD) packages. Geometric information defining the boundary surfaces of a computational domain is often provided in the form of a collection of possibly hundreds of trimmed patches. The first step in building an overlapping volume grid on such a geometry is to build overlapping surface grids. A surface grid is typically built using hyperbolic grid generation; starting from a curve on the surface, a grid is grown by marching over the surface. A given hyperbolic grid will typically cover many of the underlying CAD surface patches. The fundamental operation needed for building surface grids is that of projecting a point in space onto the closest point on the CAD surface. We describe a fast algorithm for performing this projection; it makes use of a fairly coarse global triangulation of the CAD geometry. We describe how to build this global triangulation by first determining the connectivity of the CAD surface patches. This step is necessary since it is often the case that the CAD description contains no information specifying how a given patch connects to its neighbors. Determining the connectivity is difficult since the surface patches may contain mistakes such as gaps or overlaps between neighboring patches.
Differential area profiles: decomposition properties and efficient computation.
Ouzounis, Georgios K; Pesaresi, Martino; Soille, Pierre
2012-08-01
Differential area profiles (DAPs) are point-based multiscale descriptors used in pattern analysis and image segmentation. They are defined through sets of size-based connected morphological filters that constitute a joint area opening top-hat and area closing bottom-hat scale-space of the input image. The work presented in this paper explores the properties of this image decomposition through sets of area zones. An area zone defines a single plane of the DAP vector field and contains all the peak components of the input image, whose size is between the zone's attribute extrema. Area zones can be computed efficiently from hierarchical image representation structures, in a way similar to regular attribute filters. Operations on the DAP vector field can then be computed without the need for exporting it first, and an example with the leveling-like convex/concave segmentation scheme is given. This is referred to as the one-pass method and it is demonstrated on the Max-Tree structure. Its computational performance is tested and compared against conventional means for computing differential profiles, relying on iterative application of area openings and closings. Applications making use of the area zone decomposition are demonstrated in problems related to remote sensing and medical image analysis. PMID:22184259
Efficient parallel global garbage collection on massively parallel computers
Kamada, Tomio; Matsuoka, Satoshi; Yonezawa, Akinori
1994-12-31
On distributed-memory high-performance MPPs where processors are interconnected by an asynchronous network, efficient garbage collection (GC) becomes difficult due to inter-node references and references within pending, unprocessed messages. The parallel global GC algorithm (1) takes advantage of reference locality, (2) efficiently traverses references over nodes, (3) admits minimum pause time of ongoing computations, and (4) has been shown to scale up to 1024-node MPPs. The algorithm employs a global weight counting scheme to substantially reduce message traffic. Two methods for confirming the arrival of pending messages are used: one counts the number of messages, and the other uses network 'bulldozing.' Performance evaluation in actual implementations on a multicomputer with 32-1024 nodes, the Fujitsu AP1000, reveals various favorable properties of the algorithm.
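The weight-counting idea is related to classic weighted reference counting for distributed GC, sketched below for intuition: copying a reference splits its weight locally, so only creation and deletion require a message to the owning node, which substantially reduces message traffic. This toy Python simulation is purely illustrative and is not the paper's algorithm; all names are invented.

```python
class RemoteObject:
    """Owner-side bookkeeping: tracks the total weight handed out."""
    def __init__(self):
        self.outstanding = 0

    def new_ref(self, weight=64):
        self.outstanding += weight     # one message to the owner
        return weight

    def return_weight(self, w):
        self.outstanding -= w          # one message to the owner

    def collectible(self):
        # Garbage once every unit of weight has been returned.
        return self.outstanding == 0

def copy_ref(w):
    """Duplicate a reference by splitting its weight: no owner message."""
    half = w // 2
    return half, w - half

obj = RemoteObject()
r1 = obj.new_ref()            # weight 64 handed out
r1a, r1b = copy_ref(r1)       # copied without contacting the owner
obj.return_weight(r1a)        # both copies later dropped
obj.return_weight(r1b)
```

The key saving is visible in `copy_ref`: reference duplication, the most frequent operation, generates no network traffic at all.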
Computationally efficient strategies to perform anomaly detection in hyperspectral images
NASA Astrophysics Data System (ADS)
Rossi, Alessandro; Acito, Nicola; Diani, Marco; Corsini, Giovanni
2012-11-01
In remote sensing, hyperspectral sensors are effectively used for target detection and recognition because their high spectral resolution allows discrimination of different materials in the sensed scene. When a priori information about the spectrum of the targets of interest is not available, target detection turns into anomaly detection (AD), i.e., searching for objects that are anomalous with respect to the scene background. In the field of AD, anomalies can generally be associated with observations that statistically deviate from the background clutter, the latter being intended either as a local neighborhood surrounding the observed pixel or as a large part of the image. In this context, much effort has been put into reducing the computational load of AD algorithms so as to furnish information for real-time decision making. In this work, a sub-class of AD methods is considered that aims at detecting small, rare objects that are anomalous with respect to their local background. Such techniques are not only characterized by mathematical tractability but also allow the design of real-time strategies for AD. Among these methods, one of the most established anomaly detectors is the RX algorithm, which is based on a local Gaussian model of the background. In the literature, the RX decision rule has been employed to develop computationally efficient algorithms implemented in real-time systems. In this work, a survey of computationally efficient methods to implement the RX detector is presented, in which advanced algebraic strategies are exploited to speed up the estimation of the covariance matrix and of its inverse. The overall number of operations required by the different implementations of the RX algorithm is compared and discussed while varying the RX parameters, in order to show the computational improvements achieved with the introduced algebraic strategies.
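The RX decision rule scores each pixel by the Mahalanobis distance of its spectrum from the background under a Gaussian clutter model. The Python sketch below implements a simple global-background variant for illustration only; the survey's focus is on faster local-background implementations, and all names and data here are hypothetical.

```python
import numpy as np

def rx_scores(cube, eps=1e-6):
    """Global RX anomaly detector for a (rows, cols, bands) hyperspectral
    cube: Mahalanobis distance of each pixel spectrum from the scene mean."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands)
    mu = X.mean(axis=0)
    Xc = X - mu
    cov = (Xc.T @ Xc) / (X.shape[0] - 1) + eps * np.eye(bands)
    cov_inv = np.linalg.inv(cov)
    # One covariance inverse shared by every pixel: reusing this factor is
    # the kind of algebraic saving the survey discusses (local variants
    # re-estimate the background statistics per pixel, which is costlier).
    scores = np.einsum('ij,jk,ik->i', Xc, cov_inv, Xc)
    return scores.reshape(rows, cols)

rng = np.random.default_rng(0)
cube = rng.normal(size=(32, 32, 10))   # synthetic Gaussian background
cube[16, 16] += 8.0                    # implant one anomalous pixel
scores = rx_scores(cube)               # anomaly gets the highest RX score
```

Thresholding `scores` then yields the detection map; the implanted pixel stands out because its distance from the background mean is far larger than any background fluctuation.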
Increasing computational efficiency of cochlear models using boundary layers
NASA Astrophysics Data System (ADS)
Alkhairy, Samiya A.; Shera, Christopher A.
2015-12-01
Our goal is to develop methods to improve the efficiency of computational models of the cochlea for applications that require the solution accurately only within a basal region of interest, specifically by decreasing the number of spatial sections needed for simulation of the problem with good accuracy. We design algebraic spatial and parametric transformations to computational models of the cochlea. These transformations are applied after the basal region of interest and allow for spatial preservation, driven by the natural characteristics of approximate spatial causality of cochlear models. The project is of foundational nature and hence the goal is to design, characterize and develop an understanding and framework rather than optimization and globalization. Our scope is as follows: designing the transformations; understanding the mechanisms by which computational load is decreased for each transformation; development of performance criteria; characterization of the results of applying each transformation to a specific physical model and discretization and solution schemes. In this manuscript, we introduce one of the proposed methods (complex spatial transformation) for a case study physical model that is a linear, passive, transmission line model in which the various abstraction layers (electric parameters, filter parameters, wave parameters) are clearer than other models. This is conducted in the frequency domain for multiple frequencies using a second order finite difference scheme for discretization and direct elimination for solving the discrete system of equations. The performance is evaluated using two developed simulative criteria for each of the transformations. In conclusion, the developed methods serve to increase efficiency of a computational traveling wave cochlear model when spatial preservation can hold, while maintaining good correspondence with the solution of interest and good accuracy, for applications in which the interest is in the solution
Efficient Parallel Kernel Solvers for Computational Fluid Dynamics Applications
NASA Technical Reports Server (NTRS)
Sun, Xian-He
1997-01-01
Distributed-memory parallel computers dominate today's parallel computing arena. These machines, such as the Intel Paragon, IBM SP2, and Cray Origin2000, have successfully delivered high-performance computing power for solving some of the so-called "grand-challenge" problems. Despite this initial success, parallel machines have not been widely accepted in production engineering environments due to the complexity of parallel programming. On a parallel computing system, a task has to be partitioned and distributed appropriately among processors to reduce communication cost and to attain load balance. More importantly, even with careful partitioning and mapping, the performance of an algorithm may still be unsatisfactory, since conventional sequential algorithms may be serial in nature and may not be implemented efficiently on parallel machines. In many cases, new algorithms have to be introduced to increase parallel performance. In order to achieve optimal performance, in addition to partitioning and mapping, a careful performance study should be conducted for a given application to find a good algorithm-machine combination. This process, however, is usually painful and elusive. The goal of this project is to design and develop efficient parallel algorithms for highly accurate Computational Fluid Dynamics (CFD) simulations and other engineering applications. The work plan is to 1) develop highly accurate parallel numerical algorithms, 2) conduct preliminary testing to verify the effectiveness and potential of these algorithms, and 3) incorporate newly developed algorithms into actual simulation packages. This work plan has been well achieved. Two highly accurate, efficient Poisson solvers have been developed and tested based on two different approaches: (1) adopting a mathematical geometry which has a better capacity to describe the fluid, and (2) using a compact scheme to gain high-order accuracy in the numerical discretization. The previously developed Parallel Diagonal Dominant (PDD) algorithm
IMPROVING TACONITE PROCESSING PLANT EFFICIENCY BY COMPUTER SIMULATION, Final Report
William M. Bond; Salih Ersayin
2007-03-30
This project involved industrial scale testing of a mineral processing simulator to improve the efficiency of a taconite processing plant, namely the Minorca mine. The Concentrator Modeling Center at the Coleraine Minerals Research Laboratory, University of Minnesota Duluth, enhanced the capabilities of available software, Usim Pac, by developing mathematical models needed for accurate simulation of taconite plants. This project provided funding for this technology to prove itself in the industrial environment. As the first step, data representing existing plant conditions were collected by sampling and sample analysis. Data were then balanced and provided a basis for assessing the efficiency of individual devices and the plant, and also for performing simulations aimed at improving plant efficiency. Performance evaluation served as a guide in developing alternative process strategies for more efficient production. A large number of computer simulations were then performed to quantify the benefits and effects of implementing these alternative schemes. Modification of makeup ball size was selected as the most feasible option for the target performance improvement. This was combined with replacement of existing hydrocyclones with more efficient ones. After plant implementation of these modifications, plant sampling surveys were carried out to validate findings of the simulation-based study. Plant data showed very good agreement with the simulated data, confirming results of the simulation. After the implementation of modifications in the plant, several upstream bottlenecks became visible. Despite these bottlenecks limiting full capacity, a concentrator energy improvement of 7% was obtained. Further improvements in energy efficiency are expected in the near future. The success of this project demonstrated the feasibility of a simulation-based approach. Currently, the Center provides simulation-based service to all the iron ore mining companies operating in northern
A computationally efficient modelling of laminar separation bubbles
NASA Technical Reports Server (NTRS)
Dini, Paolo; Maughmer, Mark D.
1990-01-01
In predicting the aerodynamic characteristics of airfoils operating at low Reynolds numbers, it is often important to account for the effects of laminar (transitional) separation bubbles. Previous approaches to the modelling of this viscous phenomenon range from fast but sometimes unreliable empirical correlations for the length of the bubble and the associated increase in momentum thickness, to more accurate but significantly slower displacement-thickness iteration methods employing inverse boundary-layer formulations in the separated regions. Since the penalty in computational time associated with the more general methods is unacceptable for airfoil design applications, use of an accurate yet computationally efficient model is highly desirable. To this end, a semi-empirical bubble model was developed and incorporated into the Eppler and Somers airfoil design and analysis program. The generality and the efficiency were achieved by successfully approximating the local viscous/inviscid interaction, the transition location, and the turbulent reattachment process within the framework of an integral boundary-layer method. Comparisons of the predicted aerodynamic characteristics with experimental measurements for several airfoils show excellent and consistent agreement for Reynolds numbers from 2,000,000 down to 100,000.
Computationally efficient sub-band coding of ECG signals.
Husøy, J H; Gjerde, T
1996-03-01
A data compression technique is presented for the compression of discrete-time electrocardiogram (ECG) signals. The compression system is based on sub-band coding, a technique traditionally used for compressing speech and images. The sub-band coder employs quadrature mirror filter (QMF) banks with up to 32 critically sampled sub-bands. Both finite impulse response (FIR) and the more computationally efficient infinite impulse response (IIR) filter banks are considered as candidates in a complete ECG coding system. The sub-bands are thresholded, quantized using uniform quantizers, and run-length coded. The output of the run-length coder is further compressed by a Huffman coder. Extensive simulations indicate that 16 sub-bands are a suitable choice for this application. Furthermore, IIR filter banks are preferable due to their superiority in terms of computational efficiency. We conclude that the present scheme, which is suitable for real-time implementation on a PC, can provide compression ratios between 5 and 15 without loss of clinical information. PMID:8673319
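The coding chain the abstract describes (analysis filter bank, thresholding, uniform quantization, run-length coding) can be illustrated at toy scale. The sketch below uses a single two-band Haar QMF stage in place of the paper's 16-band FIR/IIR banks and omits the Huffman back end; all names and parameter values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def haar_analysis(x):
    """Split a signal into low/high sub-bands with a 2-tap Haar QMF pair."""
    x = x[: len(x) // 2 * 2]                     # force even length
    low = (x[0::2] + x[1::2]) / np.sqrt(2)
    high = (x[0::2] - x[1::2]) / np.sqrt(2)
    return low, high

def quantize(band, step, thresh):
    """Zero out sub-threshold samples, then uniform-quantize the rest."""
    q = np.where(np.abs(band) < thresh, 0.0, band)
    return np.round(q / step).astype(int)

def run_length(codes):
    """Run-length encode the (mostly zero) quantized coefficients."""
    out, i = [], 0
    while i < len(codes):
        j = i
        while j < len(codes) and codes[j] == codes[i]:
            j += 1
        out.append((int(codes[i]), j - i))
        i = j
    return out

# Toy "ECG": a smooth baseline with one spike added.
t = np.linspace(0, 1, 64)
x = np.sin(2 * np.pi * t)
x[16] += 3.0
low, high = haar_analysis(x)
rle = run_length(quantize(high, step=0.1, thresh=0.2))
```

Because the high band is near zero away from the spike, thresholding produces long zero runs, which is exactly what makes the subsequent run-length and entropy coding effective.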
A computationally efficient modelling of laminar separation bubbles
NASA Astrophysics Data System (ADS)
Dini, Paolo
1990-08-01
In predicting the aerodynamic characteristics of airfoils operating at low Reynolds numbers, it is often important to account for the effects of laminar (transitional) separation bubbles. Previous approaches to the modeling of this viscous phenomenon range from fast but sometimes unreliable empirical correlations for the length of the bubble and the associated increase in momentum thickness, to more accurate but significantly slower displacement-thickness iteration methods employing inverse boundary-layer formulations in the separated regions. Since the penalty in computational time associated with the more general methods is unacceptable for airfoil design applications, use of an accurate yet computationally efficient model is highly desirable. To this end, a semi-empirical bubble model was developed and incorporated into the Eppler and Somers airfoil design and analysis program. The generality and the efficiency were achieved by successfully approximating the local viscous/inviscid interaction, the transition location, and the turbulent reattachment process within the framework of an integral boundary-layer method. Comparisons of the predicted aerodynamic characteristics with experimental measurements for several airfoils show excellent and consistent agreement for Reynolds numbers from 2,000,000 down to 100,000.
Schools (Students) Exchanging CAD/CAM Files over the Internet.
ERIC Educational Resources Information Center
Mahoney, Gary S.; Smallwood, James E.
This document discusses how students and schools can benefit from exchanging computer-aided design/computer-aided manufacturing (CAD/CAM) files over the Internet, explains how files are exchanged, and examines the problem of selected hardware/software incompatibility. Key terms associated with information search services are defined, and several…
Efficient Computation of the Topology of Level Sets
Pascucci, V; Cole-McLaughlin, K
2002-07-19
This paper introduces two efficient algorithms that compute the Contour Tree of a 3D scalar field F and its augmented version with the Betti numbers of each isosurface. The Contour Tree is a fundamental data structure in scientific visualization that is used to pre-process the domain mesh to allow optimal computation of isosurfaces with minimal storage overhead. The Contour Tree can also be used to build user interfaces reporting the complete topological characterization of a scalar field, as shown in Figure 1. In the first part of the paper we present a new scheme that augments the Contour Tree with the Betti numbers of each isocontour in linear time. We show how to extend the scheme introduced in [3] with the Betti number computation without increasing its complexity. Thus we improve the time complexity of our previous approach [8] from O(m log m) to O(n log n + m), where m is the number of tetrahedra and n is the number of vertices in the domain of F. In the second part of the paper we introduce a new divide-and-conquer algorithm that computes the Augmented Contour Tree for scalar fields defined on rectilinear grids. The central part of the scheme computes the output contour tree by merging two intermediate contour trees and is independent of the interpolant. In this way we confine any knowledge regarding a specific interpolant to an oracle that computes the tree for a single cell. We have implemented this oracle for the trilinear interpolant and plan to replace it with higher-order interpolants when needed. The complexity of the scheme is O(n + t log n), where t is the number of critical points of F. This allows, for the first time, the Contour Tree to be computed in linear time in many practical cases, namely when t = O(n^{1-ε}). We report the running times for a parallel implementation of our algorithm, showing good scalability with the number of processors.
Computationally efficient implementation of combustion chemistry in parallel PDF calculations
NASA Astrophysics Data System (ADS)
Lu, Liuyan; Lantz, Steven R.; Ren, Zhuyin; Pope, Stephen B.
2009-08-01
In parallel calculations of combustion processes with realistic chemistry, the serial in situ adaptive tabulation (ISAT) algorithm [S.B. Pope, Computationally efficient implementation of combustion chemistry using in situ adaptive tabulation, Combustion Theory and Modelling, 1 (1997) 41-63; L. Lu, S.B. Pope, An improved algorithm for in situ adaptive tabulation, Journal of Computational Physics 228 (2009) 361-386] substantially speeds up the chemistry calculations on each processor. To improve the parallel efficiency of large ensembles of such calculations in parallel computations, in this work, the ISAT algorithm is extended to the multi-processor environment, with the aim of minimizing the wall clock time required for the whole ensemble. Parallel ISAT strategies are developed by combining the existing serial ISAT algorithm with different distribution strategies, namely purely local processing (PLP), uniformly random distribution (URAN), and preferential distribution (PREF). The distribution strategies enable the queued load redistribution of chemistry calculations among processors using message passing. They are implemented in the software x2f_mpi, which is a Fortran 95 library for facilitating many parallel evaluations of a general vector function. The relative performance of the parallel ISAT strategies is investigated in different computational regimes via the PDF calculations of multiple partially stirred reactors burning methane/air mixtures. The results show that the performance of ISAT with a fixed distribution strategy strongly depends on certain computational regimes, based on how much memory is available and how much overlap exists between tabulated information on different processors. No one fixed strategy consistently achieves good performance in all the regimes. Therefore, an adaptive distribution strategy, which blends PLP, URAN and PREF, is devised and implemented. It yields consistently good performance in all regimes. In the adaptive parallel
Recent Algorithmic and Computational Efficiency Improvements in the NIMROD Code
NASA Astrophysics Data System (ADS)
Plimpton, S. J.; Sovinec, C. R.; Gianakon, T. A.; Parker, S. E.
1999-11-01
Extreme anisotropy and temporal stiffness impose severe challenges to simulating low-frequency, nonlinear behavior in magnetized fusion plasmas. To address these challenges in computations of realistic experiment configurations, NIMROD (Glasser et al., Plasma Phys. Control. Fusion 41 (1999) A747) uses a time-split, semi-implicit advance of the two-fluid equations for magnetized plasmas with a finite element/Fourier series spatial representation. The stiffness and anisotropy lead to ill-conditioned linear systems of equations, and they emphasize any truncation errors that may couple different modes of the continuous system. Recent work significantly improves NIMROD's performance in these areas. Implementing a parallel global preconditioning scheme in structured-grid regions permits scaling to large problems and large time steps, which are critical for achieving realistic S-values. In addition, coupling to the AZTEC parallel linear solver package now permits efficient computation with regions of unstructured grid. Changes in the time-splitting scheme improve numerical behavior in simulations with strong flow, and quadratic basis elements are being explored for accuracy. Different numerical forms of anisotropic thermal conduction, critical for slow island evolution, are compared. Algorithms for including gyrokinetic ions in the finite element computations are discussed.
Efficient Homotopy Continuation Algorithms with Application to Computational Fluid Dynamics
NASA Astrophysics Data System (ADS)
Brown, David A.
New homotopy continuation algorithms are developed and applied to a parallel implicit finite-difference Newton-Krylov-Schur external aerodynamic flow solver for the compressible Euler, Navier-Stokes, and Reynolds-averaged Navier-Stokes equations with the Spalart-Allmaras one-equation turbulence model. Many new analysis tools, calculations, and numerical algorithms are presented for the study and design of efficient and robust homotopy continuation algorithms applicable to solving very large and sparse nonlinear systems of equations. Several specific homotopies are presented and studied and a methodology is presented for assessing the suitability of specific homotopies for homotopy continuation. A new class of homotopy continuation algorithms, referred to as monolithic homotopy continuation algorithms, is developed. These algorithms differ from classical predictor-corrector algorithms by combining the predictor and corrector stages into a single update, significantly reducing the amount of computation and avoiding wasted computational effort resulting from over-solving in the corrector phase. The new algorithms are also simpler from a user perspective, with fewer input parameters, which also improves the user's ability to choose effective parameters on the first flow solve attempt. Conditional convergence is proved analytically and studied numerically for the new algorithms. The performance of a fully-implicit monolithic homotopy continuation algorithm is evaluated for several inviscid, laminar, and turbulent flows over NACA 0012 airfoils and ONERA M6 wings. The monolithic algorithm is demonstrated to be more efficient than the predictor-corrector algorithm for all applications investigated. It is also demonstrated to be more efficient than the widely-used pseudo-transient continuation algorithm for all inviscid and laminar cases investigated, and good performance scaling with grid refinement is demonstrated for the inviscid cases. Performance is also demonstrated
Computer Aided Drafting. Instructor's Guide.
ERIC Educational Resources Information Center
Henry, Michael A.
This guide is intended for use in introducing students to the operation and applications of computer-aided drafting (CAD) systems. The following topics are covered in the individual lessons: understanding CAD (CAD versus traditional manual drafting and care of software and hardware); using the components of a CAD system (primary and other input…
Efficient Universal Computing Architectures for Decoding Neural Activity
Rapoport, Benjamin I.; Turicchia, Lorenzo; Wattanapanitch, Woradorn; Davidson, Thomas J.; Sarpeshkar, Rahul
2012-01-01
The ability to decode neural activity into meaningful control signals for prosthetic devices is critical to the development of clinically useful brain–machine interfaces (BMIs). Such systems require input from tens to hundreds of brain-implanted recording electrodes in order to deliver robust and accurate performance; in serving that primary function they should also minimize power dissipation in order to avoid damaging neural tissue; and they should transmit data wirelessly in order to minimize the risk of infection associated with chronic, transcutaneous implants. Electronic architectures for brain–machine interfaces must therefore minimize size and power consumption, while maximizing the ability to compress data to be transmitted over limited-bandwidth wireless channels. Here we present a system of extremely low computational complexity, designed for real-time decoding of neural signals, and suited for highly scalable implantable systems. Our programmable architecture is an explicit implementation of a universal computing machine emulating the dynamics of a network of integrate-and-fire neurons; it requires no arithmetic operations except for counting, and decodes neural signals using only computationally inexpensive logic operations. The simplicity of this architecture does not compromise its ability to compress raw neural data by factors greater than . We describe a set of decoding algorithms based on this computational architecture, one designed to operate within an implanted system, minimizing its power consumption and data transmission bandwidth; and a complementary set of algorithms for learning, programming the decoder, and postprocessing the decoded output, designed to operate in an external, nonimplanted unit. The implementation of the implantable portion is estimated to require fewer than 5000 operations per second. A proof-of-concept, 32-channel field-programmable gate array (FPGA) implementation of this portion is consequently energy efficient
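As a rough illustration of a decoder built from counting and comparison alone, the sketch below emulates one integrate-and-fire unit using only increments, decrements, and threshold tests. The class, its leak scheme, and all parameter values are hypothetical stand-ins; the paper's actual architecture and its learning and programming algorithms are not reproduced here.

```python
class CountingUnit:
    """One integrate-and-fire decoder unit: it counts incoming spikes on
    its assigned channels and "fires" when the counter crosses a
    threshold.  No arithmetic beyond counting is used, mirroring the
    paper's design constraint."""

    def __init__(self, channels, threshold, leak_period):
        self.channels = set(channels)     # input channels this unit watches
        self.threshold = threshold        # spike count needed to fire
        self.leak_period = leak_period    # decrement counter every N ticks
        self.count = 0
        self.tick = 0

    def step(self, spikes):
        """Advance one time step; `spikes` is the set of channel ids that
        spiked.  Returns 1 if the unit fires, else 0."""
        self.tick += 1
        for ch in spikes:
            if ch in self.channels:
                self.count += 1           # integration by counting
        if self.tick % self.leak_period == 0 and self.count > 0:
            self.count -= 1               # leak, also just a decrement
        if self.count >= self.threshold:
            self.count = 0                # reset after firing
            return 1
        return 0
```

A network of such units, wired with different channel subsets and thresholds, can implement the kind of logic-only decoding the abstract describes; a real decoder would add the external learning stage that chooses those assignments.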
The Efficiency of Various Computers and Optimizations in Performing Finite Element Computations
NASA Technical Reports Server (NTRS)
Marcus, Martin H.; Broduer, Steve (Technical Monitor)
2001-01-01
With the advent of computers with many processors, it becomes unclear how to best exploit this advantage. For example, matrices can be inverted by applying several processors to each vector operation, or one processor can be applied to each matrix. The former approach has diminishing returns beyond a handful of processors, but how many processors depends on the computer architecture. Applying one processor to each matrix is feasible with enough RAM and scratch disk space, but the speed at which this is done is found to vary by a factor of three depending on how it is done. The cost of the computer must also be taken into account. A computer with many processors and fast interprocessor communication is much more expensive than the same computer and processors with slow interprocessor communication. Consequently, for problems that require several matrices to be inverted, the best speed per dollar is found to come from several small workstations networked together, such as in a Beowulf cluster. Since these machines typically have two processors per node, each matrix is most efficiently inverted with no more than two processors assigned to it.
Productivity improvements through the use of CAD/CAM
NASA Astrophysics Data System (ADS)
Wehrman, M. D.
This paper focuses on Computer Aided Design/Computer Aided Manufacturing (CAD/CAM) productivity improvements that occurred in the Boeing Commercial Airplane Company (BCAC) between 1979 and 1983, with a look at future direction. Since the introduction of numerically controlled machinery in the 1950s, a wide range of engineering and manufacturing applications has evolved. The main portion of this paper includes a summarized and illustrated cross-section of these applications, touching on benefits such as reduced tooling, shortened flow time, increased accuracy, and reduced labor hours. The current CAD/CAM integration activity, directed toward capitalizing on this productivity in the future, is addressed.
Optimization of computation efficiency in underwater acoustic navigation system.
Lee, Hua
2016-04-01
This paper presents a technique for the estimation of the relative bearing angle between the unmanned underwater vehicle (UUV) and the base station for the homing and docking operations. The key requirement of this project includes computation efficiency and estimation accuracy for direct implementation onto the UUV electronic hardware, subject to the extreme constraints of physical limitation of the hardware due to the size and dimension of the UUV housing, electric power consumption for the requirement of UUV survey duration and range coverage, and heat dissipation of the hardware. Subsequent to the design and development of the algorithm, two phases of experiments were conducted to illustrate the feasibility and capability of this technique. The presentation of this paper includes system modeling, mathematical analysis, and results from laboratory experiments and full-scale sea tests. PMID:27106337
A method for assurance of image integrity in CAD-PACS integration
NASA Astrophysics Data System (ADS)
Zhou, Zheng
2007-03-01
Computer Aided Detection/Diagnosis (CAD) can greatly assist in the clinical decision making process, and therefore, has drawn tremendous research efforts. However, integrating independent CAD workstation results with the clinical diagnostic workflow still remains challenging. We have presented a CAD-PACS integration toolkit that complies with DICOM standard and IHE profiles. One major issue in CAD-PACS integration is the security of the images used in CAD post-processing and the corresponding CAD result images. In this paper, we present a method for assuring the integrity of both DICOM images used in CAD post-processing and the CAD image results that are in BMP or JPEG format. The method is evaluated in a PACS simulator that simulates clinical PACS workflow. It can also be applied to multiple CAD applications that are integrated with the PACS simulator. The successful development and evaluation of this method will provide a useful approach for assuring image integrity of the CAD-PACS integration in clinical diagnosis.
Efficient Computer Network Anomaly Detection by Changepoint Detection Methods
NASA Astrophysics Data System (ADS)
Tartakovsky, Alexander G.; Polunchenko, Aleksey S.; Sokolov, Grigory
2013-02-01
We consider the problem of efficient on-line anomaly detection in computer network traffic. The problem is approached statistically, as that of sequential (quickest) changepoint detection. A multi-cyclic setting of quickest change detection is a natural fit for this problem. We propose a novel score-based multi-cyclic detection algorithm. The algorithm is based on the so-called Shiryaev-Roberts procedure. This procedure is as easy to employ in practice and as computationally inexpensive as the popular Cumulative Sum chart and the Exponentially Weighted Moving Average scheme. The likelihood ratio based Shiryaev-Roberts procedure has appealing optimality properties; in particular, it is exactly optimal in a multi-cyclic setting geared to detect a change occurring at a far time horizon. It is therefore expected that an intrusion detection algorithm based on the Shiryaev-Roberts procedure will perform better than other detection schemes. This is confirmed experimentally for real traces. We also discuss the possibility of complementing our anomaly detection algorithm with a spectral-signature intrusion detection system with false alarm filtering and true attack confirmation capability, so as to obtain a synergistic system.
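The Shiryaev-Roberts procedure itself reduces to a one-line recursion, R_n = (1 + R_{n-1}) · LR_n, with an alarm raised when R_n crosses a threshold (and, in the multi-cyclic setting, a restart from zero after each alarm). A minimal sketch under a hypothetical Gaussian mean-shift model, not the traffic model of the paper:

```python
import math

def shiryaev_roberts(lrs, threshold):
    """Run the SR statistic R_n = (1 + R_{n-1}) * LR_n over a stream of
    per-observation likelihood ratios.  Returns the first alarm time
    (1-based) or None; in the multi-cyclic setting the caller simply
    restarts from 0 after each alarm."""
    r = 0.0
    for n, lr in enumerate(lrs, start=1):
        r = (1.0 + r) * lr
        if r >= threshold:
            return n
    return None

def gaussian_lr(x, mu0, mu1, sigma):
    """Likelihood ratio for a mean shift mu0 -> mu1 in N(mu, sigma^2):
    exp(((mu1 - mu0) / sigma^2) * (x - (mu0 + mu1) / 2))."""
    return math.exp((x - (mu0 + mu1) / 2.0) * (mu1 - mu0) / sigma**2)
```

Before the change the per-sample likelihood ratios are below one and R_n hovers near a small fixed point; after the change they exceed one and R_n grows geometrically, so the alarm fires within a few post-change samples.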
Efficient computation of spontaneous emission dynamics in arbitrary photonic structures
NASA Astrophysics Data System (ADS)
Teimourpour, M. H.; El-Ganainy, R.
2015-12-01
Defining a quantum mechanical wavefunction for photons is one of the remaining open problems in quantum physics. Thus quantum states of light are usually treated within the realm of second quantization. Consequently, spontaneous emission (SE) in arbitrary photonic media is often described by Fock space Hamiltonians. Here, we present a real space formulation of the SE process that can capture the physics of the problem accurately under different coupling conditions. Starting from first principles, we map the unitary evolution of a dressed two-level quantum emitter onto the problem of electromagnetic radiation from a self-interacting complex harmonic oscillator. Our formalism naturally leads to an efficient computational scheme of SE dynamics using finite difference time domain method without the need for calculating the photonic eigenmodes of the surrounding environment. In contrast to earlier investigations, our computational framework provides a unified numerical treatment for both weak and strong coupling regimes alike. We illustrate the versatility of our scheme by considering several different examples.
An efficient parallel algorithm for accelerating computational protein design
Zhou, Yichao; Xu, Wei; Donald, Bruce R.; Zeng, Jianyang
2014-01-01
Motivation: Structure-based computational protein design (SCPR) is an important topic in protein engineering. Under the assumption of a rigid backbone and a finite set of discrete conformations of side-chains, various methods have been proposed to address this problem. A popular method is to combine the dead-end elimination (DEE) and A* tree search algorithms, which provably finds the global minimum energy conformation (GMEC) solution. Results: In this article, we improve the efficiency of computing A* heuristic functions for protein design and propose a variant of A* algorithm in which the search process can be performed on a single GPU in a massively parallel fashion. In addition, we make some efforts to address the memory exceeding problem in A* search. As a result, our enhancements can achieve a significant speedup of the A*-based protein design algorithm by four orders of magnitude on large-scale test data through pre-computation and parallelization, while still maintaining an acceptable memory overhead. We also show that our parallel A* search algorithm could be successfully combined with iMinDEE, a state-of-the-art DEE criterion, for rotamer pruning to further improve SCPR with the consideration of continuous side-chain flexibility. Availability: Our software is available and distributed open-source under the GNU Lesser General License Version 2.1 (GNU, February 1999). The source code can be downloaded from http://www.cs.duke.edu/donaldlab/osprey.php or http://iiis.tsinghua.edu.cn/~compbio/software.html. Contact: zengjy321@tsinghua.edu.cn Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24931991
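The DEE/A* backbone the abstract builds on can be shown in miniature: A* over partial rotamer assignments, with an admissible heuristic that lower-bounds the energy of the unassigned positions. The toy energy tables and function names below are illustrative assumptions; the paper's GPU parallelization, memory management, and iMinDEE coupling are far beyond this sketch.

```python
import heapq

def astar_gmec(E, P):
    """A* search for the global minimum energy conformation (GMEC) over
    discrete rotamers.  E[i][r] is the self energy of rotamer r at
    position i; P[i][j][ri][rj] (i < j) is the pairwise energy.  States
    are prefixes assigned left to right; h() is the standard admissible
    bound: each unassigned position takes its best possible self energy
    plus best-case pairwise terms."""
    n = len(E)

    def g(a):  # exact energy of an assigned prefix
        return (sum(E[i][a[i]] for i in range(len(a)))
                + sum(P[i][j][a[i]][a[j]]
                      for j in range(len(a)) for i in range(j)))

    def h(a):  # admissible lower bound on the unassigned suffix
        k, bound = len(a), 0
        for j in range(k, n):
            bound += min(
                E[j][rj]
                + sum(P[i][j][a[i]][rj] for i in range(k))
                + sum(min(P[i][j][ri][rj] for ri in range(len(E[i])))
                      for i in range(k, j))
                for rj in range(len(E[j])))
        return bound

    pq = [(h(()), ())]
    while pq:
        f, a = heapq.heappop(pq)
        if len(a) == n:
            return f, a  # first complete pop is optimal (h is admissible)
        for rj in range(len(E[len(a)])):
            nxt = a + (rj,)
            heapq.heappush(pq, (g(nxt) + h(nxt), nxt))
```

Because each prefix is generated exactly once, this is tree-search A*, where admissibility of h alone guarantees the first complete conformation popped is the GMEC; the paper's contribution is making the h computation and the expansion loop massively parallel.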
Textbook Multigrid Efficiency for Computational Fluid Dynamics Simulations
NASA Technical Reports Server (NTRS)
Brandt, Achi; Thomas, James L.; Diskin, Boris
2001-01-01
Considerable progress over the past thirty years has been made in the development of large-scale computational fluid dynamics (CFD) solvers for the Euler and Navier-Stokes equations. Computations are used routinely to design the cruise shapes of transport aircraft through complex-geometry simulations involving the solution of 25-100 million equations; in this arena the number of wind-tunnel tests for a new design has been substantially reduced. However, simulations of the entire flight envelope of the vehicle, including maximum lift, buffet onset, flutter, and control effectiveness have not been as successful in eliminating the reliance on wind-tunnel testing. These simulations involve unsteady flows with more separation and stronger shock waves than at cruise. The main reasons limiting further inroads of CFD into the design process are: (1) the reliability of turbulence models; and (2) the time and expense of the numerical simulation. Because of the prohibitive resolution requirements of direct simulations at high Reynolds numbers, transition and turbulence modeling is expected to remain an issue for the near term. The focus of this paper addresses the latter problem by attempting to attain optimal efficiencies in solving the governing equations. Typically current CFD codes based on the use of multigrid acceleration techniques and multistage Runge-Kutta time-stepping schemes are able to converge lift and drag values for cruise configurations within approximately 1000 residual evaluations. An optimally convergent method is defined as having textbook multigrid efficiency (TME), meaning the solutions to the governing system of equations are attained in a computational work which is a small (less than 10) multiple of the operation count in the discretized system of equations (residual equations). In this paper, a distributed relaxation approach to achieving TME for Reynolds-averaged Navier-Stokes (RANS) equations is discussed along with the foundations that form the
CAD data exchange with Martin Marietta Energy Systems, Inc., Oak Ridge, TN
Smith, K.L.
1994-10-01
This document has been developed to provide guidance in the interchange of electronic CAD data with Martin Marietta Energy Systems, Inc., Oak Ridge, Tennessee. It is not meant to be as comprehensive as the existing standards and specifications, but to provide a minimum set of practices that will enhance the success of the CAD data exchange. It is now a Department of Energy (DOE) Oak Ridge Field Office requirement that Architect-Engineering (A-E) firms prepare all new drawings using a Computer Aided Design (CAD) system that is compatible with the Facility Manager's (FM) CAD system. For Oak Ridge facilities, the CAD system used for facility design by the FM, Martin Marietta Energy Systems, Inc., is Intergraph. The format for interchange of CAD data for Oak Ridge facilities will be the Intergraph MicroStation/IGDS format.
Computer-Aided Apparel Design in University Curricula.
ERIC Educational Resources Information Center
Belleau, Bonnie D.; Bourgeois, Elva B.
1991-01-01
As computer-assisted design (CAD) becomes an integral part of the fashion industry, universities must integrate CAD into the apparel curriculum. Louisiana State University's curriculum enables students to collaborate in CAD problem solving with industry personnel. (SK)
Use of CAD output to guide the intelligent display of digital mammograms
NASA Astrophysics Data System (ADS)
Bloomquist, Aili K.; Yaffe, Martin J.; Mawdsley, Gordon E.; Morgan, Trevor; Rico, Dan; Jong, Roberta A.
2003-05-01
For digital mammography to be efficient, methods are needed to choose an initial default image presentation that maximizes the amount of relevant information perceived by the radiologist and minimizes the amount of time spent adjusting the image display parameters. The purpose of this work is to explore the possibility of using the output of computer aided detection (CAD) software to guide image enhancement and presentation. A set of 16 digital mammograms with lesions of known pathology was used to develop and evaluate an enhancement and display protocol to improve the initial softcopy presentation of digital mammograms. Lesions were identified by CAD and the DICOM structured report produced by the CAD program was used to determine what enhancement algorithm should be applied in the identified regions of the image. An improved version of contrast limited adaptive histogram equalization (CLAHE) is used to enhance calcifications. For masses, the image is first smoothed using a non-linear diffusion technique; subsequently, local contrast is enhanced with a method based on morphological operators. A non-linear lookup table is automatically created to optimize the contrast in the regions of interest (detected lesions) without losing the context of the periphery of the breast. The effectiveness of the enhancement will be compared with the default presentation of the images using a forced choice preference study.
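Contrast-limited equalization of a CAD-flagged region reduces to three steps: clip the ROI histogram so no bin dominates, redistribute the clipped excess, and remap pixels through the resulting CDF. The single-ROI sketch below is a plain (non-adaptive, non-tiled) stand-in for the paper's improved CLAHE; the function name and parameter values are assumptions.

```python
import numpy as np

def clipped_hist_equalize(region, clip_limit=0.02, nbins=256):
    """Contrast-limited histogram equalization of one ROI (values in
    [0, 1]): clip the histogram so no bin exceeds clip_limit of the
    pixel count, redistribute the excess uniformly across bins, then
    remap every pixel through the clipped histogram's CDF."""
    flat = region.ravel()
    hist, edges = np.histogram(flat, bins=nbins, range=(0.0, 1.0))
    limit = max(1, int(clip_limit * flat.size))
    excess = np.sum(np.maximum(hist - limit, 0))
    hist = np.minimum(hist, limit) + excess // nbins   # redistribute
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]                                     # normalize to [0, 1]
    # Map each pixel to its bin, then through the CDF.
    idx = np.clip(np.digitize(flat, edges[:-1]) - 1, 0, nbins - 1)
    return cdf[idx].reshape(region.shape)
```

Clipping is what distinguishes this from ordinary histogram equalization: it caps the slope of the mapping, so noise in flat areas of the mammogram is not amplified while local contrast around calcifications still increases.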
Computer aided design of computer generated holograms for electron beam fabrication.
Urquhart, K S; Lee, S H; Guest, C C; Feldman, M R; Farhoosh, H
1989-08-15
Computer Aided Design (CAD) systems that have been developed for electrical and mechanical design tasks are also effective tools for the process of designing Computer Generated Holograms (CGHs), particularly when these holograms are to be fabricated using electron beam lithography. CAD workstations provide efficient and convenient means of computing, storing, displaying, and preparing for fabrication many of the features that are common to CGH designs. Experience gained in the process of designing CGHs with various types of encoding methods is presented. Suggestions are made so that future workstations may further accommodate the CGH design process. PMID:20555710
Teaching an Introductory CAD Course with the System-Engineering Approach.
ERIC Educational Resources Information Center
Pao, Y. C.
1985-01-01
Advocates that introductory computer aided design (CAD) courses be incorporated into engineering curricula in close conjunction with the system dynamics course. Block diagram manipulation/Bode analysis and finite element analysis are used as examples to illustrate the interdisciplinary nature of CAD teaching. (JN)
The Use of a Parametric Feature Based CAD System to Teach Introductory Engineering Graphics.
ERIC Educational Resources Information Center
Howell, Steven K.
1995-01-01
Describes the use of a parametric-feature-based computer-aided design (CAD) System, AutoCAD Designer, in teaching concepts of three dimensional geometrical modeling and design. Allows engineering graphics to go beyond the role of documentation and communication and allows an engineer to actually build a virtual prototype of a design idea and…
3D-CAD Effects on Creative Design Performance of Different Spatial Abilities Students
ERIC Educational Resources Information Center
Chang, Y.
2014-01-01
Students' creativity is an important focus globally and is interrelated with students' spatial abilities. Additionally, three-dimensional computer-assisted drawing (3D-CAD) overcomes barriers to spatial expression during the creative design process. Does 3D-CAD affect students' creative abilities? The purpose of this study was to…
A computationally efficient particle-simulation method suited to vector-computer architectures
McDonald, J.D.
1990-01-01
Recent interest in a National Aero-Space Plane (NASP) and various Aero-assisted Space Transfer Vehicles (ASTVs) presents the need for a greater understanding of high-speed rarefied flight conditions. Particle simulation techniques such as the Direct Simulation Monte Carlo (DSMC) method are well suited to such problems, but the high cost of computation limits the application of the methods to two-dimensional or very simple three-dimensional problems. This research re-examines the algorithmic structure of existing particle simulation methods and re-structures them to allow efficient implementation on vector-oriented supercomputers. A brief overview of the DSMC method and the Cray-2 vector computer architecture is provided, and the elements of the DSMC method that inhibit substantial vectorization are identified. One such element is the collision selection algorithm. A complete reformulation of the underlying kinetic theory shows that this may be efficiently vectorized for general gas mixtures. The mechanics of collisions are vectorizable in the DSMC method, but several optimizations are suggested that greatly enhance performance. This thesis also proposes a new mechanism for the exchange of energy between vibration and other energy modes. The developed scheme makes use of quantized vibrational states and is used in place of the Borgnakke-Larsen model. Finally, a simplified representation of physical space and boundary conditions is utilized to further reduce the computational cost of the developed method. Comparisons to solutions obtained from the DSMC method for the relaxation of internal energy modes in a homogeneous gas, as well as for single- and multiple-species shock wave profiles, are presented. Additionally, a large scale simulation of the flow about the proposed Aeroassisted Flight Experiment (AFE) vehicle is included as an example of the new computational capability of the developed particle simulation method.
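The kind of vectorized collision selection discussed in this abstract can be illustrated loosely with a NumPy sketch: an NTC-style acceptance-rejection test applied to all candidate pairs in one array operation rather than a scalar loop. Particle count, pair count, and the Maxwellian velocity distribution below are illustrative assumptions, not values from the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-cell state: N particles with Maxwellian-like velocities (m/s)
n_particles = 10_000
v = rng.normal(0.0, 300.0, size=(n_particles, 3))

# Draw candidate collision pairs in one vectorized step instead of a scalar loop
n_pairs = 2_000
i = rng.integers(0, n_particles, size=n_pairs)
j = rng.integers(0, n_particles, size=n_pairs)

# Relative speed of every candidate pair, computed for all pairs at once
g = np.linalg.norm(v[i] - v[j], axis=1)

# Acceptance-rejection: accept each pair with probability g / g_max
g_max = g.max()
accepted = rng.random(n_pairs) < g / g_max

print(int(accepted.sum()), "pairs accepted of", n_pairs)
```

On a vector machine like the Cray-2, the win comes from replacing the per-pair branch with array-wide arithmetic and a boolean mask, which is the same restructuring idea expressed here in NumPy.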
On the Use of CAD-Native Predicates and Geometry in Surface Meshing
NASA Technical Reports Server (NTRS)
Aftosmis, M. J.
1999-01-01
Several paradigms for accessing computer-aided design (CAD) geometry during surface meshing for computational fluid dynamics are discussed. File translation, inconsistent geometry engines, and nonnative point construction are all identified as sources of nonrobustness. The paper argues in favor of accessing CAD parts and assemblies in their native format, without translation, and for the use of CAD-native predicates and constructors in surface mesh generation. The discussion also emphasizes the importance of examining the computational requirements for exact evaluation of triangulation predicates during surface meshing.
Mammogram CAD, hybrid registration and iconic analysis
NASA Astrophysics Data System (ADS)
Boucher, A.; Cloppet, F.; Vincent, N.
2013-03-01
This paper aims to develop a computer aided diagnosis (CAD) system based on a two-step methodology to register and analyze pairs of temporal mammograms. The concept of a "medical file", including all the previous medical information on a patient, enables joint analysis of different acquisitions taken at different times and the detection of significant modifications. The registration method aims to superimpose the different anatomical structures of the breast as closely as possible. It is designed to remove the deformations introduced by the acquisition process while preserving those due to breast changes indicative of malignancy. To reach this goal, a reference image is computed from control points based on anatomical features that are extracted automatically. The second image of the pair is then realigned on the reference image using a coarse-to-fine approach, guided by expert knowledge, that allows both rigid and non-rigid transforms. The joint analysis detects the evolution between two images representing the same scene. To achieve this, it is important to know the registration error limits so that the observation scale can be adapted. The approach used in this paper is based on a sparse image representation: decomposed into regular patterns, the images are analyzed from a new angle. The evolution detection problem has many practical applications, especially in medical imaging. The CAD system is evaluated using the recall and precision of detected differences in mammograms.
Sagbo, S; Blochaou, F; Langlotz, F; Vangenot, C; Nolte, L-P; Zheng, G
2005-01-01
Computer-Assisted Orthopaedic Surgery (CAOS) has made much progress over the last 10 years. Navigation systems have been recognized as important tools that help surgeons, and various such systems have been developed. A disadvantage of these systems is that they use non-standard formalisms and techniques. As a result, there are no standard concepts for implant and tool management or data formats to store information for use in 3D planning and navigation. We addressed these limitations and developed a practical and generic solution that offers benefits for surgeons, implant manufacturers, and CAS application developers. We developed a virtual implant database containing geometrical as well as calibration information for orthopedic implants and instruments, with a focus on trauma. This database has been successfully tested for various applications in the client/server mode. The implant information is not static, however, because manufacturers periodically revise their implants, resulting in the deletion of some implants and the introduction of new ones. Tracking these continuous changes and keeping CAS systems up to date is a tedious task if done manually. This leads to additional costs for system development, and some errors are inevitably generated due to the huge amount of information that has to be processed. To ease management with respect to implant life cycle, we developed a tool to assist end-users (surgeons, hospitals, CAS system providers, and implant manufacturers) in managing their implants. Our system can be used for pre-operative planning and intra-operative navigation, and also for any surgical simulation involving orthopedic implants. Currently, this tool allows addition of new implants, modification of existing ones, deletion of obsolete implants, export of a given implant, and also creation of backups. Our implant management system has been successfully tested in the laboratory with very promising results. It makes it possible to fill the current gap
Present State of CAD Teaching in Spanish Universities
ERIC Educational Resources Information Center
Garcia, Ramon Rubio; Santos, Ramon Gallego; Quiros, Javier Suarez; Penin, Pedro I. Alvarez
2005-01-01
During the 1990s, all Spanish Universities updated the syllabuses of their courses as a result of the entry into force of the new Organic Law of Universities ("Ley Organica de Universidades") and, for the first time, "Computer Assisted Design" (CAD) appears in the list of core subjects (compulsory teaching content set by the government) in many of…
Alternatives for Saving and Viewing CAD Graphics for the Web.
ERIC Educational Resources Information Center
Harris, La Verne Abe; Sadowski, Mary A.
2001-01-01
Introduces some alternatives for preparing and viewing computer aided design (CAD) graphics for Internet output on a budget, without the fear of copyright infringement, and without having to go back to college to learn a complex graphic application. (Author/YDS)
Correlating Trainee Attributes to Performance in 3D CAD Training
ERIC Educational Resources Information Center
Hamade, Ramsey F.; Artail, Hassan A.; Sikstrom, Sverker
2007-01-01
Purpose: The purpose of this exploratory study is to identify trainee attributes relevant for development of skills in 3D computer-aided design (CAD). Design/methodology/approach: Participants were trained to perform cognitive tasks of comparable complexity over time. Performance data were collected on the time needed to construct test models, and…
The design and construction of the CAD-1 airship
NASA Technical Reports Server (NTRS)
Kleiner, H. J.; Schneider, R.; Duncan, J. L.
1975-01-01
The background history, design philosophy, and computer applications related to the design of the envelope shape, stress calculations, and flight trajectories of the CAD-1 airship, now under construction by the Canadian Airship Development Corporation, are reported. A three-phase proposal for future development of larger cargo-carrying airships is included.
Program Evolves from Basic CAD to Total Manufacturing Experience
ERIC Educational Resources Information Center
Cassola, Joel
2011-01-01
Close to a decade ago, John Hersey High School (JHHS) in Arlington Heights, Illinois, made a transition from a traditional classroom-based pre-engineering program. The new program is geared towards helping students understand the entire manufacturing process. Previously, a JHHS student would design a project in computer-aided design (CAD) software…
The Effect of Computer Automation on Institutional Review Board (IRB) Office Efficiency
ERIC Educational Resources Information Center
Oder, Karl; Pittman, Stephanie
2015-01-01
Companies purchase computer systems to make their processes more efficient through automation. Some academic medical centers (AMC) have purchased computer systems for their institutional review boards (IRB) to increase efficiency and compliance with regulations. IRB computer systems are expensive to purchase, deploy, and maintain. An AMC should…
How to Quickly Import CAD Geometry into Thermal Desktop
NASA Technical Reports Server (NTRS)
Wright, Shonte; Beltran, Emilio
2002-01-01
There are several groups at JPL (Jet Propulsion Laboratory) that are committed to concurrent design efforts, two are featured here. Center for Space Mission Architecture and Design (CSMAD) enables the practical application of advanced process technologies in JPL's mission architecture process. Team I functions as an incubator for projects that are in the Discovery, and even pre-Discovery proposal stages. JPL's concurrent design environment is to a large extent centered on the CAD (Computer Aided Design) file. During concurrent design sessions CAD geometry is ported to other more specialized engineering design packages.
How to Quickly Import CAD Geometry into Thermal Desktop
NASA Astrophysics Data System (ADS)
Wright, Shonte; Beltran, Emilio
2002-07-01
There are several groups at JPL (Jet Propulsion Laboratory) that are committed to concurrent design efforts, two are featured here. Center for Space Mission Architecture and Design (CSMAD) enables the practical application of advanced process technologies in JPL's mission architecture process. Team I functions as an incubator for projects that are in the Discovery, and even pre-Discovery proposal stages. JPL's concurrent design environment is to a large extent centered on the CAD (Computer Aided Design) file. During concurrent design sessions CAD geometry is ported to other more specialized engineering design packages.
Smith, M.L.
1982-12-01
The Bendix Site Plan for CAD-CAM encompasses the development and integration of interactive graphics systems, factory data management systems, robotics, direct numerical control, automated inspection, factory automation, and shared data bases to achieve significant plant-wide gains in productivity. This plan does not address all current or planned computerization projects within our facility. A summary of planning proposals and rationale is presented in the following paragraphs. Interactive Graphics System (IGS) capability presently consists of two Applicon CAD systems and the CD-2000 software program processing on a time-shared CYBER 174 computer and a dedicated CYBER 173. Proposed plans include phased procurement through FY85 of additional computers and sufficient graphics terminals to support projected needs in drafting, tool/gage design, N/C programming, and process engineering. Planned procurement of additional computer equipment in FY86 and FY87 will provide the capacity necessary for a comprehensive graphics data base management system, computer-aided process planning graphics, and special graphics requirements in facilities and test equipment design. The overall IGS plan, designated BICAM (Bendix Integrated Computer Aided Manufacturing), will provide the capability and capacity to integrate manufacturing activities through a shared product data base and standardized data exchange format. Planned efforts in robotics will result in productive applications of low- to medium-technology robots beginning in FY82, extending by FY85 to work-cell capabilities utilizing higher-technology robots with sensors such as vision and instrumented remote compliance devices. A number of robots are projected to be in service by 1990.
Building Efficient Wireless Infrastructures for Pervasive Computing Environments
ERIC Educational Resources Information Center
Sheng, Bo
2010-01-01
Pervasive computing is an emerging concept that thoroughly brings computing devices and the consequent technology into people's daily life and activities. Most of these computing devices are very small, sometimes even "invisible", and often embedded into the objects surrounding people. In addition, these devices usually are not isolated, but…
Balancing Accuracy and Computational Efficiency for Ternary Gas Hydrate Systems
NASA Astrophysics Data System (ADS)
White, M. D.
2011-12-01
phase transitions. This paper describes and demonstrates a numerical solution scheme for ternary hydrate systems that seeks a balance between accuracy and computational efficiency. This scheme uses a generalized cubic equation of state, functional forms for the hydrate equilibria and cage occupancies, a variable-switching scheme for phase transitions, and kinetic exchange of hydrate formers (i.e., CH4, CO2, and N2) between the mobile phases (i.e., aqueous, liquid CO2, and gas) and the hydrate phase. Accuracy of the scheme will be evaluated by comparing property values and phase equilibria against experimental data. Computational efficiency of the scheme will be evaluated by comparing the base scheme against variants. The application of interest will be the production of a natural gas hydrate deposit from a geologic formation using the guest-molecule exchange process, where a mixture of CO2 and N2 is injected into the formation. During the guest-molecule exchange, CO2 and N2 will predominantly replace CH4 in the large and small cages of the sI structure, respectively.
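The abstract does not specify which generalized cubic equation of state is used; as a stand-in, here is a sketch of one common cubic EOS (Peng-Robinson) solved for the vapor-phase compressibility factor of CO2. The critical constants are standard tabulated values and the conditions are chosen purely for illustration:

```python
import numpy as np

# Peng-Robinson constants for CO2: gas constant, critical T (K) and P (Pa),
# and acentric factor (standard tabulated values, assumed here)
R, Tc, Pc, omega = 8.314, 304.13, 7.3773e6, 0.2239

def z_factor(T, P):
    """Vapor-phase compressibility factor from the Peng-Robinson cubic EOS."""
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1 + kappa * (1 - np.sqrt(T / Tc)))**2
    a = 0.45724 * R**2 * Tc**2 / Pc * alpha
    b = 0.07780 * R * Tc / Pc
    A, B = a * P / (R * T)**2, b * P / (R * T)
    # Cubic in Z: Z^3 - (1-B)Z^2 + (A - 3B^2 - 2B)Z - (AB - B^2 - B^3) = 0
    coeffs = [1.0, -(1 - B), A - 3 * B**2 - 2 * B, -(A * B - B**2 - B**3)]
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < 1e-9].real
    return real.max()  # largest real root corresponds to the vapor phase

print(z_factor(280.0, 3.0e6))  # gaseous CO2 below its saturation pressure
```

In a full hydrate simulator this solve sits inside the phase-equilibrium loop, which is why the paper weighs its cost against accuracy.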
CAD Skills Increased through Multicultural Design Project
ERIC Educational Resources Information Center
Clemons, Stephanie
2006-01-01
This article discusses how students in a college-entry-level CAD course researched four generations of their family histories and documented cultural and symbolic influences within their family backgrounds. AutoCAD software was then used to manipulate those cultural and symbolic images to create the design for a multicultural area rug. AutoCAD was…
Cool-and Unusual-CAD Applications
ERIC Educational Resources Information Center
Calhoun, Ken
2004-01-01
This article describes several very useful applications of AutoCAD that may lie outside the normal scope of application. AutoCAD commands used in this article are based on AutoCAD 2000I. The author and his students used a Hewlett Packard 750C DesignJet plotter for plotting. (Contains 5 figures and 5 photos.)
Comparative fracture strength analysis of Lava and Digident CAD/CAM zirconia ceramic crowns
Kwon, Taek-Ka; Pak, Hyun-Soon; Han, Jung-Suk; Lee, Jai-Bong; Kim, Sung-Hun
2013-01-01
PURPOSE All-ceramic crowns are subject to fracture during function. To minimize this common clinical complication, zirconium oxide has been used as the framework for all-ceramic crowns. The aim of this study was to compare the fracture strengths of two computer-aided design/computer-aided manufacturing (CAD/CAM) zirconia crown systems: Lava and Digident. MATERIALS AND METHODS Twenty Lava CAD/CAM zirconia crowns and twenty Digident CAD/CAM zirconia crowns were fabricated. A metal die was also duplicated from the original prepared tooth for fracture testing. A universal testing machine was used to determine the fracture strength of the crowns. RESULTS The mean fracture strengths were as follows: 54.9 ± 15.6 N for the Lava CAD/CAM zirconia crowns and 87.0 ± 16.0 N for the Digident CAD/CAM zirconia crowns. The difference between the mean fracture strengths of the Lava and Digident crowns was statistically significant (P<.001). Lava CAD/CAM zirconia crowns showed a complete fracture of both the veneering porcelain and the core whereas the Digident CAD/CAM zirconia crowns showed fracture only of the veneering porcelain. CONCLUSION The fracture strengths of CAD/CAM zirconia crowns differ depending on the compatibility of the core material and the veneering porcelain. PMID:23755332
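The significance claim above can be sanity-checked from the summary statistics the abstract reports (mean ± SD, n = 20 per group) with a Welch two-sample t statistic; the computation below uses only those reported numbers:

```python
import math

# Summary statistics as reported in the abstract (mean, SD, n per group)
mean_lava, sd_lava, n_lava = 54.9, 15.6, 20
mean_digi, sd_digi, n_digi = 87.0, 16.0, 20

# Welch's t statistic computed directly from summary data
se = math.sqrt(sd_lava**2 / n_lava + sd_digi**2 / n_digi)
t = (mean_digi - mean_lava) / se
print(f"t = {t:.2f}")  # |t| of about 6.4 on ~38 degrees of freedom
```

A t statistic this large on roughly 38 degrees of freedom is consistent with the reported P<.001.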
Huo, Zhimin; Summers, Ronald M; Paquerault, Sophie; Lo, Joseph; Hoffmeister, Jeffrey; Armato, Samuel G; Freedman, Matthew T; Lin, Jesse; Lo, Shih-Chung Ben; Petrick, Nicholas; Sahiner, Berkman; Fryd, David; Yoshida, Hiroyuki; Chan, Heang-Ping
2013-07-01
Computer-aided detection/diagnosis (CAD) is increasingly used for decision support by clinicians for detection and interpretation of diseases. However, there are no quality assurance (QA) requirements for CAD in clinical use at present. QA of CAD is important so that end users can be made aware of changes in CAD performance due to either intentional or unintentional causes. In addition, end-user training is critical to prevent improper use of CAD, which could potentially result in lower overall clinical performance. Research on QA of CAD and user training is limited to date. The purpose of this paper is to bring attention to these issues, inform the readers of the opinions of the members of the American Association of Physicists in Medicine (AAPM) CAD subcommittee, and thus stimulate further discussion in the CAD community on these topics. The recommendations in this paper are intended to be work items for AAPM task groups that will be formed to address QA and user training issues on CAD in the future. The work items may serve as a framework for the discussion and eventual design of detailed QA and training procedures for physicists and users of CAD. Some of the recommendations are considered by the subcommittee to be reasonably easy and practical and can be implemented immediately by the end users; others are considered to be "best practice" approaches, which may require significant effort, additional tools, and proper training to implement. The eventual standardization of the requirements of QA procedures for CAD will have to be determined through consensus from members of the CAD community, and user training may require support of professional societies. It is expected that high-quality CAD and proper use of CAD could allow these systems to achieve their true potential, thus benefiting both the patients and the clinicians, and may bring about more widespread clinical use of CAD for many other diseases and applications. It is hoped that the awareness of the need
CAD/CAM-assisted breast reconstruction.
Melchels, Ferry; Wiggenhauser, Paul Severin; Warne, David; Barry, Mark; Ong, Fook Rhu; Chong, Woon Shin; Hutmacher, Dietmar Werner; Schantz, Jan-Thorsten
2011-09-01
The application of computer-aided design and manufacturing (CAD/CAM) techniques in the clinic is growing slowly but steadily. The ability to build patient-specific models based on medical imaging data offers major potential. In this work we report on the feasibility of employing laser scanning with CAD/CAM techniques to aid in breast reconstruction. A patient was imaged with laser scanning, an economical and facile method for creating an accurate digital representation of the breasts and surrounding tissues. The obtained model was used to fabricate a customized mould that was employed as an intra-operative aid for the surgeon performing autologous tissue reconstruction of the breast removed due to cancer. Furthermore, a solid breast model was derived from the imaged data and digitally processed for the fabrication of customized scaffolds for breast tissue engineering. To this end, a novel generic algorithm for creating porosity within a solid model was developed, using a finite element model as intermediate. PMID:21900731