Sample records for computational procedure based

  1. Model of Procedure Usage – Results from a Qualitative Study to Inform Design of Computer-Based Procedures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johanna H Oxstrand; Katya L Le Blanc

    The nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. As a step toward the goal of improving procedure use performance, researchers, together with the nuclear industry, have been looking at replacing the current paper-based procedures with computer-based procedure systems. The concept of computer-based procedures is not new by any means; however, most research has focused on procedures used in the main control room. Procedures reviewed in these efforts are mainly emergency operating procedures and normal operating procedures. Based on lessons learned from these previous efforts, we are now exploring a less well-studied application for computer-based procedures: field procedures, i.e. procedures used by nuclear equipment operators and maintenance technicians. The Idaho National Laboratory, the Institute for Energy Technology, and participants from the U.S. commercial nuclear industry are collaborating in an applied research effort with the objective of developing requirements and specifications for a computer-based procedure system to be used by field operators. The goal is to identify the types of human errors that can be mitigated by using computer-based procedures and how best to design the computer-based procedures to do this. The underlying philosophy in the research effort is “Stop – Start – Continue”, i.e. what features from the use of paper-based procedures should we not incorporate (Stop), what should we keep (Continue), and what new features or work processes should be added (Start). One step in identifying the Stop – Start – Continue elements was to conduct a baseline study in which affordances related to the current usage of paper-based procedures were identified. The purpose of the study was to develop a model of paper-based procedure use which will help to identify desirable features for computer-based procedure prototypes. Affordances such as note taking, markups, sharing procedures between fellow coworkers, the use of multiple procedures at once, etc. were considered. The model describes which affordances associated with paper-based procedures should be transferred to computer-based procedures as well as which features should not be incorporated. The model also provides a means to identify what new features not present in paper-based procedures need to be added to the computer-based procedures to further enhance performance. The next step is to use the requirements and specifications to develop concepts and prototypes of computer-based procedures. User tests and other data collection efforts will be conducted to ensure that the real issues with field procedures and their usage are being addressed and solved in the best manner possible. This paper describes the baseline study, the construction of the model of procedure use, and the requirements and specifications for computer-based procedures that were developed based on the model. It also addresses how the model and the insights gained from it were used to develop concepts and prototypes for computer-based procedures.

  2. Situation awareness and trust in computer-based procedures in nuclear power plant operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Throneburg, E. B.; Jones, J. M.

    2006-07-01

    Situation awareness and trust are two issues that need to be addressed in the design of computer-based procedures for nuclear power plants. Situation awareness, in relation to computer-based procedures, concerns the operators' knowledge of the plant's state while following the procedures. Trust concerns the amount of faith that the operators put into the automated procedures, which can affect situation awareness. This paper first discusses the advantages and disadvantages of computer-based procedures. It then discusses the known aspects of situation awareness and trust as applied to computer-based procedures in nuclear power plants. An outline of a proposed experiment is then presented that includes methods of measuring situation awareness and trust so that these aspects can be analyzed for further study. (authors)

  3. Development and applications of two computational procedures for determining the vibration modes of structural systems. [aircraft structures - aerospaceplanes]

    NASA Technical Reports Server (NTRS)

    Kvaternik, R. G.

    1975-01-01

    Two computational procedures for analyzing complex structural systems for their natural modes and frequencies of vibration are presented. Both procedures are based on a substructures methodology and both employ the finite-element stiffness method to model the constituent substructures. The first procedure is a direct method based on solving the eigenvalue problem associated with a finite-element representation of the complete structure. The second procedure is a component-mode synthesis scheme in which the vibration modes of the complete structure are synthesized from modes of substructures into which the structure is divided. The analytical basis of the methods contains a combination of features which enhance the generality of the procedures. The computational procedures are notable for their versatility, computational convenience, and ease of computer implementation. The procedures were implemented in two special-purpose computer programs. The results of the application of these programs to several structural configurations are shown and comparisons are made with experiment.

  4. Development of an efficient procedure for calculating the aerodynamic effects of planform variation

    NASA Technical Reports Server (NTRS)

    Mercer, J. E.; Geller, E. W.

    1981-01-01

    Numerical procedures to compute gradients in aerodynamic loading due to planform shape changes using panel method codes were studied. Two procedures were investigated: one computed the aerodynamic perturbation directly; the other computed the aerodynamic loading on the perturbed planform and on the base planform and then differenced these values to obtain the perturbation in loading. It is indicated that computing the perturbed values directly cannot be done satisfactorily without proper aerodynamic representation of the pressure singularity at the leading edge of a thin wing. For the alternative procedure, a technique was developed which saves most of the time-consuming computations from a panel method calculation for the base planform. Using this procedure the perturbed loading can be calculated in about one-tenth the time of that for the base solution.
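
    The second procedure amounts to a finite-difference gradient in which the expensive base-planform solution is reused. A minimal sketch of that idea in Python, where solve_panel_loading is a hypothetical stand-in for a panel-method solver (not from the paper or any specific library):

    ```python
    def loading_gradient(solve_panel_loading, base_params, i, h=1e-3):
        """Finite-difference gradient of aerodynamic loading w.r.t. planform parameter i."""
        base = solve_panel_loading(base_params)   # expensive; computed once and reused
        perturbed = list(base_params)
        perturbed[i] += h                         # perturb a single planform parameter
        return (solve_panel_loading(perturbed) - base) / h
    ```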

  5. Factors Affecting Utilization of Information Output of Computer-Based Modeling Procedures in Local Government Organizations.

    ERIC Educational Resources Information Center

    Komsky, Susan

    Fiscal Impact Budgeting Systems (FIBS) are sophisticated computer-based modeling procedures used in local government organizations, whose results, however, are often overlooked or ignored by decision makers. A study attempted to discover the reasons for this situation by focusing on four factors: potential usefulness, faith in computers,…

  6. Benefits of computer screen-based simulation in learning cardiac arrest procedures.

    PubMed

    Bonnetain, Elodie; Boucheix, Jean-Michel; Hamet, Maël; Freysz, Marc

    2010-07-01

    What is the best way to train medical students early so that they acquire basic skills in cardiopulmonary resuscitation as effectively as possible? Studies have shown the benefits of high-fidelity patient simulators, but have also demonstrated their limits. New computer screen-based multimedia simulators have fewer constraints than high-fidelity patient simulators. In this area, as yet, there has been no research on the effectiveness of transfer of learning from a computer screen-based simulator to more realistic situations such as those encountered with high-fidelity patient simulators. We tested the benefits of learning cardiac arrest procedures using a multimedia computer screen-based simulator in 28 Year 2 medical students. Just before the end of the traditional resuscitation course, we compared two groups. An experiment group (EG) was first asked to learn to perform the appropriate procedures in a cardiac arrest scenario (CA1) in the computer screen-based learning environment and was then tested on a high-fidelity patient simulator in another cardiac arrest simulation (CA2). While the EG was learning to perform CA1 procedures in the computer screen-based learning environment, a control group (CG) actively continued to learn cardiac arrest procedures using practical exercises in a traditional class environment. Both groups were given the same amount of practice, exercises and trials. The CG was then also tested on the high-fidelity patient simulator for CA2, after which it was asked to perform CA1 using the computer screen-based simulator. Performances with both simulators were scored on a precise 23-point scale. On the test on a high-fidelity patient simulator, the EG trained with a multimedia computer screen-based simulator performed significantly better than the CG trained with traditional exercises and practice (16.21 versus 11.13 of 23 possible points, respectively; p<0.001). Computer screen-based simulation appears to be effective in preparing learners to use high-fidelity patient simulators, which present simulations that are closer to real-life situations.

  7. Operating Policies and Procedures of Computer Data-Base Systems.

    ERIC Educational Resources Information Center

    Anderson, David O.

    Speaking on the operating policies and procedures of computer data bases containing information on students, the author divides his remarks into three parts: content decisions, data base security, and user access. He offers nine recommended practices that should increase the data base's usefulness to the user community: (1) the cost of developing…

  8. Evaluation of Manual Spelling, Observational and Incidental Learning Using Computer-Based Instruction with a Tablet PC, Large Screen Projection, and a Forward Chaining Procedure

    ERIC Educational Resources Information Center

    Purrazzella, Kimberly; Mechling, Linda C.

    2013-01-01

    The study employed a multiple probe design to investigate the effects of computer-based instruction (CBI) and a forward chaining procedure to teach manual spelling of words to three young adults with moderate intellectual disability in a small group arrangement. The computer-based program included a tablet PC whereby students wrote words directly…

  9. 47 CFR 1.2202 - Competitive bidding design options.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Section 1.2202 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Grants...) Procedures that utilize mathematical computer optimization software, such as integer programming, to evaluate... evaluating bids using a ranking based on specified factors. (B) Procedures that combine computer optimization...

  10. Kernel and System Procedures in Flex.

    DTIC Science & Technology

    1983-08-01

    System procedures on which the operating system for the Flex computer is based are described. These are the low-level procedures which are used to implement the compilers, file-store, command interpreters, etc. on Flex. ... privileged mode. They form the interface between the user and a particular operating system written on top of the Kernel.

  11. CBP for Field Workers – Results and Insights from Three Usability and Interface Design Evaluations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oxstrand, Johanna Helene; Le Blanc, Katya Lee; Bly, Aaron Douglas

    2015-09-01

    Nearly all activities that involve human interaction with the systems in a nuclear power plant are guided by procedures. Even though the paper-based procedures (PBPs) currently used by industry have a demonstrated history of ensuring safety, improving procedure use could yield significant savings in increased efficiency as well as improved nuclear safety through human performance gains. The nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. As a step toward the goal of improving procedure use and adherence, researchers in the Light-Water Reactor Sustainability (LWRS) Program, together with the nuclear industry, have been investigating the possibility and feasibility of replacing the current paper-based procedure process with a computer-based procedure (CBP) system. This report describes a field evaluation of new design concepts of a prototype computer-based procedure system.

  12. Using GOMS models and hypertext to create representations of medical procedures for online display

    NASA Technical Reports Server (NTRS)

    Gugerty, Leo; Halgren, Shannon; Gosbee, John; Rudisill, Marianne

    1991-01-01

    This study investigated two methods to improve organization and presentation of computer-based medical procedures. A literature review suggested that the GOMS (goals, operators, methods, and selection rules) model can assist in rigorous task analysis, which can then help generate initial design ideas for the human-computer interface. GOMS models are hierarchical in nature, so this study also investigated the effect of hierarchical, hypertext interfaces. We used a 2 x 2 between-subjects design, including the following independent variables: procedure organization - GOMS-model based vs. medical-textbook based; navigation type - hierarchical vs. linear (booklike). After naive subjects studied the online procedures, measures were taken of their memory for the content and the organization of the procedures. This design was repeated for two medical procedures. For one procedure, subjects who studied GOMS-based and hierarchical procedures remembered more about the procedures than other subjects. The results for the other procedure were less clear. However, data for both procedures showed a 'GOMSification effect'. That is, when asked to do a free recall of a procedure, subjects who had studied a textbook procedure often recalled key information in a location inconsistent with the procedure they actually studied, but consistent with the GOMS-based procedure.
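
    A GOMS task analysis is essentially a tree of goals refined into methods and primitive operators. As an illustration only (the medical content below is invented, not taken from the study), such a hierarchy and a linear, booklike traversal of it can be sketched in Python:

    ```python
    # Toy GOMS hierarchy as nested dictionaries: goals decompose into
    # methods, which bottom out in primitive operators.
    goms_model = {
        "goal": "perform procedure",
        "methods": [
            {"goal": "prepare patient",
             "operators": ["verify identity", "position patient"]},
            {"goal": "carry out intervention",
             "operators": ["locate site", "apply treatment"]},
        ],
    }

    def flatten(node, depth=0):
        """Walk the hierarchy in the order a linear (booklike) display would use."""
        print("  " * depth + node["goal"])
        for op in node.get("operators", []):
            print("  " * (depth + 1) + op)
        for method in node.get("methods", []):
            flatten(method, depth + 1)

    flatten(goms_model)
    ```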

  13. Effects of Computer-Based Training on Procedural Modifications to Standard Functional Analyses

    ERIC Educational Resources Information Center

    Schnell, Lauren K.; Sidener, Tina M.; DeBar, Ruth M.; Vladescu, Jason C.; Kahng, SungWoo

    2018-01-01

    Few studies have evaluated methods for training decision-making when functional analysis data are undifferentiated. The current study evaluated computer-based training to teach 20 graduate students to arrange functional analysis conditions, analyze functional analysis data, and implement procedural modifications. Participants were exposed to…

  14. Development of a personal computer-based secondary task procedure as a surrogate for a driving simulator

    DOT National Transportation Integrated Search

    2007-08-01

    This research was conducted to develop and test a personal computer-based study procedure (PCSP) with secondary task loading for use in human factors laboratory experiments in lieu of a driving simulator to test reading time and understanding of traf...

  15. Application of advanced computational procedures for modeling solar-wind interactions with Venus: Theory and computer code

    NASA Technical Reports Server (NTRS)

    Stahara, S. S.; Klenke, D.; Trudinger, B. C.; Spreiter, J. R.

    1980-01-01

    Computational procedures are developed and applied to the prediction of solar wind interaction with nonmagnetic terrestrial planet atmospheres, with particular emphasis to Venus. The theoretical method is based on a single fluid, steady, dissipationless, magnetohydrodynamic continuum model, and is appropriate for the calculation of axisymmetric, supersonic, super-Alfvenic solar wind flow past terrestrial planets. The procedures, which consist of finite difference codes to determine the gasdynamic properties and a variety of special purpose codes to determine the frozen magnetic field, streamlines, contours, plots, etc. of the flow, are organized into one computational program. Theoretical results based upon these procedures are reported for a wide variety of solar wind conditions and ionopause obstacle shapes. Plasma and magnetic field comparisons in the ionosheath are also provided with actual spacecraft data obtained by the Pioneer Venus Orbiter.

  16. Spacecraft crew procedures from paper to computers

    NASA Technical Reports Server (NTRS)

    Oneal, Michael; Manahan, Meera

    1991-01-01

    Described here is a research project that uses human factors and computer systems knowledge to explore and help guide the design and creation of an effective Human-Computer Interface (HCI) for spacecraft crew procedures. By having a computer system behind the user interface, it is possible to have increased procedure automation, related system monitoring, and personalized annotation and help facilities. The research project includes the development of computer-based procedure system HCI prototypes and a testbed for experiments that measure the effectiveness of HCI alternatives in order to make design recommendations. The testbed will include a system for procedure authoring, editing, training, and execution. Progress on developing HCI prototypes for a middeck experiment performed on Space Shuttle Mission STS-34 and for upcoming medical experiments is discussed. The status of the experimental testbed is also discussed.

  17. Computer-based System for the Virtual-Endoscopic Guidance of Bronchoscopy.

    PubMed

    Helferty, J P; Sherbondy, A J; Kiraly, A P; Higgins, W E

    2007-11-01

    The standard procedure for diagnosing lung cancer involves two stages: three-dimensional (3D) computed-tomography (CT) image assessment, followed by interventional bronchoscopy. In general, the physician has no link between the 3D CT image assessment results and the follow-on bronchoscopy. Thus, the physician essentially performs bronchoscopic biopsy of suspect cancer sites blindly. We have devised a computer-based system that greatly augments the physician's vision during bronchoscopy. The system uses techniques from computer graphics and computer vision to enable detailed 3D CT procedure planning and follow-on image-guided bronchoscopy. The procedure plan is directly linked to the bronchoscope procedure, through a live registration and fusion of the 3D CT data and bronchoscopic video. During a procedure, the system provides many visual tools, fused CT-video data, and quantitative distance measures; this gives the physician considerable visual feedback on how to maneuver the bronchoscope and where to insert the biopsy needle. Central to the system is a CT-video registration technique, based on normalized mutual information. Several sets of results verify the efficacy of the registration technique. In addition, we present a series of test results for the complete system for phantoms, animals, and human lung-cancer patients. The results indicate that not only is the variation in skill level between different physicians greatly reduced by the system over the standard procedure, but that biopsy effectiveness increases.

  18. Computer-Based and Paper-Based Measurement of Recognition Performance.

    ERIC Educational Resources Information Center

    Federico, Pat-Anthony

    To determine the relative reliabilities and validities of paper-based and computer-based measurement procedures, 83 male student pilots and radar intercept officers were administered computer and paper-based tests of aircraft recognition. The subject matter consisted of line drawings of front, side, and top silhouettes of aircraft. Reliabilities…

  19. Simplified methods for computing total sediment discharge with the modified Einstein procedure

    USGS Publications Warehouse

    Colby, Bruce R.; Hubbell, David Wellington

    1961-01-01

    A procedure was presented in 1950 by H. A. Einstein for computing the total discharge of sediment particles of sizes that are in appreciable quantities in the stream bed. This procedure was modified by the U.S. Geological Survey and adapted to computing the total sediment discharge of a stream on the basis of samples of bed sediment, depth-integrated samples of suspended sediment, streamflow measurements, and water temperature. This paper gives simplified methods for computing total sediment discharge by the modified Einstein procedure. Each of four nomographs appreciably simplifies a major step in the computations. Within the stated limitations, use of the nomographs introduces much less error than is present in either the basic data or the theories on which the computations of total sediment discharge are based. The results are nearly as accurate mathematically as those that could be obtained from the longer and more complex arithmetic and algebraic computations of the Einstein procedure.

  20. Logo and Von Neumann Ideas [and] Towards a Humanistic Use of Computers in Education = Hacia una insercion humanista de las computadoras en la educacion.

    ERIC Educational Resources Information Center

    Reggini, Horacio C.

    The first article, "LOGO and von Neumann Ideas," deals with the creation of new procedures based on procedures defined and stored in memory as LOGO lists of lists. This representation, which enables LOGO procedures to construct, modify, and run other LOGO procedures, is compared with basic computer concepts first formulated by John von…

  1. Prototype of a computer method for designing and analyzing heating, ventilating and air conditioning proportional, electronic control systems

    NASA Astrophysics Data System (ADS)

    Barlow, Steven J.

    1986-09-01

    The Air Force needs a better method of designing new and retrofit heating, ventilating and air conditioning (HVAC) control systems. Air Force engineers currently use manual design/predict/verify procedures taught at the Air Force Institute of Technology, School of Civil Engineering, HVAC Control Systems course. These existing manual procedures are iterative and time-consuming. The objectives of this research were to: (1) Locate and, if necessary, modify an existing computer-based method for designing and analyzing HVAC control systems that is compatible with the HVAC Control Systems manual procedures, or (2) Develop a new computer-based method of designing and analyzing HVAC control systems that is compatible with the existing manual procedures. Five existing computer packages were investigated in accordance with the first objective: MODSIM (for modular simulation), HVACSIM (for HVAC simulation), TRNSYS (for transient system simulation), BLAST (for building load and system thermodynamics) and Elite Building Energy Analysis Program. None were found to be compatible or adaptable to the existing manual procedures, and consequently, a prototype of a new computer method was developed in accordance with the second research objective.

  2. The rid-redundant procedure in C-Prolog

    NASA Technical Reports Server (NTRS)

    Chen, Huo-Yan; Wah, Benjamin W.

    1987-01-01

    C-Prolog can conveniently be used for logical inferences on knowledge bases. However, as with many search methods that use backward chaining, a large amount of redundant computation may be produced in recursive calls. To overcome this problem, the 'rid-redundant' procedure was designed to eliminate all redundant computations when running multi-recursive procedures. Experimental results obtained for C-Prolog on the Vax 11/780 computer show that there is an order of magnitude improvement in the running time and solvable problem size.
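
    The abstract does not spell out the rid-redundant algorithm itself; as a loose analogue in Python rather than Prolog, memoization removes exactly the kind of repeated work that multiply recursive calls produce:

    ```python
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def fib(n):
        # Without caching, fib(n) recomputes subproblems exponentially often;
        # with the cache, each subproblem is solved exactly once.
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    print(fib(100))   # fast: 101 distinct subproblems instead of ~2**100 calls
    ```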

  3. An Interactive Computer-Based Training Program for Beginner Personal Computer Maintenance.

    ERIC Educational Resources Information Center

    Summers, Valerie Brooke

    A computer-assisted instructional program, which was developed for teaching beginning computer maintenance to employees of Unisys, covered external hardware maintenance, proper diskette care, making software backups, and electro-static discharge prevention. The procedure used in developing the program was based upon the Dick and Carey (1985) model…

  4. Continuing challenges for computer-based neuropsychological tests.

    PubMed

    Letz, Richard

    2003-08-01

    A number of issues critical to the development of computer-based neuropsychological testing systems that remain continuing challenges to their widespread use in occupational and environmental health are reviewed. Several computer-based neuropsychological testing systems have been developed over the last 20 years, and they have contributed substantially to the study of neurologic effects of a number of environmental exposures. However, many are no longer supported and do not run on contemporary personal computer operating systems. Issues that are continuing challenges for development of computer-based neuropsychological tests in environmental and occupational health are discussed: (1) some current technological trends that generally make test development more difficult; (2) lack of availability of usable speech recognition of the type required for computer-based testing systems; (3) implementing computer-based procedures and tasks that are improvements over, not just adaptations of, their manually-administered predecessors; (4) implementing tests of a wider range of memory functions than the limited range now available; (5) paying more attention to motivational influences that affect the reliability and validity of computer-based measurements; and (6) increasing the usability of and audience for computer-based systems. Partial solutions to some of these challenges are offered. The challenges posed by current technological trends are substantial and generally beyond the control of testing system developers. Widespread acceptance of the "tablet PC" and implementation of accurate small vocabulary, discrete, speaker-independent speech recognition would enable revolutionary improvements to computer-based testing systems, particularly for testing memory functions not covered in existing systems. Dynamic, adaptive procedures, particularly ones based on item-response theory (IRT) and computerized-adaptive testing (CAT) methods, will be implemented in new tests that will be more efficient, reliable, and valid than existing test procedures. These additional developments, along with implementation of innovative reporting formats, are necessary for more widespread acceptance of the testing systems.

  5. Solving satisfiability problems using a novel microarray-based DNA computer.

    PubMed

    Lin, Che-Hsin; Cheng, Hsiao-Ping; Yang, Chang-Biau; Yang, Chia-Ning

    2007-01-01

    An algorithm based on a modified sticker model accompanied by an advanced MEMS-based microarray technology is demonstrated to solve the SAT problem, which has long served as a benchmark in DNA computing. Unlike conventional DNA computing algorithms, which need an initial data pool covering correct and incorrect answers and then execute a series of separation procedures to destroy the unwanted ones, we build solutions in parts, satisfying one clause in each step, and eventually solve the entire Boolean formula step by step. No time-consuming sample preparation procedures or delicate sample-applying equipment were required for the computing process. Moreover, experimental results show the bound DNA sequences can sustain the chemical solutions during the computing processes, such that the proposed method should be useful in dealing with large-scale problems.
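
    In purely software terms (a hedged analogue of the clause-by-clause strategy, not a model of the microarray chemistry), growing partial assignments so that each clause is satisfied in turn looks like this:

    ```python
    def solve_sat(clauses):
        """clauses: list of clauses; each clause is a list of ints (+i for x_i, -i for NOT x_i)."""
        partials = [{}]                                # start from the empty assignment
        for clause in clauses:
            extended = []
            for p in partials:
                for lit in clause:
                    var, val = abs(lit), lit > 0
                    if p.get(var, val) == val:         # literal is consistent with p
                        q = dict(p)
                        q[var] = val
                        extended.append(q)
            # deduplicate partial assignments before the next clause
            partials = [dict(t) for t in {tuple(sorted(p.items())) for p in extended}]
        return partials                                # all clauses satisfied; [] if UNSAT

    # Example: (x1 OR x2) AND (NOT x1 OR x3)
    print(solve_sat([[1, 2], [-1, 3]]))
    ```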

  6. Adjoint-Based, Three-Dimensional Error Prediction and Grid Adaptation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.

    2002-01-01

    Engineering computational fluid dynamics (CFD) analysis and design applications focus on output functions (e.g., lift, drag). Errors in these output functions are generally unknown and conservatively accurate solutions may be computed. Computable error estimates can offer the possibility to minimize computational work for a prescribed error tolerance. Such an estimate can be computed by solving the flow equations and the linear adjoint problem for the functional of interest. The computational mesh can be modified to minimize the uncertainty of a computed error estimate. This robust mesh-adaptation procedure automatically terminates when the simulation is within a user specified error tolerance. This procedure for estimating and adapting to error in a functional is demonstrated for three-dimensional Euler problems. An adaptive mesh procedure that links to a Computer Aided Design (CAD) surface representation is demonstrated for wing, wing-body, and extruded high lift airfoil configurations. The error estimation and adaptation procedure yielded corrected functions that are as accurate as functions calculated on uniformly refined grids with ten times as many grid points.
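
    The error estimate described here has a standard generic form: the output computed from the discrete flow solution is corrected by the adjoint solution weighted against the flow-equation residual. In our notation (a generic statement of the idea, not a formula quoted from the report):

    ```latex
    % f(u): exact output;  f(u_h): output of the discrete flow solution;
    % R(u_h): flow-equation residual of the discrete solution;
    % \psi: solution of the linear adjoint problem for the functional f.
    f(u) \;\approx\; f(u_h) \;-\; \psi^{\top} R(u_h)
    ```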

  7. The Krigifier: A Procedure for Generating Pseudorandom Nonlinear Objective Functions for Computational Experimentation

    NASA Technical Reports Server (NTRS)

    Trosset, Michael W.

    1999-01-01

    Comprehensive computational experiments to assess the performance of algorithms for numerical optimization require (among other things) a practical procedure for generating pseudorandom nonlinear objective functions. We propose a procedure that is based on the convenient fiction that objective functions are realizations of stochastic processes. This report details the calculations necessary to implement our procedure for the case of certain stationary Gaussian processes and presents a specific implementation in the statistical programming language S-PLUS.
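
    A minimal sketch of the underlying idea, assuming a stationary Gaussian process with a squared-exponential covariance on a 1-D grid (the report's exact kernel and its S-PLUS implementation are not reproduced here):

    ```python
    import numpy as np

    def sample_objective(x, length_scale=0.3, variance=1.0, seed=0):
        """Draw one realization of a stationary GP at the 1-D points x."""
        rng = np.random.default_rng(seed)
        d = np.abs(x[:, None] - x[None, :])
        K = variance * np.exp(-0.5 * (d / length_scale) ** 2)
        L = np.linalg.cholesky(K + 1e-10 * np.eye(len(x)))  # jitter for stability
        return L @ rng.standard_normal(len(x))

    x = np.linspace(0.0, 1.0, 200)
    f = sample_objective(x)   # one pseudorandom test objective, tabulated on x
    ```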

  8. Increased Memory Load during Task Completion when Procedures Are Presented on Mobile Screens

    ERIC Educational Resources Information Center

    Byrd, Keena S.; Caldwell, Barrett S.

    2011-01-01

    The primary objective of this research was to compare procedure-based task performance using three common mobile screen sizes: ultra mobile personal computer (7 in./17.8 cm), personal data assistant (3.5 in./8.9 cm), and SmartPhone (2.8 in./7.1 cm). Subjects used these three screen sizes to view and execute a computer maintenance procedure.…

  9. The J3 SCR model applied to resonant converter simulation

    NASA Technical Reports Server (NTRS)

    Avant, R. L.; Lee, F. C. Y.

    1985-01-01

    The J3 SCR model is a continuous topology computer model for the SCR. Its circuit analog and parameter estimation procedure are uniformly applicable to popular computer-aided design and analysis programs such as SPICE2 and SCEPTRE. The circuit analog is based on the intrinsic three pn junction structure of the SCR. The parameter estimation procedure requires only manufacturer's specification sheet quantities as a data base.

  10. Using Three-Dimensional Interactive Graphics To Teach Equipment Procedures.

    ERIC Educational Resources Information Center

    Hamel, Cheryl J.; Ryan-Jones, David L.

    1997-01-01

    Focuses on how three-dimensional graphical and interactive features of computer-based instruction can enhance learning and support human cognition during technical training of equipment procedures. Presents guidelines for using three-dimensional interactive graphics to teach equipment procedures based on studies of the effects of graphics, motion,…

  11. Using Computation Curriculum-Based Measurement Probes for Error Pattern Analysis

    ERIC Educational Resources Information Center

    Dennis, Minyi Shih; Calhoon, Mary Beth; Olson, Christopher L.; Williams, Cara

    2014-01-01

    This article describes how "curriculum-based measurement--computation" (CBM-C) mathematics probes can be used in combination with "error pattern analysis" (EPA) to pinpoint difficulties in basic computation skills for students who struggle with learning mathematics. Both assessment procedures provide ongoing assessment data…

  12. Round-off errors in cutting plane algorithms based on the revised simplex procedure

    NASA Technical Reports Server (NTRS)

    Moore, J. E.

    1973-01-01

    This report statistically analyzes computational round-off errors associated with the cutting plane approach to solving linear integer programming problems. Cutting plane methods require that the inverse of a sequence of matrices be computed. The problem basically reduces to one of minimizing round-off errors in the sequence of inverses. Two procedures for minimizing this problem are presented, and their influence on error accumulation is statistically analyzed. One procedure employs a very small tolerance factor to round computed values to zero. The other procedure is a numerical analysis technique for reinverting or improving the approximate inverse of a matrix. The results indicated that round-off accumulation can be effectively minimized by employing a tolerance factor which reflects the number of significant digits carried for each calculation and by applying the reinversion procedure once to each computed inverse. If 18 significant digits plus an exponent are carried for each variable during computations, then a tolerance value of 0.1 × 10^-12 is reasonable.
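
    A sketch of the two safeguards, under our reading of the abstract: values below a tolerance reflecting the digits carried are rounded to zero, and an approximate inverse is improved by one refinement step. The report's exact reinversion scheme is not given, so the classical Newton-Schulz (Hotelling) correction stands in for it here:

    ```python
    import numpy as np

    TOL = 0.1e-12   # reasonable when ~18 significant digits are carried

    def chop(a, tol=TOL):
        """Round computed values smaller than tol to exact zero."""
        a = np.asarray(a, dtype=float).copy()
        a[np.abs(a) < tol] = 0.0
        return a

    def reinvert_once(A, X):
        """One refinement step X <- X (2I - A X) applied to an approximate inverse X."""
        n = A.shape[0]
        return X @ (2.0 * np.eye(n) - A @ X)

    A = np.array([[4.0, 1.0], [2.0, 3.0]])
    X = np.linalg.inv(A) + 1e-6          # deliberately perturbed approximate inverse
    X = chop(reinvert_once(A, X))        # refined, then small residue chopped to zero
    ```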

  13. Spacecraft crew procedures from paper to computers

    NASA Technical Reports Server (NTRS)

    Oneal, Michael; Manahan, Meera

    1993-01-01

    Large volumes of paper are launched with each Space Shuttle Mission that contain step-by-step instructions for various activities that are to be performed by the crew during the mission. These instructions include normal operational procedures and malfunction or contingency procedures and are collectively known as the Flight Data File (FDF). An example of nominal procedures would be those used in the deployment of a satellite from the Space Shuttle; a malfunction procedure would describe actions to be taken if a specific problem developed during the deployment. A new FDF and associated system is being created for Space Station Freedom. The system will be called the Space Station Flight Data File (SFDF). NASA has determined that the SFDF will be computer-based rather than paper-based. Various aspects of the SFDF are discussed.

  14. An integrated computer-based procedure for teamwork in digital nuclear power plants.

    PubMed

    Gao, Qin; Yu, Wenzhu; Jiang, Xiang; Song, Fei; Pan, Jiajie; Li, Zhizhong

    2015-01-01

    Computer-based procedures (CBPs) are expected to improve operator performance in nuclear power plants (NPPs), but they may reduce the openness of interaction between team members and consequently harm teamwork. To support teamwork in the main control room of an NPP, this study proposed a team-level integrated CBP that presents team members' operation status and execution histories to one another. Through a laboratory experiment, we compared the new integrated design and the existing individual CBP design. Sixty participants, randomly divided into twenty teams of three people each, were assigned to the two conditions to perform simulated emergency operating procedures. The results showed that compared with the existing CBP design, the integrated CBP reduced the effort of team communication and improved team transparency. The results suggest that this novel design is effective in optimizing team processes, but its impact on the behavioural outcomes may be moderated by other factors, such as task duration. The study proposed and evaluated a team-level integrated computer-based procedure, which presents team members' operation status and execution history to one another. The experimental results show that compared with the traditional procedure design, the integrated design reduces the effort of team communication and improves team transparency.

  15. Standardized Procedure Content And Data Structure Based On Human Factors Requirements For Computer-Based Procedures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bly, Aaron; Oxstrand, Johanna; Le Blanc, Katya L

    2015-02-01

    Most activities that involve human interaction with systems in a nuclear power plant are guided by procedures. Traditionally, the use of procedures has been a paper-based process that supports safe operation of the nuclear power industry. However, the nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. Advances in digital technology make computer-based procedures (CBPs) a valid option that provides further enhancement of safety by improving human performance related to procedure use. The transition from paper-based procedures (PBPs) to CBPs creates a need for a computer-based procedure system (CBPS). A CBPS needs to have the ability to perform logical operations in order to adjust to the inputs received from either users or real-time data from plant status databases. Without the ability to perform logical operations, the procedure is just an electronic copy of the paper-based procedure. In order to provide the CBPS with the information it needs to display the procedure steps to the user, special care is needed in the format used to deliver all data and instructions to create the steps. The procedure should be broken down into basic elements and formatted in a standard method for the CBPS. One way to build the underlying data architecture is to use an Extensible Markup Language (XML) schema, which utilizes basic elements to build each step in the smart procedure. The attributes of each step will determine the type of functionality that the system will generate for that step. The CBPS will provide the context for the step to deliver referential information, request a decision, or accept input from the user. The XML schema needs to provide all data necessary for the system to accurately perform each step without the need for the procedure writer to reprogram the CBPS. The research team at the Idaho National Laboratory has developed a prototype CBPS for field workers as well as the underlying data structure for such a CBPS. The objective of the research effort is to develop guidance on how to design both the user interface and the underlying schema. This paper will describe the results and insights gained from the research activities conducted to date.
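
    As an illustration only (the INL schema itself is not given in the abstract, so the element and attribute names below are invented), a step encoded this way might be parsed and dispatched on its attributes as follows:

    ```python
    import xml.etree.ElementTree as ET

    # Hypothetical step markup: the type attribute selects the behaviour the
    # CBPS generates (show information, request a decision, accept input, ...).
    step = ET.fromstring(
        '<step id="3.1" type="input">'
        '  <prompt>Record suction pressure of pump P-101</prompt>'
        '  <expected unit="kPa" low="200" high="350"/>'
        '</step>'
    )

    # The CBPS branches on the step's attributes rather than on hard-coded logic.
    if step.get("type") == "input":
        exp = step.find("expected")
        print(step.findtext("prompt").strip())
        print("Acceptable range:", exp.get("low"), "-", exp.get("high"), exp.get("unit"))
    ```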

  16. Effects of computer-based training on procedural modifications to standard functional analyses.

    PubMed

    Schnell, Lauren K; Sidener, Tina M; DeBar, Ruth M; Vladescu, Jason C; Kahng, SungWoo

    2018-01-01

    Few studies have evaluated methods for training decision-making when functional analysis data are undifferentiated. The current study evaluated computer-based training to teach 20 graduate students to arrange functional analysis conditions, analyze functional analysis data, and implement procedural modifications. Participants were exposed to training materials using interactive software during a 1-day session. Following the training, mean scores on the posttest, novel cases probe, and maintenance probe increased for all participants. These results replicate previous findings during a 1-day session and include a measure of participant acceptability of the training. Recommendations for future research on computer-based training and functional analysis are discussed. © 2017 Society for the Experimental Analysis of Behavior.

  17. hp-Adaptive time integration based on the BDF for viscous flows

    NASA Astrophysics Data System (ADS)

    Hay, A.; Etienne, S.; Pelletier, D.; Garon, A.

    2015-06-01

    This paper presents a procedure based on the Backward Differentiation Formulas of order 1 to 5 to obtain efficient time integration of the incompressible Navier-Stokes equations. The adaptive algorithm performs both stepsize and order selections to control respectively the solution accuracy and the computational efficiency of the time integration process. The stepsize selection (h-adaptivity) is based on a local error estimate and an error controller to guarantee that the numerical solution accuracy is within a user-prescribed tolerance. The order selection (p-adaptivity) relies on the idea that low-accuracy solutions can be computed efficiently by low-order time integrators while accurate solutions require high-order time integrators to keep computational time low. The selection is based on a stability test that detects growing numerical noise and deems a method of order p stable if there is no method of lower order that delivers the same solution accuracy for a larger stepsize. Hence, it guarantees both that (1) the method of integration operates inside its stability region and (2) the time integration procedure is computationally efficient. The proposed time integration procedure also features time-step rejection and quarantine mechanisms, a modified Newton method with a predictor, and dense output techniques to compute the solution at off-step points.
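
    The h-adaptive half of such a scheme typically uses the elementary error controller sketched below (a generic sketch; the paper's controller and its order-selection stability test are more elaborate):

    ```python
    def next_stepsize(h, err, tol, p, fac=0.9, fac_min=0.2, fac_max=5.0):
        """New stepsize from the local error estimate `err` of a method of order p."""
        if err == 0.0:
            return h * fac_max
        scale = fac * (tol / err) ** (1.0 / (p + 1))     # classical I-controller
        return h * min(fac_max, max(fac_min, scale))     # limit stepsize jumps

    # A step is accepted when err <= tol; otherwise it is rejected and retried
    # with the reduced stepsize returned above.
    ```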

  18. Comparison of Methods for Demonstrating Passage of Time When Using Computer-Based Video Prompting

    ERIC Educational Resources Information Center

    Mechling, Linda C.; Bryant, Kathryn J.; Spencer, Galen P.; Ayres, Kevin M.

    2015-01-01

    Two different video-based procedures for presenting the passage of time (how long a step lasts) were examined. The two procedures were presented within the framework of video prompting to promote independent multi-step task completion across four young adults with moderate intellectual disability. The two procedures demonstrating passage of the…

  19. Role of HPC in Advancing Computational Aeroelasticity

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.

    2004-01-01

    On behalf of the High Performance Computing and Modernization Program (HPCMP) and NASA Advanced Supercomputing Division (NAS) a study is conducted to assess the role of supercomputers on computational aeroelasticity of aerospace vehicles. The study is mostly based on the responses to a web based questionnaire that was designed to capture the nuances of high performance computational aeroelasticity, particularly on parallel computers. A procedure is presented to assign a fidelity-complexity index to each application. Case studies based on major applications using HPCMP resources are presented.

  20. Basic research and 12 years of clinical experience in computer-assisted navigation technology: a review.

    PubMed

    Ewers, R; Schicho, K; Undt, G; Wanschitz, F; Truppe, M; Seemann, R; Wagner, A

    2005-01-01

    Computer-aided surgical navigation technology is commonly used in craniomaxillofacial surgery. It offers substantial improvement regarding esthetic and functional aspects in a range of surgical procedures. Based on augmented reality principles, where the real operative site is merged with computer generated graphic information, computer-aided navigation systems were employed, among other procedures, in dental implantology, arthroscopy of the temporomandibular joint, osteotomies, distraction osteogenesis, image guided biopsies and removals of foreign bodies. The decision to perform a procedure with or without computer-aided intraoperative navigation depends on the expected benefit to the procedure as well as on the technical expenditure necessary to achieve that goal. This paper comprises the experience gained in 12 years of research, development and routine clinical application. One hundred and fifty-eight operations with successful application of surgical navigation technology--divided into five groups--are evaluated regarding the criteria "medical benefit" and "technical expenditure" necessary to perform these procedures. Our results indicate that the medical benefit is likely to outweigh the expenditure of technology with few exceptions (calvaria transplant, resection of the temporal bone, reconstruction of the orbital floor). Especially in dental implantology, specialized software reduces time and additional costs necessary to plan and perform procedures with computer-aided surgical navigation.

  1. Evaluation of the performance of MP4-based procedures for a wide range of thermochemical and kinetic properties

    NASA Astrophysics Data System (ADS)

    Yu, Li-Juan; Wan, Wenchao; Karton, Amir

    2016-11-01

    We evaluate the performance of standard and modified MPn procedures for a wide set of thermochemical and kinetic properties, including atomization energies, structural isomerization energies, conformational energies, and reaction barrier heights. The reference data are obtained at the CCSD(T)/CBS level by means of the Wn thermochemical protocols. We find that none of the MPn-based procedures show acceptable performance for the challenging W4-11 and BH76 databases. For the other thermochemical/kinetic databases, the MP2.5 and MP3.5 procedures provide the most attractive accuracy-to-computational cost ratios. The MP2.5 procedure results in a weighted-total-root-mean-square deviation (WTRMSD) of 3.4 kJ/mol, whilst the computationally more expensive MP3.5 procedure results in a WTRMSD of 1.9 kJ/mol (the same WTRMSD obtained for the CCSD(T) method in conjunction with a triple-zeta basis set). We also assess the performance of the computationally economical CCSD(T)/CBS(MP2) method, which provides the best overall performance for all the considered databases, including W4-11 and BH76.
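
    The fractional-order methods referred to above are conventionally defined as midpoints of successive MPn energies (our statement of the usual definition, not a formula quoted from the paper):

    ```latex
    E_{\mathrm{MP2.5}} = \tfrac{1}{2}\left(E_{\mathrm{MP2}} + E_{\mathrm{MP3}}\right),
    \qquad
    E_{\mathrm{MP3.5}} = \tfrac{1}{2}\left(E_{\mathrm{MP3}} + E_{\mathrm{MP4}}\right)
    ```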

  2. The hack attack - Increasing computer system awareness of vulnerability threats

    NASA Technical Reports Server (NTRS)

    Quann, John; Belford, Peter

    1987-01-01

    The paper discusses the electronic vulnerability of computer-based systems supporting NASA Goddard Space Flight Center (GSFC) to unauthorized users. To test the security of the system and increase security awareness, NYMA, Inc. employed computer 'hackers' to attempt to infiltrate the system(s) under controlled conditions. Penetration procedures, methods, and descriptions are detailed in the paper. The procedure increased the security consciousness of GSFC management regarding the electronic vulnerability of the system(s).

  3. Computational Inquiry in Introductory Statistics

    ERIC Educational Resources Information Center

    Toews, Carl

    2017-01-01

    Inquiry-based pedagogies have a strong presence in proof-based undergraduate mathematics courses, but can be difficult to implement in courses that are large, procedural, or highly computational. An introductory course in statistics would thus seem an unlikely candidate for an inquiry-based approach, as these courses typically steer well clear of…

  4. A Proposal on the Validation Model of Equivalence between PBLT and CBLT

    ERIC Educational Resources Information Center

    Chen, Huilin

    2014-01-01

    The validity of the computer-based language test is possibly affected by three factors: computer familiarity, audio-visual cognitive competence, and other discrepancies in construct. Therefore, validating the equivalence between the paper-and-pencil language test and the computer-based language test is a key step in the procedure of designing a…

  5. Evaluating Procedures for Reducing Measurement Error in Math Curriculum-Based Measurement Probes

    ERIC Educational Resources Information Center

    Methe, Scott A.; Briesch, Amy M.; Hulac, David

    2015-01-01

    At present, it is unclear whether math curriculum-based measurement (M-CBM) procedures provide a dependable measure of student progress in math computation because support for its technical properties is based largely upon a body of correlational research. Recent investigations into the dependability of M-CBM scores have found that evaluating…

  6. The anatomy of floating shock fitting. [shock waves computation for flow field]

    NASA Technical Reports Server (NTRS)

    Salas, M. D.

    1975-01-01

    The floating shock fitting technique is examined. Second-order difference formulas are developed for the computation of discontinuities. A procedure is developed to compute mesh points that are crossed by discontinuities. The technique is applied to the calculation of internal two-dimensional flows with arbitrary number of shock waves and contact surfaces. A new procedure, based on the coalescence of characteristics, is developed to detect the formation of shock waves. Results are presented to validate and demonstrate the versatility of the technique.

  7. Queueing Network Models for Parallel Processing of Task Systems: an Operational Approach

    NASA Technical Reports Server (NTRS)

    Mak, Victor W. K.

    1986-01-01

    Computer performance modeling of possibly complex computations running on highly concurrent systems is considered. Earlier works in this area either dealt with a very simple program structure or resulted in methods with exponential complexity. An efficient procedure is developed to compute the performance measures for series-parallel-reducible task systems using queueing network models. The procedure is based on the concept of hierarchical decomposition and a new operational approach. Numerical results for three test cases are presented and compared to those of simulations.
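
    As a loose illustration of the series-parallel structure being exploited (not the paper's queueing analysis, which handles contention and stochastic service), a series-parallel task system reduces bottom-up, with series stages adding and parallel branches taking the maximum of their completion times:

    ```python
    # Tasks are nested tuples: ("leaf", service_time), ("series", ...), ("parallel", ...).
    def completion_time(task):
        kind = task[0]
        if kind == "leaf":
            return task[1]                              # deterministic service time
        times = [completion_time(t) for t in task[1:]]
        return sum(times) if kind == "series" else max(times)

    job = ("series", ("leaf", 2.0),
                     ("parallel", ("leaf", 3.0), ("leaf", 1.5)),
                     ("leaf", 1.0))
    print(completion_time(job))   # 2.0 + max(3.0, 1.5) + 1.0 = 6.0
    ```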

  8. Quantum wavepacket ab initio molecular dynamics: an approach for computing dynamically averaged vibrational spectra including critical nuclear quantum effects.

    PubMed

    Sumner, Isaiah; Iyengar, Srinivasan S

    2007-10-18

    We have introduced a computational methodology to study vibrational spectroscopy in clusters inclusive of critical nuclear quantum effects. This approach is based on the recently developed quantum wavepacket ab initio molecular dynamics method that combines quantum wavepacket dynamics with ab initio molecular dynamics. The computational efficiency of the dynamical procedure is drastically improved (by several orders of magnitude) through the utilization of wavelet-based techniques combined with the previously introduced time-dependent deterministic sampling procedure to achieve stable, picosecond-length, quantum-classical dynamics of electrons and nuclei in clusters. The dynamical information is employed to construct a novel cumulative flux/velocity correlation function, where the wavepacket flux from the quantized particle is combined with classical nuclear velocities to obtain the vibrational density of states. The approach is demonstrated by computing the vibrational density of states of [Cl-H-Cl]-, inclusive of critical quantum nuclear effects, and our results are in good agreement with experiment. A general hierarchical procedure is also provided, based on electronic structure harmonic frequencies, classical ab initio molecular dynamics, computation of nuclear quantum-mechanical eigenstates, and quantum wavepacket ab initio dynamics, to understand vibrational spectroscopy in hydrogen-bonded clusters that display large degrees of anharmonicity.
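
    The paper's cumulative flux/velocity correlation function is its own construction; as a classical point of reference only, a vibrational density of states is commonly obtained as the power spectrum of the nuclear velocities (equivalently, the Fourier transform of the velocity autocorrelation function):

    ```python
    import numpy as np

    def vdos(velocities, dt):
        """velocities: array (n_steps, n_dof); returns (frequencies, spectrum)."""
        v = velocities - velocities.mean(axis=0)
        n = v.shape[0]
        spec = np.zeros(n // 2 + 1)
        for dof in range(v.shape[1]):
            # Wiener-Khinchin: power spectrum = FT of the autocorrelation
            spec += np.abs(np.fft.rfft(v[:, dof])) ** 2
        freqs = np.fft.rfftfreq(n, d=dt)
        return freqs, spec / n
    ```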

  9. Breast tumor segmentation in high resolution x-ray phase contrast analyzer based computed tomography.

    PubMed

    Brun, E; Grandl, S; Sztrókay-Gaul, A; Barbone, G; Mittone, A; Gasilov, S; Bravin, A; Coan, P

    2014-11-01

    Phase contrast computed tomography has emerged as an imaging method, which is able to outperform present day clinical mammography in breast tumor visualization while maintaining an equivalent average dose. To this day, no segmentation technique takes into account the specificity of the phase contrast signal. In this study, the authors propose a new mathematical framework for human-guided breast tumor segmentation. This method has been applied to high-resolution images of excised human organs, each of several gigabytes. The authors present a segmentation procedure based on the viscous watershed transform and demonstrate the efficacy of this method on analyzer based phase contrast images. The segmentation of tumors inside two full human breasts is then shown as an example of this procedure's possible applications. A correct and precise identification of the tumor boundaries was obtained and confirmed by manual contouring performed independently by four experienced radiologists. The authors demonstrate that applying the watershed viscous transform allows them to perform the segmentation of tumors in high-resolution x-ray analyzer based phase contrast breast computed tomography images. Combining the additional information provided by the segmentation procedure with the already high definition of morphological details and tissue boundaries offered by phase contrast imaging techniques, will represent a valuable multistep procedure to be used in future medical diagnostic applications.
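
    The viscous watershed variant used by the authors is not part of standard imaging libraries; as a simplified stand-in, a marker-based watershed on a gradient image (scikit-image) illustrates the human-guided workflow, with seed markers playing the role of the manual guidance:

    ```python
    import numpy as np
    from skimage.filters import sobel
    from skimage.segmentation import watershed

    def segment(image, tumor_seed, background_seed):
        """Segment one 2-D slice given one seed pixel inside and one outside the tumor."""
        gradient = sobel(image)                      # tissue boundaries slow the flooding
        markers = np.zeros(image.shape, dtype=int)
        markers[tumor_seed] = 1                      # e.g. tumor_seed = (row, col)
        markers[background_seed] = 2
        return watershed(gradient, markers) == 1     # boolean tumor mask
    ```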

  10. A computer-assisted personnel data system for a hospital department of dietetics. I. Development of the data base.

    PubMed

    Shick, G L; Hoover, L W; Moore, A N

    1979-04-01

    A data base was developed for a computer-assisted personnel data system for a university hospital department of dietetics which would store data on employees' employment, personnel information, attendance records, and termination. Development of the data base required designing computer programs and files, coding directions and forms for card input, and forms and procedures for on-line transmission. A program was written to compute accrued vacation, sick leave, and holiday time, and to generate historical records.

  11. Why CBI? An Examination of the Case for Computer-Based Instruction.

    ERIC Educational Resources Information Center

    Dean, Peter M.

    1977-01-01

    Discussion of the use of computers in instruction includes the relationship of theory to practice, the interactive nature of computer instruction, an overview of the Keller Plan, cost considerations, strategy for use of computers in instruction and training, and a look at examination procedure. (RAO)

  12. Three-dimensional turbopump flowfield analysis

    NASA Technical Reports Server (NTRS)

    Sharma, O. P.; Belford, K. A.; Ni, R. H.

    1992-01-01

    A program was conducted to develop a flow prediction method applicable to rocket turbopumps. The complex nature of a flowfield in turbopumps is described and examples of flowfields are discussed to illustrate that physics based models and analytical calculation procedures based on computational fluid dynamics (CFD) are needed to develop reliable design procedures for turbopumps. A CFD code developed at NASA ARC was used as the base code. The turbulence model and boundary conditions in the base code were modified, respectively, to: (1) compute transitional flows and account for extra rates of strain, e.g., rotation; and (2) compute surface heat transfer coefficients and allow computation through multistage turbomachines. Benchmark quality data from two and three-dimensional cascades were used to verify the code. The predictive capabilities of the present CFD code were demonstrated by computing the flow through a radial impeller and a multistage axial flow turbine. Results of the program indicate that the present code operated in a two-dimensional mode is a cost effective alternative to full three-dimensional calculations, and that it permits realistic predictions of unsteady loadings and losses for multistage machines.

  13. MIRADS-2 Implementation Manual

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The Marshall Information Retrieval and Display System (MIRADS), which is a data base management system designed to provide the user with a set of generalized file capabilities, is presented. The system provides a wide variety of ways to process the contents of the data base and includes capabilities to search, sort, compute, update, and display the data. The process of creating, defining, and loading a data base is generally called the loading process. The steps in the loading process, which include (1) structuring, (2) creating, (3) defining, and (4) implementing the data base for use by MIRADS, are defined. The execution of several computer programs is required to successfully complete all steps of the loading process. The MIRADS library must be established as a cataloged mass storage file as the first step in MIRADS implementation, and the procedure for establishing the MIRADS Library is given. The system is currently operational for the UNIVAC 1108 computer system utilizing the Executive Operating System. All procedures relate to the use of MIRADS on the U-1108 computer.

  14. Solution of quadratic matrix equations for free vibration analysis of structures.

    NASA Technical Reports Server (NTRS)

    Gupta, K. K.

    1973-01-01

    An efficient digital computer procedure and the related numerical algorithm are presented herein for the solution of quadratic matrix equations associated with free vibration analysis of structures. Such a procedure enables accurate and economical analysis of natural frequencies and associated modes of discretized structures. The numerically stable algorithm is based on the Sturm sequence method, which fully exploits the banded form of associated stiffness and mass matrices. The related computer program written in FORTRAN V for the JPL UNIVAC 1108 computer proves to be substantially more accurate and economical than other existing procedures of such analysis. Numerical examples are presented for two structures - a cantilever beam and a semicircular arch.

  15. An analysis of ratings: A guide to RMRATE

    Treesearch

    Thomas C. Brown; Terry C. Daniel; Herbert W. Schroeder; Glen E. Brink

    1990-01-01

    This report describes RMRATE, a computer program for analyzing rating judgments. RMRATE scales ratings using several scaling procedures, and compares the resulting scale values. The scaling procedures include the median and simple mean, standardized values, scale values based on Thurstone's Law of Categorical Judgment, and regression-based values. RMRATE also...
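
    Two of the scaling procedures named above, the simple mean and standardized values, can be sketched as follows; this is an illustrative reconstruction, not RMRATE itself, and the rating matrix is hypothetical.

    ```python
    import numpy as np

    # ratings[i, j] = rating given by rater i to stimulus j (hypothetical data)
    ratings = np.array([[3, 5, 2, 4],
                        [4, 5, 1, 3],
                        [2, 4, 2, 5]], dtype=float)

    # Simple mean scale value per stimulus.
    mean_scale = ratings.mean(axis=0)

    # Standardized values: z-score each rater's ratings to remove
    # differences in how individual raters use the rating scale, then average.
    z = (ratings - ratings.mean(axis=1, keepdims=True)) / ratings.std(axis=1, keepdims=True)
    z_scale = z.mean(axis=0)

    print("mean scale values        :", mean_scale)
    print("standardized scale values:", z_scale)
    ```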

  16. Marketing via Computer Diskette.

    ERIC Educational Resources Information Center

    Thombs, Michael

    This report describes the development and evaluation of an interactive marketing diskette which describes the characteristics, advantages, and application procedures for each of the major computer-based graduate programs at Nova University. Copies of the diskettes were distributed at the 1988 Florida Instructional Computing Conference and were…

  17. Preliminary Design of a Consultation Knowledge-Based System for the Minimization of Distortion in Welded Structures

    DTIC Science & Technology

    1989-02-01

    which capture the knowledge of such experts. These Expert Systems, or Knowledge-Based Systems, differ from the usual computer programming techniques... their applications in the fields of structural design and welding are reviewed. ... Expert Systems, or KBES, are computer programs using AI... are not procedurally constructed as conventional computer programs usually are; the knowledge base of such systems is executable, unlike databases...

  18. A LSQR-type method provides a computationally efficient automated optimal choice of regularization parameter in diffuse optical tomography.

    PubMed

    Prakash, Jaya; Yalavarthy, Phaneendra K

    2013-03-01

    The aim was to develop a computationally efficient automated method for the optimal choice of the regularization parameter in diffuse optical tomography. The least-squares QR (LSQR)-type method that uses Lanczos bidiagonalization is known to be computationally efficient in performing the reconstruction procedure in diffuse optical tomography. The same is effectively deployed via an optimization procedure that uses the simplex method to find the optimal regularization parameter. The proposed LSQR-type method is compared with traditional methods such as the L-curve, generalized cross-validation (GCV), and the recently proposed minimal residual method (MRM)-based choice of regularization parameter, using numerical and experimental phantom data. The results indicate that the proposed LSQR-type and MRM-based methods perform similarly in terms of reconstructed image quality, and both are superior to the L-curve and GCV-based methods. The computational complexity of the proposed method is at least five times lower than that of the MRM-based method, making it the preferable technique. The LSQR-type method overcomes the computationally expensive nature of the MRM-based automated search for the optimal regularization parameter in diffuse optical tomographic imaging, making it more suitable for real-time deployment.
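
    The overall scheme, damped LSQR inside a simplex search over the regularization parameter, can be sketched as below. The selection criterion shown is a hypothetical stand-in for the paper's objective, and the matrix is a random surrogate for a diffuse-optical-tomography Jacobian.

    ```python
    import numpy as np
    from scipy.sparse.linalg import lsqr
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    J = rng.normal(size=(80, 40))      # stand-in for the DOT Jacobian
    x_true = rng.normal(size=40)
    b = J @ x_true + 0.05 * rng.normal(size=80)

    def objective(log_lam):
        lam = np.exp(log_lam[0])
        # lsqr with damp=lam solves min ||J x - b||^2 + lam^2 ||x||^2
        # via Lanczos bidiagonalization.
        x = lsqr(J, b, damp=lam)[0]
        # Hypothetical selection criterion: residual norm plus a penalty
        # on the solution norm (the paper's actual criterion differs).
        return np.linalg.norm(J @ x - b) + 0.1 * np.linalg.norm(x)

    # Nelder-Mead simplex search over the (log) regularization parameter.
    res = minimize(objective, x0=[np.log(0.1)], method="Nelder-Mead")
    print("selected regularization parameter:", np.exp(res.x[0]))
    ```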

  19. Face and construct validity of a computer-based virtual reality simulator for ERCP.

    PubMed

    Bittner, James G; Mellinger, John D; Imam, Toufic; Schade, Robert R; Macfadyen, Bruce V

    2010-02-01

    Currently, little evidence supports computer-based simulation for ERCP training. The aim was to determine the face and construct validity of a computer-based simulator for ERCP and assess its perceived utility as a training tool. Novice and expert endoscopists completed 2 simulated ERCP cases by using the GI Mentor II at the Virtual Education and Surgical Simulation Laboratory, Medical College of Georgia. Outcomes included times to complete the procedure, reach the papilla, and use fluoroscopy; attempts to cannulate the papilla, pancreatic duct, and common bile duct; and number of contrast injections and complications. Subjects assessed simulator graphics, procedural accuracy, difficulty, haptics, overall realism, and training potential. Only when performance data from cases A and B were combined did the GI Mentor II differentiate novices and experts based on times to complete the procedure, reach the papilla, and use fluoroscopy. Across skill levels, overall opinions were similar regarding graphics (moderately realistic), accuracy (similar to clinical ERCP), difficulty (similar to clinical ERCP), overall realism (moderately realistic), and haptics. Most participants (92%) claimed that the simulator has definite training potential or should be required for training. Limitations include the small sample size and single institution. The GI Mentor II demonstrated construct validity for ERCP based on select metrics. Most subjects thought that the simulated graphics, procedural accuracy, and overall realism exhibit face validity. Subjects deemed it a useful training tool. Study repetition involving more participants and cases may help confirm these results and establish the simulator's ability to differentiate skill levels based on ERCP-specific metrics.

  20. The Evolution of Computer Based Learning Software Design: Computer Assisted Teaching Unit Experience.

    ERIC Educational Resources Information Center

    Blandford, A. E.; Smith, P. R.

    1986-01-01

    Describes the style of design of computer simulations developed by Computer Assisted Teaching Unit at Queen Mary College with reference to user interface, input and initialization, input data vetting, effective display screen use, graphical results presentation, and need for hard copy. Procedures and problems relating to academic involvement are…

  1. Computer-Based Learning Packages Have a Role, but Care Needs to Be Given as to When They Are Delivered

    ERIC Educational Resources Information Center

    Quinn, Joseph G.; King, Karen; Roberts, David; Carey, Linda; Mousley, Angela

    2009-01-01

    It is compulsory for first-year biological science students at Queen's University Belfast to complete a range of assessed, laboratory-based practicals in various scientific procedures, including dissection. This study investigates student performance and attitudes when they have to complete a traditional dissection and a computer based learning…

  2. [Computer diagnosis of traumatic impact by hepatic lesion].

    PubMed

    Kimbar, V I; Sevankeev, V V

    2007-01-01

    A method of computer-assisted diagnosis of traumatic impact based on liver damage (the HEPAR-test program) is described. The program is based on diagnostic coefficients calculated using Bayes' probability method with Wald's recognition procedure.
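
    Diagnostic coefficients in this Bayes/Wald tradition are commonly 10·log10 likelihood ratios accumulated over findings and compared against Wald's sequential thresholds; a sketch with hypothetical coefficients and findings (not the HEPAR-test values):

    ```python
    import math

    # Hypothetical diagnostic coefficients: 10*log10 of the likelihood ratio
    # P(sign | impact type A) / P(sign | impact type B) for each finding.
    coefficients = {
        "capsule_rupture": 4.2,
        "central_fissure": -2.7,
        "subcapsular_hematoma": 3.1,
    }

    alpha, beta = 0.05, 0.05                      # permissible error rates
    upper = 10 * math.log10((1 - beta) / alpha)   # threshold to decide type A
    lower = 10 * math.log10(beta / (1 - alpha))   # threshold to decide type B

    def classify(findings):
        total = 0.0
        for f in findings:
            total += coefficients[f]
            if total >= upper:
                return "type A"
            if total <= lower:
                return "type B"
        return "undecided"  # Wald's procedure withholds a decision

    print(classify(["capsule_rupture", "subcapsular_hematoma"]))
    ```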

  3. Evaluation of spacecraft technology programs (effects on communication satellite business ventures), volume 2

    NASA Technical Reports Server (NTRS)

    Greenburg, J. S.; Kaplan, M.; Fishman, J.; Hopkins, C.

    1985-01-01

    The computational procedures used in the evaluation of spacecraft technology programs that impact upon commercial communication satellite operations are discussed. Computer programs and data bases are described.

  4. A Man-Machine System for Contemporary Counseling Practice: Diagnosis and Prediction.

    ERIC Educational Resources Information Center

    Roach, Arthur J.

    This paper looks at present and future capabilities for diagnosis and prediction in computer-based guidance efforts and reviews the problems and potentials which will accompany the implementation of such capabilities. In addition to necessary procedural refinement in prediction, future developments in computer-based educational and career…

  5. Helping Students Adapt to Computer-Based Encrypted Examinations

    ERIC Educational Resources Information Center

    Baker-Eveleth, Lori; Eveleth, Daniel M.; O'Neill, Michele; Stone, Robert W.

    2006-01-01

    The College of Business and Economics at the University of Idaho conducted a pilot study that used commercially available encryption software called Securexam to deliver computer-based examinations. A multi-step implementation procedure was developed, implemented, and then evaluated on the basis of what students viewed as valuable. Two key aspects…

  6. Local Education Agency Planning Analyst's Procedures. A Vocational Education Planning System for Local School Districts. Volume III.

    ERIC Educational Resources Information Center

    Goldman, Charles I.

    The manual is part of a series to assist in planning procedures for local and State vocational agencies. It details steps required to process a local education agency's data after the data have been coded onto keypunch forms. Program, course, and overhead data are input into a computer data base and error checks are performed. A computer model is…

  7. Benchmarking gate-based quantum computers

    NASA Astrophysics Data System (ADS)

    Michielsen, Kristel; Nocon, Madita; Willsch, Dennis; Jin, Fengping; Lippert, Thomas; De Raedt, Hans

    2017-11-01

    With the advent of public access to small gate-based quantum processors, it becomes necessary to develop a benchmarking methodology such that independent researchers can validate the operation of these processors. We explore the usefulness of a number of simple quantum circuits as benchmarks for gate-based quantum computing devices and show that circuits performing identity operations are very simple, scalable and sensitive to gate errors and are therefore very well suited for this task. We illustrate the procedure by presenting benchmark results for the IBM Quantum Experience, a cloud-based platform for gate-based quantum computing.
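
    The identity-circuit idea is easy to reproduce; a minimal sketch using Qiskit with the Aer simulator (both assumed installed), where two consecutive X gates form an identity and any '1' outcomes signal gate or readout errors:

    ```python
    from qiskit import QuantumCircuit, transpile
    from qiskit_aer import AerSimulator

    # An "identity" benchmark circuit: two consecutive X gates cancel,
    # so an error-free device must always measure |0>.
    qc = QuantumCircuit(1, 1)
    qc.x(0)
    qc.x(0)
    qc.measure(0, 0)

    sim = AerSimulator()
    counts = sim.run(transpile(qc, sim), shots=1024).result().get_counts()

    # Any counts observed for '1' indicate gate or readout errors; on real
    # hardware the '1' fraction grows as more cancelling gate pairs are added.
    print(counts)
    ```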

  8. An adaptive mesh-moving and refinement procedure for one-dimensional conservation laws

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Flaherty, Joseph E.; Arney, David C.

    1993-01-01

    We examine the performance of an adaptive mesh-moving and/or local mesh refinement procedure for the finite difference solution of one-dimensional hyperbolic systems of conservation laws. Adaptive motion of a base mesh is designed to isolate spatially distinct phenomena, and recursive local refinement of the time step and cells of the stationary or moving base mesh is performed in regions where a refinement indicator exceeds a prescribed tolerance. These adaptive procedures are incorporated into a computer code that includes a MacCormack finite difference scheme with Davis' artificial viscosity model and a discretization error estimate based on Richardson's extrapolation. Experiments are conducted on three problems in order to qualify the advantages of adaptive techniques relative to uniform mesh computations and the relative benefits of mesh moving and refinement. Key results indicate that local mesh refinement, with and without mesh moving, can provide reliable solutions at much lower computational cost than possible on uniform meshes; that mesh motion can be used to improve the results of uniform mesh solutions for a modest computational effort; that the cost of managing the tree data structure associated with refinement is small; and that a combination of mesh motion and refinement reliably produces solutions for the least cost per unit accuracy.
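
    The Richardson-extrapolation error estimate mentioned above follows the standard two-grid form; a toy sketch for a scheme of formal order p = 2, with an analytic stand-in for the numerical solution:

    ```python
    import numpy as np

    def solve(h):
        # Stand-in for a second-order numerical solution of some problem:
        # the exact answer sin(1) plus an O(h^2) discretization error.
        return np.sin(1.0) + 0.5 * h**2 * np.cos(1.0)

    p = 2                            # formal order of accuracy of the scheme
    u_h, u_2h = solve(0.1), solve(0.2)

    # Richardson estimate of the correction (exact - u_h):
    correction = (u_h - u_2h) / (2**p - 1)
    print("estimated correction:", correction)
    print("true correction     :", np.sin(1.0) - u_h)
    ```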

  9. Logistical Consideration in Computer-Based Screening of Astronaut Applicants

    NASA Technical Reports Server (NTRS)

    Galarza, Laura

    2000-01-01

    This presentation reviews the logistical, ergonomic, and psychometric issues and data related to the development and operational use of a computer-based system for the psychological screening of astronaut applicants. The Behavioral Health and Performance Group (BHPG) at the Johnson Space Center upgraded its astronaut psychological screening and selection procedures for the 1999 astronaut applicants and subsequent astronaut selection cycles. The questionnaires, tests, and inventories were upgraded from a paper-and-pencil system to a computer-based system. Members of the BHPG and a computer programmer designed and developed the needed interfaces (screens, buttons, etc.) and programs for the astronaut psychological assessment system. This intranet-based system included the user-friendly computer-based administration of tests, test scoring, generation of reports, the integration of test administration and test output in a single system, and a complete database for past, present, and future selection data. Upon completion of the system development phase, four beta and usability tests were conducted with the newly developed system. The first three tests included 1 to 3 participants each. The final system test was conducted with 23 participants tested simultaneously. Usability and ergonomic data were collected from the system (beta) test participants and from 1999 astronaut applicants who volunteered the information in exchange for anonymity. Beta and usability test data were analyzed to examine operational, ergonomic, programming, test administration, and scoring issues related to computer-based testing. Results showed a preference for computer-based testing over paper-and-pencil procedures. The data also reflected specific ergonomic, usability, psychometric, and logistical concerns that should be taken into account in future selection cycles. Conclusion. Psychological, psychometric, human, and logistical factors must be examined and considered carefully when developing and using a computer-based system for psychological screening and selection.

  10. Computer Proficiency for Online Learning: Factorial Invariance of Scores among Teachers

    ERIC Educational Resources Information Center

    Martin, Amy L.; Reeves, Todd D.; Smith, Thomas J.; Walker, David A.

    2016-01-01

    Online learning is variously employed in K-12 education, including for teacher professional development. However, the use of computer-based technologies for learning purposes assumes learner computer proficiency, making this construct an important domain of procedural knowledge in formal and informal online learning contexts. Addressing this…

  11. Partitioning strategy for efficient nonlinear finite element dynamic analysis on multiprocessor computers

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Peters, Jeanne M.

    1989-01-01

    A computational procedure is presented for the nonlinear dynamic analysis of unsymmetric structures on vector multiprocessor systems. The procedure is based on a novel hierarchical partitioning strategy in which the response of the unsymmetric structure is approximated by a combination of symmetric and antisymmetric response vectors (modes), each obtained by using only a fraction of the degrees of freedom of the original finite element model. The three key elements of the procedure, which result in a high degree of concurrency throughout the solution process, are: (1) a mixed (or primitive variable) formulation with independent shape functions for the different fields; (2) operator splitting or restructuring of the discrete equations at each time step to delineate the symmetric and antisymmetric vectors constituting the response; and (3) a two-level iterative process for generating the response of the structure. An assessment is made of the effectiveness of the procedure on the CRAY X-MP/4 computers.

  12. A Computational Procedure for Identifying Bilinear Representations of Nonlinear Systems Using Volterra Kernels

    NASA Technical Reports Server (NTRS)

    Kvaternik, Raymond G.; Silva, Walter A.

    2008-01-01

    A computational procedure for identifying the state-space matrices corresponding to discrete bilinear representations of nonlinear systems is presented. A key feature of the method is the use of first- and second-order Volterra kernels (first- and second-order pulse responses) to characterize the system. The present method is based on an extension of a continuous-time bilinear system identification procedure given in a 1971 paper by Bruni, di Pillo, and Koch. The analytical and computational considerations that underlie the original procedure and its extension to the title problem are presented and described, pertinent numerical considerations associated with the process are discussed, and results obtained from the application of the method to a variety of nonlinear problems from the literature are presented. The results of these exploratory numerical studies are decidedly promising and provide sufficient credibility for further examination of the applicability of the method.

  13. Hand held data collection and monitoring system for nuclear facilities

    DOEpatents

    Brayton, D.D.; Scharold, P.G.; Thornton, M.W.; Marquez, D.L.

    1999-01-26

    Apparatus and method is disclosed for a data collection and monitoring system that utilizes a pen based hand held computer unit which has contained therein interaction software that allows the user to review maintenance procedures, collect data, compare data with historical trends and safety limits, and input new information at various collection sites. The system has a means to allow automatic transfer of the collected data to a main computer data base for further review, reporting, and distribution purposes and uploading updated collection and maintenance procedures. The hand held computer has a running to-do list so sample collection and other general tasks, such as housekeeping are automatically scheduled for timely completion. A done list helps users to keep track of all completed tasks. The built-in check list assures that work process will meet the applicable processes and procedures. Users can hand write comments or drawings with an electronic pen that allows the users to directly interface information on the screen. 15 figs.

  14. Hand held data collection and monitoring system for nuclear facilities

    DOEpatents

    Brayton, Darryl D.; Scharold, Paul G.; Thornton, Michael W.; Marquez, Diana L.

    1999-01-01

    Apparatus and method is disclosed for a data collection and monitoring system that utilizes a pen based hand held computer unit which has contained therein interaction software that allows the user to review maintenance procedures, collect data, compare data with historical trends and safety limits, and input new information at various collection sites. The system has a means to allow automatic transfer of the collected data to a main computer data base for further review, reporting, and distribution purposes and uploading updated collection and maintenance procedures. The hand held computer has a running to-do list so sample collection and other general tasks, such as housekeeping are automatically scheduled for timely completion. A done list helps users to keep track of all completed tasks. The built-in check list assures that work process will meet the applicable processes and procedures. Users can hand write comments or drawings with an electronic pen that allows the users to directly interface information on the screen.

  15. Development of a Computer-Based Measure of Listening Comprehension of Science Talk

    ERIC Educational Resources Information Center

    Lin, Sheau-Wen; Liu, Yu; Chen, Shin-Feng; Wang, Jing-Ru; Kao, Huey-Lien

    2015-01-01

    The purpose of this study was to develop a computer-based assessment for elementary school students' listening comprehension of science talk within an inquiry-oriented environment. The development procedure had 3 steps: a literature review to define the framework of the test, collecting and identifying key constructs of science talk, and…

  16. Computer versus paper--does it make any difference in test performance?

    PubMed

    Karay, Yassin; Schauber, Stefan K; Stosch, Christoph; Schüttpelz-Brauns, Katrin

    2015-01-01

    CONSTRUCT: In this study, we examine the differences in test performance between the paper-based and the computer-based version of the Berlin formative Progress Test. In this context it is the first study that allows controlling for students' prior performance. Computer-based tests make possible a more efficient examination procedure for test administration and review. Although university staff will benefit largely from computer-based tests, the question arises whether computer-based tests influence students' test performance. A total of 266 German students from the 9th and 10th semester of medicine (comparable with the 4th-year North American medical school schedule) participated in the study (paper = 132, computer = 134). The allocation of the test format was conducted as a randomized matched-pair design in which students were first sorted according to their prior test results. The organizational procedure, the examination conditions, the room and seating arrangements, as well as the order of questions and answers, were identical in both groups. The sociodemographic variables and pretest scores of both groups were comparable. The test results from the paper and computer versions did not differ. The groups remained within the allotted time, but students using the computer version (particularly the high performers) needed significantly less time to complete the test. In addition, we found significant differences in guessing behavior: low performers using the computer version guessed significantly more than low-performing students using the paper-pencil version. Participants in computer-based tests are not at a disadvantage in terms of their test results. The computer-based test required less processing time. The longer processing time with the paper-pencil version might be due to the time needed to write the answer down and to check that it was transferred correctly. It is still not known why students using the computer version (particularly low-performing students) guess at a higher rate. Further studies are necessary to understand this finding.

  17. Computed tomography or rhinoscopy as the first-line procedure for suspected nasal tumor: a pilot study.

    PubMed

    Finck, Marlène; Ponce, Frédérique; Guilbaud, Laurent; Chervier, Cindy; Floch, Franck; Cadoré, Jean-Luc; Chuzel, Thomas; Hugonnard, Marine

    2015-02-01

    There are no evidence-based guidelines as to whether computed tomography (CT) or endoscopy should be selected as the first-line procedure when a nasal tumor is suspected in a dog or a cat and only one examination can be performed. Computed tomography and rhinoscopic features of 17 dogs and 5 cats with a histopathologically or cytologically confirmed nasal tumor were retrospectively reviewed. The level of suspicion for nasal neoplasia after CT and/or rhinoscopy was compared to the definitive diagnosis. Twelve animals underwent CT, 14 underwent rhinoscopy, and 4 both examinations. Of the 12 CT examinations performed, 11 (92%) resulted in the conclusion that a nasal tumor was the most likely diagnosis compared with 9/14 (64%) for rhinoscopies. Computed tomography appeared to be more reliable than rhinoscopy for detecting nasal tumors and should therefore be considered as the first-line procedure.

  18. Computed tomography or rhinoscopy as the first-line procedure for suspected nasal tumor: A pilot study

    PubMed Central

    Finck, Marlène; Ponce, Frédérique; Guilbaud, Laurent; Chervier, Cindy; Floch, Franck; Cadoré, Jean-Luc; Chuzel, Thomas; Hugonnard, Marine

    2015-01-01

    There are no evidence-based guidelines as to whether computed tomography (CT) or endoscopy should be selected as the first-line procedure when a nasal tumor is suspected in a dog or a cat and only one examination can be performed. Computed tomography and rhinoscopic features of 17 dogs and 5 cats with a histopathologically or cytologically confirmed nasal tumor were retrospectively reviewed. The level of suspicion for nasal neoplasia after CT and/or rhinoscopy was compared to the definitive diagnosis. Twelve animals underwent CT, 14 underwent rhinoscopy, and 4 both examinations. Of the 12 CT examinations performed, 11 (92%) resulted in the conclusion that a nasal tumor was the most likely diagnosis compared with 9/14 (64%) for rhinoscopies. Computed tomography appeared to be more reliable than rhinoscopy for detecting nasal tumors and should therefore be considered as the first-line procedure. PMID:25694669

  19. From serological to computer cross-matching in nine hospitals.

    PubMed

    Georgsen, J; Kristensen, T

    1998-01-01

    In 1991 it was decided to reorganise the transfusion service of the County of Funen. The aims were to standardise and improve the quality of blood components, laboratory procedures and the transfusion service, and to reduce the number of outdated blood units. Part of the efficiency gains was reinvested in a dedicated computer system making it possible--among other things--to change the cross-match procedures from serological to computer cross-matching according to the ABCD-concept. This communication describes how this transition was performed in terms of laboratory techniques, education of personnel, and implementation of the computer system, and indicates the results obtained. The Funen Transfusion Service has by now performed more than 100,000 red cell transfusions based on ABCD-cross-matching and has not encountered any problems. Major results are the significant reductions in cross-match procedures and blood grouping, as well as in the number of outdated blood components.

  20. Computer based interpretation of infrared spectra-structure of the knowledge-base, automatic rule generation and interpretation

    NASA Astrophysics Data System (ADS)

    Ehrentreich, F.; Dietze, U.; Meyer, U.; Abbas, S.; Schulz, H.

    1995-04-01

    A main task within the SpecInfo project is to develop interpretation tools that can handle many more of the complicated, more specific spectrum-structure correlations. In the first step, the empirical knowledge about the assignment of structural groups and their characteristic IR bands was collected from the literature and represented in a computer-readable, well-structured form. Vague verbal rules are managed by introducing linguistic variables. The next step was the development of automatic rule-generating procedures. We combined and extended the IDIOTS algorithm with the set-theory-based algorithm of Blaffert. The procedures were successfully applied to the SpecInfo database. The realization of the preceding items is a prerequisite for improving the computerized structure elucidation procedure.

  1. Crew procedures development techniques

    NASA Technical Reports Server (NTRS)

    Arbet, J. D.; Benbow, R. L.; Hawk, M. L.; Mangiaracina, A. A.; Mcgavern, J. L.; Spangler, M. C.

    1975-01-01

    The study developed requirements for, designed, developed, checked out, and demonstrated the Procedures Generation Program (PGP). The PGP is a digital computer program which provides a computerized means of developing flight crew procedures based on crew action in the shuttle procedures simulator. In addition, it provides a real-time display of procedures, difference procedures, performance data, and performance evaluation data. Reconstruction of displays is possible post-run. Data may be copied, stored on magnetic tape, and transferred to the document processor for editing and documentation distribution.

  2. Automation photometer of Hitachi U-2000 spectrophotometer with RS-232C-based computer

    PubMed Central

    Kumar, K. Senthil; Lakshmi, B. S.; Pennathur, Gautam

    1998-01-01

    The interfacing of a commonly used spectrophotometer, the Hitachi U-2000, through its RS-232C port to an IBM-compatible computer is described. The hardware for data acquisition was designed by suitably modifying readily available materials, and the software was written in the C programming language. The various steps involved in these procedures are elucidated in detail. The efficacy of the procedure was tested experimentally by running the visible spectrum of a cyanine dye. The spectrum was plotted through a printer hooked to the computer. The spectrum was also plotted by transforming the abscissa to the wavenumber scale; this was carried out using another module written in C. The efficiency of the whole set-up has been calculated using standard procedures. PMID:18924834

  3. Preliminary Investigation of Time Remaining Display on the Computer-based Emergency Operating Procedure

    NASA Astrophysics Data System (ADS)

    Suryono, T. J.; Gofuku, A.

    2018-02-01

    One of the important things in the mitigation of nuclear power plant accidents is time management. Accidents should be resolved as soon as possible in order to prevent core melting and the release of radioactive material to the environment. In this case, operators should follow the emergency operating procedure related to the accident, step by step and within the allowable time. Nowadays, advanced main control rooms are equipped with computer-based procedures (CBPs), which make it easier for operators to do their tasks of monitoring and controlling the reactor. However, most CBPs do not include a time remaining display feature, which would inform operators of the time available for executing procedure steps and warn them if they reach the time limit. Furthermore, such a feature would increase operators' awareness of their current situation in the procedure. This paper investigates this issue. A simplified emergency operating procedure (EOP) for a steam generator tube rupture (SGTR) accident in a PWR plant is applied. In addition, the sequence of actions in each step of the procedure is modelled using multilevel flow modelling (MFM) and influence propagation rules. The action time for each step is predicted from similar accident cases using support vector regression. The derived time is then processed and displayed on a CBP user interface.
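
    The support-vector-regression step-time prediction can be sketched with scikit-learn; the step features and timing data below are hypothetical stand-ins for the similar-accident records used in the paper:

    ```python
    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(1)

    # Hypothetical features per historical procedure step:
    # [number of component manipulations, number of checks, crew size]
    X = rng.integers(1, 6, size=(40, 3)).astype(float)
    # Hypothetical observed step completion times in seconds.
    y = 30 * X[:, 0] + 15 * X[:, 1] - 5 * X[:, 2] + rng.normal(0, 10, 40)

    model = SVR(kernel="rbf", C=100.0, epsilon=5.0).fit(X, y)

    # Predicted time for an upcoming step; a remaining-time display would
    # subtract elapsed time from predictions like this one.
    print("predicted step time (s):", model.predict([[3, 2, 2]])[0])
    ```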

  4. Redesigning the Human-Machine Interface for Computer-Mediated Visual Technologies.

    ERIC Educational Resources Information Center

    Acker, Stephen R.

    1986-01-01

    This study examined an application of a human-machine interface which relies on the use of optical bar codes incorporated in a computer-based module to teach radio production. The sequencing procedure used establishes the user, rather than the computer, as the locus of control for the mediated instruction. (Author/MBR)

  5. A Computer-Based, Interactive Videodisc Job Aid and Expert System for Electron Beam Lithography Integration and Diagnostic Procedures.

    ERIC Educational Resources Information Center

    Stevenson, Kimberly

    This master's thesis describes the development of an expert system and interactive videodisc computer-based instructional job aid used for assisting in the integration of electron beam lithography devices. Comparable to all comprehensive training, expert system and job aid development require a criterion-referenced systems approach treatment to…

  6. Computational flow development for unsteady viscous flows: Foundation of the numerical method

    NASA Technical Reports Server (NTRS)

    Bratanow, T.; Spehert, T.

    1978-01-01

    A procedure is presented for effective consideration of viscous effects in computational development of high Reynolds number flows. The procedure is based on the interpretation of the Navier-Stokes equations as vorticity transport equations. The physics of the flow was represented in a form suitable for numerical analysis. Lighthill's concept for flow development for computational purposes was adapted. The vorticity transport equations were cast in a form convenient for computation. A statement for these equations was written using the method of weighted residuals and applying the Galerkin criterion. An integral representation of the induced velocity was applied on the basis of the Biot-Savart law. Distribution of new vorticity, produced at wing surfaces over small computational time intervals, was assumed to be confined to a thin region around the wing surfaces.
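
    For reference, the vorticity-transport interpretation and the Biot-Savart velocity reconstruction referred to above take the standard forms:

    ```latex
    % Vorticity transport form (curl of the incompressible Navier-Stokes equations)
    \frac{\partial \boldsymbol{\omega}}{\partial t}
      + (\mathbf{u}\cdot\nabla)\boldsymbol{\omega}
      = (\boldsymbol{\omega}\cdot\nabla)\mathbf{u}
      + \nu\,\nabla^{2}\boldsymbol{\omega},
    \qquad
    \boldsymbol{\omega} = \nabla\times\mathbf{u}

    % Velocity induced by the vorticity field (Biot-Savart law)
    \mathbf{u}(\mathbf{x}) = \frac{1}{4\pi}\int_{V}
      \frac{\boldsymbol{\omega}(\mathbf{x}')\times(\mathbf{x}-\mathbf{x}')}
           {\lvert \mathbf{x}-\mathbf{x}' \rvert^{3}}\,dV'
    ```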

  7. An Assessment of Artificial Compressibility and Pressure Projection Methods for Incompressible Flow Simulations

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan; Kiris, C.; Smith, Charles A. (Technical Monitor)

    1998-01-01

    The performance of two commonly used numerical procedures, one based on the artificial compressibility method and the other on the pressure projection method, is compared. These formulations are selected primarily because they are designed for three-dimensional applications. The computational procedures are compared by obtaining steady-state solutions of a wake vortex and unsteady solutions of a curved duct flow. For steady computations, artificial compressibility was very efficient in terms of computing time and robustness. For an unsteady flow which requires a small physical time step, the pressure projection method was found to be computationally more efficient than the artificial compressibility method. This comparison is intended to give some basis for selecting a method or a flow solution code for large three-dimensional applications where computing resources become a critical issue.
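
    For reference, the two formulations being compared take the standard forms: artificial compressibility relaxes the continuity equation with a pseudo-time pressure derivative, while pressure projection corrects an intermediate velocity with the gradient of a pressure obtained from a Poisson equation:

    ```latex
    % Artificial compressibility (Chorin): continuity gains a pseudo-time term
    % and the system is marched in pseudo-time tau until the term vanishes,
    % recovering the divergence-free constraint.
    \frac{1}{\beta}\frac{\partial p}{\partial \tau} + \nabla\cdot\mathbf{u} = 0

    % Pressure projection: an intermediate velocity u* is projected onto the
    % divergence-free space via a pressure Poisson equation.
    \nabla^{2} p = \frac{\rho}{\Delta t}\,\nabla\cdot\mathbf{u}^{*},
    \qquad
    \mathbf{u}^{n+1} = \mathbf{u}^{*} - \frac{\Delta t}{\rho}\,\nabla p
    ```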

  8. Fuel Burn Estimation Using Real Track Data

    NASA Technical Reports Server (NTRS)

    Chatterji, Gano B.

    2011-01-01

    A procedure for estimating fuel burned based on actual flight track data, and drag and fuel-flow models is described. The procedure consists of estimating aircraft and wind states, lift, drag and thrust. Fuel-flow for jet aircraft is determined in terms of thrust, true airspeed and altitude as prescribed by the Base of Aircraft Data fuel-flow model. This paper provides a theoretical foundation for computing fuel-flow with most of the information derived from actual flight data. The procedure does not require an explicit model of thrust and calibrated airspeed/Mach profile which are typically needed for trajectory synthesis. To validate the fuel computation method, flight test data provided by the Federal Aviation Administration were processed. Results from this method show that fuel consumed can be estimated within 1% of the actual fuel consumed in the flight test. Next, fuel consumption was estimated with simplified lift and thrust models. Results show negligible difference with respect to the full model without simplifications. An iterative takeoff weight estimation procedure is described for estimating fuel consumption, when takeoff weight is unavailable, and for establishing fuel consumption uncertainty bounds. Finally, the suitability of using radar-based position information for fuel estimation is examined. It is shown that fuel usage could be estimated within 5.4% of the actual value using positions reported in the Airline Situation Display to Industry data with simplified models and iterative takeoff weight computation.
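
    The thrust-then-fuel-flow chain can be sketched as below; the coefficients are hypothetical placeholders in a BADA-like jet fuel-flow form (thrust-specific consumption linear in true airspeed), since the actual Base of Aircraft Data coefficients are aircraft-specific and licensed:

    ```python
    import math

    # Hypothetical placeholder coefficients in a BADA-like jet fuel-flow form;
    # real BADA coefficients are aircraft-specific and not reproduced here.
    CF1 = 0.75   # kg/(min*kN)
    CF2 = 900.0  # knots

    def thrust_kN(mass_kg, drag_kN, dV_dt, gamma_rad, g=9.81):
        """Thrust from the point-mass equations: T = D + m*dV/dt + m*g*sin(gamma)."""
        return drag_kN + mass_kg * (dV_dt + g * math.sin(gamma_rad)) / 1000.0

    def fuel_flow_kg_per_min(thrust_kn, tas_knots):
        """Jet nominal fuel flow: thrust-specific consumption linear in TAS."""
        return CF1 * (1.0 + tas_knots / CF2) * thrust_kn

    # Example: 60-tonne aircraft, 40 kN drag, accelerating at 0.2 m/s^2
    # in a 2-degree climb at 450 knots true airspeed.
    T = thrust_kN(60000.0, 40.0, 0.2, math.radians(2.0))
    print("thrust (kN):", round(T, 1),
          " fuel flow (kg/min):", round(fuel_flow_kg_per_min(T, 450.0), 1))
    ```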

  9. Reduced complexity structural modeling for automated airframe synthesis

    NASA Technical Reports Server (NTRS)

    Hajela, Prabhat

    1987-01-01

    A procedure is developed for the optimum sizing of wing structures based on representing the built-up finite element assembly of the structure by equivalent beam models. The reduced-order beam models are computationally less demanding in an optimum design environment which dictates repetitive analysis of several trial designs. The design procedure is implemented in a computer program requiring geometry and loading information to create the wing finite element model and its equivalent beam model, and providing a rapid estimate of the optimum weight obtained from a fully stressed design approach applied to the beam. The synthesis procedure is demonstrated for representative conventional-cantilever and joined wing configurations.

  10. Computer-oriented synthesis of wide-band non-uniform negative resistance amplifiers

    NASA Technical Reports Server (NTRS)

    Branner, G. R.; Chan, S.-P.

    1975-01-01

    This paper presents a synthesis procedure which provides design values for broad-band amplifiers using non-uniform negative resistance devices. Employing a weighted least squares optimization scheme, the technique, based on an extension of procedures for uniform negative resistance devices, is capable of providing designs for a variety of matching network topologies. It also provides, for the first time, quantitative results for predicting the effects of parameter element variations on overall amplifier performance. The technique is also unique in that it employs exact partial derivatives for optimization and sensitivity computation. In comparison with conventional procedures, significantly improved broad-band designs are shown to result.

  11. Automating approximate Bayesian computation by local linear regression.

    PubMed

    Thornton, Kevin R

    2009-07-07

    In several biological contexts, parameter inference often relies on computationally intensive techniques. "Approximate Bayesian Computation", or ABC, methods based on summary statistics have become increasingly popular. A particular flavor of ABC that uses a linear regression to approximate the posterior distribution of the parameters, conditional on the summary statistics, is computationally appealing, yet no standalone tool exists to automate the procedure. Here, I describe a program to implement the method. The software package ABCreg implements the local linear-regression approach to ABC. The advantages are: 1. The code is standalone and fully documented. 2. The program will automatically process multiple data sets and create unique output files for each (which may be processed immediately in R), facilitating the testing of inference procedures on simulated data, or the analysis of multiple data sets. 3. The program implements two different transformation methods for the regression step. 4. Analysis options are controlled on the command line by the user, and the program is designed to output warnings for cases where the regression fails. 5. The program does not depend on any particular simulation machinery (coalescent, forward-time, etc.), and is therefore a general tool for processing the results from any simulation. 6. The code is open-source and modular. Examples of applying the software to empirical data from Drosophila melanogaster, and of testing the procedure on simulated data, are shown. In practice, ABCreg simplifies implementing ABC based on local linear regression.
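
    The local linear-regression adjustment that ABCreg automates can be sketched in a minimal, unweighted form: accept the simulations whose summaries are closest to the observed summary, regress parameters on summaries among the accepted draws, and shift the accepted parameters to the observed summary (the toy model below is illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Toy model: parameter theta ~ uniform prior; summary s = theta + noise.
    theta = rng.uniform(0, 10, 100_000)
    s = theta + rng.normal(0, 1, theta.size)
    s_obs = 4.0

    # 1. Rejection step: keep the simulations closest to the observed summary.
    keep = np.argsort(np.abs(s - s_obs))[:1000]
    th, ss = theta[keep], s[keep]

    # 2. Local linear regression of theta on s among the accepted draws.
    b, a = np.polyfit(ss, th, 1)

    # 3. Regression adjustment: theta* = theta - b * (s - s_obs),
    #    shifting accepted parameters to the observed summary.
    theta_adj = th - b * (ss - s_obs)
    print("posterior mean estimate:", theta_adj.mean())
    ```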

  12. Parsing English. Course Notes for a Tutorial on Computational Semantics, March 17-22, 1975.

    ERIC Educational Resources Information Center

    Wilks, Yorick

    The course in parsing English is essentially a survey and comparison of several of the principal systems used for understanding natural language. The basic procedure of parsing is described. The discussion of the principal systems is based on the idea that "meaning is procedures," that is, that the procedures of application give a parsed…

  13. Physical and Chemical Properties of the Copper-Alanine System: An Advanced Laboratory Project

    ERIC Educational Resources Information Center

    Farrell, John J.

    1977-01-01

    An integrated physical-analytical-inorganic chemistry laboratory procedure for use with undergraduate biology majors is described. The procedure requires five to six laboratory periods and includes acid-base standardizations, potentiometric determinations, computer usage, spectrophotometric determinations of crystal-field splitting…

  14. Materials with periodic internal structure: Computation based on homogenization and comparison with experiment

    NASA Technical Reports Server (NTRS)

    Jansson, S.; Leckie, F. A.; Onat, E. T.; Ranaweera, M. P.

    1990-01-01

    The combination of thermal and mechanical loading expected in practice means that constitutive equations of metal matrix composites must be developed which deal with time-independent and time-dependent irreversible deformation. Also, the internal state of composites is extremely complicated, which underlines the need to formulate macroscopic constitutive equations with a limited number of state variables representing the internal state at the micro level. One available method for calculating the macro properties of composites in terms of the distribution and properties of the constituent materials is the method of homogenization, whose formulation is based on the periodicity of the substructure of the composite. A homogenization procedure was developed which lends itself to the use of the finite element procedure. The efficiency of these procedures in determining the macroscopic properties of a composite system from its constituent properties was demonstrated using an aluminum plate perforated by directionally oriented slits. The selection of this problem is based on the facts that extensive experimental results exist, that the macroscopic response is highly anisotropic, and that the slits produce very high stress gradients which severely test the effectiveness of the computational procedures. Furthermore, both elastic and plastic properties were investigated so that the application to practical systems with inelastic deformation should be able to proceed without difficulty. The effectiveness of the procedures was rigorously checked against experimental results and against the predictions of approximate calculations. Using the computational results, it is illustrated how macroscopic constitutive equations can be expressed in terms of the elastic and limit load behavior.

  15. A computer program for the calculation of the flow field in supersonic mixed-compression inlets at angle of attack using the three-dimensional method of characteristics with discrete shock wave fitting

    NASA Technical Reports Server (NTRS)

    Vadyak, J.; Hoffman, J. D.; Bishop, A. R.

    1978-01-01

    The calculation procedure is based on the method of characteristics for steady three-dimensional flow. The bow shock wave and the internal shock wave system were computed using a discrete shock wave fitting procedure. The general structure of the computer program is discussed, and a brief description of each subroutine is given. All program input parameters are defined, and a brief discussion on interpretation of the output is provided. A number of sample cases, complete with data deck listings, are presented.

  16. A Computer Based Cognitive Simulation of Cataract Surgery

    DTIC Science & Technology

    2011-12-01

    for zonular absence, assess for notable lenticular astigmatism... How and when do you decide to use a capsular tension ring? (Expert)... The Virtual Mentor Cataract Surgery Trainer is a computer-based, cognitive simulation of phacoemulsification cataract surgery. It is... the Cataract Trainer. Phacoemulsification cataract surgery (phaco) is a difficult procedure to learn, with little margin for error. As in other...

  17. IDEA Technical Report No. 4. Description of IDEA Standard Form Data Base.

    ERIC Educational Resources Information Center

    Cashin, William E.; Perrin, Bruce M.

    The data and computational procedures used by the IDEA System to generate IDEA Reports from information collected on the Standard Form of the IDEA Survey Form are described in this technical report. The computations for each of the seven parts of the IDEA Report are explained. The data base used for this 1978-79 Kansas State University study…

  18. Equivalent model construction for a non-linear dynamic system based on an element-wise stiffness evaluation procedure and reduced analysis of the equivalent system

    NASA Astrophysics Data System (ADS)

    Kim, Euiyoung; Cho, Maenghyo

    2017-11-01

    In most non-linear analyses, the construction of a system matrix uses a large amount of computation time, comparable to the computation time required by the solving process. If the process for computing non-linear internal force matrices is substituted with an effective equivalent model that enables the bypass of numerical integrations and assembly processes used in matrix construction, efficiency can be greatly enhanced. A stiffness evaluation procedure (STEP) establishes non-linear internal force models using polynomial formulations of displacements. To efficiently identify an equivalent model, the method has evolved such that it is based on a reduced-order system. The reduction process, however, makes the equivalent model difficult to parameterize, which significantly affects the efficiency of the optimization process. In this paper, therefore, a new STEP, E-STEP, is proposed. Based on the element-wise nature of the finite element model, the stiffness evaluation is carried out element-by-element in the full domain. Since the unit of computation for the stiffness evaluation is restricted by element size, and since the computation is independent, the equivalent model can be constructed efficiently in parallel, even in the full domain. Due to the element-wise nature of the construction procedure, the equivalent E-STEP model is easily characterized by design parameters. Various reduced-order modeling techniques can be applied to the equivalent system in a manner similar to how they are applied in the original system. The reduced-order model based on E-STEP is successfully demonstrated for the dynamic analyses of non-linear structural finite element systems under varying design parameters.

  19. A Formally-Verified Decision Procedure for Univariate Polynomial Computation Based on Sturm's Theorem

    NASA Technical Reports Server (NTRS)

    Narkawicz, Anthony J.; Munoz, Cesar A.

    2014-01-01

    Sturm's Theorem is a well-known result in real algebraic geometry that provides a function that computes the number of roots of a univariate polynomial in a semiopen interval. This paper presents a formalization of this theorem in the PVS theorem prover, as well as a decision procedure that checks whether a polynomial is always positive, nonnegative, nonzero, negative, or nonpositive on any input interval. The soundness and completeness of the decision procedure are proven in PVS. The procedure and its correctness properties enable the implementation of a PVS strategy for automatically proving existential and universal univariate polynomial inequalities. Since the decision procedure is formally verified in PVS, the soundness of the strategy depends solely on the internal logic of PVS rather than on an external oracle. The procedure itself uses a combination of Sturm's Theorem, an interval bisection procedure, and the fact that a polynomial with exactly one root in a bounded interval is always nonnegative on that interval if and only if it is nonnegative at both endpoints.
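
    The root-counting half of Sturm's Theorem is easy to sketch outside PVS; with SymPy's sturm function, the number of distinct real roots in (a, b] is the difference in sign changes of the sequence evaluated at the endpoints:

    ```python
    from sympy import symbols, sturm, Poly, sign

    x = symbols('x')
    p = Poly(x**3 - 3*x + 1, x)

    seq = sturm(p)  # Sturm sequence p0, p1, ..., pm

    def sign_changes(vals):
        """Count sign changes in a sequence, ignoring zeros."""
        vals = [v for v in vals if v != 0]
        return sum(1 for u, v in zip(vals, vals[1:]) if sign(u) != sign(v))

    def roots_in(a, b):
        """Number of distinct real roots of p in the semiopen interval (a, b]."""
        return sign_changes([q.eval(a) for q in seq]) - \
               sign_changes([q.eval(b) for q in seq])

    # x^3 - 3x + 1 has three real roots; exactly two of them lie in (0, 2].
    print(roots_in(0, 2))
    ```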

  20. Transport of Space Environment Electrons: A Simplified Rapid-Analysis Computational Procedure

    NASA Technical Reports Server (NTRS)

    Nealy, John E.; Anderson, Brooke M.; Cucinotta, Francis A.; Wilson, John W.; Katz, Robert; Chang, C. K.

    2002-01-01

    A computational procedure for describing transport of electrons in condensed media has been formulated for application to effects and exposures from spectral distributions typical of electrons trapped in planetary magnetic fields. The procedure is based on earlier parameterizations established from numerous electron beam experiments. New parameterizations have been derived that logically extend the domain of application to low molecular weight (high hydrogen content) materials and higher energies (approximately 50 MeV). The production and transport of high energy photons (bremsstrahlung) generated in the electron transport processes have also been modeled using tabulated values of photon production cross sections. A primary purpose for developing the procedure has been to provide a means for rapidly performing numerous repetitive calculations essential for electron radiation exposure assessments for complex space structures. Several favorable comparisons have been made with previous calculations for typical space environment spectra, which have indicated that accuracy has not been substantially compromised at the expense of computational speed.

  1. Conditional Monte Carlo randomization tests for regression models.

    PubMed

    Parhat, Parwen; Rosenberger, William F; Diao, Guoqing

    2014-08-15

    We discuss the computation of randomization tests for clinical trials of two treatments when the primary outcome is based on a regression model. We begin by revisiting the seminal paper of Gail, Tan, and Piantadosi (1988), and then describe a method based on Monte Carlo generation of randomization sequences. The tests based on this Monte Carlo procedure are design-based, in that they incorporate the particular randomization procedure used. We discuss permuted block designs, complete randomization, and biased coin designs. We also use a new technique by Plamadeala and Rosenberger (2012) for simple computation of conditional randomization tests. Like Gail, Tan, and Piantadosi, we focus on residuals from generalized linear models and martingale residuals from survival models. Such techniques do not apply to longitudinal data analysis, and we introduce a method for computation of randomization tests based on the predicted rate of change from a generalized linear mixed model when outcomes are longitudinal. We show, by simulation, that these randomization tests preserve the size and power well under model misspecification. Copyright © 2014 John Wiley & Sons, Ltd.
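
    The design-based Monte Carlo idea can be sketched as follows: regenerate treatment assignments under the same randomization procedure (permuted blocks of four are used here as a hypothetical choice), recompute the statistic each time, and report the fraction of regenerated statistics at least as extreme as the observed one:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def permuted_block_assignment(n, block=4):
        """Permuted-block randomization: each block has equal 0/1 allocation."""
        a = []
        for _ in range(int(np.ceil(n / block))):
            b = np.array([0, 0, 1, 1])
            rng.shuffle(b)
            a.extend(b)
        return np.array(a[:n])

    # Hypothetical outcomes (e.g., residuals from a fitted regression model).
    y = rng.normal(0, 1, 48)
    t_obs = permuted_block_assignment(48)
    stat_obs = y[t_obs == 1].mean() - y[t_obs == 0].mean()

    # Monte Carlo reference distribution under re-randomization.
    stats = []
    for _ in range(10_000):
        t = permuted_block_assignment(48)
        stats.append(y[t == 1].mean() - y[t == 0].mean())

    p_value = np.mean(np.abs(stats) >= abs(stat_obs))
    print("two-sided randomization p-value:", p_value)
    ```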

  2. Least-squares/parabolized Navier-Stokes procedure for optimizing hypersonic wind tunnel nozzles

    NASA Technical Reports Server (NTRS)

    Korte, John J.; Kumar, Ajay; Singh, D. J.; Grossman, B.

    1991-01-01

    A new procedure is demonstrated for optimizing hypersonic wind-tunnel-nozzle contours. The procedure couples a CFD computer code to an optimization algorithm, and is applied to both conical and contoured hypersonic nozzles for the purpose of determining an optimal set of parameters to describe the surface geometry. A design-objective function is specified based on the deviation from the desired test-section flow-field conditions. The objective function is minimized by optimizing the parameters used to describe the nozzle contour based on the solution to a nonlinear least-squares problem. The effects of changes in the nozzle-wall parameters are evaluated by computing the nozzle flow using the parabolized Navier-Stokes equations. The advantage of the new procedure is that it directly takes into account the displacement effect of the boundary layer on the wall contour. The new procedure provides a method for optimizing hypersonic nozzles of high Mach numbers which have been designed by classical procedures, but are shown to produce poor flow quality due to the large boundary layers present in the test section. The procedure is demonstrated by finding the optimum design parameters for a Mach 10 conical nozzle and for Mach 6 and Mach 15 contoured nozzles.

  3. Improving the performance of computer color matching procedures.

    PubMed

    Karbasi, A; Moradian, S; Asiaban, S

    2008-09-01

    A premise was set up entailing the possibility of a synergistic combination of the advantages of spectrophotometric and colorimetric matching procedures. Attempts were therefore made to test the performance of fifteen matching procedures, all based on the Kubelka-Munk theory, including two procedures utilizing the fundamental color stimulus R(FCS) of the spectral decomposition theory. Color differences CIE DeltaE(00) as well as concentration differences DeltaC(AVE) were used to theoretically rank the fifteen color matching procedures. Results showed that procedures based on R(FCS) were superior in accurately predicting colors and concentrations. Additionally, the metameric black component R(MB) of the decomposition theory also showed promise in predicting degrees of metamerism. This preliminary study therefore provides evidence for the premise of this investigation.
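
    For reference, the Kubelka-Munk relation underlying all fifteen procedures links the reflectance of an opaque layer to the ratio of absorption and scattering coefficients, with the usual additivity assumption over colorant concentrations used in recipe prediction:

    ```latex
    % Kubelka-Munk function for an opaque (infinitely thick) layer
    \frac{K}{S} = \frac{(1 - R_{\infty})^{2}}{2\,R_{\infty}}

    % Additive mixing over colorant concentrations c_i, as used in recipe prediction
    \left(\frac{K}{S}\right)_{\text{mix}}
      = \left(\frac{K}{S}\right)_{\text{substrate}}
      + \sum_{i} c_{i}\left(\frac{K}{S}\right)_{i}
    ```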

  4. SUPPORTING THE INDUSTRY BY DEVELOPING A DESIGN GUIDANCE FOR COMPUTER-BASED PROCEDURES FOR FIELD WORKERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oxstrand, Johanna; LeBlanc, Katya

    The paper-based procedures currently used for nearly all activities in the commercial nuclear power industry have a long history of ensuring safe operation of the plants. However, there is potential to greatly increase efficiency and safety by improving how the human interacts with the procedures, which can be achieved through the use of computer-based procedures (CBPs). A CBP system offers a vast variety of improvements, such as context-driven job aids, integrated human performance tools, and dynamic step presentation. As a step toward the goal of improving procedure use performance, the U.S. Department of Energy Light Water Reactor Sustainability Programmore » researchers, together with the nuclear industry, have been investigating the possibility and feasibility of replacing current paper-based procedures with CBPs. The main purpose of the CBP research conducted at the Idaho National Laboratory was to provide design guidance to the nuclear industry to be used by both utilities and vendors. After studying existing design guidance for CBP systems, the researchers concluded that the majority of the existing guidance is intended for control room CBP systems, and does not necessarily address the challenges of designing CBP systems for instructions carried out in the field. Further, the guidance is often presented at a high level, which leaves the designer to interpret what is meant by the guidance and how to specifically implement it. The authors developed design guidance specifically tailored to instructions that are carried out in the field.« less

  5. Application of Minicomputers and Microcomputers to Information Handling.

    ERIC Educational Resources Information Center

    Griffiths, Jose-Marie

    This study assesses the application of both minicomputers and microcomputers to information-handling procedures and makes recommendations for automating such procedures, particularly in developing nations. The report is based on a survey of existing uses of small computing equipment in libraries, archives, and information centers which was…

  6. 32 CFR 806b.35 - Balancing protection.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ..., Computer Security, 5 for procedures on safeguarding personal information in automated records. 5 http://www... automated system with a log-on protocol. Others may require more sophisticated security protection based on the sensitivity of the information. Classified computer systems or those with established audit and...

  7. 32 CFR 806b.35 - Balancing protection.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ..., Computer Security, 5 for procedures on safeguarding personal information in automated records. 5 http://www... automated system with a log-on protocol. Others may require more sophisticated security protection based on the sensitivity of the information. Classified computer systems or those with established audit and...

  8. 32 CFR 806b.35 - Balancing protection.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ..., Computer Security, 5 for procedures on safeguarding personal information in automated records. 5 http://www... automated system with a log-on protocol. Others may require more sophisticated security protection based on the sensitivity of the information. Classified computer systems or those with established audit and...

  9. 32 CFR 806b.35 - Balancing protection.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ..., Computer Security, 5 for procedures on safeguarding personal information in automated records. 5 http://www... automated system with a log-on protocol. Others may require more sophisticated security protection based on the sensitivity of the information. Classified computer systems or those with established audit and...

  10. 32 CFR 806b.35 - Balancing protection.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., Computer Security, 5 for procedures on safeguarding personal information in automated records. 5 http://www... automated system with a log-on protocol. Others may require more sophisticated security protection based on the sensitivity of the information. Classified computer systems or those with established audit and...

  11. Development of a High Resolution 3D Infant Stomach Model for Surgical Planning

    NASA Astrophysics Data System (ADS)

    Chaudry, Qaiser; Raza, S. Hussain; Lee, Jeonggyu; Xu, Yan; Wulkan, Mark; Wang, May D.

    Medical surgical procedures have changed little during the past century, due in part to the lack of an accurate, low-cost workbench for testing improvements. Increasingly cheap and powerful computer technology has made computer-based surgery planning and training feasible. In our work, we have developed an accurate 3D stomach model that aims to improve the surgical procedures used to treat pediatric and neonatal gastro-esophageal reflux disease (GERD). We generate the 3D infant stomach model from in vivo computed tomography (CT) scans of an infant. CT is a widely used clinical imaging modality that is inexpensive but has low spatial resolution. To improve model accuracy, we also use the high-resolution Visible Human Project (VHP) data in model building. Next, we add soft muscle material properties to make the 3D model deformable. We then use virtual reality techniques, such as haptic devices, to make the 3D stomach model deform in response to touch forces. This accurate 3D stomach model provides a workbench for testing new GERD treatment surgical procedures. It has the potential to reduce or eliminate the extensive cost associated with animal testing when improving a surgical procedure and, ultimately, to reduce the risk associated with infant GERD surgery.

  12. BRYNTRN: A baryon transport computer code, computation procedures and data base

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Townsend, Lawrence W.; Chun, Sang Y.; Buck, Warren W.; Khan, Ferdous; Cucinotta, Frank

    1988-01-01

    The development of an interaction data base and a numerical solution for the transport of baryons through arbitrary shield materials, based on a straight-ahead approximation of the Boltzmann equation, is described. The code is most accurate for continuous energy boundary values but gives reasonable results for discrete spectra at the boundary, even with a relatively coarse energy grid (30 points) and large spatial increments (1 cm in H2O).

  13. A selective-update affine projection algorithm with selective input vectors

    NASA Astrophysics Data System (ADS)

    Kong, NamWoong; Shin, JaeWook; Park, PooGyeon

    2011-10-01

    This paper proposes an affine projection algorithm (APA) with selective input vectors, based on the concept of selective updating, in order to reduce both estimation errors and computation. The algorithm consists of two procedures: input-vector selection and state decision. The input-vector-selection procedure determines the number of input vectors by checking, via the mean square error (MSE), whether the input vectors carry enough information to warrant an update. The state-decision procedure determines the current state of the adaptive filter using a state-decision criterion. While the adaptive filter is in the transient state, the algorithm updates the filter coefficients with the selected input vectors. As soon as the adaptive filter reaches the steady state, however, the update procedure is suspended. Through these two procedures, the proposed algorithm achieves small steady-state estimation errors, low computational complexity, and low update complexity for colored input signals.
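
    As a rough illustration of the selective-update idea, the Python sketch below (NumPy assumed) implements a basic APA whose coefficient update is skipped once a running MSE estimate falls below a threshold. The threshold, the exponential MSE tracker, and the simplified state-decision rule are illustrative stand-ins for the paper's input-vector-selection and state-decision criteria, not the authors' exact procedure.

    ```python
    import numpy as np

    def selective_apa(x, d, M=16, K=4, mu=0.5, eps=1e-6, mse_tol=1e-3):
        """Affine projection algorithm whose update is suspended in steady
        state (mse_tol and the 0.9/0.1 tracker are illustrative choices)."""
        w = np.zeros(M)                      # adaptive filter coefficients
        mse = 1.0                            # running MSE estimate
        for n in range(M + K - 1, len(x)):
            # K most recent input vectors, most recent sample first (K x M).
            A = np.array([x[n-k-M+1 : n-k+1][::-1] for k in range(K)])
            e = d[n-K+1 : n+1][::-1] - A @ w            # a-priori errors
            mse = 0.9 * mse + 0.1 * e[0]**2             # track current MSE
            if mse > mse_tol:                           # transient: update
                w += mu * A.T @ np.linalg.solve(A @ A.T + eps*np.eye(K), e)
            # else: steady state reached, skip the update entirely
        return w

    # Usage: identify an unknown 16-tap FIR channel from noisy observations.
    rng = np.random.default_rng(0)
    x = rng.standard_normal(5000)
    h = rng.standard_normal(16)
    d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
    print(np.round(selective_apa(x, d)[:4], 3), np.round(h[:4], 3))
    ```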

  14. Enhanced Multiobjective Optimization Technique for Comprehensive Aerospace Design. Part A

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Rajadas, John N.

    1997-01-01

    A multidisciplinary design optimization procedure that couples formal multiobjective techniques with complex analysis procedures (such as computational fluid dynamics (CFD) codes) has been developed. The procedure has been demonstrated on a specific high-speed flow application involving aerodynamics and acoustics (sonic boom minimization). To account for multiple design objectives arising from complex performance requirements, multiobjective formulation techniques are used to pose the optimization problem. Techniques that enhance the existing Kreisselmeier-Steinhauser (K-S) function multiobjective formulation approach have been developed. The K-S function procedure used in this work transforms a problem with multiple constrained objective functions into an unconstrained problem, which is then solved using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. Weight factors are applied to each objective function during the transformation, giving the designer the ability to emphasize specific design objectives during the optimization process. The demonstration uses a CFD code that solves the three-dimensional parabolized Navier-Stokes (PNS) equations for the flow field, together with an appropriate sonic boom evaluation procedure, so that both aerodynamic performance and sonic boom are optimized simultaneously. Sensitivity analysis is performed using a discrete differentiation approach, and an approximation technique is used within the optimizer to improve the overall computational efficiency of the procedure, making it suitable for design applications in an industrial setting.
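
    The K-S envelope itself is easy to sketch. The following Python fragment, a minimal illustration assuming SciPy and toy quadratic objectives standing in for the CFD and sonic boom analyses, aggregates weighted objectives with a numerically stabilized K-S function and minimizes the result with BFGS.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def ks_aggregate(f, rho=50.0, weights=None):
        """Kreisselmeier-Steinhauser envelope of objective values f_i:
        KS = max(f) + (1/rho)*log(sum(w_i*exp(rho*(f_i - max(f))))).
        The shift by max(f) avoids overflow; weights emphasize objectives."""
        f = np.asarray(f, dtype=float)
        w = np.ones_like(f) if weights is None else np.asarray(weights, float)
        fmax = f.max()
        return fmax + np.log(np.sum(w * np.exp(rho * (f - fmax)))) / rho

    # Toy two-objective problem (purely illustrative surrogates).
    def objectives(x):
        return [(x[0] - 1.0)**2 + x[1]**2,      # "aerodynamic" objective
                x[0]**2 + (x[1] - 1.0)**2]      # "sonic boom" objective

    res = minimize(lambda x: ks_aggregate(objectives(x), weights=[1.0, 2.0]),
                   x0=np.zeros(2), method='BFGS')
    print(res.x)   # compromise design, biased toward the second objective
    ```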

  15. Inviscid Design of Hypersonic Wind Tunnel Nozzles for a Real Gas

    NASA Technical Reports Server (NTRS)

    Korte, J. J.

    2000-01-01

    A straightforward procedure has been developed to quickly produce an inviscid design of a hypersonic wind-tunnel nozzle when the test gas is both calorically and thermally imperfect. This real-gas procedure divides the nozzle into four distinct parts: subsonic, throat-to-conical, conical, and turning flow regions. The design process is greatly simplified by treating the imperfect-gas effects only in the source flow region, a simplification that can be justified for a large class of hypersonic wind-tunnel nozzle design problems. The final nozzle design is obtained either by applying a classical boundary-layer correction or by using the inviscid design as the starting point for a viscous design optimization based on computational fluid dynamics. An example of a real-gas nozzle design is used to illustrate the method. The accuracy of the real-gas design procedure is shown to compare favorably with an ideal-gas design based on computed flow-field solutions.

  16. Rapid Profile: A Second Language Screening Procedure.

    ERIC Educational Resources Information Center

    Mackey, Alison; And Others

    1991-01-01

    Rapid Profile, developed by Manfred Pienemann of National Languages Institute of Australia/Language Acquisition Research Centre, is a computer-based procedure for screening speech samples collected from language learners to assess their level of language development as compared to standard patterns in the acquisition of the target language. Rapid…

  17. Inferential Procedures for Correlation Coefficients Corrected for Attenuation.

    ERIC Educational Resources Information Center

    Hakstian, A. Ralph; And Others

    1988-01-01

    A model and computational procedure based on classical test score theory are presented for determining a correlation coefficient corrected for attenuation due to unreliability. Delta and Monte Carlo method applications are discussed. A power analysis revealed no serious loss in efficiency resulting from correction for attenuation. (TJH)
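
    The underlying correction is the classical disattenuation formula, r_corrected = r_xy / sqrt(r_xx * r_yy), where r_xx and r_yy are the reliabilities of the two measures. A minimal Python illustration:

    ```python
    def corrected_correlation(r_xy, r_xx, r_yy):
        """Classical correction for attenuation: estimated correlation
        between true scores, given the observed correlation r_xy and the
        reliabilities r_xx and r_yy of the two measures."""
        return r_xy / (r_xx * r_yy) ** 0.5

    print(corrected_correlation(0.42, 0.80, 0.70))  # about 0.56
    ```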

  18. Procedure for extraction of disparate data from maps into computerized data bases

    NASA Technical Reports Server (NTRS)

    Junkin, B. G.

    1979-01-01

    A procedure is presented for extracting disparate sources of data from geographic maps and for the conversion of these data into a suitable format for processing on a computer-oriented information system. Several graphic digitizing considerations are included and related to the NASA Earth Resources Laboratory's Digitizer System. Current operating procedures for the Digitizer System are given in a simplified and logical manner. The report serves as a guide to those organizations interested in converting map-based data by using a comparable map digitizing system.

  19. Cybersecurity Workforce Development and the Protection of Critical Infrastructure

    DTIC Science & Technology

    2017-03-31

    communications products, and limited travel for site visits and conferencing. The CSCC contains a developed web-based coordination site, computer ...the CSCC. The Best Practices Analyst position maintains a list of best practices, computer-related patches, and standard operating procedures (SOP...involved in conducting vulnerability assessments of computer networks. To adequately exercise and experiment with industry standard software, it was

  20. Computational Phenotyping in Psychiatry: A Worked Example

    PubMed Central

    2016-01-01

    Computational psychiatry is a rapidly emerging field that uses model-based quantities to infer the behavioral and neuronal abnormalities that underlie psychopathology. If successful, this approach promises key insights into (pathological) brain function as well as a more mechanistic and quantitative approach to psychiatric nosology, structuring therapeutic interventions and predicting response and relapse. The basic procedure in computational psychiatry is to build a computational model that formalizes a behavioral or neuronal process. Measured behavioral (or neuronal) responses are then used to infer the model parameters of a single subject or a group of subjects. Here, we provide an illustrative overview of this process, starting from the modeling of choice behavior in a specific task, simulating data, and then inverting that model to estimate group effects. Finally, we illustrate cross-validation to assess whether between-subject variables (e.g., diagnosis) can be recovered successfully. Our worked example uses a simple two-step maze task and a model of choice behavior based on (active) inference and Markov decision processes. The procedural steps and routines we illustrate are not restricted to a specific field of research or particular computational model but can, in principle, be applied in many domains of computational psychiatry. PMID:27517087

  1. Computational Phenotyping in Psychiatry: A Worked Example.

    PubMed

    Schwartenbeck, Philipp; Friston, Karl

    2016-01-01

    Computational psychiatry is a rapidly emerging field that uses model-based quantities to infer the behavioral and neuronal abnormalities that underlie psychopathology. If successful, this approach promises key insights into (pathological) brain function as well as a more mechanistic and quantitative approach to psychiatric nosology, structuring therapeutic interventions and predicting response and relapse. The basic procedure in computational psychiatry is to build a computational model that formalizes a behavioral or neuronal process. Measured behavioral (or neuronal) responses are then used to infer the model parameters of a single subject or a group of subjects. Here, we provide an illustrative overview of this process, starting from the modeling of choice behavior in a specific task, simulating data, and then inverting that model to estimate group effects. Finally, we illustrate cross-validation to assess whether between-subject variables (e.g., diagnosis) can be recovered successfully. Our worked example uses a simple two-step maze task and a model of choice behavior based on (active) inference and Markov decision processes. The procedural steps and routines we illustrate are not restricted to a specific field of research or particular computational model but can, in principle, be applied in many domains of computational psychiatry.
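
    To make the simulate-then-invert loop concrete, here is a deliberately simplified Python sketch (NumPy and SciPy assumed) that substitutes a one-step softmax choice model for the paper's active-inference MDP: choices are simulated for a known inverse temperature, and the parameter is then recovered by maximum likelihood.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(0)
    values = np.array([0.2, 0.8])          # latent action values, toy task
    beta_true = 3.0                        # subject's inverse temperature

    def choice_probs(beta):
        p = np.exp(beta * values)          # softmax over action values
        return p / p.sum()

    # Step 1: simulate choices for one synthetic subject.
    choices = rng.choice(2, size=200, p=choice_probs(beta_true))

    # Step 2: invert the model by maximum likelihood.
    def neg_log_lik(beta):
        p = choice_probs(beta)
        return -np.log(p[choices]).sum()

    fit = minimize_scalar(neg_log_lik, bounds=(0.01, 20), method='bounded')
    print(fit.x)   # recovered inverse temperature, close to beta_true
    ```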

  2. Delayed flap approach procedures for noise abatement and fuel conservation

    NASA Technical Reports Server (NTRS)

    Edwards, F. G.; Bull, J. S.; Foster, J. D.; Hegarty, D. M.; Drinkwater, F. J., III

    1976-01-01

    The NASA/Ames Research Center is currently investigating the delayed flap approach during which pilot actions are determined and prescribed by an onboard digital computer. The onboard digital computer determines the proper timing for the deployment of the landing gear and flaps based on the existing winds and airplane gross weight. Advisory commands are displayed to the pilot. The approach is flown along the conventional ILS glide slope but is initiated at a higher airspeed and in a clean aircraft configuration that allows for low thrust and results in reduced noise and fuel consumption. Topics discussed include operational procedures, pilot acceptability of these procedures, and fuel/noise benefits resulting from flight tests and simulation.

  3. Decision Support Systems and the Conflict Model of Decision Making: A Stimulus for New Computer-Assisted Careers Guidance Systems.

    ERIC Educational Resources Information Center

    Ballantine, R. Malcolm

    Decision Support Systems (DSSs) are computer-based decision aids to use when making decisions which are partially amenable to rational decision-making procedures but contain elements where intuitive judgment is an essential component. In such situations, DSSs are used to improve the quality of decision-making. The DSS approach is based on Simon's…

  4. Computer-Based Reading Instruction for Young Children with Disabilities

    ERIC Educational Resources Information Center

    Lee, Yeunjoo; Vail, Cynthia O.

    2005-01-01

    This investigation examined the effectiveness of a computer program in teaching sight word recognition to four young children with developmental disabilities. The intervention program was developed through a formative evaluation process. It embedded a constant-time-delay procedure and involved sounds, video, text, and animations. Dependent…

  5. Brain-Congruent Instruction: Does the Computer Make It Feasible?

    ERIC Educational Resources Information Center

    Stewart, William J.

    1984-01-01

    Based on the premise that computers could translate brain research findings into classroom practice, this article presents discoveries concerning human brain development, organization, and operation, and describes brain activity monitoring devices, brain function and structure variables, and a procedure for monitoring and analyzing brain activity…

  6. Numerical simulation of the vortical flow around a pitching airfoil

    NASA Astrophysics Data System (ADS)

    Fu, Xiang; Li, Gaohua; Wang, Fuxin

    2017-04-01

    In order to study the dynamic behavior of flapping wings, the vortical flow around a pitching NACA0012 airfoil is investigated. The unsteady flow field is obtained by a very efficient zonal procedure based on the velocity-vorticity formulation, with the Reynolds number based on the airfoil chord length set to 1 million. The zonal procedure divides the whole computational domain into three zones: a potential flow zone, a boundary layer zone, and a Navier-Stokes zone. Since vorticity is absent in the potential flow zone, the vorticity transport equation needs to be solved only in the boundary layer and Navier-Stokes zones; in addition, the boundary layer equations are solved in the boundary layer zone. This arrangement drastically reduces the computation time compared with traditional numerical methods. After the flow field computation, the evolution of the vortices around the airfoil is analyzed in detail.

  7. Scalable software-defined optical networking with high-performance routing and wavelength assignment algorithms.

    PubMed

    Lee, Chankyun; Cao, Xiaoyuan; Yoshikane, Noboru; Tsuritani, Takehiro; Rhee, June-Koo Kevin

    2015-10-19

    The feasibility of software-defined optical networking (SDON) for practical application depends critically on the scalability of centralized control performance. In this paper, highly scalable routing and wavelength assignment (RWA) algorithms are investigated on an OpenFlow-based SDON testbed for a proof-of-concept demonstration. Efficient RWA algorithms are proposed that achieve high network capacity at reduced computation cost, a significant attribute in a scalable centrally controlled SDON. The proposed heuristic RWA algorithms differ in the order in which requests are processed and in the procedures for routing table updates. Combined with a shortest-path-based routing algorithm, a hottest-request-first processing policy that considers demand intensity and end-to-end distance information offers both the highest network throughput and acceptable computation scalability. We further investigate the trade-off between network throughput and computation complexity in the routing table update procedure through a simulation study.
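
    A toy version of the combined policy can be sketched in a few lines of Python. The fragment below is an illustrative stand-in for the paper's algorithms, not a reproduction of them: it serves requests hottest-first, routes each over a BFS shortest path, and assigns wavelengths first-fit; links are treated as directed and demand intensity replaces the full intensity-plus-distance ordering.

    ```python
    # Toy WDM network: adjacency list plus per-link wavelength occupancy
    # (topology, request set, and the 4-wavelength limit are illustrative).
    graph = {'A': ['B', 'C'], 'B': ['A', 'D'], 'C': ['A', 'D'], 'D': ['B', 'C']}
    W = 4                                       # wavelengths per link
    used = {}                                   # (u, v, w) -> True if occupied

    def shortest_path(src, dst):
        """Plain BFS shortest path by hop count."""
        frontier, seen = [[src]], {src}
        while frontier:
            path = frontier.pop(0)
            if path[-1] == dst:
                return path
            for nxt in graph[path[-1]]:
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(path + [nxt])
        return None

    def assign(path):
        """First-fit: lowest-index wavelength free on every hop of the path
        (links treated as directed for simplicity)."""
        links = list(zip(path, path[1:]))
        for w in range(W):
            if all(not used.get((u, v, w)) for u, v in links):
                for u, v in links:
                    used[(u, v, w)] = True
                return w
        return None                             # request is blocked

    # Hottest-request-first: serve the most intense demands first.
    requests = [('A', 'D', 5), ('B', 'C', 2), ('A', 'B', 9)]  # (src, dst, demand)
    for src, dst, _ in sorted(requests, key=lambda r: -r[2]):
        path = shortest_path(src, dst)
        print(src, '->', dst, 'via', path, 'on wavelength', assign(path))
    ```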

  8. C-Language Integrated Production System, Version 6.0

    NASA Technical Reports Server (NTRS)

    Riley, Gary; Donnell, Brian; Ly, Huyen-Anh Bebe; Ortiz, Chris

    1995-01-01

    C Language Integrated Production System (CLIPS) computer programs are specifically intended to model human expertise or other knowledge. CLIPS is designed to enable research on, and development and delivery of, artificial intelligence on conventional computers. CLIPS 6.0 provides a cohesive software tool for handling a wide variety of knowledge, with support for three different programming paradigms: rule-based, object-oriented, and procedural. Rule-based programming: representation of knowledge as heuristics - essentially, rules of thumb that specify a set of actions to be performed in a given situation. Object-oriented programming: modeling of complex systems composed of modular components that can easily be reused to model other systems or to create new components. Procedural programming: representation of knowledge in ways similar to those of languages such as C, Pascal, Ada, and LISP. The version of CLIPS 6.0 for IBM PC-compatible computers requires DOS v3.3 or later and/or Windows 3.1 or later.

  9. Generic Hypersonic Inlet Module Analysis

    NASA Technical Reports Server (NTRS)

    Cockrell, Charles E., Jr.; Huebner, Lawrence D.

    2004-01-01

    A computational study associated with an internal inlet drag analysis was performed for a generic hypersonic inlet module. The purpose of this study was to determine the feasibility of computing the internal drag force for a generic scramjet engine module using computational methods. The computational study consisted of obtaining two-dimensional (2D) and three-dimensional (3D) computational fluid dynamics (CFD) solutions using the Euler and parabolized Navier-Stokes (PNS) equations. The solution accuracy was assessed by comparisons with experimental pitot pressure data. The CFD analysis indicates that the 3D PNS solutions show the best agreement with experimental pitot pressure data. The internal inlet drag analysis consisted of obtaining drag force predictions based on experimental data and 3D CFD solutions. A comparative assessment of each of the drag prediction methods is made and the sensitivity of CFD drag values to computational procedures is documented. The analysis indicates that the CFD drag predictions are highly sensitive to the computational procedure used.

  10. Combining Rosetta with molecular dynamics (MD): A benchmark of the MD-based ensemble protein design.

    PubMed

    Ludwiczak, Jan; Jarmula, Adam; Dunin-Horkawicz, Stanislaw

    2018-07-01

    Computational protein design is a set of procedures for computing amino acid sequences that will fold into a specified structure. Rosetta Design, a commonly used software package for protein design, allows for the effective identification of sequences compatible with a given backbone structure, while molecular dynamics (MD) simulations can thoroughly sample near-native conformations. We benchmarked a procedure in which Rosetta design is started from MD-derived structural ensembles and showed that such a combined approach generates 20-30% more diverse sequences than currently available methods, with only a slight increase in computation time. Importantly, the increase in diversity is achieved without a loss in the quality of the designed sequences, as assessed by their resemblance to natural sequences. We demonstrate that the MD-based procedure is also applicable to de novo design tasks started from backbone structures without any sequence information. In addition, we implemented a protocol that can be used to assess the stability of designed models and to select the best candidates for experimental validation. In sum, our results demonstrate that MD ensemble-based flexible backbone design can be a viable method for protein design, especially for tasks that require a large pool of diverse sequences.

  11. MPL-A program for computations with iterated integrals on moduli spaces of curves of genus zero

    NASA Astrophysics Data System (ADS)

    Bogner, Christian

    2016-06-01

    We introduce the Maple program MPL for computations with multiple polylogarithms. The program is based on homotopy invariant iterated integrals on moduli spaces M0,n of curves of genus 0 with n ordered marked points. It includes the symbol map and procedures for the analytic computation of period integrals on M0,n. It supports the automated computation of a certain class of Feynman integrals.

  12. Maximized gust loads for a nonlinear airplane using matched filter theory and constrained optimization

    NASA Technical Reports Server (NTRS)

    Scott, Robert C.; Pototzky, Anthony S.; Perry, Boyd, III

    1991-01-01

    Two matched-filter-theory-based schemes are described and illustrated for obtaining maximized and time-correlated gust loads for a nonlinear aircraft. The first scheme is computationally fast because it uses a simple one-dimensional search procedure to obtain its answers. The second scheme is computationally slow because it uses a more complex multi-dimensional search procedure, but it consistently provides slightly higher maximum loads than the first scheme. Both schemes are illustrated with numerical examples involving a nonlinear control system.

  13. Stochastic Process Creation

    NASA Astrophysics Data System (ADS)

    Esparza, Javier

    In many areas of computer science entities can “reproduce”, “replicate”, or “create new instances”. Paramount examples are threads in multithreaded programs, processes in operating systems, and computer viruses, but many others exist: procedure calls create new incarnations of the callees, web crawlers discover new pages to be explored (and so “create” new tasks), divide-and-conquer procedures split a problem into subproblems, and leaves of tree-based data structures become internal nodes with children. For lack of a better name, I use the generic term systems with process creation to refer to all these entities.

  14. Maximized gust loads for a nonlinear airplane using matched filter theory and constrained optimization

    NASA Technical Reports Server (NTRS)

    Scott, Robert C.; Perry, Boyd, III; Pototzky, Anthony S.

    1991-01-01

    This paper describes and illustrates two matched-filter-theory based schemes for obtaining maximized and time-correlated gust-loads for a nonlinear airplane. The first scheme is computationally fast because it uses a simple one-dimensional search procedure to obtain its answers. The second scheme is computationally slow because it uses a more complex multidimensional search procedure to obtain its answers, but it consistently provides slightly higher maximum loads than the first scheme. Both schemes are illustrated with numerical examples involving a nonlinear control system.

  15. Computer-aided design of bevel gear tooth surfaces

    NASA Technical Reports Server (NTRS)

    Shuo, Hung Chang; Huston, Ronald L.; Coy, John J.

    1989-01-01

    This paper presents a computer-aided design procedure for generating bevel gears. The development is based on examining a perfectly plastic, cone-shaped gear blank rolling over a cutting tooth on a plane crown rack. The resulting impression on the plastic gear blank is the envelope of the cutting tooth. This impression and envelope thus form a conjugate tooth surface. Equations are presented for the locus of points on the tooth surface. The same procedures are then extended to simulate the generation of a spiral bevel gear. The corresponding governing equations are presented.

  16. Computer aided design of bevel gear tooth surfaces

    NASA Technical Reports Server (NTRS)

    Chang, S. H.; Huston, R. L.; Coy, J. J.

    1989-01-01

    This paper presents a computer-aided design procedure for generating bevel gears. The development is based on examining a perfectly plastic, cone-shaped gear blank rolling over a cutting tooth on a plane crown rack. The resulting impression on the plastic gear blank is the envelope of the cutting tooth. This impression and envelope thus form a conjugate tooth surface. Equations are presented for the locus of points on the tooth surface. The same procedures are then extended to simulate the generation of a spiral bevel gear. The corresponding governing equations are presented.

  17. User's manual for the Shuttle Electric Power System analysis computer program (SEPS), volume 2 of program documentation

    NASA Technical Reports Server (NTRS)

    Bains, R. W.; Herwig, H. A.; Luedeman, J. K.; Torina, E. M.

    1974-01-01

    The Shuttle Electric Power System analysis (SEPS) computer program, which performs detailed load analysis of the shuttle electric power system, including prediction of energy demands and consumables requirements, along with parametric and special case studies, is described. The functional flow diagram of the SEPS program is presented along with data base requirements and formats, procedure and activity definitions, and mission timeline input formats. Distribution circuit input and fixed data requirements are included, and run procedures and deck setups are described.

  18. Equilibrium paths analysis of materials with rheological properties by using the chaos theory

    NASA Astrophysics Data System (ADS)

    Bednarek, Paweł; Rządkowski, Jan

    2018-01-01

    Numerical equilibrium-path analysis of a material with random rheological properties using standard procedures and specialist computer programs was not successful. A proper solution for the analysed heuristic model of the material was obtained using elements of chaos theory and neural networks. The paper discusses the mathematical rationale for the computer programs used and elaborates the properties of the attractor employed in the analysis. Results of the numerical analysis are presented in both numerical and graphical form for the procedures used.

  19. Improved Savitzky-Golay-method-based fluorescence subtraction algorithm for rapid recovery of Raman spectra.

    PubMed

    Chen, Kun; Zhang, Hongyuan; Wei, Haoyun; Li, Yan

    2014-08-20

    In this paper, we propose an improved subtraction algorithm for rapid recovery of Raman spectra that can substantially reduce the computation time. The algorithm is based on an improved Savitzky-Golay (SG) iterative smoothing method, which involves two key novel approaches: (a) the use of the Gauss-Seidel method and (b) the introduction of a relaxation factor into the iterative procedure. By applying a novel successive relaxation (SG-SR) iterative method to the relaxation factor, additional improvement in convergence speed over the standard Savitzky-Golay procedure is realized. The proposed improved algorithm (the RIA-SG-SR algorithm), which uses SG-SR-based iteration instead of Savitzky-Golay iteration, has been optimized and validated with a mathematically simulated Raman spectrum, as well as experimentally measured Raman spectra from non-biological and biological samples. The method results in a significant reduction in computing cost while yielding consistent rejection of fluorescence and noise for spectra with low signal-to-fluorescence ratios and varied baselines. In the simulation, RIA-SG-SR achieved 1 order of magnitude improvement in iteration number and 2 orders of magnitude improvement in computation time compared with the range-independent background-subtraction algorithm (RIA). Furthermore, the time required to process an experimentally measured raw Raman spectrum from skin tissue decreased from 6.72 to 0.094 s. In general, the processing of the SG-SR method can be completed within dozens of milliseconds, which enables a real-time procedure in practical situations.
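
    For reference, the standard SG iterative fluorescence-subtraction loop that the paper accelerates looks roughly like the following Python sketch (SciPy assumed); the SG-SR contribution, Gauss-Seidel updates with a relaxation factor, is not reproduced here.

    ```python
    import numpy as np
    from scipy.signal import savgol_filter

    def sg_baseline(spectrum, window=101, poly=3, n_iter=50):
        """Standard iterative Savitzky-Golay fluorescence estimation:
        smooth, then clip to the pointwise minimum so Raman peaks are
        progressively excluded from the baseline. window must be odd and
        shorter than the spectrum. (The paper's SG-SR variant accelerates
        this loop with Gauss-Seidel updates and a relaxation factor.)"""
        baseline = np.asarray(spectrum, dtype=float).copy()
        for _ in range(n_iter):
            smoothed = savgol_filter(baseline, window, poly)
            baseline = np.minimum(baseline, smoothed)
        return baseline

    # raman = spectrum - sg_baseline(spectrum)  # fluorescence-subtracted
    ```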

  20. Computations on Wings With Full-Span Oscillating Control Surfaces Using Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.

    2013-01-01

    A dual-level parallel procedure is presented for computing large databases to support aerospace vehicle design. The procedure has been developed as a single Unix script within the Parallel Batch Submission environment, utilizing MPIexec to run MPI-based analysis software. It provides a process for aerospace designers to generate data for large numbers of cases with the highest possible fidelity and reasonable wall-clock time. A single job-submission environment avoids keeping track of multiple jobs and the associated system administration overhead. The process has been demonstrated by computing large databases for the design of typical aerospace configurations: a launch vehicle and a rotorcraft.

  1. Existence and instability of steady states for a triangular cross-diffusion system: A computer-assisted proof

    NASA Astrophysics Data System (ADS)

    Breden, Maxime; Castelli, Roberto

    2018-05-01

    In this paper, we present and apply a computer-assisted method to study steady states of a triangular cross-diffusion system. Our approach consists of an a posteriori validation procedure based on a fixed-point argument around a numerically computed solution, in the spirit of the Newton-Kantorovich theorem. It allows us to prove the existence of various non-homogeneous steady states for different parameter values. In some situations, we obtain as many as 13 coexisting steady states. We also apply the a posteriori validation procedure to study the linear stability of the obtained steady states, proving that many of them are in fact unstable.
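
    In one dimension, the Newton-Kantorovich test underlying such a posteriori validation reduces to a few lines. The Python sketch below is an illustration of the principle, not the authors' rigorous interval-arithmetic procedure: it checks the classical sufficient condition h = b * L * eta <= 1/2 around a numerically computed root.

    ```python
    def kantorovich_check(F, dF, x0, L, radius):
        """Scalar Newton-Kantorovich test. With b >= |1/dF(x0)|,
        eta = b*|F(x0)|, and L a Lipschitz bound for dF on a ball of the
        given radius around x0, h = b*L*eta <= 1/2 guarantees a true zero
        of F within 2*eta of x0 (provided 2*eta <= radius)."""
        b = 1.0 / abs(dF(x0))
        eta = b * abs(F(x0))
        h = b * L * eta
        return h <= 0.5 and 2.0 * eta <= radius

    # Validate a numerically computed root of x^2 - 2 near 1.41421.
    F, dF = (lambda x: x * x - 2.0), (lambda x: 2.0 * x)
    print(kantorovich_check(F, dF, 1.41421, L=2.0, radius=0.1))   # True
    ```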

  2. Pen-based computers: Computers without keys

    NASA Technical Reports Server (NTRS)

    Conklin, Cheryl L.

    1994-01-01

    The National Space Transportation System (NSTS) is comprised of many diverse and highly complex systems incorporating the latest technologies. Data collection associated with ground processing of the various Space Shuttle system elements is extremely challenging because of the many separate processing locations where data are generated. This presents a significant problem when timely collection, transfer, collation, and storage of data are required. This paper describes how a new technology, referred to as pen-based computers, is being used to transform the data collection process at Kennedy Space Center (KSC). Pen-based computers have streamlined procedures, increased data accuracy, and now provide more complete information than previous methods. The end result is the elimination of Shuttle processing delays associated with data deficiencies.

  3. Iterative refinement of structure-based sequence alignments by Seed Extension

    PubMed Central

    Kim, Changhoon; Tai, Chin-Hsien; Lee, Byungkook

    2009-01-01

    Background: Accurate sequence alignment is required in many bioinformatics applications but, when sequence similarity is low, it is difficult to obtain accurate alignments based on sequence similarity alone. The accuracy improves when the structures are available, but current structure-based sequence alignment procedures still mis-align substantial numbers of residues. In order to correct such errors, we previously explored the possibility of replacing the residue-based dynamic programming algorithm in structure alignment procedures with the Seed Extension algorithm, which does not use a gap penalty. Here, we describe a new procedure called RSE (Refinement with Seed Extension) that iteratively refines a structure-based sequence alignment. Results: RSE uses SE (Seed Extension) at its core, which is an algorithm that we reported recently for obtaining a sequence alignment from two superimposed structures. The RSE procedure was evaluated by comparing the correctly aligned fractions of residues before and after the refinement of the structure-based sequence alignments produced by popular programs. CE, DaliLite, FAST, LOCK2, MATRAS, MATT, TM-align, SHEBA and VAST were included in this analysis and the NCBI's CDD root node set was used as the reference alignments. RSE improved the average accuracy of sequence alignments for all programs tested when no shift error was allowed. The amount of improvement varied depending on the program. The average improvements were small for DaliLite and MATRAS but about 5% for CE and VAST. More substantial improvements have been seen in many individual cases. The additional computation times required for the refinements were negligible compared to the times taken by the structure alignment programs. Conclusion: RSE is a computationally inexpensive way of improving the accuracy of a structure-based sequence alignment. It can be used as a standalone procedure following a regular structure-based sequence alignment or to replace the traditional iterative refinement procedures based on residue-level dynamic programming algorithms in many structure alignment programs. PMID:19589133

  4. Design Guidance for Computer-Based Procedures for Field Workers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oxstrand, Johanna; Le Blanc, Katya; Bly, Aaron

    Nearly all activities that involve human interaction with nuclear power plant systems are guided by procedures, instructions, or checklists. Paper-based procedures (PBPs) currently used by most utilities have a demonstrated history of ensuring safety; however, improving procedure use could yield significant savings in increased efficiency, as well as improved safety through human performance gains. The nuclear industry is constantly trying to find ways to decrease human error rates, especially those associated with procedure use. As a step toward the goal of improving field workers' procedure use and adherence, and hence improving human performance and overall system reliability, the U.S. Department of Energy Light Water Reactor Sustainability (LWRS) Program researchers, together with the nuclear industry, have been investigating the possibility and feasibility of replacing current paper-based procedures with computer-based procedures (CBPs). PBPs have ensured safe operation of plants for decades, but limitations in paper-based systems do not allow them to reach the full potential for procedures to prevent human errors. The environment in a nuclear power plant is constantly changing, depending on current plant status and operating mode. PBPs, which are static by nature, are being applied to a constantly changing context. This constraint often results in PBPs that are written to cover many potential operating scenarios. Hence, the procedure layout forces the operator to search through a large amount of irrelevant information to locate the pieces relevant to the task and situation at hand, which can take up valuable time when operators must respond to the situation and can lead operators down an incorrect response path. Other challenges related to the use of PBPs are management of multiple procedures, place-keeping, finding the correct procedure for a task, and relying on other sources of additional information to ensure a functional and accurate understanding of current plant status (Converse, 1995; Fink, Killian, Hanes, and Naser, 2009; Le Blanc, Oxstrand, and Waicosky, 2012). This report provides design guidance to be used when designing the human-system interaction and the graphical user interface of a CBP system. The guidance is based on human factors research related to the design and usability of CBPs conducted by Idaho National Laboratory from 2012 to 2016.

  5. Randomization Procedures Applied to Analysis of Ballistic Data

    DTIC Science & Technology

    1991-06-01

    Keywords: data analysis; computationally intensive statistics; randomization tests; permutation tests; nonparametric statistics. ...be 0.13. Any reasonable statistical procedure would fail to support the notion of improvement of dynamic over standard indexing based on this data... (Technical Report BRL-TR-3245, Malcolm S. Taylor and Barry A. Bodt, June 1991)

  6. Assessment of Collaborative Learning Experiences by Graphical Analysis of Wiki Contributions

    ERIC Educational Resources Information Center

    Palomo-Duarte, Manuel; Dodero, Juan Manuel; Medina-Bulo, Inmaculada; Rodríguez-Posada, Emilio J.; Ruiz-Rube, Iván

    2014-01-01

    The widespread adoption of computers and Internet in our life has reached the classrooms, where computer-supported collaborative learning (CSCL) based on wikis offers new ways of collaboration and encourages student participation. When the number of contributions from students increases, traditional assessment procedures of e-learning settings…

  7. RAMP: a computer system for mapping regional areas

    Treesearch

    Bradley B. Nickey

    1975-01-01

    Until 1972, the U.S. Forest Service's Individual Fire Reports recorded locations by the section-township-range system. These earlier fire reports therefore lacked congruent locations. RAMP (Regional Area Mapping Procedure) was designed to make the reports more useful for quantitative analysis. This computer-based technique converts locations expressed in...

  8. High level language for measurement complex control based on the computer E-100I

    NASA Technical Reports Server (NTRS)

    Zubkov, B. V.

    1980-01-01

    A high level language was designed to control the process of conducting an experiment using the computer "Elektronika-100I". Program examples are given for controlling the measuring and actuating devices. The procedure for including these programs in the suggested high level language is described.

  9. Radiant Heat Transfer Between Nongray Parallel Plates of Tungsten

    NASA Technical Reports Server (NTRS)

    Branstetter, J. Robert

    1961-01-01

    Net radiant heat flow between two infinite, parallel tungsten plates was computed by summing the monochromatic energy exchange; the results are graphically presented as a function of the temperatures of the two surfaces. In general these fluxes range from a few percent to approximately 25 percent greater than the results of gray-body computations based on the same emissivity data. The selection of spectral emissivity data and the computational procedure are discussed. The analytical procedure is arranged so that, as spectral emissivity data for a material become available, they can be readily introduced into the NASA data-reduction equipment, which has been programmed to compute the net heat flux for the particular geometry and basic assumptions cited in the text. Nongray-body computational techniques for determining radiant heat flux appear practical provided that suitable spectral emissivity data and properly mechanized data-reduction equipment are brought to bear on the problem.
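
    The monochromatic summation is straightforward to reproduce numerically. The Python sketch below integrates the standard parallel-plate spectral exchange, q_lam = (E_b(lam,T1) - E_b(lam,T2)) / (1/eps1 + 1/eps2 - 1), over wavelength using Planck's law; the linear emissivity trend is a crude illustrative stand-in for tabulated tungsten data, not the paper's values.

    ```python
    import numpy as np

    C1, C2 = 3.7418e-16, 1.4388e-2          # Planck constants (W m^2, m K)

    def planck(lam, T):
        """Blackbody spectral emissive power E_b(lambda, T), W/m^3."""
        return C1 / (lam**5 * (np.exp(C2 / (lam * T)) - 1.0))

    def net_flux(T1, T2, eps1, eps2, lam):
        """Net radiant flux between infinite parallel plates, summing the
        monochromatic exchange; eps1, eps2 are spectral emissivity funcs."""
        q_lam = (planck(lam, T1) - planck(lam, T2)) / \
                (1.0 / eps1(lam) + 1.0 / eps2(lam) - 1.0)
        return np.trapz(q_lam, lam)         # integrate over wavelength

    lam = np.linspace(0.3e-6, 50e-6, 5000)          # wavelength grid, m
    eps_w = lambda lam: 0.45 - 8.0e3 * lam          # illustrative trend only
    print(net_flux(2400.0, 1200.0, eps_w, eps_w, lam))   # W/m^2
    ```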

  10. A dc model for power switching transistors suitable for computer-aided design and analysis

    NASA Technical Reports Server (NTRS)

    Wilson, P. M.; George, R. T., Jr.; Owen, H. A.; Wilson, T. G.

    1979-01-01

    A model for bipolar junction power switching transistors whose parameters can be readily obtained by the circuit design engineer, and which can be conveniently incorporated into standard computer-based circuit analysis programs is presented. This formulation results from measurements which may be made with standard laboratory equipment. Measurement procedures, as well as a comparison between actual and computed results, are presented.

  11. Conceptual Design Oriented Wing Structural Analysis and Optimization

    NASA Technical Reports Server (NTRS)

    Lau, May Yuen

    1996-01-01

    Airplane optimization has always been the goal of airplane designers. In the conceptual design phase, a designer's goal could be tradeoffs between maximum structural integrity, minimum aerodynamic drag, or maximum stability and control, many times achieved separately. Bringing all of these factors into an iterative preliminary design procedure was time consuming, tedious, and not always accurate. For example, the final weight estimate would often be based upon statistical data from past airplanes. The new design would be classified based on gross characteristics, such as number of engines, wingspan, etc., to see which airplanes of the past most closely resembled the new design. This procedure works well for conventional airplane designs, but not very well for new innovative designs. With the computing power of today, new methods are emerging for the conceptual design phase of airplanes. Using finite element methods, computational fluid dynamics, and other computer techniques, designers can make very accurate disciplinary-analyses of an airplane design. These tools are computationally intensive, and when used repeatedly, they consume a great deal of computing time. In order to reduce the time required to analyze a design and still bring together all of the disciplines (such as structures, aerodynamics, and controls) into the analysis, simplified design computer analyses are linked together into one computer program. These design codes are very efficient for conceptual design. The work in this thesis is focused on a finite element based conceptual design oriented structural synthesis capability (CDOSS) tailored to be linked into ACSYNT.

  12. Security Considerations and Recommendations in Computer-Based Testing

    PubMed Central

    Al-Saleem, Saleh M.

    2014-01-01

    Many organizations and institutions around the globe are moving or planning to move their paper-and-pencil based testing to computer-based testing (CBT). However, this conversion will not be the best option for all kinds of exams and it will require significant resources. These resources may include the preparation of item banks, methods for test delivery, procedures for test administration, and last but not least test security. Security aspects may include but are not limited to the identification and authentication of examinee, the risks that are associated with cheating on the exam, and the procedures related to test delivery to the examinee. This paper will mainly investigate the security considerations associated with CBT and will provide some recommendations for the security of these kinds of tests. We will also propose a palm-based biometric authentication system incorporated with basic authentication system (username/password) in order to check the identity and authenticity of the examinee. PMID:25254250

  13. Security considerations and recommendations in computer-based testing.

    PubMed

    Al-Saleem, Saleh M; Ullah, Hanif

    2014-01-01

    Many organizations and institutions around the globe are moving or planning to move their paper-and-pencil based testing to computer-based testing (CBT). However, this conversion will not be the best option for all kinds of exams and it will require significant resources. These resources may include the preparation of item banks, methods for test delivery, procedures for test administration, and last but not least test security. Security aspects may include but are not limited to the identification and authentication of examinee, the risks that are associated with cheating on the exam, and the procedures related to test delivery to the examinee. This paper will mainly investigate the security considerations associated with CBT and will provide some recommendations for the security of these kinds of tests. We will also propose a palm-based biometric authentication system incorporated with basic authentication system (username/password) in order to check the identity and authenticity of the examinee.
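
    A minimal sketch of the proposed two-factor check, assuming Python's standard library and a mocked palm-matching score in place of a real biometric pipeline (all names, the salt handling, and the threshold are illustrative):

    ```python
    import hashlib, hmac, os

    # Combine a basic username/password check with a biometric match score,
    # as the paper proposes; palm matching is mocked by a similarity score.
    SALT = os.urandom(16)
    DB = {'examinee01': hashlib.pbkdf2_hmac('sha256', b'secret', SALT, 100_000)}

    def authenticate(user, password, palm_score, threshold=0.85):
        stored = DB.get(user)
        if stored is None:
            return False
        candidate = hashlib.pbkdf2_hmac('sha256', password.encode(), SALT, 100_000)
        password_ok = hmac.compare_digest(stored, candidate)
        return password_ok and palm_score >= threshold   # both factors required

    print(authenticate('examinee01', 'secret', palm_score=0.91))  # True
    ```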

  14. Text, Graphics, and Multimedia Materials Employed in Learning a Computer-Based Procedural Task

    ERIC Educational Resources Information Center

    Coffindaffer, Kari Christine Carlson

    2010-01-01

    The present research study investigated the interaction of graphic design students with different forms of software training materials. Four versions of the procedural task instructions were developed (A) Traditional Textbook with Still Images, (B) Modified Text with Integrated Still Images, (C) Onscreen Modified Text with Silent Onscreen Video…

  15. Confidence Intervals for Weighted Composite Scores under the Compound Binomial Error Model

    ERIC Educational Resources Information Center

    Kim, Kyung Yong; Lee, Won-Chan

    2018-01-01

    Reporting confidence intervals with test scores helps test users make important decisions about examinees by providing information about the precision of test scores. Although a variety of estimation procedures based on the binomial error model are available for computing intervals for test scores, these procedures assume that items are randomly…
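
    As a simplified stand-in for the compound binomial procedures the article discusses, the sketch below (Python, SciPy assumed) computes a central score interval under the plain binomial error model, where an examinee with proportion-correct p = x/n is treated as Binomial(n, p) over parallel forms.

    ```python
    from scipy.stats import binom

    def score_interval(x, n, coverage=0.95):
        """Interval for an examinee's raw score under the simple binomial
        error model: future scores on a parallel n-item test are taken to
        be Binomial(n, p) with p = x/n; returns the central interval."""
        p = x / n
        alpha = (1.0 - coverage) / 2.0
        return binom.ppf(alpha, n, p), binom.ppf(1.0 - alpha, n, p)

    print(score_interval(32, 40))   # central 95% interval for the raw score
    ```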

  16. Coupling of Multiple Coulomb Scattering with Energy Loss and Straggling in HZETRN

    NASA Technical Reports Server (NTRS)

    Mertens, Christopher J.; Wilson, John W.; Walker, Steven A.; Tweed, John

    2007-01-01

    The new version of the HZETRN deterministic transport code based on Green's function methods, together with the incorporation of ground-based laboratory boundary conditions, has led to the development of analytical and numerical procedures that include off-axis dispersion of primary ion beams due to small-angle multiple Coulomb scattering. In this paper we present the theoretical formulation and computational procedures for computing ion beam broadening, and a methodology for achieving a self-consistent approach to coupling multiple scattering interactions with ionization energy loss and straggling. Our initial benchmark case is a 60 MeV proton beam on muscle tissue, for which we compare various attributes of beam broadening with Monte Carlo simulations reported in the open literature.

  17. Investigation of advanced counterrotation blade configuration concepts for high speed turboprop systems. Task 4: Advanced fan section aerodynamic analysis computer program user's manual

    NASA Technical Reports Server (NTRS)

    Crook, Andrew J.; Delaney, Robert A.

    1992-01-01

    The computer program user's manual for the ADPACAPES (Advanced Ducted Propfan Analysis Code-Average Passage Engine Simulation) program is included. The objective of the computer program is development of a three-dimensional Euler/Navier-Stokes flow analysis for fan section/engine geometries containing multiple blade rows and multiple spanwise flow splitters. An existing procedure developed by Dr. J. J. Adamczyk and associates at the NASA Lewis Research Center was modified to accept multiple spanwise splitter geometries and simulate engine core conditions. The numerical solution is based upon a finite volume technique with a four stage Runge-Kutta time marching procedure. Multiple blade row solutions are based upon the average-passage system of equations. The numerical solutions are performed on an H-type grid system, with meshes meeting the requirement of maintaining a common axisymmetric mesh for each blade row grid. The analysis was run on several geometry configurations ranging from one to five blade rows and from one to four radial flow splitters. The efficiency of the solution procedure was shown to be the same as the original analysis.

  18. Computer Based Procedures for Field Workers - FY16 Research Activities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oxstrand, Johanna; Bly, Aaron

    The Computer-Based Procedure (CBP) research effort is a part of the Light-Water Reactor Sustainability (LWRS) Program, which provides the technical foundations for licensing and managing the long-term, safe, and economical operation of current nuclear power plants. One of the primary missions of the LWRS program is to help the U.S. nuclear industry adopt new technologies and engineering solutions that facilitate the continued safe operation of the plants and extension of the current operating licenses. One area that could yield tremendous savings in increased efficiency and safety is improving procedure use. A CBP provides the opportunity to incorporate context-driven job aids, such as drawings, photos, and just-in-time training. The presentation of information in CBPs can be much more flexible and tailored to the task, actual plant condition, and operation mode. The dynamic presentation of the procedure guides the user down the path of relevant steps, minimizing the time the field worker spends evaluating plant conditions and decisions related to the applicability of each step. It also minimizes the risk of conducting steps out of order or incorrectly assessing the applicability of steps. This report provides a summary of the main research activities conducted in the Computer-Based Procedures for Field Workers effort since 2012, with a focus on the activities conducted in fiscal year 2016. The activities discussed are the Nuclear Electronic Work Packages – Enterprise Requirements initiative, the development of design guidance for CBPs (which compiles all insights gained through the years of CBP research), the facilitation of vendor studies at the Idaho National Laboratory (INL) Advanced Test Reactor (ATR), a pilot study on enhancing the plant design modification work process, the collection of feedback from a field evaluation study at Plant Vogtle, and the path forward to commercialize INL's CBP system.

  19. Computational study of engine external aerodynamics as a part of multidisciplinary optimization procedure

    NASA Astrophysics Data System (ADS)

    Savelyev, Andrey; Anisimov, Kirill; Kazhan, Egor; Kursakov, Innocentiy; Lysenkov, Alexandr

    2016-10-01

    The paper is devoted to the development of a methodology for optimizing the external aerodynamics of an engine. The optimization procedure is based on numerical solution of the Reynolds-averaged Navier-Stokes equations, and a surrogate-based method is used for the optimization itself. As a test problem, the optimal shape design of a turbofan nacelle is considered. Results of the first stage, which investigates a classic airplane configuration with the engine located under the wing, are presented. The described optimization procedure forms part of the third-generation multidisciplinary optimization framework developed in the AGILE project.
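
    Surrogate-based optimization of this kind can be sketched compactly. The Python fragment below is a minimal illustration assuming scikit-learn and a cheap analytic stand-in for the RANS evaluation: it fits a Gaussian-process surrogate to a small design of experiments and picks the next candidate by minimizing the surrogate prediction.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def expensive_drag(x):
        """Stand-in for a RANS evaluation of a nacelle shape parameter x."""
        return (x - 0.3)**2 + 0.05 * np.sin(12 * x)

    # Sample a small design of experiments and fit the surrogate.
    X = np.linspace(0.0, 1.0, 8).reshape(-1, 1)
    y = expensive_drag(X.ravel())
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2)).fit(X, y)

    # Pick the design that minimizes the surrogate on a fine grid.
    grid = np.linspace(0.0, 1.0, 500).reshape(-1, 1)
    x_best = grid[np.argmin(gp.predict(grid))][0]
    print(x_best, expensive_drag(x_best))   # candidate for the next CFD run
    ```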

  20. A web-based procedure for liver segmentation in CT images

    NASA Astrophysics Data System (ADS)

    Yuan, Rong; Luo, Ming; Wang, Luyao; Xie, Qingguo

    2015-03-01

    Liver segmentation in CT images is acknowledged as a basic and indispensable part of computer-aided liver surgery systems for operation design and risk evaluation. In this paper, we introduce and implement a web-based procedure for liver segmentation to help radiologists and surgeons obtain an accurate result efficiently and expediently. Several clinical datasets are used to evaluate accessibility and accuracy. The procedure appears to be a promising approach for extracting liver volumetry of various shapes. Moreover, users can access the segmentation wherever the Internet is available, without any specific machine.

  1. Verification and Validation: High Charge and Energy (HZE) Transport Codes and Future Development

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Tripathi, Ram K.; Mertens, Christopher J.; Blattnig, Steve R.; Clowdsley, Martha S.; Cucinotta, Francis A.; Tweed, John; Heinbockel, John H.; Walker, Steven A.; Nealy, John E.

    2005-01-01

    In the present paper, we give the formalism for further developing a fully three-dimensional HZETRN code using marching procedures, and the development of a new Green's function code is also discussed. The final Green's function code is capable of validation not only in the space environment but also in ground-based laboratories, using directed beams of ions of specific energy characterized with detailed diagnostic particle spectrometer devices. Special emphasis is given to verification of the computational procedures and validation of the resultant computational model using laboratory and spaceflight measurements. Due to historical requirements, two parallel development paths, one for computational model implementation using marching procedures and one using Green's function techniques, are followed. A new version of the HZETRN code capable of simulating HZE ions with either laboratory or space boundary conditions is under development. Validation of computational models at this time is particularly important for President Bush's initiative to develop infrastructure for human exploration, with the first target demonstration of the Crew Exploration Vehicle (CEV) in low Earth orbit in 2008.

  2. The microcomputer in the dental office: a new diagnostic aid.

    PubMed

    van der Stelt, P F

    1985-06-01

    The first computer applications in the dental office were based upon standard accountancy procedures. Recently, more and more computer applications have become available to meet the specific requirements of dental practice. These include not only business procedures but also facilities to store patient records in the system and retrieve them easily. Another development concerns the automatic calculation of diagnostic data, such as those provided in cephalometric analysis. Furthermore, growth and surgical results in the craniofacial area can be predicted by computerized extrapolation. Computers have been useful in obtaining the patient's anamnestic data objectively and in making decisions based on such data. Computer-aided instruction systems have been developed for undergraduate students to bridge the gap between textbook and patient interaction without the risks inherent in the latter. Radiology will undergo substantial changes as a result of the application of electronic imaging devices instead of conventional radiographic films. Computer-assisted electronic imaging will enable image processing, image enhancement, pattern recognition, and data transmission for consultation and storage purposes. Image processing techniques will increase image quality while still allowing low-dose systems. Standardization of software and system configuration and the development of 'user friendly' programs are the major concerns for the near future.

  3. Design of Energy Storage Reactors for Dc-To-Dc Converters. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Chen, D. Y.

    1975-01-01

    Two methodical approaches to the design of energy-storage reactors for a group of widely used dc-to-dc converters are presented. One approach is based on a steady-state time-domain analysis of piecewise-linearized circuit models of the converters, while the other is based on an analysis of the same circuit models from an energy point of view. The design procedure developed from the first approach includes a search through a stored data file of magnetic core characteristics and results in a list of usable reactor designs that meet a particular converter's requirements. Because of the complexity of this procedure, a digital computer usually is used to implement the design algorithm. The second approach, based on a study of the storage and transfer of energy in the magnetic reactors, leads to a straightforward design procedure which can be implemented with hand calculations. An equation to determine the lower-bound volume of workable cores for given converter design specifications is derived. Using this computed lower-bound volume, a comparative evaluation of various converter configurations is presented.

  4. High-speed technique based on a parallel projection correlation procedure for digital image correlation

    NASA Astrophysics Data System (ADS)

    Zaripov, D. I.; Renfu, Li

    2018-05-01

    The implementation of high-efficiency digital image correlation methods based on a zero-normalized cross-correlation (ZNCC) procedure for high-speed, time-resolved measurements using a high-resolution digital camera is associated with big data processing and is often time-consuming. In order to speed up ZNCC computation, a high-speed technique based on a parallel projection correlation procedure is proposed. The proposed technique uses projections of the interrogation window instead of its two-dimensional field of luminous intensity. This simplification allows acceleration of ZNCC computation by up to 28.8 times compared with directly calculated ZNCC, depending on the size of the interrogation window and region of interest. The results of three synthetic test cases, such as a one-dimensional uniform flow, a linear shear flow and a turbulent boundary-layer flow, are discussed in terms of accuracy. In the latter case, the proposed technique is implemented together with an iterative window-deformation technique. On the basis of the results of the present work, the proposed technique is recommended to be used for initial velocity field calculation, with further correction using more accurate techniques.
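    The projection idea lends itself to a short sketch. The following Python code is an illustrative reading of the technique with invented names: correlate the row and column sums of the interrogation windows instead of their full two-dimensional intensity fields.

        import numpy as np

        def zncc_1d(a, b):
            """Zero-normalized cross-correlation of two 1-D signals."""
            a = a - a.mean()
            b = b - b.mean()
            denom = np.linalg.norm(a) * np.linalg.norm(b)
            return float(a @ b / denom) if denom > 0 else 0.0

        def projection_zncc(win_a, win_b):
            """Approximate the 2-D ZNCC of two windows by correlating their
            projections, reducing an NxN correlation to two length-N ones."""
            score_x = zncc_1d(win_a.sum(axis=0), win_b.sum(axis=0))
            score_y = zncc_1d(win_a.sum(axis=1), win_b.sum(axis=1))
            return 0.5 * (score_x + score_y)

    In a PIV-style search one would score candidate displacements with projection_zncc to obtain the initial velocity field, then refine with the full ZNCC, matching the authors' recommendation.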

  5. A CFD-based aerodynamic design procedure for hypersonic wind-tunnel nozzles

    NASA Technical Reports Server (NTRS)

    Korte, John J.

    1993-01-01

    A new procedure which unifies the best of current classical design practices, computational fluid dynamics (CFD), and optimization procedures is demonstrated for designing the aerodynamic lines of hypersonic wind-tunnel nozzles. The new procedure can be used to design hypersonic wind tunnel nozzles with thick boundary layers, where the classical design procedure has been shown to break down. An efficient CFD code, which solves the parabolized Navier-Stokes (PNS) equations using an explicit upwind algorithm, is coupled to a least-squares (LS) optimization procedure. A LS problem is formulated to minimize the difference between the computed flow field and the objective function, consisting of the centerline Mach number distribution and the exit Mach number and flow angle profiles. The aerodynamic lines of the nozzle are defined using a cubic spline, the slopes of which are optimized with the design procedure. The advantages of the new procedure are that it allows full use of powerful CFD codes in the design process, solves an optimization problem to determine the new contour, can be used to design new nozzles or improve sections of existing nozzles, and automatically compensates the nozzle contour for viscous effects as part of the unified design procedure. The new procedure is demonstrated by designing two Mach 15 nozzles, a Mach 12 nozzle, and a Mach 18 helium nozzle. The flexibility of the procedure is demonstrated by designing the two Mach 15 nozzles using different constraints, the first nozzle for a fixed length and exit diameter and the second nozzle for a fixed length and throat diameter. The computed flow field for the Mach 15 least squares parabolized Navier-Stokes (LS/PNS) designed nozzle is compared with the classically designed nozzle and demonstrates a significant improvement in the flow expansion process and uniform core region.
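    A hedged sketch of the coupling described above, with the PNS solver replaced by a cheap placeholder: the design variables are the spline slopes at the knots, and the least-squares residual is the mismatch with the target centerline Mach distribution. The mock area-Mach relation and all names are assumptions for illustration only.

        import numpy as np
        from scipy.interpolate import CubicHermiteSpline
        from scipy.optimize import least_squares

        x_ctrl = np.linspace(0.0, 1.0, 6)      # spline knots along the axis
        x_eval = np.linspace(0.0, 1.0, 50)     # centerline stations
        mach_target = 1.0 + 14.0 * x_eval      # target: Mach 15 at the exit

        def run_flow_solver(slopes):
            """Placeholder for the PNS solver: centerline Mach numbers for a
            contour defined by the spline slopes at the knots."""
            radii = 1.0 + np.cumsum(np.abs(slopes)) * 0.1   # crude contour
            contour = CubicHermiteSpline(x_ctrl, radii, slopes)
            expansion = (contour(x_eval) - 1.0) / (contour(1.0) - 1.0 + 1e-9)
            return 1.0 + 14.0 * expansion       # mock area-Mach relation

        def residual(slopes):
            return run_flow_solver(slopes) - mach_target

        fit = least_squares(residual, x0=np.full(6, 0.5))
        print("optimized knot slopes:", fit.x)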

  6. A Risk-Analysis Approach to Implementing Web-Based Assessment

    ERIC Educational Resources Information Center

    Ricketts, Chris; Zakrzewski, Stan

    2005-01-01

    Computer-Based Assessment is a risky business. This paper proposes the use of a model for web-based assessment systems that identifies pedagogic, operational, technical (non web-based), web-based and financial risks. The strategies and procedures for risk elimination or reduction arise from risk analysis and management and are the means by which…

  7. A New Material Mapping Procedure for Quantitative Computed Tomography-Based, Continuum Finite Element Analyses of the Vertebra

    PubMed Central

    Unnikrishnan, Ginu U.; Morgan, Elise F.

    2011-01-01

    Inaccuracies in the estimation of material properties and errors in the assignment of these properties into finite element models limit the reliability, accuracy, and precision of quantitative computed tomography (QCT)-based finite element analyses of the vertebra. In this work, a new mesh-independent, material mapping procedure was developed to improve the quality of predictions of vertebral mechanical behavior from QCT-based finite element models. In this procedure, an intermediate step, called the material block model, was introduced to determine the distribution of material properties based on bone mineral density, and these properties were then mapped onto the finite element mesh. A sensitivity study was first conducted on a calibration phantom to understand the influence of the size of the material blocks on the computed bone mineral density. It was observed that varying the material block size produced only marginal changes in the predictions of mineral density. Finite element (FE) analyses were then conducted on a square column-shaped region of the vertebra and also on the entire vertebra in order to study the effect of material block size on the FE-derived outcomes. The predicted values of stiffness for the column and the vertebra decreased with decreasing block size. When these results were compared to those of a mesh convergence analysis, it was found that the influence of element size on vertebral stiffness was less than that of the material block size. This mapping procedure allows the material properties in a finite element study to be determined based on the block size required for an accurate representation of the material field, while the size of the finite elements can be selected independently and based on the required numerical accuracy of the finite element solution. The mesh-independent, material mapping procedure developed in this study could be particularly helpful in improving the accuracy of finite element analyses of vertebroplasty and spine metastases, as these analyses typically require mesh refinement at the interfaces between distinct materials. Moreover, the mapping procedure is not specific to the vertebra and could thus be applied to many other anatomic sites. PMID:21823740
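    As an illustration of the two-step mapping (block-average the QCT densities, then assign block properties to element centroids), here is a minimal numpy sketch; the density-modulus power law and all names are placeholders, not the paper's calibration:

        import numpy as np

        def block_average(density, block):
            """Average a 3-D QCT density array over cubic material blocks."""
            nx, ny, nz = (s // block for s in density.shape)
            d = density[:nx * block, :ny * block, :nz * block]
            return d.reshape(nx, block, ny, block, nz, block).mean(axis=(1, 3, 5))

        def assign_moduli(centroids, block_density, voxel_mm, block):
            """Map block moduli onto finite element centroids (mm coordinates)."""
            moduli = 8500.0 * np.maximum(block_density, 0.0) ** 1.5  # placeholder E(rho)
            idx = (centroids // (voxel_mm * block)).astype(int)
            idx = np.clip(idx, 0, np.array(moduli.shape) - 1)
            return moduli[idx[:, 0], idx[:, 1], idx[:, 2]]

    Because the block size enters only through block_average, it can be chosen for a faithful material field while the element size is refined independently, which is the decoupling the paper argues for.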

  8. Computer Use of a Medical Dictionary to Select Search Words.

    ERIC Educational Resources Information Center

    O'Connor, John

    1986-01-01

    Explains an experiment in text-searching retrieval for cancer questions which developed and used computer procedures (via human simulation) to select search words from medical dictionaries. This study is based on an earlier one in which search words were humanly selected, and the recall results of the two studies are compared. (Author/LRW)

  9. Strategic Flexibility in Computational Estimation for Chinese- and Canadian-Educated Adults

    ERIC Educational Resources Information Center

    Xu, Chang; Wells, Emma; LeFevre, Jo-Anne; Imbo, Ineke

    2014-01-01

    The purpose of the present study was to examine factors that influence strategic flexibility in computational estimation for Chinese- and Canadian-educated adults. Strategic flexibility was operationalized as the percentage of trials on which participants chose the problem-based procedure that best balanced proximity to the correct answer with…

  10. IP Addressing: Problem-Based Learning Approach on Computer Networks

    ERIC Educational Resources Information Center

    Jevremovic, Aleksandar; Shimic, Goran; Veinovic, Mladen; Ristic, Nenad

    2017-01-01

    The case study presented in this paper describes the pedagogical aspects and experience gathered while using an e-learning tool named IPA-PBL. Its main purpose is to provide additional motivation for adopting theoretical principles and procedures in a computer networks course. In the proposed model, the sequencing of activities of the learning…

  11. IDEA Technical Report No. 2. Description of Data Base, 1976-77.

    ERIC Educational Resources Information Center

    Cashin, William E.; Slawson, Hugh M.

    The data and computational procedures used by the IDEA system at Kansas State University (during the 1976-77 academic year) to interpret ratings of teacher performance are described in this technical report. The computations for each of the seven parts (evaluation, course description, students' self ratings, methods, additional questions,…

  12. IDEA Technical Report No. 3. Description of Data Base, 1977-78.

    ERIC Educational Resources Information Center

    Cashin, William E.; Slawson, Hugh M.

    The data and computational procedures used by the IDEA System during the 1977-78 academic year at Kansas State University to interpret ratings of teacher performance are described in this technical report. The computations for each of the seven parts (evaluation, course description, students' self-ratings, methods, additional questions, diagnostic…

  13. Prediction of Combustion Gas Deposit Compositions

    NASA Technical Reports Server (NTRS)

    Kohl, F. J.; Mcbride, B. J.; Zeleznik, F. J.; Gordon, S.

    1985-01-01

    A demonstrated procedure is used to accurately predict the chemical compositions of complicated deposit mixtures. NASA Lewis Research Center's Computer Program for Calculation of Complex Chemical Equilibrium Compositions (CEC) is used in conjunction with the Computer Program for Calculation of Ideal Gas Thermodynamic Data (PAC) and the resulting Thermodynamic Data Base (THDATA) to predict deposit compositions from metal- or mineral-seeded combustion processes.

  14. Using Agent Base Models to Optimize Large Scale Network for Large System Inventories

    NASA Technical Reports Server (NTRS)

    Shameldin, Ramez Ahmed; Bowling, Shannon R.

    2010-01-01

    The aim of this paper is to use agent-based models (ABM) to optimize large-scale network handling capabilities for large system inventories and to implement strategies for the purpose of reducing capital expenses. The models used in this paper rely on computational algorithms and procedure implementations developed in Matlab to simulate agent-based models, run on clusters that provide the high-performance computing needed for parallel execution. In both cases, a model is defined as a compilation of a set of structures and processes assumed to underlie the behavior of a network system.

  15. Fuel Injector Design Optimization for an Annular Scramjet Geometry

    NASA Technical Reports Server (NTRS)

    Steffen, Christopher J., Jr.

    2003-01-01

    A four-parameter, three-level, central composite experiment design has been used to optimize the configuration of an annular scramjet injector geometry using computational fluid dynamics. The computational fluid dynamic solutions played the role of computer experiments, and response surface methodology was used to capture the simulation results for mixing efficiency and total pressure recovery within the scramjet flowpath. An optimization procedure, based upon the response surface results of mixing efficiency, was used to compare the optimal design configuration against the target efficiency value of 92.5%. The results of three different optimization procedures are presented and all point to the need to look outside the current design space for different injector geometries that can meet or exceed the stated mixing efficiency target.
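    A compact sketch of the response-surface step under stated assumptions: fit a full quadratic surface to the design-point results (synthetic stand-ins below for the CFD runs) and search it for the mixing-efficiency optimum.

        import numpy as np
        from itertools import combinations_with_replacement
        from scipy.optimize import minimize

        def quad_features(x):
            """Constant, linear, and quadratic terms of design points."""
            x = np.atleast_2d(x)
            cols = [np.ones(len(x))] + [x[:, i] for i in range(x.shape[1])]
            cols += [x[:, i] * x[:, j] for i, j in
                     combinations_with_replacement(range(x.shape[1]), 2)]
            return np.column_stack(cols)

        rng = np.random.default_rng(0)
        X = rng.uniform(-1, 1, size=(25, 4))            # coded design points
        y = 0.9 - 0.05 * ((X - 0.2) ** 2).sum(axis=1)   # stand-in for CFD results

        beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)
        surface = lambda x: (quad_features(x) @ beta)[0]
        opt = minimize(lambda x: -surface(x), x0=np.zeros(4), bounds=[(-1, 1)] * 4)
        print("predicted optimum:", opt.x, "efficiency:", surface(opt.x))

    Comparing the surface optimum against the 92.5% target, as the paper does, then indicates whether the design space itself must be enlarged.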

  16. A fast bottom-up algorithm for computing the cut sets of noncoherent fault trees

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Corynen, G.C.

    1987-11-01

    An efficient procedure for finding the cut sets of large fault trees has been developed. Designed to address coherent or noncoherent systems, dependent events, shared or common-cause events, the method - called SHORTCUT - is based on a fast algorithm for transforming a noncoherent tree into a quasi-coherent tree (COHERE), and on a new algorithm for reducing cut sets (SUBSET). To assure sufficient clarity and precision, the procedure is discussed in the language of simple sets, which is also developed in this report. Although the new method has not yet been fully implemented on the computer, we report theoretical worst-case estimates of its computational complexity. 12 refs., 10 figs.
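    The cut-set reduction step invites a small generic sketch; this is the textbook minimization (discard any cut set that contains another), not Corynen's SUBSET algorithm itself:

        def minimize_cut_sets(cut_sets):
            """Keep only minimal cut sets (no retained set contains another).

            Sorting by size means each candidate need only be checked
            against the smaller sets already accepted."""
            minimal = []
            for cs in sorted(map(frozenset, cut_sets), key=len):
                if not any(kept <= cs for kept in minimal):
                    minimal.append(cs)
            return minimal

        cut_sets = [{"A", "B"}, {"A", "B", "C"}, {"D"}, {"C", "D"}]
        print(minimize_cut_sets(cut_sets))   # keeps {D} and {A, B} only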

  17. Direct computation of turbulence and noise

    NASA Technical Reports Server (NTRS)

    Berman, C.; Gordon, G.; Karniadakis, G.; Batcho, P.; Jackson, E.; Orszag, S.

    1991-01-01

    Jet exhaust turbulence noise is computed using a time dependent solution of the three dimensional Navier-Stokes equations to supply the source terms for an acoustic computation based on the Phillips convected wave equation. An extrapolation procedure is then used to determine the far field noise spectrum in terms of the near field sound. This will lay the groundwork for studies of more complex flows typical of noise suppression nozzles.

  18. A dc model for power switching transistors suitable for computer-aided design and analysis

    NASA Technical Reports Server (NTRS)

    Wilson, P. M.; George, R. T., Jr.; Owen, H. A., Jr.; Wilson, T. G.

    1979-01-01

    The proposed dc model for bipolar junction power switching transistors is based on measurements which may be made with standard laboratory equipment. Those nonlinearities which are of importance to power electronics design are emphasized. Measurement procedures are discussed in detail. A model formulation adapted for use with a computer program is presented, and a comparison between actual and computer-generated results is made.

  19. Max-AUC Feature Selection in Computer-Aided Detection of Polyps in CT Colonography

    PubMed Central

    Xu, Jian-Wu; Suzuki, Kenji

    2014-01-01

    We propose a feature selection method based on a sequential forward floating selection (SFFS) procedure to improve the performance of a classifier in computerized detection of polyps in CT colonography (CTC). The feature selection method is coupled with a nonlinear support vector machine (SVM) classifier. Unlike the conventional linear method based on Wilks' lambda, the proposed method selected the most relevant features that would maximize the area under the receiver operating characteristic curve (AUC), which directly maximizes classification performance, evaluated based on AUC value, in the computer-aided detection (CADe) scheme. We presented two variants of the proposed method with different stopping criteria used in the SFFS procedure. The first variant searched all feature combinations allowed in the SFFS procedure and selected the subsets that maximize the AUC values. The second variant performed a statistical test at each step during the SFFS procedure, and it was terminated if the increase in the AUC value was not statistically significant. The advantage of the second variant is its lower computational cost. To test the performance of the proposed method, we compared it against the popular stepwise feature selection method based on Wilks' lambda for a colonic-polyp database (25 polyps and 2624 nonpolyps). We extracted 75 morphologic, gray-level-based, and texture features from the segmented lesion candidate regions. The two variants of the proposed feature selection method chose 29 and 7 features, respectively. Two SVM classifiers trained with these selected features yielded a 96% by-polyp sensitivity at false-positive (FP) rates of 4.1 and 6.5 per patient, respectively. Experiments showed a significant improvement in the performance of the classifier with the proposed feature selection method over that with the popular stepwise feature selection based on Wilks' lambda that yielded 18.0 FPs per patient at the same sensitivity level. PMID:24608058
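    A sketch of the selection loop under stated assumptions, using scikit-learn: it implements plain forward selection with the cross-validated AUC criterion and omits the floating (conditional exclusion) step of full SFFS.

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        def forward_select_max_auc(X, y, n_features):
            """Greedily add the feature that most increases cross-validated AUC."""
            selected, remaining = [], list(range(X.shape[1]))
            clf = SVC(kernel="rbf", gamma="scale")

            def auc_of(cols):
                return cross_val_score(clf, X[:, cols], y,
                                       scoring="roc_auc", cv=5).mean()

            while remaining and len(selected) < n_features:
                best = max(remaining, key=lambda f: auc_of(selected + [f]))
                selected.append(best)
                remaining.remove(best)
            return selected

    The papers' second variant corresponds to stopping this loop once the AUC gain is no longer statistically significant, trading a little search breadth for much less computation.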

  20. Max-AUC feature selection in computer-aided detection of polyps in CT colonography.

    PubMed

    Xu, Jian-Wu; Suzuki, Kenji

    2014-03-01

    We propose a feature selection method based on a sequential forward floating selection (SFFS) procedure to improve the performance of a classifier in computerized detection of polyps in CT colonography (CTC). The feature selection method is coupled with a nonlinear support vector machine (SVM) classifier. Unlike the conventional linear method based on Wilks' lambda, the proposed method selected the most relevant features that would maximize the area under the receiver operating characteristic curve (AUC), which directly maximizes classification performance, evaluated based on AUC value, in the computer-aided detection (CADe) scheme. We presented two variants of the proposed method with different stopping criteria used in the SFFS procedure. The first variant searched all feature combinations allowed in the SFFS procedure and selected the subsets that maximize the AUC values. The second variant performed a statistical test at each step during the SFFS procedure, and it was terminated if the increase in the AUC value was not statistically significant. The advantage of the second variant is its lower computational cost. To test the performance of the proposed method, we compared it against the popular stepwise feature selection method based on Wilks' lambda for a colonic-polyp database (25 polyps and 2624 nonpolyps). We extracted 75 morphologic, gray-level-based, and texture features from the segmented lesion candidate regions. The two variants of the proposed feature selection method chose 29 and 7 features, respectively. Two SVM classifiers trained with these selected features yielded a 96% by-polyp sensitivity at false-positive (FP) rates of 4.1 and 6.5 per patient, respectively. Experiments showed a significant improvement in the performance of the classifier with the proposed feature selection method over that with the popular stepwise feature selection based on Wilks' lambda that yielded 18.0 FPs per patient at the same sensitivity level.

  1. Thermal-stress analysis for a wood composite blade

    NASA Technical Reports Server (NTRS)

    Fu, K. C.; Harb, A.

    1984-01-01

    A thermal-stress analysis of a wind turbine blade made of wood composite material is reported. First, the governing partial differential equation on heat conduction is derived; then, a finite element procedure using a variational approach is developed for the solution of the governing equation. Thus, the temperature distribution throughout the blade is determined. Next, based on the temperature distribution, a finite element procedure using a potential energy approach is applied to determine the thermal-stress distribution. A set of results, considered satisfactory, is obtained by computer. All computer programs are contained in the report.

  2. Self-Motion Perception: Assessment by Real-Time Computer Generated Animations

    NASA Technical Reports Server (NTRS)

    Parker, Donald E.

    1999-01-01

    Our overall goal is to develop materials and procedures for assessing vestibular contributions to spatial cognition. The specific objective of the research described in this paper is to evaluate computer-generated animations as potential tools for studying self-orientation and self-motion perception. Specific questions addressed in this study included the following. First, does a non-verbal perceptual reporting procedure using real-time animations improve assessment of spatial orientation? Are the reports reliable? Second, do reports confirm expectations based on stimuli to the vestibular apparatus? Third, can reliable reports be obtained when self-motion description vocabulary training is omitted?

  3. Acquiring, Representing, and Evaluating a Competence Model of Diagnostic Strategy.

    ERIC Educational Resources Information Center

    Clancey, William J.

    This paper describes NEOMYCIN, a computer program that models one physician's diagnostic reasoning within a limited area of medicine. NEOMYCIN's knowledge base and reasoning procedure constitute a model of how human knowledge is organized and how it is used in diagnosis. The hypothesis is tested that such a procedure can be used to simulate both…

  4. Numerical solutions of a control problem governed by functional differential equations

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Thrift, P. R.; Burns, J. A.; Cliff, E. M.

    1978-01-01

    A numerical procedure is proposed for solving optimal control problems governed by linear retarded functional differential equations. The procedure is based on the idea of 'averaging approximations', due to Banks and Burns (1975). For illustration, numerical results generated on an IBM 370/158 computer, which demonstrate the rapid convergence of the method, are presented.

  5. What does an MRI scan cost?

    PubMed

    Young, David W

    2015-11-01

    Historically, hospital departments have computed the costs of individual tests or procedures using the ratio of cost to charges (RCC) method, which can produce inaccurate results. To determine a more accurate cost of a test or procedure, the activity-based costing (ABC) method must be used. Accurate cost calculations will ensure reliable information about the profitability of a hospital's DRGs.
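    A toy contrast of the two costing methods, with every number invented for illustration:

        # RCC: one department-wide ratio scales the procedure's charge.
        def rcc_cost(charge, dept_costs, dept_charges):
            return charge * dept_costs / dept_charges

        # ABC: sum resource use over the procedure's actual activities.
        def abc_cost(activities):
            return sum(minutes / 60.0 * hourly_rate
                       for minutes, hourly_rate in activities)

        # Hypothetical MRI scan: $2,000 charge in a department with $8M of
        # costs on $20M of charges, versus its activity breakdown.
        print(rcc_cost(2000, 8e6, 20e6))                  # 800.0
        print(abc_cost([(30, 540.0),    # scanner time, machine + staff
                        (20, 90.0),     # technologist preparation
                        (15, 240.0)]))  # radiologist reading: total 360.0

    The gap between the two figures (800 versus 360 here) is exactly the distortion the article warns the RCC method can introduce.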

  6. Implant treatment planning regarding augmentation procedures: panoramic radiographs vs. cone beam computed tomography images.

    PubMed

    Dagassan-Berndt, Dorothea C; Zitzmann, Nicola U; Walter, Clemens; Schulze, Ralf K W

    2016-08-01

    To evaluate the impact of cone beam computed tomography (CBCT) imaging on treatment planning regarding augmentation procedures for implant placement. Panoramic radiographs and CBCT images of 40 patients requesting single-tooth implants in 59 sites were retrospectively analyzed by six specialists in implantology, and treatment planning was performed. Therapeutic recommendations were compared with the surgical protocol performed initially. Bone height estimation from panoramic radiographs yielded higher measures and greater variability compared with CBCT. The suggested treatment plan for lateral and vertical augmentation procedures based on CBCT or panoramic radiographs coincided for 55-72% of the cases. A trend toward a more invasive augmentation procedure was seen when planning was based on CBCT. Panoramic radiography revealed 57-63% (lateral) vs. 67% (vertical augmentation) congruent plans in agreement with surgery. Among the dissenting sites, there was a trend toward less invasive planning for lateral augmentation with panoramic radiographs, while vertical augmentation requirements were more often judged more invasive when based on CBCT. Vertical augmentation requirements can be adequately determined from panoramic radiographs. In difficult cases with a deficient lateral alveolar bone, the augmentation schedule may be better evaluated from CBCT to avoid the underestimation that occurs more frequently when based on panoramic radiographs only. However, overall, radiographic interpretation and diagnostic thinking accuracy seem to depend mainly on the opinion of the observers. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  7. An analytical procedure and automated computer code used to design model nozzles which meet MSFC base pressure similarity parameter criteria. [space shuttle

    NASA Technical Reports Server (NTRS)

    Sulyma, P. R.

    1980-01-01

    Fundamental equations and similarity definition and application are described as well as the computational steps of a computer program developed to design model nozzles for wind tunnel tests conducted to define power-on aerodynamic characteristics of the space shuttle over a range of ascent trajectory conditions. The computer code capabilities, a user's guide for the model nozzle design program, and the output format are examined. A program listing is included.

  8. Patient comprehension of an interactive, computer-based information program for cardiac catheterization: a comparison with standard information.

    PubMed

    Tait, Alan R; Voepel-Lewis, Terri; Moscucci, Mauro; Brennan-Martinez, Colleen M; Levine, Robert

    2009-11-09

    Several studies suggest that standard verbal and written consent information for treatment is often poorly understood by patients and their families. The present study examines the effect of an interactive computer-based information program on patients' understanding of cardiac catheterization. Adult patients scheduled to undergo diagnostic cardiac catheterization (n = 135) were randomized to receive details about the procedure using either standard institutional verbal and written information (SI) or interactive computerized information (ICI) preloaded on a laptop computer. Understanding was measured using semistructured interviews at baseline (ie, before information was given), immediately following cardiac catheterization (early understanding), and 2 weeks after the procedure (late understanding). The primary study outcome was the change from baseline to early understanding between groups. Subjects randomized to the ICI intervention had significantly greater improvement in understanding compared with those who received the SI (net change, 0.81; 95% confidence interval, 0.01-1.6). Significantly more subjects in the ICI group had complete understanding of the risks of cardiac catheterization (53.6% vs 23.1%) (P = .001) and options for treatment (63.2% vs 46.2%) (P = .048) compared with the SI group. Several predictors of improved understanding were identified, including baseline knowledge (P < .001), younger age (P = .002), and use of the ICI (P = .003). Results suggest that an interactive computer-based information program for cardiac catheterization may be more effective in improving patient understanding than conventional written consent information. This technology, therefore, holds promise as a means of presenting understandable detailed information regarding a variety of medical treatments and procedures.

  9. Computationally efficient stochastic optimization using multiple realizations

    NASA Astrophysics Data System (ADS)

    Bayer, P.; Bürger, C. M.; Finkel, M.

    2008-02-01

    The presented study is concerned with computationally efficient methods for solving stochastic optimization problems involving multiple equally probable realizations of uncertain parameters. A new and straightforward technique is introduced that is based on dynamically ordering the stack of realizations during the search procedure. The rationale is that a small number of critical realizations govern the output of a reliability-based objective function. Using a problem typical of designing a water supply well field, several variants of this "stack ordering" approach are tested. The results are statistically assessed in terms of optimality and nominal reliability. This study demonstrates that simply ordering a given set of 500 realizations while applying an evolutionary search algorithm can save about half of the model runs without compromising the optimization procedure. More advanced variants of stack ordering can, if properly configured, save more than 97% of the computational effort that would be required if the entire set of realizations were considered. The findings herein are promising for similar problems of water management and reliability-based design in general, and particularly for non-convex problems that require heuristic search techniques.
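    A minimal sketch of the "stack ordering" idea with invented names: realizations that fail a candidate design are promoted to the front of the stack, so later infeasible candidates are usually rejected after only a few model runs.

        def passes_all(design, stack, simulate, target=1.0):
            """Evaluate a design against an ordered stack of realizations.

            Returns False at the first failing realization and promotes
            that realization to the front for the next candidate."""
            for i, realization in enumerate(stack):
                if simulate(design, realization) < target:
                    stack.insert(0, stack.pop(i))   # promote the critical case
                    return False
            return True

        # Inside an evolutionary search over designs, e.g.:
        #   feasible = passes_all(candidate, stack, run_groundwater_model)
        # most infeasible candidates now cost a few runs instead of all 500.

    Requiring every realization to pass corresponds to a 100% nominal reliability target; the paper's more advanced variants re-rank and truncate the stack in subtler ways.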

  10. Building a computer program to support children, parents, and distraction during healthcare procedures.

    PubMed

    Hanrahan, Kirsten; McCarthy, Ann Marie; Kleiber, Charmaine; Ataman, Kaan; Street, W Nick; Zimmerman, M Bridget; Ersig, Anne L

    2012-10-01

    This secondary data analysis used data mining methods to develop predictive models of child risk for distress during a healthcare procedure. Data used came from a study that predicted factors associated with children's responses to an intravenous catheter insertion while parents provided distraction coaching. From the 255 items used in the primary study, 44 predictive items were identified through automatic feature selection and used to build support vector machine regression models. Models were validated using multiple cross-validation tests and by comparing variables identified as explanatory in the traditional versus support vector machine regression. Rule-based approaches were applied to the model outputs to identify overall risk for distress. A decision tree was then applied to evidence-based instructions for tailoring distraction to characteristics and preferences of the parent and child. The resulting decision support computer application, titled Children, Parents and Distraction, is being used in research. Future use will support practitioners in deciding the level and type of distraction intervention needed by a child undergoing a healthcare procedure.

  11. The development of an automatically produced cholangiography procedure using the reconstruction of portal-phase multidetector-row computed tomography images: preliminary experience.

    PubMed

    Hirose, Tomoaki; Igami, Tsuyoshi; Koga, Kusuto; Hayashi, Yuichiro; Ebata, Tomoki; Yokoyama, Yukihiro; Sugawara, Gen; Mizuno, Takashi; Yamaguchi, Junpei; Mori, Kensaku; Nagino, Masato

    2017-03-01

    Fusion angiography using reconstructed multidetector-row computed tomography (MDCT) images and cholangiography using reconstructed images from MDCT with a cholangiographic agent include an anatomical gap due to the different periods of MDCT scanning. To overcome such gaps, we attempted to develop a cholangiography procedure that automatically reconstructs a cholangiogram from portal-phase MDCT images. The automatically produced cholangiography procedure utilized an original software program that was developed by the Graduate School of Information Science, Nagoya University. This program structured 5 candidate biliary tracts and automatically selected one as the candidate for cholangiography. The clinical value of the automatically produced cholangiography procedure was estimated based on a comparison with manually produced cholangiography. Automatically produced cholangiograms were reconstructed for 20 patients who underwent MDCT scanning before biliary drainage for distal biliary obstruction. The procedure extracted the 5 main biliary branches and the 21 subsegmental biliary branches in 55% and 25% of the cases, respectively. The extent of aberrant connections and aberrant extractions outside the biliary tract was acceptable. Among all of the cholangiograms, 5 were clinically applied with no correction, 8 were applied with modest improvements, and 3 produced a correct cholangiography before automatic selection. Although our procedure requires further improvement based on the analysis of additional patient data, it may represent an alternative to direct cholangiography in the future.

  12. Enabling Computational Nanotechnology through JavaGenes in a Cycle Scavenging Environment

    NASA Technical Reports Server (NTRS)

    Globus, Al; Menon, Madhu; Srivastava, Deepak; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    A genetic algorithm procedure is developed and implemented for fitting parameters of many-body inter-atomic force field functions for simulating nanotechnology atomistic applications, using portable Java on cycle-scavenged heterogeneous workstations. Given a physics-based analytic functional form for the force field, correlated parameters in a multi-dimensional environment are typically chosen to fit properties given either by experiments and/or by higher-accuracy quantum mechanical simulations. The implementation automates this tedious procedure using an evolutionary computing algorithm operating on hundreds of cycle-scavenged computers. As a proof of concept, we demonstrate the procedure for evaluating the Stillinger-Weber (S-W) potential by (a) reproducing the published parameters for Si using S-W energies in the fitness function, and (b) evolving a "new" set of parameters using semi-empirical tight-binding energies in the fitness function. The "new" parameters are significantly better suited for Si cluster energies and forces as compared to even the published S-W potential.
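    A skeletal version of such a fitting loop, hedged: truncation selection with Gaussian mutation stands in for JavaGenes' actual operators, and model_energy is a placeholder for the S-W (or tight-binding-referenced) energy evaluation.

        import numpy as np

        rng = np.random.default_rng(42)

        def evolve_parameters(configs, ref_energies, model_energy, n_params,
                              pop=50, generations=200, sigma=0.1):
            """Fit force-field parameters by minimizing squared energy errors."""
            def loss(params):
                errs = [model_energy(params, c) - e
                        for c, e in zip(configs, ref_energies)]
                return float(np.sum(np.square(errs)))

            population = rng.normal(size=(pop, n_params))
            for _ in range(generations):
                scores = np.array([loss(p) for p in population])
                parents = population[np.argsort(scores)[: pop // 2]]  # best half
                children = parents[rng.integers(len(parents),
                                                size=pop - len(parents))]
                children = children + rng.normal(scale=sigma, size=children.shape)
                population = np.vstack([parents, children])
            return min(population, key=loss)

    Each loss evaluation is independent of the others, which is what makes the cycle-scavenging, hundreds-of-workstations deployment described above natural.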

  13. [Application of virtual instrumentation technique in toxicological studies].

    PubMed

    Moczko, Jerzy A

    2005-01-01

    Research investigations frequently require a direct connection of measuring equipment to the computer. The virtual instrumentation technique considerably facilitates the programming of sophisticated acquisition-and-analysis procedures. In the standard approach, these two steps are performed sequentially with separate software tools: the acquired data are transferred, using the export/import procedures of one program, to another program that executes the next step of the analysis. This procedure is cumbersome, time consuming, and a potential source of errors. In 1987 National Instruments Corporation introduced the LabVIEW language, based on the concept of graphical programming. In contrast to conventional textual languages, it allows the researcher to concentrate on the problem to be solved and omit all syntactical rules. Programs developed in LabVIEW are called virtual instruments (VI) and are portable among different computer platforms such as PCs, Macintoshes, Sun SPARCstations, Concurrent PowerMAX stations, and HP PA/RISK workstations. This flexibility ensures that programs prepared for one particular platform will also be appropriate for another. In the present paper, the basic principles of connecting research equipment to computer systems are described.

  14. Extension of a three-dimensional viscous wing flow analysis

    NASA Technical Reports Server (NTRS)

    Weinberg, Bernard C.; Chen, Shyi-Yaung; Thoren, Stephen J.; Shamroth, Stephen J.

    1990-01-01

    Three-dimensional unsteady viscous effects can significantly influence the performance of fixed and rotary wing aircraft. These effects are important both in flows about helicopter rotors in forward flight and in flows about 3-D (swept and tapered) supercritical wings. A computational procedure for calculating such flow fields is developed and would therefore be of great value in the design process as well as in understanding the corresponding flow phenomena. The procedure is based upon an alternating direction technique employing the Linearized Block Implicit method for solving 3-D viscous flow problems. In order to demonstrate the viability of this method, 2-D and 3-D problems are computed. These include the flow over a 2-D NACA 0012 airfoil under steady and oscillating conditions, and the steady, skewed, 3-D flow on a flat plate. Although actual 3-D flows over wings were not obtained, the groundwork was laid for considering such flows. A description of the computational procedure and results is given.

  15. Alternative Modal Basis Selection Procedures for Nonlinear Random Response Simulation

    NASA Technical Reports Server (NTRS)

    Przekop, Adam; Guo, Xinyun; Rizzi, Stephen A.

    2010-01-01

    Three procedures to guide selection of an efficient modal basis in a nonlinear random response analysis are examined. One method is based only on proper orthogonal decomposition, while the other two additionally involve smooth orthogonal decomposition. Acoustic random response problems are employed to assess the performance of the three modal basis selection approaches. A thermally post-buckled beam exhibiting snap-through behavior, a shallowly curved arch in the auto-parametric response regime and a plate structure are used as numerical test articles. The results of the three reduced-order analyses are compared with the results of the computationally taxing simulation in the physical degrees of freedom. For the cases considered, all three methods are shown to produce modal bases resulting in accurate and computationally efficient reduced-order nonlinear simulations.
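    The proper orthogonal decomposition part of these procedures reduces to a singular value decomposition of the response snapshot matrix; a standard sketch follows (the smooth orthogonal decomposition used by the other two methods is not shown).

        import numpy as np

        def pod_basis(snapshots, energy=0.999):
            """POD modes of a (n_dof, n_snapshots) response matrix.

            Returns the left singular vectors capturing the requested
            fraction of the snapshot variance."""
            centered = snapshots - snapshots.mean(axis=1, keepdims=True)
            U, s, _ = np.linalg.svd(centered, full_matrices=False)
            captured = np.cumsum(s ** 2) / np.sum(s ** 2)
            k = min(int(np.searchsorted(captured, energy)) + 1, U.shape[1])
            return U[:, :k]

    Projecting the equations of motion onto these few modes is what turns the "computationally taxing" simulation in physical degrees of freedom into the reduced-order one compared above.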

  16. Finite element design procedure for correcting the coining die profiles

    NASA Astrophysics Data System (ADS)

    Alexandrino, Paulo; Leitão, Paulo J.; Alves, Luis M.; Martins, Paulo A. F.

    2018-05-01

    This paper presents a new finite element based design procedure for correcting the coining die profiles in order to optimize the distribution of pressure and the alignment of the resultant vertical force at the end of the die stroke. The procedure avoids time consuming and costly try-outs, does not interfere with the creative process of the sculptors and extends the service life of the coining dies by significantly decreasing the applied pressure and bending moments. The numerical simulations were carried out in a computer program based on the finite element flow formulation that is currently being developed by the authors in collaboration with the Portuguese Mint. A new experimental procedure based on the stack compression test is also proposed for determining the stress-strain curve of the materials directly from the coin blanks.

  17. Accuracy of the microcanonical Lanczos method to compute real-frequency dynamical spectral functions of quantum models at finite temperatures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Okamoto, Satoshi; Alvarez, Gonzalo; Dagotto, Elbio

    We examine the accuracy of the microcanonical Lanczos method (MCLM) developed by Long et al. [Phys. Rev. B 68, 235106 (2003)] to compute dynamical spectral functions of interacting quantum models at finite temperatures. The MCLM is based on the microcanonical ensemble, which becomes exact in the thermodynamic limit. To apply the microcanonical ensemble at a fixed temperature, one has to find energy eigenstates with the energy eigenvalue corresponding to the internal energy in the canonical ensemble. Here in this paper, we propose to use thermal pure quantum state methods by Sugiura and Shimizu [Phys. Rev. Lett. 111, 010401 (2013)] to obtain the internal energy. After obtaining the energy eigenstates using the Lanczos diagonalization method, dynamical quantities are computed via a continued fraction expansion, a standard procedure for Lanczos-based numerical methods. Using one-dimensional antiferromagnetic Heisenberg chains with S = 1/2, we demonstrate that the proposed procedure is reasonably accurate, even for relatively small systems.

  18. Accuracy of the microcanonical Lanczos method to compute real-frequency dynamical spectral functions of quantum models at finite temperatures

    DOE PAGES

    Okamoto, Satoshi; Alvarez, Gonzalo; Dagotto, Elbio; ...

    2018-04-20

    We examine the accuracy of the microcanonical Lanczos method (MCLM) developed by Long et al. [Phys. Rev. B 68, 235106 (2003)] to compute dynamical spectral functions of interacting quantum models at finite temperatures. The MCLM is based on the microcanonical ensemble, which becomes exact in the thermodynamic limit. To apply the microcanonical ensemble at a fixed temperature, one has to find energy eigenstates with the energy eigenvalue corresponding to the internal energy in the canonical ensemble. Here in this paper, we propose to use thermal pure quantum state methods by Sugiura and Shimizu [Phys. Rev. Lett. 111, 010401 (2013)] to obtain the internal energy. After obtaining the energy eigenstates using the Lanczos diagonalization method, dynamical quantities are computed via a continued fraction expansion, a standard procedure for Lanczos-based numerical methods. Using one-dimensional antiferromagnetic Heisenberg chains with S = 1/2, we demonstrate that the proposed procedure is reasonably accurate, even for relatively small systems.
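    The continued-fraction step can be sketched compactly. The code below assumes the Lanczos tridiagonal coefficients have already been computed from the chosen eigenstate, and it uses my own indexing convention (b[n] couples states n and n+1, so b is one element shorter than a):

        import numpy as np

        def spectral_function(omega, a, b, eta=0.05):
            """A(w) from Lanczos coefficients via a continued fraction.

            Evaluates G(z) = 1/(z - a0 - b0^2/(z - a1 - ...)) bottom-up at
            z = w + i*eta and returns -Im G / pi (normalized start vector)."""
            z = np.asarray(omega, dtype=complex) + 1j * eta
            g = np.zeros_like(z)
            for n in reversed(range(len(a))):
                coupling = b[n] ** 2 * g if n < len(a) - 1 else 0.0
                g = 1.0 / (z - a[n] - coupling)
            return -g.imag / np.pi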

  19. Unusual anchor pins placement and protocol deviations during computer-based implant placement in a patient with severe mandibular bone atrophy. A case report.

    PubMed

    Pozzi, A; Tallarico, M; Barlattani, A

    2012-06-01

    The aim of the study was to describe an unusual method to stabilize the surgical template during the treatment of a severely resorbed edentulous mandible by means of computer-guided implant surgery. Pre-operative computer-based planning revealed the difficulty of stabilizing the surgical template in a 67-year-old healthy woman. A deviation from the original NobelGuide™ protocol was made to ensure the stability of the surgical template: four anchor pins were used, two on the buccal side of the anterior template and, unusually, two on the lingual side. Four straight TiUnite® Nobel Biocare™ implants were then placed, according to a modified All-on-4™ concept procedure, to avoid graft procedures and to reduce the cumulative chair-side treatment time and costs. A temporary restoration was placed immediately after implant placement. Three months later a definitive, full-acrylic restoration was delivered.

  20. Computer-aided diagnostic strategy selection.

    PubMed

    Greenes, R A

    1986-03-01

    Determination of the optimal diagnostic work-up strategy for the patient is becoming a major concern for the practicing physician. Overlap of the indications for various diagnostic procedures, differences in their invasiveness or risk, and high costs have made physicians aware of the need to consider the choice of procedure carefully, as well as its relation to management actions available. In this article, the author discusses research approaches that aim toward development of formal decision analytic methods to allow the physician to determine optimal strategy; clinical algorithms or rules as guides to physician decisions; improved measures for characterizing the performance of diagnostic tests; educational tools for increasing the familiarity of physicians with the concepts underlying these measures and analytic procedures; and computer-based aids for facilitating the employment of these resources in actual clinical practice.

  1. Dexterity optimization by port placement in robot-assisted minimally invasive surgery

    NASA Astrophysics Data System (ADS)

    Selha, Shaun; Dupont, Pierre; Howe, Robert D.; Torchiana, David F.

    2002-02-01

    A computer-based algorithm has been developed which uses preoperative images to provide a surgeon with a list of feasible port triplets ranked according to tool dexterity and endoscopic view quality at each surgical site involved in a procedure. A computer simulation allows the surgeon to select from among the proposed port locations. The procedure selected for the development of the system consists of a coronary artery bypass graft (CABG). In this procedure, the internal mammary artery (IMA) is mobilized from the interior chest wall, and one end is attached to the coronary arteries to provide a new blood supply for the heart. Approximately 10-20 cm is dissected free, using blunt dissection and a harmonic scalpel or electrocautery. At present, the port placement system is being evaluated in clinical trials.

  2. Evaluation of solution procedures for material and/or geometrically nonlinear structural analysis by the direct stiffness method.

    NASA Technical Reports Server (NTRS)

    Stricklin, J. A.; Haisler, W. E.; Von Riesemann, W. A.

    1972-01-01

    This paper presents an assessment of the solution procedures available for the analysis of inelastic and/or large-deflection structural behavior. A literature survey is given which summarizes the contributions of other researchers to the analysis of structural problems exhibiting material nonlinearities and combined geometric-material nonlinearities. Attention is focused on evaluating the available computation and solution techniques. Each of the solution techniques is developed from a common equation of equilibrium in terms of pseudo forces. The solution procedures are applied to circular plates and shells of revolution in an attempt to compare and evaluate each with respect to computational accuracy, economy, and efficiency. Based on the numerical studies, observations and comments are made with regard to the accuracy and economy of each solution technique.
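    The common pseudo-force form the survey starts from can be shown for a single degree of freedom; the successive-substitution scheme below is just one of the solution techniques the paper evaluates, applied to an illustrative hardening spring.

        import numpy as np

        def pseudo_force_solve(K, F, pseudo_force, tol=1e-10, max_iter=200):
            """Successive substitution for K u = F + Fp(u), where all
            nonlinearity enters through the pseudo-force vector Fp(u)."""
            u = np.zeros(len(F))
            for _ in range(max_iter):
                u_new = np.linalg.solve(K, F + pseudo_force(u))
                if np.linalg.norm(u_new - u) < tol:
                    return u_new
                u = u_new
            raise RuntimeError("pseudo-force iteration did not converge")

        # Hardening spring k*u + alpha*u**3 = f, i.e., Fp(u) = -alpha*u**3
        K, f, alpha = np.array([[2.0]]), np.array([1.0]), 0.5
        u = pseudo_force_solve(K, f, lambda u: -alpha * u ** 3)
        print(u)   # ~0.4735; check: 2*u + 0.5*u**3 = 1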

  3. Ultrasound-guided versus computed tomography-scan guided biopsy of pleural-based lung lesions

    PubMed Central

    Khosla, Rahul; McLean, Anna W; Smith, Jessica A

    2016-01-01

    Background: Computed tomography (CT) guided biopsies have long been the standard technique to obtain tissue from the thoracic cavity and are traditionally performed by interventional radiologists. Ultrasound (US) guided biopsy of pleural-based lesions, performed by pulmonologists, is gaining popularity and has the advantages of multi-planar imaging, a real-time technique, and the absence of radiation exposure to patients. In this study, we aim to determine the diagnostic accuracy, the time to diagnosis after the initial consult placement, and the complication rates of the two modalities. Methods: A retrospective study of electronic medical records was done of patients who underwent CT-guided and US-guided biopsies of pleural-based lesions between 2005 and 2014, and the data collected were analyzed to compare the two groups. Results: A total of 158 patients underwent 162 procedures during the study period: 86 patients underwent 89 procedures in the US group, and 72 patients underwent 73 procedures in the CT group. The overall yield in the US group was 82/89 (92.1%) versus 67/73 (91.8%) in the CT group (P = 1.0). The average number of days to the procedure was 7.2 versus 17.5 (P = 0.00001) in the US and CT groups, respectively. The complication rate was higher in the CT group, 17/73 (23.3%), versus 1/89 (1.1%) in the US group (P < 0.0001). Conclusions: For pleural-based lesions, the diagnostic accuracy of US-guided biopsy is similar to that of CT-guided biopsy, with a lower complication rate and a significantly reduced time to the procedure. PMID:27625440

  4. The Implementation of Blended Learning Using Android-Based Tutorial Video in Computer Programming Course II

    NASA Astrophysics Data System (ADS)

    Huda, C.; Hudha, M. N.; Ain, N.; Nandiyanto, A. B. D.; Abdullah, A. G.; Widiaty, I.

    2018-01-01

    The computer programming course is theoretical. Sufficient practice is necessary to facilitate conceptual understanding and encourage creativity in designing computer programs/animations. The development of a tutorial video for Android-based blended learning is needed to guide students. Using Android-based instructional material, students can learn independently anywhere and anytime. The tutorial video can facilitate students' understanding of the concepts, materials, and procedures of program/animation making in detail. This study employed a Research and Development method adapting Thiagarajan's 4D model. The developed Android-based instructional material and tutorial video were validated by experts in instructional media and experts in physics education. The expert validation results showed that the Android-based material was comprehensive and very feasible. The tutorial video was also deemed feasible, receiving an average score of 92.9%. The study further revealed that students' conceptual understanding, skills, and creativity in designing computer programs/animations improved significantly.

  5. Description and Initial Evaluation of a Computer-Based Individual Trainer for the Radar Intercept Observer.

    ERIC Educational Resources Information Center

    Rigney, Joseph W.; And Others

    An individual trainer for giving students in the radar intercept observer (RIO) schools concentrated practice in procedures for air-to-air intercepts was designed around a programmable graphics terminal with two integral minicomputers and 8k of core memory. The trainer automatically administers practice in computing values of variables in the…

  6. Data management in engineering

    NASA Technical Reports Server (NTRS)

    Browne, J. C.

    1976-01-01

    An introduction to computer based data management is presented with an orientation toward the needs of engineering application. The characteristics and structure of data management systems are discussed. A link to familiar engineering applications of computing is established through a discussion of data structure and data access procedures. An example data management system for a hypothetical engineering application is presented.

  7. A procedure for calculating daily moisture stress and its utility in regressions of growth and weather

    Treesearch

    Robert Zahner; Albert R. Stage

    1966-01-01

    A method is described for computing daily values of moisture stress on forest vegetation, or water deficits, based on the differences between Thornthwaite's potential evapotranspiration and computed soil-moisture depletion. More realistic functions are used for soil-moisture depletion on specific soil types than have been customary. These functions relate daily...

  8. DIALOG: An executive computer program for linking independent programs

    NASA Technical Reports Server (NTRS)

    Glatt, C. R.; Hague, D. S.; Watson, D. A.

    1973-01-01

    A very large scale computer programming procedure called the DIALOG executive system was developed for the CDC 6000 series computers. The executive computer program, DIALOG, controls the sequence of execution and the data management functions for a library of independent computer programs. Communication of common information is accomplished by DIALOG through a dynamically constructed and maintained data base of common information. Each computer program maintains its individual identity and is unaware of its contribution to the large scale program. This feature makes any computer program a candidate for use with the DIALOG executive system. The installation and uses of the DIALOG executive system are described.

  9. Testing homogeneity of proportion ratios for stratified correlated bilateral data in two-arm randomized clinical trials.

    PubMed

    Pei, Yanbo; Tian, Guo-Liang; Tang, Man-Lai

    2014-11-10

    Stratified data analysis is an important research topic in many biomedical studies and clinical trials. In this article, we develop five test statistics for testing the homogeneity of proportion ratios for stratified correlated bilateral binary data based on an equal correlation model assumption. Bootstrap procedures based on these test statistics are also considered. To evaluate the performance of these statistics and procedures, we conduct Monte Carlo simulations to study their empirical sizes and powers under various scenarios. Our results suggest that the procedure based on score statistic performs well generally and is highly recommended. When the sample size is large, procedures based on the commonly used weighted least square estimate and logarithmic transformation with Mantel-Haenszel estimate are recommended as they do not involve any computation of maximum likelihood estimates requiring iterative algorithms. We also derive approximate sample size formulas based on the recommended test procedures. Finally, we apply the proposed methods to analyze a multi-center randomized clinical trial for scleroderma patients. Copyright © 2014 John Wiley & Sons, Ltd.
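    A generic parametric-bootstrap skeleton in the spirit of these procedures, hedged heavily: the statistic and the null generation below ignore the within-subject (bilateral) correlation that the paper's equal-correlation model handles, and every name is invented.

        import numpy as np

        rng = np.random.default_rng(1)

        def log_ratio_spread(strata):
            """Spread of stratum-wise log proportion ratios (0.5 continuity)."""
            lr = [np.log((x1 + 0.5) / (n1 + 0.5)) - np.log((x2 + 0.5) / (n2 + 0.5))
                  for (x1, n1), (x2, n2) in strata]
            return float(np.var(lr))

        def homogeneity_pvalue(strata, statistic=log_ratio_spread, n_boot=2000):
            """Bootstrap p-value for homogeneity of proportion ratios.

            strata: list of ((x1, n1), (x2, n2)) successes/trials per arm.
            The null resamples every stratum from pooled arm-wise
            proportions, a simplification of a common-ratio null."""
            t_obs = statistic(strata)
            p1 = sum(x for (x, _), _ in strata) / sum(n for (_, n), _ in strata)
            p2 = sum(x for _, (x, _) in strata) / sum(n for _, (_, n) in strata)
            exceed = 0
            for _ in range(n_boot):
                boot = [((rng.binomial(n1, p1), n1), (rng.binomial(n2, p2), n2))
                        for (_, n1), (_, n2) in strata]
                exceed += statistic(boot) >= t_obs
            return exceed / n_boot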

  10. Computational simulation of matrix micro-slip bands in SiC/Ti-15 composite

    NASA Technical Reports Server (NTRS)

    Mital, S. K.; Lee, H.-J.; Murthy, P. L. N.; Chamis, C. C.

    1992-01-01

    Computational simulation procedures are used to identify the key deformation mechanisms for (0)₈ and (90)₈ SiC/Ti-15 metal matrix composites. The computational simulation procedures employed consist of a three-dimensional finite-element analysis and a micromechanics based computer code METCAN. The interphase properties used in the analysis have been calibrated using the METCAN computer code with the (90)₈ experimental stress-strain curve. Results of simulation show that although shear stresses are sufficiently high to cause the formation of some slip bands in the matrix concentrated mostly near the fibers, the nonlinearity in the composite stress-strain curve in the case of the (90)₈ composite is dominated by interfacial damage, such as microcracks and debonding, rather than microplasticity. The stress-strain curve for the (0)₈ composite is largely controlled by the fibers and shows only slight nonlinearity at higher strain levels that could be the result of matrix microplasticity.

  11. Installing computers in older adults' homes and teaching them to access a patient education web site: a systematic approach.

    PubMed

    Dauz, Emily; Moore, Jan; Smith, Carol E; Puno, Florence; Schaag, Helen

    2004-01-01

    This article describes the experiences of nurses who, as part of a large clinical trial, brought the Internet into older adults' homes by installing a computer, if needed, and connecting to a patient education Web site. Most of these patients had not previously used the Internet and were taught even basic computer skills when necessary. Because of increasing use of the Internet in patient education, assessment, and home monitoring, nurses in various roles currently connect with patients to monitor their progress, teach about medications, and answer questions about appointments and treatments. Thus, nurses find themselves playing the role of technology managers for patients with home-based Internet connections. This article provides step-by-step procedures for computer installation and training in the form of protocols, checklists, and patient user guides. By following these procedures, nurses can install computers, arrange Internet access, teach and connect to their patients, and prepare themselves to install future generations of technological devices.

  12. The Next Step in Deployment of Computer Based Procedures For Field Workers: Insights And Results From Field Evaluations at Nuclear Power Plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oxstrand, Johanna; Le Blanc, Katya L.; Bly, Aaron

    The paper-based procedures currently used for nearly all activities in the commercial nuclear power industry have a long history of ensuring safe operation of the plants. However, there is potential to greatly increase efficiency and safety by improving how the human operator interacts with the procedures. One way to achieve these improvements is through the use of computer-based procedures (CBPs). A CBP system offers a vast variety of improvements, such as context-driven job aids, integrated human performance tools (e.g., placekeeping, correct component verification, etc.), and dynamic step presentation. The latter means that the CBP system displays only the relevant steps based on operating mode, plant status, and the task at hand. A dynamic presentation of the procedure (also known as context-sensitive procedures) will guide the operator down the path of relevant steps based on the current conditions. This feature will reduce the operator's workload and inherently reduce the risk of incorrectly marking a step as not applicable and the risk of incorrectly performing a step that should be marked as not applicable. The research team at the Idaho National Laboratory has developed a prototype CBP system for field workers, which has been evaluated from a human factors and usability perspective in four laboratory studies. Based on the results from each study, revisions were made to the CBP system. However, a crucial step in gaining the end users' (e.g., auxiliary operators, maintenance technicians, etc.) acceptance is to put the system in their hands and let them use it as a part of their everyday work activities. In the spring of 2014 the first field evaluation of the INL CBP system was conducted at a nuclear power plant. Auxiliary operators conduct a functional test of one of three backup air compressors each week. During the field evaluation activity, one auxiliary operator conducted the test with the paper-based procedure while a second auxiliary operator followed along with the computer-based procedure. After each functional test, the operators were asked a series of questions designed to provide feedback on the feasibility of using a CBP system in the plant and the general user experience of the CBP system. This paper describes the field evaluation and its results in detail. For example, the results show that the context-driven job aids and the incorporated human performance tools are much liked by the auxiliary operators. The paper also describes and presents initial findings from a second field evaluation, conducted at a second nuclear utility, which used a preventive maintenance work order for the HVAC system. In addition, the method and objectives of two field evaluations planned for late 2014 or early 2015 are described.

  13. On the usage of ultrasound computational models for decision making under ambiguity

    NASA Astrophysics Data System (ADS)

    Dib, Gerges; Sexton, Samuel; Prowant, Matthew; Crawford, Susan; Diaz, Aaron

    2018-04-01

    Computer modeling and simulation is becoming pervasive within the non-destructive evaluation (NDE) industry as a convenient tool for designing and assessing inspection techniques. This raises a pressing need for developing quantitative techniques for demonstrating the validity and applicability of the computational models. Computational models provide deterministic results based on deterministic and well-defined input, or stochastic results based on inputs defined by probability distributions. However, computational models cannot account for the effects of personnel, procedures, and equipment, resulting in ambiguity about the efficacy of inspections based on guidance from computational models only. In addition, ambiguity arises when model inputs, such as the representation of realistic cracks, cannot be defined deterministically, probabilistically, or by intervals. In this work, Pacific Northwest National Laboratory demonstrates the ability of computational models to represent field measurements under known variabilities, and quantify the differences using maximum amplitude and power spectrum density metrics. Sensitivity studies are also conducted to quantify the effects of different input parameters on the simulation results.
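
    A minimal sketch of the two comparison metrics named above, applied to synthetic signals; the sampling rate, waveforms, and Welch settings are placeholder assumptions, not PNNL's actual data or processing chain.

      import numpy as np
      from scipy.signal import welch

      fs = 100e6                      # sampling frequency, Hz (assumed)
      t = np.arange(0, 5e-6, 1 / fs)
      # synthetic "measured" and "simulated" ultrasonic pulses
      measured = np.sin(2 * np.pi * 5.0e6 * t) * np.exp(-((t - 2e-6) / 5e-7) ** 2)
      simulated = 0.9 * np.sin(2 * np.pi * 5.2e6 * t) * np.exp(-((t - 2e-6) / 5e-7) ** 2)

      # Metric 1: relative difference in maximum amplitude.
      amp_diff = abs(measured.max() - simulated.max()) / measured.max()

      # Metric 2: power spectral density (Welch's method), compared at the peak in dB.
      f, psd_meas = welch(measured, fs=fs, nperseg=256)
      _, psd_sim = welch(simulated, fs=fs, nperseg=256)
      psd_diff_db = 10 * np.log10(psd_sim.max() / psd_meas.max())

      print(f"peak amplitude difference: {amp_diff:.1%}")
      print(f"peak PSD difference: {psd_diff_db:.2f} dB")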

  14. Neural Net-Based Redesign of Transonic Turbines for Improved Unsteady Aerodynamic Performance

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.; Rai, Man Mohan; Huber, Frank W.

    1998-01-01

    A recently developed neural net-based aerodynamic design procedure is used in the redesign of a transonic turbine stage to improve its unsteady aerodynamic performance. The redesign procedure incorporates the advantages of both traditional response surface methodology (RSM) and neural networks by employing a strategy called parameter-based partitioning of the design space. Starting from the reference design, a sequence of response surfaces based on both neural networks and polynomial fits is constructed to traverse the design space in search of an optimal solution that exhibits improved unsteady performance. The procedure combines the power of neural networks and the economy of low-order polynomials (in terms of the number of simulations required and the network training requirements). A time-accurate, two-dimensional Navier-Stokes solver is used to evaluate the various intermediate designs and provide inputs to the optimization procedure. The optimization procedure yields a modified design that improves the aerodynamic performance through small changes to the reference design geometry. The computed results demonstrate the capabilities of the neural net-based design procedure, and also show the tremendous advantages that can be gained by including high-fidelity unsteady simulations that capture the relevant flow physics in the design optimization process.
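
    For readers unfamiliar with response surface methodology, the sketch below shows the basic loop the hybrid procedure builds on: sample the expensive objective, fit a cheap surrogate, and step toward the surrogate's optimum. Only the polynomial half of the strategy is shown, on an invented one-dimensional toy objective; the neural-network stage and the parameter-based partitioning of the actual method are omitted.

      import numpy as np

      def expensive_simulation(x):
          # stand-in for one time-accurate Navier-Stokes evaluation of a design
          return (x - 0.7) ** 2 + 0.05 * np.sin(8 * x)

      x_center, half_width = 0.0, 0.5
      for it in range(5):
          xs = np.linspace(x_center - half_width, x_center + half_width, 5)
          ys = np.array([expensive_simulation(x) for x in xs])
          a, b, c = np.polyfit(xs, ys, 2)                  # quadratic response surface
          x_new = -b / (2 * a) if a > 0 else xs[ys.argmin()]
          x_center = float(np.clip(x_new, xs[0], xs[-1]))  # stay in the sampled range
          half_width *= 0.7                                # shrink the search region
      print(f"estimated optimum near x = {x_center:.3f}")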

  15. Strategic flexibility in computational estimation for Chinese- and Canadian-educated adults.

    PubMed

    Xu, Chang; Wells, Emma; LeFevre, Jo-Anne; Imbo, Ineke

    2014-09-01

    The purpose of the present study was to examine factors that influence strategic flexibility in computational estimation for Chinese- and Canadian-educated adults. Strategic flexibility was operationalized as the percentage of trials on which participants chose the problem-based procedure that best balanced proximity to the correct answer with simplification of the required calculation. For example, on 42 × 57, the optimal problem-based solution is 40 × 60 because 2,400 is closer to the exact answer 2,394 than is 40 × 50 or 50 × 60. In Experiment 1 (n = 50), where participants had free choice of estimation procedures, Chinese-educated participants were more likely to choose the optimal problem-based procedure (80% of trials) than Canadian-educated participants (50%). In Experiment 2 (n = 48), participants had to choose 1 of 3 solution procedures. They showed moderate strategic flexibility that was equal across groups (60%). In Experiment 3 (n = 50), participants were given the same 3 procedure choices as in Experiment 2 but different instructions and explicit feedback. When instructed to respond quickly, both groups showed moderate strategic flexibility as in Experiment 2 (60%). When instructed to respond as accurately as possible or to balance speed and accuracy, they showed very high strategic flexibility (greater than 90%). These findings suggest that solvers will show very different levels of strategic flexibility in response to instructions, feedback, and problem characteristics and that these factors interact with individual differences (e.g., arithmetic skills, nationality) to produce variable response patterns.
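
    The optimality criterion is easy to make concrete in code. This toy snippet (not part of the study) enumerates the nearest-ten roundings of each operand and picks the pair whose product lands closest to the exact answer, reproducing the 42 x 57 example from the abstract.

      from itertools import product

      def round_choices(n):
          # round down and up to the nearest ten
          return {n // 10 * 10, (n // 10 + 1) * 10}

      a, b = 42, 57
      exact = a * b
      candidates = {(x, y): abs(x * y - exact)
                    for x, y in product(round_choices(a), round_choices(b))}
      best = min(candidates, key=candidates.get)
      print(best, best[0] * best[1], exact)   # (40, 60) 2400 2394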

  16. Establishing Reliable Cognitive Change in Children with Epilepsy: The Procedures and Results for a Sample with Epilepsy

    ERIC Educational Resources Information Center

    van Iterson, Loretta; Augustijn, Paul B.; de Jong, Peter F.; van der Leij, Aryan

    2013-01-01

    The goal of this study was to investigate reliable cognitive change in epilepsy by developing computational procedures to determine reliable change index scores (RCIs) for the Dutch Wechsler Intelligence Scales for Children. First, RCIs were calculated based on stability coefficients from a reference sample. Then, these RCIs were applied to a…

  17. Communications network design and costing model users manual

    NASA Technical Reports Server (NTRS)

    Logan, K. P.; Somes, S. S.; Clark, C. A.

    1983-01-01

    The information and procedures needed to exercise the communications network design and costing model for performing network analysis are presented. Specific procedures are included for executing the model on the NASA Lewis Research Center IBM 3033 computer. The concepts, functions, and data bases relating to the model are described. Model parameters and their format specifications for running the model are detailed.

  18. Identification of Printed Nonsense Words for an Individual with Autism: A Comparison of Constant Time Delay and Stimulus Fading

    ERIC Educational Resources Information Center

    Redhair, Emily

    2011-01-01

    This study compared a stimulus fading (SF) procedure with a constant time delay (CTD) procedure for identification of consonant-vowel-consonant (CVC) nonsense words for a participant with autism. An alternating treatments design was utilized through a computer-based format. Receptive identification of target words was evaluated using a computer…

  19. Identification of Printed Nonsense Words for an Individual with Autism: A Comparison of Constant Time Delay and Stimulus Fading

    ERIC Educational Resources Information Center

    Redhair, Emily I.; McCoy, Kathleen M.; Zucker, Stanley H.; Mathur, Sarup R.; Caterino, Linda

    2013-01-01

    This study compared a stimulus fading (SF) procedure with a constant time delay (CTD) procedure for identification of consonant-vowel-consonant (CVC) nonsense words for a participant with autism. An alternating treatments design was utilized through a computer-based format. Receptive identification of target words was evaluated using a computer…

  20. Blood Pump Development Using Rocket Engine Flow Simulation Technology

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan; Kiris, Cetin

    2001-01-01

    This paper reports the progress made toward developing a complete blood flow simulation capability in humans, especially in the presence of artificial devices such as valves and ventricular assist devices. Device modeling poses unique challenges distinct from those of computing the blood flow in natural hearts and arteries. Many elements are needed to quantify the flow in these devices, such as flow solvers, geometry modeling including flexible walls, moving boundary procedures, and physiological characterization of blood. As a first step, computational technology developed for aerospace applications was extended to the analysis and development of a ventricular assist device (VAD), i.e., a blood pump. The blood flow in a VAD is practically incompressible and Newtonian, and thus an incompressible Navier-Stokes solution procedure can be applied. A primitive variable formulation is used in conjunction with the overset grid approach to handle complex moving geometry. The primary purpose of developing the incompressible flow analysis capability was to quantify the flow in advanced turbopumps for space propulsion systems. The same procedure has been extended to the development of the NASA-DeBakey VAD, which is based on an axial blood pump. Due to massive computing requirements, high-end computing is necessary for simulating three-dimensional flow in these pumps. Computational, experimental, and clinical results are presented.

  1. Efficient stabilization and acceleration of numerical simulation of fluid flows by residual recombination

    NASA Astrophysics Data System (ADS)

    Citro, V.; Luchini, P.; Giannetti, F.; Auteri, F.

    2017-09-01

    The study of the stability of a dynamical system described by a set of partial differential equations (PDEs) requires the computation of unstable states as the control parameter exceeds its critical threshold. Unfortunately, the discretization of the governing equations, especially for fluid dynamic applications, often leads to very large discrete systems. As a consequence, matrix-based methods, such as the Newton-Raphson algorithm coupled with a direct inversion of the Jacobian matrix, lead to computational costs that are too large in terms of both memory and execution time. We present a novel iterative algorithm, Boostconv, inspired by Krylov-subspace methods, which is able to compute unstable steady states and/or accelerate the convergence to stable configurations. Our new algorithm is based on the minimization of the residual norm at each iteration step, with a projection basis updated at each iteration rather than at periodic restarts as in the classical GMRES method. The algorithm is able to stabilize any dynamical system without increasing the computational time of the original numerical procedure used to solve the governing equations. Moreover, it can be easily inserted into a pre-existing relaxation (integration) procedure with a call to a single black-box subroutine. The procedure is discussed for problems of different sizes, ranging from a small two-dimensional system to a large three-dimensional problem involving the Navier-Stokes equations. We show that the proposed algorithm is able to improve the convergence of existing iterative schemes. In particular, the procedure is applied to the subcritical flow inside a lid-driven cavity. We also discuss the application of Boostconv to compute the unstable steady flow past a fixed circular cylinder (2D) and the boundary-layer flow over a hemispherical roughness element (3D) for supercritical values of the Reynolds number. We show that Boostconv can be used effectively with any spatial discretization, be it a finite-difference, finite-volume, finite-element or spectral method.
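
    The residual-minimization idea can be sketched compactly. The code below is a schematic in the spirit of Anderson-type mixing, not the published Boostconv implementation: a short history of iterates and residuals is kept, and each fixed-point update is replaced by the combination of past updates that minimizes the residual norm in a least-squares sense. The linear test problem is invented.

      import numpy as np

      def fixed_point_map(x, A, b):
          # a slowly converging Richardson iteration for A x = b,
          # standing in for one step of an existing relaxation procedure
          return x + 0.1 * (b - A @ x)

      rng = np.random.default_rng(0)
      n = 50
      A = np.eye(n) + 0.3 * rng.standard_normal((n, n)) / np.sqrt(n)
      b = rng.standard_normal(n)

      x = np.zeros(n)
      X, R = [], []                        # short history of iterates and residuals
      for k in range(200):
          fx = fixed_point_map(x, A, b)
          r = fx - x                       # residual of the fixed-point map
          if np.linalg.norm(r) < 1e-10:
              break
          X.append(x); R.append(r)
          X, R = X[-5:], R[-5:]
          if len(R) > 1:
              # choose the recombination of past updates that minimizes the
              # residual norm (a small least-squares problem)
              dR = np.column_stack(R)[:, 1:] - np.column_stack(R)[:, :-1]
              dX = np.column_stack(X)[:, 1:] - np.column_stack(X)[:, :-1]
              gamma, *_ = np.linalg.lstsq(dR, r, rcond=None)
              x = fx - (dX + dR) @ gamma
          else:
              x = fx
      print(k, np.linalg.norm(A @ x - b))   # converges far faster than plain iteration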

  2. Hybrid Grid Techniques for Propulsion Applications

    NASA Technical Reports Server (NTRS)

    Koomullil, Roy P.; Soni, Bharat K.; Thornburg, Hugh J.

    1996-01-01

    During the past decade, computational simulation of fluid flow for propulsion activities has progressed significantly, and many notable successes have been reported in the literature. However, the generation of a high-quality mesh for such problems has often been reported as a pacing item; hence, much effort has been expended to speed this portion of the simulation process. Several approaches have evolved for grid generation. Two of the most common are structured multi-block and unstructured procedures. Structured grids tend to be computationally efficient and have the high-aspect-ratio cells necessary for efficiently resolving viscous layers. Structured multi-block grids may or may not exhibit grid line continuity across the block interface. This relaxation of the continuity constraint at the interface is intended to ease the grid generation process, which is still time consuming. Flow solvers supporting non-contiguous interfaces require specialized interpolation procedures which may not ensure conservation at the interface. Unstructured or generalized indexing data structures offer greater flexibility, but require explicit connectivity information and are not easy to generate for three-dimensional configurations. In addition, unstructured mesh based schemes tend to be less efficient, and it is difficult to resolve viscous layers. Recently, hybrid or generalized element solution and grid generation techniques have been developed with the objective of combining the attractive features of both structured and unstructured techniques. In the present work, recently developed procedures for hybrid grid generation and flow simulation are critically evaluated and compared to existing structured and unstructured procedures in terms of accuracy and computational requirements.

  3. A Permutation Approach for Selecting the Penalty Parameter in Penalized Model Selection

    PubMed Central

    Sabourin, Jeremy A; Valdar, William; Nobel, Andrew B

    2015-01-01

    We describe a simple, computationally efficient, permutation-based procedure for selecting the penalty parameter in LASSO penalized regression. The procedure, permutation selection, is intended for applications where variable selection is the primary focus, and can be applied in a variety of structural settings, including that of generalized linear models. We briefly discuss connections between permutation selection and existing theory for the LASSO. In addition, we present a simulation study and an analysis of real biomedical data sets in which permutation selection is compared with selection based on the following: cross-validation (CV), the Bayesian information criterion (BIC), Scaled Sparse Linear Regression, and a selection method based on recently developed testing procedures for the LASSO. PMID:26243050
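
    A minimal sketch of the core idea, under simplifying assumptions (centered, standardized data; the median as the summary statistic, where the published procedure may use a different quantile): for such data the smallest penalty that shrinks every LASSO coefficient to zero is max|X'y|/n, and permuting the response, which destroys any real signal, gives a null reference for that quantity.

      import numpy as np

      rng = np.random.default_rng(1)
      n, p = 100, 200
      X = rng.standard_normal((n, p))
      beta = np.zeros(p); beta[:5] = 2.0          # 5 truly active variables
      y = X @ beta + rng.standard_normal(n)

      X = (X - X.mean(0)) / X.std(0)              # standardize predictors
      y = y - y.mean()                            # center response

      def lambda_null(X, y):
          # smallest penalty for which the LASSO solution is identically zero
          return np.abs(X.T @ y).max() / len(y)

      lams = [lambda_null(X, rng.permutation(y)) for _ in range(100)]
      lam = np.median(lams)                       # permutation-selected penalty
      print(f"selected penalty: {lam:.3f} (data value: {lambda_null(X, y):.3f})")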

  4. A basis for solid modeling of gear teeth with application in design and manufacture

    NASA Technical Reports Server (NTRS)

    Huston, Ronald L.; Mavriplis, Dimitrios; Oswald, Fred B.; Liu, Yung Sheng

    1992-01-01

    A new approach to modeling gear tooth surfaces is discussed. A computer graphics solid modeling procedure is used to simulate the tooth fabrication process. This procedure is based on the principles of differential geometry that pertain to envelopes of curves and surfaces. The procedure is illustrated with the modeling of spur, helical, bevel, spiral bevel, and hypoid gear teeth. Applications in design and manufacturing are discussed. Extensions to nonstandard tooth forms, to cams, and to rolling element bearings are proposed.

  5. A Basis for Solid Modeling of Gear Teeth with Application in Design and Manufacture

    NASA Technical Reports Server (NTRS)

    Huston, Ronald L.; Mavriplis, Dimitrios; Oswald, Fred B.; Liu, Yung Sheng

    1994-01-01

    This paper discusses a new approach to modeling gear tooth surfaces. A computer graphics solid modeling procedure is used to simulate the tooth fabrication processes. This procedure is based on the principles of differential geometry that pertain to envelopes of curves and surfaces. The procedure is illustrated with the modeling of spur, helical, bevel, spiral bevel and hypoid gear teeth. Applications in design and manufacturing are discussed. Extensions to nonstandard tooth forms, to cams, and to rolling element bearings are proposed.

  6. Measuring Attitudes Toward Reading. An Annotated ERIC Bibliography.

    ERIC Educational Resources Information Center

    Hahn, Christine T., Comp.

    This 30-item annotated bibliography provides access to information concerning the development and use of tests and procedures for evaluating student attitudes toward reading. Based upon a computer search of five data bases--Educational Resources Information Center (ERIC), Psychological Abstracts, Exceptional Child Education Abstracts, Sociological…

  7. Ontology-based vector space model and fuzzy query expansion to retrieve knowledge on medical computational problem solutions.

    PubMed

    Bratsas, Charalampos; Koutkias, Vassilis; Kaimakamis, Evangelos; Bamidis, Panagiotis; Maglaveras, Nicos

    2007-01-01

    Medical Computational Problem (MCP) solving is related to medical problems and their computerized algorithmic solutions. In this paper, an extension of an ontology-based model to fuzzy logic is presented, as a means to enhance the information retrieval (IR) procedure in semantic management of MCPs. We present herein the methodology followed for the fuzzy expansion of the ontology model, the fuzzy query expansion procedure, as well as an appropriate ontology-based Vector Space Model (VSM) that was constructed for efficient mapping of user-defined MCP search criteria and MCP acquired knowledge. The relevant fuzzy thesaurus is constructed by calculating the simultaneous occurrences of terms and the term-to-term similarities derived from the ontology that utilizes UMLS (Unified Medical Language System) concepts by using Concept Unique Identifiers (CUI), synonyms, semantic types, and broader-narrower relationships for fuzzy query expansion. The current approach constitutes a sophisticated advance for effective, semantics-based MCP-related IR.
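
    A toy rendering of the two ingredients named above, with an invented five-term vocabulary and made-up similarity values standing in for the UMLS-derived thesaurus: the query vector is expanded with a weight for every term similar to the original query term, and documents are then ranked by cosine similarity in the vector space model.

      import numpy as np

      vocab = ["glucose", "insulin", "dosage", "infusion", "kinetics"]
      docs = np.array([            # term weights per document (rows = documents)
          [0.9, 0.8, 0.1, 0.0, 0.3],
          [0.0, 0.1, 0.9, 0.7, 0.0],
      ])
      # fuzzy thesaurus entry: similarity of each term to the query term "insulin"
      sim_to_query = {"glucose": 0.6, "insulin": 1.0, "dosage": 0.2,
                      "infusion": 0.3, "kinetics": 0.4}

      # fuzzy expansion: the query receives weight on every related term,
      # not just the literal query term
      q = np.array([sim_to_query[t] for t in vocab])

      def cosine(a, b):
          return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

      scores = [cosine(d, q) for d in docs]
      print(scores)   # document 0 ranks higher: it shares the expanded terms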

  8. Supporting the future nuclear workforce with computer-based procedures

    DOE PAGES

    Oxstrand, Johanna; Le Blanc, Katya

    2016-05-01

    Here we see that computer-based tools have dramatically increased the ease and efficiency of everyday tasks. Gone are the days of paging through a paper catalog, transcribing product numbers, and calculating totals. Today, a consumer can find a product online with a simple search engine and then purchase it in a matter of a few clicks. Paper catalogs have their place, but it is hard to imagine life without online shopping sites. All tasks conducted in a nuclear power plant are guided by procedures, which helps ensure safe and reliable operation of the plants. One prominent goal of the nuclear industry is to minimize the risk of human errors. To achieve this goal one has to ensure that tasks are correctly and consistently executed. This is partly achieved by training and by a structured approach to task execution, which is provided by procedures and work instructions. Procedures are used in the nuclear industry to direct workers' actions in a proper sequence. The governing idea is to minimize the reliance on memory and on choices made in the field. However, the procedure document may not contain sufficient information to successfully complete the task. Therefore, the worker might have to carry additional documents, such as turnover sheets, operating experience, drawings, and other procedures, to the work site. The nuclear industry operates with paper procedures like the paper catalogs of the past. A field worker may carry a large stack of documents needed to complete a task to the field. Even though the paper process has helped keep the industry safe for decades, there are limitations to using paper. Paper procedures are static (i.e., the content does not change after the document is printed), difficult to search, and rely heavily on the field worker's situational awareness and ability to consistently meet the high expectation of human performance excellence. With computer-based procedures (CBPs), that stack of papers may be reduced to the size of a small tablet or even a smartphone. Instead of manually matching equipment identification numbers listed in the procedure with the number on the physical equipment, the field worker can simply scan a barcode to ensure the correct valve is opened while simultaneously creating a record. Instead of navigating through a maze of cross-references, CBPs enable intelligent work-path navigation that accounts for past decisions and observations, thereby enabling more efficient and safe task completion.

  10. Computations of total sediment discharge, Niobrara River near Cody, Nebraska

    USGS Publications Warehouse

    Colby, Bruce R.; Hembree, C.H.

    1955-01-01

    A natural chute in the Niobrara River near Cody, Nebr., constricts the flow of the river except at high stages to a narrow channel in which the turbulence is sufficient to suspend nearly the total sediment discharge. Because much of the flow originates in the sandhills area of Nebraska, the water discharge and sediment discharge are relatively uniform. Sediment discharges based on depth-integrated samples at a contracted section in the chute and on streamflow records at a recording gage about 1,900 feet upstream are available for the period from April 1948 to September 1953 but are not given directly as continuous records in this report. Sediment measurements have been made periodically near the gage and at other nearby relatively unconfined sections of the stream for comparison with measurements at the contracted section. Sediment discharge at these relatively unconfined sections was computed from formulas for comparison with measured sediment discharges at the contracted section. A form of the Du Boys formula gave computed tonnages of sediment that were unsatisfactory. Sediment discharges as computed from the Schoklitsch formula agreed well with measured sediment discharges that were low, but they were much too low at measured sediment discharges that were higher. The Straub formula gave computed discharges, presumably of bed material, that were several times larger than measured discharges of sediment coarser than 0.125 millimeter. All three of these formulas gave computed sediment discharges that increased with water discharges much less rapidly than the measured discharges of sediment coarser than 0.125 millimeter. The Einstein procedure, when applied to a reach that included 10 defined cross sections, gave much better agreement between computed sediment discharge and measured sediment discharge than did any one of the three other formulas that were used. This procedure does not compute the discharge of sediment that is too small to be found in the stream bed in appreciable quantities. Hence, total sediment discharges were obtained by adding computed discharges of sediment larger than 0.125 millimeter to measured discharges of sediment smaller than 0.125 millimeter. The size distributions of the computed sediment discharge compared poorly with the size distributions of sediment discharge at the contracted section. Ten sediment discharges computed from the Einstein procedure as applied to a single section averaged several times the measured sediment discharge for the contracted section and gave size distributions that were unsatisfactory. The Einstein procedure was modified to compute total sediment discharge at an alluvial section from readily measurable field data. The modified procedure uses measurements of bed-material particle sizes, suspended-sediment concentrations and particle sizes from depth-integrated samples, streamflow, and water temperatures. Computations of total sediment discharge were made by using this modified procedure, some for the section at the gaging station and some for each of two other relatively unconfined sections. The size distributions of the computed and the measured sediment discharges agreed reasonably well. Major advantages of this modified procedure include applicability to a single section rather than to a reach of channel, use of measured velocity instead of water-surface slope, use of depth-integrated samples, and apparently fair accuracy for computing both total sediment discharge and approximate size distribution of the sediment.
Because of these advantages, this modified procedure is being further studied to increase its accuracy, to simplify the required computations, and to define its limitations. In the development of the modified procedure, some relationships concerning theories of sediment transport were reviewed and checked against field data. Vertical distributions of suspended sediment at relatively unconfined sections did not agree well with theoretical distributions.

  11. Virtual Transgenics: Using a Molecular Biology Simulation to Impact Student Academic Achievement and Attitudes

    NASA Astrophysics Data System (ADS)

    Shegog, Ross; Lazarus, Melanie M.; Murray, Nancy G.; Diamond, Pamela M.; Sessions, Nathalie; Zsigmond, Eva

    2012-10-01

    The transgenic mouse model is useful for studying the causes and potential cures for human genetic diseases. Exposing high school biology students to laboratory experience in developing transgenic animal models is logistically prohibitive. Computer-based simulation, however, offers this potential in addition to the advantages of fidelity and reach. This study describes and evaluates a computer-based simulation that trains advanced placement high school science students in the laboratory protocols by which a transgenic mouse model is produced. A simulation module on preparing a gene construct in the molecular biology lab was evaluated using a randomized controlled design with advanced placement high school biology students in Mercedes, Texas (n = 44). Pre-post tests assessed procedural and declarative knowledge, time on task, attitudes toward computers for learning, and attitudes toward science careers. Students who used the simulation increased their procedural and declarative knowledge regarding molecular biology compared to those in the control condition (both p < 0.005). Significant increases continued to occur with additional use of the simulation (p < 0.001). Students in the treatment group became more positive toward using computers for learning (p < 0.001). The simulation did not significantly affect attitudes toward science in general. Computer simulations of complex transgenic protocols have the potential to provide a "virtual" laboratory experience as an adjunct to conventional educational approaches.

  12. TORC3: Token-ring clearing heuristic for currency circulation

    NASA Astrophysics Data System (ADS)

    Humes, Carlos, Jr.; Lauretto, Marcelo S.; Nakano, Fábio; Pereira, Carlos A. B.; Rafare, Guilherme F. G.; Stern, Julio Michael

    2012-10-01

    Clearing algorithms are at the core of modern payment systems, facilitating the settling of multilateral credit messages with (near) minimum transfers of currency. Traditional clearing procedures use batch processing based on MILP - mixed-integer linear programming algorithms. The MILP approach demands intensive computational resources; moreover, it is also vulnerable to operational risks generated by possible defaults during the inter-batch period. This paper presents TORC3 - the Token-Ring Clearing Algorithm for Currency Circulation. In contrast to the MILP approach, TORC3 is a real time heuristic procedure, demanding modest computational resources, and able to completely shield the clearing operation against the participating agents' risk of default.

  13. Model-based surgical planning and simulation of cranial base surgery.

    PubMed

    Abe, M; Tabuchi, K; Goto, M; Uchino, A

    1998-11-01

    Plastic skull models of seven individual patients were fabricated by stereolithography from three-dimensional data based on computed tomography bone images. Skull models were utilized for neurosurgical planning and simulation in the seven patients with cranial base lesions that were difficult to remove. Surgical approaches and areas of craniotomy were evaluated using the fabricated skull models. In preoperative simulations, hand-made models of the tumors, major vessels and nerves were placed in the skull models. Step-by-step simulation of surgical procedures was performed using actual surgical tools. The advantages of using skull models to plan and simulate cranial base surgery include a better understanding of anatomic relationships, preoperative evaluation of the proposed procedure, increased understanding by the patient and family, and improved educational experiences for residents and other medical staff. The disadvantages of using skull models include the time and cost of making the models. The skull models provide a more realistic tool that is easier to handle than computer-graphic images. Surgical simulation using models facilitates difficult cranial base surgery and may help reduce surgical complications.

  14. Computed Flow Through An Artificial Heart Valve

    NASA Technical Reports Server (NTRS)

    Rogers, Stewart E.; Kwak, Dochan; Kiris, Cetin; Chang, I-Dee

    1994-01-01

    Report discusses computations of blood flow through a prosthetic tilting-disk valve. The computational procedure developed in the simulation is used to design better artificial hearts and valves by reducing or eliminating the following adverse flow characteristics: large pressure losses, which prevent hearts from working efficiently; separated and secondary flows, which cause clotting; and high turbulent shear stresses, which damage red blood cells. Report reiterates and expands upon part of NASA technical memorandum "Computed Flow Through an Artificial Heart and Valve" (ARC-12983). Also based partly on research described in "Numerical Simulation of Flow Through an Artificial Heart" (ARC-12478).

  15. Method for evaluation of predictive models of microwave ablation via post-procedural clinical imaging

    NASA Astrophysics Data System (ADS)

    Collins, Jarrod A.; Brown, Daniel; Kingham, T. Peter; Jarnagin, William R.; Miga, Michael I.; Clements, Logan W.

    2015-03-01

    Development of a clinically accurate predictive model of microwave ablation (MWA) procedures would represent a significant advancement and would facilitate patient-specific treatment planning to achieve optimal probe placement and ablation outcomes. While studies have been performed to evaluate predictive models of MWA, the ability to quantify the performance of predictive models via clinical data has been limited to comparing geometric measurements of the predicted and actual ablation zones; assessing the accuracy of placement, as determined by the degree of spatial overlap between ablation zones, has not been achieved. To overcome this limitation, a method of evaluation is proposed in which the actual location of the MWA antenna is tracked and recorded during the procedure via a surgical navigation system. Predictive models of the MWA are then computed using the known position of the antenna within the preoperative image space. Two different predictive MWA models were used for the preliminary evaluation of the proposed method: (1) a geometric model based on the labeling associated with the ablation antenna and (2) a 3-D finite element method based computational model of MWA using COMSOL. Given the follow-up tomographic images acquired approximately 30 days after the procedure, a 3-D surface model of the necrotic zone was generated to represent the true ablation zone. The overlap between the predicted ablation zones and the true ablation zone was quantified after a rigid registration was computed between the pre- and post-procedural tomograms. While both models show significant overlap with the true ablation zone, these preliminary results suggest a slightly higher degree of overlap with the geometric model.
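
    Once the predicted and actual ablation zones are registered into a common image space, the overlap can be quantified directly. Below is a minimal sketch with synthetic spherical masks, using the Dice coefficient as one common overlap metric; the paper's exact metric and data may differ.

      import numpy as np

      def sphere_mask(shape, center, radius):
          # binary mask of a sphere in a 3-D image volume
          z, y, x = np.indices(shape)
          return ((z - center[0]) ** 2 + (y - center[1]) ** 2
                  + (x - center[2]) ** 2) <= radius ** 2

      true_zone = sphere_mask((64, 64, 64), (32, 32, 32), 10)   # from follow-up imaging
      predicted = sphere_mask((64, 64, 64), (34, 32, 31), 11)   # from the model

      dice = (2 * np.logical_and(true_zone, predicted).sum()
              / (true_zone.sum() + predicted.sum()))
      print(f"Dice overlap: {dice:.3f}")   # 1.0 means perfect spatial agreement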

  16. Artificial Intelligence Methods: Challenge in Computer Based Polymer Design

    NASA Astrophysics Data System (ADS)

    Rusu, Teodora; Pinteala, Mariana; Cartwright, Hugh

    2009-08-01

    This paper deals with the use of Artificial Intelligence Methods (AI) in the design of new molecules possessing desired physical, chemical and biological properties. This is an important and difficult problem in the chemical, material and pharmaceutical industries. Traditional methods involve a laborious and expensive trial-and-error procedure, but computer-assisted approaches offer many advantages in the automation of molecular design.

  17. Aerodynamic design optimization using sensitivity analysis and computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay; Eleshaky, Mohamed E.

    1991-01-01

    A new and efficient method is presented for aerodynamic design optimization, based on a computational fluid dynamics (CFD) sensitivity analysis algorithm. The method is applied to design a scramjet-afterbody configuration for optimized axial thrust. The Euler equations are solved for the inviscid analysis of the flow, which in turn provides the objective function and the constraints. The CFD analysis is then coupled with an optimization procedure that uses a constrained minimization method. The sensitivity coefficients, i.e., gradients of the objective function and the constraints, needed for the optimization are obtained using a quasi-analytical method rather than the traditional brute-force method of finite difference approximations. During the one-dimensional search of the optimization procedure, an approximate flow analysis (predicted flow) based on a first-order Taylor series expansion is used to reduce the computational cost. Finally, the sensitivity of the optimum objective function to various design parameters, which are kept constant during the optimization, is computed to predict new optimum solutions. The flow analyses of the demonstrative example are compared with experimental data. It is shown that the method is more efficient than traditional methods.
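
    The cost argument is easy to see in miniature: brute-force finite differencing needs one extra objective evaluation (i.e., one extra flow solution) per design variable, whereas a quasi-analytical method delivers the whole gradient from the existing analysis. The stand-in objective and its derivative below are invented for illustration.

      import numpy as np

      def objective(x):            # stand-in for thrust from one CFD solution
          return np.sin(x[0]) * np.exp(-x[1] ** 2) + x[0] * x[1]

      def analytic_grad(x):        # what a quasi-analytical method delivers
          return np.array([np.cos(x[0]) * np.exp(-x[1] ** 2) + x[1],
                           -2 * x[1] * np.sin(x[0]) * np.exp(-x[1] ** 2) + x[0]])

      x = np.array([0.4, 0.2])
      h = 1e-6
      fd_grad = np.array([(objective(x + h * e) - objective(x)) / h
                          for e in np.eye(2)])   # 2 extra "flow solutions"
      print(fd_grad, analytic_grad(x))           # agree to roughly 1e-6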

  18. Efficient computation of the genomic relationship matrix and other matrices used in single-step evaluation.

    PubMed

    Aguilar, I; Misztal, I; Legarra, A; Tsuruta, S

    2011-12-01

    Genomic evaluations can be calculated using a unified procedure that combines phenotypic, pedigree, and genomic information. Implementation of such a procedure requires the inverse of the relationship matrix based on pedigree and genomic relationships. The objective of this study was to investigate efficient computing options to create relationship matrices based on genomic markers and pedigree information, as well as their inverses. SNP marker information was simulated for a panel of 40K SNPs, with the number of genotyped animals up to 30,000. Matrix multiplication in the computation of the genomic relationship matrix was performed by a simple 'do' loop, by two optimized versions of the loop, and by a specific matrix multiplication subroutine. Inversion was by a generalized inverse algorithm and by a LAPACK subroutine. With the most efficient choices and parallel processing, creation of matrices for 30,000 animals would take a few hours. Matrices required to implement a unified approach can be computed efficiently. Optimizations can be made either by modifying existing code or by using the efficient automatic optimizations provided by open-source or third-party libraries. © 2011 Blackwell Verlag GmbH.
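
    As one concrete formulation (VanRaden's first method, a common construction that may differ in detail from the options benchmarked in the study), the sketch below shows why the computation is dominated by a single large matrix product, which is exactly where optimized BLAS/LAPACK routines and parallel threads pay off. Dimensions are scaled down from the paper's 40K panel, and the small ridge added before inversion is an illustrative stabilization, not the paper's blending scheme.

      import numpy as np

      rng = np.random.default_rng(2)
      n_animals, n_snps = 1_000, 10_000    # scaled down for illustration
      M = rng.integers(0, 3, (n_animals, n_snps)).astype(np.float64)  # 0/1/2 genotypes

      p = M.mean(axis=0) / 2                      # allele frequency per SNP
      Z = M - 2 * p                               # center by expected genotype
      G = (Z @ Z.T) / (2 * np.sum(p * (1 - p)))   # dominant cost: one big matrix product

      G_inv = np.linalg.inv(G + 0.01 * np.eye(n_animals))  # ridge keeps G invertible
      print(G.shape, G_inv.shape)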

  19. Efficient and accurate Greedy Search Methods for mining functional modules in protein interaction networks.

    PubMed

    He, Jieyue; Li, Chaojun; Ye, Baoliu; Zhong, Wei

    2012-06-25

    Most computational algorithms focus on detecting highly connected subgraphs in PPI networks as protein complexes but ignore their inherent organization. Furthermore, many of these algorithms are computationally expensive. However, recent analysis indicates that experimentally detected protein complexes generally contain core/attachment structures. In this paper, a Greedy Search Method based on Core-Attachment structure (GSM-CA) is proposed. The GSM-CA method detects densely connected regions in large protein-protein interaction networks based on the edge weight and two criteria for determining core nodes and attachment nodes. The GSM-CA method improves prediction accuracy compared to other similar module detection approaches; however, it is computationally expensive. Many module detection approaches are based on traditional hierarchical methods, which are also computationally inefficient because the hierarchical tree structure produced by these approaches cannot provide adequate information to identify whether a network belongs to a module structure or not. In order to speed up the computational process, the Greedy Search Method based on Fast Clustering (GSM-FC) is proposed in this work. The edge-weight-based GSM-FC method uses a greedy procedure to traverse all edges just once to separate the network into a suitable set of modules. The proposed methods are applied to the protein interaction network of S. cerevisiae. Experimental results indicate that many significant functional modules are detected, most of which match known complexes. Results also demonstrate that the GSM-FC algorithm is faster and more accurate than other competing algorithms. Based on the new edge weight definition, the proposed algorithm takes advantage of the greedy search procedure to separate the network into a suitable set of modules. Experimental analysis shows that the identified modules are statistically significant. The algorithm can reduce the computational time significantly while keeping high prediction accuracy.

  20. Using Curriculum-Based Measurement To Monitor Kindergarteners' Mathematics Development

    ERIC Educational Resources Information Center

    Seethaler, Pamela M.; Fuchs, Lynn S.

    2011-01-01

    The purpose of this study was to examine technical and instructional features of a kindergarten curriculum-based measurement (CBM) tool designed to track students' mathematics progress in terms of computational concepts, procedures, and counting strategies. Students in 10 kindergarten classrooms in three elementary schools completed alternate…

  1. Alternative Modal Basis Selection Procedures For Reduced-Order Nonlinear Random Response Simulation

    NASA Technical Reports Server (NTRS)

    Przekop, Adam; Guo, Xinyun; Rizzi, Stephen A.

    2012-01-01

    Three procedures to guide selection of an efficient modal basis in a nonlinear random response analysis are examined. One method is based only on proper orthogonal decomposition, while the other two additionally involve smooth orthogonal decomposition. Acoustic random response problems are employed to assess the performance of the three modal basis selection approaches. A thermally post-buckled beam exhibiting snap-through behavior, a shallowly curved arch in the auto-parametric response regime and a plate structure are used as numerical test articles. The results of a computationally taxing full-order analysis in physical degrees of freedom are taken as the benchmark for comparison with the results from the three reduced-order analyses. For the cases considered, all three methods are shown to produce modal bases resulting in accurate and computationally efficient reduced-order nonlinear simulations.
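
    As background, a proper orthogonal decomposition of a snapshot matrix reduces to a singular value decomposition. The sketch below shows this ingredient, common to all three basis-selection procedures, on synthetic data; the smooth orthogonal decomposition variants used by two of the procedures are not shown.

      import numpy as np

      rng = np.random.default_rng(3)
      x = np.linspace(0, 1, 200)    # spatial grid
      t = np.linspace(0, 10, 500)   # time instants
      # synthetic response: two spatial modes with differing temporal content
      snapshots = (np.outer(np.sin(np.pi * x), np.sin(3 * t)) +
                   0.3 * np.outer(np.sin(2 * np.pi * x), rng.standard_normal(t.size)))

      U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
      energy = np.cumsum(s ** 2) / np.sum(s ** 2)
      n_modes = int(np.searchsorted(energy, 0.999) + 1)   # modes retained
      basis = U[:, :n_modes]                              # reduced-order modal basis
      print(f"{n_modes} POD modes capture 99.9% of the snapshot energy")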

  2. A Rapid Aerodynamic Design Procedure Based on Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan

    2001-01-01

    An aerodynamic design procedure that uses neural networks to model the functional behavior of the objective function in design space has been developed. This method incorporates several improvements to an earlier method that employed a strategy called parameter-based partitioning of the design space in order to reduce the computational costs associated with design optimization. As with the earlier method, the current method uses a sequence of response surfaces to traverse the design space in search of the optimal solution. The new method yields significant reductions in computational costs by using composite response surfaces with better generalization capabilities and by exploiting synergies between the optimization method and the simulation codes used to generate the training data. These reductions in design optimization costs are demonstrated for a turbine airfoil design study where a generic shape is evolved into an optimal airfoil.

  3. Basic design considerations for free-electron lasers driven by electron beams from RF accelerators

    NASA Astrophysics Data System (ADS)

    Gover, A.; Freund, H.; Granatstein, V. L.; McAdoo, J. H.; Tang, C.-M.

    A design procedure and design criteria are derived for free-electron lasers driven by electron beams from RF accelerators. The procedure and criteria permit an estimate of the oscillation-buildup time and the laser output power of various FEL schemes: with waveguide resonator or open resonator, with initial seed-radiation injection or with spontaneous-emission radiation source, with a linear wiggler or with a helical wiggler. Expressions are derived for computing the various FEL parameters, allowing for the design and optimization of the FEL operational characteristics under ideal conditions or with nonideal design parameters that may be limited by technological or practical constraints. The design procedure enables one to derive engineering curves and scaling laws for the FEL operating parameters. This can be done most conveniently with a computer program based on flowcharts given in the appendices.

  4. Identifying parameter regions for multistationarity

    PubMed Central

    Conradi, Carsten; Mincheva, Maya; Wiuf, Carsten

    2017-01-01

    Mathematical modelling has become an established tool for studying the dynamics of biological systems. Current applications range from building models that reproduce quantitative data to identifying systems with predefined qualitative features, such as switching behaviour, bistability or oscillations. Mathematically, the latter question amounts to identifying parameter values associated with a given qualitative feature. We introduce a procedure to partition the parameter space of a parameterized system of ordinary differential equations into regions for which the system has a unique equilibrium or multiple equilibria. The procedure is based on the computation of the Brouwer degree, and it creates a multivariate polynomial with parameter-dependent coefficients. The signs of the coefficients determine parameter regions with and without multistationarity. A particular strength of the procedure is the avoidance of numerical analysis and parameter sampling. The procedure consists of a number of steps. Each of these steps might be addressed algorithmically using various computer programs and available software, or manually. We demonstrate our procedure on several models of gene transcription and cell signalling, and show that in many cases we obtain a complete partitioning of the parameter space with respect to multistationarity. PMID:28972969

  5. Constraint treatment techniques and parallel algorithms for multibody dynamic analysis. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Chiou, Jin-Chern

    1990-01-01

    Computational procedures for kinematic and dynamic analysis of three-dimensional multibody dynamic (MBD) systems are developed from the differential-algebraic equations (DAEs) viewpoint. To minimize constraint violations during the time integration process, penalty constraint stabilization techniques and partitioning schemes are developed. The governing equations of motion are treated with a two-stage staggered explicit-implicit numerical algorithm that takes advantage of a partitioned solution procedure. A robust and parallelizable integration algorithm is developed; it uses a two-stage staggered central difference scheme to integrate the translational coordinates and the angular velocities. The angular orientations of bodies in MBD systems are then obtained with an implicit algorithm via the kinematic relationship between Euler parameters and angular velocities. It is shown that the combination of the present solution procedures yields a computationally more accurate solution. To speed up the computational procedures, the constraint treatment techniques and the two-stage staggered explicit-implicit numerical algorithm were implemented in parallel. The DAEs and the constraint treatment techniques were transformed into arrowhead matrices from which a Schur complement form was derived. By fully exploiting sparse matrix structural analysis techniques, a parallel preconditioned conjugate gradient algorithm is used to solve the system equations written in Schur complement form. A software testbed was designed and implemented on both sequential and parallel computers. This testbed was used to demonstrate the robustness and efficiency of the constraint treatment techniques, the accuracy of the two-stage staggered explicit-implicit numerical algorithm, and the speedup of the Schur-complement-based parallel preconditioned conjugate gradient algorithm on a parallel computer.

  6. 48 CFR 552.216-72 - Placement of Orders.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Acquisition Service (FAS) will place all orders by EDI using computer-to-computer EDI. If computer-to-computer... EDI. (d) When computer-to-computer EDI procedures will be used to place orders, the Contractor shall... electronic orders are placed, the transaction sets used, security procedures, and guidelines for...

  7. 48 CFR 552.216-72 - Placement of Orders.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Acquisition Service (FAS) will place all orders by EDI using computer-to-computer EDI. If computer-to-computer... EDI. (d) When computer-to-computer EDI procedures will be used to place orders, the Contractor shall... electronic orders are placed, the transaction sets used, security procedures, and guidelines for...

  8. 48 CFR 552.216-72 - Placement of Orders.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Acquisition Service (FAS) will place all orders by EDI using computer-to-computer EDI. If computer-to-computer... EDI. (d) When computer-to-computer EDI procedures will be used to place orders, the Contractor shall... electronic orders are placed, the transaction sets used, security procedures, and guidelines for...

  9. 48 CFR 552.216-72 - Placement of Orders.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Acquisition Service (FAS) will place all orders by EDI using computer-to-computer EDI. If computer-to-computer... EDI. (d) When computer-to-computer EDI procedures will be used to place orders, the Contractor shall... electronic orders are placed, the transaction sets used, security procedures, and guidelines for...

  10. 48 CFR 552.216-72 - Placement of Orders.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Acquisition Service (FAS) will place all orders by EDI using computer-to-computer EDI. If computer-to-computer... EDI. (d) When computer-to-computer EDI procedures will be used to place orders, the Contractor shall... electronic orders are placed, the transaction sets used, security procedures, and guidelines for...

  11. Review of edgematching procedures for digital cartographic data used in Geographic Information Systems (GIS)

    USGS Publications Warehouse

    Nebert, D.D.

    1989-01-01

    In the process of developing a continuous hydrographic data layer for water resources applications in the Pacific Northwest, map-edge discontinuities in the U.S. Geological Survey 1:100,000-scale digital data were identified that required the application of computer-assisted edgematching procedures. The spatial data sets required by the project must have line features that match closely enough across map boundaries to ensure full line topology when adjacent files are joined by the computer. Automated edgematching techniques are evaluated as to their effects on positional accuracy. Interactive methods such as selective node-matching and on-screen editing are also reviewed. Interactive procedures complement automated methods by allowing supervision of edgematching in a cartographic and hydrologic context. Common edge conditions encountered in the preparation of the Northwest Rivers data base are described, as are recommended processing solutions. Suggested edgematching procedures for 1:100,000-scale hydrography data are included in an appendix to encourage consistent processing of this theme on a national scale. (USGS)

  12. General rigid motion correction for computed tomography imaging based on locally linear embedding

    NASA Astrophysics Data System (ADS)

    Chen, Mianyi; He, Peng; Feng, Peng; Liu, Baodong; Yang, Qingsong; Wei, Biao; Wang, Ge

    2018-02-01

    Patient motion can degrade the quality of computed tomography images, which are typically acquired in cone-beam geometry. Rigid patient motion is characterized by six geometric parameters and is more challenging to correct than in fan-beam geometry. We extend our previous rigid patient motion correction method based on the principle of locally linear embedding (LLE) from fan-beam to cone-beam geometry and accelerate the computational procedure with the graphics processing unit (GPU)-based All Scale Tomographic Reconstruction Antwerp (ASTRA) toolbox. The major merit of our method is that we need neither fiducial markers nor motion-tracking devices. The numerical and experimental studies show that the LLE-based patient motion correction is capable of calibrating the six parameters of the patient motion simultaneously, reducing patient motion artifacts significantly.

  13. Computer program for the computation of total sediment discharge by the modified Einstein procedure

    USGS Publications Warehouse

    Stevens, H.H.

    1985-01-01

    Two versions of a computer program to compute total sediment discharge by the modified Einstein procedure are presented. The FORTRAN 77 language version is for use on the PRIME computer, and the BASIC language version is for use on most microcomputers. The program contains built-in limitations and input-output options that closely follow the original modified Einstein procedure. Program documentation and listings of both versions of the program are included. (USGS)

  14. Inverse finite-size scaling for high-dimensional significance analysis

    NASA Astrophysics Data System (ADS)

    Xu, Yingying; Puranen, Santeri; Corander, Jukka; Kabashima, Yoshiyuki

    2018-06-01

    We propose an efficient procedure for significance determination in high-dimensional dependence learning based on surrogate data testing, termed inverse finite-size scaling (IFSS). The IFSS method is based on our discovery of a universal scaling property of random matrices which enables inference about signal behavior from much smaller scale surrogate data than the dimensionality of the original data. As a motivating example, we demonstrate the procedure for ultra-high-dimensional Potts models with on the order of 10^10 parameters. IFSS reduces the computational effort of the data-testing procedure by several orders of magnitude, making it very efficient for practical purposes. This approach thus holds considerable potential for generalization to other types of complex models.

  15. Progressive fracture of fiber composites

    NASA Technical Reports Server (NTRS)

    Irvin, T. B.; Ginty, C. A.

    1983-01-01

    Refined models and procedures are described for determining progressive composite fracture in graphite/epoxy angleplied laminates. Lewis Research Center capabilities are utilized, including the Real Time Ultrasonic C-Scan (RUSCAN) experimental facility and the Composite Durability Structural Analysis (CODSTRAN) computer code. The CODSTRAN computer code is used to predict the fracture progression based on composite mechanics, finite element stress analysis, and fracture criteria modules. The RUSCAN facility, the CODSTRAN computer code, and a scanning electron microscope are used to determine durability and identify failure mechanisms in graphite/epoxy composites.

  16. 40 CFR 600.002-93 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... traveled by an automobile or group of automobiles per volume of fuel consumed as computed in § 600.113 or § 600.207; or (ii) The equivalent petroleum-based fuel economy for an electrically powered automobile as... means the equivalent petroleum-based fuel economy value as determined by the calculation procedure...

  17. Self-guaranteed measurement-based quantum computation

    NASA Astrophysics Data System (ADS)

    Hayashi, Masahito; Hajdušek, Michal

    2018-05-01

    In order to guarantee the output of a quantum computation, we usually assume that the component devices are trusted. However, when the total computation process is large, it is not easy to guarantee the whole system in the presence of scaling effects, unexpected noise, or unaccounted-for correlations between subsystems. If we do not trust the measurement basis or the prepared entangled state, we need to worry about such uncertainties. To this end, we propose a self-guaranteed protocol for verification of quantum computation under the scheme of measurement-based quantum computation, where no prior-trusted devices (measurement basis or entangled state) are needed. The approach we present enables the implementation of verifiable quantum computation using the measurement-based model in the context of a particular instance of delegated quantum computation, where the server prepares the initial computational resource and sends it to the client, who drives the computation by single-qubit measurements. Applying self-testing procedures, we are able to verify the initial resource as well as the operation of the quantum devices, and hence the computation itself. The overhead of our protocol scales with the size of the initial resource state to the power of 4 times the natural logarithm of the initial state's size.

  18. Automatic computer procedure for generating exact and analytical kinetic energy operators based on the polyspherical approach: General formulation and removal of singularities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ndong, Mamadou; Lauvergnat, David; Nauts, André

    2013-11-28

    We present new techniques for an automatic computation of the kinetic energy operator in analytical form. These techniques are based on the use of the polyspherical approach and are extended to take into account Cartesian coordinates as well. An automatic procedure is developed where analytical expressions are obtained by symbolic calculations. This procedure is a full generalization of the one presented in Ndong et al. [J. Chem. Phys. 136, 034107 (2012)]. The correctness of the new implementation is analyzed by comparison with results obtained from the TNUM program. We give several illustrations that could be useful for users of the code. In particular, we discuss some cyclic compounds which are important in photochemistry. Among others, we show that choosing a well-adapted parameterization and decomposition into subsystems can allow one to avoid singularities in the kinetic energy operator. We also discuss a relation between polyspherical and Z-matrix coordinates: this comparison could be helpful for building an interface between the new code and a quantum chemistry package.

  19. Building a Computer Program to Support Children, Parents, and Distraction during Healthcare Procedures

    PubMed Central

    McCarthy, Ann Marie; Kleiber, Charmaine; Ataman, Kaan; Street, W. Nick; Zimmerman, M. Bridget; Ersig, Anne L.

    2012-01-01

    This secondary data analysis used data mining methods to develop predictive models of child risk for distress during a healthcare procedure. Data used came from a study that predicted factors associated with children’s responses to an intravenous catheter insertion while parents provided distraction coaching. From the 255 items used in the primary study, 44 predictive items were identified through automatic feature selection and used to build support vector machine regression models. Models were validated using multiple cross-validation tests and by comparing variables identified as explanatory in the traditional versus support vector machine regression. Rule-based approaches were applied to the model outputs to identify overall risk for distress. A decision tree was then applied to evidence-based instructions for tailoring distraction to characteristics and preferences of the parent and child. The resulting decision support computer application, the Children, Parents and Distraction (CPaD), is being used in research. Future use will support practitioners in deciding the level and type of distraction intervention needed by a child undergoing a healthcare procedure. PMID:22805121
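
    As a loose sketch of the pipeline described above (automatic feature selection feeding support vector machine regression, validated by cross-validation, with a rule-based layer on top): the data, the selected-feature count, and the risk thresholds below are invented stand-ins, not the CPaD models.

        import numpy as np
        from sklearn.feature_selection import SelectKBest, f_regression
        from sklearn.svm import SVR
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        X = rng.standard_normal((255, 255))                   # hypothetical item responses
        y = X[:, :5].sum(axis=1) + rng.standard_normal(255)   # hypothetical distress score

        # Feature selection feeding support vector regression, validated by
        # cross-validation, following the general recipe in the abstract.
        model = make_pipeline(SelectKBest(f_regression, k=44), SVR(kernel="rbf"))
        scores = cross_val_score(model, X, y, cv=5, scoring="r2")
        print("cross-validated R^2:", scores.mean())

        # A simple rule-based layer on the model output (thresholds are invented).
        model.fit(X, y)
        risk = np.digitize(model.predict(X), bins=[-1.0, 1.0])  # 0=low, 1=med, 2=high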

  20. PC based temporary shielding administrative procedure (TSAP)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olsen, D.E.; Pederson, G.E.; Hamby, P.N.

    1995-03-01

    A completely new Administrative Procedure for temporary shielding was developed for use at Commonwealth Edison`s six nuclear stations. This procedure promotes the use of shielding, and addresses industry requirements for the use and control of temporary shielding. The importance of an effective procedure has increased since more temporary shielding is being used as ALARA goals become more ambitious. To help implement the administrative procedure, a personal computer software program was written to incorporate the procedural requirements. This software incorporates the useability of a Windows graphical user interface with extensive help and database features. This combination of a comprehensive administrative proceduremore » and user friendly software promotes the effective use and management of temporary shielding while ensuring that industry requirements are met.« less

  1. Computer vision based nacre thickness measurement of Tahitian pearls

    NASA Astrophysics Data System (ADS)

    Loesdau, Martin; Chabrier, Sébastien; Gabillon, Alban

    2017-03-01

    The Tahitian Pearl is the most valuable export product of French Polynesia, contributing over 61 million Euros, more than 50% of the total export income. To maintain its excellent reputation on the international market, an obligatory quality control for every pearl deemed for exportation has been established by the local government. One of the controlled quality parameters is the pearl's nacre thickness. The evaluation is currently done manually by experts who visually analyze X-ray images of the pearls. In this article, a computer vision based approach to automate this procedure is presented. Even though computer vision based approaches for pearl nacre thickness measurement exist in the literature, the very specific features of the Tahitian pearl, namely the large shape variety and the occurrence of cavities, have so far not been considered. The presented work closes this gap. Our method consists of segmenting the pearl from X-ray images with a model-based approach, segmenting the pearl's nucleus with a purpose-built heuristic circle detection, and segmenting possible cavities with region growing. From the obtained boundaries, the 2-dimensional nacre thickness profile can be calculated. A certainty measure accounting for imaging and segmentation imprecision is included in the procedure. The proposed algorithms are tested on 298 manually evaluated Tahitian pearls, showing that it is generally possible to automatically evaluate the nacre thickness of Tahitian pearls with computer vision. Furthermore, the results show that the automatic measurement is more precise and faster than the manual one.
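
    A minimal sketch of this kind of pipeline: detect the roughly circular nucleus, segment the outer boundary, and take the radial difference as a thickness profile. The input path is hypothetical, and OpenCV's standard Hough transform stands in for the authors' own heuristic circle detector.

        import cv2
        import numpy as np

        # Load an X-ray image of a pearl (path is hypothetical).
        img = cv2.imread("pearl_xray.png", cv2.IMREAD_GRAYSCALE)
        blur = cv2.medianBlur(img, 5)

        # Detect the nucleus; a generic Hough circle transform stands in
        # for the paper's purpose-built heuristic circle detection.
        circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                                   param1=120, param2=40,
                                   minRadius=20, maxRadius=200)

        if circles is not None:
            cx, cy, r_nucleus = circles[0, 0]
            # Segment the pearl's outer boundary: Otsu threshold + largest contour.
            _, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
            outer = max(contours, key=cv2.contourArea).reshape(-1, 2)
            # Radial nacre thickness: outer-boundary radius minus nucleus radius.
            radii = np.hypot(outer[:, 0] - cx, outer[:, 1] - cy)
            thickness_profile = radii - r_nucleus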

  2. System Study at SUNY College Bookstore/Oswego

    ERIC Educational Resources Information Center

    DeVita, Richard; And Others

    1975-01-01

    A system study of the textbook ordering department is presented including systems flow chart, chart of activities, and description of operations and procedures for utilizing the computer system. Changes based on the study are noted. (JT)

  3. LUMIS: Land Use Management and Information Systems; coordinate oriented program documentation

    NASA Technical Reports Server (NTRS)

    1976-01-01

    An integrated geographic information system to assist program managers and planning groups in metropolitan regions is presented. The series of computer software programs and procedures involved in data base construction uses the census DIME file and point-in-polygon architectures. The system is described in two parts: (1) instructions to operators with regard to digitizing and editing procedures, and (2) application of data base construction algorithms to achieve map registration, assure the topological integrity of polygon files, and tabulate land use acreages within administrative districts.

  4. Constraint-Based Modeling: From Cognitive Theory to Computer Tutoring--and Back Again

    ERIC Educational Resources Information Center

    Ohlsson, Stellan

    2016-01-01

    The ideas behind the constraint-based modeling (CBM) approach to the design of intelligent tutoring systems (ITSs) grew out of attempts in the 1980's to clarify how declarative and procedural knowledge interact during skill acquisition. The learning theory that underpins CBM was based on two conceptual innovations. The first innovation was to…

  5. National Hospital Discharge Survey: 2001 annual summary with detailed diagnosis and procedure data.

    PubMed

    Kozak, Lola Jean; Owings, Maria F; Hall, Margaret J

    2004-06-01

    This report presents 2001 national estimates and selected trend data on the use of non-Federal short-stay hospitals in the United States. Estimates are provided by selected patient and hospital characteristics, diagnoses, and surgical and nonsurgical procedures performed. Admission source and type, collected for the first time in the 2001 National Hospital Discharge Survey, are shown. The estimates are based on data collected through the National Hospital Discharge Survey (NHDS). The survey has been conducted annually since 1965. In 2001, data were collected for approximately 330,000 discharges. Of the 477 eligible non-Federal short-stay hospitals in the sample, 448 (94 percent) responded to the survey. Estimates of diagnoses and procedures are presented according to International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) code numbers. Rates are computed with 2001 population estimates based on the 2000 census. The appendix includes a comparison of rates computed with 1990 and 2000 census-based population estimates. An estimated 32.7 million inpatients were discharged from non-Federal short-stay hospitals in 2001. They used 159.4 million days of care and had an average length of stay of 4.9 days. Common first-listed discharge diagnoses included delivery, psychoses, pneumonia, malignant neoplasm, and coronary atherosclerosis. Males had higher rates for procedures such as cardiac catheterization and coronary artery bypass graft, and females had higher rates for procedures such as cholecystectomy and total knee replacement. The rates of all cesarean deliveries, primary and repeat, rose from 1995 to 2001; the rate of vaginal birth after cesarean delivery dropped 37 percent during this period.

  6. Slice-thickness evaluation in CT and MRI: an alternative computerised procedure.

    PubMed

    Acri, G; Tripepi, M G; Causa, F; Testagrossa, B; Novario, R; Vermiglio, G

    2012-04-01

    The efficient use of computed tomography (CT) and magnetic resonance imaging (MRI) equipment necessitates establishing adequate quality-control (QC) procedures. In particular, the accuracy of slice thickness (ST) requires scan exploration of phantoms containing test objects (plane, cone or spiral). To simplify such procedures, a novel phantom and a computerised LabView-based procedure have been devised, enabling determination of full width at half maximum (FWHM) in real time. The phantom consists of a polymethyl methacrylate (PMMA) box, diagonally crossed by a PMMA septum dividing the box into two sections. The phantom images were acquired and processed using the LabView-based procedure. The LabView (LV) results were compared with those obtained by processing the same phantom images with commercial software, and the Fisher exact test (F test) was conducted on the resulting data sets to validate the proposed methodology. In all cases, there was no statistically significant variation between the two different procedures and the LV procedure, which can therefore be proposed as a valuable alternative to other commonly used procedures and be reliably used on any CT and MRI scanner.
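
    The core numerical step, determining a full width at half maximum from an image profile, can be sketched as follows; this is a generic FWHM routine with a synthetic test profile, not the authors' LabView code.

        import numpy as np

        def fwhm(x, y):
            """FWHM of a single-peaked profile y(x), using linear
            interpolation at the two half-maximum crossings."""
            y = np.asarray(y, float) - np.min(y)        # remove baseline
            half = y.max() / 2.0
            above = np.where(y >= half)[0]
            i, j = above[0], above[-1]
            # Interpolate the left (rising) and right (falling) crossings.
            xl = np.interp(half, [y[i - 1], y[i]], [x[i - 1], x[i]])
            xr = np.interp(half, [y[j + 1], y[j]], [x[j + 1], x[j]])
            return xr - xl

        # Check: a Gaussian of sigma=2 has FWHM = 2*sqrt(2*ln 2)*sigma ~ 4.71
        x = np.linspace(-10, 10, 1001)
        print(fwhm(x, np.exp(-x**2 / (2 * 2.0**2))))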

  7. SKYSINE-II procedure: calculation of the effects of structure design on neutron, primary gamma-ray and secondary gamma-ray dose rates in air

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lampley, C.M.

    1979-01-01

    An updated version of the SKYSHINE Monte Carlo procedure has been developed. The new computer code, SKYSHINE-II, provides a substantial increase in versatility in that the program possesses the ability to address three types of point-isotropic radiation sources: (1) primary gamma rays, (2) neutrons, and (3) secondary gamma rays. In addition, the emitted radiation may now be characterized by an energy emission spectrum, drawing on a new energy-dependent atmospheric transmission data base developed by Radiation Research Associates, Inc. for each of the three source types described above. Most of the computational options present in the original program have been retained in the new version. Hence, the SKYSHINE-II computer code provides a versatile and viable tool for the analysis of the radiation environment in the vicinity of a building structure containing radiation sources, situated within the confines of a nuclear power plant. This report describes many of the calculational methods employed within the SKYSHINE-II program. A brief description of the new data base is included. Utilization instructions for the program are provided for operation of the SKYSHINE-II code on the Brookhaven National Laboratory Central Scientific Computing Facility. A listing of the source decks, block data routines, and the new atmospheric transmission data base are provided in the appendices of the report.

  8. A novel technique for presurgical nasoalveolar molding using computer-aided reverse engineering and rapid prototyping.

    PubMed

    Yu, Quan; Gong, Xin; Wang, Guo-Min; Yu, Zhe-Yuan; Qian, Yu-Fen; Shen, Gang

    2011-01-01

    To establish a new method of presurgical nasoalveolar molding (NAM) using computer-aided reverse engineering and rapid prototyping technique in infants with unilateral cleft lip and palate (UCLP). Five infants (2 males and 3 females with mean age of 1.2 w) with complete UCLP were recruited. All patients were subjected to NAM before the cleft lip repair. The upper denture casts were recorded using a three-dimensional laser scanner within 2 weeks after birth in UCLP infants. A digital model was constructed and analyzed to simulate the NAM procedure with reverse engineering software. The digital geometrical data were exported to print the solid model with rapid prototyping system. The whole set of appliances was fabricated based on these solid models. Laser scanning and digital model construction simplified the NAM procedure and estimated the treatment objective. The appliances were fabricated based on the rapid prototyping technique, and for each patient, the complete set of appliances could be obtained at one time. By the end of presurgical NAM treatment, the cleft was narrowed, and the malformation of nasoalveolar segments was aligned normally. We have developed a novel technique of presurgical NAM based on a computer-aided design. The accurate digital denture model of UCLP infants could be obtained with laser scanning. The treatment design and appliance fabrication could be simplified with a computer-aided reverse engineering and rapid prototyping technique.

  9. Thermodynamic evaluation of transonic compressor rotors using the finite volume approach

    NASA Technical Reports Server (NTRS)

    Moore, John; Nicholson, Stephen; Moore, Joan G.

    1986-01-01

    The development of a computational capability to handle viscous flow with an explicit time-marching method based on the finite volume approach is summarized. Emphasis is placed on the extensions to the computational procedure which allow the handling of shock induced separation and large regions of strong backflow. Appendices contain abstracts of papers and whole reports generated during the contract period.

  10. Evaluative methodology for comprehensive water quality management planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dyer, H. L.

    Computer-based evaluative methodologies have been developed to provide for the analysis of coupled phenomena associated with natural resource comprehensive planning requirements. Provisions for planner/computer interaction have been included. Each of the simulation models developed is described in terms of its coded procedures. An application of the models for water quality management planning is presented; and the data requirements for each of the models are noted.

  11. Reliable Early Classification on Multivariate Time Series with Numerical and Categorical Attributes

    DTIC Science & Technology

    2015-05-22

    design a procedure of feature extraction in REACT named MEG (Mining Equivalence classes with shapelet Generators) based on the concept of ... Equivalence Classes Mining [12, 15]. MEG can efficiently and effectively generate the discriminative features. In addition, several strategies are proposed ... technique of parallel computing [4] to propose a process of parallel MEG for substantially reducing the computational overhead of discovering shapelet ...

  12. Thermal stress analysis of reusable surface insulation for shuttle

    NASA Technical Reports Server (NTRS)

    Ojalvo, I. U.; Levy, A.; Austin, F.

    1974-01-01

    An iterative procedure for accurately determining tile stresses associated with static mechanical and thermally induced internal loads is presented. The necessary conditions for convergence of the method are derived. A user-oriented computer program based upon the present method of analysis was developed. The program is capable of analyzing multi-tiled panels and determining the associated stresses. Typical numerical results from this computer program are presented.

  13. A comparative study of serial and parallel aeroelastic computations of wings

    NASA Technical Reports Server (NTRS)

    Byun, Chansup; Guruswamy, Guru P.

    1994-01-01

    A procedure for computing the aeroelasticity of wings on parallel multiple-instruction, multiple-data (MIMD) computers is presented. In this procedure, fluids are modeled using Euler equations, and structures are modeled using modal or finite element equations. The procedure is designed in such a way that each discipline can be developed and maintained independently by using a domain decomposition approach. In the present parallel procedure, each computational domain is scalable. A parallel integration scheme is used to compute aeroelastic responses by solving fluid and structural equations concurrently. The computational efficiency issues of parallel integration of both fluid and structural equations are investigated in detail. This approach, which reduces the total computational time by a factor of almost 2, is demonstrated for a typical aeroelastic wing by using various numbers of processors on the Intel iPSC/860.

  14. A General Cross-Layer Cloud Scheduling Framework for Multiple IoT Computer Tasks.

    PubMed

    Wu, Guanlin; Bao, Weidong; Zhu, Xiaomin; Zhang, Xiongtao

    2018-05-23

    The diversity of IoT services and applications brings enormous challenges to improving the performance of scheduling multiple computer tasks in cross-layer cloud computing systems. Unfortunately, the commonly-employed frameworks fail to adapt to the new patterns on the cross-layer cloud. To solve this issue, we design a new computer task scheduling framework for multiple IoT services in cross-layer cloud computing systems. Specifically, we first analyze the features of the cross-layer cloud and computer tasks. Then, we design the scheduling framework based on the analysis and present detailed models to illustrate the procedures of using the framework. With the proposed framework, the IoT services deployed in cross-layer cloud computing systems can dynamically select suitable algorithms and use resources more effectively to finish computer tasks with different objectives. Finally, algorithms based on the framework are given, and extensive experiments validate its effectiveness as well as its superiority.

  15. Design of transonic airfoil sections using a similarity theory

    NASA Technical Reports Server (NTRS)

    Nixon, D.

    1978-01-01

    A study of the available methods for transonic airfoil and wing design indicates that the most powerful technique is the numerical optimization procedure. However, the computer time for this method is relatively large because of the amount of computation required in the searches during optimization. The optimization method requires that base and calibration solutions be computed to determine a minimum drag direction. The design space is then computationally searched in this direction; it is these searches that dominate the computation time. A recent similarity theory allows certain transonic flows to be calculated rapidly from the base and calibration solutions. In this paper the application of the similarity theory to design problems is examined with the object of at least partially eliminating the costly searches of the design optimization method. An example of an airfoil design is presented.

  16. Accuracy of the microcanonical Lanczos method to compute real-frequency dynamical spectral functions of quantum models at finite temperatures.

    PubMed

    Okamoto, Satoshi; Alvarez, Gonzalo; Dagotto, Elbio; Tohyama, Takami

    2018-04-01

    We examine the accuracy of the microcanonical Lanczos method (MCLM) developed by Long et al. [Phys. Rev. B 68, 235106 (2003), 10.1103/PhysRevB.68.235106] to compute dynamical spectral functions of interacting quantum models at finite temperatures. The MCLM is based on the microcanonical ensemble, which becomes exact in the thermodynamic limit. To apply the microcanonical ensemble at a fixed temperature, one has to find energy eigenstates with the energy eigenvalue corresponding to the internal energy in the canonical ensemble. Here, we propose to use thermal pure quantum state methods by Sugiura and Shimizu [Phys. Rev. Lett. 111, 010401 (2013), 10.1103/PhysRevLett.111.010401] to obtain the internal energy. After obtaining the energy eigenstates using the Lanczos diagonalization method, dynamical quantities are computed via a continued fraction expansion, a standard procedure for Lanczos-based numerical methods. Using one-dimensional antiferromagnetic Heisenberg chains with S=1/2, we demonstrate that the proposed procedure is reasonably accurate, even for relatively small systems.
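
    A minimal numpy sketch of the final step named above: Lanczos tridiagonalization followed by a continued-fraction evaluation of a spectral function. A random symmetric matrix stands in for the many-body Hamiltonian, and all parameters are illustrative.

        import numpy as np

        def lanczos(H, v0, m=100):
            """Lanczos tridiagonalization: diagonal a and off-diagonal b."""
            a, b = [], []
            q_prev = np.zeros_like(v0)
            q = v0 / np.linalg.norm(v0)
            beta = 0.0
            for _ in range(m):
                w = H @ q - beta * q_prev
                alpha = np.vdot(q, w).real
                w -= alpha * q
                beta = np.linalg.norm(w)
                a.append(alpha); b.append(beta)
                if beta < 1e-12:
                    break
                q_prev, q = q, w / beta
            return np.array(a), np.array(b[:-1])

        def spectral_function(omega, a, b, eta=0.05):
            """A(w) = -Im G(w + i*eta) / pi via the continued fraction
            G = 1/(z - a0 - b0^2/(z - a1 - b1^2/(...)))."""
            z = omega + 1j * eta
            g = np.zeros_like(z, dtype=complex)
            b2 = np.r_[b, 0.0] ** 2          # b[n] couples level n to n+1
            for n in range(len(a) - 1, -1, -1):
                g = 1.0 / (z - a[n] - b2[n] * g)
            return -g.imag / np.pi

        rng = np.random.default_rng(0)
        H = rng.standard_normal((400, 400)); H = (H + H.T) / 2
        v = rng.standard_normal(400)
        a, b = lanczos(H, v, m=150)
        A = spectral_function(np.linspace(-30, 30, 600), a, b)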

  17. Enabling Big Geoscience Data Analytics with a Cloud-Based, MapReduce-Enabled and Service-Oriented Workflow Framework

    PubMed Central

    Li, Zhenlong; Yang, Chaowei; Jin, Baoxuan; Yu, Manzhu; Liu, Kai; Sun, Min; Zhan, Matthew

    2015-01-01

    Geoscience observations and model simulations are generating vast amounts of multi-dimensional data. Effectively analyzing these data is essential for geoscience studies. However, the tasks are challenging for geoscientists because processing the massive amounts of data is both computing- and data-intensive: the analytics require complex procedures and multiple tools. To tackle these challenges, a scientific workflow framework is proposed for big geoscience data analytics. In this framework, techniques are proposed that leverage cloud computing, MapReduce, and Service Oriented Architecture (SOA). Specifically, HBase is adopted for storing and managing big geoscience data across distributed computers. A MapReduce-based algorithm framework is developed to support parallel processing of geoscience data, and a service-oriented workflow architecture is built to support on-demand complex data analytics in the cloud environment. A proof-of-concept prototype tests the performance of the framework. Results show that this innovative framework significantly improves the efficiency of big geoscience data analytics by reducing data processing time as well as simplifying analytical procedures for geoscientists. PMID:25742012
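
    As a toy illustration of the MapReduce pattern this framework builds on (independent of HBase or any cloud deployment), the sketch below runs a map, shuffle, and reduce over invented gridded records in plain Python.

        from collections import defaultdict
        import numpy as np

        # Toy stand-in for gridded geoscience records: (cell_id, value) pairs.
        rng = np.random.default_rng(2)
        records = [(f"cell_{i % 10}", rng.normal()) for i in range(1000)]

        def mapper(record):
            cell, value = record
            yield cell, (value, 1)

        def reducer(cell, items):
            total = sum(v for v, _ in items)
            count = sum(c for _, c in items)
            return cell, total / count

        # Shuffle/sort phase: group mapped pairs by key.
        groups = defaultdict(list)
        for rec in records:
            for key, val in mapper(rec):
                groups[key].append(val)

        # Reduce phase: per-cell mean over all records.
        means = dict(reducer(k, v) for k, v in groups.items())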

  18. Accuracy of the microcanonical Lanczos method to compute real-frequency dynamical spectral functions of quantum models at finite temperatures

    NASA Astrophysics Data System (ADS)

    Okamoto, Satoshi; Alvarez, Gonzalo; Dagotto, Elbio; Tohyama, Takami

    2018-04-01

    We examine the accuracy of the microcanonical Lanczos method (MCLM) developed by Long et al. [Phys. Rev. B 68, 235106 (2003), 10.1103/PhysRevB.68.235106] to compute dynamical spectral functions of interacting quantum models at finite temperatures. The MCLM is based on the microcanonical ensemble, which becomes exact in the thermodynamic limit. To apply the microcanonical ensemble at a fixed temperature, one has to find energy eigenstates with the energy eigenvalue corresponding to the internal energy in the canonical ensemble. Here, we propose to use thermal pure quantum state methods by Sugiura and Shimizu [Phys. Rev. Lett. 111, 010401 (2013), 10.1103/PhysRevLett.111.010401] to obtain the internal energy. After obtaining the energy eigenstates using the Lanczos diagonalization method, dynamical quantities are computed via a continued fraction expansion, a standard procedure for Lanczos-based numerical methods. Using one-dimensional antiferromagnetic Heisenberg chains with S=1/2, we demonstrate that the proposed procedure is reasonably accurate, even for relatively small systems.

  19. Computing tools for implementing standards for single-case designs.

    PubMed

    Chen, Li-Ting; Peng, Chao-Ying Joanne; Chen, Ming-E

    2015-11-01

    In the single-case design (SCD) literature, five sets of standards have been formulated and distinguished: design standards, assessment standards, analysis standards, reporting standards, and research synthesis standards. This article reviews computing tools that can assist researchers and practitioners in meeting the analysis standards recommended by the What Works Clearinghouse: Procedures and Standards Handbook-the WWC standards. These tools consist of specialized web-based calculators or downloadable software for SCD data, and algorithms or programs written in Excel, SAS procedures, SPSS commands/Macros, or the R programming language. We aligned these tools with the WWC standards and evaluated them for accuracy and treatment of missing data, using two published data sets. All tools were tested to be accurate. When missing data were present, most tools either gave an error message or conducted analysis based on the available data. Only one program used a single imputation method. This article concludes with suggestions for an inclusive computing tool or environment, additional research on the treatment of missing data, and reasonable and flexible interpretations of the WWC standards. © The Author(s) 2015.

  20. Enabling big geoscience data analytics with a cloud-based, MapReduce-enabled and service-oriented workflow framework.

    PubMed

    Li, Zhenlong; Yang, Chaowei; Jin, Baoxuan; Yu, Manzhu; Liu, Kai; Sun, Min; Zhan, Matthew

    2015-01-01

    Geoscience observations and model simulations are generating vast amounts of multi-dimensional data. Effectively analyzing these data is essential for geoscience studies. However, the tasks are challenging for geoscientists because processing the massive amounts of data is both computing- and data-intensive: the analytics require complex procedures and multiple tools. To tackle these challenges, a scientific workflow framework is proposed for big geoscience data analytics. In this framework, techniques are proposed that leverage cloud computing, MapReduce, and Service Oriented Architecture (SOA). Specifically, HBase is adopted for storing and managing big geoscience data across distributed computers. A MapReduce-based algorithm framework is developed to support parallel processing of geoscience data, and a service-oriented workflow architecture is built to support on-demand complex data analytics in the cloud environment. A proof-of-concept prototype tests the performance of the framework. Results show that this innovative framework significantly improves the efficiency of big geoscience data analytics by reducing data processing time as well as simplifying analytical procedures for geoscientists.

  1. A survey of GPU-based acceleration techniques in MRI reconstructions

    PubMed Central

    Wang, Haifeng; Peng, Hanchuan; Chang, Yuchou

    2018-01-01

    Image reconstruction in magnetic resonance imaging (MRI) clinical applications has become increasingly complicated. However, diagnosis and treatment require very fast computational procedures. Modern graphics processing unit (GPU) platforms have made high-performance parallel computing available, and attractive to common consumers, for massively parallel reconstruction problems at commodity prices. GPUs have also become more and more important for reconstruction computations, especially as deep learning starts to be applied to MRI reconstruction. The motivation of this survey is to review the image reconstruction schemes of GPU computing for MRI applications and provide a summary reference for researchers in the MRI community. PMID:29675361

  2. A survey of GPU-based acceleration techniques in MRI reconstructions.

    PubMed

    Wang, Haifeng; Peng, Hanchuan; Chang, Yuchou; Liang, Dong

    2018-03-01

    Image reconstruction in magnetic resonance imaging (MRI) clinical applications has become increasingly complicated. However, diagnosis and treatment require very fast computational procedures. Modern graphics processing unit (GPU) platforms have made high-performance parallel computing available, and attractive to common consumers, for massively parallel reconstruction problems at commodity prices. GPUs have also become more and more important for reconstruction computations, especially as deep learning starts to be applied to MRI reconstruction. The motivation of this survey is to review the image reconstruction schemes of GPU computing for MRI applications and provide a summary reference for researchers in the MRI community.

  3. System Design Considerations for Microcomputer Based Instructional Laboratories.

    DTIC Science & Technology

    1986-04-01

    when wrong procedures are tried as well as correct procedures. This is sometimes called "free play" simulation. While this form of simulation ... steps are performed correctly. Unlike "free play" system simulations, the student must perform the operation in an approved manner. 28 V. Technical ... Supports free play exercises o Typically does not tutor a student o Used for skill development and performance measurement Task Simulation o Computer

  4. Intermuscular pterygoid-temporal abscess following inferior alveolar nerve block anesthesia–A computer tomography based navigated surgical intervention: Case report and review

    PubMed Central

    Wallner, Jürgen; Reinbacher, Knut Ernst; Pau, Mauro; Feichtinger, Matthias

    2014-01-01

    Inferior alveolar nerve block (IANB) anesthesia is a common local anesthetic procedure. Although IANB anesthesia is known for its safety, complications can still occur. Today, both immediate and delayed-onset disorders following IANB anesthesia, and their treatment, are well recognized. We present a case of a patient who developed a symptomatic abscess in the pterygoid region as a result of several inferior alveolar nerve injections. Clinical symptoms included diffuse pain, reduced mouth opening, and hypomobility of the jaw, and persisted under first-step conservative treatment. Since image-based navigated interventions have gained in importance and are used for various procedures, a navigated surgical intervention was initiated as second-step therapy. A precise, atraumatic surgical intervention was thus performed with an optical tracking system in a difficult anatomical region. A symptomatic abscess was treated by a computed tomography-based navigated surgical intervention at our department. Advantages and disadvantages of this treatment strategy are evaluated. PMID:24987612

  5. Intermuscular pterygoid-temporal abscess following inferior alveolar nerve block anesthesia-A computer tomography based navigated surgical intervention: Case report and review.

    PubMed

    Wallner, Jürgen; Reinbacher, Knut Ernst; Pau, Mauro; Feichtinger, Matthias

    2014-01-01

    Inferior alveolar nerve block (IANB) anesthesia is a common local anesthetic procedure. Although IANB anesthesia is known for its safety, complications can still occur. Today, both immediate and delayed-onset disorders following IANB anesthesia, and their treatment, are well recognized. We present a case of a patient who developed a symptomatic abscess in the pterygoid region as a result of several inferior alveolar nerve injections. Clinical symptoms included diffuse pain, reduced mouth opening, and hypomobility of the jaw, and persisted under first-step conservative treatment. Since image-based navigated interventions have gained in importance and are used for various procedures, a navigated surgical intervention was initiated as second-step therapy. A precise, atraumatic surgical intervention was thus performed with an optical tracking system in a difficult anatomical region. A symptomatic abscess was treated by a computed tomography-based navigated surgical intervention at our department. Advantages and disadvantages of this treatment strategy are evaluated.

  6. Assessing the utility of phase-space-localized basis functions: Exploiting direct product structure and a new basis function selection procedure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, James, E-mail: 9jhb3@queensu.ca; Carrington, Tucker, E-mail: Tucker.Carrington@queensu.ca

    In this paper we show that it is possible to use an iterative eigensolver in conjunction with Halverson and Poirier's symmetrized Gaussian (SG) basis [T. Halverson and B. Poirier, J. Chem. Phys. 137, 224101 (2012)] to compute accurate vibrational energy levels of molecules with as many as five atoms. This is done, without storing and manipulating large matrices, by solving a regular eigenvalue problem that makes it possible to exploit direct-product structure. These ideas are combined with a new procedure for selecting which basis functions to use. The SG basis we work with is orders of magnitude smaller than the basis made by using a classical energy criterion. We find significant convergence errors in previous calculations with SG bases. For sum-of-product Hamiltonians, SG bases large enough to compute accurate levels are orders of magnitude larger than even simple pruned bases composed of products of harmonic oscillator functions.

  7. Recent developments in computer vision-based analytical chemistry: A tutorial review.

    PubMed

    Capitán-Vallvey, Luis Fermín; López-Ruiz, Nuria; Martínez-Olmos, Antonio; Erenas, Miguel M; Palma, Alberto J

    2015-10-29

    Chemical analysis based on colour changes recorded with imaging devices is gaining increasing interest. This is due to several significant advantages, such as simplicity of use and easy combination with portable, widely distributed imaging devices, resulting in user-friendly analytical procedures in many areas that demand out-of-lab applications for in situ and real-time monitoring. This tutorial review covers computer vision-based analytical chemistry (CVAC) procedures and systems from 2005 to 2015, a period in which 87.5% of the papers on this topic were published. The background regarding colour spaces and recent analytical system architectures of interest in analytical chemistry is presented in the form of a tutorial. Moreover, issues regarding images, such as the influence of illuminants, and the most relevant techniques for processing and analysing digital images are addressed. Some of the most relevant applications are then detailed, highlighting their main characteristics. Finally, our opinion about future perspectives is discussed. Copyright © 2015 Elsevier B.V. All rights reserved.
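
    A small sketch of the colour-based calibration idea common to CVAC procedures: convert patch colours to a chroma-oriented space (HSV here, which separates hue from intensity) and fit a calibration curve against concentration. The RGB values and concentrations below are invented.

        import colorsys
        import numpy as np

        # Hypothetical mean RGB values (0-1) of sensor spots at known concentrations.
        concs = np.array([0.0, 0.25, 0.5, 0.75, 1.0])          # mM, invented
        rgbs = [(0.95, 0.30, 0.30), (0.85, 0.45, 0.30), (0.70, 0.60, 0.30),
                (0.50, 0.70, 0.30), (0.30, 0.80, 0.30)]

        # Hue is less sensitive to overall illumination level than raw RGB.
        hues = np.array([colorsys.rgb_to_hsv(*rgb)[0] for rgb in rgbs])

        slope, intercept = np.polyfit(hues, concs, 1)           # linear calibration
        unknown_hue = colorsys.rgb_to_hsv(0.60, 0.65, 0.30)[0]
        print("estimated concentration:", slope * unknown_hue + intercept)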

  8. Evolution of Geometric Sensitivity Derivatives from Computer Aided Design Models

    NASA Technical Reports Server (NTRS)

    Jones, William T.; Lazzara, David; Haimes, Robert

    2010-01-01

    The generation of design parameter sensitivity derivatives is required for gradient-based optimization. Such sensitivity derivatives are elusive at best when working with geometry defined within the solid modeling context of Computer-Aided Design (CAD) systems. Solid modeling CAD systems are often proprietary and always complex, thereby necessitating ad hoc procedures to infer parameter sensitivity. A new perspective is presented that makes direct use of the hierarchical associativity of CAD features to trace their evolution and thereby track design parameter sensitivity. In contrast to ad hoc methods, this method provides a more concise procedure following the model design intent and determining the sensitivity of CAD geometry directly to its respective defining parameters.

  9. Static and Dynamic Model Update of an Inflatable/Rigidizable Torus Structure

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.

    2006-01-01

    The present work addresses the development of an experimental and computational procedure for validating finite element models. A torus structure, part of an inflatable/rigidizable Hexapod, is used to demonstrate the approach. Because of fabrication, materials, and geometric uncertainties, a statistical approach combined with optimization is used to modify key model parameters. Static test results are used to update stiffness parameters, and dynamic test results are used to update the mass distribution. Updated parameters are computed using gradient and non-gradient based optimization algorithms. Results show significant improvements in model predictions after parameters are updated. Lessons learned in the areas of test procedures, modeling approaches, and uncertainty quantification are presented.

  10. How to Write a Reproducible Paper

    NASA Astrophysics Data System (ADS)

    Irving, D. B.

    2016-12-01

    The geosciences have undergone a computational revolution in recent decades, to the point where almost all modern research relies heavily on software and code. Despite this profound change in the research methods employed by geoscientists, the reporting of computational results has changed very little in academic journals. This lag has led to something of a reproducibility crisis, whereby it is impossible to replicate and verify most of today's published computational results. While it is tempting to decry the slow response of journals and funding agencies in the face of this crisis, there are very few examples of reproducible research upon which to base new communication standards. In an attempt to address this deficiency, this presentation will describe a procedure for reporting computational results that was employed in a recent Journal of Climate paper. The procedure was developed to be consistent with recommended computational best practices and seeks to minimize the time burden on authors, which has been identified as the most important barrier to publishing code. It should provide a starting point for geoscientists looking to publish reproducible research, and could be adopted by journals as a formal minimum communication standard.

  11. Microscope self-calibration based on micro laser line imaging and soft computing algorithms

    NASA Astrophysics Data System (ADS)

    Apolinar Muñoz Rodríguez, J.

    2018-06-01

    A technique to perform microscope self-calibration via a micro laser line and soft computing algorithms is presented. In this technique, the microscope vision parameters are computed by means of soft computing algorithms based on laser line projection. To implement the self-calibration, a microscope vision system is constructed by means of a CCD camera and a 38 μm laser line. From this arrangement, the microscope vision parameters are represented via Bezier approximation networks, which are constructed from the laser line position. In this procedure, a genetic algorithm determines the microscope vision parameters by means of laser line imaging. Also, the approximation networks compute the three-dimensional vision from the laser line position. Additionally, the soft computing algorithms re-calibrate the vision parameters when the microscope vision system is modified during the vision task. The proposed self-calibration improves the accuracy of traditional microscope calibration, which is accomplished via external references to the microscope system. The capability of the self-calibration based on soft computing algorithms is determined by means of the calibration accuracy and the micro-scale measurement error. This contribution is corroborated by an evaluation based on the accuracy of the traditional microscope calibration.
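
    A rough sketch of one ingredient named above: a Bezier (Bernstein-basis) approximation network mapping laser-line position to a vision parameter, fitted by least squares. The calibration numbers are invented, and the genetic-algorithm stage of the actual procedure is omitted.

        import numpy as np
        from math import comb

        def bernstein_matrix(t, n):
            """Design matrix of degree-n Bernstein polynomials at points t in [0,1]."""
            t = np.asarray(t, float)
            return np.column_stack([comb(n, k) * t**k * (1 - t)**(n - k)
                                    for k in range(n + 1)])

        # Hypothetical calibration data: laser-line pixel position vs. known height (um).
        pix = np.array([120, 180, 260, 330, 410, 470], float)
        z   = np.array([0.0, 8.0, 19.0, 28.0, 41.0, 50.0])

        t = (pix - pix.min()) / (pix.max() - pix.min())    # normalize to [0,1]
        B = bernstein_matrix(t, n=3)                       # cubic Bezier network
        ctrl, *_ = np.linalg.lstsq(B, z, rcond=None)       # control points, least squares

        # Evaluate the fitted Bezier mapping at a new laser-line position.
        t_new = (300 - pix.min()) / (pix.max() - pix.min())
        print(bernstein_matrix([t_new], 3) @ ctrl)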

  12. Prediction of resource volumes at untested locations using simple local prediction models

    USGS Publications Warehouse

    Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.

    2006-01-01

    This paper shows how local spatial nonparametric prediction models can be applied to estimate volumes of recoverable gas resources at individual undrilled sites, at multiple sites on a regional scale, and to compute confidence bounds for regional volumes based on the distribution of those estimates. An approach that combines cross-validation, the jackknife, and bootstrap procedures is used to accomplish this task. Simulation experiments show that cross-validation can be applied beneficially to select an appropriate prediction model. The cross-validation procedure worked well for a wide range of different states of nature and levels of information. Jackknife procedures are used to compute individual prediction estimation errors at undrilled locations. The jackknife replicates also are used with a bootstrap resampling procedure to compute confidence bounds for the total volume. The method was applied to data (partitioned into a training set and target set) from the Devonian Antrim Shale continuous-type gas play in the Michigan Basin in Otsego County, Michigan. The analysis showed that the model estimate of total recoverable volumes at prediction sites is within 4 percent of the total observed volume. The model predictions also provide frequency distributions of the cell volumes at the production unit scale. Such distributions are the basis for subsequent economic analyses. © Springer Science+Business Media, LLC 2007.
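
    A compact numpy sketch of the jackknife and bootstrap machinery described above, applied to invented cell volumes; it is not the authors' spatial prediction model.

        import numpy as np

        rng = np.random.default_rng(3)
        volumes = rng.lognormal(mean=1.0, sigma=0.8, size=60)   # hypothetical cell volumes

        # Jackknife: recompute the estimator leaving one site out at a time.
        n = len(volumes)
        theta_hat = volumes.mean()
        jack = np.array([np.delete(volumes, i).mean() for i in range(n)])
        jack_se = np.sqrt((n - 1) / n * np.sum((jack - jack.mean())**2))

        # Bootstrap: resample sites with replacement to bound the regional total.
        totals = np.array([rng.choice(volumes, size=n, replace=True).sum()
                           for _ in range(5000)])
        lo, hi = np.percentile(totals, [2.5, 97.5])
        print(f"mean {theta_hat:.2f} +/- {jack_se:.2f}; total 95% CI [{lo:.1f}, {hi:.1f}]")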

  13. The effect of magnification loupes on the performance of preclinical dental students.

    PubMed

    Maggio, Margrit P; Villegas, Hilda; Blatz, Markus B

    2011-01-01

    Optical magnifying devices such as magnification loupes are increasingly used in clinical practice and educational settings. However, scientific evidence to validate their benefits is limited. This study assessed the effect of dental magnification loupes on psychomotor skill acquisition during a preclinical operative dentistry course. The performance of first-year dental students was assessed during an Advanced Simulation Course (AS) using virtual reality-based technology (VRBT) training. The test group consisted of 116 dental students using magnification loupes (+MAG), while students not using them (-MAG, n = 116) served as the control. The following parameters were evaluated: number of successfully passed preparation procedures per course rotation, amount of time per tooth preparation, number of times students needed computer assistance and evaluation, and amount of time spent in the computer assistance and evaluation mode per procedure. Data were collected on each student through VRBT during the preparation procedure and stored on a closed network server computer. Unpaired t tests were used to analyze mean differences between the groups. In addition, student acceptance of magnification loupes was measured and evaluated through survey interpretation. +MAG students completed more preparations, worked faster per procedure, and used the computer-assisted evaluation less frequently and for shorter periods, therefore displaying greater overall performance. The survey revealed a high degree of student acceptance of using magnification. Dental magnification loupes significantly enhanced student performance during preclinical dental education and were considered an effective adjunct by the students who used them.

  14. Incompressible Navier-Stokes Solvers in Primitive Variables and their Applications to Steady and Unsteady Flow Simulations

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin C.; Kwak, Dochan; Rogers, Stuart E.

    2002-01-01

    This paper reviews recent progress made in incompressible Navier-Stokes simulation procedures and their application to problems of engineering interest. Discussions are focused on methods designed for complex geometry applications in three dimensions, and thus are limited to the primitive variable formulation. A summary of efforts in flow solver development is given, followed by numerical studies of a few example problems of current interest. Both steady and unsteady solution algorithms and their salient features are discussed. The solvers discussed here are based on a structured-grid approach using either a finite-difference or a finite-volume framework. As a grand-challenge application of these solvers, an unsteady turbopump flow simulation procedure has been developed which utilizes high performance computing platforms. In the paper, the progress toward the complete simulation capability of the turbopump for a liquid rocket engine is reported. The Space Shuttle Main Engine (SSME) turbopump is used as a test case for evaluation of two parallel computing algorithms that have been implemented in the INS3D code. The relative motion of the grid systems for the rotor-stator interaction was obtained using overset grid techniques. Unsteady computations for the SSME turbopump, which contains 114 zones with 34.5 million grid points, are carried out on SGI Origin 3000 systems at NASA Ames Research Center. The same procedure has been extended to the development of the NASA-DeBakey Ventricular Assist Device (VAD), which is based on an axial blood pump. Computational and clinical analyses of this device are presented.

  15. Numerical procedure to determine geometric view factors for surfaces occluded by cylinders

    NASA Technical Reports Server (NTRS)

    Sawyer, P. L.

    1978-01-01

    A numerical procedure was developed to determine geometric view factors between connected infinite strips occluded by any number of infinite circular cylinders. The procedure requires a two-dimensional cross-sectional model of the configuration of interest. The two-dimensional model consists of a convex polygon enclosing any number of circles. Each side of the polygon represents one strip, and each circle represents a circular cylinder. A description and listing of a computer program based on this procedure are included in this report. The program calculates geometric view factors between individual strips and between individual strips and the collection of occluding cylinders.
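
    A hedged Monte Carlo sketch of the underlying computation: the 2D view-factor kernel for infinite strips, cos(t1)*cos(t2)/(2r), integrated between two strips with a circular-occlusion visibility test. The report's actual procedure is deterministic; this sampling version only illustrates the geometry.

        import numpy as np

        rng = np.random.default_rng(4)

        def seg_hits_circle(p, q, c, r):
            """True if segment p-q passes within radius r of center c."""
            d = q - p
            t = np.clip(np.dot(c - p, d) / np.dot(d, d), 0.0, 1.0)
            return np.linalg.norm(p + t * d - c) < r

        def view_factor(a0, a1, na, b0, b1, nb, circles, n=100_000):
            """Monte Carlo F(strip A -> strip B) with circular occluders;
            2D kernel for infinite strips: dF = cos(t1)*cos(t2)/(2*r) dl2."""
            Lb = np.linalg.norm(b1 - b0)
            acc = 0.0
            for _ in range(n):
                p = a0 + rng.random() * (a1 - a0)
                q = b0 + rng.random() * (b1 - b0)
                v = q - p; r = np.linalg.norm(v)
                cos1 = np.dot(v, na) / r
                cos2 = -np.dot(v, nb) / r
                if cos1 <= 0 or cos2 <= 0:
                    continue
                if any(seg_hits_circle(p, q, c, rad) for c, rad in circles):
                    continue
                acc += cos1 * cos2 / (2.0 * r)
            return acc * Lb / n

        # Two parallel unit strips one unit apart, occluded by a centered cylinder.
        A0, A1, nA = np.array([0., 0.]), np.array([1., 0.]), np.array([0., 1.])
        B0, B1, nB = np.array([0., 1.]), np.array([1., 1.]), np.array([0., -1.])
        print(view_factor(A0, A1, nA, B0, B1, nB,
                          circles=[(np.array([0.5, 0.5]), 0.1)]))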

  16. Optimum structural sizing of conventional cantilever and joined wing configurations using equivalent beam models

    NASA Technical Reports Server (NTRS)

    Hajela, P.; Chen, J. L.

    1986-01-01

    The present paper describes an approach for the optimum sizing of single and joined wing structures that is based on representing the built-up finite element model of the structure by an equivalent beam model. The low order beam model is computationally more efficient in an environment that requires repetitive analysis of several trial designs. The design procedure is implemented in a computer program that requires geometry and loading data typically available from an aerodynamic synthesis program, to create the finite element model of the lifting surface and an equivalent beam model. A fully stressed design procedure is used to obtain rapid estimates of the optimum structural weight for the beam model for a given geometry, and a qualitative description of the material distribution over the wing structure. The synthesis procedure is demonstrated for representative single wing and joined wing structures.
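
    A minimal sketch of a fully stressed design iteration like the one described: resize each member area by the ratio of working to allowable stress until the sizes stop changing. The force-redistribution function below is a hypothetical stand-in for the equivalent beam analysis.

        import numpy as np

        sigma_allow = 250.0e6                       # allowable stress, Pa
        F0 = np.array([1.2e5, 0.8e5, 1.5e5])        # baseline member forces, N

        def member_forces(areas):
            # Hypothetical mild redistribution: stiffer members attract more load.
            w = areas / areas.sum()
            return F0 * (0.8 + 0.6 * w * len(areas))

        areas = np.full(3, 1.0e-3)                  # initial areas, m^2
        for it in range(50):
            stress = member_forces(areas) / areas
            new = areas * np.abs(stress) / sigma_allow   # FSD resizing rule
            if np.allclose(new, areas, rtol=1e-6):
                break
            areas = new
        print("converged areas:", areas, "after", it + 1, "iterations")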

  17. Efficient Simulation Budget Allocation for Selecting an Optimal Subset

    NASA Technical Reports Server (NTRS)

    Chen, Chun-Hung; He, Donghai; Fu, Michael; Lee, Loo Hay

    2008-01-01

    We consider a class of the subset selection problem in ranking and selection. The objective is to identify the top m out of k designs based on simulated output. Traditional procedures are conservative and inefficient. Using the optimal computing budget allocation framework, we formulate the problem as that of maximizing the probability of correctly selecting all of the top-m designs subject to a constraint on the total number of samples available. For an approximation of this correct selection probability, we derive an asymptotically optimal allocation and propose an easy-to-implement heuristic sequential allocation procedure. Numerical experiments indicate that the resulting allocations are superior to other methods in the literature that we tested, and the relative efficiency increases for larger problems. In addition, preliminary numerical results indicate that the proposed new procedure has the potential to enhance computational efficiency for simulation optimization.
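
    A sketch of an OCBA-style allocation for the top-m problem, following the asymptotic rule common in the OCBA literature: allocate in proportion to (std_i/delta_i)^2, with delta_i measured from a border between the m-th and (m+1)-th best sample means. The exact procedure in the paper may differ.

        import numpy as np

        def ocba_m(means, stds, m, budget):
            """Heuristic budget allocation for selecting the top-m designs."""
            order = np.argsort(means)[::-1]                    # larger mean = better
            c = 0.5 * (means[order[m - 1]] + means[order[m]])  # border constant
            delta = np.where(np.abs(means - c) < 1e-12, 1e-12, means - c)
            ratio = (stds / delta) ** 2
            alloc = budget * ratio / ratio.sum()
            # Round to integers (rounding may shift the total slightly).
            return np.maximum(1, np.round(alloc)).astype(int)

        means = np.array([5.1, 4.9, 4.2, 3.0, 2.8, 1.5])
        stds  = np.array([1.0, 1.2, 0.9, 1.1, 0.8, 1.0])
        print(ocba_m(means, stds, m=2, budget=1000))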

  18. Computer-aided boundary delineation of agricultural lands

    NASA Technical Reports Server (NTRS)

    Cheng, Thomas D.; Angelici, Gary L.; Slye, Robert E.; Ma, Matt

    1989-01-01

    The National Agricultural Statistics Service of the United States Department of Agriculture (USDA) presently uses labor-intensive aerial photographic interpretation techniques to divide large geographical areas into manageable-sized units for estimating domestic crop and livestock production. Prototype software, the computer-aided stratification (CAS) system, was developed to automate the procedure, and currently runs on a Sun-based image processing system. With a background display of LANDSAT Thematic Mapper and United States Geological Survey Digital Line Graph data, the operator uses a cursor to delineate agricultural areas, called sampling units, which are assigned to strata of land-use and land-cover types. The resultant stratified sampling units are used as input into subsequent USDA sampling procedures. As a test, three counties in Missouri were chosen for application of the CAS procedures. Subsequent analysis indicates that CAS was five times faster in creating sampling units than the manual techniques were.

  19. 77 FR 76406 - Criteria and Procedures for Proposed Assessment of Civil Penalties; Inflation Adjustment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-28

    ... discretion. MSHA is required to perform mathematical computations based on published cost-of-living data and... altering the budgetary impact of entitlements or the rights of entitlement recipients, or raising novel...

  20. Availability Simulation of AGT Systems

    DOT National Transportation Integrated Search

    1975-02-01

    The report discusses the analytical and simulation procedures that were used to evaluate the effects of failure in a complex dual mode transportation system based on a worst-case steady-state condition. The computed results are an availability figure ...

  1. Constructed-Response Matching to Sample and Spelling Instruction.

    ERIC Educational Resources Information Center

    Dube, William V.; And Others

    1991-01-01

    This paper describes a computer-based spelling program grounded in programed instructional techniques and using constructed-response matching-to-sample procedures. Following use of the program, two mentally retarded men successfully spelled previously misspelled words. (JDD)

  2. Computer-assisted propofol administration.

    PubMed

    O'Connor, J P A; O'Moráin, C A; Vargo, J J

    2010-01-01

    The use of propofol for sedation in endoscopy may allow for better quality of sedation, quicker recovery and greater throughput in endoscopy units. The cost-effectiveness and utility of propofol sedation for endoscopic procedures is contingent on the personnel and resources required to carry out the procedure. Computer-based platforms are based on the patient's response to stimulation and physiologic parameters. They offer an appealing means of delivering safe and effective doses of propofol. One such means is the bispectral index, where continuous EEG recordings are used to assess the degree of sedation. Another is the closed-loop target-controlled system, where a set of physical parameters, such as muscle relaxation and auditory-evoked potential, determines a level of medication appropriate to achieve sedation. Patient-controlled platforms may also be used. These electronic adjuncts may help endoscopists who wish to adopt propofol sedation to change current practices with greater confidence. Copyright 2010 S. Karger AG, Basel.

  3. Simple proof of equivalence between adiabatic quantum computation and the circuit model.

    PubMed

    Mizel, Ari; Lidar, Daniel A; Mitchell, Morgan

    2007-08-17

    We prove the equivalence between adiabatic quantum computation and quantum computation in the circuit model. An explicit adiabatic computation procedure is given that generates a ground state from which the answer can be extracted. The amount of time needed is evaluated by computing the gap. We show that the procedure is computationally efficient.

  4. Simple Procedure to Compute the Inductance of a Toroidal Ferrite Core from the Linear to the Saturation Regions

    PubMed Central

    Salas, Rosa Ana; Pleite, Jorge

    2013-01-01

    We propose a specific procedure to compute the inductance of a toroidal ferrite core as a function of the excitation current. The study includes the linear, intermediate and saturation regions. The procedure combines the use of Finite Element Analysis in 2D and experimental measurements. Through the two dimensional (2D) procedure we are able to achieve convergence, a reduction of computational cost and equivalent results to those computed by three dimensional (3D) simulations. The validation is carried out by comparing 2D, 3D and experimental results. PMID:28809283
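
    Once the FEA (or measurement) yields a flux-linkage curve psi(i), the inductance in each region follows directly. A sketch with invented numbers: the apparent (secant) inductance is psi/i, and the incremental inductance d(psi)/di drops as the core saturates.

        import numpy as np

        # Hypothetical flux-linkage curve psi(i) from 2D FEA of a saturating core.
        i   = np.array([0.01, 0.05, 0.1, 0.3, 0.6, 1.0, 2.0, 4.0])            # A
        psi = np.array([0.40, 1.9, 3.6, 8.0, 11.0, 12.5, 13.4, 13.9]) * 1e-3  # Wb-turns

        L_apparent    = psi / i                 # secant inductance N*Phi/i
        L_incremental = np.gradient(psi, i)     # slope d(psi)/di, falls in saturation

        for ik, La, Li in zip(i, L_apparent, L_incremental):
            print(f"i={ik:5.2f} A   L_app={La*1e3:6.2f} mH   L_inc={Li*1e3:6.2f} mH")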

  5. Rotor dynamic simulation and system identification methods for application to vacuum whirl data

    NASA Technical Reports Server (NTRS)

    Berman, A.; Giansante, N.; Flannelly, W. G.

    1980-01-01

    Methods of using rotor vacuum whirl data to improve the ability to model helicopter rotors were developed. The work consisted of the formulation of the equations of motion of elastic blades on a hub using a Galerkin method; the development of a general computer program for simulation of these equations; the study and implementation of a procedure for determining physical parameters based on measured data; and the application of a method for computing the normal modes and natural frequencies based on test data.

  6. A high temperature fatigue life prediction computer code based on the total strain version of StrainRange Partitioning (SRP)

    NASA Technical Reports Server (NTRS)

    Mcgaw, Michael A.; Saltsman, James F.

    1993-01-01

    A recently developed high-temperature fatigue life prediction computer code is presented and an example of its usage given. The code discussed is based on the Total Strain version of Strainrange Partitioning (TS-SRP). Included in this code are procedures for characterizing the creep-fatigue durability behavior of an alloy according to TS-SRP guidelines and predicting cyclic life for complex cycle types for both isothermal and thermomechanical conditions. A reasonably extensive materials properties database is included with the code.
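
    As a generic illustration of the total-strain style of life prediction (not the code's actual TS-SRP correlations; the constants are invented), one solves a monotone strain-life relation for cyclic life:

        import numpy as np

        # Generic total strain-life relation (constants invented for illustration):
        #   delta_eps(N) = B*N**b + C*N**c   (elastic + inelastic parts)
        B, b = 0.010, -0.09
        C, c = 0.380, -0.60

        def strainrange(N):
            return B * N**b + C * N**c

        def life(delta_eps, lo=1.0, hi=1e9):
            """Bisection (in log space) for cyclic life N such that
            strainrange(N) = delta_eps; strainrange decreases with N."""
            for _ in range(200):
                mid = np.sqrt(lo * hi)
                if strainrange(mid) > delta_eps:
                    lo = mid
                else:
                    hi = mid
            return mid

        print(f"predicted life at 1% total strainrange: {life(0.01):.0f} cycles")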

  7. 48 CFR 227.7203-11 - Contractor procedures and records.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Rights in Computer Software and Computer Software Documentation 227.7203-11 Contractor procedures and records. (a) The clause at 252.227-7014, Rights in Noncommercial Computer Software and Noncommercial Computer Software Documentation, requires a contractor, and its subcontractors or suppliers that will...

  8. 48 CFR 227.7203-11 - Contractor procedures and records.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Rights in Computer Software and Computer Software Documentation 227.7203-11 Contractor procedures and records. (a) The clause at 252.227-7014, Rights in Noncommercial Computer Software and Noncommercial Computer Software Documentation, requires a contractor, and its subcontractors or suppliers that will...

  9. 48 CFR 227.7203-11 - Contractor procedures and records.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Rights in Computer Software and Computer Software Documentation 227.7203-11 Contractor procedures and records. (a) The clause at 252.227-7014, Rights in Noncommercial Computer Software and Noncommercial Computer Software Documentation, requires a contractor, and its subcontractors or suppliers that will...

  10. 48 CFR 227.7203-11 - Contractor procedures and records.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Rights in Computer Software and Computer Software Documentation 227.7203-11 Contractor procedures and records. (a) The clause at 252.227-7014, Rights in Noncommercial Computer Software and Noncommercial Computer Software Documentation, requires a contractor, and its subcontractors or suppliers that will...

  11. 48 CFR 227.7203-11 - Contractor procedures and records.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Rights in Computer Software and Computer Software Documentation 227.7203-11 Contractor procedures and records. (a) The clause at 252.227-7014, Rights in Noncommercial Computer Software and Noncommercial Computer Software Documentation, requires a contractor, and its subcontractors or suppliers that will...

  12. Effects of Instructional Design with Mental Model Analysis on Learning.

    ERIC Educational Resources Information Center

    Hong, Eunsook

    This paper presents a model for systematic instructional design that includes mental model analysis together with the procedures used in developing computer-based instructional materials in the area of statistical hypothesis testing. The instructional design model is based on the premise that the objective for learning is to achieve expert-like…

  13. Collaborative Practice of Science Construction in a Computer-Based Multimedia Environment.

    ERIC Educational Resources Information Center

    Kumpulainen, Kristiina; Mutanen, Mika

    1998-01-01

    Examines the ways in which the collaborative use of a multimedia-based CD-ROM encyclopedia in a sixth-grade Finnish classroom fosters science learning. Results show that students' activities during task-processing were highly procedural and product-oriented. Students had inefficient skills in accessing and retrieving information from the…

  14. The Abstraction-First Approach to Data Abstraction and Algorithms.

    ERIC Educational Resources Information Center

    Machanick, Philip

    1998-01-01

    Based on a computer-science course, this article outlines an alternative ordering of programming concepts that aims to develop a reuse habit before other styles of programming are developed. Although the discussion is based on transition from Modula-2 to C++, the issues raised apply to transition from any procedural to any object-oriented…

  15. Automating NEURON Simulation Deployment in Cloud Resources.

    PubMed

    Stockton, David B; Santamaria, Fidel

    2017-01-01

    Simulations in neuroscience are performed on local servers or High Performance Computing (HPC) facilities. Recently, cloud computing has emerged as a potential computational platform for neuroscience simulation. In this paper we compare and contrast HPC and cloud resources for scientific computation, then report how we deployed NEURON, a widely used simulator of neuronal activity, in three clouds: Chameleon Cloud, a hybrid private academic cloud for cloud technology research based on the OpenStack software; Rackspace, a public commercial cloud, also based on OpenStack; and Amazon Elastic Cloud Computing, based on Amazon's proprietary software. We describe the manual procedures and how to automate cloud operations. We describe extending our simulation automation software called NeuroManager (Stockton and Santamaria, Frontiers in Neuroinformatics, 2015), so that the user is capable of recruiting private cloud, public cloud, HPC, and local servers simultaneously with a simple common interface. We conclude by performing several studies in which we examine speedup, efficiency, total session time, and cost for sets of simulations of a published NEURON model.

  16. Automating NEURON Simulation Deployment in Cloud Resources

    PubMed Central

    Santamaria, Fidel

    2016-01-01

    Simulations in neuroscience are performed on local servers or High Performance Computing (HPC) facilities. Recently, cloud computing has emerged as a potential computational platform for neuroscience simulation. In this paper we compare and contrast HPC and cloud resources for scientific computation, then report how we deployed NEURON, a widely used simulator of neuronal activity, in three clouds: Chameleon Cloud, a hybrid private academic cloud for cloud technology research based on the OpenStack software; Rackspace, a public commercial cloud, also based on OpenStack; and Amazon Elastic Cloud Computing, based on Amazon’s proprietary software. We describe the manual procedures and how to automate cloud operations. We describe extending our simulation automation software called NeuroManager (Stockton and Santamaria, Frontiers in Neuroinformatics, 2015), so that the user is capable of recruiting private cloud, public cloud, HPC, and local servers simultaneously with a simple common interface. We conclude by performing several studies in which we examine speedup, efficiency, total session time, and cost for sets of simulations of a published NEURON model. PMID:27655341
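
    The "simple common interface" for recruiting heterogeneous resources can be pictured as a single dispatch abstraction. The Python fragment below is a hypothetical sketch of that idea, with invented class and method names; it is not NeuroManager's actual API.

        from dataclasses import dataclass

        @dataclass
        class Resource:
            name: str
            kind: str   # 'local' | 'hpc' | 'private_cloud' | 'public_cloud'

            def submit(self, sim):
                # A real backend would stage files and launch NEURON here
                # (via SSH, a batch scheduler, or a cloud API); we only log.
                print(f"submitting {sim} to {self.kind} resource {self.name}")

        pool = [Resource("workstation", "local"),
                Resource("chameleon-node", "private_cloud"),
                Resource("rackspace-vm", "public_cloud")]

        for i, sim in enumerate(["sim-A", "sim-B", "sim-C"]):
            pool[i % len(pool)].submit(sim)   # naive round-robin dispatch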

  17. A survey of GPU-based medical image computing techniques

    PubMed Central

    Shi, Lin; Liu, Wen; Zhang, Heye; Xie, Yongming

    2012-01-01

    Medical imaging currently plays a crucial role throughout clinical practice, from medical research to diagnostics and treatment planning. However, medical imaging procedures are often computationally demanding due to the large three-dimensional (3D) medical datasets that must be processed in practical clinical applications. With the rapidly improving performance of graphics processors, better programming support, and an excellent price-to-performance ratio, the graphics processing unit (GPU) has emerged as a competitive parallel computing platform for computationally expensive and demanding tasks in a wide range of medical imaging applications. The major purpose of this survey is to provide a comprehensive reference source for newcomers and researchers involved in GPU-based medical image processing. Within this survey, the continuous advancement of GPU computing is reviewed and the existing traditional applications in three areas of medical image processing, namely segmentation, registration, and visualization, are surveyed. The potential advantages and associated challenges of current GPU-based medical imaging are also discussed to inspire future applications in medicine. PMID:23256080

  18. Resource quality of a symmetry-protected topologically ordered phase for quantum computation.

    PubMed

    Miller, Jacob; Miyake, Akimasa

    2015-03-27

    We investigate entanglement naturally present in the 1D topologically ordered phase protected with the on-site symmetry group of an octahedron as a potential resource for teleportation-based quantum computation. We show that, as long as certain characteristic lengths are finite, all its ground states have the capability to implement any unit-fidelity one-qubit gate operation asymptotically as a key computational building block. This feature is intrinsic to the entire phase, in that perfect gate fidelity coincides with perfect string order parameters under a state-insensitive renormalization procedure. Our approach may pave the way toward a novel program to classify quantum many-body systems based on their operational use for quantum information processing.

  19. Resource Quality of a Symmetry-Protected Topologically Ordered Phase for Quantum Computation

    NASA Astrophysics Data System (ADS)

    Miller, Jacob; Miyake, Akimasa

    2015-03-01

    We investigate entanglement naturally present in the 1D topologically ordered phase protected with the on-site symmetry group of an octahedron as a potential resource for teleportation-based quantum computation. We show that, as long as certain characteristic lengths are finite, all its ground states have the capability to implement any unit-fidelity one-qubit gate operation asymptotically as a key computational building block. This feature is intrinsic to the entire phase, in that perfect gate fidelity coincides with perfect string order parameters under a state-insensitive renormalization procedure. Our approach may pave the way toward a novel program to classify quantum many-body systems based on their operational use for quantum information processing.

  20. Need for evaluative methodologies in land use, regional resource and waste management planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Croke, E. J.

    The transfer of planning methodology from the research community to the practitioner very frequently takes the form of analytical and evaluative techniques and procedures. In the end, these become operational in the form of data acquisition, management and display systems, computational schemes that are codified in the form of manuals and handbooks, and computer simulation models. The complexity of the socioeconomic and physical processes that govern environmental resource and waste management have reinforced the need for computer assisted, scientifically sophisticated planning models that are fully operational, dependent on an attainable data base and accessible in terms of the resources normally available to practitioners of regional resource management, waste management, and land use planning. A variety of models and procedures that attempt to meet one or more of the needs of these practitioners are discussed.

  1. Time-Accurate Solutions of Incompressible Navier-Stokes Equations for Potential Turbopump Applications

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Kwak, Dochan

    2001-01-01

    Two numerical procedures, one based on an artificial compressibility method and the other on a pressure projection method, are outlined for obtaining time-accurate solutions of the incompressible Navier-Stokes equations. The performance of the two methods is compared by obtaining unsteady solutions for the evolution of twin vortices behind a flat plate. Calculated results are compared with experimental and other numerical results. For an unsteady flow which requires a small physical time step, the pressure projection method was found to be computationally efficient since it does not require any subiteration procedure. It was observed that the artificial compressibility method requires a fast convergence scheme at each physical time step in order to satisfy the incompressibility condition. This was obtained by using a GMRES-ILU(0) solver in our computations. When a line-relaxation scheme was used, the time accuracy was degraded and time-accurate computations became very expensive.
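
    As a minimal illustration of the pressure-projection idea (not the authors' solver), the sketch below projects an intermediate 2-D velocity field onto its divergence-free part by solving the pressure Poisson equation with FFTs, assuming a periodic box, unit density, and unit time step.

        import numpy as np

        n, L = 64, 2 * np.pi
        x = np.linspace(0, L, n, endpoint=False)
        X, Y = np.meshgrid(x, x, indexing="ij")
        u = np.cos(X) * np.sin(Y) + 0.1 * np.sin(X)   # intermediate velocity u*
        v = -np.sin(X) * np.cos(Y)

        k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)    # angular wavenumbers
        KX, KY = np.meshgrid(k, k, indexing="ij")
        K2 = KX**2 + KY**2
        K2[0, 0] = 1.0                                # avoid dividing the mean mode by zero

        # Solve the Poisson equation  lap(p) = div(u*)  in Fourier space
        div_hat = 1j * KX * np.fft.fft2(u) + 1j * KY * np.fft.fft2(v)
        p_hat = -div_hat / K2
        p_hat[0, 0] = 0.0

        # Correct the velocity: u = u* - grad(p); the result is divergence-free
        u_new = np.real(np.fft.ifft2(np.fft.fft2(u) - 1j * KX * p_hat))
        v_new = np.real(np.fft.ifft2(np.fft.fft2(v) - 1j * KY * p_hat))
        div = np.fft.ifft2(1j * KX * np.fft.fft2(u_new) + 1j * KY * np.fft.fft2(v_new))
        print(np.abs(div).max())                      # ~1e-15, machine zero

    An artificial compressibility method would instead append a pseudo-time pressure term and iterate each physical time step to convergence, which is where the subiteration cost discussed above arises.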

  2. Development of an automated ultrasonic testing system

    NASA Astrophysics Data System (ADS)

    Shuxiang, Jiao; Wong, Brian Stephen

    2005-04-01

    Non-destructive testing is necessary in areas where defects in structures emerge over time due to wear and tear and where structural integrity must be maintained. However, manual testing has many limitations: high training cost, a long training procedure, and, worse, inconsistent test results. A prime objective of this project is to develop an automatic non-destructive testing system for a shaft of the wheel axle of a railway carriage. Various methods, such as neural networks, pattern recognition methods, and knowledge-based systems, are used for the artificial intelligence problem. In this paper, a statistical pattern recognition approach, the classification tree, is applied. Before feature selection, a thorough study of the ultrasonic signals produced was carried out. Based on the analysis of the ultrasonic signals, three signal processing methods were developed to enhance them: cross-correlation, zero-phase filtering, and averaging. The target of this step is to reduce the noise and make the signal character more distinguishable. Four features are selected: (1) the autoregressive model coefficients, (2) standard deviation, (3) Pearson correlation, and (4) dispersion uniformity degree. A classification tree is then created and applied to recognize the peak positions and amplitudes. A search for local maxima is carried out before feature computation; this reduces much computation time in real-time testing. Based on this algorithm, a software package called SOFRA was developed to recognize the peaks, calibrate automatically, and test a simulated shaft automatically. The automatic calibration procedure and the automatic shaft testing procedure are developed.
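
    SOFRA itself is not publicly available; the sketch below only mimics the pipeline shape the abstract describes, i.e., zero-phase filtering followed by feature extraction and a classification tree, on synthetic traces. Filter settings, features, and data are invented.

        import numpy as np
        from scipy.signal import butter, filtfilt
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(0)

        def make_trace(has_echo):
            t = np.linspace(0, 1, 500)
            sig = 0.2 * rng.standard_normal(t.size)
            if has_echo:
                sig += np.exp(-((t - 0.6) / 0.02) ** 2)   # synthetic defect echo
            return sig

        b, a = butter(4, 0.2)                              # low-pass design

        def features(sig):
            clean = filtfilt(b, a, sig)                    # zero-phase filtering
            return [clean.std(), np.abs(clean).max(), float(np.argmax(np.abs(clean)))]

        X = [features(make_trace(i % 2 == 0)) for i in range(200)]
        y = [i % 2 == 0 for i in range(200)]
        clf = DecisionTreeClassifier(max_depth=3).fit(X[:150], y[:150])
        print("holdout accuracy:", clf.score(X[150:], y[150:]))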

  3. Needs analysis of a flexible computerized management infrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Usman, S.; Hajek, B. K.; Ali, S. F.

    2006-07-01

    The United States' Energy Policy Act of 2005 is expected to facilitate construction of new commercial nuclear power plants. Meanwhile, current plants are in the process of obtaining licenses for extended operation beyond their predetermined design life. In this beneficial yet challenging situation, it seems desirable to develop a strategic plan for a smooth and seamless transition from paper-based procedure systems to computer-based procedure systems for improved performance and safety of the existing nuclear power plants. Many utilities already maintain procedures using word processing software, but it is common to print paper copies for daily use. At this time it is highly desirable to better understand the collective as well as individual document management needs of a commercial nuclear power plant as it migrates to a computer-based system. As a contributory step toward initiating a strategic plan, this paper offers a comprehensive questionnaire that is suitable for conducting a survey to determine the related needs of the utilities. The questionnaire covers three major areas: formatting and user-friendly features; technical and environmental considerations; and safety, system integrity, and regulatory considerations. A plan to conduct the proposed survey is also outlined in the future work section of this paper. (authors)

  4. Investigation of Low-Reynolds-Number Rocket Nozzle Design Using PNS-Based Optimization Procedure

    NASA Technical Reports Server (NTRS)

    Hussaini, M. Moin; Korte, John J.

    1996-01-01

    An optimization approach to rocket nozzle design, based on computational fluid dynamics (CFD) methodology, is investigated for low-Reynolds-number cases. This study is undertaken to determine the benefits of this approach over those of classical design processes such as Rao's method. A CFD-based optimization procedure, using the parabolized Navier-Stokes (PNS) equations, is used to design conical and contoured axisymmetric nozzles. The advantage of this procedure is that it accounts for viscosity during the design process; other processes make an approximated boundary-layer correction after an inviscid design is created. Results showed significant improvement in the nozzle thrust coefficient over that of the baseline case; however, the unusual nozzle design necessitates further investigation of the accuracy of the PNS equations for modeling expanding flows with thick laminar boundary layers.

  5. Using Computer Simulations for Promoting Model-based Reasoning. Epistemological and Educational Dimensions

    NASA Astrophysics Data System (ADS)

    Develaki, Maria

    2017-11-01

    Scientific reasoning is particularly pertinent to science education since it is closely related to the content and methodologies of science and contributes to scientific literacy. Much of the research in science education investigates the appropriate framework and teaching methods and tools needed to promote students' ability to reason and evaluate in a scientific way. This paper aims (a) to contribute to an extended understanding of the nature and pedagogical importance of model-based reasoning and (b) to exemplify how using computer simulations can support students' model-based reasoning. We provide first a background for both scientific reasoning and computer simulations, based on the relevant philosophical views and the related educational discussion. This background suggests that the model-based framework provides an epistemologically valid and pedagogically appropriate basis for teaching scientific reasoning and for helping students develop sounder reasoning and decision-taking abilities and explains how using computer simulations can foster these abilities. We then provide some examples illustrating the use of computer simulations to support model-based reasoning and evaluation activities in the classroom. The examples reflect the procedure and criteria for evaluating models in science and demonstrate the educational advantages of their application in classroom reasoning activities.

  6. Advanced Methodologies for NASA Science Missions

    NASA Astrophysics Data System (ADS)

    Hurlburt, N. E.; Feigelson, E.; Mentzel, C.

    2017-12-01

    Most of NASA's commitment to computational space science involves the organization and processing of Big Data from space-based satellites, and the calculation of advanced physical models based on these datasets. But considerable thought is also needed about which computations are required. The science questions addressed by space data are so diverse and complex that traditional analysis procedures are often inadequate. The knowledge and skills of the statistician, applied mathematician, and algorithmic computer scientist must be incorporated into programs that currently emphasize engineering and physical science. NASA's culture and administrative mechanisms fully recognize that major advances in space science are driven by improvements in instrumentation. But it is less well recognized that new instruments and science questions give rise to new challenges in the treatment of satellite data after they are telemetered to the ground. These issues can be divided into two stages: data reduction through software pipelines developed within NASA mission centers, and science analysis performed by hundreds of space scientists dispersed through NASA, U.S. universities, and abroad. Both stages benefit from the latest statistical and computational methods; in some cases, the science result is completely inaccessible using traditional procedures. This paper will review the current state of NASA and present example applications using modern methodologies.

  7. Southwest electronic one-stop shopping, motor carrier test report

    DOT National Transportation Integrated Search

    1997-12-22

    The Electronic One-Stop System (EOSS) used in this credential test was designed to replace current normal credentialling procedures with a personal computer-based electronic method that allows users to prepare, apply for, and obtain certain types of ...

  8. Southwest electronic one-stop shopping, state agency test report

    DOT National Transportation Integrated Search

    1997-12-22

    The Electronic One-Stop System (EOSS) used in this credential test was designed to replace current normal credentialling procedures with a personal computer-based electronic method that allows users to prepare, apply for, and obtain certain types of ...

  9. Availability Analysis of Dual Mode Systems

    DOT National Transportation Integrated Search

    1974-04-01

    The analytical procedures presented define a method of evaluating the effects of failures in a complex dual-mode system based on a worst case steady-state analysis. The computed result is an availability figure of merit and not an absolute prediction...

  10. Reengineering Education at Anderson Consulting.

    ERIC Educational Resources Information Center

    Nowakowski, Alan

    1994-01-01

    Describes new educational approaches and classroom management procedures that are changing the professional education program at a business consulting firm. Highlights include a needs assessment study; skills development; specialists; adapting to change; computer-based self-study training programs; curriculum reassessment; and curriculum…

  11. Error Estimates of the Ares I Computed Turbulent Ascent Longitudinal Aerodynamic Analysis

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.; Ghaffari, Farhad

    2012-01-01

    Numerical predictions of the longitudinal aerodynamic characteristics for the Ares I class of vehicles, along with the associated error estimates derived from an iterative convergence grid refinement, are presented. Computational results are based on an unstructured grid, Reynolds-averaged Navier-Stokes analysis. The validity of the approach to compute the associated error estimates, derived from a base grid to an extrapolated infinite-size grid, was first demonstrated on a sub-scaled wind tunnel model at representative ascent flow conditions for which experimental data existed. Such analysis at the transonic flow conditions revealed a maximum deviation of about 23% between the computed longitudinal aerodynamic coefficients with the base grid and the measured data across the entire range of roll angles. This maximum deviation from the wind tunnel data was associated with the computed normal force coefficient at the transonic flow condition and was reduced to approximately 16% based on the infinite-size grid. However, all the computed aerodynamic coefficients with the base grid at the supersonic flow conditions showed a maximum deviation of only about 8%, with that level improving to approximately 5% for the infinite-size grid. The results and the error estimates based on the established procedure are also presented for the flight flow conditions.
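
    Extrapolating from a base grid toward an "infinite-size" grid is commonly done with Richardson extrapolation; the sketch below shows that arithmetic with invented coefficient values (the study's actual grids, observed orders, and data are not reproduced here).

        def richardson(f_coarse, f_fine, r, p):
            """Extrapolated value for refinement ratio r and observed order p."""
            return f_fine + (f_fine - f_coarse) / (r**p - 1.0)

        cn_coarse, cn_fine = 0.492, 0.507          # normal-force coefficient (invented)
        cn_inf = richardson(cn_coarse, cn_fine, r=2.0, p=2.0)
        print(cn_inf, abs(cn_fine - cn_inf))       # extrapolated value, error estimate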

  12. On evaluating clustering procedures for use in classification

    NASA Technical Reports Server (NTRS)

    Pore, M. D.; Moritz, T. E.; Register, D. T.; Yao, S. S.; Eppler, W. G. (Principal Investigator)

    1979-01-01

    The problem of evaluating clustering algorithms and their respective computer programs for use in a preprocessing step for classification is addressed. In clustering for classification the probability of correct classification is suggested as the ultimate measure of accuracy on training data. A means of implementing this criterion and a measure of cluster purity are discussed. Examples are given. A procedure for cluster labeling that is based on cluster purity and sample size is presented.
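
    A cluster-purity measure of the kind discussed can be computed by scoring each cluster by its majority-class count; the sketch below, with invented labels, is one common formulation and not necessarily the paper's exact definition.

        import numpy as np

        def purity(cluster_ids, class_labels):
            total = 0
            for cid in np.unique(cluster_ids):
                members = class_labels[cluster_ids == cid]
                total += np.bincount(members).max()   # majority-class count in cluster
            return total / len(class_labels)

        clusters = np.array([0, 0, 0, 1, 1, 2, 2, 2])
        labels   = np.array([0, 0, 1, 1, 1, 2, 2, 0])
        print(purity(clusters, labels))               # 0.75 for these labels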

  13. A General Procedure to Assess the Internal Structure of a Noncognitive Measure--The Student360 Insight Program (S360) Time Management Scale. Research Report. ETS RR-11-42

    ERIC Educational Resources Information Center

    Ling, Guangming; Rijmen, Frank

    2011-01-01

    The factorial structure of the Time Management (TM) scale of the Student 360: Insight Program (S360) was evaluated based on a national sample. A general procedure with a variety of methods was introduced and implemented, including the computation of descriptive statistics, exploratory factor analysis (EFA), and confirmatory factor analysis (CFA).…

  14. MATLAB-implemented estimation procedure for model-based assessment of hepatic insulin degradation from standard intravenous glucose tolerance test data.

    PubMed

    Di Nardo, Francesco; Mengoni, Michele; Morettini, Micaela

    2013-05-01

    The present study provides a novel MATLAB-based parameter estimation procedure for individual assessment of the hepatic insulin degradation (HID) process from standard frequently sampled intravenous glucose tolerance test (FSIGTT) data. Direct access to the source code, offered by MATLAB, enabled us to design an optimization procedure based on the alternating use of the Gauss-Newton and Levenberg-Marquardt algorithms, which assures full convergence of the process and containment of computational time. Reliability was tested by direct comparison with the application, in eighteen non-diabetic subjects, of the well-known kinetic analysis software package SAAM II, and by application to different data. Agreement between MATLAB and SAAM II was supported by intraclass correlation coefficients ≥0.73, no significant differences between corresponding mean parameter estimates and predictions of HID rate, and consistent residual analysis. Moreover, the MATLAB optimization procedure resulted in a significant 51% reduction of CV% for the parameter worst estimated by SAAM II and maintained all model-parameter CV% <20%. In conclusion, our MATLAB-based procedure is suggested as a suitable tool for the individual assessment of the HID process. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
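
    The paper's procedure alternates Gauss-Newton and Levenberg-Marquardt steps inside MATLAB; as a hedged stand-in showing the same parameter-fitting pattern, the Python sketch below fits a toy exponential-decay model with SciPy's Levenberg-Marquardt driver. The model and data are invented, not the FSIGTT kinetics.

        import numpy as np
        from scipy.optimize import least_squares

        t = np.linspace(0, 10, 40)
        A_true, k_true = 2.0, 0.4
        y = A_true * np.exp(-k_true * t) \
            + 0.02 * np.random.default_rng(1).standard_normal(t.size)

        def residuals(theta):
            A, k = theta
            return A * np.exp(-k * t) - y

        fit = least_squares(residuals, x0=[1.0, 1.0], method="lm")  # Levenberg-Marquardt
        print(fit.x)   # recovers approximately (2.0, 0.4)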

  15. Multi-loop Integrand Reduction with Computational Algebraic Geometry

    NASA Astrophysics Data System (ADS)

    Badger, Simon; Frellesvig, Hjalte; Zhang, Yang

    2014-06-01

    We discuss recent progress in multi-loop integrand reduction methods. Motivated by the possibility of an automated construction of multi-loop amplitudes via generalized unitarity cuts we describe a procedure to obtain a general parameterisation of any multi-loop integrand in a renormalizable gauge theory. The method relies on computational algebraic geometry techniques such as Gröbner bases and primary decomposition of ideals. We present some results for two and three loop amplitudes obtained with the help of the MACAULAY2 computer algebra system and the Mathematica package BASISDET.
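
    As a toy illustration of the algebraic machinery named here (the actual work used MACAULAY2 and the Mathematica package BASISDET), the sketch below computes a lexicographic Groebner basis for a two-polynomial system with SymPy.

        from sympy import groebner, symbols

        x, y = symbols("x y")
        # Intersection of the unit circle with the line x = y
        G = groebner([x**2 + y**2 - 1, x - y], x, y, order="lex")
        print(G)   # triangular basis; back-substitution solves the system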

  16. Simulation and evaluation of latent heat thermal energy storage

    NASA Technical Reports Server (NTRS)

    Sigmon, T. W.

    1980-01-01

    The relative value of thermal energy storage (TES) for heat pump storage (heating and cooling) was derived as a function of storage temperature, mode of storage (hot-side or cold-side), geographic location, and utility time-of-use rate structure. Computer models used to simulate the performance of a number of TES/heat pump configurations are described. The models are based on existing performance data for heat pump components, available building thermal load computational procedures, and generalized TES subsystem designs. Life cycle costs computed for each site, configuration, and rate structure are discussed.

  17. Development of non-linear finite element computer code

    NASA Technical Reports Server (NTRS)

    Becker, E. B.; Miller, T.

    1985-01-01

    Recent work has shown that the use of separable symmetric functions of the principal stretches can adequately describe the response of certain propellant materials and, further, that a data reduction scheme gives a convenient way of obtaining the values of the functions from experimental data. Based on this representation of the energy, a computational scheme was developed that allows finite element analysis of boundary value problems of arbitrary shape and loading. The computational procedure was implemented in a three-dimensional finite element code, TEXLESP-S, which is documented herein.

  18. Performance of ground attitude determination procedures for HEAO-1

    NASA Technical Reports Server (NTRS)

    Fallon, L., III; Sturch, C. R.

    1978-01-01

    Ground attitude support for HEAO 1, provided at GSFC by the HEAO 1 Attitude Ground Support System (AGSS), is described. Information telemetered from Sun sensors, gyroscopes, star trackers, and an onboard computer is used by the AGSS to compute updates to the onboard attitude reference and gyro calibration parameters. The onboard computer utilizes these updates in providing continuous attitudes (accurate to 0.25 degree) for use in the observatory's attitude control procedures. The relationship between HEAO 1 onboard and ground processing, the procedures used by the AGSS in computing attitude and gyro calibration updates, and the performance of these procedures in the HEAO 1 postlaunch environment are discussed.

  19. Probabilistic liquefaction hazard analysis at liquefied sites of 1956 Dunaharaszti earthquake, in Hungary

    NASA Astrophysics Data System (ADS)

    Győri, Erzsébet; Gráczer, Zoltán; Tóth, László; Bán, Zoltán; Horváth, Tibor

    2017-04-01

    Liquefaction potential evaluations are generally made to assess the hazard from specific scenario earthquakes. These evaluations may estimate the potential in a binary fashion (yes/no), define a factor of safety, or predict the probability of liquefaction given a scenario event. Usually the level of ground shaking is obtained from the results of PSHA. Although it is determined probabilistically, a single level of ground shaking is selected and used within the liquefaction potential evaluation. In contrast, fully probabilistic liquefaction potential assessment methods provide a complete picture of the liquefaction hazard, namely by taking into account the joint probability distribution of PGA and magnitude of earthquake scenarios, both of which are key inputs to the stress-based simplified methods. Kramer and Mayfield (2007) developed a fully probabilistic liquefaction potential evaluation method using a performance-based earthquake engineering (PBEE) framework. The results of the procedure are a direct estimate of the return period of liquefaction and liquefaction hazard curves as a function of depth. The method combines the disaggregation matrices computed for different exceedance frequencies during probabilistic seismic hazard analysis with one of the recent models for the conditional probability of liquefaction. We have developed software for the assessment of performance-based liquefaction triggering on the basis of the Kramer and Mayfield method. Originally the SPT-based probabilistic method of Cetin et al. (2004) was built into the procedure of Kramer and Mayfield to compute the conditional probability; however, there is no professional consensus about its applicability. Therefore we have included not only Cetin's method but also the SPT-based procedure of Idriss and Boulanger (2012) and the CPT-based procedure of Boulanger and Idriss (2014) in our computer program. In 1956, a damaging earthquake of magnitude 5.6 occurred in Dunaharaszti, Hungary. Its epicenter was located about 5 km from the southern boundary of Budapest. The quake caused serious damage in the epicentral area and in the southern districts of the capital. The epicentral area of the earthquake is located along the Danube River. Sand boils were observed in some locations, indicating the occurrence of liquefaction. Because their exact locations were recorded at the time of the earthquake, in situ geotechnical measurements (CPT and SPT) could be performed at two sites (Dunaharaszti and Taksony). The different types of measurements enabled probabilistic liquefaction hazard computations at the two studied sites. We have compared the return periods of liquefaction computed using the different built-in simplified stress-based methods.
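
    The performance-based combination at the heart of the Kramer-Mayfield framework can be written as lambda_liq = sum over (PGA, M) bins of P(liq | PGA, M) times the bin's hazard contribution. The sketch below shows that assembly with entirely invented rates and a stand-in logistic triggering model; it is not the Cetin et al. or Boulanger-Idriss relation.

        import numpy as np

        pga_bins = np.array([0.1, 0.2, 0.3, 0.4])   # PGA in g
        mag_bins = np.array([5.0, 6.0, 7.0])
        # Annual rate contribution of each (PGA, M) bin from PSHA disaggregation (invented)
        rate = np.array([[4e-3, 2e-3, 5e-4],
                         [1e-3, 8e-4, 3e-4],
                         [3e-4, 2e-4, 1e-4],
                         [8e-5, 6e-5, 4e-5]])

        def p_liq(pga, m):
            # Stand-in logistic model for P(liquefaction | PGA, M)
            z = -4.0 + 8.0 * pga + 0.5 * (m - 6.0)
            return 1.0 / (1.0 + np.exp(-z))

        lam = sum(rate[i, j] * p_liq(a, m)
                  for i, a in enumerate(pga_bins)
                  for j, m in enumerate(mag_bins))
        print("annual rate:", lam, "return period [yr]:", 1.0 / lam)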

  20. Development of a Flexible Computerized Management Infrastructure for a Commercial Nuclear Power Plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ali, Syed Firasat; Hajek, Brian K.; Usman, Shoaib

    The report emphasizes a smooth transition from paper-based procedure systems (PBPSs) to computer-based procedure systems (CBPSs) for the existing commercial nuclear power plants in the U.S. The expected advantages of the transition are discussed, including continued safe and efficient operation of the plants under their recently acquired or desired extended licenses. The report proposes a three-stage survey to aid in developing a national strategic plan for the transition from PBPSs to CBPSs. It also includes a comprehensive questionnaire that can be readily used for the first stage of the suggested survey.

  1. Qualitative mechanism models and the rationalization of procedures

    NASA Technical Reports Server (NTRS)

    Farley, Arthur M.

    1989-01-01

    A qualitative, cluster-based approach to the representation of hydraulic systems is described and its potential for generating and explaining procedures is demonstrated. Many ideas are formalized and implemented as part of an interactive, computer-based system. The system allows for designing, displaying, and reasoning about hydraulic systems. The interactive system has an interface consisting of three windows: a design/control window, a cluster window, and a diagnosis/plan window. A qualitative mechanism model for the ORS (Orbital Refueling System) is presented to coordinate with ongoing research on this system being conducted at NASA Ames Research Center.

  2. Central Data Processing System (CDPS) user's manual: Solar heating and cooling program

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The software and data base management system required to assess the performance of solar heating and cooling systems installed at multiple sites are presented. The instrumentation data associated with these systems are collected, processed, and presented in a form that supports continuity of performance evaluation across all applications. The CDPS consists of three major elements: a communication interface computer, a central data processing computer, and a performance evaluation data base. Users of the performance data base are identified, and procedures for operation and guidelines for software maintenance are outlined. The manual also defines the output capabilities of the CDPS in support of external users of the system.

  3. The load shedding advisor: An example of a crisis-response expert system

    NASA Technical Reports Server (NTRS)

    Bollinger, Terry B.; Lightner, Eric; Laverty, John; Ambrose, Edward

    1987-01-01

    A Prolog-based prototype expert system is described that was implemented by the Network Operations Branch of the NASA Goddard Space Flight Center. The purpose of the prototype was to test whether a small, inexpensive computer system could host a load shedding advisor: a system that would monitor major physical environment parameters in a computer facility and then recommend appropriate operator responses whenever a serious condition was detected. The resulting prototype performed well, due significantly to efficiency gains achieved by replacing a purely rule-based design methodology with a hybrid approach that combined procedural, entity-relationship, and rule-based methods.

  4. EEG-Based Brain-Computer Interface for Decoding Motor Imagery Tasks within the Same Hand Using Choi-Williams Time-Frequency Distribution

    PubMed Central

    Alwanni, Hisham; Baslan, Yara; Alnuman, Nasim; Daoud, Mohammad I.

    2017-01-01

    This paper presents an EEG-based brain-computer interface system for classifying eleven motor imagery (MI) tasks within the same hand. The proposed system utilizes the Choi-Williams time-frequency distribution (CWD) to construct a time-frequency representation (TFR) of the EEG signals. The constructed TFR is used to extract five categories of time-frequency features (TFFs). The TFFs are processed using a hierarchical classification model to identify the MI task encapsulated within the EEG signals. To evaluate the performance of the proposed approach, EEG data were recorded for eighteen intact subjects and four amputated subjects while imagining to perform each of the eleven hand MI tasks. Two performance evaluation analyses, namely channel- and TFF-based analyses, are conducted to identify the best subset of EEG channels and the TFFs category, respectively, that enable the highest classification accuracy between the MI tasks. In each evaluation analysis, the hierarchical classification model is trained using two training procedures, namely subject-dependent and subject-independent procedures. These two training procedures quantify the capability of the proposed approach to capture both intra- and inter-personal variations in the EEG signals for different MI tasks within the same hand. The results demonstrate the efficacy of the approach for classifying the MI tasks within the same hand. In particular, the classification accuracies obtained for the intact and amputated subjects are as high as 88.8% and 90.2%, respectively, for the subject-dependent training procedure, and 80.8% and 87.8%, respectively, for the subject-independent training procedure. These results suggest the feasibility of applying the proposed approach to control dexterous prosthetic hands, which can be of great benefit for individuals suffering from hand amputations. PMID:28832513

  5. Ultra-fast computation of electronic spectra for large systems by tight-binding based simplified Tamm-Dancoff approximation (sTDA-xTB)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grimme, Stefan, E-mail: grimme@thch.uni-bonn.de; Bannwarth, Christoph

    2016-08-07

    The computational bottleneck of the extremely fast simplified Tamm-Dancoff approximated (sTDA) time-dependent density functional theory procedure [S. Grimme, J. Chem. Phys. 138, 244104 (2013)] for the computation of electronic spectra for large systems is the determination of the ground state Kohn-Sham orbitals and eigenvalues. This limits such treatments to single structures with a few hundred atoms and hence, e.g., sampling along molecular dynamics trajectories for flexible systems or the calculation of chromophore aggregates is often not possible. The aim of this work is to solve this problem by a specifically designed semi-empirical tight binding (TB) procedure similar to the well-established self-consistent-charge density functional TB scheme. The new special purpose method provides orbitals and orbital energies of hybrid density functional character for a subsequent and basically unmodified sTDA procedure. Compared to many previous semi-empirical excited state methods, an advantage of the ansatz is that a general eigenvalue problem in a non-orthogonal, extended atomic orbital basis is solved and therefore correct occupied/virtual orbital energy splittings as well as Rydberg levels are obtained. A key idea for the success of the new model is that the determination of atomic charges (describing an effective electron-electron interaction) and the one-particle spectrum is decoupled and treated by two differently parametrized Hamiltonians/basis sets. The three-diagonalization-step composite procedure can routinely compute broad range electronic spectra (0-8 eV) within minutes of computation time for systems composed of 500-1000 atoms with an accuracy typical of standard time-dependent density functional theory (0.3-0.5 eV average error). An easily extendable parametrization based on coupled-cluster and density functional computed reference data for the elements H–Zn including transition metals is described. The accuracy of the method termed sTDA-xTB is first benchmarked for vertical excitation energies of open- and closed-shell systems in comparison to other semi-empirical methods and applied to exemplary problems in electronic spectroscopy. As side products of the development, a robust and efficient valence electron TB method for the accurate determination of atomic charges as well as a more accurate calculation scheme of dipole rotatory strengths within the Tamm-Dancoff approximation is proposed.

  6. Non-photorealistic rendering of virtual implant models for computer-assisted fluoroscopy-based surgical procedures

    NASA Astrophysics Data System (ADS)

    Zheng, Guoyan

    2007-03-01

    Surgical navigation systems visualize the positions and orientations of surgical instruments and implants as graphical overlays onto a medical image of the operated anatomy on a computer monitor. The orthopaedic surgical navigation systems could be categorized according to the image modalities that are used for the visualization of surgical action. In the so-called CT-based systems or 'surgeon-defined anatomy' based systems, where a 3D volume or surface representation of the operated anatomy could be constructed from the preoperatively acquired tomographic data or through intraoperatively digitized anatomy landmarks, a photorealistic rendering of the surgical action has been identified to greatly improve usability of these navigation systems. However, this may not hold true when the virtual representation of surgical instruments and implants is superimposed onto 2D projection images in a fluoroscopy-based navigation system due to the so-called image occlusion problem. Image occlusion occurs when the field of view of the fluoroscopic image is occupied by the virtual representation of surgical implants or instruments. In these situations, the surgeon may miss part of the image details, even if transparency and/or wire-frame rendering is used. In this paper, we propose to use non-photorealistic rendering to overcome this difficulty. Laboratory testing results on foamed plastic bones during various computer-assisted fluoroscopy-based surgical procedures including total hip arthroplasty and long bone fracture reduction and osteosynthesis are shown.

  7. Application of transient CFD-procedures for S-shape computation in pump-turbines with and without FSI

    NASA Astrophysics Data System (ADS)

    Casartelli, E.; Mangani, L.; Ryan, O.; Schmid, A.

    2016-11-01

    CFD has been part of the product development process for hydraulic machines for more than three decades. Besides the actual design process, in which the most appropriate geometry for a certain task is iteratively sought, several steady-state simulations and related analyses are performed with the help of CFD. Basic transient CFD analysis is becoming more and more routine for rotor-stator interaction assessment, but in general unsteady CFD is still not standard due to the large computational effort. Especially for FSI simulations, where mesh motion is involved, a considerable amount of computational time is necessary for the mesh handling and deformation as well as for resolving the related unsteady flow field. Therefore this kind of CFD computation is still unusual and mostly performed during trouble-shooting analysis rather than in the standard development process, i.e., in order to understand what went wrong instead of preventing failure or, even better, increasing the available knowledge. In this paper the application of an efficient and particularly robust algorithm for fast computations with moving mesh is presented for the analysis of transient effects encountered during highly dynamic procedures in the operation of a pump-turbine, such as runaway at fixed GV position and load rejection with GV motion imposed as one-way FSI. In both cases the computations extend through the S-shape of the machine in the turbine-brake and reverse-pump domain, showing that such exotic computations can be performed on a more regular basis, even if quite time-consuming. Besides the presentation of the procedure and global results, some highlights of the encountered flow physics are also given.

  8. Air-Gapped Structures as Magnetic Elements for Use in Power Processing Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Ohri, A. K.

    1977-01-01

    Methodical approaches to the design of inductors for use in LC filters and dc-to-dc converters using air gapped magnetic structures are presented. Methods for the analysis and design of full wave rectifier LC filter circuits operating with the inductor current in both the continuous conduction and the discontinuous conduction modes are also described. In the continuous conduction mode, linear circuit analysis techniques are employed, while in the case of the discontinuous mode, the method of analysis requires computer solutions of the piecewise linear differential equations which describe the filter in the time domain. Procedures for designing filter inductors using air gapped cores are presented. The first procedure requires digital computation to yield a design which is optimized in the sense of minimum core volume and minimum number of turns. The second procedure does not yield an optimized design as defined above, but the design can be obtained by hand calculations or with a small calculator. The third procedure is based on the use of specially prepared magnetic core data and provides an easy way to quickly reach a workable design.
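
    A first-order relation that underlies such air-gapped inductor design procedures is L = N^2 mu0 Ae / (g + le/mu_r), which neglects fringing; the sketch below evaluates it for illustrative values only (not the thesis's design data).

        import math

        MU0 = 4e-7 * math.pi   # permeability of free space [H/m]

        def gapped_inductance(N, Ae, g, le, mu_r):
            """N turns, core area Ae [m^2], gap g [m], magnetic path le [m]."""
            return N**2 * MU0 * Ae / (g + le / mu_r)

        # ~0.52 mH for these illustrative numbers
        print(gapped_inductance(N=60, Ae=1.2e-4, g=1.0e-3, le=0.08, mu_r=2000))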

  9. Computationally mapping sequence space to understand evolutionary protein engineering.

    PubMed

    Armstrong, Kathryn A; Tidor, Bruce

    2008-01-01

    Evolutionary protein engineering has been dramatically successful, producing a wide variety of new proteins with altered stability, binding affinity, and enzymatic activity. However, the success of such procedures is often unreliable, and the impact of the choice of protein, engineering goal, and evolutionary procedure is not well understood. We have created a framework for understanding aspects of the protein engineering process by computationally mapping regions of feasible sequence space for three small proteins using structure-based design protocols. We then tested the ability of different evolutionary search strategies to explore these sequence spaces. The results point to a non-intuitive relationship between the error-prone PCR mutation rate and the number of rounds of replication. The evolutionary relationships among feasible sequences reveal hub-like sequences that serve as particularly fruitful starting sequences for evolutionary search. Moreover, genetic recombination procedures were examined, and tradeoffs relating sequence diversity and search efficiency were identified. This framework allows us to consider the impact of protein structure on the allowed sequence space and therefore on the challenges that each protein presents to error-prone PCR and genetic recombination procedures.

  10. Registration of planar bioluminescence to magnetic resonance and x-ray computed tomography images as a platform for the development of bioluminescence tomography reconstruction algorithms.

    PubMed

    Beattie, Bradley J; Klose, Alexander D; Le, Carl H; Longo, Valerie A; Dobrenkov, Konstantine; Vider, Jelena; Koutcher, Jason A; Blasberg, Ronald G

    2009-01-01

    The procedures we propose make possible the mapping of two-dimensional (2-D) bioluminescence image (BLI) data onto a skin surface derived from a three-dimensional (3-D) anatomical modality [magnetic resonance (MR) or computed tomography (CT)] dataset. This mapping allows anatomical information to be incorporated into bioluminescence tomography (BLT) reconstruction procedures and, when applied using sources visible to both optical and anatomical modalities, can be used to evaluate the accuracy of those reconstructions. Our procedures, based on immobilization of the animal and a priori determined fixed projective transforms, should be more robust and accurate than previously described efforts, which rely on a poorly constrained retrospectively determined warping of the 3-D anatomical information. Experiments conducted to measure the accuracy of the proposed registration procedure found it to have a mean error of 0.36 ± 0.23 mm. Additional experiments highlight some of the confounds that are often overlooked in the BLT reconstruction process, and for two of these confounds, simple corrections are proposed.
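
    An a priori determined projective transform of this kind can be pictured as a fixed 3x4 camera matrix applied to homogeneous 3-D surface points, followed by a perspective divide. The matrix and points below are invented for illustration, not the paper's calibration.

        import numpy as np

        # Assumed 3x4 projection matrix (focal lengths and principal point invented)
        P = np.array([[800.0,   0.0, 320.0, 0.0],
                      [  0.0, 800.0, 240.0, 0.0],
                      [  0.0,   0.0,   1.0, 0.0]])

        pts3d = np.array([[ 0.01, 0.02, 0.50],
                          [-0.03, 0.00, 0.55]])        # meters, camera coordinates
        homog = np.hstack([pts3d, np.ones((len(pts3d), 1))])
        proj = (P @ homog.T).T
        pixels = proj[:, :2] / proj[:, 2:3]            # perspective divide
        print(pixels)                                  # 2-D image coordinates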

  11. Three-Dimensional Navier-Stokes Method with Two-Equation Turbulence Models for Efficient Numerical Simulation of Hypersonic Flows

    NASA Technical Reports Server (NTRS)

    Bardina, J. E.

    1994-01-01

    A new computationally efficient 3-D compressible Reynolds-averaged implicit Navier-Stokes method with advanced two-equation turbulence models for high speed flows is presented. All convective terms are modeled using an entropy-satisfying higher-order Total Variation Diminishing (TVD) scheme based on implicit upwind flux-difference split approximations and an arithmetic averaging procedure for primitive variables. This method combines the best features of the data management and computational efficiency of space-marching procedures with the generality and stability of time-dependent Navier-Stokes procedures to solve flows with mixed supersonic and subsonic zones, including streamwise-separated flows. Its robust stability derives from a combination of conservative implicit upwind flux-difference splitting with Roe's Property U to provide the accurate shock-capturing capability that non-conservative schemes do not guarantee, an alternating symmetric Gauss-Seidel 'method of planes' relaxation procedure coupled with a three-dimensional two-factor diagonal-dominant approximate factorization scheme, TVD flux limiters of higher-order flux differences satisfying realizability, and well-posed characteristic-based implicit boundary-point approximations consistent with the local characteristic domain of dependence. The efficiency of the method is greatly increased with Newton-Raphson acceleration, which allows convergence in essentially one forward sweep for supersonic flows. The method is verified by comparison with experiment and other Navier-Stokes methods. Here, results for adiabatic and cooled flat plate flows, compression corner flow, and 3-D hypersonic shock-wave/turbulent boundary layer interaction flows are presented. The robust 3-D method achieves a computational efficiency at least one order of magnitude better than the CNS Navier-Stokes code. It provides cost-effective aerodynamic predictions in agreement with experiment, and the capability of predicting complex flow structures in complex geometries with good accuracy.
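
    One textbook ingredient of such TVD schemes is the slope limiter; the sketch below computes a minmod-limited slope for a 1-D profile. It is illustrative only and far simpler than the paper's implicit 3-D upwind flux-difference scheme.

        import numpy as np

        def minmod(a, b):
            """Pick the smaller-magnitude slope, or zero at an extremum."""
            return np.where(a * b <= 0.0, 0.0,
                            np.where(np.abs(a) < np.abs(b), a, b))

        u = np.array([0.0, 0.0, 1.0, 1.0, 0.5, 0.0])
        du_left  = u[1:-1] - u[:-2]
        du_right = u[2:]   - u[1:-1]
        print(minmod(du_left, du_right))   # limited slopes suppress new extrema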

  12. Numerical sensitivity analysis of a variational data assimilation procedure for cardiac conductivities

    NASA Astrophysics Data System (ADS)

    Barone, Alessandro; Fenton, Flavio; Veneziani, Alessandro

    2017-09-01

    An accurate estimation of cardiac conductivities is critical in computational electro-cardiology, yet experimental results in the literature significantly disagree on the values and ratios between longitudinal and tangential coefficients. These are known to have a strong impact on the propagation of potential particularly during defibrillation shocks. Data assimilation is a procedure for merging experimental data and numerical simulations in a rigorous way. In particular, variational data assimilation relies on the least-square minimization of the misfit between simulations and experiments, constrained by the underlying mathematical model, which in this study is represented by the classical Bidomain system, or its common simplification given by the Monodomain problem. Operating on the conductivity tensors as control variables of the minimization, we obtain a parameter estimation procedure. As the theory of this approach currently provides only an existence proof and it is not informative for practical experiments, we present here an extensive numerical simulation campaign to assess practical critical issues such as the size and the location of the measurement sites needed for in silico test cases of potential experimental and realistic settings. This will be finalized with a real validation of the variational data assimilation procedure. Results indicate the presence of lower and upper bounds for the number of sites which guarantee an accurate and minimally redundant parameter estimation, the location of sites being generally non critical for properly designed experiments. An effective combination of parameter estimation based on the Monodomain and Bidomain models is tested for the sake of computational efficiency. Parameter estimation based on the Monodomain equation potentially leads to the accurate computation of the transmembrane potential in real settings.

  13. Fast and Adaptive Sparse Precision Matrix Estimation in High Dimensions

    PubMed Central

    Liu, Weidong; Luo, Xi

    2014-01-01

    This paper proposes a new method for estimating sparse precision matrices in the high dimensional setting. It has been popular to study fast computation and adaptive procedures for this problem. We propose a novel approach, called Sparse Column-wise Inverse Operator, to address these two issues. We analyze an adaptive procedure based on cross validation, and establish its convergence rate under the Frobenius norm. The convergence rates under other matrix norms are also established. This method also enjoys the advantage of fast computation for large-scale problems, via a coordinate descent algorithm. Numerical merits are illustrated using both simulated and real datasets. In particular, it performs favorably on an HIV brain tissue dataset and an ADHD resting-state fMRI dataset. PMID:25750463
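
    The Sparse Column-wise Inverse Operator itself is not shipped in standard libraries; as a hedged stand-in for the same task (cross-validated sparse precision estimation), the sketch below runs scikit-learn's graphical lasso, a different l1-penalized estimator, on synthetic data.

        import numpy as np
        from sklearn.covariance import GraphicalLassoCV

        rng = np.random.default_rng(0)
        X = rng.multivariate_normal(np.zeros(5), np.eye(5), size=200)

        model = GraphicalLassoCV().fit(X)        # penalty chosen by cross validation
        print(np.round(model.precision_, 2))     # estimated sparse precision matrix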

  14. A structure preserving Lanczos algorithm for computing the optical absorption spectrum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shao, Meiyue; Jornada, Felipe H. da; Lin, Lin

    2016-11-16

    We present a new structure preserving Lanczos algorithm for approximating the optical absorption spectrum in the context of solving the full Bethe-Salpeter equation without the Tamm-Dancoff approximation. The new algorithm is based on a structure preserving Lanczos procedure, which exploits the special block structure of Bethe-Salpeter Hamiltonian matrices. A recently developed technique of generalized averaged Gauss quadrature is incorporated to accelerate the convergence. We also establish the connection between our structure preserving Lanczos procedure and several existing Lanczos procedures developed in different contexts. Numerical examples are presented to demonstrate the effectiveness of our Lanczos algorithm.
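
    For orientation only, the sketch below shows plain symmetric Lanczos tridiagonalization, whose Ritz values approximate extremal eigenvalues after a few steps; the paper's contribution is a structure preserving variant for the Bethe-Salpeter block structure, which this simple version does not capture.

        import numpy as np

        def lanczos(A, v0, k):
            """k-step Lanczos: returns the diagonals of the tridiagonal T."""
            n = len(v0)
            V = np.zeros((n, k)); alpha = np.zeros(k); beta = np.zeros(k - 1)
            v = v0 / np.linalg.norm(v0)
            for j in range(k):
                V[:, j] = v
                w = A @ v
                alpha[j] = v @ w
                w = w - alpha[j] * v - (beta[j - 1] * V[:, j - 1] if j > 0 else 0.0)
                if j < k - 1:
                    beta[j] = np.linalg.norm(w)
                    v = w / beta[j]
            return alpha, beta

        A = np.diag(np.arange(1.0, 101.0))       # toy symmetric matrix
        a, b = lanczos(A, np.ones(100), 20)
        T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
        print(np.linalg.eigvalsh(T)[-1])         # close to lambda_max(A) = 100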

  15. Analysis of vibrational load influence upon passengers in trains with a compulsory body tilt

    NASA Astrophysics Data System (ADS)

    Antipin, D. Ya; Kobishchanov, V. V.; Lapshin, V. F.; Mitrakov, A. S.; Shorokhov, S. G.

    2017-02-01

    A procedure is offered for forecasting the vibrational load on passengers of trains equipped with a system of compulsory body tilt on railroad curves. The procedure is based on computer simulation methods and the application of solid-state models of anthropometric mannequins. The investigations carried out substantiate criteria for estimating the comfort level of passengers in the rolling stock under consideration. The procedure is demonstrated using the example of a promising domestic rolling stock with compulsory body tilt on railroad curves.

  16. Use of a graphics processing unit (GPU) to facilitate real-time 3D graphic presentation of the patient skin-dose distribution during fluoroscopic interventional procedures

    PubMed Central

    Rana, Vijay; Rudin, Stephen; Bednarek, Daniel R.

    2012-01-01

    We have developed a dose-tracking system (DTS) that calculates the radiation dose to the patient’s skin in real-time by acquiring exposure parameters and imaging-system-geometry from the digital bus on a Toshiba Infinix C-arm unit. The cumulative dose values are then displayed as a color map on an OpenGL-based 3D graphic of the patient for immediate feedback to the interventionalist. Determination of those elements on the surface of the patient 3D-graphic that intersect the beam and calculation of the dose for these elements in real time demands fast computation. Reducing the size of the elements results in more computation load on the computer processor and therefore a tradeoff occurs between the resolution of the patient graphic and the real-time performance of the DTS. The speed of the DTS for calculating dose to the skin is limited by the central processing unit (CPU) and can be improved by using the parallel processing power of a graphics processing unit (GPU). Here, we compare the performance speed of GPU-based DTS software to that of the current CPU-based software as a function of the resolution of the patient graphics. Results show a tremendous improvement in speed using the GPU. While an increase in the spatial resolution of the patient graphics resulted in slowing down the computational speed of the DTS on the CPU, the speed of the GPU-based DTS was hardly affected. This GPU-based DTS can be a powerful tool for providing accurate, real-time feedback about patient skin-dose to physicians while performing interventional procedures. PMID:24027616

  17. Use of a graphics processing unit (GPU) to facilitate real-time 3D graphic presentation of the patient skin-dose distribution during fluoroscopic interventional procedures.

    PubMed

    Rana, Vijay; Rudin, Stephen; Bednarek, Daniel R

    2012-02-23

    We have developed a dose-tracking system (DTS) that calculates the radiation dose to the patient's skin in real-time by acquiring exposure parameters and imaging-system-geometry from the digital bus on a Toshiba Infinix C-arm unit. The cumulative dose values are then displayed as a color map on an OpenGL-based 3D graphic of the patient for immediate feedback to the interventionalist. Determination of those elements on the surface of the patient 3D-graphic that intersect the beam and calculation of the dose for these elements in real time demands fast computation. Reducing the size of the elements results in more computation load on the computer processor and therefore a tradeoff occurs between the resolution of the patient graphic and the real-time performance of the DTS. The speed of the DTS for calculating dose to the skin is limited by the central processing unit (CPU) and can be improved by using the parallel processing power of a graphics processing unit (GPU). Here, we compare the performance speed of GPU-based DTS software to that of the current CPU-based software as a function of the resolution of the patient graphics. Results show a tremendous improvement in speed using the GPU. While an increase in the spatial resolution of the patient graphics resulted in slowing down the computational speed of the DTS on the CPU, the speed of the GPU-based DTS was hardly affected. This GPU-based DTS can be a powerful tool for providing accurate, real-time feedback about patient skin-dose to physicians while performing interventional procedures.

  18. Implementing Computer-Based Procedures: Thinking Outside the Paper Margins

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oxstrand, Johanna; Bly, Aaron

    In the past year there has been increased interest from the nuclear industry in adopting electronic work packages and computer-based procedures (CBPs) in the field. The goal is to use technology to meet the Nuclear Promise requirements of reducing costs, improving efficiency, and decreasing human error rates in plant operations. Researchers, together with the nuclear industry, have been investigating the benefits an electronic work package system, and specifically CBPs, would have over current paper-based procedure practices. There are several classifications of CBPs, ranging from a straight copy of the paper-based procedure in PDF format to a more intelligent, dynamic CBP. A CBP system offers a wide variety of improvements, such as context-driven job aids, integrated human performance tools (e.g., placekeeping and correct component verification), and dynamic step presentation. The latter means that the CBP system displays only the steps relevant to the operating mode, plant status, and the task at hand. These improvements can reduce the worker's workload and human error by allowing the worker to focus more on the task at hand. A team of human factors researchers at the Idaho National Laboratory studied and developed design concepts for CBPs for field workers between 2012 and 2016. The focus of the research was to present information in a procedure in a manner that leverages the dynamic and computational capabilities of a handheld device, allowing the worker to focus more on the task at hand than on the administrative processes currently applied when conducting work in the plant. As part of the research, the team identified types of work, instructions, and scenarios where the transition to a dynamic CBP system might not be as beneficial as it would be for other types of work in the plant. In most cases the decision to use a dynamic CBP system, and to utilize the dynamic capabilities gained, will benefit the worker. However, tasks that rely on the skill of the craft, or that consist of a short set of instructions, may not need, or even provide a way to use, all the advanced capabilities of a dynamic CBP system. Therefore, a hybrid CBP system that can handle all the classifications of CBPs would be the best solution for taking advantage of all that a CBP system offers. The implementation of a CBP system does not automatically improve the quality of procedures. Utilities should examine why each procedure is written the way it currently is on paper, and should take the time before implementation to review, standardize, and update current procedures. Implementation of a CBP system can be an opportunity to break out of traditional procedure-writing processes and to create new processes and procedures that take advantage of the capabilities a CBP system offers. This paper summarizes the research on CBPs and provides suggestions to consider when implementing a CBP system.
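
    As a minimal illustration of the "dynamic step presentation" idea described above, the sketch below filters a procedure down to the steps relevant to the current operating mode; the step texts and mode names are invented for illustration:

      from dataclasses import dataclass, field

      @dataclass
      class Step:
          text: str
          modes: set = field(default_factory=lambda: {"any"})  # modes where the step applies

      def relevant_steps(procedure, mode):
          """Show only the steps applicable in the current operating mode."""
          return [s.text for s in procedure if "any" in s.modes or mode in s.modes]

      procedure = [Step("Verify pump A is running", {"startup"}),
                   Step("Record tank level"),                      # applies in any mode
                   Step("Close isolation valve", {"shutdown"})]
      print(relevant_steps(procedure, "startup"))                  # pump check + tank level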

  19. A Geometry Based Infra-structure for Computational Analysis and Design

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    1997-01-01

    The computational steps traditionally taken for most engineering analyses (CFD, structural analysis, etc.) are: Surface Generation - usually by employing a CAD system; Grid Generation - preparing the volume for the simulation; Flow Solver - producing the results at the specified operational point; and Post-processing Visualization - interactively attempting to understand the results. For structural analysis, integrated systems can be obtained from a number of commercial vendors. For CFD, these steps have worked well in the past for simple steady-state simulations, at the expense of much user interaction. The data was transmitted between phases via files. Specifically, the problems with this procedure are: (1) File based. Information flows from one step to the next via data files with formats specified for that procedure. (2) 'Good' Geometry. A bottleneck in getting results from a solver is the construction of proper geometry to be fed to the grid generator. With 'good' geometry a grid can be constructed in tens of minutes (even with a complex configuration) using unstructured techniques. (3) One-Way communication. All information travels forward from one phase to the next. Until this process can be automated, more complex problems such as multi-disciplinary analysis, or using the above procedure for design, become prohibitive.

  20. DIALOG: An executive computer program for linking independent programs

    NASA Technical Reports Server (NTRS)

    Glatt, C. R.; Hague, D. S.; Watson, D. A.

    1973-01-01

    A very large scale computer programming procedure called the DIALOG Executive System has been developed for the Univac 1100 series computers. The executive computer program, DIALOG, controls the sequence of execution and data management functions for a library of independent computer programs. Communication of common information is accomplished by DIALOG through a dynamically constructed and maintained data base of common information. The unique feature of the DIALOG Executive System is the manner in which computer programs are linked: each program maintains its individual identity and as such is unaware of its contribution to the large-scale program. This feature makes any computer program a candidate for use with the DIALOG Executive System. The installation and use of the DIALOG Executive System at the Johnson Space Center are described.
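
    The executive pattern described here is easy to picture: the independent programs never call each other and communicate only through a common data base that the executive maintains. A toy sketch of that structure (program names and data keys are invented):

      shared_db = {}                            # the executive's common data base

      def program_a(db):                        # unaware it is part of a larger run
          db["trajectory"] = [1.0, 2.0, 3.0]

      def program_b(db):                        # consumes whatever the data base holds
          db["summary"] = sum(db["trajectory"])

      for step in (program_a, program_b):       # the executive controls the sequence
          step(shared_db)
      print(shared_db["summary"])               # 6.0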

  1. 75 FR 26791 - Solicitation for a Cooperative Agreement-Evaluation of Technical Assistance for Evidence-Based...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-12

    ... opportunity to meet with NIC project staff and ask questions about the project and the application procedures. Attendance at the conference is optional. Provisions will be made using WebEx technology (telephone and computer-based conferencing). The WebEx session requires applicants to have access to a telephone and...

  2. Seven-Step Problem-Based Learning in an Interaction Design Course

    ERIC Educational Resources Information Center

    Schultz, Nette; Christensen, Hans Peter

    2004-01-01

    The objective in this paper is the implementation of the highly structured seven-step problem-based learning (PBL) procedure as part of the learning process in a human-computer interaction (HCI) design course at the Technical University of Denmark, taking into account the common learning processes in PBL and the interaction design process. These…

  3. The Health Services Mobility Study Method of Task Analysis and Curriculum Design. Research Report No. 11. Volume 3: Using the Computer to Develop Job Ladders.

    ERIC Educational Resources Information Center

    Gilpatrick, Eleanor

    This document is volume 3 of a four-volume report which describes the components of the Health Services Mobility Study (HSMS) method of task analysis, job ladder design, and curriculum development. Divided into four chapters, volume 3 is a manual for using HSMS computer based statistical procedures to design job structures and job ladders. Chapter…

  4. Computer aided diagnosis and treatment planning for developmental dysplasia of the hip

    NASA Astrophysics Data System (ADS)

    Li, Bin; Lu, Hongbing; Cai, Wenli; Li, Xiang; Meng, Jie; Liang, Zhengrong

    2005-04-01

    Developmental dysplasia of the hip (DDH) is a congenital malformation in which the proximal femur and acetabulum are subluxatable, dislocatable, or dislocated. Early diagnosis and treatment are important because failure to diagnose, or improper treatment, can result in significant morbidity. In this paper, we designed and implemented a computer-aided system for the diagnosis and treatment planning of this disease. In this design, the patient first receives a CT (computed tomography) or MRI (magnetic resonance imaging) scan. A mixture-based partial-volume (PV) algorithm was applied to perform bone segmentation on the CT image, followed by three-dimensional (3D) reconstruction and display of the segmented image, demonstrating the spatial relationship between the acetabulum and femurs for visual judgment. Several standard procedures, such as the Salter procedure, the Pemberton procedure, and femoral shortening osteotomy, were simulated on screen to rehearse a virtual treatment plan. Quantitative measurements of the Acetabular Index (AI) and Femoral Neck Anteversion (FNA) were performed on the 3D image for evaluation of DDH and of the treatment plans. The PC graphics-card GPU architecture was exploited to accelerate the 3D rendering and geometric manipulation. The prototype system was implemented in a PC/Windows environment and is currently under clinical trial on patient datasets.
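
    Quantities like the Acetabular Index reduce to the angle between two anatomical reference lines extracted from the 3D image. A minimal sketch of that measurement (the landmark coordinates and the two-line formulation are illustrative assumptions, not the paper's implementation):

      import numpy as np

      def line_angle_deg(a1, a2, b1, b2):
          """Acute angle (degrees) between line a1-a2 and line b1-b2."""
          u = np.asarray(a2, float) - np.asarray(a1, float)
          v = np.asarray(b2, float) - np.asarray(b1, float)
          c = abs(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
          return np.degrees(np.arccos(np.clip(c, 0.0, 1.0)))

      # e.g., acetabular roof line vs. a horizontal pelvic reference line
      print(line_angle_deg([0, 0, 0], [10, 0, 0], [0, 0, 0], [10, 4, 0]))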

  5. Simulation Experiment Description Markup Language (SED-ML) Level 1 Version 2.

    PubMed

    Bergmann, Frank T; Cooper, Jonathan; Le Novère, Nicolas; Nickerson, David; Waltemath, Dagmar

    2015-09-04

    The number, size and complexity of computational models of biological systems are growing at an ever increasing pace. It is imperative to build on existing studies by reusing and adapting existing models and parts thereof. The description of the structure of models is not sufficient to enable the reproduction of simulation results. One also needs to describe the procedures the models are subjected to, as recommended by the Minimum Information About a Simulation Experiment (MIASE) guidelines. This document presents Level 1 Version 2 of the Simulation Experiment Description Markup Language (SED-ML), a computer-readable format for encoding simulation and analysis experiments to apply to computational models. SED-ML files are encoded in the Extensible Markup Language (XML) and can be used in conjunction with any XML-based model encoding format, such as CellML or SBML. A SED-ML file includes details of which models to use, how to modify them prior to executing a simulation, which simulation and analysis procedures to apply, which results to extract and how to present them. Level 1 Version 2 extends the format by allowing the encoding of repeated and chained procedures.
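
    To make the description concrete, the sketch below assembles the skeleton of such a document with Python's standard library. The element and attribute names follow the SED-ML Level 1 schema as recalled here; treat the normative specification at sed-ml.org as authoritative rather than this sketch:

      import xml.etree.ElementTree as ET

      root = ET.Element("sedML", level="1", version="2")
      ET.SubElement(ET.SubElement(root, "listOfModels"), "model",
                    id="m1", language="urn:sedml:language:sbml", source="model.xml")
      ET.SubElement(ET.SubElement(root, "listOfSimulations"), "uniformTimeCourse",
                    id="s1", initialTime="0", outputStartTime="0",
                    outputEndTime="100", numberOfPoints="1000")
      ET.SubElement(ET.SubElement(root, "listOfTasks"), "task",
                    id="t1", modelReference="m1", simulationReference="s1")
      print(ET.tostring(root, encoding="unicode"))

    The repeated and chained procedures added in Level 1 Version 2 are, to my understanding, expressed with an additional repeatedTask element referencing sub-tasks (not shown).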

  6. Simulation Experiment Description Markup Language (SED-ML) Level 1 Version 2.

    PubMed

    Bergmann, Frank T; Cooper, Jonathan; Le Novère, Nicolas; Nickerson, David; Waltemath, Dagmar

    2015-06-01

    The number, size and complexity of computational models of biological systems are growing at an ever increasing pace. It is imperative to build on existing studies by reusing and adapting existing models and parts thereof. The description of the structure of models is not sufficient to enable the reproduction of simulation results. One also needs to describe the procedures the models are subjected to, as recommended by the Minimum Information About a Simulation Experiment (MIASE) guidelines. This document presents Level 1 Version 2 of the Simulation Experiment Description Markup Language (SED-ML), a computer-readable format for encoding simulation and analysis experiments to apply to computational models. SED-ML files are encoded in the Extensible Markup Language (XML) and can be used in conjunction with any XML-based model encoding format, such as CellML or SBML. A SED-ML file includes details of which models to use, how to modify them prior to executing a simulation, which simulation and analysis procedures to apply, which results to extract and how to present them. Level 1 Version 2 extends the format by allowing the encoding of repeated and chained procedures.

  7. A novel tree-based procedure for deciphering the genomic spectrum of clinical disease entities.

    PubMed

    Mbogning, Cyprien; Perdry, Hervé; Toussile, Wilson; Broët, Philippe

    2014-01-01

    Dissecting the genomic spectrum of clinical disease entities is a challenging task. Recursive partitioning (or classification tree) methods provide powerful tools for exploring the complex interplay among genomic factors, with respect to a main factor, that can reveal hidden genomic patterns. To take confounding variables into account, the partially linear tree-based regression (PLTR) model was recently published; it combines regression models and tree-based methodology. It is, however, computationally burdensome and not well suited to situations in which a large number of explanatory variables is expected. We developed a novel procedure that represents an alternative to the original PLTR procedure, and considered different selection criteria. A simulation study with different scenarios was performed to compare the performance of the proposed procedure to the original PLTR strategy. The proposed procedure with a Bayesian Information Criterion (BIC) achieved good performance in detecting the hidden structure compared to the original procedure. The novel procedure was used to analyze patterns of copy-number alterations in lung adenocarcinomas with respect to Kirsten Rat Sarcoma Viral Oncogene Homolog gene (KRAS) mutation status, while controlling for a cohort effect. The results highlight two subgroups of pure or nearly pure wild-type KRAS tumors with particular copy-number alteration patterns. The proposed procedure with a BIC criterion represents a powerful and practical alternative to the original procedure. Our procedure performs well in a general framework and is simple to implement.
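
    The selection criterion used here is the usual penalized log-likelihood; a one-line sketch of its generic form (nothing PLTR-specific is implied):

      import numpy as np

      def bic(log_likelihood, n_params, n_obs):
          """Bayesian Information Criterion: lower values indicate a better model."""
          return -2.0 * log_likelihood + n_params * np.log(n_obs)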

  8. Time-Of-Flight Camera, Optical Tracker and Computed Tomography in Pairwise Data Registration.

    PubMed

    Pycinski, Bartlomiej; Czajkowska, Joanna; Badura, Pawel; Juszczyk, Jan; Pietka, Ewa

    2016-01-01

    A growing number of medical applications, including minimally invasive surgery, depends on multi-modal or multi-sensor data processing. Fast and accurate 3D scene analysis, comprising data registration, seems crucial for the development of computer-aided diagnosis and therapy. The advancement of surface tracking systems based on optical trackers already plays an important role in surgical procedure planning. However, new modalities, like time-of-flight (ToF) sensors, widely explored in non-medical fields, are powerful and have the potential to become part of a computer-aided surgery set-up. Connecting different acquisition systems promises to provide valuable support for operating room procedures. Therefore, a detailed analysis of the accuracy of such multi-sensor positioning systems is needed. We present a system combining pre-operative CT series with intra-operative ToF-sensor and optical tracker point clouds. The methodology comprises: optical sensor set-up and ToF-camera calibration procedures, data pre-processing algorithms, and a registration technique. The data pre-processing yields a surface in the case of CT, and point clouds for the ToF-sensor and marker-driven optical tracker representations of an object of interest. The applied registration technique is based on the Iterative Closest Point algorithm. The experiments validate the registration of each pair of modalities/sensors on phantoms of four human organs in terms of the Hausdorff distance and mean absolute distance metrics. The best surface alignment was obtained for the CT and optical tracker combination, the worst for experiments involving the ToF-camera. The obtained accuracies encourage further development of multi-sensor systems. The discussion of the system's limitations and possible improvements, mainly related to the depth information produced by the ToF-sensor, is useful for computer-aided surgery developers.
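
    The two alignment metrics reported here are standard; a brute-force sketch over registered point clouds A and B (O(n*m) memory, acceptable for small phantom clouds):

      import numpy as np

      def nn_dists(A, B):
          """Nearest-neighbour distances A->B and B->A between (n,3) and (m,3) clouds."""
          d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
          return d.min(axis=1), d.min(axis=0)

      def hausdorff(A, B):
          dab, dba = nn_dists(A, B)
          return max(dab.max(), dba.max())          # worst-case mismatch

      def mean_absolute_distance(A, B):
          dab, dba = nn_dists(A, B)
          return (dab.mean() + dba.mean()) / 2.0    # average mismatch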

  9. Nuclear-relaxed elastic and piezoelectric constants of materials: Computational aspects of two quantum-mechanical approaches.

    PubMed

    Erba, Alessandro; Caglioti, Dominique; Zicovich-Wilson, Claudio Marcelo; Dovesi, Roberto

    2017-02-15

    Two alternative approaches for the quantum-mechanical calculation of the nuclear-relaxation term of the elastic and piezoelectric tensors of crystalline materials are illustrated and their computational aspects discussed: (i) a numerical approach based on the geometry optimization of atomic positions at strained lattice configurations, and (ii) a quasi-analytical approach based on the evaluation of the force- and displacement-response internal-strain tensors combined with the interatomic force-constant matrix. The two schemes are compared with regard to both computational accuracy and performance. The latter approach, not being affected by the many numerical parameters and procedures of a typical quasi-Newton geometry optimizer, constitutes a more reliable and robust means of evaluating such properties, at a reduced computational cost for most crystalline systems. © 2016 Wiley Periodicals, Inc.
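
    For orientation, the quasi-analytical route rests on the standard relation between the clamped-ion and relaxed tensors, sketched here in generic form from the general theory rather than transcribed from the paper:

      C_{ij}^{\mathrm{relaxed}} \;=\; C_{ij}^{\mathrm{clamped}} \;-\; \frac{1}{V}\sum_{a,b} \Lambda_{ai}\,(H^{-1})_{ab}\,\Lambda_{bj}

    where Lambda is the force-response internal-strain tensor, H the interatomic force-constant matrix (with rigid translations projected out), a and b run over atomic displacement coordinates, i and j over strain components, and V is the cell volume; an analogous correction applies to the piezoelectric tensor.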

  10. Observations on computational methodologies for use in large-scale, gradient-based, multidisciplinary design incorporating advanced CFD codes

    NASA Technical Reports Server (NTRS)

    Newman, P. A.; Hou, G. J.-W.; Jones, H. E.; Taylor, A. C., III; Korivi, V. M.

    1992-01-01

    How a combination of various computational methodologies could reduce the enormous computational costs envisioned in using advanced CFD codes in gradient-based, optimized multidisciplinary design (MdD) procedures is briefly outlined. The implications of these MdD requirements for advanced CFD codes are somewhat different from those imposed by single-discipline design. A means for satisfying these MdD requirements for gradient information is presented which appears to permit: (1) some leeway in the CFD solution algorithms which can be used; (2) an extension to 3-D problems; and (3) straightforward use of other computational methodologies. Many of these observations have previously been discussed as possibilities for doing parts of the problem more efficiently; the contribution here is observing how they fit together in a mutually beneficial way.

  11. Flight Behaviors of a Complex Projectile Using a Coupled Computational Fluid Dynamics (CFD)-based Simulation Technique: Free Motion

    DTIC Science & Technology

    2015-09-01

    million cells each. These 4 canard meshes were then overset with the 10 background projectile body mesh using the Chimera procedure. The final... Chimera-overlapped mesh for each of the 2 (fin cant) models consists of approximately 43 million cells. A circumferential cross section (Fig. 4... The Chimera procedure requires proper transfer of information between the background mesh and the canard meshes at every time step. However, the advantage

  12. [Short-term and long-term fetal heart rate variability after amnioinfusion treatment of oligohydramnios complicated pregnancy].

    PubMed

    Machalski, T; Sikora, J; Bakon, I; Magnucki, J; Grzesiak-Kubica, E; Szkodny, E

    2001-12-01

    Results of computerised analysis of cardiotocograms obtained in a group of 21 pregnancies complicated by idiopathic oligohydramnios are presented. Amnioinfusion procedures were administered serially under local anesthesia, with ultrasound and colour Doppler guidance, based on Phelan's oligohydramnios criteria. The analysis used the KOMPOR software created by ITAM Zabrze, running on a PC connected to a Hewlett-Packard Series 50A cardiotocograph. A significant increase in short-term variability was found immediately after the amnioinfusion procedure, from 5.55 ms to 8.24 ms, remaining elevated at 7.25 ms after 24 hours, while no significant changes in long-term variability values were observed.

  13. 12 CFR 1209.17 - Time computations.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 10 2014-01-01 2014-01-01 false Time computations. 1209.17 Section 1209.17... PROCEDURE Rules of Practice and Procedure § 1209.17 Time computations. (a) General rule. In computing any period of time prescribed or allowed under this part, the date of the act or event that commences the...

  14. 12 CFR 1209.17 - Time computations.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 12 Banks and Banking 9 2013-01-01 2013-01-01 false Time computations. 1209.17 Section 1209.17... PROCEDURE Rules of Practice and Procedure § 1209.17 Time computations. (a) General rule. In computing any period of time prescribed or allowed under this part, the date of the act or event that commences the...

  15. 12 CFR 1209.17 - Time computations.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 12 Banks and Banking 9 2012-01-01 2012-01-01 false Time computations. 1209.17 Section 1209.17... PROCEDURE Rules of Practice and Procedure § 1209.17 Time computations. (a) General rule. In computing any period of time prescribed or allowed under this part, the date of the act or event that commences the...

  16. Stratified and Maximum Information Item Selection Procedures in Computer Adaptive Testing

    ERIC Educational Resources Information Center

    Deng, Hui; Ansley, Timothy; Chang, Hua-Hua

    2010-01-01

    In this study we evaluated and compared three item selection procedures: the maximum Fisher information procedure (F), the a-stratified multistage computer adaptive testing (CAT) (STR), and a refined stratification procedure that allows more items to be selected from the high a strata and fewer items from the low a strata (USTR), along with…
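
    For reference, maximum Fisher information selection under a two-parameter logistic (2PL) model reduces to a few lines; this is the textbook form of procedure F with the usual discrimination/difficulty parameters a and b, not the authors' code:

      import numpy as np

      def fisher_info(theta, a, b):
          """Item information at ability theta for a 2PL model."""
          p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
          return a ** 2 * p * (1.0 - p)

      def next_item(theta, a, b, used):
          """Pick the unused item with maximum information at theta."""
          info = fisher_info(theta, a, b)
          info[list(used)] = -np.inf          # mask items already administered
          return int(np.argmax(info))

    Stratified procedures such as STR and USTR differ mainly in restricting this argmax to the current a-stratum rather than the whole pool.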

  17. Web-based continuing medical education. (II): Evaluation study of computer-mediated continuing medical education.

    PubMed

    Curran, V R; Hoekman, T; Gulliver, W; Landells, I; Hatcher, L

    2000-01-01

    Over the years, various distance learning technologies and methods have been applied to the continuing medical education needs of rural and remote physicians. They have included audio teleconferencing, slow scan imaging, correspondence study, and compressed videoconferencing. The recent emergence and growth of Internet, World Wide Web (Web), and compact disk read-only-memory (CD-ROM) technologies have introduced new opportunities for providing continuing education to the rural medical practitioner. This evaluation study assessed the instructional effectiveness of a hybrid computer-mediated courseware delivery system on dermatologic office procedures. A hybrid delivery system merges Web documents, multimedia, computer-mediated communications, and CD-ROMs to enable self-paced instruction and collaborative learning. Using a modified pretest to post-test control group study design, several evaluative criteria (participant reaction, learning achievement, self-reported performance change, and instructional transactions) were assessed by various qualitative and quantitative data collection methods. This evaluation revealed that a hybrid computer-mediated courseware system was an effective means for increasing knowledge (p < .05) and improving self-reported competency (p < .05) in dermatologic office procedures, and that participants were very satisfied with the self-paced instruction and use of asynchronous computer conferencing for collaborative information sharing among colleagues.

  18. Characterization of robotics parallel algorithms and mapping onto a reconfigurable SIMD machine

    NASA Technical Reports Server (NTRS)

    Lee, C. S. G.; Lin, C. T.

    1989-01-01

    The kinematics, dynamics, Jacobian, and their corresponding inverse computations are six essential problems in the control of robot manipulators. Efficient parallel algorithms for these computations are discussed and analyzed. Their characteristics are identified, and a scheme for mapping these algorithms to a reconfigurable parallel architecture is presented. Based on characteristics including the type of parallelism, degree of parallelism, uniformity of the operations, fundamental operations, data dependencies, and communication requirements, it is shown that most of the algorithms for robotic computations possess highly regular properties and some common structures, especially the linear recursive structure. Moreover, they are well suited to implementation on a single-instruction-stream multiple-data-stream (SIMD) computer with a reconfigurable interconnection network. The model of a reconfigurable dual-network SIMD machine with internal direct feedback is introduced, and a systematic procedure for mapping these computations to the proposed machine is presented. A new scheduling problem for SIMD machines is investigated, and a heuristic algorithm, called neighborhood scheduling, that reorders the processing sequence of subtasks to reduce the communication time is described. Mapping results for a benchmark algorithm are illustrated and discussed.

  19. Computer simulation of surface and film processes

    NASA Technical Reports Server (NTRS)

    Tiller, W. A.; Halicioglu, M. T.

    1983-01-01

    Adequate computer methods, based on interactions between discrete particles, provide information leading to an atomic-level understanding of various physical processes. The success of these simulation methods, however, is related to the accuracy of the potential energy function representing the interactions among the particles. The development of a potential energy function for crystalline SiO2 forms that can be employed in lengthy computer modelling procedures was investigated. In many of the simulation methods which deal with discrete particles, semiempirical two-body potentials were employed to analyze energy- and structure-related properties of the system. However, many-body interactions are required for a proper representation of the total energy of many systems. Many-body interactions for simulations based on discrete particles are discussed.
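
    A classic example of the kind of semiempirical two-body potential such simulations build on is the Lennard-Jones form, where epsilon (well depth) and sigma (zero-crossing distance) are fitted parameters:

      def lennard_jones(r, epsilon, sigma):
          """Pair energy at separation r; minimum of -epsilon at r = 2**(1/6) * sigma."""
          x = (sigma / r) ** 6
          return 4.0 * epsilon * (x * x - x)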

  20. Modelling of DNA-protein recognition

    NASA Technical Reports Server (NTRS)

    Rein, R.; Garduno, R.; Colombano, S.; Nir, S.; Haydock, K.; Macelroy, R. D.

    1980-01-01

    Computer model-building procedures using stereochemical principles together with theoretical energy calculations appear to be, at this stage, the most promising route toward the elucidation of DNA-protein binding schemes and recognition principles. A review of models and bonding principles is conducted and approaches to modeling are considered, taking into account possible di-hydrogen-bonding schemes between a peptide and a base (or a base pair) of a double-stranded nucleic acid in the major groove, aspects of computer graphic modeling, and a search for isogeometric helices. The energetics of recognition complexes is discussed and several models for peptide DNA recognition are presented.

  1. Response surface method in geotechnical/structural analysis, phase 1

    NASA Astrophysics Data System (ADS)

    Wong, F. S.

    1981-02-01

    In the response surface approach, an approximating function is fitted to a long-running computer code based on a limited number of code calculations. The approximating function, called the response surface, is then used to replace the code in subsequent repetitive computations required in a statistical analysis. The procedure of response surface development and the feasibility of the method are demonstrated using a sample problem in slope stability, which is based on data from centrifuge experiments on model soil slopes and involves five random soil parameters. It is shown that a response surface can be constructed based on as few as four code calculations and that the response surface is computationally extremely efficient compared to the code calculation. Potential applications of this research include probabilistic analysis of dynamic, complex, nonlinear soil/structure systems such as slope stability, liquefaction, and nuclear reactor safety.
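
    The core idea is to replace the expensive code with a cheap fitted surrogate. A one-dimensional sketch (the stand-in "code" and the quadratic form are illustrative; the study itself fit a surface over five random soil parameters):

      import numpy as np

      def expensive_code(x):                       # stand-in for the long-running model
          return np.sin(x) + 0.1 * x ** 2

      x_design = np.array([0.0, 1.0, 2.0, 3.0])    # only four code calculations
      y_design = expensive_code(x_design)
      surface = np.polynomial.Polynomial.fit(x_design, y_design, deg=2)

      x_many = np.linspace(0.0, 3.0, 10000)        # repetitive statistical evaluations
      y_cheap = surface(x_many)                    # now essentially free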

  2. A Computer-Managed Instruction Support System for Large Group Individualized Instruction.

    ERIC Educational Resources Information Center

    Countermine, Terry; Singh, Jane M.

    1977-01-01

    The Pennsylvania State University College of Education's Instruction Support System (ISS) was developed to manage the logistical operation of large group individualized competency-based instruction. Software and hardware charting, operational procedures, and data from student opinion questionnaires are cited. (RAO)

  3. Brain-computer interaction research at the Computer Vision and Multimedia Laboratory, University of Geneva.

    PubMed

    Pun, Thierry; Alecu, Teodor Iulian; Chanel, Guillaume; Kronegg, Julien; Voloshynovskiy, Sviatoslav

    2006-06-01

    This paper describes the work being conducted in the domain of brain-computer interaction (BCI) at the Multimodal Interaction Group, Computer Vision and Multimedia Laboratory, University of Geneva, Geneva, Switzerland. The application focus of this work is on multimodal interaction rather than on rehabilitation, that is, how to augment classical interaction by means of physiological measurements. Three main research topics are addressed. The first concerns the more general problem of brain source activity recognition from EEGs. In contrast with classical deterministic approaches, we studied iterative, robust, stochastically based reconstruction procedures that model source and noise statistics to overcome known limitations of current techniques. We also developed procedures for optimal electroencephalogram (EEG) sensor system design in terms of the placement and number of electrodes. The second topic is the study of BCI protocols and performance from an information-theoretic point of view. Various information rate measurements have been compared for assessing BCI abilities. The third research topic concerns the use of EEG and other physiological signals for assessing a user's emotional status.

  4. 47 CFR 1.958 - Distance computation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Distance computation. 1.958 Section 1.958 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Wireless Radio Services Applications and Proceedings Application Requirements and Procedures § 1.958 Distance computation. The method...

  5. 47 CFR 1.958 - Distance computation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 1 2011-10-01 2011-10-01 false Distance computation. 1.958 Section 1.958 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Wireless Radio Services Applications and Proceedings Application Requirements and Procedures § 1.958 Distance computation. The method...

  6. Steepest descent method implementation on unconstrained optimization problem using C++ program

    NASA Astrophysics Data System (ADS)

    Napitupulu, H.; Sukono; Mohd, I. Bin; Hidayat, Y.; Supian, S.

    2018-03-01

    Steepest descent is known as the simplest gradient method. Recently, much research has been done on obtaining the appropriate step size in order to reduce the objective function value progressively. In this paper, the properties of the steepest descent method from the literature are reviewed, together with the advantages and disadvantages of each step size procedure. The development of the steepest descent method due to its step size procedure is discussed. In order to test the performance of each step size, we ran a steepest descent procedure in a C++ program. We applied it to an unconstrained optimization test problem with two variables and compared the numerical results of each step size procedure. Based on the numerical experiments, we conclude with the general computational features and weaknesses of each procedure in each problem case.
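
    For concreteness, here is a sketch of the method with the simplest (fixed) step-size rule on an invented two-variable test function; the paper's point is precisely that better step-size procedures replace the fixed alpha below:

      import numpy as np

      def grad_f(x):                          # gradient of f(x, y) = (x-1)^2 + 2(y+2)^2
          return np.array([2.0 * (x[0] - 1.0), 4.0 * (x[1] + 2.0)])

      x = np.zeros(2)
      alpha = 0.1                             # fixed step size
      for _ in range(200):
          x = x - alpha * grad_f(x)           # move along the negative gradient
      print(x)                                # approaches the minimizer (1, -2)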

  7. Desktop-based computer-assisted orthopedic training system for spinal surgery.

    PubMed

    Rambani, Rohit; Ward, James; Viant, Warren

    2014-01-01

    Simulation and surgical training have moved on since their inception at the end of the last century. Trainees are increasingly exposed to computer and laboratory training in different subspecialties, but more needs to be done in orthopedic simulation for spinal surgery. The objective was to develop a training system for pedicle screw fixation and validate its effectiveness in a cohort of junior orthopedic trainees. A fully simulated computer-navigated training system was developed to train junior orthopedic trainees to perform pedicle screw insertion in the lumbar spine, using real patient computed tomography scans to produce real-time fluoroscopic images of the lumbar spine. A total of 12 orthopedic senior house officers performed pedicle screw insertion in the lumbar spine before and after training on the system. The results were assessed with a scoring system that included the amount of time taken, the accuracy of pedicle screw insertion, and the number of exposures requested to complete the procedure. The results show a significant improvement in the time taken, the accuracy of fixation, and the number of exposures after training on the simulator system; this was statistically significant using the paired Student t test (p < 0.05). The fully simulated computer-navigated training system is an efficient training tool for young orthopedic trainees. The system can be used to augment training in the operating room, and trainees can acquire their skills in the comfort of their study room or a training room in the hospital. The system has the potential to be used in various other orthopedic procedures for learning technical skills in a manner that ensures a smooth escalation in task complexity, leading to better performance of procedures in the operating theater. Copyright © 2014 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  8. An XML Representation for Crew Procedures

    NASA Technical Reports Server (NTRS)

    Simpson, Richard C.

    2005-01-01

    NASA ensures safe operation of complex systems through the use of formally documented procedures, which encode the operational knowledge of the system as derived from system experts. Crew members use procedure documentation on the ground for training purposes and on board the space shuttle and space station to guide their activities. Investigators at JSC are developing a new representation for procedures that is content-based (as opposed to display-based). Instead of specifying how a procedure should look on the printed page, the content-based representation identifies the components of a procedure and, more importantly, how the components are related (e.g., how the activities within a procedure are sequenced; what resources need to be available for each activity). This approach allows different sets of rules to be created for displaying procedures on a computer screen, on a hand-held personal digital assistant (PDA), verbally, or on a printed page, and also allows intelligent reasoning processes to automatically interpret and use procedure definitions. During his NASA fellowship, Dr. Simpson examined how various industries represent procedures (also called business processes or workflows) in areas such as manufacturing, accounting, shipping, and customer service. A useful method for designing and evaluating workflow representation languages is to determine their ability to encode various workflow patterns, which depict abstract relationships between the components of a procedure removed from the context of a specific procedure or industry. Investigators have used this type of analysis to evaluate how well suited existing workflow representation languages are for various industries, based on the workflow patterns that commonly arise across industry-specific procedures. From this type of analysis, it is already clear that existing workflow representations capture discrete flow of control (i.e., when one activity should start and stop based on when other activities start and stop) but do not capture the flow of data, materials, resources, or priorities. Existing workflow representation languages are also limited to representing sequences of discrete activities and cannot encode procedures involving continuous flow of information or materials between activities.
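
    A content-based encoding might look like the following sketch: components plus their relations (sequencing, required resources) with no layout information. All names here are hypothetical, not NASA's representation:

      from dataclasses import dataclass, field
      from typing import List

      @dataclass
      class Activity:
          name: str
          requires: List[str] = field(default_factory=list)   # resources, not layout
          after: List[str] = field(default_factory=list)      # control-flow predecessors

      procedure = [
          Activity("depressurize_module", requires=["valve_V2"]),
          Activity("open_hatch", requires=["hatch_tool"], after=["depressurize_module"]),
      ]

    Separate rule sets can then render the same structure to a printed page, a PDA screen, or speech.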

  9. Models and analysis for multivariate failure time data

    NASA Astrophysics Data System (ADS)

    Shih, Joanna Huang

    The goal of this research is to develop and investigate models and analytic methods for multivariate failure time data. We compare models in terms of direct modeling of the margins, flexibility of the dependency structure, local vs. global measures of association, and ease of implementation. In particular, we study copula models and models produced by right neutral cumulative hazard functions and right neutral hazard functions. We examine the changes of association over time for families of bivariate distributions induced from these models by displaying their density contour plots, conditional density plots, the correlation curves of Doksum et al., and the local cross ratios of Oakes. We note that bivariate distributions with the same margins can exhibit quite different dependency structures. In addition to modeling, we study estimation procedures. For copula models, we investigate three estimation procedures. The first is full maximum likelihood. The second is two-stage maximum likelihood: at stage 1, we estimate the parameters in the margins by maximizing the marginal likelihood; at stage 2, we estimate the dependency structure with the margins fixed at their estimates. The third is two-stage partially parametric maximum likelihood, similar to the second, but with the margins estimated by the Kaplan-Meier estimate. We derive asymptotic properties for these three estimation procedures and compare their efficiency by Monte Carlo simulations and direct computations. For models produced by right neutral cumulative hazards and right neutral hazards, we derive the likelihood and investigate the properties of the maximum likelihood estimates. Finally, we develop goodness-of-fit tests for the dependency structure in the copula models. We derive a test statistic and its asymptotic properties based on the test of homogeneity of Zelterman and Chen (1988), and a graphical diagnostic procedure based on the empirical Bayes approach. We study the performance of these two methods using actual and computer-generated data.
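
    As a concrete instance of the copula construction (the Clayton family, a standard choice in this literature; the notation is generic rather than taken from the dissertation):

      C_\theta(u, v) = \left(u^{-\theta} + v^{-\theta} - 1\right)^{-1/\theta}, \quad \theta > 0,
      \qquad S(t_1, t_2) = C_\theta\!\left(S_1(t_1), S_2(t_2)\right),

    so the two-stage procedures first estimate the margins S_1 and S_2 (parametrically, or by Kaplan-Meier) and then maximize the likelihood over the dependence parameter theta alone.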

  10. Acceleration of fluoro-CT reconstruction for a mobile C-Arm on GPU and FPGA hardware: a simulation study

    NASA Astrophysics Data System (ADS)

    Xue, Xinwei; Cheryauka, Arvi; Tubbs, David

    2006-03-01

    CT imaging in interventional and minimally invasive surgery requires high-performance computing solutions that meet operating room demands, healthcare business requirements, and the constraints of a mobile C-arm system. The computational requirements of clinical procedures using CT-like data are increasing rapidly, mainly due to the need for rapid access to medical imagery during critical surgical procedures. The highly parallel nature of the Radon transform and CT algorithms enables embedded computing solutions utilizing a parallel processing architecture to realize a significant gain in computational intensity with comparable hardware and program coding/testing expenses. In this paper, using sample 2D and 3D CT problems, we explore the programming challenges and the potential benefits of embedded computing using commodity hardware components. The accuracy and performance results obtained on three computational platforms (a single CPU, a single GPU, and a solution based on FPGA technology) are analyzed. We have shown that hardware-accelerated CT image reconstruction can be achieved with similar levels of noise and clarity of feature when compared to program execution on a CPU, while gaining a performance increase of one or more orders of magnitude. 3D cone-beam or helical CT reconstruction and a variety of volumetric image processing applications will benefit from similar accelerations.

  11. Base Flow Model Validation

    NASA Technical Reports Server (NTRS)

    Sinha, Neeraj; Brinckman, Kevin; Jansen, Bernard; Seiner, John

    2011-01-01

    A method was developed for obtaining propulsive base flow data in both hot and cold jet environments, at Mach numbers and altitudes of relevance to NASA launcher designs. The base flow data were used to perform computational fluid dynamics (CFD) turbulence model assessments of base flow predictive capabilities, in order to provide increased confidence in base thermal and pressure load predictions obtained from computational modeling efforts. Predictive CFD analyses were used in the design of the experiments, available propulsive models were used to reduce program costs and increase the likelihood of success, and a wind tunnel facility was used. The data obtained allowed assessment of CFD/turbulence models in a complex flow environment, working within a building-block procedure to validation, in which cold, non-reacting test data were used first for validation, followed by more complex reacting base flow validation.

  12. The evolution of spinal instrumentation for the management of occipital cervical and cervicothoracic junctional injuries.

    PubMed

    Smucker, Joseph D; Sasso, Rick C

    2006-05-15

    Independent computer-based literature review of articles pertaining to instrumentation and fusion of junctional injuries of the cervical spine. To review and discuss the evolution of instrumentation techniques and systems used in the treatment of cervical spine junctional injuries. Instrumentation of junctional injuries of the cervical spine has been limited historically by failure to achieve rigid internal fixation in multiple planes. The evolution of these techniques has required increased insight into the morphology and unique biomechanics of the structures to be instrumented. Computer-based literature search of Ovid and PubMed databases. Extensive literature search yielded insights into the evolution of systems initially based on onlay bone graft combined with wiring techniques. Such techniques have come to include systems incorporating rigid, longitudinal struts that accommodate multiplanar screws placed in the lateral masses, pedicles, transarticular regions, and occipital bone. Despite a rapid evolution of techniques and instrumentation technologies, it remains incumbent on the physician to provide the patient with a surgical procedure that balances the likelihood of a favorable outcome with the risk inherent in the implementation of the procedure.

  13. Learning, Realizability and Games in Classical Arithmetic

    NASA Astrophysics Data System (ADS)

    Aschieri, Federico

    2010-12-01

    In this dissertation we provide mathematical evidence that the concept of learning can be used to give a new and intuitive computational semantics of classical proofs in various fragments of Predicative Arithmetic. First, we extend Kreisel's modified realizability to a classical fragment of first-order Arithmetic, Heyting Arithmetic plus EM1 (the excluded middle axiom restricted to Sigma^0_1 formulas). We introduce a new realizability semantics we call "Interactive Learning-Based Realizability". Our realizers are self-correcting programs, which learn from their errors and evolve through time. Second, we extend the class of learning-based realizers to a classical version PCFclass of PCF and then compare the resulting notion of realizability with Coquand's game semantics, proving a full soundness and completeness result. In particular, we show there is a one-to-one correspondence between realizers and recursive winning strategies in the 1-backtracking version of Tarski games. Third, we provide a complete and fully detailed constructive analysis of learning as it arises in learning-based realizability for HA+EM1, in Avigad's update procedures, and in the epsilon substitution method for Peano Arithmetic (PA). We present new constructive techniques for bounding the length of learning processes and apply them to reprove, by means of our theory, the classic result of Gödel that the provably total functions of PA can be represented in Gödel's system T. Last, we give an axiomatization of the kind of learning that is needed to computationally interpret Predicative classical second-order Arithmetic. Our work extends Avigad's and generalizes the concept of update procedure to the transfinite case. Transfinite update procedures have to learn values of transfinite sequences of non-computable functions in order to extract witnesses from classical proofs.

  14. Identification procedure for epistemic uncertainties using inverse fuzzy arithmetic

    NASA Astrophysics Data System (ADS)

    Haag, T.; Herrmann, J.; Hanss, M.

    2010-10-01

    For the mathematical representation of systems with epistemic uncertainties, arising, for example, from simplifications in the modeling procedure, models with fuzzy-valued parameters prove to be a suitable and promising approach. In practice, however, the determination of these parameters turns out to be a non-trivial problem. The identification procedure to appropriately update these parameters on the basis of a reference output (measurement or output of an advanced model) requires the solution of an inverse problem. Against this background, an inverse method for the computation of the fuzzy-valued parameters of a model with epistemic uncertainties is presented. This method stands out due to the fact that it only uses feedforward simulations of the model, based on the transformation method of fuzzy arithmetic, along with the reference output. An inversion of the system equations is not necessary. The advancement of the method presented in this paper consists of the identification of multiple input parameters based on a single reference output or measurement. An optimization is used to solve the resulting underdetermined problems by minimizing the uncertainty of the identified parameters. Regions where the identification procedure is reliable are determined by the computation of a feasibility criterion which is also based on the output data of the transformation method only. For a frequency response function of a mechanical system, this criterion allows a restriction of the identification process to some special range of frequency where its solution can be guaranteed. Finally, the practicability of the method is demonstrated by covering the measured output of a fluid-filled piping system by the corresponding uncertain FE model in a conservative way.

  15. Vision based flight procedure stereo display system

    NASA Astrophysics Data System (ADS)

    Shen, Xiaoyun; Wan, Di; Ma, Lan; He, Yuncheng

    2008-03-01

    A virtual reality flight procedure vision system is introduced in this paper. The digital flight map database is established based on a Geographic Information System (GIS) and high-definition satellite remote sensing photos. The flight approach area database is established through a computer 3D modeling system and GIS. The area texture is generated from the remote sensing photos and aerial photographs at various levels of detail. According to the flight approach procedure, the flight navigation information is linked to the database. The flight approach area view can be displayed dynamically according to the designed flight procedure. The flight approach area images are rendered in two channels, one for left-eye images and the other for right-eye images. Through the polarized stereoscopic projection system, pilots and aircrew can get a vivid 3D view of the flight destination approach area. Using this system in the pilots' preflight preparation procedure, the aircrew can obtain more vivid information about the flight destination approach area. This system can improve an aviator's confidence before carrying out a flight mission and, accordingly, improve flight safety. The system is also useful for validating visual flight procedure designs, and it aids flight procedure design.

  16. Boundary condition computational procedures for inviscid, supersonic steady flow field calculations

    NASA Technical Reports Server (NTRS)

    Abbett, M. J.

    1971-01-01

    Results are given of a comparative study of numerical procedures for computing solid wall boundary points in supersonic inviscid flow calculations. Twenty-five different calculation procedures were tested on two sample problems: a simple expansion wave and a simple compression (two-dimensional steady flow). A simple calculation procedure was developed. The merits and shortcomings of the various procedures are discussed, along with complications for three-dimensional and time-dependent flows.

  17. Modeling Geometry and Progressive Failure of Material Interfaces in Plain Weave Composites

    NASA Technical Reports Server (NTRS)

    Hsu, Su-Yuen; Cheng, Ron-Bin

    2010-01-01

    A procedure combining a geometrically nonlinear, explicit-dynamics contact analysis, computer aided design techniques, and elasticity-based mesh adjustment is proposed to efficiently generate realistic finite element models for meso-mechanical analysis of progressive failure in textile composites. In the procedure, the geometry of fiber tows is obtained by imposing a fictitious expansion on the tows. Meshes resulting from the procedure are conformal with the computed tow-tow and tow-matrix interfaces but are incongruent at the interfaces. The mesh interfaces are treated as cohesive contact surfaces not only to resolve the incongruence but also to simulate progressive failure. The method is employed to simulate debonding at the material interfaces in a ceramic-matrix plain weave composite with matrix porosity and in a polymeric matrix plain weave composite without matrix porosity, both subject to uniaxial cyclic loading. The numerical results indicate progression of the interfacial damage during every loading and reverse loading event in a constant strain amplitude cyclic process. However, the composites show different patterns of damage advancement.

  18. Probing RNA Native Conformational Ensembles with Structural Constraints.

    PubMed

    Fonseca, Rasmus; van den Bedem, Henry; Bernauer, Julie

    2016-05-01

    Noncoding ribonucleic acids (RNA) play a critical role in a wide variety of cellular processes, ranging from regulating gene expression to post-translational modification and protein synthesis. Their activity is modulated by highly dynamic exchanges between three-dimensional conformational substates, which are difficult to characterize experimentally and computationally. Here, we present an innovative, entirely kinematic computational procedure to efficiently explore the native ensemble of RNA molecules. Our procedure projects degrees of freedom onto a subspace of conformation space defined by distance constraints in the tertiary structure. The dimensionality reduction enables efficient exploration of conformational space. We show that the conformational distributions obtained with our method broadly sample the conformational landscape observed in NMR experiments. Compared to normal mode analysis-based exploration, our procedure diffuses faster through the experimental ensemble while also accessing conformational substates to greater precision. Our results suggest that conformational sampling with a highly reduced but fully atomistic representation of noncoding RNA expresses key features of their dynamic nature.

  19. Three-phase short circuit calculation method based on pre-computed surface for doubly fed induction generator

    NASA Astrophysics Data System (ADS)

    Ma, J.; Liu, Q.

    2018-02-01

    This paper presents an improved short-circuit calculation method, based on a pre-computed surface, to determine the short-circuit current of a distribution system with multiple doubly fed induction generators (DFIGs). The short-circuit current injected into the power grid by a DFIG is determined by its low voltage ride through (LVRT) control and protection under a grid fault. However, existing methods are difficult to apply in engineering practice because of the complexity of the DFIG's short-circuit behavior. The proposed method constructs the surface of short-circuit current as a function of the calculating impedance and the open-circuit voltage, and derives the short-circuit currents taking into account the rotor excitation and the crowbar activation time. Finally, the pre-computed surfaces of short-circuit current at different times were established, and the procedure of DFIG short-circuit calculation considering LVRT was designed. The correctness of the proposed method was verified by simulation.
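
    The lookup step this enables is plain grid interpolation: once the surface of short-circuit current over (calculating impedance, open-circuit voltage) is tabulated offline, fault currents come from fast interpolation instead of a full DFIG transient model. A sketch with an invented placeholder surface:

      import numpy as np
      from scipy.interpolate import RegularGridInterpolator

      z = np.linspace(0.1, 2.0, 50)            # calculating impedance (p.u.)
      u = np.linspace(0.2, 1.0, 40)            # open-circuit voltage (p.u.)
      Z, U = np.meshgrid(z, u, indexing="ij")
      I_sc = U / Z                             # placeholder; the real surface is pre-computed
      lookup = RegularGridInterpolator((z, u), I_sc)
      print(lookup([[0.75, 0.9]]))             # interpolated short-circuit current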

  20. VOFTools - A software package of calculation tools for volume of fluid methods using general convex grids

    NASA Astrophysics Data System (ADS)

    López, J.; Hernández, J.; Gómez, P.; Faura, F.

    2018-02-01

    The VOFTools library includes efficient analytical and geometrical routines for (1) area/volume computation, (2) truncation operations that typically arise in VOF (volume of fluid) methods, (3) area/volume conservation enforcement (VCE) in PLIC (piecewise linear interface calculation) reconstruction, and (4) computation of the distance from a given point to the reconstructed interface. The computation of a polyhedron volume uses an efficient formula based on a quadrilateral decomposition and a 2D projection of each polyhedron face. The analytical VCE method is based on coupling an interpolation procedure to bracket the solution with an improved final calculation step based on the above volume computation formula. Although the library was originally created to help develop highly accurate advection and reconstruction schemes in the context of VOF methods, it may have more general applications. To assess the performance of the supplied routines, different tests, which are provided in FORTRAN and C, were implemented for several 2D and 3D geometries.
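
    The 2D analogue of the library's face-projection volume formula is the shoelace area of a polygon from its vertex coordinates; a sketch of the idea only (VOFTools itself ships FORTRAN and C routines):

      def polygon_area(xs, ys):
          """Shoelace area of a simple polygon with vertices given in order."""
          n = len(xs)
          s = sum(xs[i] * ys[(i + 1) % n] - xs[(i + 1) % n] * ys[i] for i in range(n))
          return abs(s) / 2.0

      print(polygon_area([0, 1, 1, 0], [0, 0, 1, 1]))   # unit square -> 1.0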

  1. BESIII Physical Analysis on Hadoop Platform

    NASA Astrophysics Data System (ADS)

    Huo, Jing; Zang, Dongsong; Lei, Xiaofeng; Li, Qiang; Sun, Gongxing

    2014-06-01

    In the past 20 years, computing clusters have been widely used for High Energy Physics data processing. Jobs running on a traditional cluster with a data-to-computing structure have to read large volumes of data over the network to the computing nodes for analysis, making I/O latency a bottleneck of the whole system. The new distributed computing technology based on the MapReduce programming model has many advantages, such as high concurrency, high scalability, and high fault tolerance, and it is well suited to dealing with Big Data. This paper introduces the idea of using the MapReduce model for BESIII physics analysis and presents a new data analysis system structure based on the Hadoop platform, which not only greatly improves the efficiency of data analysis but also reduces the cost of building the system. Moreover, this paper establishes an event pre-selection system based on an event-level metadata (TAGs) database to optimize the data analysis procedure.
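
    The MapReduce fit for event pre-selection is easy to see in miniature: the map phase classifies each event independently, and the reduce phase aggregates the partial results. The event fields and the selection cut below are invented for illustration:

      from collections import Counter
      from functools import reduce

      events = [{"n_tracks": 2}, {"n_tracks": 4}, {"n_tracks": 3}]

      def map_phase(evt):                      # runs independently on each data split
          return ("selected" if evt["n_tracks"] >= 3 else "rejected", 1)

      def reduce_phase(acc, kv):               # merges per-split partial counts
          acc[kv[0]] += kv[1]
          return acc

      print(reduce(reduce_phase, map(map_phase, events), Counter()))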

  2. Experimental Evaluation of Computer-Aided Tele-operation (CATO) and Computer-Aided Robotic Manipulation (CARMAN) Technology

    DTIC Science & Technology

    2010-08-01

    ... Participants were given the NASA-TLX subjective workload rating at the end of each task (appendix C; Hart and Staveland, 1987). The NASA-TLX is a multi-dimensional rating procedure that derives an overall workload score based on a weighted average of ratings on six subscales...

  3. Computer-Based Technologies in Dentistry: Types and Applications

    PubMed Central

    Albuha Al-Mussawi, Raja’a M.; Farid, Farzaneh

    2016-01-01

    During dental education, dental students learn how to examine patients, make diagnoses, plan treatment, and perform dental procedures perfectly and efficiently. However, progress in computer-based technologies, including virtual reality (VR) simulators, augmented reality (AR), and computer-aided design/computer-aided manufacturing (CAD/CAM) systems, has resulted in new modalities for instruction and practice of dentistry. Virtual reality dental simulators enable repeated, objective, and assessable practice in various controlled situations. Superimposition of three-dimensional (3D) virtual images on actual images in AR allows surgeons to simultaneously visualize the surgical site and superimpose informative 3D images of invisible regions on the surgical site to serve as a guide. The use of CAD/CAM systems for designing and manufacturing dental appliances and prostheses is well established. This article reviews computer-based technologies, their application in dentistry, and their potentials and limitations in promoting dental education, training, and practice. Practitioners will be able to choose from a broader spectrum of options in their field of practice by becoming familiar with new modalities of training and practice. PMID:28392819

  4. Computer-Based Technologies in Dentistry: Types and Applications.

    PubMed

    Albuha Al-Mussawi, Raja'a M; Farid, Farzaneh

    2016-06-01

    During dental education, dental students learn how to examine patients, make diagnoses, plan treatment and perform dental procedures precisely and efficiently. However, progress in computer-based technologies, including virtual reality (VR) simulators, augmented reality (AR) and computer aided design/computer aided manufacturing (CAD/CAM) systems, has resulted in new modalities for instruction and practice of dentistry. Virtual reality dental simulators enable repeated, objective and assessable practice in various controlled situations. Superimposition of three-dimensional (3D) virtual images on actual images in AR allows surgeons to simultaneously visualize the surgical site and superimpose informative 3D images of invisible regions on the surgical site to serve as a guide. The use of CAD/CAM systems for designing and manufacturing of dental appliances and prostheses has been well established. This article reviews computer-based technologies, their application in dentistry and their potentials and limitations in promoting dental education, training and practice. Practitioners will be able to choose from a broader spectrum of options in their field of practice by becoming familiar with new modalities of training and practice.

  5. The kinetics-based enzyme-linked immunosorbent assay for coronavirus antibodies in cats: calibration to the indirect immunofluorescence assay and computerized standardization of results through normalization to control values.

    PubMed Central

    Barlough, J E; Jacobson, R H; Downing, D R; Lynch, T J; Scott, F W

    1987-01-01

    The computer-assisted, kinetics-based enzyme-linked immunosorbent assay for coronavirus antibodies in cats was calibrated to the conventional indirect immunofluorescence assay by linear regression analysis and computerized interpolation (generation of "immunofluorescence assay-equivalent" titers). Procedures were developed for normalization and standardization of kinetics-based enzyme-linked immunosorbent assay results through incorporation of five different control sera of predetermined ("expected") titer in daily runs. When used with such sera and with computer assistance, the kinetics-based enzyme-linked immunosorbent assay minimized both within-run and between-run variability while also allowing for efficient data reduction, statistical analysis, and reporting of results. PMID:3032390

  6. The kinetics-based enzyme-linked immunosorbent assay for coronavirus antibodies in cats: calibration to the indirect immunofluorescence assay and computerized standardization of results through normalization to control values.

    PubMed

    Barlough, J E; Jacobson, R H; Downing, D R; Lynch, T J; Scott, F W

    1987-01-01

    The computer-assisted, kinetics-based enzyme-linked immunosorbent assay for coronavirus antibodies in cats was calibrated to the conventional indirect immunofluorescence assay by linear regression analysis and computerized interpolation (generation of "immunofluorescence assay-equivalent" titers). Procedures were developed for normalization and standardization of kinetics-based enzyme-linked immunosorbent assay results through incorporation of five different control sera of predetermined ("expected") titer in daily runs. When used with such sera and with computer assistance, the kinetics-based enzyme-linked immunosorbent assay minimized both within-run and between-run variability while also allowing for efficient data reduction, statistical analysis, and reporting of results.
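
    A generic sketch of the two computations described, per-run normalization of kinetic rates to control sera of predetermined titer, and a regression-based interpolation of "immunofluorescence assay-equivalent" titers (all numbers invented; this is not the authors' software):

        import numpy as np

        # Control sera: observed kinetic ELISA rates this run vs their
        # predetermined ("expected") values
        observed = np.array([0.12, 0.25, 0.48, 0.90, 1.60])
        expected = np.array([0.10, 0.24, 0.50, 0.95, 1.55])
        factor = expected.mean() / observed.mean()   # run normalization

        # Calibration curve: log2 IFA titer regressed on normalized rate
        rates = np.array([0.15, 0.40, 0.80, 1.50])
        log2_titers = np.array([5.0, 7.0, 9.0, 11.0])
        slope, intercept = np.polyfit(rates, log2_titers, 1)

        def ifa_equivalent_titer(raw_rate):
            """Normalize a sample's kinetic rate, then interpolate its
            IFA-equivalent endpoint titer from the calibration line."""
            return 2.0 ** (slope * raw_rate * factor + intercept)

        print(round(ifa_equivalent_titer(0.60)))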

  7. 10 CFR Appendix I to Part 504 - Procedures for the Computation of the Real Cost of Capital

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    10 CFR Part 504, Appendix I: Procedures for the Computation of the Real Cost of Capital (Energy; Department of Energy (Continued); Alternate Fuels; Existing Powerplants). (a) The firm's real after-tax weighted average...
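
    As generic background only (the appendix itself specifies the regulatory formula), a real after-tax weighted average cost of capital can be sketched by combining after-tax debt and equity costs and deflating the nominal rate with the Fisher relation:

        def real_after_tax_wacc(cost_equity, cost_debt, tax_rate,
                                equity_frac, inflation):
            """Nominal after-tax WACC deflated to real terms via the
            Fisher relation (illustrative only, not the 10 CFR 504
            procedure; all inputs invented)."""
            nominal = (equity_frac * cost_equity
                       + (1 - equity_frac) * cost_debt * (1 - tax_rate))
            return (1 + nominal) / (1 + inflation) - 1

        print(round(real_after_tax_wacc(0.14, 0.09, 0.35, 0.55, 0.06), 4))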

  8. Numerical Investigation of the Interaction of Counterflowing Jets and Supersonic Capsule Flows

    NASA Technical Reports Server (NTRS)

    Venkatachari, Balaji Shankar; Ito, Yasushi; Cheng, Gary; Chang, Chau-Lyan

    2011-01-01

    Use of counterflowing jets ejected into supersonic freestreams as a flow control concept to modify the external flowfield has gained renewed interest with regard to potential retropropulsion applications pertinent to entry, descent, and landing investigations. This study describes numerical computations of such a concept for a scaled wind-tunnel capsule model by employing the space-time conservation element solution element viscous flow solver with unstructured meshes. Both steady-state and time-accurate computations are performed for several configurations with different counterflowing jet Mach numbers. Axisymmetric computations exploring the effect of the jet flow rate and jet Mach number on the flow stability, jet interaction with the bow shock and its subsequent impact on the aerodynamic and aerothermal loads on the capsule body are carried out. Similar to previous experimental findings, both long and short penetration modes exist at a wind-tunnel Mach number of 3.48. It was found that both modes exhibit non-stationary behavior and the former is much more unstable than the latter. It was also found that the unstable long penetration mode only exists in a relatively small range of the jet mass flow rate. Solution-based mesh refinement procedures are used to improve solution accuracy and provide guidelines for a more effective mesh generation procedure for parametric studies. Details of the computed flowfields also serve as a means to broaden the knowledge base for future retropropulsion design studies.

  9. Design and Implementation of a Multimedia CD-ROM-Based Directed Study Preservice Teacher Education Course.

    ERIC Educational Resources Information Center

    Morse, Timothy

    This paper discusses the development and implementation of a computer-based directed study preservice teacher education course. Specifically, it explains the procedures followed to create the ten multimedia CD-ROMs that comprise the course, and how preservice teachers use the CD-ROMs. Topics to be covered include (1) rationale for course creation;…

  10. Effects of Video-Based and Applied Problems on the Procedural Math Skills of Average- and Low-Achieving Adolescents.

    ERIC Educational Resources Information Center

    Bottge, Brian A.; Heinrichs, Mary; Chan, Shih-Yi; Mehta, Zara Dee; Watson, Elizabeth

    2003-01-01

    This study examined effects of video-based, anchored instruction and applied problems on the ability of 11 low-achieving (LA) and 26 average-achieving (AA) eighth graders to solve computation and word problems. Performance for both groups was higher during anchored instruction than during baseline, but no differences were found between instruction…

  11. A Reaction to Mazoue's Deconstructed Campus

    ERIC Educational Resources Information Center

    Shrock, Sharon A.

    2012-01-01

    Mazoue's ("J Comput High Educ," 2012) article, "The Deconstructed Campus," is examined from the perspective of instructional design practice. Concerns center on: the knowledge base of precision instruction; the differential effectiveness of teaching procedural as opposed to declarative knowledge; the reliance on assessment of online learning; and…

  12. On Automatic Assessment and Conceptual Understanding

    ERIC Educational Resources Information Center

    Rasila, Antti; Malinen, Jarmo; Tiitu, Hannu

    2015-01-01

    We consider two complementary aspects of mathematical skills, i.e. "procedural fluency" and "conceptual understanding," from a point of view that is related to modern e-learning environments and computer-based assessment. Pedagogical background of teaching mathematics is discussed, and it is proposed that the traditional book…

  13. Bistatic passive radar simulator with spatial filtering subsystem

    NASA Astrophysics Data System (ADS)

    Hossa, Robert; Szlachetko, Boguslaw; Lewandowski, Andrzej; Górski, Maksymilian

    2009-06-01

    The purpose of this paper is to briefly introduce the structure and features of a virtual passive FM radar implemented in the Matlab numerical computing environment, and to present alternative modes of its operation. The proposed solution is based on an analytic representation of the transmitted direct signals and the reflected echo signals. As the spatial filtering subsystem, a beamforming network in ULA and UCA dipole configurations dedicated to the bistatic radar concept is considered, and computationally efficient procedures are presented in detail. Finally, exemplary results from computer simulations with the elaborated virtual simulator are provided and discussed.
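
    A conventional delay-and-sum beamformer for a uniform linear array (ULA), the simplest form such a spatial-filtering subsystem can take, can be sketched as follows (element count, spacing and signal are invented; this is not the simulator's code):

        import numpy as np

        def ula_steering_vector(n_elem, d_over_lambda, theta_rad):
            """Steering vector of an n-element ULA with spacing d
            (in wavelengths) toward angle theta from broadside."""
            k = np.arange(n_elem)
            return np.exp(2j * np.pi * d_over_lambda * k * np.sin(theta_rad))

        def beamform(snapshots, steer):
            """Delay-and-sum: weighted sum of snapshots (n_elem x n_samp)."""
            w = steer / len(steer)
            return w.conj() @ snapshots

        # Toy example: a plane wave from 20 degrees, steered to 20 degrees
        n, d = 8, 0.5
        theta = np.deg2rad(20.0)
        t = np.arange(64)
        signal = np.exp(1j * 0.3 * t)                  # narrowband source
        snaps = np.outer(ula_steering_vector(n, d, theta), signal)
        y = beamform(snaps, ula_steering_vector(n, d, theta))
        print(np.allclose(y, signal))                  # coherent gain -> True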

  14. Advanced computer architecture specification for automated weld systems

    NASA Technical Reports Server (NTRS)

    Katsinis, Constantine

    1994-01-01

    This report describes the requirements for an advanced automated weld system and the associated computer architecture, and defines the overall system specification from a broad perspective. Starting from the requirements of welding procedures as they relate to an integrated multiaxis motion control and sensor architecture, the computer system requirements are developed around a proven multiple-processor architecture: an expandable, distributed-memory, single-global-bus design containing individual processors assigned to specific tasks that support sensor or control processes. The specified architecture is sufficiently flexible to integrate previously developed equipment, be upgradable, and allow on-site modifications.

  15. Parallel-vector computation for linear structural analysis and non-linear unconstrained optimization problems

    NASA Technical Reports Server (NTRS)

    Nguyen, D. T.; Al-Nasra, M.; Zhang, Y.; Baddourah, M. A.; Agarwal, T. K.; Storaasli, O. O.; Carmona, E. A.

    1991-01-01

    Several parallel-vector computational improvements to the unconstrained optimization procedure are described which speed up the structural analysis-synthesis process. A fast parallel-vector Choleski-based equation solver, pvsolve, is incorporated into the well-known SAP-4 general-purpose finite-element code. The new code, denoted PV-SAP, is tested for static structural analysis. Initial results on a four processor CRAY 2 show that using pvsolve reduces the equation solution time by a factor of 14-16 over the original SAP-4 code. In addition, parallel-vector procedures for the Golden Block Search technique and the BFGS method are developed and tested for nonlinear unconstrained optimization. A parallel version of an iterative solver and the pvsolve direct solver are incorporated into the BFGS method. Preliminary results on nonlinear unconstrained optimization test problems, using pvsolve in the analysis, show excellent parallel-vector performance indicating that these parallel-vector algorithms can be used in a new generation of finite-element based structural design/analysis-synthesis codes.

  16. Implementation of the diagonalization-free algorithm in the self-consistent field procedure within the four-component relativistic scheme.

    PubMed

    Hrdá, Marcela; Kulich, Tomáš; Repiský, Michal; Noga, Jozef; Malkina, Olga L; Malkin, Vladimir G

    2014-09-05

    A recently developed Thouless-expansion-based diagonalization-free approach for improving the efficiency of self-consistent field (SCF) methods (Noga and Šimunek, J. Chem. Theory Comput. 2010, 6, 2706) has been adapted to the four-component relativistic scheme and implemented within the program package ReSpect. In addition to the implementation, the method has been thoroughly analyzed, particularly with respect to cases for which it is difficult or computationally expensive to find a good initial guess. Based on this analysis, several modifications of the original algorithm, refining its stability and efficiency, are proposed. To demonstrate the robustness and efficiency of the improved algorithm, we present the results of four-component diagonalization-free SCF calculations on several heavy-metal complexes, the largest of which contains more than 80 atoms (about 6000 4-spinor basis functions). The diagonalization-free procedure is about twice as fast as the corresponding diagonalization. Copyright © 2014 Wiley Periodicals, Inc.

  17. Kuhn-Tucker optimization based reliability analysis for probabilistic finite elements

    NASA Technical Reports Server (NTRS)

    Liu, W. K.; Besterfield, G.; Lawrence, M.; Belytschko, T.

    1988-01-01

    The fusion of probability finite element method (PFEM) and reliability analysis for fracture mechanics is considered. Reliability analysis with specific application to fracture mechanics is presented, and computational procedures are discussed. Explicit expressions for the optimization procedure with regard to fracture mechanics are given. The results show the PFEM is a very powerful tool in determining the second-moment statistics. The method can determine the probability of failure or fracture subject to randomness in load, material properties and crack length, orientation, and location.

  18. Investigation of lightweight designs and materials for LO2 and LH2 propellant tanks for space vehicles, phase 2 and phase 3

    NASA Technical Reports Server (NTRS)

    1976-01-01

    Full size Tug LO2 and LH2 tank configurations were defined, based on selected tank geometries. These configurations were then locally modeled for computer stress analysis. A large subscale test tank, representing the selected Tug LO2 tank, was designed and analyzed. This tank was fabricated using procedures which represented production operations. An evaluation test program was outlined and a test procedure defined. The necessary test hardware was also fabricated.

  19. Modal control theory and application to aircraft lateral handling qualities design

    NASA Technical Reports Server (NTRS)

    Srinathkumar, S.

    1978-01-01

    A multivariable synthesis procedure based on eigenvalue/eigenvector assignment is reviewed and is employed to develop a systematic design procedure to meet the lateral handling qualities design objectives of a fighter aircraft over a wide range of flight conditions. The closed loop modal characterization developed provides significant insight into the design process and plays a pivotal role in the synthesis of robust feedback systems. The simplicity of the synthesis algorithm yields an efficient computer aided interactive design tool for flight control system synthesis.

  20. Velocity precision measurements using laser Doppler anemometry

    NASA Astrophysics Data System (ADS)

    Dopheide, D.; Taux, G.; Narjes, L.

    1985-07-01

    A laser Doppler anemometer (LDA) was calibrated to determine its applicability to high pressure measurements (up to 10 bars) for industrial purposes. The measurement procedure with the LDA and the computerized experimental layouts are presented. The calibration procedure is based on the absolute accuracy of the Doppler frequency and on calibration of the interference fringe spacing. A four-quadrant detector allows comparison of the fringe-spacing measurements with computed profiles. Further development of the LDA is recommended to increase accuracy (to 0.1% uncertainty) and to apply the method industrially.
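
    In laser Doppler anemometry the velocity follows directly from the measured Doppler frequency and the fringe spacing of the interference pattern; a minimal sketch of this conversion (beam parameters invented):

        import math

        def lda_velocity(f_doppler_hz, wavelength_m, half_angle_rad):
            """v = f_D * d_f, with fringe spacing
            d_f = lambda / (2 * sin(theta/2)) for beam half-angle theta/2."""
            fringe_spacing = wavelength_m / (2.0 * math.sin(half_angle_rad))
            return f_doppler_hz * fringe_spacing

        # 514.5 nm argon-ion line, 5 degree half-angle, 2 MHz Doppler shift
        print(lda_velocity(2.0e6, 514.5e-9, math.radians(5.0)))  # ~5.9 m/s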

  1. Computer aided planning of orthopaedic surgeries: the definition of generic planning steps for bone removal procedures.

    PubMed

    Putzer, David; Moctezuma, Jose Luis; Nogler, Michael

    2017-11-01

    An increasing number of orthopaedic surgeons are using computer aided planning tools for bone removal applications. The aim of the study was to consolidate a set of generic functions to be used for 3D computer assisted planning or simulation. A limited subset of 30 surgical procedures was analyzed, and the resulting functions were verified against 243 surgical procedures from a surgical atlas. Fourteen generic functions to be used in 3D computer assisted planning and simulations were extracted. Our results showed that the average procedure comprises 14 ± 10 (SD) steps, with ten different generic planning steps and four generic bone removal steps. In conclusion, the study shows that with a limited number of 14 planning functions it is possible to perform the 243 surgical procedures from Campbell's Operative Orthopaedics. The results may be used as a basis for versatile generic intraoperative planning software.

  2. Interactive computer simulations of knee-replacement surgery.

    PubMed

    Gunther, Stephen B; Soto, Gabriel E; Colman, William W

    2002-07-01

    Current surgical training programs in the United States are based on an apprenticeship model. This model is outdated because it does not provide conceptual scaffolding, promote collaborative learning, or offer constructive reinforcement. Our objective was to create a more useful approach by preparing students and residents for operative cases using interactive computer simulations of surgery. Total-knee-replacement surgery (TKR) is an ideal procedure to model on the computer because there is a systematic protocol for the procedure. Also, this protocol is difficult to learn by the apprenticeship model because of the multiple instruments that must be used in a specific order. We designed an interactive computer tutorial to teach medical students and residents how to perform knee-replacement surgery. We also aimed to reinforce the specific protocol of the operative procedure. Our final goal was to provide immediate, constructive feedback. We created a computer tutorial by generating three-dimensional wire-frame models of the surgical instruments. Next, we applied a surface to the wire-frame models using three-dimensional modeling. Finally, the three-dimensional models were animated to simulate the motions of an actual TKR. The result is a step-by-step tutorial that teaches and tests the correct sequence of steps in a TKR. The student or resident must select the correct instruments in the correct order. The learner is encouraged to learn the stepwise surgical protocol through repetitive use of the computer simulation. Constructive feedback is acquired through a grading system, which rates the student's or resident's ability to perform the task in the correct order. The grading system also accounts for the time required to perform the simulated procedure. We evaluated the efficacy of this teaching technique by testing medical students who learned by the computer simulation and those who learned by reading the surgical protocol manual. Both groups then performed TKR on manufactured bone models using real instruments. Their technique was graded with the standard protocol. The students who learned on the computer simulation performed the task in a shorter time and with fewer errors than the control group. They were also more engaged in the learning process. Surgical training programs generally lack a consistent approach to preoperative education related to surgical procedures. This interactive computer tutorial has allowed us to make a quantum leap in medical student and resident teaching in our orthopedic department because the students actually participate in the entire process. Our technique provides a linear, sequential method of skill acquisition and direct feedback, which is ideally suited for learning stepwise surgical protocols. Since our initial evaluation has shown the efficacy of this program, we have implemented this teaching tool into our orthopedic curriculum. Our plans for future work with this simulator include modeling procedures involving other anatomic areas of interest, such as the hip and shoulder.

  3. Simulating Laboratory Procedures.

    ERIC Educational Resources Information Center

    Baker, J. E.; And Others

    1986-01-01

    Describes the use of computer assisted instruction in a medical microbiology course. Presents examples of how computer assisted instruction can present case histories in which the laboratory procedures are simulated. Discusses an authoring system used to prepare computer simulations and provides one example of a case history dealing with fractured…

  4. On the Solution of the Three-Dimensional Flowfield About a Flow-Through Nacelle. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Compton, William Bernard

    1985-01-01

    The solution of the three-dimensional flow field for a flow-through nacelle was studied. Both inviscid and viscous-inviscid interacting solutions were examined. Inviscid solutions were obtained with two different computational procedures for solving the three-dimensional Euler equations. The first procedure employs an alternating-direction implicit numerical algorithm and required the development of a complete computational model for the nacelle problem. The second computational technique employs a fourth-order Runge-Kutta numerical algorithm which was modified to fit the nacelle problem. Viscous effects on the flow field were evaluated with a viscous-inviscid interacting computational model. This model was constructed by coupling the explicit Euler solution procedure with a lag-entrainment boundary layer solution procedure in a global iteration scheme. The computational techniques were used to compute the flow field for a long-duct turbofan engine nacelle at freestream Mach numbers of 0.80 and 0.94 and angles of attack of 0 and 4 deg.

  5. Computer Synthesis Approaches of Hyperboloid Gear Drives with Linear Contact

    NASA Astrophysics Data System (ADS)

    Abadjiev, Valentin; Kawasaki, Haruhisa

    2014-09-01

    Computer-aided design has made it possible to build software both for scientific research in gearing theory and for scientific support of gear drive manufacture. The computer programs described here are based on mathematical models resulting from such research. Modern gear transmissions require new mathematical approaches to their geometric, technological and strength analysis. The process of optimization, synthesis and design relies on iteration procedures that find an optimal solution by varying definite parameters. The study is dedicated to the methodology adopted in creating software for the synthesis of a class of high-reduction hyperboloid gears, Spiroid and Helicon ones (Spiroid and Helicon are trademarks registered by the Illinois Tool Works, Chicago, Ill). The basic computer products developed are software based on original mathematical models. They rest on two mathematical models for the synthesis: "upon a pitch contact point" and "upon a mesh region". Computer programs are worked out on the basis of the described mathematical models, and the relations between them are shown. The application of these approaches to the synthesis of the gear drives in question is illustrated.

  6. Expansion of Tabulated Scattering Matrices in Generalized Spherical Functions

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Geogdzhayev, Igor V.; Yang, Ping

    2016-01-01

    An efficient way to solve the vector radiative transfer equation for plane-parallel turbid media is to Fourier-decompose it in azimuth. This methodology is typically based on the analytical computation of the Fourier components of the phase matrix and is predicated on the knowledge of the coefficients appearing in the expansion of the normalized scattering matrix in generalized spherical functions. Quite often the expansion coefficients have to be determined from tabulated values of the scattering matrix obtained from measurements or calculated by solving the Maxwell equations. In such cases one needs an efficient and accurate computer procedure converting a tabulated scattering matrix into the corresponding set of expansion coefficients. This short communication summarizes the theoretical basis of this procedure and serves as the user guide to a simple public-domain FORTRAN program.

  7. Development of upwind schemes for the Euler equations

    NASA Technical Reports Server (NTRS)

    Chakravarthy, Sukumar R.

    1987-01-01

    Described are many algorithmic and computational aspects of upwind schemes and their second-order accurate formulations based on Total-Variation-Diminishing (TVD) approaches. An operational unification of the underlying first-order scheme is first presented encompassing Godunov's, Roe's, Osher's, and Split-Flux methods. For higher order versions, the preprocessing and postprocessing approaches to constructing TVD discretizations are considered. TVD formulations can be used to construct relaxation methods for unfactored implicit upwind schemes, which in turn can be exploited to construct space-marching procedures for even the unsteady Euler equations. A major part of the report describes time- and space-marching procedures for solving the Euler equations in 2-D, 3-D, Cartesian, and curvilinear coordinates. Along with many illustrative examples, several results of efficient computations on 3-D supersonic flows with subsonic pockets are presented.
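
    As a generic illustration of the TVD idea discussed (not one of the report's specific schemes), the following applies a second-order upwind update with a minmod slope limiter to linear advection; the limiter keeps the square wave within its initial bounds:

        import numpy as np

        def minmod(a, b):
            """Minmod limiter: zero at extrema, smallest slope otherwise."""
            return np.where(a * b > 0,
                            np.sign(a) * np.minimum(abs(a), abs(b)), 0.0)

        def tvd_step(u, cfl):
            """One step of 2nd-order TVD upwind for u_t + a u_x = 0, a > 0,
            periodic boundaries, cfl = a*dt/dx in (0, 1]."""
            du_m = u - np.roll(u, 1)              # backward differences
            du_p = np.roll(u, -1) - u             # forward differences
            slope = minmod(du_m, du_p)
            face = u + 0.5 * (1.0 - cfl) * slope  # limited face values
            return u - cfl * (face - np.roll(face, 1))

        u = np.where(np.abs(np.linspace(0, 1, 100) - 0.3) < 0.1, 1.0, 0.0)
        for _ in range(50):
            u = tvd_step(u, 0.5)
        print(u.min() >= -1e-12, u.max() <= 1.0 + 1e-12)   # True True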

  8. NASA/RAE collaboration on nonlinear control using the F-8C digital fly-by-wire aircraft

    NASA Technical Reports Server (NTRS)

    Butler, G. F.; Corbin, M. J.; Mepham, S.; Stewart, J. F.; Larson, R. R.

    1983-01-01

    Design procedures are reviewed for variable integral control to optimize response (VICTOR) algorithms, and results of preliminary flight tests are presented. The F-8C aircraft is operated in the remotely augmented vehicle (RAV) mode, with the control laws implemented as FORTRAN programs on a ground-based computer. Pilot commands and sensor information are telemetered to the ground, where the data are processed to form surface commands which are then telemetered back to the aircraft. The RAV mode represents a single-string (simplex) system and is therefore vulnerable to a hardover, since comparison monitoring is not possible. Hence, extensive error checking is conducted on both the ground and airborne computers to prevent the development of potentially hazardous situations. Experience with the RAV monitoring and validation procedures is described.

  9. 4Pi microscopy deconvolution with a variable point-spread function.

    PubMed

    Baddeley, David; Carl, Christian; Cremer, Christoph

    2006-09-20

    To remove the axial sidelobes from 4Pi images, deconvolution forms an integral part of 4Pi microscopy. As a result of its high axial resolution, the 4Pi point spread function (PSF) is particularly susceptible to imperfect optical conditions within the sample. This is typically observed as a shift in the position of the maxima under the PSF envelope. A significantly varying phase shift renders deconvolution procedures based on a spatially invariant PSF essentially useless. We present a technique for computing the forward transformation in the case of a varying phase at a computational expense of the same order of magnitude as that of the shift invariant case, a method for the estimation of PSF phase from an acquired image, and a deconvolution procedure built on these techniques.

  10. A physics-based algorithm for real-time simulation of electrosurgery procedures in minimally invasive surgery.

    PubMed

    Lu, Zhonghua; Arikatla, Venkata S; Han, Zhongqing; Allen, Brian F; De, Suvranu

    2014-12-01

    High-frequency electricity is used in the majority of surgical interventions. However, modern computer-based training and simulation systems rely on physically unrealistic models that fail to capture the interplay of the electrical, mechanical and thermal properties of biological tissue. We present a real-time and physically realistic simulation of electrosurgery by modelling the electrical, thermal and mechanical properties as three iteratively solved finite element models. To provide subfinite-element graphical rendering of vaporized tissue, a dual-mesh dynamic triangulation algorithm based on isotherms is proposed. The block compressed row storage (BCRS) structure is shown to be critical in allowing computationally efficient changes in the tissue topology due to vaporization. We have demonstrated our physics-based electrosurgery cutting algorithm through various examples. Our matrix manipulation algorithms designed for topology changes have shown low computational cost. Our simulator offers substantially greater physical fidelity compared to previous simulators that use simple geometry-based heat characterization. Copyright © 2013 John Wiley & Sons, Ltd.

  11. Simulation System for Training in Laparoscopic Surgery

    NASA Technical Reports Server (NTRS)

    Basdogan, Cagatay; Ho, Chih-Hao

    2003-01-01

    A computer-based simulation system creates a visual and haptic virtual environment for training a medical practitioner in laparoscopic surgery. Heretofore, it has been common practice to perform training in partial laparoscopic surgical procedures by use of a laparoscopic training box that encloses a pair of laparoscopic tools, objects to be manipulated by the tools, and an endoscopic video camera. However, the surgical procedures simulated by use of a training box are usually poor imitations of the actual ones. The present computer-based system improves training by presenting a more realistic simulated environment to the trainee. The system includes a computer monitor that displays a real-time image of the affected interior region of the patient, showing laparoscopic instruments interacting with organs and tissues, as would be viewed by use of an endoscopic video camera and displayed to a surgeon during a laparoscopic operation. The system also includes laparoscopic tools that the trainee manipulates while observing the image on the computer monitor. The instrumentation on the tools consists of (1) position and orientation sensors that provide input data for the simulation and (2) actuators that provide force feedback to simulate the contact forces between the tools and tissues. The simulation software includes components that model the geometries of surgical tools, components that model the geometries and physical behaviors of soft tissues, and components that detect collisions between them. Using the measured positions and orientations of the tools, the software detects whether they are in contact with tissues. In the event of contact, the deformations of the tissues and contact forces are computed by use of the geometric and physical models. The image on the computer screen shows tissues deformed accordingly, while the actuators apply the corresponding forces to the distal ends of the tools. For the purpose of demonstration, the system has been set up to simulate the insertion of a flexible catheter in a bile duct. [As thus configured, the system can also be used to simulate other endoscopic procedures (e.g., bronchoscopy and colonoscopy) that include the insertion of flexible tubes into flexible ducts.] A hybrid approach has been followed in developing the software for real-time simulation of the visual and haptic interactions (1) between forceps and the catheter, (2) between the forceps and the duct, and (3) between the catheter and the duct. The deformations of the duct are simulated by finite-element and modal-analysis procedures, using only the most significant vibration modes of the duct for computing deformations and interaction forces. The catheter is modeled as a set of virtual particles uniformly distributed along the center line of the catheter and connected to each other via linear and torsional springs and damping elements. The interactions between the forceps and the duct as well as the catheter are simulated by use of a ray-based haptic-interaction-simulating technique in which the forceps are modeled as connected line segments.

  12. Computational physical oceanography -- A comprehensive approach based on generalized CFD/grid techniques for planetary scale simulations of oceanic flows. Final report, September 1, 1995--August 31, 1996

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beddhu, M.; Jiang, M.Y.; Whitfield, D.L.

    The original intention for this work was to impart the technology that was developed in the field of computational aeronautics to the field of computational physical oceanography. This technology transfer involved grid generation techniques and solution procedures to solve the governing equations over the grids thus generated. Specifically, boundary-fitting non-orthogonal grids would be generated over a sphere, taking into account the topography of the ocean floor and the topography of the continents. The solution methodology to be employed involved the application of an upwind, finite volume discretization procedure that uses higher order numerical fluxes at the cell faces to discretize the governing equations, and an implicit Newton relaxation technique to solve the discretized equations. This report summarizes the efforts put forth during the past three years to achieve these goals and indicates the future direction of this work, as it is still an ongoing effort.

  13. Aerodynamic optimization by simultaneously updating flow variables and design parameters

    NASA Technical Reports Server (NTRS)

    Rizk, M. H.

    1990-01-01

    The application of conventional optimization schemes to aerodynamic design problems leads to inner-outer iterative procedures that are very costly. An alternative approach is presented based on the idea of updating the flow variable iterative solutions and the design parameter iterative solutions simultaneously. Two schemes based on this idea are applied to problems of correcting wind tunnel wall interference and optimizing advanced propeller designs. The first of these schemes is applicable to a limited class of two-design-parameter problems with an equality constraint. It requires the computation of a single flow solution. The second scheme is suitable for application to general aerodynamic problems. It requires the computation of several flow solutions in parallel. In both schemes, the design parameters are updated as the iterative flow solutions evolve. Computations are performed to test the schemes' efficiency, accuracy, and sensitivity to variations in the computational parameters.

  14. Using a Cray Y-MP as an array processor for a RISC Workstation

    NASA Technical Reports Server (NTRS)

    Lamaster, Hugh; Rogallo, Sarah J.

    1992-01-01

    As microprocessors increase in power, the economics of centralized computing has changed dramatically. At the beginning of the 1980s, mainframes and supercomputers were often considered to be cost-effective machines for scalar computing. Today, microprocessor-based RISC (reduced-instruction-set computer) systems have displaced many uses of mainframes and supercomputers. Supercomputers are still cost competitive when processing jobs that require both large memory size and high memory bandwidth. One such application is array processing. Certain numerical operations are appropriate to use in a Remote Procedure Call (RPC)-based environment. Matrix multiplication is an example of an operation that can have a sufficient number of arithmetic operations to amortize the cost of an RPC call. An experiment which demonstrates that matrix multiplication can be executed remotely on a large system to speed the execution over that experienced on a workstation is described.
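
    The amortization argument can be made concrete: an n x n multiply ships O(n^2) matrix entries but performs O(n^3) arithmetic, so useful work per byte moved grows linearly with n. A small back-of-the-envelope sketch:

        def arithmetic_intensity(n, bytes_per_word=8):
            """Flops per byte moved for an n x n matrix multiply:
            2*n^3 flops vs 3*n^2 words transferred (A, B in; C out)."""
            flops = 2.0 * n**3
            traffic = 3.0 * n**2 * bytes_per_word
            return flops / traffic

        for n in (64, 512, 4096):
            print(n, round(arithmetic_intensity(n), 1))  # grows linearly in n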

  15. The Research of Computer Aided Farm Machinery Designing Method Based on Ergonomics

    NASA Astrophysics Data System (ADS)

    Gao, Xiyin; Li, Xinling; Song, Qiang; Zheng, Ying

    Along with the development of the agricultural economy, the variety of farm machinery products is gradually increasing, and ergonomics questions are becoming more and more prominent. The widespread application of computer aided machinery design makes farm machinery design intuitive, flexible and convenient. At present, because existing computer aided ergonomics software has no human body database suitable for farm machinery design in China, farm machinery designs deviate in ergonomics analysis. This article proposes using the open database interface procedure in CATIA to establish a human body database aimed at farm machinery design; reading the human body data into the ergonomics module of CATIA can produce a virtual body for practical application, and the human posture analysis and human activity analysis modules can then be used to analyze the ergonomics of farm machinery. In this way, a computer aided farm machinery designing method based on ergonomics can be realized.

  16. Iterative pass optimization of sequence data

    NASA Technical Reports Server (NTRS)

    Wheeler, Ward C.

    2003-01-01

    The problem of determining the minimum-cost hypothetical ancestral sequences for a given cladogram is known to be NP-complete. This "tree alignment" problem has motivated the considerable effort placed in multiple sequence alignment procedures. Wheeler in 1996 proposed a heuristic method, direct optimization, to calculate cladogram costs without the intervention of multiple sequence alignment. This method, though more efficient in time and more effective in cladogram length than many alignment-based procedures, greedily optimizes nodes based on descendent information only. In their proposal of an exact multiple alignment solution, Sankoff et al. in 1976 described a heuristic procedure--the iterative improvement method--to create alignments at internal nodes by solving a series of median problems. The combination of a three-sequence direct optimization with iterative improvement and a branch-length-based cladogram cost procedure, provides an algorithm that frequently results in superior (i.e., lower) cladogram costs. This iterative pass optimization is both computation and memory intensive, but economies can be made to reduce this burden. An example in arthropod systematics is discussed. c2003 The Willi Hennig Society. Published by Elsevier Science (USA). All rights reserved.

  17. PHYSICO: An UNIX based Standalone Procedure for Computation of Individual and Group Properties of Protein Sequences.

    PubMed

    Gupta, Parth Sarthi Sen; Banerjee, Shyamashree; Islam, Rifat Nawaz Ul; Mondal, Sudipta; Mondal, Buddhadev; Bandyopadhyay, Amal K

    2014-01-01

    In the genomic and proteomic era, efficient and automated analysis of the sequence properties of proteins has become an important task in bioinformatics. There are general public licensed (GPL) software tools that perform part of the job. However, computation of the mean properties of a large number of orthologous sequences is not possible with the above-mentioned GPL tools. Further, there is no GPL software or server which can calculate window-dependent sequence properties for a large number of sequences in a single run. With a view to overcoming these limitations, we have developed a standalone procedure, PHYSICO, which performs the various stages of computation in a single run, based on the type of input provided (RAW-FASTA or BLOCK-FASTA format), and produces Excel output for: a) composition, class composition, mean molecular weight, isoelectric point, aliphatic index and GRAVY; b) column-based compositions, variability and difference matrices; c) 25 kinds of window-dependent sequence properties. The program is fast, efficient, error-free and user friendly. Calculation of the mean and standard deviation of homologous sequence sets, for comparison purposes when relevant, is another attribute of the program; a property seldom seen in existing GPL software. PHYSICO is freely available to non-commercial/academic users on formal request to the corresponding author akbanerjee@biotech.buruniv.ac.in.

  18. PHYSICO: An UNIX based Standalone Procedure for Computation of Individual and Group Properties of Protein Sequences

    PubMed Central

    Gupta, Parth Sarthi Sen; Banerjee, Shyamashree; Islam, Rifat Nawaz Ul; Mondal, Sudipta; Mondal, Buddhadev; Bandyopadhyay, Amal K

    2014-01-01

    In the genomic and proteomic era, efficient and automated analysis of the sequence properties of proteins has become an important task in bioinformatics. There are general public licensed (GPL) software tools that perform part of the job. However, computation of the mean properties of a large number of orthologous sequences is not possible with the above-mentioned GPL tools. Further, there is no GPL software or server which can calculate window-dependent sequence properties for a large number of sequences in a single run. With a view to overcoming these limitations, we have developed a standalone procedure, PHYSICO, which performs the various stages of computation in a single run, based on the type of input provided (RAW-FASTA or BLOCK-FASTA format), and produces Excel output for: a) composition, class composition, mean molecular weight, isoelectric point, aliphatic index and GRAVY; b) column-based compositions, variability and difference matrices; c) 25 kinds of window-dependent sequence properties. The program is fast, efficient, error-free and user friendly. Calculation of the mean and standard deviation of homologous sequence sets, for comparison purposes when relevant, is another attribute of the program; a property seldom seen in existing GPL software. Availability PHYSICO is freely available to non-commercial/academic users on formal request to the corresponding author akbanerjee@biotech.buruniv.ac.in PMID:24616564
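
    As an illustration of one of the listed properties (a re-implementation sketch, not PHYSICO itself), GRAVY is the mean Kyte-Doolittle hydropathy of a sequence, and a window-dependent profile simply evaluates it over a sliding window:

        # Kyte-Doolittle hydropathy values for the 20 standard amino acids
        KD = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5,
              'Q': -3.5, 'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5,
              'L': 3.8, 'K': -3.9, 'M': 1.9, 'F': 2.8, 'P': -1.6,
              'S': -0.8, 'T': -0.7, 'W': -0.9, 'Y': -1.3, 'V': 4.2}

        def gravy(seq):
            """Grand average of hydropathy: mean KD value of the sequence."""
            return sum(KD[aa] for aa in seq) / len(seq)

        def gravy_profile(seq, window=9):
            """GRAVY in each sliding window of the given width."""
            return [gravy(seq[i:i + window])
                    for i in range(len(seq) - window + 1)]

        print(round(gravy("MKTAYIAKQR"), 2))   # -0.78
        print([round(x, 2) for x in gravy_profile("MKTAYIAKQRQISFVKSH")])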

  19. The Individual Virtual Eye: a Computer Model for Advanced Intraocular Lens Calculation

    PubMed Central

    Einighammer, Jens; Oltrup, Theo; Bende, Thomas; Jean, Benedikt

    2010-01-01

    Purpose To describe the individual virtual eye, a computer model of a human eye with respect to its optical properties. It is based on measurements of an individual person, and one of its major applications is calculating intraocular lenses (IOLs) for cataract surgery. Methods The model is constructed from an eye's geometry, including axial length and topographic measurements of the anterior corneal surface. All optical components of a pseudophakic eye are modeled with computer scientific methods. A spline-based interpolation method efficiently includes data from corneal topographic measurements. The geometrical optical properties, such as the wavefront aberration, are simulated with real ray-tracing using Snell's law. Optical components can be calculated using computer scientific optimization procedures. The geometry of customized aspheric IOLs was calculated for 32 eyes and the resulting wavefront aberration was investigated. Results The more complex the calculated IOL is, the lower the residual wavefront error is. Spherical IOLs are only able to correct for the defocus, while toric IOLs also eliminate astigmatism. Spherical aberration is additionally reduced by aspheric and toric aspheric IOLs. The efficient implementation of time-critical numerical ray-tracing and optimization procedures allows for short calculation times, which may lead to a practicable method integrated in some device. Conclusions The individual virtual eye allows for simulations and calculations regarding geometrical optics for individual persons. This leads to clinical applications such as IOL calculation, with the potential to overcome the limitations of current calculation methods based on paraxial optics, as exemplified by the calculation of customized aspheric IOLs.
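
    Real ray-tracing of this kind reduces, at each refracting surface, to Snell's law in vector form; the following is a minimal sketch (the indices and geometry are invented and unrelated to the model's data):

        import numpy as np

        def refract(d, n_hat, n1, n2):
            """Refract unit direction d at a surface with unit normal
            n_hat (pointing toward the incoming ray), Snell's law in
            vector form. Returns None on total internal reflection."""
            mu = n1 / n2
            cos_i = -np.dot(n_hat, d)
            sin2_t = mu**2 * (1.0 - cos_i**2)
            if sin2_t > 1.0:
                return None                 # total internal reflection
            cos_t = np.sqrt(1.0 - sin2_t)
            return mu * d + (mu * cos_i - cos_t) * n_hat

        d = np.array([0.0, np.sin(np.deg2rad(30)), -np.cos(np.deg2rad(30))])
        n_hat = np.array([0.0, 0.0, 1.0])   # normal, toward incoming ray
        t = refract(d, n_hat, 1.0, 1.336)   # air into aqueous humour
        print(np.degrees(np.arcsin(np.linalg.norm(np.cross(t, -n_hat)))))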

  20. Automated Instructional Management Systems (AIMS) Version III, Users Manual.

    ERIC Educational Resources Information Center

    New York Inst. of Tech., Old Westbury.

    This document sets forth the procedures necessary to utilize and understand the operating characteristics of the Automated Instructional Management System - Version III, a computer-based system for management of educational processes. Directions for initialization, including internal and user files; system and operational input requirements;…

  1. Computation of laminar heat transfer from gaseous plasmas in electromagnetic fields

    NASA Technical Reports Server (NTRS)

    Bose, T. K.

    1972-01-01

    A heat transfer analysis procedure is presented for a two-temperature gaseous plasma. The analysis is based on laminar flow of a singly-ionized, quasineutral plasma with variable properties. A sheath analysis is described for species in an accelerating field, in a decelerating field, emitted from the wall, and recombining at the wall.

  2. The Father Christmas worm

    NASA Technical Reports Server (NTRS)

    Green, James L.; Sisson, Patricia L.

    1989-01-01

    Given here is an overview analysis of the Father Christmas Worm, a computer worm that was released onto the DECnet Internet three days before Christmas 1988. The purpose behind the worm was to send an electronic mail message to all users on the computer system running the worm. The message was a Christmas greeting and was signed 'Father Christmas'. From the investigation, it was determined that the worm was released from a computer (node number 20597::) at a university in Switzerland. The worm was designed to travel quickly. Estimates are that it was copied to over 6,000 computer nodes. However, it was believed to have executed on only a fraction of those computers. Within ten minutes after it was released, the worm was detected at the Space Physics Analysis Network (SPAN), NASA's largest space and Earth science network. Once the source program was captured, a procedural cure, using the existing functionality of the computer operating systems, was quickly devised and distributed. A combination of existing computer security measures, the quick and accurate procedures devised to stop copies of the worm from executing, and the network itself, were used to rapidly provide the cure. These were the main reasons why the worm executed on such a small percentage of nodes. This overview of the analysis of the events concerning the worm is based on an investigation made by the SPAN Security Team and provides some insight into future security measures that will be taken to handle computer worms and viruses that may hit similar networks.

  3. SD-CAS: Spin Dynamics by Computer Algebra System.

    PubMed

    Filip, Xenia; Filip, Claudiu

    2010-11-01

    A computer algebra tool for describing the Liouville-space quantum evolution of nuclear 1/2-spins is introduced and implemented within a computational framework named Spin Dynamics by Computer Algebra System (SD-CAS). A distinctive feature compared with numerical and previous computer algebra approaches to solving spin dynamics problems is that no matrix representation for spin operators is used in SD-CAS, which gives a fully symbolic character to the computations performed. Spin correlations are stored in SD-CAS as four-entry nested lists whose size increases linearly with the number of spins in the system, and they are easily mapped into analytical expressions in terms of spin operator products. For the SD-CAS spin correlations so defined, a set of specialized functions and procedures is introduced that is essential for implementing basic spin algebra operations, such as spin operator products, commutators, and scalar products. These provide results in an abstract algebraic form; specific procedures to quantitatively evaluate such symbolic expressions with respect to the involved spin interaction parameters and experimental conditions are also discussed. Although the main focus of the present work is on laying the foundation for symbolic spin dynamics computation in NMR based on a non-matrix formalism, practical aspects are also considered throughout the theoretical development. In particular, specific SD-CAS routines have been implemented using the YACAS computer algebra package (http://yacas.sourceforge.net), and their functionality was demonstrated on a few illustrative examples. Copyright © 2010 Elsevier Inc. All rights reserved.
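
    The elementary relations such an engine manipulates symbolically can also be checked numerically; for spin-1/2 operators I = sigma/2, the cyclic commutation relation [Ix, Iy] = i*Iz holds (a numerical check only, deliberately using the matrix representation that SD-CAS itself avoids):

        import numpy as np

        # Spin-1/2 operators from the Pauli matrices
        Ix = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
        Iy = 0.5 * np.array([[0, -1j], [1j, 0]])
        Iz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

        def commutator(a, b):
            """[a, b] = ab - ba."""
            return a @ b - b @ a

        print(np.allclose(commutator(Ix, Iy), 1j * Iz))   # True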

  4. Physics Based Modeling and Rendering of Vegetation in the Thermal Infrared

    NASA Technical Reports Server (NTRS)

    Smith, J. A.; Ballard, J. R., Jr.

    1999-01-01

    We outline a procedure for rendering physically-based thermal infrared images of simple vegetation scenes. Our approach incorporates the biophysical processes that affect the temperature distribution of the elements within a scene. Computer graphics plays a key role in two respects. First, in computing the distribution of scene shaded and sunlit facets and, second, in the final image rendering once the temperatures of all the elements in the scene have been computed. We illustrate our approach for a simple corn scene where the three-dimensional geometry is constructed based on measured morphological attributes of the row crop. Statistical methods are used to construct a representation of the scene in agreement with the measured characteristics. Our results are quite good. The rendered images exhibit realistic behavior in directional properties as a function of view and sun angle. The root-mean-square error in measured versus predicted brightness temperatures for the scene was 2.1 deg C.

  5. Cost accounting in radiation oncology: a computer-based model for reimbursement.

    PubMed

    Perez, C A; Kobeissi, B; Smith, B D; Fox, S; Grigsby, P W; Purdy, J A; Procter, H D; Wasserman, T H

    1993-04-02

    The skyrocketing cost of medical care in the United States has resulted in multiple efforts in cost containment. The present work offers a rational computer-based cost accounting approach to determine the actual use of resources in providing a specific service in a radiation oncology center. A procedure-level cost accounting system was developed by using recorded information on actual time and effort spent by individual staff members performing various radiation oncology procedures, and analyzing direct and indirect costs related to staffing (labor), facilities and equipment, supplies, etc. Expenditures were classified as direct or indirect and fixed or variable. A relative value unit was generated to allocate specific cost factors to each procedure. Different costs per procedure were identified according to complexity. Whereas there was no significant difference in the treatment time between low-energy (4 and 6 MV) or high-energy (18 MV) accelerators, there were significantly higher costs identified in the operation of a high-energy linear accelerator, a reflection of initial equipment investment, quality assurance and calibration procedures, maintenance costs, service contract, and replacement parts. Utilization of resources was related to the complexity of the procedures performed and whether the treatments were delivered to inpatients or outpatients. In analyzing time motion for physicians and other staff, it was apparent that a greater effort must be made to train the staff to accurately record all times involved in a given procedure, and it is strongly recommended that each institution perform its own time motion studies to more accurately determine operating costs. Sixty-six percent of our facility's global costs were for labor, 20% for other operating expenses, 10% for space, and 4% for equipment. Significant differences were noted in the cost allocation for professional or technical functions, as labor, space, and equipment costs are higher in the latter. External beam treatment-related procedures accounted for more than 50% of all technical and professional revenues, simulation for 8% to 10%, and other physics/dosimetry procedures for 11% to 14% of revenues. Some discrepancies were identified between the actual cost and level of reimbursement of various procedures. Details are described in the manuscript. It is imperative to develop an equitable reimbursement system for radiation oncology services, based on cost accounting and other measures that may enhance productivity and reduce the cost per procedure unit, while at the same time preserving the highest quality of service provided to patients.

  6. Comparing solar energy alternatives

    NASA Astrophysics Data System (ADS)

    White, J. R.

    1984-03-01

    This paper outlines a computational procedure for comparing the merits of alternative processes to convert solar radiation to heat, electrical power, or chemical energy. The procedure uses the ratio of equipment investment to useful work as an index. Comparisons with conversion counterparts based on conventional fuels are also facilitated by examining this index. The procedure is illustrated by comparisons of (1) photovoltaic converters of differing efficiencies; (2) photovoltaic converters with and without focusing concentrators; (3) photovoltaic conversion plus electrolysis vs photocatalysis for the production of hydrogen; (4) photovoltaic conversion plus plasma arcs vs photocatalysis for nitrogen fixation. Estimates for conventionally-fuelled processes are included for comparison. The reasons why solar-based concepts fare poorly in such comparisons are traced to the low energy density of solar radiation and its low stream time factor resulting from the limited number of daylight hours available and clouds obscuring the sun.
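
    The comparison index described, the ratio of equipment investment to useful work delivered, is straightforward to compute; a toy sketch with invented figures:

        def investment_per_useful_work(capital_cost_usd, peak_kw,
                                       stream_time_factor, years):
            """Equipment investment divided by useful energy delivered
            over the period, in $ per kWh (all inputs illustrative)."""
            kwh = peak_kw * stream_time_factor * 8760 * years
            return capital_cost_usd / kwh

        # 10 kW photovoltaic array at a 20% stream time factor, 20 years
        print(round(investment_per_useful_work(30000.0, 10.0, 0.20, 20), 3))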

  7. A sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, J. D.; Oberkampf, William Louis; Helton, Jon Craig

    2006-10-01

    Evidence theory provides an alternative to probability theory for the representation of epistemic uncertainty in model predictions that derives from epistemic uncertainty in model inputs, where the descriptor epistemic is used to indicate uncertainty that derives from a lack of knowledge with respect to the appropriate values to use for various inputs to the model. The potential benefit, and hence appeal, of evidence theory is that it allows a less restrictive specification of uncertainty than is possible within the axiomatic structure on which probability theory is based. Unfortunately, the propagation of an evidence theory representation for uncertainty through a model is more computationally demanding than the propagation of a probabilistic representation for uncertainty, with this difficulty constituting a serious obstacle to the use of evidence theory in the representation of uncertainty in predictions obtained from computationally intensive models. This presentation describes and illustrates a sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory. Preliminary trials indicate that the presented strategy can be used to propagate uncertainty representations based on evidence theory in analysis situations where naive sampling-based (i.e., unsophisticated Monte Carlo) procedures are impracticable due to computational cost.

  8. Computational State Space Models for Activity and Intention Recognition. A Feasibility Study

    PubMed Central

    Krüger, Frank; Nyolt, Martin; Yordanova, Kristina; Hein, Albert; Kirste, Thomas

    2014-01-01

    Background Computational state space models (CSSMs) enable the knowledge-based construction of Bayesian filters for recognizing intentions and reconstructing activities of human protagonists in application domains such as smart environments, assisted living, or security. Computational, i.e., algorithmic, representations allow the construction of increasingly complex human behaviour models. However, the symbolic models used in CSSMs potentially suffer from combinatorial explosion, rendering inference intractable outside of the limited experimental settings investigated in present research. The objective of this study was to obtain data on the feasibility of CSSM-based inference in domains of realistic complexity. Methods A typical instrumental activity of daily living was used as a trial scenario. As the primary sensor modality, wearable inertial measurement units were employed. The results achievable by CSSM methods were evaluated by comparison with those obtained from established training-based methods (hidden Markov models, HMMs) using Wilcoxon signed rank tests. The influence of modeling factors on CSSM performance was analyzed via repeated measures analysis of variance. Results The symbolic domain model was found to have a state space exceeding the complexity of models considered in previous research by at least three orders of magnitude. Nevertheless, if factors and procedures governing the inference process were suitably chosen, CSSMs outperformed HMMs. Specifically, inference methods used in previous studies (particle filters) were found to perform substantially worse than a marginal filtering procedure. Conclusions Our results suggest that the combinatorial explosion caused by rich CSSM models does not inevitably lead to intractable inference or inferior performance. This means that the potential benefits of CSSM models (knowledge-based model construction, model reusability, reduced need for training data) are available without a performance penalty. However, our results also show that research on CSSMs needs to consider sufficiently complex domains in order to understand the effects of design decisions such as the choice of heuristics or inference procedure on performance. PMID:25372138
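
    The marginal filtering procedure referred to maintains an exact posterior over a discrete state space rather than a particle set; for a small state space this reduces to the familiar discrete Bayes (HMM forward) recursion, sketched here with invented transition and observation matrices:

        import numpy as np

        def marginal_filter_step(belief, T, O, obs):
            """Exact Bayes update over a discrete state space:
            predict with transition matrix T, weight by observation
            likelihoods O[:, obs], renormalize."""
            predicted = T.T @ belief
            posterior = predicted * O[:, obs]
            return posterior / posterior.sum()

        T = np.array([[0.8, 0.2],      # state transition probabilities
                      [0.3, 0.7]])
        O = np.array([[0.9, 0.1],      # P(observation | state)
                      [0.2, 0.8]])
        belief = np.array([0.5, 0.5])
        for obs in (0, 0, 1):
            belief = marginal_filter_step(belief, T, O, obs)
        print(belief.round(3))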

  9. Decision support in psychiatry – a comparison between the diagnostic outcomes using a computerized decision support system versus manual diagnosis

    PubMed Central

    Bergman, Lars G; Fors, Uno GH

    2008-01-01

    Background Correct diagnosis in psychiatry may be improved by novel diagnostic procedures. Computerized Decision Support Systems (CDSS) are suggested to be able to improve diagnostic procedures, but some studies indicate possible problems. Therefore, it could be important to investigate CDSS systems with regard to their feasibility to improve diagnostic procedures as well as to save time. Methods This study was undertaken to compare the traditional 'paper and pencil' diagnostic method SCID1 with the computer-aided diagnostic system CB-SCID1, to ascertain processing time and the accuracy of diagnoses suggested. 63 clinicians volunteered to participate in the study and to solve two paper-based cases either using a CDSS or manually. Results No major difference between paper-and-pencil and computer-supported diagnosis was found. Where a difference was found, it was in favour of paper and pencil. For example, a significantly shorter time was found for paper and pencil for the difficult case, as compared to computer support. A significantly higher number of correct diagnoses was found in the difficult case for the diagnosis 'Depression' using the paper-and-pencil method. Although a majority of the clinicians found the computer method supportive and easy to use, it took a longer time and yielded fewer correct diagnoses than paper and pencil. Conclusion This study could not detect any major difference in diagnostic outcome between traditional paper-and-pencil methods and computer support for psychiatric diagnosis. Where there were significant differences, traditional paper-and-pencil methods were better than the tested CDSS, and thus we conclude that CDSS for diagnostic procedures may interfere with diagnostic accuracy. A limitation was that most clinicians had not previously used the CDSS system under study. The results of this study, however, confirm that CDSS development for diagnostic purposes in psychiatry still has much to address before such systems can be used for routine clinical purposes. PMID:18261222

  10. The Dynamo package for tomography and subtomogram averaging: components for MATLAB, GPU computing and EC2 Amazon Web Services

    PubMed Central

    Castaño-Díez, Daniel

    2017-01-01

    Dynamo is a package for the processing of tomographic data. As a tool for subtomogram averaging, it includes different alignment and classification strategies. Furthermore, its data-management module allows experiments to be organized in groups of tomograms, while offering specialized three-dimensional tomographic browsers that facilitate visualization, location of regions of interest, modelling and particle extraction in complex geometries. Here, a technical description of the package is presented, focusing on its diverse strategies for optimizing computing performance. Dynamo is built upon mbtools (middle layer toolbox), a general-purpose MATLAB library for object-oriented scientific programming specifically developed to underpin Dynamo but usable as an independent tool. Its structure intertwines a flexible MATLAB codebase with precompiled C++ functions that carry the burden of numerically intensive operations. The package can be delivered as a precompiled standalone ready for execution without a MATLAB license. Multicore parallelization on a single node is directly inherited from the high-level parallelization engine provided for MATLAB, automatically imparting a balanced workload among the threads in computationally intense tasks such as alignment and classification, but also in logistic-oriented tasks such as tomogram binning and particle extraction. Dynamo supports the use of graphical processing units (GPUs), yielding considerable speedup factors both for native Dynamo procedures (such as the numerically intensive subtomogram alignment) and procedures defined by the user through its MATLAB-based GPU library for three-dimensional operations. Cloud-based virtual computing environments supplied with a pre-installed version of Dynamo can be publicly accessed through the Amazon Elastic Compute Cloud (EC2), enabling users to rent GPU computing time on a pay-as-you-go basis, thus avoiding upfront investments in hardware and long-term software maintenance. PMID:28580909
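
    The numerically intensive core of subtomogram alignment can be illustrated with the standard FFT cross-correlation trick. The sketch below recovers only a translational offset between two synthetic volumes; Dynamo's actual alignment also scans rotations and handles the missing wedge.

        import numpy as np

        def shift_between(reference, volume):
            # Peak of the FFT-based cross-correlation gives the integer shift
            # by which `volume` is displaced relative to `reference`.
            cc = np.fft.ifftn(np.conj(np.fft.fftn(reference)) * np.fft.fftn(volume)).real
            peak = np.unravel_index(np.argmax(cc), cc.shape)
            # map FFT indices to signed shifts (wrap-around convention)
            return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, cc.shape))

        rng = np.random.default_rng(2)
        ref = rng.normal(size=(32, 32, 32))
        vol = np.roll(ref, shift=(3, -5, 2), axis=(0, 1, 2))
        print(shift_between(ref, vol))   # -> (3, -5, 2)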

  12. Detailed computational procedure for design of cascade blades with prescribed velocity distributions in compressible potential flows

    NASA Technical Reports Server (NTRS)

    Costello, George R; Cummings, Robert L; Sinnette, John T, Jr

    1952-01-01

    A detailed step-by-step computational outline is presented for the design of two-dimensional cascade blades having a prescribed velocity distribution on the blade in a potential flow of the usual compressible fluid. The outline is based on the assumption that the magnitude of the velocity in the flow of the usual compressible nonviscous fluid is proportional to the magnitude of the velocity in the flow of a compressible nonviscous fluid with linear pressure-volume relation.

  13. High-Order Methods for Computational Fluid Dynamics: A Brief Review of Compact Differential Formulations on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Huynh, H. T.; Wang, Z. J.; Vincent, P. E.

    2013-01-01

    Popular high-order schemes with compact stencils for Computational Fluid Dynamics (CFD) include Discontinuous Galerkin (DG), Spectral Difference (SD), and Spectral Volume (SV) methods. The recently proposed Flux Reconstruction (FR) approach or Correction Procedure using Reconstruction (CPR) is based on a differential formulation and provides a unifying framework for these high-order schemes. Here we present a brief review of recent developments for the FR/CPR schemes as well as some pacing items.

  14. Generation of Well-Relaxed All-Atom Models of Large Molecular Weight Polymer Melts: A Hybrid Particle-Continuum Approach Based on Particle-Field Molecular Dynamics Simulations.

    PubMed

    De Nicola, Antonio; Kawakatsu, Toshihiro; Milano, Giuseppe

    2014-12-09

    A procedure based on Molecular Dynamics (MD) simulations employing soft potentials derived from self-consistent field (SCF) theory (named MD-SCF), able to generate well-relaxed all-atom structures of polymer melts, is proposed. All-atom structures having structural correlations indistinguishable from those obtained by long MD relaxations have been obtained for poly(methyl methacrylate) (PMMA) and poly(ethylene oxide) (PEO) melts. The proposed procedure leads to computational costs that depend mainly on system size rather than on chain length. Several advantages of the proposed procedure over current coarse-graining/reverse-mapping strategies are apparent. No parametrization is needed to generate relaxed structures of different polymers at different scales or resolutions. There is no need for special algorithms or back-mapping schemes to change the resolution of the models. This characteristic makes the procedure general, and its extension to other polymer architectures is straightforward. A similar procedure can easily be extended to the generation of all-atom structures of block copolymer melts and polymer nanocomposites.

  15. Estimating Soil Organic Carbon stocks and uncertainties at the regional scale following a legacy sampling strategy - a case study from southern Belgium

    NASA Astrophysics Data System (ADS)

    Chartin, Caroline; Krüger, Inken; Goidts, Esther; Carnol, Monique; van Wesemael, Bas

    2017-04-01

    The quantification and spatialisation of reliable SOC stock (Mg C ha-1) and total stock (Tg C) baselines, together with their associated uncertainties, are fundamental to detecting gains or losses in SOC and to locating sensitive areas with low SOC levels. Here, we aim to quantify and spatialize SOC stocks at the regional scale (southern Belgium) based on data from a sampling scheme that was neither design-based nor model-based. To this end, we developed a computation procedure based on Digital Soil Mapping techniques and stochastic (Monte Carlo) simulations allowing the estimation of multiple (here, 10,000) independent spatialized datasets. The computation of the prediction uncertainty accounts for the errors associated with both the estimation of SOC stock at the pixel scale and the estimation of the parameters of the spatial model. Based on these 10,000 realizations, median SOC stocks and 90% prediction intervals were computed for each pixel, as well as total SOC stocks and their 90% prediction intervals for selected sub-areas and for the entire study area. A Generalised Additive Model (GAM) explaining 69.3% of the SOC stock variance was calibrated and then validated (R2 = 0.64). The model overestimated low SOC stocks (below 50 Mg C ha-1) and underestimated high SOC stocks (especially those above 100 Mg C ha-1). A positive gradient of SOC stock occurred from the northwest to the center of Wallonia, with a slight decrease in the southernmost part, correlating with the evolution of precipitation and temperature (along with elevation) and dominant land use. At the catchment scale, higher SOC stocks were predicted on valley bottoms, especially for poorly drained soils under grassland. Mean predicted total SOC stocks for cropland and grassland in Wallonia were 26.58 Tg C (SD 1.52) and 43.30 Tg C (SD 2.93), respectively. The procedure developed here allowed realistic spatial patterns of SOC stocks to be predicted across the agricultural lands of southern Belgium and produced reliable statistics of total SOC stocks for each of the 20 combinations of land use and agricultural region in Wallonia. The procedure appears useful for producing soil maps as policy tools for sustainable management at regional and national scales, and for computing statistics which comply with the specific requirements of reporting activities.
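
    The stochastic-simulation step can be sketched as follows: parameter draws from the fitted model's covariance plus pixel-scale residual noise yield 10,000 realizations, from which per-pixel medians, 90% prediction intervals, and a regional total are read off. The linear stand-in model, its coefficients, and the pixel data are invented for the example; the study used a GAM.

        import numpy as np

        rng = np.random.default_rng(3)

        # Toy stand-in for the calibrated spatial model: SOC stock (Mg C ha-1)
        # as a linear function of two covariates.
        beta_hat = np.array([40.0, 0.8, -0.3])    # fitted coefficients
        beta_cov = np.diag([4.0, 0.01, 0.004])    # parameter covariance
        sigma_res = 8.0                           # residual SD at pixel scale

        X = np.column_stack([np.ones(500),                # 500 "pixels"
                             rng.uniform(0, 60, 500),     # covariate 1
                             rng.uniform(0, 40, 500)])    # covariate 2
        pixel_area_ha = 100.0

        n_sim = 10_000
        betas = rng.multivariate_normal(beta_hat, beta_cov, n_sim)     # model error
        stocks = X @ betas.T + rng.normal(0, sigma_res, (500, n_sim))  # + pixel error

        median_map = np.median(stocks, axis=1)            # per-pixel median
        pi90 = np.percentile(stocks, [5, 95], axis=1)     # 90% prediction interval
        total = stocks.sum(axis=0) * pixel_area_ha / 1e6  # Tg C per realization
        print(f"total stock: {np.median(total):.2f} Tg C, "
              f"90% PI [{np.percentile(total, 5):.2f}, {np.percentile(total, 95):.2f}]")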

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brun, E., E-mail: emmanuel.brun@esrf.fr; Grandl, S.; Sztrókay-Gaul, A.

    Purpose: Phase contrast computed tomography has emerged as an imaging method which is able to outperform present-day clinical mammography in breast tumor visualization while maintaining an equivalent average dose. To this day, no segmentation technique takes into account the specificity of the phase contrast signal. In this study, the authors propose a new mathematical framework for human-guided breast tumor segmentation. This method has been applied to high-resolution images of excised human organs, each of several gigabytes. Methods: The authors present a segmentation procedure based on the viscous watershed transform and demonstrate the efficacy of this method on analyzer-based phase contrast images. The segmentation of tumors inside two full human breasts is then shown as an example of this procedure's possible applications. Results: A correct and precise identification of the tumor boundaries was obtained and confirmed by manual contouring performed independently by four experienced radiologists. Conclusions: The authors demonstrate that applying the viscous watershed transform allows them to perform the segmentation of tumors in high-resolution x-ray analyzer-based phase contrast breast computed tomography images. Combining the additional information provided by the segmentation procedure with the already high definition of morphological details and tissue boundaries offered by phase contrast imaging techniques will represent a valuable multistep procedure to be used in future medical diagnostic applications.

  17. The computational structural mechanics testbed procedures manual

    NASA Technical Reports Server (NTRS)

    Stewart, Caroline B. (Compiler)

    1991-01-01

    The purpose of this manual is to document the standard high-level command language procedures of the Computational Structural Mechanics (CSM) Testbed software system. A description of each procedure, including its function, commands, data interface, and use, is presented. This manual is designed to assist users in defining and using command procedures to perform structural analysis, and is intended to be used together with the CSM Testbed User's Manual and the CSM Testbed Data Library Description.

  18. Aeroelastic-Acoustics Simulation of Flight Systems

    NASA Technical Reports Server (NTRS)

    Gupta, Kajal K.; Choi, S.; Ibrahim, A.

    2009-01-01

    This paper describes the details of a numerical finite element (FE) based analysis procedure and a resulting code for the simulation of the acoustics phenomenon arising from aeroelastic interactions. Both CFD and structural simulations are based on FE discretization employing unstructured grids. The sound pressure level (SPL) on structural surfaces is calculated from the root mean square (RMS) of the unsteady pressure and the acoustic wave frequencies are computed from a fast Fourier transform (FFT) of the unsteady pressure distribution as a function of time. The resulting tool proves to be unique as it is designed to analyze complex practical problems, involving large scale computations, in a routine fashion.
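
    The two post-processing steps named above, RMS-based SPL and FFT-based frequency extraction, reduce to a few lines. The signal handling below is illustrative and is not the FE code itself; the test tone is invented.

        import numpy as np

        P_REF = 20e-6   # reference pressure in Pa (20 micropascals)

        def spl_and_peak_freq(p, dt):
            # SPL from the RMS of an unsteady surface pressure trace, and the
            # dominant acoustic frequency from its FFT.
            p = p - p.mean()                          # keep the unsteady part
            spl = 20 * np.log10(np.sqrt(np.mean(p**2)) / P_REF)
            spec = np.abs(np.fft.rfft(p))
            freqs = np.fft.rfftfreq(len(p), dt)
            return spl, freqs[spec[1:].argmax() + 1]  # skip the DC bin

        # Example: 1 Pa amplitude tone at 250 Hz sampled at 10 kHz.
        t = np.arange(0, 1.0, 1e-4)
        p = 1.0 * np.sin(2 * np.pi * 250 * t)
        print(spl_and_peak_freq(p, 1e-4))   # SPL ~ 91 dB, peak ~ 250 Hz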

  19. Sample-space-based feature extraction and class preserving projection for gene expression data.

    PubMed

    Wang, Wenjun

    2013-01-01

    In order to overcome the problems of high computational complexity and serious matrix singularity in feature extraction using Principal Component Analysis (PCA) and Fisher's Linear Discriminant Analysis (LDA) on high-dimensional data, sample-space-based feature extraction is presented, which transfers the computation of feature extraction from gene space to sample space by representing the optimal transformation vector as a weighted sum of samples. The technique is used in the implementation of PCA, LDA, and Class Preserving Projection (CPP), a newly proposed method for discriminant feature extraction, and experimental results on gene expression data demonstrate the effectiveness of the method.
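
    The sample-space trick can be sketched directly for PCA: with n samples of dimension d and d >> n, one eigen-decomposes the n-by-n Gram matrix rather than the d-by-d covariance and writes each principal direction as a weighted sum of samples. The data here are synthetic.

        import numpy as np

        def sample_space_pca(X, k):
            # X: n samples x d features, with d >> n.
            Xc = X - X.mean(axis=0)
            gram = Xc @ Xc.T                          # n x n instead of d x d
            vals, vecs = np.linalg.eigh(gram)
            order = np.argsort(vals)[::-1][:k]
            vals, vecs = vals[order], vecs[:, order]
            # map sample-space eigenvectors back to gene space and normalize:
            # if G v = lam v, then Xc.T v is an eigenvector of the covariance.
            return Xc.T @ vecs / np.sqrt(vals)        # d x k projection matrix

        rng = np.random.default_rng(4)
        X = rng.normal(size=(30, 5000))               # 30 samples, 5000 genes
        W = sample_space_pca(X, 3)
        print((X - X.mean(0)) @ W)                    # 3 features per sample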

  20. Image-Based Patient-Specific Ventricle Models with Fluid-Structure Interaction for Cardiac Function Assessment and Surgical Design Optimization

    PubMed Central

    Tang, Dalin; Yang, Chun; Geva, Tal; del Nido, Pedro J.

    2010-01-01

    Recent advances in medical imaging technology and computational modeling techniques are making it possible for patient-specific computational ventricle models to be constructed and used to test surgical hypotheses, replacing empirical and often risky clinical experimentation in examining the efficiency and suitability of various reconstructive procedures in diseased hearts. In this paper, we provide a brief review of recent developments in ventricle modeling and its potential application in surgical planning and management of tetralogy of Fallot (ToF) patients. Aspects of data acquisition, model selection and construction, tissue material properties, ventricle layer structure and tissue fiber orientations, pressure conditions, model validation, and virtual surgery procedures (changing patient-specific ventricle data and performing computer simulations) are reviewed. Results from a case study using patient-specific cardiac magnetic resonance (CMR) imaging and a right/left ventricle and patch (RV/LV/Patch) combination model with fluid-structure interactions (FSI) are reported. The models were used to evaluate and optimize the human pulmonary valve replacement/insertion (PVR) surgical procedure and patch design, and to test a surgical hypothesis that PVR with a small patch and aggressive scar tissue trimming may lead to improved recovery of RV function and reduced stress/strain conditions in the patch area. PMID:21344066

  1. Critical Emergency Medicine Procedural Skills: A Comparative Study of Methods for Teaching and Assessment.

    ERIC Educational Resources Information Center

    Chapman, Dane M.; And Others

    Three critical procedural skills in emergency medicine were evaluated using three assessment modalities--written, computer, and animal model. The effects of computer practice and previous procedure experience on skill competence were also examined in an experimental sequential assessment design. Subjects were six medical students, six residents,…

  2. Lumley's PODT definition of large eddies and a trio of numerical procedures. [Proper Orthogonal Decomposition Theorem

    NASA Technical Reports Server (NTRS)

    Payne, Fred R.

    1992-01-01

    Lumley's 1967 Moscow paper provided, for the first time, a completely rational definition of the physically useful term 'large eddy', popular for a half-century. The numerical procedures based upon his results are: (1) PODT (Proper Orthogonal Decomposition Theorem), which extracts the large-eddy structure of stochastic processes from physical or computer-simulation two-point covariances, and (2) LEIM (Large-Eddy Interaction Model), a predictive scheme for the dynamical large eddies based upon higher-order turbulence modeling. Lumley's earlier work (1964) forms the basis for the final member of the triad of numerical procedures: this predicts the global neutral modes of turbulence, which show surprising agreement with both structural eigenmodes and those obtained from the dynamical equations. The ultimate goal of improved engineering design tools for turbulence may be near at hand, partly because the power and storage of 'supermicrocomputer' workstations are finally becoming adequate for the demanding numerics of these procedures.

  3. On the classical and quantum integrability of systems of resonant oscillators

    NASA Astrophysics Data System (ADS)

    Marino, Massimo

    2017-01-01

    We study in this paper systems of harmonic oscillators with resonant frequencies. For these systems we present general procedures for the construction of sets of functionally independent constants of motion, which can be used for the definition of generalized action-angle variables, in accordance with the general description of degenerate integrable systems presented by Nekhoroshev in a seminal 1972 paper. We then apply to these classical integrable systems the quantization procedure proposed to the author by Nekhoroshev during his last years of activity at Milan University. This procedure is based on the construction of linear operators by means of the symmetrization of the classical constants of motion mentioned above. For 3 oscillators with resonance 1:1:2, by using a computer program we have discovered an exceptional integrable system which cannot be obtained with the standard methods based on the obvious symmetries of the Hamiltonian function. In this exceptional case, quantum integrability can be realized only by means of a modification of the symmetrization procedure.

  4. Computer-assisted preoperative simulation for positioning and fixation of plate in 2-stage procedure combining maxillary advancement by distraction technique and mandibular setback surgery.

    PubMed

    Suenaga, Hideyuki; Taniguchi, Asako; Yonenaga, Kazumichi; Hoshi, Kazuto; Takato, Tsuyoshi

    2016-01-01

    Computer-assisted preoperative simulation surgery is employed to plan and interact with 3D images during orthognathic procedures. It is useful for the positioning and fixation of the maxilla by a plate. We report a case of maxillary retrusion due to bilateral cleft lip and palate, in which a 2-stage orthognathic procedure (maxillary advancement by distraction technique and mandibular setback surgery) was performed following computer-assisted preoperative simulation planning to achieve positioning and fixation of the plate. High accuracy was achieved in the present case. A 21-year-old male patient presented to our department with a complaint of maxillary retrusion following bilateral cleft lip and palate. A computer-assisted preoperative simulation with a 2-stage orthognathic procedure using the distraction technique and mandibular setback surgery was planned. The preoperative planning of the procedure resulted in good aesthetic outcomes. The error of the maxillary position was less than 1 mm. The implementation of computer-assisted preoperative simulation for the positioning and fixation of the plate in a 2-stage orthognathic procedure using the distraction technique and mandibular setback surgery yielded good results.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demeure, I.M.

    The research presented here is concerned with representation techniques and tools to support the design, prototyping, simulation, and evaluation of message-based parallel, distributed computations. The author describes ParaDiGM (Parallel, Distributed computation Graph Model), a visual representation technique for parallel, message-based distributed computations. ParaDiGM provides several views of a computation depending on the aspect of concern. It is made of two complementary submodels: the DCPG (Distributed Computing Precedence Graph) model and the PAM (Process Architecture Model) model. DCPGs are precedence graphs used to express the functionality of a computation in terms of tasks, message-passing, and data. PAM graphs are used to represent the partitioning of a computation into schedulable units or processes, and the pattern of communication among those units. There is a natural mapping between the two models. He illustrates the utility of ParaDiGM as a representation technique by applying it to various computations (e.g., an adaptive global optimization algorithm, the client-server model). ParaDiGM representations are concise. They can be used in documenting the design and the implementation of parallel, distributed computations, in describing such computations to colleagues, and in comparing and contrasting various implementations of the same computation. He then describes VISA (VISual Assistant), a software tool to support the design, prototyping, and simulation of message-based parallel, distributed computations. VISA is based on the ParaDiGM model. In particular, it supports the editing of ParaDiGM graphs to describe the computations of interest, and the animation of these graphs to provide visual feedback during simulations. The graphs are supplemented with various attributes, simulation parameters, and interpretations, which are procedures that can be executed by VISA.

  6. Interactive Simulated Patient: Experiences with Collaborative E-Learning in Medicine

    ERIC Educational Resources Information Center

    Bergin, Rolf; Youngblood, Patricia; Ayers, Mary K.; Boberg, Jonas; Bolander, Klara; Courteille, Olivier; Dev, Parvati; Hindbeck, Hans; Edward, Leonard E., II; Stringer, Jennifer R.; Thalme, Anders; Fors, Uno G. H.

    2003-01-01

    Interactive Simulated Patient (ISP) is a computer-based simulation tool designed to provide medical students with the opportunity to practice their clinical problem solving skills. The ISP system allows students to perform most clinical decision-making procedures in a simulated environment, including history taking in natural language, many…

  7. Computer Mediated Communication and University International Students

    ERIC Educational Resources Information Center

    Robbins, Nancy; Lo, Yen-Hai; Hou, Feng-Heiung; Chou, Tsai-Sheng; Chen, Chin-Hung; Chen, Chao-Chien; Chen, Wen-Chiang; Chen, Yen-Chuan; Wang, Shih-Jen; Huang, Shih-Yu; Lii, Jong-Yiing

    2002-01-01

    The design for this preliminary study was based on surveying and interviewing the international students and faculty members of a small southwestern university. The data collection procedure blends qualitative and quantitative data. A strong consensus was found that supports the study's premise that there is an association…

  8. Guidance on Software Maintenance. Final Report. Reports on Computer Science and Technology.

    ERIC Educational Resources Information Center

    Martin, Roger J.; Osborne, Wilma M.

    Based on informal discussions with personnel at selected federal agencies and private sector organizations and on additional research, this publication addresses issues and problems of software maintenance and suggests actions and procedures which can help software maintenance organizations meet the growing demands of maintaining existing systems.…

  9. The Use of Screencasting to Transform Traditional Pedagogy in a Preservice Mathematics Content Course

    ERIC Educational Resources Information Center

    Guerrero, Shannon; Baumgartel, Drew; Zobott, Maren

    2013-01-01

    Screencasting, or digital recordings of computer screen outputs, can be used to promote pedagogical transformation in the mathematics classroom by moving explicit, procedural-based instruction to the online environment, thus freeing classroom time for more student-centered investigations, problem solving, communication, and collaboration. This…

  10. The Construction of an Ontology-Based Ubiquitous Learning Grid

    ERIC Educational Resources Information Center

    Liao, Ching-Jung; Chou, Chien-Chih; Yang, Jin-Tan David

    2009-01-01

    The purpose of this study is to incorporate adaptive ontology into ubiquitous learning grid to achieve seamless learning environment. Ubiquitous learning grid uses ubiquitous computing environment to infer and determine the most adaptive learning contents and procedures in anytime, any place and with any device. To achieve the goal, an…

  11. Negotiation Performance: Antecedents, Outcomes, and Training Recommendations

    DTIC Science & Technology

    2011-10-01

    Training approaches considered include tutorials, cognitive apprenticeships, instructional conversations, independent programmed instruction, and computer-based instruction. The report addresses procedural knowledge, as well as the more distal antecedents of individual difference variables (e.g., cognitive ability, personality) and psychological processes (e.g., cognitive, motivational, and emotional).

  12. Prose Style and Critical Reading.

    ERIC Educational Resources Information Center

    Cluett, Robert

    This book is based on the York Computer Inventory of Prose Style, which seeks to provide a quantitative description of the syntactic characteristics of the literary language of specific authors over the last 400 years. After a brief theoretical introduction and a description of texts and sampling procedures, the discussion turns to specific…

  13. The Effect of Password Management Procedures on the Entropy of User Selected Passwords

    ERIC Educational Resources Information Center

    Enamait, John D.

    2012-01-01

    Maintaining the security of information contained within computer systems poses challenges for users and administrators. Attacks on information systems continue to rise. Specifically, attacks that target user authentication are increasingly popular. These attacks are based on the common perception that traditional alphanumeric passwords are weak…

  14. Quest: The Interactive Test Analysis System.

    ERIC Educational Resources Information Center

    Adams, Raymond J.; Khoo, Siek-Toon

    The Quest program offers a comprehensive test and questionnaire analysis environment by providing a data analyst (a computer program) with access to the most recent developments in Rasch measurement theory, as well as a range of traditional analysis procedures. This manual helps the user apply Quest to construct and validate variables based on…

  15. Procedural research for reporting of material test data using computer systems : the MATT System : Volume 1 : user manual.

    DOT National Transportation Integrated Search

    1978-08-01

    The design and development of the MATT System was accomplished through a task-group approach. These task groups (one each for soil and base course, concrete, and hot mix) consisted of personnel from the Construction, Materials, Research and Developme...

  16. Computer Aided Segmentation Analysis: New Software for College Admissions Marketing.

    ERIC Educational Resources Information Center

    Lay, Robert S.; Maguire, John J.

    1983-01-01

    Compares segmentation solutions obtained using a binary segmentation algorithm (THAID) and a new chi-square-based procedure (CHAID) that segments the prospective pool of college applicants using application and matriculation as criteria. Results showed a higher number of estimated qualified inquiries and more accurate estimates with CHAID. (JAC)

  17. Parallel Computer System for 3D Visualization Stereo on GPU

    NASA Astrophysics Data System (ADS)

    Al-Oraiqat, Anas M.; Zori, Sergii A.

    2018-03-01

    This paper proposes the organization of a parallel computer system based on Graphics Processing Units (GPUs) for 3D stereo image synthesis. The development is based on the modified ray tracing method developed by the authors for fast search of ray intersections with scene objects. The system allows a significant increase in productivity for 3D stereo synthesis of photorealistic quality. A generalized procedure for 3D stereo image synthesis on the Graphics Processing Unit/Graphics Processing Clusters (GPU/GPC) is proposed. The efficiency of the proposed solutions in the GPU implementation is compared with single-threaded and multithreaded implementations on the CPU. The achieved average acceleration over the multithreaded implementation on the test GPU and CPU is about 7.5 and 1.6 times, respectively. Studying the influence of the size and configuration of the Compute Unified Device Architecture (CUDA) computational grid on computational speed shows the importance of their correct selection. The obtained experimental estimates can be significantly improved by new GPUs with a larger number of processing cores and multiprocessors, as well as an optimized configuration of the CUDA computing grid.

  18. Development and application of unified algorithms for problems in computational science

    NASA Technical Reports Server (NTRS)

    Shankar, Vijaya; Chakravarthy, Sukumar

    1987-01-01

    A framework is presented for developing computationally unified numerical algorithms for solving nonlinear equations that arise in modeling various problems in mathematical physics. The concept of computational unification is an attempt to encompass efficient solution procedures for computing various nonlinear phenomena that may occur in a given problem. For example, in Computational Fluid Dynamics (CFD), a unified algorithm is one that allows for solutions to subsonic (elliptic), transonic (mixed elliptic-hyperbolic), and supersonic (hyperbolic) flows for both steady and unsteady problems. The objectives are: development of superior unified algorithms emphasizing accuracy and efficiency aspects; development of codes based on selected algorithms leading to validation; application of mature codes to realistic problems; and extension/application of CFD-based algorithms to problems in other areas of mathematical physics. The ultimate objective is to achieve integration of multidisciplinary technologies to enhance synergism in the design process through computational simulation. Specific unified algorithms are presented for a hierarchy of gas dynamics equations, together with their applications to two other areas: electromagnetic scattering, and laser-materials interaction accounting for melting.

  19. Finite-difference solution for turbulent swirling compressible flow in axisymmetric ducts with struts

    NASA Technical Reports Server (NTRS)

    Anderson, O. L.

    1974-01-01

    A finite-difference procedure for computing the turbulent, swirling, compressible flow in axisymmetric ducts is described. Arbitrary distributions of heat and mass transfer at the boundaries can be treated, and the effects of struts, inlet guide vanes, and flow straightening vanes can be calculated. The calculation procedure is programmed in FORTRAN 4 and has operated successfully on the UNIVAC 1108, IBM 360, and CDC 6600 computers. The analysis which forms the basis of the procedure, a detailed description of the computer program, and the input/output formats are presented. The results of sample calculations performed with the computer program are compared with experimental data.

  20. Zero side force volute development

    NASA Technical Reports Server (NTRS)

    Anderson, P. G.; Franz, R. J.; Farmer, R. C.; Chen, Y. S.

    1995-01-01

    Collector scrolls on high-performance centrifugal pumps are currently designed with methods which are based on very approximate flowfield models. Such design practices result in some volute configurations causing excessive side loads even at design flowrates. The purpose of this study was to develop and verify computational design tools which may be used to optimize volute configurations with respect to avoiding excessive loads on the bearings. The new design methodology consisted of a volute grid generation module and a computational fluid dynamics (CFD) module to describe the volute geometry and predict the radial forces for a given flow condition, respectively. Initially, the CFD module was used to predict the impeller and volute flowfields simultaneously; however, the required computation time was found to be excessive for parametric design studies. A second computational procedure was therefore developed which utilized an analytical impeller flowfield model and an ordinary differential equation describing the impeller/volute coupling, obtained from the literature (Adkins & Brennen, 1988). The second procedure resulted in a 20- to 30-fold increase in computational speed per analysis. The volute design analysis was validated by postulating a volute geometry, constructing a volute to this configuration, and measuring the steady radial forces over a range of flow coefficients. Excellent agreement between model predictions and observed pump operation proves the computational impeller/volute pump model to be a valuable design tool. Further applications are recommended to fully establish the benefits of this new methodology.

  1. Alternative Computer Access for Young Handicapped Children: A Systematic Selection Procedure.

    ERIC Educational Resources Information Center

    Morris, Karen J.

    The paper describes the type of computer access products appropriate for use by handicapped children and presents a systematic procedure for selection of such input and output devices. Modification of computer input is accomplished by three strategies: modifying the keyboard, adding alternative keyboards, and attaching switches to the keyboard.…

  2. Computational procedure for finite difference solution of one-dimensional heat conduction problems reduces computer time

    NASA Technical Reports Server (NTRS)

    Iida, H. T.

    1966-01-01

    Computational procedure reduces the numerical effort whenever the method of finite differences is used to solve ablation problems for which the surface recession is large relative to the initial slab thickness. The number of numerical operations required for a given maximum space mesh size is reduced.
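
    For orientation, the baseline explicit finite-difference update for one-dimensional heat conduction that such a procedure starts from is sketched below; the report's actual contribution, reducing operations when the surface recedes, is not reproduced, and all parameters are illustrative.

        import numpy as np

        alpha = 1e-5                  # thermal diffusivity, m^2/s
        dx = 1e-3                     # mesh spacing, m
        dt = 0.2 * dx**2 / alpha      # safely below the dt <= dx^2/(2*alpha) limit
        r = alpha * dt / dx**2        # = 0.2

        T = np.full(50, 300.0)        # initial slab temperature, K
        for _ in range(2000):         # 40 s of heating at the left face
            T[0], T[-1] = 1000.0, 300.0                   # fixed boundaries
            T[1:-1] += r * (T[2:] - 2.0 * T[1:-1] + T[:-2])

        print(T[:10].round(1))        # temperature profile near the heated face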

  3. Training femoral neck screw insertion skills to surgical trainees: computer-assisted surgery versus conventional fluoroscopic technique.

    PubMed

    Nousiainen, Markku T; Omoto, Daniel M; Zingg, Patrick O; Weil, Yoram A; Mardam-Bey, Sami W; Eward, William C

    2013-02-01

    Femoral neck fractures are among the most common orthopaedic injuries impacting the health care system. Surgical management of such fractures with cannulated screws is a commonly performed procedure. The acquisition of the surgical skills necessary to perform this procedure typically involves learning on real patients with fluoroscopic guidance. This study attempts to determine whether a novel computer-navigated training model improves the learning of this basic surgical skill. A multicenter, prospective, randomized, and controlled study was conducted using surgical trainees with no prior experience in surgically managing femoral neck fractures. After a training session, participants underwent a pretest by performing the surgical task (screw placement) on a simulated hip fracture using fluoroscopic guidance. Immediately after, participants were randomized into undergoing a training session using either conventional fluoroscopy or computer-based navigation. Immediate posttest, retention (4 weeks later), and transfer tests were performed. Performance during the tests was determined by radiographic analysis of hardware placement. Screw placement by trainees was ultimately equal to the level of an expert surgeon with either training technique. Participants who trained with computer navigation took fewer attempts to position hardware and used less fluoroscopy time than those trained with fluoroscopy. When those trained with fluoroscopy used computer navigation at the transfer test, less fluoroscopy time and dosage were used. The concurrent augmented feedback provided by computer navigation did not affect the learning of this basic surgical skill in surgical novices. No compromise in learning occurred if the surgical novice trained with one type of technology and transferred to using the other. The findings of this study suggest that computer navigation may be safely used to train surgical novices in a basic procedure. This model avoids using both live patients and harmful radiation without compromising the acquisition of a 3-dimensional technical skill.

  4. Three-Dimensional Viscous Alternating Direction Implicit Algorithm and Strategies for Shape Optimization

    NASA Technical Reports Server (NTRS)

    Pandya, Mohagna J.; Baysal, Oktay

    1997-01-01

    A gradient-based shape optimization using quasi-analytical sensitivities has been extended for practical three-dimensional aerodynamic applications. The flow analysis has been rendered by a fully implicit, finite-volume formulation of the Euler and Thin-Layer Navier-Stokes (TLNS) equations. Initially, the viscous laminar flow analysis for a wing was compared with an independent computational fluid dynamics (CFD) code which has been extensively validated. The new procedure has been demonstrated in the design of a cranked arrow wing at Mach 2.4, with coarse- and fine-grid-based computations performed with the Euler and TLNS equations. The influence of the initial constraints on the geometry and aerodynamics of the optimized shape has been explored. Various final shapes generated for an identical initial problem formulation, but with different optimization path options (coarse or fine grid, Euler or TLNS), have been aerodynamically evaluated via a common fine-grid TLNS-based analysis. The initial constraint conditions have a significant bearing on the optimization results. The results also demonstrate that, to produce an aerodynamically efficient design, it is imperative to include the viscous physics in the optimization procedure with the proper resolution. Based upon the present results, to better utilize scarce computational resources, it is recommended that a number of viscous coarse-grid cases, using either a preconditioned bi-conjugate gradient (PbCG) or an alternating-direction-implicit (ADI) method, initially be employed to improve the optimization problem definition, the design space, and the initial shape. Optimized shapes should subsequently be analyzed using a high-fidelity (viscous, fine-grid resolution) flow analysis to evaluate their true performance potential. Finally, a viscous fine-grid-based shape optimization should be conducted, using an ADI method, to accurately obtain the final optimized shape.

  5. Modeling of fatigue crack induced nonlinear ultrasonics using a highly parallelized explicit local interaction simulation approach

    NASA Astrophysics Data System (ADS)

    Shen, Yanfeng; Cesnik, Carlos E. S.

    2016-04-01

    This paper presents a parallelized modeling technique for the efficient simulation of the nonlinear ultrasonics introduced by wave interaction with fatigue cracks. The elastodynamic wave equations with contact effects are formulated using an explicit Local Interaction Simulation Approach (LISA). The LISA formulation is extended to capture the contact-impact phenomena during the wave-damage interaction based on the penalty method. A Coulomb friction model is integrated into the computation procedure to capture stick-slip contact shear motion. The LISA procedure is coded using the Compute Unified Device Architecture (CUDA), which enables highly parallelized supercomputing on powerful graphics cards. Both the explicit contact formulation and the parallel implementation give LISA superb computational efficiency over the conventional finite element method (FEM). The theoretical formulations based on the penalty method are introduced, and a guideline for the proper choice of the contact stiffness is given. The convergence behavior of the solution under various contact stiffness values is examined. A numerical benchmark problem is used to investigate the new LISA formulation, and results are compared with a conventional contact finite element solution. Various nonlinear ultrasonic phenomena are successfully captured using this contact LISA formulation, including the generation of nonlinear higher harmonic responses. Nonlinear mode conversion of guided waves at fatigue cracks is also studied.
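
    The contact ingredients named above, a penalty normal force plus Coulomb stick-slip limiting of the transmitted shear, can be sketched per interface point as follows; the stiffness, friction coefficient, and loads are illustrative, not from the paper.

        import numpy as np

        def contact_forces(u_top, u_bot, gap0, k_pen, mu, f_shear):
            # Penalty-method contact at a crack interface: a restoring normal
            # force proportional to interpenetration, plus Coulomb stick-slip
            # limiting of the transmitted shear.
            gap = gap0 + (u_top - u_bot)          # current opening
            pen = np.maximum(0.0, -gap)           # interpenetration depth
            f_normal = k_pen * pen                # penalty push-back
            f_tang = np.clip(f_shear, -mu * f_normal, mu * f_normal)
            return f_normal, f_tang

        # A closed crack (negative gap) transmits shear only up to the
        # friction cone: here 5e5 is clipped to mu * f_normal = 3e5.
        print(contact_forces(u_top=-1e-8, u_bot=0.0, gap0=0.0,
                             k_pen=1e14, mu=0.3, f_shear=5e5))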

  6. On-line range images registration with GPGPU

    NASA Astrophysics Data System (ADS)

    Będkowski, J.; Naruniec, J.

    2013-03-01

    This paper concerns the implementation of algorithms for two important aspects of modern 3D data processing: data registration and segmentation. The solution proposed for the first is based on 3D space decomposition, while the latter is based on image processing and local neighbourhood search. Data processing is implemented using NVIDIA Compute Unified Device Architecture (CUDA) parallel computation. The result of the segmentation is a coloured map where different colours correspond to different objects, such as walls, floor and stairs. The research is related to the problem of collecting 3D data with an RGB-D camera mounted on a rotated head, to be used in mobile robot applications. The data registration algorithm is aimed at on-line processing. The iterative closest point (ICP) approach is chosen as the registration method. Computations are based on a parallel fast nearest-neighbour search. This procedure decomposes 3D space into cubic buckets and, therefore, the time of the matching is deterministic. The first segmentation technique uses accelerometers integrated with the RGB-D sensor to obtain rotation compensation and an image processing method for defining prerequisites of the known categories. The second technique uses the adapted nearest-neighbour search procedure for obtaining normal vectors for each range point.
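
    The deterministic-time matching step can be sketched with a cubic-bucket decomposition: points are hashed into cells and a query scans only the 27 adjacent buckets. This assumes the true neighbour lies within one cell of the query; the class and parameters are invented for the example (the paper's version runs in parallel on the GPU).

        import numpy as np
        from collections import defaultdict

        class BucketGrid:
            def __init__(self, points, cell):
                self.cell, self.points = cell, points
                self.buckets = defaultdict(list)
                for i, p in enumerate(points):
                    self.buckets[tuple((p // cell).astype(int))].append(i)

            def nearest(self, q):
                # Scan the 27 buckets around the query cell; returns -1 if
                # no point lies within that neighbourhood.
                best, best_d = -1, np.inf
                key = np.array((q // self.cell).astype(int))
                for dk in np.ndindex(3, 3, 3):
                    for i in self.buckets.get(tuple(key + np.array(dk) - 1), []):
                        d = np.sum((self.points[i] - q) ** 2)
                        if d < best_d:
                            best, best_d = i, d
                return best

        rng = np.random.default_rng(5)
        cloud = rng.uniform(0, 1, (10_000, 3))
        grid = BucketGrid(cloud, cell=0.05)
        print(grid.nearest(np.array([0.5, 0.5, 0.5])))   # index of closest point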

  8. Acceleration of Image Segmentation Algorithm for (Breast) Mammogram Images Using High-Performance Reconfigurable Dataflow Computers.

    PubMed

    Milankovic, Ivan L; Mijailovic, Nikola V; Filipovic, Nenad D; Peulic, Aleksandar S

    2017-01-01

    Image segmentation is one of the most common procedures in medical imaging applications. It is also a very important task in breast cancer detection. The breast cancer detection procedure based on mammography can be divided into several stages. The first stage is the extraction of the region of interest from a breast image, followed by the identification of suspicious mass regions, their classification, and comparison with the existing image database. It is often the case that existing image databases hold large sets of data whose processing requires a lot of time, and thus the acceleration of each of the processing stages in breast cancer detection is a very important issue. In this paper, an implementation of an existing algorithm for region-of-interest-based image segmentation of mammogram images on High-Performance Reconfigurable Dataflow Computers (HPRDCs) is proposed. As the dataflow engine (DFE) of such an HPRDC, Maxeler's acceleration card is used. Experiments examining the acceleration of that algorithm on Reconfigurable Dataflow Computers (RDCs) were performed with two types of mammogram images of different resolutions. There were also several DFE configurations, and each of them gave a different acceleration of algorithm execution. Those acceleration values are presented, and the experimental results showed good acceleration.

  9. Computer aided fixture design - A case based approach

    NASA Astrophysics Data System (ADS)

    Tanji, Shekhar; Raiker, Saiesh; Mathew, Arun Tom

    2017-11-01

    Automated fixture design plays an important role in process planning and in the integration of CAD and CAM. An automated fixture setup design system is developed in which, once fixturing surfaces and points are described, modular fixture components are automatically selected to generate fixture units and placed into position subject to the assembly conditions. In the past, various knowledge-based systems have been developed to implement CAFD in practice. In this paper, to obtain an acceptable automated machining fixture design, a case-based reasoning method with a dedicated retrieval system is proposed. The Visual Basic (VB) programming language is used, integrated with the SolidWorks API (Application Programming Interface) module, to improve the retrieval procedure and reduce computational time. These properties are incorporated in a numerical simulation to determine the best fit for practical use.
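
    A minimal sketch of the retrieval idea: each stored fixture case is described by a weighted feature vector and the most similar past case is returned. The features, weights, and case names are hypothetical; the system described above retrieves SolidWorks models through the API.

        import numpy as np

        # Hypothetical case base: [workpiece size mm, locating faces, clamps].
        cases = {
            "plate-fixture":   np.array([200.0, 3, 1]),
            "shaft-fixture":   np.array([80.0, 2, 2]),
            "bracket-fixture": np.array([120.0, 4, 2]),
        }
        weights = np.array([0.01, 1.0, 1.0])   # bring features to one scale

        def retrieve(query):
            # Return the stored case with the smallest weighted distance.
            dist = {name: np.linalg.norm((feat - query) * weights)
                    for name, feat in cases.items()}
            return min(dist, key=dist.get)

        print(retrieve(np.array([110.0, 4, 2])))   # -> bracket-fixture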

  10. HEP Computing

    Science.gov Websites

  11. Circus: A Replicated Procedure Call Facility

    DTIC Science & Technology

    1984-08-01

  12. Automated particle correspondence and accurate tilt-axis detection in tilted-image pairs

    DOE PAGES

    Shatsky, Maxim; Arbelaez, Pablo; Han, Bong-Gyoon; ...

    2014-07-01

    Tilted electron microscope images are routinely collected for ab initio structure reconstruction as a part of the Random Conical Tilt (RCT) or Orthogonal Tilt Reconstruction (OTR) methods, as well as for various applications using the "free-hand" procedure. These procedures all require identification of particle pairs in two corresponding images as well as accurate estimation of the tilt axis used to rotate the electron microscope (EM) grid. Here we present a computational approach, PCT (particle correspondence from tilted pairs), based on tilt-invariant context and projection matching, that addresses both problems. The method benefits from treating the two problems as a single optimization task. It automatically finds corresponding particle pairs and accurately computes the tilt-axis direction even in cases when the EM grid is not perfectly planar.

  13. A comparative study of two codes with an improved two-equation turbulence model for predicting jet plumes

    NASA Technical Reports Server (NTRS)

    Balakrishnan, L.; Abdol-Hamid, Khaled S.

    1992-01-01

    Compressible jet plumes were studied using a two-equation turbulence model. A space marching procedure based on an upwind numerical scheme was used to solve the governing equations and turbulence transport equations. The computed results indicate that extending the space marching procedure for solving supersonic/subsonic mixing problems can be stable, efficient and accurate. Moreover, a newly developed correction for compressible dissipation has been verified in fully expanded and underexpanded jet plumes. For a sonic jet plume, no improvement in results over the standard two-equation model was seen. However for a supersonic jet plume, the correction due to compressible dissipation successfully predicted the reduced spreading rate of the jet compared to the sonic case. The computed results were generally in good agreement with the experimental data.

  14. Advances in PCR technology.

    PubMed

    Lauerman, Lloyd H

    2004-12-01

    Since the discovery of the polymerase chain reaction (PCR) 20 years ago, an avalanche of scientific publications has reported major developments and changes in specialized equipment, reagents, sample preparation, computer programs and techniques, generated through business, government and university research. The requirement for genetic sequences for primer selection and validation has been greatly facilitated by the development of new sequencing techniques, machines and computer programs. Genetic libraries, such as GenBank, EMBL and DDBJ, continue to accumulate a wealth of genetic sequence information for the development and validation of molecular-based diagnostic procedures concerning human and veterinary disease agents. The mechanization of various aspects of the PCR assay, such as robotics, microfluidics and nanotechnology, has made possible the rapid development of new procedures. Real-time PCR, DNA microarrays and DNA chips utilize these newer techniques in conjunction with computers and computer programs. Instruments for hand-held PCR assays are being developed. The PCR and reverse transcription-PCR (RT-PCR) assays have greatly accelerated the speed and accuracy of diagnosis of human and animal disease, especially for infectious agents that are difficult to isolate or demonstrate.

  15. On some Aitken-like acceleration of the Schwarz method

    NASA Astrophysics Data System (ADS)

    Garbey, M.; Tromeur-Dervout, D.

    2002-12-01

    In this paper we present a family of domain decomposition methods based on Aitken-like acceleration of the Schwarz method, seen as an iterative procedure with a linear rate of convergence. We first present the so-called Aitken-Schwarz procedure for linear differential operators. The solver can be a direct solver when applied to the Helmholtz problem with a five-point finite difference scheme on regular grids. We then introduce the Steffensen-Schwarz variant, an iterative domain decomposition solver that can be applied to linear and nonlinear problems. We show that these solvers have reasonable numerical efficiency compared to classical fast solvers for the Poisson problem or to multigrid for more general linear and nonlinear elliptic problems. However, the salient feature of our method is that the algorithm has high tolerance to slow networks in the context of distributed parallel computing and is attractive, generally speaking, for use with computer architectures whose performance is limited by memory bandwidth rather than by the flop performance of the CPU. This is nowadays the case for most parallel computers using the RISC processor architecture.
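
    The underlying acceleration idea is Aitken's delta-squared extrapolation, which is exact for a purely linear convergence rate. The scalar sketch below uses an invented fixed-point iteration; the paper applies the idea componentwise to the interface traces of the Schwarz iterates.

        def aitken(seq):
            # Aitken delta-squared extrapolation of a linearly converging
            # sequence: x - (dx)^2 / d2x for each consecutive triple.
            return [x0 - (x1 - x0) ** 2 / (x2 - 2 * x1 + x0)
                    for x0, x1, x2 in zip(seq, seq[1:], seq[2:])]

        # Fixed-point iterates x_{n+1} = 0.5*x_n + 1 converge linearly to 2.
        xs = [0.0]
        for _ in range(8):
            xs.append(0.5 * xs[-1] + 1.0)
        print(xs[-1])            # ~1.9922 after 8 plain iterations
        print(aitken(xs)[-1])    # 2.0: Aitken is exact for a pure linear rate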

  16. Energy Navigation: Simulation Evaluation and Benefit Analysis

    NASA Technical Reports Server (NTRS)

    Williams, David H.; Oseguera-Lohr, Rosa M.; Lewis, Elliot T.

    2011-01-01

    This paper presents results from two simulation studies investigating the use of advanced flight-deck-based energy navigation (ENAV) and conventional transport-category vertical navigation (VNAV) for conducting a descent through a busy terminal area using Continuous Descent Arrival (CDA) procedures. This research was part of the Low Noise Flight Procedures (LNFP) element within the Quiet Aircraft Technology (QAT) Project and the subsequent Airspace Super Density Operations (ASDO) research focus area of the Airspace Project. A piloted simulation study addressed development of flight guidance and supporting pilot and Air Traffic Control (ATC) procedures for high-density terminal operations. The procedures and charts were designed to be easy to understand, and to make it easy for the crew to make changes via the Flight Management Computer Control-Display Unit (FMC-CDU) to accommodate changes from ATC.

  17. Computer simulation of the metastatic progression.

    PubMed

    Wedemann, Gero; Bethge, Anja; Haustein, Volker; Schumacher, Udo

    2014-01-01

    A novel computer model based on a discrete event simulation procedure describes quantitatively the processes underlying the metastatic cascade. Analytical functions describe the size of the primary tumor and the metastases, while a rate function models the intravasation events of the primary tumor and metastases. Events describe the behavior of the malignant cells until the formation of new metastases. The results of the computer simulations are in quantitative agreement with clinical data determined from a patient with hepatocellular carcinoma in the liver. The model provides a more detailed view on the process than a conventional mathematical model. In particular, the implications of interventions on metastasis formation can be calculated.
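
    A minimal sketch of such a discrete event simulation, with an analytical (Gompertz) size function and a size-dependent Poisson intravasation process sampled by thinning; all parameters are illustrative, and metastases seed further metastases as in the cascade.

        import heapq
        import math
        import random

        random.seed(6)

        GROWTH, K = 0.006, 1e9   # Gompertz rate (1/day), capacity (cells)
        RATE_C = 1e-11           # intravasation events per cell per day
        HORIZON = 3650.0         # days simulated

        def size(age):
            # Analytical (Gompertz) tumour size at a given lesion age in days.
            return math.exp(math.log(K) * (1.0 - math.exp(-GROWTH * age)))

        def next_seed(now, birth):
            # Next seeding time of a lesion born at `birth`, by thinning a
            # nonhomogeneous Poisson process with rate RATE_C * size(age).
            t, rate_max = now, RATE_C * K
            while t < HORIZON:
                t += random.expovariate(rate_max)
                if random.random() < RATE_C * size(t - birth) / rate_max:
                    return t
            return None

        queue, mets = [], []                  # event queue of (time, parent birth)
        first = next_seed(0.0, 0.0)           # primary tumour founded at day 0
        if first is not None:
            heapq.heappush(queue, (first, 0.0))
        while queue:
            t, birth = heapq.heappop(queue)
            mets.append(t)                    # a metastasis is founded at time t
            for b in (birth, t):              # parent keeps seeding; child starts
                s = next_seed(t, b)
                if s is not None:
                    heapq.heappush(queue, (s, b))

        print(len(mets), "metastases within", HORIZON / 365, "years")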

  18. Numerical analysis of stiffened shells of revolution. Volume 2: Users' manual for STAR-02S - shell theory automated for rotational structures - 2 (statics), digital computer program

    NASA Technical Reports Server (NTRS)

    Svalbonas, V.

    1973-01-01

    A procedure for the structural analysis of stiffened shells of revolution is presented. A digital computer program based on the Love-Reissner first-order shell theory was developed. The program can analyze orthotropic thin shells of revolution subjected to unsymmetric distributed loading or concentrated line loads, as well as to thermal strains. The geometrical shapes of the shells that may be analyzed are described. The shell wall cross section can be a sheet, a sandwich, or a reinforced sheet or sandwich. General stiffness input options are also available.

  19. Numerical Simulation Of Cutting Of Gear Teeth

    NASA Technical Reports Server (NTRS)

    Oswald, Fred B.; Huston, Ronald L.; Mavriplis, Dimitrios

    1994-01-01

    The shapes of gear teeth produced by gear cutters of specified shape are simulated computationally, using an approach based on principles of differential geometry. The results of the computer simulation are displayed as computer graphics and/or used in analyses of the design, manufacture, and performance of gears. The method is applicable to both standard and non-standard gear-tooth forms, and it accelerates and facilitates the analysis of alternative designs of gears and cutters. The simulation has been extended to study the generation of surfaces other than gears and has been applied to cams, bearings, and surfaces of arbitrary rolling elements. It should be possible to develop analogous procedures for simulating the manufacture of skin surfaces such as automobile fenders, airfoils, and ship hulls.
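
    The kinematics behind such a simulation can be sketched numerically: sweep the cutter through the generating roll and remove every blank point it occupies, so that the surviving boundary approximates the envelope, i.e. the generated tooth flank. The sketch below does this for a trapezoidal rack cutter rolling on the pitch circle of a gear blank; the tooth count, module, pressure angle and sweep range are illustrative assumptions, and the published method's differential-geometry formulation is not reproduced here.

      import numpy as np

      Z, MODULE = 12, 1.0                 # tooth count and module (assumed values)
      RP = 0.5 * MODULE * Z               # pitch radius of the gear blank
      ADD, DED = MODULE, 1.25 * MODULE    # addendum / dedendum of the rack tooth
      PITCH = np.pi * MODULE              # circular pitch

      def rack_profile(x):
          """Rack surface height above (+) / below (-) its pitch line.

          Trapezoidal teeth with a 20-degree pressure angle; the cutter
          occupies all material on or above this curve.
          """
          u = (x / PITCH) % 1.0                      # position within one pitch
          tri = 2.0 * np.abs(u - 0.5) - 0.5          # triangle wave in [-0.5, 0.5]
          y = tri * PITCH / (2.0 * np.tan(np.radians(20)))
          return np.clip(y, -ADD, DED)

      # Polar grid of candidate blank points around one tooth of the gear.
      theta = np.linspace(-np.pi / Z, np.pi / Z, 200)
      r = np.linspace(RP - DED, RP + ADD, 120)
      TH, R = np.meshgrid(theta, r)
      X, Y = R * np.sin(TH), R * np.cos(TH)
      kept = np.ones_like(X, dtype=bool)

      # Generating roll: the gear rotates by t while the rack slides RP * t,
      # rolling without slipping at the pitch point (0, RP).
      for t in np.linspace(-0.8, 0.8, 400):
          c, s = np.cos(t), np.sin(t)
          xw, yw = c * X - s * Y, s * X + c * Y      # blank points in world frame
          xr, yr = xw - RP * t, yw - RP              # ... then in the rack frame
          kept &= yr < rack_profile(xr)              # cutter removes what it sweeps

      print(f"{kept.sum()} of {kept.size} grid points survive the cutting sweep")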

  20. A laboratory procedure for measuring and georeferencing soil colour

    NASA Astrophysics Data System (ADS)

    Marques-Mateu, A.; Balaguer-Puig, M.; Moreno-Ramon, H.; Ibanez-Asensio, S.

    2015-04-01

    Remote sensing and geospatial applications very often require ground-truth data to assess the outcomes of spatial analyses or environmental models. Those data sets, however, may be difficult to collect in a proper format or may even be unavailable. In the particular case of soil colour, the collection of reliable ground data can be cumbersome because of measurement methods, colour communication issues, and other practical factors, which together mean there is no standard procedure for measuring and georeferencing soil colour. In this paper we present a laboratory procedure that provides the colour coordinates of georeferenced soil samples, which are useful in later stages of soil mapping and classification from digital images. The procedure requires a laboratory setup consisting of a light booth and a trichromatic colorimeter, together with a computer program that performs colour measurement, storage, and colour space transformation tasks. Measurement tasks are automated by specific data-logging routines that store the recorded colour data in a spatial format. A key feature of the system is its ability to transform between physically based colour spaces and the Munsell system, which is still the standard in soil science. The design automates routine tasks wherever possible and avoids input mistakes through a carefully arranged user interface. The program readily manages colour and coordinate data, from which spatial data sets can be created. All the work of joining colorimeter measurements with sample locations is executed by the software in the background, allowing users to concentrate on processing samples. The result is a robust and fully functional computer-based procedure that has proven a very useful tool for sample classification and cataloguing as well as for integrating soil colour data with other remotely sensed and spatial data sets.
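
    The two computational steps named here, transforming colorimeter readings between colour spaces and joining them with sample coordinates, can be sketched as follows. The sketch converts CIE XYZ readings to CIELAB with the standard CIE 1976 formulas and appends each georeferenced measurement to a CSV file. The D65 reference white, the file layout, and the function names are illustrative assumptions; the Munsell conversion itself relies on renotation lookup tables and is omitted.

      import csv

      # D65 reference white (2-degree observer); the paper does not specify
      # the booth illuminant here, so this choice is an assumption.
      XN, YN, ZN = 95.047, 100.0, 108.883

      def xyz_to_lab(x, y, z):
          """CIE XYZ -> CIELAB using the standard CIE 1976 formulas."""
          def f(t):
              return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
          fx, fy, fz = f(x / XN), f(y / YN), f(z / ZN)
          return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

      def record_sample(path, sample_id, easting, northing, xyz):
          """Append one georeferenced colour measurement to a CSV 'spatial' file."""
          L, a, b = xyz_to_lab(*xyz)
          with open(path, "a", newline="") as fh:
              csv.writer(fh).writerow([sample_id, easting, northing,
                                       round(L, 2), round(a, 2), round(b, 2)])

      # Hypothetical usage: a colorimeter reading joined with surveyed coordinates.
      record_sample("soil_colour.csv", "S-001", 725432.1, 4372310.5, (21.3, 18.9, 12.4))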
