Sample records for computer pc pentium

  1. Performance Evaluation of Synthetic Benchmarks and Image Processing (IP) Kernels on Intel and PowerPC Processors

    DTIC Science & Technology

    2013-08-01

    [Garbled excerpt from a benchmark results table comparing the Pentium D (830), PowerPC 970MP, and Cell Broadband Engine under Linux, Windows XP, and Windows Vista; only the processor names, core counts, and raw scores survive in this snippet.]

  2. Use of off-the-shelf PC-based flight simulators for aviation human factors research.

    DOT National Transportation Integrated Search

    1996-04-01

    Flight simulation has historically been an expensive proposition, particularly if out-the-window views were desired. Advances in computer technology have allowed a modular, off-the-shelf flight simulation (based on 80486 processors or Pentiums) to be...

  3. High-performance software-only H.261 video compression on PC

    NASA Astrophysics Data System (ADS)

    Kasperovich, Leonid

    1996-03-01

    This paper describes an implementation of a software H.261 codec for the PC that takes advantage of the fast computational algorithms for DCT-based video compression presented by the author at the February 1995 SPIE/IS&T meeting. The motivation for developing the H.261 prototype system is to demonstrate the feasibility of a real-time, software-only videoconferencing solution operating across a wide range of network bandwidths, frame rates, and input video resolutions. As the bandwidth of network technology increases, video of higher frame rate and resolution can be transmitted, which in turn requires a software codec able to compress pictures of CIF (352 x 288) resolution at up to 30 frames/sec. Running on a 133 MHz Pentium PC, the codec presented is capable of compressing video in CIF format at 21 - 23 frames/sec. This result is comparable to known hardware-based H.261 solutions, but it does not require any specific hardware. The methods used to achieve high performance and the program optimization techniques for the Pentium microprocessor are presented, along with a performance profile showing the actual contribution of the different encoding/decoding stages to the overall computational process.
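
    The fast DCT algorithms the author references are not reproduced in this record, but the transform at the core of H.261 is the 8x8 two-dimensional DCT-II applied to each pixel block. Below is a minimal textbook sketch of that transform in Python/NumPy, for orientation only; it is not the paper's optimized implementation.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (n x n)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] /= np.sqrt(2)
    return C * np.sqrt(2.0 / n)

def dct2(block):
    """2D DCT-II of one square pixel block: C @ block @ C.T."""
    C = dct_matrix(block.shape[0])
    return C @ block @ C.T

block = np.random.randint(0, 256, (8, 8)).astype(float)
coeffs = dct2(block)  # energy concentrates in the low-frequency corner
```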

  4. a Linux PC Cluster for Lattice QCD with Exact Chiral Symmetry

    NASA Astrophysics Data System (ADS)

    Chiu, Ting-Wai; Hsieh, Tung-Han; Huang, Chao-Hsi; Huang, Tsung-Ren

    A computational system for lattice QCD with overlap Dirac quarks is described. The platform is a home-made Linux PC cluster, built with off-the-shelf components. At present the system consists of 64 nodes, with each node comprising one Pentium 4 processor (1.6/2.0/2.5 GHz), one Gbyte of PC800/1066 RDRAM, one 40/80/120 Gbyte hard disk, and a network card. The computationally intensive parts of our program are written in SSE2 code. The speed of our system is estimated to be 70 Gflops, and its price/performance ratio is better than $1.0/Mflops for 64-bit (double precision) computations in quenched QCD. We discuss how to optimize its hardware and software for computing propagators of overlap Dirac quarks.

  5. A computational system for lattice QCD with overlap Dirac quarks

    NASA Astrophysics Data System (ADS)

    Chiu, Ting-Wai; Hsieh, Tung-Han; Huang, Chao-Hsi; Huang, Tsung-Ren

    2003-05-01

    We outline the essential features of a Linux PC cluster which is now being developed at National Taiwan University, and discuss how to optimize its hardware and software for lattice QCD with overlap Dirac quarks. At present, the cluster consists of 30 nodes, with each node comprising one Pentium 4 processor (1.6/2.0 GHz), one Gbyte of PC800 RDRAM, one 40/80 Gbyte hard disk, and a network card. The speed of this system is estimated to be 30 Gflops, and its price/performance ratio is better than $1.0/Mflops for 64-bit (double precision) computations in quenched lattice QCD with overlap Dirac quarks.

  6. [Development of an original computer program FISHMet: use for molecular cytogenetic diagnosis and genome mapping by fluorescent in situ hybridization (FISH)].

    PubMed

    Iurov, Iu B; Khazatskiĭ, I A; Akindinov, V A; Dovgilov, L V; Kobrinskiĭ, B A; Vorsanova, S G

    2000-08-01

    The original software package FISHMet has been developed and tested for improving the efficiency of diagnosing hereditary diseases caused by chromosome aberrations and for chromosome mapping by the fluorescent in situ hybridization (FISH) method. The program allows creation and analysis of pseudocolor chromosome images and hybridization signals under Windows 95, supports computer analysis and editing of the results of pseudocolor hybridization in situ, including successive superposition of initial black-and-white images acquired using fluorescent filters (blue, green, and red), and permits editing of each image individually or of a summary pseudocolor image in BMP, TIFF, and JPEG formats. Components of the computer image analysis system (LOMO, Leitz Ortoplan, and Axioplan fluorescent microscopes; COHU 4910 and Sanyo VCB-3512P CCD cameras; Miro-Video, Scion LG-3, and VG-5 image capture boards; and Pentium 100 and Pentium 200 computers) and specialized software for image capture and visualization (Scion Image PC and Video-Cup) were used with good results in the study.

  7. Real-time image reconstruction and display system for MRI using a high-speed personal computer.

    PubMed

    Haishi, T; Kose, K

    1998-09-01

    A real-time NMR image reconstruction and display system was developed using a high-speed personal computer and optimized for the 32-bit multitasking Microsoft Windows 95 operating system. The system was operated at various CPU clock frequencies by changing the motherboard clock frequency and the processor/bus frequency ratio. When the Pentium CPU was used at a 200 MHz clock frequency, the reconstruction time for one 128 x 128 pixel image was 48 ms and that for the image display on the enlarged 256 x 256 pixel window was about 8 ms. NMR imaging experiments were performed with three fast imaging sequences (FLASH, multishot EPI, and one-shot EPI) to demonstrate the ability of the real-time system. It was concluded that in most cases, a high-speed PC would be the best choice for the image reconstruction and display system for real-time MRI. Copyright 1998 Academic Press.

  8. A Commodity Computing Cluster

    NASA Astrophysics Data System (ADS)

    Teuben, P. J.; Wolfire, M. G.; Pound, M. W.; Mundy, L. G.

    We have assembled a cluster of Intel Pentium-based PCs running Linux to compute a large set of Photodissociation Region (PDR) and Dust Continuum models. For various reasons the cluster is heterogeneous, currently ranging from a single Pentium-II 333 MHz machine to dual Pentium-III 450 MHz CPU machines. Although this will be sufficient for our "embarrassingly parallelizable" problem, it may present some challenges for as yet unplanned future use. In addition, the cluster was used to construct a MIRIAD benchmark, which was compared to equivalent Ultra-Sparc based workstations. Currently the cluster consists of 8 machines, 14 CPUs, 50 GB of disk space, and a total peak speed of 5.83 GHz, or about 1.5 Gflops. The total cost of this cluster has been about $12,000, including all cabling, networking equipment, rack, and a CD-R backup system. The URL for this project is http://dustem.astro.umd.edu.

  9. Long-range interactions and parallel scalability in molecular simulations

    NASA Astrophysics Data System (ADS)

    Patra, Michael; Hyvönen, Marja T.; Falck, Emma; Sabouri-Ghomi, Mohsen; Vattulainen, Ilpo; Karttunen, Mikko

    2007-01-01

    Typical biomolecular systems such as cellular membranes, DNA, and protein complexes are highly charged. Thus, efficient and accurate treatment of electrostatic interactions is of great importance in computational modeling of such systems. We have employed the GROMACS simulation package to perform extensive benchmarking of different commonly used electrostatic schemes on a range of computer architectures (Pentium-4, IBM Power 4, and Apple/IBM G5) for single-processor and parallel performance up to 8 nodes. We have also tested scalability on four different networks: Infiniband, GigaBit Ethernet, Fast Ethernet, and a nearly uniform memory architecture in which communication between CPUs is possible by directly reading from or writing to other CPUs' local memory. It turns out that the particle-mesh Ewald method (PME) performs surprisingly well and offers competitive performance unless parallel runs on PC hardware with older network infrastructure are needed. Lipid bilayers of 128, 512, and 2048 lipid molecules were used as test systems representing typical cases encountered in biomolecular simulations. Our results enable an accurate prediction of computational speed on most current computing systems, both for serial and parallel runs. These results should be helpful in, for example, choosing the most suitable configuration for a small departmental computer cluster.

  10. Wide-bandwidth high-resolution search for extraterrestrial intelligence

    NASA Technical Reports Server (NTRS)

    Horowitz, Paul

    1995-01-01

    Research was accomplished during the third year of the grant on: BETA architecture, an FFT array, a feature extractor, the Pentium array and workstation, and a radio astronomy spectrometer. The BETA (this SETI project) system architecture has been evolving generally in the direction of greater robustness against terrestrial interference. The new design adds a powerful state-memory feature, multiple simultaneous thresholds, and the ability to integrate multiple spectra in a flexible state-machine architecture. The FFT array is reported with regards to its hardware verification, array production, and control. The feature extractor is responsible for maintaining a moving baseline, recognizing large spectral peaks, following the progress of previously identified interesting spectral regions, and blocking signals from regions previously identified as containing interference. The Pentium array consists of 21 Pentium-based PC motherboards, each with 16 MByte of RAM and an Ethernet interface. Each motherboard receives and processes the data from a feature extractor/correlator board set, passing on the results of a first analysis to the central Unix workstation (through which each is also booted). The radio astronomy spectrometer is a technological spinoff from SETI work. It is proposed to be a combined spectrometer and power-accumulator, for use at Arecibo Observatory to search for neutral hydrogen emission from condensations of neutral hydrogen at high redshift (z = 5).

  11. Automated speech recognition for time recording in out-of-hospital emergency medicine-an experimental approach.

    PubMed

    Gröschel, J; Philipp, F; Skonetzki, St; Genzwürker, H; Wetter, Th; Ellinger, K

    2004-02-01

    Precise documentation of medical treatment in emergency medical missions and during resuscitation is essential from a medical, legal, and quality assurance point of view [Anästhesiologie und Intensivmedizin, 41 (2000) 737]. All conventional methods of time recording are either too inaccurate or too elaborate for routine application. Automated speech recognition may offer a solution. A special program for documenting all time events was developed. Standard speech recognition software (IBM ViaVoice 7.0) was adapted and installed on two different computer systems. One was a stationary PC (500 MHz Pentium III, 128 MB RAM, Soundblaster PCI 128 soundcard, Win NT 4.0); the other was a mobile pen-PC that had already proven its value during emergency missions [Der Notarzt 16, p. 177] (Fujitsu Stylistic 2300, 230 MHz MMX processor, 160 MB RAM, embedded soundcard ESS 1879 chipset, Win98 2nd ed.). Two different microphones were tested on both computers. One was the standard headset that came with the recognition software; the other was a small microphone (Lavalier-Kondensatormikrofon EM 116 from Vivanco) that could be attached to the operator's collar. Seven women and 15 men spoke a text with 29 phrases to be recognised. Two emergency physicians tested the system in a simulated emergency setting using the collar microphone and the pen-PC with an analogue wireless connection. Overall recognition was best for the PC with a headset (89%), followed by the pen-PC with a headset (85%), the PC with a microphone (84%), and the pen-PC with a microphone (80%); nevertheless, the differences were not statistically significant. Recognition became significantly worse (89.5% versus 82.3%, P<0.0001) when numbers had to be recognised. The gender of the speaker and the number of words in a sentence had no influence. Average recognition in the simulated emergency setting was 75%. At no time did false recognition appear. Time recording with automated speech recognition seems to be possible in emergency medical missions. Although the results show an average recognition of only 75%, it is possible that missing elements may be reconstructed more precisely. Future technology should integrate a secure wireless connection between the microphone and the mobile computer. The system could then prove its value in real out-of-hospital emergencies.

  12. Fine-grained parallel RNAalifold algorithm for RNA secondary structure prediction on FPGA

    PubMed Central

    Xia, Fei; Dou, Yong; Zhou, Xingming; Yang, Xuejun; Xu, Jiaqing; Zhang, Yang

    2009-01-01

    Background: In the field of RNA secondary structure prediction, the RNAalifold algorithm is one of the most popular methods using free energy minimization. However, general-purpose computers, including parallel computers or multi-core computers, exhibit a parallel efficiency of no more than 50%. Field-Programmable Gate Array (FPGA) chips provide a new approach to accelerate RNAalifold by exploiting fine-grained custom design. Results: RNAalifold shows complicated data dependences, in which the dependence distance is variable and the dependence direction also runs across two dimensions. We propose a systolic array structure including one master Processing Element (PE) and multiple slave PEs for fine-grained hardware implementation on FPGA. We exploit data reuse schemes to reduce the need to load energy matrices from external memory. We also propose several methods to reduce the energy table parameter size by 80%. Conclusion: To our knowledge, our implementation with 16 PEs is the only FPGA accelerator implementing the complete RNAalifold algorithm. The experimental results show a factor of 12.2 speedup over the RNAalifold (ViennaPackage 1.6.5) software for a group of aligned RNA sequences with 2981 residues, running on a Personal Computer (PC) platform with a Pentium 4 2.6 GHz CPU. PMID:19208138

  13. Recent advances in PC-Linux systems for electronic structure computations by optimized compilers and numerical libraries.

    PubMed

    Yu, Jen-Shiang K; Yu, Chin-Hui

    2002-01-01

    One of the most frequently used packages for electronic structure research, GAUSSIAN 98, is compiled on Linux systems with various hardware configurations, including AMD Athlon (with the "Thunderbird" core), AthlonMP, and AthlonXP (with the "Palomino" core) systems as well as the Intel Pentium 4 (with the "Willamette" core) machines. The default PGI FORTRAN compiler (pgf77) and the Intel FORTRAN compiler (ifc) are respectively employed with different architectural optimization options to compile GAUSSIAN 98 and test the performance improvement. In addition to the BLAS library included in revision A.11 of this package, the Automatically Tuned Linear Algebra Software (ATLAS) library is linked against the binary executables to improve the performance. Various Hartree-Fock, density-functional theories, and the MP2 calculations are done for benchmarking purposes. It is found that the combination of ifc with ATLAS library gives the best performance for GAUSSIAN 98 on all of these PC-Linux computers, including AMD and Intel CPUs. Even on AMD systems, the Intel FORTRAN compiler invariably produces binaries with better performance than pgf77. The enhancement provided by the ATLAS library is more significant for post-Hartree-Fock calculations. The performance on one single CPU is potentially as good as that on an Alpha 21264A workstation or an SGI supercomputer. The floating-point marks by SpecFP2000 have similar trends to the results of GAUSSIAN 98 package.

  14. Web interfaces to relational databases

    NASA Technical Reports Server (NTRS)

    Carlisle, W. H.

    1996-01-01

    This report describes a project to extend the capabilities of a Virtual Research Center (VRC) for NASA's Advanced Concepts Office. The work was performed as part of NASA's 1995 Summer Faculty Fellowship program and involved the development of a prototype component of the VRC - a database system that provides data creation and access services within a room of the VRC. In support of VRC development, NASA has assembled a laboratory containing the variety of equipment expected to be used by scientists within the VRC. This laboratory consists of the major hardware platforms (SUN, Intel, and Motorola processors) and their most common operating systems (UNIX, Windows NT, Windows for Workgroups, and Macintosh). The SPARC 20 runs SUN Solaris 2.4; an Intel Pentium runs Windows NT and is installed on a different network from the other machines in the laboratory; a Pentium PC runs Windows for Workgroups; two Intel 386 machines run Windows 3.1; and finally, a PowerMacintosh and a Macintosh IIsi run MacOS.

  15. A simple and sensitive method to measure timing accuracy.

    PubMed

    De Clercq, Armand; Crombez, Geert; Buysse, Ann; Roeyers, Herbert

    2003-02-01

    Timing accuracy in presenting experimental stimuli (visual information on a PC or on a TV) and responding (keyboard presses and mouse signals) is of importance in several experimental paradigms. In this article, a simple system for measuring timing accuracy is described. The system uses two PCs (at least Pentium II, 200 MHz), a photocell, and an amplifier. No additional boards and timing hardware are needed. The first PC, a SlavePC, monitors the keyboard presses or mouse signals from the PC under test and uses a photocell that is placed in front of the screen to detect the appearance of visual stimuli on the display. The software consists of a small program running on the SlavePC. The SlavePC is connected through a serial line with a second PC. This MasterPC controls the SlavePC through an ActiveX control, which is used in a Visual Basic program. The accuracy of our system was investigated by using a similar setup of a SlavePC and a MasterPC to generate pulses and by using a pulse generator card. These tests revealed that our system has a 0.01-msec accuracy. As an illustration, the reaction time accuracy of INQUISIT for a few applications was tested using our system. It was found that in those applications that we investigated, INQUISIT measures reaction times from keyboard presses with millisecond accuracy.
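
    The record describes a SlavePC that timestamps keyboard, mouse, and photocell events and reports them to a MasterPC over a serial line (the original software was a small program plus a Visual Basic/ActiveX control). As a loose modern analogue only, the sketch below reads timestamped event reports over a serial port in Python with pyserial; the port name, baud rate, and message format are assumptions, not details from the paper.

```python
import time
import serial  # pyserial

# Hypothetical MasterPC side: receive event reports from the SlavePC
# over a serial link and timestamp them on arrival.
with serial.Serial("/dev/ttyS0", baudrate=115200, timeout=1.0) as link:
    events = []
    while len(events) < 100:
        line = link.readline().decode("ascii", errors="replace").strip()
        if line:  # e.g. "PHOTOCELL" or "KEYPRESS" (assumed message format)
            events.append((time.perf_counter(), line))
```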

  16. The Linux operating system: An introduction

    NASA Technical Reports Server (NTRS)

    Bokhari, Shahid H.

    1995-01-01

    Linux is a Unix-like operating system for Intel 386/486/Pentium based IBM-PCs and compatibles. The kernel of this operating system was written from scratch by Linus Torvalds and, although copyrighted by the author, may be freely distributed. A world-wide group has collaborated in developing Linux on the Internet. Linux can run the powerful set of compilers and programming tools of the Free Software Foundation, and XFree86, a port of the X Window System from MIT. Most capabilities associated with high performance workstations, such as networking, shared file systems, electronic mail, TeX, LaTeX, etc. are freely available for Linux. It can thus transform cheap IBM-PC compatible machines into Unix workstations with considerable capabilities. The author explains how Linux may be obtained, installed and networked. He also describes some interesting applications for Linux that are freely available. The enormous consumer market for IBM-PC compatible machines continually drives down prices of CPU chips, memory, hard disks, CDROMs, etc. Linux can convert such machines into powerful workstations that can be used for teaching, research and software development. For professionals who use Unix based workstations at work, Linux permits virtually identical working environments on their personal home machines. For cost conscious educational institutions Linux can create world-class computing environments from cheap, easily maintained, PC clones. Finally, for university students, it provides an essentially cost-free path away from DOS into the world of Unix and X Windows.

  17. Analysis of EDP performance

    NASA Technical Reports Server (NTRS)

    1994-01-01

    The objective of this contract was the investigation of the potential performance gains that would result from an upgrade of the Space Station Freedom (SSF) Data Management System (DMS) Embedded Data Processor (EDP) '386' design with the Intel Pentium (registered trade-mark of Intel Corp.) '586' microprocessor. The Pentium ('586') is the latest member of the industry standard Intel X86 family of CISC (Complex Instruction Set Computer) microprocessors. This contract was scheduled to run in parallel with an internal IBM Federal Systems Company (FSC) Internal Research and Development (IR&D) task that had the goal to generate a baseline flight design for an upgraded EDP using the Pentium. This final report summarizes the activities performed in support of Contract NAS2-13758. Our plan was to baseline performance analyses and measurements on the latest state-of-the-art commercially available Pentium processor, representative of the proposed space station design, and then phase to an IBM capital funded breadboard version of the flight design (if available from IR&D and Space Station work) for additional evaluation of results. Unfortunately, the phase-over to the flight design breadboard did not take place, since the IBM Data Management System (DMS) for the Space Station Freedom was terminated by NASA before the referenced capital funded EDP breadboard could be completed. The baseline performance analyses and measurements, however, were successfully completed, as planned, on the commercial Pentium hardware. The results of those analyses, evaluations, and measurements are presented in this final report.

  18. Moment distributions of clusters and molecules in the adiabatic rotor model

    NASA Astrophysics Data System (ADS)

    Ballentine, G. E.; Bertsch, G. F.; Onishi, N.; Yabana, K.

    2008-01-01

    We present a Fortran program to compute the distribution of dipole moments of free particles for use in analyzing molecular beam experiments that measure moments by deflection in an inhomogeneous field. The theory is the same for magnetic and electric dipole moments, and is based on a thermal ensemble of classical particles that are free to rotate and that have moment vectors aligned along a principal axis of rotation. The theory has two parameters: the ratio of the magnetic (or electric) dipole energy to the thermal energy, and the ratio of moments of inertia of the rotor.

    Program summary
    Program title: AdiabaticRotor
    Catalogue identifier: ADZO_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZO_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 479
    No. of bytes in distributed program, including test data, etc.: 4853
    Distribution format: tar.gz
    Programming language: Fortran 90
    Computer: Pentium-IV, Macintosh Power PC G4
    Operating system: Linux, Mac OS X
    RAM: 600 Kbytes
    Word size: 64 bits
    Classification: 2.3
    Nature of problem: The system considered is a thermal ensemble of rotors having a magnetic or electric moment aligned along one of the principal axes. The ensemble is placed in an external field which is turned on adiabatically. The problem is to find the distribution of moments in the presence of the external field.
    Solution method: There are three adiabatic invariants. The only nontrivial one is the action associated with the polar angle of the rotor axis with respect to the external field. It is found by Newton's method.
    Running time: 3 min on a 3 GHz Pentium IV processor.
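
    The summary notes that the nontrivial adiabatic invariant is found by Newton's method. As a reminder of that technique only (the distributed program is Fortran 90 and is not reproduced here), a minimal root-finding sketch:

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Solve f(x) = 0 by Newton iteration: x <- x - f(x)/f'(x)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")

# Example: recover sqrt(2) as the root of x^2 - 2.
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
```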

  19. Modeling the GPR response of leaking, buried pipes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Powers, M.H.; Olhoeft, G.R.

    1996-11-01

    Using a 2.5D, dispersive, full-waveform GPR modeling program that generates complete GPR response profiles in minutes on a Pentium PC, the effects of leaking versus non-leaking buried pipes are examined. The program accounts for the dispersive, lossy nature of subsurface materials to GPR wave propagation, and accepts complex functions of dielectric permittivity and magnetic permeability versus frequency through Cole-Cole parameters fit to laboratory data. Steel and plastic pipes containing a DNAPL chlorinated solvent, an LNAPL hydrocarbon, and natural gas are modeled in a surrounding medium of wet, moist, and dry sand. Leaking fluids are found to be more detectable when the sand around the pipes is fully water saturated. The short runtimes of the modeling program and its execution on a PC make it a useful tool for exploring various subsurface models.

  20. Evaluation of the use of umbilical artery Doppler flow studies and outcome of pregnancies at a secondary hospital.

    PubMed

    Hugo, Elizabeth J C; Odendaal, Hein J; Grove, Debbie

    2007-03-01

    To investigate the use of a personal computer (PC)-based, continuous-wave Doppler machine by a trained midwife at a secondary hospital to assess umbilical artery flow velocity waveforms (FVW) in referred women. Pregnant women referred for suspected poor fetal growth were evaluated from June 2002 through December 2004. The Umbiflow apparatus, consisting of a Pentium 3 PC with an ultrasound transducer plugged into the USB port and software, was used to analyze the FVW of the umbilical artery. Pregnancies in which the resistance index (RI) was below the 75th percentile (P75) were not further evaluated for fetal well-being unless the clinical condition of the mother changed. Pregnancies with an RI >=P75 were followed up according to a specific protocol. Primary end points were intrauterine death and intrauterine growth restriction. A total of 572 singleton pregnancies were followed up. Significantly more infants were small-for-gestational-age when the RI was >P95 (55.6%) than when it was between P75 and P95 (41.2%). A normal Doppler FVW of the umbilical artery is less likely to be followed by perinatal death.
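
    For reference, the resistance index used above is the standard Pourcelot index computed from the umbilical artery flow velocity waveform (a textbook definition, not a detail specific to the Umbiflow software):

```latex
\mathrm{RI} = \frac{V_{\text{peak systolic}} - V_{\text{end diastolic}}}{V_{\text{peak systolic}}}
```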

  1. Efficient implementation of parallel three-dimensional FFT on clusters of PCs

    NASA Astrophysics Data System (ADS)

    Takahashi, Daisuke

    2003-05-01

    In this paper, we propose a high-performance parallel three-dimensional fast Fourier transform (FFT) algorithm on clusters of PCs. The three-dimensional FFT algorithm can be altered into a block three-dimensional FFT algorithm to reduce the number of cache misses. We show that the block three-dimensional FFT algorithm improves performance by utilizing the cache memory effectively. We use the block three-dimensional FFT algorithm to implement the parallel three-dimensional FFT algorithm. We succeeded in obtaining performance of over 1.3 GFLOPS on an 8-node dual Pentium III 1 GHz PC SMP cluster.
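
    A minimal sketch of the cache-blocking idea described above, in Python/NumPy rather than the authors' implementation: the 1D FFT passes along each axis are applied slab by slab so the working set stays cache-resident. The block size is an assumed tuning parameter.

```python
import numpy as np

def blocked_fft3d(a, block=16):
    """3D FFT as three passes of 1D FFTs, processed in slabs.

    Mathematically identical to np.fft.fftn(a); the blocking only changes
    the memory access pattern, which is the point of the paper.
    """
    out = a.astype(complex)
    for axis in range(3):
        sweep = (axis + 1) % 3  # sweep a different axis in blocks
        for start in range(0, out.shape[sweep], block):
            sl = [slice(None)] * 3
            sl[sweep] = slice(start, start + block)
            out[tuple(sl)] = np.fft.fft(out[tuple(sl)], axis=axis)
    return out

a = np.random.rand(64, 64, 64)
assert np.allclose(blocked_fft3d(a), np.fft.fftn(a))
```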

  2. Computer hardware for radiologists: Part I

    PubMed Central

    Indrajit, IK; Alam, A

    2010-01-01

    Computers are an integral part of modern radiology practice. They are used in different radiology modalities to acquire, process, and postprocess imaging data. They have had a dramatic influence on contemporary radiology practice. Their impact has extended further with the emergence of Digital Imaging and Communications in Medicine (DICOM), Picture Archiving and Communication System (PACS), Radiology Information System (RIS) technology, and Teleradiology. A basic overview of computer hardware relevant to radiology practice is presented here. The key hardware components in a computer are the motherboard, central processing unit (CPU), the chipset, the random access memory (RAM), the memory modules, bus, storage drives, and ports. The personal computer (PC) has a rectangular case that contains important components called hardware, many of which are integrated circuits (ICs). The fiberglass motherboard is the main printed circuit board and has a variety of important hardware mounted on it, connected by electrical pathways called "buses". The CPU is the largest IC on the motherboard and contains millions of transistors. Its principal function is to execute "programs". A Pentium® 4 CPU has transistors that execute a billion instructions per second. The chipset is completely different from the CPU in design and function; it controls data and the interaction of buses between the motherboard and the CPU. Memory (RAM) fundamentally consists of semiconductor chips storing data and instructions for access by a CPU. RAM is classified by storage capacity, access speed, data rate, and configuration. PMID:21042437

  3. New automatic mode of visualizing the colon via Cine CT

    NASA Astrophysics Data System (ADS)

    Udupa, Jayaram K.; Odhner, Dewey; Eisenberg, Harvey C.

    2001-05-01

    Methods of visualizing the inner colonic wall using CT images have been actively pursued in recent years in an attempt to eventually replace conventional colonoscopic examination. In spite of impressive progress in this direction, there are still several problems which need satisfactory solutions. Among these, we address three problems in this paper: segmentation, coverage, and speed of rendering. Instead of thresholding, we utilize the fuzzy connectedness framework to segment the colonic wall. Instead of the endoscopic viewing mode and various mapping techniques, we utilize the central line through the colon to automatically generate viewing directions that are en face with respect to the colon wall, thereby avoiding blind spots in viewing. We utilize some modifications of the ultra-fast shell rendering framework to ensure fast rendering speed. The combined effect of these developments is that a colon study requires an initial 5 minutes of operator time plus an additional 5 minutes of computational time, after which en face renditions are created in real time (15 frames/sec) on a 1 GHz Pentium PC under the Linux operating system.

  4. Latest developments on the loop control system of AdOpt@TNG

    NASA Astrophysics Data System (ADS)

    Ghedina, Adriano; Gaessler, Wolfgang; Cecconi, Massimo; Ragazzoni, Roberto; Puglisi, Alfio T.; De Bonis, Fulvio

    2004-10-01

    The Adaptive Optics System of the Galileo Telescope (AdOpt@TNG) is the only adaptive optics system mounted on a telescope which uses a pyramid wavefront sensor, and it has already demonstrated its potential on sky. Recently AdOpt@TNG has undergone deep changes at the level of its higher-order control system. The CCD and the Real Time Computer (RTC) have been replaced as a whole. Instead of the VME-based RTC, due to its frequent breakdowns, a dual Pentium processor PC with Real-Time Linux has been chosen. The WFS CCD, which feeds the images to the RTC, was changed to an off-the-shelf camera system from SciMeasure with an EEV39 80x80 pixel detector. While the APD-based tip/tilt loop has demonstrated the sky quality at the TNG site and the ability of TNG to take advantage of this quality up to the diffraction limit, the high-order system has been fully redeveloped, and the closed-loop performance is under evaluation in order to offer the best-performing system to the astronomical community.

  5. Designing Programs for Multiple Configurations: "You Mean Everyone Doesn't Have a Pentium or Better!"

    ERIC Educational Resources Information Center

    Conkright, Thomas D.; Joliat, Judy

    1996-01-01

    Discusses the challenges, solutions, and compromises involved in creating computer-delivered training courseware for Apollo Travel Services, a company whose 50,000 agents must access a mainframe from many different computing configurations. Initial difficulties came in trying to manage random access memory and quicken response time, but the future…

  6. Development of a low-cost virtual reality workstation for training and education

    NASA Technical Reports Server (NTRS)

    Phillips, James A.

    1996-01-01

    Virtual Reality (VR) is a set of breakthrough technologies that allow a human being to enter and fully experience a 3-dimensional, computer simulated environment. A true virtual reality experience meets three criteria: (1) it involves 3-dimensional computer graphics; (2) it includes real-time feedback and response to user actions; and (3) it must provide a sense of immersion. Good examples of a virtual reality simulator are the flight simulators used by all branches of the military to train pilots for combat in high performance jet fighters. The fidelity of such simulators is extremely high -- but so is the price tag, typically millions of dollars. Virtual reality teaching and training methods are manifestly effective, but the high cost of VR technology has limited its practical application to fields with big budgets, such as military combat simulation, commercial pilot training, and certain projects within the space program. However, in the last year there has been a revolution in the cost of VR technology. The speed of inexpensive personal computers has increased dramatically, especially with the introduction of the Pentium processor and the PCI bus for IBM-compatibles, and the cost of high-quality virtual reality peripherals has plummeted. The result is that many public schools, colleges, and universities can afford a PC-based workstation capable of running immersive virtual reality applications. My goal this summer was to assemble and evaluate such a system.

  7. A preliminary study of molecular dynamics on reconfigurable computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolinski, C.; Trouw, F. R.; Gokhale, M.

    2003-01-01

    In this paper we investigate the performance of platform FPGAs on a compute-intensive, floating-point-intensive supercomputing application, Molecular Dynamics (MD). MD is a popular simulation technique to track interacting particles through time by integrating their equations of motion. One part of the MD algorithm was implemented using the Fabric Generator (FG) and mapped onto several reconfigurable logic arrays. FG is a Java-based toolset that greatly accelerates construction of the fabrics from an abstract, technology-independent representation. Our experiments used technology-independent IEEE 32-bit floating-point operators so that the design could be easily re-targeted. Experiments were performed using both non-pipelined and pipelined floating-point modules. We present results for the Altera Excalibur ARM System on a Programmable Chip (SoPC), the Altera Stratix EP1S80, and the Xilinx Virtex-II Pro 2VP50. The best results obtained were 5.69 GFlops at 80 MHz (Altera Stratix EP1S80) and 4.47 GFlops at 82 MHz (Xilinx Virtex-II Pro 2VP50). Assuming a 10 W power budget, these results compare very favorably to a 4 Gflop/40 W processing/power rate for a modern Pentium, suggesting that reconfigurable logic can achieve high performance at low power on floating-point-intensive applications.

  8. Variational optical flow computation in real time.

    PubMed

    Bruhn, Andrés; Weickert, Joachim; Feddern, Christian; Kohlberger, Timo; Schnörr, Christoph

    2005-05-01

    This paper investigates the usefulness of bidirectional multigrid methods for variational optical flow computations. Although these numerical schemes are among the fastest methods for solving equation systems, they are rarely applied in the field of computer vision. We demonstrate how to employ these numerical methods for the treatment of variational optical flow formulations and show that the efficiency of this approach even allows for real-time performance on standard PCs. As a representative of variational optic flow methods, we consider the recently introduced combined local-global method. It can be considered as a noise-robust generalization of the Horn and Schunck technique. We present a decoupled, as well as a coupled, version of the classical Gauss-Seidel solver, and we develop several multigrid implementations based on a discretization coarse grid approximation. In contrast with standard bidirectional multigrid algorithms, we take advantage of intergrid transfer operators that allow for nondyadic grid hierarchies. As a consequence, no restrictions concerning the image size or the number of traversed levels have to be imposed. In the experimental section, we juxtapose the developed multigrid schemes and demonstrate their superior performance when compared to unidirectional multigrid methods and nonhierarchical solvers. For the well-known 316 x 252 Yosemite sequence, we succeeded in computing the complete set of dense flow fields in three quarters of a second on a 3.06 GHz Pentium 4 PC. This corresponds to a frame rate of 18 flow fields per second, which outperforms the widely used Gauss-Seidel method by almost three orders of magnitude.
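
    The Gauss-Seidel solver mentioned above is a standard iterative scheme for linear systems; a minimal generic sketch (not the paper's optical-flow-specific discretization) is:

```python
import numpy as np

def gauss_seidel(A, b, x0=None, sweeps=100):
    """Solve A x = b by Gauss-Seidel sweeps.

    Requires a nonzero diagonal; convergence is guaranteed, e.g., for
    diagonally dominant A.
    """
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for _ in range(sweeps):
        for i in range(n):
            # Entries x[:i] were already updated within this sweep.
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
    return x

A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
x = gauss_seidel(A, b)  # converges to the exact solution of A x = b
```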

  9. Economic impact of off-line PC viewer for private folder management

    NASA Astrophysics Data System (ADS)

    Song, Koun-Sik; Shin, Myung J.; Lee, Joo Hee; Auh, Yong H.

    1999-07-01

    We developed a PC-based clinical workstation and implemented it at Asan Medical Center in Seoul, Korea. The hardware used comprised a Pentium-II, 8 MB of video memory, 64-128 MB RAM, a 19-inch color monitor, and a 10/100 Mbps network adaptor. One of the unique features of this workstation is a management tool for folders residing both in the PACS short-term storage unit and on the local hard disk. Users can copy an entire study or part of a study to the local hard disk, removable storage, or a CD recorder. Even the images in private folders in PACS short-term storage can be copied to local storage devices. All images are saved in DICOM 3.0 file format with 2:1 lossless compression. We compared the prices of copy films and storage media, considering the possible savings in expensive PACS short-term storage and network traffic. The price savings on copy film are most remarkable for MR exams. The price savings arising from minimal use of the short-term unit were 50,000 dollars. It was hard to calculate the price savings arising from network usage. An off-line PC viewer is a cost-effective way of handling private folder management in a PACS environment.

  10. Real-time image sequence segmentation using curve evolution

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; Liu, Weisong

    2001-04-01

    In this paper, we describe a novel approach to image sequence segmentation and its real-time implementation. This approach uses the 3D structure tensor to produce a more robust frame difference signal and uses curve evolution to extract whole objects. Our algorithm is implemented on a standard PC running the Windows operating system with video capture from a USB camera that is a standard Windows video capture device. Using the Windows standard video I/O functionalities, our segmentation software is highly portable and easy to maintain and upgrade. In its current implementation on a Pentium 400, the system can perform segmentation at 5 frames/sec with a frame resolution of 160 by 120.

  11. The Military Language Tutor (MILT)

    DTIC Science & Technology

    1998-11-01

    interactive tutor in a Pentium based laptop computer. The first version of MILT with keyboard input was designed for Spanish and Arabic and can recognize... NLP ). The goal of the MILT design team was an authoring system which would require no formal external training and which could be learned within four

  12. Pentium Pro Inside: I. A Treecode at 430 Gigaflops on ASCI Red

    NASA Technical Reports Server (NTRS)

    Warren, M. S.; Becker, D. J.; Sterling, T.; Salmon, J. K.; Goda, M. P.

    1997-01-01

    As an entry for the 1997 Gordon Bell performance prize, we present results from two methods of solving the gravitational N-body problem on the Intel Teraflops system at Sandia National Laboratory (ASCI Red). The first method, an O(N^2) algorithm, obtained 635 Gigaflops for a 1 million particle problem on 6800 Pentium Pro processors. The second solution method, a tree-code which scales as O(N log N), sustained 170 Gigaflops over a continuous 9.4 hour period on 4096 processors, integrating the motion of 322 million mutually interacting particles in a cosmology simulation, while saving over 100 Gigabytes of raw data. Additionally, the tree-code sustained 430 Gigaflops on 6800 processors for the first 5 time-steps of that simulation. This tree-code solution is approximately 10^5 times more efficient than the O(N^2) algorithm for this problem. As an entry for the 1997 Gordon Bell price/performance prize, we present two calculations from the disciplines of astrophysics and fluid dynamics. The simulations were performed on two 16-processor Pentium Pro Beowulf-class computers (Loki and Hyglac) constructed entirely from commodity personal computer technology, at a cost of roughly $50k each in September 1996. The price of an equivalent system in August 1997 is less than $30k. At Los Alamos, Loki performed a gravitational tree-code N-body simulation of galaxy formation using 9.75 million particles, which sustained an average of 879 Mflops over a ten day period and produced roughly 10 Gbytes of raw data.
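
    For contrast with the tree-code, here is a minimal sketch of the O(N^2) direct-summation step (softened Newtonian gravity) in Python/NumPy. The production codes above were hand-tuned for the Pentium Pro; this sketch only illustrates the algorithmic form, and the softening length is an assumed parameter.

```python
import numpy as np

def direct_accelerations(pos, mass, eps=1e-3, G=1.0):
    """O(N^2) pairwise gravitational accelerations with Plummer softening."""
    d = pos[None, :, :] - pos[:, None, :]    # (N, N, 3) separations r_j - r_i
    r2 = (d ** 2).sum(axis=-1) + eps ** 2    # softened squared distances
    inv_r3 = r2 ** -1.5
    np.fill_diagonal(inv_r3, 0.0)            # exclude self-interaction
    return G * (d * (mass[None, :, None] * inv_r3[:, :, None])).sum(axis=1)

pos = np.random.rand(1000, 3)
mass = np.full(1000, 1.0 / 1000)
acc = direct_accelerations(pos, mass)        # one O(N^2) force evaluation
```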

  13. Low-cost real-time 3D PC distributed-interactive-simulation (DIS) application for C4I

    NASA Astrophysics Data System (ADS)

    Gonthier, David L.; Veron, Harry

    1998-04-01

    A 3D Distributed Interactive Simulation (DIS) application was developed and demonstrated in a PC environment. The application is capable of running in the stealth mode or as a player, which includes battlefield simulations such as ModSAF. PCs can be clustered together, but not necessarily collocated, to run a simulation or training exercise on their own. A 3D perspective view of the battlefield is displayed that includes terrain, trees, buildings, and other objects supported by the DIS application. Screen update rates of 15 to 20 frames per second have been achieved with fully lit and textured scenes, thus providing high-quality and fast graphics. A complete PC system can be configured for under $2,500. The software runs under Windows 95 and Windows NT. It is written in C++ and uses a commercial API called RenderWare for 3D rendering. The software uses Microsoft Foundation Classes and Microsoft DirectPlay for joystick input. The RenderWare libraries enhance performance through optimization for MMX and the Pentium Pro processor. RenderWare was used with the Righteous 3D graphics board from Orchid Technologies, which has an advertised rendering rate of up to 2 million texture-mapped triangles per second. A low-cost PC DIS simulator that can partake in a real-time collaborative simulation with other platforms is thus achieved.

  14. Automated micromanipulation desktop station based on mobile piezoelectric microrobots

    NASA Astrophysics Data System (ADS)

    Fatikow, Sergej

    1996-12-01

    One of the main problems of present-day research on microsystem technology (MST) is to assemble a whole microsystem from different microcomponents. This paper presents a new concept of an automated micromanipulation desktop station including piezoelectrically driven microrobots placed on a high-precision x-y stage of a light microscope, a CCD camera as a local sensor subsystem, a laser sensor unit as a global sensor subsystem, a parallel computer system with C167 microcontrollers, and a Pentium PC additionally equipped with an optical grabber. The microrobots can perform high-precision manipulations (with an accuracy of up to 10 nm) and nondestructive transport (at a speed of about 3 cm/sec) of very small objects under the microscope. To control the desktop station automatically, an advanced control system that includes a task planning level and a real-time execution level is being developed. The main function of the task planning subsystem is to interpret the implicit action plan and to generate a sequence of explicit operations which are sent to the execution level of the control system. The main functions of the execution control level are object recognition, image processing, and feedback position control of the microrobot and the microscope stage.

  15. Interactive dual-volume rendering visualization with real-time fusion and transfer function enhancement

    NASA Astrophysics Data System (ADS)

    Macready, Hugh; Kim, Jinman; Feng, David; Cai, Weidong

    2006-03-01

    Dual-modality imaging scanners combining functional PET and anatomical CT constitute a challenge in volumetric visualization that can be limited by high computational demand and expense. This study aims at providing physicians with multi-dimensional visualization tools for navigating and manipulating the data on a consumer PC. We have maximized the utilization of the pixel-shader architecture of low-cost graphics hardware and texture-based volume rendering to provide visualization tools with a high degree of interactivity. All the software was developed using OpenGL and Silicon Graphics Inc. Volumizer, tested on a Pentium mobile CPU in a PC notebook with 64 MB of graphics memory. We render the individual modalities separately and perform real-time per-voxel fusion. We designed a novel "alpha-spike" transfer function to interactively identify structures of interest in volume renderings of PET/CT. This works by assigning a non-linear opacity to the voxels, thus allowing the physician to selectively eliminate or reveal information from the PET/CT volumes. As the PET and CT are rendered independently, manipulations can be applied to individual volumes: for instance, a transfer function can be applied to the CT to reveal the lung boundary while the fusion ratio between the CT and PET is adjusted to enhance the contrast of a tumour region, with the resulting manipulated data sets fused together in real time as the adjustments are made. In addition to conventional navigation and manipulation tools, such as scaling, LUT, volume slicing, and others, our strategy permits efficient visualization of PET/CT volume rendering, which can potentially aid in interpretation and diagnosis.
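
    The abstract does not give the functional form of the "alpha-spike" transfer function, so the sketch below is only one plausible reading: a narrow opacity peak centered on an intensity of interest, leaving other intensities transparent. The Gaussian shape, width, and peak height are assumptions.

```python
import numpy as np

def alpha_spike(intensity, center, width=0.02, peak=1.0):
    """Opacity near `peak` at `center`, near zero elsewhere (intensities in [0, 1])."""
    return peak * np.exp(-0.5 * ((intensity - center) / width) ** 2)

voxels = np.linspace(0.0, 1.0, 256)        # normalized intensity ramp
alpha = alpha_spike(voxels, center=0.65)   # reveal only one tissue intensity range
```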

  16. 50 CFR 660.314 - Groundfish observer program.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... provided to the crew. (2) Safe conditions. Maintain safe conditions on the vessel for the protection of... to safe operation of the vessel, and provisions at §§ 600.725 and 600.746 of this chapter. (3... computer in working condition that contains a full Pentium 120 Mhz or greater capacity processing chip, at...

  17. Computer-based test-bed for clinical assessment of hand/wrist feed-forward neuroprosthetic controllers using artificial neural networks.

    PubMed

    Luján, J L; Crago, P E

    2004-11-01

    Neuroprosthetic systems can be used to restore hand grasp and wrist control in individuals with C5/C6 spinal cord injury. A computer-based system was developed for the implementation, tuning, and clinical assessment of neuroprosthetic controllers, using off-the-shelf hardware and software. The computer system turned a Pentium III PC running Windows NT into a non-dedicated, real-time system for the control of neuroprostheses. Software execution (written using the high-level programming languages LabVIEW and MATLAB) was divided into two phases: training and real-time control. During the training phase, the computer system collected input/output data by stimulating the muscles and measuring the muscle outputs in real time, analysed the recorded data, generated a set of training data, and trained an artificial neural network (ANN)-based controller. During real-time control, the computer system stimulated the muscles using stimulus pulsewidths predicted by the ANN controller in response to a sampled input from an external command source, to provide independent control of hand grasp and wrist posture. System timing was stable, reliable, and capable of providing muscle stimulation at frequencies up to 24 Hz. To demonstrate the application of the test-bed, an ANN-based controller was implemented with three inputs and two independent channels of stimulation. The ANN controller's ability to control hand grasp and wrist angle independently was assessed by quantitative comparison of the outputs of the stimulated muscles with a set of desired grasp or wrist postures determined by the command signal. Controller performance results were mixed, but the platform provided the tools to implement and assess future controller designs.

  18. Cluster Computing For Real Time Seismic Array Analysis.

    NASA Astrophysics Data System (ADS)

    Martini, M.; Giudicepietro, F.

    A seismic array is an instrument composed of a dense distribution of seismic sensors that allows measurement of the directional properties of the wavefield (slowness or wavenumber vector) radiated by a seismic source. Over the last years, arrays have been widely used in different fields of seismological research. In particular, they are applied in the investigation of seismic sources on volcanoes, where they can be successfully used for studying volcanic microtremor and long-period events, which are critical for getting information on the evolution of volcanic systems. For this reason, arrays could be usefully employed for volcano monitoring; however, the huge amount of data produced by this type of instrument and the quite time-consuming processing techniques have limited their potential for this application. In order to favor a direct application of array techniques to continuous volcano monitoring, we designed and built a small PC cluster able to compute in near real time the kinematic properties of the wavefield (slowness or wavenumber vector) produced by a local seismic source. The cluster is composed of 8 Intel Pentium-III bi-processor PCs working at 550 MHz and has 4 Gigabytes of RAM memory. It runs under the Linux operating system. The developed analysis software package is based on the MUltiple SIgnal Classification (MUSIC) algorithm and is written in Fortran. The message-passing part is based upon the LAM programming environment package, an open-source implementation of the Message Passing Interface (MPI). The developed software system includes modules devoted to receiving data over the Internet and graphical applications for continuous display of the processing results. The system has been tested with a data set collected during a seismic experiment conducted on Etna in 1999, when two dense seismic arrays were deployed on the northeast and southeast flanks of the volcano. A real-time continuous acquisition system was simulated by a program which reads data from disk files and sends them to a remote host using the Internet protocols.
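
    The analysis package itself is Fortran/MPI and is not reproduced in this record. As a minimal sketch of the MUSIC idea the cluster parallelizes, here is a narrowband, uniform-linear-array version in Python/NumPy; the array geometry and parameters are illustrative assumptions (the seismic application estimates slowness vectors for 2D array geometries).

```python
import numpy as np

def music_spectrum(X, n_sources, angles_deg, spacing=0.5):
    """Narrowband MUSIC pseudospectrum for a uniform linear array.

    X: (sensors, snapshots) complex data; spacing in wavelengths.
    """
    n_sensors = X.shape[0]
    R = X @ X.conj().T / X.shape[1]          # sample covariance matrix
    w, V = np.linalg.eigh(R)                 # eigenvalues in ascending order
    En = V[:, : n_sensors - n_sources]       # noise subspace
    m = np.arange(n_sensors)
    P = []
    for th in np.deg2rad(angles_deg):
        a = np.exp(2j * np.pi * spacing * m * np.sin(th))   # steering vector
        P.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(P)                       # peaks at the source directions

# Synthetic test: two sources at -20 and +30 degrees on an 8-sensor array.
rng = np.random.default_rng(0)
m = np.arange(8)
A = np.exp(2j * np.pi * 0.5 * np.outer(m, np.sin(np.deg2rad([-20, 30]))))
X = A @ rng.standard_normal((2, 200)) + 0.05 * (
    rng.standard_normal((8, 200)) + 1j * rng.standard_normal((8, 200)))
P = music_spectrum(X, n_sources=2, angles_deg=np.arange(-90, 91))
```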

  19. Information Warfare and Cyber Defense

    DTIC Science & Technology

    2002-04-22

    [Excerpt from briefing slides: charts of information technology trends from 1980 to 2002 showing rising CPU generations (286 through Pentium 4/Celeron) and DRAM capacities (286 KB to 512 MB), falling cost per MIPS (Source: EIA, CNET, Gartner, Dell, 2000), and a list of information operations focus areas (PSYOP, deception, EW) under PDD-56, PDD-63, and PDD-68.]

  20. A marker-based watershed method for X-ray image segmentation.

    PubMed

    Zhang, Xiaodong; Jia, Fucang; Luo, Suhuai; Liu, Guiying; Hu, Qingmao

    2014-03-01

    Digital X-ray images are the most frequent modality for both screening and diagnosis in hospitals. To facilitate subsequent analysis such as quantification and computer-aided diagnosis (CAD), it is desirable to exclude the image background. A marker-based watershed segmentation method was proposed to segment the background of X-ray images. The method consists of six modules: image preprocessing, gradient computation, marker extraction, watershed segmentation from markers, region merging, and background extraction. One hundred clinical direct radiograph X-ray images were used to validate the method. Manual thresholding and a multiscale gradient based watershed method were implemented for comparison. The proposed method yielded a Dice coefficient of 0.964±0.069, which was better than that of manual thresholding (0.937±0.119) and that of the multiscale gradient based watershed method (0.942±0.098). Special means were adopted to decrease the computational cost, including discarding the few pixels with the highest grayscale via a percentile, calculating the gradient magnitude through simple operations, decreasing the number of markers by appropriate thresholding, and merging regions based on simple grayscale statistics. As a result, the processing time was at most 6 s even for a 3072×3072 image on a Pentium 4 PC with a 2.4 GHz CPU (4 cores) and 2 GB RAM, which was more than twice as fast as the multiscale gradient based watershed method. The proposed method could be a potential tool for diagnosis and quantification of X-ray images. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
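
    The Dice coefficient used for validation above is the standard overlap measure between a segmented mask and a reference mask; a minimal sketch:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice coefficient 2|A & B| / (|A| + |B|) for two binary masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

seg = np.zeros((64, 64), dtype=bool); seg[8:40, 8:40] = True
ref = np.zeros((64, 64), dtype=bool); ref[10:42, 10:42] = True
score = dice(seg, ref)   # 1.0 would mean perfect overlap
```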

  1. The Acceleration of Structural Microarchitectural Simulation via Scheduling

    DTIC Science & Technology

    2006-11-01

    [Front-matter excerpt.] Table 1.1 shows the total and estimated non-cache transistor counts in succeeding generations of Intel microprocessors (cache array transistors are excluded from the non-cache count):

    Processor           Year  Total transistors  Non-cache transistors
    Intel486            1989  1,200,000          800,000
    Intel Pentium       1993  3,100,000          2,300,000
    Intel Pentium II    1997  7,500,000          5,500,000
    Intel Pentium III   1999  (truncated in this excerpt)

  2. Automatic needle segmentation in 3D ultrasound images using 3D improved Hough transform

    NASA Astrophysics Data System (ADS)

    Zhou, Hua; Qiu, Wu; Ding, Mingyue; Zhang, Songgen

    2008-03-01

    3D ultrasound (US) is a new technology that can be used for a variety of diagnostic applications, such as obstetrical, vascular, and urological imaging, and has shown great potential in applications of image-guided surgery and therapy. Uterine adenoma and uterine bleeding are the two most prevalent diseases in Chinese women, and a minimally invasive ablation system using a needle-like RF button electrode is widely used to destroy tumor cells or stop bleeding. To avoid accidents or death of the patient through inaccurate localization of the electrode and the tumor position during treatment, a 3D US guidance system was developed. In this paper, a new automated technique, the 3D Improved Hough Transform (3DIHT) algorithm, which is potentially fast, accurate, and robust, is presented to provide needle segmentation in 3D US images for 3D US imaging guidance. Based on a coarse-fine search strategy and a four-parameter representation of lines in 3D space, the 3DIHT algorithm can segment needles quickly, accurately, and robustly. The technique was evaluated using 3D US images acquired by scanning a water phantom. The segmentation position deviation of the line was less than 2 mm and the angular deviation was much less than 2°. The average computational time, measured on a Pentium IV 2.80 GHz PC with a 381×381×250 image, was less than 2 s.
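
    The "four-parameter representation of lines in 3D space" can be realized in several ways; one common choice (an assumption here, not necessarily the authors') is two direction angles plus the two coordinates where the line pierces a reference plane:

```python
import numpy as np

def line_point(x0, y0, theta, phi, t):
    """Point at parameter t on the line through (x0, y0, 0) whose direction
    is given by polar angle theta and azimuth phi (a 4-parameter line model)."""
    d = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    return np.array([x0, y0, 0.0]) + t * d

p = line_point(10.0, 20.0, theta=0.3, phi=1.1, t=50.0)
```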

  3. Combined approach of shell and shear-warp rendering for efficient volume visualization

    NASA Astrophysics Data System (ADS)

    Falcao, Alexandre X.; Rocha, Leonardo M.; Udupa, Jayaram K.

    2003-05-01

    In Medical Imaging, shell rendering (SR) and shear-warp rendering (SWR) are two ultra-fast and effective methods for volume visualization. We have previously shown that, typically, SWR can be on the average 1.38 times faster than SR, but it requires from 2 to 8 times more memory space than SR. In this paper, we propose an extension of the compact shell data structure utilized in SR to allow shear-warp factorization of the viewing matrix in order to obtain speed up gains for SR, without paying the high storage price of SWR. The new approach is called shear-warp shell rendering (SWSR). The paper describes the methods, points out their major differences in the computational aspects, and presents a comparative analysis of them in terms of speed, storage, and image quality. The experiments involve hard and fuzzy boundaries of 10 different objects of various sizes, shapes, and topologies, rendered on a 1GHz Pentium-III PC with 512MB RAM, utilizing surface and volume rendering strategies. The results indicate that SWSR offers the best speed and storage characteristics compromise among these methods. We also show that SWSR improves the rendition quality over SR, and provides renditions similar to those produced by SWR.

  4. Multithreaded transactions in scientific computing: New versions of a computer program for kinematical calculations of RHEED intensity oscillations

    NASA Astrophysics Data System (ADS)

    Brzuszek, Marcin; Daniluk, Andrzej

    2006-11-01

    Writing a concurrent program can be more difficult than writing a sequential one: the programmer needs to think about synchronisation, race conditions, and shared variables. Transactions help reduce the inconvenience of using threads. A transaction is an abstraction that allows programmers to group a sequence of actions on the program into a logical, higher-level computation unit. This paper presents multithreaded versions of the GROWTH program, which calculate the layer coverages during the growth of thin epitaxial films and the corresponding RHEED intensities according to the kinematical approximation. The presented programs also contain graphical user interfaces, which display program data at run-time. New version program summary: Titles of programs: GROWTHGr, GROWTH06. Catalogue identifier: ADVL_v2_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVL_v2_0. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Catalogue identifier of previous version: ADVL. Does the new version supersede the original program: No. Computer for which the new version is designed and others on which it has been tested: Pentium-based PC. Operating systems or monitors under which the new version has been tested: Windows 9x, XP, NT. Programming language used: Object Pascal. Memory required to execute with typical data: More than 1 MB. Number of bits in a word: 64 bits. Number of processors used: 1. No. of lines in distributed program, including test data, etc.: 20 931. Number of bytes in distributed program, including test data, etc.: 1 311 268. Distribution format: tar.gz. Nature of physical problem: The programs compute the RHEED intensities during the growth of thin epitaxial structures prepared using molecular beam epitaxy (MBE). The computations are based on the kinematical diffraction theory [P.I. Cohen, G.S. Petrich, P.R. Pukite, G.J. Whaley, A.S. Arrott, Surf. Sci. 216 (1989) 222] [1].

  5. A Discussion of the Discrete Fourier Transform Execution on a Typical Desktop PC

    NASA Technical Reports Server (NTRS)

    White, Michael J.

    2006-01-01

    This paper discusses and compares the execution times of three implementations of the Discrete Fourier Transform (DFT). The first two demonstrate the direct implementation of the algorithm: in the first, the Fourier coefficients are generated at the execution of the DFT; in the second, the coefficients are generated prior to execution and indexed at execution. The last example demonstrates the Cooley-Tukey algorithm, better known as the Fast Fourier Transform (FFT). All examples were written in C and executed on a PC using a Pentium 4 running at 1.7 GHz. As a function of N, the total complex data size, the direct-implementation DFT executes, as expected, at order N², and the FFT executes at order N log₂ N. At N=16K there is an increase in processing time beyond what is expected. This is not caused by the implementation but is a consequence of the effect that machine architecture and memory hierarchy have on execution. The paper includes a brief overview of digital signal processing, along with a discussion of contemporary work on discrete Fourier processing.
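
    The three cases have a natural analogue in any language; the Python timing sketch below mirrors them (coefficients generated at execution, coefficients precomputed and indexed, and a library FFT), though the paper's examples were written in C and the size here is illustrative.

```python
# Sketch of the three DFT variants discussed above (Python analogue of the
# paper's C examples); N and the timing harness are illustrative.
import time
import numpy as np

def dft_direct(x):
    # Coefficients generated on the fly at execution time: O(N^2).
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.exp(-2j * np.pi * k * n / N))
                     for k in range(N)])

def dft_precomputed(x, W):
    # Coefficient matrix built before execution, indexed at run time.
    return W @ x

N = 2048
x = np.random.rand(N).astype(complex)
n = np.arange(N)
W = np.exp(-2j * np.pi * np.outer(n, n) / N)
for name, f in [("direct", lambda: dft_direct(x)),
                ("precomputed", lambda: dft_precomputed(x, W)),
                ("FFT", lambda: np.fft.fft(x))]:
    t0 = time.perf_counter()
    f()
    print(name, time.perf_counter() - t0)       # expect direct >> FFT
```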

  6. Short term load forecasting using a self-supervised adaptive neural network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoo, H.; Pimmel, R.L.

    The authors developed a self-supervised adaptive neural network to perform short term load forecasts (STLF) for a large power system covering a wide service area with several heavy load centers. They used the self-supervised network to extract correlational features from temperature and load data. Using data from the calendar year 1993 as a test case, they found a 0.90 percent error for hour-ahead forecasting and a 1.92 percent error for day-ahead forecasting. These levels of error compare favorably with those obtained by other techniques. The algorithm ran in a couple of minutes on a PC containing an Intel Pentium 120 MHz CPU. Since the algorithm included searching the historical database, training the network, and actually performing the forecasts, this approach provides a real-time, portable, and adaptable STLF.

  7. NearFar: A computer program for nearside farside decomposition of heavy-ion elastic scattering amplitude

    NASA Astrophysics Data System (ADS)

    Cha, Moon Hoe

    2007-02-01

    The NearFar program is a package for carrying out an interactive nearside-farside decomposition of heavy-ion elastic scattering amplitudes. The program is implemented in Java to perform numerical operations on the nearside and farside angular distributions, and contains a graphical display interface for the numerical results. A test run has been applied to elastic ¹⁶O+²⁸Si scattering at E=1503 MeV. Program summary: Title of program: NearFar. Catalogue identifier: ADYP_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYP_v1_0. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Licensing provisions: none. Computers: designed for any machine capable of running Java; developed on a PC with a Pentium 4. Operating systems under which the program has been tested: Microsoft Windows XP (Home Edition). Program language used: Java. Number of bits in a word: 64. Memory required to execute with typical data: case dependent. No. of lines in distributed program, including test data, etc.: 3484. Number of bytes in distributed program, including test data, etc.: 142 051. Distribution format: tar.gz. Other software required: A Java runtime interpreter, or the Java Development Kit, version 5.0. Nature of physical problem: Interactive nearside-farside decomposition of heavy-ion elastic scattering amplitude. Method of solution: The user must supply an external data file or PPSM parameters, from which the program calculates theoretical values of the quantities to be decomposed. Typical running time: Problem dependent; in a test run, about 35 s on a 2.40 GHz Intel Pentium 4 machine.

  8. The Development of the Puerto Rico Lightning Detection Network for Meteorological Research

    NASA Technical Reports Server (NTRS)

    Legault, Marc D.; Miranda, Carmelo; Medin, J.; Ojeda, L. J.; Blakeslee, Richard J.

    2011-01-01

    A land-based Puerto Rico Lightning Detection Network (PR-LDN) dedicated to academic research of meteorological phenomena has been developed. Five Boltek StormTracker PCI receivers with LTS-2 GPS timestamp cards and lightning detectors were integrated into Pentium III PC workstations running the CentOS Linux operating system. The Boltek detector Linux driver was compiled under CentOS, modified, and thoroughly tested. These PC workstations with integrated lightning detectors were installed at five of the University of Puerto Rico (UPR) campuses distributed around the island of PR. The PC workstations are left on permanently in order to monitor lightning activity at all times. Each is networked to its campus network backbone, permitting quasi-instantaneous data transfer to a central server at the UPR-Bayamón campus. Information generated by each lightning detector is managed by a C program we developed called the LDN-client. The LDN-client maintains an open connection to the central server running the LDN-server program, to which data are sent in real time for analysis and archival. The LDN-client also manages the storing of data on the PC workstation hard disk. The LDN-server software (also an in-house effort) analyses the data from each client and performs event triangulations. Time-of-arrival (TOA) and related hybrid algorithms, along with lightning-type and event-discriminating routines, are also implemented in the LDN-server software. We have also developed software to visually monitor, in real time, lightning events from all clients and the triangulated events. We are currently monitoring and studying the spatial, temporal, and type distribution of lightning strikes associated with electrical storms and tropical cyclones in the vicinity of Puerto Rico.
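
    The core of TOA triangulation is solving for the strike position and emission time that best match the GPS-timestamped arrivals; the least-squares sketch below is a generic illustration with made-up station coordinates, not the PR-LDN's actual solver.

```python
# Hedged sketch of time-of-arrival (TOA) triangulation from station
# timestamps; station layout and solver choice are illustrative assumptions.
import numpy as np
from scipy.optimize import least_squares

C = 3.0e5  # propagation speed, km/s (speed of light)

def residuals(p, stations, t_arrival):
    # p = (x, y, t0): strike position (km) and emission time (s).
    x, y, t0 = p
    d = np.hypot(stations[:, 0] - x, stations[:, 1] - y)
    return t_arrival - (t0 + d / C)

stations = np.array([[0, 0], [50, 5], [30, 60], [-20, 40], [10, -35]], float)
true_pos = np.array([12.0, 25.0])
t_arrival = np.hypot(*(stations - true_pos).T) / C  # synthetic arrivals
fit = least_squares(residuals, x0=[0, 0, 0], args=(stations, t_arrival))
print(fit.x[:2])  # recovered strike location, ~ (12, 25) km
```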

  9. Monochromatic, Rosseland mean, and Planck mean opacity routine

    NASA Astrophysics Data System (ADS)

    Semenov, D.

    2006-11-01

    Several FORTRAN77 codes were developed to compute frequency-dependent, Rosseland, and Planck mean opacities of gas and dust in protoplanetary disks. The opacities can be computed for an ensemble of dust grains having various compositions (ices, silicates, organics, etc.), sizes, topologies (homogeneous/composite aggregates, homogeneous/layered/composite spheres, etc.), porosities, and dust-to-gas ratios. Several examples are available. In addition, a very fast opacity routine for use in modeling radiative transfer in hydrodynamic simulations of disks is available upon request (10⁸ routine calls require about 30 s on a 3.0 GHz Pentium 4).

  10. A computer program for two-particle intrinsic coefficients of fractional parentage

    NASA Astrophysics Data System (ADS)

    Deveikis, A.

    2012-06-01

    A Fortran 90 program, CESOS, for the calculation of the two-particle intrinsic coefficients of fractional parentage for several j-shells with isospin and an arbitrary number of oscillator quanta (CESOs) is presented. The implemented procedure for CESOs calculation consistently follows the principles of antisymmetry and translational invariance. The approach is based on a simple enumeration scheme for antisymmetric many-particle states, efficient algorithms for calculating the coefficients of fractional parentage for j-shells with isospin, and construction of the subspace of center-of-mass Hamiltonian eigenvectors corresponding to the minimal eigenvalue of 3/2 (in ℏω). The program provides fast calculation of CESOs for a given particle number and produces results with small numerical uncertainties. The introduced CESOs may be used to calculate expectation values of two-particle nuclear shell-model operators within the isospin formalism. Program summary: Program title: CESOS. Catalogue identifier: AELT_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AELT_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 10 932. No. of bytes in distributed program, including test data, etc.: 61 023. Distribution format: tar.gz. Programming language: Fortran 90. Computer: Any computer with a Fortran 90 compiler. Operating system: Windows XP, Linux. RAM: The memory demand depends on the number of particles A and the excitation energy of the system E. Computation of the A=6 particle system with total angular momentum J=0 and total isospin T=1 requires around 4 kB of RAM at E=0, ~3 MB at E=3, and ~172 MB at E=5. Classification: 17.18. Nature of problem: The code CESOS generates a list of two-particle intrinsic coefficients of fractional parentage for several j-shells with isospin. Solution method: The method is based on the observation that CESOs may be obtained by diagonalizing the center-of-mass Hamiltonian in the basis set of antisymmetric A-particle oscillator functions with singled-out dependence on the Jacobi coordinates of the two last particles, and choosing the subspace of its eigenvectors corresponding to the minimal eigenvalue of 3/2. Restrictions: One run of the code CESOS generates CESOs for one specified set of (A,E,J,T) values only. The restrictions on the (A,E,J,T) values are completely determined by the restrictions on the computation of the single-shell CFPs and two-particle multishell CFPs (GCFPs) [1]. The full sets of single-shell CFPs may be calculated up to the j=9/2 shell (for any particular shell of the configuration); a shell with j⩾11/2 cannot be filled (an implementation constraint). The calculation of GCFPs is limited to A<86 when E=0 (due to memory constraints); small numbers of particles allow significantly higher excitations. Any allowed values of J and T may be chosen for the specified values of A and E; the complete list of allowed values of J and T for the chosen A and E may be generated by the GCFP program (CPC Program Library, Catalogue Id. AEBI_v1_0). The actual scale of the CESOs computation depends strongly on the magnitude of the A and E values.
Although there are no limitations on the A and E values (within the limits of single-shell and multishell CFP calculation), the generation of the corresponding list of CESOs is subject to available computing resources. For example, the computation of CESOs for A=6, JT=10 at E=5 took around 14 hours, and the system with A=11, JT=1/2 3/2 at E=2 requires around 15 hours; these computations were performed on a 3 GHz Pentium PC with 1 GB RAM [2]. Unusual features: It is possible to test the computed CESOs without saving them to a file. This allows the user to learn their number and approximate computation time and to evaluate the accuracy of the calculations. Additional comments: The program CESOS uses code from the GCFP program for the calculation of the two-particle multishell coefficients of fractional parentage. Running time: Depends on the size of the problem. The A=6 particle system with JT=01 took around 31 seconds on a 3 GHz Pentium PC with 1 GB RAM at E=3, and about 2.6 hours at E=5.

  11. Newsgroups, Activist Publics, and Corporate Apologia: The Case of Intel and Its Pentium Chip.

    ERIC Educational Resources Information Center

    Hearit, Keith Michael

    1999-01-01

    Applies J. Grunig's theory of publics to the phenomenon of Internet newsgroups using the case of the flawed Intel Pentium chip. Argues that technology facilitates the rapid movement of publics from the theoretical construct stage to the active stage. Illustrates some of the difficulties companies face in establishing their identity in cyberspace.…

  12. The Bulletin of Military Operations Research, PHALANX, Vol. 31, No. 2.

    DTIC Science & Technology

    1998-06-01

introduction of the Pentium II processor, the writeable CD, and the Digital Video Disc (DVD). Just around the corner, around the turn of the century…broader audience. Presentations that use special visual aids (videos, computers, etc.), short presentations best depicted with color charts…Throughout the treatment of data, another weapon we should take is Tukey's Torpedo (John W. Tukey, "Sunset Salvo," The American Statistician, vol…

  13. HORN-6 special-purpose clustered computing system for electroholography.

    PubMed

    Ichihashi, Yasuyuki; Nakayama, Hirotaka; Ito, Tomoyoshi; Masuda, Nobuyuki; Shimobaba, Tomoyoshi; Shiraki, Atsushi; Sugie, Takashige

    2009-08-03

    We developed the HORN-6 special-purpose computer for holography. We designed and constructed the HORN-6 board to handle an object image composed of one million points, and constructed a cluster system composed of 16 HORN-6 boards. Using this HORN-6 cluster system, we succeeded in creating a computer-generated hologram of a three-dimensional image composed of 1,000,000 points at a rate of 1 frame per second, and of an image composed of 100,000 points at a rate of 10 frames per second, which is near video rate, when the size of the computer-generated hologram is 1,920 × 1,080. The calculation speed is approximately 4,600 times faster than that of a personal computer with an Intel 3.4-GHz Pentium 4 CPU.
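
    The computation that HORN boards accelerate is, in essence, a sum of point-source contributions over every hologram pixel; the numpy sketch below shows a Fresnel-approximation version of that sum on a reduced grid, with wavelength, pixel pitch, and grid size chosen only for illustration.

```python
# Illustrative point-source CGH sum (Fresnel approximation); HORN-6 evaluates
# sums of this kind in hardware at 1,920 x 1,080. Parameters are assumptions.
import numpy as np

def cgh(points, amps, nx=512, ny=512, pitch=8e-6, wavelength=633e-9):
    k = 2 * np.pi / wavelength
    x = (np.arange(nx) - nx / 2) * pitch
    y = (np.arange(ny) - ny / 2) * pitch
    X, Y = np.meshgrid(x, y)
    field = np.zeros((ny, nx))
    for (px, py, pz), a in zip(points, amps):
        # Fresnel phase of the spherical wave from each object point.
        r = pz + ((X - px) ** 2 + (Y - py) ** 2) / (2 * pz)
        field += a * np.cos(k * r)
    return field

pts = np.random.rand(1000, 3) * [4e-3, 4e-3, 0] + [0, 0, 0.1]  # 1000 points
hologram = cgh(pts, np.ones(len(pts)))
```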

  14. 25 ns software correlator for photon and fluorescence correlation spectroscopy

    NASA Astrophysics Data System (ADS)

    Magatti, Davide; Ferri, Fabio

    2003-02-01

    A 25 ns time resolution, multi-tau software correlator developed in LabVIEW, based on a standard photon counting unit, a fast timer/counter board (National Instruments 6602-PCI), and a personal computer (a 1.5 GHz Pentium 4 PC), is presented and quantitatively discussed. The correlator works by processing the stream of incoming data in parallel according to two different algorithms: for large lag times (τ⩾100 μs), a classical time-mode (TM) scheme based on measuring the number of pulses per time interval is used; for τ⩽100 μs, a photon-mode (PM) scheme is adopted and the time sequence of the arrival times of the photon pulses is measured. By combining the two methods, we developed a system capable of working out correlation functions online, in full real time for the TM correlator and partially in batch processing for the PM correlator. For the latter, the duty cycle depends on the count rate of the incoming pulses, being ~100% for count rates ⩽3×10⁴ Hz, ~15% at 10⁵ Hz, and ~1% at 10⁶ Hz. Owing to limitations imposed by the fairly small first-in, first-out (FIFO) buffer available on the counter board, the maximum count rate permissible for proper functioning of the PM correlator is limited to ~10⁵ Hz; this limit can be removed by using a board with a deeper FIFO. Similarly, the 25 ns time resolution is limited only by the maximum clock frequency available on the 6602-PCI and can easily be improved by using a faster clock. When tested on dilute solutions of calibrated latex spheres, the overall performance of the correlator is comparable with that of commercial hardware correlators, but with several nontrivial advantages related to its flexibility, low cost, and easy adaptability to future developments of PC and data acquisition technology.
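
    For the TM branch, the multi-tau idea is to correlate at linear lags within a stage and then halve the time resolution between stages; the numpy sketch below shows that scheme (the PM arrival-time branch is omitted), with the channel count and binning factor as illustrative choices.

```python
# Minimal multi-tau, time-mode software correlator sketch; channels per stage
# and the factor-of-2 coarsening are illustrative, not the instrument's values.
import numpy as np

def multitau_correlate(counts, channels=16, stages=8):
    """counts: photon counts per elementary time bin."""
    g, taus = [], []
    x = counts.astype(float)
    dt = 1
    for _ in range(stages):
        for k in range(1, channels + 1):
            # Normalized intensity correlation at lag k * dt.
            g.append(np.mean(x[:-k] * x[k:]) / np.mean(x) ** 2)
            taus.append(k * dt)
        # Coarsen: sum adjacent bins, doubling the lag spacing (multi-tau).
        n = len(x) // 2 * 2
        x = x[:n].reshape(-1, 2).sum(axis=1)
        dt *= 2
    return np.array(taus), np.array(g)

rng = np.random.default_rng(0)
taus, g = multitau_correlate(rng.poisson(2.0, 1 << 16))  # g ~ 1 for pure noise
```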

  15. Platform-independent software for medical image processing on the Internet

    NASA Astrophysics Data System (ADS)

    Mancuso, Michael E.; Pathak, Sayan D.; Kim, Yongmin

    1997-05-01

    We have developed a software tool for image processing over the Internet. The tool is a general-purpose, easy to use, flexible, platform-independent image processing software package with functions most commonly used in medical image processing. It provides for processing of medical images located either remotely on the Internet or locally. The software was written in Java, the new programming language developed by Sun Microsystems, and was compiled and tested using Microsoft's Visual Java 1.0 and Microsoft's Just in Time Compiler 1.00.6211. The software is simple and easy to use. To use the tool, the user needs to download the software from our site and run it using any Java interpreter, such as those supplied by Sun, Symantec, Borland, or Microsoft. Future versions of the operating systems supplied by Sun, Microsoft, Apple, IBM, and others will include Java interpreters. The software is then able to access and process any image on the Internet or on the local computer. Using a 512 × 512 × 8-bit image, a 3 × 3 convolution took 0.88 seconds on an Intel Pentium Pro PC running at 200 MHz with 64 Mbytes of memory; a window/level operation took 0.38 seconds, and a 3 × 3 median filter took 0.71 seconds. These performance numbers demonstrate the feasibility of using this software interactively on desktop computers. Our software tool supports various image processing techniques commonly used in medical image processing and runs without the need for any specialized hardware. It can become an easily accessible resource over the Internet to promote the learning and understanding of image processing algorithms. It could also facilitate sharing of medical image databases and collaboration amongst researchers and clinicians, regardless of location.
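
    For scale, a 3 × 3 convolution of the kind timed above amounts to nine shifted multiply-accumulates per pixel; the short numpy sketch below makes that explicit (the original tool was written in Java, and the kernel here is an arbitrary sharpening example).

```python
# Simple 3x3 convolution sketch; pure-numpy sliding window with an
# illustrative sharpening kernel.
import numpy as np

def conv3x3(img, kernel):
    assert kernel.shape == (3, 3)
    padded = np.pad(img.astype(float), 1, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(3):
        for dx in range(3):
            # Accumulate each kernel tap over the shifted image.
            out += kernel[dy, dx] * padded[dy:dy + img.shape[0],
                                           dx:dx + img.shape[1]]
    return out

img = np.random.randint(0, 256, (512, 512))
sharpen = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]])
result = conv3x3(img, sharpen)
```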

  16. Real-time Java simulations of multiple interference dielectric filters

    NASA Astrophysics Data System (ADS)

    Kireev, Alexandre N.; Martin, Olivier J. F.

    2008-12-01

    An interactive Java applet for real-time simulation and visualization of the transmittance properties of multiple interference dielectric filters is presented. The most commonly used interference filters, as well as state-of-the-art ones, are embedded in this platform-independent applet, which can serve research and education purposes. The Transmittance applet can be freely downloaded from the site http://cpc.cs.qub.ac.uk. Program summary: Program title: Transmittance. Catalogue identifier: AEBQ_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEBQ_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 5778. No. of bytes in distributed program, including test data, etc.: 90 474. Distribution format: tar.gz. Programming language: Java. Computer: Developed on a PC-Pentium platform. Operating system: Any Java-enabled OS; the applet was tested on Windows ME, XP, Sun Solaris, Mac OS. RAM: Variable. Classification: 18. Nature of problem: Sophisticated wavelength-selective multiple interference filters can include tens or even hundreds of dielectric layers, and the spectral response of such a stack is not obvious. On the other hand, there is a strong demand from application designers and students for quick insight into the properties of a given filter. Solution method: A Java applet was developed for the computation and visualization of the transmittance of multilayer interference filters. It is simple to use, and the embedded filter library can serve educational purposes; its ability to handle complex structures will also be appreciated as a useful research and development tool. Running time: Real-time simulations.
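
    Transmittance of such layer stacks is conventionally computed with the characteristic (transfer) matrix method; the sketch below shows the standard normal-incidence formulation applied to a quarter-wave stack, which is an illustrative example rather than a filter from the applet's library.

```python
# Standard transfer-matrix transmittance at normal incidence; the quarter-wave
# stack and refractive indices below are illustrative assumptions.
import numpy as np

def transmittance(n_layers, d_layers, n_in, n_out, wavelengths):
    T = []
    for lam in wavelengths:
        M = np.eye(2, dtype=complex)
        for n, d in zip(n_layers, d_layers):
            delta = 2 * np.pi * n * d / lam      # phase thickness of the layer
            M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                              [1j * n * np.sin(delta), np.cos(delta)]])
        B, C = M @ np.array([1.0, n_out])        # stack + substrate admittance
        T.append(4 * n_in * n_out / abs(n_in * B + C) ** 2)
    return np.array(T)

lam0, nH, nL = 550e-9, 2.35, 1.46                # design wavelength, indices
stack_n = [nH, nL] * 8                           # 16-layer HL quarter-wave stack
stack_d = [lam0 / (4 * n) for n in stack_n]
T = transmittance(stack_n, stack_d, 1.0, 1.52,
                  np.linspace(400e-9, 700e-9, 301))
```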

  17. [Substantiation of the choice of technical means in reduction of implementation costs of the project "Full automation of a central municipal hospital"].

    PubMed

    Nikitin, Iu D; Ovchinnikov, A V

    1998-01-01

    A way of reducing the cost of hospital automation is proposed. It is not necessary to update all the equipment, only a small part: the workstations used by programmers for their work, which support the stability of hospital automation. The operators' working places can be kept unmodified but given the ability to inherit the power and modernity of the purchased equipment; for this purpose they should be equipped with virtual machines copying the properties of the workstations, arranged in a pyramidal structure. UNIX, a multi-user, multitasking operating system providing access through several pseudoterminals, is installed on a Pentium 100/133 workstation. A graphic terminal from the AMR "UnTerminal" firm (USA) is proposed for use as the working places. Their advantage is a special adapter connected directly to the PC extension bus. Each user is allotted a video adapter, a keyboard controller, and serial and parallel interfaces for connecting a printer and a pointing device. Each working place supports multitasking and can be equipped with a printer, a mouse, or a modem. The image is transmitted to the working places at a very high speed, 77 megabits/sec, which supports not only text mode but also VGA or SVGA graphics. Graphic terminals are certainly more expensive than text terminals, but their capabilities are similar to those of the main computer, here the workstation. They may be located at a distance of up to 75 meters or more from the main computer and do not require adjustment during installation.

  18. Workload Characterization of CFD Applications Using Partial Differential Equation Solvers

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1998-01-01

    Workload characterization is used for modeling and evaluating computing systems at different levels of detail. We present a workload characterization for a class of Computational Fluid Dynamics (CFD) applications that solve Partial Differential Equations (PDEs). This workload characterization focuses on three high-performance computing platforms: the SGI Origin2000, the IBM SP-2, and a cluster of Intel Pentium Pro based PCs. We execute extensive measurement-based experiments on these platforms to gather statistics of system resource usage, which result in the workload characterization. Our approach yields a coarse-grain resource utilization behavior that is being applied to performance modeling and evaluation of distributed high-performance metacomputing systems. In addition, this study enhances our understanding of the interactions between PDE solver workloads and high-performance computing platforms, and is useful for tuning these applications.

  19. High-Fidelity Modeling of Computer Network Worms

    DTIC Science & Technology

    2004-06-22

plots the propagation of the TCP-based worm. This execution is among the largest TCP worm models simulated to date at packet level. TCP vs. UDP worm…the mapping of the virtual IP addresses to honeyd's MAC address in the proxy's ARP table. The proxy server listens for packets from both sides of…experimental setup, we used two machines (an IBM Pentium-4 ThinkPad and an IBM Pentium-III ThinkPad), running the proxy server and honeyd respectively. The Code Red II worm…

  20. Design of a modified adaptive neuro fuzzy inference system classifier for medical diagnosis of Pima Indians Diabetes

    NASA Astrophysics Data System (ADS)

    Sagir, Abdu Masanawa; Sathasivam, Saratha

    2017-08-01

    Medical diagnosis is the process of determining which disease or medical condition explains a person's observable signs and symptoms. Diagnosis of most diseases is very expensive, as many tests are required for prediction. This paper introduces an improved hybrid approach for training the adaptive-network-based fuzzy inference system with a modified Levenberg-Marquardt algorithm, using an analytical derivation scheme for computation of the Jacobian matrix. The goal is to investigate how certain diseases are related to a patient's characteristics and measurements, such as abnormalities, leading to a decision about the presence or absence of a disease. To achieve an accurate diagnosis at this complex stage of symptom analysis, the physician may need an efficient diagnosis system to classify and predict patient condition using an adaptive neuro-fuzzy inference system (ANFIS) pre-processed by grid partitioning. The proposed hybridised intelligent system was tested with the Pima Indian Diabetes dataset obtained from the University of California at Irvine's (UCI) machine learning repository. The proposed method's performance was evaluated on training and test datasets, and its effectiveness was assessed in terms of total accuracy, sensitivity, and specificity. In comparison, the proposed method achieves superior performance relative to the conventional gradient-descent-based ANFIS and some related existing methods. The software used for the implementation is MATLAB R2014a (version 8.3), executed on a PC with an Intel Pentium IV E7400 processor at 2.80 GHz and 2.0 GB of RAM.
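
    The heart of any Levenberg-Marquardt trainer is the damped Gauss-Newton update built from the Jacobian of the residuals; the toy numpy sketch below shows that update on a curve-fitting example (the actual ANFIS parameterization and the paper's analytical Jacobian are not reproduced here).

```python
# Illustrative Levenberg-Marquardt update of the kind used to train ANFIS
# parameters; the model and data below are toy assumptions.
import numpy as np

def lm_step(residual_fn, jacobian_fn, w, mu):
    r = residual_fn(w)                    # residual vector e(w)
    J = jacobian_fn(w)                    # Jacobian de/dw (analytical here)
    H = J.T @ J + mu * np.eye(len(w))     # damped Gauss-Newton Hessian
    return w - np.linalg.solve(H, J.T @ r)

# Toy example: fit y = a * exp(b * x).
x = np.linspace(0, 1, 50)
y = 2.0 * np.exp(1.5 * x)
res = lambda w: w[0] * np.exp(w[1] * x) - y
jac = lambda w: np.stack([np.exp(w[1] * x),
                          w[0] * x * np.exp(w[1] * x)], axis=1)
w = np.array([1.0, 1.0])
for _ in range(50):
    w = lm_step(res, jac, w, mu=1e-2)
print(w)  # converges to ~ [2.0, 1.5]
```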

  1. Portable Computer Technology (PCT) Research and Development Program Phase 2

    NASA Technical Reports Server (NTRS)

    Castillo, Michael; McGuire, Kenyon; Sorgi, Alan

    1995-01-01

    This project report focuses on: (1) design and development of two Advanced Portable Workstation 2 (APW 2) units, incorporating advanced technology features such as a low-power Pentium processor, a high-resolution color display, National Television Standards Committee (NTSC) video handling capabilities, a Personal Computer Memory Card International Association (PCMCIA) interface, and Small Computer System Interface (SCSI) and Ethernet interfaces; (2) use of these units to integrate and demonstrate advanced wireless network and portable video capabilities; and (3) qualification of the APW 2 systems for use in specific experiments aboard the Mir Space Station. A major objective of the PCT Phase 2 program was to help guide future choices in computing platforms and techniques for meeting National Aeronautics and Space Administration (NASA) mission objectives, focusing on the development of optimal configurations of computing hardware, software applications, and network technologies for use on NASA missions.

  2. Fiber optic interferometry for industrial process monitoring and control applications

    NASA Astrophysics Data System (ADS)

    Marcus, Michael A.

    2002-02-01

    Over the past few years we have been developing applications for a high-resolution (sub-micron accuracy), fiber-optic-coupled, dual Michelson interferometer-based instrument. It is being utilized in a variety of applications, including monitoring liquid layer thickness uniformity on coating hoppers, film base thickness uniformity measurement, digital camera focus assessment, optical cell path length assessment, and imager and wafer surface profile mapping. The instrument includes both coherent and non-coherent light sources, custom application-dependent optical probes and sample interfaces, a Michelson interferometer, custom electronics, and a Pentium-based PC with data acquisition cards and LabWindows/CVI or LabVIEW based application-specific software. This paper describes the evolution of this instrument platform and its applications, highlighting robust instrument design and the development of hardware, software, and user interfaces. The paper concludes with a discussion of a new high-speed instrument configuration that can be utilized for high-speed surface profiling and as an on-line web thickness gauge.

  3. Miniature Heat Pipes

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Small Business Innovation Research contracts from Goddard Space Flight Center to Thermacore Inc. have fostered the company's work on devices called "heat pipes" for space applications. Heat pipes are important to spacecraft for controlling the extreme temperature ranges of space. The problem was to maintain an 8-watt central processing unit (CPU) at less than 90 °C in a notebook computer using no power, with very little space available, and without using forced convection. Thermacore's answer was the design of a powder-metal wick that transfers CPU heat from a tightly confined spot to an area near available air flow. The heat pipe technology permits a notebook computer to be operated in any position without loss of performance, and has been applied successfully in Pentium processor notebook computers. The company expects its heat pipes to accommodate desktop computers as well; cellular phones, camcorders, and other hand-held electronics are possible applications.

  4. Digital tomosynthesis mammography using a parallel maximum-likelihood reconstruction method

    NASA Astrophysics Data System (ADS)

    Wu, Tao; Zhang, Juemin; Moore, Richard; Rafferty, Elizabeth; Kopans, Daniel; Meleis, Waleed; Kaeli, David

    2004-05-01

    A parallel reconstruction method, based on an iterative maximum-likelihood (ML) algorithm, is developed to provide fast reconstruction for digital tomosynthesis mammography. Tomosynthesis mammography acquires 11 low-dose projections of a breast by moving an x-ray tube over a 50° angular range. In parallel reconstruction, each projection is divided into multiple segments along the chest-to-nipple direction. Using the 11 projections, segments located at the same distance from the chest wall are combined to compute a partial reconstruction of the total breast volume. The shape of the partial reconstruction forms a thin slab, angled toward the x-ray source at a projection angle of 0°. The reconstruction of the total breast volume is obtained by merging the partial reconstructions. The overlap region between neighboring partial reconstructions and neighboring projection segments is utilized to compensate for the incomplete data at the boundary locations of the partial reconstructions. A serial execution of the reconstruction is compared to a parallel implementation, using clinical data. The serial code was run on a PC with a single 2.2 GHz Pentium IV CPU. The parallel implementation was developed using MPI and run on a 64-node Linux cluster with 800 MHz Itanium CPUs. The serial reconstruction for a medium-sized breast (5 cm thickness, 11 cm chest-to-nipple distance) takes 115 minutes, while the parallel implementation takes only 3.5 minutes. The reconstruction of a larger breast takes 187 minutes serially and 6.5 minutes in parallel. No significant differences were observed between the reconstructions produced by the serial and parallel implementations.
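
    For readers unfamiliar with iterative ML reconstruction, the sketch below shows a generic multiplicative ML-EM update with a toy system matrix; the paper's actual system model, segmentation into slabs, and MPI parallelization are not reproduced.

```python
# Generic ML-EM iteration, hedged as an illustration of the class of
# algorithm used, not the paper's exact method.
import numpy as np

def mlem(A, b, n_iter=200):
    """A: system matrix (projection bins x voxels); b: measured data."""
    x = np.ones(A.shape[1])
    norm = A.T @ np.ones(A.shape[0])               # per-voxel sensitivity
    for _ in range(n_iter):
        proj = A @ x                               # forward-project estimate
        ratio = b / np.maximum(proj, 1e-12)        # compare with measurements
        x *= (A.T @ ratio) / np.maximum(norm, 1e-12)  # multiplicative update
    return x

# Tiny toy system: 11 "projections", 4 voxels, noiseless data.
rng = np.random.default_rng(0)
A = rng.random((11, 4))
x_true = np.array([1.0, 0.5, 2.0, 0.1])
print(mlem(A, A @ x_true))  # approaches x_true
```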

  5. Passive perception system for day/night autonomous off-road navigation

    NASA Astrophysics Data System (ADS)

    Rankin, Arturo L.; Bergh, Charles F.; Goldberg, Steven B.; Bellutta, Paolo; Huertas, Andres; Matthies, Larry H.

    2005-05-01

    Passive perception of terrain features is a vital requirement for military-related unmanned autonomous vehicle operations, especially under electromagnetic signature management conditions. As a member of Team Raptor, the Jet Propulsion Laboratory developed a self-contained passive perception system under the DARPA-funded PerceptOR program. An environmentally protected forward-looking sensor head was designed and fabricated in-house to straddle an off-the-shelf pan-tilt unit. The sensor head contained three color cameras for multi-baseline daytime stereo ranging, a pair of cooled mid-wave infrared cameras for nighttime stereo ranging, and supporting electronics to synchronize the captured imagery. Narrow-baseline stereo provided improved range-data density in cluttered terrain, while wide-baseline stereo provided more accurate ranging for operation at higher speeds in relatively open areas. The passive perception system processed stereo images and output, over a local area network, terrain maps containing elevation, terrain type, and detected hazards. A novel software architecture was designed and implemented to distribute the data processing on a 533 MHz quad 7410 PowerPC single-board computer under the VxWorks real-time operating system. This architecture, which is general enough to operate on N processors, has subsequently been tested on Pentium-based processors under Windows and Linux, and on a Sparc-based processor under Unix. The passive perception system was operated during FY04 PerceptOR program evaluations at Fort A. P. Hill, Virginia, and Yuma Proving Ground, Arizona. This paper discusses the Team Raptor passive perception system hardware and software design, implementation, and performance, and describes a road map to faster and improved passive perception.

  6. Software for real-time control of a tidal liquid ventilator.

    PubMed

    Heckman, J L; Hoffman, J; Shaffer, T H; Wolfson, M R

    1999-01-01

    The purpose of this project was to develop and test computer software and control algorithms designed to operate a tidal liquid ventilator. The tests were executed on a 90-MHz Pentium PC with 16 MB RAM and a prototype liquid ventilator. The software was designed using Microsoft Visual C++ (Ver. 5.0) and the Microsoft Foundation Classes. It uses a graphic user interface, is multithreaded, runs in real time, and has a built-in simulator that facilitates user education in liquid-ventilation principles. The operator can use the software to specify ventilation parameters such as the frequency of ventilation, the tidal volume, and the inspiratory-expiratory time ratio. Commands are implemented via control of the pump speed and by setting the position of two two-way solenoid-controlled valves. Data for use in monitoring and control are gathered by analog-to-digital conversion. Control strategies are implemented to maintain lung volumes and airway pressures within desired ranges, according to limits set by the operator. Also, the software allows the operator to define the shape of the flow pulse during inspiration and expiration, and to optimize perfluorochemical liquid transfer while minimizing airway pressures and maintaining the desired tidal volume. The operator can stop flow during inspiration and expiration to measure alveolar pressures. At the end of expiration, the software stores all user commands and 30 ventilation parameters into an Excel spreadsheet for later review and analysis. Use of these software and control algorithms affords user-friendly operation of a tidal liquid ventilator while providing precise control of ventilation parameters.

  7. A DNA sequence analysis package for the IBM personal computer.

    PubMed Central

    Lagrimini, L M; Brentano, S T; Donelson, J E

    1984-01-01

    We present here a collection of DNA sequence analysis programs, called "PC Sequence" (PCS), which are designed to run on the IBM Personal Computer (PC). These programs are written in IBM PC compiled BASIC and take full advantage of the IBM PC's speed, error handling, and graphics capabilities. For a modest initial expense in hardware any laboratory can use these programs to quickly perform computer analysis on DNA sequences. They are written with the novice user in mind and require very little training or previous experience with computers. Also provided are a text editing program for creating and modifying DNA sequence files and a communications program which enables the PC to communicate with and collect information from mainframe computers and DNA sequence databases. PMID:6546433

  8. Generating heavy particles with energy and momentum conservation

    NASA Astrophysics Data System (ADS)

    Mereš, Michal; Melo, Ivan; Tomášik, Boris; Balek, Vladimír; Černý, Vladimír

    2011-12-01

    We propose a novel algorithm, called REGGAE, for the generation of momenta for a given sample of particle masses, evenly distributed in Lorentz-invariant phase space and obeying energy and momentum conservation. In comparison to other existing algorithms, REGGAE is designed for use in multiparticle production in hadronic and nuclear collisions, where many hadrons are produced and a large part of the available energy is stored in the form of their masses. The algorithm uses a loop simulating multiple collisions, which leads to the production of configurations with reasonably large weights. Program summary: Program title: REGGAE (REscattering-after-Genbod GenerAtor of Events). Catalogue identifier: AEJR_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJR_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 1523. No. of bytes in distributed program, including test data, etc.: 9608. Distribution format: tar.gz. Programming language: C++. Computer: PC Pentium 4, though no particular tuning for this machine was performed. Operating system: Originally designed on a Linux PC with g++, but it has been compiled and run successfully on OS X with g++ and on MS Windows with Microsoft Visual C++ 2008 Express Edition as well. RAM: Depends on the number of particles generated; for 10 particles, as in the attached example, about 120 kB. Classification: 11.2. Nature of problem: The task is to generate momenta for a sample of particles with given masses which obey energy and momentum conservation. Generated samples should be evenly distributed in the available Lorentz-invariant phase space. Solution method: In general, the algorithm works in two steps. First, all momenta are generated with the GENBOD algorithm, in which particle production is modeled as a sequence of two-body decays of heavy resonances. After all momenta are generated this way, they are reshuffled: each particle undergoes a collision with some other partner such that in the pair center-of-mass system the new directions of the momenta are distributed isotropically. After each particle collides only a few times, the momenta are distributed evenly across the whole available phase space. Starting with GENBOD is not essential for the procedure, but it improves the performance. Running time: Depends on the number of particles and the number of events to generate. On a Linux PC with a 2 GHz processor, generation of 1000 events with 10 particles each takes about 3 s.
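
    The reshuffling step at the core of the method can be sketched compactly: boost a pair to its center-of-mass frame, redraw the relative direction isotropically while keeping the CM momentum magnitude, and boost back, conserving the pair's total four-momentum exactly. The numpy sketch below illustrates one such collision (the distributed program is C++; this is not its code).

```python
# Hedged sketch of REGGAE's pairwise reshuffling collision.
import numpy as np
rng = np.random.default_rng(1)

def boost(p4, beta):
    """Lorentz-boost four-vector p4 = (E, px, py, pz) into a frame moving
    with velocity beta (3-vector, units of c)."""
    b2 = float(beta @ beta)
    if b2 < 1e-16:
        return p4.copy()
    gamma = 1.0 / np.sqrt(1.0 - b2)
    bp = p4[1:] @ beta
    E = gamma * (p4[0] - bp)
    p = p4[1:] + beta * (gamma ** 2 / (gamma + 1.0) * bp - gamma * p4[0])
    return np.concatenate(([E], p))

def collide(p1, p2, m1, m2):
    """Isotropically redraw the pair's relative direction in its CM frame."""
    beta = (p1 + p2)[1:] / (p1 + p2)[0]        # CM velocity of the pair
    q1 = boost(p1, beta)                       # go to the CM frame
    pmag = np.linalg.norm(q1[1:])              # CM momentum magnitude kept
    cost = rng.uniform(-1.0, 1.0)
    phi = rng.uniform(0.0, 2.0 * np.pi)
    sint = np.sqrt(1.0 - cost ** 2)
    n = np.array([sint * np.cos(phi), sint * np.sin(phi), cost])
    q1 = np.concatenate(([np.sqrt(pmag ** 2 + m1 ** 2)], pmag * n))
    q2 = np.concatenate(([np.sqrt(pmag ** 2 + m2 ** 2)], -pmag * n))
    return boost(q1, -beta), boost(q2, -beta)  # back to the lab frame

# Example: one reshuffling collision between two pions (GeV units).
m = 0.14
k1, k2 = np.array([0.3, 0.2, 0.5]), np.array([-0.1, 0.4, 0.1])
p1 = np.concatenate(([np.sqrt(m ** 2 + k1 @ k1)], k1))
p2 = np.concatenate(([np.sqrt(m ** 2 + k2 @ k2)], k2))
q1, q2 = collide(p1, p2, m, m)
print(p1 + p2 - (q1 + q2))  # ~0: total four-momentum conserved
```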

  9. Simulations of dusty plasmas using a special-purpose computer system designed for gravitational N-body problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamamoto, K.; Mizuno, Y.; Hibino, S.

    2006-01-15

    Simulations of dusty plasmas were performed using GRAPE-6, a special-purpose computer designed for gravitational N-body problems. The collective behavior of dust particles injected into the plasma was studied by means of three-dimensional computer simulations. As an example of a dusty plasma simulation, experiments on Coulomb crystals in plasmas were simulated, and formation of a quasi-two-dimensional Coulomb crystal was observed under typical laboratory conditions. Another example simulated the movement of dust particles in plasmas under microgravity conditions, where fully three-dimensional spherical structures of dust clouds were observed. For the simulation of a dusty plasma in microgravity with 3×10⁴ particles, GRAPE-6 can perform the whole operation 1000 times faster than a 1.6 GHz Pentium 4 processor.

  10. Predicting Cost/Performance Trade-Offs for Whitney: A Commodity Computing Cluster

    NASA Technical Reports Server (NTRS)

    Becker, Jeffrey C.; Nitzberg, Bill; VanderWijngaart, Rob F.; Kutler, Paul (Technical Monitor)

    1997-01-01

    Recent advances in low-end processor and network technology have made it possible to build a "supercomputer" out of commodity components. We develop simple models of the NAS Parallel Benchmarks version 2 (NPB 2) to explore the cost/performance trade-offs involved in building a balanced parallel computer supporting a scientific workload. We develop closed form expressions detailing the number and size of messages sent by each benchmark. Coupling these with measured single processor performance, network latency, and network bandwidth, our models predict benchmark performance to within 30%. A comparison based on total system cost reveals that current commodity technology (200 MHz Pentium Pros with 100baseT Ethernet) is well balanced for the NPBs up to a total system cost of around $1,000,000.
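
    The style of model used here can be summarized in one line: predicted time = flops / flop rate + messages × latency + bytes / bandwidth. The tiny sketch below evaluates that closed form with illustrative constants (the measured NPB message counts and rates from the paper are not reproduced).

```python
# Toy version of the closed-form cost model described above; all constants
# are illustrative assumptions, not the paper's measured values.
def predicted_time(flops, n_msgs, bytes_sent,
                   flop_rate=50e6,      # single-processor rate (flop/s)
                   latency=100e-6,      # network latency per message (s)
                   bandwidth=12.5e6):   # bandwidth (bytes/s, ~100baseT)
    return flops / flop_rate + n_msgs * latency + bytes_sent / bandwidth

# Example: a step doing 1 Gflop, 200 messages, 50 MB of traffic.
print(predicted_time(1e9, 200, 50e6))
```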

  11. Comparison of empirical strategies to maximize GENEHUNTER lod scores.

    PubMed

    Chen, C H; Finch, S J; Mendell, N R; Gordon, D

    1999-01-01

    We compare four strategies for finding the settings of genetic parameters that maximize the lod scores reported in GENEHUNTER 1.2. The four strategies are iterated complete factorial designs, iterated orthogonal Latin hypercubes, evolutionary operation, and numerical optimization. The genetic parameters that are set are the phenocopy rate, penetrance, and disease allele frequency; both recessive and dominant models are considered. We selected the optimization of a recessive model on the Collaborative Study on the Genetics of Alcoholism (COGA) data of chromosome 1 for complete analysis. Convergence to a setting producing a local maximum required the evaluation of over 100 settings (for a time budget of 800 minutes on a Pentium II 300 MHz PC). Two notable local maxima were detected, suggesting the need for a more extensive search before claiming that a global maximum had been found. The orthogonal Latin hypercube design was the best strategy for finding areas that produced high lod scores with small numbers of evaluations. Numerical optimization starting from a region producing high lod scores was the strategy that found the highest maximum observed.

  12. Experience of the ARGO autonomous vehicle

    NASA Astrophysics Data System (ADS)

    Bertozzi, Massimo; Broggi, Alberto; Conte, Gianni; Fascioli, Alessandra

    1998-07-01

    This paper presents and discusses the first results obtained with the GOLD (Generic Obstacle and Lane Detection) system as an automatic driver of ARGO. ARGO is a Lancia Thema passenger car equipped with a vision-based system that extracts road and environmental information from the acquired scene. By means of stereo vision, obstacles on the road are detected and localized, while the processing of a single monocular image allows the road geometry in front of the vehicle to be extracted. The generality of the underlying approach allows detection of generic obstacles (without constraints on shape, color, or symmetry) and of lane markings even in dark and strong-shadow conditions. The hardware system consists of a 200 MHz Pentium PC with MMX technology and a frame-grabber board able to acquire 3 b/w images simultaneously; the result of the processing (position of obstacles and geometry of the road) is used to drive an actuator on the steering wheel, while debug information is presented to the user on an on-board monitor and a LED-based control panel.

  13. Finite volume multigrid method of the planar contraction flow of a viscoelastic fluid

    NASA Astrophysics Data System (ADS)

    Moatssime, H. Al; Esselaoui, D.; Hakim, A.; Raghay, S.

    2001-08-01

    This paper reports on a numerical algorithm for the steady flow of a viscoelastic fluid. The conservation and constitutive equations are solved using the finite volume method (FVM) with a hybrid scheme for the velocities and a first-order upwind approximation for the viscoelastic stress. A non-uniform staggered grid system is used. The iterative SIMPLE algorithm is employed to relax the coupled momentum and continuity equations, and the non-linear algebraic equations over the flow domain are solved iteratively by the symmetrical coupled Gauss-Seidel (SCGS) method. In both cases, the full approximation storage (FAS) multigrid algorithm is used. An Oldroyd-B fluid model was selected for the calculations. Results are reported for a planar 4:1 abrupt contraction at various Weissenberg numbers. The solutions are found to be stable and smooth, and show that at high Weissenberg numbers the domain must be sufficiently long. The convergence of the method has been verified with grid refinement. All the calculations were performed on a PC equipped with a Pentium III processor at 550 MHz.

  14. A Comparison of the Apple Macintosh and IBM PC in Laboratory Applications.

    ERIC Educational Resources Information Center

    Williams, Ron

    1986-01-01

    Compares Apple Macintosh and IBM PC microcomputers in terms of their usefulness in the laboratory. No attempt is made to equalize the two computer systems since they represent opposite ends of the computer spectrum. Indicates that the IBM PC is the most useful general-purpose personal computer for laboratory applications. (JN)

  15. Biological Visualization, Imaging and Simulation(Bio-VIS) at NASA Ames Research Center: Developing New Software and Technology for Astronaut Training and Biology Research in Space

    NASA Technical Reports Server (NTRS)

    Smith, Jeffrey

    2003-01-01

    The Bio-Visualization, Imaging and Simulation (BioVIS) Technology Center at NASA's Ames Research Center is dedicated to developing and applying advanced visualization, computation, and simulation technologies to support NASA Space Life Sciences research and the objectives of the Fundamental Biology Program. Research ranges from high-resolution 3D cell imaging and structure analysis, virtual environment simulation of fine sensory-motor tasks, and computational neuroscience and biophysics to biomedical/clinical applications. Computer simulation research focuses on the development of advanced computational tools for astronaut training and education. Virtual Reality (VR) and Virtual Environment (VE) simulation systems have become important training tools in many fields, from flight simulation to, more recently, surgical simulation. The type and quality of training provided by these computer-based tools ranges widely, but the value of real-time VE computer simulation as a method of preparing individuals for real-world tasks is well established. Astronauts routinely use VE systems for various training tasks, including Space Shuttle landings, robot arm manipulations, and extravehicular activities (space walks). Currently, there are no VE systems to train astronauts for basic and applied research experiments, which are an important part of many missions. The Virtual Glovebox (VGX) is a prototype VE system for real-time, physically-based simulation of the Life Sciences Glovebox, where astronauts will perform many complex tasks supporting research experiments aboard the International Space Station. The VGX consists of a physical display system utilizing dual LCD projectors and circular polarization to produce a desktop-sized 3D virtual workspace. Physically-based modeling tools (Arachi Inc.) provide real-time collision detection, rigid body dynamics, physical properties, and force-based controls for objects. The human-computer interface consists of two magnetic tracking devices (Ascension Inc.) attached to instrumented gloves (Immersion Inc.), which co-locate the user's hands with hand/forearm representations in the virtual workspace. Force-feedback is possible in a work volume defined by a Phantom Desktop device (SensAble Inc.). Graphics are written in OpenGL. The system runs on a 2.2 GHz Pentium 4 PC. The prototype VGX provides astronauts and support personnel with a real-time, physically-based VE system to simulate basic research tasks both on Earth and in the microgravity of space. The immersive virtual environment of the VGX also makes it a useful tool for virtual engineering applications, including CAD development, procedure design, and simulation of human-system interaction in a desktop-sized work volume.

  16. Method for measuring integrated sensitivity of solar cells and multielement photoconverters using an X-Y scanner

    NASA Astrophysics Data System (ADS)

    Naumov, V. V.; Grebenshchikov, O. A.; Zalesskii, V. B.

    2006-09-01

    We describe a method for automated measurement of the integrated sensitivity of solar cells (SCs) and multielement photoconverters (MPCs) using an experimental apparatus comprising a Pentium III personal computer (PC), an HP-34401A digital multimeter (DM), a stabilized radiation source (SRS), a controllable focusing system, and an X-Y positioning device based on CD-RW optical disk drives. The method provides high accuracy in measuring the size of the photosensitive areas of the solar cells and multielement photoconverters and inhomogeneities in their active regions, which makes it possible to correct the production process in the development stage and during fabrication of test prototypes. The radiation power from the stabilized radiation source was ≤1 W; the scanning steps along the X, Y coordinates were 10–100 µm; the transverse cross-sectional diameters of the focused radiation beam were 10–100 µm; the measurable photocurrents ranged from 10⁻⁹ A to 2 A; the scanning rate along the X, Y coordinates was ≤100 mm/sec; the relative mean-square error (RMSE) for measurement of the integrated sensitivity of the solar cells was 0.2% ≤ γS,int ≤ 0.9% over the range of measurable photocurrents 1 mA ≤ Iph ≤ 750 mA and areas 0.1 cm² ≤ A ≤ 25 cm² for a number of measurements ≤ 2·10⁵; the instability of the radiation power (luminosity) was ≤0.08% over 1 h or ≤0.4% over 8 h of continuous operation; and the stabilized power range of the stabilized radiation source was 10⁻²–10² W. The software was written in Delphi 7.0.

  17. A multimedia perioperative record keeper for clinical research.

    PubMed

    Perrino, A C; Luther, M A; Phillips, D B; Levin, F L

    1996-05-01

    To develop a multimedia perioperative recordkeeper that provides: 1. synchronous, real-time acquisition of multimedia data, 2. on-line access to the patient's chart data, and 3. advanced data analysis capabilities through integrated, multimedia database and analysis applications. To minimize cost and development time, the system design utilized industry-standard hardware components and graphical software development tools. The system was configured to use a Pentium PC complemented with a variety of hardware interfaces to external data sources. These sources included physiologic monitors with data in digital, analog, video, and audio as well as paper-based formats. The development process was guided by trials in over 80 clinical cases and by the critiques of numerous users. As a result of this process, a suite of custom software applications was created to meet the design goals. The Perioperative Data Acquisition application manages data collection from a variety of physiological monitors. The Charter application provides for rapid creation of an electronic medical record from the patient's paper-based chart and investigator's notes. The Multimedia Medical Database application provides a relational database for the organization and management of multimedia data. The Triscreen application provides an integrated data analysis environment with simultaneous, full-motion data display. With recent technological advances in PC power, data acquisition hardware, and software development tools, the clinical researcher now has the ability to collect and examine a more complete perioperative record. It is hoped that the description of the MPR and its development process will assist and encourage others to advance these tools for perioperative research.

  18. PC Tutor. Bericht über ein PC-gestütztes Tutorensystem = PC Tutor. Report on a Tutoring System with Personal Computer. ZIFF Papiere 75.

    ERIC Educational Resources Information Center

    Fritsch, Helmut

    A project was conducted to increase as well as to professionalize communication between tutors and learners in a West German university's distance education program by the use of personal computers. Two tutors worked on the systematic development of a PC-based correcting system. The goal, apart from developing general language skills in English,…

  19. A Tutorial for SPSS/PC+ Studentware. Study Guide for the Doctor of Arts in Computer-Based Learning.

    ERIC Educational Resources Information Center

    MacFarland, Thomas W.; Hou, Cheng-I

    The purpose of this tutorial is to provide the basic information needed for success with SPSS/PC+ Studentware, a student version of the statistical analysis software offered by SPSS, Inc., for the IBM PC+ and compatible computers. It is intended as a convenient summary of how to organize and conduct the most common computer-based statistical…

  20. Eigen Spreading

    DTIC Science & Technology

    2008-02-27

between the PHY layer and, for example, a host PC. The PC wants to generate and receive a sequence of data packets. The PC may also want to send…the testbed is quite similar. Given the intense computational requirements of SVD and other matrix-mode operations needed to support eigen spreading, a…platform for real-time operation. This task is probably the major challenge in the development of the testbed. All compute-intensive tasks will be…

  1. Semi-Automated Identification of Rocks in Images

    NASA Technical Reports Server (NTRS)

    Bornstein, Benjamin; Castano, Andres; Anderson, Robert

    2006-01-01

    Rock Identification Toolkit Suite is a computer program that assists users in identifying and characterizing rocks shown in images returned by the Mars Exploration Rover mission. Included in the program are components for automated finding of rocks, interactive adjustment of rock outlines, active contouring of rocks, and automated analysis of shapes in two dimensions. The program assists users in evaluating the surface properties of rocks and soil and reports basic properties of rocks. The program requires either the Mac OS X operating system running on a G4 (or more capable) processor or a Linux operating system running on a Pentium (or more capable) processor, plus at least 128 MB of random-access memory.

  2. Application of Reconfigurable Computing Technology to Multi-KiloHertz Micro-Laser Altimeter (MMLA) Data Processing

    NASA Technical Reports Server (NTRS)

    Powell, Wesley; Dabney, Philip; Hicks, Edward; Pinchinat, Maxime; Day, John H. (Technical Monitor)

    2002-01-01

    The Multi-KiloHertz Micro-Laser Altimeter (MMLA) is an aircraft-based instrument developed by NASA Goddard Space Flight Center with several potential spaceflight applications. This presentation describes how reconfigurable computing technology was employed to perform MMLA signal extraction in real time under realistic operating constraints. The MMLA is a "single-photon-counting" airborne laser altimeter that is used to measure land surface features such as topography and vegetation canopy height. This instrument has to date flown a number of times aboard the NASA P3 aircraft, acquiring data at a number of sites in the Mid-Atlantic region. The instrument pulses a relatively low-powered laser at a very high rate (10 kHz) and measures the time-of-flight of discrete returns from the target surface. It then bins these measurements into a two-dimensional array (vertical height vs. horizontal ground track) and selects the most likely signal path through the array; return data that do not correspond to the selected signal path are classified as noise returns and discarded. The MMLA signal extraction algorithm is very compute-intensive in that a score must be computed for every possible path through the two-dimensional array in order to select the most likely signal path. Given a typical array size of 50 × 6, up to 33 arrays must be processed per second, and for each of these arrays roughly 12,000 individual paths must be scored. Furthermore, the number of paths increases exponentially with the horizontal size of the array and linearly with the vertical size; yet increasing the horizontal and vertical sizes of the array offers science advantages such as improved range, resolution, and noise rejection. Due to the volume of return data and the compute-intensive signal extraction algorithm, the existing PC-based MMLA data system has been unable to perform signal extraction in real time unless the array is limited in size to one column. This limits the ability of the MMLA to operate in environments with sparse signal returns and a high number of noise returns. However, under an IR&D project, an FPGA-based reconfigurable computing data system has been developed and demonstrated to perform real-time signal extraction under realistic operating constraints. This reconfigurable data system is based on the commercially available Firebird board from Annapolis Micro Systems: a PCI board consisting of a Xilinx Virtex 2000E FPGA along with 36 MB of SRAM arranged in five separately addressable banks, housed in a rackmount PC with dual 850 MHz Pentium processors running the Windows 2000 operating system. This data system performs all signal extraction in hardware on the Firebird, but also runs the existing software-based signal extraction in tandem for comparison purposes. Using a relatively small amount of the Virtex XCV2000E resources, the reconfigurable data system has been demonstrated to improve performance over the existing software-based data system by an order of magnitude; performance could be further improved by employing parallelism. Ground testing and a preliminary engineering test flight aboard the NASA P3 have been performed, during which the reconfigurable data system was demonstrated to match the results of the existing data system.
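
    To make the path-scoring cost concrete, the sketch below exhaustively scores every path through a toy 50 × 6 histogram under a ±1-bin vertical transition constraint; that constraint is an assumption chosen because it reproduces the quoted path count (50·3⁵ = 12,150 ≈ 12,000), and the flight system evaluates these scores in FPGA hardware rather than Python.

```python
# Hedged sketch of exhaustive path scoring over a (rows x cols) bin array;
# the +/-1 vertical transition rule is an assumption, not the flight code.
import numpy as np
from itertools import product

def best_path(hist):
    rows, cols = hist.shape
    best, best_score = None, -1
    for r0 in range(rows):
        for moves in product((-1, 0, 1), repeat=cols - 1):
            path, r = [r0], r0
            for m in moves:
                r += m
                if not 0 <= r < rows:
                    break                         # path leaves the array
                path.append(r)
            else:
                score = sum(hist[r, c] for c, r in enumerate(path))
                if score > best_score:
                    best, best_score = path, score
    return best, best_score

hist = np.random.poisson(1.0, (50, 6))            # background noise returns
for c, r in enumerate([20, 21, 21, 22, 23, 23]):
    hist[r, c] += 30                              # synthetic surface return
print(best_path(hist))
```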

  3. Modeling of processes of formation of the images in optical-electronic systems

    NASA Astrophysics Data System (ADS)

    Grudin, B. N.; Plotnikov, V. S.; Fischenko, V. K.

    2001-08-01

    A digital model of a multicomponent coherent optical system with an arbitrary layout of optical elements (lasers, lenses, phototransparencies recording the transmission function of a specimen or filters, photoregistrars), constructed using fast algorithms, is considered. The model is implemented as a program for personal computers under the operating systems Windows 95, 98 and Windows NT. In a simulation of, for example, a coherent system consisting of twenty elementary optical cascades, the relative error in the output image as a rule does not exceed 0.25% when N >= 256 (N x N is the number of discrete samples in the image), and the time to calculate the output image on a computer (Pentium II, 300 MHz) for N = 512 does not exceed one minute. The simulation program for coherent optical systems will be used in scientific research and in teaching students of Far East State University.
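
    As a rough illustration of the kind of fast-algorithm modeling described here, one elementary coherent cascade can be simulated with two FFTs: the field is transformed to the Fourier plane, multiplied by a pupil (filter) function, and transformed back. Everything below is an assumed toy setup, not the authors' program.

      # One elementary coherent cascade: transparency -> Fourier-plane filter -> image.
      import numpy as np

      def cascade(field, pupil):
          spectrum = np.fft.fftshift(np.fft.fft2(field))      # lens: forward transform
          return np.fft.ifft2(np.fft.ifftshift(spectrum * pupil))

      N = 512
      x = np.linspace(-1.0, 1.0, N)
      X, Y = np.meshgrid(x, x)
      specimen = (X**2 + Y**2 < 0.25).astype(complex)  # transmission function
      pupil = (X**2 + Y**2 < 0.50).astype(float)       # low-pass aperture filter
      image = np.abs(cascade(specimen, pupil))**2      # registered intensity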

  4. A Parallel Multigrid Solver for Viscous Flows on Anisotropic Structured Grids

    NASA Technical Reports Server (NTRS)

    Prieto, Manuel; Montero, Ruben S.; Llorente, Ignacio M.; Bushnell, Dennis M. (Technical Monitor)

    2001-01-01

    This paper presents an efficient parallel multigrid solver for speeding up the computation of a 3-D model that treats the flow of a viscous fluid over a flat plate. The main interest of this simulation lies in exhibiting some basic difficulties that prevent optimal multigrid efficiencies from being achieved. As the computing platform, we have used Coral, a Beowulf-class system based on Intel Pentium processors and equipped with GigaNet cLAN and switched Fast Ethernet networks. Our study not only examines the scalability of the solver but also includes a performance evaluation of Coral where the investigated solver has been used to compare several of its design choices, namely, the interconnection network (GigaNet versus switched Fast-Ethernet) and the node configuration (dual nodes versus single nodes). As a reference, the performance results have been compared with those obtained with the NAS-MG benchmark.
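
    For readers unfamiliar with the method, the structure of a multigrid V-cycle is easy to show in one dimension. The sketch below solves -u'' = f with a weighted-Jacobi smoother; it is a didactic stand-in, not the paper's 3-D anisotropic-grid solver.

      # Minimal 1-D multigrid V-cycle for -u'' = f with zero boundary values.
      import numpy as np

      def smooth(u, f, h, iters=3, w=2.0/3.0):      # weighted Jacobi smoother
          for _ in range(iters):
              u[1:-1] = (1 - w)*u[1:-1] + w*0.5*(u[:-2] + u[2:] + h*h*f[1:-1])
          return u

      def residual(u, f, h):
          r = np.zeros_like(u)
          r[1:-1] = f[1:-1] - (2*u[1:-1] - u[:-2] - u[2:])/(h*h)
          return r

      def v_cycle(u, f, h):
          u = smooth(u, f, h)
          if len(u) <= 3:                            # coarsest grid: just smooth hard
              return smooth(u, f, h, iters=50)
          r = residual(u, f, h)[::2]                 # restrict residual (injection)
          e = v_cycle(np.zeros_like(r), r, 2*h)      # coarse-grid correction
          u[::2] += e                                # prolong: inject + interpolate
          u[1:-1:2] += 0.5*(e[:-1] + e[1:])
          return smooth(u, f, h)

      n = 129; h = 1.0/(n - 1)
      u, f = np.zeros(n), np.ones(n)
      for _ in range(10):
          u = v_cycle(u, f, h)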

  5. Choice of Human-Computer Interaction Mode in Stroke Rehabilitation.

    PubMed

    Mousavi Hondori, Hossein; Khademi, Maryam; Dodakian, Lucy; McKenzie, Alison; Lopes, Cristina V; Cramer, Steven C

    2016-03-01

    Advances in technology are providing new forms of human-computer interaction. The current study examined one form of human-computer interaction, augmented reality (AR), whereby subjects train in the real-world workspace with virtual objects projected by the computer. Motor performances were compared with those obtained while subjects used a traditional human-computer interaction, that is, a personal computer (PC) with a mouse. Patients used goal-directed arm movements to play AR and PC versions of the Fruit Ninja video game. The 2 versions required the same arm movements to control the game but had different cognitive demands. With AR, the game was projected onto the desktop, where subjects viewed the game plus their arm movements simultaneously, in the same visual coordinate space. In the PC version, subjects used the same arm movements but viewed the game by looking up at a computer monitor. Among 18 patients with chronic hemiparesis after stroke, the AR game was associated with 21% higher game scores (P = .0001), 19% faster reaching times (P = .0001), and 15% less movement variability (P = .0068), as compared to the PC game. Correlations between game score and arm motor status were stronger with the AR version. Motor performances during the AR game were superior to those during the PC game. This result is due in part to the greater cognitive demands imposed by the PC game, a feature problematic for some patients but clinically useful for others. Mode of human-computer interface influences rehabilitation therapy demands and can be individualized for patients. © The Author(s) 2015.

  6. Ada Compiler Validation Summary Report: Certificate Number: 940325S1.11352 DDC-I DACS Sun SPARC/Solaris to Pentium PM Bare Ada Cross Compiler System, Version 4.6.4, Sun SPARCclassic to Intel Pentium (Operated as Bare Machine) Based in Xpress Desktop (Intel Product Number: XBASE6E4F-B)

    DTIC Science & Technology

    1994-03-25

    National Institute of Standards and Technology, Building 225, Room A266, Gaithersburg, Maryland 20899 U.S.A.; Ada Validation Organization; Ada Joint Program Office, David R. Basel... If an identifier is longer than 15 characters, a bar ("|") is written in the 16th position and the rest of the characters are not printed. The place of the definition, i.e., a line

  7. Optimal Padding for the Two-Dimensional Fast Fourier Transform

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H.; Aronstein, David L.; Smith, Jeffrey S.

    2011-01-01

    One-dimensional Fast Fourier Transform (FFT) operations work fastest on grids whose size is divisible by a power of two. Because of this, padding grids (that are not already sized to a power of two) so that their size is the next highest power of two can speed up operations. While this works well for one-dimensional grids, it does not work well for two-dimensional grids. For a two-dimensional grid, there are certain pad sizes that work better than others. Therefore, the need exists to generalize a strategy for determining optimal pad sizes. There are three steps in the FFT algorithm. The first is to perform a one-dimensional transform on each row in the grid. The second step is to transpose the resulting matrix. The third step is to perform a one-dimensional transform on each row in the resulting grid. Steps one and three both benefit from padding the row to the next highest power of two, but the second step needs a novel approach. An algorithm was developed that struck a balance between optimizing the grid pad size with prime factors that are small (which are optimal for one-dimensional operations), and with prime factors that are large (which are optimal for two-dimensional operations). This algorithm optimizes based on average run times, and is not fine-tuned for any specific application. It increases the amount of times that processor-requested data is found in the set-associative processor cache. Cache retrievals are 4-10 times faster than conventional memory retrievals. The tested implementation of the algorithm resulted in faster execution times on all platforms tested, but with varying grid sizes, because various computer architectures process commands differently. The test grid was 512x512. Using a 540x540 grid on a Pentium V processor, the code ran 30 percent faster. On a PowerPC, a 256x256 grid worked best. A Core2Duo computer preferred either a 1040x1040 (15 percent faster) or a 1008x1008 (30 percent faster) grid. There are many industries that can benefit from this algorithm, including optics, image-processing, signal-processing, and engineering applications.
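
    The flavor of the optimization can be sketched with a toy cost model that scores candidate pad sizes by their prime factorizations. The scoring function below is a placeholder assumption; the actual algorithm described above balances small and large prime factors against measured cache behavior and average run times.

      # Choose a pad size near n by scoring candidate prime factorizations.
      def factors(n):
          out, p = [], 2
          while p * p <= n:
              while n % p == 0:
                  out.append(p); n //= p
              p += 1
          if n > 1:
              out.append(n)
          return out

      def pad_cost(n):                  # crude proxy: small average factor is cheap
          fs = factors(n)
          return sum(fs) / len(fs)

      def choose_pad(n, search=64):     # scan a window of candidate grid sizes
          return min(range(n, n + search), key=pad_cost)

      print(choose_pad(512))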

  8. Computer (PC/Network) Coordinator.

    ERIC Educational Resources Information Center

    Ohio State Univ., Columbus. Center on Education and Training for Employment.

    This publication contains 22 subjects appropriate for use in a competency list for the occupation of computer (PC/network) coordinator, 1 of 12 occupations within the business/computer technologies cluster. Each unit consists of a number of competencies; a list of competency builders is provided for each competency. Titles of the 22 units are as…

  9. Computers in Post-Secondary Developmental Education and Learning Assistance.

    ERIC Educational Resources Information Center

    Christ, Frank L.; McLaughlin, Richard C.

    This update on computer technology--as it affects learning assistance directors and developmental education personnel--begins by reporting on new developments and changes that have taken place during the past two years in five areas: (1) hardware (microcomputer systems, low cost PC clones, combination Apple/PC machines, lab computer controllers…

  10. Integrating a Single Tablet PC in Chemistry, Engineering, and Physics Courses

    ERIC Educational Resources Information Center

    Rogers, James W.; Cox, James R.

    2008-01-01

    A tablet PC is a versatile computer that combines the computing power of a notebook with the pen functionality of a PDA (Cox and Rogers 2005b). The authors adopted tablet PC technology in order to improve the process and product of the lecture format in their chemistry, engineering, and physics courses. In this high-tech model, a single tablet PC…

  11. Design and implementation of a scene-dependent dynamically selfadaptable wavefront coding imaging system

    NASA Astrophysics Data System (ADS)

    Carles, Guillem; Ferran, Carme; Carnicer, Artur; Bosch, Salvador

    2012-01-01

    A computational imaging system based on wavefront coding is presented. Wavefront coding provides an extension of the depth-of-field at the expense of a slight reduction of image quality; this trade-off results from the amount of coding used. By using spatial light modulators, a flexible coding is achieved which permits the coding strength to be increased or decreased as needed. In this paper a computational method is proposed for evaluating the output of a wavefront coding imaging system equipped with a spatial light modulator, with the aim of making it possible to implement the most suitable coding strength for a given scene. This is achieved in an unsupervised manner, so the whole system acts as a dynamically selfadaptable imaging system. The program presented here controls the spatial light modulator and the camera, and also processes the images in a synchronised way in order to implement the dynamic system in real time. A prototype of the system was implemented in the laboratory and illustrative examples of the performance are reported in this paper. Program summary: Program title: DynWFC (Dynamic WaveFront Coding) Catalogue identifier: AEKC_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKC_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 10 483 No. of bytes in distributed program, including test data, etc.: 2 437 713 Distribution format: tar.gz Programming language: LabVIEW 8.5 with NI Vision, and the MinGW C compiler Computer: Tested on PC Intel Pentium Operating system: Tested on Windows XP Classification: 18 Nature of problem: The program implements an enhanced wavefront coding imaging system able to adapt the degree of coding to the requirements of a specific scene. The program controls the acquisition by a camera, the display on a spatial light modulator and the image processing operations synchronously. The spatial light modulator is used to implement the phase mask with flexibility, given the trade-off between depth-of-field extension and image quality. The action of the program is to evaluate the depth-of-field requirements of the specific scene and subsequently control the coding established by the spatial light modulator, in real time.
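
    Although the paper's code is a LabVIEW system, the optical idea can be sketched numerically: wavefront coding is commonly implemented with a cubic phase mask, and the coding strength is the amplitude of that mask. The parameter alpha below and the whole setup are illustrative assumptions, not the DynWFC implementation.

      # PSF of a pupil carrying a cubic phase mask exp(i*alpha*(x^3 + y^3)).
      # Larger alpha extends depth of field at the cost of image quality,
      # which is the trade-off an SLM-based system can tune per scene.
      import numpy as np

      N = 256
      x = np.linspace(-1.0, 1.0, N)
      X, Y = np.meshgrid(x, x)
      aperture = (X**2 + Y**2) <= 1.0

      def psf(alpha):
          pupil = aperture * np.exp(1j * alpha * (X**3 + Y**3))
          h = np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2
          return h / h.sum()

      weak, strong = psf(5.0), psf(30.0)   # two coding strengths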

  12. Performance Comparison of Mainframe, Workstations, Clusters, and Desktop Computers

    NASA Technical Reports Server (NTRS)

    Farley, Douglas L.

    2005-01-01

    A performance evaluation of a variety of computers frequently found in a scientific or engineering research environment was conducted using synthetic and application program benchmarks. From a performance perspective, emerging commodity processors have superior performance relative to legacy mainframe computers. In many cases, the PC clusters exhibited performance comparable with traditional mainframe hardware when 8-12 processors were used. The main advantage of the PC clusters was their cost. Regardless of whether the clusters were built from new computers or created from retired computers, their performance-to-cost ratio was superior to that of the legacy mainframe computers. Finally, the typical annual maintenance cost of legacy mainframe computers is several times the cost of new equipment such as multiprocessor PC workstations. The savings from eliminating the annual maintenance fee on legacy hardware can result in a yearly increase in total computational capability for an organization.

  13. A Method for Transferring Photoelectric Photometry Data from Apple II+ to IBM PC

    NASA Astrophysics Data System (ADS)

    Powell, Harry D.; Miller, James R.; Stephenson, Kipp

    1989-06-01

    A method is presented for transferring photoelectric photometry data files from an Apple II computer to an IBM PC computer in a form which is compatible with the AAVSO Photoelectric Photometry data collection process.

  14. Control Code for Bearingless Switched-Reluctance Motor

    NASA Technical Reports Server (NTRS)

    Morrison, Carlos R.

    2007-01-01

    A computer program has been devised for controlling a machine that is an integral combination of magnetic bearings and a switched-reluctance motor. The motor contains an eight-pole stator and a hybrid rotor, which has both (1) a circular lamination stack for levitation and (2) a six-pole lamination stack for rotation. The program computes drive and levitation currents for the stator windings with real-time feedback control. During normal operation, two of the four pairs of opposing stator poles (each pair at right angles to the other pair) levitate the rotor. The remaining two pairs of stator poles exert torque on the six-pole rotor lamination stack to produce rotation. This version executes in a control-loop time of 40 μs on a Pentium (or equivalent) processor that operates at a clock speed of 400 MHz. The program can be expanded, by the addition of logic blocks, to enable control of position along additional axes. The code enables adjustment of operational parameters (e.g., motor speed and stiffness, and damping parameters of magnetic bearings) through computer keyboard key presses.
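
    As a schematic of the levitation half of such a controller, the sketch below converts a measured rotor displacement on one axis into differential currents for an opposing stator pole pair using a PD law. Gains, bias current, and sign conventions are invented for illustration; the NASA code's actual control laws are not reproduced here.

      # Toy PD levitation loop for one axis of an opposing stator pole pair.
      KP, KD, BIAS = 800.0, 5.0, 2.0        # hypothetical gains and bias current (A)
      DT = 40e-6                            # 40-microsecond control loop

      def levitation_currents(x, x_prev):
          u = KP * x + KD * (x - x_prev) / DT   # PD force command
          i_top = max(0.0, BIAS - u)            # rotor high -> pull less from top
          i_bottom = max(0.0, BIAS + u)         # ...and more from the bottom pole
          return i_top, i_bottom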

  15. IBM PC enhances the world's future

    NASA Technical Reports Server (NTRS)

    Cox, Jozelle

    1988-01-01

    Although the purpose of this research is to illustrate the importance of computers to the public, particularly the IBM PC, present examinations will include computers developed before the IBM PC was brought into use. IBM, as well as other computing facilities, began serving the public years ago and is continuing to find ways to enhance the existence of man. With new developments in supercomputers like the Cray-2, and the recent advances in artificial intelligence programming, the human race is gaining knowledge at a rapid pace. All have benefited from the development of computers in the world; not only have they brought new assets to life, but they have made life more and more of a challenge every day.

  16. Jargon that Computes: Today's PC Terminology.

    ERIC Educational Resources Information Center

    Crawford, Walt

    1997-01-01

    Discusses PC (personal computer) and telecommunications terminology in context: Integrated Services Digital Network (ISDN); Asymmetric Digital Subscriber Line (ADSL); cable modems; satellite downloads; T1 and T3 lines; magnitudes ("giga-,""nano-"); Central Processing Unit (CPU); Random Access Memory (RAM); Universal Serial Bus…

  17. IBM PC/IX operating system evaluation plan

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Granier, Martin; Hall, Philip P.; Triantafyllopoulos, Spiros

    1984-01-01

    An evaluation plan for the IBM PC/IX Operating System designed for IBM PC/XT computers is discussed. The evaluation plan covers the areas of performance measurement and evaluation, software facilities available, man-machine interface considerations, networking, and the suitability of PC/IX as a development environment within the University of Southwestern Louisiana NASA PC Research and Development project. In order to compare and evaluate the PC/IX system, comparisons with other available UNIX-based systems are also included.

  18. 45 CFR Appendix B to Part 96 - SSBG Reporting Form and Instructions

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... any data must appear elsewhere in the annual report. Report Submission Using PC Diskettes States with personal computer (PC) equipment may submit this data using PC diskettes in addition to the hardcopy form...

  19. 45 CFR Appendix B to Part 96 - SSBG Reporting Form and Instructions

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... any data must appear elsewhere in the annual report. Report Submission Using PC Diskettes States with personal computer (PC) equipment may submit this data using PC diskettes in addition to the hardcopy form...

  20. Two schemes for rapid generation of digital video holograms using PC cluster

    NASA Astrophysics Data System (ADS)

    Park, Hanhoon; Song, Joongseok; Kim, Changseob; Park, Jong-Il

    2017-12-01

    Computer-generated holography (CGH), which is a process of generating digital holograms, is computationally expensive. Recently, several methods/systems of parallelizing the process using graphic processing units (GPUs) have been proposed. Indeed, use of multiple GPUs or a personal computer (PC) cluster (each PC with GPUs) enabled great improvements in the process speed. However, extant literature has less often explored systems involving rapid generation of multiple digital holograms and specialized systems for rapid generation of a digital video hologram. This study proposes a system that uses a PC cluster and is able to more efficiently generate a video hologram. The proposed system is designed to simultaneously generate multiple frames and accelerate the generation by parallelizing the CGH computations across a number of frames, as opposed to separately generating each individual frame while parallelizing the CGH computations within each frame. The proposed system also enables the subprocesses for generating each frame to execute in parallel through multithreading. With these two schemes, the proposed system significantly reduced the data communication time for generating a digital hologram when compared with that of the state-of-the-art system.
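
    The frame-level scheme can be pictured with ordinary multiprocessing: each worker computes a whole hologram frame from its own point cloud, instead of all workers sharing the pixels of one frame. The point-source CGH below and all sizes are toy assumptions, not the authors' GPU system.

      # One CGH frame per worker: parallelism across frames, not within a frame.
      import numpy as np
      from multiprocessing import Pool

      N, PITCH, WL = 256, 8e-6, 532e-9          # SLM pixels, pixel pitch, wavelength
      ix = (np.arange(N) - N/2) * PITCH
      X, Y = np.meshgrid(ix, ix)

      def cgh_frame(points):
          field = np.zeros((N, N), dtype=complex)
          for px, py, pz in points:             # superpose spherical waves
              r = np.sqrt((X - px)**2 + (Y - py)**2 + pz**2)
              field += np.exp(2j * np.pi * r / WL)
          return np.angle(field)                # phase-only hologram

      if __name__ == "__main__":
          frames = [np.random.rand(50, 3) * [1e-3, 1e-3, 0.1] for _ in range(8)]
          with Pool() as pool:
              holograms = pool.map(cgh_frame, frames)   # one frame per worker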

  1. An Automated Parallel Image Registration Technique Based on the Correlation of Wavelet Features

    NASA Technical Reports Server (NTRS)

    LeMoigne, Jacqueline; Campbell, William J.; Cromp, Robert F.; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    With the increasing importance of multiple platform/multiple remote sensing missions, fast and automatic integration of digital data from disparate sources has become critical to the success of these endeavors. Our work utilizes maxima of wavelet coefficients to form the basic features of a correlation-based automatic registration algorithm. Our wavelet-based registration algorithm is tested successfully with data from the National Oceanic and Atmospheric Administration (NOAA) Advanced Very High Resolution Radiometer (AVHRR) and the Landsat/Thematic Mapper (TM), which differ by translation and/or rotation. By the choice of high-frequency wavelet features, this method is similar to an edge-based correlation method, but by exploiting the multi-resolution nature of a wavelet decomposition, our method achieves higher computational speeds for comparable accuracies. This algorithm has been implemented on a Single Instruction Multiple Data (SIMD) massively parallel computer, the MasPar MP-2, as well as on the Cray T3D, the Cray T3E and a Beowulf cluster of Pentium workstations.
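
    The core idea, correlating maps of strong high-frequency wavelet coefficients rather than raw pixels, fits in a few lines. The sketch below uses a single Haar detail band and a brute-force translation search; the threshold and search radius are assumptions, and the real algorithm works coarse-to-fine across wavelet levels.

      # Register by correlating thresholded Haar detail coefficients.
      import numpy as np

      def haar_detail(img):                       # first-level diagonal detail band
          a, b = img[::2, ::2], img[::2, 1::2]
          c, d = img[1::2, ::2], img[1::2, 1::2]
          return np.abs(a - b - c + d) / 2.0

      def feature_map(img, keep=0.05):            # keep the strongest 5% as features
          det = haar_detail(img)
          return (det >= np.quantile(det, 1.0 - keep)).astype(float)

      def best_shift(ref, tgt, radius=4):         # brute-force translation search
          fr, ft = feature_map(ref), feature_map(tgt)
          scores = {(dy, dx): np.sum(fr * np.roll(ft, (dy, dx), axis=(0, 1)))
                    for dy in range(-radius, radius + 1)
                    for dx in range(-radius, radius + 1)}
          return max(scores, key=scores.get)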

  2. Fortran Program for X-Ray Photoelectron Spectroscopy Data Reformatting

    NASA Technical Reports Server (NTRS)

    Abel, Phillip B.

    1989-01-01

    A FORTRAN program has been written for use on an IBM PC/XT or AT or compatible microcomputer (personal computer, PC) that converts a column of ASCII-format numbers into a binary-format file suitable for interactive analysis on a Digital Equipment Corporation (DEC) computer running the VGS-5000 Enhanced Data Processing (EDP) software package. The incompatible floating-point number representations of the two computers were compared, and a subroutine was created to correctly store floating-point numbers on the IBM PC that can be directly read by the DEC computer. Any file transfer protocol having provision for binary data can be used to transmit the resulting file from the PC to the DEC machine. The data file header required by the EDP programs for an x-ray photoelectron spectrum is also written to the file. The user is prompted for the relevant experimental parameters, which are then properly coded into the format used internally by all of the VGS-5000 series EDP packages.

  3. A general formula for computing maximum proportion correct scores in various psychophysical paradigms with arbitrary probability distributions of stimulus observations.

    PubMed

    Dai, Huanping; Micheyl, Christophe

    2015-05-01

    Proportion correct (Pc) is a fundamental measure of task performance in psychophysics. The maximum Pc score that can be achieved by an optimal (maximum-likelihood) observer in a given task is of both theoretical and practical importance, because it sets an upper limit on human performance. Within the framework of signal detection theory, analytical solutions for computing the maximum Pc score have been established for several common experimental paradigms under the assumption of Gaussian additive internal noise. However, as the scope of applications of psychophysical signal detection theory expands, the need is growing for psychophysicists to compute maximum Pc scores for situations involving non-Gaussian (internal or stimulus-induced) noise. In this article, we provide a general formula for computing the maximum Pc in various psychophysical experimental paradigms for arbitrary probability distributions of sensory activity. Moreover, easy-to-use MATLAB code implementing the formula is provided. Practical applications of the formula are illustrated, and its accuracy is evaluated, for two paradigms and two types of probability distributions (uniform and Gaussian). The results demonstrate that Pc scores computed using the formula remain accurate even for continuous probability distributions, as long as the conversion from continuous probability density functions to discrete probability mass functions is supported by a sufficiently high sampling resolution. We hope that the exposition in this article, and the freely available MATLAB code, facilitates calculations of maximum performance for a wider range of experimental situations, as well as explorations of the impact of different assumptions concerning internal-noise distributions on maximum performance in psychophysical experiments.
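
    The quantity in question is easy to check numerically: for a 2AFC task, the optimal observer's maximum Pc is the probability that the likelihood ratio of the signal-interval observation exceeds that of the noise-interval observation. The Monte Carlo sketch below (in Python, not the authors' MATLAB code) verifies this for Gaussian densities, where the known answer is Phi(d'/sqrt(2)).

      # Monte Carlo maximum Pc for 2AFC with a likelihood-ratio observer.
      import numpy as np

      rng = np.random.default_rng(0)

      def norm_pdf(x, mu, sd):
          return np.exp(-0.5*((x - mu)/sd)**2) / (sd*np.sqrt(2*np.pi))

      def max_pc_2afc(n=200_000):
          xs = rng.normal(1.0, 1.0, n)                    # signal-interval draws
          xn = rng.normal(0.0, 1.0, n)                    # noise-interval draws
          lr = lambda x: norm_pdf(x, 1.0, 1.0) / norm_pdf(x, 0.0, 1.0)
          return np.mean(lr(xs) > lr(xn)) + 0.5*np.mean(lr(xs) == lr(xn))

      print(max_pc_2afc())   # ~0.76 for d' = 1, i.e. Phi(1/sqrt(2))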

  4. Analysis of 100Mb/s Ethernet for the Whitney Commodity Computing Testbed

    NASA Technical Reports Server (NTRS)

    Fineberg, Samuel A.; Pedretti, Kevin T.; Kutler, Paul (Technical Monitor)

    1997-01-01

    We evaluate the performance of a Fast Ethernet network configured with a single large switch, a single hub, and a 4x4 2D torus topology in a testbed cluster of "commodity" Pentium Pro PCs. We also evaluate a mixed network composed of Ethernet hubs and switches. An MPI collective communication benchmark and the NAS Parallel Benchmarks version 2.2 (NPB2) show that the torus network performs best for all sizes that we were able to test (up to 16 nodes). For larger networks the Ethernet switch outperforms the hub, though its performance is far less than peak. The hub/switch combination tests indicate that the NAS Parallel Benchmarks are relatively insensitive to hub densities of less than 7 nodes per hub.

  5. LORES: Low resolution shape program for the calculation of small angle scattering profiles for biological macromolecules in solution

    NASA Astrophysics Data System (ADS)

    Zhou, J.; Deyhim, A.; Krueger, S.; Gregurick, S. K.

    2005-08-01

    A program for determining the low resolution shape of biological macromolecules, based on the optimization of a small angle neutron scattering profile to experimental data, is presented. This program, termed LORES, relies on a Monte Carlo optimization procedure and allows for multiple scattering length densities of complex structures. It is therefore more versatile than a form factor approach for producing low resolution structural models. LORES is easy to compile and use, and allows for structural modeling of biological samples in real time. To illustrate the effectiveness and versatility of the program, we present four specific biological examples: Apoferritin (shell model), Ribonuclease S (ellipsoidal model), a 10-mer dsDNA (duplex helix) and a construct of a 10-mer DNA/PNA duplex helix (heterogeneous structure). These examples are taken from protein and nucleic acid SANS studies of both large and small scale structures. We find, in general, that our program will accurately reproduce the geometric shape of a given macromolecule when compared with the known crystallographic structures. We also present results to illustrate the lower limit of the experimental resolution which the LORES program is capable of modeling. Program summary: Title of program: LORES Catalogue identifier: ADVC Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVC Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Computer: SGI Origin200, SGI Octane, SGI Linux, Intel Pentium PC Operating systems: UNIX64 6.5 and LINUX 2.4.7 Programming language used: C Memory required to execute with typical data: 8 MB No. of lines in distributed program, including test data, etc.: 2270 No. of bytes in distributed program, including test data, etc.: 13 302 Distribution format: tar.gz External subprograms used: The entire code must be linked with the MATH library
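
    The forward model that such a Monte Carlo shape search must evaluate repeatedly can be written compactly with the Debye formula, I(q) = sum_ij b_i b_j sin(q r_ij)/(q r_ij). The bead model below is an illustrative stand-in for LORES's internals, with invented coordinates and scattering lengths.

      # Debye-formula scattering profile of a bead model.
      import numpy as np

      def debye(q, coords, b):
          d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
          qr = q[:, None, None] * d[None, :, :]
          s = np.where(qr > 0, np.sin(qr) / np.where(qr > 0, qr, 1.0), 1.0)
          return np.einsum('i,j,qij->q', b, b, s)

      rng = np.random.default_rng(1)
      coords = rng.normal(size=(200, 3)) * 20.0   # beads filling a blob (angstroms)
      b = np.ones(200)                            # one scattering length per bead
      q = np.linspace(1e-3, 0.3, 100)             # inverse angstroms
      profile = debye(q, coords, b)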

  6. A PC-Based Controller for Dextrous Arms

    NASA Technical Reports Server (NTRS)

    Fiorini, Paolo; Seraji, Homayoun; Long, Mark

    1996-01-01

    This paper describes the architecture and performance of a PC-based controller for 7-DOF dextrous manipulators. The computing platform is a 486-based personal computer equipped with a bus extender to access the robot Multibus controller, together with a single board computer as the graphical engine, and with a parallel I/O board to interface with a force-torque sensor mounted on the manipulator wrist.

  7. CARE+ user study: usability and attitudes towards a tablet pc computer counseling tool for HIV+ men and women.

    PubMed

    Skeels, Meredith M; Kurth, Ann; Clausen, Marc; Severynen, Anneleen; Garcia-Smith, Hal

    2006-01-01

    CARE+ is a tablet PC-based computer counseling tool designed to support medication adherence and secondary HIV prevention for people living with HIV. Thirty HIV+ men and women participated in our user study to assess usability and attitudes towards CARE+. We observed them using CARE+ for the first time and conducted a semi-structured interview afterwards. Our findings suggest computer counseling may reduce social bias and encourage participants to answer questions honestly. Participants felt that discussing sensitive subjects with a computer instead of a person reduced feelings of embarrassment and being judged, and promoted privacy. Results also confirm that potential users think computers can provide helpful counseling, and that many also want human counseling interaction. Our study also revealed that tablet PC-based applications are usable by our population of mixed experience computer users. Computer counseling holds great potential for providing assessment and health promotion to individuals with chronic conditions such as HIV.

  8. The use of PC based VR in clinical medicine: the VREPAR projects.

    PubMed

    Riva, G; Bacchetta, M; Baruffi, M; Borgomainerio, E; Defrance, C; Gatti, F; Galimberti, C; Fontaneto, S; Marchi, S; Molinari, E; Nugues, P; Rinaldi, S; Rovetta, A; Ferretti, G S; Tonci, A; Wann, J; Vincelli, F

    1999-01-01

    Virtual reality (VR) is an emerging technology that alters the way individuals interact with computers: a 3D computer-generated environment in which a person can move about and interact as if he actually were inside it. Given the high computational power required to create virtual environments, these are usually developed on expensive high-end workstations. However, the significant advances in PC hardware made over the last three years are making PC-based VR a possible solution for clinical assessment and therapy. VREPAR - Virtual Reality Environments for Psychoneurophysiological Assessment and Rehabilitation - comprises two European Community funded projects (Telematics for health - HC 1053/HC 1055 - http://www.psicologia.net) that are trying to develop a modular PC-based virtual reality system for the medical market. The paper describes the rationale of the developed modules and the preliminary results obtained.

  9. Code OK3 - An upgraded version of OK2 with beam wobbling function

    NASA Astrophysics Data System (ADS)

    Ogoyski, A. I.; Kawata, S.; Popov, P. H.

    2010-07-01

    For computer simulations of heavy ion beam (HIB) irradiation onto a target with an arbitrary shape and structure in heavy ion fusion (HIF), the code OK2 was developed and presented in Computer Physics Communications 161 (2004). Code OK3 is an upgrade of OK2 that adds an important capability: wobbling-beam illumination. The wobbling beam offers a unique possibility for a smooth mechanism of inertial fusion target implosion, so that sufficient fusion energy can be released to operate a future fusion reactor. New version program summary: Program title: OK3 Catalogue identifier: ADST_v3_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADST_v3_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 221 517 No. of bytes in distributed program, including test data, etc.: 2 471 015 Distribution format: tar.gz Programming language: C++ Computer: PC (Pentium 4, 1 GHz or more recommended) Operating system: Windows or UNIX RAM: 2048 MBytes Classification: 19.7 Catalogue identifier of previous version: ADST_v2_0 Journal reference of previous version: Comput. Phys. Comm. 161 (2004) 143 Does the new version supersede the previous version?: Yes Nature of problem: In heavy ion fusion (HIF), ion cancer therapy, material processing, etc., precise beam energy deposition is essential [1]. Codes OK1 and OK2 were developed to simulate heavy ion beam energy deposition in three-dimensional, arbitrarily shaped targets [2,3]. Wobbling-beam illumination is important for smoothing the beam energy deposition nonuniformity in HIF, so that a uniform target implosion is realized and a sufficient fusion output energy is released. Solution method: The OK3 code builds on OK1 and OK2 [2,3]. The code simulates a multi-beam illumination on a target with arbitrary shape and structure, including the beam wobbling function. Reasons for new version: The code OK3 is based on OK2 [3] and uses the same algorithm with some improvements, the most important of which is the beam wobbling function. Summary of revisions: In the code OK3, beams are subdivided into many bunches. The displacement of each bunch center from the initial beam direction is calculated. Code OK3 allows the beamlet number to vary from bunch to bunch, which reduces the calculation error, especially for very complicated mesh structures with large internal holes. The target temperature rises during the energy deposition. Some procedures have been improved to perform faster. Energy conservation is checked at each step of the calculation and corrected if necessary. New procedures included in OK3: Procedure BeamCenterRot( ) rotates the beam axis around the impinging direction of each beam. Procedure BeamletRot( ) rotates the beamlet axes that belong to each beam. Procedure Rotation( ) sets the coordinates of rotated beams and beamlets in the chamber and pellet systems. Procedure BeamletOut( ) calculates the lost energy of ions that have not impinged on the target. Procedure TargetT( ) sets the temperature of the target layer of energy deposition during the irradiation process. Procedure ECL( ) checks the energy conservation law at each step of the energy deposition process. Procedure ECLt( ) performs the final check of the energy conservation law at the end of the deposition process.
    Modified procedures in OK3: Procedure InitBeam( ) initializes the beam radius and the coefficients A1, A2, A3, A4 and A5 for Gauss-distributed beams [2]. It is enlarged in OK3 and can set beams with radii from 1 to 20 mm. Procedure kBunch( ) is modified to allow the beamlet number to vary from bunch to bunch during the deposition. Procedure ijkSp( ) and procedure Hole( ) are modified to perform faster. Procedure Espl( ) and procedure ChechE( ) are modified to increase the calculation accuracy. Procedure SD( ) calculates the total relative root-mean-square (RMS) deviation and the total relative peak-to-valley (PTV) deviation of the energy deposition non-uniformity. This procedure was not included in code OK2 because of its limited applicability (spherical targets only); it is taken from code OK1 and modified to work with code OK3. Running time: The execution time depends on the pellet mesh number and the number of beams in the simulated illumination, as well as on the beam characteristics (beam radius on the pellet surface, beam subdivision, projectile particle energy, and so on). In almost all of the practical running tests performed, the typical running time for one beam deposition is about 30 s on a PC with a Pentium 4, 2.4 GHz CPU. References: [1] A.I. Ogoyski, et al., Heavy ion beam irradiation non-uniformity in inertial fusion, Phys. Lett. A 315 (2003) 372-377. [2] A.I. Ogoyski, et al., Code OK1 - Simulation of multi-beam irradiation on a spherical target in heavy ion fusion, Comput. Phys. Comm. 157 (2004) 160-172. [3] A.I. Ogoyski, et al., Code OK2 - A simulation code of ion-beam illumination on an arbitrary shape and structure target, Comput. Phys. Comm. 161 (2004) 143-150.
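
    The bookkeeping behind the new wobbling capability can be pictured simply: each bunch of a beam is displaced from the nominal beam axis according to the wobble phase at its arrival time. The circular wobble, radius, and rotation frequency below are illustrative assumptions, not values from OK3.

      # Bunch-center offsets for a circularly wobbling beam.
      import numpy as np

      def bunch_offsets(n_bunches, t_pulse, r_wobble=1.0e-3, f_rot=1.0e6):
          t = np.linspace(0.0, t_pulse, n_bunches)    # bunch arrival times (s)
          phi = 2.0 * np.pi * f_rot * t               # wobble phase per bunch
          return r_wobble * np.cos(phi), r_wobble * np.sin(phi)

      dx, dy = bunch_offsets(n_bunches=100, t_pulse=1.0e-5)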

  10. METLIN-PC: An applications-program package for problems of mathematical programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pshenichnyi, B.N.; Sobolenko, L.A.; Sosnovskii, A.A.

    1994-05-01

    The METLIN-PC applications-program package (APP) was developed at the V.M. Glushkov Institute of Cybernetics of the Academy of Sciences of Ukraine on IBM PC XT and AT computers. The present version of the package was written in Turbo Pascal and Fortran-77. METLIN-PC is chiefly designed for the solution of smooth problems of mathematical programming and is a further development of the METLIN prototype, which was created earlier on a BESM-6 computer. The principal property of the previous package is retained: the applications modules employ a single approach based on the linearization method of B.N. Pshenichnyi. Hence the name "METLIN."

  11. [MapDraw: a microsoft excel macro for drawing genetic linkage maps based on given genetic linkage data].

    PubMed

    Liu, Ren-Hu; Meng, Jin-Ling

    2003-05-01

    MAPMAKER is one of the most widely used computer software packages for constructing genetic linkage maps. However, the PC version, MAPMAKER 3.0 for PC, cannot draw the genetic linkage maps that its Macintosh version, MAPMAKER 3.0 for Macintosh, is able to draw. Especially in recent years, the Macintosh computer has become much less popular than the PC, and most geneticists use PCs to analyze their genetic linkage data, so new software that draws on a PC the same genetic linkage maps that MAPMAKER for Macintosh draws on a Macintosh has long been needed. Microsoft Excel, one component of the Microsoft Office package, is one of the most popular programs for laboratory data processing, and Microsoft Visual Basic for Applications (VBA) is one of its most powerful features. Using this programming language, we can take creative control of Excel, including genetic linkage map construction, automatic data processing, and more. In this paper, a Microsoft Excel macro called MapDraw is constructed to draw genetic linkage maps on a PC based on given genetic linkage data. Using this software, you can construct a genetic linkage map in Excel and freely edit and copy it to Word or other applications. The software is simply an Excel-format file. It can be copied from ftp://211.69.140.177 or ftp://brassica.hzau.edu.cn, and the source code can be found in Excel's Visual Basic Editor.
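
    The drawing task itself is straightforward, which is what makes it a good macro target. A rough Python equivalent of the output (not the VBA source) is sketched below, with toy marker data.

      # Draw one linkage group: a bar with marker names at their cM positions.
      import matplotlib.pyplot as plt

      markers = [("m1", 0.0), ("m2", 12.3), ("m3", 27.8), ("m4", 41.5)]  # toy data

      fig, ax = plt.subplots(figsize=(2, 6))
      top = max(pos for _, pos in markers)
      ax.plot([0, 0], [0, top], lw=6, color="0.6")         # the linkage group
      for name, pos in markers:
          ax.plot([-0.05, 0.05], [pos, pos], color="k")    # tick at each locus
          ax.text(0.1, pos, f"{name}  {pos:.1f} cM", va="center")
      ax.invert_yaxis(); ax.axis("off")
      fig.savefig("linkage_group.png", dpi=150)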

  12. PC-CUBE: A Personal Computer Based Hypercube

    NASA Technical Reports Server (NTRS)

    Ho, Alex; Fox, Geoffrey; Walker, David; Snyder, Scott; Chang, Douglas; Chen, Stanley; Breaden, Matt; Cole, Terry

    1988-01-01

    PC-CUBE is an ensemble of IBM PCs or close compatibles connected in the hypercube topology with ordinary computer cables. Communication occurs at the rate of 115.2 kbaud via the RS-232 serial links. Available for PC-CUBE are the Crystalline Operating System III (CrOS III), the Mercury Operating System, and CUBIX and PLOTIX, which are parallel I/O and graphics libraries. A CrOS performance monitor was developed to facilitate the measurement of the communication and computation time of a program and their effects on performance. Also available are CXLISP, a parallel version of the XLISP interpreter; GRAFIX, some graphics routines for the EGA and CGA; and a general execution profiler for determining the execution time spent in program subroutines. PC-CUBE provides a programming environment similar to all hypercube systems running CrOS III, Mercury and CUBIX. In addition, every node (personal computer) has its own graphics display monitor and storage devices. These allow data to be displayed or stored at every processor, which has much instructional value and enables easier debugging of applications. Some application programs taken from the book Solving Problems on Concurrent Processors (Fox 88) were implemented with graphics enhancement on PC-CUBE. The applications range from the Mandelbrot set, the Laplace equation, the wave equation, and long-range force interactions to WaTor, an ecological simulation.
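
    The hypercube wiring rule itself is one line: node IDs are d-bit numbers, and two nodes are cabled together exactly when their IDs differ in one bit, so each of the 2^d PCs needs d serial links.

      # Hypercube neighbors: flip each of the d address bits in turn.
      def neighbors(node, d):
          return [node ^ (1 << bit) for bit in range(d)]

      for n in range(8):              # a 3-cube of eight PCs
          print(n, neighbors(n, 3))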

  13. Evaluation of Manual Spelling, Observational and Incidental Learning Using Computer-Based Instruction with a Tablet PC, Large Screen Projection, and a Forward Chaining Procedure

    ERIC Educational Resources Information Center

    Purrazzella, Kimberly; Mechling, Linda C.

    2013-01-01

    The study employed a multiple probe design to investigate the effects of computer-based instruction (CBI) and a forward chaining procedure to teach manual spelling of words to three young adults with moderate intellectual disability in a small group arrangement. The computer-based program included a tablet PC whereby students wrote words directly…

  14. Toward information management in corporations (4)

    NASA Astrophysics Data System (ADS)

    Yamamoto, Takeo

    The roles of personal computers (PC's) and workstations (WS's) in developing the corporate information system are discussed. The history and state of the art of PC's and WS's are reviewed. Checkpoints for introducing PC's and WS's are: Japanese word-processing capabilities, multimedia capabilities, and network capabilities.

  15. Modelling copper-phthalocyanine/cobalt-phthalocyanine chains: towards magnetic quantum metamaterials.

    PubMed

    Wu, Wei

    2014-07-23

    The magnetic properties of a theoretically designed molecular chain structure, CuCoPc2, in which copper-phthalocyanine (CuPc) and cobalt-phthalocyanine (CoPc) alternate, have been investigated across a range of chain structures. The computed exchange interaction for the α-phase CuCoPc2 is ∼5 K (ferromagnetic), in strong contrast to the anti-ferromagnetic interaction recently observed in CuPc and CoPc. The computed exchange interactions depend strongly on the stacking angle but only weakly on the sliding angle, and peak at 20 K (ferromagnetic). These ferromagnetic interactions are expected to arise from direct exchange with strong suppression of the super-exchange interaction. These first-principles calculations show that π-conjugated molecules, such as phthalocyanines, could be used as building blocks for the design of magnetic materials. This therefore extends the concept of quantum metamaterials further into magnetism. The resulting new magnetic materials could find applications in fields such as organic spintronics.

  16. USER'S GUIDE TO THE PERSONAL COMPUTER VERSION OF THE BIOGENIC EMISSIONS INVENTORY SYSTEM (PC-BEIS2)

    EPA Science Inventory

    The document is a user's guide for an updated Personal Computer version of the Biogenic Emissions Inventory System (PC-BEIS2), allowing users to estimate hourly emissions of biogenic volatile organic compounds (BVOCs) and soil nitrogen oxide emissions for any county in the contiguous United States.

  17. Expert System Enhancement to the Resource Allocation Modules of the NCS Emergency Preparedness Management Information System (EPMIS)

    DTIC Science & Technology

    1987-01-01

    ...modeled after the MYCIN expert system. Host Computer: PC+ is available on both symbolic and numeric computers. It operates on the IBM PC AT and the TI Business-Pro (IBM PC compatible)... Suppose that the data base contains 100 motors, and in only one case does a lightweight motor produce more power than heavier units... every decision point takes time... ART 2.0; in the bargain it consumes 10 times less storage. ART 3.0 reduces the comparison...

  18. CD-ROM source data uploaded to the operating and storage devices of an IBM 3090 mainframe through a PC terminal.

    PubMed

    Boros, L G; Lepow, C; Ruland, F; Starbuck, V; Jones, S; Flancbaum, L; Townsend, M C

    1992-07-01

    A powerful method of processing MEDLINE and CINAHL source data uploaded to the IBM 3090 mainframe computer through an IBM/PC is described. Data are first downloaded from the CD-ROM's PC devices to floppy disks. These disks are then uploaded to the mainframe computer through an IBM/PC equipped with the WordPerfect text editor and a computer network connection (SONNGATE). Before downloading, keywords specifying the information to be accessed are typed at the FIND prompt of the CD-ROM station. The resulting abstracts are downloaded into a file called DOWNLOAD.DOC. The floppy disks containing the information are simply carried to an IBM/PC which has a terminal emulation (TELNET) connection to the university-wide computer network (SONNET) at the Ohio State University Academic Computing Services (OSU ACS). WordPerfect (5.1) processes and saves the text into DOS format. Using the File Transfer Protocol (FTP, 130,000 bytes/s) of SONNET, the entire text containing the information obtained through the MEDLINE and CINAHL search is transferred to the remote mainframe computer for further processing. At this point, abstracts in the specified area are ready for immediate access and multiple retrieval by any PC having a network switch or dial-in connection, after the USER ID, PASSWORD and ACCOUNT NUMBER are specified by the user. The system provides the user an on-line, very powerful and quick method of searching for words specifying diseases, agents, experimental methods, animals, authors, and journals in the research area downloaded. The user can also copy the TItles, AUthors and SOurce, with optional parts of abstracts, into papers being edited. This arrangement serves the special demands of a research laboratory by handling MEDLINE and CINAHL source data resulting after a search is performed with keywords specified for ongoing projects. Since the Ohio State University has a centrally funded mainframe system, the data upload, storage and mainframe operations are free.

  19. A robust and efficient stepwise regression method for building sparse polynomial chaos expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abraham, Simon, E-mail: Simon.Abraham@ulb.ac.be; Raisee, Mehrdad; Ghorbaniasl, Ghader

    2017-03-01

    Polynomial Chaos (PC) expansions are widely used in various engineering fields for quantifying uncertainties arising from uncertain parameters. The computational cost of classical PC solution schemes is unaffordable, as the number of deterministic simulations to be calculated grows dramatically with the number of stochastic dimensions. This considerably restricts the practical use of PC at the industrial level. A common approach to address such problems is to make use of sparse PC expansions. This paper presents a non-intrusive regression-based method for building sparse PC expansions. The most important PC contributions are detected sequentially through an automatic search procedure. The variable selection criterion is based on efficient tools relevant to probabilistic methods. Two benchmark analytical functions are used to validate the proposed algorithm. The computational efficiency of the method is then illustrated by a more realistic CFD application, consisting of the non-deterministic flow around a transonic airfoil subject to geometrical uncertainties. To assess the performance of the developed methodology, a detailed comparison is made with the well-established LAR-based selection technique. The results show that the developed sparse regression technique is able to identify the most significant PC contributions describing the problem. Moreover, the most important stochastic features are captured at a reduced computational cost compared to the LAR method. The results also demonstrate the superior robustness of the method, as shown by repeating the analyses using random experimental designs.
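
    A bare-bones version of forward-stepwise sparse selection conveys the idea, though the paper's probabilistic selection criterion is not reproduced here: candidate Hermite terms are added one at a time as long as they keep reducing the residual error. All data and thresholds below are invented for illustration.

      # Forward-stepwise selection of Hermite (polynomial chaos) terms, 1-D case.
      import numpy as np
      from numpy.polynomial.hermite_e import hermeval

      rng = np.random.default_rng(2)
      x = rng.normal(size=200)                        # one standard-normal input
      y = x**3 - 2*x + 0.05*rng.normal(size=200)      # model response

      def basis(x, degree):                           # probabilists' Hermite He_d
          c = np.zeros(degree + 1); c[degree] = 1.0
          return hermeval(x, c)

      candidates, chosen, best_err = list(range(9)), [], np.inf
      while candidates:
          errs = {}
          for d in candidates:
              cols = np.column_stack([basis(x, k) for k in chosen + [d]])
              coef, *_ = np.linalg.lstsq(cols, y, rcond=None)
              errs[d] = np.mean((y - cols @ coef)**2)
          d = min(errs, key=errs.get)
          if errs[d] >= 0.99 * best_err:              # stop when the gain stalls
              break
          best_err = errs[d]; chosen.append(d); candidates.remove(d)

      print("selected Hermite degrees:", sorted(chosen))   # expect 1 and 3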

  20. 39 CFR 501.1 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... limited to postage meters and PC Postage systems. (b) A postage meter is a Postal Service-approved Postage... this part refers to a postage meter. (c) PC Postage products are Postal Service-approved Postage Evidencing Systems that use a personal computer as an integral part of the system. PC Postage products may...

  1. PC Vendor Viability, or Whatever Happened to HiTech International?

    ERIC Educational Resources Information Center

    Crawford, Walt

    1993-01-01

    Reports on the feasibility of vendors of IBM PC compatible computers based on issues of "PC Magazine" from September 1985 to the present. Results by year are given in tabular and text form. Implications of orphan systems for libraries, advertising problems, and predictors of success are discussed. (EAM)

  2. A Fine-Grained Pipelined Implementation for Large-Scale Matrix Inversion on FPGA

    NASA Astrophysics Data System (ADS)

    Zhou, Jie; Dou, Yong; Zhao, Jianxun; Xia, Fei; Lei, Yuanwu; Tang, Yuxing

    Large-scale matrix inversion plays an important role in many applications; however, to the best of our knowledge, there is no FPGA-based implementation. In this paper, we explore the possibility of accelerating large-scale matrix inversion on FPGA. To exploit the computational potential of the FPGA, we introduce a fine-grained parallel algorithm for matrix inversion. A scalable linear array of processing elements (PEs), which is the core component of the FPGA accelerator, is proposed to implement this algorithm. A total of 12 PEs can be integrated into an Altera StratixII EP2S130F1020C5 FPGA on our self-designed board. Experimental results show that a speedup factor of 2.6 and a maximum power-performance ratio of 41 can be achieved compared with a Pentium Dual CPU with double SSE threads.
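
    To see what "fine-grained" means here, consider Gauss-Jordan elimination: once the pivot row is normalized, the update of every other row is independent, which is exactly the kind of work a linear array of PEs can pipeline. The sketch below is a plain sequential version of that dependency structure (no pivoting), not the FPGA design itself.

      # Row-oriented Gauss-Jordan inversion; inner row updates are independent.
      import numpy as np

      def invert(a):
          n = a.shape[0]
          m = np.hstack([a.astype(float), np.eye(n)])   # augmented [A | I]
          for k in range(n):
              m[k] /= m[k, k]                           # normalize pivot row
              for i in range(n):                        # independent row updates
                  if i != k:
                      m[i] -= m[i, k] * m[k]
          return m[:, n:]

      a = np.random.rand(8, 8) + 8*np.eye(8)            # well-conditioned test
      assert np.allclose(invert(a) @ a, np.eye(8), atol=1e-8)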

  3. Efficacy of Code Optimization on Cache-Based Processors

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Saphir, William C.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    In this paper a number of techniques for improving the cache performance of a representative piece of numerical software are presented. Target machines are popular processors from several vendors: MIPS R5000 (SGI Indy), MIPS R8000 (SGI PowerChallenge), MIPS R10000 (SGI Origin), DEC Alpha EV4 + EV5 (Cray T3D & T3E), IBM RS6000 (SP Wide-node), Intel PentiumPro (Ames' Whitney), Sun UltraSparc (NERSC's NOW). The optimizations all attempt to increase the locality of memory accesses, but they meet with rather varied and often counterintuitive success on the different computing platforms. We conclude that it may be genuinely impossible to obtain portable performance on the current generation of cache-based machines. At the least, it appears that the performance of modern commodity processors cannot be described with parameters defining the cache alone.
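
    A concrete example of the kind of locality transformation meant here is loop tiling: process a large array in cache-sized blocks instead of striding across whole rows. The sketch below applies the idea to a matrix transpose; the tile size is a tunable assumption, which is precisely the platform dependence the paper observes.

      # Blocked (tiled) transpose: touch memory one cache-sized tile at a time.
      import numpy as np

      def transpose_tiled(a, tile=64):
          n, m = a.shape
          out = np.empty((m, n), dtype=a.dtype)
          for i in range(0, n, tile):
              for j in range(0, m, tile):
                  out[j:j+tile, i:i+tile] = a[i:i+tile, j:j+tile].T
          return out

      a = np.arange(1024*1024, dtype=np.float64).reshape(1024, 1024)
      assert (transpose_tiled(a) == a.T).all()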

  4. Digital data, composite video multiplexer and demultiplexer boards for an IBM PC/AT compatible computer

    NASA Technical Reports Server (NTRS)

    Smith, Dean Lance

    1993-01-01

    Work continued on the design of two IBM PC/AT compatible computer interface boards. The boards will permit digital data to be transmitted over a composite video channel from the Orbiter. One board combines data with a composite video signal. The other board strips the data from the video signal.

  5. An Introduction To PC-TRIM.

    Treesearch

    John R. Mills

    1989-01-01

    The timber resource inventory model (TRIM) has been adapted to run on personal computers. The personal computer version of TRIM (PC-TRIM) is more widely used than its mainframe parent. Errors that existed in previous versions of TRIM have been corrected. Information is presented to help users with program input and output management in the DOS environment, to...

  6. Computing Operating Characteristics Of Bearing/Shaft Systems

    NASA Technical Reports Server (NTRS)

    Moore, James D.

    1996-01-01

    The SHABERTH computer program predicts the operating characteristics of bearings in a multibearing load-support system. Lubricated and nonlubricated bearings are modeled. It calculates loads, torques, temperatures, and fatigue lives of ball and/or roller bearings on a single shaft, and provides for analysis of the reaction of the system to termination of the supply of lubricant to bearings and other lubricated mechanical elements. It is valuable in the design and analysis of shaft/bearing systems. Two versions of SHABERTH are available: a Cray version (LEW-14860), "Computing Thermal Performances Of Shafts and Bearings", and an IBM PC version (MFS-28818), written for IBM PC-series and compatible computers running MS-DOS.

  7. Development and Flight Results of a PC104/QNX-Based On-Board Computer and Software for the YES2 Tether Experiment

    NASA Astrophysics Data System (ADS)

    Spiliotopoulos, I.; Mirmont, M.; Kruijff, M.

    2008-08-01

    This paper highlights the flight preparation and mission performance of a PC104-based On-Board Computer for ESA's second Young Engineer's Satellite (YES2), with additional attention to the flight software design and experience with QNX as a multi-process real-time operating system. This combination of Commercial-Off-The-Shelf (COTS) technologies is an accessible option for small satellites with high computational demands.

  8. Using a Tablet PC to Enhance Student Engagement and Learning in an Introductory Organic Chemistry Course

    ERIC Educational Resources Information Center

    Derting, Terry L.; Cox, James R.

    2008-01-01

    Over the past three decades, computer-based technologies have influenced all aspects of chemistry, including chemical education. Pen-based computing applications, such as the tablet PC, have reemerged in the past few years and are providing new ways for educators to deliver content and engage students inside and outside the classroom and…

  9. Operating a Geiger-Muller Tube Using a PC Sound Card

    ERIC Educational Resources Information Center

    Azooz, A. A.

    2009-01-01

    In this paper, a simple MATLAB-based PC program that enables the computer to function as a replacement for the electronic scaler-counter system associated with a Geiger-Muller (GM) tube is described. The program utilizes the ability of MATLAB to acquire data directly from the computer sound card. The signal from the GM tube is applied to the…
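
    The counting step such a sound-card scaler must perform reduces to edge detection on the sampled waveform. The sketch below (in Python rather than MATLAB, with an assumed threshold and sample rate) counts each pulse once at its rising edge.

      # Count GM pulses in an audio buffer by detecting threshold crossings.
      import numpy as np

      FS = 44_100                       # sound-card sample rate (Hz)
      THRESH = 0.3                      # detection threshold (full scale = 1)

      def count_pulses(samples):
          above = np.abs(samples) > THRESH
          rising = above[1:] & ~above[:-1]     # one count per rising edge
          return int(rising.sum())

      samples = 0.02 * np.random.randn(FS)     # one second of synthetic noise
      for t0 in np.random.uniform(0, 1, 37):   # inject 37 simulated pulses
          i = int(t0 * FS)
          samples[i:i+20] += 0.8
      print(count_pulses(samples), "counts/s")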

  10. Shortcomings of low-cost imaging systems for viewing computed radiographs.

    PubMed

    Ricke, J; Hänninen, E L; Zielinski, C; Amthauer, H; Stroszczynski, C; Liebig, T; Wolf, M; Hosten, N

    2000-01-01

    To assess potential advantages of a new PC-based viewing tool featuring image post-processing for viewing computed radiographs on low-cost hardware (a PC with a common display card and color monitor), and to evaluate the effect of using color versus monochrome monitors. Computed radiographs of a statistical phantom were viewed on a PC, with and without post-processing (spatial frequency and contrast processing), employing a monochrome or a color monitor. Findings were compared with viewing on a radiological workstation and evaluated with ROC analysis. Image post-processing improved the perception of low-contrast details significantly, irrespective of the monitor used. No significant difference in perception was observed between monochrome and color monitors. Review at the radiological workstation was superior to review done using the PC with image processing. Lower-quality hardware (graphics card and monitor) used in low-cost PCs negatively affects the perception of low-contrast details in computed radiographs. In this situation, it is highly recommended to use spatial frequency and contrast processing. No significant quality gain was observed for the high-end monochrome monitor compared to the color display. However, the color monitor was affected more strongly by high ambient illumination.

  11. Aircraft noise prediction program propeller analysis system IBM-PC version user's manual version 2.0

    NASA Technical Reports Server (NTRS)

    Nolan, Sandra K.

    1988-01-01

    The IBM-PC version of the Aircraft Noise Prediction Program (ANOPP) Propeller Analysis System (PAS) is a set of computational programs for predicting the aerodynamics, performance, and noise of propellers. The ANOPP-PAS is a subset of a larger version of ANOPP which can be executed on CDC or VAX computers. This manual provides a description of the IBM-PC version of the ANOPP-PAS and its prediction capabilities, and instructions on how to use the system on an IBM-XT or IBM-AT personal computer. Sections within the manual document installation, system design, ANOPP-PAS usage, data entry preprocessors, and ANOPP-PAS functional modules and procedures. Appendices to the manual include a glossary of ANOPP terms and information on error diagnostics and recovery techniques.

  12. Creation Stations.

    ERIC Educational Resources Information Center

    Sauer, Jeff; Murphy, Sam

    1997-01-01

    In this comparison, NewMedia lab looks at 10 Pentium II workstations preconfigured for demanding three dimensional and multimedia work with OpenGL cards and fast Ultra SCSI hard drives. Highlights include costs, tests with Photoshop, technical support, and a sidebar that explains Accelerated Graphics Port. (Author/LRW)

  13. Laboratory process control using natural language commands from a personal computer

    NASA Technical Reports Server (NTRS)

    Will, Herbert A.; Mackin, Michael A.

    1989-01-01

    PC software is described which provides flexible natural language process control capability with an IBM PC or compatible machine. Hardware requirements include the PC, and suitable hardware interfaces to all controlled devices. Software required includes the Microsoft Disk Operating System (MS-DOS) operating system, a PC-based FORTRAN-77 compiler, and user-written device drivers. Instructions for use of the software are given as well as a description of an application of the system.

  14. Mitigation of High Altitude and Low Earth Orbit Radiation Effects on Microelectronics via Shielding or Error Detection and Correction Systems

    NASA Technical Reports Server (NTRS)

    Gupta, Kajal (Technical Monitor); Kirby, Kelvin

    2004-01-01

    The NASA Cooperative Agreement NAG4-210 was granted under the FY2000 Faculty Awards for Research (FAR) Program. The project was proposed to examine the effects of charged particles and neutrons on selected random access memory (RAM) technologies. The concept of the project was to add to the current knowledge of Single Event Effects (SEE) concerning RAM and explore the impact of selected forms of radiation on Error Detection and Correction Systems. The project was established as an extension of a previous FAR awarded to Prairie View A&M University (PVAMU), under the direction of Dr. Richard Wilkins as principal investigator. The NASA sponsored Center for Applied Radiation Research (CARR) at PVAMU developed an electronic test-bed to explore and quantify SEE on RAM from charged particles and neutrons. The test-bed was developed using 486DX microprocessor technology (PC-104) and a custom test board to mount RAM integrated circuits or other electronic devices. The test-bed had two configurations - a bench test version for laboratory experiments and a 400 Hz powered rack version for flight experiments. The objectives of this project were to: 1) Upgrade the Electronic Test-bed (ETB) to a Pentium configuration; 2) Accommodate more than 8 Mbytes of RAM; 3) Explore Error Detection and Correction Systems for radiation effects; and 4) Test modern RAM technologies in radiation environments.

  15. Deciding when It's Time to Buy a New PC

    ERIC Educational Resources Information Center

    Goldsborough, Reid

    2004-01-01

    How to best decide when it's time to replace your PC, whether at home or at work, is always tricky. Spending on computers can make you more productive, but it's money you otherwise cannot spend, invest or save, and faster systems always await you in the future. What is clear is that the computer industry really wants you to buy, and the computer…

  16. Combating adverse selection in secondary PC markets.

    PubMed

    Hickey, Stewart W; Fitzpatrick, Colin

    2008-04-15

    Adverse selection is a significant contributor to market failure in secondary personal computer (PC) markets. Signaling can act as a potential solution to adverse selection and facilitate superior remarketing of second-hand PCs. Signaling is a means whereby usage information can be utilized to enhance consumer perception of both the value and the utility of used PCs and, therefore, promote lifetime extension for these systems. This can help mitigate a large portion of the environmental impact associated with PC system manufacture. In this paper, the computer buying and selling behavior of consumers is characterized via a survey of 270 Irish residential users. Results confirm the existence of adverse selection in the Irish market, with 76% of potential buyers being unwilling to purchase and 45% of potential vendors being unwilling to sell a used PC. The so-called "closet effect" is also apparent, with 78% of users storing their PC after use has ceased. Results also indicate that consumers place a higher emphasis on specifications when considering a second-hand purchase. This contradicts their actual application needs, which are predominantly Internet (88%) and word-processing/spreadsheet/presentation (60%) applications. Finally, a market solution utilizing self-monitoring and reporting technology (SMART) sensors for real-time usage monitoring is proposed that can change consumer attitudes with regard to second-hand computer equipment.

  17. Computation of Nonretarded London Dispersion Coefficients and Hamaker Constants of Copper Phthalocyanine.

    PubMed

    Zhao, Yan; Ng, Hou T; Hanson, Eric; Dong, Jiannan; Corti, David S; Franses, Elias I

    2010-02-09

    A time-dependent density functional theory (TDDFT) scheme has been validated for predictions of the dispersion coefficients of five molecules (H2O, NH3, CO2, C6H6, and pentane) and for predictions of the static dipole polarizabilities of three organometallic compounds (TiCl4, OsO4, and Ge(CH3)4). The convergence of grid spacing has been examined, and two types of pseudopotentials and 13 density functionals have been tested. The nonretarded Hamaker constants A11 are calculated by employing a semiempirical parameter a along with the standard Hamaker constant equation. The parameter a is optimized against six accurate Hamaker constants obtained from the full Lifshitz theory. The dispersion coefficients of copper phthalocyanine CuPc and CuPc-SO3H are then computed. Using the theoretical densities of ρ1 = 1.63 and 1.62 g/cm³, the Hamaker constants A11 of crystalline α-CuPc and β-CuPc are found to be 14.73 × 10⁻²⁰ and 14.66 × 10⁻²⁰ J, respectively. Using the experimentally derived density of ρ1 = 1.56 g/cm³ for a commercially available β-CuPc (nanoparticles of ∼90 nm hydrodynamic diameter), A11 = 13.52 × 10⁻²⁰ J is found. Its corresponding effective Hamaker constant in water (A121) is calculated to be 3.07 × 10⁻²⁰ J. All computed A11 values for CuPc are noted to be higher than those reported previously.
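
    The effective constant in water can be reproduced with the standard combining relation A121 ≈ (√A11 − √A22)². A minimal check in Python, assuming the commonly cited value of about 3.7 × 10⁻²⁰ J for the Hamaker constant of water:

      import math

      A11 = 13.52e-20   # beta-CuPc in vacuum (J), from the abstract
      A22 = 3.7e-20     # water (J) -- assumed, commonly cited literature value

      # Combining relation for two bodies of material 1 interacting across medium 2:
      # A_121 ~ (sqrt(A11) - sqrt(A22))**2
      A121 = (math.sqrt(A11) - math.sqrt(A22)) ** 2
      print(f"A121 = {A121:.2e} J")   # -> 3.07e-20 J, matching the reported value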

  18. [A skin cell segregating control system based on PC].

    PubMed

    Liu, Wen-zhong; Zhou, Ming; Zhang, Hong-bing

    2005-11-01

    A skin cell segregating control system based on a PC (personal computer) is presented in this paper. Its front controller is a single-chip microcomputer that enables the treatment of 6 patients simultaneously, providing great convenience in the clinical treatment of vitiligo. With the use of serial-port communication technology, it is possible to monitor and control the front controller from a PC terminal. The application of computer image acquisition technology realizes the synchronous acquisition of pathologic skin cell images before and after the operation, together with the case history. Clinical tests prove its conformity with national standards and the pre-set technological requirements.

  19. A Computational Study of Laminate Transparent Armor Impacted by FSP

    DTIC Science & Technology

    2009-06-01

    This report compares computational results with the experiments of Hsieh et al [1] on targets consisting of 3mm PC-12mm PMMA-3mm PC impacted by a 17-gr, 0.22 caliber fragment simulating projectile (FSP). Several different analysis techniques are investigated to qualitatively determine their accuracy when compared with the experiments of Hsieh et al [1].

  20. SAD-Based Stereo Matching Using FPGAs

    NASA Astrophysics Data System (ADS)

    Ambrosch, Kristian; Humenberger, Martin; Kubinger, Wilfried; Steininger, Andreas

    In this chapter we present a field-programmable gate array (FPGA) based stereo matching architecture. This architecture uses the sum of absolute differences (SAD) algorithm and is targeted at automotive and robotics applications. The disparity maps are calculated using 450×375 input images and a disparity range of up to 150 pixels. We discuss two different implementation approaches for the SAD and analyze their resource usage. Furthermore, block sizes ranging from 3×3 up to 11×11 and their impact on the consumed logic elements as well as on the disparity map quality are discussed. The stereo matching architecture enables a frame rate of up to 600 fps by calculating the data in a highly parallel and pipelined fashion. This way, a software solution optimized by using Intel's Open Source Computer Vision Library running on an Intel Pentium 4 with 3 GHz clock frequency is outperformed by a factor of 400.
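
    As a reference for what the FPGA pipeline parallelizes, here is a brute-force software sketch of SAD block matching (Python/NumPy rather than the authors' hardware description; the block size and disparity range are illustrative):

      import numpy as np

      def sad_disparity(left, right, max_disp=64, block=5):
          """Brute-force SAD block matching on grayscale float images."""
          h, w = left.shape
          r = block // 2
          disp = np.zeros((h, w), dtype=np.int32)
          for y in range(r, h - r):
              for x in range(r + max_disp, w - r):
                  patch = left[y - r:y + r + 1, x - r:x + r + 1]
                  # cost of shifting the right-image window by each candidate disparity
                  costs = [np.abs(patch - right[y - r:y + r + 1,
                                                x - d - r:x - d + r + 1]).sum()
                           for d in range(max_disp)]
                  disp[y, x] = int(np.argmin(costs))  # winner-takes-all match
          return disp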

  1. Computing Equilibrium Chemical Compositions

    NASA Technical Reports Server (NTRS)

    Mcbride, Bonnie J.; Gordon, Sanford

    1995-01-01

    Chemical Equilibrium With Transport Properties, 1993 (CET93) computer program provides data on chemical-equilibrium compositions. Aids calculation of thermodynamic properties of chemical systems. Information essential in design and analysis of such equipment as compressors, turbines, nozzles, engines, shock tubes, heat exchangers, and chemical-processing equipment. CET93/PC is version of CET93 specifically designed to run within 640K memory limit of MS-DOS operating system. CET93/PC written in FORTRAN.

  2. Remote media vision-based computer input device

    NASA Astrophysics Data System (ADS)

    Arabnia, Hamid R.; Chen, Ching-Yi

    1991-11-01

    In this paper, we introduce a vision-based computer input device which has been built at the University of Georgia. The user of this system gives commands to the computer without touching any physical device. The system receives input through a CCD camera; it is PC-based and is built on top of the DOS operating system. The major components of the input device are: a monitor, an image capturing board, a CCD camera, and some software (developed by us). These are interfaced with a standard PC running under the DOS operating system.

  3. Easy PC Astronomy

    NASA Astrophysics Data System (ADS)

    Duffett-Smith, Peter

    1996-11-01

    Easy PC Astronomy is the perfect book for everyone who wants to make easy and accurate astronomical calculations. The author supplies a simple but powerful script language called AstroScript on a disk, ready to use on any IBM PC-type computer. Equipped with this software, readers can compute complex but interesting astronomical results within minutes: from the time of moonrise or moonset anywhere in the world on any date, to the display of a lunar or solar eclipse on the computer screen--all within a few minutes of opening the book! The Sky Graphics feature of the software displays a detailed image of the sky as seen from any point on earth--at any time in the future or past--showing the constellations, planets, and a host of other features. Readers need no expert knowledge of astronomy, math or programming; the author provides full details of the calculations and formulas, which the reader can absorb or ignore as desired, and a comprehensive glossary of astronomical terms. Easy PC Astronomy is of immediate practical use to beginning and advanced amateur astronomers, students at all levels, science teachers, and research astronomers. Peter Duffett-Smith is at the Cavendish Laboratory of the University of Cambridge and is the author of Astronomy with Your Personal Computer (Cambridge University Press, 1990) and Practical Astronomy with Your Calculator (Cambridge University Press, 1989).

  4. TRIAC II. A MatLab code for track measurements from SSNT detectors

    NASA Astrophysics Data System (ADS)

    Patiris, D. L.; Blekas, K.; Ioannides, K. G.

    2007-08-01

    A computer program named TRIAC II, written in MATLAB and running with a friendly GUI, has been developed for the recognition and measurement of the parameters of particles' tracks from images of Solid State Nuclear Track Detectors. The program, using image analysis tools, counts the number of tracks and, depending on the current working mode, classifies them according to their radii (Mode I—circular tracks) or their axes (Mode II—elliptical tracks), their mean intensity value (brightness) and their orientation. Images of the detectors' surfaces are input to the code, which generates text files as output, including the number of counted tracks with the associated track parameters. Hough transform techniques are used for the estimation of the number of tracks and their parameters, providing results even in cases of overlapping tracks. Finally, it is possible for the user to obtain informative histograms as well as output files for each image and/or group of images. Program summary Title of program: TRIAC II Catalogue identifier: ADZC_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZC_v1_0 Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Computer: Pentium III, 600 MHz Installations: MATLAB 7.0 Operating system under which the program has been tested: Windows XP Programming language used: MATLAB Memory required to execute with typical data: 256 MB No. of bits in a word: 32 No. of processors used: one Has the code been vectorized or parallelized?: no No. of lines in distributed program, including test data, etc.: 25 964 No. of bytes in distributed program, including test data, etc.: 4 354 510 Distribution format: tar.gz Additional comments: This program requires the MatLab Statistical toolbox and the Image Processing Toolbox to be installed. Nature of physical problem: Following the passage of a charged particle (protons and heavier) through a Solid State Nuclear Track Detector (SSNTD), a damage region is created, usually named a latent track. After the chemical etching of the detectors in aqueous NaOH or KOH solutions, latent tracks can be sufficiently enlarged (with diameters of 1 μm or more) to become visible under an optical microscope. Using the appropriate apparatus, one can record images of the SSNTD's surface. The shapes of the particles' tracks are strongly dependent on their charge, energy and angle of incidence. Generally, they have elliptical shapes; in the special case of vertical incidence, they are circular. The manual counting of tracks is a tedious and time-consuming task. An automatic system is needed to speed up the process and to increase the accuracy of the results. Method of solution: TRIAC II is based on a segmentation method that groups image pixels according to their intensity value (brightness) into a number of grey-level groups. After the segmentation of pixels, the program recognizes and separates the track from the background, subsequently performing image morphology, where oversized objects or objects smaller than a threshold value are removed. Finally, using the appropriate Hough transform technique, the program counts the tracks, even those which overlap, and classifies them according to their shape parameters and brightness. Typical running time: The analysis of an image with a PC (Intel Pentium III processor running at 600 MHz) requires 2 to 10 minutes, depending on the number of observed tracks and the digital resolution of the image.
Unusual features of the program: This program has been tested with images of CR-39 detectors exposed to alpha particles. Also, in low contrast images with few or small tracks, background pixels can be recognized as track pixels. To avoid this problem the brightness of the background pixels should be sufficiently higher than that of the track pixels.
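
    A rough Python/scikit-image analogue of the circular-track mode (the program itself is MATLAB; the radii range and peak-detection thresholds below are placeholder values):

      import numpy as np
      from skimage.feature import canny
      from skimage.transform import hough_circle, hough_circle_peaks

      def count_tracks(image, radii=np.arange(5, 30)):
          """Count (possibly overlapping) circular etched tracks in a grayscale image."""
          edges = canny(image, sigma=2)            # track rims -> binary edge map
          accum = hough_circle(edges, radii)       # circular Hough transform
          _, cx, cy, r = hough_circle_peaks(accum, radii,
                                            min_xdistance=5, min_ydistance=5,
                                            threshold=0.5)
          return len(cx), list(zip(cx, cy, r))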

  5. PC-based note taking in patient-centred diagnostic interviews: a thematic analysis of patient opinion elicited using a pilot survey instrument.

    PubMed

    Barker, Fiona; Court, Gemma

    2011-01-01

    Computers are used increasingly in patient-clinician consultations, and there is the potential for PC use to have an effect on the communication process. The aim of this preliminary study was to investigate patient opinion regarding the use of PC-based note taking during diagnostic vestibular assessments. We gave a simple four-item questionnaire to 100 consecutive patients attending for vestibular assessment at a secondary referral level primary care trust audiology service. Written responses to two of the questionnaire items were subjected to an inductive thematic analysis. The questionnaire was acceptable to patients; none refused to complete it. Dominant themes identified suggest that patients do perceive consistent positive benefits from the use of PC-based note taking. This pilot study's short survey instrument is usable and may provide insights into patients' perceptions of computer use in a clinical setting.

  6. An Embedded Reconfigurable Logic Module

    NASA Technical Reports Server (NTRS)

    Tucker, Jerry H.; Klenke, Robert H.; Shams, Qamar A. (Technical Monitor)

    2002-01-01

    A Miniature Embedded Reconfigurable Computer and Logic (MERCAL) module has been developed and verified. MERCAL was designed to be a general-purpose, universal module that can provide significant hardware and software resources to meet the requirements of many of today's complex embedded applications. This is accomplished in the MERCAL module by combining a sub-credit-card-size PC in a DIMM form factor with a Xilinx Spartan II FPGA. The PC has the ability to download program files to the FPGA to configure it for different hardware functions and to transfer data to and from the FPGA via the PC's ISA bus during run time. The MERCAL module combines, in a compact package, the computational power of a 133 MHz PC with up to 150,000 gate equivalents of digital logic that can be reconfigured by software. The general architecture and functionality of the MERCAL hardware and system software are described.

  7. Collaborative Simulation Grid: Multiscale Quantum-Mechanical/Classical Atomistic Simulations on Distributed PC Clusters in the US and Japan

    NASA Technical Reports Server (NTRS)

    Kikuchi, Hideaki; Kalia, Rajiv; Nakano, Aiichiro; Vashishta, Priya; Iyetomi, Hiroshi; Ogata, Shuji; Kouno, Takahisa; Shimojo, Fuyuki; Tsuruta, Kanji; Saini, Subhash; hide

    2002-01-01

    A multidisciplinary, collaborative simulation has been performed on a Grid of geographically distributed PC clusters. The multiscale simulation approach seamlessly combines i) atomistic simulation based on the molecular dynamics (MD) method and ii) quantum mechanical (QM) calculation based on the density functional theory (DFT), so that accurate but less scalable computations are performed only where they are needed. The multiscale MD/QM simulation code has been Grid-enabled using i) a modular, additive hybridization scheme, ii) multiple QM clustering, and iii) computation/communication overlapping. The Gridified MD/QM simulation code has been used to study environmental effects of water molecules on fracture in silicon. A preliminary run of the code has achieved a parallel efficiency of 94% on 25 PCs distributed over 3 PC clusters in the US and Japan, and a larger test involving 154 processors on 5 distributed PC clusters is in progress.

  8. The IBM PC at NASA Ames

    NASA Technical Reports Server (NTRS)

    Peredo, James P.

    1988-01-01

    Like many large companies, Ames relies heavily on its computing power to get work done. And, like many other large companies that find the IBM PC a reliable tool, Ames uses it for many of the same types of functions as other companies. Presentation and clarification needs demand much of graphics packages; programming and text editing call for simpler but more powerful packages. The storage space needed by NASA's scientists and users for the monumental amounts of data that Ames must keep demands the best database packages, ones that are both large and easy to use. Access to the Micom Switching Network combines the power of the IBM PC with the capabilities of other computers and mainframes and allows users to communicate electronically. These four primary capabilities of the PC are vital to the needs of NASA's users and help to continue and support the vast amounts of work done by NASA employees.

  9. PC-based Multiple Information System Interface (PC/MISI) detailed design and implementation plan

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Hall, Philip P.

    1985-01-01

    The design plan for the personal computer multiple information system interface (PC/MISI) project is discussed. The document is intended to be used as a blueprint for the implementation of the system. Each component is described in the detail necessary to allow programmers to implement the system. A description of the system data flow and system file structures is given.

  10. PROFIT-PC: a program for estimating maximum net revenue from multiproduct harvests in Appalachian hardwoods

    Treesearch

    Chris B. LeDoux; John E. Baumgras; R. Bryan Selbe

    1989-01-01

    PROFIT-PC is a menu-driven, interactive PC (personal computer) program that estimates optimum product mix and maximum net harvesting revenue based on projected product yields and stump-to-mill timber harvesting costs. Required inputs include the number of trees/acre by species and 2-inch diameter-at-breast-height class, delivered product prices by species and product...

  11. PC_Eyewitness and the sequential superiority effect: computer-based lineup administration.

    PubMed

    MacLin, Otto H; Zimmerman, Laura A; Malpass, Roy S

    2005-06-01

    Computer technology has become an increasingly important tool for conducting eyewitness identifications. In the area of lineup identifications, computerized administration offers several advantages for researchers and law enforcement. PC_Eyewitness is designed specifically to administer lineups. To assess this new lineup technology, two studies were conducted in order to replicate the results of previous studies comparing simultaneous and sequential lineups. One hundred twenty university students participated in each experiment. Experiment 1 used traditional paper-and-pencil lineup administration methods to compare simultaneous to sequential lineups. Experiment 2 used PC_Eyewitness to administer simultaneous and sequential lineups. The results of these studies were compared to the meta-analytic results reported by N. Steblay, J. Dysart, S. Fulero, and R. C. L. Lindsay (2001). No differences were found between paper-and-pencil and PC_Eyewitness lineup administration methods. The core findings of the N. Steblay et al. (2001) meta-analysis were replicated by both administration procedures. These results show that computerized lineup administration using PC_Eyewitness is an effective means for gathering eyewitness identification data.

  12. A scalable PC-based parallel computer for lattice QCD

    NASA Astrophysics Data System (ADS)

    Fodor, Z.; Katz, S. D.; Pappa, G.

    2003-05-01

    A PC-based parallel computer for medium/large scale lattice QCD simulations is suggested. The Eötvös Univ., Inst. Theor. Phys. cluster consists of 137 Intel P4-1.7GHz nodes. Gigabit Ethernet cards are used for nearest-neighbor communication in a two-dimensional mesh. The sustained performance for dynamical staggered (Wilson) quarks on large lattices is around 70 (110) GFlops. The exceptional price/performance ratio is below $1/Mflop.

  13. Partial correlation-based functional connectivity analysis for functional near-infrared spectroscopy signals

    NASA Astrophysics Data System (ADS)

    Akın, Ata

    2017-12-01

    A theoretical framework, a partial correlation-based functional connectivity (PC-FC) analysis applied to functional near-infrared spectroscopy (fNIRS) data, is proposed. It is based on generating a common background signal, from a high-passed version of the fNIRS data averaged over all channels, to serve as the regressor in computing the PC between pairs of channels. This approach has been applied to real data collected during a Stroop task. The results show a strong significance in the global efficiency (GE) metric computed by the PC-FC analysis for neutral, congruent, and incongruent stimuli (NS, CS, IcS; GE_N = 0.10 ± 0.009, GE_C = 0.11 ± 0.01, GE_IC = 0.13 ± 0.015, p = 0.0073). A positive correlation (r = 0.729, p = 0.0259) is observed between the interference of reaction times (incongruent − neutral) and the interference of GE values (GE_IC − GE_N) computed from [HbO] signals.
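
    The core computation is the first-order partial correlation between channel pairs given the common regressor. A minimal NumPy sketch, using the plain global mean as a stand-in for the paper's high-passed background signal:

      import numpy as np

      def pc_fc(channels):
          """PC-FC matrix for an (n_channels, n_samples) array; the global mean
          stands in for the paper's high-passed common background signal."""
          z = channels.mean(axis=0)
          n = len(channels)
          fc = np.eye(n)
          for i in range(n):
              for j in range(i + 1, n):
                  rxy = np.corrcoef(channels[i], channels[j])[0, 1]
                  rxz = np.corrcoef(channels[i], z)[0, 1]
                  ryz = np.corrcoef(channels[j], z)[0, 1]
                  # first-order partial correlation of i and j given z
                  fc[i, j] = fc[j, i] = (rxy - rxz * ryz) / np.sqrt(
                      (1 - rxz**2) * (1 - ryz**2))
          return fc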

  14. PUZZLE - A program for computer-aided design of printed circuit artwork

    NASA Technical Reports Server (NTRS)

    Harrell, D. A. W.; Zane, R.

    1971-01-01

    Program assists in solving spacing problems encountered in printed circuit /PC/ design. It is intended to have maximum use for two-sided PC boards carrying integrated circuits, and also aids design of discrete component circuits.

  15. 50 CFR 660.15 - Equipment requirements.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... perceived weight of water, slime, mud, debris, or other materials. Scale printouts must show: (A) The vessel... with Pentium 75-MHz or higher. Random Access Memory (RAM) must have sufficient megabyte (MB) space to... space of 217 MB or greater. A CD-ROM drive with a Video Graphics Adapter (VGA) or higher resolution...

  16. Using commercial software products for atmospheric remote sensing

    NASA Astrophysics Data System (ADS)

    Kristl, Joseph A.; Tibaudo, Cheryl; Tang, Kuilian; Schroeder, John W.

    2002-02-01

    The Ontar Corporation (www.Ontar.com) has developed several products for atmospheric remote sensing to calculate radiative transport, atmospheric transmission, and sensor performance in both the normal atmosphere and the atmosphere disturbed by battlefield conditions of smoke, dust, explosives and turbulence. These products include: PcModWin: Uses the USAF standard MODTRAN model to compute the atmospheric transmission and radiance at medium spectral resolution (2 cm⁻¹) from the ultraviolet/visible into the infrared and microwave regions of the spectrum. It can be used for any geometry and atmospheric conditions such as aerosols, clouds and rain. PcLnWin: Uses the USAF standard FASCOD model to compute atmospheric transmission and emission at high (line-by-line) spectral resolution using the HITRAN 2000 database. It can be used over the same spectrum from the UV/visible into the infrared and microwave regions of the spectrum. HitranPC: Computes the absolute high (line-by-line) spectral resolution transmission spectrum of the atmosphere for different temperatures and pressures. HitranPC is a user-friendly program developed by the University of South Florida (USF) and uses the international standard molecular spectroscopic database, HITRAN. LidarPC: A computer program to calculate the Laser Radar/Lidar Equation for hard targets and atmospheric backscatter, using manually input atmospheric parameters or HitranPC and BETASPEC for transmission and backscatter calculations of the atmosphere. Also developed by the University of South Florida (USF). PcEosael: A library of programs that mathematically describe aspects of electromagnetic propagation in battlefield environments. Its 25 modules are connected but can be exercised individually. Covers eight general categories of atmospheric effects, including gases, aerosols and laser propagation. Based on codes developed by the Army Research Lab. NVTherm: Models parallel scan, serial scan, and staring thermal imagers that operate in the mid and far infrared spectral bands (3 to 12 micrometers wavelength). It predicts the Minimum Resolvable Temperature Difference (MRTD, or just MRT) that can be discriminated by a human when using a thermal imager. NVTherm also predicts the target acquisition range performance likely to be achieved using the sensor.

  17. Computer skills and internet use in adults aged 50-74 years: influence of hearing difficulties.

    PubMed

    Henshaw, Helen; Clark, Daniel P A; Kang, Sujin; Ferguson, Melanie A

    2012-08-24

    The use of personal computers (PCs) and the Internet to provide health care information and interventions has increased substantially over the past decade. Yet the effectiveness of such an approach is highly dependent upon whether the target population has both access and the skill set required to use this technology. This is particularly relevant in the delivery of hearing health care because most people with hearing loss are over 50 years (average age for initial hearing aid fitting is 74 years). Although PC skill and Internet use by demographic factors have been examined previously, data do not currently exist that examine the effects of hearing difficulties on PC skill or Internet use in older adults. To explore the effect that hearing difficulty has on PC skill and Internet use in an opportunistic sample of adults aged 50-74 years. Postal questionnaires about hearing difficulty, PC skill, and Internet use (n=3629) were distributed to adults aged 50-74 years through three family physician practices in Nottingham, United Kingdom. A subsample of 84 respondents completed a second detailed questionnaire on confidence in using a keyboard, mouse, and track pad. Summed scores were termed the "PC confidence index." The PC confidence index was used to verify the PC skill categories in the postal questionnaire (ie, never used a computer, beginner, and competent). The postal questionnaire response rate was 36.78% (1298/3529) and 95.15% (1235/1298) of these contained complete information. There was a significant between-category difference for PC skill by PC confidence index (P<.001), thus verifying the three-category PC skill scale. PC and Internet use was greater in the younger respondents (50-62 years) than in the older respondents (63-74 years). The younger group's PC and Internet use was 81.0% and 60.9%, respectively; the older group's PC and Internet use was 54.0% and 29.8%, respectively. Those with slight hearing difficulties in the older group had significantly greater odds of PC use compared to those with no hearing difficulties (odds ratio [OR]=1.57, 95% confidence interval [CI] 1.06-2.30, P=.02). Those with moderate+ hearing difficulties had lower odds of PC use compared with those with no hearing difficulties, both overall (OR=0.58, 95% CI 0.39-0.87, P=.008) and in the younger group (OR=0.49, 95% CI 0.26-0.86, P=.008). Similar results were demonstrated for Internet use by age group (older: OR=1.57, 95% CI 0.99-2.47, P=.05; younger: OR=0.32, 95% CI 0.16-0.62, P=.001). Hearing health care is of particular relevance to older adults because of the prevalence of age-related hearing loss. Our data show that older adults experiencing slight hearing difficulty have increased odds of greater PC skill and Internet use than those reporting no difficulty. These findings suggest that PC and Internet delivery of hearing screening, information, and intervention is feasible for people between 50-74 years who have hearing loss, but who would not typically present to an audiologist.

  18. Computational fluid dynamic comparison between patch-based and primary closure techniques after carotid endarterectomy.

    PubMed

    Domanin, Maurizio; Bissacco, Daniele; Le Van, Davide; Vergara, Christian

    2018-03-01

    The aim of the study was to provide, by means of computational fluid dynamics, a comparative analysis after carotid endarterectomy (CEA) between patch graft (PG) and primary closure (PC) techniques performed in real carotid geometries to identify disturbed flow conditions potentially involved in the development of restenosis. Eight carotid geometries in seven asymptomatic patients who underwent CEA were analyzed. In six cases (A-F), CEA was performed using PG closure; in two cases (G and H), PC was performed. Three-dimensional carotid geometries, derived from postoperative magnetic resonance angiography, were reconstructed, and a computational fluid dynamics analysis was performed. A virtual scenario with PC closure was designed in patients in whom PG was originally inserted and vice versa. This allowed us to compare for each patient hemodynamic effects in the PG and PC scenarios in terms of oscillatory shear index (OSI) and relative residence time (RRT), considered indicators of disturbed flow. For the six original PG cases, the mean averaged-in-space OSI was 0.07 ± 0.01 for PG and 0.03 ± 0.02 for virtual-PC (difference, 0.04 ± 0.01; P = .0016). The mean of the percentage of area (%A) with OSI >0.2 resulted in 10.08% ± 3.38% for PG and 3.80% ± 3.22% for virtual-PC (difference, 6.28 ± 1.91; P = .008). For the same cases, the mean of the averaged-in-space RRT resulted in 5.48 ± 3.40 1/Pa for PG and 2.62 ± 1.12 1/Pa for virtual-PC (difference, 2.87 ± 1.46; P = .097). The mean of %A RRT >4.0 1/Pa resulted in 26.53% ± 12.98% for PG and 9.95% ± 6.53% for virtual-PC (difference, 16.58 ± 5.93; P = .025). For the two original PC cases, the averaged-in-space OSIs were 0.02 and 0.04 for PC and 0.03 and 0.02 for virtual-PG; the %A OSIs >0.2 were 0.9% and 7.6% for PC and 3.0% and 2.2% for virtual-PG; the averaged-in-space RRTs were 1.8 and 2.0 1/Pa for PC and 2.9 and 1.9 1/Pa for virtual-PG; the %A RRTs >4.0 1/Pa were 6.8% and 9.8% for PC and 9.4% and 6.2% for virtual-PG. These results revealed generally higher disturbed flows in the PG configurations with respect to the PC ones. OSI and RRT values were generally higher in PG cases with respect to PC, especially for high carotids or when the arteriotomy is mainly at the bulb region. Thus, an elective use of patch should be considered to prevent disturbed flows.

  19. IPEG- IMPROVED PRICE ESTIMATION GUIDELINES (IBM PC VERSION)

    NASA Technical Reports Server (NTRS)

    Aster, R. W.

    1994-01-01

    The Improved Price Estimation Guidelines, IPEG, program provides a simple yet accurate estimate of the price of a manufactured product. IPEG facilitates sensitivity studies of price estimates at considerably less expense than would be incurred by using the Standard Assembly-line Manufacturing Industry Simulation, SAMIS, program (COSMIC program NPO-16032). A difference of less than one percent between the IPEG and SAMIS price estimates has been observed with realistic test cases. However, the IPEG simplification of SAMIS allows the analyst with limited time and computing resources to perform a greater number of sensitivity studies than with SAMIS. Although IPEG was developed for the photovoltaics industry, it is readily adaptable to any standard assembly line type of manufacturing industry. IPEG estimates the annual production price per unit. The input data includes cost of equipment, space, labor, materials, supplies, and utilities. Production on an industry wide basis or a process wide basis can be simulated. Once the IPEG input file is prepared, the original price is estimated and sensitivity studies may be performed. The IPEG user selects a sensitivity variable and a set of values. IPEG will compute a price estimate and a variety of other cost parameters for every specified value of the sensitivity variable. IPEG is designed as an interactive system and prompts the user for all required information and offers a variety of output options. The IPEG/PC program is written in TURBO PASCAL for interactive execution on an IBM PC computer under DOS 2.0 or above with at least 64K of memory. The IBM PC color display and color graphics adapter are needed to use the plotting capabilities in IPEG/PC. IPEG/PC was developed in 1984. The original IPEG program is written in SIMSCRIPT II.5 for interactive execution and has been implemented on an IBM 370 series computer with a central memory requirement of approximately 300K of 8 bit bytes. The original IPEG was developed in 1980.

  20. Multithreaded transactions in scientific computing. The Growth06_v2 program

    NASA Astrophysics Data System (ADS)

    Daniluk, Andrzej

    2009-07-01

    Writing a concurrent program can be more difficult than writing a sequential program: the programmer needs to think about synchronization, race conditions and shared variables. Transactions help reduce the inconvenience of using threads. A transaction is an abstraction which allows programmers to group a sequence of actions on the program into a logical, higher-level computation unit. This paper presents a new version of the GROWTH and GROWTH06 programs. New version program summary Program title: GROWTH06_v2 Catalogue identifier: ADVL_v2_1 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVL_v2_1.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 65 255 No. of bytes in distributed program, including test data, etc.: 865 985 Distribution format: tar.gz Programming language: Object Pascal Computer: Pentium-based PC Operating system: Windows 9x, XP, NT, Vista RAM: more than 1 MB Classification: 4.3, 7.2, 6.2, 8, 14 Catalogue identifier of previous version: ADVL_v2_0 Journal reference of previous version: Comput. Phys. Comm. 175 (2006) 678 Does the new version supersede the previous version?: Yes Nature of problem: The programs compute the RHEED intensities during the growth of thin epitaxial structures prepared using molecular beam epitaxy (MBE). The computations are based on the use of kinematical diffraction theory. Solution method: Epitaxial growth of thin films is modelled by a set of non-linear differential equations [1]. The Runge-Kutta method with adaptive stepsize control was used for solving the initial value problem for the non-linear differential equations [2]. Reasons for new version: According to the users' suggestions, the functionality of the program has been improved. Moreover, new use cases have been added which make the handling of the program easier and more efficient than the previous ones [3]. Summary of revisions: The design pattern (see Fig. 2 of Ref. [3]) has been modified according to the scheme shown in Fig. 1. A graphical user interface (GUI) for the program has been reconstructed. Fig. 2 presents a hybrid diagram of a GUI that shows how onscreen objects connect to use cases. The program has been compiled with English/USA regional and language options. Note: The figures mentioned above are contained in the program distribution file. Unusual features: The program is distributed in the form of the source project GROWTH06_v2.dpr with associated files, and should be compiled using Borland Delphi compilers version 6 or later (including Borland Developer Studio 2006 and Code Gear compilers for Delphi). Additional comments: Two figures are included in the program distribution file. These are captioned "Static classes model for Transaction design pattern" and "A model of a window that shows how onscreen objects connect to use cases". Running time: The typical running time is machine and user-parameters dependent. References: [1] A. Daniluk, Comput. Phys. Comm. 170 (2005) 265. [2] W.H. Press, B.P. Flannery, S.A. Teukolsky, W.T. Vetterling, Numerical Recipes in Pascal: The Art of Scientific Computing, first ed., Cambridge University Press, 1989. [3] M. Brzuszek, A. Daniluk, Comput. Phys. Comm. 175 (2006) 678.
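
    The solution method, adaptive-stepsize Runge-Kutta integration of a nonlinear ODE system, can be sketched with a modern library call; the right-hand side below is a toy stand-in, not the program's actual growth equations:

      import numpy as np
      from scipy.integrate import solve_ivp

      # Toy stand-in for the growth equations: coupled layer coverages theta_i.
      def rhs(t, theta, k=1.0):
          return k * (np.roll(theta, 1) - theta) * (1.0 - theta)

      theta0 = np.zeros(5)
      theta0[0] = 1.0                             # material starts in the first layer
      sol = solve_ivp(rhs, (0.0, 10.0), theta0,   # RK45 = adaptive-stepsize Runge-Kutta
                      method="RK45", rtol=1e-8, atol=1e-10)
      print(sol.t.size, "accepted steps")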

  1. Program Processes Thermocouple Readings

    NASA Technical Reports Server (NTRS)

    Quave, Christine A.; Nail, William, III

    1995-01-01

    Digital Signal Processor for Thermocouples (DART) computer program implements precise and fast method of converting voltage to temperature for large-temperature-range thermocouple applications. Written using LabVIEW software. DART available only as object code for use on Macintosh II FX or higher-series computers running System 7.0 or later and IBM PC-series and compatible computers running Microsoft Windows 3.1. Macintosh version of DART (SSC-00032) requires LabVIEW 2.2.1 or 3.0 for execution. IBM PC version (SSC-00031) requires LabVIEW 3.0 for Windows 3.1. LabVIEW software product of National Instruments and not included with program.

  2. P ≠NP Millenium-Problem(MP) TRIVIAL Physics Proof Via NATURAL TRUMPS Artificial-``Intelligence'' Via: Euclid Geometry, Plato Forms, Aristotle Square-of-Opposition, Menger Dimension-Theory Connections!!! NO Computational-Complexity(CC)/ANYthing!!!: Geometry!!!

    NASA Astrophysics Data System (ADS)

    Clay, London; Menger, Karl; Rota, Gian-Carlo; Euclid, Alexandria; Siegel, Edward

    P ≠NP MP proof is by computer-''science''/SEANCE(!!!)(CS) computational-''intelligence'' lingo jargonial-obfuscation(JO) NATURAL-Intelligence(NI) DISambiguation! CS P =(?) =NP MEANS (Deterministic)(PC) = (?) =(Non-D)(PC) i.e. D(P) =(?) = N(P). For inclusion(equality) vs. exclusion (inequality) irrelevant (P) simply cancels!!! (Equally any/all other CCs IF both sides identical). Crucial question left: (D) =(?) =(ND), i.e. D =(?) = N. Algorithmics[Sipser[Intro. Thy.Comp.(`97)-p.49Fig.1.15!!!

  3. Operating a Geiger Müller tube using a PC sound card

    NASA Astrophysics Data System (ADS)

    Azooz, A. A.

    2009-01-01

    In this paper, a simple MATLAB-based PC program that enables the computer to function as a replacement for the electronic scaler-counter system associated with a Geiger-Müller (GM) tube is described. The program utilizes the ability of MATLAB to acquire data directly from the computer sound card. The signal from the GM tube is applied to the computer sound card via the line-in port. All standard GM experiments, pulse-shape and statistical-analysis experiments can be carried out using this system. A new visual demonstration of dead-time effects is also presented.
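
    The counting side of such a setup amounts to threshold detection on digitized audio. A hedged Python sketch (the abstract's program is MATLAB); the sample rate, threshold, and dead-time values are illustrative:

      import numpy as np
      import sounddevice as sd   # any audio-capture API would do

      FS = 44100                 # sample rate (Hz) -- illustrative
      THRESH = 0.3               # pulse threshold, normalized amplitude -- illustrative
      DEAD = int(100e-6 * FS)    # skip ~100 us after each pulse (dead time)

      def count_pulses(seconds=10.0):
          """Record from the line-in port and count GM pulses above threshold."""
          x = sd.rec(int(seconds * FS), samplerate=FS, channels=1)
          sd.wait()                          # block until the recording finishes
          x = np.abs(x[:, 0])
          count, i = 0, 0
          while i < len(x):
              if x[i] > THRESH:
                  count += 1
                  i += DEAD                  # jump past the rest of this pulse
              else:
                  i += 1
          return count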

  4. Computers--Teaching, Technology, and Applications.

    ERIC Educational Resources Information Center

    Cocco, Anthony M.; And Others

    1995-01-01

    Includes "Managing Personality Types in the Computer Classroom" (Cocco); "External I/O Input/Output with a PC" (Fryda); "The Future of CAD/CAM Computer-Assisted Design/Computer-Assisted Manufacturing Software" (Fulton); and "Teaching Quality Assurance--A Laboratory Approach" (Wojslaw). (SK)

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hopwood, J.E.; Affeldt, B.

    An IBM personal computer (PC), a Gerber coordinate digitizer, and a collection of other instruments make up a system known as the Coordinate Digitizer Interactive Processor (CDIP). The PC extracts coordinate data from the digitizer through a special interface, and then, after reformatting, transmits the data to a remote VAX computer, a floppy disk, and a display terminal. This system has improved the efficiency of producing printed circuit-board artwork and extended the useful life of the Gerber GCD-1 Digitizer. 1 ref., 12 figs.

  6. A novel anisotropic fast marching method and its application to blood flow computation in phase-contrast MRI.

    PubMed

    Schwenke, M; Hennemuth, A; Fischer, B; Friman, O

    2012-01-01

    Phase-contrast MRI (PC MRI) can be used to assess blood flow dynamics noninvasively inside the human body. The acquired images can be reconstructed into flow vector fields. Traditionally, streamlines can be computed based on the vector fields to visualize flow patterns and particle trajectories. The traditional methods may give a false impression of precision, as they do not consider the measurement uncertainty in the PC MRI images. In our prior work, we incorporated the uncertainty of the measurement into the computation of particle trajectories. As a major part of the contribution, a novel numerical scheme for solving the anisotropic Fast Marching problem is presented. A computing time comparison to state-of-the-art methods is conducted on artificial tensor fields. A visual comparison of healthy to pathological blood flow patterns is given. The comparison shows that the novel anisotropic Fast Marching solver outperforms previous schemes in terms of computing time. The visual comparison of flow patterns directly visualizes large deviations of pathological flow from healthy flow. The novel anisotropic Fast Marching solver efficiently resolves even strongly anisotropic path costs. The visualization method enables the user to assess the uncertainty of particle trajectories derived from PC MRI images.
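
    For orientation, the isotropic special case of the Fast Marching computation can be run with scikit-fmm; the paper's anisotropic solver is not publicly packaged, so this only illustrates the travel-time formulation, and the speed field below is made up:

      import numpy as np
      import skfmm   # scikit-fmm: isotropic Fast Marching only

      ny, nx = 64, 64
      phi = np.ones((ny, nx))
      phi[32, 10] = -1.0                    # zero level set marks the seed point
      speed = np.ones((ny, nx))
      speed[:, 30:34] = 0.2                 # a slow band, e.g. a low-confidence region
      t = skfmm.travel_time(phi, speed, dx=1.0)   # first-arrival times from the seed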

  7. Open architecture CMM motion controller

    NASA Astrophysics Data System (ADS)

    Chang, David; Spence, Allan D.; Bigg, Steve; Heslip, Joe; Peterson, John

    2001-12-01

    Although initially the only Coordinate Measuring Machine (CMM) sensor available was a touch trigger probe, technological advances in sensors and computing have greatly increased the variety of available inspection sensors. Non-contact laser digitizers and analog scanning touch probes require very well tuned CMM motion control, as well as an extensible, open architecture interface. This paper describes the implementation of a retrofit CMM motion controller designed for open architecture interface to a variety of sensors. The controller is based on an Intel Pentium microcomputer and a Servo To Go motion interface electronics card. Motor amplifiers, safety, and additional interface electronics are housed in a separate enclosure. Host Signal Processing (HSP) is used for the motion control algorithm. Compared to the usual host plus DSP architecture, single CPU HSP simplifies integration with the various sensors, and implementation of software geometric error compensation. Motion control tuning is accomplished using a remote computer via 100BaseTX Ethernet. A Graphical User Interface (GUI) is used to enter geometric error compensation data, and to optimize the motion control tuning parameters. It is shown that this architecture achieves the required real time motion control response, yet is much easier to extend to additional sensors.
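
    Although the abstract does not spell out the control law, host-based servo loops of this kind commonly reduce to a discrete PID update per cycle; a generic sketch with purely illustrative gains and units:

      def pid_step(setpoint, measured, state, kp=2.0, ki=0.5, kd=0.1, dt=0.001):
          """One servo-cycle PID update; returns the command to the motor amplifier."""
          err = setpoint - measured
          state["i"] += err * dt             # integral term accumulates steady error
          d = (err - state["e"]) / dt        # derivative term damps overshoot
          state["e"] = err
          return kp * err + ki * state["i"] + kd * d

      state = {"i": 0.0, "e": 0.0}
      command = pid_step(10.0, 9.5, state)   # e.g. axis at 9.5 mm, target 10.0 mm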

  8. Use of imagery and GIS for humanitarian demining management

    NASA Astrophysics Data System (ADS)

    Gentile, Jack; Gustafson, Glen C.; Kimsey, Mary; Kraenzle, Helmut; Wilson, James; Wright, Stephen

    1997-11-01

    In the Fall of 1996, the Center for Geographic Information Science at James Madison University became involved in a project for the Department of Defense evaluating the data needs and data management systems for humanitarian demining in the Third World. In particular, the effort focused on the information needs of demining in Cambodia and in Bosnia. In the first phase of the project one team attempted to identify all sources of unclassified country data, image data and map data. Parallel with this, another group collected information and evaluations on most of the commercial off-the-shelf computer software packages for the management of such geographic information. The result was a design for the kinds of data and the kinds of systems necessary to establish and maintain such a database as a humanitarian demining management tool. The second phase of the work involved acquiring the recommended data and systems, integrating the two, and producing a demonstration of the system. In general, the configuration involves ruggedized portable computers for field use with a greatly simplified graphical user interface, supported by a more capable central facility based on Pentium workstations and appropriate technical expertise.

  9. Comparison of 4D Phase-Contrast MRI Flow Measurements to Computational Fluid Dynamics Simulations of Cerebrospinal Fluid Motion in the Cervical Spine

    PubMed Central

    Yiallourou, Theresia I.; Kröger, Jan Robert; Stergiopulos, Nikolaos; Maintz, David

    2012-01-01

    Cerebrospinal fluid (CSF) dynamics in the cervical spinal subarachnoid space (SSS) have been thought to be important to help diagnose and assess craniospinal disorders such as Chiari I malformation (CM). In this study we obtained time-resolved, three-directional, velocity-encoded phase-contrast MRI (4D PC MRI) in three healthy volunteers and four CM patients and compared the 4D PC MRI measurements to subject-specific 3D computational fluid dynamics (CFD) simulations. The CFD simulations considered the geometry to be rigid-walled and did not include small anatomical structures such as nerve roots, denticulate ligaments and arachnoid trabeculae. Results were compared at nine axial planes along the cervical SSS in terms of peak CSF velocities in both the cranial and caudal direction and visual interpretation of thru-plane velocity profiles. 4D PC MRI peak CSF velocities were consistently greater than the CFD peak velocities and these differences were more pronounced in CM patients than in healthy subjects. In the upper cervical SSS of CM patients the 4D PC MRI quantified stronger fluid jets than the CFD. Visual interpretation of the 4D PC MRI thru-plane velocity profiles showed greater pulsatile movement of CSF in the anterior SSS in comparison to the posterior and reduction in local CSF velocities near nerve roots. CFD velocity profiles were relatively uniform around the spinal cord for all subjects. This study represents the first comparison of 4D PC MRI measurements to CFD of CSF flow in the cervical SSS. The results highlight the utility of 4D PC MRI for evaluation of complex CSF dynamics and the need for improvement of CFD methodology. Future studies are needed to investigate whether integration of fine anatomical structures and gross motion of the brain and/or spinal cord into the computational model will lead to a better agreement between the two techniques. PMID:23284970

  10. Acceptance of Internet Banking Systems among Young Managers

    NASA Astrophysics Data System (ADS)

    Ariff, Mohd Shoki Md; M, Yeow S.; Zakuan, Norhayati; Zaidi Bahari, Ahamad

    2013-06-01

    The aim of this paper is to determine the acceptance of internet banking systems among potential young users, specifically future young managers. The relationships and effects of computer self-efficacy (CSE) and an extended technology acceptance model (TAM) on the behavioural intention (BI) to use internet banking systems were examined. Measures of CSE, TAM and BI were adapted from previous studies; however, the TAM construct was extended by adding a new variable, perceived credibility (PC). A questionnaire survey was conducted to determine the acceptance levels of CSE, TAM and BI. Data were obtained from 275 Technology Management students pursuing their undergraduate studies at a Malaysian public university. The confirmatory factor analysis identified four determinant factors of internet banking acceptance: computer self-efficacy (CSE) and three variables from the TAM constructs, namely perceived usefulness (PU), perceived ease of use (PE) and perceived credibility (PC). The findings of this study indicated that CSE has a positive effect on the PU and PE of internet banking systems. Respondents' CSE positively affected their PC of the systems, indicating that the stronger one's computer skills, the greater one's concern with the security and privacy aspects captured by PC. The multiple regression analysis indicated that only two TAM constructs, PU and PC, were significantly associated with BI. The future managers' CSE was found to affect their BI to use internet banking systems indirectly, through the PU and PC of TAM, while TAM had direct effects on respondents' BI to use the systems. Both CSE and the PU and PC of TAM were good predictors for understanding individual responses to information technology. Surprisingly, the role of the original TAM's PE in predicting users' attitudes towards the use of information technology systems was insignificant.

  11. Perceived control and intrinsic vs. extrinsic motivation for oral self-care: a full factorial experimental test of theory-based persuasive messages.

    PubMed

    Staunton, Liam; Gellert, Paul; Knittle, Keegan; Sniehotta, Falko F

    2015-04-01

    Correlational evidence suggests that perceived control (PC) and intrinsic motivation (IM), key constructs in social cognitive and self-determination theories, may interact to reinforce behavior change. This proof-of-principle study examines the independent and synergistic effects of interventions to increase PC and IM upon dental flossing frequency. University students (n = 185) were randomized in a 2 × 2 full factorial design to receive two computer-based interventions: one to either increase or decrease PC and another to increase either IM or extrinsic motivation. These constructs were measured immediately post-intervention; flossing behavior was measured 1 week later. The interventions to increase PC and PC/IM had main and interaction effects on flossing, respectively. The PC/IM interaction effect was mediated by increases in PC and IM. Combining interventions to increase PC and IM seems to be a promising avenue of research, which has implications for both theory and intervention development.

  12. Implications of Multi-Core Architectures on the Development of Multiple Independent Levels of Security (MILS) Compliant Systems

    DTIC Science & Technology

    2012-10-01

    [Report documentation page and table-of-contents residue; recoverable items: dates covered MAR 2010 - APR 2012; section titles "A Framework for Multicore Information Flow Analysis" and "A Hypothetical Reference Architecture"; Figure 2: Pentium II Block Diagram.]

  13. [Features of control of electromagnetic radiation emitted by personal computers].

    PubMed

    Pal'tsev, Iu P; Buzov, A L; Kol'chugin, Iu I

    1996-01-01

    Measurements of PC electromagnetic radiation show that the main sources are PC blocks emitting waves at certain frequencies. Using wide-range detectors that measure field intensity to assess PC electromagnetic radiation gives unreliable results; more precise measurements with selective devices are required. It is therefore expedient to introduce the term "spectral density of field intensity" together with a maximum allowable level for it. The frequency spectrum of PC electromagnetic radiation is then divided into 4 ranges, one of which requires calculation of field intensity at each harmonic frequency, while the others are assessed by the spectral density of field intensity.

  14. Reviews.

    ERIC Educational Resources Information Center

    Journal of Chemical Education, 1988

    1988-01-01

    Reviews two computer programs: "Molecular Graphics," which allows molecule manipulation in three-dimensional space (requiring IBM PC with 512K, EGA monitor, and math coprocessor); and "Periodic Law," a database which contains up to 20 items of information on each of the first 103 elements (Apple II or IBM PC). (MVL)

  15. Computer Skills and Internet Use in Adults Aged 50-74 Years: Influence of Hearing Difficulties

    PubMed Central

    Clark, Daniel P A; Kang, Sujin; Ferguson, Melanie A

    2012-01-01

    Background The use of personal computers (PCs) and the Internet to provide health care information and interventions has increased substantially over the past decade. Yet the effectiveness of such an approach is highly dependent upon whether the target population has both access and the skill set required to use this technology. This is particularly relevant in the delivery of hearing health care because most people with hearing loss are over 50 years (average age for initial hearing aid fitting is 74 years). Although PC skill and Internet use by demographic factors have been examined previously, data do not currently exist that examine the effects of hearing difficulties on PC skill or Internet use in older adults. Objective To explore the effect that hearing difficulty has on PC skill and Internet use in an opportunistic sample of adults aged 50-74 years. Methods Postal questionnaires about hearing difficulty, PC skill, and Internet use (n=3629) were distributed to adults aged 50-74 years through three family physician practices in Nottingham, United Kingdom. A subsample of 84 respondents completed a second detailed questionnaire on confidence in using a keyboard, mouse, and track pad. Summed scores were termed the “PC confidence index.” The PC confidence index was used to verify the PC skill categories in the postal questionnaire (ie, never used a computer, beginner, and competent). Results The postal questionnaire response rate was 36.78% (1298/3529) and 95.15% (1235/1298) of these contained complete information. There was a significant between-category difference for PC skill by PC confidence index (P<.001), thus verifying the three-category PC skill scale. PC and Internet use was greater in the younger respondents (50-62 years) than in the older respondents (63-74 years). The younger group’s PC and Internet use was 81.0% and 60.9%, respectively; the older group’s PC and Internet use was 54.0% and 29.8%, respectively. Those with slight hearing difficulties in the older group had significantly greater odds of PC use compared to those with no hearing difficulties (odds ratio [OR]=1.57, 95% confidence interval [CI] 1.06-2.30, P=.02). Those with moderate+ hearing difficulties had lower odds of PC use compared with those with no hearing difficulties, both overall (OR=0.58, 95% CI 0.39-0.87, P=.008) and in the younger group (OR=0.49, 95% CI 0.26-0.86, P=.008). Similar results were demonstrated for Internet use by age group (older: OR=1.57, 95% CI 0.99-2.47, P=.05; younger: OR=0.32, 95% CI 0.16-0.62, P=.001). Conclusions Hearing health care is of particular relevance to older adults because of the prevalence of age-related hearing loss. Our data show that older adults experiencing slight hearing difficulty have increased odds of greater PC skill and Internet use than those reporting no difficulty. These findings suggest that PC and Internet delivery of hearing screening, information, and intervention is feasible for people between 50-74 years who have hearing loss, but who would not typically present to an audiologist. PMID:22954484

  16. ARDS User Manual

    NASA Technical Reports Server (NTRS)

    Fleming, David P.

    2001-01-01

    Personal computers (PCs) are now used extensively for engineering analysis. Their capability exceeds that of mainframe computers of only a few years ago. Programs originally written for mainframes have been ported to PCs to make their use easier. One of these programs is ARDS (Analysis of Rotor Dynamic Systems), which was developed at Arizona State University (ASU) by Nelson et al. to quickly and accurately analyze rotor steady-state and transient response using the method of component mode synthesis. The original ARDS program was ported to the PC in 1995. Several extensions were made at ASU to increase the capability of mainframe ARDS, and these extensions have also been incorporated into the PC version of ARDS. Each mainframe extension had its own user manual, generally covering only that extension; thus, exploiting the full capability of ARDS required a large set of user manuals. Moreover, necessary changes and enhancements for PC ARDS were undocumented. The present document is intended to remedy those problems by combining all pertinent information needed for the use of PC ARDS into one volume.

  17. Calculation of Weibull strength parameters, Batdorf flaw density constants and related statistical quantities using PC-CARES

    NASA Technical Reports Server (NTRS)

    Szatmary, Steven A.; Gyekenyesi, John P.; Nemeth, Noel N.

    1990-01-01

    This manual describes the operation and theory of the PC-CARES (Personal Computer-Ceramic Analysis and Reliability Evaluation of Structures) computer program for the IBM PC and compatibles running the PC-DOS/MS-DOS or IBM/MS OS/2 (version 1.1 or higher) operating systems. The primary purpose of this code is to estimate Weibull material strength parameters, the Batdorf crack density coefficient, and other related statistical quantities. Included in the manual is the description of the calculation of the shape and scale parameters of the two-parameter Weibull distribution using the least-squares analysis and maximum likelihood methods for volume- and surface-flaw-induced fracture in ceramics with complete and censored samples. The methods for detecting outliers and for calculating the Kolmogorov-Smirnov and the Anderson-Darling goodness-of-fit statistics and 90 percent confidence bands about the Weibull line, as well as the techniques for calculating the Batdorf flaw-density constants, are also described.
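
    For readers who want to experiment with the two estimation approaches described here, the following Python sketch (not PC-CARES itself) fits a two-parameter Weibull distribution by maximum likelihood and by least squares on median-rank failure probabilities; the simulated strengths stand in for measured fracture data.

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(0)
strengths = 400.0 * rng.weibull(10.0, size=30)   # simulated strengths, MPa

# Maximum likelihood, with the location fixed at 0 (two-parameter Weibull).
m_mle, _, s0_mle = weibull_min.fit(strengths, floc=0)

# Least squares on the linearized CDF: ln(-ln(1-F)) = m*ln(s) - m*ln(s0),
# with F estimated by median ranks.
x = np.sort(strengths)
n = len(x)
F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)      # median-rank estimator
m_ls, intercept = np.polyfit(np.log(x), np.log(-np.log(1.0 - F)), 1)
s0_ls = np.exp(-intercept / m_ls)

print(f"MLE: m={m_mle:.2f}, s0={s0_mle:.1f};  LSQ: m={m_ls:.2f}, s0={s0_ls:.1f}")
```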

  18. A stochastic asymptotic-preserving scheme for a kinetic-fluid model for disperse two-phase flows with uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Shi, E-mail: sjin@wisc.edu; Institute of Natural Sciences, School of Mathematical Science, MOELSEC and SHL-MAC, Shanghai Jiao Tong University, Shanghai 200240; Shu, Ruiwen, E-mail: rshu2@math.wisc.edu

    In this paper we consider a kinetic-fluid model for disperse two-phase flows with uncertainty. We propose a stochastic asymptotic-preserving (s-AP) scheme in the generalized polynomial chaos stochastic Galerkin (gPC-sG) framework, which allows the efficient computation of the problem in both kinetic and hydrodynamic regimes. The s-AP property is proved by deriving the equilibrium of the gPC version of the Fokker–Planck operator. The coefficient matrices that arise in a Helmholtz equation and a Poisson equation, essential ingredients of the algorithms, are proved to be positive definite under reasonable and mild assumptions. The computation of the gPC version of a translation operator that arises in the inversion of the Fokker–Planck operator is accelerated by a spectrally accurate splitting method. Numerical examples illustrate the s-AP property and the efficiency of the gPC-sG method in various asymptotic regimes.

  19. Establishing a communications link between two different, incompatible, personal computers: with practical examples and illustrations and program code.

    PubMed

    Davidson, R W

    1985-01-01

    The increasing need to communicate and exchange data can be met by personal microcomputers. The need to transfer information stored in one type of personal computer to another type of personal computer is often encountered when integrating multiple sources of information stored in different, incompatible computers in medical research and practice. A practical example is demonstrated with two relatively inexpensive, commonly used computers, the IBM PC jr. and the Apple IIe. The basic input/output (I/O) interface chips for serial communication in each computer are joined using a null-modem connector and cable to form a communications link. Using the BASIC (Beginner's All-purpose Symbolic Instruction Code) computer language and the Disk Operating System (DOS), the communications handshaking protocol and file transfer are established between the two computers. The BASIC programming languages used are Applesoft (Apple Personal Computer) and PC BASIC (IBM Personal Computer).
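
    The same handshake-then-transfer pattern can be sketched today with the pyserial package; the port name, baud rate, and the ACK protocol below are assumptions for illustration, not the article's BASIC listings.

```python
import serial  # pyserial

# One end of a null-modem link; port name and settings are assumptions.
port = serial.Serial("/dev/ttyUSB0", baudrate=300, bytesize=8,
                     parity=serial.PARITY_NONE, stopbits=1, timeout=5)

port.write(b"READY\r\n")                 # simple software handshake
if port.readline().strip() == b"ACK":
    with open("data.txt", "rb") as f:
        for line in f:
            port.write(line)             # send the file line by line
            port.readline()              # wait for a per-line acknowledgement
port.close()
```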

  20. Automatic tuned MRI RF coil for multinuclear imaging of small animals at 3T.

    PubMed

    Muftuler, L Tugan; Gulsen, Gultekin; Sezen, Kumsal D; Nalcioglu, Orhan

    2002-03-01

    We have developed an MRI RF coil whose tuning can be adjusted automatically between 120 and 128 MHz for sequential spectroscopic imaging of hydrogen and fluorine nuclei at a field strength of 3 T. Variable-capacitance (varactor) diodes were placed on each rung of an eight-leg low-pass birdcage coil to change the tuning frequency of the coil. The diode junction capacitance can be controlled by the amount of applied reverse bias voltage. Impedance matching was also done automatically by another pair of varactor diodes to obtain the maximum SNR at each frequency. The same bias voltage was applied to the tuning varactors on all rungs to avoid perturbations in the coil. A network analyzer was used to monitor matching and tuning of the coil. A Pentium PC controlled the analyzer through the GPIB bus. Code written in LabVIEW was used to communicate with the network analyzer and adjust the bias voltages of the varactors via D/A converters. Serially programmed D/A converter devices were used to apply the bias voltages to the varactors. Isolation amplifiers were used together with RF choke inductors to provide isolation between the RF coil and the DC bias lines. We acquired proton and fluorine images sequentially from a multicompartment phantom using the designed coil. Good matching and tuning were obtained at both resonance frequencies. The tuning and matching of the coil were changed from one resonance frequency to the other within 60 s. (c) 2002 Elsevier Science (USA).
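
    A modern analogue of this control loop can be sketched in Python with pyvisa; the GPIB address, the SCPI marker query, and the set_bias stub are placeholders, since the paper's LabVIEW code and instrument command set are not given.

```python
import numpy as np
import pyvisa

rm = pyvisa.ResourceManager()
vna = rm.open_resource("GPIB0::16::INSTR")   # analyzer address: an assumption

def set_bias(volts):
    """Stub for the serially programmed D/A converters biasing the varactors."""
    ...

def s11_magnitude():
    # SCPI commands vary by analyzer; this marker query is a placeholder.
    return float(vna.query("CALC:MARK:Y?").split(",")[0])

def reflection_at(volts):
    set_bias(volts)
    return s11_magnitude()

# Coarse search: the bias voltage minimizing reflection at the target
# frequency gives the best tune/match point.
best = min(np.linspace(0.0, 30.0, 61), key=reflection_at)
print("best bias:", best, "V")
```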

  1. Color analysis of the human airway wall

    NASA Astrophysics Data System (ADS)

    Gopalakrishnan, Deepa; McLennan, Geoffrey; Donnelley, Martin; Delsing, Angela; Suter, Melissa; Flaherty, Dawn; Zabner, Joseph; Hoffman, Eric A.; Reinhardt, Joseph M.

    2002-04-01

    A bronchoscope can be used to examine the mucosal surface of the airways for abnormalities associated with a variety of lung diseases. The diagnosis of these abnormalities through bronchoscopy is based, in part, on changes in airway wall color. It is therefore important to characterize the normal color inside the airways. We propose a standardized method to calibrate the bronchoscopic imaging system and to tabulate the normal colors of the airway. Our imaging system consists of a Pentium PC and video frame grabber, coupled with a true-color bronchoscope. The calibration procedure uses 24 standard color patches. Images of these color patches at three different distances (1, 1.5, and 2 cm) were acquired using the bronchoscope in a darkened room, to assess repeatability and sensitivity to illumination. The images from the bronchoscope are in a device-dependent red-green-blue (RGB) color space, which was converted to a tri-stimulus image and then into a device-independent sRGB image by a fixed polynomial transformation. Images were acquired from five normal human volunteer subjects, two cystic fibrosis (CF) patients, and one normal heavy-smoker subject. The hue and saturation values of regions within the normal airway were tabulated, and these values were compared with the values obtained from regions within the airways of the CF patients and the normal heavy smoker. Repeated measurements of the same region in the airways showed no measurable change in hue or saturation.
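
    The tabulation step - converting calibrated sRGB pixels to hue and saturation and summarizing a region - can be sketched as follows; the random array merely stands in for a calibrated image patch.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv

region = np.random.rand(64, 64, 3)   # stand-in for a calibrated sRGB region
hsv = rgb_to_hsv(region)             # hue, saturation, value, each in [0, 1]
hue, sat = hsv[..., 0], hsv[..., 1]

# Note: the naive mean is adequate only when hues cluster away from the
# 0/1 wrap-around, as reddish airway tissue does.
print(f"hue {hue.mean():.3f} +/- {hue.std():.3f}, "
      f"saturation {sat.mean():.3f} +/- {sat.std():.3f}")
```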

  2. Intelligent control system based on ARM for lithography tool

    NASA Astrophysics Data System (ADS)

    Chen, Changlong; Tang, Xiaoping; Hu, Song; Wang, Nan

    2014-08-01

    The control system of a traditional lithography tool is based on a PC and an MCU. The PC handles the complex algorithms and human-computer interaction and communicates with the MCU via a serial port; the MCU controls motors, electromagnetic valves, etc. This architecture has shortcomings such as large volume, high power consumption, and wasted PC resources. In this paper, an embedded intelligent control system for a lithography tool, based on ARM, is presented. The control system uses the S5PV210 as its processor, performing the functions of the PC in a traditional lithography tool, and provides good human-computer interaction through an LCD and a capacitive touch screen. Using Android 4.0.3 as the operating system, the equipment provides a polished, easy UI that makes control more user-friendly, and implements remote control and debugging, pushing product video information over the network. As a result, it is convenient for the equipment vendor to provide technical support to users. Finally, compared with a traditional lithography tool, this design eliminates the PC, using hardware resources efficiently and reducing cost and volume. Introducing an embedded OS and concepts from the Internet of Things into the design of lithography tools can be a development trend.

  3. THERMINATOR: THERMal heavy-IoN generATOR

    NASA Astrophysics Data System (ADS)

    Kisiel, Adam; Tałuć, Tomasz; Broniowski, Wojciech; Florkowski, Wojciech

    2006-04-01

    THERMINATOR is a Monte Carlo event generator designed for studying particle production in relativistic heavy-ion collisions performed at such experimental facilities as the SPS, RHIC, or LHC. The program implements thermal models of particle production with single freeze-out. It performs the following tasks: (1) generation of stable particles and unstable resonances at the chosen freeze-out hypersurface with the local phase-space density of particles given by the statistical distribution factors, (2) subsequent space-time evolution and decays of hadronic resonances in cascades, (3) calculation of the transverse-momentum spectra and numerous other observables related to the space-time evolution. The geometry of the freeze-out hypersurface and the collective velocity of expansion may be chosen from two successful models, the Cracow single-freeze-out model and the Blast-Wave model. All particles from the Particle Data Tables are used. The code is written in the object-oriented c++ language and complies with the standards of the ROOT environment. Program summary Program title: THERMINATOR Catalogue identifier: ADXL_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXL_v1_0 Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland RAM required to execute with typical data: 50 Mbytes Number of processors used: 1 Computer(s) for which the program has been designed: PC, Pentium III, IV, or Athlon, 512 MB RAM; not hardware dependent (any computer with the c++ compiler and the ROOT environment [R. Brun, F. Rademakers, Nucl. Instrum. Methods A 389 (1997) 81, http://root.cern.ch]) Operating system(s) for which the program has been designed: Linux: Mandrake 9.0, Debian 3.0, SuSE 9.0, Red Hat FEDORA 3, etc., Windows XP with Cygwin ver. 1.5.13-1 and gcc ver. 3.3.3 (cygwin special); not system dependent External routines/libraries used: ROOT ver. 4.02.00 Programming language: c++ Size of the package: 324 KB directory, 40 KB compressed distribution archive, without the ROOT libraries (see http://root.cern.ch for details on the ROOT requirements). The output files created by the code need 1.1 GB for each 500 events. Distribution format: tar gzip file Number of lines in distributed program, including test data, etc.: 6534 Number of bytes in distributed program, including test data, etc.: 41 828 Nature of the physical problem: Statistical models have proved to be very useful in the description of soft physics in relativistic heavy-ion collisions [P. Braun-Munzinger, K. Redlich, J. Stachel, 2003, nucl-th/0304013].
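
    As a toy illustration of the statistical distribution factors mentioned in task (1) - and not THERMINATOR's actual sampling code - the sketch below draws momentum magnitudes for a particle of mass m from a relativistic Boltzmann factor by rejection sampling.

```python
import numpy as np

T, m = 0.165, 0.139    # freeze-out temperature and pion mass, GeV (illustrative)
rng = np.random.default_rng(1)

def boltzmann(p):
    E = np.hypot(p, m)                 # E = sqrt(p^2 + m^2)
    return p**2 * np.exp(-E / T)       # momentum-space weight

p_max = 3.0                            # GeV; truncate the exponential tail
ceiling = boltzmann(np.linspace(0.0, p_max, 2000)).max()

def sample(n):
    out = []
    while len(out) < n:
        p = rng.uniform(0.0, p_max, n)
        keep = rng.uniform(0.0, ceiling, n) < boltzmann(p)
        out.extend(p[keep])
    return np.array(out[:n])

momenta = sample(10_000)
print("mean |p| =", momenta.mean(), "GeV")
```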

  4. Kinematical calculations of RHEED intensity oscillations during the growth of thin epitaxial films

    NASA Astrophysics Data System (ADS)

    Daniluk, Andrzej

    2005-08-01

    A practical computing algorithm working in real time has been developed for calculating the reflection high-energy electron diffraction (RHEED) intensity from a molecular beam epitaxy (MBE) growing surface. The calculations are based on kinematical diffraction theory. Simple mathematical models are used for the growth simulation in order to investigate the fundamental behavior of the reflectivity change during the growth of thin epitaxial films prepared using MBE. Program summary Title of program: GROWTH Catalogue identifier: ADVL Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVL Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Distribution format: tar.gz Computer for which the program is designed and others on which it has been tested: Pentium-based PC Operating systems or monitors under which the program has been tested: Windows 9x, XP, NT Programming language used: Object Pascal Memory required to execute with typical data: more than 1 MB Number of bits in a word: 64 bits Number of processors used: 1 Number of lines in distributed program, including test data, etc.: 10 989 Number of bytes in distributed program, including test data, etc.: 103 048 Nature of the physical problem: Reflection high-energy electron diffraction (RHEED) is a very useful technique for studying growth and surface analysis of thin epitaxial structures prepared using molecular beam epitaxy (MBE). The simplest approach to calculating the RHEED intensity during the growth of thin epitaxial films is the kinematical diffraction theory (often called the kinematical approximation), in which only a single scattering event is taken into account. The biggest advantage of this approach is that the RHEED intensity can be calculated in real time. The approach also facilitates an intuitive understanding of the growth mechanism and surface morphology [P.I. Cohen, G.S. Petrich, P.R. Pukite, G.J. Whaley, A.S. Arrott, Surf. Sci. 216 (1989) 222]. Method of solution: Epitaxial growth of thin films is modeled by a set of non-linear differential equations [P.I. Cohen, G.S. Petrich, P.R. Pukite, G.J. Whaley, A.S. Arrott, Surf. Sci. 216 (1989) 222]. The Runge-Kutta method with adaptive stepsize control was used for solving the initial value problem for the non-linear differential equations [W.H. Press, B.P. Flannery, S.A. Teukolsky, W.T. Vetterling, Numerical Recipes in Pascal: The Art of Scientific Computing, first ed., Cambridge University Press, 1989; see also: Numerical Recipes in C++, second ed., Cambridge University Press, 1992]. Typical running time: The typical running time is machine and user-parameter dependent. Unusual features of the program: The program is distributed in the form of a main project Growth.dpr file and an independent Rhd.pas file and should be compiled using Object Pascal compilers, including Borland Delphi.
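
    A minimal version of the method of solution - assuming a simple layer-by-layer growth model and the anti-Bragg kinematic intensity, not the GROWTH program's exact equations - might look like this in Python:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Layer-by-layer growth: theta[n] is the coverage of layer n+1, and
# d(theta_n)/dt = F * (theta_{n-1} - theta_n) with theta_0 = 1 (substrate).
F, layers, t_end = 1.0, 12, 8.0        # deposition rate in ML/s; illustrative

def rhs(t, theta):
    below = np.concatenate(([1.0], theta[:-1]))
    return F * (below - theta)

sol = solve_ivp(rhs, (0.0, t_end), np.zeros(layers), dense_output=True)

t = np.linspace(0.0, t_end, 400)
theta = sol.sol(t)                                    # shape (layers, len(t))
cov = np.vstack([np.ones_like(t), theta])             # include the substrate
exposed = cov - np.vstack([theta, np.zeros((1, t.size))])

# Kinematic intensity at the anti-Bragg condition: alternating-phase sum
# over the exposed fraction of each level; it oscillates once per monolayer.
phase = (-1.0) ** np.arange(layers + 1)
intensity = (phase[:, None] * exposed).sum(axis=0) ** 2
print(intensity.min(), intensity.max())
```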

  5. Stopping computer crimes

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1989-01-01

    Two new books about intrusions and computer viruses remind us that attacks against our computers on networks are the actions of human beings. Cliff Stoll's book about the hacker who spent a year, beginning in August 1986, attempting to use the Lawrence Berkeley computer as a stepping-stone for access to military secrets is a spy thriller that illustrates the weaknesses of our password systems and the difficulties in compiling evidence against a hacker engaged in espionage. Pamela Kane's book about viruses that attack IBM PCs shows that viruses are the modern version of the old problem of a Trojan horse attack. It discusses the most famous viruses and their countermeasures, and it comes with a floppy disk of utility programs that will disinfect your PC and thwart future attacks.

  6. Mathematics Programming on the Apple II and IBM PC.

    ERIC Educational Resources Information Center

    Myers, Roy E.; Schneider, David I.

    1987-01-01

    Details the features of BASIC used in mathematics programming and provides the information needed to translate between the Apple II and IBM PC computers. Discusses inputting a user-defined function, setting scroll windows, displaying subscripts and exponents, variable names, mathematical characters, and special symbols. (TW)

  7. Detection of Post-Therapeutic Effects in Breast Carcinoma Using Hard X-Ray Index of Refraction Computed Tomography - A Feasibility Study.

    PubMed

    Grandl, Susanne; Sztrókay-Gaul, Anikó; Mittone, Alberto; Gasilov, Sergey; Brun, Emmanuel; Bravin, Alberto; Mayr, Doris; Auweter, Sigrid D; Hellerhoff, Karin; Reiser, Maximilian; Coan, Paola

    2016-01-01

    Neoadjuvant chemotherapy is the state-of-the-art treatment in advanced breast cancer. A correct visualization of the post-therapeutic tumor size is of high prognostic relevance. X-ray phase-contrast computed tomography (PC-CT) has been shown to provide improved soft-tissue contrast at a resolution formerly restricted to histopathology, at low doses. This study aimed at assessing ex-vivo the potential use of PC-CT for visualizing the effects of neoadjuvant chemotherapy on breast carcinoma. The analysis was performed on two ex-vivo formalin-fixed mastectomy samples containing an invasive carcinoma removed from two patients treated with neoadjuvant chemotherapy. Images were matched with corresponding histological slices. The visibility of typical post-therapeutic tissue changes was assessed and compared to results obtained with conventional clinical imaging modalities. PC-CT depicted the different tissue types with an excellent correlation to histopathology. Post-therapeutic tissue changes were correctly visualized and the residual tumor mass could be detected. PC-CT outperformed clinical imaging modalities in the detection of chemotherapy-induced tissue alterations including post-therapeutic tumor size. PC-CT might become a unique diagnostic tool in the prediction of tumor response to neoadjuvant chemotherapy. PC-CT might be used to assist during histopathological diagnosis, offering a high-resolution and high-contrast virtual histological tool for the accurate delineation of tumor boundaries.

  8. Validity of computational hemodynamics in human arteries based on 3D time-of-flight MR angiography and 2D electrocardiogram gated phase contrast images

    NASA Astrophysics Data System (ADS)

    Yu, Huidan (Whitney); Chen, Xi; Chen, Rou; Wang, Zhiqiang; Lin, Chen; Kralik, Stephen; Zhao, Ye

    2015-11-01

    In this work, we demonstrate the validity of 4-D patient-specific computational hemodynamics (PSCH) based on 3-D time-of-flight (TOF) MR angiography (MRA) and 2-D electrocardiogram (ECG) gated phase contrast (PC) images. The mesoscale lattice Boltzmann method (LBM) is employed to segment morphological arterial geometry from TOF MRA, to extract velocity profiles from ECG PC images, and to simulate fluid dynamics on a unified GPU accelerated computational platform. Two healthy volunteers are recruited to participate in the study. For each volunteer, a 3-D high resolution TOF MRA image and 10 2-D ECG gated PC images are acquired to provide the morphological geometry and the time-varying flow velocity profiles for necessary inputs of the PSCH. Validation results will be presented through comparisons of LBM vs. 4D Flow Software for flow rates and LBM simulation vs. MRA measurement for blood flow velocity maps. Indiana University Health (IUH) Values Fund.

  9. A PC-based multispectral scanner data evaluation workstation: Application to Daedalus scanners

    NASA Technical Reports Server (NTRS)

    Jedlovec, Gary J.; James, Mark W.; Smith, Matthew R.; Atkinson, Robert J.

    1991-01-01

    In late 1989, a personal computer (PC)-based data evaluation workstation was developed to support post-flight processing of Multispectral Atmospheric Mapping Sensor (MAMS) data. The MAMS Quick View System (QVS) is an image analysis and display system designed to provide the capability to evaluate Daedalus scanner data immediately after an aircraft flight. Even in its original form, the QVS offered the portability of a personal computer with the advanced analysis and display features of a mainframe image analysis system. It was recognized, however, that the original QVS had its limitations, both in speed and processing of MAMS data. Recent efforts are presented that focus on overcoming earlier limitations and adapting the system to a new data tape structure. In doing so, the enhanced Quick View System (QVS2) will accommodate data from any of the four spectrometers used with the Daedalus scanner on the NASA ER-2 platform. The QVS2 is designed around the AST 486/33 MHz CPU personal computer and comes with 10 EISA expansion slots, keyboard, and 4.0 Mbytes of memory. Specialized PC-McIDAS software provides the main image analysis and display capability for the system. Image analysis and display of the digital scanner data is accomplished with PC-McIDAS software.

  10. Grid and basis adaptive polynomial chaos techniques for sensitivity and uncertainty analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perkó, Zoltán, E-mail: Z.Perko@tudelft.nl; Gilli, Luca, E-mail: Gilli@nrg.eu; Lathouwers, Danny, E-mail: D.Lathouwers@tudelft.nl

    2014-03-01

    The demand for accurate and computationally affordable sensitivity and uncertainty techniques is constantly on the rise and has become especially pressing in the nuclear field with the shift to Best Estimate Plus Uncertainty methodologies in the licensing of nuclear installations. Besides traditional, already well developed methods – such as first order perturbation theory or Monte Carlo sampling – Polynomial Chaos Expansion (PCE) has been given a growing emphasis in recent years due to its simple application and good performance. This paper presents new developments of the research done at TU Delft on such Polynomial Chaos (PC) techniques. Our work is focused on the Non-Intrusive Spectral Projection (NISP) approach and adaptive methods for building the PCE of responses of interest. Recent efforts resulted in a new adaptive sparse grid algorithm designed for estimating the PC coefficients. The algorithm is based on Gerstner's procedure for calculating multi-dimensional integrals but proves to be computationally significantly cheaper, while at the same time it retains a similar accuracy as the original method. More importantly, the issue of basis adaptivity has been investigated and two techniques have been implemented for constructing the sparse PCE of quantities of interest. Not using the traditional full PC basis set leads to a further reduction in computational time, since the high order grids necessary for accurately estimating the near-zero expansion coefficients of polynomial basis vectors not needed in the PCE can be excluded from the calculation. Moreover, the sparse PC representation of the response is easier to handle when used for sensitivity analysis or uncertainty propagation due to the smaller number of basis vectors. The developed grid and basis adaptive methods have been implemented in Matlab as the Fully Adaptive Non-Intrusive Spectral Projection (FANISP) algorithm and were tested on four analytical problems. These show consistently good performance both in terms of the accuracy of the resulting PC representation of quantities and the computational costs associated with constructing the sparse PCE. Basis adaptivity also seems to make the employment of PC techniques possible for problems with a higher number of input parameters (15–20), alleviating a well known limitation of the traditional approach. The prospect of larger scale applicability and the simplicity of implementation makes such adaptive PC algorithms particularly appealing for the sensitivity and uncertainty analysis of complex systems and legacy codes.
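
    To make the NISP idea concrete, here is a one-dimensional Python sketch that estimates Hermite-chaos coefficients of a toy response by Gauss-Hermite quadrature; the sparse-grid and basis-adaptive machinery of FANISP goes far beyond this illustration.

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He   # probabilists' Hermite basis

def response(xi):                  # toy model with one standard-normal input
    return np.exp(0.3 * xi) + 0.1 * xi**2

nodes, weights = He.hermegauss(20)             # weight function exp(-x^2/2)
weights = weights / np.sqrt(2.0 * np.pi)       # normalize to the Gaussian measure

order = 6
coeffs = [
    np.sum(weights * response(nodes) * He.hermeval(nodes, [0.0] * k + [1.0]))
    / math.factorial(k)                        # E[He_k^2] = k!
    for k in range(order + 1)
]
print(np.round(coeffs, 4))
```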

  11. Male sexual dysfunctions: immersive virtual reality and multimedia therapy.

    PubMed

    Optale, Gabriele; Pastore, Massimiliano; Marin, Silvia; Bordin, Diego; Nasta, Alberto; Pianon, Carlo

    2004-01-01

    The study describes a therapeutic approach using psycho-dynamic psychotherapy integrated with a virtual environment (VE) for resolving impotence or, more precisely, erectile dysfunction (ED) of presumably psychological or mixed origin and premature ejaculation (PE). The plan for therapy consists of 12 sessions (15 if a sexual partner was involved) over a 25-week period on the ontogenetic development of male sexual identity, and the methods involved the use of a laptop PC, a joystick, a virtual reality (VR) helmet with a miniature television screen showing new specially designed CD-ROM programs created with Virtools under Windows 2000, and an audio CD. This study comprised 30 patients: 15 (10 suffering from ED and 5 from PE) plus 15 control patients (10 ED and 5 PE) who underwent the same therapeutic protocol but used an old VR helmet to interact with the old VE on a PC Pentium 133 with 16 Mb RAM. We also compared this study with another study we carried out on 160 men affected by sexual disorders who underwent the same therapeutic protocol but were treated using a VE created in Superscape VRT 5.6, again under Windows 2000 with portable tools. Comparing the groups of patients affected by ED and PE, significantly positive results emerged, without any important differences among the different VEs used. However, there was a percentage increase in undesirable physical reactions during the more realistic 15-minute VR experience created with the Virtools development kit. Psychotherapy alone normally requires long periods of treatment in order to resolve sexual dysfunctions. Considering the particular way in which full-immersion VR involves the subject who experiences it (he is totally unobserved and in complete privacy), we hypothesise that this methodological approach might speed up the therapeutic psycho-dynamic process, which eludes cognitive defences and directly stimulates the subconscious, and that better results could be obtained in the treatment of these sexual disorders. This method can be used by any psychotherapist, and it can be used alone or associated with pharmacotherapy prescribed by the urologist/andrologist as part of a therapeutic alliance.

  12. Parallel Computation of the Jacobian Matrix for Nonlinear Equation Solvers Using MATLAB

    NASA Technical Reports Server (NTRS)

    Rose, Geoffrey K.; Nguyen, Duc T.; Newman, Brett A.

    2017-01-01

    Demonstrating speedup for parallel code on a multicore shared-memory PC can be challenging in MATLAB due to underlying parallel operations that are often opaque to the user. This can limit the potential for improvement of serial code, even for so-called embarrassingly parallel applications. One such application is the computation of the Jacobian matrix inherent to most nonlinear equation solvers. Computation of this matrix represents the primary bottleneck in nonlinear solver speed, such that commercial finite element (FE) and multi-body-dynamic (MBD) codes attempt to minimize these computations. A timing study using MATLAB's Parallel Computing Toolbox was performed for numerical computation of the Jacobian. Several approaches for implementing parallel code were investigated, but only the single program multiple data (spmd) method using composite objects provided positive results. Parallel code speedup is demonstrated, but the goal of linear speedup through the addition of processors was not achieved due to PC architecture.
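
    The embarrassingly parallel structure is easy to see in a sketch: each Jacobian column is an independent finite-difference evaluation. The example below uses Python's ProcessPoolExecutor rather than MATLAB's spmd, and the residual function is invented for illustration.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def residual(x):                       # toy nonlinear system F(x) = 0
    return np.array([x[0]**2 + x[1] - 3.0,
                     x[0] + x[1]**3 - 5.0,
                     np.sin(x[0]) * x[1]])

def jac_column(args):
    x, j, h = args
    e = np.zeros_like(x)
    e[j] = h
    return (residual(x + e) - residual(x - e)) / (2.0 * h)   # central difference

def jacobian(x, h=1e-6, workers=4):
    # Columns are independent, so they can be evaluated in parallel.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        cols = list(pool.map(jac_column, [(x, j, h) for j in range(len(x))]))
    return np.column_stack(cols)

if __name__ == "__main__":
    print(jacobian(np.array([1.0, 2.0])))
```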

  13. SU-D-BRD-01: Cloud-Based Radiation Treatment Planning: Performance Evaluation of Dose Calculation and Plan Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Na, Y; Kapp, D; Kim, Y

    2014-06-01

    Purpose: To report the first experience on the development of a cloud-based treatment planning system and investigate the performance improvement of dose calculation and treatment plan optimization on the cloud computing platform. Methods: A cloud computing-based radiation treatment planning system (cc-TPS) was developed for clinical treatment planning. Three de-identified clinical head and neck, lung, and prostate cases were used to evaluate the cloud computing platform. The de-identified clinical data were encrypted with the 256-bit Advanced Encryption Standard (AES) algorithm. VMAT and IMRT plans were generated for the three de-identified clinical cases to determine the quality of the treatment plans and the computational efficiency. All plans generated from the cc-TPS were compared to those obtained with the PC-based TPS (pc-TPS). The performance evaluation of the cc-TPS was quantified as the speedup factors for Monte Carlo (MC) dose calculations and large-scale plan optimizations, as well as the performance ratios (PRs) of the amount of performance improvement compared to the pc-TPS. Results: Speedup factors were improved up to 14.0-fold depending on the clinical cases and plan types. The computation times for VMAT and IMRT plans with the cc-TPS were reduced by 91.1% and 89.4%, respectively, averaged over the clinical cases, compared to those with the pc-TPS. The PRs were mostly better for VMAT plans (1.0 ≤ PRs ≤ 10.6 for the head and neck case, 1.2 ≤ PRs ≤ 13.3 for the lung case, and 1.0 ≤ PRs ≤ 10.3 for the prostate cancer cases) than for IMRT plans. The isodose curves of plans on both cc-TPS and pc-TPS were identical for each of the clinical cases. Conclusion: A cloud-based treatment planning system has been set up, and our results demonstrate that the computational efficiency of treatment planning with the cc-TPS can be dramatically improved while maintaining the same plan quality as that obtained with the pc-TPS. This work was supported in part by the National Cancer Institute (1R01 CA133474) and by the Leading Foreign Research Institute Recruitment Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT and Future Planning (MSIP) (Grant No. 2009-00420).

  14. What's New with MS Office Suites

    ERIC Educational Resources Information Center

    Goldsborough, Reid

    2012-01-01

    If one buys a new PC, laptop, or netbook computer today, it probably comes preloaded with Microsoft Office 2010 Starter Edition. This is a significantly limited, advertising-laden version of Microsoft's suite of productivity programs, Microsoft Office. This continues the trend of PC makers providing ever more crippled versions of Microsoft's…

  15. Building a Better Biology Lab? Testing Tablet PC Technology in a Core Laboratory Course

    ERIC Educational Resources Information Center

    Pryor, Gregory; Bauer, Vernon

    2008-01-01

    Tablet PC technology can enliven the classroom environment because it is dynamic, interactive, and "organic," relative to the rigidity of chalkboards, whiteboards, overhead projectors, and PowerPoint presentations. Unlike traditional computers, tablet PCs employ "digital linking," allowing instructors and students to freehand annotate, clarify,…

  16. Design of Multimedia Situational Awareness Training for Pilots.

    ERIC Educational Resources Information Center

    Homan, Willem J.

    1998-01-01

    A recent development in aviation is the personal computer aviation training device (PC-ATD). This article provides an overview of instructional multimedia for pilot training, specifically for enhancing situational awareness (SA), a state in which a pilot's perceptions match reality. Discusses how PC-based trainers can be used to familiarize pilots…

  17. Computed Tomography Perfusion Improves Diagnostic Accuracy in Acute Posterior Circulation Stroke.

    PubMed

    Sporns, Peter; Schmidt, Rene; Minnerup, Jens; Dziewas, Rainer; Kemmling, André; Dittrich, Ralf; Zoubi, Tarek; Heermann, Philipp; Cnyrim, Christian; Schwindt, Wolfram; Heindel, Walter; Niederstadt, Thomas; Hanning, Uta

    2016-01-01

    Computed tomography perfusion (CTP) has a high diagnostic value in the detection of acute ischemic stroke in the anterior circulation. However, the diagnostic value in suspected posterior circulation (PC) stroke is uncertain, and whole brain volume perfusion is not yet in widespread use. We therefore studied the additional value of whole brain volume perfusion to non-contrast CT (NCCT) and CT angiography source images (CTA-SI) for infarct detection in patients with suspected acute ischemic PC stroke. This is a retrospective review of patients with suspected stroke in the PC in a database of our stroke center (n = 3,011) who underwent NCCT, CTA and CTP within 9 h after stroke onset and CT or MRI on follow-up. Images were evaluated for signs and pc-ASPECTS locations of ischemia. Three imaging models - A (NCCT), B (NCCT + CTA-SI) and C (NCCT + CTA-SI + CTP) - were compared with regard to the misclassification rate relative to gold standard (infarction in follow-up imaging) using the McNemar's test. Of 3,011 stroke patients, 267 patients had a suspected stroke in the PC and 188 patients (70.4%) evidenced a PC infarct on follow-up imaging. The sensitivity of Model C (76.6%) was higher compared with that of Model A (21.3%) and Model B (43.6%). CTP detected significantly more ischemic lesions, especially in the cerebellum, posterior cerebral artery territory and thalami. Our findings in a large cohort of consecutive patients show that CTP detects significantly more ischemic strokes in the PC than CTA and NCCT alone. © 2016 S. Karger AG, Basel.

  18. Database for the collection and analysis of clinical data and images of neoplasms of the sinonasal tract.

    PubMed

    Trimarchi, Matteo; Lund, Valerie J; Nicolai, Piero; Pini, Massimiliano; Senna, Massimo; Howard, David J

    2004-04-01

    The Neoplasms of the Sinonasal Tract software package (NSNT v 1.0) implements a complete visual database for patients with sinonasal neoplasia, facilitating standardization of data and statistical analysis. The software, which is compatible with the Macintosh and Windows platforms, provides a multiuser application with a dedicated server (on Windows NT or 2000 or Macintosh OS 9 or X and a network of clients) together with web access, if required. The system hardware consists of an Apple Power Macintosh G4 500 MHz computer with PCI bus, 256 MB of RAM plus a 60 GB hard disk, or any IBM-compatible computer with a Pentium II processor. Image acquisition may be performed with different frame-grabber cards for analog or digital video input of different standards (PAL, SECAM, or NTSC) and levels of quality (VHS, S-VHS, Betacam, Mini DV, DV). The visual database is based on 4th Dimension by 4D Inc, and video compression is performed in real-time MPEG format. Six sections have been developed: demographics, symptoms, extent of disease, radiology, treatment, and follow-up. Acquisition of data includes computed tomography and magnetic resonance imaging, histology, and endoscopy images, allowing sequential comparison. Statistical analysis integral to the program provides Kaplan-Meier survival curves. The development of a dedicated, user-friendly database for sinonasal neoplasia facilitates a multicenter network and has obvious clinical and research benefits.

  19. A PCA-Based method for determining craniofacial relationship and sexual dimorphism of facial shapes.

    PubMed

    Shui, Wuyang; Zhou, Mingquan; Maddock, Steve; He, Taiping; Wang, Xingce; Deng, Qingqiong

    2017-11-01

    Previous studies have used principal component analysis (PCA) to investigate the craniofacial relationship, as well as sex determination using facial factors. However, few studies have investigated the extent to which the choice of principal components (PCs) affects the analysis of craniofacial relationship and sexual dimorphism. In this paper, we propose a PCA-based method for visual and quantitative analysis, using 140 samples of 3D heads (70 male and 70 female), produced from computed tomography (CT) images. There are two parts to the method. First, skull and facial landmarks are manually marked to guide the model's registration so that dense corresponding vertices occupy the same relative position in every sample. Statistical shape spaces of the skull and face in dense corresponding vertices are constructed using PCA. Variations in these vertices, captured in every principal component (PC), are visualized to observe shape variability. The correlations of skull- and face-based PC scores are analysed, and linear regression is used to fit the craniofacial relationship. We compute the PC coefficients of a face based on this craniofacial relationship and the PC scores of a skull, and apply the coefficients to estimate a 3D face for the skull. To evaluate the accuracy of the computed craniofacial relationship, the mean and standard deviation of every vertex between the two models are computed, where these models are reconstructed using real PC scores and coefficients. Second, each PC in facial space is analysed for sex determination, for which support vector machines (SVMs) are used. We examined the correlation between PCs and sex, and explored the extent to which the choice of PCs affects the expression of sexual dimorphism. Our results suggest that skull- and face-based PCs can be used to describe the craniofacial relationship and that the accuracy of the method can be improved by using an increased number of face-based PCs. The results show that the accuracy of the sex classification is related to the choice of PCs. The highest sex classification rate is 91.43% using our method. Copyright © 2017 Elsevier Ltd. All rights reserved.
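
    The face-space pipeline - PCA followed by an SVM whose accuracy depends on the number of retained PCs - can be sketched with scikit-learn; the synthetic arrays below merely stand in for registered 3D face meshes.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, verts = 140, 500                    # 140 subjects, 500 3-D vertices each
X = rng.normal(size=(n, verts * 3))    # stand-in for registered meshes
y = np.repeat([0, 1], n // 2)          # 70 female, 70 male
X[y == 1] += 0.05                      # inject a weak, diffuse sex effect

for k in (5, 10, 20):                  # how the choice of PCs affects accuracy
    model = make_pipeline(PCA(n_components=k), SVC(kernel="linear"))
    print(k, cross_val_score(model, X, y, cv=5).mean())
```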

  20. Effective use of principal component analysis with high resolution remote sensing data to delineate hydrothermal alteration and carbonate rocks

    NASA Technical Reports Server (NTRS)

    Feldman, Sandra C.

    1987-01-01

    Methods of applying principal component (PC) analysis to high resolution remote sensing imagery were examined. Using Airborne Imaging Spectrometer (AIS) data, PC analysis was found to be useful for removing the effects of albedo and noise and for isolating the significant information on argillic alteration, zeolite, and carbonate minerals. An effective technique for PC analysis used as input the first 16 AIS bands, 7 intermediate bands, and the last 16 AIS bands from the 32 flat-field-corrected bands between 2048 and 2337 nm. Most of the significant mineralogical information resided in the second PC. PC color composites and density-sliced images provided a good mineralogical separation when applied to an AIS data set. Although computationally intensive, the advantage of PC analysis is that it employs algorithms which already exist on most image processing systems.

  1. Monte Carlo tests of the ELIPGRID-PC algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davidson, J.R.

    1995-04-01

    The standard tool for calculating the probability of detecting pockets of contamination called hot spots has been the ELIPGRID computer code of Singer and Wickman. The ELIPGRID-PC program has recently made this algorithm available for an IBM® PC. However, no known independent validation of the ELIPGRID algorithm exists. This document describes a Monte Carlo simulation-based validation of a modified version of the ELIPGRID-PC code. The modified ELIPGRID-PC code is shown to match Monte Carlo-calculated hot-spot detection probabilities to within ±0.5% for 319 out of 320 test cases. The one exception, a very thin elliptical hot spot located within a rectangular sampling grid, differed from the Monte Carlo-calculated probability by about 1%. These results provide confidence in the ability of the modified ELIPGRID-PC code to accurately predict hot-spot detection probabilities within an acceptable range of error.
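
    A Monte Carlo check of this kind can be sketched in a few lines: drop an elliptical hot spot at a random position and orientation relative to a square sampling grid and count how often a grid node lands inside it. The grid spacing and semi-axes below are arbitrary, not the report's test cases.

```python
import numpy as np

def detection_probability(grid, a, b, trials=200_000, seed=0):
    """P(a square grid of spacing `grid` hits an ellipse with semi-axes a, b)."""
    rng = np.random.default_rng(seed)
    cx = rng.uniform(0.0, grid, trials)      # hot-spot centre within one cell
    cy = rng.uniform(0.0, grid, trials)
    ang = rng.uniform(0.0, np.pi, trials)    # random orientation
    hits = np.zeros(trials, dtype=bool)
    reach = int(np.ceil(max(a, b) / grid)) + 1
    for i in range(-reach, reach + 1):       # only nearby nodes can hit
        for j in range(-reach, reach + 1):
            dx, dy = i * grid - cx, j * grid - cy
            u = dx * np.cos(ang) + dy * np.sin(ang)
            v = -dx * np.sin(ang) + dy * np.cos(ang)
            hits |= (u / a) ** 2 + (v / b) ** 2 <= 1.0
    return hits.mean()

print(detection_probability(grid=10.0, a=6.0, b=1.0))   # a thin ellipse
```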

  2. An overview of the evaluation plan for PC/MISI: PC-based Multiple Information System Interface

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Lim, Bee Lee; Hall, Philip P.

    1985-01-01

    An initial evaluation plan for the personal computer multiple information system interface (PC/MISI) project is discussed. The document is intended to be used as a blueprint for the evaluation of this system. Each objective of the design project is discussed along with the evaluation parameters and methodology to be used in the evaluation of the implementation's achievement of those objectives. The potential of the system for research activities related to more general aspects of information retrieval is also discussed.

  3. Relativistic central-field Green's functions for the RATIP package

    NASA Astrophysics Data System (ADS)

    Koval, Peter; Fritzsche, Stephan

    2005-11-01

    From perturbation theory, Green's functions are known for providing a simple and convenient access to the (complete) spectrum of atoms and ions. Having these functions available, they may help carry out perturbation expansions to any order beyond the first one. For most realistic potentials, however, the Green's functions need to be calculated numerically since an analytic form is known only for free electrons or for their motion in a pure Coulomb field. Therefore, in order to facilitate the use of Green's functions also for atoms and ions other than the hydrogen-like ions, here we provide an extension to the RATIP program which supports the computation of relativistic (one-electron) Green's functions in an—arbitrarily given—central-field potential V(r). Different computational modes have been implemented to define these effective potentials and to generate the radial Green's functions for all bound-state energies E<0. In addition, care has been taken to provide a user-friendly component of the RATIP package by utilizing features of the Fortran 90/95 standard such as data structures, allocatable arrays, or a module-oriented design. Program summary Title of program: XGREENS Catalogue number: ADWM Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWM Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Licensing provisions: None Computer for which the new version has been tested: PC Pentium II, III, IV, Athlon Installations: University of Kassel (Germany) Operating systems: SuSE Linux 8.2, SuSE Linux 9.0 Program language used in the new version: ANSI standard Fortran 90/95 Memory required to execute with typical data: On a standard grid (400 nodes), one central-field Green's function requires about 50 kBytes in RAM while approximately 3 MBytes are needed if saved as a two-dimensional array on external disc space No. of bits in a word: Real variables of double- and quad-precision are used Peripherals used: Disk for input/output CPU time required to execute test data: 2 min on a 450 MHz Pentium III processor No. of lines in distributed program, including test data, etc.: 82 042 No. of bytes in distributed program, including test data, etc.: 814 096 Distribution format: tar.gz Nature of the physical problem: In atomic perturbation theory, Green's functions may help carry out the summation over the complete spectrum of atoms and ions, including the (summation over the) bound states as well as an integration over the continuum [R.A. Swainson, G.W.F. Drake, J. Phys. A 24 (1991) 95]. Analytically, however, these functions are known only for free electrons (V(r)≡0) and for electrons in a pure Coulomb field (V(r)=-Z/r). For all other choices of the potential, in contrast, the Green's functions must be determined numerically. Method of solution: Relativistic Green's functions are generated for an arbitrary central-field potential V(r)=-Z(r)/r by using a piecewise linear approximation of the effective nuclear charge function Z(r) on some grid r(i=1,…,N): Z(r)=Z0i+Z1ir. Then, following McGuire's algorithm [E.J. McGuire, Phys. Rev. A 23 (1981) 186], the radial Green's functions are constructed from the (two) linearly independent solutions of the homogeneous equation [P. Morse, H. Feshbach, Methods of Theoretical Physics, McGraw-Hill, New York, 1953 (Part 1, p. 825)]. In the computation of these radial functions, the Kummer and Tricomi functions [J. Spanier, B. Keith, An Atlas of Functions, Springer, New York, 1987] are used extensively.
    Restrictions on the complexity of the problem: The main restrictions of the program concern the shape of the effective nuclear charge Z(r)=-rV(r), i.e. the choice of the potential, and the allowed energies. Apart from obeying the proper boundary conditions for a point-like nucleus, namely, Z(r→0)=Z>0 and Z(r→∞)=Z-N⩾0, the first derivative of the charge function Z(r) must be smaller than the (absolute value of the) energy of the Green's function, ∂Z(r)/∂r<|E|. Unusual features of the program: XGREENS has been designed as a part of the RATIP package [S. Fritzsche, J. Elec. Spec. Rel. Phen. 114-116 (2001) 1155] for the calculation of relativistic atomic transition and ionization properties. In a short dialog at the beginning of the execution, the user can specify the choice of the potential as well as the energies and the symmetries of the radial Green's functions to be calculated. Apart from central-field Green's functions, of course, the Coulomb Green's function [P. Koval, S. Fritzsche, Comput. Phys. Comm. 152 (2003) 191] can also be computed by selecting a constant nuclear charge Z(r)=Z. In order to test the generated Green's functions, moreover, we compare the two lowest bound-state orbitals calculated from the Green's functions with those generated separately for the given potential. Like the other components of the RATIP package, XGREENS makes careful use of the Fortran 90/95 standard.
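
    In the standard construction cited above (Morse and Feshbach), the radial Green's function is assembled from the two linearly independent homogeneous solutions u_1 (regular at the origin) and u_2 (regular at infinity), schematically:

```latex
G_E(r,r') \;=\; \frac{u_1(r_<)\,u_2(r_>)}{W(u_1,u_2)},
\qquad r_< = \min(r,r'), \quad r_> = \max(r,r'),
```

    where W(u_1,u_2) = u_1 u_2' - u_1' u_2 is the Wronskian of the two solutions, which for an equation in this radial form is independent of r.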

  4. Use of a simplified method of optical recording to identify foci of maximal neuron activity in the somatosensory cortex of white rats.

    PubMed

    Inyushin, M Y; Volnova, A B; Lenkov, D N

    2001-01-01

    Eight mongrel white male rats were studied under urethane anesthesia, and neuron activity evoked by mechanical and/or electrical stimulation of the contralateral whiskers was recorded in the primary somatosensory cortex. Recordings were made using a digital camera attached to the printer port of a Pentium 200MMX computer running standard programs. Optical images were obtained in the barrel-field zone using a differential signal, i.e., the difference signal for cortex images in control and experimental animals. The results showed that subtraction of averaged sequences of frames yielded images consisting of spots reflecting the probable position of activated groups of neurons. The most effective stimulation was natural low-frequency stimulation of the whiskers. The method can be used for preliminary mapping of cortical zones, as it provides rapid and reproducible testing of the activity of neuron ensembles over large areas of the cortex.

  5. Hardware implementation of hierarchical volume subdivision-based elastic registration.

    PubMed

    Dandekar, Omkar; Walimbe, Vivek; Shekhar, Raj

    2006-01-01

    Real-time, elastic, and fully automated 3D image registration is critical to the efficiency and effectiveness of many image-guided diagnostic and treatment procedures relying on multimodality image fusion or serial image comparison. True real-time performance will make many 3D image registration-based techniques clinically viable. Hierarchical volume subdivision-based image registration techniques are inherently faster than most elastic registration techniques, e.g., free-form deformation (FFD)-based techniques, and are more amenable to achieving real-time performance through hardware acceleration. Our group has previously reported an FPGA-based architecture for accelerating FFD-based image registration. In this article we show how our existing architecture can be adapted to support hierarchical volume subdivision-based image registration. A proof-of-concept implementation of the architecture achieved a speedup of 100 for elastic registration over an optimized software implementation on a 3.2 GHz Pentium III Xeon workstation. Due to the inherently parallel nature of hierarchical volume subdivision-based image registration techniques, further speedup can be achieved by using several computing modules in parallel.

  6. DAQ: Software Architecture for Data Acquisition in Sounding Rockets

    NASA Technical Reports Server (NTRS)

    Ahmad, Mohammad; Tran, Thanh; Nichols, Heidi; Bowles-Martinez, Jessica N.

    2011-01-01

    A multithreaded software application was developed by the Jet Propulsion Laboratory (JPL) to collect a set of correlated imagery, inertial measurement unit (IMU), and GPS data for a Wallops Flight Facility (WFF) sounding rocket flight. The data set will be used to advance terrain relative navigation (TRN) technology algorithms being researched at JPL. This paper describes the software architecture and the tests used to meet the timing and data rate requirements for the software used to collect the dataset. Also discussed are the challenges of using commercial off-the-shelf (COTS) flight hardware and open-source software, including multiple Camera Link (C-link) based cameras, a Pentium-M based computer, and the Linux Fedora 11 operating system. Additionally, the paper discusses the history of the software architecture's usage in other JPL projects and its applicability to future missions, such as CubeSats, UAVs, and research planes/balloons, as well as the human aspect of the project, especially JPL's Phaeton program, and the results of the launch.

  7. Message Passing vs. Shared Address Space on a Cluster of SMPs

    NASA Technical Reports Server (NTRS)

    Shan, Hongzhang; Singh, Jaswinder Pal; Oliker, Leonid; Biswas, Rupak

    2000-01-01

    The convergence of scalable computer architectures using clusters of PCs (or PC-SMPs) with commodity networking has made them an attractive platform for high-end scientific computing. Currently, message passing and shared address space (SAS) are the two leading programming paradigms for these systems. Message passing has been standardized with MPI and is the most common and mature programming approach. However, message-passing code development can be extremely difficult, especially for irregularly structured computations. SAS offers substantial ease of programming but may suffer from performance limitations due to poor spatial locality and high protocol overhead. In this paper, we compare the performance of, and programming effort required for, six applications under both programming models on a 32-CPU PC-SMP cluster. Our application suite consists of codes that typically do not exhibit high efficiency under shared-memory programming, due to their high communication-to-computation ratios and complex communication patterns. Results indicate that SAS can achieve about half the parallel efficiency of MPI for most of our applications; however, on certain classes of problems, SAS performance is competitive with MPI. We also present new algorithms for improving the PC cluster performance of MPI collective operations.
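
    The contrast between the two models is visible even in a toy program: under message passing, every communication is explicit in the code. A minimal mpi4py sketch of a distributed reduction follows (run under an MPI launcher); it is illustrative, not part of the paper's application suite.

```python
# Run with e.g.: mpiexec -n 4 python reduce_demo.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each rank owns a strided slice of the data; the reduction is an explicit
# communication step, unlike a SAS program that would read a shared array.
local = np.arange(rank, 1000, size, dtype=np.float64)
total = comm.allreduce(local.sum(), op=MPI.SUM)
if rank == 0:
    print("global sum:", total)
```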

  8. Numerical investigation of diffraction of acoustic waves by phononic crystals

    NASA Astrophysics Data System (ADS)

    Moiseyenko, Rayisa P.; Declercq, Nico F.; Laude, Vincent

    2012-05-01

    Diffraction and transmission of acoustic waves by two-dimensional phononic crystals (PCs) composed of steel rods in water are investigated in this paper. Finite element simulations were performed to compute the pressure fields generated by a line source incident on a finite-size PC. These field maps are analyzed based on the complex band structure of the infinite periodic PC. Finite-size computations indicate that the exponential decrease of the transmission at deaf frequencies is much stronger than that in Bragg band gaps.

  9. A PC-based bus monitor program for use with the transport systems research vehicle RS-232 communication interfaces

    NASA Technical Reports Server (NTRS)

    Easley, Wesley C.

    1991-01-01

    Experiment-critical use of RS-232 data buses in the Transport Systems Research Vehicle (TSRV) operated by the Advanced Transport Operating Systems Program Office at the NASA Langley Research Center has recently increased. Each application utilizes a number of nonidentical computer and peripheral configurations and requires task-specific software development. To aid these development tasks, an IBM PC-based RS-232 bus monitoring system was produced. It can simultaneously monitor two communication ports of a PC or clone, including the nonstandard bus expansion of the TSRV Grid laptop computers. Display occurs in a separate window for each port's input, with binary display being selectable. A number of other features, including binary log files, screen capture to files, and a full range of communication parameters, are provided.

  10. Self-Admitted Pretensions of Mac Users on a Predominantly PC University Campus

    ERIC Educational Resources Information Center

    Firmin, Michael W.; Wood, Whitney L. Muhlenkamp; Firmin, Ruth L.; Wood, Jordan C.

    2010-01-01

    The present qualitative research study addressed the overall research question of college students' pretension dynamics in the context of a university setting. Thirty-five Mac users were interviewed on a university campus that exclusively supports PC machines. Mac users shared four self-admitted pretensions related to using Macintosh computers.…

  11. PC vs. Mac--Which Way Should You Go?

    ERIC Educational Resources Information Center

    Wodarz, Nan

    1997-01-01

    Outlines the factors in hardware, software, and administration to consider in developing specifications for choosing a computer operating system. Compares Microsoft Windows 95/NT that runs on PC/Intel-based systems and System 7.5 that runs on the Apple-based systems. Lists reasons why the Microsoft platform clearly stands above the Apple platform.…

  12. Poor computer access hinders use of web forums to exchange ideas.

    PubMed

    Duffin, Christian

    2010-09-29

    'There was one PC between seven in my team when I was in district nursing. It's not changed a lot.' 'I am sharing a PC with an unknown number of people - until recently, we did not even have access to the intranet, we had to go to another building.'

  13. PC Games and the Teaching of History

    ERIC Educational Resources Information Center

    McMichael, Andrew

    2007-01-01

    Although the use of PC games in the history classroom might be relatively new, the ideas for these assignments and the theory behind their use borrows heavily from a number of areas and combines different pedagogical techniques. Using computer games allows teachers to recombine disparate teaching threads into something novel that will serve…

  14. A Simulation of AI Programming Techniques in BASIC.

    ERIC Educational Resources Information Center

    Mandell, Alan

    1986-01-01

    Explains the functions of and the techniques employed in expert systems. Offers the program "The Periodic Table Expert" as a model for using artificial intelligence techniques in BASIC. Includes the program listing and directions for its use on: Tandy 1000, 1200, and 2000; IBM PC; PC Jr; TRS-80; and Apple computers. (ML)

  15. Decimetre Waves and Cerebellar Cortex Key Element - Purkinje Cells

    NASA Astrophysics Data System (ADS)

    Maharramov, Akif A.

    2007-04-01

    Acute experiments were carried out on decerebrated and anaesthetized adult cats exposed to decimetre-range microwaves (DRM) (λ=65 cm, exposure duration at least 10 min) with the help of a contact applicator of 2 cm radius located on the temporal part of the head, with the cerebellum at the centre of the irradiation projection, delivered from the portable physiotherapeutic apparatus "Romashka". Extracellular recordings of Purkinje cell (PC) impulse activity were made with glass microelectrodes. Statistical and computational analyses of PC activity were carried out using histograms - the characteristic distributions of the number of interimpulse intervals (II) between the electrical discharges of a neuron over II durations - drawn up with the help of a computer. The results revealed the reaction of PCs to DRM as a succession in which electrophysiological parameters began to react: first a decrease in the duration of the known "inhibitory pause"; then an increase in "simple spike" and subsequently "complex spike" frequencies; and finally an increase in the durations of "big interimpulse intervals", a parameter introduced for the first time by us. In this way, the results show the "evolutionary" nature of the interaction between electromagnetic fields and living objects.

  16. Bayesian bivariate meta-analysis of diagnostic test studies with interpretable priors.

    PubMed

    Guo, Jingyi; Riebler, Andrea; Rue, Håvard

    2017-08-30

    In a bivariate meta-analysis, the number of diagnostic studies involved is often very low, so that frequentist methods may run into problems. Using Bayesian inference is particularly attractive, as informative priors that add a small amount of information can stabilise the analysis without overwhelming the data. However, Bayesian analysis is often computationally demanding, and the selection of the prior for the covariance matrix of the bivariate structure is crucial with little data. The integrated nested Laplace approximations method provides an efficient solution to the computational issues by avoiding any sampling, but the important question of priors remains. We explore the penalised complexity (PC) prior framework for specifying informative priors for the variance parameters and the correlation parameter. PC priors facilitate model interpretation and hyperparameter specification, as expert knowledge can be incorporated intuitively. We conduct a simulation study to compare the properties and behaviour of differently defined PC priors to currently used priors in the field. The simulation study shows that the PC prior seems beneficial for the variance parameters. The use of PC priors for the correlation parameter results in more precise estimates when specified in a sensible neighbourhood around the truth. To investigate the usage of PC priors in practice, we reanalyse a meta-analysis using the telomerase marker for the diagnosis of bladder cancer and compare the results with those obtained by other commonly used modelling approaches. Copyright © 2017 John Wiley & Sons, Ltd.
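
    For a single variance parameter, the PC-prior construction used here reduces, following Simpson et al.'s framework, to an exponential prior on the standard deviation, with the rate fixed by one interpretable probability statement supplied by the user:

```latex
\pi(\sigma) = \lambda\, e^{-\lambda\sigma}, \qquad
\Pr(\sigma > u) = \alpha \;\Longrightarrow\; \lambda = -\frac{\ln\alpha}{u}.
```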

  17. Computer Virus Bibliography, 1988-1989.

    ERIC Educational Resources Information Center

    Bologna, Jack, Comp.

    This bibliography lists 14 books, 154 journal articles, 34 newspaper articles, and 3 research papers published during 1988-1989 on the subject of computer viruses, software protection and 'cures', virus hackers, and other related issues. Some of the sources listed include Computers and Security, Computer Security Digest, PC Week, Time, the New…

  18. IPEG- IMPROVED PRICE ESTIMATION GUIDELINES (IBM 370 VERSION)

    NASA Technical Reports Server (NTRS)

    Chamberlain, R. G.

    1994-01-01

    The Improved Price Estimation Guidelines, IPEG, program provides a simple yet accurate estimate of the price of a manufactured product. IPEG facilitates sensitivity studies of price estimates at considerably less expense than would be incurred by using the Standard Assembly-line Manufacturing Industry Simulation, SAMIS, program (COSMIC program NPO-16032). A difference of less than one percent between the IPEG and SAMIS price estimates has been observed with realistic test cases. However, the IPEG simplification of SAMIS allows the analyst with limited time and computing resources to perform a greater number of sensitivity studies than with SAMIS. Although IPEG was developed for the photovoltaics industry, it is readily adaptable to any standard assembly-line type of manufacturing industry. IPEG estimates the annual production price per unit. The input data includes cost of equipment, space, labor, materials, supplies, and utilities. Production can be simulated on an industry-wide or process-wide basis. Once the IPEG input file is prepared, the original price is estimated and sensitivity studies may be performed. The IPEG user selects a sensitivity variable and a set of values. IPEG will compute a price estimate and a variety of other cost parameters for every specified value of the sensitivity variable. IPEG is designed as an interactive system and prompts the user for all required information and offers a variety of output options. The IPEG/PC program is written in TURBO PASCAL for interactive execution on an IBM PC computer under DOS 2.0 or above with at least 64K of memory. The IBM PC color display and color graphics adapter are needed to use the plotting capabilities in IPEG/PC. IPEG/PC was developed in 1984. The original IPEG program is written in SIMSCRIPT II.5 for interactive execution and has been implemented on an IBM 370 series computer with a central memory requirement of approximately 300K of 8-bit bytes. The original IPEG was developed in 1980.
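
    The sensitivity loop described above is easy to picture in miniature. The sketch below is hypothetical: the cost model is an invented placeholder, not the IPEG formulas, and serves only to show the sweep structure:

    ```python
    # Hypothetical sketch of an IPEG-style sensitivity study: sweep one
    # input variable and recompute a unit-price estimate for each value.
    def unit_price(equipment, space, labor, materials, supplies, utilities,
                   annual_units):
        annual_cost = equipment + space + labor + materials + supplies + utilities
        return annual_cost / annual_units

    baseline = dict(equipment=120_000, space=30_000, labor=250_000,
                    materials=90_000, supplies=15_000, utilities=20_000,
                    annual_units=50_000)

    # Sensitivity variable: labor cost, over a set of user-chosen values.
    for labor in (200_000, 250_000, 300_000, 350_000):
        inputs = dict(baseline, labor=labor)
        print(f"labor={labor:>7}  price/unit={unit_price(**inputs):.2f}")
    ```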

  19. Basic ICT adoption and use by general practitioners: an analysis of primary care systems in 31 European countries.

    PubMed

    De Rosis, Sabina; Seghieri, Chiara

    2015-08-22

    There is general consensus that the appropriate development and use of information and communication technologies (ICT) are crucial to the delivery of effective primary care (PC). Several countries are defining policies to support and promote a structural change of the health care system through the introduction of ICT. This study analyses the state of development of basic ICT in the PC systems of 31 European countries, with the aim of describing the extent of, and main purposes for, computer use by General Practitioners (GPs) across Europe. Additionally, trends over time have been analysed. Descriptive statistical analysis was performed on data from the QUALICOPC (Quality and Costs of Primary Care in Europe) survey to describe geographic differences in general computer use and in specific computerized clinical functions for different health-related purposes, such as prescribing, medication checking, generating health records and searching for medical information on the Internet. While all the countries have achieved near-universal adoption of computers in their primary care practices, with only a few countries near or under the 90 % boundary, the computerisation of primary care clinical functions shows wide variability of adoption within and among countries and, in several cases (such as southern and central-eastern Europe), considerable room for improvement. At the European level, more effort could be made to support southern and central-eastern Europe in closing the gap in the adoption and use of ICT in PC. In particular, more attention seems to be needed on current usages of the computer in PC, by focusing policies and actions on improving the appropriate usages that can affect the quality and costs of PC and can facilitate an interconnected health care system. However, policies and investments seem necessary but not sufficient to achieve these goals. Organizational, behavioural and also networking aspects should be taken into consideration.

  20. Computer-assisted three-dimensional reconstructions of ( sup 14 C)-2-deoxy-D-glucose metabolism in cat lumbosacral spinal cord following cutaneous stimulation of the hindfoot

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crockett, D.P.; Smith, W.K.; Proshansky, E.

    1989-10-08

    We report on computer-assisted three-dimensional reconstruction of spinal cord activity associated with stimulation of the plantar cushion (PC) as revealed by (14C)-2-deoxy-D-glucose (2-DG) serial autoradiographs. Moderate PC stimulation in cats elicits a reflex phasic plantar flexion of the toes. Four cats were chronically spinalized at about T6 under barbiturate anesthesia. Four to 11 days later, the cats were injected (i.v.) with 2-DG (100 microCi/kg) and the PC was electrically stimulated with needle electrodes at 2-5 times threshold for eliciting a reflex. Following stimulation, the spinal cord was processed for autoradiography. Subsequently, autoradiographs, representing approximately 8-18 mm from spinal segments L6-S1, were digitized for computer analysis and 3-D reconstruction. Several strategies of analysis were employed: (1) Three-dimensional volume images were color-coded to represent different levels of functional activity. (2) On the reconstructed volumes, virtual sections were made in the horizontal, sagittal, and transverse planes to view regions of 2-DG activity. (3) In addition, we were able to sample different regions within the grey and white matter semi-quantitatively (i.e., pixel intensity) from section to section to reveal differences between ipsi- and contralateral activity, as well as possible variation between sections. These analyses revealed 2-DG activity associated with moderate PC stimulation, not only in the ipsilateral dorsal horn as we had previously demonstrated, but also in both the ipsilateral and contralateral ventral horns, as well as in the intermediate grey matter. The use of novel computer analysis techniques--combined with an unanesthetized preparation--enabled us to demonstrate that the increased metabolic activity in the lumbosacral spinal cord associated with PC stimulation was much more extensive than had heretofore been observed.

  1. Temporal Accuracy and Modern High Performance Processors: A Case Study Using Pentium Pro

    DTIC Science & Technology

    1998-10-15

    conducted. We discuss the results of our experiments and how these results will be used for implementing the next release of the Maruti hard real-time operating system… Even though the resolution of the APIC timer is not as good as the TSC counter, an interruptible timer may be used in several ways in a real-time operating system. The objective…

  2. Personal Computer (PC) Thermal Analyzer

    DTIC Science & Technology

    1990-03-01

    demonstrate the power of the PC Thermal Analyzer, it was compared with an existing thermal analysis method. Specifically, the PC Thermal Analyzer was… [OCR residue of two figures omitted: an expert-system block diagram (Knowledge Base, Inference Mechanisms, User Interface) and a board-assembly illustration.] Sample prompts: (1) What is the … temperature in degrees centigrade? (2) What is the total heat output (power dissipation) in watts?

  3. Using Raspberry Pi to Teach Computing "Inside Out"

    ERIC Educational Resources Information Center

    Jaokar, Ajit

    2013-01-01

    This article discusses the evolution of computing education in preparing for the next wave of computing. With the proliferation of mobile devices, most agree that we are living in a "post-PC" world. Using the Raspberry Pi computer platform, based in the UK, as an example, the author discusses computing education in a world where the…

  4. A Novel Method for Characterization of Superconductors: Physical Measurements and Modeling of Thin Films

    NASA Technical Reports Server (NTRS)

    Kim, B. F.; Moorjani, K.; Phillips, T. E.; Adrian, F. J.; Bohandy, J.; Dolecek, Q. E.

    1993-01-01

    A method for characterization of granular superconducting thin films has been developed which encompasses both the morphological state of the sample and its fabrication process parameters. The broad scope of this technique is due to the synergism between experimental measurements and their interpretation using numerical simulation. Two novel technologies form the substance of this system: the magnetically modulated resistance method for characterizing superconductors; and a powerful new computer peripheral, the Parallel Information Processor card, which provides enhanced computing capability for PCs. This enhancement allows PCs to operate at speeds approaching those of supercomputers, making atomic-scale simulations possible on low-cost machines. The present development of this system involves the integration of these two technologies using mesoscale simulations of thin-film growth. A future stage of development will incorporate atomic-scale modeling.

  5. A personal computer-based nuclear magnetic resonance spectrometer

    NASA Astrophysics Data System (ADS)

    Job, Constantin; Pearson, Robert M.; Brown, Michael F.

    1994-11-01

    Nuclear magnetic resonance (NMR) spectroscopy using personal computer-based hardware has the potential of enabling the application of NMR methods to fields where conventional state of the art equipment is either impractical or too costly. With such a strategy for data acquisition and processing, disciplines including civil engineering, agriculture, geology, archaeology, and others have the possibility of utilizing magnetic resonance techniques within the laboratory or conducting applications directly in the field. Another aspect is the possibility of utilizing existing NMR magnets which may be in good condition but unused because of outdated or nonrepairable electronics. Moreover, NMR applications based on personal computer technology may open up teaching possibilities at the college or even secondary school level. The goal of developing such a personal computer (PC)-based NMR standard is facilitated by existing technologies including logic cell arrays, direct digital frequency synthesis, use of PC-based electrical engineering software tools to fabricate electronic circuits, and the use of permanent magnets based on neodymium-iron-boron alloy. Utilizing such an approach, we have been able to place essentially an entire NMR spectrometer console on two printed circuit boards, with the exception of the receiver and radio frequency power amplifier. Future upgrades to include the deuterium lock and the decoupler unit are readily envisioned. The continued development of such PC-based NMR spectrometers is expected to benefit from the fast growing, practical, and low cost personal computer market.

  6. Teaching Mathematics in the PC Lab--The Students' Viewpoints

    ERIC Educational Resources Information Center

    Schmidt, Karsten; Kohler, Anke

    2013-01-01

    The Matrix Algebra portion of the intermediate mathematics course at the Schmalkalden University Faculty of Business and Economics has been moved from a traditional classroom setting to a technology-based setting in the PC lab. A Computer Algebra System license was acquired that also allows its use on the students' own PCs. A survey was carried…

  7. A Mobile Computing Solution for Collecting Functional Analysis Data on a Pocket PC

    ERIC Educational Resources Information Center

    Jackson, James; Dixon, Mark R.

    2007-01-01

    The present paper provides a task analysis for creating a computerized data system using a Pocket PC and Microsoft Visual Basic. With Visual Basic software and any handheld device running the Windows Mobile operating system, this task analysis will allow behavior analysts to program and customize their own functional analysis data-collection…

  8. A Method of Predicting Queuing at Library Online PCs

    ERIC Educational Resources Information Center

    Beranek, Lea G.

    2006-01-01

    On-campus networked personal computer (PC) usage at La Trobe University Library was surveyed during September 2005. The survey's objectives were to confirm peak usage times, to measure some of the relevant parameters of online PC usage, and to determine the effect that 24 new networked PCs had on service quality. The survey found that clients…

  9. Examples of Data Analysis with SPSS/PC+ Studentware.

    ERIC Educational Resources Information Center

    MacFarland, Thomas W.

    Intended for classroom use only, these unpublished notes contain computer lessons on descriptive statistics with files previously created in WordPerfect 4.2 and Lotus 1-2-3 Version 1.A for the IBM PC+. The statistical measures covered include Student's t-test with two independent samples; Student's t-test with a paired sample; Chi-square analysis;…

  10. [Cardiology for the veterinary practice--information processing with the help of a computer program, with a working example].

    PubMed

    Bohn, F K; Emmerichs, H; Müller, H

    1991-06-01

    Description of a PC program, "Kardiag" ("Cardiology for the Veterinary Practice"), with illustrations, and a report of a congenital heart disease in a one-year-old male Newfoundland dog worked up using the described PC program. After euthanasia, an autopsy was performed which verified the clinical diagnosis.

  11. Real-Time Assessment of Problem-Solving of Physics Students Using Computer-Based Technology

    ERIC Educational Resources Information Center

    Gok, Tolga

    2012-01-01

    The change in students' problem-solving ability in an upper-level course through the application of a technological interactive environment--Tablet PC running InkSurvey--was investigated in the present study. Tablet PC/InkSurvey interactive technology allowing the instructor to receive real-time formative assessment as the class works through the problem…

  12. Lane Detection on the iPhone

    NASA Astrophysics Data System (ADS)

    Ren, Feixiang; Huang, Jinsheng; Terauchi, Mutsuhiro; Jiang, Ruyi; Klette, Reinhard

    A robust and efficient lane detection system is an essential component of Lane Departure Warning Systems, which are commonly used in many vision-based Driver Assistance Systems (DAS) in intelligent transportation. Various computation platforms have been proposed in the past few years for the implementation of driver assistance systems (e.g., PC, laptop, integrated chips, PlayStation, and so on). In this paper, we propose a new platform for the implementation of lane detection, which is based on a mobile phone (the iPhone). Due to physical limitations of the iPhone w.r.t. memory and computing power, a simple and efficient lane detection algorithm using a Hough transform is developed and implemented on the iPhone, as existing algorithms developed for the PC platform are not (currently) suitable for mobile phone devices. Experiments with the lane detection algorithm were carried out both on a PC and on the iPhone.
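
    As a rough sketch of such a pipeline (not the authors' implementation; the OpenCV usage, file names, and all parameter values are assumptions), a Hough-transform lane detector fits in a few lines:

    ```python
    import cv2
    import numpy as np

    def detect_lanes(bgr_frame):
        gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)      # edge map
        h, w = edges.shape
        edges[: h // 2, :] = 0                # keep only the lower (road) region
        # Probabilistic Hough transform: returns line segments (x1, y1, x2, y2)
        lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                                threshold=40, minLineLength=40, maxLineGap=20)
        return [] if lines is None else lines[:, 0]

    frame = cv2.imread("road.jpg")            # hypothetical input frame
    if frame is not None:
        for x1, y1, x2, y2 in detect_lanes(frame):
            cv2.line(frame, (int(x1), int(y1)), (int(x2), int(y2)),
                     (0, 255, 0), 2)
        cv2.imwrite("lanes.jpg", frame)
    ```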

  13. Performance characteristics of the Cooper PC-9 centrifugal compressor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foster, R.E.; Neely, R.F.

    1988-06-30

    Mathematical performance modeling of the PC-9 centrifugal compressor has been completed. Performance characteristics curves have never been obtained for them in test loops with the same degree of accuracy as for the uprated axial compressors and, consequently, computer modeling of the top cascade and purge cascades has been very difficult and of limited value. This compressor modeling work has been carried out in an attempt to generate data which would more accurately define the compressor's performance and would permit more accurate cascade modeling. A computer code, COMPAL, was used to mathematically model the PC-9 performance with variations in gas composition, flow ratios, pressure ratios, speed and temperature. The results of this effort, in the form of graphs, with information about the compressor and the code, are the subject of this report. Compressor characteristic curves are featured. 13 figs.

  14. Early Palliative Care for Patients With Brain Metastases Decreases Inpatient Admissions and Need for Imaging Studies.

    PubMed

    Habibi, Akram; Wu, S Peter; Gorovets, Daniel; Sansosti, Alexandra; Kryger, Marc; Beaudreault, Cameron; Chung, Wei-Yi; Shelton, Gary; Silverman, Joshua; Lowy, Joseph; Kondziolka, Douglas

    2018-01-01

    Early encounters with palliative care (PC) can influence health-care utilization, clinical outcome, and cost. To study the effect of timing of PC encounters on brain metastasis patients at an academic medical center. All patients diagnosed with brain metastases from January 2013 to August 2015 at a single institution with inpatient and/or outpatient PC records available for review (N = 145). Early PC was defined as having a PC encounter within 8 weeks of diagnosis with brain metastases; late PC was defined as having PC after 8 weeks of diagnosis. Propensity score matched cohorts of early (n = 46) and late (n = 46) PC patients were compared to control for differences in age, gender, and Karnofsky Performance Status (KPS) at diagnosis. Details of the palliative encounter, patient outcomes, and health-care utilization were collected. Early PC versus late PC patients had no differences in baseline KPS, age, or gender. Early PC patients had significantly fewer number of inpatient visits per patient (1.5 vs 2.9; P = .004), emergency department visits (1.2 vs 2.1; P = .006), positron emission tomography/computed tomography studies (1.2 vs 2.7, P = .005), magnetic resonance imaging scans (5.8 vs 8.1; P = .03), and radiosurgery procedures (0.6 vs 1.3; P < .001). There were no differences in overall survival (median 8.2 vs 11.2 months; P = .2). Following inpatient admissions, early PC patients were more likely to be discharged home (59% vs 35%; P = .04). Timely PC consultations are advisable in this patient population and can reduce health-care utilization.

  15. 76 FR 43278 - Privacy Act; System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-20

    ... computer (PC). The Security Management Officer's office remains locked when not in use. RETENTION AND... records to include names, addresses, social security numbers, service computation dates, leave usage data... that resides on a desktop computer. RETRIEVABILITY: Records maintained in file folders are indexed and...

  16. Real-time polarization-sensitive optical coherence tomography data processing with parallel computing

    PubMed Central

    Liu, Gangjun; Zhang, Jun; Yu, Lingfeng; Xie, Tuqiang; Chen, Zhongping

    2010-01-01

    With the increase of the A-line speed of optical coherence tomography (OCT) systems, real-time processing of acquired data has become a bottleneck. The shared-memory parallel computing technique is used to process OCT data in real time. The real-time processing power of a quad-core personal computer (PC) is analyzed. It is shown that the quad-core PC could provide real-time OCT data processing ability of more than 80K A-lines per second. A real-time, fiber-based, swept source polarization-sensitive OCT system with 20K A-line speed is demonstrated with this technique. The real-time 2D and 3D polarization-sensitive imaging of chicken muscle and pig tendon is also demonstrated. PMID:19904337
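
    As an illustration of why shared-memory parallelism maps well onto this workload (not the authors' code), each A-line is an independent FFT of one acquired spectrum, so the spectra can simply be split across worker processes; the data below are synthetic:

    ```python
    import numpy as np
    from multiprocessing import Pool

    def process_chunk(spectra):
        # Window, FFT along the spectral axis, keep magnitude in dB.
        win = np.hanning(spectra.shape[1])
        alines = np.fft.fft(spectra * win, axis=1)
        return 20 * np.log10(np.abs(alines) + 1e-12)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        spectra = rng.normal(size=(20_000, 1024))   # fake raw spectra
        chunks = np.array_split(spectra, 4)         # one chunk per core
        with Pool(processes=4) as pool:
            bscan = np.vstack(pool.map(process_chunk, chunks))
        print(bscan.shape)                          # (20000, 1024)
    ```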

  17. MORPH-I (Ver 1.0) a software package for the analysis of scanning electron micrograph (binary formatted) images for the assessment of the fractal dimension of enclosed pore surfaces

    USGS Publications Warehouse

    Mossotti, Victor G.; Eldeeb, A. Raouf; Oscarson, Robert

    1998-01-01

    MORPH-I is a set of C-language computer programs for the IBM PC and compatible microcomputers. The programs in MORPH-I are used for the fractal analysis of scanning electron microscope and electron microprobe images of pore profiles exposed in cross-section. The program isolates and traces the cross-sectional profiles of exposed pores and computes the Richardson fractal dimension for each pore. Other programs in the set provide for image calibration, display, and statistical analysis of the computed dimensions for highly complex porous materials. Requirements: IBM PC or compatible; minimum 640K RAM; math coprocessor; SVGA graphics board providing mode 103 display.
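
    MORPH-I computes the Richardson (walking-divider) dimension; as a related, simpler illustration, a box-counting estimate for a binarised pore outline can be written in a few lines (an independent sketch, not MORPH-I code; the test image is synthetic):

    ```python
    import numpy as np

    def box_count(binary, size):
        h, w = binary.shape
        h, w = h - h % size, w - w % size        # crop to a multiple of size
        blocks = binary[:h, :w].reshape(h // size, size, w // size, size)
        return np.count_nonzero(blocks.any(axis=(1, 3)))

    def fractal_dimension(binary, sizes=(2, 4, 8, 16, 32)):
        counts = [box_count(binary, s) for s in sizes]
        # Slope of log N(s) vs log(1/s) estimates the dimension.
        slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)),
                              np.log(counts), 1)
        return slope

    # Smoke test on a square outline: a curve should give a value near 1.
    img = np.zeros((256, 256), dtype=bool)
    img[64, 64:192] = img[191, 64:192] = True
    img[64:192, 64] = img[64:192, 191] = True
    print(round(fractal_dimension(img), 2))
    ```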

  18. Monitoring Temperature and Fan Speed Using Ganglia and Winbond Chips

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCaffrey, Cattie; /SLAC

    2006-09-27

    Effective monitoring is essential to keep a large group of machines, like the ones at Stanford Linear Accelerator Center (SLAC), up and running. SLAC currently uses the Ganglia Monitoring System to observe about 2000 machines, analyzing metrics like CPU usage and I/O rate. However, metrics essential to machine hardware health, such as temperature and fan speed, are not being monitored. Many machines have a Winbond w83782d chip which monitors three temperatures, two of which come from dual CPUs, and returns the information when the sensors command is invoked. Ganglia also provides a feature, gmetric, that allows users to monitor their own metrics and incorporate them into the monitoring system. The programming language Perl was chosen to implement a script that invokes the sensors command, extracts the temperature and fan speed information, and calls gmetric with the appropriate arguments. Two machines were used to test the script; the two CPUs on each machine run at about 65 Celsius, which is well within the operating temperature range (the maximum safe temperature range is 77-82 Celsius for the Pentium III processors in use). Installing the script on all machines with a Winbond w83782d chip allows the SLAC Scientific Computing and Computing Services group (SCCS) to better evaluate current cooling methods.
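
    A hedged Python re-sketch of the Perl approach described above (the regex and metric naming are our assumptions, not the paper's parsing; `sensors` is the lm_sensors CLI and `gmetric` the Ganglia metric-injection tool):

    ```python
    import re
    import subprocess

    # Run the lm_sensors CLI and capture its text output.
    output = subprocess.run(["sensors"], capture_output=True, text=True).stdout

    # Match lines such as "CPU Temp:  +65.0 C" or "temp1: +65.0°C";
    # exact label formats vary by chip, so this pattern is illustrative.
    for label, value in re.findall(r"^(.+?):\s*\+?([\d.]+)\s*°?C", output,
                                   flags=re.MULTILINE):
        name = "temp_" + re.sub(r"\W+", "_", label.strip().lower())
        # Push each reading into Ganglia as a named float metric.
        subprocess.run(["gmetric", "--name", name, "--value", value,
                        "--type", "float", "--units", "Celsius"], check=True)
    ```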

  19. Tele-surgery: a new virtual tool for medical education.

    PubMed

    Russomano, Thais; Cardoso, Ricardo B; Fernandes, Jefferson; Cardoso, Paulizan G; Alves, Jarcedy M; Pianta, Christina D; Souza, Hamilton P; Lopes, Maria Helena I

    2009-01-01

    The rapid evolution of telecommunication technology has enabled advances to be made in low-cost video-conferencing through the improvement of high-speed computer communication networks and the enhancement of Internet security protocols. As a result of this progress, eHealth education programs are becoming a reality in universities, providing the opportunity for students to have greater interaction at live surgery classes by means of virtual participation. Undergraduate students can be introduced to new concepts of medical care, remote second opinion and telecommunication systems, whilst virtually experiencing surgical procedures and lectures. The better access this provides to the operating theater environment, the patient and the surgeon can improve the learning process for students. An analog system was used for this experimental pilot project because of its low cost and comparatively easy setup. The tele-surgery lectures were also transmitted to other universities by means of a Pentium 4 computer using open source software and connected to a portable image acquisition device located in the São Lucas University Hospital. Telemedicine technology has proven to be an important instrument for the improvement of medical education and health care. This study allowed health professionals, professors and students to have greater interaction during surgical procedures, thus enabling a greater opportunity for knowledge exchange.

  20. Collaborative Physical Chemistry Projects Involving Computational Chemistry

    NASA Astrophysics Data System (ADS)

    Whisnant, David M.; Howe, Jerry J.; Lever, Lisa S.

    2000-02-01

    The physical chemistry classes from three colleges have collaborated on two computational chemistry projects using Quantum CAChe 3.0 and Gaussian 94W running on Pentium II PCs. Online communication by email and the World Wide Web was an important part of the collaboration. In the first project, students used molecular modeling to predict benzene derivatives that might be possible hair dyes. They used PM3 and ZINDO calculations to predict the electronic spectra of the molecules and tested the predicted spectra by comparing some with experimental measurements. They also did literature searches for real hair dyes and possible health effects. In the final phase of the project they proposed a synthetic pathway for one compound. In the second project the students were asked to predict which isomer of a small carbon cluster (C3, C4, or C5) was responsible for a series of IR lines observed in the spectrum of a carbon star. After preliminary PM3 calculations, they used ab initio calculations at the HF/6-31G(d) and MP2/6-31G(d) level to model the molecules and predict their vibrational frequencies and rotational constants. A comparison of the predictions with the experimental spectra suggested that the linear isomer of the C5 molecule was responsible for the lines.

  1. Automatic Recognition of Road Signs

    NASA Astrophysics Data System (ADS)

    Inoue, Yasuo; Kohashi, Yuuichirou; Ishikawa, Naoto; Nakajima, Masato

    2002-11-01

    The increase in traffic accidents is becoming a serious social problem with the recent rapid growth in traffic. In many cases, the driver's carelessness is the primary factor in traffic accidents, and driver assistance systems are in demand for supporting driver safety. In this research, we propose a new method for automatic detection and recognition of road signs by image processing. The purpose of this research is to prevent accidents caused by the driver's carelessness and to call attention to the driver when a traffic regulation is violated. High accuracy and efficient sign detection are achieved by removing from the image all information unrelated to road signs and then detecting signs using shape features. First, colors that are not used in road signs are removed from the image. Next, edges other than circular and triangular ones are removed to select candidate sign shapes. In the recognition process, normalized cross-correlation is applied to the two-dimensional differential pattern of a sign, realizing accurate and efficient road sign detection. Moreover, real-time software-only operation was achieved by keeping the computational cost low while maintaining highly precise sign detection and recognition; processing takes about 0.1 s/frame on a general-purpose PC (CPU: Pentium 4, 1.7 GHz). In-vehicle experiments confirmed that the system runs in real time and detects and recognizes signs correctly.
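
    The recognition step, normalised cross-correlation against a sign template, can be sketched with OpenCV's template matching (an independent illustration; the acceptance threshold and file names are assumptions):

    ```python
    import cv2

    scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
    template = cv2.imread("sign_template.png", cv2.IMREAD_GRAYSCALE)

    if scene is not None and template is not None:
        # TM_CCOEFF_NORMED yields a normalised correlation score in [-1, 1].
        scores = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
        _, best, _, loc = cv2.minMaxLoc(scores)
        if best > 0.8:                    # assumed acceptance threshold
            print(f"sign detected at {loc} with score {best:.2f}")
    ```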

  2. Intelligence system based classification approach for medical disease diagnosis

    NASA Astrophysics Data System (ADS)

    Sagir, Abdu Masanawa; Sathasivam, Saratha

    2017-08-01

    The prediction of breast cancer in women who have no signs or symptoms of the disease, as well as of survivability after surgery, has been a challenging problem for medical researchers. The decision about the presence or absence of disease often depends more on the physician's intuition, experience and skill in comparing current indicators with previous ones than on the knowledge-rich data hidden in a database. This is a crucial and challenging task. The goal is to predict patient condition using an adaptive neuro-fuzzy inference system (ANFIS) pre-processed by grid partitioning. To achieve an accurate diagnosis at this complex stage of symptom analysis, the physician may need an efficient diagnosis system. A framework is described for designing and evaluating the classification performance of two discrete ANFIS systems with hybrid learning algorithms (least-squares estimation combined with modified Levenberg-Marquardt, and with gradient descent) that can be used by physicians to accelerate the diagnosis process. The proposed method's performance was evaluated on training and test sets drawn from the Mammographic Mass and Haberman's Survival datasets in the University of California at Irvine (UCI) machine learning repository. Robustness is examined in terms of total accuracy, sensitivity and specificity. The proposed method achieves superior performance compared to a conventional gradient-descent-based ANFIS and some related existing methods. The implementation uses MATLAB R2014a (version 8.3), executed on a PC with an Intel Pentium IV E7400 processor (2.80 GHz) and 2.0 GB of RAM.

  3. ELIPGRID-PC: A PC program for calculating hot spot probabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davidson, J.R.

    1994-10-01

    ELIPGRID-PC, a new personal computer program, has been developed to provide easy access to Singer's 1972 ELIPGRID algorithm for hot-spot detection probabilities. Three features of the program are the ability to determine: (1) the grid size required for specified conditions, (2) the smallest hot spot that can be sampled with a given probability, and (3) the approximate grid size resulting from specified conditions and sampling cost. ELIPGRID-PC also provides probability-of-hit versus cost data for graphing with spreadsheets or graphics software. The program has been successfully tested using Singer's published ELIPGRID results. An apparent error in the original ELIPGRID code has been uncovered and an appropriate modification incorporated into the new program.
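
    Singer's ELIPGRID result is analytical; a quick Monte Carlo cross-check of the same quantity, simplified here to a circular hot spot on a square grid (our sketch, not the ELIPGRID code), can be written as:

    ```python
    import numpy as np

    def hit_probability(grid_spacing, hotspot_radius, trials=100_000, seed=1):
        rng = np.random.default_rng(seed)
        # By symmetry, only the hot spot centre's offset within one grid
        # cell matters; the nearest grid node is at most spacing/2 away
        # along each axis.
        offsets = rng.uniform(0.0, grid_spacing, size=(trials, 2))
        nearest = np.minimum(offsets, grid_spacing - offsets)
        dist = np.hypot(nearest[:, 0], nearest[:, 1])
        # The hot spot contains a node iff the nearest node lies inside it.
        return np.mean(dist <= hotspot_radius)

    # For radius 3 on a 10-unit grid this approaches pi*r^2/s^2 ~ 0.283.
    print(hit_probability(grid_spacing=10.0, hotspot_radius=3.0))
    ```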

  4. [Dutch computer domestication, 1975-1990].

    PubMed

    Veraart, Frank

    2008-01-01

    A computer seems an indispensable tool in twenty-first-century households. Computers, however, did not come as manna from heaven. The domestication and appropriation of computers in Dutch households was the result of activities by various intermediary actors. Computers became household commodities only gradually. Technophile computer hobbyists imported the first computers into the Netherlands from the USA, and started small businesses from 1975 onwards. They developed a social network in which computer technology was made available for use by individuals. This network extended itself via shops, clubs, magazines, and other means of acquiring and exchanging computer hard- and software. Hobbyist culture established the software-copying habits of private computer users as well as their ambivalence toward commercial software. They also made the computer into a game machine. Under the impulse of a national policy that aimed at transforming society into an 'Information Society', clubs and other actors extended their activities and tailored them to this new agenda. Hobby clubs presented themselves as consumer organizations and transformed into intermediary actors that filled the gap between suppliers and a growing group of users. They worked hard to give meaning to (proper) use of computers. A second impulse to the increasing use of computers in the household came from so-called 'private-PC' projects in the late 1980s, in which employers financially aided employees in purchasing their own private PCs. The initially important intermediary actors such as hobby clubs lost control, and the agenda for personal computers shifted to interoperability with office equipment. IBM-compatible PCs flooded households. In the household the new equipment blended with established uses, such as gaming. The copying habits together with the PC standard created a risky combination in which computer viruses could spread easily. New roles arose for intermediary actors in guiding and educating computer users. The activities of intermediaries had a lasting influence on contemporary computer use and user preferences. Technical choices and the nature of Dutch computer use in households can be explained by analyzing the historical development of intermediaries and users.

  5. An Assessment of a Beowulf System for a Wide Class of Analysis and Design Software

    NASA Technical Reports Server (NTRS)

    Katz, D. S.; Cwik, T.; Kwan, B. H.; Lou, J. Z.; Springer, P. L.; Sterling, T. L.; Wang, P.

    1997-01-01

    A typical Beowulf system, such as the machine at the Jet Propulsion Laboratory (JPL), may comprise 16 nodes interconnected by 100BaseT Fast Ethernet. Each node may include a single Intel Pentium Pro 200 MHz microprocessor, 128 MBytes of DRAM, 2.5 GBytes of IDE disk, a PCI bus backplane, and an assortment of other devices.

  6. Domain Wall Fermion Inverter on Pentium 4

    NASA Astrophysics Data System (ADS)

    Pochinsky, Andrew

    2005-03-01

    A highly optimized domain wall fermion inverter has been developed as part of the SciDAC lattice initiative. By designing the code to minimize memory bus traffic, it achieves high cache reuse and performance in excess of 2 GFlops for out of L2 cache problem sizes on a GigE cluster with 2.66 GHz Xeon processors. The code uses the SciDAC QMP communication library.

  7. Multi-Target Single Cycle Instrument Placement

    NASA Technical Reports Server (NTRS)

    Pedersen, Liam; Smith, David E.; Deans, Matthew; Sargent, Randy; Kunz, Clay; Lees, David; Rajagopalan, Srikanth; Bualat, Maria

    2005-01-01

    This presentation covers the robotic exploration of Mars using multiple-target command cycles, safe instrument placement, safe operation, and the K9 rover, which has a 6-wheel-steer rocker-bogie chassis (FIDO, MER), is 70% of MER size, and carries a 1.2 GHz Pentium M laptop running Linux, odometry and a compass/inclinometer, the CLARAty architecture, and a 5-DOF manipulator with a CHAMP microscopic camera, SciCams, NavCams and HazCams.

  8. Tablet computer enhanced training improves internal medicine exam performance.

    PubMed

    Baumgart, Daniel C; Wende, Ilja; Grittner, Ulrike

    2017-01-01

    Traditional teaching concepts in medical education do not take full advantage of current information technology. We aimed to objectively determine the impact of Tablet PC enhanced training on learning experience and MKSAP® (medical knowledge self-assessment program) exam performance. In this single-center, prospective, controlled study, final-year medical students and medical residents doing an inpatient service rotation were alternatingly assigned to either the active test (Tablet PC with custom multimedia education software package) or traditional education (control) group. All completed an extensive questionnaire to collect their socio-demographic data and evaluate educational status, computer affinity and skills, problem solving, eLearning knowledge and self-rated medical knowledge. Both groups were MKSAP® tested at the beginning and the end of their rotation. The MKSAP® score at the final exam was the primary endpoint. Data from 55 participants (tablet n = 24, controls n = 31; 36.4% male; median age 28 years; 65.5% students) were evaluable. The mean MKSAP® score improved in the tablet PC group (score Δ +8, SD 11) but not the control group (score Δ -7, SD 11). After adjustment for baseline score and confounders, the Tablet PC group showed on average 11% better MKSAP® test results compared to the control group (p<0.001). The most commonly used resources for medical problem solving were journal articles looked up on PubMed or Google®, and books. Our study provides evidence that tablet-computer-based integrated training and clinical practice enhances medical education and exam performance. Larger, multicenter trials are required to independently validate our data. Residency and fellowship directors are encouraged to consider adding portable computer devices and multimedia content and introducing blended learning to their respective training programs.

  9. Tablet computer enhanced training improves internal medicine exam performance

    PubMed Central

    Wende, Ilja; Grittner, Ulrike

    2017-01-01

    Background Traditional teaching concepts in medical education do not take full advantage of current information technology. We aimed to objectively determine the impact of Tablet PC enhanced training on learning experience and MKSAP® (medical knowledge self-assessment program) exam performance. Methods In this single-center, prospective, controlled study, final-year medical students and medical residents doing an inpatient service rotation were alternatingly assigned to either the active test (Tablet PC with custom multimedia education software package) or traditional education (control) group. All completed an extensive questionnaire to collect their socio-demographic data and evaluate educational status, computer affinity and skills, problem solving, eLearning knowledge and self-rated medical knowledge. Both groups were MKSAP® tested at the beginning and the end of their rotation. The MKSAP® score at the final exam was the primary endpoint. Results Data from 55 participants (tablet n = 24, controls n = 31; 36.4% male; median age 28 years; 65.5% students) were evaluable. The mean MKSAP® score improved in the tablet PC group (score Δ +8, SD 11) but not the control group (score Δ -7, SD 11). After adjustment for baseline score and confounders, the Tablet PC group showed on average 11% better MKSAP® test results compared to the control group (p<0.001). The most commonly used resources for medical problem solving were journal articles looked up on PubMed or Google®, and books. Conclusions Our study provides evidence that tablet-computer-based integrated training and clinical practice enhances medical education and exam performance. Larger, multicenter trials are required to independently validate our data. Residency and fellowship directors are encouraged to consider adding portable computer devices and multimedia content and introducing blended learning to their respective training programs. PMID:28369063

  10. Contrasting Diffusion Patterns for PC and Mobile Videos: A User-Centric View of the Influencing Factors

    ERIC Educational Resources Information Center

    Wu, Baixue

    2010-01-01

    As both computer and mobile phone reach nearly ubiquity in the U.S. market, the slow uptake of mobile video, in contrast to the thriving usage of PC-based video, warrants a deeper understanding of user-oriented factors contributing to the two diffusion paths. Unlike the majority of existing diffusion research practices, the dissertation…

  11. Students' Acceptance of Tablet PCs in Italian High Schools: Profiles and Differences

    ERIC Educational Resources Information Center

    Villani, Daniela; Morganti, Laura; Carissoli, Claudia; Gatti, Elena; Bonanomi, Andrea; Cacciamani, Stefano; Confalonieri, Emanuela; Riva, Giuseppe

    2018-01-01

    The tablet PC represents a very popular mobile computing device, and together with other technologies it is changing the world of education. This study aimed to explore the acceptance of tablet PC of Italian high school students in order to outline the typical students' profiles and to compare the acceptance conveyed in two types of use (learning…

  12. Spelling Practice Intervention: A Comparison of Tablet PC and Picture Cards as Spelling Practice Methods for Students with Developmental Disabilities

    ERIC Educational Resources Information Center

    Seok, Soonhwa; DaCosta, Boaventura; Yu, Byeong Min

    2015-01-01

    The present study compared a spelling practice intervention using a tablet personal computer (PC) and picture cards with three students diagnosed with developmental disabilities. An alternating-treatments design with a non-concurrent multiple-baseline across participants was used. The aims of the present study were: (a) to determine if…

  13. Mathematics Instruction and the Tablet PC

    ERIC Educational Resources Information Center

    Fister, K. Renee; McCarthy, Maeve L.

    2008-01-01

    The use of tablet PCs in teaching is a relatively new phenomenon. A cross between a notebook computer and a personal digital assistant (PDA), the tablet PC has all of the features of a notebook with the additional capability that the screen can also be used for input. Tablet PCs are usually equipped with a stylus that allows the user to write on…

  14. Implementation of data acquisition interface using on-board field-programmable gate array (FPGA) universal serial bus (USB) link

    NASA Astrophysics Data System (ADS)

    Yussup, N.; Ibrahim, M. M.; Lombigit, L.; Rahman, N. A. A.; Zin, M. R. M.

    2014-02-01

    Typically, a system consists of hardware acting as the controller and software installed on a personal computer (PC). In effective nuclear detection, the hardware comprises the detection setup and the electronics used, while the software provides analysis tools and a graphical display on the PC. A data acquisition interface is necessary to enable communication between the controller hardware and the PC. Nowadays, the Universal Serial Bus (USB) has become a standard connection method for computer peripherals and has replaced many varieties of serial and parallel ports. However, implementing USB is complex. This paper describes the implementation of a data acquisition interface between a field-programmable gate array (FPGA) board and a PC that exploits the USB link of the FPGA board. The USB link is based on an FTDI chip, which allows direct input/output access to the Joint Test Action Group (JTAG) signals from a USB host, and a complex programmable logic device (CPLD) with a 24 MHz clock input to the USB link. The implementation and results of using the FPGA board's USB link for data interfacing are discussed.

  15. Implementation of data acquisition interface using on-board field-programmable gate array (FPGA) universal serial bus (USB) link

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yussup, N.; Ibrahim, M. M.; Lombigit, L.

    Typically, a system consists of hardware acting as the controller and software installed on a personal computer (PC). In effective nuclear detection, the hardware comprises the detection setup and the electronics used, while the software provides analysis tools and a graphical display on the PC. A data acquisition interface is necessary to enable communication between the controller hardware and the PC. Nowadays, the Universal Serial Bus (USB) has become a standard connection method for computer peripherals and has replaced many varieties of serial and parallel ports. However, implementing USB is complex. This paper describes the implementation of a data acquisition interface between a field-programmable gate array (FPGA) board and a PC that exploits the USB link of the FPGA board. The USB link is based on an FTDI chip, which allows direct input/output access to the Joint Test Action Group (JTAG) signals from a USB host, and a complex programmable logic device (CPLD) with a 24 MHz clock input to the USB link. The implementation and results of using the FPGA board's USB link for data interfacing are discussed.
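
    On the PC side, an FTDI-based link commonly enumerates as a virtual COM port; a minimal host-side sketch (our illustration, not the paper's software; the port name and baud rate are assumptions) is then just a serial read loop using pyserial:

    ```python
    import serial  # pyserial

    # Open the FTDI virtual COM port; device path and rate are assumed.
    with serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1.0) as port:
        while True:
            chunk = port.read(4096)    # raw bytes streamed from the FPGA
            if not chunk:
                break                  # timeout: no more data pending
            print(f"received {len(chunk)} bytes")
    ```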

  16. BALANCER: A Computer Program for Balancing Chemical Equations.

    ERIC Educational Resources Information Center

    Jones, R. David; Schwab, A. Paul

    1989-01-01

    Describes the theory and operation of a computer program which was written to balance chemical equations. Software consists of a compiled file of 46K for use under MS-DOS 2.0 or later on IBM PC or compatible computers. Additional specifications of courseware and availability information are included. (Author/RT)
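
    The review does not describe BALANCER's algorithm; one standard way to balance an equation, shown here as an independent sketch with sympy, is to read integer coefficients off the nullspace of the element-composition matrix:

    ```python
    from functools import reduce
    from sympy import Matrix, lcm

    # Rows are elements (C, H, O); columns are species in
    # C2H6 + O2 -> CO2 + H2O, with product columns negated so that
    # A*x = 0 expresses conservation of each element.
    A = Matrix([
        [2, 0, -1,  0],   # carbon
        [6, 0,  0, -2],   # hydrogen
        [0, 2, -2, -1],   # oxygen
    ])

    v = A.nullspace()[0]                          # rational coefficients
    scale = reduce(lcm, [term.q for term in v])   # clear denominators
    print(list(v * scale))                        # -> [2, 7, 4, 6]
    ```

    That is, 2 C2H6 + 7 O2 -> 4 CO2 + 6 H2O.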

  17. Computer Series, 102: Bits and Pieces, 40.

    ERIC Educational Resources Information Center

    Birk, James P., Ed.

    1989-01-01

    Discussed are seven computer programs: (1) a computer graphics experiment for organic chemistry laboratory; (2) a gel filtration simulation; (3) judging spelling correctness; (4) interfacing the TLC548 ADC; (5) a digitizing circuit for the Apple II game port; (6) a chemical information base; and (7) an IBM PC article database. (MVL)

  18. The IBM PC as an Online Search Machine--Part 2: Physiology for Searchers.

    ERIC Educational Resources Information Center

    Kolner, Stuart J.

    1985-01-01

    Enumerates "hardware problems" associated with use of the IBM personal computer as an online search machine: purchase of machinery, unpacking of parts, and assembly into a properly functioning computer. Components that allow transformations of computer into a search machine (combination boards, printer, modem) and diagnostics software…

  19. High Performance Parallel Computational Nanotechnology

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    At a recent press conference, NASA Administrator Dan Goldin encouraged NASA Ames Research Center to take a lead role in promoting research and development of advanced, high-performance computer technology, including nanotechnology. Manufacturers of leading-edge microprocessors currently perform large-scale simulations in the design and verification of semiconductor devices and microprocessors. Recently, the need for this intensive simulation and modeling analysis has greatly increased, due in part to the ever-increasing complexity of these devices, as well as the lessons of experiences such as the Pentium fiasco. Simulation, modeling, testing, and validation will be even more important for designing molecular computers because of the complex specification of millions of atoms and thousands of assembly steps, as well as the simulation and modeling needed to ensure reliable, robust and efficient fabrication of the molecular devices. The software for this capacity does not exist today, but it can be extrapolated from the software currently used in molecular modeling for other applications: semi-empirical methods, ab initio methods, self-consistent field methods, Hartree-Fock methods, molecular mechanics, and simulation methods for diamondoid structures. Inasmuch as it seems clear that the application of such methods in nanotechnology will require highly powerful parallel systems, this talk will discuss techniques and issues for performing these types of computations on parallel systems. We will describe system design issues (memory, I/O, mass storage, operating system requirements, special user interface issues, interconnects, bandwidths, and programming languages) involved in parallel methods for scalable classical, semiclassical, quantum, molecular mechanics, and continuum models; molecular nanotechnology computer-aided design (NanoCAD) techniques; visualization of structural models and assembly sequences using virtual reality techniques; software required to control mini robotic manipulators for positional control; and scalable numerical algorithms for reliability, verification and testability. There appears to be no fundamental obstacle to simulating molecular compilers and molecular computers on high-performance parallel computers, just as the Boeing 777 was simulated on a computer before manufacturing it.

  20. Three-dimensional quantification of vorticity and helicity from 3D cine PC-MRI using finite-element interpolations.

    PubMed

    Sotelo, Julio; Urbina, Jesús; Valverde, Israel; Mura, Joaquín; Tejos, Cristián; Irarrazaval, Pablo; Andia, Marcelo E; Hurtado, Daniel E; Uribe, Sergio

    2018-01-01

    We propose a 3D finite-element method for the quantification of vorticity and helicity density from 3D cine phase-contrast (PC) MRI. By using a 3D finite-element method, we seamlessly estimate velocity gradients in 3D. The robustness and convergence were analyzed using a combined Poiseuille and Lamb-Oseen equation. A computational fluid dynamics simulation was used to compare our method with others available in the literature. Additionally, we computed 3D maps for different 3D cine PC-MRI data sets: phantoms without and with coarctation, 18 healthy volunteers, and 3 patients. We found good agreement between our method and the analytical solution of the combined Poiseuille and Lamb-Oseen equation. The computational fluid dynamics results showed that our method outperforms current approaches for estimating vorticity and helicity values. In the in silico model, we observed that for a tetrahedral element of 2 mm characteristic length, we underestimated the vorticity by less than 5% with respect to the analytical solution. In patients, we found higher values of helicity density in comparison to healthy volunteers, associated with vortices in the lumen of the vessels. We propose a novel method that provides entire 3D vorticity and helicity density maps, avoiding the use of reformatted 2D planes from 3D cine PC-MRI. Magn Reson Med 79:541-553, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
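
    The paper computes these fields with finite elements; as a back-of-the-envelope illustration of the definitions themselves (vorticity omega = curl v, helicity density h = v . omega), a finite-difference version with numpy.gradient suffices (the grid, spacing, and test field below are our assumptions):

    ```python
    import numpy as np

    def vorticity_helicity(vx, vy, vz, spacing=1.0):
        """Curl and helicity density of a velocity field on a regular
        grid; arrays are indexed [x, y, z] with isotropic spacing."""
        dvx_dx, dvx_dy, dvx_dz = np.gradient(vx, spacing)
        dvy_dx, dvy_dy, dvy_dz = np.gradient(vy, spacing)
        dvz_dx, dvz_dy, dvz_dz = np.gradient(vz, spacing)
        wx = dvz_dy - dvy_dz
        wy = dvx_dz - dvz_dx
        wz = dvy_dx - dvx_dy
        helicity = vx * wx + vy * wy + vz * wz   # h = v . omega
        return (wx, wy, wz), helicity

    # Smoke test: rigid rotation about z, v = (-y, x, 0), has omega_z = 2.
    n = 16
    x, y, z = np.meshgrid(np.arange(n), np.arange(n), np.arange(n),
                          indexing="ij")
    (wx, wy, wz), h = vorticity_helicity(-y.astype(float),
                                         x.astype(float),
                                         np.zeros((n, n, n)))
    print(wz.mean())   # ~2.0
    ```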

  1. Automatic control of a negative ion source

    NASA Astrophysics Data System (ADS)

    Saadatmand, K.; Sredniawski, J.; Solensten, L.

    1989-04-01

    A CAMAC-based control architecture is devised for a Berkeley-type H- volume ion source [1]. The architecture employs three 80386-based PCs. One PC is dedicated to control and monitoring of source operation. Another PC works with digitizers to provide data acquisition of waveforms. The third PC is used for off-line analysis. Initially, operation of the source was put under remote (supervisory) computer control. This was followed by development of an automated startup procedure. Finally, a study of the physics of operation is now underway to establish a data base from which automatic beam optimization can be derived.

  2. Integrating Commercial Off-The-Shelf (COTS) graphics and extended memory packages with CLIPS

    NASA Technical Reports Server (NTRS)

    Callegari, Andres C.

    1990-01-01

    This paper addresses the question of how to mix CLIPS with graphics and how to overcome the PC's memory limitations by using the extended memory available in the computer. By adding graphics and extended memory capabilities, CLIPS can be converted into a complete and powerful system development tool on the most economical and popular computer platform. New models of PCs have processing capabilities and graphics resolutions that cannot be ignored and should be used to the fullest of their resources. CLIPS is a powerful expert system development tool, but it cannot be complete without the support of a graphics package needed to create user interfaces and general-purpose graphics, or without enough memory to handle large knowledge bases. A well-known limitation on PCs is the use of real memory, which restricts CLIPS to only 640 KB, but that problem can now be solved by developing a version of CLIPS that uses extended memory. The user has access to up to 16 MB of memory on 80286-based computers and practically all the available memory (4 GB) on computers that use the 80386 processor. So if we give CLIPS a self-configuring graphics package that automatically detects the graphics hardware and pointing device present in the computer, and add the availability of the extended memory that exists in the computer (with no special hardware needed), the user will be able to create more powerful systems at a fraction of the cost on the most popular, portable, and economical platform available: the PC.

  3. Comparison of phase-contrast MR and flow simulations for the study of CSF dynamics in the cervical spine.

    PubMed

    Lindstrøm, Erika Kristina; Schreiner, Jakob; Ringstad, Geir Andre; Haughton, Victor; Eide, Per Kristian; Mardal, Kent-Andre

    2018-06-01

    Background Investigators use phase-contrast magnetic resonance (PC-MR) and computational fluid dynamics (CFD) to assess cerebrospinal fluid dynamics. We compared qualitative and quantitative results from the two methods. Methods Four volunteers were imaged with a heavily T2-weighted volume gradient echo scan of the brain and cervical spine at 3T and with PC-MR. Velocities were calculated from PC-MR for each phase in the cardiac cycle. Mean pressure gradients in the PC-MR acquisition through the cardiac cycle were calculated with the Navier-Stokes equations. Volumetric MR images of the brain and upper spine were segmented and converted to meshes. Models of the subarachnoid space were created from volume images with the Vascular Modeling Toolkit. CFD simulations were performed with a previously verified flow solver. The flow patterns, velocities and pressures were compared in PC-MR and CFD flow images. Results PC-MR images consistently revealed more inhomogeneous flow patterns than CFD, especially in the anterolateral subarachnoid space where spinal nerve roots are located. On average, peak systolic and diastolic velocities in PC-MR exceeded those in CFD by 31% and 41%, respectively. On average, systolic and diastolic pressure gradients calculated from PC-MR exceeded those of CFD by 11% and 39%, respectively. Conclusions PC-MR shows local flow disturbances that are not evident in typical CFD. The velocities and pressure gradients calculated from PC-MR are systematically larger than those calculated from CFD.

  4. Searching for Faint Companions to Nearby Stars with the Hubble Space Telescope

    NASA Technical Reports Server (NTRS)

    Schroeder, Daniel J.; Golimowski, David A.

    1996-01-01

    A search for faint companions (FCs) to selected stars within 5 pc of the Sun using the Hubble Space Telescope's Planetary Camera (PC) has been initiated. To assess the PC's ability to detect FCs, we have constructed both model and laboratory-simulated images and compared them to actual PC images. We find that the PC's point-spread function (PSF) is 3-4 times brighter over the angular range 2-5 arcsec than the PSF expected for a perfect optical system. Azimuthal variations of the PC's PSF are 10-20 times larger than expected for a perfect PSF. These variations suggest that light is scattered nonuniformly from the surface of the detector. Because the anomalies in the PC's PSF cannot be precisely simulated, subtracting a reference PSF from the PC image is problematic. We have developed a computer algorithm that identifies local brightness anomalies within the PSF as potential FCs. We find that this search algorithm will successfully locate FCs anywhere within the circumstellar field provided that the average pixel signal from the FC is at least 10 sigma above the local background. This detection limit suggests that a comprehensive search for extrasolar Jovian planets with the PC is impractical. However, the PC is useful for detecting other types of substellar objects. With a stellar signal of 10^9 e^-, for example, we may detect brown dwarfs as faint as M_I = 16.7 separated by 1 arcsec from alpha Cen A.

  5. PC-assisted translation of photogrammetric papers

    NASA Astrophysics Data System (ADS)

    Güthner, Karlheinz; Peipe, Jürgen

    A PC-based system for machine translation of photogrammetric papers from English into German and vice versa is described. The computer-assisted translating process is not intended to create a perfect interpretation of a text but to produce a rough rendering of its content. Starting with the original text, a continuous data flow into the translated version is effected by means of hardware (scanner, personal computer, printer) and software (OCR, translation, word processing, DTP). An essential component of the system is a photogrammetric microdictionary which is currently being established. It is based on several sources, including e.g. the ISPRS Multilingual Dictionary.

  6. Webcam mouse using face and eye tracking in various illumination environments.

    PubMed

    Lin, Yuan-Pin; Chao, Yi-Ping; Lin, Chung-Chih; Chen, Jyh-Horng

    2005-01-01

    With the growth of computer performance and the popularity of webcam devices, it has become possible to capture users' gestures for human-computer interaction with a PC via webcam. However, illumination variation can dramatically decrease the stability and accuracy of skin-based face tracking systems, especially on notebook or portable platforms. In this study we present an effective illumination recognition technique, combining a K-Nearest Neighbor (KNN) classifier with an adaptive skin model, to realize a real-time tracking system. We demonstrate that the accuracy of face detection based on the KNN classifier is higher than 92% in various illumination environments. In a real-time implementation, the system successfully tracks the user's face and eye features at 15 fps on standard notebook platforms. Although the KNN classifier is initialized with only five environments, the system permits users to define and add their own environments to the classifier for computer access. Based on this efficient tracking algorithm, we have developed a "Webcam Mouse" system that controls the PC cursor using face and eye tracking. Preliminary studies with "point and click" style PC web games also show promising applications in future consumer electronics markets.
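    A minimal sketch of the illumination-recognition step is given below, assuming a hypothetical mean-RGB frame feature and five made-up environment labels; the paper's actual feature set and skin-model adaptation are not reproduced.

    ```python
    # Hedged sketch: a K-Nearest Neighbor classifier that maps a coarse image
    # feature (here, a hypothetical mean RGB vector) to one of five predefined
    # illumination environments, after which a skin model could be adapted.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    # Training data: mean RGB of frames captured under 5 known environments.
    X_train = np.array([[200, 190, 180],   # daylight
                        [140, 130, 120],   # overcast
                        [180, 150, 120],   # incandescent
                        [120, 140, 160],   # fluorescent
                        [ 60,  55,  50]])  # dim room
    y_train = ["daylight", "overcast", "incandescent", "fluorescent", "dim"]

    knn = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)
    frame_feature = np.array([[175, 148, 118]])   # mean RGB of a new frame
    print("estimated environment:", knn.predict(frame_feature)[0])
    ```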

  7. Personal Computer Price and Performance.

    ERIC Educational Resources Information Center

    Crawford, Walt

    1993-01-01

    Discusses personal computer price trends since 1986; describes offerings and prices for four direct-market suppliers, i.e., Dell, CompuAdd, PC Brand, and Gateway 2000; and discusses overall value and price/performance ratios. Tables and graphs chart value over time. (EA)

  8. Faster, Better, Cheaper: A Decade of PC Progress.

    ERIC Educational Resources Information Center

    Crawford, Walt

    1997-01-01

    Reviews the development of personal computers and how computer components have changed in price and value. Highlights include disk drives; keyboards; displays; memory; color graphics; modems; CPU (central processing unit); storage; direct mail vendors; and future possibilities. (LRW)

  9. Reviews.

    ERIC Educational Resources Information Center

    Journal of Chemical Education, 1988

    1988-01-01

    Reviews three computer software packages for chemistry education including "Osmosis and Diffusion" and "E.M.E. Titration Lab" for Apple II and "Simplex-V: An Interactive Computer Program for Experimental Optimization" for IBM PC. Summary ratings include ease of use, content, pedagogic value, student reaction, and…

  10. The Computer Bulletin Board.

    ERIC Educational Resources Information Center

    Batt, Russell H., Ed.

    1989-01-01

    Describes two chemistry computer programs: (1) "Eureka: A Chemistry Problem Solver" (problem files may be written by the instructor, MS-DOS 2.0, IBM with 384K); and (2) "PC-File+" (database management, IBM with 416K and two floppy drives). (MVL)

  11. Quantitative phase imaging method based on an analytical nonparaxial partially coherent phase optical transfer function.

    PubMed

    Bao, Yijun; Gaylord, Thomas K

    2016-11-01

    Multifilter phase imaging with partially coherent light (MFPI-PC) is a promising new quantitative phase imaging method. However, the existing MFPI-PC method is based on the paraxial approximation. In the present work, an analytical nonparaxial partially coherent phase optical transfer function is derived. This enables the MFPI-PC to be extended to the realistic nonparaxial case. Simulations over a wide range of test phase objects as well as experimental measurements on a microlens array verify higher levels of imaging accuracy compared to the paraxial method. Unlike the paraxial version, the nonparaxial MFPI-PC with obliquity factor correction exhibits no systematic error. In addition, due to its analytical expression, the increase in computation time compared to the paraxial version is negligible.

  12. Falling PC Solitaire Cards: An Open-Inquiry Approach

    ERIC Educational Resources Information Center

    Gonzalez-Espada, Wilson J.

    2012-01-01

    Many of us have played the PC Solitaire game that comes as standard software on many computers. Although I am not a great player, occasionally I win a game or two. The game celebrates my accomplishment by pushing the cards forward, one at a time, falling gracefully in what appears to be a parabolic path in a drag-free environment. One day,…

  13. The diagnostic accuracy of neck ultrasound, 4D-Computed tomography and sestamibi imaging in parathyroid carcinoma.

    PubMed

    Christakis, Ioannis; Vu, Thinh; Chuang, Hubert H; Fellman, Bryan; Figueroa, Angelica M Silva; Williams, Michelle D; Busaidy, Naifa L; Perrier, Nancy D

    2017-10-01

    Our aim was to investigate the accuracy of available imaging modalities for parathyroid carcinoma (PC) in our institution and to identify which imaging modality, or combination thereof, is optimal for preoperative determination of precise tumor location. We included all PC patients operated on in our institution between 2000 and 2015 who had at least one of the following in-house preoperative scans: neck ultrasonography (US), neck 4D-Computed Tomography (4DCT) and 99mTc Sestamibi SPECT/CT (MIBI). Sensitivity, specificity and accuracy of PC tumor localization were assessed individually and in combination. Twenty patients fulfilled the inclusion criteria and were analysed. There were 18 US, 18 CT and 9 MIBI scans. The sensitivity and accuracy for tumor localization of US were 80% (CI 56-94%) and 73% respectively, of 4DCT were 79% (CI 58-93%) and 82%, and of MIBI were 81% (CI 54-96%) and 78%. The sensitivity and accuracy of the combination of CT and MIBI were 94% (CI 73-100%) and 95%, and for the combination of US, CT and MIBI were 100% (CI 72-100%) and 100% respectively. The wash-out of the PC lesions, expressed as a percentage change in Hounsfield Units from the arterial phase, was -9.29% at the early delayed phase and -16.88% at the late delayed phase (n=11). The sensitivity of solitary preoperative imaging of PC patients, whether by US, CT or MIBI, is approximately 80%. Combinations of CT with MIBI and US increase the sensitivity to 95% or better. Combined preoperative imaging of patients with clinical possibility of PC is therefore recommended. Copyright © 2017 Elsevier B.V. All rights reserved.
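    For readers unfamiliar with the reported intervals, the sketch below computes a sensitivity with an exact (Clopper-Pearson) 95% confidence interval; the counts are hypothetical, chosen only to land near the 80% figure quoted for ultrasound.

    ```python
    # Minimal sketch of the reported statistics: sensitivity with an exact
    # (Clopper-Pearson) 95% confidence interval. Counts are illustrative.
    from scipy.stats import beta

    def sensitivity_ci(true_pos, false_neg, alpha=0.05):
        n = true_pos + false_neg
        sens = true_pos / n
        lo = beta.ppf(alpha / 2, true_pos, n - true_pos + 1) if true_pos > 0 else 0.0
        hi = beta.ppf(1 - alpha / 2, true_pos + 1, n - true_pos) if true_pos < n else 1.0
        return sens, lo, hi

    sens, lo, hi = sensitivity_ci(12, 3)   # hypothetical counts (12 of 15 found)
    print("sensitivity %.0f%% (CI %.0f-%.0f%%)" % (100 * sens, 100 * lo, 100 * hi))
    ```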

  14. Agent-Based Framework for Discrete Entity Simulations

    DTIC Science & Technology

    2006-11-01

    Postgres database server for environment queries of neighbors and continuum data. As expected for raw database queries (no database optimizations in...form. Eventually the code was ported to GNU C++ on the same single Intel Pentium 4 CPU running RedHat Linux 9.0 and Postgres database server...Again Postgres was used for environmental queries, and the tool remained relatively slow because of the immense number of queries necessary to assess

  15. Using PC Software To Enhance the Student's Ability To Learn the Exporting Process.

    ERIC Educational Resources Information Center

    Buckles, Tom A.; Lange, Irene

    This paper describes the advantages of using computer simulations in the classroom or managerial environment and the major premise and principal components of Export to Win!, a computer simulation used in international marketing seminars. A rationale for using computer simulations argues that they improve the quality of teaching by building…

  16. Tablet PCs: A Physical Educator's New Clipboard

    ERIC Educational Resources Information Center

    Nye, Susan B.

    2010-01-01

    Computers in education have come a long way from the abacus of 5,000 years ago to the desktop and laptop computers of today. Computers have transformed the educational environment, and with each new iteration of smaller and more powerful machines come additional advantages for teaching practices. The Tablet PC is one. Tablet PCs are fully…

  17. Setting new standards for customer advocacy.

    PubMed

    McDonald, L

    1993-01-01

    Dell Computer Corporation pioneered the direct marketing of personal computers in 1984 and became the first company in the PC industry to offer manufacturer-direct technical support. According to surveys of corporate buyers, the company provides the best after-sale service and support of any computer maker. Here's how Dell has institutionalized the delivery of customer satisfaction.

  18. Software Solution Saves Dollars

    ERIC Educational Resources Information Center

    Trotter, Andrew

    2004-01-01

    This article discusses computer software that can give classrooms and computer labs the capabilities of costly PC's at a small fraction of the cost. A growing number of cost-conscious school districts are finding budget relief in low-cost computer software known as "open source" that can do everything from manage school Web sites to equip…

  19. Optimisation of multiplet identifier processing on a PLAYSTATION® 3

    NASA Astrophysics Data System (ADS)

    Hattori, Masami; Mizuno, Takashi

    2010-02-01

    To enable high-performance computing (HPC) for applications with large datasets using a Sony® PLAYSTATION® 3 (PS3™) video game console, we configured a hybrid system consisting of a Windows® PC and a PS3™. To validate this system, we implemented the real-time multiplet identifier (RTMI) application, which identifies multiplets of microearthquakes in terms of the similarity of their waveforms. The cross-correlation computation, which is a core algorithm of the RTMI application, was optimised for the PS3™ platform, while the rest of the computation, including data input and output, remained on the PC. With this configuration, the core part of the algorithm ran 69 times faster than the original program, accelerating total computation speed more than five times. As a result, the system processed up to 2100 total microseismic events, whereas the original implementation had a limit of 400 events. These results indicate that this system enables high-performance computing for large datasets using the PS3™, as long as data transfer time is negligible compared with computation time.
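    The core similarity test can be sketched compactly. The snippet below computes a peak normalized cross-correlation between two synthetic waveforms and applies a hypothetical 0.9 multiplet threshold; the RTMI's optimized PS3™ kernel and its exact threshold are not reproduced.

    ```python
    # Sketch of the core RTMI idea: two microearthquake waveforms form a
    # multiplet pair when their peak normalized cross-correlation is high.
    import numpy as np

    def max_norm_xcorr(a, b):
        a = (a - a.mean()) / (a.std() * len(a))
        b = (b - b.mean()) / b.std()
        return np.abs(np.correlate(a, b, mode="full")).max()

    rng = np.random.default_rng(1)
    t = np.linspace(0, 1, 500)
    event1 = np.sin(40 * t) * np.exp(-5 * t) + 0.05 * rng.normal(size=t.size)
    event2 = np.roll(event1, 30) + 0.05 * rng.normal(size=t.size)  # similar, shifted

    cc = max_norm_xcorr(event1, event2)
    print("peak correlation %.2f -> %s" % (cc, "multiplet" if cc > 0.9 else "distinct"))
    ```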

  20. Providing Assistive Technology Applications as a Service Through Cloud Computing.

    PubMed

    Mulfari, Davide; Celesti, Antonio; Villari, Massimo; Puliafito, Antonio

    2015-01-01

    Users with disabilities interact with Personal Computers (PCs) using Assistive Technology (AT) software solutions. Such applications run on a PC that a person with a disability commonly uses. However, the configuration of AT applications is not trivial, especially whenever the user needs to work on a PC that does not allow him/her to rely on his/her AT tools (e.g., at work, at university, in an Internet point). In this paper, we discuss how cloud computing provides a valid technological solution to enhance such a scenario. With the emergence of cloud computing, many applications are executed on top of virtual machines (VMs). Virtualization allows us to achieve a software implementation of a real computer able to execute a standard operating system and any kind of application. In this paper we propose to build personalized VMs running AT programs and settings. By using remote desktop technology, our solution enables users to control their customized virtual desktop environment by means of an HTML5-based web interface running on any computer equipped with a browser, wherever they are.

  1. GPU-accelerated FDTD modeling of radio-frequency field-tissue interactions in high-field MRI.

    PubMed

    Chi, Jieru; Liu, Feng; Weber, Ewald; Li, Yu; Crozier, Stuart

    2011-06-01

    The analysis of high-field RF field-tissue interactions requires high-performance finite-difference time-domain (FDTD) computing. Conventional CPU-based FDTD calculations offer limited computing performance in a PC environment. This study presents a graphics processing unit (GPU)-based parallel-computing framework that boosts computing efficiency substantially (a speedup of two orders of magnitude) at a PC-level cost. Specific details of implementing the FDTD method on a GPU architecture are presented, and the new computational strategy has been successfully applied to the design of a novel 8-element transceive RF coil system at 9.4 T. Facilitated by the powerful GPU-FDTD computing, the new RF coil array offers optimized fields (averaging 25% improvement in sensitivity and 20% reduction in loop coupling compared with conventional array structures of the same size) for small animal imaging with a robust RF configuration. The GPU-enabled acceleration paves the way for FDTD to be applied to both detailed forward modeling and inverse design of MRI coils, which were previously impractical.
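    To show the stencil structure that GPU frameworks parallelize, here is a minimal 1D FDTD loop in vacuum; the study's solver is full 3D with tissue models, so this CPU-side toy is only a structural sketch.

    ```python
    # A minimal 1D FDTD sketch: leapfrog updates of E and H on a staggered
    # grid in vacuum, the kind of per-cell update that maps well to GPUs.
    import numpy as np

    nx, nt = 200, 500
    c = 1.0                      # normalized Courant number (1D stability limit)
    E = np.zeros(nx)
    H = np.zeros(nx - 1)

    for n in range(nt):
        H += c * (E[1:] - E[:-1])              # update H from the curl of E
        E[1:-1] += c * (H[1:] - H[:-1])        # update E from the curl of H
        E[nx // 2] += np.exp(-((n - 30) / 10.0) ** 2)  # soft Gaussian source

    print("field energy ~ %.3f" % (np.sum(E**2) + np.sum(H**2)))
    ```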

  2. Wide-bandwidth high-resolution search for extraterrestrial intelligence

    NASA Technical Reports Server (NTRS)

    Horowitz, Paul

    1993-01-01

    A third antenna was added to the system. It is a terrestrial low-gain feed, to act as a veto for local interference. The 3-chip design for a 4 megapoint complex FFT was reduced to finished working hardware. The 4-Megachannel circuit board contains 36 MByte of DRAM, 5 CPLDs, the three large FFT ASICs, and 74 ICs in all. The Austek FDP-based Spectrometer/Power Accumulator (SPA) has now been implemented as a 4-layer printed circuit. A PC interface board has been designed and together with its associated user interface and control software allows an IBM compatible computer to control the SPA board, and facilitates the transfer of spectra to the PC for display, processing, and storage. The Feature Recognizer Array cards receive the stream of modulus words from the 4M FFT cards, and forward a greatly thinned set of reports to the PC's in whose backplane they reside. In particular, a powerful ROM-based state-machine architecture has been adopted, and DRAM has been added to permit integration modes when tracking or reobserving source candidates. The general purpose (GP) array consists of twenty '486 PC class computers, each of which receives and processes the data from a feature extractor/correlator board set. The array performs a first analysis on the provided 'features' and then passes this information on to the workstation. The core workstation software is now written. That is, the communication channels between the user interface, the backend monitor program and the PC's have working software.

  3. Microcosm to Cosmos: The Growth of a Divisional Computer Network

    PubMed Central

    Johannes, R.S.; Kahane, Stephen N.

    1987-01-01

    In 1982, we reported the deployment of a network of microcomputers in the Division of Gastroenterology[1]. This network was based upon Corvus Systems Omninet®. Corvus was one of the very first firms to offer networking products for PCs. This PC development occurred coincident with the planning phase of the Johns Hopkins Hospital's multisegment ethernet project. A rich communications infrastructure is now in place at the Johns Hopkins Medical Institutions[2,3]. Shortly after hospital development began under the direction of the Operational and Clinical Systems Division (OCS), the Johns Hopkins School of Medicine began an Integrated Academic Information Management Systems (IAIMS) planning effort. We now present a model that uses aspects of all three planning efforts (PC networks, Hospital Information Systems & IAIMS) to build a divisional computing facility. This facility is viewed as a terminal leaf on the institutional network diagram. Nevertheless, it is noteworthy that this leaf, the divisional resource in the Division of Gastroenterology (GASNET), has a rich substructure and functionality of its own, perhaps revealing the recursive nature of network architecture. The current status, design and function of the GASNET computational facility are discussed. Among the major positive aspects of this design are the sharing and centralization of MS-DOS software and the high-speed DOS/Unix link that makes available most of our institution's computing resources.

  4. SWPS3 - fast multi-threaded vectorized Smith-Waterman for IBM Cell/B.E. and x86/SSE2.

    PubMed

    Szalkowski, Adam; Ledergerber, Christian; Krähenbühl, Philipp; Dessimoz, Christophe

    2008-10-29

    We present swps3, a vectorized implementation of the Smith-Waterman local alignment algorithm optimized for both the Cell/BE and x86 architectures. The paper describes swps3 and compares its performances with several other implementations. Our benchmarking results show that swps3 is currently the fastest implementation of a vectorized Smith-Waterman on the Cell/BE, outperforming the only other known implementation by a factor of at least 4: on a Playstation 3, it achieves up to 8.0 billion cell-updates per second (GCUPS). Using the SSE2 instruction set, a quad-core Intel Pentium can reach 15.7 GCUPS. We also show that swps3 on this CPU is faster than a recent GPU implementation. Finally, we note that under some circumstances, alignments are computed at roughly the same speed as BLAST, a heuristic method. The Cell/BE can be a powerful platform to align biological sequences. Besides, the performance gap between exact and heuristic methods has almost disappeared, especially for long protein sequences.
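    For reference, a plain scalar version of the recurrence that swps3 vectorizes is sketched below, with a linear (not affine) gap penalty and arbitrary scores; the SIMD layout and the affine gap model of the actual tool are not reproduced.

    ```python
    # A plain, scalar Smith-Waterman local alignment sketch. swps3 computes the
    # same recurrence, but vectorized (SSE2 / Cell SPU) and with affine gaps.
    import numpy as np

    def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
        H = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                s = match if a[i - 1] == b[j - 1] else mismatch
                H[i, j] = max(0,                    # local alignment floor
                              H[i - 1, j - 1] + s,  # substitution
                              H[i - 1, j] + gap,    # deletion
                              H[i, j - 1] + gap)    # insertion
        return H.max()

    print(smith_waterman("HEAGAWGHEE", "PAWHEAE"))  # best local score
    ```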

  5. SWPS3 – fast multi-threaded vectorized Smith-Waterman for IBM Cell/B.E. and x86/SSE2

    PubMed Central

    Szalkowski, Adam; Ledergerber, Christian; Krähenbühl, Philipp; Dessimoz, Christophe

    2008-01-01

    Background We present swps3, a vectorized implementation of the Smith-Waterman local alignment algorithm optimized for both the Cell/BE and x86 architectures. The paper describes swps3 and compares its performances with several other implementations. Findings Our benchmarking results show that swps3 is currently the fastest implementation of a vectorized Smith-Waterman on the Cell/BE, outperforming the only other known implementation by a factor of at least 4: on a Playstation 3, it achieves up to 8.0 billion cell-updates per second (GCUPS). Using the SSE2 instruction set, a quad-core Intel Pentium can reach 15.7 GCUPS. We also show that swps3 on this CPU is faster than a recent GPU implementation. Finally, we note that under some circumstances, alignments are computed at roughly the same speed as BLAST, a heuristic method. Conclusion The Cell/BE can be a powerful platform to align biological sequences. Besides, the performance gap between exact and heuristic methods has almost disappeared, especially for long protein sequences. PMID:18959793

  6. A System-on-Chip Solution for Point-of-Care Ultrasound Imaging Systems: Architecture and ASIC Implementation.

    PubMed

    Kang, Jeeun; Yoon, Changhan; Lee, Jaejin; Kye, Sang-Bum; Lee, Yongbae; Chang, Jin Ho; Kim, Gi-Duck; Yoo, Yangmo; Song, Tai-kyong

    2016-04-01

    In this paper, we present a novel system-on-chip (SOC) solution for a portable ultrasound imaging system (PUS) for point-of-care applications. The PUS-SOC includes all of the signal processing modules (i.e., the transmit and dynamic receive beamformer modules, mid- and back-end processors, and color Doppler processors) as well as an efficient architecture for hardware-based imaging methods (e.g., dynamic delay calculation, multi-beamforming, and coded excitation and compression). The PUS-SOC was fabricated using a UMC 130-nm NAND process and has 16.8 GFLOPS of computing power with a total equivalent gate count of 12.1 million, which is comparable to a Pentium-4 CPU. The size and power consumption of the PUS-SOC are 27 × 27 mm² and 1.2 W, respectively. Based on the PUS-SOC, a prototype hand-held US imaging system was implemented. Phantom experiments demonstrated that the PUS-SOC can provide appropriate image quality for point-of-care applications with a compact PDA size (200 × 120 × 45 mm³) and 3 hours of battery life.

  7. Real-time detection of moving objects from moving vehicles using dense stereo and optical flow

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Matthies, Larry

    2004-01-01

    Dynamic scene perception is very important for autonomous vehicles operating around other moving vehicles and humans. Most work on real-time object tracking from moving platforms has used sparse features or assumed flat scene structures. We have recently extended a real-time, dense stereo system to include real-time, dense optical flow, enabling more comprehensive dynamic scene analysis. We describe algorithms to robustly estimate 6-DOF robot egomotion in the presence of moving objects using dense flow and dense stereo. We then use dense stereo and egomotion estimates to identify other moving objects while the robot itself is moving. We present results showing accurate egomotion estimation and detection of moving people and vehicles under general 6-DOF motion of the robot and independently moving objects. The system runs at 18.3 Hz on a 1.4 GHz Pentium M laptop, computing 160x120 disparity maps and optical flow fields, egomotion, and moving object segmentation. We believe this is a significant step toward general unconstrained dynamic scene analysis for mobile robots, as well as for improved position estimation where GPS is unavailable.

  8. Real-time detection of moving objects from moving vehicles using dense stereo and optical flow

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Matthies, Larry

    2004-01-01

    Dynamic scene perception is very important for autonomous vehicles operating around other moving vehicles and humans. Most work on real-time object tracking from moving platforms has used sparse features or assumed flat scene structures. We have recently extended a real-time, dense stereo system to include real-time, dense optical flow, enabling more comprehensive dynamic scene analysis. We describe algorithms to robustly estimate 6-DOF robot egomotion in the presence of moving objects using dense flow and dense stereo. We then use dense stereo and egomotion estimates to identify other moving objects while the robot itself is moving. We present results showing accurate egomotion estimation and detection of moving people and vehicles under general 6-DOF motion of the robot and independently moving objects. The system runs at 18.3 Hz on a 1.4 GHz Pentium M laptop, computing 160x120 disparity maps and optical flow fields, egomotion, and moving object segmentation. We believe this is a significant step toward general unconstrained dynamic scene analysis for mobile robots, as well as for improved position estimation where GPS is unavailable.

  9. Real-time Detection of Moving Objects from Moving Vehicles Using Dense Stereo and Optical Flow

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Matthies, Larry

    2004-01-01

    Dynamic scene perception is very important for autonomous vehicles operating around other moving vehicles and humans. Most work on real-time object tracking from moving platforms has used sparse features or assumed flat scene structures. We have recently extended a real-time, dense stereo system to include real-time, dense optical flow, enabling more comprehensive dynamic scene analysis. We describe algorithms to robustly estimate 6-DOF robot egomotion in the presence of moving objects using dense flow and dense stereo. We then use dense stereo and egomotion estimates to identify other moving objects while the robot itself is moving. We present results showing accurate egomotion estimation and detection of moving people and vehicles under general 6-DOF motion of the robot and independently moving objects. The system runs at 18.3 Hz on a 1.4 GHz Pentium M laptop, computing 160x120 disparity maps and optical flow fields, egomotion, and moving object segmentation. We believe this is a significant step toward general unconstrained dynamic scene analysis for mobile robots, as well as for improved position estimation where GPS is unavailable.

  10. Vehicle counting system using real-time video processing

    NASA Astrophysics Data System (ADS)

    Crisóstomo-Romero, Pedro M.

    2006-02-01

    Transit studies are important for planning a road network with optimal vehicular flow. A vehicular count is essential. This article presents a vehicle counting system based on video processing. An advantage of such a system is the greater detail it can obtain, such as the shape, size and speed of vehicles. The system uses a video camera placed above the street to image transit in real time. The video camera must be placed at least 6 meters above the street level to achieve proper acquisition quality. Fast image processing algorithms and small image dimensions are used to allow real-time processing. Digital filters, mathematical morphology, segmentation and other techniques allow identifying and counting all vehicles in the image sequences. The system was implemented under Linux on a 1.8 GHz Pentium 4 computer. A successful count was obtained with frame rates of 15 frames per second for images of size 240x180 pixels and 24 frames per second for images of size 180x120 pixels, thus being able to count vehicles whose speeds do not exceed 150 km/h.
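    A modern sketch of the described pipeline, using OpenCV (which postdates the paper's custom implementation): background subtraction, morphological cleanup, and blob counting per frame. The video path and the area threshold are placeholders.

    ```python
    # Hedged sketch of a per-frame vehicle-counting chain with OpenCV.
    import cv2

    cap = cv2.VideoCapture("traffic.avi")            # placeholder input video
    subtractor = cv2.createBackgroundSubtractorMOG2()
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)                          # moving pixels
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove speckle
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        vehicles = [c for c in contours if cv2.contourArea(c) > 400]
        print("vehicles in frame:", len(vehicles))
    cap.release()
    ```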

  11. Real-time spectral analysis of HRV signals: an interactive and user-friendly PC system.

    PubMed

    Basano, L; Canepa, F; Ottonello, P

    1998-01-01

    We present a real-time system, built around a PC and a low-cost data acquisition board, for the spectral analysis of the heart rate variability signal. The Windows-like operating environment on which it is based makes the computer program very user-friendly even for non-specialized personnel. The Power Spectral Density is computed through the use of a hybrid method, in which a classical FFT analysis follows an autoregressive finite-extension of data; the stationarity of the sequence is continuously checked. The use of this algorithm gives a high degree of robustness of the spectral estimation. Moreover, always in real time, the FFT of every data block is computed and displayed in order to corroborate the results as well as to allow the user to interactively choose a proper AR model order.
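    The hybrid estimator can be sketched as follows, assuming a synthetic RR series, an arbitrary AR order, and a least-squares AR fit; the paper's exact estimator and its stationarity check are not reproduced.

    ```python
    # Sketch of the hybrid PSD idea: fit an autoregressive (AR) model to the
    # RR series, extend the data with the model's forecast, then FFT the
    # extended sequence. Order, lengths, and the fit method are illustrative.
    import numpy as np

    rng = np.random.default_rng(2)
    rr = 0.8 + 0.05 * np.sin(2 * np.pi * 0.1 * np.arange(256)) \
             + 0.01 * rng.normal(size=256)          # synthetic RR series, s

    p = 8                                           # AR model order
    x = rr - rr.mean()
    # Least-squares fit of x[n] = sum_k a[k] * x[n - k - 1], k = 0..p-1
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    a, *_ = np.linalg.lstsq(X, x[p:], rcond=None)

    ext = list(x)
    for _ in range(128):                            # AR-based finite extension
        ext.append(np.dot(a, ext[-1:-p - 1:-1]))

    psd = np.abs(np.fft.rfft(ext)) ** 2 / len(ext)  # FFT of extended data
    f = np.fft.rfftfreq(len(ext), d=1.0)            # frequency in cycles/beat
    print("spectral peak at %.3f cycles/beat" % f[np.argmax(psd[1:]) + 1])
    ```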

  12. A PC-based generator of surface ECG potentials for computer electrocardiograph testing.

    PubMed

    Franchi, D; Palagi, G; Bedini, R

    1994-02-01

    The system is composed of an electronic circuit, connected to a PC, whose outputs, starting from ECGs digitally collected by commercial interpretative electrocardiographs, simulate virtual patients' limb and chest electrode potentials. Appropriate software manages the D/A conversion and lines up the original short-term signal in a ring buffer to generate continuous ECG traces. The device also permits the addition of artifacts and/or baseline wanders/shifts on each lead separately. The system has been accurately tested and statistical indexes have been computed to quantify the reproduction accuracy analyzing, in the generated signal, both the errors induced on the fiducial point measurements and the capability to retain the diagnostic significance. The device integrated with an annotated ECG data base constitutes a reliable and powerful system to be used in the quality assurance testing of computer electrocardiographs.

  13. The USL NASA PC R and D project: Detailed specifications of objects

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Chum, Frank Y.; Hall, Philip P.; Moreau, Dennis R.; Triantafyllopoulos, Spiros

    1984-01-01

    The specifications for a number of projects which are to be implemented within the University of Southwestern Louisiana NASA PC R and D Project are discussed. The goals and objectives of the PC development project and the interrelationships of the various components are discussed. Six projects are described. They are a NASA/RECON simulator, a user interface to multiple remote information systems, evaluation of various personal computer systems, statistical analysis software development, interactive presentation system development, and the development of a distributed processing environment. The relationships of these projects to one another and to the goals and objectives of the overall project are discussed.

  14. High performance, low cost, self-contained, multipurpose PC based ground systems

    NASA Technical Reports Server (NTRS)

    Forman, Michael; Nickum, William; Troendly, Gregory

    1993-01-01

    The use of embedded processors greatly enhances the capabilities of personal computers when used for telemetry processing and command control center functions. Parallel architectures based on the use of transputers are shown to be very versatile and reusable, and the synergism between the PC and the embedded processor with transputers results in single-unit, low-cost workstations of 20 < MIPS ≤ 1000.

  15. Predictive Models for Dynamic Brittle Fracture and Damage at High-velocity Impact in Multilayered Targets

    DTIC Science & Technology

    2016-11-01

    multi-layered glass/PC systems, Functionally Graded Materials (FGMs), polycrystalline AlON, and fiber-reinforced composite (FRC) materials. For the first time we... Composite Lamina with Peridynamics, International Journal for Multiscale Computational Engineering, (12 2011). Florin Bobaru, Youn Doh Ha

  16. Teaching mathematics in the PC lab - the students' viewpoints

    NASA Astrophysics Data System (ADS)

    Schmidt, Karsten; Köhler, Anke

    2013-04-01

    The Matrix Algebra portion of the intermediate mathematics course at the Schmalkalden University Faculty of Business and Economics has been moved from a traditional classroom setting to a technology-based setting in the PC lab. A Computer Algebra System license was acquired that also allows its use on the students' own PCs. A survey was carried out to analyse the students' attitudes towards the use of technology in mathematics teaching.

  17. PC/AT-based architecture for shared telerobotic control

    NASA Astrophysics Data System (ADS)

    Schinstock, Dale E.; Faddis, Terry N.; Barr, Bill G.

    1993-03-01

    A telerobotic control system must include teleoperational, shared, and autonomous modes of control in order to provide a robot platform for incorporating the rapid advances that are occurring in telerobotics and associated technologies. These modes along with the ability to modify the control algorithms are especially beneficial for telerobotic control systems used for research purposes. The paper describes an application of the PC/AT platform to the control system of a telerobotic test cell. The paper provides a discussion of the suitability of the PC/AT as a platform for a telerobotic control system. The discussion is based on the many factors affecting the choice of a computer platform for a real time control system. The factors include I/O capabilities, simplicity, popularity, computational performance, and communication with external systems. The paper also includes a description of the actuation, measurement, and sensor hardware of both the master manipulator and the slave robot. It also includes a description of the PC-Bus interface cards. These cards were developed by the researchers in the KAT Laboratory, specifically for interfacing to the master manipulator and slave robot. Finally, a few different versions of the low level telerobotic control software are presented. This software incorporates shared control by supervisory systems and the human operator and traded control between supervisory systems and the human operator.

  18. The impact of Internet and PC addiction in school performance of Cypriot adolescents.

    PubMed

    Siomos, Konstantinos; Paradeisioti, Anna; Hadjimarcou, Michalis; Mappouras, Demetrios G; Kalakouta, Olga; Avagianou, Penelope; Floros, Georgios

    2013-01-01

    In this paper we present the results of a cross-sectional survey designed to ascertain Internet and personal computer (PC) addiction in the Republic of Cyprus. This is a follow-up to a pilot study conducted one year earlier. Data were collected from a representative sample of the adolescent student population of the first and fourth grades of high school. The total sample was 2684 students, 48.5% of them male and 51.5% female. Research materials included extended demographics, an Internet security questionnaire, Young's Diagnostic Questionnaire (YDQ), and the Adolescent Computer Addiction Test (ACAT). Results indicated that the Cypriot population had addiction statistics comparable to other Greek-speaking populations in Greece; 15.3% of the students were classified as Internet addicted by their YDQ scores and 16.3% as PC addicted by their ACAT scores. Those results are among the highest in Europe. Our results were alarming and have led to the creation of an Internet and PC addiction prevention program which will focus on high-school professor training and the creation of appropriate prevention material for all high schools, starting immediately after the conclusion of the pan-Cypriot survey and focusing especially on those areas where the frequency of addictive behaviors will be highest.

  19. Muscle Velocity and Inertial Force from Phase Contrast Magnetic Resonance Imaging

    PubMed Central

    Wentland, Andrew L.; McWalter, Emily J.; Pal, Saikat; Delp, Scott L.; Gold, Garry E.

    2014-01-01

    Purpose To evaluate velocity waveforms in muscle and to create a tool and algorithm for computing and analyzing muscle inertial forces derived from 2D phase contrast (PC) MRI. Materials and Methods PC MRI was performed in the forearm of four healthy volunteers during 1 Hz cycles of wrist flexion-extension as well as in the lower leg of six healthy volunteers during 1 Hz cycles of plantarflexion-dorsiflexion. Inertial forces (F) were derived via the equation F = ma. The mass, m, was derived by multiplying voxel volume by voxel-by-voxel estimates of density via fat-water separation techniques. Acceleration, a, was obtained via the derivative of the PC MRI velocity waveform. Results Mean velocities in the flexors of the forearm and lower leg were 1.94 ± 0.97 cm/s and 5.57 ± 2.72 cm/s, respectively, as averaged across all subjects; the inertial forces in the flexors of the forearm and lower leg were 1.9 × 10^-3 ± 1.3 × 10^-3 N and 1.1 × 10^-2 ± 6.1 × 10^-3 N, respectively, as averaged across all subjects. Conclusion PC MRI provided a promising means of computing muscle velocities and inertial forces, providing the first method for quantifying inertial forces. PMID:25425185
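    The force computation reduces to a few lines. The sketch below applies F = ma with mass from voxel volume times an assumed muscle density and acceleration from the numerical derivative of a synthetic velocity waveform; none of the numbers are the study's data.

    ```python
    # Sketch of the described computation: F = m * a, with mass from voxel
    # volume times density and acceleration from the time derivative of the
    # PC MRI velocity waveform. All values below are illustrative.
    import numpy as np

    voxel_volume = (1.0e-3) ** 3       # 1 mm isotropic voxel, m^3
    density = 1060.0                   # approximate muscle density, kg/m^3
    n_voxels = 5000                    # voxels in the muscle region (assumed)
    mass = n_voxels * voxel_volume * density

    t = np.linspace(0.0, 1.0, 40)                 # one 1 Hz motion cycle
    v = 0.02 * np.sin(2 * np.pi * t)              # velocity waveform, m/s
    a = np.gradient(v, t)                         # acceleration, m/s^2
    F = mass * a                                  # inertial force, N
    print("peak inertial force: %.2e N" % np.abs(F).max())
    ```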

  20. Paradigm Paralysis and the Plight of the PC in Education.

    ERIC Educational Resources Information Center

    O'Neil, Mick

    1998-01-01

    Examines the varied factors involved in providing Internet access in K-12 education, including expense, computer installation and maintenance, and security, and explores how the network computer could be useful in this context. Operating systems and servers are discussed. (MSE)

  1. On the predictive potential of Pc5 ULF waves to forecast relativistic electrons based on their relationships over two solar cycles

    NASA Astrophysics Data System (ADS)

    Lam, Hing-Lan

    2017-01-01

    A statistical study of relativistic electron (>2 MeV) fluence derived from geosynchronous satellites and Pc5 ultralow frequency (ULF) wave power computed from ground magnetic observatory data recorded in Canada's auroral zone has been carried out. The ground observations were made near the foot points of field lines passing through the GOES satellites from 1987 to 2009 (cycles 22 and 23). We determine statistical relationships between the two quantities for different phases of a solar cycle and validate these relationships in two different cycles. There is a positive linear relationship between log fluence and log Pc5 power for all solar phases; however, the power law indices vary for different phases of the cycle. High index values existed during the descending phase. The Pearson cross-correlation between electron fluence and Pc5 power indicates fluence enhancement 2-3 days after strong Pc5 wave activity for all solar phases. The lag between the two quantities is shorter for extremely high fluence (due to high Pc5 power), which tends to occur during the declining phases of both cycles. Most occurrences of extremely low fluence were observed during the extended solar minimum of cycle 23. The precursory attribute of Pc5 power with respect to fluence and the enhancement of fluence due to rising Pc5 power both support the notion of an electron acceleration mechanism by Pc5 ULF waves. This precursor behavior establishes the potential of using Pc5 power to predict relativistic electron fluence.
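    The lag analysis can be illustrated with a lagged Pearson correlation, as below; the series are synthetic, constructed so that fluence follows Pc5 power with a 2-day delay.

    ```python
    # Sketch of the lag analysis: Pearson correlation between daily log Pc5
    # power and daily log electron fluence over a range of lags.
    import numpy as np

    rng = np.random.default_rng(3)
    days = 365
    log_pc5 = rng.normal(size=days)
    log_fluence = np.roll(log_pc5, 2) + 0.3 * rng.normal(size=days)  # 2-day lag

    def lagged_r(x, y, lag):
        # correlate x(t) with y(t + lag); positive lag means x leads y
        if lag > 0:
            x, y = x[:-lag], y[lag:]
        return np.corrcoef(x, y)[0, 1]

    for lag in range(0, 6):
        print("lag %d days: r = %.2f" % (lag, lagged_r(log_pc5, log_fluence, lag)))
    ```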

  2. Does long time spending on the electronic devices affect the reading abilities? A cross-sectional study among Chinese school-aged children.

    PubMed

    He, Zhen; Shao, Shanshan; Zhou, Jie; Ke, Juntao; Kong, Rui; Guo, Shengnan; Zhang, Jiajia; Song, Ranran

    2014-12-01

    Home literacy environment (HLE) is one of the most important modifiable risk factors for dyslexia. With developments in technology, we include electronic device usage at home, such as computers and televisions, in the definition of HLE and investigate its impact on dyslexia based on the ongoing Tongji Reading Environment and Dyslexia Study. The data include 5063 children, primary school students (grades 3-6), from a middle-sized city in China. We apply principal component analysis (PCA) to reduce the large dimension of variables in the HLE, and find that the first three components, denoted PC1, PC2 and PC3, explain 95.45% of the HLE information. PC1 and PC2 demonstrate strong positive associations with 'total time spending on electronic devices' and 'literacy-related activity', respectively. PC3 demonstrates a strong negative association with 'restrictions on using electronic devices'. From the generalized linear model, we find that PC1 significantly increases the risk of dyslexia (OR = 1.043, 95% CI: 1.018-1.070), while PC2 significantly decreases the risk of dyslexia (OR = 0.839, 95% CI: 0.795-0.886). Therefore, reducing the total time spent on electronic devices and increasing literacy-related activity would be potential protective factors for dyslexic children in China. Copyright © 2014 Elsevier Ltd. All rights reserved.
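    A minimal sketch of this pipeline, with synthetic placeholders for the HLE variables and outcome: standardize, run PCA, and use the leading component scores in a logistic model.

    ```python
    # Sketch of the analysis pipeline: PCA on standardized HLE variables, then
    # leading components as predictors of dyslexia. Data are placeholders.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(4)
    n = 500
    hle = rng.normal(size=(n, 8))          # 8 hypothetical HLE variables
    dyslexia = rng.integers(0, 2, size=n)  # placeholder binary outcome

    X = StandardScaler().fit_transform(hle)
    pca = PCA(n_components=3).fit(X)
    print("variance explained:", pca.explained_variance_ratio_.round(3))

    scores = pca.transform(X)              # PC1-PC3 scores per child
    model = LogisticRegression().fit(scores, dyslexia)
    print("odds ratios per PC:", np.exp(model.coef_).round(3))
    ```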

  3. The automation of an inlet mass flow control system

    NASA Technical Reports Server (NTRS)

    Supplee, Frank; Tcheng, Ping; Weisenborn, Michael

    1989-01-01

    The automation of a closed-loop computer-controlled system for the inlet mass flow system (IMFS) developed for a wind tunnel facility at Langley Research Center is presented. This new PC-based control system is intended to replace the manual control system presently in use in order to fully automate the plug positioning of the IMFS during wind tunnel testing. Provision is also made for communication between the PC and a host computer in order to allow total automation of the plug positioning and data acquisition during the complete sequence of predetermined plug locations. As extensive running time is programmed for the IMFS, this new automated system will save both manpower and tunnel running time.

  4. Second-Language Composition Instruction, Computers and First-Language Pedagogy: A Descriptive Survey.

    ERIC Educational Resources Information Center

    Harvey, T. Edward

    1987-01-01

    A national survey of full-time instructional faculty (N=208) at universities, 2-year colleges, and high schools regarding attitudes toward using computers in second-language composition instruction revealed that Apple and IBM-PC computers predominated, that the lack of foreign character support was a major frustration, and mixed opinions about real…

  5. Computer Aided Drafting Packages for Secondary Education. Edition 2. PC DOS Compatible Programs. A MicroSIFT Quarterly Report.

    ERIC Educational Resources Information Center

    Pollard, Jim

    This report reviews eight IBM-compatible software packages that are available to secondary schools to teach computer-aided drafting (CAD). Software packages to be considered were selected following reviews of CAD periodicals, computers in education periodicals, advertisements, and recommendations of teachers. The packages were then rated by…

  6. Impacts of Mobile Computing on Student Learning in the University: A Comparison of Course Assessment Data

    ERIC Educational Resources Information Center

    Hawkes, Mark; Hategekimana, Claver

    2010-01-01

    This study focuses on the impact of wireless, mobile computing tools on student assessment outcomes. In a campus-wide wireless, mobile computing environment at an upper Midwest university, an empirical analysis is applied to understand the relationship between student performance and Tablet PC use. An experimental/control group comparison of…

  7. Enhancing Engineering Computer-Aided Design Education Using Lectures Recorded on the PC

    ERIC Educational Resources Information Center

    McGrann, Roy T. R.

    2006-01-01

    Computer-Aided Engineering (CAE) is a course that is required during the third year in the mechanical engineering curriculum at Binghamton University. The primary objective of the course is to educate students in the procedures of computer-aided engineering design. The solid modeling and analysis program Pro/Engineer[TM] (PTC[R]) is used as the…

  8. NFDRSPC: The National Fire-Danger Rating System on a Personal Computer

    Treesearch

    Bryan G. Donaldson; James T. Paul

    1990-01-01

    This user's guide is an introductory manual for using the 1988 version (Burgan 1988) of the National Fire-Danger Rating System on an IBM PC or compatible computer. NFDRSPC is a window-oriented, interactive computer program that processes observed and forecast weather with fuels data to produce NFDRS indices. Other program features include user-designed display...

  9. First Order Fire Effects Model: FOFEM 4.0, user's guide

    Treesearch

    Elizabeth D. Reinhardt; Robert E. Keane; James K. Brown

    1997-01-01

    A First Order Fire Effects Model (FOFEM) was developed to predict the direct consequences of prescribed fire and wildfire. FOFEM computes duff and woody fuel consumption, smoke production, and fire-caused tree mortality for most forest and rangeland types in the United States. The model is available as a computer program for PC or Data General computer.

  10. Benchmarking and tuning the MILC code on clusters and supercomputers

    NASA Astrophysics Data System (ADS)

    Gottlieb, Steven

    2002-03-01

    Recently, we have benchmarked and tuned the MILC code on a number of architectures including Intel Itanium and Pentium IV (PIV), dual-CPU Athlon, and the latest Compaq Alpha nodes. Results will be presented for many of these, and we shall discuss some simple code changes that can result in a very dramatic speedup of the KS conjugate gradient on processors with more advanced memory systems such as PIV, IBM SP and Alpha.
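    For orientation, the serial skeleton of a conjugate-gradient solver is sketched below on a dense SPD test matrix; the KS solver applies the sparse staggered Dirac operator instead, and the tuning discussed concerns exactly this inner loop.

    ```python
    # A plain conjugate-gradient solver for a symmetric positive-definite
    # system, the serial skeleton of the KS conjugate gradient discussed.
    import numpy as np

    def cg(A, b, tol=1e-10, max_iter=1000):
        x = np.zeros_like(b)
        r = b - A @ x                     # initial residual
        p = r.copy()
        rs = r @ r
        for _ in range(max_iter):
            Ap = A @ p                    # the matrix-vector product dominates
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x

    rng = np.random.default_rng(5)
    M = rng.normal(size=(50, 50))
    A = M @ M.T + 50 * np.eye(50)         # SPD test matrix
    b = rng.normal(size=50)
    x = cg(A, b)
    print("residual:", np.linalg.norm(A @ x - b))
    ```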

  11. Benchmarking and tuning the MILC code on clusters and supercomputers

    NASA Astrophysics Data System (ADS)

    Gottlieb, Steven

    Recently, we have benchmarked and tuned the MILC code on a number of architectures including Intel Itanium and Pentium IV (PIV), dual-CPU Athlon, and the latest Compaq Alpha nodes. Results will be presented for many of these, and we shall discuss some simple code changes that can result in a very dramatic speedup of the KS conjugate gradient on processors with more advanced memory systems such as PIV, IBM SP and Alpha.

  12. Defect detection of castings in radiography images using a robust statistical feature.

    PubMed

    Zhao, Xinyue; He, Zaixing; Zhang, Shuyou

    2014-01-01

    One of the most commonly used optical methods for defect detection is radiographic inspection. Compared with methods that extract defects directly from the radiography image, model-based methods deal with the case of an object with complex structure well. However, detection of small low-contrast defects in nonuniformly illuminated images is still a major challenge for them. In this paper, we present a new method based on the grayscale arranging pairs (GAP) feature to detect casting defects in radiography images automatically. First, a model is built using pixel pairs with a stable intensity relationship based on the GAP feature from previously acquired images. Second, defects can be extracted by comparing the difference of intensity-difference signs between the input image and the model statistically. The robustness of the proposed method to noise and illumination variations has been verified on casting radioscopic images with defects. The experimental results showed that the average computation time of the proposed method in the testing stage is 28 ms per image on a computer with a Pentium Core 2 Duo 3.00 GHz processor. For the comparison, we also evaluated the performance of the proposed method as well as that of the mixture-of-Gaussian-based and crossing line profile methods. The proposed method achieved 2.7% and 2.0% false negative rates in the noise and illumination variation experiments, respectively.
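    The abstract's summary of the GAP feature suggests the following illustrative sketch: learn pixel pairs with a stable intensity-difference sign from clean images, then vote for defect pixels where the sign flips. Pair sampling and the voting rule are simplified assumptions, not the paper's exact procedure.

    ```python
    # Illustrative GAP-style sketch: stable-sign pixel pairs from clean
    # training images; sign violations in a test image vote for defects.
    import numpy as np

    rng = np.random.default_rng(6)
    train = rng.normal(100, 3, (20, 32, 32)) + np.linspace(0, 20, 32)  # clean images
    test = train[0].copy()
    test[10:13, 10:13] -= 40.0                     # simulated low-contrast defect

    # Sample candidate pixel pairs; keep those whose difference sign is stable.
    idx = rng.integers(0, 32, size=(4000, 4))      # rows: (r1, c1, r2, c2)
    diffs = train[:, idx[:, 0], idx[:, 1]] - train[:, idx[:, 2], idx[:, 3]]
    stable = np.all(np.sign(diffs) == np.sign(diffs[0]), axis=0)
    pairs, signs = idx[stable], np.sign(diffs[0])[stable]

    # In the test image, count sign violations per pixel as defect votes.
    test_diff = test[pairs[:, 0], pairs[:, 1]] - test[pairs[:, 2], pairs[:, 3]]
    violated = pairs[np.sign(test_diff) != signs]
    votes = np.zeros((32, 32))
    np.add.at(votes, (violated[:, 0], violated[:, 1]), 1)
    np.add.at(votes, (violated[:, 2], violated[:, 3]), 1)
    print("most-voted pixel (row, col):", np.unravel_index(votes.argmax(), votes.shape))
    ```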

  13. Haptic feedback for virtual assembly

    NASA Astrophysics Data System (ADS)

    Luecke, Greg R.; Zafer, Naci

    1998-12-01

    Assembly operations require high speed and precision at low cost. The manufacturing industry has recently turned attention to the possibility of investigating assembly procedures using graphical displays of CAD parts. For these tasks, some form of feedback to the person is invaluable in providing a real sense of interaction with virtual parts. This research develops the use of a commercial assembly robot as the haptic display in such tasks. For demonstration, a peg-hole insertion task is studied. Kane's method is employed to derive the dynamics of the peg and the contact motions between the peg and the hole. A handle modeled as a cylindrical peg is attached to the end effector of a PUMA 560 robotic arm, which is equipped with a six-axis force/torque transducer. The user grabs the handle and the user-applied forces are recorded. A 300 MHz Pentium computer is used to simulate the dynamics of the virtual peg and its interactions as it is inserted into the virtual hole. Computed torque control is then employed to exert the full dynamics of the task on the user's hand. Visual feedback is also incorporated to help the user in the process of inserting the peg into the hole. Experimental results are presented for several contact configurations of this virtually simulated task.
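    Computed torque control itself is compact. The 1-DOF toy below imposes PD error dynamics by canceling a modeled inertia and gravity term; it stands in for, and greatly simplifies, the PUMA 560 dynamics used in the paper.

    ```python
    # Sketch of computed-torque control for a single joint:
    # tau = M(q) * (qdd_des + Kd * ed + Kp * e) + gravity term.
    import numpy as np

    M, g_term = 0.5, 2.0          # toy inertia and constant gravity torque
    Kp, Kd = 100.0, 20.0          # PD gains on tracking error
    dt, q, qd = 0.001, 0.0, 0.0   # state: position (rad), velocity (rad/s)

    for step in range(2000):
        t = step * dt
        q_des, qd_des, qdd_des = np.sin(t), np.cos(t), -np.sin(t)
        e, ed = q_des - q, qd_des - qd
        tau = M * (qdd_des + Kd * ed + Kp * e) + g_term   # computed torque
        qdd = (tau - g_term) / M                          # plant dynamics
        qd += qdd * dt
        q += qd * dt

    print("tracking error at t=2 s: %.2e rad" % abs(np.sin(2.0) - q))
    ```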

  14. DFLOW USER'S MANUAL

    EPA Science Inventory

    DFLOW is a computer program for estimating design stream flows for use in water quality studies. The manual describes the use of the program on both the EPA's IBM mainframe system and on a personal computer (PC). The mainframe version of DFLOW can extract a river's daily flow rec...

  15. Speckle interferometry. Data acquisition and control for the SPID instrument.

    NASA Astrophysics Data System (ADS)

    Altarac, S.; Tallon, M.; Thiebaut, E.; Foy, R.

    1998-08-01

    SPID (SPeckle Imaging by Deconvolution) is a new speckle camera currently under construction at CRAL-Observatoire de Lyon. Its high spectral resolution and high image restoration capabilities open new astrophysical programs. The instrument SPID is composed of four main optical modules which are fully automated and computer controlled by software written in Tcl/Tk/Tix and C. This software provides intelligent assistance to the user by choosing observational parameters as a function of atmospheric parameters, computed in real time, and the desired restored image quality. Data acquisition is performed by a photon-counting detector (CP40). A VME-based computer under OS9 controls the detector and stores the data. The intelligent system runs under Linux on a PC. A slave PC under DOS commands the motors. These three computers communicate through an Ethernet network. SPID can be considered a precursor of the very high spatial resolution camera of the VLT (Very Large Telescope, four 8-meter telescopes currently being built in Chile by the European Southern Observatory).

  16. Visualization of Morse connection graphs for topologically rich 2D vector fields.

    PubMed

    Szymczak, Andrzej; Sipeki, Levente

    2013-12-01

    Recent advances in vector field topology make it possible to compute its multi-scale graph representations for autonomous 2D vector fields in a robust and efficient manner. One of these representations is a Morse Connection Graph (MCG), a directed graph whose nodes correspond to Morse sets, generalizing stationary points and periodic trajectories, and whose arcs correspond to trajectories connecting them. While useful for simple vector fields, the MCG can be hard to comprehend for topologically rich vector fields containing a large number of features. This paper describes a visual representation of the MCG, inspired by previous work on graph visualization. Our approach aims to preserve the spatial relationships between the MCG arcs and nodes and to highlight the coherent behavior of connecting trajectories. Using simulations of ocean flow, we show that it can provide useful information on the flow structure. This paper focuses specifically on MCGs computed for piecewise constant (PC) vector fields. In particular, we describe extensions of the PC framework that make it more flexible and better suited for analysis of data on complex-shaped domains with a boundary. We also describe a topology simplification scheme that makes our MCG visualizations less ambiguous.

  17. Ergonomic guidelines for using notebook personal computers. Technical Committee on Human-Computer Interaction, International Ergonomics Association.

    PubMed

    Saito, S; Piccoli, B; Smith, M J; Sotoyama, M; Sweitzer, G; Villanueva, M B; Yoshitake, R

    2000-10-01

    In the 1980's, the visual display terminal (VDT) was introduced in workplaces of many countries. Soon thereafter, an upsurge in reported cases of related health problems, such as musculoskeletal disorders and eyestrain, was seen. Recently, the flat panel display or notebook personal computer (PC) became the most remarkable feature in modern workplaces with VDTs and even in homes. A proactive approach must be taken to avert foreseeable ergonomic and occupational health problems from the use of this new technology. Because of its distinct physical and optical characteristics, the ergonomic requirements for notebook PCs in terms of machine layout, workstation design, lighting conditions, among others, should be different from the CRT-based computers. The Japan Ergonomics Society (JES) technical committee came up with a set of guidelines for notebook PC use following exploratory discussions that dwelt on its ergonomic aspects. To keep in stride with this development, the Technical Committee on Human-Computer Interaction under the auspices of the International Ergonomics Association worked towards the international issuance of the guidelines. This paper unveils the result of this collaborative effort.

  18. Implementing Realistic Helicopter Physics in 3D Game Environments

    DTIC Science & Technology

    2002-09-01

    developed a highly realistic and innovative PC video game that puts you inside an Army unit. You’ll face your first tour of duty along with your fellow...helicopter physics. Many other video games include helicopters but omit realistic third-person helicopter behaviors in their applications. Of the 48...to be too computationally expensive for a PC based video game. Generally, some basic parts of blade element theory are present in any attempt to

  19. Execution Time of Symmetric Eigensolvers

    DTIC Science & Technology

    1997-01-01


  20. Beam orientation optimization for intensity-modulated radiation therapy using mixed integer programming

    NASA Astrophysics Data System (ADS)

    Yang, Ruijie; Dai, Jianrong; Yang, Yong; Hu, Yimin

    2006-08-01

    The purpose of this study is to extend an algorithm proposed for beam orientation optimization in classical conformal radiotherapy to intensity-modulated radiation therapy (IMRT) and to evaluate the algorithm's performance in IMRT scenarios. In addition, the effect of the candidate pool of beam orientations, in terms of beam orientation resolution and starting orientation, on the optimized beam configuration, plan quality and optimization time is also explored. The algorithm is based on the technique of mixed integer linear programming, in which binary and positive float variables are employed to represent candidates for beam orientation and beamlet weights in beam intensity maps. Both beam orientations and beam intensity maps are simultaneously optimized in the algorithm with a deterministic method. Several different clinical cases were used to test the algorithm, and the results show that both target coverage and critical structure sparing were significantly improved for the plans with optimized beam orientations compared to those with equi-spaced beam orientations. The calculation time was less than an hour for the cases with 36 binary variables on a PC with a Pentium IV 2.66 GHz processor. It is also found that decreasing the beam orientation resolution to 10° greatly reduced the size of the candidate pool of beam orientations without significant influence on the optimized beam configuration and plan quality, while selecting different starting orientations had a large influence. Our study demonstrates that the algorithm can be applied to IMRT scenarios, and better beam orientation configurations can be obtained using this algorithm. Furthermore, the optimization efficiency can be greatly increased through proper selection of beam orientation resolution and starting beam orientation while guaranteeing the optimized beam configurations and plan quality.
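    In the same spirit, though far simpler than a real IMRT model, the toy below couples binary orientation variables to continuous beam weights in one mixed integer program, using SciPy's milp; the dose matrix, coverage target, and big-M linking are all illustrative assumptions.

    ```python
    # Toy mixed integer program: binary variables pick beam orientations,
    # continuous variables carry beam weights, solved jointly.
    import numpy as np
    from scipy.optimize import milp, LinearConstraint, Bounds

    rng = np.random.default_rng(7)
    n, K, M = 12, 4, 10.0                      # beams, max selected, big-M
    A = rng.uniform(0.0, 1.0, (8, n))          # dose to 8 target voxels per unit weight

    # Variable vector z = [w_1..w_n, b_1..b_n]: continuous weights, binary picks.
    c = np.concatenate([np.ones(n), 0.01 * np.ones(n)])   # penalize weight and beam count
    integrality = np.concatenate([np.zeros(n), np.ones(n)])
    bounds = Bounds(lb=np.zeros(2 * n),
                    ub=np.concatenate([np.full(n, M), np.ones(n)]))

    constraints = [
        # Coverage: every target voxel receives at least unit dose, A @ w >= 1.
        LinearConstraint(np.hstack([A, np.zeros((8, n))]), lb=1.0),
        # Linking: w_i <= M * b_i, so weight flows only through selected beams.
        LinearConstraint(np.hstack([np.eye(n), -M * np.eye(n)]), ub=0.0),
        # Cardinality: at most K beam orientations.
        LinearConstraint(np.concatenate([np.zeros(n), np.ones(n)])[None, :], ub=K),
    ]

    res = milp(c=c, constraints=constraints, integrality=integrality, bounds=bounds)
    print("selected orientations:", np.flatnonzero(res.x[n:] > 0.5))
    ```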

  1. Trusted Computing Management Server Making Trusted Computing User Friendly

    NASA Astrophysics Data System (ADS)

    Sothmann, Sönke; Chaudhuri, Sumanta

    Personal computers (PCs) with built-in Trusted Computing (TC) technology are already well known and widely distributed. Nearly every new business notebook now contains a Trusted Platform Module (TPM) and could offer increased trust and security in everyday application and use scenarios. In practice, however, the number of notebooks and PCs on which the TPM is actually activated and used is still very small.

  2. A Real Time Controller For Applications In Smart Structures

    NASA Astrophysics Data System (ADS)

    Ahrens, Christian P.; Claus, Richard O.

    1990-02-01

    Research in smart structures, especially in the area of vibration suppression, has warranted the investigation of advanced computing environments. The limited real-time computing power of PCs has constrained the development of high-order control algorithms. This paper presents a simple Real Time Embedded Control System (RTECS) in an application of Intelligent Structure Monitoring by way of modal domain sensing for vibration control. It is compared to a PC/AT-based system for overall functionality and speed. The system employs a novel Reduced Instruction Set Computer (RISC) microcontroller capable of sustained performance of 15 million instructions per second (MIPS) and burst rates of 40 MIPS. Advanced Complementary Metal Oxide Semiconductor (CMOS) circuits are integrated on a single 100 mm by 160 mm printed circuit board requiring only 1 Watt of power. An operating system written in Forth provides high-speed operation and short development cycles. The system allows for the implementation of Input/Output (I/O) intensive algorithms and provides capability for advanced system development.

  3. Personal Computer-less (PC-less) Microcontroller Training Kit

    NASA Astrophysics Data System (ADS)

    Somantri, Y.; Wahyudin, D.; Fushilat, I.

    2018-02-01

    A microcontroller training kit is necessary for the practical work of students of electrical engineering education. However, available training kits are not only costly but also do not meet laboratory requirements. An affordable and portable microcontroller kit could answer this problem. This paper describes the design and development of a Personal Computer-less (PC-less) Microcontroller Training Kit. It was developed based on a Lattepanda processor with an Arduino microcontroller as the target. The training kit is equipped with advanced input-output interfaces and adopts a low-cost, low-power design. Preliminary usability testing showed that the device can be used as a tool for microcontroller programming and industrial automation training. By adopting the concept of portability, the device can be operated in rural areas where electricity and computer infrastructure are limited. Furthermore, the training kit is suitable for electrical engineering students at universities and vocational high schools.

  4. Enhancing PC Cluster-Based Parallel Branch-and-Bound Algorithms for the Graph Coloring Problem

    NASA Astrophysics Data System (ADS)

    Taoka, Satoshi; Takafuji, Daisuke; Watanabe, Toshimasa

    A branch-and-bound algorithm (BB for short) is the most general technique for dealing with various combinatorial optimization problems, but even when it is used, computation time is likely to increase exponentially, so we consider parallelization to reduce it. It has been reported that the computation time of a parallel BB depends heavily on node-variable selection strategies. In a parallel BB it is also necessary to prevent communication time from growing, so it is important to consider how many and what kind of nodes are to be transferred (called the sending-node selection strategy). In this paper, for the graph coloring problem, we propose several sending-node selection strategies for a parallel BB algorithm, adopting MPI for parallelization, and experimentally evaluate how these strategies affect the computation time of a parallel BB on a PC cluster network.
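
    A schematic sketch of one such sending-node selection step, written with mpi4py, is shown below: a master rank keeps open subproblems in a heap ordered by lower bound and ships the K most promising ones to each worker (a best-first strategy). The subproblem encoding and bounds are placeholders, not the authors' implementation.

```python
# Schematic master/worker sketch of a sending-node selection step (mpi4py).
# Subproblems and bounds are placeholders, not the authors' implementation.
import heapq
from mpi4py import MPI

comm = MPI.COMM_WORLD
K = 4  # nodes transferred per message -- one tunable of the strategy

if comm.Get_rank() == 0:
    # Master: open nodes in a heap keyed by lower bound; ship the K most
    # promising ones to each worker (a "best-first" sending strategy).
    open_nodes = [(lb, node_id) for node_id, lb in enumerate([7, 3, 9, 5, 4, 8])]
    heapq.heapify(open_nodes)
    for worker in range(1, comm.Get_size()):
        batch = [heapq.heappop(open_nodes)
                 for _ in range(min(K, len(open_nodes)))]
        comm.send(batch, dest=worker, tag=1)
else:
    batch = comm.recv(source=0, tag=1)
    # Worker: expand the received subproblems locally (branching not shown).
    print(f"rank {comm.Get_rank()} received nodes {batch}")
```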

  5. What Can You Learn from a Cell Phone? Almost Anything!

    ERIC Educational Resources Information Center

    Prensky, Marc

    2005-01-01

    Today's high-end cell phones have the computing power of a mid-1990s personal computer (PC)--while consuming only one one-hundredth of the energy. Even the simplest, voice-only phones have more complex and powerful chips than the 1969 on-board computer that landed a spaceship on the moon. In the United States, it is almost universally acknowledged…

  6. 37 CFR 1.824 - Form and format for nucleotide and/or amino acid sequence submissions in computer readable form.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... which the data were recorded on the computer readable form, the operating system used, a reference... in a self-extracting format that will decompress on one of the systems described in paragraph (b) of... these format requirements: (1) Computer Compatibility: IBM PC/XT/AT or Apple Macintosh; (2) Operating...

  7. Effect of 'PC Game Room' use and polycyclic aromatic hydrocarbon exposure on plasma testosterone concentrations in young male Koreans.

    PubMed

    Kim, Heon; Kang, Jong-Won; Ku, Seung-Yup; Kim, Seok Hyun; Cho, Soo-Hun; Koong, Sung-Soo; Kim, Yong-Dae; Lee, Chul-Ho

    2005-03-01

    'PC Game Rooms' were first popularized in Korea, although the concept is now becoming popular worldwide. PC Game Rooms provide users with high-performance PCs connected to the high-speed internet, and access to computer games. However, PC Game Room users are exposed to various hazardous agents such as cigarette smoke in a confined environment, and thus it is likely that excessive PC Game Room use involves abnormal exposure to polycyclic aromatic hydrocarbons (PAH) as well as being associated with disturbed sleep or circadian rhythms. In this cross-sectional study, exposure to PAH was evaluated by measuring urinary 1-hydroxypyrene (1-OHP) and 2-naphthol. The correlations between PC Game Room use, PAH exposure, and plasma testosterone and LH levels were analysed in 208 young male Koreans. Urinary 1-OHP concentrations increased (P = 0.0001) and plasma testosterone levels decreased (P = 0.0153) significantly with increased duration of PC Game Room use. Correlation analysis showed that plasma testosterone concentrations were significantly negatively correlated with urinary 1-OHP (r = -0.22, P = 0.0012) and 2-naphthol (r = -0.15, P = 0.0308) concentrations. Moreover, these associations persisted after adjusting for other independent variables. However, the duration of PC Game Room use itself was not an independent significant determinant of plasma testosterone level; rather, PC Game Room use increased PAH exposure, which decreased plasma testosterone levels. The younger age group (15-19 years) showed a more prominent decrease in plasma testosterone concentrations with increasing duration of PC Game Room use than the older age group (20-24 years) (r2 = 0.355, P = 0.0301 versus r2 = 0.213, P = 0.0001). These results imply that excessive use of PC Game Rooms adversely affects sex hormonal status in young male Koreans via PAH exposure, and that this effect is more prominent in the younger age group.

  8. LEOPARD on a personal computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lancaster, D.B.

    1988-01-01

    The LEOPARD code is very widely used to produce four- or two-group cross sections for water reactors. Although it is heavily used, it had not been ported to the PC. This paper announces the completion of that port. LEOPARD can now be run on anything from the earliest PC to the most advanced 80386 machines. The only requirements are 512 Kbytes of memory (LEOPARD itself needs only 235 Kbytes, but with buffers, 256 Kbytes may not be enough) and two disk drives (preferably, one a hard drive). The run times for various machines and configurations are summarized. The accuracy of the PC-LEOPARD results is documented.

  9. A PC based fault diagnosis expert system

    NASA Technical Reports Server (NTRS)

    Marsh, Christopher A.

    1990-01-01

    The Integrated Status Assessment (ISA) prototype expert system performs system level fault diagnosis using rules and models created by the user. The ISA evolved from concepts to a stand-alone demonstration prototype using OPS5 on a LISP Machine. The LISP based prototype was rewritten in C and the C Language Integrated Production System (CLIPS) to run on a Personal Computer (PC) and a graphics workstation. The ISA prototype has been used to demonstrate fault diagnosis functions of Space Station Freedom's Operation Management System (OMS). This paper describes the development of the ISA prototype from early concepts to the current PC/workstation version used today and describes future areas of development for the prototype.

  10. Development a computer codes to couple PWR-GALE output and PC-CREAM input

    NASA Astrophysics Data System (ADS)

    Kuntjoro, S.; Budi Setiawan, M.; Nursinta Adi, W.; Deswandri; Sunaryo, G. R.

    2018-02-01

    Radionuclide dispersion analysis is an important part of reactor safety analysis. From this analysis, the doses received by radiation workers and by communities around a nuclear reactor can be obtained. Radionuclide dispersion analysis under normal operating conditions is carried out using the PC-CREAM code, which requires input data such as the source term and the population distribution. These input data are derived from the output of another program, PWR-GALE, and from population distribution data written in a specific format. Compiling inputs for PC-CREAM manually requires high accuracy, as it involves large amounts of data in fixed formats, and manual compilation often introduces errors. To minimize such errors, we developed a coupling program between PWR-GALE and PC-CREAM, together with a program for writing population distribution data in the format PC-CREAM requires. The programming was done in Python, which has the advantages of being multiplatform, object-oriented and interactive. The result of this work is software that couples the source-term data and writes the population distribution data, so that inputs to PC-CREAM can be prepared easily and formatting errors are avoided.
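
    A minimal sketch of the coupling idea in Python is given below. Since the actual PWR-GALE and PC-CREAM file layouts are not described in the abstract, both formats here are hypothetical placeholders; only the overall parse-and-rewrite pattern is illustrated.

```python
# Hedged illustration of the coupling idea: parse source-term lines from a
# PWR-GALE output file and rewrite them in a fixed-width layout for PC-CREAM.
# Both file formats here are hypothetical placeholders, not the real ones.
def couple_source_term(gale_out="pwr_gale.out", cream_in="pc_cream.inp"):
    records = []
    with open(gale_out) as f:
        for line in f:
            parts = line.split()
            if len(parts) == 2:                       # e.g. "Cs-137 3.7e+05"
                records.append((parts[0], float(parts[1])))

    with open(cream_in, "w") as f:
        f.write(f"{len(records):5d}\n")               # record-count header
        for nuclide, rate in records:
            f.write(f"{nuclide:<10s}{rate:12.4E}\n")  # fixed-width fields

if __name__ == "__main__":
    with open("pwr_gale.out", "w") as f:              # tiny demo input
        f.write("Cs-137 3.7e+05\nI-131 1.2e+04\n")
    couple_source_term()
```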

  11. Singular value decomposition for photon-processing nuclear imaging systems and applications for reconstruction and computing null functions.

    PubMed

    Jha, Abhinav K; Barrett, Harrison H; Frey, Eric C; Clarkson, Eric; Caucci, Luca; Kupinski, Matthew A

    2015-09-21

    Recent advances in technology are enabling a new class of nuclear imaging systems consisting of detectors that use real-time maximum-likelihood (ML) methods to estimate the interaction position, deposited energy, and other attributes of each photon-interaction event and store these attributes in a list format. This class of systems, which we refer to as photon-processing (PP) nuclear imaging systems, can be described by a fundamentally different mathematical imaging operator that allows processing of the continuous-valued photon attributes on a per-photon basis. Unlike conventional photon-counting (PC) systems that bin the data into images, PP systems do not have any binning-related information loss. Mathematically, while PC systems have an infinite-dimensional null space due to dimensionality considerations, PP systems do not necessarily suffer from this issue. Therefore, PP systems have the potential to provide improved performance in comparison to PC systems. To study these advantages, we propose a framework to perform the singular-value decomposition (SVD) of the PP imaging operator. We use this framework to perform the SVD of operators that describe a general two-dimensional (2D) planar linear shift-invariant (LSIV) PP system and a hypothetical continuously rotating 2D single-photon emission computed tomography (SPECT) PP system. We then discuss two applications of the SVD framework. The first application is to decompose the object being imaged by the PP imaging system into measurement and null components. We compare these components to the measurement and null components obtained with PC systems. In the process, we also present a procedure to compute the null functions for a PC system. The second application is designing analytical reconstruction algorithms for PP systems. The proposed analytical approach exploits the fact that PP systems acquire data in a continuous domain to estimate a continuous object function. The approach is parallelizable and implemented for graphics processing units (GPUs). Further, this approach leverages another important advantage of PP systems, namely the possibility to perform photon-by-photon real-time reconstruction. We demonstrate the application of the approach to perform reconstruction in a simulated 2D SPECT system. The results help to validate and demonstrate the utility of the proposed method and show that PP systems can help overcome the aliasing artifacts that are otherwise intrinsically present in PC systems.
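
    The measurement/null decomposition discussed above can be illustrated numerically: for a discretized imaging operator H, the SVD splits any object into a component in the row space of H (measurable) and a null component the system cannot see. The sketch below uses a random matrix as a stand-in operator; it is not the paper's PP or PC operator.

```python
# Numerical sketch: the SVD splits an object f into a measurement component
# (row space of H) and a null component invisible to the system. H is a
# random stand-in, not the paper's PP or PC operator.
import numpy as np

rng = np.random.default_rng(1)
H = rng.standard_normal((32, 128))     # 32 measurements, 128-voxel object
f = rng.standard_normal(128)           # object being imaged

U, s, Vt = np.linalg.svd(H, full_matrices=False)
f_meas = Vt.T @ (Vt @ f)               # projection onto the measurement space
f_null = f - f_meas                    # null function of the system

assert np.allclose(H @ f_null, 0.0, atol=1e-8)   # H maps the null part to ~0
print("||f_meas|| = %.3f, ||f_null|| = %.3f"
      % (np.linalg.norm(f_meas), np.linalg.norm(f_null)))
```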

  12. Singular value decomposition for photon-processing nuclear imaging systems and applications for reconstruction and computing null functions

    NASA Astrophysics Data System (ADS)

    Jha, Abhinav K.; Barrett, Harrison H.; Frey, Eric C.; Clarkson, Eric; Caucci, Luca; Kupinski, Matthew A.

    2015-09-01

    Recent advances in technology are enabling a new class of nuclear imaging systems consisting of detectors that use real-time maximum-likelihood (ML) methods to estimate the interaction position, deposited energy, and other attributes of each photon-interaction event and store these attributes in a list format. This class of systems, which we refer to as photon-processing (PP) nuclear imaging systems, can be described by a fundamentally different mathematical imaging operator that allows processing of the continuous-valued photon attributes on a per-photon basis. Unlike conventional photon-counting (PC) systems that bin the data into images, PP systems do not have any binning-related information loss. Mathematically, while PC systems have an infinite-dimensional null space due to dimensionality considerations, PP systems do not necessarily suffer from this issue. Therefore, PP systems have the potential to provide improved performance in comparison to PC systems. To study these advantages, we propose a framework to perform the singular-value decomposition (SVD) of the PP imaging operator. We use this framework to perform the SVD of operators that describe a general two-dimensional (2D) planar linear shift-invariant (LSIV) PP system and a hypothetical continuously rotating 2D single-photon emission computed tomography (SPECT) PP system. We then discuss two applications of the SVD framework. The first application is to decompose the object being imaged by the PP imaging system into measurement and null components. We compare these components to the measurement and null components obtained with PC systems. In the process, we also present a procedure to compute the null functions for a PC system. The second application is designing analytical reconstruction algorithms for PP systems. The proposed analytical approach exploits the fact that PP systems acquire data in a continuous domain to estimate a continuous object function. The approach is parallelizable and implemented for graphics processing units (GPUs). Further, this approach leverages another important advantage of PP systems, namely the possibility to perform photon-by-photon real-time reconstruction. We demonstrate the application of the approach to perform reconstruction in a simulated 2D SPECT system. The results help to validate and demonstrate the utility of the proposed method and show that PP systems can help overcome the aliasing artifacts that are otherwise intrinsically present in PC systems.

  13. A new surrogate modeling technique combining Kriging and polynomial chaos expansions – Application to uncertainty analysis in computational dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kersaudy, Pierric, E-mail: pierric.kersaudy@orange.com; Whist Lab, 38 avenue du Général Leclerc, 92130 Issy-les-Moulineaux; ESYCOM, Université Paris-Est Marne-la-Vallée, 5 boulevard Descartes, 77700 Marne-la-Vallée

    2015-04-01

    In numerical dosimetry, recent advances in high performance computing have led to a strong reduction of the computational time required to assess the specific absorption rate (SAR) characterizing human exposure to electromagnetic waves. However, this procedure remains time-consuming and a single simulation can require several hours. As a consequence, the influence of uncertain input parameters on the SAR cannot be analyzed using crude Monte Carlo simulation. The solution presented here to perform such an analysis is surrogate modeling. This paper proposes a novel approach to build such a surrogate model from a design of experiments. Considering a sparse representation of the polynomial chaos expansions using least-angle regression as a selection algorithm to retain the most influential polynomials, this paper proposes to use the selected polynomials as regression functions for the universal Kriging model. Leave-one-out cross validation is used to select the optimal number of polynomials in the deterministic part of the Kriging model. The proposed approach, called LARS-Kriging-PC modeling, is applied to three benchmark examples and then to a full-scale metamodeling problem involving the exposure of a numerical fetus model to a femtocell device. The performance of LARS-Kriging-PC is compared to an ordinary Kriging model and to a classical sparse polynomial chaos expansion. LARS-Kriging-PC appears to perform better than the two other approaches, with a significant accuracy improvement over the ordinary Kriging or the sparse polynomial chaos depending on the studied case, and seems to be an optimal solution between the two classical approaches. A global sensitivity analysis is finally performed on the LARS-Kriging-PC model of the fetus exposure problem.
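
    The selection stage of the approach can be sketched as follows, assuming for simplicity a one-dimensional input and a probabilists' Hermite polynomial chaos basis (the actual method works on multivariate expansions): least-angle regression retains the most influential polynomials, which would then serve as the trend functions of a universal Kriging model.

```python
# Sketch of the selection stage: LARS retains the most influential Hermite
# polynomial chaos terms (1D input here for simplicity; the paper's method is
# multivariate and feeds the retained terms to a universal Kriging trend).
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from sklearn.linear_model import Lars

rng = np.random.default_rng(2)
x = rng.standard_normal(40)                       # design of experiments
y = 1.0 + 0.5 * x + 0.2 * (x**2 - 1) + 0.01 * rng.standard_normal(40)

degree = 8
# Design matrix: probabilists' Hermite polynomials He_0..He_degree at x.
Phi = np.column_stack([hermeval(x, np.eye(degree + 1)[k])
                       for k in range(degree + 1)])

lars = Lars(n_nonzero_coefs=3, fit_intercept=False).fit(Phi, y)
print("retained PC terms (degrees):", np.flatnonzero(lars.coef_))
# In LARS-Kriging-PC these retained polynomials become the regression (trend)
# functions of the universal Kriging model, with the number of terms picked
# by leave-one-out cross-validation.
```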

  14. Novel Kunitz-like Peptides Discovered in the Zoanthid Palythoa caribaeorum through Transcriptome Sequencing.

    PubMed

    Liao, Qiwen; Li, Shengnan; Siu, Shirley Weng In; Yang, Binrui; Huang, Chen; Chan, Judy Yuet-Wa; Morlighem, Jean-Étienne R L; Wong, Clarence Tsun Ting; Rádis-Baptista, Gandhi; Lee, Simon Ming-Yuen

    2018-02-02

    Palythoa caribaeorum (class Anthozoa) is a zoanthid that, together with jellyfishes, hydras, and sea anemones, belongs to the phylum Cnidaria; these marine animals are venomous and predatory. Their distinguishing feature is the cnidocytes in the body tissues, responsible for producing and injecting toxins that are used mainly for prey capture and defense. In contrast to other anthozoans, the toxin cocktails of zoanthids have been scarcely studied and are poorly known. Here, on the basis of the analysis of the P. caribaeorum transcriptome, numerous predicted venom-featured polypeptides were identified, including allergens, neurotoxins, membrane-active peptides, and Kunitz-like peptides (PcKuz). The three predicted PcKuz isotoxins (1-3) were selected for functional studies. Through computational processing comprising structural phylogenetic analysis, molecular docking, and dynamics simulation, PcKuz3 was shown to be a potential voltage-gated potassium-channel inhibitor. PcKuz3 fits well as a new functional Kunitz-type toxin with strong antilocomotor activity, as assessed in vivo in zebrafish larvae, and a weak inhibitory effect toward proteases, as evaluated in vitro. Notably, PcKuz3 can suppress, at low concentration, the 6-OHDA-induced neurotoxic effect on the locomotive behavior of zebrafish, which indicates that PcKuz3 may have a neuroprotective effect. Taken together, PcKuz3 represents a novel neurotoxin structure, which differs from known homologous peptides expressed in sea anemones. Moreover, the novel PcKuz3 provides an insightful hint for biodrug development for prospective neurodegenerative disease treatment.

  15. PERSONAL COMPUTER MONITORS: A SCREENING EVALUATION OF VOLATILE ORGANIC EMISSIONS FROM EXISTING PRINTED CIRCUIT BOARD LAMINATES AND POTENTIAL POLLUTION PREVENTION ALTERNATIVES

    EPA Science Inventory

    The report gives results of a screening evaluation of volatile organic emissions from printed circuit board laminates and potential pollution prevention alternatives. In the evaluation, printed circuit board laminates, without circuitry, commonly found in personal computer (PC) m...

  16. Rotordynamics on the PC: Further Capabilities of ARDS

    NASA Technical Reports Server (NTRS)

    Fleming, David P.

    1997-01-01

    Rotordynamics codes for personal computers are now becoming available. One of the most capable codes is Analysis of RotorDynamic Systems (ARDS) which uses the component mode synthesis method to analyze a system of up to 5 rotating shafts. ARDS was originally written for a mainframe computer but has been successfully ported to a PC; its basic capabilities for steady-state and transient analysis were reported in an earlier paper. Additional functions have now been added to the PC version of ARDS. These functions include: 1) Estimation of the peak response following blade loss without resorting to a full transient analysis; 2) Calculation of response sensitivity to input parameters; 3) Formulation of optimum rotor and damper designs to place critical speeds in desirable ranges or minimize bearing loads; 4) Production of Poincaré plots so that the presence of chaotic motion can be ascertained. ARDS produces printed and plotted output. The executable code uses the full array sizes of the mainframe version and fits on a high density floppy disc. Examples of all program capabilities are presented and discussed.

  17. Development of embedded real-time and high-speed vision platform

    NASA Astrophysics Data System (ADS)

    Ouyang, Zhenxing; Dong, Yimin; Yang, Hua

    2015-12-01

    Currently, high-speed vision platforms are widely used in many applications, such as robotics and the automation industry. However, a personal computer (PC), whose large size is unsuitable for compact systems, remains an indispensable component for human-computer interaction in traditional high-speed vision platforms. This paper therefore develops an embedded real-time and high-speed vision platform, ER-HVP Vision, which is able to work completely without a PC. In this new platform, an embedded CPU-based board is designed as a substitute for the PC, and a DSP-and-FPGA board is developed to implement parallel image algorithms in the FPGA and sequential image algorithms in the DSP. The resulting ER-HVP Vision platform measures a compact 320 mm x 250 mm x 87 mm. Experimental results indicate that real-time detection and counting of a moving target at a frame rate of 200 fps at 512 x 512 pixels are feasible on this newly developed vision platform.
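
    As a software illustration of the kind of moving-target detection such platforms implement in hardware, the sketch below runs a simple frame-differencing detector with OpenCV on a synthetic 512 x 512 frame; the real system executes the parallel stages in the FPGA rather than in Python.

```python
# Software illustration of frame-differencing moving-target detection on a
# synthetic 512 x 512 frame (the real platform runs such stages in the FPGA).
import numpy as np
import cv2

prev = np.zeros((512, 512), np.uint8)             # previous frame: empty
frame = prev.copy()
cv2.circle(frame, (256, 256), 20, 255, -1)        # synthetic moving target

diff = cv2.absdiff(frame, prev)                   # temporal difference
_, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print("moving targets detected:", len(contours))
```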

  18. A Personal Computer-Based Head-Spine Model

    DTIC Science & Technology

    1998-09-01

    the CHSM. CHSM was comprised of the pelvis, the thoracolumbar spine, a single beam representation of the cervical spine, the head, the rib cage, and... developing the private sector HSM-PC project follows the Phase II program Work Plan, but continues into a Phase III SBIR program internally funded by... on completing the head and neck portion of HSM-PC, which as described in the Confidence Assessment Plan (CA Plan) will be known as the Head Cervical

  19. Performance of Wireless Unattended Sensor Network in Maritime Applications

    DTIC Science & Technology

    2007-06-01

    longevity. Crossbow Technologies produces a number of gateways for use with their motes, which include the MIB510, the MIB600 and the Stargate. The... MIB510 and MIB600 gateways require a direct interface with a PC while the Stargate gateway interfaces remotely using the IEEE 802.11 standard for access... dedicated PC is unfeasible, the Stargate gateway allows remote access using the IEEE 802.11 standard. This can be accomplished via a Personal Computer

  20. CdTe-based Light-Controllable Frequency-Selective Photonic Crystal Switch for Millimeter Waves

    DTIC Science & Technology

    2011-09-01

    [Figure-caption residue; recoverable information:] Fig. 11.3(a): phase of the transmitted wave (magenta curves with circular points corresponding to different light pulses). Fig. 11.4: transmission spectra of a plastic-air PC with a CdTe-coated triple-quartz-wafer insertion of the kind '6t-qvqvqs-6t' (computed and measured). To match the frequency band of the VNA facility (f = 75–110 GHz), PC structures with triple-wafer insertion layers were used.

  1. Western State Hospital: implementing a MUMPS-based PC network.

    PubMed

    Russ, D C

    1991-06-01

    Western State Hospital, a state-administered 1,200-bed mental health institution near Tacoma, Wash., confronted the challenge of automating its large campus through the application of the Healthcare Integrated Information System (HIIS). It is the first adaptation of the Veterans Administration's Decentralized Hospital Computer Program software in a mental health institution of this size, and the first DHCP application to be installed on a PC client/server network in a large U.S. hospital.

  2. A comparison between digital images viewed on a picture archiving and communication system diagnostic workstation and on a PC-based remote viewing system by emergency physicians.

    PubMed

    Parasyn, A; Hanson, R M; Peat, J K; De Silva, M

    1998-02-01

    Picture Archiving and Communication Systems (PACS) make it possible to view radiographic images on computer workstations located where clinical care is delivered. This feature is particularly useful for emergency physicians, who by the nature of their work view radiographic studies for information and use them to explain results to patients and their families. However, the high cost of full-functionality PACS diagnostic workstations limits their number in, and therefore their accessibility to, the emergency department. This study was undertaken to establish how well less expensive personal computer-based workstations would support these needs of emergency physicians. The study compared the outcome of observations by 5 emergency physicians on a series of radiographic studies containing subtle abnormalities displayed on both a PACS diagnostic workstation and on a PC-based workstation. The 73 digitized radiographic studies were randomly arranged on both types of workstation over four separate viewing sessions for each emergency physician. There was no statistically significant difference between the PACS diagnostic workstation and the PC-based workstation in this trial. The mean correct ratings were 59% on the PACS diagnostic workstations and 61% on the PC-based workstations. These findings also emphasize the need for prompt reporting by a radiologist.

  3. A Steep-Slope Transistor Combining Phase-Change and Band-to-Band-Tunneling to Achieve a sub-Unity Body Factor.

    PubMed

    Vitale, Wolfgang A; Casu, Emanuele A; Biswas, Arnab; Rosca, Teodor; Alper, Cem; Krammer, Anna; Luong, Gia V; Zhao, Qing-T; Mantl, Siegfried; Schüler, Andreas; Ionescu, A M

    2017-03-23

    Steep-slope transistors allow the supply voltage and the energy per computed bit of information to be scaled down compared to conventional field-effect transistors (FETs), owing to their sub-60 mV/decade subthreshold swing at room temperature. Currently pursued approaches to achieve such a subthermionic subthreshold swing rely on alternative carrier injection mechanisms, such as quantum mechanical band-to-band tunneling (BTBT) in Tunnel FETs or abrupt phase change in metal-insulator transition (MIT) devices. The strengths of BTBT and MIT have been combined in a hybrid device architecture called the phase-change tunnel FET (PC-TFET), in which the abrupt MIT in vanadium dioxide (VO2) lowers the subthreshold swing of strained-silicon nanowire TFETs. In this work, we demonstrate that the principle underlying the low swing in the PC-TFET relates to a sub-unity body factor achieved by an internal differential gate voltage amplification. We study the effect of temperature on the switching ratio and the swing of the PC-TFET, reporting values as low as 4.0 mV/decade at 25 °C and 7.8 mV/decade at 45 °C. We discuss how the unique characteristics of the PC-TFET open new perspectives, beyond FETs and other steep-slope transistors, for low-power electronics, analog circuits and neuromorphic computing.
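
    The figure of merit quoted above is the subthreshold swing S = dV_g / d(log10 I_d). A small sketch of extracting it from transfer-curve samples is given below, using synthetic data with a 5 mV/decade slope built in; the arrays are illustrative, not measurements from the paper.

```python
# Extracting the subthreshold swing S = dV_g/d(log10 I_d) from transfer-curve
# samples; the data are synthetic, with a 5 mV/decade slope built in.
import numpy as np

vg = np.linspace(0.0, 0.1, 101)            # gate voltage sweep, volts
id_ = 1e-12 * 10 ** (vg / 0.005)           # synthetic drain current

S = np.gradient(vg, np.log10(id_)) * 1e3   # swing in mV/decade
print("minimum swing: %.1f mV/decade" % S.min())
```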

  4. Operational Implementation of a Pc Uncertainty Construct for Conjunction Assessment Risk Analysis

    NASA Technical Reports Server (NTRS)

    Newman, Lauri K.; Hejduk, Matthew D.; Johnson, Lauren C.

    2016-01-01

    Earlier this year the NASA Conjunction Assessment and Risk Analysis (CARA) project presented the theoretical and algorithmic aspects of a method to include the uncertainties in the calculation inputs when computing the probability of collision (Pc) between two space objects, principally uncertainties in the covariances and the hard-body radius. Rather than a single Pc value, this calculation approach produces an entire probability density function representing the range of possible Pc values given the uncertainties in the inputs, bringing CA risk analysis methodologies more in line with modern risk management theory. The present study provides results from exercising this method against an extended dataset of satellite conjunctions in order to determine the effect of its use on the evaluation of conjunction assessment (CA) event risk posture. The effects are found to be considerable: a good number of events are downgraded from, or upgraded to, a serious risk designation on the basis of the Pc uncertainty. The findings counsel the integration of the developed methods into NASA CA operations.
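
    A hedged numerical sketch of the construct is shown below: the covariance scale and hard-body radius are treated as uncertain inputs, and repeated Pc evaluations build up a density of Pc values instead of a single number. All distributions and magnitudes are illustrative, not CARA's operational models.

```python
# Illustrative Monte Carlo version of the construct: treat the covariance
# scale and hard-body radius (HBR) as uncertain and build a distribution of
# Pc values. All inputs are toy magnitudes, not operational CARA models.
import numpy as np

rng = np.random.default_rng(3)
mu = np.array([80.0, 40.0])             # mean miss vector in encounter plane, m
C = np.diag([200.0**2, 120.0**2])       # nominal position covariance, m^2

def pc_given(k, hbr, n=20000):
    """Pc for covariance scaled by k and a given HBR, by Monte Carlo."""
    pts = rng.multivariate_normal(mu, k * C, size=n)
    return np.mean(np.hypot(pts[:, 0], pts[:, 1]) < hbr)

ks = rng.lognormal(mean=0.0, sigma=0.3, size=200)    # uncertain covariance scale
hbrs = rng.uniform(10.0, 30.0, size=200)             # uncertain HBR, m
pc_samples = np.array([pc_given(k, h) for k, h in zip(ks, hbrs)])

print("Pc density: median %.2e, 95th percentile %.2e"
      % (np.median(pc_samples), np.quantile(pc_samples, 0.95)))
```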

  5. Velocity Measurement in Carotid Artery: Quantitative Comparison of Time-Resolved 3D Phase-Contrast MRI and Image-based Computational Fluid Dynamics

    PubMed Central

    Sarrami-Foroushani, Ali; Nasr Esfahany, Mohsen; Nasiraei Moghaddam, Abbas; Saligheh Rad, Hamidreza; Firouznia, Kavous; Shakiba, Madjid; Ghanaati, Hossein; Wilkinson, Iain David; Frangi, Alejandro Federico

    2015-01-01

    Background: Understanding the hemodynamic environment in vessels is important for elucidating the mechanisms leading to vascular pathologies. Objectives: The three-dimensional velocity vector field in the carotid bifurcation is visualized using time-resolved 3D phase-contrast magnetic resonance imaging (TR 3D PC MRI) and computational fluid dynamics (CFD). This study aimed to present a qualitative and quantitative comparison of the velocity vector field obtained by each technique. Subjects and Methods: MR imaging was performed on a healthy 30-year-old male subject. TR 3D PC MRI was performed on a 3 T scanner to measure velocity in the carotid bifurcation. The 3D anatomical model for CFD was created using images obtained from time-of-flight MR angiography. The velocity vector field in the carotid bifurcation was predicted using the CFD and PC MRI techniques. A statistical analysis was performed to assess the agreement between the two methods. Results: Although the main flow patterns were the same for both techniques, CFD showed a greater resolution in mapping the secondary and circulating flows. Overall root mean square (RMS) errors for all the corresponding data points in PC MRI and CFD were 14.27% at peak systole and 12.91% at end diastole, relative to the maximum velocity measured at each cardiac phase. Bland-Altman plots showed a very good agreement between the two techniques. However, this study did not aim to validate either method; instead, their consistency was assessed to accentuate the similarities and differences between time-resolved PC MRI and CFD. Conclusion: Both techniques provided quantitatively consistent results for in vivo velocity vector fields in the right internal carotid artery (RCA). PC MRI represented a good estimation of the main flow patterns inside the vasculature, which seems acceptable for clinical use. However, the limitations of each technique should be considered while interpreting results. PMID:26793288
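
    The comparison metric quoted above (RMS difference normalized by the maximum velocity of the cardiac phase) can be computed as in the sketch below; the arrays are synthetic stand-ins for the registered PC MRI and CFD data points.

```python
# The comparison metric used above: RMS difference between the two velocity
# fields, normalized by the maximum velocity of the cardiac phase. Synthetic
# arrays stand in for the registered PC MRI / CFD data points.
import numpy as np

rng = np.random.default_rng(4)
v_mri = rng.random(500)                              # PC MRI velocities
v_cfd = v_mri + 0.05 * rng.standard_normal(500)      # CFD at matched points

rel_rms = np.sqrt(np.mean((v_mri - v_cfd) ** 2)) / v_mri.max() * 100
print("relative RMS error: %.2f %%" % rel_rms)
```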

  6. PC-SEAPAK - ANALYSIS OF COASTAL ZONE COLOR SCANNER AND ADVANCED VERY HIGH RESOLUTION RADIOMETER DATA

    NASA Technical Reports Server (NTRS)

    Mcclain, C. R.

    1994-01-01

    PC-SEAPAK is a user-interactive satellite data analysis software package specifically developed for oceanographic research. The program is used to process and interpret data obtained from the Nimbus-7/Coastal Zone Color Scanner (CZCS), and the NOAA Advanced Very High Resolution Radiometer (AVHRR). PC-SEAPAK is a set of independent microcomputer-based image analysis programs that provide the user with a flexible, user-friendly, standardized interface, and facilitates relatively low-cost analysis of oceanographic satellite data. Version 4.0 includes 114 programs. PC-SEAPAK programs are organized into categories which include CZCS and AVHRR level-1 ingest, level-2 analyses, statistical analyses, data extraction, remapping to standard projections, graphics manipulation, image board memory manipulation, hardcopy output support and general utilities. Most programs allow user interaction through menu and command modes and also by the use of a mouse. Most programs also provide for ASCII file generation for further analysis in spreadsheets, graphics packages, etc. The CZCS scanning radiometer aboard the NIMBUS-7 satellite was designed to measure the concentration of photosynthetic pigments and their degradation products in the ocean. AVHRR data is used to compute sea surface temperatures and is supported for the NOAA 6, 7, 8, 9, 10, 11, and 12 satellites. The CZCS operated from November 1978 to June 1986. CZCS data may be obtained free of charge from the CZCS archive at NASA/Goddard Space Flight Center. AVHRR data may be purchased through NOAA's Satellite Data Service Division. Ordering information is included in the PC-SEAPAK documentation. Although PC-SEAPAK was developed on a COMPAQ Deskpro 386/20, it can be run on most 386-compatible computers with an AT bus, EGA controller, Intel 80387 coprocessor, and MS-DOS 3.3 or higher. A Matrox MVP-AT image board with appropriate monitor and cables is also required. Note that the authors have received some reports of incompatibilities between the MVP-AT image board and ZENITH computers. Also, the MVP-AT image board is not necessarily compatible with 486-based systems; users of 486-based systems should consult with Matrox about compatibility concerns. Other PC-SEAPAK requirements include a Microsoft mouse (serial version), 2Mb RAM, and 100Mb hard disk space. For data ingest and backup, 9-track tape, 8mm tape and optical disks are supported and recommended. PC-SEAPAK has been under development since 1988. Version 4.0 was updated in 1992, and is distributed without source code. It is available only as a set of 36 1.2Mb 5.25 inch IBM MS-DOS format diskettes. PC-SEAPAK is a copyrighted product with all copyright vested in the National Aeronautics and Space Administration. Phar Lap's DOS_Extender run-time version is integrated into several of the programs; therefore, the PC-SEAPAK programs may not be duplicated. Three of the distribution diskettes contain DOS_Extender files. One of the distribution diskettes contains Media Cybernetics' HALO88 font files, also licensed by NASA for dissemination but not duplication. IBM is a registered trademark of International Business Machines. MS-DOS is a registered trademark of Microsoft Corporation. HALO88 is a registered trademark of Media Cybernetics, but the product was discontinued in 1991.

  7. The Workstation Approach to Laboratory Computing

    PubMed Central

    Crosby, P.A.; Malachowski, G.C.; Hall, B.R.; Stevens, V.; Gunn, B.J.; Hudson, S.; Schlosser, D.

    1985-01-01

    There is a need for a Laboratory Workstation which specifically addresses the problems associated with computing in the scientific laboratory. A workstation based on the IBM PC architecture and including a front end data acquisition system which communicates with a host computer via a high speed communications link; a new graphics display controller with hardware window management and window scrolling; and an integrated software package is described.

  8. DataPlus™ - a revolutionary applications generator for DOS hand-held computers

    Treesearch

    David Dean; Linda Dean

    2000-01-01

    DataPlus allows the user to easily design data collection templates for DOS-based hand-held computers that mimic clipboard data sheets. The user designs and tests the application on the desktop PC and then transfers it to a DOS field computer. Other features include: error checking, missing data checks, and sensor input from RS-232 devices such as bar code wands,...

  9. Supramolecular architectures of iron phthalocyanine Langmuir-Blodgett films: The role played by the solution solvents

    NASA Astrophysics Data System (ADS)

    Rubira, Rafael Jesus Gonçalves; Aoki, Pedro Henrique Benites; Constantino, Carlos José Leopoldo; Alessio, Priscila

    2017-09-01

    The development of organic-based devices has been widely explored using ultrathin films as the transducer element, whose supramolecular architecture plays a central role in device performance. Here, Langmuir and Langmuir-Blodgett (LB) ultrathin films were fabricated from iron phthalocyanine (FePc) solutions in chloroform (CHCl3), dichloromethane (CH2Cl2), dimethylformamide (DMF), and tetrahydrofuran (THF) to determine the influence of different solvents on the supramolecular architecture of the films. UV-vis absorption spectroscopy shows a strong dependence of FePc aggregation on these solvents. As a consequence, the surface pressure vs. mean molecular area (π-A) isotherms and Brewster angle microscopy (BAM) reveal a more homogeneous surface morphology of the Langmuir film at the air/water interface for FePc in DMF. The same morphological pattern observed for the Langmuir films is preserved upon LB deposition onto solid substrates. The Raman and FTIR analyses indicate that the DMF-FePc interaction relies on coordination bonds between the N atom (from DMF) and the Fe atom (from FePc). The FePc molecular organization was also found to be affected by this chemical interaction. Interestingly, although DMF initially leads to less aggregated FePc both in solution and in the ultrathin films (Langmuir and LB), the opposite trend is found with time (one week). Taking into account the N-Fe interaction, the performance of FePc ultrathin films with distinct supramolecular architectures as sensing units was explored as a proof of principle in the detection of trace amounts of the atrazine herbicide in water using impedance spectroscopy. Further statistical and computational analysis reveals not only the role played by the FePc supramolecular architecture but also the sensitivity of the system, which detects atrazine solutions down to 10^-10 mol/L, sufficient to monitor the quality of drinking water even under the most stringent international regulations.

  10. High fidelity computational characterization of the mechanical response of thermally aged polycarbonate

    NASA Astrophysics Data System (ADS)

    Zhang, Zesheng; Zhang, Lili; Jasa, John; Li, Wenlong; Gazonas, George; Negahban, Mehrdad

    2017-07-01

    A representative all-atom molecular dynamics (MD) system of polycarbonate (PC) is built and conditioned to capture and predict the behaviours of PC in response to a broad range of thermo-mechanical loadings for various thermal aging. The PC system is constructed to have a distribution of molecular weights comparable to a widely used commercial PC (LEXAN 9034), and thermally conditioned to produce models for aged and unaged PC. The MD responses of these models are evaluated through comparisons to existing experimental results carried out at much lower loading rates, but done over a broad range of temperatures and loading modes. These experiments include monotonic extension/compression/shear, unilaterally and bilaterally confined compression, and load-reversal during shear. It is shown that the MD simulations show both qualitative and quantitative similarity with the experimental response. The quantitative similarity is evaluated by comparing the dilatational response under bilaterally confined compression, the shear flow viscosity and the equivalent yield stress. The consistency of the in silico response to real laboratory experiments strongly suggests that the current PC models are physically and mechanically relevant and potentially can be used to investigate thermo-mechanical response to loading conditions that would not easily be possible. These MD models may provide valuable insight into the molecular sources of certain observations, and could possibly offer new perspectives on how to develop constitutive models that are based on better understanding the response of PC under complex loadings. To this latter end, the models are used to predict the response of PC to complex loading modes that would normally be difficult to do or that include characteristics that would be difficult to measure. These include the responses of unaged and aged PC to unilaterally confined extension/compression, cyclic uniaxial/shear loadings, and saw-tooth extension/compression/shear.

  11. Volumetric velocity measurements in restricted geometries using spiral sampling: a phantom study.

    PubMed

    Nilsson, Anders; Revstedt, Johan; Heiberg, Einar; Ståhlberg, Freddy; Bloch, Karin Markenroth

    2015-04-01

    The aim of this study was to evaluate the accuracy of maximum velocity measurements using volumetric phase-contrast imaging with spiral readouts in a stenotic flow phantom. In a phantom model, maximum velocity, flow, pressure gradient, and streamline visualizations were evaluated using volumetric phase-contrast magnetic resonance imaging (MRI) with velocity encoding in one direction (extending current clinical practice) and in three directions (for characterization of the flow field) using spiral readouts. Results for maximum velocity and pressure drop were compared to computational fluid dynamics (CFD) simulations, as well as to corresponding low-echo-time (TE) Cartesian data. Flow was compared to 2D through-plane phase contrast (PC) upstream of the restriction. Results obtained with 3D through-plane PC as well as 4D PC at shortest TE using a spiral readout showed excellent agreement with the maximum velocity values obtained with CFD (<1% deviation for both methods), while larger deviations were seen using Cartesian readouts (-2.3% and 13%, respectively). Peak pressure drops calculated from the 3D through-plane PC and 4D PC spiral sequences were overestimated by 14% and 13%, respectively, compared to CFD. Identification of the maximum velocity location, as well as accurate velocity quantification, can be achieved in stenotic regions using short-TE spiral volumetric PC imaging.
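
    The abstract does not state how the pressure drops were derived from the velocities, but a common choice in PC MRI studies, assumed in the sketch below, is the simplified Bernoulli relation: dP [mmHg] = 4 v_max^2 with v_max in m/s.

```python
# Assumed relation (not confirmed by the abstract): the simplified Bernoulli
# equation commonly used in PC MRI, dP [mmHg] = 4 * v_max^2 with v_max in m/s.
def simplified_bernoulli(v_max: float) -> float:
    """Peak pressure drop in mmHg from peak velocity in m/s."""
    return 4.0 * v_max ** 2

print(simplified_bernoulli(2.5))   # 2.5 m/s across a stenosis -> 25 mmHg
```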

  12. A Phosphorus Phthalocyanine Formulation with Intense Absorbance at 1000 nm for Deep Optical Imaging

    PubMed Central

    Zhou, Yang; Wang, Depeng; Zhang, Yumiao; Chitgupi, Upendra; Geng, Jumin; Wang, Yuehang; Zhang, Yuzhen; Cook, Timothy R.; Xia, Jun; Lovell, Jonathan F.

    2016-01-01

    Although photoacoustic computed tomography (PACT) operates with high spatial resolution in biological tissues at depths beyond other optical modalities, light scattering remains a limiting factor. The use of longer near-infrared wavelengths reduces scattering. Recently, the rational design of a stable phosphorus phthalocyanine (P-Pc) with a long-wavelength absorption band beyond 1000 nm was reported. Here, we show that when dissolved in liquid surfactants, P-Pc can give rise to formulations with absorbance greater than 1000 (calculated for a 1 cm path length) at wavelengths beyond 1000 nm. Using the broadly accessible Nd:YAG pulsed-laser emission at 1064 nm, P-Pc could be imaged through 11.6 cm of chicken breast with PACT. P-Pc accumulated passively in tumors following intravenous injection in mice, as observed by PACT. Following oral administration, P-Pc passed through the intestine harmlessly, and PACT could be used to non-invasively observe intestinal function. When the contrast agent was placed under the arm of a healthy adult human, a PACT transducer on top of the arm could readily detect P-Pc through the entire 5 cm limb. Thus, the approach of using contrast media with extreme absorption at 1064 nm readily enables high-quality optical imaging in vitro and in vivo in humans at exceptional depths. PMID:27022416

  13. Comparison of velocity patterns in an AComA aneurysm measured with 2D phase contrast MRI and simulated with CFD.

    PubMed

    Karmonik, Christof; Klucznik, Richard; Benndorf, Goetz

    2008-01-01

    Computational fluid dynamics (CFD) is increasingly being used for modeling hemodynamics in intracranial aneurysms. While CFD techniques are well established, the need for validation of the results remains. By quantifying features in velocity patterns measured with 2D phase-contrast magnetic resonance imaging (pcMRI) in vivo and simulated with CFD, the role of pcMRI in providing reference data for CFD simulation is explored. Unsteady CFD simulations were performed with inflow boundary conditions obtained from 2D pcMRI measurements of an aneurysm of the anterior communicating artery. Intra-aneurysmal velocity profiles were recorded with 2D pcMRI and calculated with CFD. Relative areas of positive and negative velocity were calculated in these profiles for maximum and minimum inflow. Areas of positive and of negative velocity similar in shape were found in the velocity profiles obtained with both methods. The relative difference in size of these areas over the whole cardiac cycle ranged from 1%-25% (average 12%). 2D pcMRI is able to record velocity profiles in an aneurysm of the anterior communicating artery in vivo. These velocity profiles can serve as reference data for validation of CFD simulations. Further studies are needed to explore the role of pcMRI in the context of CFD simulations.

  14. Power centroid radar and its rise from the universal cybernetics duality

    NASA Astrophysics Data System (ADS)

    Feria, Erlan H.

    2014-05-01

    Power centroid radar (PC-Radar) is a fast and powerful adaptive radar scheme that naturally surfaced from the recent discovery of the time-dual for information theory, which has been named "latency theory." Latency theory itself was born from the universal cybernetics duality (UC-Duality), first identified in the late 1970s, which has also delivered a time dual for thermodynamics, named "lingerdynamics," that anchors an emerging lifespan theory for biological systems. In this paper the rise of PC-Radar from the UC-Duality is described. The development of PC-Radar, which is US patented, started with Defense Advanced Research Projects Agency (DARPA)-funded research on knowledge-aided (KA) adaptive radar of the last decade. The outstanding signal to interference plus noise ratio (SINR) performance of PC-Radar under severely taxing environmental disturbances will be established. More specifically, it will be seen that the SINR performance of PC-Radar, either KA or knowledge-unaided (KU), approximates that of an optimum KA radar scheme. The explanation for this remarkable result is that PC-Radar inherently arises from the UC-Duality, which advances a "first principles" duality guidance theory for the derivation of synergistic storage-space/computational-time compression solutions. Real-world synthetic aperture radar (SAR) images will be used as prior knowledge to illustrate these results.

  15. An interactive program to display user-generated or file-based maps on a personal computer monitor

    USGS Publications Warehouse

    Langer, W.H.; Stephens, R.W.

    1987-01-01

    PC MAP-MAKER is an ADVANCED BASIC program written to provide users of IBM XT, IBM AT, and compatible computers with a straightforward, flexible method to display geographical data on a color or monochrome PC (personal computer) monitor. Data can be political boundaries such as State and county boundaries; natural curvilinear features such as rivers, drainage areas, and geological contacts; and points such as well locations and mineral localities. Essentially any point defined by a latitude and longitude and any line defined by a series of latitude and longitude values can be displayed using the program. PC MAP-MAKER allows users to view tabular data from U.S. Geological Survey files such as WATSTORE (National Water Data Storage and Retrieval System) in a map format in a time much shorter than required by sending the data to a line plotter. The screen image can be saved to disk for recall at a later date, and hard copies can be printed with a dot matrix printer. The program is user-friendly, using menus or prompts to guide user input. It is fully documented and structured to allow users to tailor the program to their specific needs. The documentation includes a tutorial designed to introduce users to the capabilities of the program using the State of Colorado as a demonstration map area. (Author's abstract)

  16. Personal computer versus personal computer/mobile device combination users' preclinical laboratory e-learning activity.

    PubMed

    Kon, Haruka; Kobayashi, Hiroshi; Sakurai, Naoki; Watanabe, Kiyoshi; Yamaga, Yoshiro; Ono, Takahiro

    2017-11-01

    The aim of the present study was to clarify differences in usage patterns between personal computer (PC)/mobile device combination users and PC-only users. We analyzed access frequency and time spent on a complete denture preclinical website in order to maximize website effectiveness. Fourth-year undergraduate students (N=41) in the preclinical complete denture laboratory course were invited to participate in this survey during the final week of the course, and their login data were tracked. Students accessed video demonstrations and quizzes via our e-learning site/course program, and were instructed to view online demonstrations before classes. When the course concluded, participating students filled out a questionnaire about the program, their opinions, and the devices they had used to access the site. Combination users accessed the site significantly more frequently than PC-only users during supplementary learning time, indicating that students with mobile devices studied during lunch breaks and before morning classes. Most students had favorable opinions of the e-learning site, but a few combination users commented that some videos were too long and that descriptive answers were difficult on smartphones. These results imply that mobile devices' increased accessibility encouraged learning by enabling more efficient use of time between classes. They also suggest that e-learning system improvements should cater to mobile device users by reducing video length and including more short-answer questions. © 2016 John Wiley & Sons Australia, Ltd.

  17. Towards predictive data-driven simulations of wildfire spread - Part I: Reduced-cost Ensemble Kalman Filter based on a Polynomial Chaos surrogate model for parameter estimation

    NASA Astrophysics Data System (ADS)

    Rochoux, M. C.; Ricci, S.; Lucor, D.; Cuenot, B.; Trouvé, A.

    2014-05-01

    This paper is the first part in a series of two articles and presents a data-driven wildfire simulator for forecasting wildfire spread scenarios, at a reduced computational cost that is consistent with operational systems. The prototype simulator features the following components: a level-set-based fire propagation solver FIREFLY that adopts a regional-scale modeling viewpoint, treats wildfires as surface propagating fronts, and uses a description of the local rate of fire spread (ROS) as a function of environmental conditions based on Rothermel's model; a series of airborne-like observations of the fire front positions; and a data assimilation algorithm based on an ensemble Kalman filter (EnKF) for parameter estimation. This stochastic algorithm partly accounts for the non-linearities between the input parameters of the semi-empirical ROS model and the fire front position, and is sequentially applied to provide a spatially-uniform correction to wind and biomass fuel parameters as observations become available. A wildfire spread simulator combined with an ensemble-based data assimilation algorithm is therefore a promising approach to reduce uncertainties in the forecast position of the fire front and to introduce a paradigm-shift in the wildfire emergency response. In order to reduce the computational cost of the EnKF algorithm, a surrogate model based on a polynomial chaos (PC) expansion is used in place of the forward model FIREFLY in the resulting hybrid PC-EnKF algorithm. The performance of EnKF and PC-EnKF is assessed on synthetically-generated simple configurations of fire spread to provide valuable information and insight on the benefits of the PC-EnKF approach as well as on a controlled grassland fire experiment. The results indicate that the proposed PC-EnKF algorithm features similar performance to the standard EnKF algorithm, but at a much reduced computational cost. In particular, the re-analysis and forecast skills of data assimilation strongly relate to the spatial and temporal variability of the errors in the ROS model parameters.
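
    A minimal sketch of the EnKF parameter-estimation step underlying both the standard and PC-surrogate variants is given below: an ensemble of ROS-related parameters is corrected with perturbed observations of the front position. The scalar forward model is a toy stand-in for FIREFLY (or for its polynomial chaos surrogate), not the authors' code.

```python
# Minimal perturbed-observation EnKF update for one ROS-related parameter.
# The scalar forward model G is a toy stand-in for FIREFLY (or its PC
# surrogate); it is not the authors' code.
import numpy as np

rng = np.random.default_rng(5)
Ne = 50                                        # ensemble size
theta = rng.normal(1.0, 0.3, size=Ne)          # prior parameter ensemble

def G(t):
    return 10.0 * t                            # toy front-position model

y_obs, sigma_o = 12.0, 0.5                     # observed front position, noise std
d = G(theta)                                   # predicted observations

# Kalman gain from ensemble statistics (scalar case).
K = np.cov(theta, d)[0, 1] / (np.var(d, ddof=1) + sigma_o**2)

y_pert = y_obs + rng.normal(0.0, sigma_o, size=Ne)   # perturbed observations
theta_post = theta + K * (y_pert - d)
print("prior mean %.3f -> posterior mean %.3f" % (theta.mean(), theta_post.mean()))
```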

  18. Towards predictive data-driven simulations of wildfire spread - Part I: Reduced-cost Ensemble Kalman Filter based on a Polynomial Chaos surrogate model for parameter estimation

    NASA Astrophysics Data System (ADS)

    Rochoux, M. C.; Ricci, S.; Lucor, D.; Cuenot, B.; Trouvé, A.

    2014-11-01

    This paper is the first part in a series of two articles and presents a data-driven wildfire simulator for forecasting wildfire spread scenarios, at a reduced computational cost that is consistent with operational systems. The prototype simulator features the following components: an Eulerian front propagation solver FIREFLY that adopts a regional-scale modeling viewpoint, treats wildfires as surface propagating fronts, and uses a description of the local rate of fire spread (ROS) as a function of environmental conditions based on Rothermel's model; a series of airborne-like observations of the fire front positions; and a data assimilation (DA) algorithm based on an ensemble Kalman filter (EnKF) for parameter estimation. This stochastic algorithm partly accounts for the nonlinearities between the input parameters of the semi-empirical ROS model and the fire front position, and is sequentially applied to provide a spatially uniform correction to wind and biomass fuel parameters as observations become available. A wildfire spread simulator combined with an ensemble-based DA algorithm is therefore a promising approach to reduce uncertainties in the forecast position of the fire front and to introduce a paradigm-shift in the wildfire emergency response. In order to reduce the computational cost of the EnKF algorithm, a surrogate model based on a polynomial chaos (PC) expansion is used in place of the forward model FIREFLY in the resulting hybrid PC-EnKF algorithm. The performance of EnKF and PC-EnKF is assessed on synthetically generated simple configurations of fire spread to provide valuable information and insight on the benefits of the PC-EnKF approach, as well as on a controlled grassland fire experiment. The results indicate that the proposed PC-EnKF algorithm features similar performance to the standard EnKF algorithm, but at a much reduced computational cost. In particular, the re-analysis and forecast skills of DA strongly relate to the spatial and temporal variability of the errors in the ROS model parameters.

  19. A Computational Framework for Design and Development of Novel Prostate Cancer Therapies

    DTIC Science & Technology

    2015-09-01

    a ‘wound healing’ assay. The anti-cancer effect of CTN06 was further validated in vivo in a PC3 xenograft mouse model. Cell Death and Disease (2014) 5...autophagy and apoptosis in prostate cancer cells, and inhibits prostate cancer xenograft tumor growth in vivo. To our knowledge, this is the first report...sensitizer, and blocking of autophagy by CQ promotes CTN06-induced cell death. CTN06 inhibits PC3 xenograft tumor growth in vivo. Given the in vitro

  20. Time-Reversal Based Range Extension Technique for Ultra-wideband (UWB) Sensors and Applications in Tactical Communications and Networking

    DTIC Science & Technology

    2008-04-16

    Zhen (Edward) Hu, Peng (Peter) Zhang, Yu Song, Amanpreet Singh Saini, Corey Cooke, April 16, 2006, Department of Electrical and Computer Engineering, Center... and RF frequency agility is the most challenging issue for spectrum sensing. The radio under development is an ultra-wideband software-defined radio... PC USB programming cable and accompanying PC software as well as download test vectors to the waveform memory module, as shown in Figure 3.25

  1. QUEST Hanford Site Computer Users - What do they do?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    WITHERSPOON, T.T.

    2000-03-02

    The Fluor Hanford Chief Information Office requested that a computer-user survey be conducted to determine the users' dependence on the computer and its importance to their ability to accomplish their work. Daily use trends and future needs of Hanford Site personal computer (PC) users were also to be defined. A primary objective was to use the data to determine how budgets should be focused toward providing those services that are truly needed by the users.

  2. Hard Hitters!

    ERIC Educational Resources Information Center

    Dupont, Stephen

    2000-01-01

    Presents a selection of computers and peripherals designed to enhance the classroom. They include personal digital assistants (the AlphaSmart 30001R, CalcuScribe Duo, and DreamWriter IT); new Apple products (the iBook laptop, improved iMac, and OS 9 operating system); PC options (new Gateway and Compaq computers); and gadgets (imagiLab, the QX3…

  3. A Computer Spreadsheet for Locating Assistive Devices.

    ERIC Educational Resources Information Center

    Palmer, Catherine V.; Garstecki, Dean C.

    1988-01-01

    The article presents a directory of assistive devices for persons with hearing impairments in a grid format by distributor and type of device (alerting devices, telephone, TV/radio/stereo, personal communication, group communication, and other). The product locator is also available in spreadsheet form for either the Macintosh or IBM-PC computers.…

  4. Defense Automation Resources Management Manual

    DTIC Science & Technology

    1988-09-01

    Electronic Command Signals Programmer, Plugboard Programmers Punch, Card Punch, Paper Tape Reader, Character Reader-Generator, Time Cards Reader...Multiplexor-Shift Register Group Multiplier Panel Control, Plugboard Panel, Interconnection, Digital Computer Panel, Meter-Attenuator, Tape Recorder PC Cards...Perforator, Tape Plug-In Unit Potentiometer, Coefficient, Analog Computer Programmer, Plugboard Punch, Paper Tape Racks Reader, Time Code Reader

  5. Cloud Computing and the Power to Choose

    ERIC Educational Resources Information Center

    Bristow, Rob; Dodds, Ted; Northam, Richard; Plugge, Leo

    2010-01-01

    Some of the most significant changes in information technology are those that have given the individual user greater power to choose. The first of these changes was the development of the personal computer. The PC liberated the individual user from the limitations of the mainframe and minicomputers and from the rules and regulations of centralized…

  6. Chemical Engineering and Instructional Computing: Are They in Step? (Part 2).

    ERIC Educational Resources Information Center

    Seider, Warren D.

    1988-01-01

    Describes the use of "CACHE IBM PC Lessons for Courses Other than Design and Control" as open-ended design oriented problems. Presents graphics from some of the software and discusses high-resolution graphics workstations. Concludes that computing tools are in line with design and control practice in chemical engineering. (MVL)

  7. Drowning in PC Management: Could a Linux Solution Save Us?

    ERIC Educational Resources Information Center

    Peters, Kathleen A.

    2004-01-01

    Short on funding and IT staff, a Western Canada library struggled to provide adequate public computing resources. Staff turned to a Linux-based solution that supports up to 10 users from a single computer, and blends Web browsing and productivity applications with session management, Internet filtering, and user authentication. In this article,…

  8. Dr. Sanger's Apprentice: A Computer-Aided Instruction to Protein Sequencing.

    ERIC Educational Resources Information Center

    Schmidt, Thomas G.; Place, Allen R.

    1985-01-01

    Modeled after the program "Mastermind," this program teaches students the art of protein sequencing. The program (written in Turbo Pascal for the IBM PC, requiring 128K, a graphics adapter, and an 8087 mathematics coprocessor) generates a polypeptide whose sequence and length can be user-defined (for practice) or computer-generated (for…

  9. ROTRAN 1 - SOLUTION OF EQUATIONS FOR ROTARY TRANSFORMERS

    NASA Technical Reports Server (NTRS)

    Salomon, P. M.

    1994-01-01

    ROTRAN1 is a computer program to calculate the impedance and current gain of a simple transformer. Inputs to the program are primary resistance, primary inductance, secondary (load) resistance, secondary inductance, and mutual inductance. ROTRAN1 was written in BASICA for execution on the IBM PC personal computer. It was written in 1986.

  10. [The use of computer technology and digital photography in the work of a forensic medical expertise bureau].

    PubMed

    Novoselov, V P; Fedorov, S A

    1999-01-01

    A UNISCAN scanner connected to a PC was used at the department of medical criminology and at the histological department of the Novosibirsk Regional Bureau of Forensic Medical Expert Evaluations. The quality of images obtained by computer and digital photography is not inferior to that of traditional photographs.

  11. Learning from Home: Implementing Technical Infrastructure for Distributed Learning via Home-PC within Telenor.

    ERIC Educational Resources Information Center

    Folkman, Kristian; Berge, Zane L.

    2002-01-01

    Presents results from a study conducted at Telenor, a Norwegian telecom operator, regarding the distribution of home personal computers to employees to encourage professional development outside working hours on an individual basis. Findings from a survey suggest that home computers can contribute positively to increasing employees' knowledge…

  12. Buying Your Next (or First) PC: What Matters Now?

    ERIC Educational Resources Information Center

    Crawford, Walt

    1993-01-01

    Discussion of factors to consider in purchasing a personal computer covers present and future needs, computing environments, memory, processing performance, disk size, and display quality. Issues such as bundled systems, where and when to purchase, and vendor support are addressed; and an annotated bibliography of 28 recent articles is included.…

  13. A Design Tool for Liquid Rocket Engine Injectors

    NASA Technical Reports Server (NTRS)

    Farmer, Richard C.; Cheng, Gary; Trinh, Huu Phuoc; Tucker, P. Kevin; Hutt, John

    1999-01-01

    A practical design tool for the analysis of flowfields near the injector face has been developed and used to analyze the Fastrac engine. The objective was to produce a computational design tool which was detailed enough to predict the interactive effects of injector element impingement angles and points and the momenta of the individual orifice flows. To obtain a model which could be used to simulate a significant number of individual orifices, a homogeneous computational fluid dynamics model was developed. To describe liquid and vapor sub- and super-critical flows, the model included thermal and caloric equations of state which were valid over a wide range of pressures and temperatures. A homogeneous model was constructed such that the local state of the flow was determined directly, i.e. the quality of the flow was calculated. Such a model does not identify drops or their distribution, but it does allow the flow along the injector face and into the acoustic cavity to be predicted. It also allows the film coolant flow to be accurately described. The initial evaluation of the injector code was made by simulating cold flow from an unlike injector element and from a like-on-like overlapping fan (LOL) injector element. The predicted mass flux distributions of these injector elements compared well to cold flow test results. These are the same cold flow tests which serve as the data base for the JANNAF performance prediction codes. The flux distributions 1 inch downstream of the injector face are very similar; the differences were somewhat larger at further distances from the faceplate. Since the cold flow testing did not achieve good mass balances when integrations across the entire fan were made, the CFD simulation appears to be a reasonable alternative to future cold flow testing. To simulate the Fastrac, an RP-1/LOX combustion model must be chosen. This submodel must be relatively simple to accomplish three-dimensional, multiphase flow simulations. Single RP-1 pyrolysis and partial oxidation steps were chosen and the combustion was completed with the wet CO mechanism. Soot was also formed with a single global reaction. To validate the combustion submodel, global data from gas generator tests and from subscale motor tests were used to predict qualitatively correct mean molecular weights, temperature, and soot levels. Because such tests do not provide general kinetics rates, the methodology is not necessarily appropriate for conditions other than rocket-type flows. Soot predictions were made so that radiation heating to the motor walls can be estimated. These initial studies of the Fastrac were for a small region close to the injector face and chamber wall which included a segment of the acoustic cavity. The region analyzed includes 11 individual orifice holes to represent the LOL elements and the H2 film coolant holes. Typical results of this simulation are shown in Figure 1. At this point the only available test data to verify the predictions are temperatures measured in the acoustic cavity. These temperatures are in reasonable agreement at about 2000 R (1111 K). Future work is expected to include improving the computational efficiency of the CFD model and/or using more computer capacity than the single Pentium PC with which these simulations were made.

  14. Personal computer security: part 1. Firewalls, antivirus software, and Internet security suites.

    PubMed

    Caruso, Ronald D

    2003-01-01

    Personal computer (PC) security in the era of the Health Insurance Portability and Accountability Act of 1996 (HIPAA) involves two interrelated elements: safeguarding the basic computer system itself and protecting the information it contains and transmits, including personal files. HIPAA regulations have toughened the requirements for securing patient information, requiring every radiologist with such data to take further precautions. Security starts with physically securing the computer. Account passwords and a password-protected screen saver should also be set up. A modern antivirus program can easily be installed and configured. File scanning and updating of virus definitions are simple processes that can largely be automated and should be performed at least weekly. A software firewall is also essential for protection from outside intrusion, and an inexpensive hardware firewall can provide yet another layer of protection. An Internet security suite yields additional safety. Regular updating of the security features of installed programs is important. Obtaining a moderate degree of PC safety and security is somewhat inconvenient but is necessary and well worth the effort. Copyright RSNA, 2003

  15. Efficiently Distributing Component-Based Applications Across Wide-Area Environments

    DTIC Science & Technology

    2002-01-01

    Oracle 8.1.7 Enterprise Edition), each running on a dedicated 1 GHz dual-processor Pentium III workstation. For the RUBiS tests, we used a MySQL 4.0.12 ... a variety of sophisticated network-accessible services such as e-mail, banking, on-line shopping, entertainment, and serving as a data exchange ... Beans Catalog Handles read-only queries to product database Customer Serves as a façade to Order and Account Stateful Session Beans ShoppingCart

  16. Multivariate statistical analysis of low-voltage EDS spectrum images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, I.M.

    1998-03-01

    Whereas energy-dispersive X-ray spectrometry (EDS) has been used for compositional analysis in the scanning electron microscope for 30 years, the benefits of using low operating voltages for such analyses have been explored only during the last few years. This paper couples low-voltage EDS with two other emerging areas of characterization: spectrum imaging and multivariate statistical analysis. The specimen analyzed for this study was a finished Intel Pentium processor, with the polyimide protective coating stripped off to expose the final active layers.

  17. Multi-threaded Event Processing with DANA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    David Lawrence; Elliott Wolin

    2007-05-14

    The C++ data analysis framework DANA has been written to support the next generation of Nuclear Physics experiments at Jefferson Lab commensurate with the anticipated 12 GeV upgrade. The DANA framework was designed to allow multi-threaded event processing with a minimal impact on developers of reconstruction software. This document describes how DANA implements multi-threaded event processing and compares it to simply running multiple instances of a program. Also presented are relative reconstruction rates for Pentium 4-, Xeon-, and Opteron-based machines.

  18. Accelerated Modeling and New Ferroelectric Materials for Naval SONAR

    DTIC Science & Technology

    2004-06-01

    ... on other platforms was achieved. As expected, proper vectorization and optimal memory usage were ... code was fully vectorized, a speed-up of 9.2 times over the Pentium 4 Xeon and 6.6 times over the SGI O3K was achieved. We are currently using the X1 in production ... into BZ leads to a development of small polarization ... polarization is due to a combination of large Ag off-centering and small displacements by the other cations. The large Ag displacements are due to a ...

  19. Individual Decision-Making in Uncertain and Large-Scale Multi-Agent Environments

    DTIC Science & Technology

    2009-02-18

    first method, labeled as MC, limits and holds constant the number of models, 0 < K_MC < M, where M is the possibly large number of candidate models of ... equivalent and hence may be replaced by a subset of representative models without a significant loss in the optimality of the decision maker. K_MC ... for different horizons. K_MC and M are equal to 50 and 100, respectively, for both approximate and exact approaches (Pentium 4, 3.0 GHz, 1 GB RAM, WinXP

  20. Directions in parallel programming: HPF, shared virtual memory and object parallelism in pC++

    NASA Technical Reports Server (NTRS)

    Bodin, Francois; Priol, Thierry; Mehrotra, Piyush; Gannon, Dennis

    1994-01-01

    Fortran and C++ are the dominant programming languages used in scientific computation. Consequently, extensions to these languages are the most popular for programming massively parallel computers. We discuss two such approaches to parallel Fortran and one approach to C++. The High Performance Fortran Forum has designed HPF with the intent of supporting data parallelism on Fortran 90 applications. HPF works by asking the user to help the compiler distribute and align the data structures with the distributed memory modules in the system. Fortran-S takes a different approach in which the data distribution is managed by the operating system and the user provides annotations to indicate parallel control regions. In the case of C++, we look at pC++ which is based on a concurrent aggregate parallel model.

  1. Goddard Space Flight Center's Structural Dynamics Data Acquisition System

    NASA Technical Reports Server (NTRS)

    McLeod, Christopher

    2004-01-01

    Turnkey Commercial Off The Shelf (COTS) data acquisition systems typically perform well and meet most of the objectives of the manufacturer. The problem is that they seldom meet most of the objectives of the end user. The analysis software, if any, is unlikely to be tailored to the end users specific application; and there is seldom the chance of incorporating preferred algorithms to solve unique problems. Purchasing a customized system allows the end user to get a system tailored to the actual application, but the cost can be prohibitive. Once the system has been accepted, future changes come with a cost and response time that's often not workable. When it came time to replace the primary digital data acquisition system used in the Goddard Space Flight Center's Structural Dynamics Test Section, the decision was made to use a combination of COTS hardware and in-house developed software. The COTS hardware used is the DataMAX II Instrumentation Recorder built by R.C. Electronics Inc. and a desktop Pentium 4 computer system. The in-house software was developed using MATLAB from The MathWorks. This paper will describe the design and development of the new data acquisition and analysis system.

  2. Protein structure database search and evolutionary classification.

    PubMed

    Yang, Jinn-Moon; Tung, Chi-Hua

    2006-01-01

    As more protein structures become available and structural genomics efforts provide structural models in a genome-wide strategy, there is a growing need for fast and accurate methods for discovering homologous proteins and evolutionary classifications of newly determined structures. We have developed 3D-BLAST, in part, to address these issues. 3D-BLAST is as fast as BLAST and calculates the statistical significance (E-value) of an alignment to indicate the reliability of the prediction. Using this method, we first identified 23 states of the structural alphabet that represent pattern profiles of the backbone fragments and then used them to represent protein structure databases as structural alphabet sequence databases (SADB). Our method enhanced BLAST as a search method, using a new structural alphabet substitution matrix (SASM) to find the longest common substructures with high-scoring structured segment pairs from an SADB database. Using personal computers with Intel Pentium 4 (2.8 GHz) processors, our method searched more than 10 000 protein structures in 1.3 s and achieved a good agreement with search results from detailed structure alignment methods. [3D-BLAST is available at http://3d-blast.life.nctu.edu.tw].

  3. On-Line Fringe Tracking and Prediction at IOTA

    NASA Technical Reports Server (NTRS)

    Wilson, Edward; Mah, Robert; Lau, Sonie (Technical Monitor)

    1999-01-01

    The Infrared/Optical Telescope Array (IOTA) is a multi-aperture Michelson interferometer located on Mt. Hopkins near Tucson, Arizona. To enable viewing of fainter targets, an on-line fringe tracking system is presently under development at NASA Ames Research Center. The system has been developed off-line using actual data from IOTA, and is presently undergoing on-line implementation at IOTA. The system has two parts: (1) a fringe tracking system that identifies the center of a fringe packet by fitting a parametric model to the data; and (2) a fringe packet motion prediction system that uses characteristics of past fringe packets to predict fringe packet motion. Combined, this information will be used to optimize on-line the scanning trajectory, resulting in improved visibility of faint targets. Fringe packet identification is highly accurate and robust (99% of the 4000 fringe packets were identified correctly, the remaining 1% were either out of the scan range or too noisy to be seen) and is performed in 30-90 milliseconds on a Pentium II-based computer. Fringe packet prediction, currently performed using an adaptive linear predictor, delivers a 10% improvement over the baseline of predicting no motion.

  5. A PC-based single-ADC multi-parameter data acquisition system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woodring, M.; Kegel, G.H.R.; Egan, J.J.

    1995-10-01

    A personal computer (PC) based multiparameter data acquisition system using the Microsoft Windows operating environment has been designed and constructed. An IBM AT-compatible personal computer with an Intel 486DX5 microprocessor was combined with a National Instruments AT-DIO-32 digital I/O card, a single Canberra 8713 ADC with 13-bit resolution, and a modified Canberra 8223 8-input analog multiplexer to acquire data from experiments carried out at the UML Van de Graaff accelerator. The accelerator data acquisition (ADAC) computer environment was programmed in Microsoft Visual BASIC for use in Windows. ADAC allows event-mode data acquisition with up to eight parameters (modifiable to 64) and the simultaneous display of parameters during acquisition. Additional features of ADAC include replay of event-mode data and graphical analysis/display of data. The ADAC environment is easy to upgrade or expand, inexpensive to implement, and is specifically designed to meet the needs of nuclear spectroscopy.

  6. Space station operating system study

    NASA Technical Reports Server (NTRS)

    Horn, Albert E.; Harwell, Morris C.

    1988-01-01

    The current phase of the Space Station Operating System study is based on the analysis, evaluation, and comparison of the operating systems implemented on the computer systems and workstations in the software development laboratory. Primary emphasis has been placed on the DEC MicroVMS operating system as implemented on the MicroVax II computer, with comparative analysis of the SUN UNIX system on the SUN 3/260 workstation computer, and to a limited extent, the IBM PC/AT microcomputer running PC-DOS. Some benchmark development and testing was also done for the Motorola MC68010 (VM03 system) before the system was taken from the laboratory. These systems were studied with the objective of determining their capability to support Space Station software development requirements, specifically for multi-tasking and real-time applications. The methodology utilized consisted of development, execution, and analysis of benchmark programs and test software, and the experimentation and analysis of specific features of the system or compilers in the study.

  7. [A survey of information literacy for undergraduate students in the department of radiological technology].

    PubMed

    Ohba, Hisateru; Matsutani, Hideya; Kashiwakura, Ikuo

    2009-01-20

    The purpose of this study was to clarify the information literacy of undergraduate students and problems in information education. An annual questionnaire survey was carried out by an anonymous method from 2003 to 2006. The survey was intended for third-year students in the Department of Radiological Technology. The questionnaire items were as follows: (1) ownership of a personal computer (PC), (2) usage purpose and frequency of PC operation, (3) operation frequency and mechanism of the Internet, and (4) IT terminology. The response rate was 100% in each year. The ratio of PC possession exceeded 80%. The ratio of students who replied "nearly every day" for the use of a PC and the Internet increased twofold and threefold in four years, respectively. More than 70% of students did not understand the mechanism of the Internet, and more than 60% of students did not know about TCP/IP. In the future, we need to consider information literacy education in undergraduate education.

  8. Effects of Increasing Drag on Conjunction Assessment

    NASA Technical Reports Server (NTRS)

    Frigm, Ryan Clayton; McKinley, David P.

    2010-01-01

    Conjunction Assessment Risk Analysis relies heavily on the computation of the Probability of Collision (Pc) and the understanding of the sensitivity of this calculation to the position errors as defined by the covariance. In Low Earth Orbit (LEO), covariance is predominantly driven by perturbations due to atmospheric drag. This paper describes the effects of increasing atmospheric drag through Solar Cycle 24 on Pc calculations. The process of determining these effects is found through analyzing solar flux predictions on Energy Dissipation Rate (EDR), historical relationship between EDR and covariance, and the sensitivity of Pc to covariance. It is discovered that while all LEO satellites will be affected by the increase in solar activity, the relative effect is more significant in the LEO regime around 700 kilometers in altitude compared to 400 kilometers. Furthermore, it is shown that higher Pc values can be expected at larger close approach miss distances. Understanding these counter-intuitive results is important to setting Owner/Operator expectations concerning conjunctions as solar maximum approaches.
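
    For context, the standard encounter-plane formulation behind such Pc calculations (textbook form, not specific to this paper) integrates the relative-position error density over the combined hard-body circle:

      P_c = \frac{1}{2\pi\sqrt{\det C}} \iint_{\|\mathbf{r}\| \le R}
            \exp\!\left(-\tfrac{1}{2}(\mathbf{r}-\boldsymbol{\mu})^{\mathsf T} C^{-1} (\mathbf{r}-\boldsymbol{\mu})\right)\, dA,

    where C is the combined position covariance projected into the encounter plane, \boldsymbol{\mu} is the nominal miss vector, and R is the combined hard-body radius. Growing drag uncertainty inflates C, which spreads the density and can raise the integral at large miss distances while lowering it at small ones, which is the counter-intuitive behavior described above.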

  9. On the solvation of the phosphocholine headgroup in an aqueous propylene glycol solution

    NASA Astrophysics Data System (ADS)

    Rhys, Natasha H.; Al-Badri, Mohamed Ali; Ziolek, Robert M.; Gillams, Richard J.; Collins, Louise E.; Lawrence, M. Jayne; Lorenz, Christian D.; McLain, Sylvia E.

    2018-04-01

    The atomic-scale structure of the phosphocholine (PC) headgroup in 30 mol. % propylene glycol (PG) in an aqueous solution has been investigated using a combination of neutron diffraction with isotopic substitution experiments and computer simulation techniques—molecular dynamics and empirical potential structure refinement. Here, the hydration of the PC headgroup remains largely intact compared with the hydration of this group in a bilayer and in a bulk water solution, with the PG molecules showing limited interactions with the headgroup. When direct PG interactions with PC do occur, they are most likely to coordinate to the N+(CH3)3 motifs. Further, PG does not affect the bulk water structure and the addition of PC does not perturb the PG-solvent interactions. This suggests that the reason why PG is able to penetrate into membranes easily is that it does not form strong hydrogen-bonding or electrostatic interactions with the headgroup, allowing it to easily move across the membrane barrier.

  10. Scaling Atomic Partial Charges of Carbonate Solvents for Lithium Ion Solvation and Diffusion

    DOE PAGES

    Chaudhari, Mangesh I.; Nair, Jijeesh R.; Pratt, Lawrence R.; ...

    2016-10-21

    Lithium-ion solvation and diffusion properties in ethylene carbonate (EC) and propylene carbonate (PC) were studied by molecular simulation, experiments, and electronic structure calculations. Studies carried out in water provide a reference for interpretation. Classical molecular dynamics simulation results are compared to ab initio molecular dynamics to assess nonpolarizable force field parameters for solvation structure of the carbonate solvents. Quasi-chemical theory (QCT) was adapted to take advantage of fourfold occupancy of the near-neighbor solvation structure observed in simulations and used to calculate solvation free energies. The computed free energy for transfer of Li+ to PC from water, based on electronic structure calculations with cluster-QCT, agrees with the experimental value. The simulation-based direct-QCT results with scaled partial charges agree with the electronic structure-based QCT values. The computed Li+/PF6- transference numbers of 0.35/0.65 (EC) and 0.31/0.69 (PC) agree well with NMR experimental values of 0.31/0.69 (EC) and 0.34/0.66 (PC) and similar values obtained here with impedance spectroscopy. These combined results demonstrate that solvent partial charges can be scaled in systems dominated by strong electrostatic interactions to achieve trends in ion solvation and transport properties that are comparable to ab initio and experimental results. Thus, the results support the use of scaled partial charges in simple, nonpolarizable force fields in future studies of these electrolyte solutions.

  11. Transitioning EEG experiments away from the laboratory using a Raspberry Pi 2.

    PubMed

    Kuziek, Jonathan W P; Shienh, Axita; Mathewson, Kyle E

    2017-02-01

    Electroencephalography (EEG) experiments are typically performed in controlled laboratory settings to minimise noise and produce reliable measurements. These controlled conditions also reduce the applicability of the obtained results to more varied environments and may limit their relevance to everyday situations. Advances in computer portability may increase the mobility and applicability of EEG results while decreasing costs. In this experiment we show that stimulus presentation using a Raspberry Pi 2 computer provides a low cost, reliable alternative to a traditional desktop PC in the administration of EEG experimental tasks. Significant and reliable MMN and P3 activity, typical event-related potentials (ERPs) associated with an auditory oddball paradigm, were measured while experiments were administered using the Raspberry Pi 2. While latency differences in ERP triggering were observed between systems, these differences reduced power only marginally, likely due to the reduced processing power of the Raspberry Pi 2. An auditory oddball task administered using the Raspberry Pi 2 produced similar ERPs to those derived from a desktop PC in a laboratory setting. Despite temporal differences and slight increases in trials needed for similar statistical power, the Raspberry Pi 2 can be used to design and present auditory experiments comparable to a PC. Our results show that the Raspberry Pi 2 is a low cost alternative to the desktop PC when administering EEG experiments and, due to its small size and low power consumption, will enable mobile EEG experiments unconstrained by a traditional laboratory setting. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Statistical Analysis of the Processes Controlling Choline and Ethanolamine Glycerophospholipid Molecular Species Composition

    PubMed Central

    Kiebish, Michael A.; Yang, Kui; Han, Xianlin; Gross, Richard W.; Chuang, Jeffrey

    2012-01-01

    The regulation and maintenance of the cellular lipidome through biosynthetic, remodeling, and catabolic mechanisms are critical for biological homeostasis during development, health and disease. These complex mechanisms control the architectures of lipid molecular species, which have diverse yet highly regulated fatty acid chains at both the sn1 and sn2 positions. Phosphatidylcholine (PC) and phosphatidylethanolamine (PE) serve as the predominant biophysical scaffolds in membranes, acting as reservoirs for potent lipid signals and regulating numerous enzymatic processes. Here we report the first rigorous computational dissection of the mechanisms influencing PC and PE molecular architectures from high-throughput shotgun lipidomic data. Using novel statistical approaches, we have analyzed multidimensional mass spectrometry-based shotgun lipidomic data from developmental mouse heart and mature mouse heart, lung, brain, and liver tissues. We show that in PC and PE, sn1 and sn2 positions are largely independent, though for low abundance species regulatory processes may interact with both the sn1 and sn2 chain simultaneously, leading to cooperative effects. Chains with similar biochemical properties appear to be remodeled similarly. We also see that sn2 positions are more regulated than sn1, and that PC exhibits stronger cooperative effects than PE. A key aspect of our work is a novel statistically rigorous approach to determine cooperativity based on a modified Fisher's exact test using Markov Chain Monte Carlo sampling. This computational approach provides a novel tool for developing mechanistic insight into lipidomic regulation. PMID:22662143
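
    A flavor of such an independence analysis can be given with a simple Monte Carlo permutation test on an sn1 x sn2 contingency table (an illustrative analogue only; the authors' modified Fisher's exact test with Markov Chain Monte Carlo sampling over tables is more involved):

      # Illustrative Monte Carlo permutation test of sn1/sn2 independence.
      import numpy as np

      rng = np.random.default_rng(0)

      def mc_independence_test(sn1, sn2, n_perm=10_000):
          """p-value for association between integer-coded sn1 and sn2
          chains, one entry per observed lipid molecular species."""
          def chi2(a, b):
              table = np.zeros((a.max() + 1, b.max() + 1))
              np.add.at(table, (a, b), 1.0)        # build contingency table
              exp = table.sum(1, keepdims=True) * table.sum(0) / table.sum()
              ok = exp > 0
              return (((table - exp) ** 2)[ok] / exp[ok]).sum()

          observed = chi2(sn1, sn2)
          # Shuffling sn2 breaks the sn1/sn2 pairing but keeps the margins.
          hits = sum(chi2(sn1, rng.permutation(sn2)) >= observed
                     for _ in range(n_perm))
          return (hits + 1) / (n_perm + 1)

    A small p-value indicates that the sn1 and sn2 positions are not independent, i.e. the kind of cooperative effect discussed above.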

  13. Advanced vehicle systems assessment. Volume 5: Appendices

    NASA Technical Reports Server (NTRS)

    Hardy, K.

    1985-01-01

    An appendix to the systems assessment for the electric hybrid vehicle project is presented. Included are battery design, battery cost, aluminum vehicle construction, IBM PC computer programs and battery discharge models.

  14. Data Acquisition System for Multi-Frequency Radar Flight Operations Preparation

    NASA Technical Reports Server (NTRS)

    Leachman, Jonathan

    2010-01-01

    A three-channel data acquisition system was developed for the NASA Multi-Frequency Radar (MFR) system. The system is based on a commercial-off-the-shelf (COTS) industrial PC (personal computer) and two dual-channel 14-bit digital receiver cards. The decimated complex envelope representations of the three radar signals are passed to the host PC via the PCI bus, and then processed in parallel by multiple cores of the PC CPU (central processing unit). The innovation is this parallelization of the radar data processing using multiple cores of a standard COTS multi-core CPU. The data processing portion of the data acquisition software was built using autonomous program modules or threads, which can run simultaneously on different cores. A master program module calculates the optimal number of processing threads, launches them, and continually supplies each with data. The benefit of this new parallel software architecture is that COTS PCs can be used to implement increasingly complex processing algorithms on an increasing number of radar range gates and data rates. As new PCs become available with higher numbers of CPU cores, the software will automatically utilize the additional computational capacity.
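
    The master/worker pattern described here, a master module that sizes a thread pool to the core count and keeps it fed with data blocks, can be sketched as follows (illustrative Python with an invented process_gates callback; not the MFR code):

      # Sketch of a master module feeding per-block worker threads.
      import os, queue, threading

      def run_acquisition(blocks, process_gates):
          n_workers = os.cpu_count() or 1      # one thread per CPU core
          in_q = queue.Queue(maxsize=2 * n_workers)
          out_q = queue.Queue()

          def worker():
              while True:
                  block = in_q.get()
                  if block is None:            # sentinel: shut down
                      break
                  out_q.put(process_gates(block))

          threads = [threading.Thread(target=worker) for _ in range(n_workers)]
          for t in threads:
              t.start()
          for block in blocks:                 # master keeps workers fed
              in_q.put(block)
          for _ in threads:                    # one sentinel per worker
              in_q.put(None)
          for t in threads:
              t.join()
          return [out_q.get() for _ in range(out_q.qsize())]

    In CPython this pays off when the per-block processing releases the global interpreter lock, as NumPy and C extension code do; otherwise a process pool is the usual substitute.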

  15. Acoustic scattering from phononic crystals with complex geometry.

    PubMed

    Kulpe, Jason A; Sabra, Karim G; Leamy, Michael J

    2016-05-01

    This work introduces a formalism for computing external acoustic scattering from phononic crystals (PCs) with arbitrary exterior shape using a Bloch wave expansion technique coupled with the Helmholtz-Kirchhoff integral (HKI). Similar to a Kirchhoff approximation, a geometrically complex PC's surface is broken into a set of facets in which the scattering from each facet is calculated as if it was a semi-infinite plane interface in the short wavelength limit. When excited by incident radiation, these facets introduce wave modes into the interior of the PC. Incorporation of these modes in the HKI, summed over all facets, then determines the externally scattered acoustic field. In particular, for frequencies in a complete bandgap (the usual operating frequency regime of many PC-based devices and the requisite operating regime of the presented theory), no need exists to solve for internal reflections from oppositely facing edges and, thus, the total scattered field can be computed without the need to consider internal multiple scattering. Several numerical examples are provided to verify the presented approach. Both harmonic and transient results are considered for spherical and bean-shaped PCs, each containing over 100 000 inclusions. This facet formalism is validated by comparison to an existing self-consistent scattering technique.
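
    For reference, the exterior Helmholtz-Kirchhoff integral that the facet contributions feed into has the standard form (generic notation, not the paper's exact discretization):

      p_s(\mathbf{x}) = \int_S \left[ p(\mathbf{y})\,\frac{\partial G(\mathbf{x},\mathbf{y})}{\partial n_y}
                        - G(\mathbf{x},\mathbf{y})\,\frac{\partial p(\mathbf{y})}{\partial n_y} \right] dS_y,
      \qquad G(\mathbf{x},\mathbf{y}) = \frac{e^{ik|\mathbf{x}-\mathbf{y}|}}{4\pi|\mathbf{x}-\mathbf{y}|},

    where S is the PC's outer surface split into facets, and the surface pressure p and its normal derivative on each facet are supplied by the semi-infinite Bloch-wave solution rather than by a full interior solve, which is what removes the need to track internal multiple scattering inside a complete bandgap.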

  16. An improved method for estimating capillary pressure from 3D microtomography images and its application to the study of disconnected nonwetting phase

    NASA Astrophysics Data System (ADS)

    Li, Tianyi; Schlüter, Steffen; Dragila, Maria Ines; Wildenschild, Dorthe

    2018-04-01

    We present an improved method for estimating interfacial curvatures from x-ray computed microtomography (CMT) data that significantly advances the potential for this tool to unravel the mechanisms and phenomena associated with multi-phase fluid motion in porous media. CMT data, used to analyze the spatial distribution and capillary pressure-saturation (Pc-S) relationships of liquid phases, requires accurate estimates of interfacial curvature. Our improved method for curvature estimation combines selective interface modification and distance weighting approaches. It was verified against synthetic (analytical computer-generated) and real image data sets, demonstrating a vast improvement over previous methods. Using this new tool on a previously published data set (multiphase flow) yielded important new insights regarding the pressure state of the disconnected nonwetting phase during drainage and imbibition. The trapped and disconnected non-wetting phase delimits its own hysteretic Pc-S curve that inhabits the space within the main hysteretic Pc-S loop of the connected wetting phase. Data suggests that the pressure of the disconnected, non-wetting phase is strongly modified by the pore geometry rather than solely by the bulk liquid phase that surrounds it.

  17. Fast software-based volume rendering using multimedia instructions on PC platforms and its application to virtual endoscopy

    NASA Astrophysics Data System (ADS)

    Mori, Kensaku; Suenaga, Yasuhito; Toriwaki, Jun-ichiro

    2003-05-01

    This paper describes a software-based fast volume rendering (VolR) method on a PC platform by using multimedia instructions, such as SIMD instructions, which are currently available in PCs' CPUs. This method achieves fast rendering speed through highly optimizing software rather than an improved rendering algorithm. In volume rendering using a ray casting method, the system requires fast execution of the following processes: (a) interpolation of voxel or color values at sample points, (b) computation of normal vectors (gray-level gradient vectors), (c) calculation of shaded values obtained by dot-products of normal vectors and light source direction vectors, (d) memory access to a huge area, and (e) efficient ray skipping at translucent regions. The proposed software implements these fundamental processes in volume rending by using special instruction sets for multimedia processing. The proposed software can generate virtual endoscopic images of a 3-D volume of 512x512x489 voxel size by volume rendering with perspective projection, specular reflection, and on-the-fly normal vector computation on a conventional PC without any special hardware at thirteen frames per second. Semi-translucent display is also possible.
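
    As an illustration of step (a), interpolation at many sample points along the cast rays can be written in a data-parallel style that mirrors what the SIMD instructions do several lanes at a time (a NumPy sketch standing in for SSE intrinsics; not the authors' code):

      # Data-parallel trilinear interpolation at N ray-sample points
      # (bounds checking omitted for brevity).
      import numpy as np

      def trilinear(volume, pts):
          """volume: 3-D array (Z, Y, X); pts: N x 3 float coordinates
          strictly inside the volume. Returns N interpolated values."""
          p0 = np.floor(pts).astype(int)       # lower cell corner
          f = pts - p0                         # fractional offsets
          z0, y0, x0 = p0[:, 0], p0[:, 1], p0[:, 2]
          acc = np.zeros(len(pts))
          for dz in (0, 1):                    # the 8 cell corners
              for dy in (0, 1):
                  for dx in (0, 1):
                      w = ((f[:, 0] if dz else 1 - f[:, 0])
                           * (f[:, 1] if dy else 1 - f[:, 1])
                           * (f[:, 2] if dx else 1 - f[:, 2]))
                      acc += w * volume[z0 + dz, y0 + dy, x0 + dx]
          return acc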

  18. Vision Based Localization in Urban Environments

    NASA Technical Reports Server (NTRS)

    McHenry, Michael; Cheng, Yang; Matthies, Larry

    2005-01-01

    As part of DARPA's MARS2020 program, the Jet Propulsion Laboratory developed a vision-based system for localization in urban environments that requires neither GPS nor active sensors. System hardware consists of a pair of small FireWire cameras and a standard Pentium-based computer. The inputs to the software system consist of: 1) a crude grid-based map describing the positions of buildings, 2) an initial estimate of robot location, and 3) the video streams produced by each camera. At each step during the traverse the system: captures new image data, finds image features hypothesized to lie on the outside of a building, computes the range to those features, determines an estimate of the robot's motion since the previous step, and combines that data with the map to update a probabilistic representation of the robot's location. This probabilistic representation allows the system to simultaneously represent multiple possible locations. For our testing, we derived the a priori map manually using non-orthorectified overhead imagery, although this process could be automated. The software system consists of two primary components. The first is the vision system, which uses binocular stereo ranging together with a set of heuristics to identify features likely to be part of building exteriors and to compute an estimate of the robot's motion since the previous step. The resulting visual features and the associated range measurements are then fed to the second primary software component, a particle-filter based localization system. This system uses the map and the most recent results from the vision system to update the estimate of the robot's location. This report summarizes the design of both the hardware and software and includes the results of applying the system to the global localization of a robot over an approximately half-kilometer traverse across JPL's Pasadena campus.
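
    The localization component follows the standard predict-weight-resample cycle of a particle filter; a minimal sketch under that reading (generic Python; map_likelihood is a hypothetical callback scoring a pose against the building map, not part of the JPL code):

      # One predict-weight-resample step of a localization particle filter.
      import numpy as np

      rng = np.random.default_rng(1)

      def pf_step(particles, weights, motion, features, map_likelihood,
                  noise=0.1):
          """particles: N x 3 poses (x, y, heading); motion: visual-
          odometry estimate (dx, dy, dheading) since the last step."""
          # Predict: apply estimated motion plus noise to each hypothesis.
          particles = particles + motion + rng.normal(0, noise, particles.shape)
          # Weight: score hypotheses against the ranged building features.
          weights = weights * np.array([map_likelihood(p, features)
                                        for p in particles])
          weights /= weights.sum()
          # Resample when the effective sample size collapses; this is how
          # the filter keeps multiple plausible locations alive at once.
          if 1.0 / (weights ** 2).sum() < len(particles) / 2:
              idx = rng.choice(len(particles), len(particles), p=weights)
              particles = particles[idx]
              weights = np.full(len(particles), 1.0 / len(particles))
          return particles, weights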

  19. Organizational Characteristics and Use of Personal Computer Software by Graduate Students in Makerere University

    ERIC Educational Resources Information Center

    Bakkabulindi, Fred Edward K.; Adebanjo, Oyebade Stephen

    2011-01-01

    This paper reports a survey that sought to establish levels of use of PC (personal computer) software by graduate students in Makerere University and to link the same to organizational characteristics, related to a given respondent's "unit", that is school, faculty or institute, namely its ability to absorb change, its ICT (Information…

  20. Newsletter for Asian and Middle Eastern Languages on Computer, Volume 1, Numbers 3 & 4.

    ERIC Educational Resources Information Center

    Meadow, Anthony, Ed.

    1986-01-01

    Volume 1, numbers 3 and 4, of the newsletter on the use of non-Western languages with computers contains the following articles: "Reversing the Screen under MS/PC-DOS" (Dan Brink); "Comments on Diacritics Using Wordstar, etc. and CP/M Software for Non-Western Languages" (Michael Broschat); "Carving Tibetan in Silicon: A…

  1. The MicronEye Motion Monitor: A New Tool for Class and Laboratory Demonstrations.

    ERIC Educational Resources Information Center

    Nissan, M.; And Others

    1988-01-01

    Describes a special camera that can be directly linked to a computer that has been adapted for studying movement. Discusses capture, processing, and analysis of two-dimensional data with either IBM PC or Apple II computers. Gives examples of a variety of mechanical tests including pendulum motion, air track, and air table. (CW)

  2. Automated Estimation Of Software-Development Costs

    NASA Technical Reports Server (NTRS)

    Roush, George B.; Reini, William

    1993-01-01

    COSTMODL is an automated software development-estimation tool. It yields a significant reduction in the risk of cost overruns and failed projects. It accepts a description of the software product to be developed and computes estimates of the effort required to produce it, the calendar schedule required, and the distribution of effort and staffing as a function of a defined set of development life-cycle phases. Written for IBM PC-compatible computers.
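
    The general shape of parametric cost models of this kind can be illustrated with the classic Basic COCOMO 81 curves (textbook coefficients shown; these are not necessarily COSTMODL's own calibrated models or phase distribution):

      # Basic COCOMO 81 effort and schedule curves, textbook coefficients.
      def cocomo_basic(kloc, mode="organic"):
          """Return (effort in person-months, schedule in months) for a
          project of `kloc` thousand delivered source lines."""
          a, b, c, d = {"organic":      (2.4, 1.05, 2.5, 0.38),
                        "semidetached": (3.0, 1.12, 2.5, 0.35),
                        "embedded":     (3.6, 1.20, 2.5, 0.32)}[mode]
          effort = a * kloc ** b
          return effort, c * effort ** d

      # e.g. a 32 KLOC organic project: ~91 person-months over ~14 months
      print(cocomo_basic(32))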

  3. "Smart" Sensor Module

    NASA Technical Reports Server (NTRS)

    Mahajan, Ajay

    2007-01-01

    An assembly that contains a sensor, sensor-signal-conditioning circuitry, a sensor-readout analog-to-digital converter (ADC), data-storage circuitry, and a microprocessor that runs special-purpose software and communicates with one or more external computer(s) has been developed as a prototype of "smart" sensor modules for monitoring the integrity and functionality (the "health") of engineering systems. Although these modules are now being designed specifically for use on rocket-engine test stands, it is anticipated that they could also readily be designed to be incorporated into health-monitoring subsystems of such diverse engineering systems as spacecraft, aircraft, land vehicles, bridges, buildings, power plants, oil rigs, and defense installations. The figure is a simplified block diagram of the "smart" sensor module. The analog sensor readout signal is processed by the ADC, the digital output of which is fed to the microprocessor. By means of a standard RS-232 cable, the microprocessor is connected to a local personal computer (PC), from which software is downloaded into a random-access memory in the microprocessor. The local PC is also used to debug the software. Once the software is running, the local PC is disconnected and the module is controlled by, and all output data from the module are collected by, a remote PC via an Ethernet bus. Several smart sensor modules like this one could be connected to the same Ethernet bus and controlled by the single remote PC. The software running in the microprocessor includes driver programs for operation of the sensor, programs that implement self-assessment algorithms, programs that implement protocols for communication with the external computer(s), and programs that implement evolutionary methodologies to enable the module to improve its performance over time. The design of the module and of the health-monitoring system of which it is a part reflects the understanding that the main purpose of a health-monitoring system is to detect damage and, therefore, the health-monitoring system must be able to function effectively in the presence of damage and should be capable of distinguishing between damage to itself and damage to the system being monitored. A major benefit afforded by the self-assessment algorithms is that in the output of the module, the sensor data indicative of the health of the engineering system being monitored are coupled with a confidence factor that quantifies the degree of reliability of the data. Hence, the output includes information on the health of the sensor module itself in addition to information on the health of the engineering system being monitored.

  4. Finding Bounded Rational Equilibria. Part 1; Iterative Focusing

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.

    2004-01-01

    A long-running difficulty with conventional game theory has been how to modify it to accommodate the bounded rationality characterizing all real-world players. A recurring issue in statistical physics is how best to approximate joint probability distributions with decoupled (and therefore far more tractable) distributions. It has recently been shown that the same information theoretic mathematical structure, known as Probability Collectives (PC) underlies both issues. This relationship between statistical physics and game theory allows techniques and insights from the one field to be applied to the other. In particular, PC provides a formal model-independent definition of the degree of rationality of a player and of bounded rationality equilibria. This pair of papers extends previous work on PC by introducing new computational approaches to effectively find bounded rationality equilibria of common-interest (team) games.
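
    In PC's information-theoretic framing (sketched here in standard maximum-entropy form; the notation is generic rather than quoted from the paper), each player i holds an independent distribution q_i over its own moves, and the joint product distribution minimizes a free-energy-like objective:

      \mathcal{F}(q) = \mathbb{E}_q[G(x)] - T \sum_i S(q_i), \qquad q(x) = \prod_i q_i(x_i),

    whose stationary solutions are Boltzmann-type distributions q_i(x_i) \propto \exp(-\mathbb{E}_{q_{-i}}[G \mid x_i]/T). The temperature T quantifies the degree of (bounded) rationality: T \to 0 recovers fully rational best responses, while larger T yields increasingly noisy play.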

  5. FLY MPI-2: a parallel tree code for LSS

    NASA Astrophysics Data System (ADS)

    Becciani, U.; Comparato, M.; Antonuccio-Delogu, V.

    2006-04-01

    New version program summary. Program title: FLY 3.1. Catalogue identifier: ADSC_v2_0. Licensing provisions: yes. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSC_v2_0. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. No. of lines in distributed program, including test data, etc.: 158 172. No. of bytes in distributed program, including test data, etc.: 4 719 953. Distribution format: tar.gz. Programming language: Fortran 90, C. Computer: Beowulf cluster, PC, MPP systems. Operating system: Linux, AIX. RAM: 100M words. Catalogue identifier of previous version: ADSC_v1_0. Journal reference of previous version: Comput. Phys. Comm. 155 (2003) 159. Does the new version supersede the previous version?: yes. Nature of problem: FLY is a parallel collisionless N-body code for the calculation of the gravitational force. Solution method: FLY is based on the hierarchical oct-tree domain decomposition introduced by Barnes and Hut (1986). Reasons for the new version: The new version of FLY is implemented using the MPI-2 standard; the distributed version 3.1 was developed using the MPICH2 library on a PC Linux cluster. FLY's performance now places it among the most powerful parallel codes for tree N-body simulations. Another important new feature is the availability of an interface with hydrodynamical Paramesh-based codes. Simulations must follow a box large enough to accurately represent the power spectrum of fluctuations on very large scales so that we may hope to compare them meaningfully with real data. The number of particles then sets the mass resolution of the simulation, which we would like to make as fine as possible. Building an interface between two codes that have different and complementary cosmological tasks allows us to execute complex cosmological simulations with FLY, specialized for DM evolution, and a code specialized for hydrodynamical components that uses a Paramesh block structure. Summary of revisions: The parallel communication scheme was totally changed. The new version adopts the MPICH2 library, so FLY can now be executed on any Unix system having an MPI-2 standard library. The main data structure is declared in a module procedure of FLY (the fly_h.F90 routine). FLY creates the MPI window object for one-sided communication for all the shared arrays, with a call like the following: CALL MPI_WIN_CREATE(POS, SIZE, REAL8, MPI_INFO_NULL, MPI_COMM_WORLD, WIN_POS, IERR). The following main window objects are created: win_pos, win_vel, win_acc (particle positions, velocities, and accelerations) and win_pos_cell, win_mass_cell, win_quad, win_subp, win_grouping (cell positions, masses, quadrupole momenta, tree structure, and grouping cells). Other windows are created for dynamic load balance and global counters. Restrictions: The program uses the leapfrog integrator scheme, but this could be changed by the user. Unusual features: FLY uses the MPI-2 standard; the MPICH2 library on Linux systems was adopted. To run this version of FLY, the working directory must be shared among all the processors that execute FLY. Additional comments: Full documentation for the program is included in the distribution in the form of a README file, a User Guide, and a Reference manuscript. Running time: An IBM Linux Cluster 1350 at Cineca (512 nodes with 2 processors per node and 2 GB RAM per processor) was used for performance tests. Processor type: Intel Xeon (Pentium IV) 3.0 GHz with 512 KB cache (128 nodes have Nocona processors). Internal network: Myricom LAN Card, "C" Version and "D" Version. Operating system: Linux SuSE SLES 8. The code was compiled with the mpif90 compiler, version 8.1, with basic optimization options, so that the performance figures can be usefully compared with other generic clusters.
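
    The one-sided window pattern shown in the CALL above can be mirrored in a few lines of mpi4py for readers who want to experiment without a Fortran toolchain (a sketch, not part of FLY itself):

      # Illustrative mpi4py analogue of FLY's window-based one-sided
      # communication (FLY is Fortran 90 calling MPI_WIN_CREATE/MPI_GET).
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()

      # Each rank exposes its slice of particle positions in a window,
      # analogous to FLY's win_pos object.
      pos = np.full(1000, float(rank))
      win = MPI.Win.Create(pos, disp_unit=pos.itemsize, comm=comm)

      remote = np.empty(10)
      win.Fence()                                  # open access epoch
      # One-sided read of 10 elements from the neighbouring rank's
      # memory, with no matching send/recv on the target side.
      win.Get(remote, (rank + 1) % comm.Get_size())
      win.Fence()                                  # close epoch; data valid
      win.Free()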

  6. Potential-Field Geophysical Software for the PC

    USGS Publications Warehouse

    ,

    1995-01-01

    The computer programs of the Potential-Field Software Package run under the DOS operating system on IBM-compatible personal computers. They are used for the processing, display, and interpretation of potential-field geophysical data (gravity- and magnetic-field measurements) and other data sets that can be represented as grids or profiles. These programs have been developed on a variety of computer systems over a period of 25 years by the U.S. Geological Survey.

  7. Runwien: a text-based interface for the WIEN package

    NASA Astrophysics Data System (ADS)

    Otero de la Roza, A.; Luaña, Víctor

    2009-05-01

    A new text-based interface for WIEN2k, the full-potential linearized augmented plane-waves (FPLAPW) program, is presented. This code provides an easy to use, yet powerful way of generating arbitrarily large sets of calculations. Thus, properties over a potential energy surface and WIEN2k parameter exploration can be calculated using a simple input text file. This interface also provides new capabilities to the WIEN2k package, such as the calculation of elastic constants on hexagonal systems or the automatic gathering of relevant information. Additionally, runwien is modular, flexible and intuitive. Program summaryProgram title: runwien Catalogue identifier: AECM_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AECM_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GPL version 3 No. of lines in distributed program, including test data, etc.: 62 567 No. of bytes in distributed program, including test data, etc.: 610 973 Distribution format: tar.gz Programming language: gawk (with locale POSIX or similar) Computer: All running Unix, Linux Operating system: Unix, GNU/Linux Classification: 7.3 External routines: WIEN2k ( http://www.wien2k.at/), GAWK ( http://www.gnu.org/software/gawk/), rename by L. Wall, a Perl script which renames files, modified by R. Barker to check for the existence of target files, gnuplot ( http://www.gnuplot.info/) Subprograms used:Cat Id: ADSY_v1_0/AECB_v1_0, Title: GIBBS/CRITIC, Reference: CPC 158 (2004) 57/CPC 999 (2009) 999 Nature of problem: Creation of a text-based, batch-oriented interface for the WIEN2k package. Solution method: WIEN2k solves the Kohn-Sham equations of a solid using the FPLAPW formalism. Runwien interprets an input file containing the description of the geometry and structure of the solid and drives the execution of the WIEN2k programs. The input is simplified thanks to the default values of the WIEN2k parameters known to runwien. Additional comments: Designed for WIEN2k versions 06.4, 07.2, 08.2, and 08.3. Running time: For the test case (TiC), a single geometry takes 5 to 10 minutes on a typical desktop PC (Intel Pentium 4, 3.4 GHz, 1 GB RAM). The full example including the calculation of the elastic constants and the equation of state, takes 9 hours and 32 minutes.

  8. Expanding the substrate scope of phenylalanine ammonia-lyase from Petroselinum crispum towards styrylalanines.

    PubMed

    Bencze, László Csaba; Filip, Alina; Bánóczi, Gergely; Toşa, Monica Ioana; Irimie, Florin Dan; Gellért, Ákos; Poppe, László; Paizs, Csaba

    2017-05-03

    This study focuses on the expansion of the substrate scope of phenylalanine ammonia-lyase from Petroselinum crispum (PcPAL) towards the l-enantiomers of racemic styrylalanines rac-1a-d - which are less studied and synthetically challenging unnatural amino acids - by reshaping the aromatic binding pocket of the active site of PcPAL by point mutations. Ammonia elimination from l-styrylalanine (l-1a) catalyzed by non-mutated PcPAL (wt-PcPAL) took place with a 777-fold lower k_cat/K_M value than the deamination of the natural substrate, l-Phe. Computer modeling of the reactions catalyzed by wt-PcPAL indicated an unproductive and two major catalytically active conformations and detrimental interactions between the aromatic moiety of l-styrylalanine, l-1a, and the phenyl ring of the residue F137 in the aromatic binding region of the active site. Replacing the residue F137 by smaller hydrophobic residues resulted in a small mutant library (F137X-PcPAL, X being V, A, and G), from which F137V-PcPAL could transform l-styrylalanine with comparable activity to that of the wt-PcPAL with l-Phe. Furthermore, F137V-PcPAL showed superior catalytic efficiency in the ammonia elimination reaction of several racemic styrylalanine derivatives (rac-1a-d) providing access to d-1a-d by kinetic resolution, even though the d-enantiomers proved to be reversible inhibitors. The enhanced catalytic efficiency of F137V-PcPAL towards racemic styrylalanines rac-1a-d could be rationalized by molecular modeling, indicating the more relaxed enzyme-substrate complexes and the promotion of conformations with higher catalytic activities as the main reasons. Unfortunately, ammonia addition onto the corresponding styrylacrylates 2a-d failed with both wt-PcPAL and F137V-PcPAL. The low equilibrium constant of the ammonia addition, the poor ligand binding affinities of 2a-d, and the non-productive binding states of the unsaturated ligands 2a-d within the active sites of either wt-PcPAL or F137V-PcPAL - as indicated by molecular modeling - might be responsible for the inactivity of the PcPAL variants in the reverse reaction. Modeling predicted that the F137V mutation is beneficial for the KRs of 4-fluoro-, 4-cyano- and 4-bromostyrylalanines, but non-effective for the KR process of 4-trifluoromethylstyrylalanine.

  9. Fast detection of covert visuospatial attention using hybrid N2pc and SSVEP features

    NASA Astrophysics Data System (ADS)

    Xu, Minpeng; Wang, Yijun; Nakanishi, Masaki; Wang, Yu-Te; Qi, Hongzhi; Jung, Tzyy-Ping; Ming, Dong

    2016-12-01

    Objective. Detecting the shift of covert visuospatial attention (CVSA) is vital for gaze-independent brain-computer interfaces (BCIs), which might be the only communication approach for severely disabled patients who cannot move their eyes. Although previous studies had demonstrated that it is feasible to use CVSA-related electroencephalography (EEG) features to control a BCI system, the communication speed remains very low. This study aims to improve the speed and accuracy of CVSA detection by fusing EEG features of N2pc and steady-state visual evoked potential (SSVEP). Approach. A new paradigm was designed to code the left and right CVSA with the N2pc and SSVEP features, which were then decoded by a classification strategy based on canonical correlation analysis. Eleven subjects were recruited to perform an offline experiment in this study. Temporal waves, amplitudes, and topographies for brain responses related to N2pc and SSVEP were analyzed. The classification accuracy derived from the hybrid EEG features (SSVEP and N2pc) was compared with those using the single EEG features (SSVEP or N2pc). Main results. The N2pc could be significantly enhanced under certain conditions of SSVEP modulations. The hybrid EEG features achieved significantly higher accuracy than the single features. It obtained an average accuracy of 72.9% by using a data length of 400 ms after the attention shift. Moreover, the average accuracy reached ~80% (peak values above 90%) when using 2 s long data. Significance. The results indicate that the combination of N2pc and SSVEP is effective for fast detection of CVSA. The proposed method could be a promising approach for implementing a gaze-independent BCI.
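
    The SSVEP half of such a decoder is commonly scored with canonical correlation against sine-cosine reference templates; a sketch of that standard technique follows (illustrative only, not the authors' exact pipeline, which also folds in the N2pc features):

      # Standard CCA-based SSVEP scoring sketch.
      import numpy as np
      from sklearn.cross_decomposition import CCA

      def cca_score(eeg, freq, fs, n_harm=2):
          """Maximum canonical correlation between an EEG segment
          (samples x channels) and sin/cos references at freq Hz."""
          t = np.arange(eeg.shape[0]) / fs
          ref = np.column_stack([f(2 * np.pi * (h + 1) * freq * t)
                                 for h in range(n_harm)
                                 for f in (np.sin, np.cos)])
          u, v = CCA(n_components=1).fit_transform(eeg, ref)
          return abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1])

      def classify_side(eeg, freqs, fs):
          # Attended side = tagging frequency with the largest score.
          return int(np.argmax([cca_score(eeg, f, fs) for f in freqs]))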

  10. Interactive RadioEpidemiological Program (IREP): a web-based tool for estimating probability of causation/assigned share of radiogenic cancers.

    PubMed

    Kocher, David C; Apostoaei, A Iulian; Henshaw, Russell W; Hoffman, F Owen; Schubauer-Berigan, Mary K; Stancescu, Daniel O; Thomas, Brian A; Trabalka, John R; Gilbert, Ethel S; Land, Charles E

    2008-07-01

    The Interactive RadioEpidemiological Program (IREP) is a Web-based, interactive computer code that is used to estimate the probability that a given cancer in an individual was induced by given exposures to ionizing radiation. IREP was developed by a Working Group of the National Cancer Institute and Centers for Disease Control and Prevention, and was adopted and modified by the National Institute for Occupational Safety and Health (NIOSH) for use in adjudicating claims for compensation for cancer under the Energy Employees Occupational Illness Compensation Program Act of 2000. In this paper, the quantity calculated in IREP is referred to as "probability of causation/assigned share" (PC/AS). PC/AS for a given cancer in an individual is calculated on the basis of an estimate of the excess relative risk (ERR) associated with given radiation exposures and the relationship PC/AS = ERR/(ERR + 1). IREP accounts for uncertainties in calculating probability distributions of ERR and PC/AS. An accounting of uncertainty is necessary when decisions about granting claims for compensation for cancer are made on the basis of an estimate of the upper 99% credibility limit of PC/AS to give claimants the "benefit of the doubt." This paper discusses models and methods incorporated in IREP to estimate ERR and PC/AS. Approaches to accounting for uncertainty are emphasized, and limitations of IREP are discussed. Although IREP is intended to provide unbiased estimates of ERR and PC/AS and their uncertainties to represent the current state of knowledge, there are situations described in this paper in which NIOSH, as a matter of policy, makes assumptions that give a higher estimate of the upper 99% credibility limit of PC/AS than other plausible alternatives and, thus, are more favorable to claimants.
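
    To make the relationship concrete, the snippet below propagates an uncertain ERR through PC/AS = ERR/(ERR + 1) by Monte Carlo sampling and reads off the upper 99% credibility limit used in adjudication. The lognormal form and its parameters are illustrative assumptions, not IREP's actual uncertainty model.

        # Monte Carlo propagation of ERR uncertainty to PC/AS (illustrative only).
        import numpy as np

        rng = np.random.default_rng(0)
        err = rng.lognormal(mean=np.log(0.5), sigma=0.8, size=100_000)  # hypothetical ERR
        pc_as = err / (1.0 + err)  # PC/AS = ERR/(ERR + 1)

        print("median PC/AS:", np.median(pc_as))
        print("upper 99% credibility limit:", np.percentile(pc_as, 99))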

  11. Synthesis, X-ray structure, magnetic resonance, and DFT analysis of a soluble copper(II) phthalocyanine lacking C-H bonds.

    PubMed

    Moons, Hans; Łapok, Łukasz; Loas, Andrei; Van Doorslaer, Sabine; Gorun, Sergiu M

    2010-10-04

    The synthesis, crystal structure, and electronic properties of a perfluoro-isopropyl-substituted perfluorophthalocyanine bearing a copper atom in the central cavity (F(64)PcCu) are reported. While most halogenated phthalocyanines do not exhibit long-range order sufficient to form large single crystals, this is not the case for F(64)PcCu. Its crystal structure was determined by X-ray analysis and linked to the electronic properties determined by electron paramagnetic resonance (EPR). The findings are corroborated by density functional theory (DFT) computations, which agree well with experiment. X-band continuous-wave EPR spectra of undiluted F(64)PcCu powder indicate the existence of isolated metal centers. The electron-withdrawing effect of the perfluoroalkyl (R(f)) groups significantly enhances the complex's solubility in organic solvents such as alcohols, in part via their axial coordination. This coordination is confirmed by X-band (1)H HYSCORE experiments and is also seen in the solid state via the X-ray structure. Detailed X-band CW-EPR, X-band Davies and Mims ENDOR, and W-band electron spin-echo-detected EPR studies of F(64)PcCu in ethanol allow the determination of the principal g values and the hyperfine couplings of the metal, nitrogen, and fluorine nuclei. Comparison of the g and metal hyperfine values of F(64)PcCu and other PcCu complexes in different matrices reveals a dominant effect of the matrix on these EPR parameters, while variations in the ring substituents have only a secondary effect. The relatively strong axial coordination occurs despite the diminished covalency of the C-N bonds and potentially weakening Jahn-Teller effects. Surprisingly, natural-abundance (13)C HYSCORE signals could be observed for a frozen ethanol solution of F(64)PcCu. The (13)C nuclei contributing to the HYSCORE spectra could be identified as the pyrrole carbons by means of DFT. Finally, (19)F ENDOR and easily observable paramagnetic NMR were found to relate well to the DFT computations, revealing negligible isotropic hyperfine (Fermi contact) contributions. The single-site isolation in solution and solid state and the relatively strong coordination of axial ligands, both attributed to the introduction of R(f) groups, are features important for materials and catalyst design.

  12. Structural Characterization of Unsaturated Phosphatidylcholines Using Traveling Wave Ion Mobility Spectrometry

    PubMed Central

    Kim, Hugh I.; Kim, Hyungjun; Pang, Eric S.; Ryu, Ernest K.; Beegle, Luther W.; Loo, Joseph A.; Goddard, William A.; Kanik, Isik

    2009-01-01

    A number of phosphatidylcholine (PC) cations spanning a mass range of 400 to 1000 Da are investigated using electrospray ionization mass spectrometry coupled with traveling wave ion mobility spectrometry (TWIMS). A high correlation between mass and mobility is demonstrated for saturated phosphatidylcholine cations in N2. A significant deviation from this mass-mobility correlation line is observed for unsaturated PC cations. We found that the first double bond in the acyl chain causes a 5% reduction in drift time, and that the drift time is reduced at a rate of ~1% for each additional double bond. Theoretical collision cross sections of PC cations exhibit good agreement with experimentally evaluated values. Collision cross sections are determined using the recently derived relationship between mobility and drift time in the TWIMS stacked ring ion guide (SRIG) and compared to collision cross sections estimated using an empirical calibration method. Computational analysis was performed using the modified trajectory (TJ) method with nonspherical N2 molecules as the drift gas. The difference between estimated and theoretical collision cross sections of PC cations is related to the sensitivity of the PC cation collision cross sections to the details of the ion-neutral interactions. The origin of the observed correlation and deviation between mass and mobility of PC cations is discussed in terms of the structural rigidity of these molecules using molecular dynamics simulations. PMID:19764704
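
    For context, empirical TWIMS calibration is commonly done by fitting a power law between drift times and the known cross sections of calibrant ions. The sketch below shows that pattern with made-up numbers; real calibrations also correct drift times and cross sections for charge state and reduced mass, which is omitted here.

        # Power-law TWIMS calibration sketch; all numbers are placeholders.
        import numpy as np

        drift_times = np.array([3.2, 4.1, 5.0, 6.3])         # ms, hypothetical calibrants
        known_ccs = np.array([205.0, 229.0, 254.0, 280.0])   # A^2, hypothetical

        # Fit ln(CCS) = ln(A) + B*ln(td), i.e. CCS = A * td**B
        B, lnA = np.polyfit(np.log(drift_times), np.log(known_ccs), 1)
        A = np.exp(lnA)

        def ccs_from_drift(td_ms):
            return A * td_ms ** B

        print(ccs_from_drift(4.6))  # estimated CCS for an analyte drift time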

  13. Unwrapping eddy current compensation: improved compensation of eddy current induced baseline shifts in high-resolution phase-contrast MRI at 9.4 Tesla.

    PubMed

    Espe, Emil K S; Zhang, Lili; Sjaastad, Ivar

    2014-10-01

    Phase-contrast MRI (PC-MRI) is a versatile tool for evaluating in vivo motion, but it is sensitive to eddy-current-induced phase offsets, which cause errors in the measured velocities. In high-resolution PC-MRI, these offsets can be large enough to cause wrapping in the baseline phase, rendering conventional eddy current compensation (ECC) inadequate. The purpose of this study was to develop an improved ECC technique (unwrapping ECC) able to handle baseline phase discontinuities. Baseline phase discontinuities are unwrapped by minimizing the spatiotemporal standard deviation of the static-tissue phase. Computer simulations were used to demonstrate the theoretical foundation of the proposed technique. The presence of baseline wrapping was confirmed in high-resolution myocardial PC-MRI of a normal rat heart at 9.4 Tesla (T), and the performance of unwrapping ECC was compared with conventional ECC. Areas of phase wrapping in static regions were clearly evident in high-resolution PC-MRI. The proposed technique successfully eliminated discontinuities in the baseline and resulted in significantly better ECC than the conventional approach. We report the occurrence of baseline phase wrapping in PC-MRI and provide an improved ECC technique capable of handling its presence. Unwrapping ECC offers improved correction of eddy-current-induced baseline shifts in high-resolution PC-MRI.
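
    A minimal sketch of the underlying idea, assuming the baseline phase of static tissue is available as a (time, y, x) array: each sample is shifted by a whole multiple of 2*pi toward a robust reference level, which collapses the spatiotemporal spread before the usual static-tissue offset estimation. The reference choice and the per-frame fit are simplified assumptions, not the authors' implementation.

        # Sketch of "unwrapping ECC": remove 2*pi jumps, then estimate offsets.
        import numpy as np

        def unwrap_static_phase(phase, static_mask):
            """phase: (T, Y, X) baseline phase in radians; static_mask: (Y, X) bool."""
            ref = np.median(phase[:, static_mask])       # robust reference level
            k = np.round((ref - phase) / (2 * np.pi))    # whole-cycle shifts
            return phase + 2 * np.pi * k                 # pull wraps toward ref

        def baseline_offset(unwrapped, static_mask):
            # Conventional ECC step: per-frame baseline from static tissue
            # (a plain mean here; a spatial polynomial fit is more typical).
            return unwrapped[:, static_mask].mean(axis=1)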

  14. COMPPAP - COMPOSITE PLATE BUCKLING ANALYSIS PROGRAM (IBM PC VERSION)

    NASA Technical Reports Server (NTRS)

    Smith, J. P.

    1994-01-01

    The Composite Plate Buckling Analysis Program (COMPPAP) was written to help engineers determine buckling loads of orthotropic (or isotropic) irregularly shaped plates without requiring hand calculations from design curves or extensive finite element modeling. COMPPAP is a one-element finite element program that utilizes high-order displacement functions. The high order of the displacement functions enables the user to produce results more accurate than traditional h-finite elements. This program uses these high-order displacement functions to perform a plane stress analysis of a general plate followed by a buckling calculation based on the stresses found in the plane stress solution. The current version assumes a flat plate (constant thickness) subject to a constant edge load (normal or shear) on one or more edges. COMPPAP uses the power method to find the eigenvalues of the buckling problem. The power method provides an efficient solution when only one eigenvalue is desired. Once the eigenvalue is found, the eigenvector, which corresponds to the plate buckling mode shape, results as a by-product. A positive feature of the power method is that the dominant eigenvalue is the first found, which in this case is the plate buckling load. The reported eigenvalue expresses a load factor to induce plate buckling. COMPPAP is written in ANSI FORTRAN 77. Two machine versions are available from COSMIC: a PC version (MSC-22428), which is for IBM PC 386 series and higher computers and compatibles running MS-DOS; and a UNIX version (MSC-22286). The distribution medium for both machine versions includes source code for both single and double precision versions of COMPPAP. The PC version includes source code which has been optimized for implementation within DOS memory constraints as well as sample executables for both the single and double precision versions of COMPPAP. The double precision versions of COMPPAP have been successfully implemented on an IBM PC 386 compatible running MS-DOS, a Sun4 series computer running SunOS, an HP-9000 series computer running HP-UX, and a CRAY X-MP series computer running UNICOS. COMPPAP requires 1 MB of RAM and the BLAS and LINPACK math libraries, which are included on the distribution medium. The COMPPAP documentation provides instructions for using the commercial post-processing package PATRAN for graphical interpretation of COMPPAP output. The UNIX version includes two electronic versions of the documentation: one in LaTeX format and one in PostScript format. The standard distribution medium for the PC version (MSC-22428) is a 5.25 inch 1.2 MB MS-DOS format diskette. The standard distribution medium for the UNIX version (MSC-22286) is a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. For the UNIX version, alternate distribution media and formats are available upon request. COMPPAP was developed in 1992.
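
    For illustration, the generic power iteration that COMPPAP's eigenvalue step relies on is sketched below for a dense matrix; the dominant eigenvalue it returns corresponds, in the buckling problem, to the critical load factor, and the final vector to the mode shape. This is a simplified stand-in written in Python, not the program's FORTRAN implementation.

        # Power method: dominant eigenpair by repeated multiplication.
        import numpy as np

        def power_method(A, tol=1e-10, max_iter=10_000):
            x = np.random.default_rng(1).standard_normal(A.shape[0])
            lam = 0.0
            for _ in range(max_iter):
                y = A @ x
                lam_new = np.linalg.norm(y)
                x = y / lam_new
                if abs(lam_new - lam) < tol * max(1.0, lam_new):
                    break
                lam = lam_new
            # Rayleigh quotient recovers the sign of the eigenvalue
            return (x @ A @ x) / (x @ x), x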

  15. Dynamical calculations for RHEED intensity oscillations

    NASA Astrophysics Data System (ADS)

    Daniluk, Andrzej

    2005-03-01

    A practical computing algorithm working in real time has been developed for calculating the reflection high-energy electron diffraction from the molecular beam epitaxy growing surface. The calculations are based on the use of a dynamical diffraction theory in which the electrons are taken to be diffracted by a potential which is periodic in the dimension perpendicular to the surface. The results of the calculations are presented in the form of rocking curves to illustrate how the diffracted beam intensities depend on the glancing angle of the incident beam.

    Program summary
    Title of program: RHEED
    Catalogue identifier: ADUY
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADUY
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Computer for which the program is designed and others on which it has been tested: Pentium-based PC
    Operating systems or monitors under which the program has been tested: Windows 9x, XP, NT, Linux
    Programming language used: Borland C++
    Memory required to execute with typical data: more than 1 MB
    Number of bits in a word: 64 bits
    Number of processors used: 1
    Distribution format: tar.gz
    Number of lines in distributed program, including test data, etc.: 982
    Number of bytes in distributed program, including test data, etc.: 126 051
    Nature of physical problem: Reflection high-energy electron diffraction (RHEED) is a very useful technique for studying growth and surface analysis of thin epitaxial structures prepared by molecular beam epitaxy (MBE). Nowadays, RHEED is used in many laboratories all over the world where researchers deal with the growth of materials by MBE. The RHEED technique can reveal, almost instantaneously, changes either in the coverage of the sample surface by adsorbates or in the surface structure of a thin film. In most cases the interpretation of experimental results is based on the use of dynamical diffraction approaches. Such approaches are quite useful in qualitative and quantitative analysis of RHEED experimental data.
    Method of solution: RHEED intensities are calculated within the framework of the general matrix formulation of Peng and Whelan [Surf. Sci. Lett. 238 (1990) L446] under the one-beam condition. The dynamical diffraction calculations presented in this paper utilize the systematic reflection case in RHEED, in which the atomic potential in the planes parallel to the surface is projected onto the surface normal, so that the results are insensitive to the atomic arrangement in the layers parallel to the surface. This model is a systematic approximation for calculating dynamical RHEED intensities; only a layer coverage factor for the nth layer is taken into account in calculating the interaction potential between the fast electron and that layer.
    Typical running time: The typical running time is machine and user-parameter dependent.
    Unusual features of the program: The program is presented in the form of a basic unit RHEED.cpp and should be compiled using C++ compilers, including C++ Builder and g++.

  16. Evaluation of FPGA to PC feedback loop

    NASA Astrophysics Data System (ADS)

    Linczuk, Pawel; Zabolotny, Wojciech M.; Wojenski, Andrzej; Krawczyk, Rafal D.; Pozniak, Krzysztof T.; Chernyshova, Maryna; Czarski, Tomasz; Gaska, Michal; Kasprowicz, Grzegorz; Kowalska-Strzeciwilk, Ewa; Malinowski, Karol

    2017-08-01

    The paper presents an evaluation study of the performance of a data transmission subsystem which can be used in High Energy Physics (HEP) and other High-Performance Computing (HPC) systems. The test environment consisted of a Xilinx Artix-7 FPGA and a server-grade PC connected via a PCIe x4 Gen2 bus. The DMA engine was based on the Xilinx DMA for PCI Express Subsystem controlled by a modified Xilinx XDMA kernel driver. The research is focused on the influence of the system configuration on the achievable throughput and latency of data transfer.
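
    As a rough illustration of this kind of measurement, the sketch below times bulk reads from an XDMA card-to-host character device to estimate throughput. The device path follows the stock Xilinx XDMA driver's naming convention and is an assumption; the paper used a modified driver, so names, transfer sizes, and methodology may differ.

        # Crude C2H throughput probe; /dev/xdma0_c2h_0 is the stock driver's name.
        import os, time

        DEV = "/dev/xdma0_c2h_0"    # assumed card-to-host channel 0
        CHUNK = 4 * 1024 * 1024     # 4 MiB per read
        N = 256

        fd = os.open(DEV, os.O_RDONLY)
        t0 = time.perf_counter()
        total = 0
        for _ in range(N):
            total += len(os.read(fd, CHUNK))
        dt = time.perf_counter() - t0
        os.close(fd)
        print(f"{total / dt / 1e6:.1f} MB/s over {total} bytes")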

  17. PC Scene Generation

    NASA Astrophysics Data System (ADS)

    Buford, James A., Jr.; Cosby, David; Bunfield, Dennis H.; Mayhall, Anthony J.; Trimble, Darian E.

    2007-04-01

    AMRDEC has successfully tested hardware and software for Real-Time Scene Generation for IR and SAL Sensors on COTS PC based hardware and video cards. AMRDEC personnel worked with nVidia and Concurrent Computer Corporation to develop a Scene Generation system capable of frame rates of at least 120Hz while frame locked to an external source (such as a missile seeker) with no dropped frames. Latency measurements and image validation were performed using COTS and in-house developed hardware and software. Software for the Scene Generation system was developed using OpenSceneGraph.

  18. Radar Ocean Wave Spectrometer (ROWS) preprocessing program (PREROWS2.EXE). User's manual and program description

    NASA Technical Reports Server (NTRS)

    Vaughn, Charles R.

    1993-01-01

    This Technical Memorandum is a user's manual with additional program documentation for the computer program PREROWS2.EXE. PREROWS2 works with data collected by the Radar Ocean Wave Spectrometer (ROWS), an active remote sensor. The original ROWS data acquisition subsystem was replaced with a PC in 1990. PREROWS2.EXE is a compiled QuickBasic 4.5 program that unpacks the recorded data, displays various variables, and provides for copying blocks of data from the original 8mm tape to a PC file.

  19. PC based graphic display real-time particle beam uniformity

    NASA Technical Reports Server (NTRS)

    Huebner, M. A.; Malone, C. J.; Smith, L. S.; Soli, G. A.

    1989-01-01

    A technique has been developed to support the study of the effects of cosmic rays on integrated circuits. The system is designed to determine the particle distribution across the surface of an integrated circuit accurately while the circuit is bombarded by a particle beam. The system uses photomultiplier tubes, an octal discriminator, a computer-controlled NIM quad counter, and an IBM PC. It provides real-time operator feedback for fast beam tuning and monitors momentary fluctuations in the particle beam. The hardware, software, and system performance are described.

  20. Wrist display concept demonstration based on 2-in. color AMOLED

    NASA Astrophysics Data System (ADS)

    Meyer, Frederick M.; Longo, Sam J.; Hopper, Darrel G.

    2004-09-01

    The wrist watch needs an upgrade. Recent advances in optoelectronics, microelectronics, and communication theory have established a technology base that now makes the multimedia Dick Tracy watch attainable during the next decade. As a first step towards stuffing the functionality of an entire personal computer (PC) and television receiver under a watch face, we have set a goal of providing wrist video capability to warfighters. Commercial sector work on the wrist form factor already includes all the functionality of a personal digital assistant (PDA) and a full PC operating system. Our strategy is to leverage these commercial developments. In this paper we describe our use of a 2.2 in. diagonal color active-matrix organic light-emitting diode (AMOLED) device as a wrist-mounted display (WMD) to present either full motion video or computer generated graphical image formats.

  1. WE-F-16A-01: Commissioning and Clinical Use of PC-ISO for Customized, 3D Printed, Gynecological Brachytherapy Applicators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cunha, J; Sethi, R; Mellis, K

    Purpose: (1) Evaluate the safety and radiation attenuation properties of PC-ISO, a bio-compatible, sterilizable 3D printing material by Stratasys, (2) establish a method for commissioning customized multi- and single-use 3D printed applicators, (3) report on the use of customized vaginal cylinders used to treat a series of serous endometrial cancer patients. Methods: A custom film dosimetry apparatus was designed to hold a Gafchromic film segment between two blocks of PC-ISO and 3D-printed using a Fortus 400mc (StrataSys). A dose plan was computed using 13 dwell positions at 2.5 mm spacing and normalized to 1500 cGy at 1 cm. Film exposure was compared to control tests in only air and only water. The average Hounsfield Unit (HU) was computed and used to verify water equivalency. For the clinical use cases, the physician specifies the dimensions and geometry of a custom applicator, from which a CAD model is designed and printed. Results: The doses measured from the PC-ISO Gafchromic film test were within 1% of the dose measured in only water between 1 cm and 6 cm from the channel. Doses increased 7-4% measured in only air. The HU range was 11-43. The applicators were sterilized using the Sterrad system multiple times without damage. As of submission, 3 unique cylinders have been designed, printed, and used in the clinic. A standardizable workflow for commissioning custom 3D printed applicators was codified and will be reported. Conclusions: Quality assurance (QA) evaluation of the PC-ISO 3D-printing material showed that PC-ISO is a suitable material for a gynecological brachytherapy vaginal cylinder in a clinical setting. With the material commissioning completed, if the physician determines that a customized applicator would result in a better treatment, the design can be fabricated with limited additional QA necessary. Although this study was specific to PC-ISO, the same setup can be used to evaluate other 3D-printing materials.

  2. Upper extremity musculoskeletal discomfort among occupational notebook personal computer users: work interference, associations with risk factors and the use of notebook computer stand and docking station.

    PubMed

    Erdinc, Oguzhan

    2011-01-01

    This study explored the prevalence and work interference (WI) of upper extremity musculoskeletal discomfort (UEMSD) and investigated the associations of individual and work-related risk factors and using a notebook stand or docking station with UEMSD among symptomatic occupational notebook personal computer (PC) users. The participant group included 45 Turkish occupational notebook PC users. The study used self-reports of participants. The Turkish version of the Cornell Musculoskeletal Discomfort Questionnaire (T-CMDQ) was used to collect symptom data. UEMSD prevailed mostly in the neck, the upper back, and the lower back with prevalence rates of 77.8%, 73.3%, and 60.0% respectively, and with WI rates of 28.9%, 24.4%, and 26.7% respectively. Aggregated results showed that 44% of participants reported WI due to UEMSD in at least one body region. Significant risk factors were: being female, being aged <31 years, having computer work experience <10 years, and physical discomfort during computer use. UEMSD prevalence and WI rates were considerable in the neck, the upper back, and the lower back. Significant associations between certain risk factors and UEMSD were identified, but no association was found between using notebook stand and docking station and UEMSD among participants.

  3. Diagnostic Accuracy of the Primary Care Screener for Affective Disorder (PC-SAD) in Primary Care.

    PubMed

    Picardi, Angelo; Adler, D A; Rogers, W H; Lega, I; Zerella, M P; Matteucci, G; Tarsitani, L; Caredda, M; Gigantesco, A; Biondi, M

    2013-01-01

    Depression often goes unrecognised and untreated in non-psychiatric medical settings. Screening has recently gained acceptance as a first step towards improving depression recognition and management. The Primary Care Screener for Affective Disorders (PC-SAD) is a self-administered questionnaire to screen for Major Depressive Disorder (MDD) and Dysthymic Disorder (Dys), with a sophisticated scoring algorithm that confers several advantages. This study tested its performance against a 'gold standard' diagnostic interview in primary care. A total of 416 adults attending 13 urban general internal medicine primary care practices completed the PC-SAD. Of the 409 who returned a valid PC-SAD, all those scoring positive (N=151) and a random sample (N=106) of those scoring negative were selected for a 3-month telephone follow-up assessment including the administration of the Structured Clinical Interview for DSM-IV-TR Axis I Disorders (SCID-I) by a psychiatrist who was masked to PC-SAD results. Most selected patients (N=212) took part in the follow-up assessment. After adjustment for partial verification bias, the sensitivity, specificity, and positive and negative predictive values for MDD were 90%, 83%, 51%, and 98%. For Dys, the corresponding figures were 78%, 79%, 8%, and 88%. While some study limitations suggest caution in interpreting our results, this study corroborated the diagnostic validity of the PC-SAD, although the low PPV may limit its usefulness with regard to Dys. Given its good psychometric properties and short average administration time, the PC-SAD might be the screening instrument of choice in settings where the technology for computer-automated scoring is available.
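
    As a worked illustration of how such screening figures behave, the snippet below combines the reported MDD sensitivity and specificity with an assumed prevalence via Bayes' rule; the prevalence value is a placeholder, not a figure from the study.

        # Predictive values from sensitivity/specificity and prevalence.
        sens, spec = 0.90, 0.83    # MDD figures from the abstract
        prev = 0.10                # assumed prevalence (placeholder)

        ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
        npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
        print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")  # low prevalence pulls PPV down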

  4. Computer Based Testing Using "Digital Ink": Participatory Design of a Tablet PC Based Assessment Application for Secondary Education

    ERIC Educational Resources Information Center

    Siozos, Panagiotis; Palaigeorgiou, George; Triantafyllakos, George; Despotakis, Theofanis

    2009-01-01

    In this paper, we identify key challenges faced by computer-based assessment (CBA) in secondary education and we put forward a framework of design considerations: design with the students and teachers, select the most appropriate media platform and plan an evolution rather than a revolution of prior practices. We present the CBA application…

  5. Large-Scale 1:1 Computing Initiatives: An Open Access Database

    ERIC Educational Resources Information Center

    Richardson, Jayson W.; McLeod, Scott; Flora, Kevin; Sauers, Nick J.; Kannan, Sathiamoorthy; Sincar, Mehmet

    2013-01-01

    This article details the spread and scope of large-scale 1:1 computing initiatives around the world. What follows is a review of the existing literature around 1:1 programs followed by a description of the large-scale 1:1 database. Main findings include: 1) the XO and the Classmate PC dominate large-scale 1:1 initiatives; 2) if professional…

  6. National Software Reference Library (NSRL)

    National Institute of Standards and Technology Data Gateway

    National Software Reference Library (NSRL) (PC database for purchase)   A collaboration of the National Institute of Standards and Technology (NIST), the National Institute of Justice (NIJ), the Federal Bureau of Investigation (FBI), the Defense Computer Forensics Laboratory (DCFL), the U.S. Customs Service, software vendors, and state and local law enforcement organizations, the NSRL is a tool to assist in fighting crime involving computers.

  7. What to Use for Mathematics in High School: PC, Tablet or Graphing Calculator?

    ERIC Educational Resources Information Center

    Korenova, Lilla

    2015-01-01

    Digital technologies have made their way not only into our everyday lives, but nowadays they are also commonly used in schools. Computers, tablets and smartphones are now part of the lives of this new generation of students, so it's only natural that they are used for educational purposes as well. Besides the interactive whiteboards, computers and…

  8. Using a Computer Microphone Port to Study Circular Motion: Proposal of a Secondary School Experiment

    ERIC Educational Resources Information Center

    Soares, A. A.; Borcsik, F. S.

    2016-01-01

    In this work we present an inexpensive experiment proposal to study the kinematics of uniform circular motion in a secondary school. We used a PC sound card to connect a homemade simple sensor to a computer and used the free sound analysis software "Audacity" to record experimental data. We obtained quite good results even in comparison…

  9. Effects of computer monitor-emitted radiation on oxidant/antioxidant balance in cornea and lens from rats

    PubMed Central

    Namuslu, Mehmet; Devrim, Erdinç; Durak, İlker

    2009-01-01

    Purpose This study aims to investigate the possible effects of computer monitor-emitted radiation on the oxidant/antioxidant balance in corneal and lens tissues and to observe any protective effects of vitamin C (vit C). Methods Four groups (PC monitor, PC monitor plus vitamin C, vitamin C, and control) each consisting of ten Wistar rats were studied. The study lasted for three weeks. Vitamin C was administered in oral doses of 250 mg/kg/day. The computer and computer plus vitamin C groups were exposed to computer monitors while the other groups were not. Malondialdehyde (MDA) levels and superoxide dismutase (SOD), glutathione peroxidase (GSH-Px), and catalase (CAT) activities were measured in corneal and lens tissues of the rats. Results In corneal tissue, MDA levels and CAT activity were found to increase in the computer group compared with the control group. In the computer plus vitamin C group, MDA level, SOD, and GSH-Px activities were higher and CAT activity lower than those in the computer and control groups. Regarding lens tissue, in the computer group, MDA levels and GSH-Px activity were found to increase, as compared to the control and computer plus vitamin C groups, and SOD activity was higher than that of the control group. In the computer plus vitamin C group, SOD activity was found to be higher and CAT activity to be lower than those in the control group. Conclusion The results of this study suggest that computer-monitor radiation leads to oxidative stress in the corneal and lens tissues, and that vitamin C may prevent oxidative effects in the lens. PMID:19960068

  10. Personal computer wallpaper user segmentation based on Sasang typology.

    PubMed

    Lee, Joung-Youn

    2015-03-01

    As human-computer interaction (HCI) is becoming a significant part of all human life, the user's emotional satisfaction is an important factor to consider. These changes have been pointed out by several researchers who claim that a user's personality may become the most important factor in the design. The objective of this study is to examine Sasang typology as a user segmentation method in the area of HCI design. To test HCI usage patterns in terms of the user's personality and temperament, this study focuses on personal computer (PC) or laptop wallpaper settings. One hundred and four Facebook friends completed a QSCC II survey assessing Sasang typology type and sent a captured image of their personal PC or laptop wallpaper. To classify the computer usage pattern, folder organization and wallpaper settings were investigated. The research showed that the So-Yang type organized folders and icons in an orderly manner, whereas the So-Eum type did not organize folders and icons at all. With regard to wallpaper settings, the So-Yang type used the default wallpaper provided by the PC, but the So-Eum type used landscape images. Because the So-Yang type was reported to be emotionally stable and extroverted, they tended to be highly concerned with online privacy compared with the So-Eum type. The So-Eum type uses many landscape images as the background image, which demonstrates So-Eum's low emotional stability, anxiety, and desire to obtain analogy throughout the computer screen. Also, So-Yang's wallpapers display family or peripheral figures, and this is due to the sociability that extroverted So-Yang types possess. By proposing Sasang typology as a factor influencing HCI usage patterns in this study, it can be used to predict the user's HCI experience, or to suggest a native design methodology that can actively cope with the user's psychological environment.

  11. Software List.

    ERIC Educational Resources Information Center

    Computers in Chemical Education Newsletter, 1984

    1984-01-01

    Lists and briefly describes computer programs recently added to those currently available from Project SERAPHIM. Program name, subject, hardware, author, supplier, and current cost are provided in separate listings for Apple, Atari, Pet, VIC-20, TRS-80, and IBM-PC. (JN)

  12. Overcoming spatio-temporal limitations using dynamically scaled in vitro PC-MRI - A flow field comparison to true-scale computer simulations of idealized, stented and patient-specific left main bifurcations.

    PubMed

    Beier, Susann; Ormiston, John; Webster, Mark; Cater, John; Norris, Stuart; Medrano-Gracia, Pau; Young, Alistair; Gilbert, Kathleen; Cowan, Brett

    2016-08-01

    The majority of patients with angina or heart failure have coronary artery disease. Left main bifurcations are particularly susceptible to pathological narrowing. Flow is a major factor in atheroma development, but limitations in imaging technology such as spatio-temporal resolution, signal-to-noise ratio (SNRv), and imaging artefacts prevent in vivo investigations. Computational fluid dynamics (CFD) modelling is a common numerical approach to study flow, but it requires a cautious and rigorous application for meaningful results. Left main bifurcation angles of 40°, 80° and 110° were found to represent the spread of an atlas based on 100 computed tomography angiograms. Three left mains with these bifurcation angles were reconstructed with 1) idealized, 2) stented, and 3) patient-specific geometry. These were then scaled up approximately 7× and 3D printed as large phantoms. Their flow was reproduced using a blood-analogous, dynamically scaled steady flow circuit, enabling in vitro phase-contrast magnetic resonance (PC-MRI) measurements. After threshold segmentation, the image data were registered to true-scale CFD of the same coronary geometry using a coherent point drift algorithm, yielding a small covariance error (σ² < 5.8×10⁻⁴). Natural-neighbour interpolation of the CFD data onto the PC-MRI grid enabled direct flow field comparison, showing very good agreement in magnitude (error 2-12%) and directional changes (r² = 0.87-0.91), and stent-induced flow alterations were measurable for the first time. PC-MRI over-estimated velocities close to the wall, possibly due to partial voluming. Bifurcation shape determined the development of slow flow regions, which created lower SNRv regions and increased discrepancies. These can likely be minimised in future by testing different similarity parameters to reduce acquisition error and improve correlation further. It was demonstrated that in vitro large-phantom acquisition correlates with true-scale coronary flow simulations when dynamically scaled, and thus can overcome current PC-MRI spatio-temporal limitations. This novel method enables experimental assessment of stent-induced flow alterations, and in future may elevate CFD coronary flow simulations by providing sophisticated boundary conditions, and enable investigations of stenosis phantoms.
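
    A minimal sketch of the comparison step described above, with scipy's linear barycentric interpolation standing in for the natural-neighbour scheme and placeholder arrays throughout; it illustrates the idea, not the authors' pipeline.

        # Interpolate scattered CFD velocities onto the PC-MRI grid, then score.
        import numpy as np
        from scipy.interpolate import griddata
        from scipy.stats import pearsonr

        def compare_fields(cfd_xyz, cfd_v, mri_xyz, mri_v):
            """cfd_xyz: (N, 3) node coords; cfd_v, mri_v: velocity magnitudes."""
            cfd_on_mri = griddata(cfd_xyz, cfd_v, mri_xyz, method="linear")
            ok = ~np.isnan(cfd_on_mri)          # points inside the CFD hull
            rel_err = np.abs(cfd_on_mri[ok] - mri_v[ok]) / (np.abs(mri_v[ok]) + 1e-9)
            r, _ = pearsonr(cfd_on_mri[ok], mri_v[ok])
            return rel_err.mean(), r ** 2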

  13. Finding Bounded Rational Equilibria. Part 2; Alternative Lagrangians and Uncountable Move Spaces

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.

    2004-01-01

    A long-running difficulty with conventional game theory has been how to modify it to accommodate the bounded rationality characterizing all real-world players. A recurring issue in statistical physics is how best to approximate joint probability distributions with decoupled (and therefore far more tractable) distributions. It has recently been shown that the same information theoretic mathematical structure, known as Probability Collectives (PC), underlies both issues. This relationship between statistical physics and game theory allows techniques and insights from the one field to be applied to the other. In particular, PC provides a formal model-independent definition of the degree of rationality of a player and of bounded rationality equilibria. This pair of papers extends previous work on PC by introducing new computational approaches to effectively find bounded rationality equilibria of common-interest (team) games.

  14. Assessing image quality of low-cost laparoscopic box trainers: options for residents training at home.

    PubMed

    Kiely, Daniel J; Stephanson, Kirk; Ross, Sue

    2011-10-01

    Low-cost laparoscopic box trainers built using home computers and webcams may provide residents with a useful tool for practice at home. This study set out to evaluate the image quality of low-cost laparoscopic box trainers compared with a commercially available model. Five low-cost laparoscopic box trainers including the components listed were compared in random order to one commercially available box trainer: A (high-definition USB 2.0 webcam, PC laptop), B (Firewire webcam, Mac laptop), C (high-definition USB 2.0 webcam, Mac laptop), D (standard USB webcam, PC desktop), E (Firewire webcam, PC desktop), and F (the TRLCD03 3-DMEd Standard Minimally Invasive Training System). Participants observed still image quality and performed a peg transfer task using each box trainer. Participants rated still image quality, image quality with motion, and whether the box trainer had sufficient image quality to be useful for training. Sixteen residents in obstetrics and gynecology took part in the study. The box trainers showing no statistically significant difference from the commercially available model were A, B, C, D, and E for still image quality; A for image quality with motion; and A and B for usefulness of the simulator based on image quality. The cost of the box trainers A-E is approximately $100 to $160 each, not including a computer or laparoscopic instruments. Laparoscopic box trainers built from a high-definition USB 2.0 webcam with a PC (box trainer A) or from a Firewire webcam with a Mac (box trainer B) provide image quality comparable with a commercial standard.

  15. Improved programs for DNA and protein sequence analysis on the IBM personal computer and other standard computer systems.

    PubMed Central

    Mount, D W; Conrad, B

    1986-01-01

    We have previously described programs for a variety of types of sequence analysis (1-4). These programs have now been integrated into a single package. They are written in the standard C programming language and run on virtually any computer system with a C compiler, such as the IBM/PC and other computers running under the MS/DOS and UNIX operating systems. The programs are widely distributed and may be obtained from the authors as described below. PMID:3753780

  16. Using Hand-Held Computers When Conducting National Security Background Interviews: Utility Test Results

    DTIC Science & Technology

    2010-05-01

    Tablet computers resemble ordinary notebook computers but can be set up as a flat display for handwriting by means of a stylus (digital pen). When used...PC accessories, and often strongly resemble notebook computers. However, all tablets can be set up as a flat display for handwriting by means of a...P3: “Depending on how the tablet handles the post-interview process, it would save time over paper.”  P4: “I hoped you were going to say that this

  17. Why not make a PC cluster of your own? 5. AppleSeed: A Parallel Macintosh Cluster for Scientific Computing

    NASA Astrophysics Data System (ADS)

    Decyk, Viktor K.; Dauger, Dean E.

    We have constructed a parallel cluster consisting of Apple Macintosh G4 computers running both Classic Mac OS as well as the Unix-based Mac OS X, and have achieved very good performance on numerically intensive, parallel plasma particle-in-cell simulations. Unlike other Unix-based clusters, no special expertise in operating systems is required to build and run the cluster. This enables us to move parallel computing from the realm of experts to the mainstream of computing.

  18. Software on diffractive optics and computer-generated holograms

    NASA Astrophysics Data System (ADS)

    Doskolovich, Leonid L.; Golub, Michael A.; Kazanskiy, Nikolay L.; Khramov, Alexander G.; Pavelyev, Vladimir S.; Seraphimovich, P. G.; Soifer, Victor A.; Volotovskiy, S. G.

    1995-01-01

    The 'Quick-DOE' software for an IBM PC-compatible computer is aimed at calculating the masks of diffractive optical elements (DOEs) and computer-generated holograms, at computer simulation of DOEs, and at executing a number of auxiliary functions. In particular, among the auxiliary functions are file format conversions, mask visualization on a display from a file, implementation of fast Fourier transforms, and the arranging and preparation of composite images for output on a photoplotter. The software is intended for use by opticians, DOE designers, and programmers dealing with the development of programs for DOE computation.

  19. CLIPS - C LANGUAGE INTEGRATED PRODUCTION SYSTEM (IBM PC VERSION)

    NASA Technical Reports Server (NTRS)

    Riley, G.

    1994-01-01

    The C Language Integrated Production System, CLIPS, is a shell for developing expert systems. It is designed to allow artificial intelligence research, development, and delivery on conventional computers. The primary design goals for CLIPS are portability, efficiency, and functionality. For these reasons, the program is written in C. CLIPS meets or outperforms most micro- and minicomputer based artificial intelligence tools. CLIPS is a forward chaining rule-based language. The program contains an inference engine and a language syntax that provide a framework for the construction of an expert system. It also includes tools for debugging an application. CLIPS is based on the Rete algorithm, which enables very efficient pattern matching. The collection of conditions and actions to be taken if the conditions are met is constructed into a rule network. As facts are asserted either prior to or during a session, CLIPS pattern-matches the number of fields. Wildcards and variables are supported for both single and multiple fields. CLIPS syntax allows the inclusion of externally defined functions (outside functions which are written in a language other than CLIPS). CLIPS itself can be embedded in a program such that the expert system is available as a simple subroutine call. Advanced features found in CLIPS version 4.3 include an integrated microEMACS editor, the ability to generate C source code from a CLIPS rule base to produce a dedicated executable, binary load and save capabilities for CLIPS rule bases, and the utility program CRSV (Cross-Reference, Style, and Verification) designed to facilitate the development and maintenance of large rule bases. Five machine versions are available. Each machine version includes the source and the executable for that machine. The UNIX version includes the source and binaries for IBM RS/6000, Sun3 series, and Sun4 series computers. The UNIX, DEC VAX, and DEC RISC Workstation versions are line oriented. The PC version and the Macintosh version each contain a windowing variant of CLIPS as well as the standard line oriented version. The mouse/window interface version for the PC works with a Microsoft compatible mouse or without a mouse. This window version uses the proprietary CURSES library for the PC, but a working executable of the window version is provided. The window oriented version for the Macintosh includes a version which uses a full Macintosh-style interface, including an integrated editor. This version allows the user to observe the changing fact base and rule activations in separate windows while a CLIPS program is executing. The IBM PC version is available bundled with CLIPSITS, The CLIPS Intelligent Tutoring System for a special combined price (COS-10025). The goal of CLIPSITS is to provide the student with a tool to practice the syntax and concepts covered in the CLIPS User's Guide. It attempts to provide expert diagnosis and advice during problem solving which is typically not available without an instructor. CLIPSITS is divided into 10 lessons which mirror the first 10 chapters of the CLIPS User's Guide. The program was developed for the IBM PC series with a hard disk. CLIPSITS is also available separately as MSC-21679. The CLIPS program is written in C for interactive execution and has been implemented on an IBM PC computer operating under DOS, a Macintosh and DEC VAX series computers operating under VMS or ULTRIX. 
The line oriented version should run on any computer system which supports a full (Kernighan and Ritchie) C compiler or the ANSI standard C language. CLIPS was developed in 1986 and Version 4.2 was released in July of 1988. Version 4.3 was released in June of 1989.

  20. CLIPS - C LANGUAGE INTEGRATED PRODUCTION SYSTEM (IBM PC VERSION WITH CLIPSITS)

    NASA Technical Reports Server (NTRS)

    Riley, G.

    1994-01-01

    The C Language Integrated Production System, CLIPS, is a shell for developing expert systems. It is designed to allow artificial intelligence research, development, and delivery on conventional computers. The primary design goals for CLIPS are portability, efficiency, and functionality. For these reasons, the program is written in C. CLIPS meets or outperforms most micro- and minicomputer based artificial intelligence tools. CLIPS is a forward chaining rule-based language. The program contains an inference engine and a language syntax that provide a framework for the construction of an expert system. It also includes tools for debugging an application. CLIPS is based on the Rete algorithm, which enables very efficient pattern matching. The collection of conditions and actions to be taken if the conditions are met is constructed into a rule network. As facts are asserted either prior to or during a session, CLIPS pattern-matches the number of fields. Wildcards and variables are supported for both single and multiple fields. CLIPS syntax allows the inclusion of externally defined functions (outside functions which are written in a language other than CLIPS). CLIPS itself can be embedded in a program such that the expert system is available as a simple subroutine call. Advanced features found in CLIPS version 4.3 include an integrated microEMACS editor, the ability to generate C source code from a CLIPS rule base to produce a dedicated executable, binary load and save capabilities for CLIPS rule bases, and the utility program CRSV (Cross-Reference, Style, and Verification) designed to facilitate the development and maintenance of large rule bases. Five machine versions are available. Each machine version includes the source and the executable for that machine. The UNIX version includes the source and binaries for IBM RS/6000, Sun3 series, and Sun4 series computers. The UNIX, DEC VAX, and DEC RISC Workstation versions are line oriented. The PC version and the Macintosh version each contain a windowing variant of CLIPS as well as the standard line oriented version. The mouse/window interface version for the PC works with a Microsoft compatible mouse or without a mouse. This window version uses the proprietary CURSES library for the PC, but a working executable of the window version is provided. The window oriented version for the Macintosh includes a version which uses a full Macintosh-style interface, including an integrated editor. This version allows the user to observe the changing fact base and rule activations in separate windows while a CLIPS program is executing. The IBM PC version is available bundled with CLIPSITS, The CLIPS Intelligent Tutoring System for a special combined price (COS-10025). The goal of CLIPSITS is to provide the student with a tool to practice the syntax and concepts covered in the CLIPS User's Guide. It attempts to provide expert diagnosis and advice during problem solving which is typically not available without an instructor. CLIPSITS is divided into 10 lessons which mirror the first 10 chapters of the CLIPS User's Guide. The program was developed for the IBM PC series with a hard disk. CLIPSITS is also available separately as MSC-21679. The CLIPS program is written in C for interactive execution and has been implemented on an IBM PC computer operating under DOS, a Macintosh and DEC VAX series computers operating under VMS or ULTRIX. 
The line oriented version should run on any computer system which supports a full (Kernighan and Ritchie) C compiler or the ANSI standard C language. CLIPS was developed in 1986 and Version 4.2 was released in July of 1988. Version 4.3 was released in June of 1989.

  1. Combined Quantum Chemical/Raman Spectroscopic Analyses of Li+ Cation Solvation: Cyclic Carbonate Solvents - Ethylene Carbonate and Propylene Carbonate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allen, Joshua L.; Borodin, Oleg; Seo, D. M.

    2014-12-01

    Combined computational/Raman spectroscopic analyses of ethylene carbonate (EC) and propylene carbonate (PC) solvation interactions with lithium salts are reported. It is proposed that previously reported Raman analyses of (EC)n-LiX mixtures have utilized faulty assumptions. In the present studies, density functional theory (DFT) calculations have provided corrections in terms of both the scaling factors for the solvent's Raman band intensity variations and information about band overlap. By accounting for these factors, the solvation numbers obtained from two different EC solvent bands are in excellent agreement with one another. The same analysis for PC, however, was found to be quite challenging. Commercially available PC is a racemic mixture of (S)- and (R)-PC isomers. Based upon the quantum chemistry calculations, each of these solvent isomers may exist as multiple conformers due to a low energy barrier for ring inversion, making deconvolution of the Raman bands daunting and inherently prone to significant error. Thus, Raman spectroscopy is able to accurately determine the extent of the EC...Li+ cation solvation interactions using the provided methodology, but a similar analysis of PC...Li+ cation solvation results in a significant underestimation of the actual solvation numbers.

  2. Investigation of Patient-Specific Cerebral Aneurysm using Volumetric PIV, CFD, and In Vitro PC-MRI

    NASA Astrophysics Data System (ADS)

    Brindise, Melissa; Dickerhoff, Ben; Saloner, David; Rayz, Vitaliy; Vlachos, Pavlos

    2017-11-01

    4D PC-MRI is a modality capable of providing time-resolved velocity fields in cerebral aneurysms in vivo. The MRI-measured velocities and subsequent hemodynamic parameters such as wall shear stress, and oscillatory shear index, can help neurosurgeons decide a course of treatment for a patient, e.g. whether to treat or monitor the aneurysm. However, low spatiotemporal resolution, limited velocity dynamic range, and inherent noise of PC-MRI velocity fields can have a notable effect on subsequent calculations, and should be investigated. In this work, we compare velocity fields obtained with 4D PC-MRI, computational fluid dynamics (CFD) and volumetric particle image velocimetry (PIV), using a patient-specific model of a basilar tip aneurysm. The same in vitro model is used for all three modalities and flow input parameters are controlled. In vivo, PC-MRI data was also acquired for this patient and used for comparison. Specifically, we investigate differences in the resulting velocity fields and biases in subsequent calculations. Further, we explore the effect these errors may have on assessment of the aneurysm progression and seek to develop corrective algorithms and other methodologies that can be used to improve the accuracy of hemodynamic analysis in clinical setting.

  3. Evaluation of tablet PC as a tool for teaching tooth brushing to children.

    PubMed

    Salama, F; Abobakr, I; Al-Khodair, N; Al-Wakeel, M

    2016-12-01

    This study evaluated the effect of a single tooth brushing instruction session using video on a tablet PC (Apple iPad), compared to an operator presentation using a jaw model, on plaque removal. This cross-sectional study included a convenience sample of 100 children divided into two groups. For Group 1, brushing was demonstrated to the child by the operator with the use of a jaw model. This demonstration was videotaped for subsequent use in Group 2 on a tablet PC (Apple iPad). The plaque index was recorded before and after demonstration of the assigned method of teaching tooth brushing. The results showed a significant difference between the two methods. The differences between the mean plaque index values at baseline and after tooth brushing were 17.27% (50% improvement) for the jaw model and 11.56% (34% improvement) for the tablet PC. Boys showed an 18.3% higher improvement in tooth brushing compared to girls. Seventy-five percent of the children reported using tablet computers in their daily life. Conclusion: Teaching children using a jaw model was more effective in improving plaque index scores, by 16%, than using video on a tablet PC. Both methods of teaching tooth brushing were fully accepted by all children.

  4. CuPc/Au(1 1 0): Determination of the azimuthal alignment by a combination of angle-resolved photoemission and density functional theory

    PubMed Central

    Lüftner, Daniel; Milko, Matus; Huppmann, Sophia; Scholz, Markus; Ngyuen, Nam; Wießner, Michael; Schöll, Achim; Reinert, Friedrich; Puschnig, Peter

    2014-01-01

    Here we report on a combined experimental and theoretical study of the structural and electronic properties of a monolayer of copper phthalocyanine (CuPc) on the Au(1 1 0) surface. Low-energy electron diffraction reveals a commensurate overlayer unit cell containing one adsorbate species. The azimuthal alignment of the CuPc molecule is revealed by comparing experimental constant-binding-energy (kx, ky)-maps from angle-resolved photoelectron spectroscopy with theoretical momentum maps of the free molecule's highest occupied molecular orbital (HOMO). This structural information is confirmed by total energy calculations within the framework of van der Waals-corrected density functional theory. The electronic structure is further analyzed by computing the molecule-projected density of states, using both a semi-local and a hybrid exchange-correlation functional. In agreement with experiment, the HOMO is located about 1.2 eV below the Fermi level, while there is no significant charge transfer into the molecule and the CuPc LUMO remains unoccupied on the Au(1 1 0) surface. PMID:25284953

  5. Fuel cells provide a revenue-generating solution to power quality problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    King, J.M. Jr.

    Electric power quality and reliability are becoming increasingly important as computers and microprocessors assume a larger role in commercial, health care and industrial buildings and processes. At the same time, constraints on transmission and distribution of power from central stations are making local areas vulnerable to low voltage, load addition limitations, power quality and power reliability problems. Many customers currently utilize some form of premium power in the form of standby generators and/or UPS systems. These include customers where continuous power is required for health, safety, or security reasons (hospitals, nursing homes, places of public assembly, air traffic control, military installations, telecommunications, etc.). They also include customers with industrial or commercial processes which cannot tolerate an interruption of power because of product loss or equipment damage. The paper discusses the use of the PC25 fuel cell power plant for backup and parallel power supplies for critical industrial applications. Several PC25 installations are described: the use of propane in a PC25; the use by rural cooperatives; and a demonstration of PC25 technology using landfill gas.

  6. PC graphics generation and management tool for real-time applications

    NASA Technical Reports Server (NTRS)

    Truong, Long V.

    1992-01-01

    A graphics tool was designed and developed for easy generation and management of personal computer graphics. It also provides methods and 'run-time' software for many common artificial intelligence (AI) or expert system (ES) applications.

  7. Personal Computer (PC) based image processing applied to fluid mechanics

    NASA Technical Reports Server (NTRS)

    Cho, Y.-C.; Mclachlan, B. G.

    1987-01-01

    A PC based image processing system was employed to determine the instantaneous velocity field of a two-dimensional unsteady flow. The flow was visualized using a suspension of seeding particles in water, and a laser sheet for illumination. With a finite time exposure, the particle motion was captured on a photograph as a pattern of streaks. The streak pattern was digitized and processed using various imaging operations, including contrast manipulation, noise cleaning, filtering, statistical differencing, and thresholding. Information concerning the velocity was extracted from the enhanced image by measuring the length and orientation of the individual streaks. The fluid velocities deduced from the randomly distributed particle streaks were interpolated to obtain velocities at uniform grid points. For the interpolation a simple convolution technique with an adaptive Gaussian window was used. The results are compared with a numerical prediction by a Navier-Stokes computation.
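
    A minimal sketch of the interpolation step described above, assuming scattered streak-centre velocities and a Gaussian weighting window whose width scales with the mean point spacing; the adaptivity rule here is a guess, not the authors' exact scheme.

        # Gaussian-window smoothing of scattered velocities onto a uniform grid.
        import numpy as np

        def gaussian_window_interp(pts, vel, grid_x, grid_y, sigma=None):
            """pts: (N, 2) streak centres; vel: (N, 2) velocity components."""
            if sigma is None:  # crude adaptivity via mean point spacing
                area = np.ptp(pts[:, 0]) * np.ptp(pts[:, 1])
                sigma = 1.5 * np.sqrt(area / len(pts))
            gx, gy = np.meshgrid(grid_x, grid_y)
            out = np.zeros(gx.shape + (2,))
            for i in range(gx.shape[0]):
                for j in range(gx.shape[1]):
                    d2 = (pts[:, 0] - gx[i, j])**2 + (pts[:, 1] - gy[i, j])**2
                    w = np.exp(-d2 / (2 * sigma**2))
                    out[i, j] = (w[:, None] * vel).sum(0) / (w.sum() + 1e-12)
            return out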

  8. A proposed computer diagnostic system for malignant melanoma (CDSMM).

    PubMed

    Shao, S; Grams, R R

    1994-04-01

    This paper describes a computer diagnostic system for malignant melanoma. The diagnostic system is a rule-based system built on image analyses and works under the PC Windows environment. It consists of seven modules: an I/O module, a Patient/Clinic database, an image processing module, a classification module, a rule-base module, and a system control module. In the system, the image analyses are carried out automatically, and database management is efficient and fast. Both final clinical results and intermediate results from the various modules, such as measured features, feature pictures and history records of the disease lesion, can be presented on screen or printed out from each corresponding module or from the I/O module. The system can also work as a doctor's office-based tool to aid dermatologists with details not perceivable by the human eye. Since the system operates on a general-purpose PC, it can be made portable if the I/O module is disconnected.

  9. Cosmic reionization on computers. I. Design and calibration of simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gnedin, Nickolay Y., E-mail: gnedin@fnal.gov

    Cosmic Reionization On Computers is a long-term program of numerical simulations of cosmic reionization. Its goal is to model fully self-consistently (albeit not necessarily from the first principles) all relevant physics, from radiative transfer to gas dynamics and star formation, in simulation volumes of up to 100 comoving Mpc, and with spatial resolution approaching 100 pc in physical units. In this method paper, we describe our numerical method, the design of simulations, and the calibration of numerical parameters. Using several sets (ensembles) of simulations in 20 h⁻¹ Mpc and 40 h⁻¹ Mpc boxes with spatial resolution reaching 125 pc at z = 6, we are able to match the observed galaxy UV luminosity functions at all redshifts between 6 and 10, as well as obtain reasonable agreement with the observational measurements of the Gunn-Peterson optical depth at z < 6.

  10. Nutrient Patterns and Their Association with Socio-Demographic, Lifestyle Factors and Obesity Risk in Rural South African Adolescents

    PubMed Central

    Pisa, Pedro T.; Pedro, Titilola M.; Kahn, Kathleen; Tollman, Stephen M.; Pettifor, John M.; Norris, Shane A.

    2015-01-01

    The aim of this study was to identify and describe the diversity of nutrient patterns and how they associate with socio-demographic and lifestyle factors including body mass index in rural black South African adolescents. Nutrient patterns were identified from quantified food frequency questionnaires (QFFQ) in 388 rural South African adolescents between the ages of 11–15 years from the Agincourt Health and Socio-demographic Surveillance System (AHDSS). Principal Component Analysis (PCA) was applied to 25 nutrients derived from QFFQs. Multiple linear regression and partial R2 models were fitted and computed respectively for each of the retained principal component (PC) scores on socio-demographic and lifestyle characteristics including body mass index (BMI) for age Z scores. Four nutrient patterns explaining 79% of the total variance were identified: PC1 (26%) was characterized by animal derived nutrients; PC2 (21%) by vitamins, fibre and vegetable oil nutrients; PC3 (19%) by both animal and plant derived nutrients (mixed diet driven nutrients); and PC4 (13%) by starch and folate. A positive and significant association was observed with BMI for age Z scores per 1 standard deviation (SD) increase in PC1 (0.13 (0.02; 0.24); p = 0.02) and PC4 (0.10 (−0.01; 0.21); p = 0.05) scores only. We confirmed variability in nutrient patterns that were significantly associated with various lifestyle factors including obesity. PMID:25984738
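
    The analysis pipeline described (standardize nutrients, extract PCs, regress the outcome on each PC score per 1 SD) can be sketched in a few lines. The sketch below uses synthetic stand-in data and scikit-learn; it is an assumed reconstruction of the workflow, not the authors' code.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(388, 25))              # stand-in for 25 QFFQ-derived nutrients
z = rng.normal(size=388)                    # stand-in for BMI-for-age Z scores

Xs = StandardScaler().fit_transform(X)      # put nutrients on a common scale
pca = PCA(n_components=4).fit(Xs)           # retain four nutrient patterns
scores = pca.transform(Xs)
scores /= scores.std(axis=0)                # express effects per 1 SD of each PC

for k in range(4):
    beta = LinearRegression().fit(scores[:, [k]], z).coef_[0]
    print(f"PC{k+1}: {pca.explained_variance_ratio_[k]:.0%} of variance, "
          f"BMI-z change per 1 SD = {beta:+.3f}")
```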

  11. A fast CT reconstruction scheme for a general multi-core PC.

    PubMed

    Zeng, Kai; Bai, Erwei; Wang, Ge

    2007-01-01

    Expensive computational cost is a severe limitation in CT reconstruction for clinical applications that need real-time feedback. A primary example is bolus-chasing computed tomography (CT) angiography (BCA) that we have been developing for the past several years. To accelerate the reconstruction process using the filtered backprojection (FBP) method, specialized hardware or graphics cards can be used. However, specialized hardware is expensive and not flexible. The graphics processing unit (GPU) in a current graphics card can only reconstruct images with reduced precision and is not easy to program. In this paper, an acceleration scheme is proposed based on a multi-core PC. In the proposed scheme, several techniques are integrated, including utilization of geometric symmetry, optimization of data structures, single-instruction multiple-data (SIMD) processing, multithreaded computation, and an Intel C++ compiler. Our scheme maintains the original precision and involves no data exchange between the GPU and CPU. The merits of our scheme are demonstrated in numerical experiments against the traditional implementation. Our scheme achieves a speedup of about 40, which can be further improved severalfold using the latest quad-core processors.
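
    As a rough illustration of the multithreading ingredient only (the paper's geometric-symmetry and SIMD optimizations are omitted), the sketch below splits parallel-beam backprojection of an already-filtered sinogram across threads; numpy releases the GIL in its inner loops, so the chunks can overlap. All names and the simple linear interpolation are assumptions.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def backproject_chunk(sino, thetas, n):
    """Backproject a chunk of filtered projections onto an n x n image."""
    xs = np.linspace(-1, 1, n)
    X, Y = np.meshgrid(xs, xs)
    s = np.linspace(-1, 1, sino.shape[1])
    img = np.zeros((n, n))
    for proj, th in zip(sino, thetas):
        t = X * np.cos(th) + Y * np.sin(th)   # detector coordinate of each pixel
        img += np.interp(t, s, proj)          # linear interpolation along the ray
    return img

def backproject(sino, thetas, n=256, workers=4):
    idx = np.array_split(np.arange(len(thetas)), workers)
    with ThreadPoolExecutor(workers) as ex:
        parts = ex.map(lambda i: backproject_chunk(sino[i], thetas[i], n), idx)
    return sum(parts) * np.pi / len(thetas)   # standard FBP angular weighting
```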

  13. 15Mcps photon-counting X-ray computed tomography system using a ZnO-MPPC detector and its application to gadolinium imaging.

    PubMed

    Sato, Eiichi; Sugimura, Shigeaki; Endo, Haruyuki; Oda, Yasuyuki; Abudurexiti, Abulajiang; Hagiwara, Osahiko; Osawa, Akihiro; Matsukiyo, Hiroshi; Enomoto, Toshiyuki; Watanabe, Manabu; Kusachi, Shinya; Sato, Shigehiro; Ogawa, Akira; Onagawa, Jun

    2012-01-01

    The 15 Mcps photon-counting X-ray computed tomography (CT) system is a first-generation type and consists of an X-ray generator, a turntable, a translation stage, a two-stage controller, a detector consisting of a 2 mm-thick zinc-oxide (ZnO) single-crystal scintillator and an MPPC (multipixel photon counter) module, a counter card (CC), and a personal computer (PC). High-speed photon counting was carried out using the detector in the X-ray CT system. The maximum count rate was 15 Mcps (mega counts per second) at a tube voltage of 100 kV and a tube current of 1.95 mA. Tomography is accomplished by repeated translations and rotations of an object, and projection curves of the object are obtained by the translation. The pulses of the event signal from the module are counted by the CC in conjunction with the PC. The minimum exposure time for obtaining a tomogram was 15 min, and photon-counting CT was accomplished using gadolinium-based contrast media. Copyright © 2011 Elsevier Ltd. All rights reserved.

  14. Message Passing and Shared Address Space Parallelism on an SMP Cluster

    NASA Technical Reports Server (NTRS)

    Shan, Hongzhang; Singh, Jaswinder P.; Oliker, Leonid; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2002-01-01

    Currently, message passing (MP) and shared address space (SAS) are the two leading parallel programming paradigms. MP has been standardized with MPI, and is the more common and mature approach; however, code development can be extremely difficult, especially for irregularly structured computations. SAS offers substantial ease of programming, but may suffer from performance limitations due to poor spatial locality and high protocol overhead. In this paper, we compare the performance of and the programming effort required for six applications under both programming models on a 32-processor PC-SMP cluster, a platform that is becoming increasingly attractive for high-end scientific computing. Our application suite consists of codes that typically do not exhibit scalable performance under shared-memory programming due to their high communication-to-computation ratios and/or complex communication patterns. Results indicate that SAS can achieve about half the parallel efficiency of MPI for most of our applications, while being competitive for the others. A hybrid MPI+SAS strategy shows only a small performance advantage over pure MPI in some cases. Finally, improved implementations of two MPI collective operations on PC-SMP clusters are presented.
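
    For readers unfamiliar with the two paradigms, the toy mpi4py sketch below shows the flavour of an MPI collective operation, the kind whose PC-SMP-cluster implementation the paper improves; it is illustrative only and unrelated to the authors' codes.

```python
# Run with e.g.: mpiexec -n 4 python allreduce_demo.py   (filename hypothetical)
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
local = np.full(4, comm.Get_rank(), dtype='d')   # each rank's partial result
total = np.empty(4, dtype='d')
comm.Allreduce(local, total, op=MPI.SUM)         # collective reduction across ranks
if comm.Get_rank() == 0:
    print(total)   # each element equals 0 + 1 + ... + (n_ranks - 1)
```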

  15. The universal toolbox thermal imager

    NASA Astrophysics Data System (ADS)

    Hollock, Steve; Jones, Graham; Usowicz, Paul

    2003-09-01

    The introduction of Microsoft Pocket PC 2000/2002 has brought a standardisation of the operating systems used by the majority of PDA manufacturers. This, coupled with the recent price reductions associated with these devices, has led to a rapid increase in the sales of such units; their use is now common in industrial, commercial and domestic applications throughout the world. This paper describes the results of a programme to develop a thermal imager that interfaces directly to all of these units so as to take advantage of the existing and future installed base of such devices. The imager currently interfaces with virtually any Pocket PC that provides the necessary processing, display and storage capability; as an alternative, the output of the unit can be visualised and processed in real time using a PC or laptop computer. In future, the open architecture employed by this imager will allow it to support all mobile computing devices, including phones and PDAs. The imager has been designed for one-handed or two-handed operation so that it may be pointed at awkward angles or used in confined spaces. This flexibility of use, coupled with the extensive feature range and exceedingly low cost of the imager, is extending the marketplace for thermal imaging from military and professional, through industrial, to the commercial and domestic marketplaces.

  16. Quantum-Chemical Approach to NMR Chemical Shifts in Paramagnetic Solids Applied to LiFePO4 and LiCoPO4.

    PubMed

    Mondal, Arobendo; Kaupp, Martin

    2018-04-05

    A novel protocol to compute and analyze NMR chemical shifts for extended paramagnetic solids, accounting comprehensively for Fermi-contact (FC), pseudocontact (PC), and orbital shifts, is reported and applied to the important lithium-ion battery cathode materials LiFePO4 and LiCoPO4. Using an EPR-parameter-based ansatz, the approach combines periodic (hybrid) DFT computation of hyperfine and orbital-shielding tensors with an incremental cluster model for g- and zero-field-splitting (ZFS) D-tensors. The cluster model allows the use of advanced multireference wave function methods (such as CASSCF or NEVPT2). Application of this protocol shows that the 7Li shifts in the high-voltage cathode material LiCoPO4 are dominated by spin-orbit-induced PC contributions, in contrast with previous assumptions, fundamentally changing interpretations of the shifts in terms of covalency. PC contributions are smaller for the 7Li shifts of the related LiFePO4, where FC and orbital shifts dominate. The 31P shifts of both materials finally are almost pure FC shifts. Nevertheless, large ZFS contributions can give rise to non-Curie temperature dependences for both 7Li and 31P shifts.

  17. Personal Computer Based Controller For Switched Reluctance Motor Drives

    NASA Astrophysics Data System (ADS)

    Mang, X.; Krishnan, R.; Adkar, S.; Chandramouli, G.

    1987-10-01

    The switched reluctance motor (SRM) has recently gained considerable attention in the variable-speed drive market. Two important factors that have contributed to this are the simplicity of construction and the possibility of developing low-cost controllers with a minimum number of switching devices in the drive circuits. This is mainly due to the state of the art of present digital circuit technology and the low cost of switching devices. The control of this motor drive is under research. Optimized performance of the SRM drive is very dependent on the integration of the controller, converter and the motor. This research on system integration involves considerable changes in the control algorithms and their implementation. A personal computer (PC) based controller is very appropriate for this purpose. Accordingly, the present paper is concerned with the design of a PC-based controller for an SRM. The PC allows for real-time microprocessor control with the possibility of on-line system parameter modifications. Software reconfiguration of this controller is easier than for a hardware-based controller. User friendliness is a natural consequence of such a system. Considering the low cost of PCs, this controller will offer an excellent cost-effective means of studying control strategies for the SRM drive in greater detail than in the past.

  18. Stellar Inertial Navigation Workstation

    NASA Technical Reports Server (NTRS)

    Johnson, W.; Johnson, B.; Swaminathan, N.

    1989-01-01

    Software and hardware assembled to support specific engineering activities. The Stellar Inertial Navigation Workstation (SINW) is an integrated computer workstation providing systems and engineering support functions for Space Shuttle guidance and navigation-system logistics, repair, and procurement activities. It consists of personal-computer hardware, packaged software, and custom software integrated into a user-friendly, menu-driven system. Designed to operate on an IBM PC XT. Applied in business and industry to develop similar workstations.

  19. Digital video technology, today and tomorrow

    NASA Astrophysics Data System (ADS)

    Liberman, J.

    1994-10-01

    Digital video is probably computing's fastest moving technology today. Just three years ago, the zenith of digital video technology on the PC was the successful marriage of digital text and graphics with analog audio and video by means of expensive analog laser disc players and video overlay boards. The state of the art involves two different approaches to fully digital video on computers: hardware-assisted and software-only solutions.

  20. Computing at DESY — current setup, trends and strategic directions

    NASA Astrophysics Data System (ADS)

    Ernst, Michael

    1998-05-01

    Since the HERA experiments H1 and ZEUS started data taking in '92, the computing environment at DESY has changed dramatically. After running mainframe-centred computing for more than 20 years, DESY switched to a heterogeneous, fully distributed computing environment within only about two years, in almost every corner where computing has its applications. The computing strategy was highly influenced by the needs of the user community. The collaborations are usually limited by current technology, and their ever-increasing demands are the driving force for central computing to always move close to the technology edge. While DESY's central computing has multidecade experience in running Central Data Recording/Central Data Processing for HEP experiments, the most challenging task today is to provide clear and homogeneous concepts in the desktop area. Given that lowest-level commodity hardware draws more and more attention, combined with the financial constraints we are facing already today, we quickly need concepts for integrated support of a versatile device which has the potential to move into basically any computing area in HEP. Though commercial solutions, especially addressing the PC management/support issues, are expected to come to market in the next 2-3 years, we need to provide suitable solutions now. Buying PCs at DESY currently at a rate of about 30/month will otherwise absorb all available manpower in central computing and still leave hundreds of users without adequate support. Though certainly not the only area, the desktop issue is one of the most important ones where we need HEP-wide collaboration to a large extent, and right now. Taking into account that there is traditionally no room for R&D at DESY, collaboration, meaning sharing experience and development resources within the HEP community, is a predominant factor for us.

  1. Dielectric relaxation of ethylene carbonate and propylene carbonate from molecular dynamics simulations

    DOE PAGES

    Chaudhari, Mangesh I.; You, Xinli; Pratt, Lawrence R.; ...

    2015-11-24

    Ethylene carbonate (EC) and propylene carbonate (PC) are widely used solvents in lithium (Li)-ion batteries and supercapacitors. Ion dissolution and diffusion in those media are correlated with solvent dielectric responses. Here, we use all-atom molecular dynamics simulations of the pure solvents to calculate dielectric constants, relaxation times, and molecular mobilities. The computed results are compared with the limited available experiments to assist more exhaustive studies of these important characteristics. The observed agreement is encouraging and provides guidance for further validation of force-field simulation models for EC and PC solvents.
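
    Given a total-dipole-moment time series M(t) from such a simulation, the static dielectric constant follows from the standard fluctuation formula ε = 1 + (⟨M²⟩ − ⟨M⟩²)/(3ε₀VkT), assuming conducting (tin-foil) boundary conditions. A minimal sketch, with all names hypothetical:

```python
import numpy as np

def static_dielectric(M, volume_m3, T):
    """Static dielectric constant from total-dipole fluctuations (SI units).
    M: (n_frames, 3) box dipole moment in C*m; volume in m^3; T in K.
    ASSUMES tin-foil (conducting) boundary conditions in the Ewald sum."""
    eps0, kB = 8.8541878128e-12, 1.380649e-23
    M = np.asarray(M)
    fluct = (M**2).sum(axis=1).mean() - (M.mean(axis=0)**2).sum()
    return 1.0 + fluct / (3.0 * eps0 * volume_m3 * kB * T)
```

    The Debye relaxation time would then come from fitting the decay of the normalized autocorrelation of M(t), which is not shown here.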

  2. Simultaneous real-time data collection methods

    NASA Technical Reports Server (NTRS)

    Klincsek, Thomas

    1992-01-01

    This paper describes the development of electronic test equipment which executes, supervises, and reports on various tests. This validation process uses computers to analyze test results and report conclusions. The test equipment consists of an electronics component and the data collection and reporting unit. The PC software, display screens, and real-time database are described. Pass-fail procedures and data replay are discussed. The OS/2 operating system and Presentation Manager user interface were used to create a highly interactive automated system. The system outputs are hardcopy printouts and MS-DOS format files which may be used as input for other PC programs.

  3. Flow characteristics in a canine aneurysm model: A comparison of 4D accelerated phase-contrast MR measurements and computational fluid dynamics simulations

    PubMed Central

    Jiang, Jingfeng; Johnson, Kevin; Valen-Sendstad, Kristian; Mardal, Kent-Andre; Wieben, Oliver; Strother, Charles

    2011-01-01

    Purpose: Our purpose was to compare quantitatively velocity fields in and around experimental canine aneurysms as measured using an accelerated 4D PC-MR angiography (MRA) method and calculated based on animal-specific CFD simulations. Methods: Two animals with a surgically created bifurcation aneurysm were imaged using an accelerated 4D PC-MRA method. Meshes were created based on the geometries obtained from the PC-MRA, and simulations using “subject-specific” pulsatile velocity waveforms and geometries were then solved using a commercial CFD solver. Qualitative visual assessments and quantitative comparisons of the time-resolved velocity fields obtained from the PC-MRA measurements and the CFD simulations were performed using a defined similarity metric combining both angular and magnitude differences of vector fields. Results: PC-MRA and image-based CFD not only yielded visually consistent representations of 3D streamlines in and around both aneurysms, but also showed good agreement with regard to the spatial velocity distributions. The estimated similarity between time-resolved velocity fields from both techniques was reasonably high (mean value >0.60; one being the highest and zero being the lowest). Relative differences in inflow and outflow zones among selected planes were also reasonable (on the order of 10%–20%). The correlation between CFD-calculated and PC-MRA-measured time-averaged wall shear stresses was low (0.22 and 0.31, p < 0.001). Conclusions: In two experimental canine aneurysms, PC-MRA and image-based CFD showed favorable agreement in intra-aneurysmal velocity fields. Combining these two complementary techniques likely will further improve the ability to characterize and interpret the complex flow that occurs in human intracranial aneurysms. PMID:22047395
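
    The abstract does not reproduce the exact similarity metric; one plausible form, combining an angular term (normalized dot product) with a magnitude-agreement term, is sketched below under that stated assumption.

```python
import numpy as np

def vector_similarity(u, v, eps=1e-12):
    """Pointwise similarity of two velocity fields u, v of shape (..., 3).
    Returns values in [0, 1]: 1 for identical vectors, ~0 for dissimilar ones.
    ASSUMPTION: product of an angular term and a magnitude-ratio term; the
    paper's actual definition may differ."""
    nu = np.linalg.norm(u, axis=-1)
    nv = np.linalg.norm(v, axis=-1)
    ang = 0.5 * (1.0 + (u * v).sum(-1) / np.maximum(nu * nv, eps))  # cos -> [0, 1]
    mag = np.minimum(nu, nv) / np.maximum(np.maximum(nu, nv), eps)
    return ang * mag  # average over voxels for a field-level score
```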

  4. A theoretical framework for analyzing coupled neuronal networks: Application to the olfactory system.

    PubMed

    Barreiro, Andrea K; Gautam, Shree Hari; Shew, Woodrow L; Ly, Cheng

    2017-10-01

    Determining how synaptic coupling within and between regions is modulated during sensory processing is an important topic in neuroscience. Electrophysiological recordings provide detailed information about neural spiking but have traditionally been confined to a particular region or layer of cortex. Here we develop new theoretical methods to study interactions between and within two brain regions, based on experimental measurements of spiking activity simultaneously recorded from the two regions. By systematically comparing experimentally-obtained spiking statistics to (efficiently computed) model spike rate statistics, we identify regions in model parameter space that are consistent with the experimental data. We apply our new technique to dual micro-electrode array in vivo recordings from two distinct regions: olfactory bulb (OB) and anterior piriform cortex (PC). Our analysis predicts that: i) inhibition within the afferent region (OB) has to be weaker than the inhibition within PC, ii) excitation from PC to OB is generally stronger than excitation from OB to PC, iii) excitation from PC to OB and inhibition within PC have to both be relatively strong compared to presynaptic inputs from OB. These predictions are validated in a spiking neural network model of the OB-PC pathway that satisfies the many constraints from our experimental data. We find when the derived relationships are violated, the spiking statistics no longer satisfy the constraints from the data. In principle this modeling framework can be adapted to other systems and be used to investigate relationships between other neural attributes besides network connection strengths. Thus, this work can serve as a guide to further investigations into the relationships of various neural attributes within and across different regions during sensory processing.

  5. Development of a PC-based diabetes simulator in collaboration with teenagers with type 1 diabetes.

    PubMed

    Nordfeldt, S; Hanberger, L; Malm, F; Ludvigsson, J

    2007-02-01

    The main aim of this study was to develop and test in a pilot study a PC-based interactive diabetes simulator prototype as a part of future Internet-based support systems for young teenagers and their families. A second aim was to gain experience in user-centered design (UCD) methods applied to such subjects. Using UCD methods, a computer scientist participated in iterative user group sessions involving teenagers with Type 1 diabetes 13-17 years old and parents. Input was transformed into a requirements specification by the computer scientist and advisors. This was followed by gradual prototype development based on a previously developed mathematical core. Individual test sessions were followed by a pilot study with five subjects testing a prototype. The process was evaluated by registration of flow and content of input and opinions from expert advisors. It was initially difficult to motivate teenagers to participate. User group discussion topics ranged from concrete to more academic matters. The issue of a simulator created active discussions among parents and teenagers. A large amount of input was generated from discussions among the teenagers. Individual test runs generated useful input. A pilot study suggested that the gradually elaborated software was functional. A PC-based diabetes simulator may create substantial interest among teenagers and parents, and the prototype seems worthy of further development and studies. UCD methods may generate significant input for computer support system design work and contribute to a functional design. Teenager involvement in design work may require time, patience, and flexibility.

  6. Strategies for reducing large fMRI data sets for independent component analysis.

    PubMed

    Wang, Ze; Wang, Jiongjiong; Calhoun, Vince; Rao, Hengyi; Detre, John A; Childress, Anna R

    2006-06-01

    In independent component analysis (ICA), principal component analysis (PCA) is generally used to reduce the raw data to a few principal components (PCs) through eigenvector decomposition (EVD) on the data covariance matrix. Although this works for spatial ICA (sICA) on moderately sized fMRI data, it is intractable for temporal ICA (tICA), since typical fMRI data have a high spatial dimension, resulting in an unmanageable data covariance matrix. To solve this problem, two practical data reduction methods are presented in this paper. The first solution is to calculate the PCs of tICA from the PCs of sICA. This approach works well for moderately sized fMRI data; however, it is highly computationally intensive, even intractable, when the number of scans increases. The second solution proposed is to perform PCA decomposition via a cascade recursive least squares (CRLS) network, which provides a uniform data reduction solution for both sICA and tICA. Without the need to calculate the covariance matrix, CRLS extracts PCs directly from the raw data, and the PC extraction can be terminated after computing an arbitrary number of PCs without the need to estimate the whole set of PCs. Moreover, when the whole data set becomes too large to be loaded into the machine memory, CRLS-PCA can save data retrieval time by reading the data once, while the conventional PCA requires numerous data retrieval steps for both covariance matrix calculation and PC extraction. Real fMRI data were used to evaluate the PC extraction precision, computational expense, and memory usage of the presented methods.
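
    scikit-learn's IncrementalPCA illustrates the same idea as the CRLS network (leading PCs from streamed raw data, with no explicit covariance matrix), although the underlying algorithm differs; the data here are a synthetic stand-in for fMRI scans.

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

rng = np.random.default_rng(0)
ipca = IncrementalPCA(n_components=20)       # stop after 20 PCs, not the full set
for _ in range(20):                          # stream the data in batches
    batch = rng.normal(size=(500, 5000))     # stand-in: 500 scans x 5000 voxels
    ipca.partial_fit(batch)                  # one pass; no covariance matrix built
pcs = ipca.components_                       # (20, 5000) leading PCs
```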

  7. On combination of strict Bayesian principles with model reduction technique or how stochastic model calibration can become feasible for large-scale applications

    NASA Astrophysics Data System (ADS)

    Oladyshkin, S.; Schroeder, P.; Class, H.; Nowak, W.

    2013-12-01

    Predicting underground carbon dioxide (CO2) storage represents a challenging problem in a complex dynamic system. Due to lacking information about reservoir parameters, quantification of uncertainties may become the dominant question in risk assessment. Calibration on past observed data from a pilot-scale test injection can improve the predictive power of the involved geological, flow, and transport models. The current work performs history matching against pressure time series from a pilot storage site operated in Europe, recorded during an injection period. Simulation of compressible two-phase flow and transport (CO2/brine) in the considered site is computationally very demanding, requiring about 12 days of CPU time for an individual model run. For that reason, brute-force approaches to calibration are not feasible. In the current work, we explore an advanced framework for history matching based on the arbitrary polynomial chaos expansion (aPC) and strict Bayesian principles. The aPC [1] offers a drastic but accurate stochastic model reduction. Unlike many previous chaos expansions, it can handle arbitrary probability distribution shapes of uncertain parameters, and can therefore directly handle the statistical information appearing during the matching procedure. In our study we keep the spatial heterogeneity suggested by geophysical methods, but consider uncertainty in the magnitude of permeability through zone-wise permeability multipliers. We capture the dependence of model output on these multipliers with the expansion-based reduced model. Next, we combined the aPC with Bootstrap filtering (a brute-force but fully accurate Bayesian updating mechanism) in order to perform the matching. In comparison to (Ensemble) Kalman Filters, our method accounts for higher-order statistical moments and for the non-linearity of both the forward model and the inversion, and thus allows a rigorous quantification of calibrated model uncertainty. The usually high computational costs of accurate filtering become very feasible with our suggested aPC-based calibration framework. However, the power of aPC-based Bayesian updating strongly depends on the accuracy of prior information. In the current study, the prior assumptions on the model parameters were not satisfactory and strongly underestimated the reservoir pressure. Thus, the aPC-based response surface used in Bootstrap filtering is fitted to a distant and poorly chosen region within the parameter space. Thanks to the iterative procedure suggested in [2] we overcome this drawback at small computational cost. The iteration successively improves the accuracy of the expansion around the current estimate of the posterior distribution. The final result is a calibrated model of the site that can be used for further studies, with an excellent match to the data. References [1] Oladyshkin S. and Nowak W. Data-driven uncertainty quantification using the arbitrary polynomial chaos expansion. Reliability Engineering and System Safety, 106:179-190, 2012. [2] Oladyshkin S., Class H., Nowak W. Bayesian updating via Bootstrap filtering combined with data-driven polynomial chaos expansions: methodology and application to history matching for carbon dioxide storage in geological formations. Computational Geosciences, 17(4), 671-687, 2013.

  8. METCAN-PC - METAL MATRIX COMPOSITE ANALYZER

    NASA Technical Reports Server (NTRS)

    Murthy, P. L.

    1994-01-01

    High temperature metal matrix composites offer great potential for use in advanced aerospace structural applications. The realization of this potential however, requires concurrent developments in (1) a technology base for fabricating high temperature metal matrix composite structural components, (2) experimental techniques for measuring their thermal and mechanical characteristics, and (3) computational methods to predict their behavior. METCAN (METal matrix Composite ANalyzer) is a computer program developed to predict this behavior. METCAN can be used to computationally simulate the non-linear behavior of high temperature metal matrix composites (HT-MMC), thus allowing the potential payoff for the specific application to be assessed. It provides a comprehensive analysis of composite thermal and mechanical performance. METCAN treats material nonlinearity at the constituent (fiber, matrix, and interphase) level, where the behavior of each constituent is modeled accounting for time-temperature-stress dependence. The composite properties are synthesized from the constituent instantaneous properties by making use of composite micromechanics and macromechanics. Factors which affect the behavior of the composite properties include the fabrication process variables, the fiber and matrix properties, the bonding between the fiber and matrix and/or the properties of the interphase between the fiber and matrix. The METCAN simulation is performed as point-wise analysis and produces composite properties which are readily incorporated into a finite element code to perform a global structural analysis. After the global structural analysis is performed, METCAN decomposes the composite properties back into the localized response at the various levels of the simulation. At this point the constituent properties are updated and the next iteration in the analysis is initiated. This cyclic procedure is referred to as the integrated approach to metal matrix composite analysis. METCAN-PC is written in FORTRAN 77 for IBM PC series and compatible computers running MS-DOS. An 80286 machine with an 80287 math co-processor is required for execution. The executable requires at least 640K of RAM and DOS 3.1 or higher. The package includes sample executables which were compiled under Microsoft FORTRAN v. 5.1. The standard distribution medium for this program is one 5.25 inch 360K MS-DOS format diskette. The contents of the diskette are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. METCAN-PC was developed in 1992.

  9. Toward the optimization of PC-based training

    NASA Astrophysics Data System (ADS)

    Cho, Kohei; Murai, Shunji

    Since 1992, the National Space Development Agency of Japan (NASDA) and the Economic and Social Commission for Asia and the Pacific (ESCAP) have been co-organising the Regional Remote Sensing Seminar on Tropical Ecosystem Management (Program Chairman: Prof. Shunji Murai) every year in a country in Asia. In these seminars, the members of ISPRS Working Group VI/2 'Computer Assisted Teaching' have been delivering PC-based hands-on training on remote sensing and GIS for beginners. The main objective of the training was to transfer not only knowledge but also the technology of remote sensing and GIS to beginners. The software and CD-ROM data set provided for the training were well designed not only for training but also for practical data analysis. This paper presents an outline of the training and discusses the optimisation of PC-based training for remote sensing and GIS.

  10. Three-dimensional structure of the Upper Scorpius association with the Gaia first data release

    NASA Astrophysics Data System (ADS)

    Galli, Phillip A. B.; Joncour, Isabelle; Moraux, Estelle

    2018-06-01

    Using new proper motion data from recently published catalogues, we revisit the membership of previously identified members of the Upper Scorpius association. We confirmed 750 of them as cluster members based on the convergent point method, computed their kinematic parallaxes, and combined them with Gaia parallaxes to investigate the 3D structure and geometry of the association using a robust covariance method. We find a mean distance of 146 ± 3 ± 6 pc and show that the morphology of the association defined by the brightest (and most massive) stars yields a prolate ellipsoid with dimensions of 74 × 38 × 32 pc³, while the faintest cluster members define a more elongated structure with dimensions of 98 × 24 × 18 pc³. We suggest that the different properties of both populations are an imprint of the star formation history in this region.
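
    A hedged sketch of the geometric step: fit a robust 3D covariance to member positions and read the ellipsoid axes from its eigen-decomposition. The synthetic positions, the choice of MinCovDet as the robust estimator, and the sigma scaling are all assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(1)
xyz = rng.multivariate_normal([146.0, 0.0, 0.0],             # stand-in positions, pc
                              np.diag([37.0**2, 19.0**2, 16.0**2]), size=750)
mcd = MinCovDet().fit(xyz)                   # robust (outlier-resistant) covariance
evals = np.linalg.eigvalsh(mcd.covariance_)  # principal variances, ascending
print("1-sigma semi-axes [pc]:", np.round(np.sqrt(evals)[::-1], 1))
```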

  11. Multi-camera real-time three-dimensional tracking of multiple flying animals

    PubMed Central

    Straw, Andrew D.; Branson, Kristin; Neumann, Titus R.; Dickinson, Michael H.

    2011-01-01

    Automated tracking of animal movement allows analyses that would not otherwise be possible by providing great quantities of data. The additional capability of tracking in real time—with minimal latency—opens up the experimental possibility of manipulating sensory feedback, thus allowing detailed explorations of the neural basis for control of behaviour. Here, we describe a system capable of tracking the three-dimensional position and body orientation of animals such as flies and birds. The system operates with less than 40 ms latency and can track multiple animals simultaneously. To achieve these results, a multi-target tracking algorithm was developed based on the extended Kalman filter and the nearest neighbour standard filter data association algorithm. In one implementation, an 11-camera system is capable of tracking three flies simultaneously at 60 frames per second using a gigabit network of nine standard Intel Pentium 4 and Core 2 Duo computers. This manuscript presents the rationale and details of the algorithms employed and shows three implementations of the system. An experiment was performed using the tracking system to measure the effect of visual contrast on the flight speed of Drosophila melanogaster. At low contrasts, speed is more variable and faster on average than at high contrasts. Thus, the system is already a useful tool to study the neurobiology and behaviour of freely flying animals. If combined with other techniques, such as ‘virtual reality’-type computer graphics or genetic manipulation, the tracking system would offer a powerful new way to investigate the biology of flying animals. PMID:20630879

  12. Fast generation of computer-generated hologram by graphics processing unit

    NASA Astrophysics Data System (ADS)

    Matsuda, Sho; Fujii, Tomohiko; Yamaguchi, Takeshi; Yoshikawa, Hiroshi

    2009-02-01

    A cylindrical hologram is well known to be viewable over 360 deg. Such a hologram requires very high pixel resolution, so a Computer-Generated Cylindrical Hologram (CGCH) requires a huge amount of calculation. In our previous research, we used a look-up table method for fast calculation on an Intel Pentium 4 at 2.8 GHz. It took 480 hours to calculate a high-resolution CGCH (504,000 x 63,000 pixels, with an average of 27,000 object points). To improve the quality of the CGCH reconstructed image, the fringe pattern requires higher spatial frequency and resolution. Therefore, to increase the calculation speed, we have to change the calculation method. In this paper, to reduce the calculation time of a CGCH (912,000 x 108,000 pixels), we employ a Graphics Processing Unit (GPU). It took 4,406 hours to calculate the high-resolution CGCH on a Xeon 3.4 GHz. Since a GPU has many streaming processors and a parallel processing structure, it works as a high-performance parallel processor. In addition, a GPU gives maximum performance on 2-dimensional data and streaming data. Recently, GPUs can be utilized for general-purpose computation (GPGPU). For example, NVIDIA's GeForce 7 series became a programmable processor with the Cg programming language. The next GeForce 8 series has CUDA, a software development kit made by NVIDIA. Theoretically, the calculation ability of the GPU is announced as 500 GFLOPS. From the experimental results, we achieved a calculation 47 times faster than our previous work, which used a CPU. Therefore, the CGCH can be generated in 95 hours, and the total time to calculate and print the CGCH is 110 hours.

  13. Optical coherence tomography for an in-vivo study of posterior-capsule-opacification types and their influence on the total-pulse energy required for Nd:YAG capsulotomy: a case series.

    PubMed

    Hawlina, Gregor; Perovšek, Darko; Drnovšek-Olup, Brigita; Možina, Janez; Gregorčič, Peter

    2014-11-18

    Posterior capsule opacification (PCO) is the most common post-operative complication associated with cataract surgery and is mostly treated with Nd:YAG laser capsulotomy. Here, we demonstrate the use of high-resolution spectral-domain optical coherence tomography (OCT) as a technique for PCO analysis. Additionally, we evaluate the influence of PCO types and the distance between the intraocular lens (IOL) and the posterior capsule (PC), i.e., the IOL/PC distance, on the total-pulse energy required for the Nd:YAG laser posterior capsulotomy. 47 eyes with PCO scheduled for the Nd:YAG procedure were examined and divided into four categories: fibrosis, pearl, mixed type and late-postoperative capsular bag distension syndrome. Using custom-made computer software for OCT image analysis, the IOL/PC distances in two dimensions were measured. The IOL/PC distances were compared with those of a control group of 15 eyes without PCO. The influence of the different PCO types and the IOL/PC distance on the total-pulse energy required for the Nd:YAG procedure was analyzed. The total-pulse energy required for a laser capsulotomy differs significantly between PCO types (p = 0.005, Kruskal-Wallis test). The highest energy was required for the fibrosis PCO type, followed by mixed, pearl and late-postoperative capsular bag distension syndrome. The IOL/PC distance also significantly influenced the total-pulse energy required for laser capsulotomy (p = 0.028, linear regression). Lower total-pulse energy was expected for a larger IOL/PC distance. Our study indicates that the PCO types and the IOL/PC distance influence the total-pulse energy required for Nd:YAG capsulotomy. The presented OCT method has the potential to become an additional tool for PCO characterization. Our results are important for a better understanding of the photodisruptive mechanisms in Nd:YAG capsulotomy.

  14. Selection of polynomial chaos bases via Bayesian model uncertainty methods with applications to sparse approximation of PDEs with stochastic inputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karagiannis, Georgios, E-mail: georgios.karagiannis@pnnl.gov; Lin, Guang, E-mail: guang.lin@pnnl.gov

    2014-02-15

    Generalized polynomial chaos (gPC) expansions allow us to represent the solution of a stochastic system using a series of polynomial chaos basis functions. The number of gPC terms increases dramatically as the dimension of the random input variables increases. When the number of the gPC terms is larger than that of the available samples, a scenario that often occurs when the corresponding deterministic solver is computationally expensive, evaluation of the gPC expansion can be inaccurate due to over-fitting. We propose a fully Bayesian approach that allows for global recovery of the stochastic solutions, in both spatial and random domains, by coupling Bayesian model uncertainty and regularization regression methods. It allows the evaluation of the PC coefficients on a grid of spatial points, via (1) the Bayesian model average (BMA) or (2) the median probability model, and their construction as spatial functions on the spatial domain via spline interpolation. The former accounts for the model uncertainty and provides Bayes-optimal predictions; while the latter provides a sparse representation of the stochastic solutions by evaluating the expansion on a subset of dominating gPC bases. Moreover, the proposed methods quantify the importance of the gPC bases in the probabilistic sense through inclusion probabilities. We design a Markov chain Monte Carlo (MCMC) sampler that evaluates all the unknown quantities without the need of ad-hoc techniques. The proposed methods are suitable for, but not restricted to, problems whose stochastic solutions are sparse in the stochastic space with respect to the gPC bases while the deterministic solver involved is expensive. We demonstrate the accuracy and performance of the proposed methods and make comparisons with other approaches on solving elliptic SPDEs with 1-, 14- and 40-random dimensions.
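
    A much simpler sparse-regression stand-in (lasso instead of the paper's Bayesian model averaging) illustrates the underlying problem: recovering gPC coefficients when the basis has more terms than there are solver samples. Everything below is a hypothetical 1D toy, not the paper's method.

```python
import numpy as np
from numpy.polynomial import legendre
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
xi = rng.uniform(-1, 1, 30)              # 30 samples of a uniform random input
u = np.sin(np.pi * xi)                   # stand-in for an expensive solver's output
Psi = legendre.legvander(xi, 40)         # 41 Legendre terms > 30 samples: ill-posed
fit = Lasso(alpha=1e-3, max_iter=100000).fit(Psi, u)
c = fit.coef_.copy()
c[0] += fit.intercept_                   # fold the intercept into the P0 term
print("nonzero gPC coefficients:", np.flatnonzero(np.abs(c) > 1e-6))
```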

  15. CIM for 300-mm semiconductor fab

    NASA Astrophysics Data System (ADS)

    Luk, Arthur

    1997-08-01

    Five years ago, factory automation (F/A) was not prevalent in the fab. Today, facing a drastically changed market and intense competition, management requests that plant-floor data be forwarded to their desktop computers. This increased demand has rapidly pushed F/A toward computer integrated manufacturing (CIM). Through personalization, computer size was successfully reduced so that computers can sit on our desktops; the PC initiated a new computing era. With the advent of the network, the network computer (NC) creates fresh problems for us. As we plan to invest more than $3 billion to build a new 300 mm fab, the next-generation technology raises the bar.

  16. Real-time plasma control based on the ISTTOK tomography diagnostic

    NASA Astrophysics Data System (ADS)

    Carvalho, P. J.; Carvalho, B. B.; Neto, A.; Coelho, R.; Fernandes, H.; Sousa, J.; Varandas, C.; Chávez-Alarcón, E.; Herrera-Velázquez, J. J. E.

    2008-10-01

    The presently available processing power in generic processing units (GPUs) combined with state-of-the-art programmable logic devices benefits the implementation of complex, real-time driven, data processing algorithms for plasma diagnostics. A tomographic reconstruction diagnostic has been developed for the ISTTOK tokamak, based on three linear pinhole cameras each with ten lines of sight. The plasma emissivity in a poloidal cross section is computed locally on a submillisecond time scale, using a Fourier-Bessel algorithm, allowing the use of the output signals for active plasma position control. The data acquisition and reconstruction (DAR) system is based on ATCA technology and consists of one acquisition board with integrated field programmable gate array (FPGA) capabilities and a dual-core Pentium module running real-time application interface (RTAI) Linux. In this paper, the DAR real-time firmware/software implementation is presented, based on (i) front-end digital processing in the FPGA; (ii) a device driver specially developed for the board which enables streaming data acquisition to the host GPU; and (iii) a fast reconstruction algorithm running in Linux RTAI. This system behaves as a module of the central ISTTOK control and data acquisition system (FIRESIGNAL). Preliminary results of the above experimental setup are presented and a performance benchmarking against the magnetic coil diagnostic is shown.
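
    Why submillisecond reconstruction is feasible: once the emissivity is expanded in a fixed Fourier-Bessel basis, the per-cycle work plausibly reduces to a small matrix-vector product with a precomputed (pseudo-)inverse of the geometry matrix. The sketch below is an assumed simplification with a random stand-in geometry matrix; in the real diagnostic, G would come from integrating each basis function along the 30 lines of sight.

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.normal(size=(30, 15))       # stand-in: 30 chords x 15 Fourier-Bessel terms
G_pinv = np.linalg.pinv(G)          # computed once, offline

def reconstruct(signals):
    """Real-time step: basis coefficients from the 30 detector signals."""
    return G_pinv @ signals         # one (15 x 30) @ (30,) product per cycle
```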

  17. Digital dental surface registration with laser scanner for orthodontics set-up planning

    NASA Astrophysics Data System (ADS)

    Alcaniz-Raya, Mariano L.; Albalat, Salvador E.; Grau Colomer, Vincente; Monserrat, Carlos A.

    1997-05-01

    We present an optical measuring system based on laser structured light, suitable for daily use in orthodontics clinics, that meets four main requirements: (1) avoid the use of stone models; (2) automatically discriminate geometric points belonging to teeth and gum; (3) automatically calculate the diagnostic parameters used by orthodontists; (4) make use of low-cost, easy-to-use technology for future commercial use. The proposed technique is based on the hydrocolloid moulds used by orthodontists to obtain stone models. These moulds of the inside of the patient's mouth are composed of very fluent materials, such as alginate or hydrocolloids, that reveal fine details of dental anatomy. Alginate moulds are both very easy to obtain and very low in cost. Once captured, alginate moulds are digitized by means of a newly developed and patented 3D dental scanner. The developed scanner is based on an optical triangulation method using the projection of a laser line onto the alginate mould surface. Line deformation gives uncalibrated shape information. Relative linear movements of the mould with respect to the sensor head give more sections, thus yielding a full 3D uncalibrated dentition model. The developed device makes use of a redundant CCD in the sensor head and a servo-controlled linear axis for mould movement. The last step is calibration to obtain a real and precise X, Y, Z image. The whole process is carried out automatically. The scanner has been specially adapted for capturing 3D dental anatomy in order to fulfil specific requirements such as scanning time, accuracy, security, and correct acquisition of 'hidden points' in the alginate mould. Measurements made on phantoms with known geometry, quite similar to dental anatomy, show errors of less than 0.1 mm. Scanning the global dental anatomy takes 2 minutes, and generation of the 3D graphics of the dental cast takes approximately 30 seconds on a Pentium-based PC.

  18. [Automated analyser of organ cultured corneal endothelial mosaic].

    PubMed

    Gain, P; Thuret, G; Chiquet, C; Gavet, Y; Turc, P H; Théillère, C; Acquart, S; Le Petit, J C; Maugery, J; Campos, L

    2002-05-01

    Until now, organ-cultured corneal endothelial mosaic has been assessed in France by cell counting using a calibrated graticule, or by drawing cells on a computerized image. The former method is unsatisfactory because it lacks an objective evaluation of cell surface and hexagonality and requires an experienced technician. The latter method is time-consuming and requires careful attention. We aimed to build an efficient, fast, easy-to-use, automated digital analyzer of video images of the corneal endothelium. The hardware included a Pentium III® 800 MHz PC with 256 MB RAM, a Data Translation 3155 acquisition card, a Sony SC 75 CE CCD camera, and a 22-inch screen. Special functions for automated cell boundary determination consisted of plug-in programs for the ImageTool software. Calibration was performed using a calibrated micrometer. Cell densities of 40 organ-cultured corneas measured by both manual and automated counting were compared using parametric tests (Student's t test for paired variables and the Pearson correlation coefficient). All steps were considered more ergonomic, i.e., endothelial image capture, image selection, thresholding of multiple areas of interest, automated cell count, automated detection of errors in cell boundary drawing, and presentation of the results in an HTML file including the number of counted cells, cell density, coefficient of variation of cell area, cell surface histogram and cell hexagonality. The device was efficient because the global process lasted on average 7 minutes and did not require an experienced technician. The correlation between cell densities obtained with both methods was high (r=+0.84, p<0.001). The results showed an under-estimation using manual counting (2191±322 vs. 2273±457 cells/mm², p=0.046), compared with the automated method. Our automated endothelial cell analyzer is efficient and gives reliable results quickly and easily. A multicentric validation would allow us to standardize cell counts among cornea banks in our country.

  19. Computer simulation of space charge

    NASA Astrophysics Data System (ADS)

    Yu, K. W.; Chung, W. K.; Mak, S. S.

    1991-05-01

    Using the particle-mesh (PM) method, a one-dimensional simulation of the well-known Langmuir-Child law is performed on an Intel 80386-based personal computer system. The program is coded in Turbo Basic (trademark of Borland International, Inc.). The numerical results obtained were in excellent agreement with theoretical predictions, and the computational time required is quite modest. This simulation exercise demonstrates that simple computer simulations using particles may be implemented successfully on the PCs available today, and hopefully this will provide the necessary incentive for newcomers to the field who wish to acquire a flavor of the elementary aspects of the practice.
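
    A minimal 1D particle-mesh cycle of the kind described (cloud-in-cell deposit, Poisson solve, field gather, particle push) in normalized units. This is a sketch under many stated assumptions (no continuous injection, simple Euler push, eps0 = 1), not the original Turbo Basic program; a full Child-Langmuir test would inject particles each step and measure the transmitted current.

```python
import numpy as np

ng, L, dt, qm, V = 64, 1.0, 1e-3, -1.0, 1.0   # cells, gap, step, q/m, anode voltage
dx = L / ng
rng = np.random.default_rng(0)
x = rng.random(5000) * 0.01                    # electrons start near the cathode
v = np.zeros_like(x)
w = -1e-4                                      # (negative) charge per macro-particle

for step in range(1000):
    # 1) deposit charge on the grid (cloud-in-cell weighting)
    j = np.clip((x / dx).astype(int), 0, ng - 1)
    f = x / dx - j
    rho = np.zeros(ng + 1)
    np.add.at(rho, j, w * (1 - f) / dx)
    np.add.at(rho, j + 1, w * f / dx)
    # 2) solve phi'' = -rho (eps0 = 1) with phi(0) = 0, phi(L) = V
    A = (np.diag(-2.0 * np.ones(ng - 1)) + np.diag(np.ones(ng - 2), 1)
         + np.diag(np.ones(ng - 2), -1)) / dx**2
    b = -rho[1:ng]
    b[-1] -= V / dx**2                         # Dirichlet boundary at the anode
    phi = np.concatenate([[0.0], np.linalg.solve(A, b), [V]])
    E = -np.gradient(phi, dx)
    # 3) gather the field at particle positions and push (simple Euler update)
    v += qm * (E[j] * (1 - f) + E[j + 1] * f) * dt
    x += v * dt
    keep = (x > 0) & (x < L)                   # absorb particles at the electrodes
    x, v = x[keep], v[keep]
```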

  20. Analysis of Disaster Preparedness Planning Measures in DoD Computer Facilities

    DTIC Science & Technology

    1993-09-01

    [OCR residue from the report documentation page and table of contents; recoverable headings: Computer Disaster Recovery; PC and LAN Lessons Learned; Distributed Architectures; Backups.] "...amount of expense, but no client problems." (Leeke, 1993, p. 8) Distributed Architectures: The majority of operations that were disrupted by the

  1. TVC actuator model. [for the space shuttle main engine

    NASA Technical Reports Server (NTRS)

    Baslock, R. W.

    1977-01-01

    A prototype Space Shuttle Main Engine (SSME) Thrust Vector Control (TVC) Actuator analog model was successfully completed. The prototype, mounted on five printed circuit (PC) boards, was delivered to NASA, checked out and tested using a modular replacement technique on an analog computer. In all cases, the prototype model performed within the recording techniques of the analog computer which is well within the tolerances of the specifications.

  2. QCDOC: A 10-teraflops scale computer for lattice QCD

    NASA Astrophysics Data System (ADS)

    Chen, D.; Christ, N. H.; Cristian, C.; Dong, Z.; Gara, A.; Garg, K.; Joo, B.; Kim, C.; Levkova, L.; Liao, X.; Mawhinney, R. D.; Ohta, S.; Wettig, T.

    2001-03-01

    The architecture of a new class of computers, optimized for lattice QCD calculations, is described. An individual node is based on a single integrated circuit containing a PowerPC 32-bit integer processor with a 1 Gflops 64-bit IEEE floating point unit, 4 Mbyte of memory, 8 Gbit/sec nearest-neighbor communications and additional control and diagnostic circuitry. The machine's name, QCDOC, derives from "QCD On a Chip".

  3. [The importance of using the computer in treating children with strabismus and amblyopia].

    PubMed

    Tatarinov, S A; Amel'ianova, S G; Kashchenko, T P; Lakomkin, V I; Avuchenkova, T N; Galich, V I

    1993-01-01

    A method for the therapy of strabismus and amblyopia using an IBM PC/AT-type computer is suggested. It consists in active interaction of the patient with various test objects on the monitor and is realized via a special set of programs. Clinical indications for the use of the new method are defined. Its use yielded good results in 82 of 97 children.

  4. Another View of "PC vs. Mac."

    ERIC Educational Resources Information Center

    DeMillion, John A.

    1998-01-01

    An article by Nan Wodarz in the November 1997 issue listed reasons why the Microsoft computer operating system was superior to the Apple Macintosh platform. This rebuttal contends the Macintosh is less expensive, lasts longer, and requires less technical staff for support. (MLF)

  5. The Design of PC/MISI, a PC-Based Common User Interface to Remote Information Storage and Retrieval Systems. M.S. Thesis, Final Report, 1 Jul. 1985 - 31 Dec. 1987

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Hall, Philip P.

    1985-01-01

    The amount of information contained in the data bases of large-scale information storage and retrieval systems is very large and growing at a rapid rate. The methods available for accessing this information have not been successful in making the information easily available to the people who have the greatest need for it. This thesis describes the design of a personal computer based system which will provide a means for these individuals to retrieve this data through one standardized interface. The thesis identifies each of the major problems associated with providing access to casual users of IS and R systems and describes the manner in which these problems are to be solved by the utilization of the local processing power of a PC. Additional capabilities, not available with standard access methods, are also provided to improve the user's ability to make use of this information. The design of PC/MISI is intended to facilitate its use as a research vehicle. Evaluation mechanisms and possible areas of future research are described. The PC/MISI development effort is part of a larger research effort directed at improving access to remote IS and R systems. This research effort, supported in part by NASA, is also reviewed.

  6. Enhancing 4D PC-MRI in an aortic phantom considering numerical simulations

    NASA Astrophysics Data System (ADS)

    Kratzke, Jonas; Schoch, Nicolai; Weis, Christian; Müller-Eschner, Matthias; Speidel, Stefanie; Farag, Mina; Beller, Carsten J.; Heuveline, Vincent

    2015-03-01

    To date, cardiovascular surgery enables the treatment of a wide range of aortic pathologies. One of the current challenges in this field is given by the detection of high-risk patients for adverse aortic events, who should be treated electively. Reliable diagnostic parameters, which indicate the urge of treatment, have to be determined. Functional imaging by means of 4D phase contrast-magnetic resonance imaging (PC-MRI) enables the time-resolved measurement of blood flow velocity in 3D. Applied to aortic phantoms, three dimensional blood flow properties and their relation to adverse dynamics can be investigated in vitro. Emerging "in silico" methods of numerical simulation can supplement these measurements in computing additional information on crucial parameters. We propose a framework that complements 4D PC-MRI imaging by means of numerical simulation based on the Finite Element Method (FEM). The framework is developed on the basis of a prototypic aortic phantom and validated by 4D PC-MRI measurements of the phantom. Based on physical principles of biomechanics, the derived simulation depicts aortic blood flow properties and characteristics. The framework might help identifying factors that induce aortic pathologies such as aortic dilatation or aortic dissection. Alarming thresholds of parameters such as wall shear stress distribution can be evaluated. The combined techniques of 4D PC-MRI and numerical simulation can be used as complementary tools for risk-stratification of aortic pathology.

  7. Safety and efficacy of percutaneous cecostomy/colostomy for treatment of large bowel obstruction in adults with cancer.

    PubMed

    Tewari, Sanjit O; Getrajdman, George I; Petre, Elena N; Sofocleous, Constantinos T; Siegelbaum, Robert H; Erinjeri, Joseph P; Weiser, Martin R; Thornton, Raymond H

    2015-02-01

    To assess the safety and efficacy of image-guided percutaneous cecostomy/colostomy (PC) in the management of colonic obstruction in patients with cancer. Twenty-seven consecutive patients underwent image-guided PC to relieve large bowel obstruction at a single institution between 2000 and 2012. Colonic obstruction was the common indication. Patient demographics, diagnosis, procedural details, and outcomes including maximum colonic distension (MCD; ie, greatest transverse measurement of the colon on radiograph or scout computed tomography image) were recorded and retrospectively analyzed. Following PC, no patient experienced colonic perforation; pain was relieved in 24 of 27 patients (89%). Catheters with tip position in luminal gas rather than mixed stool/gas or stool were associated with greater decrease in MCD (-40%, -12%, and -16%, respectively), with the difference reaching statistical significance (P = .002 and P = .013, respectively). Catheter size was not associated with change in MCD (P = .978). Catheters were successfully removed from six of nine patients (67%) with functional obstructions and two of 18 patients (11%) with mechanical obstructions. One patient underwent endoscopic stent placement after catheter removal. Three patients required diverting colostomy after PC, and their catheters were removed at the time of surgery. One major complication (3.7%; subcutaneous emphysema, pneumomediastinum, and sepsis) occurred 8 days after PC and was successfully treated with cecostomy exchange, soft-tissue drainage, and intravenous antibiotic therapy. Image-guided PC is safe and effective for management of functional and mechanical bowel obstruction in patients with cancer. For optimal efficacy, catheters should terminate within luminal gas. Copyright © 2015 SIR. Published by Elsevier Inc. All rights reserved.

  8. Factors Affecting Outcomes in Cochlear Implant Recipients Implanted With a Perimodiolar Electrode Array Located in Scala Tympani.

    PubMed

    Holden, Laura K; Firszt, Jill B; Reeder, Ruth M; Uchanski, Rosalie M; Dwyer, Noël Y; Holden, Timothy A

    2016-12-01

    To identify primary biographic and audiologic factors contributing to cochlear implant (CI) performance variability in quiet and noise by controlling electrode array type and electrode position within the cochlea. Although CI outcomes have improved over time, considerable outcome variability still exists. Biographic, audiologic, and device-related factors have been shown to influence performance. Examining CI recipients with a consistent array type and electrode position may allow focused investigation into the outcome variability that results from biographic and audiologic factors. Thirty-nine adults (40 ears), implanted for at least 6 months with a perimodiolar electrode array known (via computed tomography [CT] imaging) to be in the scala tympani, participated. Test materials, administered in the CI-only condition, included monosyllabic words, sentences in quiet and noise, and spectral ripple discrimination. In quiet, scores were high, with mean word and sentence scores of 76% and 87%, respectively; however, sentence scores decreased by an average of 35 percentage points when noise was added. A principal components (PC) analysis of biographic and audiologic factors found three distinct factors: PC1 Age, PC2 Duration, and PC3 Pre-op Hearing. PC1 Age was the only factor that correlated, albeit modestly, with speech recognition in quiet and noise. Spectral ripple discrimination strongly correlated with the speech measures. For these recipients with consistent electrode position, PC1 Age was related to speech recognition performance. Consistent electrode position may have contributed to the high speech understanding in quiet. Inter-subject variability in noise may have been influenced by auditory/cognitive processing, known to decline with age, and by mechanisms that underlie spectral resolution ability.

  9. Principal component and clustering analysis on molecular dynamics data of the ribosomal L11·23S subdomain.

    PubMed

    Wolf, Antje; Kirschner, Karl N

    2013-02-01

    With improvements in computer speed and algorithm efficiency, MD simulations are sampling larger numbers of molecular and biomolecular conformations. Being able to qualitatively and quantitatively sift these conformations into meaningful groups is a difficult and important task, especially when considering the structure-activity paradigm. Here we present a study that combines two popular techniques, principal component (PC) analysis and clustering, for revealing major conformational changes that occur in molecular dynamics (MD) simulations. Specifically, we explored how clustering different PC subspaces, versus clustering the complete trajectory data, affects the resulting clusters. As a case example, we used the trajectory data from an explicitly solvated simulation of a bacterial L11·23S ribosomal subdomain, which is a target of thiopeptide antibiotics. Clustering was performed, using K-means and average-linkage algorithms, on data involving the first two to the first five PC subspace dimensions. For the average-linkage algorithm we found that data-point membership, cluster shape, and cluster size depended on the selected PC subspace data. In contrast, K-means provided very consistent results regardless of the selected subspace. Since we present results on a single model system, generalization concerning the clustering of different PC subspaces of other molecular systems is currently premature. However, our hope is that this study illustrates a) the complexities in selecting the appropriate clustering algorithm, b) the complexities in interpreting and validating their results, and c) that combining PC analysis with subsequent clustering can yield valuable dynamic and conformational information.
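
    The workflow the abstract describes can be reproduced in outline with standard tools. A minimal sketch (synthetic data standing in for the MD trajectory; the frame count, coordinate count, and cluster number are assumptions) that clusters the first two to five PC subspaces with both algorithms:

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.cluster import KMeans, AgglomerativeClustering

      rng = np.random.default_rng(0)
      frames = rng.normal(size=(1000, 60))      # 1000 frames x 60 flattened coordinates

      projected = PCA().fit_transform(frames)   # project every frame onto the PCs

      for n_dims in range(2, 6):                # first 2..5 PC subspace dimensions
          subspace = projected[:, :n_dims]
          km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(subspace)
          al = AgglomerativeClustering(n_clusters=4, linkage="average").fit(subspace)
          # Compare how cluster sizes shift as the subspace grows.
          print(n_dims, np.bincount(km.labels_), np.bincount(al.labels_))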

  10. AERO2S - SUBSONIC AERODYNAMIC ANALYSIS OF WINGS WITH LEADING- AND TRAILING-EDGE FLAPS IN COMBINATION WITH CANARD OR HORIZONTAL TAIL SURFACES (IBM PC VERSION)

    NASA Technical Reports Server (NTRS)

    Carlson, H. W.

    1994-01-01

    This code was developed to aid design engineers in the selection and evaluation of aerodynamically efficient wing-canard and wing-horizontal-tail configurations that may employ simple hinged-flap systems. Rapid estimates of the longitudinal aerodynamic characteristics of conceptual airplane lifting-surface arrangements are provided. The method is particularly well suited to configurations which, because of high-speed flight requirements, must employ thin wings with highly swept leading edges. The code is applicable to wings with either sharp or rounded leading edges. The code provides theoretical pressure distributions over the wing, the canard or horizontal tail, and the deflected flap surfaces, as well as estimates of the wing lift, drag, and pitching moments which account for attainable leading-edge thrust and leading-edge separation vortex forces. The wing planform information is specified by a series of leading-edge and trailing-edge breakpoints for a right-hand wing panel. Up to 21 pairs of coordinates may be used to describe both the leading edge and the trailing edge. The code has been written to accommodate 2000 right-hand panel elements, but can easily be modified to accommodate a larger or smaller number of elements depending on the capacity of the target computer platform. The code provides solutions for wing surfaces composed of all possible combinations of leading-edge and trailing-edge flap settings provided by the original deflection multipliers and by the flap deflection multipliers. Up to 25 pairs of leading-edge and trailing-edge flap deflection schedules may thus be treated simultaneously. The code also provides for an improved accounting of hinge-line singularities in the determination of wing forces and moments. To determine lifting-surface perturbation velocity distributions, the code provides for a maximum of 70 iterations. The program is constructed so that successive runs may be made with a given code entry. To make additional runs, it is necessary only to add an identification record and the namelist data that are to be changed from the previous run. This code was originally developed in 1989 in FORTRAN V on a CDC 6000 computer system, and was later ported to an MS-DOS environment. Both versions are available from COSMIC. There are only a few differences between the PC version (LAR-14458) and the CDC version (LAR-14178) of AERO2S distributed by COSMIC. The CDC version has one main source code file, while the PC version has two files which are easier to edit and compile on a PC. The PC version does not require a FORTRAN compiler which supports NAMELIST, because a special INPUT subroutine has been added. The CDC version includes two MODIFY decks which can be used to improve the code and prevent the possibility of some infrequently occurring errors; PC-version users will have to make these code changes manually. The PC version includes an executable which was generated with the Ryan-McFarland FORTRAN compiler and requires 253K of RAM and an 80x87 math co-processor. Using this executable, the sample case requires about four hours to execute on an 8-MHz AT-class microcomputer with a co-processor. The source code conforms to the FORTRAN 77 standard except that it uses variable names longer than six characters. With two minor modifications, the PC version should be portable to any computer with a FORTRAN compiler and sufficient memory. The CDC version of AERO2S is available in CDC NOS Internal format on a 9-track 1600 BPI magnetic tape. The PC version is available on a set of two 5.25-inch 360K MS-DOS format diskettes. IBM AT is a registered trademark of International Business Machines. MS-DOS is a registered trademark of Microsoft Corporation. CDC is a registered trademark of Control Data Corporation. NOS is a trademark of Control Data Corporation.

  11. The RANDOM computer program: A linear congruential random number generator

    NASA Technical Reports Server (NTRS)

    Miles, R. F., Jr.

    1986-01-01

    The RANDOM Computer Program is a FORTRAN program for generating random number sequences and testing linear congruential random number generators (LCGs). The linear congruential form of random number generator is discussed, and the selection of parameters of an LCG for a microcomputer is described. This document describes the following: (1) the RANDOM Computer Program; (2) RANDOM.MOD, the computer code needed to implement an LCG in a FORTRAN program; and (3) the RANCYCLE and ARITH Computer Programs, which provide computational assistance in the selection of parameters for an LCG. The RANDOM, RANCYCLE, and ARITH Computer Programs are written in Microsoft FORTRAN for the IBM PC microcomputer and its compatibles. With only minor modifications, the RANDOM Computer Program and its LCG can be run on most microcomputers or mainframe computers.
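
    For readers unfamiliar with the LCG recurrence X(n+1) = (a*X(n) + c) mod m that the report tests, a minimal sketch follows; the constants are the widely used Numerical Recipes values, not necessarily the parameters selected in the report:

      def lcg(seed, a=1664525, c=1013904223, m=2**32):
          """Linear congruential generator yielding floats in [0, 1)."""
          x = seed
          while True:
              x = (a * x + c) % m
              yield x / m

      gen = lcg(seed=12345)
      print([round(next(gen), 6) for _ in range(5)])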

  12. Active Acoustics using Bellhop-DRDC: Run Time Tests and Suggested Configurations for a Tracking Exercise in Shallow Scotian Waters

    DTIC Science & Technology

    2005-05-01

    simulated test to obtain the transmission loss and reverberation diagrams for 18 elements (one source, one towed array, and 16 buoys...were recorded using a 1.5 GHz Pentium 4 processor. The test results indicate that the Bellhop program runs fast enough to provide the required acoustic...was determined that the Bellhop program will be fast enough for these clients. Future Plans: It is intended to integrate further enhancements that

  13. Portable Electromyograph

    NASA Technical Reports Server (NTRS)

    De Luca, Gianluca; De Luca, Carlo J.; Bergman, Per

    2004-01-01

    A portable electronic apparatus records electromyographic (EMG) signals in as many as 16 channels at a sampling rate of 1,024 Hz in each channel. The apparatus (see figure) includes 16 differential EMG electrodes (each electrode corresponding to one channel) with cables and attachment hardware, reference electrodes, an input/output-and-power-adapter unit, a 16-bit analog-to-digital converter, and a hand-held computer that contains a removable 256-MB flash memory card. When all 16 EMG electrodes are in use, full-bandwidth data can be recorded in each channel for as long as 8 hours. The apparatus is powered by a battery and is small enough that it can be carried in a waist pouch. The computer is equipped with a small screen that can be used to display the incoming signals on each channel. Amplitude and time adjustments of this display can be made easily by use of touch buttons on the screen. The user can also set up a data-acquisition schedule to conform to experimental protocols or to manage battery energy and memory efficiently. Once the EMG data have been recorded, the flash memory card is removed from the EMG apparatus and placed in a flash-memory-card-reading external drive unit connected to a personal computer (PC). The PC can then read the data recorded in the 16 channels. Preferably, before further analysis, the data should be stored in the hard drive of the PC. The data files are opened and viewed on the PC by use of special-purpose software. The software for operation of the apparatus resides in a random-access memory (RAM), with backup power supplied by a small internal lithium cell. A backup copy of this software resides on the flash memory card. In the event of loss of both main and backup battery power and consequent loss of this software, the backup copy can be used to restore the RAM copy after power has been restored. Accessories for this device are also available. These include goniometers, accelerometers, foot switches, and force gauges.

  14. VizieR Online Data Catalog: RAVE open cluster pairs, groups and complexes (Conrad+, 2017)

    NASA Astrophysics Data System (ADS)

    Conrad, C.; Scholz, R.-D.; Kharchenko, N. V.; Piskunov, A. E.; Roeser, S.; Schilbach, E.; de Jong, R. S.; Schnurr, O.; Steinmetz, M.; Grebel, E. K.; Zwitter, T.; Bienayme, O.; Bland-Hawthorn, J.; Gibson, B. K.; Gilmore, G.; Kordopatis, G.; Kunder, A.; Navarro, J. F.; Parker, Q.; Reid, W.; Seabroke, G.; Siviero, A.; Watson, F.; Wyse, R.

    2017-01-01

    The presented tables summarize the parameters for the clusters and the mean values for the detected potential cluster groupings. The ages, distances and proper motions were taken from the Catalogue of Open Cluster Data (COCD; Kharchenko et al. 2005, Cat. J/A+A/438/1163, J/A+A/440/403), while additional radial velocities and metallicities were obtained from the Radial Velocity Experiment (RAVE; Kordopatis et al. 2013AJ....146..134K, Cat. III/272) and from the online compilation provided by Dias et al. (2002, See B/ocl). A description of the determination of the radial velocities and metallicities can be found in Conrad et al. 2014A&A...562A..54C. The potential groupings were identified using an adapted Friends-of-Friends algorithm with two sets of linking lengths, namely (100pc, 10km/s) and (100pc, 20km/s). The table clupar.dat (combining Tables A.1 and A.2 from the appendix of our paper) comprises the parameters collected for the final working sample of 432 clusters with available radial velocities, namely coordinates and proper motions in equatorial and galactic coordinates, distances, ages, metallicities, as well as Cartesian coordinates and velocities. The latter were computed by converting the spherical parameters to Cartesian space with the Sun as the point of origin. The tables grpar10.dat and grpar20.dat (listed as two parts in Table B.1 of the appendix of our paper) contain the mean values for the identified potential open cluster groupings for the two sets of linking lengths, 100pc and 10km/s (19 potential groupings) and 100pc and 20km/s (41 potential groupings), respectively. These were computed as simple means, while the uncertainties were computed as simple rms. We list the counting number, the number of members, the COCD number and name for each member, the mean Cartesian coordinates and velocities (with uncertainties), the mean distances (with uncertainties), the mean logarithmic ages (with uncertainties) and the mean metallicities, where available (with uncertainties, if at least two measurements were used). (4 data files).
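
    The grouping step can be illustrated with a minimal Friends-of-Friends sketch (synthetic positions and velocities stand in for the 432-cluster sample): two clusters are "friends" when both linking lengths, here 100 pc and 10 km/s, are satisfied, and the potential groupings are the connected components of the resulting friendship graph:

      import numpy as np
      from scipy.sparse import csr_matrix
      from scipy.sparse.csgraph import connected_components

      rng = np.random.default_rng(1)
      xyz = rng.uniform(-500, 500, size=(432, 3))   # Cartesian coordinates, pc
      uvw = rng.normal(0, 15, size=(432, 3))        # Cartesian velocities, km/s

      d_pos = np.linalg.norm(xyz[:, None] - xyz[None, :], axis=-1)
      d_vel = np.linalg.norm(uvw[:, None] - uvw[None, :], axis=-1)
      friends = (d_pos < 100.0) & (d_vel < 10.0)    # both linking lengths met

      n_groups, labels = connected_components(csr_matrix(friends), directed=False)
      sizes = np.bincount(labels)
      print("potential groupings with >= 2 members:", int(np.sum(sizes >= 2)))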

  15. VENTURE/PC manual: A multidimensional multigroup neutron diffusion code system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shapiro, A.; Huria, H.C.; Cho, K.W.

    1991-12-01

    VENTURE/PC is a recompilation of part of the Oak Ridge BOLD VENTURE code system, which will operate on an IBM PC or compatible computer. Neutron diffusion theory solutions are obtained for multidimensional, multigroup problems. This manual contains information associated with operating the code system. The purpose of the various modules used in the code system, and the input for these modules, are discussed. The PC code structure is also given. Version 2 included several enhancements not given in the original version of the code. In particular, flux iterations can be done in core rather than by reading and writing to disk, for problems which allow sufficient memory for such in-core iterations. This speeds up the iteration process. Version 3 does not include any of the special processors used in the previous versions. These special processors utilized formatted input for various elements of the code system. All such input data is now entered through the Input Processor, which produces standard interface files for the various modules in the code system. In addition, a Standard Interface File Handbook is included in the documentation which is distributed with the code, to assist in developing the input for the Input Processor.
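
    The in-core flux iteration that Version 2 introduced is, in miniature, a power (source) iteration on the diffusion eigenvalue problem. A minimal sketch for a one-group, one-dimensional slab with zero-flux boundaries follows; all cross sections and mesh parameters are illustrative assumptions, not VENTURE input:

      import numpy as np

      n, h = 50, 1.0                  # mesh cells and cell width (cm)
      D, Sa, nuSf = 1.3, 0.03, 0.04   # diffusion coeff., absorption, nu*fission

      # Tridiagonal finite-difference loss operator for -D*phi'' + Sa*phi.
      A = np.zeros((n, n))
      for i in range(n):
          A[i, i] = 2 * D / h**2 + Sa
          if i > 0:
              A[i, i - 1] = -D / h**2
          if i < n - 1:
              A[i, i + 1] = -D / h**2

      phi, k = np.ones(n), 1.0
      for _ in range(200):            # flux iterations, all kept in core
          phi_new = np.linalg.solve(A, nuSf * phi / k)
          k *= (nuSf * phi_new).sum() / (nuSf * phi).sum()
          phi = phi_new
      print("k-effective ~", round(k, 4))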

  16. Personal Computer System for Automatic Coronary Venous Flow Measurement

    PubMed Central

    Dew, Robert B.

    1985-01-01

    We developed an automated system based on an IBM PC/XT personal computer to measure coronary venous blood flow during cardiac catheterization. Flow is determined by a thermodilution technique in which a cold saline solution is infused through a catheter into the coronary venous system. Regional temperature fluctuations sensed by the catheter are used to determine great cardiac vein and coronary sinus blood flow. The computer system replaces manual methods of acquiring and analyzing the temperature data related to flow measurement, thereby increasing the speed and accuracy with which repetitive flow determinations can be made.
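
    The abstract does not give the flow equation, but the classic continuous-infusion thermodilution relation (Ganz et al.) that such coronary sinus systems typically implement can be sketched as follows; the 1.08 density-heat-capacity factor for saline versus blood and the temperatures below are assumed example values:

      def thermodilution_flow(q_inj, t_blood, t_inj, t_mix, k=1.08):
          """Estimated blood flow (mL/min) from an infusion of q_inj mL/min."""
          return q_inj * k * (t_mix - t_inj) / (t_blood - t_mix)

      # Example: 40 mL/min of saline at 20 C, blood at 37 C, mixed stream at 34 C.
      print(round(thermodilution_flow(40, 37.0, 20.0, 34.0), 1), "mL/min")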

  17. COATING ALTERNATIVES GUIDE (CAGE) USER'S GUIDE

    EPA Science Inventory

    The guide provides instructions for using the Coating Alternatives GuidE (CAGE) software program, version 1.0. It assumes that the user is familiar with the fundamentals of operating an IBM-compatible personal computer (PC) under the Microsoft disk operating system (MS-DOS). CAGE...

  18. Heliport noise model (HNM) version 1 user's guide

    DOT National Transportation Integrated Search

    1988-02-01

    This document contains the instructions to execute the Heliport Noise Model (HNM), Version 1. HNM Version 1 is a computer tool for determining the total impact of helicopter noise at and around heliports. The model runs on IBM PC/XT/AT personal compu...

  19. A Mobile Computing Solution for Collecting Functional Analysis Data on a Pocket PC

    PubMed Central

    Jackson, James; Dixon, Mark R

    2007-01-01

    The present paper provides a task analysis for creating a computerized data system using a Pocket PC and Microsoft Visual Basic. With Visual Basic software and any handheld device running the Windows Mobile operating system, this task analysis will allow behavior analysts to program and customize their own functional analysis data-collection system. The program allows the user to select the type of behavior to be recorded, choose between interval and frequency data collection, and summarize data for graphing and analysis. We also provide suggestions for customizing the data-collection system for idiosyncratic research and clinical needs. PMID:17624078
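
    As a rough stand-in for the paper's Visual Basic program (the behaviors, session length, and interval size below are assumed example values), the two recording modes it mentions reduce to a frequency count and a partial-interval proportion:

      from collections import Counter

      def frequency_count(events):
          """events: list of (seconds, behavior) pairs -> tally per behavior."""
          return Counter(behavior for _, behavior in events)

      def partial_interval(events, behavior, session_s=300, interval_s=10):
          """Proportion of intervals in which the target behavior occurred."""
          n = session_s // interval_s
          hits = {int(t // interval_s) for t, b in events
                  if b == behavior and t < session_s}
          return len(hits) / n

      events = [(3, "aggression"), (12, "aggression"), (14, "elopement"), (95, "aggression")]
      print(frequency_count(events))
      print(partial_interval(events, "aggression"))   # 3 of 30 intervals -> 0.1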

  20. Real time gamma-ray signature identifier

    DOEpatents

    Rowland, Mark [Alamo, CA; Gosnell, Tom B [Moraga, CA; Ham, Cheryl [Livermore, CA; Perkins, Dwight [Livermore, CA; Wong, James [Dublin, CA

    2012-05-15

    A real-time gamma-ray signature/source identification method and system that uses principal components analysis (PCA) to transform and substantially reduce one or more comprehensive spectral libraries of nuclear material types and configurations into concise representations/signatures that represent and index each individual predetermined spectrum in principal component (PC) space. An unknown gamma-ray signature may then be compared against the representative signatures to find a match, or at least to characterize the unknown signature from among all the entries in the library, with a single regression or simple projection into the PC space, so as to substantially reduce processing time and computing resources and enable real-time characterization and/or identification.
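
    In outline, the patented pipeline resembles PCA compression of a spectral library followed by a nearest-signature match. A minimal sketch with synthetic Gaussian "photopeaks" standing in for real spectra (the channel count, peak positions, and component count are assumptions):

      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(2)
      channels = np.arange(512)

      def peak(center, width=8):                # toy single-photopeak spectrum
          return np.exp(-0.5 * ((channels - center) / width) ** 2)

      library = np.array([peak(c) + 0.01 * rng.normal(size=512)
                          for c in (60, 120, 240, 400)])
      names = ["source A", "source B", "source C", "source D"]

      pca = PCA(n_components=3).fit(library)
      signatures = pca.transform(library)       # concise indexed signatures

      unknown = peak(240) + 0.05 * rng.normal(size=512)
      u = pca.transform(unknown[None, :])       # single projection into PC space
      match = int(np.argmin(np.linalg.norm(signatures - u, axis=1)))
      print("best match:", names[match])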
