Sample records for software system pass

  1. Space Shuttle Program Primary Avionics Software System (PASS) Success Legacy - Quality and Reliability Data

    NASA Technical Reports Server (NTRS)

    Orr, James K.; Peltier, Daryl

    2010-01-01

    This slide presentation reviews the avionics software system on board the space shuttle, with particular emphasis on quality and reliability. The Primary Avionics Software System (PASS) provides automatic and fly-by-wire control of critical shuttle systems and executes in redundant computers. Charts show the number of space shuttle flights versus time and PASS's development history, and other charts point to the reliability of the system's development. The system's reliability is also compared to predicted reliability.

  2. Space Shuttle Program Primary Avionics Software System (PASS) Success Legacy - Major Accomplishments and Lessons Learned Detail Historical Timeline Analysis

    NASA Technical Reports Server (NTRS)

    Orr, James K.

    2010-01-01

    This presentation focuses on the Space Shuttle Primary Avionics Software System (PASS) and the people who developed and maintained this system. One theme is to provide quantitative data on software quality and reliability over a 30-year period; the consistent data relate to code-break discrepancies. Requirements were supplied from external sources, and requirement inspections and measurements were not implemented until later, beginning in 1985. A second theme is to focus on the people and organization of PASS. Many individuals have supported the PASS project over the entire period while transitioning from company to company and contract to contract. Major events and transitions have impacted morale (both positively and negatively) across the life of the project.

  3. System integration test plan for HANDI 2000 business management system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, D.

    This document presents the system integration test plan for the Commercial-Off-The-Shelf (COTS) PassPort and PeopleSoft software, and the custom software created to work with the COTS products. The PassPort (PP) software is an integrated application for Accounts Payable, Contract Management, Inventory Management, Purchasing, and Material Safety Data Sheets. The PeopleSoft (PS) software is an integrated application for Project Costing, General Ledger, Human Resources/Training, Payroll, and Base Benefits.

  4. Space Shuttle Program Primary Avionics Software System (PASS) Success Legacy - Major Accomplishments and Lessons Learned

    NASA Technical Reports Server (NTRS)

    Orr, James K.

    2010-01-01

    This presentation has shown the accomplishments of the PASS project over three decades and highlighted the lessons learned. Over the entire time, our goal has been to continuously improve our process, implement automation for both quality and increased productivity, and identify and remove all defects due to prior execution of a flawed process, in addition to improving our processes following identification of significant process escapes. Morale and workforce instability have been issues, most significantly during 1993 to 1998 (a period of consolidation in the aerospace industry). The PASS project has also consulted with others, including the Software Engineering Institute, so as to be an early evaluator, adopter, and adapter of state-of-the-art software engineering innovations.

  5. An implementation and performance measurement of the progressive retry technique

    NASA Technical Reports Server (NTRS)

    Suri, Gaurav; Huang, Yennun; Wang, Yi-Min; Fuchs, W. Kent; Kintala, Chandra

    1995-01-01

    This paper describes a recovery technique called progressive retry for bypassing software faults in message-passing applications. The technique is implemented as reusable modules to provide application-level software fault tolerance. The paper describes the implementation of the technique and presents results from the application of progressive retry to two telecommunications systems. The results presented show that the technique is helpful in reducing the total recovery time for message-passing applications.
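
    The recovery idea is easiest to see in miniature. Below is a hedged Python sketch of progressive retry only in outline: recovery escalates from cheap message replay toward a full restart. The step granularity, `process_fn`, and handler names are illustrative stand-ins, not the paper's actual modules.

    ```python
    # Sketch of progressive retry: escalate the recovery scope step by step
    # instead of restarting immediately. `process_fn` and the step choices
    # are hypothetical stand-ins for the paper's reusable recovery modules.
    def progressive_retry(process_fn, message_log, max_steps=3):
        for step in range(1, max_steps + 1):
            try:
                if step == 1:
                    # Step 1: replay logged messages in their original order.
                    return process_fn(list(message_log))
                elif step == 2:
                    # Step 2: replay in a different order, hoping to bypass a
                    # timing-dependent (transient) software fault.
                    return process_fn(list(reversed(message_log)))
                else:
                    # Final step: give up on local recovery, restart cleanly.
                    return process_fn([])
            except Exception:
                continue  # escalate to the next, more expensive step
        raise RuntimeError("all retry steps exhausted")
    ```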

  6. A real-time MPEG software decoder using a portable message-passing library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kwong, Man Kam; Tang, P.T. Peter; Lin, Biquan

    1995-12-31

    We present a real-time MPEG software decoder that uses message-passing libraries such as MPL, p4 and MPI. The parallel MPEG decoder currently runs on the IBM SP system but can be easily ported to other parallel machines. This paper discusses our parallel MPEG decoding algorithm as well as the parallel programming environment in which it runs. Several technical issues are discussed, including balancing of decoding speed, memory limitations, I/O capacities, and optimization of MPEG decoding components. This project shows that a real-time portable software MPEG decoder is feasible on a general-purpose parallel machine.

  7. Experiences Using OpenMP Based on Compiler Directed Software DSM on a PC Cluster

    NASA Technical Reports Server (NTRS)

    Hess, Matthias; Jost, Gabriele; Mueller, Matthias; Ruehle, Roland; Biegel, Bryan (Technical Monitor)

    2002-01-01

    In this work we report on our experiences running OpenMP programs on a commodity cluster of PCs (personal computers) running a software distributed shared memory (DSM) system. We describe our test environment and report on the performance of a subset of the NAS (NASA Advanced Supercomputing) Parallel Benchmarks that have been automatically parallelized for OpenMP. We compare the performance of the OpenMP implementations with that of their message-passing counterparts and discuss performance differences.

  8. Laser Line Scan System for UXO Characterization

    DTIC Science & Technology

    2012-04-01

    ...they geometrically rectified. The Year 2 survey collected LLSS images from seven passes over two separate calibration strings and six passes over two... Microsoft DOS-based software tool. According to the side-by-side comparisons shown in Figure 9, the morphometrics were relatively equal between... survey. Note: The imagery in this figure is not presented at full resolution nor geometrically rectified. LLSS Targets, Pass One 1. Danforth

  9. Horizontal Directional Drilling-Length Detection Technology While Drilling Based on Bi-Electro-Magnetic Sensing.

    PubMed

    Wang, Yudan; Wen, Guojun; Chen, Han

    2017-04-27

    The drilling length is an important parameter in the process of horizontal directional drilling (HDD) exploration and recovery, but there has been a lack of accurate, automatically obtained statistics regarding this parameter. Herein, a technique for real-time HDD length detection and a management system based on the electromagnetic detection method, with a microprocessor and two magnetoresistive sensors and employing the software LabVIEW, are proposed. The basic principle is to detect the change in the magnetic-field strength near a current coil while the drill stem and drill-stem joint successively pass through the current coil forward or backward. The detection system consists of a hardware subsystem and a software subsystem. The hardware subsystem employs a single-chip microprocessor as the main controller. A current coil is installed in front of the clamping unit, and two magnetoresistive sensors are installed on the sides of the coil, symmetrically and perpendicular to the direction of movement of the drill pipe. Their responses are used to judge whether the drill-stem joint is passing through the clamping unit; the order of their responses is then used to judge the movement direction. The software subsystem is composed of visualization software running on the host computer and control software running on the slave microprocessor. The host-computer software processes, displays, and saves the drilling-length data, whereas the slave-microprocessor software operates the hardware system. A combined test demonstrated the feasibility of the entire drilling-length detection system.
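
    The order-of-response rule lends itself to a few lines of code. A minimal Python sketch of the direction and length bookkeeping described above; the timestamps, function names, and the pipe length are assumptions for illustration, not values from the paper.

    ```python
    # Infer drill-stem movement from the firing order of the two sensors
    # flanking the current coil, then update the accumulated drilling length.
    def joint_direction(t_sensor_a, t_sensor_b):
        # The sensor the joint reaches first trips first on a forward pass.
        return "forward" if t_sensor_a < t_sensor_b else "backward"

    def update_length(length_m, t_a, t_b, pipe_len_m=9.5):
        # Each detected joint passage adds or removes one pipe length
        # (9.5 m is a placeholder, not a value from the paper).
        step = pipe_len_m if joint_direction(t_a, t_b) == "forward" else -pipe_len_m
        return length_m + step
    ```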

  10. Horizontal Directional Drilling-Length Detection Technology While Drilling Based on Bi-Electro-Magnetic Sensing

    PubMed Central

    Wang, Yudan; Wen, Guojun; Chen, Han

    2017-01-01

    The drilling length is an important parameter in the process of horizontal directional drilling (HDD) exploration and recovery, but there has been a lack of accurate, automatically obtained statistics regarding this parameter. Herein, a technique for real-time HDD length detection and a management system based on the electromagnetic detection method, with a microprocessor and two magnetoresistive sensors and employing the software LabVIEW, are proposed. The basic principle is to detect the change in the magnetic-field strength near a current coil while the drill stem and drill-stem joint successively pass through the current coil forward or backward. The detection system consists of a hardware subsystem and a software subsystem. The hardware subsystem employs a single-chip microprocessor as the main controller. A current coil is installed in front of the clamping unit, and two magnetoresistive sensors are installed on the sides of the coil, symmetrically and perpendicular to the direction of movement of the drill pipe. Their responses are used to judge whether the drill-stem joint is passing through the clamping unit; the order of their responses is then used to judge the movement direction. The software subsystem is composed of visualization software running on the host computer and control software running on the slave microprocessor. The host-computer software processes, displays, and saves the drilling-length data, whereas the slave-microprocessor software operates the hardware system. A combined test demonstrated the feasibility of the entire drilling-length detection system. PMID:28448445

  11. SU-F-T-301: Planar Dose Pass Rate Inflation Due to the MapCHECK Measurement Uncertainty Function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bailey, D; Spaans, J; Kumaraswamy, L

    Purpose: To quantify the effect of the Measurement Uncertainty function on planar dosimetry pass rates, as analyzed with Sun Nuclear Corporation analytic software ("MapCHECK" or "SNC Patient"). This optional function is toggled on by default upon software installation, and automatically increases the user-defined dose percent difference (%Diff) tolerance for each planar dose comparison. Methods: Dose planes from 109 IMRT fields and 40 VMAT arcs were measured with the MapCHECK 2 diode array, and compared to calculated planes from a commercial treatment planning system. Pass rates were calculated within the SNC analytic software using varying calculation parameters, including Measurement Uncertainty on and off. By varying the %Diff criterion for each dose comparison performed with Measurement Uncertainty turned off, an effective %Diff criterion was defined for each field/arc corresponding to the pass rate achieved with MapCHECK Uncertainty turned on. Results: For 3%/3mm analysis, the Measurement Uncertainty function increases the user-defined %Diff by 0.8–1.1% on average, depending on plan type and calculation technique, for an average pass rate increase of 1.0–3.5% (maximum +8.7%). For 2%/2mm analysis, the Measurement Uncertainty function increases the user-defined %Diff by 0.7–1.2% on average, for an average pass rate increase of 3.5–8.1% (maximum +14.2%). The largest increases in pass rate are generally seen with poorly-matched planar dose comparisons; the MapCHECK Uncertainty effect is markedly smaller as pass rates approach 100%. Conclusion: The Measurement Uncertainty function may substantially inflate planar dose comparison pass rates for typical IMRT and VMAT planes. The types of uncertainties incorporated into the function (and their associated quantitative estimates) as described in the software user's manual may not accurately estimate realistic measurement uncertainty for the user's measurement conditions. Pass rates listed in published reports or otherwise compared to the results of other users or vendors should clearly indicate whether the Measurement Uncertainty function is used.
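
    The effective-%Diff idea can be reproduced with a toy calculation. The Python sketch below (dose difference only, no DTA, globally normalized; an illustration, not SNC's algorithm) shows how widening the tolerance by roughly 1% inflates the pass rate on synthetic data:

    ```python
    import numpy as np

    def percent_diff_pass_rate(measured, calculated, tol_percent):
        # Global normalization: differences taken relative to max dose.
        ref = np.max(calculated)
        diff = 100.0 * np.abs(measured - calculated) / ref
        return 100.0 * np.mean(diff <= tol_percent)

    rng = np.random.default_rng(0)
    calc = rng.uniform(0.5, 2.0, 10_000)                 # synthetic dose plane
    meas = calc * (1 + rng.normal(0, 0.02, calc.shape))  # 2% measurement noise
    print(percent_diff_pass_rate(meas, calc, 3.0))  # nominal 3% criterion
    print(percent_diff_pass_rate(meas, calc, 4.0))  # effective criterion, ~3% + 1%
    ```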

  12. Implementation plan for HANDI 2000 TWRS master equipment list

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    BENNION, S.I.

    This document presents the implementation plan for an additional deliverable of the HANDI 2000 Project. The PassPort Equipment Data module processes include those portions of the COTS PassPort system required to support tracking and management of the Master Equipment List for Lockheed Martin Hanford Company (LMHC) and custom software created to work with the COTS products.

  13. A tool to include gamma analysis software into a quality assurance program.

    PubMed

    Agnew, Christina E; McGarry, Conor K

    2016-03-01

    To provide a tool to enable gamma analysis software algorithms to be included in a quality assurance (QA) program. Four image sets were created, comprising two geometric images to independently test the distance-to-agreement (DTA) and dose-difference (DD) elements of the gamma algorithm, a clinical step-and-shoot IMRT field, and a clinical VMAT arc. The images were analysed using global and local gamma analysis with two in-house and eight commercially available software packages, encompassing 15 software versions. The effect of image resolution on gamma pass rates was also investigated. All but one software package accurately calculated the gamma passing rate for the geometric images. Variation in global gamma passing rates of 1% at 3%/3mm and over 2% at 1%/1mm was measured between software packages and software versions with analysis of appropriately sampled images. This study provides a suite of test images and the gamma pass rates achieved for a selection of commercially available software. This image suite will enable validation of gamma analysis software within a QA program and provide a frame of reference by which to compare results reported in the literature from various manufacturers and software versions. Copyright © 2015. Published by Elsevier Ireland Ltd.
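
    For context (this is the standard Low et al. 1998 formulation, stated here for reference rather than quoted from the record), the gamma index combines the DD and DTA criteria as

    ```latex
    \Gamma(\mathbf{r}_m) \;=\; \min_{\mathbf{r}_c}
    \sqrt{\frac{\lVert \mathbf{r}_c - \mathbf{r}_m \rVert^2}{\Delta d^{\,2}}
      \;+\; \frac{\bigl(D_c(\mathbf{r}_c) - D_m(\mathbf{r}_m)\bigr)^2}{\Delta D^{2}}},
    \qquad \text{pass if } \Gamma \le 1,
    ```

    where Δd and ΔD are the DTA and DD tolerances (e.g., 3 mm and 3%). The geometric test images above are designed to exercise the two terms independently.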

  14. Evaluation of 3D Gamma index calculation implemented in two commercial dosimetry systems

    NASA Astrophysics Data System (ADS)

    Xing, Aitang; Arumugam, Sankar; Deshpande, Shrikant; George, Armia; Vial, Philip; Holloway, Lois; Goozee, Gary

    2015-01-01

    The 3D Gamma index is one of the metrics that have been widely used for routine clinical patient-specific quality assurance for IMRT, Tomotherapy and VMAT. The algorithms for calculating the 3D Gamma index using global and local methods implemented in two software tools, PTW VeriSoft® (part of the OCTAVIUS 4D dosimeter system) and 3DVH™ from Sun Nuclear, were assessed. The Gamma index calculated by the two systems was compared with a manual calculation for one data set, and the Gamma pass rates calculated by the two systems were compared using 3%/3mm, 2%/2mm, 3%/2mm and 2%/3mm criteria for two additional data sets. The Gamma indices calculated by the two systems were accurate, but the Gamma pass rates calculated by the two software tools for the same data set with the same dose threshold differed, owing to different interpolation of the raw dose data and different implementations of the Gamma index calculation and other modules in the two tools. The mean difference was -1.3% ± 3.38% (1 SD), with a maximum difference of 11.7%.

  15. Parallel Ray Tracing Using the Message Passing Interface

    DTIC Science & Technology

    2007-09-01

    Ray-tracing software is available for lens design and for general optical systems modeling. It tends to be designed to run on a single processor and can be very... Cameron, Senior Member, IEEE... National Aeronautics and Space Administration (NASA), optical ray tracing, parallel computing, parallel processing, prime numbers, ray tracing

  16. Parallel-Processing Test Bed For Simulation Software

    NASA Technical Reports Server (NTRS)

    Blech, Richard; Cole, Gary; Townsend, Scott

    1996-01-01

    Second-generation Hypercluster computing system is multiprocessor test bed for research on parallel algorithms for simulation in fluid dynamics, electromagnetics, chemistry, and other fields with large computational requirements but relatively low input/output requirements. Built from standard, off-the-shelf hardware readily upgraded as improved technology becomes available. System used for experiments with such parallel-processing concepts as message-passing algorithms, debugging software tools, and computational steering. First-generation Hypercluster system described in "Hypercluster Parallel Processor" (LEW-15283).

  17. Federal Communications Commission (FCC) Transponder Loading Data Conversion Software. User's guide and software maintenance manual, version 1.2

    NASA Technical Reports Server (NTRS)

    Mallasch, Paul G.

    1993-01-01

    This volume contains the complete software system documentation for the Federal Communications Commission (FCC) Transponder Loading Data Conversion Software (FIX-FCC). This software was written to facilitate the formatting and conversion of FCC Transponder Occupancy (Loading) Data before it is loaded into the NASA Geosynchronous Satellite Orbital Statistics Database System (GSOSTATS). The information that FCC supplies NASA is in report form and must be converted into a form readable by the database management software used in the GSOSTATS application. Both the User's Guide and Software Maintenance Manual are contained in this document. This volume of documentation passed an independent quality assurance review and certification by the Product Assurance and Security Office of the Planning Research Corporation (PRC). The manuals were reviewed for format, content, and readability. The Software Management and Assurance Program (SMAP) life cycle and documentation standards were used in the development of this document. Accordingly, these standards were used in the review. Refer to the System/Software Test/Product Assurance Report for the Geosynchronous Satellite Orbital Statistics Database System (GSOSTATS) for additional information.

  18. Distributed Offline Data Reconstruction in BaBar

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pulliam, Teela M

    The BaBar experiment at SLAC is in its fourth year of running. The data processing system has been continuously evolving to meet the challenges of higher luminosity running and the increasing bulk of data to re-process each year. To meet these goals a two-pass processing architecture has been adopted, where 'rolling calibrations' are quickly calculated on a small fraction of the events in the first pass and the bulk data reconstruction is done in the second. This allows for quick detector feedback in the first pass and allows for the parallelization of the second pass over two or more separate farms. This two-pass system also allows for distribution of processing farms off-site. The first such site has been set up at INFN Padova. The challenges met here were many. The software was ported to a full Linux-based, commodity hardware system. The raw dataset, 90 TB, was imported from SLAC utilizing a 155 Mbps network link. A system for quality control and export of the processed data back to SLAC was developed. Between SLAC and Padova we are currently running three pass-one farms, with 32 CPUs each, and nine pass-two farms with 64 to 80 CPUs each. The pass-two farms can process between 2 and 4 million events per day. Details about the implementation and performance of the system will be presented.

  19. Design ATE systems for complex assemblies

    NASA Astrophysics Data System (ADS)

    Napier, R. S.; Flammer, G. H.; Moser, S. A.

    1983-06-01

    The use of ATE systems in radio specification testing can reduce the test time by approximately 90 to 95 percent. What is more, the test station does not require a highly trained operator. Since the system controller has full power over all the measurements, human errors are not introduced into the readings. The controller is immune to any need to increase output by allowing marginal units to pass through the system. In addition, the software compensates for predictable, repeatable system errors, for example, cabling losses, which are an inherent part of the test setup. With no variation in test procedures from unit to unit, there is a constant repeatability factor. Preparing the software, however, usually entails considerable expense. It is pointed out that many of the problems associated with ATE system software can be avoided with the use of a software-intensive, or computer-intensive, system organization. Its goal is to minimize the user's need for software development, thereby saving time and money.

  20. The PLATO IV Communications System.

    ERIC Educational Resources Information Center

    Sherwood, Bruce Arne; Stifle, Jack

    The PLATO IV computer-based educational system contains its own communications hardware and software for operating plasma-panel graphics terminals. Key echoing is performed by the central processing unit: every key pressed at a terminal passes through the entire system before anything appears on the terminal's screen. Each terminal is guaranteed…

  1. Geospatial Authentication

    NASA Technical Reports Server (NTRS)

    Lyle, Stacey D.

    2009-01-01

    A software package has been developed that performs authentication by determining whether a rover is within a set of boundaries or a specific area before granting access to critical geospatial information, using GPS signal structures as a means to authenticate mobile devices into a network wirelessly and in real time. The advantage lies in that the system only admits those within designated geospatial boundaries or areas into the server. The Geospatial Authentication software has two parts: server and client. The server software is a virtual private network (VPN) developed on the Linux operating system using the Perl programming language. The server can be a stand-alone VPN server or can be combined with other applications and services. The client software is GUI Windows CE software, or Mobile Graphical Software, that allows users to authenticate into a network. The purpose of the client software is to pass the needed satellite information to the server for authentication.
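
    At its core the server must answer one question: does the reported GPS fix fall inside the designated area? A hedged Python sketch of such a boundary check (a standard ray-casting point-in-polygon test; the fence coordinates are a made-up example, and the actual server is Perl, not Python):

    ```python
    # Admit a client only if its (lon, lat) fix lies inside the fence polygon.
    def inside(point, polygon):
        x, y = point
        hit = False
        for (x1, y1), (x2, y2) in zip(polygon, polygon[1:] + polygon[:1]):
            # Toggle on each polygon edge the horizontal ray crosses.
            if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                hit = not hit
        return hit

    fence = [(-97.40, 27.71), (-97.38, 27.71), (-97.38, 27.73), (-97.40, 27.73)]
    print(inside((-97.39, 27.72), fence))  # True -> grant VPN access
    ```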

  2. Cost effective system for monitoring of fish migration with a camera

    NASA Astrophysics Data System (ADS)

    Sečnik, Matej; Brilly, Mitja; Vidmar, Andrej

    2016-04-01

    Within the European LIFE project Ljubljanica connects (LIFE10 NAT/SI/000142) we have developed a cost-effective solution for monitoring fish migration through fish passes with an underwater camera. In the fish pass at Ambrožev trg and in the fish pass near the Fužine castle we installed a video camera called "Fishcam" to monitor the migration of fish through the fish passes and the success of their reconstruction. A live stream from the fishcams installed in the fish passes is available on our project website (http://ksh.fgg.uni-lj.si/ljubljanicaconnects/ang/12_camera). The system for fish monitoring is made of two parts: a waterproof box for the computer with charger, and the camera itself. We used a highly sensitive Sony analogue camera. The advantage of this camera is its very good sensitivity in low-light conditions, so it can take good-quality pictures even at night with minimal additional lighting. For night recording we use an additional IR reflector to illuminate passing fish. The camera is connected to an 8-inch tablet PC. We chose a tablet PC because it is small, cheap, relatively fast, and has low power consumption. On the computer we use software with advanced motion-detection capabilities, so we can also detect small fish. When a fish is detected by the software, its photograph is automatically saved to the local hard drive and, for backup, also to Google Drive. The system for monitoring fish migration has turned out to work very well. From the beginning of monitoring in June 2015 to the end of the year, more than 100,000 photographs were produced. A first analysis of them has already been prepared, estimating fish species and their frequency in passing the fish pass.

  3. Please Reduce Cycle Time

    DTIC Science & Technology

    2014-12-01

    ...observed an ERP system implementation that encountered this exact model. The modified COTS software worked and passed the acceptance tests but never... software-intensive program. We decided to create a very detailed master schedule with multiple supporting subschedules that linked and... implementing processes in place as part of the COTS implementation. For hardware, COTS can also present some risks. Many programs use COTS computers and servers

  4. A Down-to-Earth Educational Operating System for Up-in-the-Cloud Many-Core Architectures

    ERIC Educational Resources Information Center

    Ziwisky, Michael; Persohn, Kyle; Brylow, Dennis

    2013-01-01

    We present "Xipx," the first port of a major educational operating system to a processor in the emerging class of many-core architectures. Through extensions to the proven Embedded Xinu operating system, Xipx gives students hands-on experience with system programming in a distributed message-passing environment. We expose the software primitives…

  5. Commanding and Controlling Satellite Clusters (IEEE Intelligent Systems, November/December 2000)

    DTIC Science & Technology

    2000-01-01

    ...real-time operating system, a message-passing OS well suited for distributed... ground Flight processors ObjectAgent RTOS SCL RTOS RDMS Space command language Real-time operating system Rational database management system TS-21 RDMS... engineer with Princeton Satellite Systems. She is working with others to develop ObjectAgent software to run on the OSE Real-Time Operating System.

  6. [Discussion to the advanced application of scripting in RayStation TPS system].

    PubMed

    Zhang, Jianying; Sun, Jing; Wang, Yun

    2014-11-01

    In this study, implementation methods for several functions are explored on the RayStation 4.0 platform: passing information such as ROI names to a plan prescription Word file; passing the file to RayStation for plan evaluation; and passing the evaluation result to form an evaluation report file. The results show that RayStation scripts can exchange data with Word, as well as control the running of Word and the content of a Word file. Consequently, it is feasible for scripts to interact with third-party software and upgrade the performance of RayStation itself.

  7. WinHPC System Programming | High-Performance Computing | NREL

    Science.gov Websites

    WinHPC System Programming. Learn how to build and run an MPI (Message Passing Interface) program... where the MPI header (mpi.h) and library (msmpi.lib) are. To build from the command line, run... Start > Intel Software Development Tools > Intel C++ Compiler Professional... > C++ Build Environment for applications running
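
    For a flavor of what such an MPI program looks like, here is a minimal sketch in Python with mpi4py (shown instead of the C/msmpi toolchain the page describes; the script name is an assumption):

    ```python
    # Minimal MPI example: every rank builds a greeting, rank 0 gathers them.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    greeting = f"hello from rank {rank} of {size}"
    all_greetings = comm.gather(greeting, root=0)  # message passing to rank 0
    if rank == 0:
        print("\n".join(all_greetings))
    # Run with e.g.: mpiexec -n 4 python hello_mpi.py
    ```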

  8. Mark 4A antenna control system data handling architecture study

    NASA Technical Reports Server (NTRS)

    Briggs, H. C.; Eldred, D. B.

    1991-01-01

    A high-level review was conducted to provide an analysis of the existing architecture used to handle data and implement control algorithms for NASA's Deep Space Network (DSN) antennas and to make system-level recommendations for improving this architecture so that the DSN antennas can support the ever-tightening requirements of the next decade and beyond. It was found that the existing system is seriously overloaded, with processor utilization approaching 100 percent. A number of factors contribute to this overloading, including dated hardware, inefficient software, and a message-passing strategy that depends on serial connections between machines. At the same time, the system has shortcomings and idiosyncrasies that require extensive human intervention. A custom operating system kernel and an obscure programming language exacerbate the problems and should be modernized. A new architecture is presented that addresses these and other issues. Key features of the new architecture include a simplified message passing hierarchy that utilizes a high-speed local area network, redesign of particular processing function algorithms, consolidation of functions, and implementation of the architecture in modern hardware and software using mainstream computer languages and operating systems. The system would also allow incremental hardware improvements as better and faster hardware for such systems becomes available, and costs could potentially be low enough that redundancy would be provided economically. Such a system could support DSN requirements for the foreseeable future, though thorough consideration must be given to hard computational requirements, porting existing software functionality to the new system, and issues of fault tolerance and recovery.

  9. System Data Model (SDM) Source Code

    DTIC Science & Technology

    2012-08-23

    Makefile fragments excerpted from the source listing:
    CROSS_COMPILE=/opt/gumstix/build_arm_nofpu/staging_dir/bin/arm-linux-uclibcgnueabi-
    CC=$(CROSS_COMPILE)gcc
    CXX=$(CROSS_COMPILE)g++
    AR...
    ...and flags to pass to it
    LEX=flex
    LEXFLAGS=-B
    ## The parser generator to invoke and flags to pass to it
    YACC=bison
    YACCFLAGS...
    # Point to default PetaLinux root directory
    ifndef ROOTDIR
    ROOTDIR=$(PETALINUX)/software/petalinux-dist
    endif
    PATH:=$(PATH

  10. Software engineering

    NASA Technical Reports Server (NTRS)

    Fridge, Ernest M., III; Hiott, Jim; Golej, Jim; Plumb, Allan

    1993-01-01

    Today's software systems generally use obsolete technology, are not integrated properly with other software systems, and are difficult and costly to maintain. The discipline of reverse engineering is becoming prominent as organizations try to move their systems up to more modern and maintainable technology in a cost effective manner. The Johnson Space Center (JSC) created a significant set of tools to develop and maintain FORTRAN and C code during development of the space shuttle. This tool set forms the basis for an integrated environment to reengineer existing code into modern software engineering structures which are then easier and less costly to maintain and which allow a fairly straightforward translation into other target languages. The environment will support these structures and practices even in areas where the language definition and compilers do not enforce good software engineering. The knowledge and data captured using the reverse engineering tools is passed to standard forward engineering tools to redesign or perform major upgrades to software systems in a much more cost effective manner than using older technologies. The latest release of the environment was in Feb. 1992.

  11. A Survey of Rollback-Recovery Protocols in Message-Passing Systems

    DTIC Science & Technology

    1999-06-01

    ...and M.A. Castillo. "Checkpointing through garbage collection." Technical report, Departamento de Ciencia de la Computación, Escuela de Ingeniería... between consecutive checkpoints. It can be implemented by using the dirty bit of the memory-protection hardware or by emulating a dirty bit in software [4]... compare the program's state with the previous checkpoint in software, and writing the difference in a new checkpoint [46]. The required storage and
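
    The incremental-checkpointing idea mentioned in this excerpt (persist only what changed since the last checkpoint) can be sketched in a few lines of Python; dict entries stand in for memory pages, and this is an illustration, not the survey's code:

    ```python
    # Save only the differences against the previous checkpoint.
    def incremental_checkpoint(state, previous):
        delta = {k: v for k, v in state.items() if previous.get(k) != v}
        dead = [k for k in previous if k not in state]
        return delta, dead  # enough to rebuild `state` from `previous`

    # Roll back / roll forward by replaying a delta onto a base checkpoint.
    def restore(previous, delta, dead):
        state = dict(previous)
        state.update(delta)
        for k in dead:
            state.pop(k, None)
        return state
    ```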

  12. Experiences using OpenMP based on Computer Directed Software DSM on a PC Cluster

    NASA Technical Reports Server (NTRS)

    Hess, Matthias; Jost, Gabriele; Mueller, Matthias; Ruehle, Roland

    2003-01-01

    In this work we report on our experiences running OpenMP programs on a commodity cluster of PCs running a software distributed shared memory (DSM) system. We describe our test environment and report on the performance of a subset of the NAS Parallel Benchmarks that have been automatically parallelized for OpenMP. We compare the performance of the OpenMP implementations with that of their message-passing counterparts and discuss performance differences.

  13. Delivering real-time status and arrival information to commuter rail passengers at complex stations

    DOT National Transportation Integrated Search

    2003-08-01

    Software was developed for calculating real-time train status in an Automated Train Information Display System (ATIDS) at NJ Transit. Interfaces were developed for passing schedules and real-time train position and routing data from a rail traffic co...

  14. Healthwatch-2 System Overview

    NASA Technical Reports Server (NTRS)

    Barszcz, Eric; Mosher, Marianne; Huff, Edward M.

    2004-01-01

    Healthwatch-2 (HW-2) is a research tool designed to facilitate the development and testing of in-flight health monitoring algorithms. HW-2 software is written in C/C++ and executes on an x86-based computer running the Linux operating system. The executive module has interfaces for collecting various signal data, such as vibration, torque, tachometer, and GPS. It is designed to perform in-flight time or frequency averaging based on specifications defined in a user-supplied configuration file. Averaged data are then passed to a user-supplied algorithm written as a Matlab function. This allows researchers a convenient method for testing in-flight algorithms. In addition to its in-flight capabilities, HW-2 software is also capable of reading archived flight data and processing it as if collected in-flight. This allows algorithms to be developed and tested in the laboratory before being flown. Currently HW-2 has passed its checkout phase and is collecting data on a Bell OH-58C helicopter operated by the U.S. Army at NASA Ames Research Center.
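
    As an illustration of the kind of tachometer-referenced averaging such a configuration file might request, here is a hedged Python sketch of time-synchronous averaging (HW-2 itself is C/C++ with Matlab user functions; the function names and samples-per-revolution value are assumptions):

    ```python
    import numpy as np

    def tsa(vibration, tach_indices, samples_per_rev=256):
        """Resample each shaft revolution to a fixed length and average them,
        reinforcing shaft-synchronous components and suppressing noise."""
        revs = []
        for start, stop in zip(tach_indices[:-1], tach_indices[1:]):
            rev = vibration[start:stop]          # one revolution of samples
            x_old = np.linspace(0.0, 1.0, len(rev))
            x_new = np.linspace(0.0, 1.0, samples_per_rev)
            revs.append(np.interp(x_new, x_old, rev))
        return np.mean(revs, axis=0)
    ```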

  15. Constructing a working taxonomy of functional Ada software components for real-time embedded system applications

    NASA Technical Reports Server (NTRS)

    Wallace, Robert

    1986-01-01

    A major impediment to a systematic attack on Ada software reusability is the lack of an effective taxonomy for software component functions. The scope of all possible applications of Ada software is considered too great to allow the practical development of a working taxonomy. Instead, for the purposes herein, the scope of Ada software application is limited to device and subsystem control in real-time embedded systems. A functional approach is taken in constructing the taxonomy tree for the identified Ada domain. The use of modular software functions as a starting point fits well with the object-oriented programming philosophy of Ada. Examples of the types of functions represented within the working taxonomy are real-time kernels, interrupt service routines, synchronization and message passing, data conversion, digital filtering and signal conditioning, and device control. The constructed taxonomy is proposed as a framework from which a needs analysis can be performed to reveal voids in current Ada real-time embedded programming efforts for the Space Station.

  16. SU-E-T-133: Assessing IMRT Treatment Delivery Accuracy and Consistency On a Varian TrueBeam Using the SunNuclear PerFraction EPID Dosimetry Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dieterich, S; Trestrail, E; Holt, R

    2015-06-15

    Purpose: To assess if the TrueBeam HD120 collimator is delivering small IMRT fields accurately and consistently throughout the course of treatment using the SunNuclear PerFraction EPID dosimetry software. Methods: 7-field IMRT plans for 8 canine patients who passed IMRT QA using SunNuclear MapCHECK DQA were selected for this study. The animals were set up using CBCT image guidance. The EPID fluence maps were captured for each treatment field and each treatment fraction, with the first-fraction EPID data serving as the baseline for comparison. The Sun Nuclear PerFraction software was used to compare the EPID data for subsequent fractions using a Gamma (3%/3mm) pass rate of 90%. To simulate requirements for SRS, the data were reanalyzed using a Gamma (3%/1mm) pass rate of 90%. Low-dose, low- and high-gradient thresholds were used to focus the analysis on clinically relevant parts of the dose distribution. Results: Not all fractions could be analyzed, because during some of the treatment courses the DICOM tags in the EPID images intermittently change from CU to US (unspecified), which would indicate a temporary loss of EPID calibration. This technical issue is still being investigated. For the remaining fractions, the vast majority (7/8 of patients, 95% of fractions, and 96.6% of fields) pass the less stringent Gamma criteria. The more stringent Gamma criteria caused a drop in pass rate (90% of fractions, 84% of fields). For the patient with the lowest pass rate, wet towel bolus was used. Another patient with low pass rates experienced masseter muscle wasting. Conclusion: EPID dosimetry using the PerFraction software demonstrated that the majority of fields passed a Gamma (3%/3mm) for IMRT treatments delivered with a TrueBeam HD120 MLC. Pass rates dropped for a DTA of 1mm to model SRS tolerances. PerFraction pass rates can flag missing bolus or internal shields. Sanjeev Saini is an employee of Sun Nuclear Corporation. For this study, a pre-release version of PerFRACTION 1.1 software from Sun Nuclear Corporation was used.

  17. Software reengineering

    NASA Technical Reports Server (NTRS)

    Fridge, Ernest M., III

    1991-01-01

    Today's software systems generally use obsolete technology, are not integrated properly with other software systems, and are difficult and costly to maintain. The discipline of reverse engineering is becoming prominent as organizations try to move their systems up to more modern and maintainable technology in a cost effective manner. JSC created a significant set of tools to develop and maintain FORTRAN and C code during development of the Space Shuttle. This tool set forms the basis for an integrated environment to re-engineer existing code into modern software engineering structures which are then easier and less costly to maintain and which allow a fairly straightforward translation into other target languages. The environment will support these structures and practices even in areas where the language definition and compilers do not enforce good software engineering. The knowledge and data captured using the reverse engineering tools is passed to standard forward engineering tools to redesign or perform major upgrades to software systems in a much more cost effective manner than using older technologies. A beta version of the environment was released in Mar. 1991. The commercial potential for such re-engineering tools is very great. CASE TRENDS magazine reported it to be the primary concern of over four hundred of the top MIS executives.

  18. Instrumentation development for space debris optical observation system in Indonesia: Preliminary results

    NASA Astrophysics Data System (ADS)

    Dani, Tiar; Rachman, Abdul; Priyatikanto, Rhorom; Religia, Bahar

    2015-09-01

    An increasing number of pieces of space junk in orbit has raised their chances of falling in the Indonesian region. So far, three rocket-body debris objects have been found, in Bengkulu, Gorontalo and Lampung. LAPAN has successfully developed software for monitoring space debris that passes over Indonesia at altitudes below 200 km. To support the software-based system, a hardware-based system has been developed around optical instruments. The system has been under development since early 2014 and consists of two subsystems: a telescopic system and a wide-field system. The telescopic system uses CCD cameras and a reflecting telescope with relatively high sensitivity. The wide-field system uses DSLR cameras, binoculars and a combination of a CCD with a DSLR lens. Methods and preliminary results of the systems are presented.

  19. Development of multichannel analyzer using sound card ADC for nuclear spectroscopy system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibrahim, Maslina Mohd; Yussup, Nolida; Lombigit, Lojius

    This paper describes the development of a Multi-Channel Analyzer (MCA) using a sound-card analogue-to-digital converter (ADC) for a nuclear spectroscopy system. The system is divided into a hardware module and a software module. The hardware module consists of a 2" by 2" NaI(Tl) detector, a Pulse Shaping Amplifier (PSA) and the built-in ADC chip readily available in any computer's sound system. The software module is divided into two parts: pre-processing of the raw digital input and the MCA software itself. A band-pass filter and baseline stabilization and correction were implemented for the pre-processing. For the MCA development, the pulse-height analysis method was used to process the signal before displaying it using a histogram technique. The development and test results for using the sound card as an MCA are discussed.

  20. The integrated proactive surveillance system for prostate cancer.

    PubMed

    Wang, Haibin; Yatawara, Mahendra; Huang, Shao-Chi; Dudley, Kevin; Szekely, Christine; Holden, Stuart; Piantadosi, Steven

    2012-01-01

    In this paper, we present the design and implementation of the integrated proactive surveillance system for prostate cancer (PASS-PC). The integrated PASS-PC is a multi-institutional web-based system aimed at collecting a variety of data on prostate cancer patients in a standardized and efficient way. The integrated PASS-PC was commissioned by the Prostate Cancer Foundation (PCF) and built through the joint efforts of a group of experts in medical oncology, genetics, pathology, nutrition, and cancer research informatics. Their main goal is facilitating the efficient and uniform collection of critical demographic, lifestyle, nutritional, dietary and clinical information to be used in developing new strategies in diagnosing, preventing and treating prostate cancer. The integrated PASS-PC is designed based on common industry standards - a three-tiered architecture and a Service-Oriented Architecture (SOA). It utilizes open source software and programming languages such as HTML, PHP, CSS, JQuery, Drupal and MySQL. We also use a commercial database management system - Oracle 11g. The integrated PASS-PC project uses a "confederation model" that encourages participation of any interested center, irrespective of its size or location. The integrated PASS-PC utilizes a standardized approach to data collection and reporting, and uses extensive validation procedures to prevent entering erroneous data. The integrated PASS-PC controlled vocabulary is harmonized with the National Cancer Institute (NCI) Thesaurus. Currently, two cancer centers in the USA are participating in the integrated PASS-PC project. The final system has three main components: 1. National Prostate Surveillance Network (NPSN) website; 2. NPSN myConnect portal; 3. Proactive Surveillance System for Prostate Cancer (PASS-PC). PASS-PC is a cancer Biomedical Informatics Grid (caBIG) compatible product. The integrated PASS-PC provides a foundation for collaborative prostate cancer research. It has been built to meet the short term goal of gathering prostate cancer related data, but also with the prerequisites in place for future evolution into a cancer research informatics platform. In the future this will be vital for successful prostate cancer studies, care and treatment.

  1. The Integrated Proactive Surveillance System for Prostate Cancer

    PubMed Central

    Wang, Haibin; Yatawara, Mahendra; Huang, Shao-Chi; Dudley, Kevin; Szekely, Christine; Holden, Stuart; Piantadosi, Steven

    2012-01-01

    In this paper, we present the design and implementation of the integrated proactive surveillance system for prostate cancer (PASS-PC). The integrated PASS-PC is a multi-institutional web-based system aimed at collecting a variety of data on prostate cancer patients in a standardized and efficient way. The integrated PASS-PC was commissioned by the Prostate Cancer Foundation (PCF) and built through the joint efforts of a group of experts in medical oncology, genetics, pathology, nutrition, and cancer research informatics. Their main goal is facilitating the efficient and uniform collection of critical demographic, lifestyle, nutritional, dietary and clinical information to be used in developing new strategies in diagnosing, preventing and treating prostate cancer. The integrated PASS-PC is designed based on common industry standards – a three-tiered architecture and a Service-Oriented Architecture (SOA). It utilizes open source software and programming languages such as HTML, PHP, CSS, JQuery, Drupal and MySQL. We also use a commercial database management system – Oracle 11g. The integrated PASS-PC project uses a "confederation model" that encourages participation of any interested center, irrespective of its size or location. The integrated PASS-PC utilizes a standardized approach to data collection and reporting, and uses extensive validation procedures to prevent entering erroneous data. The integrated PASS-PC controlled vocabulary is harmonized with the National Cancer Institute (NCI) Thesaurus. Currently, two cancer centers in the USA are participating in the integrated PASS-PC project. The final system has three main components: 1. National Prostate Surveillance Network (NPSN) website; 2. NPSN myConnect portal; 3. Proactive Surveillance System for Prostate Cancer (PASS-PC). PASS-PC is a cancer Biomedical Informatics Grid (caBIG) compatible product. The integrated PASS-PC provides a foundation for collaborative prostate cancer research. It has been built to meet the short term goal of gathering prostate cancer related data, but also with the prerequisites in place for future evolution into a cancer research informatics platform. In the future this will be vital for successful prostate cancer studies, care and treatment. PMID:22505956

  2. UDECON: deconvolution optimization software for restoring high-resolution records from pass-through paleomagnetic measurements

    NASA Astrophysics Data System (ADS)

    Xuan, Chuang; Oda, Hirokuni

    2015-11-01

    The rapid accumulation of continuous paleomagnetic and rock magnetic records acquired from pass-through measurements on superconducting rock magnetometers (SRM) has greatly contributed to our understanding of the paleomagnetic field and paleo-environment. Pass-through measurements are inevitably smoothed and altered by the convolution effect of the SRM sensor response, and deconvolution is needed to restore high-resolution paleomagnetic and environmental signals. Although various deconvolution algorithms have been developed, the lack of easy-to-use software has hindered the practical application of deconvolution. Here, we present the standalone graphical software UDECON as a convenient tool to perform optimized deconvolution for pass-through paleomagnetic measurements using the algorithm recently developed by Oda and Xuan (Geochem Geophys Geosyst 15:3907-3924, 2014). With the preparation of a format file, UDECON can directly read pass-through paleomagnetic measurement files collected at different laboratories. After the SRM sensor response is determined and loaded to the software, optimized deconvolution can be conducted using two different approaches (i.e., "Grid search" and "Simplex method") with adjustable initial values or ranges for smoothness, corrections of sample length, and shifts in measurement position. UDECON provides a suite of tools to conveniently view and check various types of original measurement and deconvolution data. Multiple steps of measurement and/or deconvolution data can be compared simultaneously to check consistency and to guide further deconvolution optimization. Deconvolved data, together with the loaded original measurement and SRM sensor response data, can be saved and reloaded for further treatment in UDECON. Users can also export the optimized deconvolution data to a text file for analysis in other software.
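
    The underlying problem is a deconvolution: the measurement is the true magnetization convolved with the SRM sensor response. As a hedged illustration of the idea only (a generic Tikhonov-regularized inverse filter in Python, not the Oda and Xuan (2014) optimization that UDECON actually implements):

    ```python
    import numpy as np

    def deconvolve(measured, sensor_response, alpha=1e-3):
        """Recover a sharpened signal from a convolution-smoothed measurement."""
        n = len(measured)
        H = np.fft.rfft(sensor_response, n)  # sensor response spectrum
        M = np.fft.rfft(measured, n)
        # Regularized inverse filter: alpha damps frequencies where |H| is
        # small, which is where naive division would amplify noise.
        X = M * np.conj(H) / (np.abs(H) ** 2 + alpha)
        return np.fft.irfft(X, n)
    ```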

  3. A software bus for thread objects

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Li, Dehuai

    1995-01-01

    The authors have implemented a software bus for lightweight threads in an object-oriented programming environment that allows for rapid reconfiguration and reuse of thread objects in discrete-event simulation experiments. While previous research in object-oriented, parallel programming environments has focused on direct communication between threads, our lightweight software bus, called the MiniBus, provides a means to isolate threads from their contexts of execution by restricting communications between threads to message-passing via their local ports only. The software bus maintains a topology of connections between these ports. It routes, queues, and delivers messages according to this topology. This approach allows for rapid reconfiguration and reuse of thread objects in other systems without making changes to the specifications or source code. A layered approach that provides the needed transparency to developers is presented. Examples of using the MiniBus are given, and the value of bus architectures in building and conducting simulations of discrete-event systems is discussed.
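
    A hedged Python sketch of the port-and-topology pattern described above (queues stand in for thread ports; the class and method names are invented for illustration, not MiniBus APIs):

    ```python
    from queue import Queue

    # Threads never address each other directly: they write to a local
    # out-port, and the bus routes to in-ports per its topology map.
    class Bus:
        def __init__(self):
            self.ports: dict[str, Queue] = {}
            self.topology: dict[str, list[str]] = {}  # out-port -> in-ports

        def port(self, name):
            return self.ports.setdefault(name, Queue())

        def connect(self, src, dst):
            self.topology.setdefault(src, []).append(dst)

        def send(self, src, message):
            for dst in self.topology.get(src, []):  # route by topology only
                self.port(dst).put(message)

    bus = Bus()
    bus.connect("producer.out", "consumer.in")   # reconfigurable wiring
    bus.send("producer.out", {"event": "tick"})
    print(bus.port("consumer.in").get())
    ```

    Because senders name only their own out-port, rewiring the topology map reconfigures a simulation without touching thread code, which is the reuse property the MiniBus targets.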

  4. [Study for portable dynamic ECG monitor and recorder].

    PubMed

    Yang, Pengcheng; Li, Yongqin; Chen, Bihua

    2012-09-01

    This paper presents a portable dynamic ECG monitor system based on the MSP430F149 microcontroller. The electrocardiogram detection system consists of an ECG detection circuit, a man-machine interaction module, the MSP430F149 and upper-computer software. The ECG detection circuit includes a preamplifier, a second-order Butterworth low-pass filter, a high-pass filter, and a 50 Hz trap circuit, detecting the electrocardiogram while suppressing various kinds of interference effectively. The microcontroller collects three channels of analog signals, which can be displayed on a TFT LCD. An SD card records real-time data continuously, using an implementation of the FAT16 file system. Finally, a host-computer system interface is also designed to analyze the ECG signal, and the analysis results can provide diagnostic references to clinical doctors.
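
    The analog conditioning chain maps naturally onto digital filters. A hedged Python/scipy sketch of an equivalent chain (the paper builds these stages in hardware; the sampling rate and cutoffs here are assumed, typical ECG values):

    ```python
    from scipy import signal

    fs = 500.0  # sampling rate in Hz (assumed)

    # Second-order Butterworth band-pass covering a typical ECG band,
    # standing in for the cascaded high-pass and low-pass stages.
    b_bp, a_bp = signal.butter(2, [0.5, 40.0], btype="bandpass", fs=fs)

    # 50 Hz mains notch ("trap") filter.
    b_notch, a_notch = signal.iirnotch(50.0, Q=30.0, fs=fs)

    def condition(ecg):
        # Zero-phase filtering so QRS timing is not distorted.
        x = signal.filtfilt(b_bp, a_bp, ecg)
        return signal.filtfilt(b_notch, a_notch, x)
    ```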

  5. A multiarchitecture parallel-processing development environment

    NASA Technical Reports Server (NTRS)

    Townsend, Scott; Blech, Richard; Cole, Gary

    1993-01-01

    A description is given of the hardware and software of a multiprocessor test bed - the second generation Hypercluster system. The Hypercluster architecture consists of a standard hypercube distributed-memory topology, with multiprocessor shared-memory nodes. By using standard, off-the-shelf hardware, the system can be upgraded to use rapidly improving computer technology. The Hypercluster's multiarchitecture nature makes it suitable for researching parallel algorithms in computational field simulation applications (e.g., computational fluid dynamics). The dedicated test-bed environment of the Hypercluster and its custom-built software allows experiments with various parallel-processing concepts such as message passing algorithms, debugging tools, and computational 'steering'. Such research would be difficult, if not impossible, to achieve on shared, commercial systems.

  6. Decision-aids for enhancing intergovernmental interactions: The Pre-notification Analysis Support System (PASS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lester, M.; Liebow, E.; Holm, J.

    1994-05-01

    The Department of Energy (DOE) plans to honor its commitment to government-to-government interactions by providing advance notice of DOE spent fuel and high-level waste shipments to Indian tribes whose jurisdictions are crossed by or adjacent to transportation routes. The tribes are important contributors to a regional response network, and providing tribes with advance notice of DOE shipping plans marks the start -- not the end -- of direct, government-to-government interactions with DOE. The Tribal Prenotification Analysis Support System (PASS) is being developed for the Office of Special Programs within the Department's Office of Environmental Restoration and Waste Management. PASS will help DOE-Headquarters to coordinate field office activities and provide technical and institutional support to the DOE field offices. PASS is designed to be used by anyone with minimum computer literacy and having contemporary computer hardware and software. It uses on-screen maps to choose and display a shipment route, and to display the tribal jurisdictions. With forms that are easy to understand, it provides information about each jurisdiction and points of contact. PASS records all contacts, commitments made, and actions taken.

  7. Software error detection

    NASA Technical Reports Server (NTRS)

    Buechler, W.; Tucker, A. G.

    1981-01-01

    Several methods were employed to detect both the occurrence and the source of errors in the operational software of the AN/SLQ-32, a large embedded real-time electronic warfare command and control system for the ROLM 1606 computer. The ROLM computer provides information about invalid addressing, improper use of privileged instructions, stack overflows, and unimplemented instructions. Additionally, software techniques were developed to detect invalid jumps, indices out of range, infinite loops, stack underflows, and field-size errors. Finally, data are saved to provide information about the status of the system when an error is detected. This information includes I/O buffers, interrupt counts, stack contents, and recently passed locations. The various errors detected, techniques to assist in debugging problems, and segment simulation on a non-target computer are discussed. These error-detection techniques were a major factor in the success of finding the primary cause of error in 98% of over 500 system dumps.

  8. Development of a distributed control system for TOTEM experiment using ASIO Boost C++ libraries

    NASA Astrophysics Data System (ADS)

    Cafagna, F.; Mercadante, A.; Minafra, N.; Quinto, M.; Radicioni, E.

    2014-06-01

    The main goals of the TOTEM Experiment at the LHC are the measurements of the elastic and total p-p cross sections and the studies of the diffractive dissociation processes. Those scientific objectives are achieved by using three tracking detectors symmetrically arranged around the interaction point called IP5. The control system is based on C++ software that allows the user, by means of a graphical interface, direct access to hardware and handling of device configuration. A first release of the software was designed as a monolithic block, with all functionalities merged together. Such an approach soon showed its limits, mainly poor reusability and maintainability of the source code, evident not only in the phase of bug-fixing but also when one wants to extend functionality or apply other modifications. This led to the decision of a radical redesign of the software, now based on dialogue (message passing) among separate building blocks. Thanks to the acquired extensibility, the software gained new features and is now a complete tool with which it is possible not only to configure different devices, interfacing with a large subset of buses such as I2C and VME, but also to perform data acquisition for both calibration and physics runs. Furthermore, the software lets the user set up a series of operations to be executed sequentially to handle complex operations. To achieve maximum flexibility, the program units may be run either as a single process or as separate processes on different PCs which exchange messages over the network, thus allowing remote control of the system. Portability is ensured by the adoption of the ASIO (Asynchronous Input Output) library of Boost, a cross-platform suite of libraries which is a candidate to become part of the C++11 standard. We present the state of the art of this project and outline future perspectives. In particular, we describe the system architecture and the message-passing scheme. We also report on the results obtained in a first complete test of the software, both as a single process and on two PCs.

  9. Ship electric propulsion simulator based on networking technology

    NASA Astrophysics Data System (ADS)

    Zheng, Huayao; Huang, Xuewu; Chen, Jutao; Lu, Binquan

    2006-11-01

    In response to recent trends in shipbuilding, a novel electric propulsion simulator (EPS) has been developed at the Marine Simulation Center of SMU. The architecture, software functions, and FCS network technology of the EPS and the integrated power system (IPS) are described. A dedicated physical model was built for the ship's POD propeller. The POD power is supplied from the simulated 6.6 kV medium-voltage main switchboard, and its control can be exercised in either local or remote mode. Over a LAN, the simulated state information of the EPS is passed to the physical POD model, which reflects the real thruster's working status under different sea conditions. The software includes a vessel-propeller mathematical module, the thruster control system, integrated distribution and emergency management, a double closed-loop control system, vessel static water resistance and dynamics software, and the instructor's main control software. The monitoring and control system is realized by a real-time data-collection system and CAN bus technology. During construction, most devices, such as monitoring panels and intelligent meters, were developed in the laboratory; they are based on embedded microcomputer systems with CAN interfaces to link to the network. They have also been used successfully in practice and should be suitable for the future demands of digitalized ships.

  10. 47 CFR 73.9006 - Add-in covered demodulator products.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Section 73.9006 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES... passed to an output (e.g., where a demodulator add-in card in a personal computer passes such content to an associated software application installed in the same computer), it shall pass such content: (1...

  11. The Legacy of Space Shuttle Flight Software

    NASA Technical Reports Server (NTRS)

    Hickey, Christopher J.; Loveall, James B.; Orr, James K.; Klausman, Andrew L.

    2011-01-01

    The initial goals of the Space Shuttle Program required that the avionics and software systems blaze new trails in advancing avionics system technology. Many of the requirements placed on avionics and software were accomplished for the first time on this program. Examples include comprehensive digital fly-by-wire technology, use of a digital databus for flight critical functions, fail operational/fail safe requirements, complex automated redundancy management, and the use of a high-order software language for flight software development. In order to meet the operational and safety goals of the program, the Space Shuttle software had to be extremely high quality, reliable, robust, reconfigurable and maintainable. To achieve this, the software development team evolved a software process focused on continuous process improvement and defect elimination that consistently produced highly predictable and top quality results, providing software managers the confidence needed to sign each Certificate of Flight Readiness (COFR). This process, which has been appraised at Capability Maturity Model (CMM)/Capability Maturity Model Integration (CMMI) Level 5, has resulted in one of the lowest software defect rates in the industry. This paper will present an overview of the evolution of the Primary Avionics Software System (PASS) project and processes over thirty years, an argument for strong statistical control of software processes with examples, an overview of the success story for identifying and driving out errors before flight, a case study of the few significant software issues and how they were either identified before flight or slipped through the process onto a flight vehicle, and identification of the valuable lessons learned over the life of the project.

  12. Benchmarking hypercube hardware and software

    NASA Technical Reports Server (NTRS)

    Grunwald, Dirk C.; Reed, Daniel A.

    1986-01-01

    It was long a truism in computer systems design that balanced systems achieve the best performance. Message passing parallel processors are no different. To quantify the balance of a hypercube design, an experimental methodology was developed and the associated suite of benchmarks was applied to several existing hypercubes. The benchmark suite includes tests of both processor speed in the absence of internode communication and message transmission speed as a function of communication patterns.
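
    The classic internode test of this kind is a ping-pong benchmark. The sketch below uses MPI for illustration (an assumption; the 1986 study predates MPI and used vendor-specific hypercube libraries), timing round trips between two nodes over a range of message sizes.

      #include <mpi.h>
      #include <cstdio>
      #include <vector>

      int main(int argc, char** argv) {
          MPI_Init(&argc, &argv);
          int rank;
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          const int reps = 1000;
          for (int bytes = 1; bytes <= (1 << 20); bytes <<= 2) {
              std::vector<char> buf(bytes);
              MPI_Barrier(MPI_COMM_WORLD);
              double t0 = MPI_Wtime();
              for (int i = 0; i < reps; ++i) {
                  if (rank == 0) {
                      // Send to node 1 and wait for the echo.
                      MPI_Send(buf.data(), bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                      MPI_Recv(buf.data(), bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                               MPI_STATUS_IGNORE);
                  } else if (rank == 1) {
                      // Echo every message straight back.
                      MPI_Recv(buf.data(), bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                               MPI_STATUS_IGNORE);
                      MPI_Send(buf.data(), bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
                  }
              }
              double dt = MPI_Wtime() - t0;
              if (rank == 0)
                  std::printf("%8d bytes: %.2f us round trip\n",
                              bytes, 1e6 * dt / reps);
          }
          MPI_Finalize();
          return 0;
      }

    Run with at least two ranks (e.g., mpirun -np 2); plotting round-trip time against message size separates the per-message startup overhead from the per-byte transmission cost, the two quantities such balance studies compare against processor speed.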

  13. HyperPASS, a New Aeroassist Tool

    NASA Technical Reports Server (NTRS)

    Gates, Kristin; McRonald, Angus; Nock, Kerry

    2005-01-01

    A new software tool designed to perform aeroassist studies has been developed by Global Aerospace Corporation (GAC). The Hypersonic Planetary Aeroassist Simulation System (HyperPASS) [1] enables users to perform guided aerocapture, guided ballute aerocapture, aerobraking, orbit decay, or unguided entry simulations at any of six target bodies (Venus, Earth, Mars, Jupiter, Titan, or Neptune). HyperPASS is currently being used for trade studies investigating (1) aerocapture performance with alternate aeroshell types, varying flight path angle and entry velocity, different g-load and heating limits, and angle-of-attack and angle-of-bank variations; (2) variable, attached ballute geometry; (3) railgun-launched projectile trajectories; and (4) preliminary orbit decay evolution. After completing a simulation, there are numerous visualization options in which data can be plotted, saved, or exported to various formats. Several analysis examples are described.

  14. The Raid distributed database system

    NASA Technical Reports Server (NTRS)

    Bhargava, Bharat; Riedl, John

    1989-01-01

    Raid, a robust and adaptable distributed database system for transaction processing (TP), is described. Raid is a message-passing system, with server processes on each site to manage concurrent processing, consistent replicated copies during site failures, and atomic distributed commitment. A high-level layered communications package provides a clean location-independent interface between servers. The latest design of the package delivers messages via shared memory in a configuration with several servers linked into a single process. Raid provides the infrastructure to investigate various methods for supporting reliable distributed TP. Measurements on TP and server CPU time are presented, along with data from experiments on communications software, consistent replicated copy control during site failures, and concurrent distributed checkpointing. A software tool for evaluating the implementation of TP algorithms in an operating-system kernel is proposed.

  15. Performance Evaluation of Communication Software Systems for Distributed Computing

    NASA Technical Reports Server (NTRS)

    Fatoohi, Rod

    1996-01-01

    In recent years there has been increasing interest in object-oriented distributed computing, since it is better equipped to deal with complex systems while providing extensibility, maintainability, and reusability. At the same time, several new high-speed network technologies have emerged for local and wide area networks. However, the performance of networking software is not improving as fast as the networking hardware and the workstation microprocessors. This paper gives an overview and evaluates the performance of the Common Object Request Broker Architecture (CORBA) standard in a distributed computing environment at NASA Ames Research Center. The environment consists of two testbeds of SGI workstations connected by four networks: Ethernet, FDDI, HiPPI, and ATM. The performance results for three communication software systems are presented, analyzed, and compared: the BSD socket programming interface; IONA's Orbix, an implementation of the CORBA specification; and the PVM message-passing library. The results show that high-level communication interfaces, such as CORBA and PVM, can achieve reasonable performance under certain conditions.

  16. Preliminary design of the spatial filters used in the multipass amplification system of TIL

    NASA Astrophysics Data System (ADS)

    Zhu, Qihua; Zhang, Xiao Min; Jing, Feng

    1998-12-01

    The spatial filters used in the Technique Integration Line, which has a multi-pass amplifier, serve not only to suppress parasitic high-spatial-frequency modes but also to provide locations for inserting a light isolator and injecting the seed beam, and to relay the image while the beam passes through the amplifiers several times. To fulfill these functions, the parameters of the spatial filters are optimized through calculations and analyses that account for avoiding the plasma blow-off effect and for component damage from ghost-beam foci. The ghost beams are calculated by ray tracing. Software was developed to evaluate the tolerances of the spatial filters and their components, and to align the whole system in computer simulation.

  17. Development and application of an acceptance testing model

    NASA Technical Reports Server (NTRS)

    Pendley, Rex D.; Noonan, Caroline H.; Hall, Kenneth R.

    1992-01-01

    The process of acceptance testing large software systems for NASA has been analyzed, and an empirical planning model of the process constructed. This model gives managers accurate predictions of the staffing needed, the productivity of a test team, and the rate at which the system will pass. Applying the model to a new system shows a high level of agreement between the model and actual performance. The model also gives managers an objective measure of process improvement.

  18. Combat Service Support Model Development: BRASS - TRANSLOG - Army 21

    DTIC Science & Technology

    1984-07-01

    throughout the system. Transitional problems may address specific hardware and related software, such as the Standard Army Ammunition System (SAAS)... Combat Service Support Model Development: BRASS - TRANSLOG - Army 21; Contract Number DAAK11-84-D-0004, Task Order #1; Draft Report, July 1984; Armament Systems, Inc., 211 West Bel Air Avenue, P.O. Box 158, Aberdeen, MD 21001.

  19. Proteus: a reconfigurable computational network for computer vision

    NASA Astrophysics Data System (ADS)

    Haralick, Robert M.; Somani, Arun K.; Wittenbrink, Craig M.; Johnson, Robert; Cooper, Kenneth; Shapiro, Linda G.; Phillips, Ihsin T.; Hwang, Jenq N.; Cheung, William; Yao, Yung H.; Chen, Chung-Ho; Yang, Larry; Daugherty, Brian; Lorbeski, Bob; Loving, Kent; Miller, Tom; Parkins, Larye; Soos, Steven L.

    1992-04-01

    The Proteus architecture is a highly parallel MIMD (multiple-instruction, multiple-data) machine, optimized for large-granularity tasks such as machine vision and image processing. The system can achieve 20 gigaflops (80 gigaflops peak). It accepts data via multiple serial links at a rate of up to 640 megabytes/second. The system employs a hierarchical reconfigurable interconnection network, the highest level being a circuit-switched Enhanced Hypercube serial interconnection network for internal data transfers. The system is designed to use 256 to 1,024 RISC processors. The processors use one-megabyte external read/write allocating caches to reduce multiprocessor contention. The system detects, locates, and replaces faulty subsystems using redundant hardware to facilitate fault tolerance. The parallelism is directly controllable through an advanced software system for partitioning, scheduling, and development. System software includes a translator for the INSIGHT language, a parallel debugger, low- and high-level simulators, and a message-passing system for all control needs. Image processing application software includes a variety of point operators, neighborhood operators, convolution, and the mathematical morphology operations of binary and gray-scale dilation, erosion, opening, and closing.

  20. Technical support plan for HANDI 2000 business management system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, D.E.

    The Hanford Data Integration 2000 (HANDI 2000) Project will result in an integrated and comprehensive set of functional applications containing core information necessary to support the Project Hanford Management Contract. It is based on the Commercial-Off-The-Shelf (COTS) product solution with commercially proven business processes. The PassPort (PP) software is an integrated application for Accounts Payable, Contract Management, Inventory Management, and Purchasing. The PeopleSoft (PS) software is an integrated application for General Ledger, Project Costing, Human Resources, Payroll, Benefits, and Training. The implementation of this set of products, as the first deliverable of the HANDI 2000 Project, is referred to as Business Management System (BMS) and Chemical Management.

  1. Sensor Agent Processing Software (SAPS)

    DTIC Science & Technology

    2004-05-01

    buildings, sewers, and tunnels. The time scale governs many aspects of tactical sensing. In high-intensity combat situations forces move within... Figure 9-2 (BAE Systems Sitex00 high-bandwidth channel): a data file in memory feeding subscribers through a switch and a high-pass IIR filter operating on 256-sample blocks (xin[256], xout[256]).

  2. Asbestos: Securing Untrusted Software with Interposition

    DTIC Science & Technology

    2005-09-01

    consistent intelligible interfaces to different types of resource. Message-based operating systems, such as Accent, Amoeba, Chorus, L4, Spring...control on self-authenticating capabilities, precluding policies that restrict delegation. L4 uses a strict hierarchy of interpositions, useful for...the OS design space amenable to secure application construction. Similar effects might be possible with message-passing microkernels, or unwieldy

  3. Parallel computation and the Basis system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, G.R.

    1992-12-16

    A software package has been written that can facilitate efforts to develop powerful, flexible, and easy-to-use programs that can run in single-processor, massively parallel, and distributed computing environments. Particular attention has been given to the difficulties posed by a program consisting of many science packages that represent subsystems of a complicated, coupled system. Methods have been found to maintain independence of the packages by hiding data structures without increasing the communication costs in a parallel computing environment. Concepts developed in this work are demonstrated by a prototype program that uses library routines from two existing software systems, Basis and Parallel Virtual Machine (PVM). Most of the details of these libraries have been encapsulated in routines and macros that could be rewritten for alternative libraries that possess certain minimum capabilities. The prototype software uses a flexible master-and-slaves paradigm for parallel computation and supports domain decomposition with message passing for partitioning work among slaves. Facilities are provided for accessing variables that are distributed among the memories of slaves assigned to subdomains. The software is named PROTOPAR.

  4. Parallel computation and the basis system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, G.R.

    1993-05-01

    A software package has been written that can facilitate efforts to develop powerful, flexible, and easy-to-use programs that can run in single-processor, massively parallel, and distributed computing environments. Particular attention has been given to the difficulties posed by a program consisting of many science packages that represent subsystems of a complicated, coupled system. Methods have been found to maintain independence of the packages by hiding data structures without increasing the communication costs in a parallel computing environment. Concepts developed in this work are demonstrated by a prototype program that uses library routines from two existing software systems, Basis and Parallel Virtual Machine (PVM). Most of the details of these libraries have been encapsulated in routines and macros that could be rewritten for alternative libraries that possess certain minimum capabilities. The prototype software uses a flexible master-and-slaves paradigm for parallel computation and supports domain decomposition with message passing for partitioning work among slaves. Facilities are provided for accessing variables that are distributed among the memories of slaves assigned to subdomains. The software is named PROTOPAR.
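
    A minimal sketch of the master-and-slaves, domain-decomposition pattern described in the PROTOPAR abstracts above, written against MPI rather than the original Basis/PVM libraries (an assumption made for illustration): the master scatters subdomains, every process works on its own piece, and a global result is reduced back to the master.

      #include <mpi.h>
      #include <cstdio>
      #include <numeric>
      #include <vector>

      int main(int argc, char** argv) {
          MPI_Init(&argc, &argv);
          int rank, size;
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);

          const int n = 1 << 20;        // global problem size (illustrative)
          const int chunk = n / size;   // one subdomain per process

          std::vector<double> local(chunk);
          if (rank == 0) {
              // Master owns the global domain and partitions it among slaves.
              std::vector<double> global(n, 1.0);
              MPI_Scatter(global.data(), chunk, MPI_DOUBLE,
                          local.data(), chunk, MPI_DOUBLE, 0, MPI_COMM_WORLD);
          } else {
              MPI_Scatter(nullptr, chunk, MPI_DOUBLE,
                          local.data(), chunk, MPI_DOUBLE, 0, MPI_COMM_WORLD);
          }

          // Each process computes on its own subdomain...
          double partial = std::accumulate(local.begin(), local.end(), 0.0);

          // ...and the master collects a global result.
          double total = 0.0;
          MPI_Reduce(&partial, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
          if (rank == 0) std::printf("global sum = %f\n", total);

          MPI_Finalize();
          return 0;
      }

    Hiding the scatter and reduce behind package-local routines, as the abstract describes, is what lets each science package stay independent of the particular message-passing library underneath.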

  5. System Testing of Ground Cooling System Components

    NASA Technical Reports Server (NTRS)

    Ensey, Tyler Steven

    2014-01-01

    This internship focused primarily on software unit testing of Ground Cooling System (GCS) components, one of the three types of tests (unit, integrated, and COTS/regression) used in software verification. Unit tests exercise the software of individual components before it is implemented in hardware. A unit test checks the control data, usage procedures, and operating procedures of a particular component to determine whether the program is fit for use. Three files are needed to build and run a unit test: a Model Test file (.mdl), a Simulink SystemTest file (.test), and an autotest script (.m). The Model Test file contains the component under test with the appropriate Discrete Physical Interface (DPI). The Simulink SystemTest file exercises all of the component's requirements. The autotest script verifies that the component passes Model Advisor and system testing, and writes the results to the appropriate files. Once unit testing of the GCS components is complete, they can be incorporated into the GCS schematic, and the software of the GCS model as a whole can be tested using integrated testing. Unit testing is a critical part of software verification; it allows basic components to be tested before a higher-fidelity model is tested, making the testing process flow in an orderly manner.

  6. Diagnosis diagrams for passing signals on an automatic block signaling railway section

    NASA Astrophysics Data System (ADS)

    Spunei, E.; Piroi, I.; Chioncel, C. P.; Piroi, F.

    2018-01-01

    This work presents a diagnosis method for railway traffic safety installations. More specifically, the authors present a series of diagnosis charts for passing signals on a railway block equipped with an automatic block signaling installation. These charts are based on the electric schemes used in operation, and are subsequently used to develop a diagnosis software package. The software package contributes substantially to reducing the time needed to detect and remedy these types of installation faults. Its use eliminates wrong decisions in the fault detection process, decisions that may lead to longer remedy times and, sometimes, to railway traffic incidents.

  7. A Framework for Testing Scientific Software: A Case Study of Testing Amsterdam Discrete Dipole Approximation Software

    NASA Astrophysics Data System (ADS)

    Shao, Hongbing

    Software testing of scientific software systems often suffers from the test oracle problem, i.e., the lack of test oracles. The Amsterdam discrete dipole approximation code (ADDA) is a scientific software system that can be used to simulate light scattering by scatterers of various types, and its testing suffers from the test oracle problem. In this thesis work, I established a framework for testing scientific software systems and evaluated it using ADDA as a case study. To test ADDA, I first used the CMMIE code as a pseudo-oracle for simulating light scattering by a homogeneous sphere scatterer; comparable results were obtained from ADDA and the CMMIE code, validating ADDA for use with homogeneous sphere scatterers. I then used an experimental measurement of light scattering by a homogeneous sphere for further validation, and ADDA produced a simulation comparable to the measured result. Next, I used metamorphic testing to generate test cases covering scatterers of various geometries, orientations, and degrees of homogeneity. ADDA was tested under each of these test cases and all tests passed. The use of statistical analysis together with metamorphic testing is discussed as a future direction. In short, using ADDA as a case study, I established a testing framework, combining pseudo-oracles, experimental results, and metamorphic testing techniques, for scientific software systems that suffer from the test oracle problem. Each of these techniques is necessary and contributes to the testing of the software under test.
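
    To illustrate the metamorphic idea on which part of the framework rests: when no oracle gives the true output, successive runs can still be checked against each other through a known relation between their inputs and outputs. The sketch below uses a toy stand-in for the simulator; the function and its quadratic amplitude relation are illustrative assumptions, not ADDA's actual interface.

      #include <cassert>
      #include <cmath>
      #include <cstdio>

      // Hypothetical system under test: far-field "intensity" for a
      // scatterer of a given radius illuminated with a given amplitude.
      double scattered_intensity(double radius, double amplitude) {
          // Toy Rayleigh-like scaling: intensity ~ a^2 * r^6.
          return amplitude * amplitude * std::pow(radius, 6.0);
      }

      int main() {
          const double radius = 0.3;
          for (double a = 0.5; a < 4.0; a *= 2.0) {
              double base   = scattered_intensity(radius, a);
              double follow = scattered_intensity(radius, 2.0 * a);
              // Metamorphic relation: I(2a) == 4 * I(a), up to tolerance.
              // Neither run needs a known "correct" value on its own.
              assert(std::fabs(follow - 4.0 * base) <= 1e-9 * follow);
          }
          std::printf("metamorphic relation held for all test cases\n");
          return 0;
      }

    Relations of this kind (rotations of the scatterer, permutations of dipoles, scalings of the incident field) turn one expensive simulation into a family of mutually checking test cases.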

  8. VAC: Versatile Advection Code

    NASA Astrophysics Data System (ADS)

    Tóth, Gábor; Keppens, Rony

    2012-07-01

    The Versatile Advection Code (VAC) is a freely available general hydrodynamic and magnetohydrodynamic simulation software that works in 1, 2 or 3 dimensions on Cartesian and logically Cartesian grids. VAC runs on any Unix/Linux system with a Fortran 90 (or 77) compiler and Perl interpreter. VAC can run on parallel machines using either the Message Passing Interface (MPI) library or a High Performance Fortran (HPF) compiler.

  9. Research on realization scheme of interactive voice response (IVR) system

    NASA Astrophysics Data System (ADS)

    Jin, Xin; Zhu, Guangxi

    2003-12-01

    In this paper, a novel interactive voice response (IVR) system is proposed that differs markedly from traditional designs. Built on software operation and network control, the IVR system depends only on software in the server hosting the system and on the hardware of user-side network terminals, such as gateways (GW), personal gateways (PG), and PCs. The system transmits audio over the Internet using the Real-time Transport Protocol (RTP) and controls call flow using a finite state machine (FSM) driven by H.245 messages sent from the user side and by system control factors. Compared with other existing schemes, this IVR system offers several advantages: greatly reduced system cost, full utilization of existing network resources, and enhanced flexibility. The system can be deployed in any service server anywhere on the Internet and is even suitable for wireless applications based on packet-switched communication. The IVR system has been implemented and has passed system testing.
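
    A minimal sketch of a message-driven finite state machine of the general kind described above; the states, events, and transitions are illustrative assumptions, not the actual H.245 call model.

      #include <cstdio>

      enum class State { Idle, Prompting, Collecting, Done };
      enum class Event { CallSetup, PromptPlayed, DigitReceived, Hangup };

      // Pure transition function: each incoming message stimulates one step.
      State step(State s, Event e) {
          switch (s) {
              case State::Idle:
                  return e == Event::CallSetup ? State::Prompting : s;
              case State::Prompting:
                  return e == Event::PromptPlayed ? State::Collecting : s;
              case State::Collecting:
                  if (e == Event::DigitReceived) return State::Collecting;
                  if (e == Event::Hangup)        return State::Done;
                  return s;
              case State::Done:
                  return s;
          }
          return s;
      }

      int main() {
          State s = State::Idle;
          // A hypothetical trace of messages arriving from the user side.
          const Event trace[] = {Event::CallSetup, Event::PromptPlayed,
                                 Event::DigitReceived, Event::Hangup};
          for (Event e : trace) {
              s = step(s, e);
              std::printf("state -> %d\n", static_cast<int>(s));
          }
          return 0;
      }

    Keeping the transition function pure, with the network layer merely feeding it events, is what lets such a design live entirely in server software.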

  10. Embedded Web Technology: Applying World Wide Web Standards to Embedded Systems

    NASA Technical Reports Server (NTRS)

    Ponyik, Joseph G.; York, David W.

    2002-01-01

    Embedded Systems have traditionally been developed in a highly customized manner. The user interface hardware and software, along with the interface to the embedded system, are typically unique to the system for which they are built, resulting in extra cost to the system in terms of development time and maintenance effort. World Wide Web standards have been developed over the past ten years with the goal of allowing servers and clients to interoperate seamlessly. The client and server systems can consist of differing hardware and software platforms, but the World Wide Web standards allow them to interface without knowing the details of the system at the other end of the interface. Embedded Web Technology is the merging of Embedded Systems with the World Wide Web. Embedded Web Technology decreases the cost of developing and maintaining the user interface by allowing the user to interface to the embedded system through a web browser running on a standard personal computer. Embedded Web Technology can also be used to simplify an Embedded System's internal network.

  11. Community Petascale Project for Accelerator Science and Simulation: Advancing Computational Science for Future Accelerators and Accelerator Technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spentzouris, P.; /Fermilab; Cary, J.

    The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modelling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC-1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors. ComPASS is in the first year of executing its plan to develop the next-generation HPC accelerator modeling tools. ComPASS aims to develop an integrated simulation environment that will utilize existing and new accelerator physics modules with petascale capabilities, by employing modern computing and solver technologies. The ComPASS vision is to deliver to accelerator scientists a virtual accelerator and virtual prototyping modeling environment, with the necessary multiphysics, multiscale capabilities. The plan for this development includes delivering accelerator modeling applications appropriate for each stage of the ComPASS software evolution. Such applications are already being used to address challenging problems in accelerator design and optimization. The ComPASS organization for software development and applications accounts for the natural domain areas (beam dynamics, electromagnetics, and advanced acceleration), and all areas depend on the enabling technologies activities, such as solvers and component technology, to deliver the desired performance and integrated simulation environment. The ComPASS applications focus on computationally challenging problems important for design or performance optimization to all major HEP, NP, and BES accelerator facilities. With the cost and complexity of particle accelerators rising, the use of computation to optimize their designs and find improved operating regimes becomes essential, potentially leading to significant cost savings with modest investment.

  12. Measuring Sea-Ice Motion in the Arctic with Real Time Photogrammetry

    NASA Astrophysics Data System (ADS)

    Brozena, J. M.; Hagen, R. A.; Peters, M. F.; Liang, R.; Ball, D.

    2014-12-01

    The U.S. Naval Research Laboratory, in coordination with other groups, has been collecting sea-ice data in the Arctic off the north coast of Alaska with an airborne system employing a radar altimeter, LiDAR and a photogrammetric camera in an effort to obtain wide swaths of measurements coincident with Cryosat-2 footprints. Because the satellite tracks traverse areas of moving pack ice, precise real-time estimates of the ice motion are needed to fly a survey grid that will yield complete data coverage. This requirement led us to develop a method to find the ice motion from the aircraft during the survey. With the advent of real-time orthographic photogrammetric systems, we developed a system that measures the sea ice motion in-flight, and also permits post-process modeling of sea ice velocities to correct the positioning of radar and LiDAR data. For the 2013 and 2014 field seasons, we used this Real Time Ice Motion Estimation (RTIME) system to determine ice motion using Applanix's Inflight Ortho software with an Applanix DSS439 system. Operationally, a series of photos were taken in the survey area. The aircraft then turned around and took more photos along the same line several minutes later. Orthophotos were generated within minutes of collection and evaluated by custom software to find photo footprints and potential overlap. Overlapping photos were passed to the correlation software, which selects a series of "chips" in the first photo and looks for the best matches in the second photo. The correlation results are then passed to a density-based clustering algorithm to determine the offset of the photo pair. To investigate any systematic errors in the photogrammetry, we flew several flight lines over a fixed point on various headings, over an area of non-moving ice in 2013. The orthophotos were run through the correlation software to find any residual offsets, and run through additional software to measure chip positions and offsets relative to the aircraft heading. X- and Y-offsets in situations where one of the chips was near the center of its photo were plotted to find the along- and across-track errors vs. distance from the photo center. Corrections were determined and applied to the survey data, reducing the mean error by about 1 meter. The corrections were applied to all of the subsequent survey data.

  13. Hyperswitch Communication Network Computer

    NASA Technical Reports Server (NTRS)

    Peterson, John C.; Chow, Edward T.; Priel, Moshe; Upchurch, Edwin T.

    1993-01-01

    Hyperswitch Communications Network (HCN) computer is prototype multiple-processor computer being developed. Incorporates improved version of hyperswitch communication network described in "Hyperswitch Network For Hypercube Computer" (NPO-16905). Designed to support high-level software and expansion of itself. HCN computer is message-passing, multiple-instruction/multiple-data computer offering significant advantages over older single-processor and bus-based multiple-processor computers, with respect to price/performance ratio, reliability, availability, and manufacturing. Design of HCN operating-system software provides flexible computing environment accommodating both parallel and distributed processing. Also achieves balance among following competing factors: performance in processing and communications, ease of use, and tolerance of (and recovery from) faults.

  14. NASA Tech Briefs, June 2013

    NASA Technical Reports Server (NTRS)

    2013-01-01

    Topics include: Cloud Absorption Radiometer Autonomous Navigation System - CANS, Software Method for Computed Tomography Cylinder Data Unwrapping, Re-slicing, and Analysis, Discrete Data Qualification System and Method Comprising Noise Series Fault Detection, Simple Laser Communications Terminal for Downlink from Earth Orbit at Rates Exceeding 10 Gb/s, Application Program Interface for the Orion Aerodynamics Database, Hyperspectral Imager-Tracker, Web Application Software for Ground Operations Planning Database (GOPDb) Management, Software Defined Radio with Parallelized Software Architecture, Compact Radar Transceiver with Included Calibration, Phase Change Material Thermal Power Generator, The Thermal Hogan - A Means of Surviving the Lunar Night, Micromachined Active Magnetic Regenerator for Low-Temperature Magnetic Coolers, Nano-Ceramic Coated Plastics, Preparation of a Bimetal Using Mechanical Alloying for Environmental or Industrial Use, Phase Change Material for Temperature Control of Imager or Sounder on GOES Type Satellites in GEO, Dual-Compartment Inflatable Suitlock, Modular Connector Keying Concept, Genesis Ultrapure Water Megasonic Wafer Spin Cleaner, Piezoelectrically Initiated Pyrotechnic Igniter, Folding Elastic Thermal Surface - FETS, Multi-Pass Quadrupole Mass Analyzer, Lunar Sulfur Capture System, Environmental Qualification of a Single-Crystal Silicon Mirror for Spaceflight Use, Planar Superconducting Millimeter-Wave/Terahertz Channelizing Filter, Qualification of UHF Antenna for Extreme Martian Thermal Environments, Ensemble Eclipse: A Process for Prefab Development Environment for the Ensemble Project, ISS Live!, Space Operations Learning Center (SOLC) iPhone/iPad Application, Software to Compare NPP HDF5 Data Files, Planetary Data Systems (PDS) Imaging Node Atlas II, Automatic Calibration of an Airborne Imaging System to an Inertial Navigation Unit, Translating MAPGEN to ASPEN for MER, Support Routines for In Situ Image Processing, and Semi-Supervised Eigenbasis Novelty Detection.

  15. A Mechanism to Avoid Collusion Attacks Based on Code Passing in Mobile Agent Systems

    NASA Astrophysics Data System (ADS)

    Jaimez, Marc; Esparza, Oscar; Muñoz, Jose L.; Alins-Delgado, Juan J.; Mata-Díaz, Jorge

    Mobile agents are software entities consisting of code, data, state and itinerary that can migrate autonomously from host to host executing their code. Despite its benefits, security issues strongly restrict the use of code mobility. The protection of mobile agents against the attacks of malicious hosts is considered the most difficult security problem to solve in mobile agent systems. In particular, collusion attacks have been barely studied in the literature. This paper presents a mechanism that avoids collusion attacks based on code passing. Our proposal is based on a Multi-Code agent, which contains a different variant of the code for each host. A Trusted Third Party is responsible for providing the information to extract its own variant to the hosts, and for taking trusted timestamps that will be used to verify time coherence.

  16. A prototype for the PASS Permanent All Sky Survey

    NASA Astrophysics Data System (ADS)

    Deeg, H. J.; Alonso, R.; Belmonte, J. A.; Horne, K.; Alsubai, K.; Collier Cameron, A.; Doyle, L. R.

    2004-10-01

    A prototype system for the Permanent All Sky Survey (PASS) project is presented. PASS is a continuous photometric survey of the entire celestial sphere with high temporal resolution. Its major objectives are the detection of all giant-planet transits (with periods up to some weeks) across stars up to mag 10.5, and the continuous delivery of photometry that is useful for the study of any variable stars. The prototype is based on CCD cameras with short-focal-length optics on a fixed mount. A small dome to house it at Teide Observatory, Tenerife, is currently being constructed; a placement at the Antarctic Dome C is also being considered. The prototype will be used for a feasibility study of PASS, to define the best observing strategies, and to perform a detailed characterization of the capabilities and scope of the survey. Afterwards, a first partial sky survey will be started with it. That first survey may be able to detect transiting planets during its first few hundred hours of operation. It will also deliver a data set around which software modules dealing with the various scientific objectives of PASS will be developed. The PASS project is still in its early phase, and teams interested in specific scientific objectives, in providing technical expertise, or in participating with their own observations are invited to collaborate.

  17. Strategies for Energy Efficient Resource Management of Hybrid Programming Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Dong; Supinski, Bronis de; Schulz, Martin

    2013-01-01

    Many scientific applications are programmed using hybrid programming models that use both message-passing and shared-memory, due to the increasing prevalence of large-scale systems with multicore, multisocket nodes. Previous work has shown that energy efficiency can be improved using software-controlled execution schemes that consider both the programming model and the power-aware execution capabilities of the system. However, such approaches have focused on identifying optimal resource utilization for one programming model, either shared-memory or message-passing, in isolation. The potential solution space, thus the challenge, increases substantially when optimizing hybrid models since the possible resource configurations increase exponentially. Nonetheless, with the accelerating adoption of hybrid programming models, we increasingly need improved energy efficiency in hybrid parallel applications on large-scale systems. In this work, we present new software-controlled execution schemes that consider the effects of dynamic concurrency throttling (DCT) and dynamic voltage and frequency scaling (DVFS) in the context of hybrid programming models. Specifically, we present predictive models and novel algorithms based on statistical analysis that anticipate application power and time requirements under different concurrency and frequency configurations. We apply our models and methods to the NPB MZ benchmarks and selected applications from the ASC Sequoia codes. Overall, we achieve substantial energy savings (8.74% on average and up to 13.8%) with some performance gain (up to 7.5%) or negligible performance loss.

  18. Simultaneous real-time data collection methods

    NASA Technical Reports Server (NTRS)

    Klincsek, Thomas

    1992-01-01

    This paper describes the development of electronic test equipment that executes, supervises, and reports on various tests. This validation process uses computers to analyze test results and report conclusions. The test equipment consists of an electronics component and the data collection and reporting unit. The PC software, display screens, and real-time database are described. Pass-fail procedures and data replay are discussed. The OS/2 operating system and the Presentation Manager user interface system were used to create a highly interactive automated system. The system outputs are hardcopy printouts and MS-DOS format files that may be used as input for other PC programs.

  19. Enhancing DSN Operations Efficiency with the Discrepancy Reporting Management System (DRMS)

    NASA Technical Reports Server (NTRS)

    Chatillon, Mark; Lin, James; Cooper, Tonja M.

    2003-01-01

    The DRMS is the Discrepancy Reporting Management System used by the Deep Space Network (DSN). It uses a web interface and is a management tool designed to track and manage: data outage incidents during spacecraft tracks against equipment and software, known as DRs (Discrepancy Reports); "out of pass" incident logs against equipment and software, recorded in a Station Log; instances where equipment has been restarted or reset, recorded as Reset records; and equipment readiness status across the DSN, recorded electronically. Tracking and managing these items increases DSN operational efficiency by providing: the ability to establish the operational history of equipment items, data on the quality of service provided to the DSN customers, the ability to measure service performance, early insight into processes, procedures, and interfaces that may need updating or changing, and the capability to trace a data outage to a software or hardware change. The items listed above help the DSN to focus resources on areas of most need.

  20. Region and database management for HANDI 2000 business management system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, D.

    The Data Integration 2000 Project will result in an integrated and comprehensive set of functional applications containing core information necessary to support the Project Hanford Management Contract. It is based on the Commercial-Off-The-Shelf product solution with commercially proven business processes. The COTS product solution set, of PassPort and PeopleSoft software, supports finance, supply, and chemical management/Material Safety Data Sheet.

  1. Orbiter Flying Qualities (OFQ) Workstation user's guide

    NASA Technical Reports Server (NTRS)

    Myers, Thomas T.; Parseghian, Zareh; Hogue, Jeffrey R.

    1988-01-01

    This project was devoted to the development of a software package, called the Orbiter Flying Qualities (OFQ) Workstation, for working with the OFQ Archives which are specially selected sets of space shuttle entry flight data relevant to flight control and flying qualities. The basic approach to creation of the workstation software was to federate and extend commercial software products to create a low cost package that operates on personal computers. Provision was made to link the workstation to large computers, but the OFQ Archive files were also converted to personal computer diskettes and can be stored on workstation hard disk drives. The primary element of the workstation developed in the project is the Interactive Data Handler (IDH) which allows the user to select data subsets from the archives and pass them to specialized analysis programs. The IDH was developed as an application in a relational database management system product. The specialized analysis programs linked to the workstation include a spreadsheet program, FREDA for spectral analysis, MFP for frequency domain system identification, and NIPIP for pilot-vehicle system parameter identification. The workstation also includes capability for ensemble analysis over groups of missions.

  2. Advances of FishNet towards a fully automatic monitoring system for fish migration

    NASA Astrophysics Data System (ADS)

    Kratzert, Frederik; Mader, Helmut

    2017-04-01

    Restoring the continuum of river networks affected by anthropogenic constructions is one of the main objectives of the Water Framework Directive. Regarding fish migration, fish passes are a widely used measure, and their functionality often needs to be assessed by monitoring. Over the last years, we developed a new semi-automatic monitoring system (FishCam) that allows contact-free observation of fish migration in fish passes through videos. The system consists of a detection tunnel, equipped with a camera, a motion sensor, and artificial light sources, as well as software (FishNet) that helps analyze the video data. In its latest version, the software is capable of detecting and tracking objects in the videos as well as classifying them into "fish" and "no-fish" objects. This allows filtering out the videos containing at least one fish (approximately 5% of all grabbed videos) and reduces the manual labor to the analysis of these videos. In this state, the entire system has already been used in over 20 different fish passes across Austria, for a total of over 140 months of monitoring, resulting in more than 1.4 million analyzed videos. As a next step towards a fully automatic monitoring system, a key feature is the automated classification of the detected fish into their species, which is still an unsolved task in a fully automatic monitoring environment. Recent advances in machine learning, especially image classification with deep convolutional neural networks, look promising for solving this problem. In this study, different approaches to fish species classification are tested. Besides an image-only classification approach using deep convolutional neural networks, various methods that combine the power of convolutional neural networks as image descriptors with additional features, such as fish length and time of appearance, are explored. To facilitate the development and testing phase of this approach, a subset of six fish species of Austrian rivers and streams is considered in this study. All scripts and the data to reproduce the results of this study will be made publicly available on GitHub* at the beginning of the EGU2017 General Assembly. * https://github.com/kratzert/EGU2017_public/

  3. ABM Drag_Pass Report Generator

    NASA Technical Reports Server (NTRS)

    Fisher, Forest; Gladden, Roy; Khanampornpan, Teerapat

    2008-01-01

    dragREPORT software was developed in parallel with abmREPORT, which is described in the preceding article. Both programs were built on the capabilities created during that process. This tool generates a drag_pass report that summarizes vital information from the MRO aerobraking drag_pass build process, both to facilitate sequence reviews and to provide a high-level summarization of the sequence for mission management. The script extracts information from the ENV, SSF, FRF, SCMFmax, and OPTG files, presenting it in a single, easy-to-check report providing the majority of parameters needed for cross-check and verification as part of the sequence review process. Prior to dragREPORT, all the needed information was spread across a number of different files, each in a different format. This software is a Perl script that extracts vital summarization information and build-process details from a number of source files into a single, concise report format used to aid the MPST sequence review process and to provide a high-level summarization of the sequence for mission management reference. This software could be adapted for future aerobraking missions to provide similar reports, review, and summarization information.

  4. UAVSAR Active Electronically Scanned Array

    NASA Technical Reports Server (NTRS)

    Sadowy, Gregory, A.; Chamberlain, Neil F.; Zawadzki, Mark S.; Brown, Kyle M.; Fisher, Charles D.; Figueroa, Harry S.; Hamilton, Gary A.; Jones, Cathleen E.; Vorperian, Vatche; Grando, Maurio B.

    2011-01-01

    The Uninhabited Airborne Vehicle Synthetic Aperture Radar (UAVSAR) is a pod-based, L-band (1.26 GHz), repeat-pass, interferometric synthetic aperture radar (InSAR) used for Earth science applications. Repeat-pass interferometric radar measurements from an airborne platform require an antenna that can be steered to maintain the same angle with respect to the flight track over a wide range of aircraft yaw angles. In order to collect repeat-pass InSAR data over a wide range of wind conditions, UAVSAR employs an active electronically scanned array (AESA). During data collection, the UAVSAR flight software continuously reads the aircraft attitude state measured by the embedded GPS/INS (EGI) system and electronically steers the beam so that it remains perpendicular to the flight track throughout the data collection.

  5. Visual identification system for homeland security and law enforcement support

    NASA Astrophysics Data System (ADS)

    Samuel, Todd J.; Edwards, Don; Knopf, Michael

    2005-05-01

    This paper describes the basic configuration of a visual identification system (VIS) for Homeland Security and law enforcement support. Security and law enforcement systems with an integrated VIS will accurately and rapidly identify vehicles or containers that have entered, exited, or passed through a specific monitoring location. The VIS stores all images and makes them available for recall for approximately one week. Images of alarming vehicles are archived indefinitely as part of the alarming vehicle's or cargo container's record. Depending on user needs, the digital imaging information will be provided electronically to the individual inspectors, supervisors, and/or control center at the customer's office. The key components of the VIS are the high-resolution cameras that capture images of vehicles, lights, presence sensors, image cataloging software, and image recognition software. In addition to the cameras, the physical integration and network communications of the VIS components with the balance of the security system and client must be ensured.

  6. Data Acquisition System for Multi-Frequency Radar Flight Operations Preparation

    NASA Technical Reports Server (NTRS)

    Leachman, Jonathan

    2010-01-01

    A three-channel data acquisition system was developed for the NASA Multi-Frequency Radar (MFR) system. The system is based on a commercial-off-the-shelf (COTS) industrial PC (personal computer) and two dual-channel 14-bit digital receiver cards. The decimated complex envelope representations of the three radar signals are passed to the host PC via the PCI bus, and then processed in parallel by multiple cores of the PC CPU (central processing unit). The innovation is this parallelization of the radar data processing using multiple cores of a standard COTS multi-core CPU. The data processing portion of the data acquisition software was built using autonomous program modules or threads, which can run simultaneously on different cores. A master program module calculates the optimal number of processing threads, launches them, and continually supplies each with data. The benefit of this new parallel software architecture is that COTS PCs can be used to implement increasingly complex processing algorithms on an increasing number of radar range gates and data rates. As new PCs become available with higher numbers of CPU cores, the software will automatically utilize the additional computational capacity.
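
    A minimal sketch of the parallelization scheme described above, with a master thread continually feeding blocks of samples to a pool of worker threads sized to the number of CPU cores; the block contents and the processing step are illustrative stand-ins for the actual range-gate processing chain.

      #include <atomic>
      #include <condition_variable>
      #include <cstdio>
      #include <mutex>
      #include <queue>
      #include <thread>
      #include <vector>

      std::queue<std::vector<float>> work;   // blocks of envelope samples
      std::mutex m;
      std::condition_variable cv;
      std::atomic<bool> done{false};

      void worker(int id) {
          for (;;) {
              std::vector<float> block;
              {
                  std::unique_lock<std::mutex> lk(m);
                  cv.wait(lk, [] { return !work.empty() || done; });
                  if (work.empty()) return;   // drained and shutting down
                  block = std::move(work.front());
                  work.pop();
              }
              // Range-gate processing for this block would run here.
              std::printf("worker %d processed %zu samples\n", id, block.size());
          }
      }

      int main() {
          unsigned cores = std::thread::hardware_concurrency();
          std::vector<std::thread> pool;
          for (unsigned i = 0; i < cores; ++i) pool.emplace_back(worker, i);

          // The master module continually supplies each worker with data.
          for (int i = 0; i < 16; ++i) {
              { std::lock_guard<std::mutex> lk(m); work.emplace(4096); }
              cv.notify_one();
          }
          { std::lock_guard<std::mutex> lk(m); done = true; }
          cv.notify_all();
          for (auto& t : pool) t.join();
          return 0;
      }

    Sizing the pool from hardware_concurrency() mirrors the benefit claimed in the abstract: on a PC with more cores, the same software automatically uses the additional computational capacity.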

  7. Spot: A Programming Language for Verified Flight Software

    NASA Technical Reports Server (NTRS)

    Bocchino, Robert L., Jr.; Gamble, Edward; Gostelow, Kim P.; Some, Raphael R.

    2014-01-01

    The C programming language is widely used for programming space flight software and other safety-critical real time systems. C, however, is far from ideal for this purpose: as is well known, it is both low-level and unsafe. This paper describes Spot, a language derived from C for programming space flight systems. Spot aims to maintain compatibility with existing C code while improving the language and supporting verification with the SPIN model checker. The major features of Spot include actor-based concurrency, distributed state with message passing and transactional updates, and annotations for testing and verification. Spot also supports domain-specific annotations for managing spacecraft state, e.g., communicating telemetry information to the ground. We describe the motivation and design rationale for Spot, give an overview of the design, provide examples of Spot's capabilities, and discuss the current status of the implementation.

  8. CrossTalk: The Journal of Defense Software Engineering. Volume 20, Number 4

    DTIC Science & Technology

    2007-04-01

    and test markets. The decision fails the review, gets marked for adjustment, or passes. • The decision gets pushed out into the world. At this point...STD-1521, Institute for Electrical and Electronics Engineers [IEEE]-15288). Myopically focused on early correctness, systems engineering can seem to...based on Mishkin Berteig's experiences as an agile coach, consultant, or trainer to teams and management in organizations across North America. From

  9. Data management plan for HANDI 2000 business management system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, D.

    The Hanford Data Integration 2000 (HANDI 2000) Project will result in an integrated and comprehensive set of functional applications containing core information necessary to support the Project Hanford Management Contract (PHMC). It is based on the Commercial-Off-The-Shelf (COTS) product solution with commercially proven business processes. The COTS product solution set, of PassPort (PP) and PeopleSoft (PS) software, supports finance, supply and chemical management/Material Safety Data Sheet.

  10. MaROS Strategic Relay Planning and Coordination Interfaces

    NASA Technical Reports Server (NTRS)

    Allard, Daniel A.

    2010-01-01

    The Mars Relay Operations Service (MaROS) is designed to provide planning and analysis tools in support of ongoing Mars Network relay operations. Strategic relay planning requires coordination between lander and orbiter mission ground data system (GDS) teams to schedule and execute relay communications passes. MaROS centralizes this process, correlating all data relevant to relay coordination to provide a cohesive picture of the relay state. Service users interact with the system through thin-layer command-line and web user interface client applications. Users provide and utilize data such as lander view periods of orbiters, Deep Space Network (DSN) antenna tracks, and reports of relay pass performance. Users upload and download relevant relay data via formally defined and documented file structures, including some described in Extensible Markup Language (XML). Clients interface with the system via an HTTP-based Representational State Transfer (ReST) pattern using JavaScript Object Notation (JSON) formats. This paper will provide a general overview of the service architecture and detail the software interfaces and considerations for interface design.
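
    A minimal sketch of the kind of HTTP/JSON exchange such a ReST interface implies, using libcurl as the client; the endpoint URL and query parameters are hypothetical, not the actual MaROS interface.

      #include <curl/curl.h>
      #include <cstdio>
      #include <string>

      // Accumulate the HTTP response body into a std::string.
      static size_t collect(char* data, size_t size, size_t nmemb, void* out) {
          static_cast<std::string*>(out)->append(data, size * nmemb);
          return size * nmemb;
      }

      int main() {
          curl_global_init(CURL_GLOBAL_DEFAULT);
          CURL* curl = curl_easy_init();
          if (!curl) return 1;

          std::string body;
          // Hypothetical resource listing relay passes for a lander.
          curl_easy_setopt(curl, CURLOPT_URL,
              "https://maros.example.nasa.gov/api/passes?asset=lander1");
          curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
          curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);
          CURLcode rc = curl_easy_perform(curl);
          if (rc == CURLE_OK)
              std::printf("JSON response: %s\n", body.c_str());

          curl_easy_cleanup(curl);
          curl_global_cleanup();
          return 0;
      }

    The appeal of the ReST/JSON pattern for this use case is exactly this: any client, thin command-line script or full web UI, needs nothing more than an HTTP library to participate.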

  11. Smart command recognizer (SCR) - For development, test, and implementation of speech commands

    NASA Technical Reports Server (NTRS)

    Simpson, Carol A.; Bunnell, John W.; Krones, Robert R.

    1988-01-01

    The SCR, a rapid prototyping system for the development, testing, and implementation of speech commands in a flight simulator or test aircraft, is described. A single unit performs all functions needed during these three phases of system development, while the use of common software and speech command data structure files greatly reduces the preparation time for successive development phases. As a smart peripheral to a simulation or flight host computer, the SCR interprets the pilot's spoken input and passes command codes to the simulation or flight computer.

  12. Design of a network for concurrent message passing systems

    NASA Astrophysics Data System (ADS)

    Song, Paul Y.

    1988-08-01

    We describe the design of the network design frame (NDF), a self-timed routing chip for a message-passing concurrent computer. The NDF uses a partitioned data path, low-voltage output drivers, and a distributed token-passing arbiter to provide a bandwidth of 450 Mbits/sec into the network. Wormhole routing and bidirectional virtual channels are used to provide low-latency communications: less than 2 us latency to deliver a 216-bit message across the diameter of a 1K-node mesh-connected machine. To support concurrent software systems, the NDF provides two logical networks, one for user messages and one for system messages; the two networks share the same set of physical wires. To facilitate the development of network nodes, the NDF is a design frame: the NDF circuitry is integrated into the pad frame of a chip, leaving the center of the chip uncommitted. We define an analytic framework in which to study the effects of network size, network buffering capacity, bidirectional channels, and traffic on this class of networks. The response of the network to various combinations of these parameters is obtained through extensive simulation of the network model. Through simulation, we are able to observe the macro behavior of the network, as opposed to the micro behavior of the NDF routing controller.

  13. Design and Analysis of a Micromachined LC Low Pass Filter For 2.4GHz Application

    NASA Astrophysics Data System (ADS)

    Saroj, Samruddhi R.; Rathee, Vishal R.; Pande, Rajesh S.

    2018-02-01

    This paper reports the design and analysis of a passive low-pass filter with a cutoff frequency of 2.4 GHz using MEMS (Micro-Electro-Mechanical Systems) technology. The passive components, suspended spiral inductors and a metal-insulator-metal (MIM) capacitor, are arranged in a T network to implement the LC low-pass filter. The design employs a simple suspension approach that reduces parasitic losses, eliminating the performance-degrading effects of integrating an off-chip inductor in a filter circuit intended to be developed on a low-cost silicon substrate using RF-MEMS components. The filter occupies only 2.1 mm x 0.66 mm of die area and is designed using a microstrip transmission line placed on a silicon substrate. The design is implemented in High Frequency Structure Simulator (HFSS) software, and a fabrication flow is proposed for its implementation. The simulated results show that the design has an insertion loss of -4.98 dB and a return loss of -2.60 dB.
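
    For orientation, the standard constant-k (image-parameter) relations for a T-section LC low-pass prototype are sketched below; whether the authors sized their components with exactly these relations is an assumption, and the 50-ohm termination is an illustrative choice.

      \[
      f_c = \frac{1}{\pi\sqrt{LC}}, \qquad
      L = \frac{R_0}{\pi f_c}, \qquad
      C = \frac{1}{\pi f_c R_0}
      \]

    For f_c = 2.4 GHz and R_0 = 50 ohms these give L of roughly 6.6 nH (split into two L/2 series arms in the T section) and C of roughly 2.65 pF, which is the scale of component that suspended spiral inductors and MIM capacitors can realize on-chip.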

  14. Industry Supplied CAD Curriculum: Case Study on Passing Certification Exams

    ERIC Educational Resources Information Center

    Webster, Rustin; Dues, Joseph; Ottway, Rudy

    2017-01-01

    Students who successfully pass professional certification exams while in school are often targeted first by industry for internships and entry level positions. Over the last decade, leading industry suppliers of computer-aided design (CAD) software have developed and launched certification exams for many of their product offerings. Some have also…

  15. Symbolically Modeling Concurrent MCAPI Executions

    NASA Technical Reports Server (NTRS)

    Fischer, Topher; Mercer, Eric; Rungta, Neha

    2011-01-01

    Improper use of Inter-Process Communication (IPC) within concurrent systems often creates data races which can lead to bugs that are challenging to discover. Techniques that use Satisfiability Modulo Theories (SMT) problems to symbolically model possible executions of concurrent software have recently been proposed for use in the formal verification of software. In this work we describe a new technique for modeling executions of concurrent software that use a message passing API called MCAPI. Our technique uses an execution trace to create an SMT problem that symbolically models all possible concurrent executions and follows the same sequence of conditional branch outcomes as the provided execution trace. We check if there exists a satisfying assignment to the SMT problem with respect to specific safety properties. If such an assignment exists, it provides the conditions that lead to the violation of the property. We show how our method models behaviors of MCAPI applications that are ignored in previously published techniques.
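    The following is a minimal, hedged sketch of the general idea using the z3-solver Python package: event orders become integer variables, and the solver searches for an interleaving that satisfies the trace's constraints while violating a property. The event names and constraints here are hypothetical and far simpler than the paper's actual MCAPI encoding.

    ```python
    # Sketch only: encode message-passing event order as integers and ask
    # the SMT solver for an interleaving that violates a safety property.
    from z3 import Ints, Solver, sat

    send1, recv1, send2, recv2 = Ints("send1 recv1 send2 recv2")
    s = Solver()
    # Each receive must follow its matching send.
    s.add(recv1 > send1, recv2 > send2)
    # Program order observed in the execution trace: send1 before send2.
    s.add(send1 < send2)
    # Property violation to search for: the second message arrives first.
    s.add(recv2 < recv1)

    if s.check() == sat:
        print("violating interleaving exists:", s.model())
    else:
        print("property holds for all interleavings")
    ```

    A satisfying assignment, as described above, gives the event ordering (the "conditions") that leads to the property violation.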

  16. MaROS: Web Visualization of Mars Orbiting and Landed Assets

    NASA Technical Reports Server (NTRS)

    Wallick, Michael N.; Allard, Daniel A.; Gladden, Roy E.; Hy, Franklin H.

    2011-01-01

    Mars Relay operations currently involve several e-mails and phone calls between lander and orbiter teams in order to settle on an agreed time for performing a communication pass between the landed asset (i.e. rover or lander) and orbiter, then back to Earth. This new application aims to reduce this complexity by presenting a visualization of the overpass time ranges and elevation angle, as well as other information. The user is able to select a specific overflight opportunity to receive further information about that particular pass. This software presents a unified view of the potential communication passes available between orbiting and landed assets on Mars. Each asset is presented to the user in a graphical view showing overpass opportunities, elevation angle, requested and acknowledged communication windows, forward and back latencies, warnings, conflicts, relative planetary times, ACE Schedules, and DSN information. This software is unique in that it is the first of its kind to visually display the information regarding communication opportunities between landed and orbiting Mars assets. The software is written using ActionScript/FLEX, a Web language, meaning that this information may be accessed over the Internet from anywhere in the world.

  17. WE-G-BRA-02: SafetyNet: Automating Radiotherapy QA with An Event Driven Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hadley, S; Kessler, M; Litzenberg, D

    2015-06-15

    Purpose: Quality assurance is an essential task in radiotherapy that often requires many manual tasks. We investigate the use of an event driven framework in conjunction with software agents to automate QA and eliminate wait times. Methods: An in-house developed subscription-publication service, EventNet, was added to the Aria OIS to be a message broker for critical events occurring in the OIS and software agents. Software agents operate without user intervention and perform critical QA steps. The results of the QA are documented, and the resulting event is generated and passed back to EventNet. Users can subscribe to those events and receive messages based on custom filters designed to send passing or failing results to physicists or dosimetrists. Agents were developed to expedite the following QA tasks: Plan Revision, Plan 2nd Check, SRS Winston-Lutz isocenter, Treatment History Audit, and Treatment Machine Configuration. Results: Plan approval in the Aria OIS was used as the event trigger for the Plan Revision QA and Plan 2nd Check agents. The agents pulled the plan data, executed the prescribed QA, stored the results, and updated EventNet for publication. The Winston-Lutz agent reduced QA time from 20 minutes to 4 minutes and provided a more accurate quantitative estimate of the radiation isocenter. The Treatment Machine Configuration agent automatically reports any changes to the treatment machine or HDR unit configuration. The agents are reliable, act immediately, and execute each task identically every time. Conclusion: An event driven framework has inverted the data chase in our radiotherapy QA process. Rather than have dosimetrists and physicists push data to QA software and pull results back into the OIS, the software agents perform these steps immediately upon receiving the sentinel events from EventNet. Mr Keranen is an employee of Varian Medical Systems. Dr. Moran's institution receives research support for her effort for a linear accelerator QA project from Varian Medical Systems. Other quality projects involving her effort are funded by Blue Cross Blue Shield of Michigan, Breast Cancer Research Foundation, and the NIH.
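    A generic publish/subscribe broker captures the pattern described. The sketch below is purely illustrative: EventNet's real interfaces are not public, so the class, event names, and agent function here are hypothetical.

    ```python
    # Generic publish/subscribe sketch of the event-driven QA idea; all
    # names are hypothetical stand-ins for EventNet and its agents.
    from collections import defaultdict

    class EventBroker:
        def __init__(self):
            self._subscribers = defaultdict(list)

        def subscribe(self, event_type, handler):
            self._subscribers[event_type].append(handler)

        def publish(self, event_type, payload):
            for handler in self._subscribers[event_type]:
                handler(payload)

    broker = EventBroker()

    def plan_second_check_agent(plan):
        # A software agent: runs its QA step immediately upon the trigger
        # event, then publishes a result event users can subscribe to.
        result = "pass" if plan.get("dose_ok") else "fail"
        broker.publish("qa.second_check.done",
                       {"plan": plan["id"], "result": result})

    broker.subscribe("plan.approved", plan_second_check_agent)
    broker.subscribe("qa.second_check.done",
                     lambda e: print("notify physicist:", e))
    broker.publish("plan.approved", {"id": "PT-001", "dose_ok": True})
    ```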

  18. Statistical variability and confidence intervals for planar dose QA pass rates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bailey, Daniel W.; Nelms, Benjamin E.; Attwood, Kristopher

    Purpose: The most common metric for comparing measured to calculated dose, such as for pretreatment quality assurance of intensity-modulated photon fields, is a pass rate (%) generated using percent difference (%Diff), distance-to-agreement (DTA), or some combination of the two (e.g., gamma evaluation). For many dosimeters, the grid of analyzed points corresponds to an array with a low areal density of point detectors. In these cases, the pass rates for any given comparison criteria are not absolute but exhibit statistical variability that is a function, in part, of the detector sampling geometry. In this work, the authors analyze the statistics of various methods commonly used to calculate pass rates and propose methods for establishing confidence intervals for pass rates obtained with low-density arrays. Methods: Dose planes were acquired for 25 prostate and 79 head and neck intensity-modulated fields via diode array and electronic portal imaging device (EPID), and matching calculated dose planes were created via a commercial treatment planning system. Pass rates for each dose plane pair (both centered to the beam central axis) were calculated with several common comparison methods: %Diff/DTA composite analysis and gamma evaluation, using absolute dose comparison with both local and global normalization. Specialized software was designed to selectively sample the measured EPID response (very high data density) down to discrete points to simulate low-density measurements. The software was used to realign the simulated detector grid at many simulated positions with respect to the beam central axis, thereby altering the low-density sampled grid. Simulations were repeated with 100 positional iterations using a 1 detector/cm² uniform grid, a 2 detector/cm² uniform grid, and similar random detector grids. For each simulation, %/DTA composite pass rates were calculated with various %Diff/DTA criteria and for both local and global %Diff normalization techniques. Results: For the prostate and head/neck cases studied, the pass rates obtained with gamma analysis of high-density dose planes were 2%-5% higher than the respective %/DTA composite analysis on average (ranging as high as 11%), depending on tolerances and normalization. Meanwhile, the pass rates obtained via local normalization were 2%-12% lower than with global maximum normalization on average (ranging as high as 27%), depending on tolerances and calculation method. Repositioning of simulated low-density sampled grids leads to a distribution of possible pass rates for each measured/calculated dose plane pair. These distributions can be predicted using a binomial distribution in order to establish confidence intervals that depend largely on the sampling density and the observed pass rate (i.e., the degree of difference between measured and calculated dose). These results can be extended to apply to 3D arrays of detectors as well. Conclusions: Dose plane QA analysis can be greatly affected by the choice of calculation metric and user-defined parameters, and so all pass rates should be reported with a complete description of the calculation method. Pass rates for low-density arrays are subject to statistical uncertainty (vs. the high-density pass rate), but these sampling errors can be modeled using statistical confidence intervals derived from the sampled pass rate and detector density. Thus, pass rates for low-density array measurements should be accompanied by a confidence interval indicating the uncertainty of each pass rate.
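    As a hedged illustration of such a confidence interval, the sketch below computes a binomial (Wilson) interval for a pass rate observed over n sampled detector points; the authors' exact statistical model may differ in detail, and the sample numbers are illustrative.

    ```python
    import math

    # Binomial (Wilson) confidence interval for a QA pass rate measured
    # with n detector points -- a generic statistical sketch, not the
    # authors' published model.
    def wilson_interval(passed, n, z=1.96):
        """95% confidence interval for a pass rate of `passed` out of `n` points."""
        p = passed / n
        denom = 1 + z**2 / n
        center = (p + z**2 / (2 * n)) / denom
        half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
        return center - half, center + half

    # E.g., an array sampling ~400 points, 380 of which pass (95%):
    lo, hi = wilson_interval(passed=380, n=400)
    print(f"pass rate 95.0%, 95% CI: {lo:.1%} - {hi:.1%}")
    ```

    The interval widens as the detector count n shrinks, which is exactly the low-density sampling effect the abstract describes.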

  19. Improving the uniformity of luminous system in radial imaging capsule endoscope system

    NASA Astrophysics Data System (ADS)

    Ou-Yang, Mang; Jeng, Wei-De

    2013-02-01

    This study concerns the illumination system in a radial imaging capsule endoscope (RICE). Uniformly illuminating the object is difficult because the intensity of the light from the light emitting diodes (LEDs) varies with angular displacement. When light is emitted from the surface of the LED, it first encounters the cone mirror, from which it is reflected, before directly passing through the lenses and complementary metal oxide semiconductor (CMOS) sensor. The light that is strongly reflected from the transparent view window (TVW) propagates again to the cone mirror, to be reflected and to pass through the lenses and CMOS sensor. The above two phenomena cause overblooming on the image plane. Overblooming causes nonuniform illumination on the image plane and consequently reduced image quality. In this work, optical design software was utilized to construct a photometric model for the optimal design of the LED illumination system. Based on the original RICE model, this paper proposes an optimal design to improve the uniformity of the illumination. The illumination uniformity in the RICE is increased from its original value of 0.128 to 0.69, greatly improving light uniformity.

  20. The equipment access software for a distributed UNIX-based accelerator control system

    NASA Astrophysics Data System (ADS)

    Trofimov, Nikolai; Zelepoukine, Serguei; Zharkov, Eugeny; Charrue, Pierre; Gareyte, Claire; Poirier, Hervé

    1994-12-01

    This paper presents a generic equipment access software package for a distributed control system using computers with UNIX or UNIX-like operating systems. The package consists of three main components: an application Equipment Access Library, a Message Handler, and an Equipment Data Base. An application task, which may run on any computer in the network, sends requests to access equipment through Equipment Library calls. The basic request has the form Equipment-Action-Data and is routed via a remote procedure call to the computer to which the given equipment is connected. In this computer the request is received by the Message Handler. According to the type of the equipment connection, the Message Handler either passes the request to the specific process software in the same computer or forwards it to a lower-level network of equipment controllers using MIL1553B, GPIB, RS232 or BITBUS communication. The answer is then returned to the calling application. Descriptive information required for request routing and processing is stored in the real-time Equipment Data Base. The package has been written to be portable and is currently available on DEC Ultrix, LynxOS, HP-UX, XENIX, OS-9 and Apollo Domain.
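    The request flow can be pictured with a small, hedged sketch. The real package is compiled code using remote procedure calls, so the Python below (with entirely hypothetical equipment names and database entries) only illustrates the Equipment-Action-Data dispatch described above.

    ```python
    # Illustration of the Equipment-Action-Data routing idea; equipment
    # names, hosts, and return values are hypothetical.
    EQUIPMENT_DB = {
        # Descriptive routing info normally held in the Equipment Data Base.
        "BPM.RING.12": {"host": "fe-crate-3", "connection": "local-process"},
        "PS.QUAD.07":  {"host": "fe-crate-1", "connection": "MIL1553B"},
    }

    def equipment_access(equipment, action, data=None):
        """Route an Equipment-Action-Data request to the owning computer."""
        entry = EQUIPMENT_DB[equipment]
        if entry["connection"] == "local-process":
            # Message Handler passes the request to process software directly.
            return f"{entry['host']}: {action} {equipment} {data} -> ok"
        # Otherwise forward over the field bus to an equipment controller.
        return f"{entry['host']}: forwarded {action} {data} over {entry['connection']}"

    print(equipment_access("PS.QUAD.07", "SET", {"current_A": 12.5}))
    ```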

  1. Design notes for the next generation persistent object manager for CAP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Isely, M.; Fischler, M.; Galli, M.

    1995-05-01

    The CAP query system software at Fermilab has several major components, including SQS (for managing the query), the retrieval system (for fetching auxiliary data), and the query software itself. The central query software in particular is essentially a modified version of the `ptool` product created at UIC (University of Illinois at Chicago) as part of the PASS project under Bob Grossman. The original UIC version was designed for use in a single-user, non-distributed Unix environment. The Fermi modifications were an attempt to permit multi-user access to a data set distributed over a set of storage nodes. (The hardware is an IBM SP-x system - a cluster of AIX POWER2 nodes with an IBM-proprietary high-speed switch interconnect.) Since the implementation work of the Fermi-ized ptool, the CAP members have learned quite a bit about the nature of queries and where the current performance bottlenecks exist. This has led them to design a persistent object manager that will overcome these problems. For backwards compatibility with ptool, the ptool persistent object API will largely be retained, but the implementation will be entirely different.

  2. Managing coherence via put/get windows

    DOEpatents

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton on Hudson, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Hoenicke, Dirk [Ossining, NY; Ohmacht, Martin [Yorktown Heights, NY

    2011-01-11

    A method and apparatus for managing coherence between two processors of a two-processor node of a multi-processor computer system. Generally the present invention relates to a software algorithm that simplifies and significantly speeds the management of cache coherence in a message-passing parallel computer, and to hardware apparatus that assists this cache coherence algorithm. The software algorithm uses the opening and closing of put/get windows to coordinate the activities required to achieve cache coherence. The hardware apparatus may be an extension to the hardware address decode that creates, in the physical memory address space of the node, an area of virtual memory that (a) does not actually exist, and (b) is therefore able to respond instantly to read and write requests from the processing elements.
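    As a conceptual sketch only (the names and structure below are illustrative, not the patented implementation): opening a window can be modeled as invalidating cached lines for the window's address range, and closing it as writing dirty lines back, so that remote puts and gets performed while the window is open see consistent memory.

    ```python
    # Toy model of software-coordinated coherence via put/get windows.
    class ToyCache:
        def invalidate_range(self, base, size):
            print(f"invalidate [{base:#x}, {base + size:#x})")

        def flush_range(self, base, size):
            print(f"flush back [{base:#x}, {base + size:#x})")

    class PutGetWindow:
        """Context manager modeling a put/get window over [base, base+size)."""
        def __init__(self, cache, base, size):
            self.cache, self.base, self.size = cache, base, size

        def __enter__(self):
            # Open the window: discard stale cached copies of the range so
            # subsequent reads observe data written by the other processor.
            self.cache.invalidate_range(self.base, self.size)
            return self

        def __exit__(self, *exc):
            # Close the window: write dirty lines back so memory holds the
            # latest local data before remote accesses resume.
            self.cache.flush_range(self.base, self.size)
            return False

    with PutGetWindow(ToyCache(), base=0x1000, size=4096):
        pass  # remote puts/gets into this range would happen here
    ```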

  3. Managing coherence via put/get windows

    DOEpatents

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton on Hudson, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Hoenicke, Dirk [Ossining, NY; Ohmacht, Martin [Yorktown Heights, NY

    2012-02-21

    A method and apparatus for managing coherence between two processors of a two-processor node of a multi-processor computer system. Generally the present invention relates to a software algorithm that simplifies and significantly speeds the management of cache coherence in a message-passing parallel computer, and to hardware apparatus that assists this cache coherence algorithm. The software algorithm uses the opening and closing of put/get windows to coordinate the activities required to achieve cache coherence. The hardware apparatus may be an extension to the hardware address decode that creates, in the physical memory address space of the node, an area of virtual memory that (a) does not actually exist, and (b) is therefore able to respond instantly to read and write requests from the processing elements.

  4. An expert system for the design of heating, ventilating, and air-conditioning systems

    NASA Astrophysics Data System (ADS)

    Camejo, Pedro Jose

    1989-12-01

    Expert systems are computer programs that seek to mimic human reason. An expert system shell, a software program commonly used for developing expert systems in a relatively short time, was used to develop a prototypical expert system for the design of heating, ventilating, and air-conditioning (HVAC) systems in buildings. Because HVAC design involves several related knowledge domains, developing an expert system for HVAC design requires the integration of several smaller expert systems known as knowledge bases. A menu program and several auxiliary programs for gathering data, completing calculations, printing project reports, and passing data between the knowledge bases are needed and have been developed to join the separate knowledge bases into one simple-to-use program unit.

  5. caBIG compatibility review system: software to support the evaluation of applications using defined interoperability criteria.

    PubMed

    Freimuth, Robert R; Schauer, Michael W; Lodha, Preeti; Govindrao, Poornima; Nagarajan, Rakesh; Chute, Christopher G

    2008-11-06

    The caBIG Compatibility Review System (CRS) is a web-based application to support compatibility reviews, which certify that software applications that pass the review meet a specific set of criteria that allow them to interoperate. The CRS contains workflows that support both semantic and syntactic reviews, which are performed by the caBIG Vocabularies and Common Data Elements (VCDE) and Architecture workspaces, respectively. The CRS increases the efficiency of compatibility reviews by reducing administrative overhead and it improves uniformity by ensuring that each review is conducted according to a standard process. The CRS provides metrics that allow the review team to evaluate the level of data element reuse in an application, a first step towards quantifying the extent of harmonization between applications. Finally, functionality is being added that will provide automated validation of checklist criteria, which will further simplify the review process.

  6. Accuracy of flat panel detector CT with integrated navigational software with and without MR fusion for single-pass needle placement.

    PubMed

    Mabray, Marc C; Datta, Sanjit; Lillaney, Prasheel V; Moore, Teri; Gehrisch, Sonja; Talbott, Jason F; Levitt, Michael R; Ghodke, Basavaraj V; Larson, Paul S; Cooke, Daniel L

    2016-07-01

    Fluoroscopic systems in modern interventional suites have the ability to perform flat panel detector CT (FDCT) with navigational guidance. Fusion with MR allows navigational guidance towards FDCT occult targets. We aim to evaluate the accuracy of this system using single-pass needle placement in a deep brain stimulation (DBS) phantom. MR was performed on a head phantom with DBS lead targets. The head phantom was placed into fixation and FDCT was performed. FDCT and MR datasets were automatically fused using the integrated guidance system (iGuide, Siemens). A DBS target was selected on the MR dataset. A 10 cm, 19 G needle was advanced by hand in a single pass using laser crosshair guidance. Radial error was visually assessed against measurement markers on the target and by a second FDCT. Ten needles were placed using CT-MR fusion and 10 needles were placed without MR fusion, with targeting based solely on FDCT and fusion steps repeated for every pass. Mean radial error was 2.75±1.39 mm as defined by visual assessment to the centre of the DBS target and 2.80±1.43 mm as defined by FDCT to the centre of the selected target point. There were no statistically significant differences in error between MR fusion and non-MR guided series. Single pass needle placement in a DBS phantom using FDCT guidance is associated with a radial error of approximately 2.5-3.0 mm at a depth of approximately 80 mm. This system could accurately target sub-centimetre intracranial lesions defined on MR. Published by the BMJ Publishing Group Limited.

  7. Using OpenSSH to secure mobile LAN network traffic

    NASA Astrophysics Data System (ADS)

    Luu, Brian B.; Gopaul, Richard D.

    2002-08-01

    Mobile Internet Protocol (IP) Local Area Network (LAN) is a technique, developed by the U.S. Army Research Laboratory, which allows a LAN to be IP mobile when attaching to a foreign IP-based network and using this network as a means to retain connectivity to its home network. In this paper, we describe a technique that uses Open Secure Shell (OpenSSH) software to ensure secure, encrypted transmission of a mobile LAN's network traffic. Whenever a mobile LAN, implemented with Mobile IP LAN, moves to a foreign network, its gateway (router) obtains an IP address from the new network. IP tunnels, using IP encapsulation, are then established from the gateway through the foreign network to a home agent on its home network. These tunnels provide a virtual two-way connection to the home network for the mobile LAN as if the LAN were connected directly to its home network. Hence, when IP mobile, a mobile LAN's tunneled network traffic must traverse one or more foreign networks that may not be trusted. This traffic could be subject to eavesdropping, interception, modification, or redirection by malicious nodes in these foreign networks. To protect network traffic passing through the tunnels, OpenSSH is used as a means of encryption because it prevents surveillance, modification, and redirection of mobile LAN traffic passing across foreign networks. Since the software is found in the public domain, is available for most current operating systems, and is commonly used to provide secure network communications, OpenSSH is the software of choice.

  8. Software for Generating Strip Maps from SAR Data

    NASA Technical Reports Server (NTRS)

    Hensley, Scott; Michel, Thierry; Madsen, Soren; Chapin, Elaine; Rodriguez, Ernesto

    2004-01-01

    Jurassicprok is a computer program that generates strip-map digital elevation models and other data products from raw data acquired by an airborne synthetic-aperture radar (SAR) system. This software can process data from a variety of airborne SAR systems but is designed especially for the GeoSAR system, which is a dual-frequency (P- and X-band), single-pass interferometric SAR system for measuring elevation both at the bare ground surface and top of the vegetation canopy. Jurassicprok is a modified version of software developed previously for airborne-interferometric-SAR applications. The modifications were made to accommodate P-band interferometric processing, remove approximations that are not generally valid, and reduce processor-induced mapping errors to the centimeter level. Major additions and other improvements over the prior software include the following: a) A new, highly efficient multi-stage-modified wave-domain processing algorithm for accurately motion compensating ultra-wideband data; b) Adaptive regridding algorithms based on estimated noise and actual measured topography to reduce noise while maintaining spatial resolution; c) Exact expressions for height determination from interferogram data; d) Fully calibrated volumetric correlation data based on rigorous removal of geometric and signal-to-noise decorrelation terms; e) Strip range-Doppler image output in user-specified Doppler coordinates; f) An improved phase-unwrapping and absolute-phase-determination algorithm; g) A more flexible user interface with many additional processing options; h) Increased interferogram filtering options; and i) Ability to use disk space instead of random-access memory for some processing steps.

  9. PVM Wrapper

    NASA Technical Reports Server (NTRS)

    Katz, Daniel

    2004-01-01

    PVM Wrapper is a software library that makes it possible for code that utilizes the Parallel Virtual Machine (PVM) software library to run using the Message Passing Interface (MPI) software library, without needing to rewrite the entire code. PVM and MPI are the two most common software libraries used for applications that involve passing of messages among parallel computers. Since about 1996, MPI has been the de facto standard. Codes written when PVM was popular often feature patterns of {"initsend," "pack," "send"} and {"receive," "unpack"} calls. In many cases, these calls are not contiguous, and one set of calls may even span multiple subroutines. These characteristics make it difficult to obtain equivalent functionality via a single MPI "send" call. Because PVM Wrapper is written to run with MPI-1.2, some PVM functions are not permitted and must be replaced - a task that requires some programming expertise. The "pvm_spawn" and "pvm_parent" function calls are not replaced, but a programmer can use "mpirun" and knowledge of the ranks of parent and child tasks with supplied macroinstructions to enable execution of codes that use "pvm_spawn" and "pvm_parent."
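    The mismatch can be seen in a small, hedged sketch: the PVM calls in the comment follow the separate initsend/pack/send pattern named above, while the mpi4py code shows the single matched send/receive that MPI expects. Ranks, tags, and data are illustrative.

    ```python
    # A PVM sender is typically written as separate steps (C sketch):
    #
    #     pvm_initsend(PvmDataDefault);
    #     pvm_pkint(values, n, 1);      /* possibly in another subroutine */
    #     pvm_send(dest_tid, tag);
    #
    # With MPI the same transfer is one matched call, shown here via
    # mpi4py (run with: mpiexec -n 2 python this_file.py).
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    if rank == 0:
        values = [1, 2, 3]
        comm.send(values, dest=1, tag=42)   # one contiguous, matched send
    elif rank == 1:
        values = comm.recv(source=0, tag=42)
        print("received:", values)
    ```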

  10. Allocations for HANDI 2000 business management system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, D.

    The Data Integration 2000 Project will result in an integrated and comprehensive set of functional applications containing core information necessary to support the Project Hanford Management Contract. It is based on a Commercial-Off-The-Shelf (COTS) product solution with commercially proven business processes. The COTS product solution set, of PassPort and PeopleSoft software, supports finance, supply, chemical management/Material Safety Data Sheets, and human resources. Allocations at Fluor Daniel Hanford are burdens added to base costs using a predetermined rate.

  11. Long range targeting for space based rendezvous

    NASA Technical Reports Server (NTRS)

    Everett, Louis J.; Redfield, R. C.

    1995-01-01

    The work performed under this grant supported the Dexterous Flight Experiment on STS-62. The project required developing hardware and software for automating a TRAC sensor on orbit. The hardware developed for the flight has been documented through standard NASA channels, since it had to pass safety, environmental, and other reviews. The software has not been documented previously; therefore, this report provides a software manual for the TRAC code developed under the grant.

  12. Making Sense of Remotely Sensed Ultra-Spectral Infrared Data

    NASA Technical Reports Server (NTRS)

    2001-01-01

    NASA's Jet Propulsion Laboratory (JPL), Pasadena, California, Earth Observing System (EOS) programs, the Deep Space Network (DSN), and various Department of Defense (DOD) technology demonstration programs combined their technical expertise to develop SEASCRAPE, a software program that obtains data when thermal infrared radiation passes through the Earth's atmosphere and reaches a sensor. Licensed by the California Institute of Technology (Caltech), SEASCRAPE automatically inverts complex infrared data and makes it possible to obtain estimates of the state of the atmosphere along the ray path. Former JPL staff members created a small entrepreneurial firm, Remote Sensing Analysis Systems, Inc., of Altadena, California, to commercialize the product. The founders believed that a commercial version of the software was needed for future U.S. government missions and the commercial monitoring of pollution. With the inversion capability of this software and remote sensing instrumentation, it is possible to monitor pollution sources from safe and secure distances on a noninterfering, noncooperative basis. The software, now known as SEASCRAPE_Plus, allows the user to determine the presence of pollution products, their location, and their abundance along the ray path. The technology has been cleared by the Department of Commerce for export and is currently used by numerous research and engineering organizations around the world.

  13. Sighten Final Technical Report DEEE0006690 Deploying an integrated and comprehensive solar financing software platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Leary, Conlan

    Over the project, Sighten built a comprehensive software-as-a-service (SaaS) platform to automate and streamline the residential solar financing workflow. Before the project period, significant time and money was spent by companies on front-end tools related to system design and proposal creation, but comparatively few resources were available to support the many back-end calculations and data management processes that underpin third-party financing. Without a tool like Sighten, the solar financing process involved passing information from the homeowner prospect into separate tools for system design and financing, and then later to reporting tools including Microsoft Excel, CRM software, in-house software, outside software, and offline, manual processes. Passing data between tools and attempting to connect disparate systems results in inefficiency and inaccuracy for the industry. Sighten was built to consolidate all financial and solar-related calculations in a single software platform. It significantly improves upon the accuracy of these calculations and exposes sophisticated new analysis tools, resulting in a rigorous, efficient, and cost-effective toolset for scaling residential solar. Widely deploying a platform like Sighten's significantly and immediately impacts the residential solar space in several important ways: 1) standardizing and improving the quality of all quantitative calculations involved in the residential financing process, most notably project finance, system production, and reporting calculations; 2) representing a true step change in reporting and analysis capabilities by maintaining more accurate data and exposing sophisticated tools around simulation, tranching, and financial reporting, among others, to all stakeholders in the space; 3) allowing a broader group of developers/installers/finance companies to access the capital markets by providing an out-of-the-box toolset that handles the execution of running investor capital through a rooftop solar financing program. Standardizing and improving all calculations, improving data quality, and exposing new analysis tools previously unavailable affects investment in the residential space in several important ways: 1) lowering the cost of capital for existing capital providers by mitigating uncertainty and de-risking the solar asset class; 2) attracting new, lower-cost investors to the solar asset class as reporting and data quality resemble the standards of more mature asset classes; 3) increasing the prevalence of liquidity options for investors through back leverage, securitization, or secondary sale by providing the tools necessary for lenders, ratings agencies, etc. to properly understand a portfolio of residential solar assets. During the project period, Sighten successfully built and scaled a commercially ready tool for the residential solar market. The software solution built by Sighten has been deployed with the key target customer segments identified in the award deliverables: solar installers, solar developers/channel managers, and solar financiers, including lenders. Each of these segments greatly benefits from the availability of the Sighten toolset.

  14. A model-based approach for automated in vitro cell tracking and chemotaxis analyses.

    PubMed

    Debeir, Olivier; Camby, Isabelle; Kiss, Robert; Van Ham, Philippe; Decaestecker, Christine

    2004-07-01

    Chemotaxis may be studied in two main ways: 1) counting cells passing through an insert (e.g., using Boyden chambers), and 2) directly observing cell cultures (e.g., using Dunn chambers), both in response to stationary concentration gradients. This article promotes the use of Dunn chambers and in vitro cell tracking, achieved by video microscopy coupled with automatic image analysis software, in order to extract quantitative and qualitative measurements characterizing the response of cells to a diffusible chemical agent. Previously, we set up a videomicroscopy system coupled with image analysis software that was able to compute cell trajectories from in vitro cell cultures. In the present study, we introduce new software that extends the application field of this system to chemotaxis studies. This software is based on an adapted version of the active contour methodology, enabling each cell to be efficiently tracked for hours and resulting in detailed descriptions of individual cell trajectories. The major advantages of this method come from improved robustness with respect to variability in cell morphologies between different cell lines and dynamic changes in cell shape during cell migration. Moreover, the software includes a very small number of parameters which do not require overly sensitive tuning. Finally, the running time of the software is very short, allowing improved acquisition frequency and, consequently, improved descriptions of complex cell trajectories, i.e., trajectories including cell division and cell crossing. We validated this software on several artificial and real cell culture experiments in Dunn chambers, including comparisons with manual (human-controlled) analyses. We developed new software and data analysis tools for automated cell tracking which enable cell chemotaxis to be efficiently analyzed. Copyright 2004 Wiley-Liss, Inc.

  15. Proceedings of the Second NASA Formal Methods Symposium

    NASA Technical Reports Server (NTRS)

    Munoz, Cesar (Editor)

    2010-01-01

    This publication contains the proceedings of the Second NASA Formal Methods Symposium sponsored by the National Aeronautics and Space Administration and held in Washington D.C. April 13-15, 2010. Topics covered include: Decision Engines for Software Analysis using Satisfiability Modulo Theories Solvers; Verification and Validation of Flight-Critical Systems; Formal Methods at Intel -- An Overview; Automatic Review of Abstract State Machines by Meta Property Verification; Hardware-independent Proofs of Numerical Programs; Slice-based Formal Specification Measures -- Mapping Coupling and Cohesion Measures to Formal Z; How Formal Methods Impels Discovery: A Short History of an Air Traffic Management Project; A Machine-Checked Proof of A State-Space Construction Algorithm; Automated Assume-Guarantee Reasoning for Omega-Regular Systems and Specifications; Modeling Regular Replacement for String Constraint Solving; Using Integer Clocks to Verify the Timing-Sync Sensor Network Protocol; Can Regulatory Bodies Expect Efficient Help from Formal Methods?; Synthesis of Greedy Algorithms Using Dominance Relations; A New Method for Incremental Testing of Finite State Machines; Verification of Faulty Message Passing Systems with Continuous State Space in PVS; Phase Two Feasibility Study for Software Safety Requirements Analysis Using Model Checking; A Prototype Embedding of Bluespec System Verilog in the PVS Theorem Prover; SimCheck: An Expressive Type System for Simulink; Coverage Metrics for Requirements-Based Testing: Evaluation of Effectiveness; Software Model Checking of ARINC-653 Flight Code with MCP; Evaluation of a Guideline by Formal Modelling of Cruise Control System in Event-B; Formal Verification of Large Software Systems; Symbolic Computation of Strongly Connected Components Using Saturation; Towards the Formal Verification of a Distributed Real-Time Automotive System; Slicing AADL Specifications for Model Checking; Model Checking with Edge-valued Decision Diagrams; and Data-flow based Model Analysis.

  16. Software Writing Skills for Your Research - Lessons Learned from Workshops in the Geosciences

    NASA Astrophysics Data System (ADS)

    Hammitzsch, Martin

    2016-04-01

    Findings presented in scientific papers are based on data and software. Once in a while they come along with data - but not commonly with software. However, the software used to obtain the findings plays a crucial role in the scientific work. Nevertheless, software is rarely seen as publishable. Thus researchers may not be able to reproduce the findings without the software, which conflicts with the principle of reproducibility in science. For both the writing of publishable software and the reproducibility issue, the quality of software is of utmost importance. For many programming scientists, the treatment of source code, e.g. with code design, version control, documentation, and testing, is associated with additional work that is not covered in the primary research task. This includes the adoption of processes following the software development life cycle. However, the adoption of software engineering rules and best practices has to be recognized and accepted as part of the scientific performance. Most scientists have little incentive to improve code and do not publish code, because software engineering habits are rarely practised by researchers or students. Software engineering skills are not passed on to followers as paper-writing skills are. Thus it is often felt that the software or code produced is not publishable. The quality of software and its source code has a decisive influence on the quality of the research results obtained and their traceability. So establishing best practices from software engineering to serve scientific needs is crucial for the success of scientific software. Even though scientists use existing software and code, e.g. from open source software repositories, only a few contribute their code back into the repositories. Writing and opening code for Open Science means that subsequent users are able to run the code, e.g. by the provision of sufficient documentation, sample data sets, tests, and comments, which in turn can be proven by adequate and qualified reviews. This assumes that scientists learn to write and release code and software as they learn to write and publish papers. Having this in mind, software could be valued and assessed as a contribution to science. But this requires the relevant skills that can be passed to colleagues and followers. Therefore, the GFZ German Research Centre for Geosciences performed three workshops in 2015 to address the passing of software writing skills to young scientists, the next generation of researchers in the Earth, planetary and space sciences. Experiences in running these workshops and the lessons learned are summarized in this presentation. The workshops received support and funding from Software Carpentry, a volunteer organization whose goal is to make scientists more productive, and their work more reliable, by teaching them basic computing skills, and from FOSTER (Facilitate Open Science Training for European Research), a two-year, EU-funded (FP7) project whose goal is to produce a Europe-wide training programme that will help to incorporate Open Access approaches into existing research methodologies and to integrate Open Science principles and practice into current research workflows, targeting young researchers and other stakeholders.

  17. The unbalanced signal measuring of automotive brake drum

    NASA Astrophysics Data System (ADS)

    Wang, Xiao-Dong; Ye, Sheng-Hua; Zhang, Bang-Cheng

    2005-04-01

    For the purpose of research and development of an automatic balancing system based on mass removal, this paper deals with the measuring method for the unbalance signal, the design of the automatic balancing equipment, and its software. The paper emphasizes the testing system of the balancer for automotive brake drums. A band-pass filter with favorable automatic frequency-tracking capability, filtering effect, and stability is designed. The automatic balancing system, based on mass removal and virtual instrumentation, is designed in this paper, and a lab system has been constructed. The results of comparison experiments indicate the notable effect of single-plane automatic balancing and the high precision of dynamic balance, and demonstrate the application value of the system.

  18. High-energy physics software parallelization using database techniques

    NASA Astrophysics Data System (ADS)

    Argante, E.; van der Stok, P. D. V.; Willers, I.

    1997-02-01

    A programming model for software parallelization, called CoCa, is introduced that copes with problems caused by typical features of high-energy physics software. By basing CoCa on the database transaction paradigm, the complexity induced by the parallelization is to a large extent transparent to the programmer, resulting in a higher level of abstraction than the native message passing software. CoCa is implemented on a Meiko CS-2 and on a SUN SPARCcenter 2000 parallel computer. On the CS-2, the performance is comparable with the performance of native PVM and MPI.

  19. Design and Experiment of Electrooculogram (EOG) System and Its Application to Control Mobile Robot

    NASA Astrophysics Data System (ADS)

    Sanjaya, W. S. M.; Anggraeni, D.; Multajam, R.; Subkhi, M. N.; Muttaqien, I.

    2017-03-01

    In this paper, we design and investigate the detection of a biological signal from eye movements (the electrooculogram). To detect the electrooculogram (EOG) signal, a four-stage amplification chain is used: a differential instrumentation amplifier, a high-pass filter (HPF) with 3 filter stages, a low-pass filter (LPF) with 3 filter stages, and a level-shifter circuit. The total gain is 1000, over a frequency range of 0.5-30 Hz. The OP07 operational amplifier IC is used for all amplification stages. The EOG signal is read as an analog input by an Arduino microprocessor and interfaced over serial communication to a PC monitor using the Processing® software. The results of this research show distinct signal values for different eye movements. These EOG signal differences have been applied to the navigation control of a mobile robot. In this research, all communication is performed over a Bluetooth HC-05 module.
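    A digital analogue of the described 0.5-30 Hz pass band can be sketched with SciPy. This is a hedged illustration only, since the paper's filters are analog hardware; the sampling rate, filter order, and test signal below are assumptions.

    ```python
    # Digital Butterworth band-pass mirroring the 0.5-30 Hz analog chain.
    import numpy as np
    from scipy.signal import butter, filtfilt

    fs = 250.0                                   # assumed sampling rate, Hz
    b, a = butter(N=4, Wn=[0.5, 30.0], btype="bandpass", fs=fs)

    t = np.arange(0, 2.0, 1.0 / fs)
    # Synthetic EOG-like 5 Hz component plus 60 Hz mains interference.
    raw = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)
    clean = filtfilt(b, a, raw)                  # zero-phase filtering
    # Rough indication that out-of-band power was removed:
    print("output/input RMS ratio:", np.round(np.std(clean) / np.std(raw), 3))
    ```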

  20. Test Driven Development of Scientific Models

    NASA Technical Reports Server (NTRS)

    Clune, Thomas L.

    2012-01-01

    Test-Driven Development (TDD) is a software development process that promises many advantages for developer productivity and has become widely accepted among professional software engineers. As the name suggests, TDD practitioners alternate between writing short automated tests and producing code that passes those tests. Although this overly simplified description will undoubtedly sound prohibitively burdensome to many uninitiated developers, the advent of powerful unit-testing frameworks greatly reduces the effort required to produce and routinely execute suites of tests. By many accounts, developers find TDD to be addictive after only a few days of exposure, and find it unthinkable to return to previous practices. Of course, scientific/technical software differs from other software categories in a number of important respects, but I nonetheless believe that TDD is quite applicable to the development of such software and has the potential to significantly improve programmer productivity and code quality within the scientific community. After a detailed introduction to TDD, I will present the experience within the Software Systems Support Office (SSSO) in applying the technique to various scientific applications. This discussion will emphasize the various direct and indirect benefits as well as some of the difficulties and limitations of the methodology. I will conclude with a brief description of pFUnit, a unit testing framework I co-developed to support test-driven development of parallel Fortran applications.
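    A minimal sketch of the red-green cycle with Python's unittest framework follows; the small clamp utility is a hypothetical example, written only after (and only to satisfy) the tests below it.

    ```python
    # TDD sketch: the tests are written first; the function is then the
    # simplest code that makes them pass.
    import unittest

    def clamp(value, lo, hi):
        """Written *after* the tests below, just enough to make them pass."""
        return max(lo, min(hi, value))

    class TestClamp(unittest.TestCase):
        def test_value_inside_range_is_unchanged(self):
            self.assertEqual(clamp(5, 0, 10), 5)

        def test_value_below_range_is_raised_to_lower_bound(self):
            self.assertEqual(clamp(-3, 0, 10), 0)

        def test_value_above_range_is_lowered_to_upper_bound(self):
            self.assertEqual(clamp(42, 0, 10), 10)

    if __name__ == "__main__":
        unittest.main()
    ```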

  1. GLobal Integrated Design Environment

    NASA Technical Reports Server (NTRS)

    Kunkel, Matthew; McGuire, Melissa; Smith, David A.; Gefert, Leon P.

    2011-01-01

    The GLobal Integrated Design Environment (GLIDE) is a collaborative engineering application built to resolve the design session issues of real-time passing of data between multiple discipline experts in a collaborative environment. Utilizing Web protocols and multiple programming languages, GLIDE allows engineers to use the applications to which they are accustomed (in this case, Excel) to send and receive datasets via the Internet to a database-driven Web server. Traditionally, a collaborative design session consists of one or more engineers representing each discipline meeting together in a single location. The discipline leads exchange parameters and iterate through their respective processes to converge on an acceptable dataset. In cases in which the engineers are unable to meet, their parameters are passed via e-mail, telephone, facsimile, or even postal mail. The result of this slow process of data exchange could elongate a design session to weeks or even months. While the iterative process remains in place, software can now exchange parameters securely and efficiently, while at the same time allowing much more information about a design session to be made available. GLIDE is written in a combination of several programming languages, including REALbasic, PHP, and Microsoft Visual Basic. GLIDE client installers are available for download for both Microsoft Windows and Macintosh systems. The GLIDE client software is compatible with Microsoft Excel 2000 or later on Windows systems, and with Microsoft Excel X or later on Macintosh systems. GLIDE follows the client-server paradigm, transferring encrypted and compressed data via standard Web protocols. Currently, the engineers use Excel as a front end to the GLIDE Client, as many of their custom tools run in Excel.

  2. Expert system for the design of heating, ventilating, and air-conditioning systems. Master's thesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Camejo, P.J.

    1989-12-01

    Expert systems are computer programs that seek to mimic human reason. An expert system shell, a software program commonly used for developing expert systems in a relatively short time, was used to develop a prototypical expert system for the design of heating, ventilating, and air-conditioning (HVAC) systems in buildings. Because HVAC design involves several related knowledge domains, developing an expert system for HVAC design requires the integration of several smaller expert systems known as knowledge bases. A menu program and several auxiliary programs for gathering data, completing calculations, printing project reports, and passing data between the knowledge bases are needed and have been developed to join the separate knowledge bases into one simple-to-use program unit.

  3. Simplifying and speeding the management of intra-node cache coherence

    DOEpatents

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton on Hudson, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Phillip [Cortlandt Manor, NY; Hoenicke, Dirk [Ossining, NY; Ohmacht, Martin [Yorktown Heights, NY

    2012-04-17

    A method and apparatus for managing coherence between two processors of a two-processor node of a multi-processor computer system. Generally the present invention relates to a software algorithm that simplifies and significantly speeds the management of cache coherence in a message-passing parallel computer, and to hardware apparatus that assists this cache coherence algorithm. The software algorithm uses the opening and closing of put/get windows to coordinate the activities required to achieve cache coherence. The hardware apparatus may be an extension to the hardware address decode that creates, in the physical memory address space of the node, an area of virtual memory that (a) does not actually exist, and (b) is therefore able to respond instantly to read and write requests from the processing elements.

  4. Managing coherence via put/get windows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blumrich, Matthias A; Chen, Dong; Coteus, Paul W

    A method and apparatus for managing coherence between two processors of a two-processor node of a multi-processor computer system. Generally the present invention relates to a software algorithm that simplifies and significantly speeds the management of cache coherence in a message-passing parallel computer, and to hardware apparatus that assists this cache coherence algorithm. The software algorithm uses the opening and closing of put/get windows to coordinate the activities required to achieve cache coherence. The hardware apparatus may be an extension to the hardware address decode that creates, in the physical memory address space of the node, an area of virtual memory that (a) does not actually exist, and (b) is therefore able to respond instantly to read and write requests from the processing elements.

  5. Description of real-time Ada software implementation of a power system monitor for the Space Station Freedom PMAD DC testbed

    NASA Technical Reports Server (NTRS)

    Ludwig, Kimberly; Mackin, Michael; Wright, Theodore

    1991-01-01

    The authors describe the Ada language software developed to perform the electrical power system monitoring functions for the NASA Lewis Research Center's Power Management and Distribution (PMAD) DC testbed. The results of the effort to implement this monitor are presented. The PMAD DC testbed is a reduced-scale prototype of the electric power system to be used in Space Station Freedom. The power is controlled by smart switches known as power control components (or switchgear). The power control components are currently coordinated by five Compaq 386/20e computers connected through an 802.4 local area network. The power system monitor algorithm comprises several functions, including periodic data acquisition, data smoothing, system performance analysis, and status reporting. Data are collected from the switchgear sensors every 100 ms, then passed through a 2-Hz digital filter. System performance analysis includes power interruption and overcurrent detection. The system monitor required a hardware timer interrupt to activate the data acquisition function. The execution time of the code was optimized by using an assembly language routine. The routine allows direct vectoring of the processor to Ada language procedures that perform periodic control activities.
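    The smoothing-then-detect loop can be sketched generically; the first-order IIR filter below merely stands in for the monitor's 2-Hz digital filter, and the coefficients, threshold, and sample values are illustrative assumptions.

    ```python
    # Hedged sketch of a monitor loop: samples arriving every 100 ms pass
    # through a simple low-pass filter before an overcurrent check.
    def smooth_and_check(samples, alpha=0.5, limit_amps=50.0):
        filtered, alarms = [], []
        y = samples[0]
        for i, x in enumerate(samples):
            y = y + alpha * (x - y)          # first-order low-pass smoothing
            filtered.append(y)
            if y > limit_amps:
                alarms.append((i * 0.1, round(y, 1)))  # seconds at 100 ms/sample
        return filtered, alarms

    _, alarms = smooth_and_check([30, 31, 90, 95, 92, 33, 32])
    print("overcurrent events (t_s, amps):", alarms)
    ```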

  6. Precious "MeTL": Reflections on the Use of Tablet PCs and Collaborative Interactive Software in Peer- Assisted Study Sessions

    ERIC Educational Resources Information Center

    Devey, Adrian; Hicks, Marianne; Gunaratnam, Shaminka; Pan, Yijun; Plecan, Alexandru

    2012-01-01

    Peer-Assisted Study Sessions (PASS) is an academic mentoring program, where high achieving senior students assist small groups of first years in study sessions throughout the semester. One of the challenges PASS Leaders face at Monash in conducting their classes is the limited time they have with their students. The current paper explores, through…

  7. LHCb Kalman Filter cross architecture studies

    NASA Astrophysics Data System (ADS)

    Cámpora Pérez, Daniel Hugo

    2017-10-01

    The 2020 upgrade of the LHCb detector will vastly increase the rate of collisions the Online system needs to process in software, in order to filter events in real time. 30 million collisions per second will pass through a selection chain, where each step is executed conditionally on its prior acceptance. The Kalman Filter is a fit applied to all reconstructed tracks which, due to its time characteristics and early execution in the selection chain, consumes 40% of the whole reconstruction time in the current trigger software. This makes the Kalman Filter a time-critical component as the LHCb trigger evolves into a full software trigger in the Upgrade. I present a new Kalman Filter algorithm for LHCb that can efficiently make use of any kind of SIMD processor, and its design is explained in depth. Performance benchmarks are compared across a variety of hardware architectures, including x86_64, Power8, and the Intel Xeon Phi accelerator, and the suitability of these architectures for efficiently performing the LHCb reconstruction process is determined.
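    For readers unfamiliar with the fit, a textbook Kalman predict/update step is sketched below with NumPy. This is a generic illustration, not the vectorized LHCb implementation, and the toy track model is an assumption.

    ```python
    import numpy as np

    # Textbook Kalman filter predict/update step: state x with covariance P,
    # transition F, process noise Q, measurement z with model H and noise R.
    def kalman_step(x, P, F, Q, z, H, R):
        # Predict
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        # Update
        S = H @ P_pred @ H.T + R                 # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
        x_new = x_pred + K @ (z - H @ x_pred)
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new

    # Toy 1D "track": state = [position, slope], one position measurement.
    F = np.array([[1.0, 1.0], [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])
    x, P = np.zeros(2), np.eye(2)
    x, P = kalman_step(x, P, F, Q=1e-4 * np.eye(2),
                       z=np.array([1.2]), H=H, R=np.array([[0.1]]))
    print("updated state:", x)
    ```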

  8. Software solutions manage the definition, operation, maintenance and configuration control of the National Ignition Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dobson, D; Churby, A; Krieger, E

    2011-07-25

    The National Ignition Facility (NIF) is the world's largest laser composed of millions of individual parts brought together to form one massive assembly. Maintaining control of the physical definition, status and configuration of this structure is a monumental undertaking yet critical to the validity of the shot experiment data and the safe operation of the facility. The NIF business application suite of software provides the means to effectively manage the definition, build, operation, maintenance and configuration control of all components of the National Ignition Facility. State of the art Computer Aided Design software applications are used to generate a virtual model and assemblies. Engineering bills of material are controlled through the Enterprise Configuration Management System. This data structure is passed to the Enterprise Resource Planning system to create a manufacturing bill of material. Specific parts are serialized then tracked along their entire lifecycle providing visibility to the location and status of optical, target and diagnostic components that are key to assessing pre-shot machine readiness. Nearly forty thousand items requiring preventive, reactive and calibration maintenance are tracked through the System Maintenance & Reliability Tracking application to ensure proper operation. Radiological tracking applications ensure proper stewardship of radiological and hazardous materials and help provide a safe working environment for NIF personnel.

  9. General, database-driven fast-feedback system for the Stanford Linear Collider

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rouse, F.; Allison, S.; Castillo, S.

    A new feedback system has been developed for stabilizing the SLC beams at many locations. The feedback loops are designed to sample and correct at the 60 Hz repetition rate of the accelerator. Each loop can be distributed across several of the standard 80386 microprocessors which control the SLC hardware. A new communications system, KISNet, has been implemented to pass signals between the microprocessors at this rate. The software is written in a general fashion using the state space formalism of digital control theory. This allows a new loop to be implemented by just setting up the online database and perhaps installing a communications link. 3 refs., 4 figs.
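    The state-space formalism mentioned can be illustrated with a hedged NumPy sketch: at each pulse the loop measures the state x and applies feedback u = -Kx. The matrices and gain below are illustrative, not values from the SLC online database.

    ```python
    import numpy as np

    # Generic discrete state-space feedback at the machine repetition rate.
    A = np.array([[1.0, 0.1], [0.0, 1.0]])   # toy beam drift model
    B = np.array([[0.0], [1.0]])             # toy corrector response
    K = np.array([[0.8, 1.5]])               # illustrative feedback gain

    x = np.array([[2.0], [0.5]])             # initial position/angle error
    for pulse in range(5):
        u = -K @ x                           # control computed from the state
        x = A @ x + B @ u                    # state at the next 60 Hz pulse
        print(f"pulse {pulse}: position error = {x[0, 0]:+.3f}")
    ```

    In a database-driven design like the one described, the A, B, and K entries would be loop configuration data rather than hard-coded constants, which is what allows a new loop to be added without new code.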

  10. The ALICE Software Release Validation cluster

    NASA Astrophysics Data System (ADS)

    Berzano, D.; Krzewicki, M.

    2015-12-01

    One of the most important steps of the software lifecycle is Quality Assurance: this process comprises both automatic tests and manual reviews, and all of them must pass successfully before the software is approved for production. Some tests, such as source code static analysis, are executed on a single dedicated service; in High Energy Physics, a full simulation and reconstruction chain on a distributed computing environment, backed with a sample “golden” dataset, is also necessary for the quality sign-off. The ALICE experiment uses dedicated and virtualized computing infrastructures for the Release Validation in order not to taint the production environment (i.e. CVMFS and the Grid) with non-validated software and validation jobs: the ALICE Release Validation cluster is a disposable virtual cluster appliance based on CernVM and the Virtual Analysis Facility, capable of deploying on demand, with a single command, a dedicated virtual HTCondor cluster with an automatically scalable number of virtual workers on any cloud supporting the standard EC2 interface. Input and output data are externally stored on EOS, and a dedicated CVMFS service is used to provide the software to be validated. We will show how the Release Validation Cluster deployment and disposal are completely transparent for the Release Manager, who simply triggers the validation from the ALICE build system's web interface. CernVM 3, based entirely on CVMFS, makes it possible to boot any snapshot of the operating system in time: we will show how this allows us to certify each ALICE software release for an exact CernVM snapshot, addressing the problem of Long-Term Data Preservation by ensuring a consistent environment for software execution and data reprocessing in the future.

  11. Secure UNIX socket-based controlling system for high-throughput protein crystallography experiments.

    PubMed

    Gaponov, Yurii; Igarashi, Noriyuki; Hiraki, Masahiko; Sasajima, Kumiko; Matsugaki, Naohiro; Suzuki, Mamoru; Kosuge, Takashi; Wakatsuki, Soichi

    2004-01-01

    A control system for high-throughput protein crystallography experiments has been developed based on multilevel secure (SSL v2/v3) UNIX sockets under the Linux operating system. The main stages of protein crystallography experiments (purification, crystallization, loop preparation, data collection, data processing) are dealt with by the software. All information necessary to perform protein crystallography experiments is stored in a relational database (MySQL), except the raw X-ray data, which are stored on a Network File Server. The system consists of several servers and clients. TCP/IP secure UNIX sockets with four predefined behaviors [(a) listening to a request followed by a reply, (b) sending a request and waiting for a reply, (c) listening to a broadcast message, and (d) sending a broadcast message] support communications between all servers and clients, allowing one to control experiments, view data, edit experimental conditions, and perform data processing remotely. The interface software is well suited for developing well-organized control software with a hierarchical structure of different software units (Gaponov et al., 1998) that pass and receive different types of information. All communication is divided into two parts: low and top levels. Large and complicated control tasks are split into several smaller ones, which can be processed by control clients independently. For communicating with experimental equipment (beamline optical elements, robots, specialized experimental equipment, etc.), the STARS server, developed at the Photon Factory, is used (Kosuge et al., 2002). The STARS server allows any application with an open socket to be connected with any other clients that control experimental equipment. The majority of the source code is written in C/C++. GUI modules of the system were built mainly using the Glade user interface builder for GTK+ and GNOME under the Red Hat Linux 7.1 operating system.
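    Behavior (b), sending a request and waiting for a reply, can be sketched with Python's modern ssl module. This is a hedged illustration only (the original system used SSL v2/v3 C sockets), and the host, port, and line protocol below are hypothetical.

    ```python
    # Minimal TLS request/reply client illustrating behavior (b).
    import socket
    import ssl

    context = ssl.create_default_context()

    def request_reply(host, port, request: bytes) -> bytes:
        with socket.create_connection((host, port)) as raw:
            with context.wrap_socket(raw, server_hostname=host) as tls:
                tls.sendall(request + b"\n")   # send the request...
                return tls.recv(4096)          # ...and block until the reply

    # Hypothetical usage against an imagined beamline control server:
    # reply = request_reply("beamline-ctl.example.org", 9443, b"GET crystal_status")
    ```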

  12. Initial experience of ArcCHECK and 3DVH software for RapidArc treatment plan verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Infusino, Erminia; Mameli, Alessandra, E-mail: e.infusino@unicampus.it; Conti, Roberto

    2014-10-01

    The purpose of this study was to perform delivery quality assurance with the ArcCHECK and 3DVH system (Sun Nuclear, FL) and to evaluate the suitability of this system for volumetric-modulated arc therapy (VMAT) (RapidArc [RA]) verification. This software calculates the delivered dose distributions in patients by perturbing the calculated dose using errors detected in fluence or planar dose measurements. The device was tested to correlate the gamma passing rate (%GP) with the composite dose predicted by the 3DVH software. A total of 28 patients with prostate cancer who were treated with RA were analyzed. RA treatments were delivered to a diode array phantom (ArcCHECK), which was used to create a planned dose perturbation (PDP) file. The 3DVH analysis used the dose differences derived from comparing the measured dose with the treatment planning system (TPS)-calculated doses to perturb the initial TPS-calculated dose. The 3DVH then overlays the resultant dose on the patient's structures using the resultant "PDP" beams. Measured dose distributions were compared with the calculated ones using the gamma index (GI) method, applying global (Van Dyk) normalization and acceptance criteria of 3%/3 mm. Paired-differences tests were used to estimate the statistical significance of the differences between the composite dose calculated using 3DVH and %GP, and statistical correlation was analyzed by means of logistic regression. Dose-volume histogram (DVH) analysis for patient plans revealed small differences between treatment plan calculations and 3DVH results for organs at risk (OAR), whereas the planning target volume (PTV) dose of the measured plan was systematically higher than that predicted by the TPS. The t-test results between the planned and the estimated DVH values showed that the mean values differed significantly (p < 0.05). The quality assurance (QA) gamma analysis at 3%/3 mm showed that in all cases there were only weak-to-moderate correlations (Pearson r: 0.12 to 0.74). Moreover, clinically relevant differences increased with increasing QA passing rate, indicating that some of the largest dose differences occurred in cases with high QA passing rates, which may be called "false negatives." The clinical importance of any disagreement between the measured and the calculated dose is often difficult to interpret; however, beam errors (either in delivery or in TPS calculation) can affect the effectiveness of the patient dose. Further research is needed to determine the role of a PDP-type algorithm in accurately estimating the patient dose effect.
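
    For reference, the gamma index combines dose difference (DD) and distance to agreement (DTA) into a single metric; a point passes when γ ≤ 1. A simplified 1D sketch with the study's 3%/3 mm criteria and global (Van Dyk) normalization (commercial tools such as 3DVH operate on 3D dose grids; this is illustrative only):

```python
import numpy as np

def gamma_1d(ref_dose, eval_dose, spacing_mm, dd_pct=3.0, dta_mm=3.0):
    """Gamma index for two 1D dose profiles sampled on the same grid."""
    dd_norm = dd_pct / 100.0 * ref_dose.max()       # global (Van Dyk) normalization
    positions = np.arange(len(ref_dose)) * spacing_mm
    gammas = np.empty(len(ref_dose))
    for i, (x_ref, d_ref) in enumerate(zip(positions, ref_dose)):
        dist_term = ((positions - x_ref) / dta_mm) ** 2
        dose_term = ((eval_dose - d_ref) / dd_norm) ** 2
        gammas[i] = np.sqrt((dist_term + dose_term).min())  # minimum over all points
    return gammas

ref = np.array([1.0, 2.0, 5.0, 9.0, 10.0, 9.0, 5.0, 2.0, 1.0])
ev = ref * 1.02                                 # a uniform 2% overdose
g = gamma_1d(ref, ev, spacing_mm=1.0)
print(f"%GP = {100 * (g <= 1).mean():.1f}%")    # fraction of points with gamma <= 1
```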

  13. Description of a MIL-STD-1553B Data Bus Ada Driver for the LeRC EPS Testbed

    NASA Technical Reports Server (NTRS)

    Mackin, Michael A.

    1995-01-01

    This document describes the software designed to provide communication between control computers in the NASA Lewis Research Center Electrical Power System Testbed using MIL-STD-1553B. The software drivers are coded in the Ada programming language and were developed on an MS-DOS-based computer workstation. The Electrical Power System (EPS) Testbed is a reduced-scale prototype space station electrical power system. The power system manages and distributes electrical power from the sources (batteries or photovoltaic arrays) to the end-user loads. The primary electrical system operates at 120 volts DC, and the secondary system operates at 28 volts DC. The devices which direct the flow of electrical power are controlled by a network of six control computers. Data and control messages are passed between the computers using the MIL-STD-1553B network. One of the computers, the Power Management Controller (PMC), controls the primary power distribution and another, the Load Management Controller (LMC), controls the secondary power distribution. Each of these computers communicates with two other computers which act as subsidiary controllers. These subsidiary controllers are, in turn, connected to the devices which directly control the flow of electrical power.

  14. Phase elements by means of a photolithographic system employing a spatial light modulator

    NASA Astrophysics Data System (ADS)

    Aubrecht, Ivo; Miler, Miroslav; Pala, Jan

    2003-07-01

    The system employs a spatial light modulator (SLM), between a pair of crossed polarizers, and an electronic shutter. Transmission of the SLM with the polarizers is controlled by graphical software that defines which pixels are fully transparent and which are fully opaque. While a particular binary graphic is displayed on the SLM, the electronic shutter allows light to pass for a set time. The graphic is imaged by an objective onto a photoresist plate. A mercury lamp is used as the light source. The graphic changes after each exposure, and the whole sequence of images determines the resultant surface-relief modulation.

  15. The contaminant analysis automation robot implementation for the automated laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Younkin, J.R.; Igou, R.E.; Urenda, T.D.

    1995-12-31

    The Contaminant Analysis Automation (CAA) project defines the automated laboratory as a series of standard laboratory modules (SLM) serviced by a robotic standard support module (SSM). These SLMs are designed to allow plug-and-play integration into automated systems that perform standard analysis methods (SAM). While the SLMs are autonomous in the execution of their particular chemical processing task, the SAM concept relies on a high-level task sequence controller (TSC) to coordinate the robotic delivery of materials requisite for SLM operations, initiate an SLM operation with the chemical-method-dependent operating parameters, and coordinate the robotic removal of materials from the SLM when its operations are complete, readying them for transport. The Supervisor and Subsystems (GENISAS) software governs events from the SLMs and the robot. The Intelligent System Operating Environment (ISOE) enables the inter-process communications used by GENISAS. CAA selected the Hewlett-Packard Optimized Robot for Chemical Analysis (ORCA) and its associated Windows-based Methods Development Software (MDS) as the robot SSM. The MDS software is used to teach the robot each SLM position and the required material port motions. To allow the TSC to command these SLM motions, a hardware and software implementation was required that allowed message passing between different operating systems. This implementation used a VME rack with a Force CPU-30 computer running VxWorks, a real-time multitasking operating system, and a RadiSys PC-compatible VME computer running MDS. A GENISAS server on the Force computer accepts a transport command from the TSC, a GENISAS supervisor, over Ethernet and notifies software on the RadiSys PC of the pending command through VMEbus shared memory. The command is then delivered to the MDS robot control software using a Windows Dynamic Data Exchange conversation.

  16. Recommended Practices for Interactive Video Portability

    DTIC Science & Technology

    1990-10-01

    Commands are passed via an ASCII or binary application interface to the Virtual Device Interface (VDI) Management Software. VDI Management, in turn, executes the commands by calling appropriate low-level services and passes responses back to the application via the application interface.

  17. The GOAL-to-HAL/S translator specification. [for space shuttle

    NASA Technical Reports Server (NTRS)

    Stanten, S. F.; Flanders, J. H.

    1973-01-01

    The specification sets forth a technical framework within which to deal with the transfer of specific GOAL features to HAL/S. Key technical features of the translator are described that communicate with the data bank, handle repeat statements, and deal with software interrupts. GOAL programs, databank information, and GOAL system subroutines are integrated into one GOAL-in-HAL/S output. This output is fully compatible HAL/S source, ready for insertion into the HAL/S compiler. The translator uses PASS1 to establish all the global data needed for the HAL/S output program; individual GOAL statements are translated in PASS2. The specification document makes extensive use of flowcharts to specify exactly how each variation of each GOAL statement is to be translated. The specification also deals with definitions and assumptions, executive support structure, and implementation. An appendix, entitled GOAL-to-HAL Mapping, provides examples of translated GOAL statements.
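
    The two-pass structure is the essence of the design: PASS1 walks the GOAL source once to collect global data, and PASS2 emits translated statements against that symbol table. A toy sketch of the idea in Python (the GOAL statement forms and HAL/S output below are simplified placeholders, not the actual grammars):

```python
def pass1(goal_lines):
    """PASS1: walk the source once and collect the global data declarations."""
    symbols = {}
    for line in goal_lines:
        if line.startswith("DECLARE"):
            _, name, gtype = line.split()
            symbols[name] = gtype
    return symbols

def pass2(goal_lines, symbols):
    """PASS2: translate individual statements using the PASS1 symbol table."""
    type_map = {"NUMERIC": "SCALAR", "TEXT": "CHARACTER(80)"}   # placeholder mapping
    out = [f"DECLARE {n} {type_map.get(t, t)};" for n, t in symbols.items()]
    for line in goal_lines:
        if line.startswith("SET"):
            _, name, _, value = line.split()      # SET <name> TO <value>
            out.append(f"{name} = {value};")
    return out

src = ["DECLARE COUNT NUMERIC", "SET COUNT TO 5"]
print("\n".join(pass2(src, pass1(src))))
```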

  18. Orion Optical Navigation Progress Toward Exploration: Mission 1

    NASA Technical Reports Server (NTRS)

    Holt, Greg N.; D'Souza, Christopher N.; Saley, David

    2018-01-01

    Optical navigation of human spacecraft was proposed on Gemini and implemented successfully on Apollo as a means of autonomously operating the vehicle in the event of lost communication with controllers on Earth. It shares a history with the "method of lunar distances" that was used in the 18th century and gained some notoriety after its use by Captain James Cook during his 1768 Pacific voyage of the HMS Endeavour. The Orion emergency return system utilizing optical navigation has matured in design over the last several years and is currently undergoing the final implementation and test phase in preparation for Exploration Mission 1 (EM-1) in 2019. The software is being developed as a Government Furnished Equipment (GFE) project delivered as an application within the Core Flight Software of the Orion camera controller module. The mathematical formulation behind the initial ellipse fit in the image processing is detailed in Christian. The non-linear least squares refinement then follows the technique of Mortari as an estimation process of the planetary limb using the sigmoid function. The Orion optical navigation system uses a body-fixed camera, a decision that was driven by mass and mechanism constraints. The general concept of operations involves a 2-hour pass once every 24 hours, with passes specifically placed before all maneuvers to supply accurate navigation information to guidance and targeting. The pass lengths are limited by thermal constraints on the vehicle, since the OpNav attitude generally deviates from the thermally stable tail-to-sun attitude maintained during the rest of the orbit coast phase. Calibration is scheduled prior to every pass due to the unknown nature of thermal effects on the lens distortion and the mounting platform deformations between the camera and star trackers. The calibration technique, described in detail by Christian et al., simultaneously estimates the Brown-Conrady coefficients and the star tracker/camera interlock angles. Accurate attitude information is provided by the star trackers during each pass. Figure 1 shows the various phases of lunar return navigation when the vehicle is in autonomous operation with lost ground communication. The midcourse maneuvers are placed to control the entry interface conditions to the desired corridor for safe landing. The general form of optical navigation on Orion is that still images of the Moon or Earth are processed to find the apparent angular diameter and centroid in the camera focal plane. This raw data is transformed into range and bearing angle measurements using planetary data and precise star tracker inertial attitude. The measurements are then sent to the main flight computer's Kalman filter to update the onboard state vector. The images are, of course, collected over an arc to converge the state and estimate velocity. The same basic technique was used by Apollo to satisfy loss-of-comm requirements, but Apollo used manual crew sightings with a vehicle-integral sextant instead of autonomously processing optical imagery. The software development is past its Critical Design Review and is progressing through test and certification for human rating. In support of this, a hardware-in-the-loop test rig was developed in the Johnson Space Center Electro-Optics Lab to exercise the OpNav system prior to integrated testing on the Orion vehicle. Figure 2 shows the rig, which the test team has dubbed OCILOT (Orion Camera In the Loop Optical Testbed). Analysis performed to date shows a delivery that satisfies an allowable entry corridor as shown in Figure 3.
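
    The geometry behind turning an apparent angular diameter into a range measurement is compact: a body of known radius R subtending an angle θ lies at range R / sin(θ/2). A quick numerical sketch (illustrative values, not flight code):

```python
import math

MOON_RADIUS_KM = 1737.4   # mean lunar radius

def range_from_angular_diameter(theta_rad, body_radius_km):
    """Range to the body center given its apparent angular diameter (radians)."""
    return body_radius_km / math.sin(theta_rad / 2.0)

# A lunar disk subtending ~0.52 deg corresponds to roughly the Earth-Moon distance.
theta = math.radians(0.52)
print(f"range = {range_from_angular_diameter(theta, MOON_RADIUS_KM):,.0f} km")
```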

  19. The design of preamplifier and ADC circuit base on weak e-optical signal

    NASA Astrophysics Data System (ADS)

    Fen, Leng; Ying-ping, Yang; Ya-nan, Yu; Xiao-ying, Xu

    2011-02-01

    To meet the demands of weak electro-optical signal processing in a QPD (quadrant photodiode) detection system, this article introduces the design of a preamplifier and ADC circuit comprising I/V conversion, an instrumentation amplifier, a low-pass filter, and 16-bit A/D conversion. The article also discusses the circuit's noise suppression and isolation in light of the characteristics of weak signals, and gives a method of software correction. Finally, the weak-signal measurements were checked against a Keithley 2000 multimeter, with good results.
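
    The signal-chain arithmetic behind such a front end is straightforward: the I/V (transimpedance) stage gives V = I·R_f, and a first-order RC low-pass rolls off at f_c = 1/(2πRC). A sketch with hypothetical component values (the paper does not state its actual values):

```python
import math

# Hypothetical component values -- the paper does not give its actual ones.
R_FEEDBACK = 1e6      # 1 Mohm transimpedance feedback resistor
R_FILTER   = 10e3     # 10 kohm low-pass resistor
C_FILTER   = 100e-9   # 100 nF low-pass capacitor

photocurrent = 50e-9                    # 50 nA from one photodiode quadrant
v_out = photocurrent * R_FEEDBACK       # I/V conversion: V = I * Rf = 50 mV

f_cutoff = 1 / (2 * math.pi * R_FILTER * C_FILTER)   # ~159 Hz first-order cutoff

lsb = 4.096 / 2**16   # one 16-bit ADC code step over a hypothetical 4.096 V span

print(f"V_out = {v_out * 1e3:.1f} mV, f_c = {f_cutoff:.0f} Hz, "
      f"1 LSB = {lsb * 1e6:.1f} uV")
```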

  20. Exploring and validating physicochemical properties of mangiferin through GastroPlus® software

    PubMed Central

    Khurana, Rajneet Kaur; Kaur, Ranjot; Kaur, Manninder; Kaur, Rajpreet; Kaur, Jasleen; Kaur, Harpreet; Singh, Bhupinder

    2017-01-01

    Aim: Mangiferin (Mgf), a promising therapeutic polyphenol, exhibits poor oral bioavailability. Hence, apt delivery systems are required to facilitate its gastrointestinal absorption. The requisite details on its physicochemical properties have not yet been well documented in the literature. Accordingly, in order to have explicit insight into its physicochemical characteristics, the present work was undertaken using GastroPlus™ software. Results: Aqueous solubility (0.38 mg/ml), log P (-0.65), Peff (0.16 × 10⁻⁴ cm/s) and the ability to act as a P-gp substrate were defined. Potency to act as a P-gp substrate was verified through Caco-2 cells, while Peff was estimated through single-pass intestinal perfusion studies. Characterization of Mgf through transmission electron microscopy, differential scanning calorimetry, infrared spectroscopy and powder X-ray diffraction has also been reported. Conclusion: The values of the physicochemical properties of Mgf reported in the current manuscript will enable researchers to develop newer delivery systems for Mgf. PMID:28344830

  1. ALMA software architecture

    NASA Astrophysics Data System (ADS)

    Schwarz, Joseph; Raffi, Gianni

    2002-12-01

    The Atacama Large Millimeter Array (ALMA) is a joint project involving astronomical organizations in Europe and North America. ALMA will consist of at least 64 12-meter antennas operating in the millimeter and sub-millimeter range. It will be located at an altitude of about 5000m in the Chilean Atacama desert. The primary challenge to the development of the software architecture is the fact that both its development and runtime environments will be distributed. Groups at different institutes will develop the key elements such as Proposal Preparation tools, Instrument operation, On-line calibration and reduction, and Archiving. The Proposal Preparation software will be used primarily at scientists' home institutions (or on their laptops), while Instrument Operations will execute on a set of networked computers at the ALMA Operations Support Facility. The ALMA Science Archive, itself to be replicated at several sites, will serve astronomers worldwide. Building upon the existing ALMA Common Software (ACS), the system architects will prepare a robust framework that will use XML-encoded entity objects to provide an effective solution to the persistence needs of this system, while remaining largely independent of any underlying DBMS technology. Independence of distributed subsystems will be facilitated by an XML- and CORBA-based pass-by-value mechanism for exchange of objects. Proof of concept (as well as a guide to subsystem developers) will come from a prototype whose details will be presented.

  2. SpaceWire Driver Software for Special DSPs

    NASA Technical Reports Server (NTRS)

    Clark, Douglas; Lux, James; Nishimoto, Kouji; Lang, Minh

    2003-01-01

    A computer program provides a high-level C-language interface to electronics circuitry that controls a SpaceWire interface in a system based on a space-qualified version of the ADSP-21020 digital signal processor (DSP). SpaceWire is a spacecraft-oriented standard for packet-switching data-communication networks that comprise nodes connected through bidirectional digital serial links that utilize low-voltage differential signaling (LVDS). The software is tailored to the SMCS-332 application-specific integrated circuit (ASIC) (also available as the TSS901E), which provides three high-speed (150 Mbps) serial point-to-point links compliant with the proposed Institute of Electrical and Electronics Engineers (IEEE) Standard 1355.2 and the equivalent European Space Agency (ESA) Standard ECSS-E-50-12. In the specific application of this software, the SpaceWire ASIC was combined with the DSP processor, memory, and control logic in a Multi-Chip Module DSP (MCM-DSP). The software is a collection of low-level driver routines that provide a simple message-passing application programming interface (API) for software running on the DSP. Routines are provided for interrupt-driven access to the two styles of interface provided by the SMCS: (1) the "word at a time" conventional host interface (HOCI); and (2) a higher performance "dual port memory" style interface (COMI).

  3. An automated system for performing continuous viscosity versus temperature measurements of fluids using an Ostwald viscometer

    NASA Astrophysics Data System (ADS)

    Beaulieu, L. Y.; Logan, E. R.; Gering, K. L.; Dahn, J. R.

    2017-09-01

    An automated system was developed to measure the viscosity of fluids as a function of temperature using image analysis tracking software. An Ostwald viscometer was placed in a three-wall dewar in which ethylene glycol was circulated using a thermal bath. The system collected continuous measurements during both heating and cooling cycles exhibiting no hysteresis. The use of video tracking analysis software greatly reduced the measurement errors associated with measuring the time required for the meniscus to pass through the markings on the viscometer. The stability of the system was assessed by performing 38 consecutive measurements of water at 42.50 ± 0.05 °C giving an average flow time of 87.7 ± 0.3 s. A device was also implemented to repeatedly deliver a constant volume of liquid of 11.00 ± 0.03 ml leading to an average error in the viscosity of 0.04%. As an application, the system was used to measure the viscosity of two Li-ion battery electrolyte solvents from approximately 10 to 40 °C with results showing excellent agreement with viscosity values calculated using Gering's Advanced Electrolyte Model (AEM).

  4. Decentralized Formation Flying Control in a Multiple-Team Hierarchy

    NASA Technical Reports Server (NTRS)

    Mueller, Joseph; Thomas, Stephanie J.

    2005-01-01

    This paper presents the prototype of a system that addresses these objectives: a decentralized guidance and control system that is distributed across spacecraft using a multiple-team framework. The objective is to divide large clusters into teams of manageable size, so that the communication and computational demands driven by N decentralized units are related to the number of satellites in a team rather than the entire cluster. The system is designed to provide a high level of autonomy, to support clusters with large numbers of satellites, to enable the number of spacecraft in the cluster to change post-launch, and to provide for on-orbit software modification. The distributed guidance and control system will be implemented in an object-oriented style using MANTA (Messaging Architecture for Networking and Threaded Applications). In this architecture, tasks may be remotely added, removed or replaced post-launch to increase mission flexibility and robustness. This built-in adaptability will allow software modifications to be made on-orbit in a robust manner. The prototype system, which is implemented in MATLAB, emulates the object-oriented and message-passing features of the MANTA software. In this paper, the multiple-team organization of the cluster is described, and the modular software architecture is presented. The relative dynamics in eccentric reference orbits is reviewed, and families of periodic, relative trajectories are identified, expressed as sets of static geometric parameters. The guidance law design is presented, and an example reconfiguration scenario is used to illustrate the distributed process of assigning geometric goals to the cluster. Next, a decentralized maneuver planning approach is presented that utilizes linear-programming methods to enact reconfiguration and coarse formation keeping maneuvers. Finally, a method for performing online collision avoidance is discussed, and an example is provided to gauge its performance.

  5. CDC WONDER: a cooperative processing architecture for public health.

    PubMed Central

    Friede, A; Rosen, D H; Reid, J A

    1994-01-01

    CDC WONDER is an information management architecture designed for public health. It provides access to information and communications without the user's needing to know the location of data or communication pathways and mechanisms. CDC WONDER users have access to extractions from some 40 databases; electronic mail (e-mail); and surveillance data processing. System components include the Remote Client, the Communications Server, the Queue Managers, and Data Servers and Process Servers. The Remote Client software resides in the user's machine; other components are at the Centers for Disease Control and Prevention (CDC). The Remote Client, the Communications Server, and the Applications Server provide access to the information and functions in the Data Servers and Process Servers. The system architecture is based on cooperative processing, and components are coupled via pure message passing, using several protocols. This architecture allows flexibility in the choice of hardware and software. One system limitation is that final results from some subsystems are obtained slowly. Although designed for public health, CDC WONDER could be useful for other disciplines that need flexible, integrated information exchange. PMID:7719813

  6. Designing an autonomous environment for mission critical operation of the EUVE satellite

    NASA Technical Reports Server (NTRS)

    Abedini, Annadiana; Malina, Roger F.

    1994-01-01

    Since the launch of NASA's Extreme Ultraviolet Explorer (EUVE) satellite in 1992, there have been only a handful of occurrences that warranted manual intervention in the EUVE Science Operations Center (ESOC). So, in an effort to reduce costs, the current environment is being redesigned to utilize a combination of off-the-shelf packages and recently developed artificial intelligence (AI) software to automate the monitoring of the science payload and ground systems. The successful implementation of systemic automation would allow the ESOC to evolve from a seven day/week, three shift operation to a seven day/week, one shift operation. First, it was necessary to identify all areas considered mission critical. These were defined as follows: (1) The telemetry stream must be monitored autonomously and anomalies identified. (2) Duty personnel must be automatically paged and informed of the occurrence of an anomaly. (3) The 'basic' state of the ground system must be assessed. (4) Monitors should check that the systems and processes needed to continue in a 'healthy' operational mode are working at all times. (5) Network loads should be monitored to ensure that they stay within established limits. (6) Connectivity to Goddard Space Flight Center (GSFC) systems should be monitored as well, not just for connectivity of the network itself but also for the ability to transfer files. (7) All necessary peripheral devices should be monitored. This would include the disks, routers, tape drives, printers, tape carousel, and power supplies. (8) System daemons such as the archival daemon, the Sybase server, the payload monitoring software, and any other necessary processes should be monitored to ensure that they are operational. (9) The monitoring system needs to be redundant so that the failure of a single machine will not paralyze the monitors. (10) Notification should be done by looking through a table of the pager numbers for current 'on call' personnel. The software should be capable of dialing out to notify, sending email, and producing error logs. (11) The system should have knowledge of when real-time passes and tape recorder dumps will occur and should verify that these passes and data transmissions are successful. Once the design criteria were established, the design team split into two groups: one that addressed the tracking, commanding, and health and safety of the science payload, and another that addressed the ground systems and communications aspects of the overall system.

  7. CCSDS Time-Critical Onboard Networking Service

    NASA Technical Reports Server (NTRS)

    Parkes, Steve; Schnurr, Rick; Marquart, Jane; Menke, Greg; Ciccone, Massimiliano

    2006-01-01

    The Consultative Committee for Space Data Systems (CCSDS) is developing recommendations for communication services onboard spacecraft. Today many different communication buses are used on spacecraft, requiring software with the same basic functionality to be rewritten for each type of bus. This impacts the application software, resulting in custom software for almost every new mission. The Spacecraft Onboard Interface Services (SOIS) working group aims to provide a consistent interface to various onboard buses and sub-networks, enabling a common interface to the application software. The eventual goal is reusable software that can be easily ported to new missions and run on a range of onboard buses without substantial modification. The system engineer will then be able to select a bus based on its performance, power, etc., and be confident that a particular choice of bus will not place excessive demands on software development. This paper describes the SOIS Intra-Networking Service, which is designed to enable data transfer and multiplexing of a variety of internetworking protocols with a range of quality of service support, over underlying heterogeneous data links. The Intra-network service interface provides users with a common Quality of Service interface when transporting data across a variety of underlying data links. Supported Quality of Service (QoS) elements include Priority, Resource Reservation, and Retry/Redundancy. These three QoS elements combine and map into four TCONS services for onboard data communications: Best Effort, Assured, Reserved, and Guaranteed. Data to be transported is passed to the Intra-network service with a requested QoS. The requested QoS includes the type of service, priority and, where appropriate, a channel identifier. The data is de-multiplexed, prioritized, and the required resources for transport are allocated. The data is then passed to the appropriate data link for transfer across the bus. The SOIS supported data links may inherently provide the quality of service support requested by the intra-network layer. In the case where the data link does not have the required level of support, the missing functionality is added by SOIS. As a result of this architecture, re-usable software applications can be designed and used across missions, thereby promoting common mission operations. In addition, the protocol multiplexing function enables the blending of multiple onboard networks. This paper starts by giving an overview of the SOIS architecture in section II, illustrating where the TCONS services fit into the overall architecture. It then describes the quality of service approach adopted, in section III. The prototyping efforts that have been going on are introduced in section IV. Finally, in section V, the current status of the CCSDS recommendations is summarized.

  8. Operations automation

    NASA Technical Reports Server (NTRS)

    Boreham, Charles Thomas

    1994-01-01

    This is truly the era of 'faster-better-cheaper' at the National Aeronautics and Space Administration/Jet Propulsion Laboratory (NASA/JPL). To continue JPL's primary mission of building and operating interplanetary spacecraft, all possible avenues are being explored in the search for better value for each dollar spent. A significant cost factor in any mission is the amount of manpower required to receive, decode, decommutate, and distribute spacecraft engineering and experiment data. The replacement of the many mission-unique data systems with the single Advanced Multimission Operations System (AMMOS) has already allowed for some manpower reduction. Now, we find that further economies are made possible by drastically reducing the number of human interventions required to perform the setup, data saving, station handover, processed data loading, and teardown activities that are associated with each spacecraft tracking pass. We have recently adapted three public domain tools to the AMMOS system which allow common elements to be scheduled and initialized without the normal human intervention. This is accomplished with a stored weekly event schedule. The manual entries and specialized scripts which had to be provided just prior to and during a pass are now triggered by the schedule to perform the functions unique to the upcoming pass. This combination of public domain software and the AMMOS system has been run in parallel with flight operations in an online testing phase for six months. With this methodology, a savings of 11 man-years per year is projected with no increase in data loss or project risk. There are even greater savings to be gained as we learn other uses for this configuration.

  9. Coma Patient Monitoring System Using Image Processing

    NASA Astrophysics Data System (ADS)

    Sankalp, Meenu

    2011-12-01

    The Coma Patient Monitoring System aims to provide high-quality healthcare services in the near future, offering more convenient and comprehensive medical monitoring in large hospitals, where it is a tough job for medical personnel to monitor each patient around the clock. The latest developments in patient monitoring can be used in the Intensive Care Unit (ICU), Critical Care Unit (CCU), and emergency rooms of a hospital. During treatment, the patient monitor continuously observes the coma patient and transmits the important information. In emergency cases, doctors are able to monitor the patient's condition efficiently and with less delay, providing a more effective healthcare system. The continuous monitoring of coma patients can thus be greatly simplified. This paper investigates the effects seen in the patient using the Coma Patient Monitoring System, which detects physical changes in the patient's body movement, raises a warning in the form of an alarm and an LCD display in less than one second, and sends an SMS to a remote caregiver if there is movement in any part of the patient's body. The system software was implemented using Keil tools.

  10. Orbit Determination of LEO Satellites for a Single Pass through a Radar: Comparison of Methods

    NASA Technical Reports Server (NTRS)

    Khutorovsky, Z.; Kamensky, S.; Sbytov, N.; Alfriend, K. T.

    2007-01-01

    The problem of determining the orbit of a space object from measurements based on one pass through the field of view of a radar is not a new one. Extensive research in this area has been carried out in the USA and Russia since the late 50s, when these countries started the development of ballistic missile defense (BMD) and early warning systems. In Russia these investigations received additional stimulus in the early 60s after the decision to create a Space Surveillance System, whose primary task would be the maintenance of the satellite catalog. These problems were the focus of research interest until the middle 70s, when the appropriate techniques and software were implemented for all radars. Then for more than 20 years no new research papers appeared on this subject. This produced an impression that all the problems of track determination based on one pass had been solved and there was no need for further research. In the late 90s interest in this problem arose again, for the following reason. It was estimated that there would be greater than 100,000 objects with size greater than 1-2 cm, and a collision of an operational spacecraft with any of these objects could have catastrophic results. Thus, for prevention of hazardous approaches and collisions with valuable spacecraft, the existing satellite catalog should be extended by at least an order of magnitude. This is a very difficult scientific and engineering task. One of the issues is the development of data fusion procedures and software capable of maintaining such a huge catalog in near real time. The number of daily processed measurements (of all types, radar and optical) for such a system may run to millions, increasing the number of measurements by at least an order of magnitude. Since we will have ten times more satellites and measurements, the computer effort required for the correlation of measurements will be two orders of magnitude greater. This could create significant problems for processing data close to real time, even for modern computers. Preliminary "compression" of the data for one pass through the field of view of a sensor can significantly reduce the requirements on computers and data communication. This compression occurs when all the single measurements of the sensor are replaced by the orbit determined on their basis. A single measurement here means the radar parameters (range, azimuth, elevation, and in some cases range rate) measured by a single pulse.

  11. HyperForest: A high performance multi-processor architecture for real-time intelligent systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia, P. Jr.; Rebeil, J.P.; Pollard, H.

    1997-04-01

    Intelligent Systems are characterized by the intensive use of computer power. The computer revolution of the last few years is what has made possible the development of the first generation of Intelligent Systems. Software for second-generation Intelligent Systems will be more complex and will require more powerful computing engines in order to meet the real-time constraints imposed by new robots, sensors, and applications. A multiprocessor architecture was developed that merges the advantages of message-passing and shared-memory structures: expandability and real-time compliance. The HyperForest architecture will provide an expandable real-time computing platform for computationally intensive Intelligent Systems and open the doors for the application of these systems to more complex tasks in environmental restoration and cleanup projects, flexible manufacturing systems, and DOE's own production and disassembly activities.

  12. Interpretation of Gamma Index for Quality Assurance of Simultaneously Integrated Boost (SIB) IMRT Plans for Head and Neck Carcinoma

    NASA Astrophysics Data System (ADS)

    Atiq, Maria; Atiq, Atia; Iqbal, Khalid; Shamsi, Quratul ain; Andleeb, Farah; Buzdar, Saeed Ahmad

    2017-12-01

    Objective: The gamma index is a prerequisite for estimating the point-by-point difference between a measured and a calculated dose distribution in terms of both distance to agreement (DTA) and dose difference (DD). This study aims to determine what percentage of pixels passing a given criterion assures a good-quality plan, and suggests the gamma index as an efficient mechanism for dose verification of simultaneous integrated boost (SIB) intensity-modulated radiotherapy plans. Method: In this study, dose was calculated for 14 head and neck patients, and IMRT quality assurance was performed with portal dosimetry using the Eclipse treatment planning system. The Eclipse software has a gamma analysis function to compare measured and calculated dose distributions. Plans in this study were deemed acceptable when the passing rate was 95%, using a tolerance of 3 mm for distance to agreement (DTA) and 5% for dose difference (DD). Result and Conclusion: Thirteen cases passed the tolerance criterion of 95% set by our institution. The confidence limit for DD is 9.3%, and for the gamma criterion our local CL came out to be 2.0% (i.e., 98.0% passing). A lack of correlation was found between DD and the γ passing rate, with an R² of 0.0509. Our findings underline the importance of the gamma analysis method in predicting the quality of dose calculation. A passing rate of 95% was achieved in 93% of cases, which is an adequate level of accuracy for the analyzed plans, thus assuring the robustness of the SIB IMRT treatment technique. This study can be extended to investigate a gamma criterion of 5%/3 mm for different tumor localities and to explore confidence limits on target volumes of small extent and simple geometry.

  13. Minimally invasive evacuation of parenchymal and ventricular hemorrhage using the Apollo system with simultaneous neuronavigation, neuroendoscopy and active monitoring with cone beam CT.

    PubMed

    Fiorella, David; Gutman, Fredrick; Woo, Henry; Arthur, Adam; Aranguren, Ricardo; Davis, Raphael

    2015-10-01

    The Apollo system is a low profile irrigation-aspiration system which can be used for the evacuation of intracranial hemorrhage. We demonstrate the feasibility of using Apollo to evacuate intracranial hemorrhage in a series of three patients with combined neuronavigation, neuroendoscopy, and cone beam CT (CB-CT). Access to the hematoma was planned using neuronavigation software. Parietal (n=2) or frontal (n=1) burr holes were created and a 19 F endoscopic sheath was placed under neuronavigation guidance into the distal aspect of the hematoma along its longest accessible axis. The 2.6 mm Apollo wand was then directed through the working channel of a neuroendoscope and used to aspirate the blood products under direct visualization, working from distal to proximal. After a pass through the hematoma, the sheath, neuroendoscope, and Apollo system were removed. CB-CT was then used to evaluate for residual hematoma. When required, the CB-CT data could then be directly uploaded into the neuronavigation system and a new trajectory planned to approach the residual hematoma. Three patients with parenchymal (n=2) and mixed parenchymal-intraventricular (n=1) hematomas underwent minimally invasive evacuation with the Apollo system. The isolated parenchymal hematomas measured 93.4 and 15.6 mL and were reduced to 11.2 (two passes) and 0.9 mL (single pass), respectively. The entire parenchymal component of the mixed hemorrhage was evacuated, as was the intraventricular component within the right frontal horn (single pass). No complications were experienced. All patients showed clinical improvement after the procedure. The average presenting National Institutes of Health Stroke Scale was 19.0, which had improved to 5.7 within an average of 4.7 days after the procedure. The Apollo system can be used within the neuroangiography suite for the minimally invasive evacuation of intracranial hemorrhage using simultaneous neuronavigation for planning and intraprocedural guidance, direct visualization with neuroendoscopy, and real time monitoring of progress with CB-CT.

  14. Evaluation of detector array technology for the verification of advanced intensity-modulated radiotherapy

    NASA Astrophysics Data System (ADS)

    Hussien, Mohammad

    Purpose: Quality assurance (QA) for intensity modulated radiotherapy (IMRT) has evolved substantially. In recent years, various ionization chamber or diode detector arrays have become commercially available, allowing pre-treatment absolute dose verification with near real-time results. This has led to a wide uptake of this technology to replace point dose and film dosimetry and to facilitate QA streamlining. However, arrays are limited by their spatial resolution, giving rise to concerns about their response to clinically relevant deviations. The common factor in all commercial array systems is the reliance on the gamma index (γ) method to provide the quantitative evaluation of the measured dose distribution against the Treatment Planning System (TPS) calculated dose distribution. The mathematical definition of the gamma index presents computational challenges that can cause a variation in the calculation in different systems. The purpose of this thesis was to evaluate the suitability of detector array systems, combined with their implementation of the gamma index, in the verification and dosimetry audit of advanced IMRT. Method: The response of various commercial detector array systems (Delta4®, ArcCHECK®, and the PTW 2D-Array seven29™ and OCTAVIUS II™ phantom combination, Gafchromic® EBT2 and composite EPID measurements) to simulated deliberate changes in clinical IMRT and VMAT plans was evaluated. The variability of the gamma index calculation in the different systems was also evaluated by comparing against a bespoke Matlab-based gamma index analysis software. A novel methodology for using a commercial detector array in a dosimetry audit of rotational radiotherapy was then developed. Comparison was made between measurements using the detector array and those performed using ionization chambers, alanine and radiochromic film. The methodology was developed as part of the development of a national audit of rotational radiotherapy. Ten cancer centres were asked to create a rotational radiotherapy treatment plan for a three-dimensional treatment-planning-system (3DTPS) test and audited. Phantom measurements using a commercial 2D ionization chamber (IC) array were compared with measurements using a 0.125 cm³ ion chamber, Gafchromic film and alanine pellets in the same plane. Relative and absolute gamma index (γ) comparisons were made for Gafchromic film and 2D-Array planes respectively. A methodology for prospectively deriving appropriate gamma index acceptance criteria for detector array systems, via simulation of deliberate changes and receiver operator characteristic (ROC) analysis, has been developed. Results: In the event of clinically relevant delivery introduced changes, the detector array systems evaluated are able to detect some of these changes if suitable gamma index passing criteria, such as 2%/2mm, are used. Different computational approaches can produce variability in the calculation of the gamma index between different software implementations. For the same passing criteria, different devices and software combinations exhibit varying levels of agreement with the Matlab predicted gamma index analysis. This work has found that it is suitable to use a detector array in a dosimetry audit of rotational radiotherapy in place of standard systems of dosimetry such as ion chambers, alanine and film. Comparisons between individual detectors within the 2D-Array against the corresponding ion chamber and alanine measurement showed a statistically significant concordance correlation coefficient (ρc>0.998, p<0.001) with mean difference of -1.1%±1.1% and -0.8%±1.1%, respectively, in a high dose PTV. In the γ comparison between the 2D-Array and film it was found that the 2D-Array was more likely to fail in planes where there was a dose discrepancy due to the absolute analysis performed. A follow-up analysis of the library of measured data during the audit found that additional metrics such as the mean gamma index or dose differences over regions of interest can be gleaned from the measured dose distributions. Conclusions: It is important to understand the response and limitations of the gamma index analysis combined with the equipment and software in use. For the same pass-rate criteria, different devices and software combinations exhibit varying levels of agreement with the predicted γ analysis. It has been found that using a commercial detector array for a dosimetry audit of rotational radiotherapy is suitable in place of standard systems of dosimetry. A methodology for being able to prospectively ascertain appropriate gamma index acceptance criteria for the detector array system in use, via simulation of deliberate changes and ROC analysis, has been developed. It has been shown that setting appropriate tolerances can be achieved and should be performed, as the methodology takes into account the configuration of the commercial system as well as the software implementation of the gamma index.
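
    The prospective-tolerance methodology reduces to a receiver operator characteristic sweep: record gamma passing rates for error-free plans and for plans with deliberately introduced errors, then scan the pass-rate action threshold to trade sensitivity against specificity. A minimal sketch of that sweep (the passing-rate values below are synthetic illustrations, not audit data):

```python
import numpy as np

# Synthetic %GP values: label 0 = acceptable plan, 1 = plan with an introduced error.
pass_rates = np.array([99.1, 98.4, 97.9, 96.5, 95.2, 93.8, 91.0, 88.7])
has_error  = np.array([0,    0,    0,    1,    0,    1,    1,    1])

best = None
for threshold in np.unique(pass_rates):
    flagged = pass_rates < threshold       # plans falling below the action level
    tpr = (flagged & (has_error == 1)).sum() / (has_error == 1).sum()
    fpr = (flagged & (has_error == 0)).sum() / (has_error == 0).sum()
    j = tpr - fpr                          # Youden's J balances the two rates
    if best is None or j > best[0]:
        best = (j, threshold, tpr, fpr)

print(f"action level: %GP < {best[1]:.1f} (TPR = {best[2]:.2f}, FPR = {best[3]:.2f})")
```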

  15. SU-E-T-472: A Multi-Dimensional Measurements Comparison to Analyze a 3D Patient Specific QA Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ashmeg, S; Jackson, J; Zhang, Y

    Purpose: To quantitatively evaluate a 3D patient-specific QA tool using 2D film and 3D Presage dosimetry. Methods: A brain IMRT case was delivered to Delta4, EBT2 film and a Presage plastic dosimeter. The film was inserted in solid water slabs at 7.5 cm depth for measurement. The Presage dosimeter was inserted into a head phantom for 3D dose measurement. Delta4's Anatomy software was used to calculate the corresponding dose to the film in the solid water slabs and to Presage in the head phantom. The results from Anatomy were compared to both the calculated results from Eclipse and the measured dose from film and Presage to evaluate its accuracy. Using RIT software, we compared the Anatomy dose to the EBT2 film measurement and the film measurement to the Eclipse calculation. For 3D analysis, the DICOM file from Anatomy was extracted and imported into CERR software, which was used to compare the Presage dose to both the Anatomy calculation and the Eclipse calculation. Gamma criteria of 3%/3 mm and 5%/5 mm were used for comparison. Results: Gamma passing rates of film vs Anatomy, Anatomy vs Eclipse and film vs Eclipse were 82.8%, 70.9% and 87.6%, respectively, when the 3%/3 mm criterion was used. When the criterion was changed to 5%/5 mm, the passing rates became 87.8%, 76.3% and 90.8%, respectively. For 3D analysis, Anatomy vs Eclipse showed gamma passing rates of 86.4% and 93.3% for 3%/3 mm and 5%/5 mm, respectively. The rate is 77.0% for the Presage vs Eclipse analysis. The Anatomy vs Eclipse comparisons were absolute dose comparisons, whereas the film and Presage analyses were relative comparisons. Conclusion: The results show a higher passing rate in 3D than in 2D in the Anatomy software. This could be due to the higher degrees of freedom in 3D than in 2D for gamma analysis.

  16. A Mobile GPS Application: Mosque Tracking with Prayer Time Synchronization

    NASA Astrophysics Data System (ADS)

    Hashim, Rathiah; Ikhmatiar, Mohammad Sibghotulloh; Surip, Miswan; Karmin, Masiri; Herawan, Tutut

    Global Positioning System (GPS) is a popular technology applied in many areas and embedded in many devices, enabling end-users to navigate effectively to their intended destination via the best calculated route. The ability of GPS to track locations precisely by coordinates can be used to assist a Muslim traveler visiting or passing through an unfamiliar place in finding the nearest mosque in order to perform his prayers. However, not many techniques have been proposed for mosque tracking. This paper presents a mobile GPS application for tracking the nearest mosque, embedded with a prayer-time synchronization system. The prototype has been successfully integrated with a map and several mosque locations.
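
    At its core such a tracker is a nearest-neighbour search over great-circle distances from the current GPS fix. A minimal sketch using the haversine formula (the mosque names and coordinates are made-up placeholders):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometers."""
    R = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2)**2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2)**2
    return 2 * R * math.asin(math.sqrt(a))

mosques = {  # hypothetical coordinates
    "Masjid A": (1.8544, 103.0810),
    "Masjid B": (1.8621, 103.1002),
}
here = (1.8570, 103.0850)  # current GPS fix
nearest = min(mosques, key=lambda m: haversine_km(*here, *mosques[m]))
print("Nearest mosque:", nearest)
```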

  17. Cluster Computing For Real Time Seismic Array Analysis.

    NASA Astrophysics Data System (ADS)

    Martini, M.; Giudicepietro, F.

    A seismic array is an instrument composed of a dense distribution of seismic sensors that allows measurement of the directional properties of the wavefield (slowness or wavenumber vector) radiated by a seismic source. Over the last years, arrays have been widely used in different fields of seismological research. In particular, they are applied in the investigation of seismic sources on volcanoes, where they can be successfully used for studying the volcanic microtremor and long-period events which are critical for getting information on the evolution of volcanic systems. For this reason arrays could be usefully employed for volcano monitoring; however, the huge amount of data produced by this type of instrument and the quite time-consuming processing techniques have limited their potential for this application. In order to favor a direct application of array techniques to continuous volcano monitoring, we designed and built a small PC cluster able to compute in near real time the kinematic properties of the wavefield (slowness or wavenumber vector) produced by a local seismic source. The cluster is composed of 8 dual-processor Intel Pentium III PCs working at 550 MHz and has 4 gigabytes of RAM. It runs under the Linux operating system. The developed analysis software package is based on the Multiple SIgnal Classification (MUSIC) algorithm and is written in Fortran. The message-passing part is based upon the LAM programming environment package, an open-source implementation of the Message Passing Interface (MPI). The developed software system includes modules devoted to receiving data over the Internet and graphical applications for continuously displaying the processing results. The system has been tested with a data set collected during a seismic experiment conducted on Etna in 1999, when two dense seismic arrays were deployed on the northeast and southeast flanks of the volcano. A real-time continuous acquisition system has been simulated by a program which reads data from disk files and sends them to a remote host using the Internet protocols.
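
    The cluster's division of labour, with a master distributing time windows of array data to workers that each run the analysis, is the classic MPI master/worker pattern. A minimal sketch with mpi4py (the original code is Fortran over LAM/MPI, and the analysis function below is a stand-in, not the MUSIC algorithm itself):

```python
# Master/worker sketch; run with e.g.: mpiexec -n 8 python music_cluster.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

def analyze(window):
    """Stand-in for the per-window MUSIC slowness estimation."""
    return float(np.abs(window).max())

if rank == 0:
    # Master: one synthetic time window of array data per worker.
    windows = np.random.randn(size - 1, 1024)
    for worker in range(1, size):
        comm.send(windows[worker - 1], dest=worker, tag=0)
    results = [comm.recv(source=w, tag=1) for w in range(1, size)]
    print("per-window wavefield estimates:", results)
else:
    # Worker: receive a window, process it, return the result.
    window = comm.recv(source=0, tag=0)
    comm.send(analyze(window), dest=0, tag=1)
```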

  18. Metropolitan all-pass and inter-city quantum communication network.

    PubMed

    Chen, Teng-Yun; Wang, Jian; Liang, Hao; Liu, Wei-Yue; Liu, Yang; Jiang, Xiao; Wang, Yuan; Wan, Xu; Cai, Wei-Qi; Ju, Lei; Chen, Luo-Kan; Wang, Liu-Jun; Gao, Yuan; Chen, Kai; Peng, Cheng-Zhi; Chen, Zeng-Bing; Pan, Jian-Wei

    2010-12-20

    We have demonstrated a metropolitan all-pass quantum communication network in field fiber for four nodes. Any two nodes can be connected in the network to perform quantum key distribution (QKD). An optical switching module is presented that enables arbitrary 2-connectivity among output ports. Integrated QKD terminals have been worked out, which can operate either as a transmitter, a receiver, or both at the same time. Furthermore, an additional link in another city over 60 km of fiber (up to 130 km) is seamlessly integrated into this network based on a trusted relay architecture. On all the links we have implemented the decoy-state protocol. All necessary electrical hardware, synchronization, feedback control, network software, and execution of the QKD protocols are custom designed, allowing completely automatic and stable running. Our system was put into operation in Hefei in August 2009 and publicly demonstrated during an evaluation conference on quantum networks organized by the Chinese Academy of Sciences on August 29, 2009. Real-time voice telephony with one-time-pad encryption between any two of the five nodes (four all-pass nodes plus one additional node through the relay) was successfully established in the network within 60 km.
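
    The voice link's confidentiality rests on the one-time pad: each QKD-distributed key bit is XORed with exactly one data bit and never reused. A minimal sketch (os.urandom stands in for key material that would really come from the QKD layer):

```python
import os

def otp(data: bytes, pad: bytes) -> bytes:
    """XOR one-time pad; encryption and decryption are the same operation."""
    assert len(pad) >= len(data), "pad must be at least as long as the message"
    return bytes(d ^ k for d, k in zip(data, pad))

message = b"voice frame"
pad = os.urandom(len(message))          # stand-in for QKD-distributed key bits
ciphertext = otp(message, pad)
assert otp(ciphertext, pad) == message  # perfect round trip; pad is then discarded
```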

  19. DOCU-TEXT: A tool before the data dictionary

    NASA Technical Reports Server (NTRS)

    Carter, B.

    1983-01-01

    DOCU-TEXT, a proprietary software package that aids in the production of documentation for a data processing organization and can be installed and operated only on IBM computers, is discussed. In organizing information that will ultimately reside in a data dictionary, DOCU-TEXT proved to be a useful documentation tool for extracting information from existing production jobs, procedure libraries, system catalogs, control data sets, and related files. DOCU-TEXT reads these files to derive data that is useful at the system level. The output of DOCU-TEXT is a series of user-selectable reports. These reports can reflect the interactions within a single job stream, a complete system, or all the systems in an installation. Any single report, or group of reports, can be generated in an independent documentation pass.

  20. Software Mechanisms for Multiprocessor TLB Consistency

    DTIC Science & Technology

    1989-12-01

    Raj Vaswani implemented the DASH message-passing system. Ramesh Govindan implemented part of the DASH virtual memory system. [Figure: latency (ms)] ...model development. Synchronizing TLBs is similar to updating replicated data in a distributed environment. Lee and Garcia-Molina both used an M/G/1

  1. ThermalTracker Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    The software processes recorded thermal video and detects the flight tracks of birds and bats that passed through the camera's field of view. The output is a set of images that show complete flight tracks for any detections, with the direction of travel indicated and the thermal image of the animal delineated. A report of the descriptive features of each detected track is also output in the form of a comma-separated value text file.
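
    Detection of this kind typically reduces to background subtraction, thresholding, and centroiding, with centroids linked across frames into flight tracks. A toy sketch of the detection step (frame sizes and thresholds are hypothetical; the actual ThermalTracker pipeline is more involved):

```python
import numpy as np

def detect_centroid(frame, background, thresh):
    """Return the (row, col) centroid of warm pixels above background, or None."""
    hot = (frame - background) > thresh
    if not hot.any():
        return None
    rows, cols = np.nonzero(hot)
    return (rows.mean(), cols.mean())   # single-blob simplification

# Synthetic 120x160 thermal frame with one bird-sized warm blob.
background = np.zeros((120, 160))
frame = background.copy()
frame[40:44, 70:75] = 5.0
print(detect_centroid(frame, background, thresh=1.0))
```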

  2. Custom electronic subsystems for the laboratory telerobotic manipulator

    NASA Technical Reports Server (NTRS)

    Glassell, R. L.; Butler, P. L.; Rowe, J. C.; Zimmermann, S. D.

    1990-01-01

    The National Aeronautics and Space Administration (NASA) Space Station Program presents new opportunities for the application of telerobotic and robotic systems. The Laboratory Telerobotic Manipulator (LTM) is a highly advanced 7-degrees-of-freedom (DOF) telerobotic/robotic manipulator. It was developed and built for the Automation Technology Branch at NASA's Langley Research Center (LaRC) for research and for demonstrating ground-based telerobotic manipulator hardware and software systems for future NASA applications in the hazardous environment of space. The LTM manipulator uses an embedded wiring design, with all electronics, motor power, and control and communication cables passing through the pitch-yaw differential joints. This design requires the number of cables passing through the pitch/yaw joint to be kept to a minimum. To eliminate the cables needed to carry each pitch-yaw joint's sensor data to the VME control computers, a custom embedded electronics package for each manipulator joint was developed. The electronics package collects and sends the joint's sensor data to the VME control computers over a fiber optic cable. The electronics package consists of five individual subsystems: the VME Link Processor, the Joint Processor and the Joint Processor power supply in the joint module, the fiber optics communications system, and the electronics and motor power cabling.

  3. Software Development to Assist in the Processing and Analysis of Data Obtained Using Fiber Bragg Grating Interrogation Systems

    NASA Technical Reports Server (NTRS)

    Hicks, Rebecca

    2009-01-01

    A fiber Bragg grating is a portion of a core of a fiber optic strand that has been treated to affect the way light travels through the strand. Light within a certain narrow range of wavelengths will be reflected along the fiber by the grating, while light outside that range will pass through the grating mostly undisturbed. Since the range of wavelengths that can penetrate the grating depends on the grating itself as well as temperature and mechanical strain, fiber Bragg gratings can be used as temperature and strain sensors. This capability, along with the lightweight nature of the fiber optic strands in which the gratings reside, makes fiber optic sensors an ideal candidate for flight testing and monitoring in which temperature and wing strain are factors. The purpose of this project is to research the availability of software capable of processing massive amounts of data in both real-time and post-flight settings, and to produce software segments that can be integrated to assist in the task as well.
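
    The sensing principle is the Bragg condition λ_B = 2·n_eff·Λ: strain and temperature perturb the effective index n_eff and grating period Λ, shifting the reflected wavelength. A linearized sketch with typical textbook sensitivities for silica fiber at 1550 nm (assumed values, not from this work):

```python
# Typical silica-fiber FBG sensitivities at 1550 nm (assumed textbook values).
LAMBDA_B = 1550.0    # nm, nominal Bragg wavelength
K_STRAIN = 1.2e-3    # nm shift per microstrain (~1.2 pm/ue)
K_TEMP   = 10e-3     # nm shift per kelvin (~10 pm/K)

def bragg_shift_nm(microstrain, delta_t_k):
    """Linearized shift of the reflected wavelength from strain and temperature."""
    return K_STRAIN * microstrain + K_TEMP * delta_t_k

# 100 microstrain of wing loading plus a 5 K warm-up:
print(f"shift = {bragg_shift_nm(100.0, 5.0):.3f} nm from {LAMBDA_B} nm")
```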

  4. OpenPET Hardware, Firmware, Software, and Board Design Files

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abu-Nimeh, Faisal; Choong, Woon-Seng; Moses, William W.

    OpenPET is an open source, flexible, high-performance, and modular data acquisition system for a variety of applications. The OpenPET electronics are capable of reading analog voltage or current signals from a wide variety of sensors. The electronics boards make extensive use of field programmable gate arrays (FPGAs) to provide flexibility and scalability. Firmware and software for the FPGAs and computer are used to control and acquire data from the system. The command and control flow is similar to the data flow, except that commands are initiated from the computer and travel down a tree topology (i.e., from top to bottom). Each node in the tree discovers its parent and children, and all addresses are configured accordingly. A user (or a script) initiates a command from the computer. This command is translated and encoded for the corresponding child (e.g., SB, MB, DB, etc.). Each node in turn passes the command to its corresponding child(ren) by looking at the destination address. Finally, once the command reaches its desired destination(s), the corresponding node(s) execute the command and send a reply, if required. All the firmware, software, and electronics board design files are distributed through the OpenPET website (http://openpet.lbl.gov).
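
    A minimal sketch of that top-down command routing, assuming a simple recursive node structure (the class, method, and address names are illustrative, not the OpenPET wire protocol):

    ```python
    # Toy model of tree-style command routing: each node either executes a
    # command addressed to it or forwards it toward the child whose subtree
    # owns the destination address.
    class Node:
        def __init__(self, address):
            self.address = address
            self.children = []

        def owns(self, dest):
            return dest == self.address or any(c.owns(dest) for c in self.children)

        def handle(self, dest, command):
            if dest == self.address:
                return f"{self.address}: executed {command}"   # execute and reply
            for child in self.children:
                if child.owns(dest):
                    return child.handle(dest, command)         # pass downward
            return None

    # computer -> support board (SB) -> multiplexer board (MB) -> detector board (DB)
    sb, mb, db = Node("SB0"), Node("MB0"), Node("DB3")
    sb.children.append(mb)
    mb.children.append(db)
    print(sb.handle("DB3", "read_sensors"))
    ```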

  5. Accruals for HANDI 2000 business management system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, D.

    The Data Integration 2000 Project will result in an integrated and comprehensive set of functional applications containing core information necessary to support the Project Hanford Management Contract. It is based on a Commercial-Off-The-Shelf (COTS) product solution with commercially proven business processes. The COTS product solution set, consisting of PassPort and PeopleSoft software, supports finance, supply, chemical management/Material Safety Data Sheet, and human resources. Accruals are made at the project level. At the inception of each project, Project Management and the Accounts Payable Group make a mutual decision on whether periodic accrual entries should be made for it.

  6. [Development of a software for 3D virtual phantom design].

    PubMed

    Zou, Lian; Xie, Zhao; Wu, Qi

    2014-02-01

    In this paper, we present a 3D virtual phantom design software package, developed based on object-oriented programming methodology and dedicated to medical physics research. The software, named Magical Phantom (MPhantom), is composed of a 3D visual builder module and a virtual CT scanner. Users can conveniently construct any complex 3D phantom and then export it as DICOM 3.0 CT images. MPhantom is a user-friendly and powerful software package for 3D phantom configuration and has passed application testing on real scenes. MPhantom will accelerate Monte Carlo simulation for dose calculation in radiation therapy and research on X-ray imaging reconstruction algorithms.

  7. Research in Varying Burner Tilt Angle to Reduce Rear Pass Temperature in Coal Fired Boiler

    NASA Astrophysics Data System (ADS)

    Thrangaraju, Savithry K.; Munisamy, Kannan M.; Baskaran, Saravanan

    2017-04-01

    This research presents an investigation of one of the techniques used in the Manjung 700 MW tangentially fired coal power plant: finding the burner tilt angle that produces an efficient temperature distribution and combustion gas flow pattern in the boiler, especially at the rear pass section. The main outcome of the project is to determine the burner tilt angle that yields an efficient temperature distribution and combustion gas flow pattern and thereby increases boiler efficiency. The investigation is carried out using Computational Fluid Dynamics (CFD): the boiler model is drawn in the SolidWorks design software, and the FLUENT CFD package is used to analyse it, imitating the real combustion process in the Manjung 700 MW boiler. Temperature distributions and combustion gas flow patterns are obtained in FLUENT for three burner tilt angles: 0° (test case 1), +10° (test case 2), and -10° (test case 3). All three cases were run in the CFD software, and the resulting temperature distributions and velocity vectors were obtained to examine the changes in the furnace and rear pass sections of the boiler. The results are compared in the analysis by plotting graphs to determine the tilt angle that best reduces the rear pass temperature.

  8. Multiprocessor shared-memory information exchange

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santoline, L.L.; Bowers, M.D.; Crew, A.W.

    1989-02-01

    In distributed microprocessor-based instrumentation and control systems, the inter- and intra-subsystem communication requirements ultimately form the basis for the overall system architecture. This paper describes a software protocol which addresses the intra-subsystem communications problem. Specifically, the protocol allows multiple processors to exchange information via a shared-memory interface. The authors' primary goal is to provide a reliable means for information to be exchanged between central application processor boards (masters) and dedicated function processor boards (slaves) in a single computer chassis. The resultant Multiprocessor Shared-Memory Information Exchange (MSMIE) protocol, a standard master-slave shared-memory interface suitable for use in nuclear safety systems, is designed to pass unidirectional buffers of information between the processors while providing a minimum, deterministic cycle time for this data exchange.

  9. Issues central to a useful image understanding environment

    NASA Astrophysics Data System (ADS)

    Beveridge, J. Ross; Draper, Bruce A.; Hanson, Allen R.; Riseman, Edward M.

    1992-04-01

    A recent DARPA initiative has sparked interest in software environments for computer vision. The goal is a single environment to support both basic research and technology transfer. This paper lays out six fundamental attributes such a system must possess: (1) support for both C and Lisp, (2) extensibility, (3) data sharing, (4) data query facilities tailored to vision, (5) graphics, and (6) code sharing. The first three attributes fundamentally constrain the system design. Support for both C and Lisp demands some form of database or data-store for passing data between languages. Extensibility demands that system support facilities, such as spatial retrieval of data, be readily extended to new user-defined datatypes. Finally, data sharing demands that data saved by one user, including data of a user-defined type, must be readable by another user.

  10. Propagation of atmospheric pressure helium plasma jet into ambient air at laminar gas flow

    NASA Astrophysics Data System (ADS)

    Pinchuk, M.; Stepanova, O.; Kurakina, N.; Spodobin, V.

    2017-05-01

    The formation of an atmospheric pressure plasma jet (APPJ) in a gas flow passing through the discharge gap depends on both the gas-dynamic properties and the electrophysical parameters of the plasma jet generator. The paper presents the results of an experimental and numerical study of the propagation of the APPJ in a laminar flow of helium. A dielectric-barrier discharge (DBD) generated inside a quartz tube equipped with a coaxial electrode system, which passed gas through it, served as the plasma source. The transition of the gas flow from the laminar to the turbulent regime was monitored by photographing the formed plasma jet. The corresponding gas outlet velocities and Reynolds numbers were determined experimentally and were used to simulate the gas dynamics with OpenFOAM software. The data of the numerical simulation suggest that, for fixed electrophysical parameters of the DBD, the length of the plasma jet strongly depends on the mole fraction of ambient air in the helium flow, which is established along the direction of gas flow.
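
    As a rough illustration of the laminar/turbulent distinction at stake here, one can estimate the Reynolds number of the helium flow; the velocity and tube diameter below are assumed values for illustration, not the paper's:

    ```python
    # Reynolds number for gas flow in a tube: Re = rho * v * d / mu.
    # Helium properties are textbook values at ~20 C and 1 atm; velocity and
    # diameter are assumptions, not taken from the experiment above.
    rho = 0.166    # kg/m^3, helium density
    mu = 1.96e-5   # Pa*s, helium dynamic viscosity
    d = 6.0e-3     # m, tube inner diameter (assumed)
    v = 10.0       # m/s, gas outlet velocity (assumed)

    re = rho * v * d / mu
    print(f"Re = {re:.0f} -> {'laminar' if re < 2300 else 'transitional/turbulent'}")
    ```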

  11. Networking and AI systems: Requirements and benefits

    NASA Technical Reports Server (NTRS)

    1988-01-01

    The price/performance benefits of network systems are well documented. The ability to share expensive resources sold timesharing on mainframes, then departmental clusters of minicomputers, and now local area networks of workstations and servers. In the process, other fundamental system requirements emerged. These have now been generalized into open system requirements for hardware, software, applications, and tools. The ability to interconnect a variety of vendor products has led to the specification of interfaces that allow new techniques to extend existing systems for new and exciting applications. As an example of a message-passing system, local area networks provide a testbed for many of the issues addressed by future concurrent architectures: synchronization, load balancing, fault tolerance, and scalability. Gold Hill has been working with a number of vendors on distributed architectures that range from a network of workstations to a hypercube of microprocessors with distributed memory. Results from early applications are promising both for performance and for scalability.

  12. Security model for picture archiving and communication systems.

    PubMed

    Harding, D B; Gac, R J; Reynolds, C T; Romlein, J; Chacko, A K

    2000-05-01

    The modern information revolution has facilitated a metamorphosis of health care delivery fraught with the challenges of securing patient-sensitive data. To accommodate this reality, Congress passed the Health Insurance Portability and Accountability Act (HIPAA). While final guidance has not yet been fully resolved, it is up to the health care community to develop and implement comprehensive security strategies founded on procedural, hardware, and software solutions in preparation for future controls. The Virtual Radiology Environment (VRE) Project, a landmark US Army picture archiving and communications system (PACS) implemented across 10 geographically dispersed medical facilities, has addressed that challenge by planning for the secure transmission of medical images and reports over their local (LAN) and wide area network (WAN) infrastructure. Their model, which is transferable to general PACS implementations, encompasses a strategy of application risk and dataflow identification, data auditing, security policy definition, and procedural controls. When combined with hardware and software solutions that are both non-performance-limiting and scalable, the comprehensive approach will not only sufficiently address current security requirements, but also accommodate the natural evolution of the enterprise security model.

  13. A Versatile Phenotyping System and Analytics Platform Reveals Diverse Temporal Responses to Water Availability in Setaria.

    PubMed

    Fahlgren, Noah; Feldman, Maximilian; Gehan, Malia A; Wilson, Melinda S; Shyu, Christine; Bryant, Douglas W; Hill, Steven T; McEntee, Colton J; Warnasooriya, Sankalpi N; Kumar, Indrajit; Ficor, Tracy; Turnipseed, Stephanie; Gilbert, Kerrigan B; Brutnell, Thomas P; Carrington, James C; Mockler, Todd C; Baxter, Ivan

    2015-10-05

    Phenotyping has become the rate-limiting step in using large-scale genomic data to understand and improve agricultural crops. Here, the Bellwether Phenotyping Platform for controlled-environment plant growth and automated multimodal phenotyping is described. The system has capacity for 1140 plants, which pass daily through stations to record fluorescence, near-infrared, and visible images. Plant Computer Vision (PlantCV) was developed as open-source, hardware platform-independent software for quantitative image analysis. In a 4-week experiment, wild Setaria viridis and domesticated Setaria italica had fundamentally different temporal responses to water availability. Both lines produced similar levels of biomass under limited water conditions, but under water-replete conditions Setaria viridis maintained the same water-use efficiency, while Setaria italica shifted to less efficient growth. Overall, the Bellwether Phenotyping Platform and PlantCV software detected significant effects of genotype and environment on height, biomass, water-use efficiency, color, plant architecture, and tissue water status traits. All ∼79,000 images acquired during the course of the experiment are publicly available. Copyright © 2015 The Author. Published by Elsevier Inc. All rights reserved.

  14. Detection of endoscopic looping during colonoscopy procedure by using embedded bending sensors

    PubMed Central

    Bruce, Michael; Choi, JungHun

    2018-01-01

    Background: Looping of the colonoscope shaft during the procedure is one of the most common obstacles encountered by colonoscopists. It occurs in 91% of cases, with the N-sigmoid loop being the most common, occurring in 79% of cases. Purpose: Herein, a novel system is developed that gives a complete three-dimensional (3D) vector image of the shaft as it passes through the colon, to aid the colonoscopist in detecting loops before they form. Patients and methods: A series of connected links spans the middle 50% of the shaft, where loops are likely to form. Two potentiometers are attached at each joint to measure angular deflection in two directions to allow for 3D positioning. This 3D positioning is converted into a 3D vector image using computer software; MATLAB software has been used to display the image on a computer monitor. For different configurations of the colon model, the system determined the looping status. Results: The different loop configurations (N loop, reverse gamma loop, and reverse splenic flexure) were well defined using the 3D vector image. Conclusion: The novel sensory system can accurately define the various configurations of the colon during the colonoscopy procedure. PMID:29849469
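
    A minimal sketch of how per-joint bend angles can be chained into 3D shaft positions (the link length, bend-axis convention, and angle values are assumptions for illustration, not the authors' calibration):

    ```python
    import numpy as np

    def rot_x(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

    def rot_y(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

    def shaft_points(bends, link_len=0.05):
        """bends: one (up_down, left_right) angle pair per joint, in radians."""
        p, R, pts = np.zeros(3), np.eye(3), [np.zeros(3)]
        for up, lr in bends:
            R = R @ rot_x(up) @ rot_y(lr)                  # accumulate joint rotations
            p = p + R @ np.array([0.0, 0.0, link_len])     # step along the local axis
            pts.append(p.copy())
        return np.array(pts)                               # polyline of the shaft

    print(shaft_points([(0.1, 0.0), (0.2, -0.1), (0.0, 0.3)]).round(3))
    ```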

  15. SATSIN System Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Livingston, R.C.

    1995-01-01

    This report outlines the design, functions, and operation of the HAARP Diagnostic Satellite Scintillation (SATSIN) system that will be used to characterize the structure and dynamics of F region ionospheric irregularities created during HF heating. When in routine operation, the SATSIN system will be located so that the propagation path from satellite radio beacons passes through the heated volume created by HAARP. The signal, altered in phase and amplitude by the irregularities, is received by the SATSIN array of eight antennas and is processed to extract the spatial and temporal characteristics of the scintillation. From this information, the strength, shape, and motion of the in situ irregularities generated by HAARP can be inferred. The hardware and software components of the system are reviewed, and the installation and operation in conjunction with the HAARP network are outlined.

  16. On-line applications of numerical models in the Black Sea GIS

    NASA Astrophysics Data System (ADS)

    Zhuk, E.; Khaliulin, A.; Zodiatis, G.; Nikolaidis, A.; Nikolaidis, M.; Stylianou, Stavros

    2017-09-01

    The Black Sea Geographical Information System (GIS) is developed based on cutting-edge information technologies and provides automated data processing and visualization on-line. MapServer is used as the mapping service; the data are stored in a MySQL DBMS; PHP and Python modules are utilized for data access, processing, and exchange. New numerical models can be incorporated into the GIS environment as individual software modules, compiled for a server-based operating system, providing interaction with the GIS. A common interface allows setting the input parameters; the model then calculates the output data in specifically predefined files and formats. The calculation results are then passed to the GIS for visualization. Initially, a test scenario of integrating a numerical model into the GIS was performed, using software developed to describe two-dimensional tsunami propagation in a basin of variable depth, based on a linear long surface wave model which is valid for depths greater than 5 m. Furthermore, the well-established 3-D oil spill and trajectory model MEDSLIK (http://www.oceanography.ucy.ac.cy/medslik/) was integrated into the GIS with more advanced GIS functionality and capabilities. MEDSLIK is able to forecast and hindcast the trajectories of oil pollution and floating objects by using meteo-ocean data and the state of the oil spill. The MEDSLIK module interface allows a user to enter all the necessary oil spill parameters, i.e., date and time, rate of spill or spill volume, forecasting time, coordinates, oil spill type, currents, wind, and waves, as well as the specification of the output parameters. The entered data are passed on to MEDSLIK; the oil pollution characteristics are then calculated for predefined time steps. The results of the forecast or hindcast are then visualized on a map.

  17. Testing the Dependence of Airborne Gravity Results on Three Variables in Kinematic GPS Processing

    NASA Astrophysics Data System (ADS)

    Weil, C.; Diehl, T. M.

    2011-12-01

    The National Geodetic Survey's Gravity for the Redefinition of the American Vertical Datum (GRAV-D) program plans to collect airborne gravity data across the entire U.S. and its holdings over the next decade. The goal is to build a geoid accurate to 1-2 cm, for which the airborne gravity data is key. The first phase is underway, with >13% of data collection completed in parts of Alaska, parts of California, most of the Gulf Coast, Puerto Rico, and the Virgin Islands. Obtaining accurate airborne gravity survey results depends on the quality of the GPS/IMU position solution used in the processing. There are many factors that could influence the positioning results. First, we will investigate how an increased data sampling rate for the GPS/IMU affects the position solution and the accelerations derived from those positions. Second, we will test the hypothesis that, for differential kinematic processing, a better solution is obtained using both a base and a rover GPS unit that contain an additional rubidium clock reported to sync better with GPS time. Finally, we will look at a few different GPS+IMU processing methods available in commercial software. This includes comparing GPS-only solutions with loosely coupled GPS/IMU solutions from the Applanix POSAV-510 system and tightly coupled solutions with our newly acquired NovAtel SPAN system (micro-IRS IMU). Differential solutions are compared with PPP (Precise Point Positioning) solutions, along with multi-pass and advanced tropospheric corrections available in the NovAtel Inertial Explorer software. Based on preliminary research, we expect that the tightly coupled solutions with either better troposphere and/or multi-pass solutions will provide superior position (and gravity) results.

  18. PcapDB: Search Optimized Packet Capture, Version 0.1.0.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferrell, Paul; Steinfadt, Shannon

    PcapDB is a packet capture system designed to optimize the captured data for fast search in the typical (network incident response) use case. The technology involved in this software has been submitted via the IDEAS system and has been filed as a provisional patent. It includes the following primary components: capture: the capture component utilizes existing capture libraries to retrieve packets from network interfaces. Once retrieved, the packets are passed to additional threads for sorting into flows and indexing. The sorted flows and indexes are passed to other threads so that they can be written to disk. These components are written in the C programming language. search: the search components provide a means to find relevant flows and the associated packets. A search query is parsed and represented as a search tree. Various search commands, written in C, are then used to resolve this tree into a set of search results. The tree generation and search execution management components are written in Python. interface: the PcapDB web interface is written in Python on the Django framework. It provides a series of pages, APIs, and asynchronous tasks that allow the user to manage the capture system, perform searches, and retrieve results. Web page components are written in HTML, CSS, and JavaScript.
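
    A toy version of the capture stage's flow sorting, grouping packets by their 5-tuple (the packet fields here are illustrative, and PcapDB itself implements this stage in C):

    ```python
    from collections import defaultdict

    flows = defaultdict(list)   # 5-tuple -> list of packets in that flow

    def add_packet(pkt):
        key = (pkt["src_ip"], pkt["dst_ip"],
               pkt["src_port"], pkt["dst_port"], pkt["proto"])
        flows[key].append(pkt)

    add_packet({"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
                "src_port": 40000, "dst_port": 443, "proto": "tcp", "len": 1500})
    print(len(flows), "flow(s) so far")
    ```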

  19. The NASA Constellation Program Procedure System

    NASA Technical Reports Server (NTRS)

    Phillips, Robert G.; Wang, Lui

    2010-01-01

    NASA has used procedures to describe activities to be performed onboard vehicles by astronaut crews and on the ground by flight controllers since Apollo. Starting with later Space Shuttle missions and the International Space Station, NASA moved forward to electronic presentation of procedures. For the Constellation Program, another large step forward is being taken: to make procedures more interactive with the vehicle and to assist the crew in controlling the vehicle more efficiently and with less error. The overall name for the project is the Constellation Procedure Applications Software System (CxPASS). This paper describes some of the history behind this effort, the key concepts and operational paradigms that the work is based upon, and the actual products being developed to implement procedures for Constellation.

  20. Deconvolution of time series in the laboratory

    NASA Astrophysics Data System (ADS)

    John, Thomas; Pietschmann, Dirk; Becker, Volker; Wagner, Christian

    2016-10-01

    In this study, we present two practical applications of the deconvolution of time series in Fourier space. First, using a software approach, we reconstruct the filtered input signal of sound cards that has been heavily distorted by a built-in high-pass filter. Using deconvolution, we can partially bypass the filter and extend the dynamic frequency range by two orders of magnitude. Second, we construct the required input signals for a mechanical shaker in order to obtain arbitrary acceleration waveforms, referred to as feedforward control. For both situations, experimental and theoretical approaches to determining the system-dependent frequency response are discussed. Moreover, for the shaker, we propose a simple feedback loop as an extension to the feedforward control in order to handle nonlinearities of the system.
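
    The first application can be sketched in a few lines of Fourier arithmetic; the first-order high-pass response below is a stand-in for the sound card's measured one, and the small regularization term is one common way to keep near-zero response bins from blowing up:

    ```python
    import numpy as np

    fs, n = 48000, 4096
    t = np.arange(n) / fs
    x_true = np.sin(2 * np.pi * 5 * t)            # 5 Hz tone the filter suppresses

    fc = 20.0                                      # assumed high-pass corner (Hz)
    f = np.fft.rfftfreq(n, 1 / fs)
    H = (1j * f / fc) / (1 + 1j * f / fc)          # toy first-order high-pass response

    y = np.fft.irfft(np.fft.rfft(x_true) * H, n)   # the distorted "measurement"
    eps = 1e-3                                     # Wiener-style regularization
    x_rec = np.fft.irfft(np.fft.rfft(y) * np.conj(H) / (np.abs(H) ** 2 + eps), n)
    print(f"max reconstruction error: {np.abs(x_rec - x_true).max():.3f}")
    ```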

  1. Earthquake Analysis (EA) Software for The Earthquake Observatories

    NASA Astrophysics Data System (ADS)

    Yanik, K.; Tezel, T.

    2009-04-01

    There are many software packages that can be used to observe seismic signals and locate earthquakes, but some of them are commercial and come with technical support. For this reason, many seismological observatories have developed, and use, their own seismological software packages suited to their networks. In this study, we introduce our software, which can read seismic signals, process them, and locate earthquakes. This software is used by the General Directorate of Disaster Affairs Earthquake Research Department Seismology Division (hereafter ERD) and will improve according to new requirements. The ERD network consists of 87 seismic stations: 63 equipped with 24-bit digital Guralp CMG-3T seismometers, 16 with analogue short-period S-13 Geometrics seismometers, and 8 with 24-bit digital short-period S-13j-DR-24 Geometrics seismometers. Data are transmitted via satellite from the broadband stations, whereas leased lines are used for the short-period stations. The daily data archive capacity is 4 GB. In large networks, it is very important to observe the seismic signals and locate the earthquakes as soon as possible. This is possible if the software is developed with the network's properties in mind. When we started to develop software for a large network such as ours, we recognized several requirements: all known seismic data formats should be read without any conversion step; only selected stations should be observed, directly on the map; seismic files should be added with an import command; P and S phase readings should be linked to location solutions; and data should be stored in a database and accessed through the program with a user name and password. In this way, we can prevent data disorder and repeated phase readings. Storing data in a database brings many advantages: easy access to the data from anywhere over Ethernet, publication of bulletins and catalogues on the website, easy sending of short messages (SMS) and e-mail, reading of data from anywhere with an Ethernet connection, and storage of the results in the same centre. The Earthquake Analysis (EA) program was developed with the above facilities in mind. Microsoft Visual Basic 6.0 and Microsoft GDI tools were used as the basis for program development. The EA program can display five different seismic formats (gcf, suds, seisan, sac, nanometrics-y) without any conversion and offers the usual seismic processing facilities, such as filtering (band-pass, low-pass, high-pass), fast Fourier transform, offset adjustment, etc.
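
    As an illustration of the filtering step such a package offers, a generic band-pass filter is sketched below (the EA internals are not described above, and the corner frequencies are assumed values):

    ```python
    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    def bandpass(trace, fs, f_lo=1.0, f_hi=10.0, order=4):
        """Zero-phase Butterworth band-pass; corner frequencies in Hz (assumed)."""
        sos = butter(order, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
        return sosfiltfilt(sos, trace)

    fs = 100.0                                  # samples per second
    trace = np.random.randn(int(60 * fs))       # one minute of dummy signal
    filtered = bandpass(trace, fs)
    ```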

  2. Passport-PeopleSoft integration for HANDI 2000 business management system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, D.

    The integration between the PeopleSoft applications and PassPort modules is accomplished with an off-the-shelf package developed by INDUS. The product was updated to PeopleSoft Release 7.0. The integration product interacts with data from multiple products within PassPort and PeopleSoft. As of 10/1/98, the integration will interface between the following: (1) PassPort Accounts Payable, Contract Management, Inventory Management, and Purchasing; and (2) PeopleSoft General Ledger, Project Costing, Human Resources, and Payroll. The current supply systems and financial systems interact with each other via multiple custom interfaces. Data integrity and Y2K issues were among the driving factors in the replacement of these systems. The new systems allow FDH the opportunity to change the current business processes to the best business practices for which the commercial off-the-shelf software was adopted.

  3. Productivity and cost of conventional understory biomass harvesting systems

    Treesearch

    Douglas E. Miller; Thomas J. Straka; Bryce J. Stokes; William Watson

    1987-01-01

    Conventional harvesting equipment was tested for removing forest understory biomass (energywood) for use as fuel. Two types of systems were tested--a one-pass system and a two-pass system. In the one-pass system, the energywood and pulpwood were harvested simultaneously. In the two-pass system, the energywood was harvested in a first pass through the stand, and the...

  4. First results of electron temperature measurements by the use of multi-pass Thomson scattering system in GAMMA 10

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoshikawa, M., E-mail: yosikawa@prc.tsukuba.ac.jp; Nagasu, K.; Shimamura, Y.

    2014-11-15

    A multi-pass Thomson scattering (TS) system has the advantage of enhancing scattered signals. We constructed a multi-pass TS system, using a polarisation-based scheme and an image-relaying system, modelled on the GAMMA 10 TS system. We undertook Raman scattering experiments both for the multi-pass setting and for checking the optical components. Moreover, we applied the system to electron temperature measurements in the GAMMA 10 plasma for the first time. The integrated scattering signal was magnified approximately three times by using the multi-pass TS system with four passes. The electron temperature measurement accuracy is improved by using this multi-pass system.

  5. First results of electron temperature measurements by the use of multi-pass Thomson scattering system in GAMMA 10.

    PubMed

    Yoshikawa, M; Yasuhara, R; Nagasu, K; Shimamura, Y; Shima, Y; Kohagura, J; Sakamoto, M; Nakashima, Y; Imai, T; Ichimura, M; Yamada, I; Funaba, H; Kawahata, K; Minami, T

    2014-11-01

    A multi-pass Thomson scattering (TS) system has the advantage of enhancing scattered signals. We constructed a multi-pass TS system, using a polarisation-based scheme and an image-relaying system, modelled on the GAMMA 10 TS system. We undertook Raman scattering experiments both for the multi-pass setting and for checking the optical components. Moreover, we applied the system to electron temperature measurements in the GAMMA 10 plasma for the first time. The integrated scattering signal was magnified approximately three times by using the multi-pass TS system with four passes. The electron temperature measurement accuracy is improved by using this multi-pass system.

  6. New data processing for multichannel FIR laser interferometer

    NASA Astrophysics Data System (ADS)

    Jun-Ben, Chen; Xiang, Gao

    1989-10-01

    Usually, both the probing and reference signals received by the LATGS detectors of an FIR interferometer pass through a hardware phase discriminator, and the output phase difference, and hence the electron line density, is collected for analysis and display with a computerized data acquisition system (DAS). In this paper, a new numerical method for computing the phase difference in software has been developed to replace the hardware phase discriminator; the temporal resolution and stability are improved. An asymmetrical Abel inversion is applied to the data from a seven-channel FIR HCN laser interferometer, and the space-time distributions of plasma electron density in the HT-6M tokamak are derived.
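
    One common way to form that phase difference numerically is sketched below with synthetic signals (the paper's exact algorithm is not reproduced here): build analytic signals with the Hilbert transform and unwrap the instantaneous phase difference.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    fs = 1.0e6                                    # sample rate (assumed)
    t = np.arange(20000) / fs
    f_if = 1.0e4                                  # intermediate frequency (assumed)
    phase_true = 0.5 * (1 + np.sin(2 * np.pi * 50 * t))   # slow plasma-like phase

    ref = np.cos(2 * np.pi * f_if * t)                    # reference channel
    probe = np.cos(2 * np.pi * f_if * t - phase_true)     # probing channel

    dphi = np.unwrap(np.angle(hilbert(probe)) - np.angle(hilbert(ref)))
    # Recovered phase modulation (sign follows the convention above); for an
    # interferometer, line-integrated electron density is proportional to it.
    phase = -dphi
    ```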

  7. Software implementation of the SKIPSM paradigm under PIP

    NASA Astrophysics Data System (ADS)

    Hack, Ralf; Waltz, Frederick M.; Batchelor, Bruce G.

    1997-09-01

    SKIPSM (separated-kernel image processing using finite state machines) is a technique for implementing large-kernel binary-morphology operators and many other operations. While earlier papers on SKIPSM concentrated mainly on implementations using pipelined hardware, there is considerable scope for achieving major speed improvements in software systems. Using identical control software, one-pass binary erosion and dilation structuring elements (SEs) ranging from the trivial (3 by 3) to the gigantic (51 by 51, or even larger) are readily available. Processing speed is independent of the size of the SE, making the SKIPSM approach practical for work with very large SEs on ordinary desktop computers. PIP (prolog image processing) is an interactive machine vision prototyping environment developed at the University of Wales Cardiff. It consists of a large number of image processing operators embedded within the standard AI language Prolog. This paper describes the SKIPSM implementation of binary morphology operators within PIP. A large set of binary erosion and dilation operations (circles, squares, diamonds, octagons, etc.) is available to the user through a command-line driven dialogue, via pull-down menus, or incorporated into standard (Prolog) programs. Little has been done thus far to optimize the speed of this first software implementation of SKIPSM. Nevertheless, the results are impressive. The paper describes sample applications and presents timing figures. Readers have the opportunity to try out these operations on demonstration software written by the University of Wales, or via its WWW home page at http://bruce.cs.cf.ac.uk/bruce/index.html .
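
    The separability idea can be sketched without the finite-state-machine machinery: for a w-by-h rectangular SE, erosion factors into a horizontal 1-D pass followed by a vertical one, so the cost grows with w + h rather than w * h (SKIPSM's FSM formulation goes further still, to fixed work per pixel; boundary handling is simplified here):

    ```python
    import numpy as np

    def erode_1d(a, k, axis):
        # AND of k successive shifts = erosion with a 1-D SE (wraps at edges)
        out = a.copy()
        for s in range(1, k):
            out &= np.roll(a, -s, axis=axis)
        return out

    def erode_rect(img, w, h):
        # two 1-D passes instead of one w*h window per pixel
        return erode_1d(erode_1d(img.astype(bool), w, axis=1), h, axis=0)

    img = np.ones((64, 64), dtype=bool)
    img[10:20, 10:20] = False
    print(erode_rect(img, 5, 5).sum())   # foreground pixels surviving erosion
    ```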

  8. Software development for a gamma-ray burst rapid-response observatory in the US Virgin Islands.

    NASA Astrophysics Data System (ADS)

    Davis, K. A.; Giblin, T. W.; Neff, J. E.; Hakkila, J.; Hartmann, D.

    2004-12-01

    The site is situated near the crest of Crown Mountain on the island of St. Thomas in the US Virgin Islands. The observing site is strategically located at 65 W longitude, making it the easternmost GRB-dedicated observing site in the western hemisphere. The observatory has a 0.5 m robotic telescope and a Marconi 4240 2048 by 2048 CCD with BVRI filters. The field of view is identical to that of the XRT onboard Swift, 19 by 19 arc minutes. The telescope is operated through the Talon telescope control software. The observatory is notified of a burst trigger through the GRB Coordinates Network (GCN). The GCN notification is received through a socket connection to the control computer on site. A Perl script passes this information to the Talon software, which automatically interrupts concurrent observations and inserts a new GRB observing schedule. Once the observations are made, the resulting images are analyzed in IRAF. A source extraction is necessary to identify known sources and the optical transient. The system is being calibrated for automatic GRB response and is expected to be ready to follow up Swift observations. This work has been supported by NSF and NASA-EPSCoR.
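
    A minimal sketch of that trigger path, with placeholder host, port, packet format, and helper functions (real GCN notices are fixed-format binary packets, and the site's actual glue is a Perl script feeding Talon):

    ```python
    import socket

    def parse_notice(raw: bytes):
        # Placeholder "ra,dec" text format; real GCN notices are binary packets.
        ra, dec = raw.decode().split(",")[:2]
        return float(ra), float(dec)

    def schedule_observation(ra: float, dec: float):
        # Stand-in for handing the target to the telescope control software.
        print(f"interrupting queue; slewing to RA={ra}, Dec={dec}")

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("0.0.0.0", 5000))    # placeholder address and port
        srv.listen(1)
        conn, _ = srv.accept()         # blocks until a notice arrives
        with conn:
            ra, dec = parse_notice(conn.recv(4096))
            schedule_observation(ra, dec)
    ```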

  9. LEGION: Lightweight Expandable Group of Independently Operating Nodes

    NASA Technical Reports Server (NTRS)

    Burl, Michael C.

    2012-01-01

    LEGION is a lightweight C-language software library that enables distributed asynchronous data processing with a loosely coupled set of compute nodes. Loosely coupled means that a node can offer itself in service to a larger task at any time and can withdraw itself from service at any time, provided it is not actively engaged in an assignment. The main program, i.e., the one attempting to solve the larger task, does not need to know up front which nodes will be available, how many nodes will be available, or at what times the nodes will be available, as is normally the case in a "volunteer computing" framework. The LEGION software accomplishes its goals by providing message-based, inter-process communication similar to MPI (message passing interface), but without the tight coupling requirements. The software is lightweight and easy to install, as it is written in standard C with no exotic library dependencies. LEGION has been demonstrated in a challenging planetary science application in which a machine learning system is used in closed-loop fashion to efficiently explore the input parameter space of a complex numerical simulation. The machine learning system decides which jobs to run through the simulator; then, through LEGION calls, the system farms those jobs out to a collection of compute nodes, retrieves the job results as they become available, and updates a predictive model of how the simulator maps inputs to outputs. The machine learning system decides which new set of jobs would be most informative to run given the results so far; this basic loop is repeated until sufficient insight into the physical system modeled by the simulator is obtained.

  10. Final Report: Non-Visible, Automated Target Acquisition and Tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ziock, Klaus-Peter; Fabris, Lorenzo; Goddard, James K.

    The Roadside Tracker (RST) represents a new approach to radiation portal monitors. It uses a combination of gamma-ray and visible-light imaging to localize gamma-ray radiation sources to individual vehicles in free-flowing, multi-lane traffic. Deployed as two trailers that are parked on either side of the roadway (Fig. 1), the RST scans passing traffic with two large gamma-ray imagers, one mounted in each trailer. The system compensates for vehicle motion through the imagers' fields of view by using automated target acquisition and tracking (TAT) software applied to a stream of video images. Once a vehicle has left the field of view, the radiation image of that vehicle is analyzed for the presence of a source, and if one is found, an alarm is sounded. The gamma-ray image is presented to the operator together with the video image of the traffic stream from when the vehicle was approximately closest to the system (Fig. 2). The offending vehicle is identified with a bounding box to distinguish it from other vehicles that might be present at the same time. The system was developed under a previous grant from the Department of Homeland Security's (DHS's) Domestic Nuclear Detection Office (DNDO). This report documents work performed with follow-on funding from DNDO to further advance the development of the RST. Specifically, the primary thrust was to extend the performance envelope of the system by replacing the visible-light video cameras used by the TAT software with sensors that would allow operation at night and during inclement weather. In particular, it was desired to allow operation after dark without requiring external lighting. As part of this work, the system software was also upgraded to use 64-bit computers, the current-generation operating system (OS) and software development environment (Windows 7 vs. Windows XP, and current Visual Studio .NET), and improved software version control (Git vs. SourceSafe). With the upgraded performance allowed by new computers and the additional memory available in a 64-bit OS, the system was able to handle greater traffic densities; the upgrade also added the ability to handle stop-and-go traffic.

  11. An XML-based method for astronomy software designing

    NASA Astrophysics Data System (ADS)

    Liao, Mingxue; Aili, Yusupu; Zhang, Jin

    An XML-based method for standardizing software design is introduced and analyzed, and it has been successfully applied to renovating the hardware and software of the digital clock at Urumqi Astronomical Station. The basic strategy for eliciting time information from the new FT206 digital clock in the antenna control program is introduced. With FT206, the need to compute with sophisticated formulas how many centuries have passed since a certain day is eliminated, and it is no longer necessary to set the correct UT time on the computer controlling the antenna, because the year, month, and day are all deduced from the Julian day maintained in FT206 rather than from the computer time. With an XML-based method and standard for software design, various existing design methods are unified, communication and collaboration between developers are facilitated, and an Internet-based mode of software development becomes possible. The trend of development of the XML-based design method is predicted.
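
    For a sense of the calendar bookkeeping that FT206 makes unnecessary, the standard Julian day number algorithm (Fliegel & Van Flandern, 1968) is sketched below; this is generic background, not the station's own code:

    ```python
    def jdn(year, month, day):
        """Julian day number for a Gregorian calendar date (integer arithmetic)."""
        t = int((month - 14) / 12)     # 0 for Mar-Dec, -1 for Jan-Feb (truncating)
        return ((1461 * (year + 4800 + t)) // 4
                + (367 * (month - 2 - 12 * t)) // 12
                - (3 * ((year + 4900 + t) // 100)) // 4
                + day - 32075)

    assert jdn(2000, 1, 1) == 2451545   # J2000.0 reference day
    ```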

  12. The roles of the AAS Journals' Data Editors

    NASA Astrophysics Data System (ADS)

    Muench, August; NASA/SAO ADS, CERN/Zenodo.org, Harvard/CfA Wolbach Library

    2018-01-01

    I will summarize the community services provided by the AAS Journals' Data Editors to support authors when citing and preserving the software and data used in the published literature. In addition, I will describe the life of a piece of code as it passes through the current workflows for software citation in astronomy. Using this “lifecycle”, I will detail the ongoing work, funded by a grant from the Alfred P. Sloan Foundation to the American Astronomical Society, to improve the citation of software in the literature. The funded development team and advisory boards, made up of non-profit publishers, literature indexers, and preservation archives, are implementing the Force11 software citation principles for astronomy journals. The outcome of this work will be new workflows for authors and developers that fit into their current practices while enabling versioned citation of software and granular credit for its creators.

  13. Teleoperated Modular Robots for Lunar Operations

    NASA Technical Reports Server (NTRS)

    Globus, Al; Hornby, Greg; Larchev, Greg; Hancher, Matt; Cannon, Howard; Lohn, Jason

    2004-01-01

    Solar system exploration is currently carried out by special-purpose robots exquisitely designed for their anticipated tasks. However, all contingencies for in situ resource utilization (ISRU), human habitat preparation, and exploration will be difficult to anticipate. Furthermore, developing the necessary special-purpose mechanisms for deployment and other capabilities is difficult and error prone. For example, the Galileo high gain antenna never opened, severely restricting the quantity of data returned by the spacecraft. Also, deployment hardware is used only once. To address these problems, we are developing teleoperated modular robots for lunar missions, including operations in transit from Earth. Teleoperation of lunar systems from Earth involves a three-second speed-of-light delay, but experiment suggests that interactive operations are feasible. Modular robots typically consist of many identical modules that pass power and data between them and can be reconfigured for different tasks, providing great flexibility, inherent redundancy, and graceful degradation as modules fail. Our design features a number of different hub, link, and joint modules to simplify the individual modules, lower structure cost, and provide specialized capabilities. Modular robots are well suited for space applications because of their extreme flexibility, inherent redundancy, high-density packing, and opportunities for mass production. Simple structural modules can be manufactured from lunar regolith in situ using molds or directed solar sintering. Software to direct and control modular robots is difficult to develop. We have used genetic algorithms to evolve both the morphology and control system for walking modular robots. We are currently using evolvable system technology to evolve controllers for modular robots in the ISS glove box. Development of lunar modular robots will require software and physical simulators, including regolith simulation, to enable design and test of robot software and hardware, particularly automation software. Ready access to these simulators could provide opportunities for contest-driven development a la RoboCup (http://www.robocup.org/). Licensing of module designs could provide opportunities in the toy market and for spin-off applications.

  14. Policy-Based Management Natural Language Parser

    NASA Technical Reports Server (NTRS)

    James, Mark

    2009-01-01

    The Policy-Based Management Natural Language Parser (PBEM) is a rules-based approach to enterprise management that can be used to automate certain management tasks. This parser simplifies the management of a given endeavor by establishing policies to deal with situations that are likely to occur. Policies are operating rules that can be referred to as a means of maintaining order, security, consistency, or other ways of successfully furthering a goal or mission. PBEM provides a way of managing the configuration of network elements, applications, and processes via a set of high-level rules or business policies rather than by managing individual elements, thus moving control to a higher level. This software allows unique management rules (or commands) to be specified and applied to a cross-section of the Global Information Grid (GIG). The software embodies a parser that is capable of recognizing and understanding conversational English. Because all possible dialect variants cannot be anticipated, a unique capability was developed that parses based on conversational intent rather than on the exact way the words are used. This software can increase productivity by enabling a user to converse with the system in conversational English to define network policies. PBEM can be used in both manned and unmanned science-gathering programs. Because policy statements can be domain-independent, this software can be applied equally to a wide variety of applications.

  15. The impact of a private group practice of converting to an automated medical information system.

    PubMed

    Templeton, J; Bernes, M; Ostrowski, M

    1983-08-01

    As hardware and software developments make medical information systems increasingly available to physicians' office practices and outpatient facilities, there is a need to focus on systems installation and conversion issues. In addition to the detailed step-by-step implementation plan, the overall impact of the new system should be anticipated. The purchasers should consider such issues as new information flows and user communication patterns between patient care and ancillary and support departments; restructuring of fundamental approaches to work allocation for either batch or real-time systems; and new emphasis on any departments vital to problem spotting and solving. At California Primary Physicians (CPP), an awareness of these changes did not develop until well after the official "live" date had passed and the staff had been successfully using the system for several months. This paper explains how the above issues have emerged and the impact they have had on CPP, and provides a framework for anticipating such matters in any system installation.

  16. Using McIDAS-V data analysis and visualization software as an educational tool for understanding the atmosphere

    NASA Astrophysics Data System (ADS)

    Achtor, T. H.; Rink, T.

    2010-12-01

    The University of Wisconsin’s Space Science and Engineering Center (SSEC) has been at the forefront of developing data analysis and visualization tools for environmental satellites and other geophysical data. The fifth generation of the Man-computer Interactive Data Access System (McIDAS-V) is Java-based, open-source, freely available software that operates on Linux, Macintosh, and Windows systems. The software tools provide powerful new data manipulation and visualization capabilities that work with geophysical data in research, operational, and educational environments. McIDAS-V provides unique capabilities to support innovative techniques for evaluating research results, teaching, and training. McIDAS-V is based on three powerful software elements. VisAD is a Java library for building interactive, collaborative, 4-dimensional visualization and analysis tools. The Integrated Data Viewer (IDV) is a reference application based on the VisAD system and developed by the Unidata program that demonstrates the flexibility needed in this evolving environment, using a modern, object-oriented software design approach. The third tool, HYDRA, allows users to build, display, and interrogate multi- and hyperspectral environmental satellite data in powerful ways. The McIDAS-V software is being used for training and education in several settings. The McIDAS User Group provides training workshops at its annual meeting. Numerous online tutorials with training data sets have been developed to aid users in learning simple and more complex operations in McIDAS-V; all are available online. In a University of Wisconsin-Madison undergraduate course in Radar and Satellite Meteorology, McIDAS-V is used to create and deliver laboratory exercises using case study and real-time data. At the high school level, McIDAS-V is used in several exercises in our annual Summer Workshop in Earth and Atmospheric Sciences to provide young scientists the opportunity to examine data with friendly and powerful tools. This presentation will describe the McIDAS-V software and demonstrate some of its capabilities to analyze and display many types of global data. The presentation will also focus on describing how McIDAS-V can be used as an educational window for examining global geophysical data. Consecutive polar orbiting passes of NASA MODIS and CALIPSO observations

  17. INCOSE Systems Engineering Handbook v3.2: Improving the Process for SE Practitioners

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. Douglas Hamelin; David D. Walden; Michael E. Krueger

    2010-07-01

    The INCOSE Systems Engineering Handbook is the official INCOSE reference document for understanding systems engineering (SE) methods and conducting SE activities. Over the years, the Handbook has evolved to accommodate advances in the SE discipline and now serves as the basis for the Certified Systems Engineering Professional (CSEP) exam. Due to its evolution, the Handbook had become somewhat disjointed in its treatment and presentation of SE topics and was not aligned with the latest version of International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC) 15288:2008, Systems and Software Engineering. As a result, numerous inconsistencies were identified that could confuse practitioners and directly impact the probability of success in passing the CSEP exam. Further, INCOSE leadership had previously submitted v3.1 of the Handbook to ISO/IEC for consideration as a Technical Report, but was told that the Handbook would have to be updated to conform to the terminology and structure of the new ISO/IEC 15288:2008, Systems and Software Engineering, prior to being considered. The revised INCOSE Systems Engineering Handbook v3.2 aligns with the structure and principles of ISO/IEC 15288:2008 and presents the generic SE life-cycle process steps in their entirety, without duplication or redundancy, in a single location within the text. As such, the revised Handbook v3.2 serves as a comprehensive instructional and reference manual for effectively understanding SE processes and conducting SE, and better serves certification candidates preparing for the CSEP exam.

  18. Data fusion for automated non-destructive inspection

    PubMed Central

    Brierley, N.; Tippetts, T.; Cawley, P.

    2014-01-01

    In industrial non-destructive evaluation (NDE), it is increasingly common for data acquisition to be automated, driving a recent substantial increase in the availability of data. The collected data need to be analysed, typically necessitating the painstaking manual labour of a skilled operator. Moreover, in automated NDE a region of an inspected component is typically interrogated several times, be it within a single data channel due to multiple probe passes, across several channels acquired simultaneously, or over the course of repeated inspections. The systematic combination of these diverse readings is recognized to offer an opportunity to improve the reliability of the inspection, but is not achievable in a manual analysis. This paper describes a data-fusion-based software framework providing a partial automation capability, allowing component regions to be declared defect-free to a very high probability while readily identifying defect indications, thereby optimizing the use of the operator's time. The system is designed to be applicable to a wide range of automated NDE scenarios, but the processing is exemplified using the industrial ultrasonic immersion inspection of aerospace turbine discs. Results obtained for industrial datasets demonstrate an orders-of-magnitude reduction in false-call rates, for a given probability of detection, achievable using the developed software system. PMID:25002828
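
    A toy calculation makes the benefit of combining repeated interrogations concrete (the framework's actual fusion rule is more sophisticated and is not given in the abstract): assuming independent passes, the probability that every pass misses a defect shrinks geometrically.

    ```python
    # Illustrative only: independent passes, each with 90% single-pass detection.
    p_detect_single = 0.90
    for n_passes in (1, 2, 3):
        p_miss = (1 - p_detect_single) ** n_passes
        print(f"{n_passes} pass(es): P(detect) = {1 - p_miss:.4f}")
    ```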

  19. SU-E-T-81: A Study On Correlation Between Gamma Analysis for Midline and Lateralized Tumors Using VMAT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, Syam; Anjana

    Purpose: To evaluate the fluence for midline and lateralized tumors for the VMAT technique using a 2D Seven29 detector array combined with the Octavius phantom. Methods: 60 cases already being treated with volumetric modulated arc therapy (VMAT) were selected for this study, including tumors situated medially and laterally. Medial refers to a tumor situated at the midline of the body, and lateral means toward the side or away from the midline of the body. Verification plans were created for each treatment plan in the Varian Eclipse treatment planning system (version 10, Varian Medical Systems, Palo Alto, CA) with the 2D Seven29 detector array and the Octavius phantom (PTW, Freiburg, Germany). Measurements were performed on a Varian Clinac 2100 iX linear accelerator equipped with a Millennium 120-leaf collimator. Analysis was done by comparing the fluence measured for tumors situated on the midline with that for tumors situated laterally. Results: The fluences measured for all delivered plans were analyzed using VeriSoft software (PTW, Freiburg, Germany). The gamma pass percentages for midline tumors were found to be higher than for the lateralized ones. The standard deviations of the gamma values for midline and lateralized tumors are 2.18 and 3.5, respectively, and the standard deviations of the point doses are 0.38 and 0.29, respectively. The average gamma passing rate is 96.55% for midline tumors and 94.94% for lateralized tumors for the 3% DD and 3 mm DTA criteria. From the t test, it was found that there is no significant difference in gamma pass percentage between midline and lateralized tumors (p = 0.28). Conclusion: No particular correlation was found in the gamma pass criteria between midline and lateralized tumors.
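
    For readers unfamiliar with the gamma criterion behind these pass rates, a 1-D sketch follows (synthetic profiles, global normalization, and the common 3% dose difference / 3 mm distance-to-agreement tolerances are assumed): a measured point passes if some reference point is close to it in both dose and position.

    ```python
    import numpy as np

    def gamma_pass_rate(x, d_ref, d_meas, dd=0.03, dta=3.0):
        passes = 0
        for xi, di in zip(x, d_meas):
            # squared gamma against every reference point; the point passes if min <= 1
            g2 = ((x - xi) / dta) ** 2 + ((d_ref - di) / (dd * d_ref.max())) ** 2
            passes += g2.min() <= 1.0
        return 100.0 * passes / len(x)

    x = np.linspace(0, 100, 201)             # position (mm)
    d_ref = np.exp(-((x - 50) / 20) ** 2)    # synthetic reference dose profile
    d_meas = np.roll(d_ref, 1) * 1.01        # slightly shifted and scaled copy
    print(f"pass rate: {gamma_pass_rate(x, d_ref, d_meas):.1f}%")
    ```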

  20. SU-E-T-608: Performance Comparison of Four Commercial Treatment Planning Systems Applied to Intensity-Modulated Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, Y; Li, R; Chi, Z

    Purpose: To compare the performance of four commercial treatment planning systems (TPSs) used for intensity-modulated radiotherapy (IMRT). Methods: Ten patients with nasopharyngeal (4 cases), esophageal (3 cases), and cervical (3 cases) cancer were randomly selected from a 3-month IMRT plan pool at one radiotherapy center. For each patient, four IMRT plans were newly generated using four commercial TPSs (Corvus, Monaco, Pinnacle, and Xio) and then verified with Matrixx (a two-dimensional array, IBA) on a Varian 23EX accelerator. A pass rate (PR) calculated from the gamma index by OmniPro IMRT 1.5 software was evaluated at five plan verification standards (1%/1mm, 2%/2mm, 3%/3mm, 4%/4mm, and 5%/5mm) for each treatment plan. Overall and multiple pairwise comparisons of PRs were statistically conducted by analysis of covariance (ANOVA) F and LSD tests among the four TPSs. Results: Overall significant (p<0.05) differences in PRs were found among the four TPSs, with F test values of 3.8 (p=0.02), 21.1 (p<0.01), 14.0 (p<0.01), and 8.3 (p<0.01) at the standards of 1%/1mm to 4%/4mm, respectively, except at the 5%/5mm standard with 2.6 (p=0.06). All mean (standard deviation) PRs at 3%/3mm, namely 94.3 ± 3.3 (Corvus), 98.8 ± 0.8 (Monaco), 97.5 ± 1.7 (Pinnacle), and 98.4 ± 1.0 (Xio), were above 90% and met the clinical requirement. Multiple pairwise comparisons did not demonstrate a consistently low or high pattern for any TPS. Conclusion: Matrixx dose verification results show that the validation pass rates of the Monaco and Xio plans are relatively higher than those of the other two; the Pinnacle plan shows a slightly higher pass rate than the Corvus plan; the lowest pass rate was achieved by the Corvus plan among these four TPSs.

  1. Scheduling with Automatic Resolution of Conflicts

    NASA Technical Reports Server (NTRS)

    Clement, Bradley; Schaffer, Steve

    2006-01-01

    DSN Requirement Scheduler is a computer program that automatically schedules, reschedules, and resolves conflicts for allocations of resources of NASA s Deep Space Network (DSN) on the basis of ever-changing project requirements for DSN services. As used here, resources signifies, primarily, DSN antennas, ancillary equipment, and times during which they are available. Examples of project-required DSN services include arraying, segmentation, very-long-baseline interferometry, and multiple spacecraft per aperture. Requirements can include periodic reservations of specific or optional resources during specific time intervals or within ranges specified in terms of starting times and durations. This program is built on the Automated Scheduling and Planning Environment (ASPEN) software system (aspects of which have been described in previous NASA Tech Briefs articles), with customization to reflect requirements and constraints involved in allocation of DSN resources. Unlike prior DSN-resource- scheduling programs that make single passes through the requirements and require human intervention to resolve conflicts, this program makes repeated passes in a continuing search for all possible allocations, provides a best-effort solution at any time, and presents alternative solutions among which users can choose.

  2. EON: software for long time simulations of atomic scale systems

    NASA Astrophysics Data System (ADS)

    Chill, Samuel T.; Welborn, Matthew; Terrell, Rye; Zhang, Liang; Berthet, Jean-Claude; Pedersen, Andreas; Jónsson, Hannes; Henkelman, Graeme

    2014-07-01

    The EON software is designed for simulations of the state-to-state evolution of atomic scale systems over timescales greatly exceeding that of direct classical dynamics. States are defined as collections of atomic configurations from which a minimization of the potential energy gives the same inherent structure. The time evolution is assumed to be governed by rare events, where transitions between states are uncorrelated and infrequent compared with the timescale of atomic vibrations. Several methods for calculating the state-to-state evolution have been implemented in EON, including parallel replica dynamics, hyperdynamics and adaptive kinetic Monte Carlo. Global optimization methods, including simulated annealing, basin hopping and minima hopping are also implemented. The software has a client/server architecture where the computationally intensive evaluations of the interatomic interactions are calculated on the client-side and the state-to-state evolution is managed by the server. The client supports optimization for different computer architectures to maximize computational efficiency. The server is written in Python so that developers have access to the high-level functionality without delving into the computationally intensive components. Communication between the server and clients is abstracted so that calculations can be deployed on a single machine, clusters using a queuing system, large parallel computers using a message passing interface, or within a distributed computing environment. A generic interface to the evaluation of the interatomic interactions is defined so that empirical potentials, such as in LAMMPS, and density functional theory as implemented in VASP and GPAW can be used interchangeably. Examples are given to demonstrate the range of systems that can be modeled, including surface diffusion and island ripening of adsorbed atoms on metal surfaces, molecular diffusion on the surface of ice and global structural optimization of nanoparticles.
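
    For concreteness, the core step of a residence-time kinetic Monte Carlo method of the kind EON's adaptive KMC builds on can be sketched in a few lines; the event-selection rule below is the standard BKL algorithm, and the rates are illustrative, not EON's implementation.

    ```python
    import math
    import random

    def kmc_step(rates, t):
        """One residence-time (BKL) kinetic Monte Carlo step.

        rates -- escape rates (1/s) for the transitions out of the current state
        t     -- accumulated simulation time (s)
        Returns (index of the chosen transition, updated time).
        """
        total = sum(rates)
        # Choose a transition with probability proportional to its rate.
        r = random.uniform(0.0, total)
        acc, chosen = 0.0, len(rates) - 1
        for i, k in enumerate(rates):
            acc += k
            if r <= acc:
                chosen = i
                break
        # Advance the clock by an exponentially distributed residence time.
        t += -math.log(1.0 - random.random()) / total
        return chosen, t

    # Example: three escape processes out of the current state.
    idx, t = kmc_step([1.0e3, 5.0e2, 1.0e1], 0.0)
    print(f"took transition {idx}, t = {t:.2e} s")
    ```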

  3. User Interface Developed for Controls/CFD Interdisciplinary Research

    NASA Technical Reports Server (NTRS)

    1996-01-01

    The NASA Lewis Research Center, in conjunction with the University of Akron, is developing analytical methods and software tools to create a cross-discipline "bridge" between controls and computational fluid dynamics (CFD) technologies. Traditionally, the controls analyst has used simulations based on large lumping techniques to generate low-order linear models convenient for designing propulsion system controls. For complex, high-speed vehicles such as the High Speed Civil Transport (HSCT), simulations based on CFD methods are required to capture the relevant flow physics. The use of CFD should also help reduce the development time and costs associated with experimentally tuning the control system. The initial application for this research is the High Speed Civil Transport inlet control problem. A major aspect of this research is the development of a controls/CFD interface for non-CFD experts, to facilitate the interactive operation of CFD simulations and the extraction of reduced-order, time-accurate models from CFD results. A distributed computing approach for implementing the interface is being explored. Software being developed as part of the Integrated CFD and Experiments (ICE) project provides the basis for the operating environment, including run-time displays and information (data base) management. Message-passing software is used to communicate between the ICE system and the CFD simulation, which can reside on distributed, parallel computing systems. Initially, the one-dimensional Large-Perturbation Inlet (LAPIN) code is being used to simulate a High Speed Civil Transport type inlet. LAPIN can model real supersonic inlet features, including bleeds, bypasses, and variable geometry, such as translating or variable-ramp-angle centerbodies. Work is in progress to use parallel versions of the multidimensional NPARC code.

  4. POLYMERASE CHAIN REACTION (PCR) TECHNOLOGY IN VISUAL BEACH

    EPA Science Inventory

    In 2000, the US Congress passed the Beaches Environmental Assessment and Coastal Health Act under which the EPA has the mandate to manage all significant public beaches by 2008. As a result, EPA, USGS and NOAA are developing the Visual Beach program which consists of software eq...

  5. Capital Expert System

    NASA Astrophysics Data System (ADS)

    Dowell, Laurie; Gary, Jack; Illingworth, Bill; Sargent, Tom

    1987-05-01

    Gathering information, necessary forms, and financial calculations needed to generate a "capital investment proposal" is an extremely complex and difficult process. The intent of the capital investment proposal is to ensure management that the proposed investment has been thoroughly investigated and will have a positive impact on corporate goals. Meeting this requirement typically takes four or five experts a total of 12 hours to generate a "Capital Package." A Capital Expert System was therefore developed using "Personal Consultant." The completed system is hybrid and as such does not depend solely on rules but incorporates several different software packages that communicate through variables and functions passed from one to another. This paper describes the use of expert system techniques, methodology in building the knowledge base, contexts, LISP functions, data base, and special challenges that had to be overcome to create this system. The Capital Expert System is the successful result of a unique integration of artificial intelligence with business accounting, financial forms generation, and investment proposal expertise.

  6. 78 FR 5477 - Agency Information Collection Activities: InfoPass System, No Form Number; Extension, Without...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-25

    ...-0113] Agency Information Collection Activities: InfoPass System, No Form Number; Extension, Without... Change, of a Currently Approved Collection. (2) Title of the Form/Collection: InfoPass System. (3) Agency...: Primary: Individuals or households. The InfoPass system allows an applicant or petitioner to schedule an...

  7. 77 FR 65898 - Agency Information Collection Activities: InfoPass System, No Form Number; Extension, Without...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-31

    ...-0113] Agency Information Collection Activities: InfoPass System, No Form Number; Extension, Without...) Title of the Form/Collection: InfoPass System. (3) Agency form number, if any, and the applicable... InfoPass system allows an applicant or petitioner to schedule an interview appointment with USCIS...

  8. Software for C1 interpolation

    NASA Technical Reports Server (NTRS)

    Lawson, C. L.

    1977-01-01

    The problem of mathematically defining a smooth surface passing through a finite set of given points is studied. Literature relating to the problem is briefly reviewed. An algorithm is described that first constructs a triangular grid in the (x,y) domain and then estimates first partial derivatives at the nodal points. Interpolation in the triangular cells using a method that gives C^1 continuity overall is examined. Performance of software implementing the algorithm is discussed. Theoretical results are presented that provide valuable guidance in the development of algorithms for constructing triangular grids.

  9. Improvements in Low-cost Ultrasonic Measurements of Blood Flow in "by-passes" Using Narrow & Broad Band Transit-time Procedures

    NASA Astrophysics Data System (ADS)

    Ramos, A.; Calas, H.; Diez, L.; Moreno, E.; Prohías, J.; Villar, A.; Carrillo, E.; Jiménez, A.; Pereira, W. C. A.; Von Krüger, M. A.

    Cardio-pathology by ischemia is an important cause of death, but re-vascularization of the coronary arteries (the by-pass operation) is a useful solution that reduces the associated morbidity and improves patients' quality of life. During these surgeries, the flow in coronary vessels must be measured using non-invasive ultrasonic methods known as transit-time flow measurement (TTFM), which is currently the most accurate option. TTFM is a common intra-operative tool, in conjunction with classic Doppler velocimetry, to check the quality of these surgical procedures for implanting grafts in parallel with the coronary arteries. This work shows important improvements in flow metering, obtained in our research laboratories (CSIC, ICIMAF, COPPE) and tested under real surgical conditions in Cardiocentro-HHA, for both narrowband (NB) and broadband (BB) regimes, by applying results of a CYTED multinational project (Ultrasonic & computational systems for cardiovascular diagnostics). Mathematical models and phantoms were created to evaluate flow measurements accurately, in laboratory conditions, prior to our new electronic designs and low-cost implementations, which improve previous TTFM systems and include analogic detection, acquisition & post-processing, and a portable PC. Both regimes (NB and BB), with complementary performances for different conditions, were considered. Finally, specific software was developed to offer facilities to surgeons during their interventions.

  10. Two years experience with quality assurance protocol for patient related Rapid Arc treatment plan verification using a two dimensional ionization chamber array

    PubMed Central

    2011-01-01

    Purpose To verify the dose distribution and number of monitor units (MU) for dynamic treatment techniques like volumetric modulated single arc radiation therapy (Rapid Arc), each patient treatment plan has to be verified prior to the first treatment. The purpose of this study was to develop a patient-related treatment plan verification protocol using a two-dimensional ionization chamber array (MatriXX, IBA, Schwarzenbruck, Germany). Method Measurements were done to determine the dependence between the response of the 2D ionization chamber array, beam direction, and field size. The reproducibility of the measurements was also checked. For the patient-related verifications, the original patient Rapid Arc treatment plan was projected onto a CT dataset of the MatriXX and the dose distribution was calculated. After irradiation of the Rapid Arc verification plans, measured and calculated 2D dose distributions were compared using the gamma evaluation method implemented in the measuring software OmniPro (version 1.5, IBA, Schwarzenbruck, Germany). Results The dependence between the response of the 2D ionization chamber array, field size and beam direction showed a passing rate of 99% for field sizes between 7 cm × 7 cm and 24 cm × 24 cm for measurements of a single arc. For field sizes smaller than 7 cm × 7 cm or larger than 24 cm × 24 cm, the passing rate was less than 99%. The reproducibility was within a passing rate of 99% to 100%. The accuracy of the whole process, including the uncertainty of the measuring system, treatment planning system, linear accelerator and isocentric laser system in the treatment room, was acceptable for treatment plan verification using gamma criteria of 3% and 3 mm (2D global gamma index). Conclusion It was possible to verify the 2D dose distribution and MU of Rapid Arc treatment plans using the MatriXX, and its use for Rapid Arc treatment plan verification in clinical routine is reasonable. With the passing rate set at 99%, the verification protocol is able to detect clinically significant errors. PMID:21342509

  11. Strong Motion Seismograph Based On MEMS Accelerometer

    NASA Astrophysics Data System (ADS)

    Teng, Y.; Hu, X.

    2013-12-01

    The MEMS strong-motion seismograph we developed uses a modular design for both its software and hardware, so it can fit various needs in different application situations. The hardware consists of a MEMS accelerometer, a control processor system, a data-storage system, a wired real-time data transmission system over an IP network, a wireless data transmission module using 3G broadband, a GPS calibration module, and a power supply system with a large-volume lithium battery. The sensor is a three-axis, 14-bit, high-resolution, digital-output MEMS accelerometer. Its noise level reaches about 99 μg/√Hz, with a dynamically selectable full scale of ±2g to ±8g and output data rates from 1.56 Hz to 800 Hz. Its maximum current consumption is merely 165 μA, and the device is small enough to be available in a 3 mm × 3 mm × 1 mm QFN package. Furthermore, there is access to both low-pass filtered and high-pass filtered data, which minimizes the data analysis required for earthquake signal detection, so post-processing can be simplified. The control system uses a 32-bit low-power embedded ARM9 processor (S3C2440) running Linux, clocked at 400 MHz, with 64 MB of SDRAM main memory and a 256 MB flash memory; an external high-capacity SD card can easily be added. The system therefore meets the requirements for data acquisition, processing, transmission, and storage. Both wired and wireless networks support remote real-time monitoring, data transmission, system maintenance, status monitoring, and software updates. Linux was embedded and a multi-layer design was used: the middle layer contains the sensor hardware driver, data acquisition, and earthquake triggering code, with the hardware driver consisting of an I2C-bus interface driver, an IO driver, and an asynchronous notification driver. The application layer mainly comprises an earthquake parameter module, a local database management module, a data transmission module, remote monitoring, FTP service, and so on, implemented with multiple threads. The whole strong-motion seismograph is encapsulated in a small aluminum box of 80 mm × 120 mm × 55 mm, and the internal battery can run continuously for more than 24 hours. The instrument supports remote software updates and can meet the following needs: (a) automatically picking up earthquake events and saving the data in wave-event files and hourly files, for monitoring strong earthquakes, explosions, and bridge and building health; (b) automatically calculating earthquake parameters and transferring them over the 3G wireless broadband network; because the seismograph is low cost and easy to install, units can be concentrated in urban regions or areas needing special care to form a ground-motion-parameter quick-report sensor network, from which a high-resolution shake map can be produced for emergency rescue when a large earthquake occurs; (c) with P-wave detection program modules loaded, providing earthquake early warning for large earthquakes; and (d) easily building a high-density seismic monitoring network with remote control and modern intelligent earthquake sensors.
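
    The abstract does not specify the P-wave detection algorithm; a classic STA/LTA (short-term average over long-term average) trigger, sketched below on synthetic accelerometer samples, is one common choice for such a module.

    ```python
    import numpy as np

    def sta_lta_triggers(accel, fs, sta_win=0.5, lta_win=10.0, threshold=4.0):
        """Return sample indices where the STA/LTA ratio exceeds the threshold.
        Window lengths and threshold are illustrative, not the instrument's."""
        sta_n, lta_n = int(sta_win * fs), int(lta_win * fs)
        energy = np.asarray(accel, float) ** 2
        csum = np.cumsum(energy)
        hits = []
        for i in range(lta_n, len(energy) - sta_n):
            lta = (csum[i] - csum[i - lta_n]) / lta_n   # trailing long-term average
            sta = (csum[i + sta_n] - csum[i]) / sta_n   # leading short-term average
            if lta > 0 and sta / lta > threshold:
                hits.append(i)
        return hits

    # 200 Hz record: background noise with a burst injected at t = 30 s.
    fs = 200
    x = np.random.normal(0.0, 1e-4, 60 * fs)
    x[30 * fs:31 * fs] += np.random.normal(0.0, 5e-3, fs)
    print(sta_lta_triggers(x, fs)[:1])  # first trigger lands near sample 6000
    ```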

  12. A system for the automated data-acquisition of fast transient signals in excitable membranes.

    PubMed

    Bustamante, J O

    1988-01-01

    This paper provides a description of a system for the acquisition of fast transient currents flowing across excitable membranes. The front end of the system consists of a CAMAC crate with plug-in modules. The modules provide control of CAMAC operations, analog to digital conversion, electronic memory storage and timing of events. The signals are transferred under direct memory access to an IBM PC microcomputer through a special-purpose interface. Voltage levels from a digital to analog board in the microcomputer are passed through multiplexers to produce the desired voltage pulse patterns to elicit the transmembrane currents. The dead time between consecutive excitatory voltage pulses is limited only by the computer data bus and the software characteristics. The dead time between data transfers can be reduced to the order of milliseconds, which is sufficient for most experiments with transmembrane ionic currents.

  13. Quantum information processing with superconducting circuits: a review.

    PubMed

    Wendin, G

    2017-10-01

    During the last ten years, superconducting circuits have passed from being interesting physical devices to becoming contenders for near-future useful and scalable quantum information processing (QIP). Advanced quantum simulation experiments have been shown with up to nine qubits, while a demonstration of quantum supremacy with fifty qubits is anticipated in just a few years. Quantum supremacy means that the quantum system can no longer be simulated by the most powerful classical supercomputers. Integrated classical-quantum computing systems are already emerging that can be used for software development and experimentation, even via web interfaces. Therefore, the time is ripe for describing some of the recent development of superconducting devices, systems and applications. As such, the discussion of superconducting qubits and circuits is limited to devices that are proven useful for current or near future applications. Consequently, the centre of interest is the practical applications of QIP, such as computation and simulation in Physics and Chemistry.

  14. Quantum information processing with superconducting circuits: a review

    NASA Astrophysics Data System (ADS)

    Wendin, G.

    2017-10-01

    During the last ten years, superconducting circuits have passed from being interesting physical devices to becoming contenders for near-future useful and scalable quantum information processing (QIP). Advanced quantum simulation experiments have been shown with up to nine qubits, while a demonstration of quantum supremacy with fifty qubits is anticipated in just a few years. Quantum supremacy means that the quantum system can no longer be simulated by the most powerful classical supercomputers. Integrated classical-quantum computing systems are already emerging that can be used for software development and experimentation, even via web interfaces. Therefore, the time is ripe for describing some of the recent development of superconducting devices, systems and applications. As such, the discussion of superconducting qubits and circuits is limited to devices that are proven useful for current or near future applications. Consequently, the centre of interest is the practical applications of QIP, such as computation and simulation in Physics and Chemistry.

  15. MediaTracker system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sandoval, D. M.; Strittmatter, R. B.; Abeyta, J. D.

    2004-01-01

    The initial objectives of this effort were to provide a hardware and software platform that can address the requirements for the accountability of classified removable electronic media and vault access logging. The MediaTracker system software assists classified media custodians in managing vault access logging and media tracking to prevent the inadvertent violation of rules or policies for access to a restricted area and the movement and use of tracked items. The MediaTracker system includes the software tools to track and account for high consequence security assets and high value items. The overall benefits include: (1) real-time access to the disposition of all Classified Removable Electronic Media (CREM), (2) streamlined security procedures and requirements, (3) removal of ambiguity and managerial inconsistencies, (4) prevention of incidents that can and should be prevented, (5) alignment with the DOE's initiative to achieve improvements in security and facility operations through technology deployment, and (6) enhanced individual responsibility by providing a consistent method of dealing with daily responsibilities. In response to initiatives to enhance the control of classified removable electronic media (CREM), the MediaTracker software suite was developed, piloted and implemented at the Los Alamos National Laboratory beginning in July 2000. The MediaTracker software suite assists in the accountability and tracking of CREM and other high-value assets. One component of the MediaTracker software suite provides a Laboratory-approved media tracking system. Using commercial touch screen and bar code technology, the MediaTracker (MT) component of the MediaTracker software suite provides an efficient and effective means to meet current Laboratory requirements and provides newly engineered controls to help assure compliance with those requirements. It also establishes a computer infrastructure at vault entrances for vault access logging, and can accommodate several methods of positive identification including smart cards and biometrics. Currently, we have three mechanisms that provide added security for accountability and tracking purposes. The first mechanism is a portable, hand-held inventory scanner, which allows the custodian to physically track items that are not accessible within a particular area. The second mechanism is a radio frequency identification (RFID) monitoring portal, which tracks and logs in a database all activity of tagged items that pass through the portals. The third mechanism is electronic tagging of a flash memory device for automated inventory of CREM in storage. By modifying this USB device, the user is provided with added assurance, limiting the data from being obtained from any other computer.

  16. Development of electronic software for the management of trauma patients on the orthopaedic unit.

    PubMed

    Patel, Vishal P; Raptis, Demitri; Christofi, T; Mathew, Rajeev; Horwitz, M D; Eleftheriou, K; McGovern, Paul D; Youngman, J; Patel, J V; Haddad, F S

    2009-04-01

    Continuity of patient care is an essential prerequisite for the successful running of a trauma surgery service. This is becoming increasingly difficult because of the new working arrangements of junior doctors. Handover is now central to ensure continuity of care following shift change over. The purpose of this study was to compare the quality of information handed over using the traditional ad hoc method of a handover sheet versus a web-based electronic software programme. It was hoped that through improved quality of handover the new system would have a positive impact on clinical care, risk and time management. Data was prospectively collected and analyzed using the SPSS 14 statistical package. The handover data of 350 patients using a paper-based system was compared to the data of 357 cases using the web-based system. Key data included basic demographic data, responsible surgeon, location of patient, injury site including side, whether fractures were open or closed, concomitant injuries and the treatment plan. A survey was conducted amongst health care providers to assess the impact of the new software. With the introduction of the electronic handover system, patients with missing demographic data reduced from 35.1% to 0.8% (p<0.0001) and missing patient location from 18.6% to 3.6% (p<0.0001). Missing consultant information and missing diagnosis dropped from 12.9% to 2.0% (p<0.0001) and from 11.7% to 0.8% (p<0.0001), respectively. The missing information regarding side and anatomical site of the injury was reduced from 31.4% to 0.8% (p<0.0001) and from 13.7% to 1.1% (p<0.0001), respectively. In 96.6% of paper ad hoc handovers it was not stated whether the injury was 'closed' or 'open', whereas in the electronic group this information was evident in all 357 patients (p<0.0001). A treatment plan was included only in 52.3% of paper handovers compared to 94.7% (p<0.0001) of electronic handovers. A survey revealed 96% of members of the trauma team felt an improvement of handover since the introduction of the software, and 94% of members were satisfied with the software. The findings of our study show that the use of web-based electronic software is effective in facilitating and improving the quality of information passed during handover. Structured software also aids in improving work flow amongst the trauma team. We argue that an improvement in the quality of handover is an improvement in clinical practice.
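
    The reported p-values can be reproduced in outline with a chi-square test on counts reconstructed from the percentages; the study used SPSS 14, so the SciPy call below is a stand-in, and the counts are approximations obtained by rounding.

    ```python
    from scipy.stats import chi2_contingency

    # Missing-demographics counts reconstructed from the reported rates:
    # 35.1% of 350 paper handovers vs 0.8% of 357 electronic handovers.
    paper_missing, paper_total = round(0.351 * 350), 350
    elec_missing, elec_total = round(0.008 * 357), 357

    table = [
        [paper_missing, paper_total - paper_missing],
        [elec_missing, elec_total - elec_missing],
    ]
    chi2, p, dof, _ = chi2_contingency(table)
    print(f"chi2 = {chi2:.1f}, p = {p:.2g}")  # p << 0.0001, as the abstract reports
    ```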

  17. AN ADA LINEAR ALGEBRA PACKAGE MODELED AFTER HAL/S

    NASA Technical Reports Server (NTRS)

    Klumpp, A. R.

    1994-01-01

    This package extends the Ada programming language to include linear algebra capabilities similar to those of the HAL/S programming language. The package is designed for avionics applications such as Space Station flight software. In addition to the HAL/S built-in functions, the package incorporates the quaternion functions used in the Shuttle and Galileo projects, and routines from LINPAK that solve systems of equations involving general square matrices. Language conventions in this package follow those of HAL/S to the maximum extent practical and minimize the effort required for writing new avionics software and translating existing software into Ada. Valid numeric types in this package include scalar, vector, matrix, and quaternion declarations. (Quaternions are four-component vectors used in representing motion between two coordinate frames). Single precision and double precision floating point arithmetic is available in addition to the standard double precision integer manipulation. Infix operators are used instead of function calls to define dot products, cross products, quaternion products, and mixed scalar-vector, scalar-matrix, and vector-matrix products. The package contains two generic programs: one for floating point, and one for integer. The actual component type is passed as a formal parameter to the generic linear algebra package. The procedures for solving systems of linear equations defined by general matrices include GEFA, GECO, GESL, and GIDI. The HAL/S functions include ABVAL, UNIT, TRACE, DET, INVERSE, TRANSPOSE, GET, PUT, FETCH, PLACE, and IDENTITY. This package is written in Ada (Version 1.2) for batch execution and is machine independent. The linear algebra software depends on nothing outside the Ada language except for a call to a square root function for floating point scalars (such as SQRT in the DEC VAX MATHLIB library). This program was developed in 1989, and is a copyrighted work with all copyright vested in NASA.
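
    For illustration, the quaternion product and frame rotation such a package provides can be sketched in Python rather than Ada; the (w, x, y, z) component ordering and the Hamilton sign convention are assumptions made here, not HAL/S's documented conventions.

    ```python
    import math

    def qmul(a, b):
        """Hamilton product of quaternions stored as (w, x, y, z) tuples."""
        aw, ax, ay, az = a
        bw, bx, by, bz = b
        return (aw*bw - ax*bx - ay*by - az*bz,
                aw*bx + ax*bw + ay*bz - az*by,
                aw*by - ax*bz + ay*bw + az*bx,
                aw*bz + ax*by - ay*bx + az*bw)

    def rotate(q, v):
        """Rotate 3-vector v by unit quaternion q via q * (0, v) * conj(q)."""
        qc = (q[0], -q[1], -q[2], -q[3])
        w, x, y, z = qmul(qmul(q, (0.0, *v)), qc)
        return (x, y, z)

    # A 90-degree rotation about z maps the x axis onto the y axis.
    half = math.radians(90.0) / 2.0
    q = (math.cos(half), 0.0, 0.0, math.sin(half))
    print(rotate(q, (1.0, 0.0, 0.0)))  # approximately (0, 1, 0)
    ```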

  18. Integrating medical imaging analyses through a high-throughput bundled resource imaging system

    NASA Astrophysics Data System (ADS)

    Covington, Kelsie; Welch, E. Brian; Jeong, Ha-Kyu; Landman, Bennett A.

    2011-03-01

    Exploitation of advanced, PACS-centric image analysis and interpretation pipelines provides well-developed storage, retrieval, and archival capabilities along with state-of-the-art data provenance, visualization, and clinical collaboration technologies. However, pursuit of integrated medical imaging analysis through a PACS environment can be limiting in terms of the overhead required to validate, evaluate and integrate emerging research technologies. Herein, we address this challenge through presentation of a high-throughput bundled resource imaging system (HUBRIS) as an extension to the Philips Research Imaging Development Environment (PRIDE). HUBRIS enables PACS-connected medical imaging equipment to invoke tools provided by the Java Imaging Science Toolkit (JIST) so that a medical imaging platform (e.g., a magnetic resonance imaging scanner) can pass images and parameters to a server, which communicates with a grid computing facility to invoke the selected algorithms. Generated images are passed back to the server and subsequently to the imaging platform from which the images can be sent to a PACS. JIST makes use of an open application program interface layer so that research technologies can be implemented in any language capable of communicating through a system shell environment (e.g., Matlab, Java, C/C++, Perl, LISP, etc.). As demonstrated in this proof-of-concept approach, HUBRIS enables evaluation and analysis of emerging technologies within well-developed PACS systems with minimal adaptation of research software, which simplifies evaluation of new technologies in clinical research and provides a more convenient use of PACS technology by imaging scientists.

  19. HIFOGS: Its design, operations and calibration

    NASA Astrophysics Data System (ADS)

    Witteborn, Fred C.; Cohen, Martin; Bregman, Jesse D.; Heere, Karen R.; Greene, Thomas P.; Wooden, Diane H.

    The High-efficiency, Infrared Faint Object Grating Spectrometer (HIFOGS) provides spectral coverage of selectable portions of the 3 to 18 micron range at resolving powers from 00 to 1000 using 120 Si:Bi detectors. Additional coverage to 30 microns is provided by a bank of 32 Si:P detectors. Selectable apertures, gratings and band-pass filters give the system its flexibility. Software for operating HIFOGS and reducing the data runs on a Macintosh computer. HIFOGS has been used to establish celestial flux standards using three independent approaches: comparison to star models, comparison to asteroid models and comparison to laboratory blackbodies. These standards are expected to have wide application in astronomical thermal-infrared spectroscopy.

  20. Matching rendered and real world images by digital image processing

    NASA Astrophysics Data System (ADS)

    Mitjà, Carles; Bover, Toni; Bigas, Miquel; Escofet, Jaume

    2010-05-01

    Recent advances in computer-generated images (CGI) have been used in commercial and industrial photography, providing a broad scope in product advertising. Mixing real world images with those rendered from virtual space software shows a more or less visible mismatch between the corresponding image quality performance. Rendered images are produced by software whose quality performance is limited only by the output resolution. Real world images are taken with cameras that add some amount of image degradation: residual lens aberrations, diffraction, sensor low-pass anti-aliasing filters, color pattern demosaicing, etc. The effect of all those image quality degradation factors can be characterized by the system Point Spread Function (PSF). Because the image is the convolution of the object with the system PSF, its characterization shows the amount of degradation added to any picture taken. This work explores the use of image processing to degrade the rendered images following the parameters indicated by the real system PSF, attempting to match the virtual and real world image qualities. The system MTF is determined by the slanted edge method both in laboratory conditions and in the real picture environment in order to compare the influence of the working conditions on the device performance; an approximation to the system PSF is derived from the two measurements. The rendered images are filtered through a Gaussian filter obtained from the taking system's PSF. Results with and without filtering are shown and compared by measuring the contrast achieved in different final image regions.
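
    A minimal sketch of the degradation step: if the measured PSF is approximated as a Gaussian, its width can be derived from a slanted-edge MTF50 measurement and applied to the rendered image. The MTF-to-sigma identity used below is a standard property of the Gaussian, and the numbers are illustrative, not the paper's measurements.

    ```python
    import math
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def degrade_render(rendered, mtf50_cyc_per_px):
        """Blur a rendered image with a Gaussian PSF matched to the taking
        system. For a Gaussian PSF of width sigma (pixels),
        MTF(f) = exp(-2 * (pi * sigma * f)**2), so MTF(f50) = 0.5 gives
        sigma = sqrt(ln 2 / 2) / (pi * f50)."""
        sigma = math.sqrt(math.log(2.0) / 2.0) / (math.pi * mtf50_cyc_per_px)
        return gaussian_filter(rendered, sigma=sigma)

    # Hypothetical render and an MTF50 of 0.25 cycles/pixel from the edge method.
    render = np.random.rand(64, 64)
    matched = degrade_render(render, mtf50_cyc_per_px=0.25)
    print(matched.shape)
    ```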

  1. LHCb Conditions database operation assistance systems

    NASA Astrophysics Data System (ADS)

    Clemencic, M.; Shapoval, I.; Cattaneo, M.; Degaudenzi, H.; Santinelli, R.

    2012-12-01

    The Conditions Database (CondDB) of the LHCb experiment provides versioned, time dependent geometry and conditions data for all LHCb data processing applications (simulation, high level trigger (HLT), reconstruction, analysis) in a heterogeneous computing environment ranging from user laptops to the HLT farm and the Grid. These different use cases impose front-end support for multiple database technologies (Oracle and SQLite are used). Sophisticated distribution tools are required to ensure timely and robust delivery of updates to all environments. The content of the database has to be managed to ensure that updates are internally consistent and externally compatible with multiple versions of the physics application software. In this paper we describe three systems that we have developed to address these issues. The first system is a CondDB state tracking extension to the Oracle 3D Streams replication technology, to trap cases when the CondDB replication was corrupted. Second, an automated distribution system for the SQLite-based CondDB, providing also smart backup and checkout mechanisms for the CondDB managers and LHCb users respectively. And, finally, a system to verify and monitor the internal (CondDB self-consistency) and external (LHCb physics software vs. CondDB) compatibility. The former two systems are used in production in the LHCb experiment and have achieved the desired goal of higher flexibility and robustness for the management and operation of the CondDB. The latter one has been fully designed and is currently passing to the implementation stage.

  2. Network Traffic Generator for Low-rate Small Network Equipment Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lanzisera, Steven

    2013-05-28

    Application that uses the Python low-level socket interface to pass network traffic between devices on the local side of a NAT router and the WAN side of the NAT router. This application is designed to generate traffic that complies with the Energy Star Small Network Equipment Test Method.
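
    A toy version of such a generator, using only Python's low-level socket interface; the destination address, payload size, and packet rate below are placeholders, not parameters taken from the Energy Star test method.

    ```python
    import socket
    import time

    DEST = ("192.0.2.10", 5005)      # hypothetical WAN-side receiver (TEST-NET)
    PAYLOAD = b"\x00" * 1000         # 1000-byte datagrams
    RATE_PPS = 10                    # hold to a low packet rate
    PACKETS = 100

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for _ in range(PACKETS):
        sock.sendto(PAYLOAD, DEST)   # fire-and-forget UDP datagram
        time.sleep(1.0 / RATE_PPS)
    sock.close()
    ```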

  3. Modeling a maintenance simulation of the geosynchronous platform

    NASA Technical Reports Server (NTRS)

    Kleiner, A. F., Jr.

    1980-01-01

    A modeling technique used to conduct a simulation study comparing various maintenance routines for a space platform is discussed. A system model is described and illustrated, the basic concepts of a simulation pass are detailed, and sections on failures and maintenance are included. The operation of the system across time is best modeled by a discrete event approach with two basic events: failure and maintenance of the system. Each overall simulation run consists of introducing a particular model of the physical system, together with a maintenance policy, demand function, and mission lifetime. The system is then run through many passes, each pass corresponding to one mission, and the model is re-initialized before each pass. Statistics are compiled at the end of each pass, and after the last pass a report is printed. Items of interest typically include the time to first maintenance, total number of maintenance trips for each pass, average capability of the system, etc.
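
    A compact sketch of one such pass, under invented parameters: failures arrive exponentially, scheduled maintenance restores the system, the model is re-initialized before each pass, and statistics are compiled at the end. This illustrates the discrete event approach, not the study's actual model.

    ```python
    import random

    def one_pass(mission=1000.0, mtbf=400.0, maint_interval=250.0):
        """Simulate one mission; return (failures, maintenance trips)."""
        t, failures, trips = 0.0, 0, 0
        next_fail = random.expovariate(1.0 / mtbf)
        next_maint = maint_interval
        while True:
            t = min(next_fail, next_maint)
            if t >= mission:
                return failures, trips
            if t == next_fail:
                failures += 1                    # failure event
                next_fail = float("inf")         # down until next maintenance
            else:
                trips += 1                       # maintenance restores the system
                next_fail = t + random.expovariate(1.0 / mtbf)
                next_maint = t + maint_interval

    # Many passes, re-initialized each time; compile statistics afterwards.
    runs = [one_pass() for _ in range(10_000)]
    print("mean maintenance trips per mission:",
          sum(trips for _, trips in runs) / len(runs))
    ```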

  4. Comparison of Absolute Apparent Diffusion Coefficient (ADC) Values in ADC Maps Generated Across Different Postprocessing Software: Reproducibility in Endometrial Carcinoma.

    PubMed

    Ghosh, Adarsh; Singh, Tulika; Singla, Veenu; Bagga, Rashmi; Khandelwal, Niranjan

    2017-12-01

    Apparent diffusion coefficient (ADC) maps are usually generated by built-in software provided by the MRI scanner vendors; however, various open-source postprocessing software packages are available for image manipulation and parametric map generation. The purpose of this study is to establish the reproducibility of absolute ADC values obtained using different postprocessing software programs. DW images with three b values were obtained with a 1.5-T MRI scanner, and the trace images were obtained. ADC maps were automatically generated by the in-line software provided by the vendor during image generation and were also separately generated on postprocessing software. These ADC maps were compared on the basis of ROIs using paired t test, Bland-Altman plot, mountain plot, and Passing-Bablok regression plot. There was a statistically significant difference in the mean ADC values obtained from the different postprocessing software programs when the same baseline trace DW images were used for the ADC map generation. For using ADC values as a quantitative cutoff for histologic characterization of tissues, standardization of the postprocessing algorithm is essential across processing software packages, especially in view of the implementation of vendor-neutral archiving.
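
    Of the agreement analyses named above, the Bland-Altman computation is simple to sketch; the helper below returns the mean bias and 95% limits of agreement for paired ROI measurements. The values are hypothetical, and the units are assumed to be 10^-3 mm^2/s.

    ```python
    import numpy as np

    def bland_altman(a, b):
        """Mean bias and 95% limits of agreement between paired measurements."""
        diff = np.asarray(a, float) - np.asarray(b, float)
        bias = diff.mean()
        spread = 1.96 * diff.std(ddof=1)
        return bias, bias - spread, bias + spread

    # Hypothetical ROI-mean ADC values from vendor vs open-source maps.
    vendor = [0.98, 1.12, 0.87, 1.30, 1.05]
    open_src = [1.01, 1.15, 0.91, 1.36, 1.09]
    print(bland_altman(vendor, open_src))  # (bias, lower LoA, upper LoA)
    ```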

  5. A concurrent distributed system for aircraft tactical decision generation

    NASA Technical Reports Server (NTRS)

    Mcmanus, John W.

    1990-01-01

    A research program investigating the use of AI techniques to aid in the development of a tactical decision generator (TDG) for within visual range (WVR) air combat engagements is discussed. The application of AI programming and problem-solving methods in the development and implementation of a concurrent version of the computerized logic for air-to-air warfare simulations (CLAWS) program, a second-generation TDG, is presented. Concurrent computing environments and programming approaches are discussed, and the design and performance of prototype concurrent TDG system (Cube CLAWS) are presented. It is concluded that the Cube CLAWS has provided a useful testbed to evaluate the development of a distributed blackboard system. The project has shown that the complexity of developing specialized software on a distributed, message-passing architecture such as the Hypercube is not overwhelming, and that reasonable speedups and processor efficiency can be achieved by a distributed blackboard system. The project has also highlighted some of the costs of using a distributed approach to designing a blackboard system.

  6. Autonomous System for Monitoring the Integrity of Composite Fan Housings

    NASA Technical Reports Server (NTRS)

    Qing, Xinlin P.; Aquino, Christopher; Kumar, Amrita

    2010-01-01

    A low-cost and reliable system assesses the integrity of composite fan-containment structures. The system utilizes a network of miniature sensors integrated with the structure to scan the entire structural area for any impact events and resulting structural damage, and to monitor degradation due to usage. This system can be used to monitor all types of composite structures on aircraft and spacecraft, as well as automatically monitor in real time the location and extent of damage in the containment structures. This diagnostic information is passed to prognostic modeling that is being developed to utilize the information and provide input on the residual strength of the structure, and maintain a history of structural degradation during usage. The structural health-monitoring system would consist of three major components: (1) sensors and a sensor network, which is permanently bonded onto the structure being monitored; (2) integrated hardware; and (3) software to monitor in-situ the health condition of in-service structures.

  7. You can't touch this: touch-free navigation through radiological images.

    PubMed

    Ebert, Lars C; Hatch, Gary; Ampanozi, Garyfalia; Thali, Michael J; Ross, Steffen

    2012-09-01

    Keyboards, mice, and touch screens are a potential source of infection or contamination in operating rooms, intensive care units, and autopsy suites. The authors present a low-cost prototype of a system which allows for touch-free control of a medical image viewer. This touch-free navigation system consists of a computer system (iMac, OS X 10.6, Apple, USA) with a medical image viewer (OsiriX, OsiriX Foundation, Switzerland) and a depth camera (Kinect, Microsoft, USA). They implemented software that translates the data delivered by the camera, together with voice recognition software, into keyboard and mouse commands, which are then passed to OsiriX. In this feasibility study, the authors introduced 10 medical professionals to the system and asked them to re-create 12 images from a CT data set. They evaluated response times and usability of the system compared with standard mouse/keyboard control. Users felt comfortable with the system after approximately 10 minutes. Response time was 120 ms. Users required 1.4 times more time to re-create an image with gesture control. Users with OsiriX experience were significantly faster using the mouse/keyboard and faster than users without prior experience. They rated the system 3.4 out of 5 for ease of use in comparison to the mouse/keyboard. The touch-free, gesture-controlled system performs favorably and removes a potential vector for infection, protecting both patients and staff. Because the camera can be quickly and easily integrated into existing systems, requires no calibration, and is low cost, the barriers to using this technology are low.

  8. Invention and validation of an automated camera system that uses optical character recognition to identify patient name mislabeled samples.

    PubMed

    Hawker, Charles D; McCarthy, William; Cleveland, David; Messinger, Bonnie L

    2014-03-01

    Mislabeled samples are a serious problem in most clinical laboratories. Published error rates range from 0.39/1000 to as high as 1.12%. Standardization of bar codes and label formats has not yet achieved the needed improvement. The mislabel rate in our laboratory, although low compared with published rates, prompted us to seek a solution to achieve zero errors. To reduce or eliminate our mislabeled samples, we invented an automated device using 4 cameras to photograph the outside of a sample tube. The system uses optical character recognition (OCR) to look for discrepancies between the patient name in our laboratory information system (LIS) vs the patient name on the customer label. All discrepancies detected by the system's software then require human inspection. The system was installed on our automated track and validated with production samples. We obtained 1 009 830 images during the validation period, and every image was reviewed. OCR passed approximately 75% of the samples, and no mislabeled samples were passed. The 25% failed by the system included 121 samples actually mislabeled by patient name and 148 samples with spelling discrepancies between the patient name on the customer label and the patient name in our LIS. Only 71 of the 121 mislabeled samples detected by OCR were found through our normal quality assurance process. We have invented an automated camera system that uses OCR technology to identify potential mislabeled samples. We have validated this system using samples transported on our automated track. Full implementation of this technology offers the possibility of zero mislabeled samples in the preanalytic stage.
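
    The paper does not publish its matching rules, so as a stand-in, the sketch below flags a sample for human review when the OCR'd label name and the LIS name fall below a simple normalized similarity threshold, using Python's standard difflib.

    ```python
    from difflib import SequenceMatcher

    def name_discrepancy(lis_name, label_name, threshold=0.85):
        """Return (needs_review, score); the ratio test and threshold are
        illustrative, not the validated system's actual matching logic."""
        a = " ".join(lis_name.upper().split())
        b = " ".join(label_name.upper().split())
        score = SequenceMatcher(None, a, b).ratio()
        return score < threshold, score

    print(name_discrepancy("SMITH, JOHN Q", "SMITH, JON Q"))   # spelling discrepancy
    print(name_discrepancy("SMITH, JOHN Q", "SMITH, JOHN Q"))  # passes
    ```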

  9. Dental Students' Perceptions of Digital Assessment Software for Preclinical Tooth Preparation Exercises.

    PubMed

    Park, Carly F; Sheinbaum, Justin M; Tamada, Yasushi; Chandiramani, Raina; Lian, Lisa; Lee, Cliff; Da Silva, John; Ishikawa-Nagai, Shigemi

    2017-05-01

    Objective self-assessment is essential to learning and continued competence in dentistry. A computer-assisted design/computer-assisted manufacturing (CAD/CAM) learning software (prepCheck, Sirona) allows students to objectively assess their performance in preclinical prosthodontics. The aim of this study was to evaluate students' perceptions of CAD/CAM learning software for preclinical prosthodontics exercises. In 2014, all third-year dental students at Harvard School of Dental Medicine (n=36) were individually instructed by a trained faculty member in using prepCheck. Each student completed a preclinical formative exercise (#18) and summative examination (#30) for ceramometal crown preparation and evaluated the preparation using five assessment tools (reduction, margin width, surface finish, taper, and undercut) in prepCheck. The students then rated each of the five tools for usefulness, user-friendliness, and frequency of use on a scale from 1=lowest to 5=highest. Faculty members graded the tooth preparations as pass (P), marginal-pass (MP), or fail (F). The survey response rate was 100%. The tools for undercut and taper had the highest scores for usefulness, user-friendliness, and frequency of use. The reduction tool score was significantly lower in all categories (p<0.01). There were significant differences in usefulness (p<0.05) and user-friendliness (p<0.05) scores among the P, MP, and F groups. These results suggest that the prepCheck taper and undercut tools were useful for the students' learning process in a preclinical exercise. The students' perceptions of prepCheck and their preclinical performance were related, and those students who performed poorest rated the software as significantly more useful.

  10. Comparative Implementation of High Performance Computing for Power System Dynamic Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Shuangshuang; Huang, Zhenyu; Diao, Ruisheng

    Dynamic simulation for transient stability assessment is one of the most important, but intensive, computations for power system planning and operation. Present commercial software is mainly designed for sequential computation to run a single simulation, which is very time consuming with a single processor. The application of High Performance Computing (HPC) to dynamic simulations is very promising in accelerating the computing process by parallelizing its kernel algorithms while maintaining the same level of computation accuracy. This paper describes the comparative implementation of four parallel dynamic simulation schemes in two state-of-the-art HPC environments: Message Passing Interface (MPI) and Open Multi-Processing (OpenMP). These implementations serve to match the application with dedicated multi-processor computing hardware and maximize the utilization and benefits of HPC during the development process.

  11. The Keck keyword layer

    NASA Technical Reports Server (NTRS)

    Conrad, A. R.; Lupton, W. F.

    1992-01-01

    Each Keck instrument presents a consistent software view to the user interface programmer. The view consists of a small library of functions, which are identical for all instruments, and a large set of keywords that vary from instrument to instrument. All knowledge of the underlying task structure is hidden from the application programmer by the keyword layer. Image capture software uses the same function library to collect data for the image header. Because the image capture software and the instrument control software are built on top of the same keyword layer, a given observation can be 'replayed' by extracting keyword-value pairs from the image header and passing them back to the control system. The keyword layer features non-blocking as well as blocking I/O. A non-blocking keyword write operation (such as setting a filter position) specifies a callback to be invoked when the operation is complete. A non-blocking keyword read operation specifies a callback to be invoked whenever the keyword changes state. The keyword-callback style meshes well with the widget-callback style commonly used in X window programs. The first keyword library was built for the two Keck optical instruments. More recently, keyword libraries have been developed for the infrared instruments and for telescope control. Although the underlying mechanisms used for inter-process communication by each of these systems vary widely (Lick MUSIC, Sun RPC, and direct socket I/O, respectively), a basic user interface has been written that can be used with any of these systems. Since the keyword libraries are bound to user interface programs dynamically at run time, only a single set of user interface executables is needed. For example, the same program, 'xshow', can be used to display continuously the telescope's position, the time left in an instrument's exposure, or both values simultaneously. Less generic tools that operate on specific keywords, for example an X display that controls optical instrument exposures, have also been written using the keyword layer.
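
    A toy rendition of that callback style, with a blocking read, a non-blocking write that fires a completion callback, and state-change monitors; the names and semantics here are invented for illustration and are not the Keck library's API.

    ```python
    import threading
    import time

    class KeywordLayer:
        """Minimal keyword layer sketch with monitors and completion callbacks."""

        def __init__(self):
            self._values, self._watchers = {}, {}
            self._lock = threading.Lock()

        def read(self, key):
            """Blocking read of the current keyword value."""
            with self._lock:
                return self._values.get(key)

        def write(self, key, value, callback=None):
            """Non-blocking write; callback is invoked when the write completes."""
            def work():
                time.sleep(0.05)                  # stand-in for hardware motion
                with self._lock:
                    self._values[key] = value
                    watchers = list(self._watchers.get(key, ()))
                for watcher in watchers:          # notify state-change monitors
                    watcher(key, value)
                if callback is not None:
                    callback(key)                 # completion callback
            threading.Thread(target=work).start()

        def monitor(self, key, callback):
            """Register a callback fired whenever the keyword changes state."""
            with self._lock:
                self._watchers.setdefault(key, []).append(callback)

    kw = KeywordLayer()
    kw.monitor("FILTER", lambda k, v: print(f"{k} changed to {v}"))
    kw.write("FILTER", "B", callback=lambda k: print(f"{k} write complete"))
    time.sleep(0.2)                               # let the worker thread finish
    ```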

  12. Design of dual band FSS by using quadruple L-slot technique

    NASA Astrophysics Data System (ADS)

    Fauzi, Noor Azamiah Md; Aziz, Mohamad Zoinol Abidin Abd.; Said, Maizatul Alice Meor; Othman, Mohd Azlishah; Ahmad, Badrul Hisham; Malek, Mohd Fareq Abd

    2015-05-01

    This paper presents a new design of a dual band frequency selective surface (FSS) for band-pass microwave transmission applications. FSS can be used on energy-saving glass to improve the transmission of wireless communication signals through the glass. Microwave signals are attenuated when propagating through different structures such as buildings, so some wireless communication systems cannot operate at optimum performance. The aim of this paper is to design, simulate and analyze a new dual band FSS structure for microwave transmission. The design is based on a quadruple L slot combined with a cross slot to produce pass bands at 900 MHz and 2.4 GHz. The vertical pair of inverse L slots provides the pass band at 2.4 GHz, while the horizontal pair of inverse L slots provides the pass band at 900 MHz. The design was simulated and analyzed using Computer Simulation Technology (CST) Microwave Studio (MWS) software, and the transmission (S21) and reflection (S11) characteristics of the dual band FSS were examined. The bandwidth of the first band is 118.91 MHz, covering the frequency range from 833.4 MHz to 952.31 MHz, while the bandwidth of the second band is 358.84 MHz, covering the range from 2.1475 GHz to 2.5063 GHz. The resonance/center frequencies of this design are 900 MHz with a 26.902 dB return loss and 2.37 GHz with a 28.506 dB return loss. This FSS is suitable as a microwave filter for GSM900 and WLAN 2.4 GHz applications.

  13. Interactive Supercomputing’s Star-P Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edelman, Alan; Husbands, Parry; Leibman, Steve

    2006-09-19

    The thesis of this extended abstract is simple. High productivity comes from high level infrastructures. To measure this, we introduce a methodology that goes beyond the tradition of timing software in serial and tuned parallel modes. We perform a classroom productivity study involving 29 students who have written a homework exercise in a low level language (MPI message passing) and a high level language (Star-P with MATLAB client). Our conclusions indicate what perhaps should be of little surprise: (1) the high level language is always far easier on the students than the low level language. (2) The early versions of the high level language perform inadequately compared to the tuned low level language, but later versions substantially catch up. Asymptotically, the analogy must hold that message passing is to high level language parallel programming as assembler is to high level environments such as MATLAB, Mathematica, Maple, or even Python. We follow the Kepner method that correctly realizes that traditional speedup numbers without some discussion of the human cost of reaching these numbers can fail to reflect the true human productivity cost of high performance computing. Traditional data compares low level message passing with serial computation. With the benefit of a high level language system in place, in our case Star-P running with MATLAB client, and with the benefit of a large data pool: 29 students, each running the same code ten times on three evolutions of the same platform, we can methodically demonstrate the productivity gains. To date we are not aware of any high level system as extensive and interoperable as Star-P, nor are we aware of an experiment of this kind performed with this volume of data.

  14. FoCa: a modular treatment planning system for proton radiotherapy with research and educational purposes

    NASA Astrophysics Data System (ADS)

    Sánchez-Parcerisa, D.; Kondrla, M.; Shaindlin, A.; Carabe, A.

    2014-12-01

    FoCa is an in-house modular treatment planning system, developed entirely in MATLAB, which includes forward dose calculation of proton radiotherapy plans in both active and passive modalities as well as a generic optimization suite for inverse treatment planning. The software has a dual education and research purpose. From the educational point of view, it can be an invaluable teaching tool for educating medical physicists, showing the insights of a treatment planning system from a well-known and widely accessible software platform. From the research point of view, its current and potential uses range from the fast calculation of any physical, radiobiological or clinical quantity in a patient CT geometry, to the development of new treatment modalities not yet available in commercial treatment planning systems. The physical models in FoCa were compared with the commissioning data from our institution and show an excellent agreement in depth dose distributions and longitudinal and transversal fluence profiles for both passive scattering and active scanning modalities. 3D dose distributions in phantom and patient geometries were compared with a commercial treatment planning system, yielding a gamma-index pass rate of above 94% (using FoCa’s most accurate algorithm) for all cases considered. Finally, the inverse treatment planning suite was used to produce the first prototype of intensity-modulated, passive-scattered proton therapy, using 13 passive scattering proton fields and multi-leaf modulation to produce a concave dose distribution on a cylindrical solid water phantom without any field-specific compensator.

  15. Test Driven Development of Scientific Models

    NASA Technical Reports Server (NTRS)

    Clune, Thomas L.

    2014-01-01

    Test-Driven Development (TDD), a software development process that promises many advantages for developer productivity and software reliability, has become widely accepted among professional software engineers. As the name suggests, TDD practitioners alternate between writing short automated tests and producing code that passes those tests. Although this overly simplified description will undoubtedly sound prohibitively burdensome to many uninitiated developers, the advent of powerful unit-testing frameworks greatly reduces the effort required to produce and routinely execute suites of tests. By testimony, many developers find TDD to be addictive after only a few days of exposure, and find it unthinkable to return to previous practices. After a brief overview of the TDD process and my experience in applying the methodology for development activities at Goddard, I will delve more deeply into some of the challenges that are posed by numerical and scientific software as well as tools and implementation approaches that should address those challenges.
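
    For concreteness, one turn of the red-green cycle in Python's unittest: the test is written first and fails, then the minimal code is added to make it pass. The Magnus-formula example is the editor's own, not taken from the presentation.

    ```python
    import math
    import unittest

    # Step 1: write a short automated test for behavior that does not exist yet.
    class TestSaturationVaporPressure(unittest.TestCase):
        def test_magnus_at_zero_celsius(self):
            # The Magnus formula should give ~6.11 hPa at 0 degrees C.
            self.assertAlmostEqual(svp_hpa(0.0), 6.1094, places=3)

    # Step 2: write the minimal code that makes the test pass.
    def svp_hpa(t_celsius):
        return 6.1094 * math.exp(17.625 * t_celsius / (t_celsius + 243.04))

    if __name__ == "__main__":
        unittest.main()
    ```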

  16. Mercury: Reusable software application for Metadata Management, Data Discovery and Access

    NASA Astrophysics Data System (ADS)

    Devarakonda, Ranjeet; Palanisamy, Giri; Green, James; Wilson, Bruce E.

    2009-12-01

    Mercury is a federated metadata harvesting, data discovery and access tool based on both open source packages and custom developed software. It was originally developed for NASA, and the Mercury development consortium now includes funding from NASA, USGS, and DOE. Mercury is itself a reusable toolset for metadata, with current use in 12 different projects. Mercury also supports the reuse of metadata by enabling searching across a range of metadata specifications and standards including XML, Z39.50, FGDC, Dublin-Core, Darwin-Core, EML, and ISO-19115. Mercury provides a single portal to information contained in distributed data management systems. It collects metadata and key data from contributing project servers distributed around the world and builds a centralized index. The Mercury search interfaces then allow the users to perform simple, fielded, spatial and temporal searches across these metadata sources. One of the major goals of the recent redesign of Mercury was to improve the software reusability across the projects which currently fund the continuing development of Mercury. These projects span a range of land, atmosphere, and ocean ecological communities and have a number of common needs for metadata searches, but they also have a number of needs specific to one or a few projects. To balance these common and project-specific needs, Mercury's architecture includes three major reusable components: a harvester engine, an indexing system and a user interface component. The harvester engine is responsible for harvesting metadata records from various distributed servers around the USA and around the world. The harvester software was packaged in such a way that all the Mercury projects will use the same harvester scripts but each project will be driven by a set of configuration files. The harvested files are then passed to the indexing system, where each of the fields in these structured metadata records is indexed properly, so that the query engine can perform simple, keyword, spatial and temporal searches across these metadata sources. The search user interface software has two API categories: a common core API which is used by all the Mercury user interfaces for querying the index, and a customized API for project-specific user interfaces. For our work in producing a reusable, portable, robust, feature-rich application, Mercury received a 2008 NASA Earth Science Data Systems Software Reuse Working Group Peer-Recognition Software Reuse Award. The new Mercury system is based on a Service Oriented Architecture and effectively reuses components for various services such as Thesaurus Service, Gazetteer Web Service and UDDI Directory Services. The software also provides various search services including RSS, Geo-RSS, OpenSearch, Web Services and Portlets, an integrated shopping cart to order datasets from various data centers (ORNL DAAC, NSIDC), and integrated visualization tools. Other features include filtering and dynamic sorting of search results, book-markable search results, and the ability to save, retrieve, and modify search criteria.

  17. Ionospheric Convection and Structure Using Ground-Based Digital Ionosondes

    DTIC Science & Technology

    1988-02-01

    M(3000)F2 were provided by the autoscaling software ARTIST which is part of the Digisonde /6, 7/. The virtual height traces scaled from the ionograms...using ARTIST were passed to the true-height analysis program POLAN /8/, to provide reliable estimates of hmF2. DISCREPANCIES BETWEEN POLAN AND

  18. Lean and Efficient Software: Whole Program Optimization of Executables

    DTIC Science & Technology

    2016-12-31

    format string “baked in”? (If multiple printf calls pass the same format string, they could share the same new function.) This leads to the...format string becomes baked into the target function. Moving down: Moving from the first row to the second makes any potential user control of the

  19. Supporting Students in C++ Programming Courses with Automatic Program Style Assessment

    ERIC Educational Resources Information Center

    Ala-Mutka, Kirsti; Uimonen, Toni; Jarvinen, Hannu-Matti

    2004-01-01

    Professional programmers need common coding conventions to assure co-operation and a degree of quality of the software. Novice programmers, however, easily forget issues of programming style in their programming coursework. In particular with large classes, students may pass several courses without learning elements of programming style. This is…

  20. How to Get from Cupertino to Boca Raton.

    ERIC Educational Resources Information Center

    Troxel, Duane K.; Chiavacci, Jim

    1985-01-01

    Describes seven methods to transfer data from Apple computer disks to IBM computer disks and vice versa: print out data and retype; use a commercial software package, optical-character reader, homemade cable, or modem to pass or transfer data directly; pay commercial data-transfer service; or store files on mainframe and download. (MBR)

  1. Program Analyzes Radar Altimeter Data

    NASA Technical Reports Server (NTRS)

    Vandemark, Doug; Hancock, David; Tran, Ngan

    2004-01-01

    A computer program has been written to perform several analyses of radar altimeter data. The program was designed to improve on previous methods of analysis of altimeter engineering data by (1) facilitating and accelerating the analysis of large amounts of data in a more direct manner and (2) improving the ability to estimate performance of radar-altimeter instrumentation and provide data corrections. The data in question are openly available to the international scientific community and can be downloaded from anonymous file-transfer-protocol (FTP) locations that are accessible via links from altimetry Web sites. The software estimates noise in range measurements, estimates corrections for electromagnetic bias, and performs statistical analyses on various parameters for comparison of different altimeters. Whereas prior techniques used to perform similar analyses of altimeter range noise require comparison of data from repetitions of satellite ground tracks, the present software uses a high-pass filtering technique to obtain similar results from single satellite passes. Elimination of the requirement for repeat-track analysis facilitates the analysis of large amounts of satellite data to assess subtle variations in range noise.
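
    A minimal sketch of the single-pass idea, assuming a Butterworth high-pass filter and illustrative cutoff and sampling values (the report does not specify the actual filter): remove the slowly varying geophysical signal from the along-track range series, then treat the standard deviation of the residual as the range-noise estimate.

    ```python
    # Single-pass range-noise estimate: high-pass filter the along-track range
    # series so slowly varying signal (orbit, sea surface) is removed, then take
    # the standard deviation of what remains. Cutoff/sampling are assumptions.
    import numpy as np
    from scipy import signal

    def range_noise_estimate(ranges_m: np.ndarray, fs_hz: float = 1.0,
                             cutoff_hz: float = 0.1) -> float:
        """Return a 1-sigma range-noise estimate (meters) from one pass."""
        b, a = signal.butter(4, cutoff_hz, btype="highpass", fs=fs_hz)
        residual = signal.filtfilt(b, a, ranges_m)  # zero-phase high-pass
        return float(np.std(residual))
    ```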

  2. Digital system for structural dynamics simulation

    NASA Technical Reports Server (NTRS)

    Krauter, A. I.; Lagace, L. J.; Wojnar, M. K.; Glor, C.

    1982-01-01

    State-of-the-art digital hardware and software were developed for the simulation of complex structural dynamic interactions, such as those which occur in rotating structures (engine systems). They were incorporated in a system designed to use an array of processors in which the computation for each physical subelement or functional subsystem would be assigned to a single specific processor in the simulator. These node processors are microprogrammed bit-slice microcomputers which function autonomously and can communicate with each other and a central control minicomputer over parallel digital lines. Inter-processor nearest-neighbor communications busses pass the constants which represent physical constraints and boundary conditions. The node processors are connected to the six nearest neighbor node processors to simulate the actual physical interface of real substructures. Computer-generated finite element mesh and force models can be developed with the aid of the central control minicomputer. The control computer also oversees the animation of a graphics display system and disk-based mass storage, along with the individual processing elements.

  3. Pc-based car license plate reading

    NASA Astrophysics Data System (ADS)

    Tanabe, Katsuyoshi; Marubayashi, Eisaku; Kawashima, Harumi; Nakanishi, Tadashi; Shio, Akio

    1994-03-01

    A PC-based car license plate recognition system has been developed. The system recognizes Chinese characters and Japanese phonetic hiragana characters as well as six digits on Japanese license plates. The system consists of a CCD camera, vehicle sensors, a strobe unit, a monitoring center, and an i486-based PC. The PC includes in its extension slots: a vehicle detector board, a strobe emitter board, and an image grabber board. When a passing vehicle is detected by the vehicle sensors, the strobe emits a pulse of light. The light pulse is synchronized with the time the vehicle image is frozen on an image grabber board. The recognition process is composed of three steps: image thresholding, character region extraction, and matching-based character recognition. The recognition software can handle obscured characters. Experimental results for hundreds of outdoor images showed high recognition performance within relatively short processing times. The results confirmed that the system is applicable to a wide variety of applications such as automatic vehicle identification and travel time measurement.
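
    The three recognition stages can be sketched generically. This is a reconstruction for illustration only, not the original system's code: the thresholding rule, the template format, and the function names are all assumptions.

    ```python
    # Generic three-stage plate reader: (1) threshold, (2) extract character
    # regions as connected components, (3) match each region against binary
    # templates. Real systems add normalization and handle obscured characters.
    import numpy as np
    from scipy import ndimage

    def recognize(gray: np.ndarray, templates: dict[str, np.ndarray]) -> list[str]:
        # 1. Image thresholding: assume dark characters on a brighter plate.
        binary = gray < gray.mean()
        # 2. Character region extraction: connected components, left to right.
        labels, _ = ndimage.label(binary)
        boxes = sorted(ndimage.find_objects(labels), key=lambda s: s[1].start)
        # 3. Matching-based recognition: template with the fewest differing pixels.
        out = []
        for box in boxes:
            region = binary[box]
            best = min(templates, key=lambda ch: (np.resize(
                region, templates[ch].shape) ^ templates[ch]).sum())
            out.append(best)
        return out
    ```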

  4. Quantification of the first-order high-pass filter's influence on the automatic measurements of the electrocardiogram.

    PubMed

    Isaksen, Jonas; Leber, Remo; Schmid, Ramun; Schmid, Hans-Jakob; Generali, Gianluca; Abächerli, Roger

    2017-02-01

    The first-order high-pass filter (AC coupling) has previously been shown to affect the ECG for higher cut-off frequencies. We seek to find a systematic deviation in computer measurements of the electrocardiogram when AC coupling with a 0.05 Hz first-order high-pass filter is used. The standard 12-lead electrocardiograms from 1248 patients and the automated measurements of their DC and AC coupled versions were used. We expect a large unipolar QRS-complex to produce a deviation in the opposite direction in the ST-segment. We found a strong correlation between the QRS integral and the offset throughout the ST-segment. The coefficient for J amplitude deviation was found to be -0.277 µV/(µV⋅s). Potentially dangerous alterations to the diagnostically important ST-segment were found. Medical professionals and software developers for electrocardiogram interpretation programs should be aware of such high-pass filter effects, since they could be misinterpreted as pathophysiology or some pathophysiology could be masked by these effects. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
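
    The effect under study can be reproduced in a few lines. A small sketch, assuming a first-order Butterworth high-pass at 0.05 Hz as the AC-coupling model and a crude synthetic ECG; the sampling rate and amplitudes are illustrative.

    ```python
    # Apply a first-order 0.05 Hz high-pass (AC coupling) to a spike train that
    # stands in for QRS complexes, then inspect the baseline offset it leaves
    # behind -- the mechanism that can distort the ST-segment.
    import numpy as np
    from scipy import signal

    fs = 500.0                                      # assumed sampling rate, Hz
    t = np.arange(0, 10, 1 / fs)
    ecg = np.where((t % 1.0) < 0.02, 1000.0, 0.0)   # crude 1 Hz "QRS" spikes, in µV

    b, a = signal.butter(1, 0.05, btype="highpass", fs=fs)  # first-order, 0.05 Hz
    ac_coupled = signal.lfilter(b, a, ecg)

    # The baseline after each spike undershoots in proportion to the QRS integral.
    print("worst baseline offset (µV):", ac_coupled[(t % 1.0) > 0.05].min())
    ```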

  5. Feasibility of a Modified E-PASS and POSSUM System for Postoperative Risk Assessment in Patients with Spinal Disease.

    PubMed

    Chun, Dong Hyun; Kim, Do Young; Choi, Sun Kyu; Shin, Dong Ah; Ha, Yoon; Kim, Keung Nyun; Yoon, Do Heum; Yi, Seong

    2018-04-01

    This retrospective case control study aimed to evaluate the feasibility of using the Estimation of Physiological Ability and Surgical Stress (E-PASS) and Physiological and Operative Severity Score for the enumeration of Mortality and Morbidity (POSSUM) systems in patients undergoing spinal surgical procedures. Degenerative spine disease has increased in incidence in aging societies, as has the number of older adult patients undergoing spinal surgery. Many older adults are at high surgical risk because of comorbidity and poor general health. We retrospectively reviewed 217 patients who had undergone spinal surgery at a single tertiary care center. We investigated complications within 1 month after surgery. Criteria for both skin incision in E-PASS and operation magnitude in the POSSUM system were modified to fit spine surgery. We calculated the E-PASS and POSSUM scores for enrolled patients, and investigated the relationship between postoperative complications and both surgical risk scoring systems. To reinforce the predictive ability of the E-PASS system, we adjusted equations and developed modified E-PASS systems. The overall complication rate for spinal surgery was 22.6%. Forty-nine patients experienced 58 postoperative complications. Nineteen major complications, including hematoma, deep infection, pleural effusion, progression of weakness, pulmonary edema, esophageal injury, myocardial infarction, pneumonia, reoperation, renal failure, sepsis, and death, occurred in 17 patients. The area under the receiver operating characteristic curve (AUC) for predicted postoperative complications after spine surgery was 0.588 for E-PASS and 0.721 for POSSUM. For predicted major postoperative complications, the AUC increased to 0.619 for E-PASS and 0.842 for POSSUM. The AUC of the E-PASS system increased from 0.588 to 0.694 with the modified E-PASS equation. The POSSUM system may be more useful than the E-PASS system for estimating postoperative surgical risk in patients undergoing spine surgery. The preoperative risk scores of E-PASS and POSSUM can be useful for predicting major postoperative complications. To enhance the predictability of the scoring systems, the use of modified equations based on spine surgery-specific factors may help ensure surgical outcomes and patient safety. Copyright © 2017. Published by Elsevier Inc.

  6. Parallel computing on Unix workstation arrays

    NASA Astrophysics Data System (ADS)

    Reale, F.; Bocchino, F.; Sciortino, S.

    1994-12-01

    We have tested arrays of general-purpose Unix workstations used as MIMD systems for massive parallel computations. In particular we have solved numerically a demanding test problem with a 2D hydrodynamic code, generally developed to study astrophysical flows, by executing it on arrays either of DECstations 5000/200 on an Ethernet LAN, or of DECstations 3000/400, equipped with powerful Alpha processors, on an FDDI LAN. The code is appropriate for data-domain decomposition, and we have used a library for parallelization previously developed in our Institute, and easily extended to work on Unix workstation arrays by using the PVM software toolset. We have compared the parallel efficiencies obtained on arrays of several processors to those obtained on a dedicated MIMD parallel system, namely a Meiko Computing Surface (CS-1), equipped with Intel i860 processors. We discuss the feasibility of using non-dedicated parallel systems and conclude that the convenience depends essentially on the size of the computational domain as compared to the relative processor power and network bandwidth. We point out that for future perspectives a parallel development of processor and network technology is important, and that the software still offers great opportunities for improvement, especially in terms of latency times in the message-passing protocols. In conditions of significant gain in terms of speedup, such workstation arrays represent a cost-effective approach to massive parallel computations.

  7. Multichannel Networked Phasemeter Readout and Analysis

    NASA Technical Reports Server (NTRS)

    Edmonds, Karina

    2008-01-01

    Netmeter software reads a data stream from up to 250 networked phasemeters, synchronizes the data, saves the reduced data to disk (after applying a low-pass filter), and provides a Web server interface for remote control. Unlike older phasemeter software that requires a special, real-time operating system, this program can run on any general-purpose computer. It needs only about five percent of the CPU (central processing unit) to process 20 channels, and it adds built-in data logging and network-based GUIs (graphical user interfaces) implemented in Scalable Vector Graphics (SVG). Netmeter runs on Linux and Windows. It displays the instantaneous displacements measured by several phasemeters at a user-selectable rate, up to 1 kHz. The program monitors the measure and reference channel frequencies. For ease of use, levels of status in Netmeter are color coded: green for normal operation, yellow for network errors, and red for optical misalignment problems. Netmeter includes user-selectable filters up to 4 k samples, and user-selectable averaging windows (after filtering). Before filtering, the program saves raw data to disk using a burst-write technique.

  8. Development of polarization-controlled multi-pass Thomson scattering system in the GAMMA 10 tandem mirror

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoshikawa, M.; Morimoto, M.; Shima, Y.

    2012-10-15

    In the GAMMA 10 tandem mirror, the typical electron density is comparable to that of the peripheral plasma of torus-type fusion devices. Therefore, an effective method to increase Thomson scattering (TS) signals is required in order to improve signal quality. In GAMMA 10, the yttrium-aluminum-garnet (YAG)-TS system comprises a laser, incident optics, light collection optics, signal detection electronics, and a data recording system. We have been developing a multi-pass TS method for a polarization-based system based on the GAMMA 10 YAG TS. To evaluate the effectiveness of the polarization-based configuration, the multi-pass system was installed in the GAMMA 10 YAG-TS system, which is capable of double-pass scattering. We carried out a Rayleigh scattering experiment and applied this double-pass scattering system to the GAMMA 10 plasma. The integrated scattering signal was made about twice as large by the double-pass system.

  9. SU-F-T-308: Mobius FX Evaluation and Comparison Against a Commercial 4D Detector Array for VMAT Plan QA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vazquez Quino, L; Huerta Hernandez, C; Morrow, A

    2016-06-15

    Purpose: To evaluate the use of MobiusFX as a pre-treatment verification IMRT QA tool and compare it with a commercial 4D detector array for VMAT plan QA. Methods: 15 VMAT plan QAs for different treatment sites were delivered and measured by traditional means with the 4D detector array ArcCheck (Sun Nuclear Corporation), and at the same time linac treatment logs (Varian Dynalog files) from the same deliveries were analyzed with the MobiusFX software (Mobius Medical Systems). VMAT plan QAs created in the Eclipse treatment planning system (Varian) for a TrueBeam linac (Varian) were delivered and analyzed with the gamma analysis routine of the SNPA software (Sun Nuclear Corporation). Results: Comparable results in terms of the gamma analysis, with a 99.06% average gamma passing rate at the 3%/3 mm criteria, were observed in the comparison among MobiusFX, ArcCheck measurements, and the dose calculated by the treatment planning system. When going to a stricter criterion (1%/1 mm), larger discrepancies are observed in different regions of the measurements, with an average gamma of 66.24% between MobiusFX and ArcCheck. Conclusion: This work indicates the potential for using MobiusFX as a routine pre-treatment patient-specific IMRT quality assurance method, and its advantages as a phantom-less method which reduces the time for IMRT QA measurement. MobiusFX is capable of producing results similar to those of traditional methods used for patient-specific pre-treatment verification VMAT QA. Even though the gamma results compared to the TPS are similar, the analysis of both methods shows that the errors being identified by each method are found in different regions. Traditional methods like ArcCheck are sensitive to setup errors and dose difference errors coming from the linac output. On the other hand, linac log file analysis records different errors in the VMAT QA, associated with the MLCs and gantry motion, that cannot be detected by traditional methods.
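
    For readers unfamiliar with the metric quoted above, a simplified one-dimensional sketch of a global 3%/3 mm gamma passing-rate computation follows; clinical tools such as those in this study operate on 2-D/3-D dose grids, so this is only to make the metric concrete.

    ```python
    # Simplified global gamma analysis on 1-D dose profiles: a reference point
    # passes if some measured point is close in both dose (3% of max) and
    # distance (3 mm). The passing rate is the fraction of points with gamma <= 1.
    import numpy as np

    def gamma_pass_rate(ref, meas, spacing_mm, dd=0.03, dta_mm=3.0):
        x = np.arange(len(ref)) * spacing_mm
        dose_norm = dd * ref.max()            # global dose-difference criterion
        gammas = []
        for i, d_ref in enumerate(ref):
            term = ((meas - d_ref) / dose_norm) ** 2 + ((x - x[i]) / dta_mm) ** 2
            gammas.append(np.sqrt(term.min()))
        return float(np.mean(np.array(gammas) <= 1.0))
    ```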

  10. Direct coupling of pulsed radio frequency and pulsed high power in novel pulsed power system for plasma immersion ion implantation.

    PubMed

    Gong, Chunzhi; Tian, Xiubo; Yang, Shiqin; Fu, Ricky K Y; Chu, Paul K

    2008-04-01

    A novel power supply system that directly couples pulsed high-voltage (HV) pulses and pulsed 13.56 MHz radio frequency (rf) power has been developed for plasma processes. In this system, the sample holder is connected to both the rf generator and the HV modulator. The coupling circuit in the hybrid system is composed of individual matching units, low-pass filters, and voltage clamping units. This ensures the safe operation of the rf system even when the HV is on. The PSPICE software is utilized to optimize the design of the circuits. The system can be operated in two modes. The pulsed rf discharge may serve as either the seed plasma source for glow discharge or the high-density plasma source for plasma immersion ion implantation (PIII). The pulsed high-voltage glow discharge is induced when a rf pulse with a short duration or a larger time interval between the rf and HV pulses is used. Conventional PIII can also be achieved. Experiments conducted on the new system confirm steady and safe operation.

  11. MOBILE GAMMA IRRADIATORS FOR FRUIT PRODUCE (Engineering Materials)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    1963-10-31

    Mobile irradiators used for the radiopasteurization of strawberries, grapes, peaches, tomatoes, and lemons are described. The irradiators are mounted on trailers and each irradiator, including the trailer, weighs 70 to 80 tons. Radiation doses range from 100,000 to 200,000 rads. Minimum production is 500 lb of fruit per hour. Drawings are included for four types of irradiators: the single-slab two-pass, double-slab one-pass, single-slab four-pass, and line-source rotary. In the single-slab two-pass system, the packages make two passes in front of the source. The length of the packages is parallel to the direction of travel. The packages are irradiated on each side. This system is light in weight, has low capital cost, and is simple to fabricate. The double-slab one-pass system is the same as the above except the source strength is doubled and irradiation time is cut in half. The single-slab four-pass system uses the same arrangement as the single-slab two-pass system, except the packages make two passes on each side of the source. The rotary system combines a linear and rotary motion to provide high dosage. It uses a small Co-60 source but costs more than a single-slab two-pass system. (F.E.S.)

  12. Multidisciplinary HIS DICOM interfaces at the Department of Veterans Affairs

    NASA Astrophysics Data System (ADS)

    Kuzmak, Peter M.; Dayhoff, Ruth E.

    2000-05-01

    The U.S. Department of Veterans Affairs (VA) is using the Digital Imaging and Communications in Medicine (DICOM) standard to integrate image data objects from multiple systems for use across the healthcare enterprise. DICOM uses a structured representation of image data and a communication mechanism that allows the VA to easily acquire images from multiple sources and store them directly into the online patient record. The VA can obtain both radiology and non-radiology images using DICOM, and can display them on low-cost clinician's color workstations throughout the medical center. High-resolution gray-scale diagnostic quality multi-monitor workstations with specialized viewing software can be used for reading radiology images. The VA's DICOM capabilities can interface six different commercial Picture Archiving and Communication Systems (PACS) and over twenty different image acquisition modalities. The VA is advancing its use of DICOM beyond radiology. New color imaging applications for Gastrointestinal Endoscopy and Ophthalmology using DICOM are under development. These are the first DICOM offerings for the vendors, who are planning to support the recently passed DICOM Visible Light and Structured Reporting service classes. Implementing these in VistA is a challenge because of the different workflow and software support for these disciplines within the VA HIS environment.

  13. SEXTANT - Station Explorer for X-ray Timing and Navigation Technology

    NASA Technical Reports Server (NTRS)

    Mitchell, Jason W.; Hasouneh, Munther Abdel Hamid; Winternitz, Luke M. B.; Valdez, Jennifer E.; Price, Samuel R.; Semper, Sean R.; Yu, Wayne H.; Arzoumanian, Zaven; Ray, Paul S.; Wood, Kent S.

    2015-01-01

    The Station Explorer for X-ray Timing and Navigation Technology (SEXTANT) is a technology demonstration enhancement to the Neutron-star Interior Composition Explorer (NICER) mission, which is scheduled to launch in late 2016 and will be hosted as an externally attached payload on the International Space Station (ISS) via the ExPRESS Logistics Carrier (ELC). During NICER's 18-month baseline science mission to understand ultra-dense matter through observations of neutron stars in the soft X-ray band, SEXTANT will, for the first time, demonstrate real-time, on-board X-ray pulsar navigation, which is a significant milestone in the quest to establish a GPS-like navigation capability that will be available throughout our Solar System and beyond. Along with NICER, SEXTANT has proceeded through Phase B, Mission Definition, and received numerous refinements in concept of operations, algorithms, flight software, ground system, and ground test capability. NICER/SEXTANT's Phase B work culminated in NASA's confirmation of NICER to Phase C, Design and Development, in March 2014. Recently, NICER/SEXTANT successfully passed its Critical Design Review and SEXTANT received continuation approval in September 2014. In this paper, we describe the X-ray pulsar navigation concept, provide a brief history of previous work, and then summarize the SEXTANT technology demonstration objective, hardware and software components, and development to date.

  14. Impact of some low-cost interventions on students' performance in a Nigerian medical school.

    PubMed

    Anyaehie, U B; Okeke, T; Nwagha, U; Orizu, I; Iyare, E; Dim, C; Okafor, C

    2014-01-01

    Students' poor performance in physiology examinations has been worrisome to the university community. The reported preference for peer-tutoring over didactic lectures at the University of Nigeria Medical School has not been investigated. The aim of this work is to design/implement low-cost interventions to improve the teaching and learning of physiology. This is a postintervention retrospective review of medical students' performance in the 2nd Bachelor of Medicine and Bachelor of Surgery physiology examinations. Data were collected and analyzed by descriptive and inferential statistics using the MedCalc Statistical software (Turkey). The odds ratio (OR) was used to determine the chances of passing before and after the intervention. The level of significance was set at P < 0.05. A total of 2152 students sat for the professional examination over the study period, and 1485 students passed the examination at first attempt, giving an overall pass rate of 69%. The pass rate from 2008, when our interventions started, was significantly higher than the pass rate before this reform (OR: 0.53; 95% confidence interval: 0.43-0.64; P < 0.0001). The results support the engagement of teachers with strong translational interests and clinicians to augment existing faculty in basic sciences, innovative alternatives to passive lecture formats, and student involvement in program evaluation.
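
    The odds-ratio statistic used in the study is straightforward to compute; a small sketch with a Woolf-type confidence interval follows. The 2x2 counts in the example call are invented, not the study's data.

    ```python
    # Odds ratio for passing in group A versus group B, with a 95% Woolf CI.
    import math

    def odds_ratio(pass_a, fail_a, pass_b, fail_b):
        or_ = (pass_a / fail_a) / (pass_b / fail_b)
        se = math.sqrt(1/pass_a + 1/fail_a + 1/pass_b + 1/fail_b)  # SE of log-OR
        lo, hi = or_ * math.exp(-1.96 * se), or_ * math.exp(1.96 * se)
        return or_, (lo, hi)

    print(odds_ratio(800, 200, 685, 467))  # hypothetical pre/post counts
    ```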

  15. X-Band Acquisition Aid Software

    NASA Technical Reports Server (NTRS)

    Britcliffe, Michael J.; Strain, Martha M.; Wert, Michael

    2011-01-01

    The X-band Acquisition Aid (AAP) software is a low-cost acquisition aid for the Deep Space Network (DSN) antennas, and is used while acquiring a spacecraft shortly after it has launched. When enabled, the acquisition aid provides corrections to the antenna-predicted trajectory of the spacecraft to compensate for the variations that occur during the actual launch. The AAP software also provides the corrections to the antenna-predicted trajectory to the navigation team that uses the corrections to refine their model of the spacecraft in order to produce improved antenna-predicted trajectories for each spacecraft that passes over each complex. The software provides an automated Acquisition Aid receiver calibration, and provides graphical displays to the operator and remote viewers via an Ethernet connection. It has a Web server, and the remote workstations use the Firefox browser to view the displays. At any given time, only one operator can control any particular display in order to avoid conflicting commands from more than one control point. The configuration and control is accomplished solely via the graphical displays. The operator does not have to remember any commands. Only a few configuration parameters need to be changed, and can be saved to the appropriate spacecraft-dependent configuration file on the AAP's hard disk. AAP automates the calibration sequence by first commanding the antenna to the correct position, starting the receiver calibration sequence, and then providing the operator with the option of accepting or rejecting the new calibration parameters. If accepted, the new parameters are stored in the appropriate spacecraft-dependent configuration file. The calibration can be performed on the Sun, greatly expanding the window of opportunity for calibration. The spacecraft traditionally used for calibration is in view typically twice per day, and only for about ten minutes each pass.

  16. Energy dissipation in fragmented geomaterials associated with impacting oscillators

    NASA Astrophysics Data System (ADS)

    Khudyakov, Maxim; Pasternak, Elena; Dyskin, Arcady

    2016-04-01

    In wave propagation through fragmented geomaterials forced by periodic loadings, the elements (fragments) strike against each other when passing through the neutral position (the position with zero mutual rotation), quickly damping the oscillations. Essentially the impacts act as shock absorbers, albeit localised at the neutral points. In order to analyse the vibrations of and wave propagation in such structures, a differential equation of a forced harmonic oscillator was investigated in which, each time the system passes through the neutral point, the velocity is reduced by multiplying it by the restitution coefficient which characterises the impact of the fragments. In forced vibrations the impact times depend on both the forced oscillations and the restitution coefficient and form an irregular sequence. Numerical solution of the differential equation was performed using Mathematica software. Along with vibration diagrams, the dependence of the energy dissipation on the ratio of the forcing frequency to the natural frequency was obtained. For small positive values of the restitution coefficient (less than 0.5), asymmetric oscillations were found, and the phase of the forced vibrations determined the direction of the asymmetry. Also, at some values of the forcing frequency and the restitution coefficient, chaotic behaviour was found.
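
    A numerical sketch of this model, written in Python with SciPy rather than the authors' Mathematica: a forced oscillator whose velocity is multiplied by the restitution coefficient e at each zero crossing of the displacement. All parameter values are illustrative.

    ```python
    # x'' + x = cos(omega*t); at each zero crossing of x the velocity is damped
    # by the restitution coefficient e, emulating an impact between fragments.
    import numpy as np
    from scipy.integrate import solve_ivp

    def simulate(e=0.4, omega=1.3, t_end=60.0):
        def rhs(t, y):
            return [y[1], -y[0] + np.cos(omega * t)]

        def neutral(t, y):            # event: displacement passes through zero
            return y[0]
        neutral.terminal = True
        neutral.direction = 0

        t, y, ts, xs = 0.0, [1.0, 0.0], [], []
        while t < t_end:
            sol = solve_ivp(rhs, (t, t_end), y, events=neutral, max_step=0.05)
            ts.append(sol.t); xs.append(sol.y[0])
            if sol.status != 1:       # reached t_end without another impact
                break
            t = sol.t[-1]
            v = e * sol.y[1, -1]      # impact: damp the velocity
            if abs(v) < 1e-9:         # oscillation effectively killed
                break
            y = [np.copysign(1e-9, v), v]  # nudge off zero to avoid re-triggering
        return np.concatenate(ts), np.concatenate(xs)
    ```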

  17. High-pressure crystallography of periodic and aperiodic crystals

    PubMed Central

    Hejny, Clivia; Minkov, Vasily S.

    2015-01-01

    More than five decades have passed since the first single-crystal X-ray diffraction experiments at high pressure were performed. These studies were applied historically to geochemical processes occurring in the Earth and other planets, but high-pressure crystallography has spread across different fields of science including chemistry, physics, biology, materials science and pharmacy. With each passing year, high-pressure studies have become more precise and comprehensive because of the development of instrumentation and software, and the systems investigated have also become more complicated. Starting with crystals of simple minerals and inorganic compounds, the interests of researchers have shifted to complicated metal–organic frameworks, aperiodic crystals and quasicrystals, molecular crystals, and even proteins and viruses. Inspired by contributions to the microsymposium ‘High-Pressure Crystallography of Periodic and Aperiodic Crystals’ presented at the 23rd IUCr Congress and General Assembly, the authors have tried to summarize certain recent results of single-crystal studies of molecular and aperiodic structures under high pressure. While the selected contributions do not cover the whole spectrum of high-pressure research, they demonstrate the broad diversity of novel and fascinating results and may awaken the reader’s interest in this topic. PMID:25866659

  18. Analysis of Power Generating Speed Bumps Made of Concrete Foam Composite

    NASA Astrophysics Data System (ADS)

    Syam, B.; Muttaqin, M.; Hastrino, D.; Sebayang, A.; Basuki, W. S.; Sabri, M.; Abda, S.

    2017-03-01

    This paper discusses the analysis of a speed bump made of concrete foam composite which is used to generate electrical power. Speed bumps are designed to decelerate vehicles before they pass through toll gates, public areas, or for other safety purposes. In Indonesia a speed bump should be designed in accordance with KM Menhub 3 year 1994. In this research, the speed bump was manufactured with dimensions and geometry that comply with the regulation mentioned above. Concrete foam composite speed bumps were used due to their light weight and relative strength in receiving vertical forces from the tyres of vehicles passing over the bumps. The reinforcement materials are processed from empty fruit bunches of oil palm. The materials were subjected to various tests to obtain their physical and mechanical properties. To analyze the structural stability of the speed bumps, some models were analyzed using FEM-based numerical software. It was found that the speed bumps coupled with a polymeric composite bar (3 inches in diameter) significantly reduce the radial stresses. In addition, the speed bumps equipped with a polymeric composite casing or steel casing are also suitable for use as part of the system components in producing electrical energy.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sadayappan, Ponnuswamy

    Exascale computing systems will provide a thousand-fold increase in parallelism and a proportional increase in failure rate relative to today's machines. Systems software for exascale machines must provide the infrastructure to support existing applications while simultaneously enabling efficient execution of new programming models that naturally express dynamic, adaptive, irregular computation; coupled simulations; and massive data analysis in a highly unreliable hardware environment with billions of threads of execution. We propose a new approach to the data and work distribution model provided by system software based on the unifying formalism of an abstract file system. The proposed hierarchical data model provides simple, familiar visibility and access to data structures through the file system hierarchy, while providing fault tolerance through selective redundancy. The hierarchical task model features work queues whose form and organization are represented as file system objects. Data and work are both first class entities. By exposing the relationships between data and work to the runtime system, information is available to optimize execution time and provide fault tolerance. The data distribution scheme provides replication (where desirable and possible) for fault tolerance and efficiency, and it is hierarchical to make it possible to take advantage of locality. The user, tools, and applications, including legacy applications, can interface with the data, work queues, and one another through the abstract file model. This runtime environment will provide multiple interfaces to support traditional Message Passing Interface applications, languages developed under DARPA's High Productivity Computing Systems program, as well as other, experimental programming models. We will validate our runtime system with pilot codes on existing platforms and will use simulation to validate for exascale-class platforms. In this final report, we summarize research results from the work done at the Ohio State University towards the larger goals of the project listed above.

  20. Description of real-time Ada software implementation of a power system monitor for the Space Station Freedom PMAD DC testbed

    NASA Technical Reports Server (NTRS)

    Ludwig, Kimberly; Mackin, Michael; Wright, Theodore

    1991-01-01

    The Ada language software developed to perform the electrical system monitoring functions for the NASA Lewis Research Center's Power Management and Distribution (PMAD) DC testbed is described, and the results of the effort to implement this monitor are presented. The PMAD DC testbed is a reduced-scale prototype of the electrical power system to be used in the Space Station Freedom. The power is controlled by smart switches known as power control components (or switchgear). The power control components are currently coordinated by five Compaq 382/20e computers connected through an 802.4 local area network. One of these computers is designated as the control node, with the other four acting as subsidiary controllers. The subsidiary controllers are connected to the power control components with a Mil-Std-1553 network. An operator interface is supplied by adding a sixth computer. The power system monitor algorithm is comprised of several functions including: periodic data acquisition, data smoothing, system performance analysis, and status reporting. Data is collected from the switchgear sensors every 100 milliseconds, then passed through a 2 Hz digital filter. System performance analysis includes power interruption and overcurrent detection. The reporting mechanism notifies an operator of any abnormalities in the system. Once per second, the system monitor provides data to the control node for further processing, such as state estimation. The system monitor required a hardware time interrupt to activate the data acquisition function. The execution time of the code was optimized using an assembly language routine. The routine allows direct vectoring of the processor to Ada language procedures that perform periodic control activities. The advantages and side effects of this technique are summarized.
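
    A schematic reconstruction of the monitor loop, in Python rather than the original Ada and with assumed thresholds: sample at 100 ms, smooth with a first-order stage approximating the 2 Hz digital filter, flag overcurrents, and report status once per second.

    ```python
    # Monitor loop sketch: 100 ms acquisition, ~2 Hz first-order IIR smoothing,
    # overcurrent detection, and 1 Hz status reporting to the control node.
    import math

    DT = 0.1                                       # 100 ms acquisition period, s
    FC = 2.0                                       # filter cutoff, Hz
    ALPHA = DT / (DT + 1 / (2 * math.pi * FC))     # first-order IIR coefficient
    OVERCURRENT_A = 25.0                           # assumed trip threshold

    def monitor(samples):
        """samples: iterable of (t_seconds, current_amps) at 100 ms spacing."""
        filtered, last_report = 0.0, 0.0
        for t, i_raw in samples:
            filtered += ALPHA * (i_raw - filtered)   # smoothed current
            if filtered > OVERCURRENT_A:
                print(f"{t:6.1f} s  OVERCURRENT: {filtered:.1f} A")
            if t - last_report >= 1.0:               # once-per-second status
                print(f"{t:6.1f} s  status: {filtered:.1f} A")
                last_report = t
    ```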

  1. DSN Resource Scheduling

    NASA Technical Reports Server (NTRS)

    Wang, Yeou-Fang; Baldwin, John

    2007-01-01

    TIGRAS is client-side software, which provides tracking-station equipment planning, allocation, and scheduling services to the DSMS (Deep Space Mission System). TIGRAS provides functions for schedulers to coordinate the DSN (Deep Space Network) antenna usage time and to resolve the resource usage conflicts among tracking passes, antenna calibrations, maintenance, and system testing activities. TIGRAS provides a fully integrated multi-pane graphical user interface for all scheduling operations. This is a great improvement over the legacy VAX VMS command line user interface. TIGRAS has the capability to handle all DSN resource scheduling aspects from long-range to real time. TIGRAS assists NASA mission operations with DSN tracking-station equipment resource request processes, from long-range load forecasts (ten years or longer) to midrange, short-range, and real-time (less than one week) emergency tracking plan changes. TIGRAS can be operated by NASA mission operations worldwide to make schedule requests for the DSN station equipment.

  2. An interactive parallel programming environment applied in atmospheric science

    NASA Technical Reports Server (NTRS)

    vonLaszewski, G.

    1996-01-01

    This article introduces an interactive parallel programming environment (IPPE) that simplifies the generation and execution of parallel programs. One of the tasks of the environment is to generate message-passing parallel programs for homogeneous and heterogeneous computing platforms. The parallel programs are represented by using visual objects. This is accomplished with the help of a graphical programming editor that is implemented in Java and enables portability to a wide variety of computer platforms. In contrast to other graphical programming systems, reusable parts of the programs can be stored in a program library to support rapid prototyping. In addition, runtime performance data on different computing platforms is collected in a database. A selection process determines dynamically the software and the hardware platform to be used to solve the problem in minimal wall-clock time. The environment is currently being tested on a Grand Challenge problem, the NASA four-dimensional data assimilation system.

  3. Using PVM to host CLIPS in distributed environments

    NASA Technical Reports Server (NTRS)

    Myers, Leonard; Pohl, Kym

    1994-01-01

    It is relatively easy to enhance CLIPS (C Language Integrated Production System) to support multiple expert systems running in a distributed environment with heterogeneous machines. The task is minimized by using the PVM (Parallel Virtual Machine) code from Oak Ridge Labs to provide the distributed utility. PVM is a library of C and FORTRAN subprograms that supports distributed computing on many different UNIX platforms. A PVM daemon is easily installed on each CPU that enters the virtual machine environment. Any user with rsh or rexec access to a machine can use the one PVM daemon to obtain a generous set of distributed facilities. The ready availability of both CLIPS and PVM makes the combination of software particularly attractive for budget-conscious experimentation in heterogeneous distributed computing with multiple CLIPS executables. This paper presents a design that is sufficient to provide essential message passing functions in CLIPS and enable the full range of PVM facilities.

  4. Automated Analysis of Stateflow Models

    NASA Technical Reports Server (NTRS)

    Bourbouh, Hamza; Garoche, Pierre-Loic; Garion, Christophe; Gurfinkel, Arie; Kahsai, Temesghen; Thirioux, Xavier

    2017-01-01

    Stateflow is a widely used modeling framework for embedded and cyber-physical systems where control software interacts with physical processes. In this work, we present a framework for fully automated safety verification of Stateflow models. Our approach is two-fold: (i) we faithfully compile Stateflow models into hierarchical state machines, and (ii) we use an automated logic-based verification engine to decide the validity of safety properties. The starting point of our approach is a denotational semantics of Stateflow. We propose a compilation process using continuation-passing style (CPS) denotational semantics. Our compilation technique preserves the structural and modal behavior of the system. The overall approach is implemented as an open source toolbox that can be integrated into the existing MathWorks Simulink/Stateflow modeling framework. We present preliminary experimental evaluations that illustrate the effectiveness of our approach in code generation and safety verification of industrial-scale Stateflow models.
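
    Continuation-passing style, the compilation idiom named above, can be shown in a few lines: instead of returning, every function hands its result to an explicit continuation. The example is generic Python, not tied to Stateflow.

    ```python
    # CPS in miniature: "return" is replaced by calling the continuation k.
    def add_cps(a, b, k):
        k(a + b)

    def square_cps(x, k):
        k(x * x)

    # Compute (2 + 3) ** 2 by chaining continuations; prints 25.
    add_cps(2, 3, lambda s: square_cps(s, print))
    ```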

  5. Work Experience Report

    NASA Technical Reports Server (NTRS)

    Guo, Daniel

    2017-01-01

    The NASA Platform for Autonomous Systems (NPAS) toolkit is currently being used at the NASA John C. Stennis Space Center (SSC) to develop the INSIGHT program, which will autonomously monitor and control the Nitrogen System of the High Pressure Gas Facility (HPGF) on site. The INSIGHT program is in need of generic timing capabilities in order to perform timing-based actions such as pump usage timing and sequence step timing. The purpose of this project was to develop a timing module that could fulfill these requirements and be adaptable for expanded use in the future. The code was written in the Gensym G2 software platform, the same as INSIGHT, and was written generically to ensure compatibility with any G2 program. Currently, the module has two timing capabilities, a stopwatch function and a countdown function. Although the module has gone through some functionality testing, actual integration of the module into NPAS and the INSIGHT program is contingent on the module passing later checks.
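
    A generic sketch of the two timing capabilities described, written in Python for illustration rather than Gensym G2; the class and method names are invented.

    ```python
    # Stopwatch counts up from start(); Countdown counts down to a deadline.
    import time

    class Stopwatch:
        def __init__(self):
            self._start = None
        def start(self):
            self._start = time.monotonic()
        def elapsed(self) -> float:
            return 0.0 if self._start is None else time.monotonic() - self._start

    class Countdown:
        def __init__(self, seconds: float):
            self._deadline = time.monotonic() + seconds
        def remaining(self) -> float:
            return max(0.0, self._deadline - time.monotonic())
        def expired(self) -> bool:
            return self.remaining() == 0.0
    ```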

  6. [Development of a Surgical Navigation System with Beam Split and Fusion of the Visible and Near-Infrared Fluorescence].

    PubMed

    Yang, Xiaofeng; Wu, Wei; Wang, Guoan

    2015-04-01

    This paper presents a surgical optical navigation system with non-invasive, real-time positioning characteristics for open surgical procedures. The design was based on the principle of near-infrared fluorescence molecular imaging, using in vivo fluorescence excitation technology, multi-channel spectral camera technology and image fusion software technology. A visible and near-infrared light ring LED excitation source, multi-channel band-pass filters, 2-CCD spectral camera optical sensor technology and computer systems were integrated, and, as a result, a new surgical optical navigation system was successfully developed. When the near-infrared fluorescent agent is injected, the system can display anatomical images of the tissue surface and near-infrared fluorescent functional images of the surgical field simultaneously. The system can identify lymphatic vessels, lymph nodes, and tumor edges that the doctor cannot find with the naked eye intra-operatively. Our research will effectively guide the surgeon in removing the tumor tissue and significantly improve the success rate of surgery. The technologies have obtained a national patent, with patent No. ZI. 2011 1 0292374. 1.

  7. R.O.A.D. to Success: Evaluation of Workplace Literacy Efforts.

    ERIC Educational Resources Information Center

    Askov, Eunice N.; Brown, Emory J.

    1992-01-01

    A group of 58 Pennsylvania workers completed the R.O.A.D. course, which involved functional context and interactive software to improve drivers' reading skills to pass the Commercial Driver's License exam. Comparison with pre- and posttest scores of 10 in a control group showed that R.O.A.D. completers had significantly higher scores. (SK)

  8. High time resolved electron temperature measurements by using the multi-pass Thomson scattering system in GAMMA 10/PDX.

    PubMed

    Yoshikawa, Masayuki; Yasuhara, Ryo; Ohta, Koichi; Chikatsu, Masayuki; Shima, Yoriko; Kohagura, Junko; Sakamoto, Mizuki; Nakashima, Yousuke; Imai, Tsuyoshi; Ichimura, Makoto; Yamada, Ichihiro; Funaba, Hisamichi; Minami, Takashi

    2016-11-01

    High time-resolved electron temperature measurements are useful for fluctuation studies. A multi-pass Thomson scattering (MPTS) system is proposed to improve both the TS signal intensity and the time resolution. The MPTS system in GAMMA 10/PDX has been constructed to enhance the Thomson scattered signals and improve measurement accuracy. The MPTS system has a polarization-based configuration with an image relaying system. We optimized the image relaying optics to improve the multi-pass laser confinement and obtain stable MPTS signals over ten passes. The integrated MPTS signals increased to about five times larger than those in the single-pass system. Finally, time-dependent electron temperatures were obtained at MHz sampling rates.

  9. Study for the dispersion of double-diffraction spectrometers

    NASA Astrophysics Data System (ADS)

    Pang, Yajun; Zhang, Yinxin; Yang, Huaidong; Huang, Zhanhua; Xu, Mingming; Jin, Guofan

    2018-01-01

    Double-cascade spectrometers and double-pass spectrometers can be uniformly called double-diffraction spectrometers. In current double-diffraction spectrometer design theory, the differences of the incident angles in the second diffraction are ignored, so there is a significant difference between the theoretical design and the actual result. In this study, based on the geometries of the double-diffraction spectrometers, we strictly derived the theoretical formulas of their dispersion. By employing the ZEMAX simulation software, verification of our theoretical model was implemented, and the simulation results show close agreement with our theoretical formulas. Based on these conclusions, a double-pass spectrometer was set up and tested, and the experimental results agree with the theoretical model and the simulation.

  10. Comparing the OpenMP, MPI, and Hybrid Programming Paradigm on an SMP Cluster

    NASA Technical Reports Server (NTRS)

    Jost, Gabriele; Jin, Hao-Qiang; anMey, Dieter; Hatay, Ferhat F.

    2003-01-01

    Clusters of SMP (Symmetric Multi-Processors) nodes provide support for a wide range of parallel programming paradigms. The shared address space within each node is suitable for OpenMP parallelization. Message passing can be employed within and across the nodes of a cluster. Multiple levels of parallelism can be achieved by combining message passing and OpenMP parallelization. Which programming paradigm is the best will depend on the nature of the given problem, the hardware components of the cluster, the network, and the available software. In this study we compare the performance of different implementations of the same CFD benchmark application, using the same numerical algorithm but employing different programming paradigms.
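
    The message-passing half of the comparison can be made concrete with a minimal mpi4py sketch (an MPI binding for Python, used here for illustration; the benchmark itself is a CFD code): each rank computes a partial sum and rank 0 performs the reduction.

    ```python
    # partial_sum.py -- run with: mpiexec -n 4 python partial_sum.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    n = 1_000_000
    local = sum(range(rank, n, size))                # this rank's share of the work
    total = comm.reduce(local, op=MPI.SUM, root=0)   # message passing across ranks

    if rank == 0:
        print("total:", total)                       # equals n * (n - 1) // 2
    ```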

  11. Sample registration software for process automation in the Neutron Activation Analysis (NAA) Facility in Malaysia nuclear agency

    NASA Astrophysics Data System (ADS)

    Rahman, Nur Aira Abd; Yussup, Nolida; Salim, Nazaratul Ashifa Bt. Abdullah; Ibrahim, Maslina Bt. Mohd; Mokhtar, Mukhlis B.; Soh@Shaari, Syirrazie Bin Che; Azman, Azraf B.; Ismail, Nadiah Binti

    2015-04-01

    Neutron Activation Analysis (NAA) has been established in Nuclear Malaysia since the 1980s. Most of the established procedures were done manually, including sample registration. The samples were recorded manually in a logbook and given ID numbers. Then all samples, standards, SRM and blanks were recorded on the irradiation vial and several forms prior to irradiation. These manual procedures carried out by the NAA laboratory personnel were time-consuming and inefficient. Sample registration software was developed as part of the IAEA/CRP project on 'Development of Process Automation in the Neutron Activation Analysis (NAA) Facility in Malaysia Nuclear Agency (RC17399)'. The objective of the project is to create PC-based data entry software for the sample preparation stage. This is an effective method to replace the redundant manual data entries that need to be completed by laboratory personnel. The software automatically generates a sample code for each sample in one batch, creates printable registration forms for administration purposes, and stores selected parameters that are passed to the sample analysis program. The software is developed using National Instruments LabVIEW 8.6.
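
    A hedged sketch of the batch registration idea, in Python for illustration (the actual tool is written in LabVIEW); the sample-code naming scheme and form fields are assumptions.

    ```python
    # Generate one code per sample in a batch and write a printable form (CSV).
    import csv
    from datetime import date

    def register_batch(batch_no: int, samples: list[str]) -> list[dict]:
        """Assign IDs like NAA-20150401-B07-003 (hypothetical scheme)."""
        stamp = date.today().strftime("%Y%m%d")
        return [{"code": f"NAA-{stamp}-B{batch_no:02d}-{i:03d}", "name": name}
                for i, name in enumerate(samples, start=1)]

    def write_registration_form(records: list[dict], path: str) -> None:
        with open(path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=["code", "name"])
            writer.writeheader()
            writer.writerows(records)

    records = register_batch(7, ["standard", "SRM", "blank", "soil-A"])
    write_registration_form(records, "batch07.csv")
    ```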

  12. Porting the AVS/Express scientific visualization software to Cray XT4.

    PubMed

    Leaver, George W; Turner, Martin J; Perrin, James S; Mummery, Paul M; Withers, Philip J

    2011-08-28

    Remote scientific visualization, where rendering services are provided by larger scale systems than are available on the desktop, is becoming increasingly important as dataset sizes increase beyond the capabilities of desktop workstations. Uptake of such services relies on access to suitable visualization applications and the ability to view the resulting visualization in a convenient form. We consider five rules from the e-Science community to meet these goals with the porting of a commercial visualization package to a large-scale system. The application uses message-passing interface (MPI) to distribute data among data processing and rendering processes. The use of MPI in such an interactive application is not compatible with restrictions imposed by the Cray system being considered. We present details, and performance analysis, of a new MPI proxy method that allows the application to run within the Cray environment yet still support MPI communication required by the application. Example use cases from materials science are considered.

  13. The AI Bus architecture for distributed knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Schultz, Roger D.; Stobie, Iain

    1991-01-01

    The AI Bus architecture is a layered, distributed, object-oriented framework developed to support the requirements of advanced technology programs for an order of magnitude improvement in software costs. The consequent need for highly autonomous computer systems, adaptable to new technology advances over a long lifespan, led to the design of an open architecture and toolbox for building large-scale, robust, production-quality systems. The AI Bus accommodates a mix of knowledge-based and conventional components, running in heterogeneous, distributed real-world and testbed environments. The concepts and design of the AI Bus architecture are described, along with its current implementation status as a Unix C++ library of reusable objects. Each high-level semiautonomous agent process consists of a number of knowledge sources together with interagent communication mechanisms based on shared blackboards and message-passing acquaintances. Standard interfaces and protocols are followed for combining and validating subsystems. Dynamic probes or demons provide an event-driven means of giving active objects shared access to resources, and each other, while not violating their security.

  14. Efficient system interrupt concept design at the microprogramming level

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fakharzadeh, M.M.

    1989-01-01

    Over the past decade the demand for high-speed super microcomputers has increased tremendously. To satisfy this demand many high-speed 32-bit microcomputers have been designed. However, the currently available 32-bit systems do not provide an adequate solution to many highly demanding problems such as multitasking and interrupt-driven applications, which both require context switching. Systems for these purposes usually incorporate sophisticated software. In order to be efficient, a high-end microprocessor-based system must satisfy stringent software demands. Although these microprocessors use the latest technology in fabrication design and run at very high speed, they still suffer from insufficient hardware support for such applications. All too often, this lack is also the premier cause of execution inefficiency. In this dissertation a micro-programmable control unit and operation unit are considered in an advanced design. An automaton controller is designed for high-speed micro-level interrupt handling. Different stack models are designed for single-task and multitasking environments. The stacks are used for storage of various components of the processor state during interrupt calls, procedure calls, and task switching. A universal (as an example, seven-port) register file is designed for high-speed parameter passing and intertask communication in the multitasking environment. In addition, the register file provides a direct path between the ALU and the peripheral data, which is important in real-time control applications. The overall system is a highly parallel architecture, with no pipeline or internal cache memory, which allows the designer to predict the processor's behavior during critical times.

  15. Decentralized formation flying control in a multiple-team hierarchy.

    PubMed

    Mueller, Joseph B; Thomas, Stephanie J

    2005-12-01

    In recent years, formation flying has been recognized as an enabling technology for a variety of mission concepts in both the scientific and defense arenas. Examples of developing missions at NASA include magnetospheric multiscale (MMS), solar imaging radio array (SIRA), and terrestrial planet finder (TPF). For each of these missions, a multiple satellite approach is required in order to accomplish the large-scale geometries imposed by the science objectives. In addition, the paradigm shift of using a multiple satellite cluster rather than a large, monolithic spacecraft has also been motivated by the expected benefits of increased robustness, greater flexibility, and reduced cost. However, the operational costs of monitoring and commanding a fleet of close-orbiting satellites are likely to be unreasonable unless the onboard software is sufficiently autonomous, robust, and scalable to large clusters. This paper presents the prototype of a system that addresses these objectives: a decentralized guidance and control system that is distributed across spacecraft using a multiple team framework. The objective is to divide large clusters into teams of "manageable" size, so that the communication and computation demands driven by N decentralized units are related to the number of satellites in a team rather than the entire cluster. The system is designed to provide a high level of autonomy, to support clusters with large numbers of satellites, to enable the number of spacecraft in the cluster to change post-launch, and to provide for on-orbit software modification. The distributed guidance and control system will be implemented in an object-oriented style using a messaging architecture for networking and threaded applications (MANTA). In this architecture, tasks may be remotely added, removed, or replaced post launch to increase mission flexibility and robustness. This built-in adaptability will allow software modifications to be made on-orbit in a robust manner. The prototype system, which is implemented in Matlab, emulates the object-oriented and message-passing features of the MANTA software. In this paper, the multiple team organization of the cluster is described, and the modular software architecture is presented. The relative dynamics in eccentric reference orbits is reviewed, and families of periodic, relative trajectories are identified, expressed as sets of static geometric parameters. The guidance law design is presented, and an example reconfiguration scenario is used to illustrate the distributed process of assigning geometric goals to the cluster. Next, a decentralized maneuver planning approach is presented that utilizes linear-programming methods to enact reconfiguration and coarse formation keeping maneuvers. Finally, a method for performing online collision avoidance is discussed, and an example is provided to gauge its performance.

  16. LongISLND: in silico sequencing of lengthy and noisy datatypes

    PubMed Central

    Lau, Bayo; Mohiyuddin, Marghoob; Mu, John C.; Fang, Li Tai; Bani Asadi, Narges; Dallett, Carolina; Lam, Hugo Y. K.

    2016-01-01

    Summary: LongISLND is a software package designed to simulate sequencing data according to the characteristics of third generation, single-molecule sequencing technologies. The general software architecture is easily extendable, as demonstrated by the emulation of Pacific Biosciences (PacBio) multi-pass sequencing with P5 and P6 chemistries, producing data in FASTQ, H5, and the latest PacBio BAM format. We demonstrate its utility by downstream processing with consensus building and variant calling. Availability and Implementation: LongISLND is implemented in Java and available at http://bioinform.github.io/longislnd Contact: hugo.lam@roche.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27667791

  17. Database for propagation models

    NASA Astrophysics Data System (ADS)

    Kantak, Anil V.

    1991-07-01

    A propagation researcher or a systems engineer who intends to use the results of a propagation experiment is generally faced with various database tasks such as the selection of the computer software, the hardware, and the writing of the programs to pass the data through the models of interest. This task is repeated every time a new experiment is conducted or the same experiment is carried out at a different location generating different data. Thus the users of this data have to spend a considerable portion of their time learning how to implement the computer hardware and the software towards the desired end. This situation may be facilitated considerably if an easily accessible propagation database is created that has all the accepted (standardized) propagation phenomena models approved by the propagation research community. Also, the handling of data will become easier for the user. Such a database construction can only stimulate the growth of propagation research if it is available to all the researchers, so that the results of the experiment conducted by one researcher can be examined independently by another, without different hardware and software being used. The database may be made flexible so that the researchers need not be confined only to the contents of the database. Another way in which the database may help the researchers is by the fact that they will not have to document the software and hardware tools used in their research, since the propagation research community will know the database already. The following sections show a possible database construction, as well as properties of the database for the propagation research.

  18. MaROS: Information Management Service

    NASA Technical Reports Server (NTRS)

    Allard, Daniel A.; Gladden, Roy E.; Wright, Jesse J.; Hy, Franklin H.; Rabideau, Gregg R.; Wallick, Michael N.

    2011-01-01

    This software is provided by the Mars Relay Operations Service (MaROS) task to a variety of Mars projects for the purpose of coordinating communications sessions between landed spacecraft assets and orbiting spacecraft assets at Mars. The Information Management Service centralizes a set of functions previously distributed across multiple spacecraft operations teams and, as such, greatly improves visibility into the end-to-end strategic coordination process. Most of the process revolves around the scheduling of communications sessions between the spacecraft during periods of time when a landed asset on Mars is geometrically visible to an orbiting spacecraft. These relay sessions are used to transfer data both to and from the landed asset via the orbiting asset on behalf of Earth-based spacecraft operators. This software component is an application process running in a Java virtual machine. The component provides all service interfaces via a Representational State Transfer (REST) protocol over HTTPS to external clients. There are two general interaction modes with the service: upload and download of data. For data upload, the service must execute logic specific to the uploaded data type and trigger any applicable calculations, including pass delivery latencies and overflight conflicts. For data download, the software must retrieve and correlate the requested information and deliver it to the requesting client. The provision of this service enables several key advancements over legacy processes and systems. For one, this service represents the first time that end-to-end relay information has been correlated into a single shared repository. The software also provides the first multimission latency calculator; previous latency calculations had been performed on a mission-by-mission basis.

  19. Structural Modeling Using "Scanning and Mapping" Technique

    NASA Technical Reports Server (NTRS)

    Amos, Courtney L.; Dash, Gerald S.; Shen, J. Y.; Ferguson, Frederick; Noga, Donald F. (Technical Monitor)

    2000-01-01

    Supported by NASA Glenn Research Center, we are in the process of developing a structural damage diagnostic and monitoring system for rocket engines, which consists of five modules: Structural Modeling, Measurement Data Pre-Processor, Structural System Identification, Damage Detection Criterion, and Computer Visualization. The function of the system is to detect damage as it is incurred by the engine structures. The scientific principle used to identify damage is to utilize the changes in the vibrational properties between the pre-damaged and post-damaged structures. The vibrational properties of the pre-damaged structure can be obtained from an analytic computer model of the structure. Thus, as the first stage of the whole research plan, we currently focus on the first module, Structural Modeling. Three computer software packages have been selected and will be integrated for this purpose: PhotoModeler-Pro, AutoCAD-R14, and MSC/NASTRAN. AutoCAD is the most popular PC-CAD system currently available on the market. For our purpose, it acts as an interface to generate structural models of any particular engine parts or assembly, which are then passed to MSC/NASTRAN for extracting structural dynamic properties. Although AutoCAD is a powerful structural modeling tool, the complexity of engine components requires a further improvement in structural modeling techniques. We are working on a so-called "scanning and mapping" technique, which is relatively new. The basic idea is to produce a full and accurate 3D structural model by tracing over multiple overlapping photographs taken from different angles. There is no need to input point positions, angles, distances, or axes. Photographs can be taken by any type of camera with different lenses. With the integration of such a modeling technique, the capability of structural modeling will be enhanced. The prototype of any complex structural component will be produced by PhotoModeler first, based on existing similar components, then passed to AutoCAD for modification and correction of any discrepancies seen in the PhotoModeler version of the 3D model. These three software packages are fully compatible; the DXF file format can be used to transfer drawings among them. To begin this entire process, we are using a small replica of an actual engine blade as a test object. This paper introduces the accomplishments of our recent work.

  20. VELoCiRaPTORS.

    NASA Astrophysics Data System (ADS)

    Lundgren, J.; Esham, B.; Padalino, S. J.; Sangster, T. C.; Glebov, V.

    2007-11-01

    The Venting and Exhausting of Low Level Air Contaminants in the Rapid Pneumatic Transport of Radioactive Samples (VELoCiRaPTORS) system is constructed to transport radioactive materials quickly and safely at the NIF. A radioactive sample will be placed inside a carrier that is transported via an airflow system produced by controlled differential pressure. Midway through the transportation process, the carrier will be stopped and vented by a powered exhaust blower, which will remove radioactive gases within the transport carrier. A Geiger counter will monitor the activity of the exhaust gas to ensure that it is below acceptable levels. If the radiation level is acceptably low, the carrier will pass through the remainder of the system, braking pneumatically at the counting station. The complete design can run manually or automatically under control software. Tests were performed using an inactive carrier to determine possible transportation problems. The system underwent many consecutive trials without failure. VELoCiRaPTORS is a prototype of a system that could be installed at both the Laboratory for Laser Energetics at the University of Rochester and the National Ignition Facility at LLNL.

  1. Enabling a systems biology knowledgebase with gaggle and firegoose

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baliga, Nitin S.

    The overall goal of this project was to extend the existing Gaggle and Firegoose systems to develop an open-source technology that runs over the web and links desktop applications with many databases and software applications. This technology would enable researchers to incorporate workflows for data analysis that can be executed from this interface to other online applications. The four specific aims were to (1) provide one-click mapping of genes, proteins, and complexes across databases and species; (2) enable multiple simultaneous workflows; (3) expand sophisticated data analysis for online resources; and (4) enhance open-source development of the Gaggle-Firegoose infrastructure. Gaggle is an open-source Java software system that integrates existing bioinformatics programs and data sources into a user-friendly, extensible environment to allow interactive exploration, visualization, and analysis of systems biology data. Firegoose is an extension to the Mozilla Firefox web browser that enables data transfer between websites and desktop tools including Gaggle. In the last phase of this funding period, we made substantial progress on development and application of the Gaggle integration framework. We added the workspace to the Network Portal. Users can capture data from Firegoose and save it to the workspace. Users can create workflows to start multiple software components programmatically and pass data between them. Results of analysis can be saved to the cloud so that they can be easily restored on any machine. We also developed the Gaggle Chrome Goose, a plugin for the Google Chrome browser, in tandem with an OpenCPU server in the Amazon EC2 cloud. This allows users to interactively perform data analysis on a single web page using the R packages deployed on the OpenCPU server. The cloud-based framework facilitates collaboration between researchers from multiple organizations. We have made a number of enhancements to the cmonkey2 application to enable and improve integration within different environments, and we have created a new tools pipeline for generating EGRIN2 models in a largely automated way.

  2. The DoD's High Performance Computing Modernization Program - Ensuring the National Earth System Prediction Capability Becomes Operational

    NASA Astrophysics Data System (ADS)

    Burnett, W.

    2016-12-01

    The Department of Defense's (DoD) High Performance Computing Modernization Program (HPCMP) provides high performance computing to address the most significant challenges in computational resources, software application support, and nationwide research and engineering networks. Today, the HPCMP has a critical role in ensuring the National Earth System Prediction Capability (N-ESPC) achieves initial operational status in 2019. A 2015 study commissioned by the HPCMP found that N-ESPC computational requirements will exceed interconnect bandwidth capacity due to the additional load from data assimilation and from passing data between coupled ensemble codes. Memory bandwidth and I/O bandwidth will continue to be significant bottlenecks for the scalability of the Navy's Hybrid Coordinate Ocean Model (HYCOM), by far the major driver of computing resource requirements in the N-ESPC. The study also found that few of the N-ESPC model developers have detailed plans to ensure their respective codes scale through 2024. Three HPCMP initiatives are designed to directly address and support these issues: Productivity Enhancement, Technology Transfer and Training (PETTT); the HPCMP Applications Software Initiative (HASI); and Frontier Projects. PETTT supports code conversion by providing assistance, expertise, and training in scalable and high-end computing architectures. HASI addresses the continuing need for modern application software that executes effectively and efficiently on next-generation high-performance computers. Frontier Projects enable research and development that could not be achieved using typical HPCMP resources by providing multi-disciplinary teams access to exceptional amounts of high performance computing resources. Finally, the Navy's DoD Supercomputing Resource Center (DSRC) currently operates a 6 Petabyte system, of which Naval Oceanography receives 15% of operational computational system use, or approximately 1 Petabyte of the processing capability. The DSRC will provide the DoD with the future computing assets to initially operate the N-ESPC in 2019. This talk will further describe how the DoD's HPCMP will ensure N-ESPC becomes operational, efficiently and effectively, using next-generation high performance computing.

  3. Parallel Fortran-MPI software for numerical inversion of the Laplace transform and its application to oscillatory water levels in groundwater environments

    USGS Publications Warehouse

    Zhan, X.

    2005-01-01

    A parallel Fortran-MPI (Message Passing Interface) software package for numerical inversion of the Laplace transform, based on a Fourier series method, was developed to meet the need of solving computationally intensive problems involving the response of oscillatory water levels to hydraulic tests in a groundwater environment. The software is a parallel version of ACM (Association for Computing Machinery) Transactions on Mathematical Software (TOMS) Algorithm 796. Running 38 test examples indicated that implementing MPI techniques on a distributed-memory architecture speeds up the processing and improves efficiency. Applications to oscillatory water levels in a well during aquifer tests are presented to illustrate how this package can be applied to solve complicated environmental problems involving differential and integral equations. The package is free and is easy to use for people with little or no previous experience with MPI who wish to get off to a quick start in parallel computing. © 2004 Elsevier Ltd. All rights reserved.
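
    Algorithm 796 itself is not reproduced in the record. As a rough serial illustration of the Fourier-series family of inversion methods it parallelizes (Dubner-Abate flavor, plain truncation with no series acceleration, all parameters illustrative):

    ```python
    import numpy as np

    def laplace_invert(F, t, M=2000, A=18.4):
        """Fourier-series (Dubner-Abate style) inversion of a Laplace transform.

        F must accept complex numpy arguments. Plain truncation after M terms;
        production codes such as TOMS 796 add series acceleration.
        """
        T = 2.0 * t                      # period parameter; must exceed t
        a = A / (2.0 * T)                # shift controlling the discretization error
        k = np.arange(1, M + 1)
        s = a + 1j * k * np.pi / T
        series = np.real(F(s) * np.exp(1j * k * np.pi * t / T)).sum()
        return (np.exp(a * t) / T) * (0.5 * np.real(F(a + 0j)) + series)

    # Check against a known pair: L{e^-t} = 1/(s+1)
    for t in (0.5, 1.0, 2.0):
        print(t, laplace_invert(lambda s: 1.0 / (s + 1.0), t), np.exp(-t))
    ```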

  4. The Software Correlator of the Chinese VLBI Network

    NASA Technical Reports Server (NTRS)

    Zheng, Weimin; Quan, Ying; Shu, Fengchun; Chen, Zhong; Chen, Shanshan; Wang, Weihua; Wang, Guangli

    2010-01-01

    The software correlator of the Chinese VLBI Network (CVN) has played an irreplaceable role in CVN routine data processing, e.g., in the Chinese lunar exploration project. This correlator will be upgraded to process geodetic and astronomical observation data. In the future, with several new stations joining the network, CVN will carry out crustal movement observations, quick UT1 measurements, astrophysical observations, and deep space exploration activities. For the geodetic or astronomical observations, we need a wide-band 10-station correlator. For spacecraft tracking, a real-time and highly reliable correlator is essential. To meet the scientific and navigation requirements of CVN, two parallel software correlators for multiprocessor environments are under development. A high-speed, 10-station prototype correlator using a mixed Pthreads and MPI (Message Passing Interface) parallel algorithm on a computer cluster platform is being developed. Another real-time software correlator for spacecraft tracking adopts thread-parallel technology and runs on SMP (Symmetric Multiple Processor) servers. Both correlators are characterized by a flexible structure and scalability.
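
    The record gives no correlator internals. The core FX operation that such software correlators distribute across stations and threads, Fourier transform then cross-multiply and accumulate, can be sketched as follows (toy single-baseline case; real correlators add delay tracking, fringe rotation, and quantization handling):

    ```python
    import numpy as np

    def fx_correlate(x, y, nchan=256):
        """Toy FX correlator for one baseline: FFT each segment of the two
        station streams, cross-multiply, and accumulate a visibility spectrum."""
        nseg = min(len(x), len(y)) // nchan
        acc = np.zeros(nchan, dtype=complex)
        for i in range(nseg):
            X = np.fft.fft(x[i * nchan:(i + 1) * nchan])
            Y = np.fft.fft(y[i * nchan:(i + 1) * nchan])
            acc += X * np.conj(Y)
        return acc / nseg

    rng = np.random.default_rng(0)
    s = rng.normal(size=4096)                      # common signal at both stations
    vis = fx_correlate(s + 0.1 * rng.normal(size=4096),
                       s + 0.1 * rng.normal(size=4096))
    print(np.abs(vis[:4]))                         # strong correlated power
    ```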

  5. Heating and cooling system for an on-board gas adsorbent storage vessel

    DOEpatents

    Tamburello, David A.; Anton, Donald L.; Hardy, Bruce J.; Corgnale, Claudio

    2017-06-20

    In one aspect, a system for controlling the temperature within a gas adsorbent storage vessel of a vehicle may include an air conditioning system forming a continuous flow loop of heat exchange fluid that is cycled between a heated flow and a cooled flow. The system may also include at least one fluid by-pass line extending at least partially within the gas adsorbent storage vessel. The fluid by-pass line(s) may be configured to receive a by-pass flow including at least a portion of the heated flow or the cooled flow of the heat exchange fluid at one or more input locations and expel the by-pass flow back into the continuous flow loop at one or more output locations, wherein the by-pass flow is directed through the gas adsorbent storage vessel via the by-pass line(s) so as to adjust an internal temperature within the gas adsorbent storage vessel.

  6. Software-Related Recalls of Health Information Technology and Other Medical Devices: Implications for FDA Regulation of Digital Health.

    PubMed

    Ronquillo, Jay G; Zuckerman, Diana M

    2017-09-01

    Policy Points: Medical software has become an increasingly critical component of health care, yet the regulation of these devices is inconsistent and controversial. No studies of medical devices and software assess the impact on patient safety of the FDA's current regulatory safeguards and new legislative changes to those standards. Our analysis quantifies the impact of software problems in regulated medical devices and indicates that current regulations are necessary but not sufficient for ensuring patient safety by identifying and eliminating dangerous defects in software currently on the market. New legislative changes will further deregulate health IT, reducing safeguards that facilitate the reporting and timely recall of flawed medical software that could harm patients. Medical software has become an increasingly critical component of health care, yet the regulatory landscape for digital health is inconsistent and controversial. To understand which policies might best protect patients, we examined the impact of the US Food and Drug Administration's (FDA's) regulatory safeguards on software-related technologies in recent years and the implications for newly passed legislative changes in regulatory policy. Using FDA databases, we identified all medical devices that were recalled from 2011 through 2015 primarily because of software defects. We counted all software-related recalls for each FDA risk category and evaluated each high-risk and moderate-risk recall of electronic medical records to determine the manufacturer, device classification, submission type, number of units, and product details. A total of 627 software devices (1.4 million units) were subject to recalls, with 12 of these devices (190,596 units) subject to the highest-risk recalls. Eleven of the devices recalled as high risk had entered the market through the FDA review process that does not require evidence of safety or effectiveness, and one device was completely exempt from regulatory review. The largest high-risk recall categories were anesthesiology and general hospital, with one each in cardiovascular and neurology. Five electronic medical record systems (9,347 units) were recalled for software defects classified as posing a moderate risk to patient safety. Software problems in medical devices are not rare and have the potential to negatively influence medical care. Premarket regulation has not captured all the software issues that could harm patients, evidenced by the potentially large number of patients exposed to software products later subject to high-risk and moderate-risk recalls. Provisions of the 21st Century Cures Act that became law in late 2016 will reduce safeguards further. Absent stronger regulations and implementation to create robust risk assessment and adverse event reporting, physicians and their patients are likely to be at risk from medical errors caused by software-related problems in medical devices. © 2017 Milbank Memorial Fund.

  7. An Educational MONTE CARLO Simulation/Animation Program for the Cosmic Rays Muons and a Prototype Computer-Driven Hardware Display.

    ERIC Educational Resources Information Center

    Kalkanis, G.; Sarris, M. M.

    1999-01-01

    Describes an educational software program for the study of cosmic-ray muons and methods for detecting them as they pass through several light-transparent materials (i.e., water, air, etc.). Simulates muon and Cherenkov photon paths and interactions and visualizes/animates them on the computer screen using Monte Carlo methods/techniques which employ…
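
    The ERIC summary is truncated, but the Monte Carlo idea it describes is easy to illustrate. A toy sketch that rejection-samples the standard sea-level cos²θ zenith-angle law and estimates the mean muon path length through a flat layer (the thickness is an arbitrary placeholder):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def sample_zenith(n):
        """Rejection-sample zenith angles from the ~cos^2(theta) sea-level law."""
        out = np.empty(0)
        while out.size < n:
            theta = rng.uniform(0.0, np.pi / 2, n)
            out = np.concatenate([out, theta[rng.uniform(size=n) < np.cos(theta) ** 2]])
        return out[:n]

    theta = sample_zenith(100_000)
    depth = 1.0                                  # layer thickness in metres (placeholder)
    print("mean path length:", (depth / np.cos(theta)).mean(), "m")   # ~4/pi * depth
    ```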

  8. FASTER - A tool for DSN forecasting and scheduling

    NASA Technical Reports Server (NTRS)

    Werntz, David; Loyola, Steven; Zendejas, Silvino

    1993-01-01

    FASTER (Forecasting And Scheduling Tool for Earth-based Resources) is a suite of tools designed for forecasting and scheduling JPL's Deep Space Network (DSN). The DSN is a set of antennas and other associated resources that must be scheduled for satellite communications, astronomy, maintenance, and testing. FASTER consists of MS-Windows-based programs that replace two existing programs (RALPH and PC4CAST). FASTER was designed to be more flexible, maintainable, and user-friendly, and it makes heavy use of commercial software to allow for customization by users. FASTER implements scheduling as a two-pass process: the first pass calculates a predictive profile of resource utilization; the second pass uses this information to calculate a cost function used in a dynamic-programming optimization step. This information allows the scheduler to 'look ahead' at activities that are not yet scheduled. FASTER has succeeded in allowing wider access to data and tools, reducing the amount of effort expended, and increasing the quality of analysis.
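
    The abstract sketches the two-pass design only in words. The toy below illustrates the shape of it, a first pass that builds a demand profile and a second pass that uses the profile as a look-ahead cost, with invented data and a greedy placement standing in for FASTER's dynamic-programming step:

    ```python
    from collections import Counter

    # Pass 1: build a demand profile over hourly slots from all requests.
    # Pass 2: place each request in its cheapest feasible slot, where cost is
    # forecast contention, a crude stand-in for FASTER's DP cost function.
    requests = [("trackA", {0, 1, 2}), ("trackB", {1, 2}), ("trackC", {2, 3})]

    profile = Counter(slot for _, slots in requests for slot in slots)    # pass 1

    schedule, used = {}, set()
    for name, slots in sorted(requests, key=lambda r: len(r[1])):         # pass 2
        free = [s for s in slots if s not in used]
        if free:
            best = min(free, key=lambda s: profile[s])   # steer away from contention
            schedule[name] = best
            used.add(best)
    print(schedule)   # {'trackB': 1, 'trackC': 3, 'trackA': 0}
    ```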

  9. SU-D-213-05: Design, Evaluation and First Applications of a Off-Site State-Of-The-Art 3D Dosimetry System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malcolm, J; Mein, S; McNiven, A

    2015-06-15

    Purpose: To design, construct, and commission a prototype in-house three-dimensional (3D) dose verification system for stereotactic body radiotherapy (SBRT) verification at an off-site partner institution, and to investigate the potential of this system to achieve sufficient performance (1 mm resolution, 3% noise, within 3% of true dose reading) for SBRT verification. Methods: The system was designed using a parallel-ray geometry provided by precision telecentric lenses and a 630 nm LED light source. Using a radiochromic dosimeter, a 3D dosimetric comparison with our gold-standard system and treatment planning software (Eclipse) was done for a four-field box treatment, under gamma passing criteria of 3%/3mm with a 10% dose threshold. After off-site installation, deviations in the system's dose readout performance were assessed by rescanning the four-field-box irradiated dosimeter and using line profiles to compare on-site and off-site mean and noise levels in four distinct dose regions. As a final step, an end-to-end test of the system was completed at the off-site location, including CT simulation, irradiation of the dosimeter, and a 3D dosimetric comparison of the planned (Pinnacle³) to delivered dose for a spinal SBRT treatment (12 Gy per fraction). Results: The noise level in the high- and medium-dose regions of the four-field box treatment was approximately 5% both pre- and post-installation, reflecting the reduction in positional uncertainty achieved by the new design. At 1 mm dose voxels, the gamma pass rates (3%/3mm) for our in-house gold-standard system and the off-site system were comparable at 95.8% and 93.2%, respectively. Conclusion: This work describes the end-to-end process and results of designing, installing, and commissioning a state-of-the-art 3D dosimetry system created for verification of advanced radiation treatments including spinal radiosurgery.
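
    Several records in this set quote gamma pass rates at 3%/3mm. For readers unfamiliar with the metric, a deliberately simplified 1-D global-gamma sketch (no interpolation, dose and distance on the same grid) is:

    ```python
    import numpy as np

    def gamma_pass_rate(ref, meas, dx=1.0, dd=3.0, dta=3.0, threshold=0.10):
        """1-D global gamma: dd% of max dose, dta mm distance-to-agreement."""
        dmax = ref.max()
        x = np.arange(len(ref)) * dx
        passed = []
        for i, d_m in enumerate(meas):
            if ref[i] < threshold * dmax:          # skip the low-dose region
                continue
            dose_term = (d_m - ref) / (dd / 100.0 * dmax)
            dist_term = (x[i] - x) / dta
            gamma = np.sqrt(dose_term ** 2 + dist_term ** 2).min()
            passed.append(gamma <= 1.0)
        return 100.0 * np.mean(passed)

    ref = np.clip(100.0 - (np.arange(50.0) - 25.0) ** 2, 0.0, None)   # toy dose profile
    print(gamma_pass_rate(ref, ref * 1.02), "% of points pass")       # 2% hot: all pass
    ```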

  10. [Functional indices of the participants of the satellite experiments of the "Mars-500" project in the north of Russia in different seasons of a year].

    PubMed

    Solonin, Iu G; Markov, A L; Boĭko, E R; Potolitsyna, N N; Parshukova, O I

    2014-01-01

    Seventeen male northerners participating in the satellite experiments of the "Mars-500" project underwent morphological, physiometric, psychological, and biochemical studies. The prenosological health indices in different seasons were calculated using the hardware-software complex "Ecosan-2007". Seasonal sinusoidal fluctuations were detected for thermoregulation (body and skin temperature), lipid metabolism (cholesterol, HDL, and LDL levels in the blood), and circulation regulation under physical exercise (the increase of the "double product" and its recovery time). In the majority of the participants, unfavorable deviations of body mass index, "power" and "life" indices, simple visual-motor reaction time, Kerdo vegetative index, physical health level, and regulatory systems activity index (in comparison with mid-latitude standards) were found.

  11. SU-E-T-345: Effect of DLG and MLC Transmission Value Set in the Treatment Planning System (TPS) On Dosimetric Accuracy of True Beam Hypofractionated SRT/SBRT and 2Gy/fx Prostate Rapid Arc Plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, X; Wang, Y

    Purpose: Due to limited commissioning time, we previously released our True beam non-FFF mode only for prostate treatment. Clinical demand now pushes us to release the non-FFF mode for SRT/SBRT treatment. When re-planning previously treated iX SRT/SBRT cases on the True beam, we found that the patient-specific QA pass rate was worse than the iX's, though the 2 Gy/fx prostate results had been as good. We hypothesized that the True beam DLG and MLC transmission values measured during commissioning and set in the TPS could not yet provide accurate SRS/SBRT dosimetry. Hence this work investigates how the TPS DLG and transmission values affect the dosimetric accuracy of Rapid Arc plans. Methods: We increased the DLG and transmission values of the True beam in the TPS such that their percentage differences against the measured values matched those of the iX. We re-calculated 2 SRT, 1 SBRT, and 2 prostate plans, performed patient-specific QA on these new plans, and compared the results to the previous ones. Results: With the DLG and transmission values set respectively 40% and 8% higher than the measured ones, the patient-specific QA pass rate (at 3%/3mm) improved from 95.0 to 97.6% vs the previous iX's 97.8% in the case of SRT. In the case of SBRT, the pass rate improved from 75.2 to 93.9% vs the previous iX's 92.5%. In the case of prostate, the pass rate improved from 99.3 to 100%. The maximum dose difference between plans before and after adjusting DLG and transmission was approximately 1% of the prescription dose among all plans. Conclusion: The impact of adjusting the DLG and transmission values on dosimetry might be the same among all Rapid Arc plans regardless of whether they are hypofractionated or not. The large variation observed in patient-specific QA pass rates might be due to the data analysis method in the QA software being more sensitive to hypofractionated plans.

  12. Development of high precision digital driver of acoustic-optical frequency shifter for ROG

    NASA Astrophysics Data System (ADS)

    Zhang, Rong; Kong, Mei; Xu, Yameng

    2016-10-01

    We develop a high precision digital driver of the acoustic-optical frequency shifter (AOFS) based on parallel direct digital synthesizer (DDS) technology. We use an atomic clock as the phase-locked loop (PLL) reference clock, and the PLL is realized by a dual digital phase-locked loop. A DDS sampling clock up to 320 MHz with a frequency stability as low as 10⁻¹² is obtained. By constructing an RF signal measurement system, it is measured that the frequency output range of the AOFS driver is 52-58 MHz, the center frequency of the band-pass filter is 55 MHz, the in-band ripple is less than 1 dB @ 3 MHz, the single-channel output power is up to 0.3 W, the frequency stability is 1 ppb (over a 1 hour duration), and the frequency-shift precision is 0.1 Hz. The obtained frequency stability is two orders of magnitude better than that of analog AOFS drivers. For the designed binary frequency shift keying (2-FSK) and binary phase shift keying (2-PSK) modulation system, the demodulating frequency of the input TTL synchronous level signal is up to 10 kHz. The designed digital-bus coding/decoding system is compatible with many conventional digital bus protocols. It can interface with the ROG signal detecting software through the integrated drive electronics (IDE) and exchange data with the two DDS frequency-shift channels through the signal detecting software.
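
    The abstract gives the 320 MHz DDS clock but not the accumulator width. The generic phase-accumulator relation between output frequency and tuning word (a 32-bit accumulator is assumed here purely for illustration) is:

    ```python
    F_CLK = 320e6      # DDS sample clock quoted in the abstract
    N_BITS = 32        # accumulator width: an assumption, parts differ

    def tuning_word(f_out):
        """Phase-accumulator DDS: f_out = FTW * f_clk / 2**N."""
        return round(f_out / F_CLK * 2 ** N_BITS)

    ftw = tuning_word(55e6)                           # 55 MHz AOFS centre frequency
    print(ftw, "step =", F_CLK / 2 ** N_BITS, "Hz")   # ~0.0745 Hz resolution
    ```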

  13. Extraction, integration and analysis of alternative splicing and protein structure distributed information

    PubMed Central

    D'Antonio, Matteo; Masseroli, Marco

    2009-01-01

    Background Alternative splicing has been demonstrated to affect most human genes; different isoforms from the same gene encode proteins which differ in a limited number of residues, thus yielding similar structures. This suggests possible correlations between alternative splicing and protein structure. In order to support the investigation of such relationships, we have developed the Alternative Splicing and Protein Structure Scrutinizer (PASS), a Web application to automatically extract, integrate and analyze human alternative splicing and protein structure data sparsely available in the Alternative Splicing Database, Ensembl databank and Protein Data Bank. Primary data from these databases have been integrated and analyzed using the Protein Identifier Cross-Reference, BLAST, CLUSTALW and FeatureMap3D software tools. Results A database has been developed to store the considered primary data and the results from their analysis; a system of Perl scripts has been implemented to automatically create and update the database and analyze the integrated data; a Web interface has been implemented to make the analyses easily accessible; a database has been created to manage user access to the PASS Web application and store users' data and searches. Conclusion PASS automatically integrates data from the Alternative Splicing Database with protein structure data from the Protein Data Bank. Additionally, it comprehensively analyzes the integrated data with publicly available, well-known bioinformatics tools in order to generate structural information for isoform pairs. Further analysis of this valuable information might reveal interesting relationships between alternative splicing and protein structure differences, which may be significantly associated with different functions. PMID:19828075

  14. Solidify, An LLVM pass to compile LLVM IR into Solidity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kothapalli, Abhiram

    The software currently compiles LLVM IR into Solidity (Ethereum's dominant programming language) using LLVM's pass library. Specifically, this compiler allows us to convert an arbitrary DSL into Solidity. We focus specifically on converting domain-specific languages into Solidity due to their ease of use and provable properties. By creating a toolchain to compile lightweight domain-specific languages into Ethereum's dominant language, Solidity, we allow non-specialists to effectively develop safe and useful smart contracts. For example, lawyers from a certain firm can have a proprietary DSL that codifies basic laws safely converted to Solidity to be securely executed on the blockchain. In another example, a simple provenance-tracking language can be compiled and securely executed on the blockchain.

  15. Real time speech formant analyzer and display

    DOEpatents

    Holland, George E.; Struve, Walter S.; Homer, John F.

    1987-01-01

    A speech analyzer for interpretation of sound includes a sound input which converts the sound into a signal representing the sound. The signal is passed through a plurality of frequency pass filters to derive a plurality of frequency formants. These formants are converted to voltage signals by frequency-to-voltage converters and then are prepared for visual display in continuous real time. Parameters from the inputted sound are also derived and displayed. The display may then be interpreted by the user. The preferred embodiment includes a microprocessor which is interfaced with a television set for displaying of the sound formants. The microprocessor software enables the sound analyzer to present a variety of display modes for interpretive and therapeutic use by the user.

  16. Real time speech formant analyzer and display

    DOEpatents

    Holland, G.E.; Struve, W.S.; Homer, J.F.

    1987-02-03

    A speech analyzer for interpretation of sound includes a sound input which converts the sound into a signal representing the sound. The signal is passed through a plurality of frequency pass filters to derive a plurality of frequency formants. These formants are converted to voltage signals by frequency-to-voltage converters and then are prepared for visual display in continuous real time. Parameters from the inputted sound are also derived and displayed. The display may then be interpreted by the user. The preferred embodiment includes a microprocessor which is interfaced with a television set for displaying of the sound formants. The microprocessor software enables the sound analyzer to present a variety of display modes for interpretive and therapeutic use by the user. 19 figs.
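
    A software analogue of the patent's front end, a bank of band-pass filters followed by an energy stage standing in for the frequency-to-voltage converters, can be sketched with scipy. The band edges below are illustrative guesses, not values from the patent:

    ```python
    import numpy as np
    from scipy.signal import butter, sosfilt

    FS = 16_000                                      # sample rate (illustrative)
    BANDS = [(200, 900), (900, 2500), (2500, 3500)]  # rough F1-F3 ranges (guesses)

    def band_energies(x):
        """Filter-bank front end: band-pass each formant region, then take RMS
        as a software stand-in for the patent's frequency-to-voltage stage."""
        energies = []
        for lo, hi in BANDS:
            sos = butter(4, [lo, hi], btype="bandpass", fs=FS, output="sos")
            y = sosfilt(sos, x)
            energies.append(np.sqrt(np.mean(y ** 2)))
        return energies

    t = np.arange(0, 0.05, 1 / FS)
    x = np.sin(2 * np.pi * 500 * t) + 0.5 * np.sin(2 * np.pi * 1500 * t)
    print(band_energies(x))    # energy concentrated in the first two bands
    ```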

  17. Optical design of a novel instrument that uses the Hartmann-Shack sensor and Zernike polynomials to measure and simulate customized refraction correction surgery outcomes and patient satisfaction

    NASA Astrophysics Data System (ADS)

    Yasuoka, Fatima M. M.; Matos, Luciana; Cremasco, Antonio; Numajiri, Mirian; Marcato, Rafael; Oliveira, Otavio G.; Sabino, Luis G.; Castro N., Jarbas C.; Bagnato, Vanderlei S.; Carvalho, Luis A. V.

    2016-03-01

    An optical system that conjugates the patient's pupil to the plane of a Hartmann-Shack (HS) wavefront sensor has been simulated using optical design software, and an optical bench prototype has been mounted using a mechanical eye device, beam splitter, illumination system, lenses, mirrors, a mirrored prism, a movable mirror, a wavefront sensor, and a CCD camera. The mechanical eye device is used to simulate aberrations of the eye. From this device the rays are emitted and travel via the beam splitter to the optical system. Some rays fall on the CCD camera and others pass through the optical system and finally reach the sensor. Eye models based on typical in vivo eye aberrations are constructed using the optical design software Zemax. The computer-aided outcomes of the HS images for each case are acquired, and these images are processed using customized techniques. The simulated and real images for low-order aberrations are compared using centroid coordinates to ensure that the optical system is constructed precisely enough to match the simulated system. Afterwards, a simulated version of retinal images is constructed to show how these typical eyes would perceive an optotype positioned 20 ft away. Certain personalized corrections are allowed by eye doctors based on different Zernike polynomial values, and the optical images are rendered to the new parameters. Optical images of how that eye would see with or without corrections of certain aberrations are generated in order to determine which aberrations can be corrected and to what degree. The patient can then "personalize" the correction to their own satisfaction. This new approach to wavefront sensing is a promising change of paradigm towards the betterment of the patient-physician relationship.
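
    The record does not state its Zernike conventions. A small sketch that renders a wavefront map from a few low-order Noll-normalized terms (coefficient values and units are placeholders) is:

    ```python
    import numpy as np

    def wavefront(coeffs, n=128):
        """Wavefront map from a few low-order Zernike terms (Noll normalization)."""
        y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
        rho, theta = np.hypot(x, y), np.arctan2(y, x)
        basis = {
            "defocus": np.sqrt(3) * (2 * rho ** 2 - 1),
            "astig_0": np.sqrt(6) * rho ** 2 * np.cos(2 * theta),
            "coma_x":  np.sqrt(8) * (3 * rho ** 3 - 2 * rho) * np.cos(theta),
        }
        W = sum(c * basis[k] for k, c in coeffs.items())
        W[rho > 1] = np.nan               # outside the unit pupil
        return W

    W = wavefront({"defocus": 0.5, "astig_0": -0.2})   # coefficients in microns (placeholder)
    print(np.nanmax(W) - np.nanmin(W), "PV, same units as the coefficients")
    ```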

  18. USGS Imagery Applications During Disaster Response After Recent Earthquakes

    NASA Astrophysics Data System (ADS)

    Hudnut, K. W.; Brooks, B. A.; Glennie, C. L.; Finnegan, D. C.

    2015-12-01

    It is not only important to rapidly characterize surface fault rupture and related ground deformation after an earthquake, but also to repeatedly make observations following an event to forecast fault afterslip. These data may also be used by other agencies to monitor progress on damage repairs and restoration efforts by emergency responders and the public. Related requirements include repeatedly obtaining reference or baseline imagery before a major disaster occurs, as well as maintaining careful geodetic control on all imagery in a time series so that absolute georeferencing may be applied to the image stack through time. In addition, repeated post-event imagery acquisition is required, generally at a higher repetition rate soon after the event, then scaled back to less frequent acquisitions with time, to capture phenomena (such as fault afterslip) that are known to have rates that decrease rapidly with time. For example, lidar observations acquired before and after the South Napa earthquake of 2014, used in our extensive post-processing work that was funded primarily by FEMA, aided in the accurate forecasting of fault afterslip. Lidar was used to independently validate and verify the official USGS afterslip forecast. In order to keep pace with rapidly evolving technology, a development pipeline must be established and maintained to continually test and incorporate new sensors, while adapting these new components to the existing platform and linking them to the existing base software system, and then sequentially testing the system as it evolves. Improvements in system performance by incremental upgrades of system components and software are essential. Improving calibration parameters and thereby progressively eliminating artifacts requires ongoing testing, research and development. To improve the system, we have formed an interdisciplinary team with common interests and diverse sources of support. We share expertise and leverage funding while effectively and rapidly improving our system, which includes the sensor package and software for all steps in acquiring, processing and differencing repeat-pass lidar and electro-optical imagery, and the GRiD metadata and point cloud database standard, already used during disaster response surge events by other agencies (e.g., during Hurricane Sandy in 2012).

  19. A study of PC-based ultrasonic goniometer system of surface properties and characterization of materials

    NASA Astrophysics Data System (ADS)

    Sani, S.; Saad, M. H. Md; Jamaludin, N.; Ismail, M. P.; Mohd, S.; Mustapha, I.; Masenwat, N. A.; Tengku Amran, T. S.; Megat Ahmad, M. H. A.

    2018-01-01

    This paper discusses the design and development of a portable PC-based ultrasonic goniometer system that can be used to study material properties using ultrasonic waves. The system utilizes an ultrasonic pulser-receiver card attached to a notebook computer for signal display. A new purpose-built software package (GoNIO) was developed to control the operation of the scanner, display the data, and analyze characteristics of materials. System testing was carried out using samples with cubic dimensions of about 10 mm x 20 mm x 30 mm. This size allows the sample to be fitted into the goniometer specimen holder and immersed in a liquid during measurement. The sample was rotated from an incident angle of 0° to 90° during measurement, and the amplitudes of the reflected signals were recorded at every degree of rotation. Immersion transducers were used to generate and receive the ultrasound that passes through the samples. Longitudinal, shear, and Rayleigh wave measurements were performed on the samples to determine the dynamic Young's modulus. Results of the measurements are explained and discussed.
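
    The step from measured wave speeds to the dynamic Young's modulus uses the textbook isotropic-elasticity relations. A sketch with steel-like placeholder numbers (not values from the paper):

    ```python
    def dynamic_moduli(v_l, v_s, rho):
        """Isotropic elastic constants from longitudinal/shear wave speeds."""
        nu = (v_l ** 2 - 2 * v_s ** 2) / (2 * (v_l ** 2 - v_s ** 2))   # Poisson's ratio
        E = rho * v_s ** 2 * (3 * v_l ** 2 - 4 * v_s ** 2) / (v_l ** 2 - v_s ** 2)
        return E, nu

    E, nu = dynamic_moduli(v_l=5900.0, v_s=3200.0, rho=7850.0)   # steel-like values
    print(f"E = {E / 1e9:.0f} GPa, nu = {nu:.2f}")               # ~208 GPa, ~0.29
    ```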

  20. An integrated GPS-FID system for airborne gas detection of pipeline right-of-ways

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gehue, H.L.; Sommer, P.

    1996-12-31

    Pipeline integrity, safety, and environmental concerns are of prime importance in the Canadian natural gas industry. Terramatic Technology Inc. (TTI) has developed an integrated GPS/FID gas detection system known as TTI-AirTrac™ for use in airborne gas detection (AGD) along pipeline rights-of-way. The Flame Ionization Detector (FID), which has traditionally been used to monitor air quality for gas plants and refineries, has been integrated with the Global Positioning System (GPS) via a 486 DX2-50 computer and specialized open-architecture data acquisition software. The purpose of this technology marriage is to be able to continuously monitor air quality during airborne pipeline inspection. Event tagging from visual surveillance is used to determine an explanation of any delta line deviations (DLD). These deviations are an indication of hydrocarbon gases present in a plume that the aircraft has passed through. The role of the GPS system is to provide mapping information and coordinate data for ground inspections. A ground-based inspection using a handheld multi-gas detector will confirm whether or not a leak exists.

  1. PANDA: A distributed multiprocessor operating system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chubb, P.

    1989-01-01

    PANDA is a design for a distributed multiprocessor and an operating system. PANDA is designed to allow easy expansion of both hardware and software. As such, the PANDA kernel provides only message passing and memory and process management. The other features needed for the system (device drivers, secondary storage management, etc.) are provided as replaceable user tasks. The thesis presents PANDA's design and implementation, both hardware and software. PANDA uses multiple 68010 processors sharing memory on a VME bus, each such node potentially connected to others via a high speed network. The machine is completely homogeneous: there are no differences between processors that are detectable by programs running on the machine. A single two-processor node has been constructed. Each processor contains memory management circuits designed to allow processors to share page tables safely. PANDA presents a programmers' model similar to the hardware model: a job is divided into multiple tasks, each having its own address space. Within each task, multiple processes share code and data. Tasks can send messages to each other, and set up virtual circuits between themselves. Peripheral devices such as disc drives are represented within PANDA by tasks. PANDA divides secondary storage into volumes, each volume being accessed by a volume access task, or VAT. All knowledge about the way that data is stored on a disc is kept in its volume's VAT. The design is such that PANDA should provide a useful testbed for file systems and device drivers, as these can be installed without recompiling PANDA itself, and without rebooting the machine.

  2. Fluid-solid interaction: benchmarking of an external coupling of ANSYS with CFX for cardiovascular applications.

    PubMed

    Hose, D R; Lawford, P V; Narracott, A J; Penrose, J M T; Jones, I P

    2003-01-01

    Fluid-solid interaction is a primary feature of cardiovascular flows. There is increasing interest in the numerical solution of these systems as the extensive computational resource required for such studies becomes available. One form of coupling is an external weak coupling of separate solid and fluid mechanics codes. Information about the stress tensor and displacement vector at the wetted boundary is passed between the codes, and an iterative scheme is employed to move towards convergence of these parameters at each time step. This approach has the attraction that separate codes with the most extensive functionality for each of the separate phases can be selected, which might be important in the context of the complex rheology and contact mechanics that often feature in cardiovascular systems. Penrose and Staples describe a weak coupling of CFX for computational fluid mechanics to ANSYS for solid mechanics, based on a simple Jacobi iteration scheme. It is important to validate the coupled numerical solutions. An extensive analytical study of flow in elastic-walled tubes was carried out by Womersley in the late 1950s. This paper describes the performance of the coupling software for the straight elastic-walled tube, and compares the results with Womersley's analytical solutions. It also presents preliminary results demonstrating the application of the coupled software in the context of a stented vessel.
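
    The weak-coupling scheme being benchmarked alternates separate solvers until the interface data agree. A toy fixed-point version with under-relaxation, using scalar stand-ins for the CFD and FE solves (all constitutive numbers invented), is:

    ```python
    def fluid_solve(d):
        """Stand-in for the CFD step: wall pressure falls as the wall moves out."""
        return 1000.0 - 4.0e4 * d          # Pa; toy constitutive numbers (invented)

    def solid_solve(p):
        """Stand-in for the FE step: wall displacement proportional to pressure."""
        return p / 2.0e5                   # m; toy stiffness (invented)

    d, omega = 0.0, 0.5                    # initial interface displacement, relaxation
    for it in range(100):
        d_new = (1 - omega) * d + omega * solid_solve(fluid_solve(d))
        if abs(d_new - d) < 1e-12:
            break
        d = d_new
    print(it, d, fluid_solve(d))           # converged interface displacement/pressure
    ```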

  3. Topics on data transmission problem in software definition network

    NASA Astrophysics Data System (ADS)

    Gao, Wei; Liang, Li; Xu, Tianwei; Gan, Jianhou

    2017-08-01

    In normal computer networks, the data transmission between two sites goes through the shortest path between the two corresponding vertices. However, in the setting of a software definition network (SDN), the network traffic flow at each site and channel is monitored in a timely manner, and the data transmission path between two sites in the SDN should take into account the congestion in the current network. Hence, the difference between the available data transmission theory for a normal computer network and for a software definition network is that in the SDN we must consider forbidden graph structures: these forbidden subgraphs represent the sites and channels through which data cannot pass owing to serious congestion. Inspired by a theoretical analysis of available data transmission in SDN, we consider some computational problems from the perspective of graph theory. Several results determined in the paper imply sufficient conditions for data transmission in SDN in various graph settings.
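
    In graph terms, the forbidden-subgraph condition amounts to routing around congested elements. A minimal sketch with networkx, on a toy topology where congested channels are simply removed before the shortest-path query:

    ```python
    import networkx as nx

    # Toy SDN view: sites are vertices, channels are weighted edges; channels
    # flagged as congested are forbidden, so route on the residual graph.
    G = nx.Graph()
    G.add_weighted_edges_from([("A", "B", 1), ("B", "D", 1), ("A", "C", 2), ("C", "D", 2)])

    congested = [("B", "D")]              # channels ruled out by traffic monitoring
    H = G.copy()
    H.remove_edges_from(congested)

    print(nx.shortest_path(G, "A", "D", weight="weight"))   # ['A', 'B', 'D']
    print(nx.shortest_path(H, "A", "D", weight="weight"))   # ['A', 'C', 'D']
    ```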

  4. A Robust and Scalable Software Library for Parallel Adaptive Refinement on Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Lou, John Z.; Norton, Charles D.; Cwik, Thomas A.

    1999-01-01

    The design and implementation of Pyramid, a software library for performing parallel adaptive mesh refinement (PAMR) on unstructured meshes, is described. This software library can be easily used in a variety of unstructured parallel computational applications, including parallel finite element, parallel finite volume, and parallel visualization applications using triangular or tetrahedral meshes. The library contains a suite of well-designed and efficiently implemented modules that perform operations in a typical PAMR process. Among these are mesh quality control during successive parallel adaptive refinement (typically guided by a local-error estimator), parallel load-balancing, and parallel mesh partitioning using the ParMeTiS partitioner. The Pyramid library is implemented in Fortran 90 with an interface to the Message-Passing Interface (MPI) library, supporting code efficiency, modularity, and portability. An EM waveguide filter application, adaptively refined using the Pyramid library, is illustrated.

  5. Three-month performance evaluation of the Nanometrics, Inc., Libra Satellite Seismograph System in the northern California Seismic Network

    USGS Publications Warehouse

    Oppenheimer, David H.

    2000-01-01

    In 1999 the Northern California Seismic Network (NCSN) purchased a Libra satellite seismograph system from Nanometrics, Inc. to assess whether this technology was a cost-effective and robust replacement for their analog microwave system. The system was purchased subject to its meeting the requirements, criteria, and tests described in Appendix A. In early 2000, Nanometrics began delivery of various components of the system, such as the hub and remote satellite dish and mounting hardware, and the NCSN installed and assembled most equipment in advance of the arrival of Nanometrics engineers to facilitate the configuration of the system. The hub was installed in its permanent location, but for logistical reasons the "remote" satellite hardware was initially configured at the NCSN for testing. During the first week of April, Nanometrics engineers came to Menlo Park to configure the system and train NCSN staff. The two dishes were aligned with the satellite, and the system was fully operational in 2 days with few problems. Nanometrics engineers spent the remaining 3 days providing hands-on training to NCSN staff in hardware/software operation, configuration, and maintenance. During the second week of April 2000, NCSN staff moved the entire remote system of digitizers, dish assembly, and mounting hardware to Mammoth Lakes, California. The system was reinstalled at the Mammoth Lakes water treatment plant and communications were successfully reestablished with the hub via the satellite on 14 April 2000. The system has been in continuous operation since then. This report reviews the performance of the Libra system for the three-month period 20 April 2000 through 20 July 2000. The purpose of the report is to assess whether the system passed the acceptance tests described in Appendix A. We examine all data gaps reported by NCSN "gap list" software and discuss their cause.

  6. How to study the Doppler effect with Audacity software

    NASA Astrophysics Data System (ADS)

    Adriano Dias, Marco; Simeão Carvalho, Paulo; Rodrigues Ventura, Daniel

    2016-05-01

    The Doppler effect is one of the recurring themes in college and high school classes. In order to contextualize the topic and engage students in their own learning process, we propose a simple and easily accessible activity: the analysis by students of videos available on the internet. The sound of the engine of a vehicle passing the camera is recorded on the video; it is then analyzed with the free software Audacity by measuring the frequency of the sound as the vehicle approaches and then recedes from the observer. The speed of the vehicle is determined by applying the Doppler effect equations for acoustic waves.
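
    The speed extraction reduces to one formula: for a source moving past a stationary observer, the approach and recede frequencies f_a and f_r give v = c(f_a - f_r)/(f_a + f_r). A sketch with made-up frequencies of the kind one reads off an Audacity spectrum:

    ```python
    C_SOUND = 343.0                        # speed of sound in air at ~20 C, m/s

    def vehicle_speed(f_approach, f_recede):
        """Source speed from the two shifted frequencies heard by a stationary
        observer: f_a = f0*c/(c - v) and f_r = f0*c/(c + v) eliminate f0."""
        return C_SOUND * (f_approach - f_recede) / (f_approach + f_recede)

    v = vehicle_speed(440.0, 392.0)        # made-up values read off a spectrum plot
    print(f"{v:.1f} m/s = {3.6 * v:.0f} km/h")   # ~19.8 m/s, about 71 km/h
    ```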

  7. LongISLND: in silico sequencing of lengthy and noisy datatypes.

    PubMed

    Lau, Bayo; Mohiyuddin, Marghoob; Mu, John C; Fang, Li Tai; Bani Asadi, Narges; Dallett, Carolina; Lam, Hugo Y K

    2016-12-15

    LongISLND is a software package designed to simulate sequencing data according to the characteristics of third generation, single-molecule sequencing technologies. The general software architecture is easily extendable, as demonstrated by the emulation of Pacific Biosciences (PacBio) multi-pass sequencing with P5 and P6 chemistries, producing data in FASTQ, H5, and the latest PacBio BAM format. We demonstrate its utility by downstream processing with consensus building and variant calling. LongISLND is implemented in Java and available at http://bioinform.github.io/longislnd. Contact: hugo.lam@roche.com. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  8. G.I. Joe Meets Barbie, Software Engineer Meets Caregiver: Males and Females in B.C.'s Public Schools and Beyond. BCTF Research Report.

    ERIC Educational Resources Information Center

    Schaefer, Anne C.

    Following a referral from the March 2000 Annual General Meeting of the British Columbia (B.C.) Teachers' Federation, the Spring 2000 Representative Assembly passed a motion that recommended research be collected, conducted, and disseminated on the current status of students in the province. This research report identifies current information on…

  9. Evaluation of Procedures for Backcalculation of Airfield Pavement Moduli

    DTIC Science & Technology

    2015-08-01

    to develop pavement design and structural evaluation criteria, procedures, and software to ensure that its airfield pavements can support mission...aircraft. As tire pressures and aircraft weights have increased steadily during this time, the design and evaluation software, Pavement-Transportation...the remaining life for the pavement in terms of remaining pavement life (passes-to-failure) or allowable gross aircraft loads, and also to design

  10. Writing in the Disciplines versus Corporate Workplaces: On the Importance of Conflicting Disciplinary Discourses in the Open Source Movement and the Value of Intellectual Property

    ERIC Educational Resources Information Center

    Ballentine, Brian D.

    2009-01-01

    Writing programs and, more specifically, Writing in the Disciplines (WID) initiatives have begun to embrace the use of, and the ideology inherent to, open source software. The Conference on College Composition and Communication has passed a resolution stating that, whenever feasible, educators and their institutions should consider open source applications.…

  11. Investigation for improving Global Positioning System (GPS) orbits using a discrete sequential estimator and stochastic models of selected physical processes

    NASA Technical Reports Server (NTRS)

    Goad, Clyde C.; Chadwell, C. David

    1993-01-01

    GEODYNII is a conventional batch least-squares differential-corrector computer program with deterministic models of the physical environment. Conventional algorithms were used to process differenced phase and pseudorange data to determine eight-day Global Positioning System (GPS) orbits with several-meter accuracy. However, random physical processes drive the errors whose magnitudes prevent improving the GPS orbit accuracy. To improve the orbit accuracy, these random processes should be modeled stochastically. The conventional batch least-squares algorithm cannot accommodate stochastic models; only a stochastic estimation algorithm is suitable, such as a sequential filter/smoother. Also, GEODYNII cannot currently model the correlation among data values. Differenced pseudorange, and especially differenced phase, are precise data types that can be used to improve the GPS orbit precision. To overcome these limitations and improve the accuracy of GPS orbits computed using GEODYNII, we proposed to develop a sequential stochastic filter/smoother processor by using GEODYNII as a type of trajectory preprocessor. Our proposed processor is now complete. It contains a correlated double-differenced range processing capability, first-order Gauss-Markov models for the solar radiation pressure scale coefficient and y-bias acceleration, and a random walk model for the tropospheric refraction correction. The development approach was to interface the standard GEODYNII output files (measurement partials and variationals) with software modules containing the stochastic estimator, the stochastic models, and a double-differenced phase range processing routine. Thus, no modifications to the original GEODYNII software were required. The observational data are edited in the preprocessor and passed to GEODYNII as one of its standard data types. A reference orbit is determined using GEODYNII as a batch least-squares processor, and the GEODYNII measurement partial (FTN90) and variational (FTN80, V-matrix) files are generated. These two files, along with a control statement file and a satellite identification and mass file, are passed to the filter/smoother to estimate time-varying parameter states at each epoch, improved satellite initial elements, and improved estimates of constant parameters.
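
    For readers unfamiliar with the stochastic models named above, a short simulation of a first-order Gauss-Markov process and a random walk (the discretizations are standard; the time constant, step, and noise levels are placeholders) is:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    dt, n = 30.0, 1000                   # epoch step [s] and count (placeholders)

    tau, sigma = 3600.0, 1.0             # FOGM time constant and steady-state sigma
    phi = np.exp(-dt / tau)
    fogm = np.zeros(n)
    for k in range(1, n):                # x_k = phi*x_{k-1} + w, Var(w) = sigma^2*(1 - phi^2)
        fogm[k] = phi * fogm[k - 1] + rng.normal(0.0, sigma * np.sqrt(1 - phi ** 2))

    walk = np.cumsum(rng.normal(0.0, 0.05, n))   # random walk (troposphere-style)
    print(fogm.std(), walk[-1])
    ```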

  12. Signal processing method of the diameter measurement system based on CCD parallel light projection method

    NASA Astrophysics Data System (ADS)

    Song, Qing; Zhu, Sijia; Yan, Han; Wu, Wenqian

    2008-03-01

    The parallel light projection method for diameter measurement projects the workpiece to be measured onto the photosensitive units of a CCD, but the original signal output from the CCD cannot be used directly for counting or measurement. The weak signal with high-frequency noise must first be filtered and amplified. This paper introduces an RC low-pass filter and a multiple-feedback second-order low-pass filter with infinite gain. Additionally, there is always dispersion in the light band, and the output signal has a transition between the irradiated area and the shadow because of the instability of the light source intensity and imperfect adjustment of the light system. To obtain exactly the shadow size related to the workpiece diameter, binary-value processing is necessary to achieve a square wave. The comparison method and the differential method can be adopted for binary-value processing. There are two ways to set the threshold value when using a voltage comparator: the fixed-level method and the floated-level method; the latter has higher accuracy. The differential method first outputs two spike pulses of opposite polarity from the rising and falling edges of the video signal through the differential circuit; then the rising edge of the differential circuit's output is extracted by a half-wave rectifying circuit. After traveling through the zero-crossing comparator and the maintain-resistance edge trigger, the square wave indicating the measured size is finally acquired; it is then used for filling with standard pulses and for counting through the counter. Data acquisition and information processing are accomplished by the computer and the control software. This paper introduces in detail the design and analysis of the filter circuit, the binary-value processing circuit, and the interface circuit to the computer.
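
    The RC low-pass stage has a one-line discrete-time equivalent, which makes the filtering step easy to prototype before committing to hardware. A sketch with illustrative sampling rate and cutoff:

    ```python
    import numpy as np

    def rc_lowpass(x, fc, fs):
        """First-order RC low-pass: y[n] = y[n-1] + alpha * (x[n] - y[n-1])."""
        alpha = 1.0 / (1.0 + fs / (2 * np.pi * fc))   # alpha = dt / (RC + dt)
        y = np.zeros_like(x)
        for n in range(1, len(x)):
            y[n] = y[n - 1] + alpha * (x[n] - y[n - 1])
        return y

    fs = 100_000.0
    t = np.arange(0, 0.01, 1 / fs)
    noisy = np.sign(np.sin(2 * np.pi * 500 * t)) \
        + 0.3 * np.random.default_rng(0).normal(size=t.size)
    print(rc_lowpass(noisy, fc=2000.0, fs=fs)[:5])    # smoothed CCD-style edge signal
    ```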

  13. PASS--Placement/Advisement for Student Success.

    ERIC Educational Resources Information Center

    Shreve, Chuck; Wildie, Avace

    In 1985-86, Northern Michigan College (NMC) used funds received from the United States Department of Education to develop a system of assessment, advisement, and placement--Placement/Advisement for Student Success (PASS), an integrated system designed to improve student retention. PASS currently consists of three components: summer orientation,…

  14. WE-D-BRA-06: IMRT QA with ArcCHECK: The MD Anderson Experience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aristophanous, M; Suh, Y; Chi, P

    Purpose: The objective of this project was to report our initial IMRT QA results and experience with the Sun Nuclear ArcCHECK. Methods: Three thousand one hundred sixteen cases were treated with IMRT or VMAT at our institution between October 2013 and September 2014. All IMRT/VMAT treatment plans underwent quality assurance (QA) using ArcCHECK prior to therapy. For clinical evaluation, a gamma analysis is performed following QA delivery using the SNC Patient software (Sun Nuclear Corp) at the 3%/3mm level. QA gamma pass rates were analyzed by treatment site, technique, and type of MLCs. Our current clinical threshold for passing a QA (Tclin) is set at a gamma pass rate greater than 90%. We recorded the percent of failures for each category, as well as the gamma pass rate threshold that would result in 95% of QAs passing (T95). Results: Using Tclin, a failure rate of 5.9% over all QAs was observed. The highest failure rate was observed for gynecological (22%) and the lowest for CNS (0.9%) treatments. T95 was 91% over all QAs and ranged from 73% (gynecological) to 96.5% (CNS) for individual treatment sites. T95 was lower for IMRT and non-HD (high definition) MLCs at 88.5% and 94.5%, respectively, compared to 92.4% and 97.1% for VMAT and HD MLC treatments, respectively. There was a statistically significant difference between the passing rates for IMRT vs. VMAT and for HD MLCs vs. non-HD MLCs (p-values << 0.01). Gynecological, IMRT, and HD MLC treatments typically include more plans with larger field sizes. Conclusion: On average, Tclin with ArcCHECK was consistent with T95, as well as the 90% action level reported in TG-119. However, significant variations between the examined categories suggest a link between field size and QA passing rates and may warrant field-size-specific passing rate thresholds.

  15. Sample registration software for process automation in the Neutron Activation Analysis (NAA) Facility in Malaysia nuclear agency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rahman, Nur Aira Abd, E-mail: nur-aira@nuclearmalaysia.gov.my; Yussup, Nolida; Ibrahim, Maslina Bt. Mohd

    Neutron Activation Analysis (NAA) has been established in Nuclear Malaysia since the 1980s. Most of the established procedures, including sample registration, were carried out manually. Samples were recorded manually in a logbook and given an ID number; all samples, standards, SRM, and blanks were then recorded on the irradiation vial and on several forms prior to irradiation. These manual procedures carried out by the NAA laboratory personnel were time-consuming and inefficient. Sample registration software was developed as part of the IAEA/CRP project on 'Development of Process Automation in the Neutron Activation Analysis (NAA) Facility in Malaysia Nuclear Agency (RC17399)'. The objective of the project is to create PC-based data entry software for the sample preparation stage. This is an effective method to replace the redundant manual data entries that had to be completed by laboratory personnel. The software automatically generates a sample code for each sample in a batch, creates printable registration forms for administration purposes, and stores selected parameters that are passed to the sample analysis program. The software is developed using National Instruments LabVIEW 8.6.
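
    The batch-registration idea lends itself to a short sketch. The Python fragment below assigns sequential codes to one batch and writes a registration file; the code format (BATCH-NNN) and CSV storage are assumptions for illustration only, not the Agency's actual scheme (which is implemented in LabVIEW).

        import csv

        def register_batch(batch_id, samples, path="registration.csv"):
            """Assign sequential codes to a batch and store the records."""
            records = []
            for i, sample in enumerate(samples, start=1):
                code = f"{batch_id}-{i:03d}"       # e.g. NAA2024-001
                records.append({"code": code, **sample})
            with open(path, "w", newline="") as fh:
                writer = csv.DictWriter(fh, fieldnames=records[0].keys())
                writer.writeheader()
                writer.writerows(records)
            return [r["code"] for r in records]

        # Usage (hypothetical batch):
        # register_batch("NAA2024", [{"type": "sample"}, {"type": "SRM"},
        #                            {"type": "blank"}])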

  16. Mississippi State Axion Search

    NASA Astrophysics Data System (ADS)

    Madsen, Kris; Mississippi State Axion Search Collaboration

    2013-10-01

    Ever since the Peccei-Quinn Theory was proposed in 1977 as a possible solution to the strong CP problem, the therein postulated axion, a weakly interacting boson, has been much sought after. The Mississippi State Axion Search is an attempt to improve the limit in the mass-coupling parameter space by using a variation of the Light Shining Through a Wall (LSW) technique. A vacuum-sealed and RF-shielded cavity is partitioned by a lead wall. EM waves at a frequency between 420 and 430 MHz are amplified by SR-550 and SR-510 amplifiers, broadcast from an antenna on one side of the lead wall, and passed through an intense magnetic field. Theory predicts that in the presence of such a magnetic field, axions can be produced from photons via the Primakoff effect. Any axions generated will pass unimpeded to the other half of the cavity, regenerate into photons, and be detected as an excess in the signal picked up by the antenna on the far side. Data acquisition is handled by LabVIEW-based software running Measurement Computing drivers for two PCI DAQ cards: the DAS-08 handles the analog signals from the receiving antenna and monitors vital statistics in the cavity, while the DIO-24 provides the 1 kHz timing TTL pulse and allows remote control of the experiment's systems.

  17. Combinatorial games with a pass: a dynamical systems approach.

    PubMed

    Morrison, Rebecca E; Friedman, Eric J; Landsberg, Adam S

    2011-12-01

    By treating combinatorial games as dynamical systems, we are able to address a longstanding open question in combinatorial game theory, namely, how the introduction of a "pass" move into a game affects its behavior. We consider two well known combinatorial games, 3-pile Nim and 3-row Chomp. In the case of Nim, we observe that the introduction of the pass dramatically alters the game's underlying structure, rendering it considerably more complex, while for Chomp, the pass move is found to have relatively minimal impact. We show how these results can be understood by recasting these games as dynamical systems describable by dynamical recursion relations. From these recursion relations, we are able to identify underlying structural connections between these "games with passes" and a recently introduced class of "generic (perturbed) games." This connection, together with a (non-rigorous) numerical stability analysis, allows one to understand and predict the effect of a pass on a game.

  18. Verification of Faulty Message Passing Systems with Continuous State Space in PVS

    NASA Technical Reports Server (NTRS)

    Pilotto, Concetta; White, Jerome

    2010-01-01

    We present a library of Prototype Verification System (PVS) meta-theories that verifies a class of distributed systems in which agent communication is through message passing. The theoretical work consists of iterative schemes for solving systems of linear equations, such as message-passing extensions of the Gauss and Gauss-Seidel methods. We briefly review that work and discuss the challenges in formally verifying it.
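
    For concreteness, the sketch below shows the kind of iterative scheme these theories target: a Jacobi iteration for Ax = b, in which each "agent" owns one row and would, in the distributed setting, receive the other agents' current iterates by message passing. Sequential NumPy stands in for the messaging layer here.

        import numpy as np

        def jacobi(A, b, iters=100):
            """Jacobi iteration for Ax = b.

            Converges for strictly diagonally dominant A."""
            A = np.asarray(A, dtype=float)
            b = np.asarray(b, dtype=float)
            x = np.zeros_like(b)
            D = np.diag(A)                 # each agent's own coefficient
            R = A - np.diagflat(D)         # contributions from the others
            for _ in range(iters):
                x = (b - R @ x) / D        # one synchronized update round
            return x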

  19. The US Navy Coastal Surge and Inundation Prediction System (CSIPS): Making Forecasts Easier

    DTIC Science & Technology

    2013-02-14

    [Extraction residue from briefing slides: tables of peak water level percent error for baseline simulations and wave sensitivity studies at LAWMA (Amerada Pass), Freshwater Canal Locks, Calcasieu Pass, and Sabine Pass, comparing CD formulations against high-water-mark observations.]

  20. Feasibility of magnetic resonance imaging-guided liver stereotactic body radiation therapy: A comparison between modulated tri-cobalt-60 teletherapy and linear accelerator-based intensity modulated radiation therapy.

    PubMed

    Kishan, Amar U; Cao, Minsong; Wang, Pin-Chieh; Mikaeilian, Argin G; Tenn, Stephen; Rwigema, Jean-Claude M; Sheng, Ke; Low, Daniel A; Kupelian, Patrick A; Steinberg, Michael L; Lee, Percy

    2015-01-01

    The purpose of this study was to investigate the dosimetric feasibility of liver stereotactic body radiation therapy (SBRT) using a teletherapy system equipped with 3 rotating (60)Co sources (tri-(60)Co system) and a built-in magnetic resonance imager (MRI). We hypothesized tumor size and location would be predictive of favorable dosimetry with tri-(60)Co SBRT. The primary study population consisted of 11 patients treated with SBRT for malignant hepatic lesions whose linear accelerator (LINAC)-based SBRT plans met all mandatory Radiation Therapy Oncology Group (RTOG) 1112 organ-at-risk (OAR) constraints. The secondary study population included 5 additional patients whose plans did not meet the mandatory constraints. Patients received 36 to 60 Gy in 3 to 5 fractions. Tri-(60)Co system SBRT plans were planned with ViewRay system software. All patients in the primary study population had tri-(60)Co SBRT plans that passed all RTOG constraints, with similar planning target volume coverage and OAR doses to LINAC plans. Mean liver doses and V10Gy to the liver, although easily meeting RTOG 1112 guidelines, were significantly higher with tri-(60)Co plans. When the 5 additional patients were included in a univariate analysis, the tri-(60)Co SBRT plans were still equally able to pass RTOG constraints, although they did have inferior ability to pass more stringent liver and kidney constraints (P < .05). A multivariate analysis found the ability of a tri-(60)Co SBRT plan to meet these constraints depended on lesion location and size. Patients with smaller or more peripheral lesions (as defined by distance from the aorta, chest wall, liver dome, and relative lesion volume) were significantly more likely to have tri-(60)Co plans that spared the liver and kidney as well as LINAC plans did (P < .05). It is dosimetrically feasible to perform liver SBRT with a tri-(60)Co system with a built-in MRI. Patients with smaller or more peripheral lesions are more likely to have optimal liver and kidney sparing, with the added benefit of MRI guidance, when receiving tri-(60)Co-based SBRT. Copyright © 2015 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.

  1. Observing System Simulation Experiment (OSSE) for the HyspIRI Spectrometer Mission

    NASA Technical Reports Server (NTRS)

    Turmon, Michael J.; Block, Gary L.; Green, Robert O.; Hua, Hook; Jacob, Joseph C.; Sobel, Harold R.; Springer, Paul L.; Zhang, Qingyuan

    2010-01-01

    The OSSE software provides an integrated end-to-end environment to simulate an Earth observing system by iteratively running a distributed modeling workflow based on the HyspIRI Mission, including atmospheric radiative transfer, surface albedo effects, detection, and retrieval for agile exploration of the mission design space. The software enables an Observing System Simulation Experiment (OSSE) and can be used for design trade space exploration of science return for proposed instruments by modeling the whole ground truth, sensing, and retrieval chain and to assess retrieval accuracy for a particular instrument and algorithm design. The OSSE infrastructure is extensible to future National Research Council (NRC) Decadal Survey concept missions where integrated modeling can improve the fidelity of coupled science and engineering analyses for systematic analysis and science return studies. This software has a distributed architecture that gives it a distinct advantage over other similar efforts. The workflow modeling components are typically legacy computer programs implemented in a variety of programming languages, including MATLAB, Excel, and FORTRAN. Integration of these diverse components is difficult and time-consuming. In order to hide this complexity, each modeling component is wrapped as a Web Service, and each component is able to pass analysis parameterizations, such as reflectance or radiance spectra, on to the next component downstream in the service workflow chain. In this way, the interface to each modeling component becomes uniform and the entire end-to-end workflow can be run using any existing or custom workflow processing engine. The architecture lets users extend workflows as new modeling components become available, chain together the components using any existing or custom workflow processing engine, and distribute them across any Internet-accessible Web Service endpoints. The workflow components can be hosted on any Internet-accessible machine. This has the advantages that the computations can be distributed to make best use of the available computing resources, and each workflow component can be hosted and maintained by its respective domain experts.
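
    The uniform-interface idea can be illustrated with a toy pipeline in which each stage is a callable that passes its product (e.g., a spectrum) downstream. The real OSSE stages are Web Services, and the stage names and transformations below are hypothetical placeholders.

        def run_workflow(stages, payload):
            """Chain components: the output of each stage feeds the next."""
            for stage in stages:
                payload = stage(payload)
            return payload

        # Hypothetical stage names, for illustration only.
        def radiative_transfer(spectrum):
            return [s * 0.9 for s in spectrum]      # attenuate

        def sensor_model(spectrum):
            return [s + 0.01 for s in spectrum]     # add a sensor offset

        def retrieval(spectrum):
            return sum(spectrum) / len(spectrum)    # toy retrieved quantity

        result = run_workflow([radiative_transfer, sensor_model, retrieval],
                              [1.0, 0.8, 0.6])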

  2. Wide-bandwidth high-resolution search for extraterrestrial intelligence

    NASA Technical Reports Server (NTRS)

    Horowitz, Paul

    1993-01-01

    A third antenna was added to the system. It is a terrestrial low-gain feed, to act as a veto for local interference. The 3-chip design for a 4 megapoint complex FFT was reduced to finished working hardware. The 4-Megachannel circuit board contains 36 MByte of DRAM, 5 CPLDs, the three large FFT ASICs, and 74 ICs in all. The Austek FDP-based Spectrometer/Power Accumulator (SPA) has now been implemented as a 4-layer printed circuit. A PC interface board has been designed and together with its associated user interface and control software allows an IBM compatible computer to control the SPA board, and facilitates the transfer of spectra to the PC for display, processing, and storage. The Feature Recognizer Array cards receive the stream of modulus words from the 4M FFT cards, and forward a greatly thinned set of reports to the PC's in whose backplane they reside. In particular, a powerful ROM-based state-machine architecture has been adopted, and DRAM has been added to permit integration modes when tracking or reobserving source candidates. The general purpose (GP) array consists of twenty '486 PC class computers, each of which receives and processes the data from a feature extractor/correlator board set. The array performs a first analysis on the provided 'features' and then passes this information on to the workstation. The core workstation software is now written. That is, the communication channels between the user interface, the backend monitor program and the PC's have working software.

  3. Non-cycloplegic spherical equivalent refraction in adults: comparison of the double-pass system, retinoscopy, subjective refraction and a table-mounted autorefractor.

    PubMed

    Vilaseca, Meritxell; Arjona, Montserrat; Pujol, Jaume; Peris, Elvira; Martínez, Vanessa

    2013-01-01

    To evaluate the accuracy of spherical equivalent (SE) estimates of a double-pass system and to compare it with retinoscopy, subjective refraction and a table-mounted autorefractor. Non-cycloplegic refraction was performed on 125 eyes of 65 healthy adults (age 23.5±3.0 years) from October 2010 to January 2011 using retinoscopy, subjective refraction, autorefraction (Auto kerato-refractometer TOPCON KR-8100, Japan) and a double-pass system (Optical Quality Analysis System, OQAS, Visiometrics S.L., Spain). Nine consecutive measurements with the double-pass system were performed on a subgroup of 22 eyes to assess repeatability. To evaluate the trueness of the OQAS instrument, the SE laboratory bias between the double-pass system and the other techniques was calculated. The SE mean coefficient of repeatability obtained was 0.22D. Significant correlations could be established between the OQAS and the SE obtained with retinoscopy (r=0.956, P<0.001), subjective refraction (r=0.955, P<0.001) and autorefraction (r=0.957, P<0.001). The differences in SE between the double-pass system and the other techniques were significant (P<0.001) but lacked clinical relevance, except for retinoscopy: retinoscopy gave more hyperopic values than the double-pass system (-0.51±0.50D), as did subjective refraction (-0.23±0.50D), while more myopic values were obtained with autorefraction (0.24±0.49D). The double-pass system provides accurate and reliable estimates of the SE that can be used for clinical studies. This technique can determine the correct focus position to assess the ocular optical quality. However, it has a relatively small measuring range in comparison with autorefractors (-8.00 to +5.00D), and requires prior information on the refractive state of the patient.
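
    As a hedged illustration of the repeatability figure, the sketch below computes a coefficient of repeatability from repeated SE readings, assuming the common convention CoR = 2.77·Sw, with Sw the within-subject standard deviation pooled over eyes; the paper may use a different convention.

        import numpy as np

        def coefficient_of_repeatability(replicates):
            """replicates: 2-D array-like, one row of repeated SE readings per eye."""
            replicates = np.asarray(replicates, dtype=float)
            within_var = replicates.var(axis=1, ddof=1).mean()  # pooled within-eye variance
            return 2.77 * np.sqrt(within_var)                   # CoR = 2.77 * Sw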

  4. Non-cycloplegic spherical equivalent refraction in adults: comparison of the double-pass system, retinoscopy, subjective refraction and a table-mounted autorefractor

    PubMed Central

    Vilaseca, Meritxell; Arjona, Montserrat; Pujol, Jaume; Peris, Elvira; Martínez, Vanessa

    2013-01-01

    AIM To evaluate the accuracy of spherical equivalent (SE) estimates of a double-pass system and to compare it with retinoscopy, subjective refraction and a table-mounted autorefractor. METHODS Non-cycloplegic refraction was performed on 125 eyes of 65 healthy adults (age 23.5±3.0 years) from October 2010 to January 2011 using retinoscopy, subjective refraction, autorefraction (Auto kerato-refractometer TOPCON KR-8100, Japan) and a double-pass system (Optical Quality Analysis System, OQAS, Visiometrics S.L., Spain). Nine consecutive measurements with the double-pass system were performed on a subgroup of 22 eyes to assess repeatability. To evaluate the trueness of the OQAS instrument, the SE laboratory bias between the double-pass system and the other techniques was calculated. RESULTS The SE mean coefficient of repeatability obtained was 0.22D. Significant correlations could be established between the OQAS and the SE obtained with retinoscopy (r=0.956, P<0.001), subjective refraction (r=0.955, P<0.001) and autorefraction (r=0.957, P<0.001). The differences in SE between the double-pass system and the other techniques were significant (P<0.001) but lacked clinical relevance, except for retinoscopy: retinoscopy gave more hyperopic values than the double-pass system (-0.51±0.50D), as did subjective refraction (-0.23±0.50D), while more myopic values were obtained with autorefraction (0.24±0.49D). CONCLUSION The double-pass system provides accurate and reliable estimates of the SE that can be used for clinical studies. This technique can determine the correct focus position to assess the ocular optical quality. However, it has a relatively small measuring range in comparison with autorefractors (-8.00 to +5.00D), and requires prior information on the refractive state of the patient. PMID:24195036

  5. Assessment of soil compaction properties based on surface wave techniques

    NASA Astrophysics Data System (ADS)

    Jihan Syamimi Jafri, Nur; Rahim, Mohd Asri Ab; Zahid, Mohd Zulham Affandi Mohd; Faizah Bawadi, Nor; Munsif Ahmad, Muhammad; Faizal Mansor, Ahmad; Omar, Wan Mohd Sabki Wan

    2018-03-01

    Soil compaction plays an important role in all construction activities to reduce the risk of damage. Traditional methods of assessing compaction, such as field tests and invasive penetration tests of compacted areas, have great limitations and make evaluating large areas time-consuming. Thus, this study proposed the possibility of using a non-invasive surface wave method, Multi-channel Analysis of Surface Waves (MASW), as a useful tool for assessing soil compaction. The aim of this study was to determine the shear wave velocity profiles and field density of compacted soils under varying compaction efforts by using the MASW method. Pre- and post-compaction MASW surveys were conducted at Pauh Campus, UniMAP, after applying rolling compaction with varying numbers of passes (2, 6 and 10). Each seismic record was acquired with a GEODE seismograph. A sand replacement test was conducted for each survey line to obtain the field density data. All seismic data were processed using SeisImager/SW software. The results show that the shear wave velocity profiles increase with the number of passes from 0 to 6 passes, but decrease after 10 passes. This method could attract the interest of the geotechnical community, as it can be an alternative tool to the standard test for assessing soil compaction in field operations.

  6. Injecting Errors for Testing Built-In Test Software

    NASA Technical Reports Server (NTRS)

    Gender, Thomas K.; Chow, James

    2010-01-01

    Two algorithms have been conceived to enable automated, thorough testing of built-in test (BIT) software. The first algorithm applies to BIT routines that define pass/fail criteria based on values of data read from such hardware devices as memories, input ports, or registers. This algorithm simulates the effects of errors in a device under test by (1) intercepting data from the device and (2) performing AND operations between the data and the data mask specific to the device. This operation yields values not expected by the BIT routine. This algorithm entails very small, permanent instrumentation of the software under test (SUT) for performing the AND operations. The second algorithm applies to BIT programs that provide services to users' application programs via commands or callable interfaces, and requires a capability for test-driver software to read and write the memory used in execution of the SUT. This algorithm identifies all SUT code execution addresses where errors are to be injected, then temporarily replaces the code at those addresses with small test code sequences to inject latent severe errors, then determines whether, as desired, the SUT detects the errors and recovers.
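
    A minimal sketch of the first algorithm, with assumed device names and masks: a read from the device is intercepted and ANDed with a device-specific mask so that the BIT routine sees a value it does not expect.

        # Assumed device names and masks, for illustration only.
        DEVICE_ERROR_MASKS = {"status_reg": 0xF0, "mem_word": 0x0F}

        def read_with_injection(device, raw_value, inject=False):
            """Return the device value, optionally corrupted by the AND mask."""
            if inject:
                return raw_value & DEVICE_ERROR_MASKS[device]
            return raw_value

        # A BIT check expecting 0xFF from status_reg now sees 0xF0 and must
        # detect the simulated fault.
        assert read_with_injection("status_reg", 0xFF, inject=True) == 0xF0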

  7. Comparative assessment of software for non-targeted data analysis in the study of volatile fingerprint changes during storage of a strawberry beverage.

    PubMed

    Morales, M L; Callejón, R M; Ordóñez, J L; Troncoso, A M; García-Parrilla, M C

    2017-11-03

    Five free software packages were compared to assess their utility for the non-targeted study of changes in the volatile profile during the storage of a novel strawberry beverage. AMDIS coupled to Gavin software turned out to be easy to use, required minimal handling for subsequent data treatment, and gave results most similar to those obtained by manual integration. However, AMDIS coupled to SpectConnect software provided more information for the study of volatile profile changes during the storage of the strawberry beverage. During storage, the volatile profile changed, differentiating the strawberry beverages stored at different temperatures, and this difference increased as time passed; these results were also supported by PCA. As expected, it seems that cold temperature is the best means of preservation for this product during long-term storage. Variable Importance in the Projection (VIP) and correlation scores pointed to four volatile compounds as potential shelf-life markers for our strawberry beverage: 2-phenylethyl acetate, decanoic acid, γ-decalactone and furfural. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Performance of single-pass and by-pass multi-step multi-soil-layering systems for low-(C/N)-ratio polluted river water treatment.

    PubMed

    Wei, Cai-Jie; Wu, Wei-Zhong

    2018-09-01

    Two kinds of hybrid two-step multi-soil-layering (MSL) systems loaded with different filter media (zeolite-ceramsite MSL-1 and ceramsite-red clay MSL-2) were set up for treatment of low-(C/N)-ratio polluted river water. The long-term pollutant removal performance of these two kinds of MSL systems was evaluated for 214 days. A by-pass was employed in the MSL systems to evaluate its effect on enhancing nitrogen removal. The zeolite-ceramsite single-pass MSL-1 system showed outstanding ammonia removal capability (24 g NH₄⁺-N m⁻² d⁻¹), 3 times higher than MSL-2 without zeolite under the low aeration rate condition (0.8 × 10⁴ L m⁻² h⁻¹). An aeration rate up to 1.6 × 10⁴ L m⁻² h⁻¹ fully satisfied the requirement of complete nitrification in the first unit of both MSLs. However, weak denitrification in the second unit was commonly observed. By-passing 50% of the influent into the second unit improved the TN removal rate by about 20% for both MSL-1 and MSL-2. Complete nitrification and denitrification were achieved in the by-pass MSL systems after addition of a carbon source raising the C/N ratio to 2.5. The characteristics of the biofilms distributed in different sections inside the MSL-1 system illustrate the nitrogen removal mechanism inside MSL systems. Both kinds of MSLs are promising as appealing nitrifying biofilm reactors. Recirculation can be considered further for the by-pass MSL-2 system to ensure complete ammonia removal. Copyright © 2018 Elsevier Ltd. All rights reserved.

  9. A Loader for Executing Multi-Binary Applications on the Thinking Machines CM-5: It's Not Just for SPMD Anymore

    NASA Technical Reports Server (NTRS)

    Becker, Jeffrey C.

    1995-01-01

    The Thinking Machines CM-5 platform was designed to run single program, multiple data (SPMD) applications, i.e., to run a single binary across all nodes of a partition, with each node possibly operating on different data. Certain classes of applications, such as multi-disciplinary computational fluid dynamics codes, are facilitated by the ability to have subsets of the partition nodes running different binaries. In order to extend the CM-5 system software to permit such applications, a multi-program loader was developed. This system is based on the dld loader which was originally developed for workstations. This paper provides a high level description of dld, and describes how it was ported to the CM-5 to provide support for multi-binary applications. Finally, it elaborates how the loader has been used to implement the CM-5 version of MPIRUN, a portable facility for running multi-disciplinary/multi-zonal MPI (Message-Passing Interface Standard) codes.

  10. Real-time digital signal recovery for a multi-pole low-pass transfer function system.

    PubMed

    Lee, Jhinhwan

    2017-08-01

    In order to solve the problems of waveform distortion and signal delay by many physical and electrical systems with multi-pole linear low-pass transfer characteristics, a simple digital-signal-processing (DSP)-based method of real-time recovery of the original source waveform from the distorted output waveform is proposed. A mathematical analysis on the convolution kernel representation of the single-pole low-pass transfer function shows that the original source waveform can be accurately recovered in real time using a particular moving average algorithm applied on the input stream of the distorted waveform, which can also significantly reduce the overall delay time constant. This method is generalized for multi-pole low-pass systems and has noise characteristics of the inverse of the low-pass filter characteristics. This method can be applied to most sensors and amplifiers operating close to their frequency response limits to improve the overall performance of data acquisition systems and digital feedback control systems.
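
    For the single-pole case the recovery is a one-line inversion of the filter recursion. The sketch below assumes the discrete model y[n] = a·x[n] + (1-a)·y[n-1]; it does not reproduce the paper's moving-average formulation or the multi-pole generalization, and dividing by a small a amplifies noise, mirroring the noise characteristics noted above.

        import numpy as np

        def recover_single_pole(y, alpha):
            """Invert y[n] = alpha*x[n] + (1-alpha)*y[n-1] sample by sample."""
            y = np.asarray(y, dtype=float)
            x = np.empty_like(y)
            x[0] = y[0]                       # assume steady state at start
            x[1:] = (y[1:] - (1 - alpha) * y[:-1]) / alpha
            return x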

  11. High-performance computing — an overview

    NASA Astrophysics Data System (ADS)

    Marksteiner, Peter

    1996-08-01

    An overview of high-performance computing (HPC) is given. Different types of computer architectures used in HPC are discussed: vector supercomputers, high-performance RISC processors, various parallel computers like symmetric multiprocessors, workstation clusters, massively parallel processors. Software tools and programming techniques used in HPC are reviewed: vectorizing compilers, optimization and vector tuning, optimization for RISC processors; parallel programming techniques like shared-memory parallelism, message passing and data parallelism; and numerical libraries.

  12. Understanding Satellite Characterization Knowledge Gained from Radiometric Data

    DTIC Science & Technology

    2011-09-01

    observation model, the time-resolved pose of a satellite can be estimated autonomously through each pass from non-resolved radiometry. The benefits of... and we assume the satellite can achieve both the set attitude and the necessary maneuver to change its orientation from one time-step to the next... Observation Model: The UKF observation model uses the Time-domain Analysis Simulation for Advanced Tracking (TASAT) software to provide high-fidelity satellite

  13. Efficient Implementation of Multigrid Solvers on Message-Passing Parallel Systems

    NASA Technical Reports Server (NTRS)

    Lou, John

    1994-01-01

    We discuss our implementation strategies for finite-difference multigrid partial differential equation (PDE) solvers on message-passing systems. Our target parallel architectures are Intel parallel computers: the Delta and Paragon systems.

  14. Design and implementation of the tree-based fuzzy logic controller.

    PubMed

    Liu, B D; Huang, C Y

    1997-01-01

    In this paper, a tree-based approach is proposed for designing fuzzy logic controllers. Based on the proposed methodology, the fuzzy logic controller has the following merits: the fuzzy control rules can be extracted automatically from the input-output data of the system, and the extraction can be done in one pass; owing to the fuzzy tree inference structure, the search space of the fuzzy inference process is greatly reduced; the inference process can be simplified to a one-dimensional matrix operation because of the fuzzy tree approach; and the controller has regular and modular properties, so it is easy to implement in hardware. Furthermore, the proposed fuzzy tree approach has been applied to the design of a color reproduction system to verify the methodology. The color reproduction system is mainly used to obtain a color image through the printer that is identical to the original one. In addition to software simulation, an FPGA was used to implement a prototype hardware system for real-time application. Experimental results show that the effect of color correction is quite good and that the prototype hardware system operates correctly at a 30 MHz clock rate.

  15. Feasibility of an in situ measurement device for bubble size and distribution.

    PubMed

    Junker, Beth; Maciejak, Walter; Darnell, Branson; Lester, Michael; Pollack, Michael

    2007-09-01

    The feasibility of an in situ measurement device for bubble size and distribution was explored. A novel in situ probe measurement system, the EnviroCam, was developed. Where possible, this probe incorporated the strengths, and minimized the weaknesses, of historical and currently available real-time bubble measurement methods. The system was based on a digital, high-speed, high-resolution, modular camera system attached to a stainless steel shroud compatible with standard Ingold ports on fermenters. Still frames and/or video were produced, capturing bubbles passing through the notch of the shroud. An LED light source was integral to the shroud. Bubbles were analyzed using customized, commercially available image analysis software and standard statistical methods. Using this system, bubble sizes were measured as a function of various operating parameters (e.g., agitation rate, aeration rate) and media properties (e.g., viscosity, antifoam, cottonseed flour, and microbial/animal cell broths) to demonstrate system performance and its limitations. For selected conditions, mean bubble size changes compared favorably in qualitative terms with published relationships. Current instrument measurement capabilities were limited primarily to clear solutions that did not contain large numbers of overlapping bubbles.

  16. AU-FREDI - AUTONOMOUS FREQUENCY DOMAIN IDENTIFICATION

    NASA Technical Reports Server (NTRS)

    Yam, Y.

    1994-01-01

    The Autonomous Frequency Domain Identification program, AU-FREDI, is a system of methods, algorithms and software that was developed for the identification of structural dynamic parameters and system transfer function characterization for control of large space platforms and flexible spacecraft. It was validated in the CALTECH/Jet Propulsion Laboratory's Large Spacecraft Control Laboratory. Due to the unique characteristics of this laboratory environment, and the environment-specific nature of many of the software's routines, AU-FREDI should be considered to be a collection of routines which can be modified and reassembled to suit system identification and control experiments on large flexible structures. The AU-FREDI software was originally designed to command plant excitation and handle subsequent input/output data transfer, and to conduct system identification based on the I/O data. Key features of the AU-FREDI methodology are as follows: 1. AU-FREDI has on-line digital filter design to support on-orbit optimal input design and data composition. 2. Data composition of experimental data in overlapping frequency bands overcomes finite actuator power constraints. 3. Recursive least squares sine-dwell estimation accurately handles digitized sinusoids and low frequency modes. 4. The system also includes automated estimation of model order using a product moment matrix. 5. A sample-data transfer function parametrization supports digital control design. 6. Minimum variance estimation is assured with a curve fitting algorithm with iterative reweighting. 7. Robust root solvers accurately factorize high order polynomials to determine frequency and damping estimates. 8. Output error characterization of model additive uncertainty supports robustness analysis. The research objectives associated with AU-FREDI were particularly useful in focusing the identification methodology for realistic on-orbit testing conditions. Rather than estimating the entire structure, as is typically done in ground structural testing, AU-FREDI identifies only the key transfer function parameters and uncertainty bounds that are necessary for on-line design and tuning of robust controllers. AU-FREDI's system identification algorithms are independent of the JPL-LSCL environment, and can easily be extracted and modified for use with input/output data files. The basic approach of AU-FREDI's system identification algorithms is to non-parametrically identify the sampled data in the frequency domain using either stochastic or sine-dwell input, and then to obtain a parametric model of the transfer function by curve-fitting techniques. A cross-spectral analysis of the output error is used to determine the additive uncertainty in the estimated transfer function. The nominal transfer function estimate and the estimate of the associated additive uncertainty can be used for robust control analysis and design. AU-FREDI's I/O data transfer routines are tailored to the environment of the CALTECH/ JPL-LSCL which included a special operating system to interface with the testbed. Input commands for a particular experiment (wideband, narrowband, or sine-dwell) were computed on-line and then issued to respective actuators by the operating system. The operating system also took measurements through displacement sensors and passed them back to the software for storage and off-line processing. 
In order to make use of AU-FREDI's I/O data transfer routines, a user would need to provide an operating system capable of overseeing such functions between the software and the experimental setup at hand. The program documentation contains information designed to support users in either providing such an operating system or modifying the system identification algorithms for use with input/output data files. It provides a history of the theoretical, algorithmic and software development efforts including operating system requirements and listings of some of the various special purpose subroutines which were developed and optimized for Lahey FORTRAN compilers on IBM PC-AT computers before the subroutines were integrated into the system software. Potential purchasers are encouraged to purchase and review the documentation before purchasing the AU-FREDI software. AU-FREDI is distributed in DEC VAX BACKUP format on a 1600 BPI 9-track magnetic tape (standard media) or a TK50 tape cartridge. AU-FREDI was developed in 1989 and is a copyrighted work with all copyright vested in NASA.
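
    The non-parametric first step of this kind of identification can be sketched as an empirical transfer function estimate from I/O records, H(ω) ≈ Syu(ω)/Suu(ω); the curve fitting, iterative reweighting, and root solving of the actual AU-FREDI pipeline are not shown.

        import numpy as np

        def etfe(u, y):
            """Empirical transfer function estimate H ~ Syu/Suu from records
            of input u and output y (same length, same sample rate)."""
            U = np.fft.rfft(np.asarray(u, dtype=float))
            Y = np.fft.rfft(np.asarray(y, dtype=float))
            suu = U * np.conj(U)              # input auto-spectrum
            syu = Y * np.conj(U)              # input-output cross-spectrum
            return syu / suu                  # frequency response estimate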

  17. KNIME for reproducible cross-domain analysis of life science data.

    PubMed

    Fillbrunn, Alexander; Dietz, Christian; Pfeuffer, Julianus; Rahn, René; Landrum, Gregory A; Berthold, Michael R

    2017-11-10

    Experiments in the life sciences often involve tools from a variety of domains such as mass spectrometry, next generation sequencing, or image processing. Passing the data between those tools often involves complex scripts for controlling data flow, data transformation, and statistical analysis. Such scripts are not only prone to be platform dependent, they also tend to grow as the experiment progresses and are seldom well documented, a fact that hinders the reproducibility of the experiment. Workflow systems such as KNIME Analytics Platform aim to solve these problems by providing a platform for connecting tools graphically and guaranteeing the same results on different operating systems. As open source software, KNIME allows scientists and programmers to provide their own extensions to the scientific community. In this review paper we present selected extensions from the life sciences that simplify data exploration, analysis, and visualization and are interoperable due to KNIME's unified data model. Additionally, we name other workflow systems that are commonly used in the life sciences and highlight their similarities and differences to KNIME. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  18. 40 CFR 205.171-8 - Passing or failing under SEA.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 26 2013-07-01 2013-07-01 false Passing or failing under SEA. 205.171... Passing or failing under SEA. (a) A failing exhaust system is one which, when installed on any motorcycle... equal to the number in Column A, the sample passes. (c) Pass or failure of a SEA takes place when a...

  19. 40 CFR 205.171-8 - Passing or failing under SEA.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 25 2014-07-01 2014-07-01 false Passing or failing under SEA. 205.171... Passing or failing under SEA. (a) A failing exhaust system is one which, when installed on any motorcycle... equal to the number in Column A, the sample passes. (c) Pass or failure of a SEA takes place when a...

  20. 40 CFR 205.171-8 - Passing or failing under SEA.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Passing or failing under SEA. 205.171... Passing or failing under SEA. (a) A failing exhaust system is one which, when installed on any motorcycle... equal to the number in Column A, the sample passes. (c) Pass or failure of a SEA takes place when a...

  1. 40 CFR 205.171-8 - Passing or failing under SEA.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 25 2011-07-01 2011-07-01 false Passing or failing under SEA. 205.171... Passing or failing under SEA. (a) A failing exhaust system is one which, when installed on any motorcycle... equal to the number in Column A, the sample passes. (c) Pass or failure of a SEA takes place when a...

  2. Performance Measurement, Visualization and Modeling of Parallel and Distributed Programs

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Sarukkai, Sekhar R.; Mehra, Pankaj; Lum, Henry, Jr. (Technical Monitor)

    1994-01-01

    This paper presents a methodology for debugging the performance of message-passing programs on both tightly coupled and loosely coupled distributed-memory machines. The AIMS (Automated Instrumentation and Monitoring System) toolkit, a suite of software tools for measurement and analysis of performance, is introduced and its application illustrated using several benchmark programs drawn from the field of computational fluid dynamics. AIMS includes (i) Xinstrument, a powerful source-code instrumentor, which supports both Fortran77 and C as well as a number of different message-passing libraries including Intel's NX, Thinking Machines' CMMD, and PVM; (ii) Monitor, a library of timestamping and trace-collection routines that run on supercomputers (such as Intel's iPSC/860, Delta, and Paragon and Thinking Machines' CM5) as well as on networks of workstations (including Convex Cluster and SparcStations connected by a LAN); (iii) Visualization Kernel, a trace-animation facility that supports source-code clickback, simultaneous visualization of computation and communication patterns, as well as analysis of data movements; (iv) Statistics Kernel, an advanced profiling facility, that associates a variety of performance data with various syntactic components of a parallel program; (v) Index Kernel, a diagnostic tool that helps pinpoint performance bottlenecks through the use of abstract indices; (vi) Modeling Kernel, a facility for automated modeling of message-passing programs that supports both simulation-based and analytical approaches to performance prediction and scalability analysis; (vii) Intrusion Compensator, a utility for recovering true performance from observed performance by removing the overheads of monitoring and their effects on the communication pattern of the program; and (viii) Compatibility Tools, that convert AIMS-generated traces into formats used by other performance-visualization tools, such as ParaGraph, Pablo, and certain AVS/Explorer modules.

  3. Multiple pass laser amplifier system

    DOEpatents

    Brueckner, Keith A.; Jorna, Siebe; Moncur, N. Kent

    1977-01-01

    A laser amplification method for increasing the energy extraction efficiency from laser amplifiers while reducing the energy flux that passes through a flux-limited system. The system includes apparatus for decomposing a linearly polarized light beam into multiple components, passing the components through an amplifier in delayed time sequence, and recombining the amplified components into an in-phase linearly polarized beam.

  4. Programming languages and compiler design for realistic quantum hardware.

    PubMed

    Chong, Frederic T; Franklin, Diana; Martonosi, Margaret

    2017-09-13

    Quantum computing sits at an important inflection point. For years, high-level algorithms for quantum computers have shown considerable promise, and recent advances in quantum device fabrication offer hope of utility. A gap still exists, however, between the hardware size and reliability requirements of quantum computing algorithms and the physical machines foreseen within the next ten years. To bridge this gap, quantum computers require appropriate software to translate and optimize applications (toolflows) and abstraction layers. Given the stringent resource constraints in quantum computing, information passed between layers of software and implementations will differ markedly from that in classical computing. Quantum toolflows must expose more physical details between layers, so the challenge is to find abstractions that expose key details while hiding enough complexity.

  5. Programming languages and compiler design for realistic quantum hardware

    NASA Astrophysics Data System (ADS)

    Chong, Frederic T.; Franklin, Diana; Martonosi, Margaret

    2017-09-01

    Quantum computing sits at an important inflection point. For years, high-level algorithms for quantum computers have shown considerable promise, and recent advances in quantum device fabrication offer hope of utility. A gap still exists, however, between the hardware size and reliability requirements of quantum computing algorithms and the physical machines foreseen within the next ten years. To bridge this gap, quantum computers require appropriate software to translate and optimize applications (toolflows) and abstraction layers. Given the stringent resource constraints in quantum computing, information passed between layers of software and implementations will differ markedly from that in classical computing. Quantum toolflows must expose more physical details between layers, so the challenge is to find abstractions that expose key details while hiding enough complexity.

  6. Data Center Energy Practitioner (DCEP) Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Traber, Kim; Salim, Munther; Sartor, Dale A.

    2016-02-02

    The main objective of the DCEP program is to raise the standards of those involved in energy assessments of data centers in order to accelerate energy savings. The program is driven by the fact that significant knowledge, training, and skills are required to perform accurate energy assessments. The program will raise the confidence level in energy assessments of data centers. Those who pass the exam are recognized as Data Center Energy Practitioners (DCEPs) and issued a certificate. Hardware req.: PC, Mac; Software req.: Windows; Related/auxiliary software: MS Office; Type of files: executable modules, user guide; Documentation: e-user manual; Documentation: http://www.1.eere.energy.gov/industry/datacenters/ 12/10/15 - New Documentation URL: https://datacenters.lbl.gov/dcep

  7. PyMS: a Python toolkit for processing of gas chromatography-mass spectrometry (GC-MS) data. Application and comparative study of selected tools.

    PubMed

    O'Callaghan, Sean; De Souza, David P; Isaac, Andrew; Wang, Qiao; Hodkinson, Luke; Olshansky, Moshe; Erwin, Tim; Appelbe, Bill; Tull, Dedreia L; Roessner, Ute; Bacic, Antony; McConville, Malcolm J; Likić, Vladimir A

    2012-05-30

    Gas chromatography-mass spectrometry (GC-MS) is a technique frequently used in targeted and non-targeted measurements of metabolites. Most existing software tools for processing of raw instrument GC-MS data tightly integrate data processing methods with a graphical user interface facilitating interactive data processing. While interactive processing remains critically important in GC-MS applications, high-throughput studies increasingly dictate the need for command line tools, suitable for scripting of high-throughput, customized processing pipelines. PyMS comprises a library of functions for processing of instrument GC-MS data developed in Python. PyMS currently provides a complete set of GC-MS processing functions, including reading of standard data formats (ANDI-MS/NetCDF and JCAMP-DX), noise smoothing, baseline correction, peak detection, peak deconvolution, peak integration, and peak alignment by dynamic programming. A novel common ion single quantitation algorithm allows automated, accurate quantitation of GC-MS electron impact (EI) fragmentation spectra when a large number of experiments are being analyzed. PyMS implements parallel processing for by-row and by-column data processing tasks based on the Message Passing Interface (MPI), allowing processing to scale on multiple CPUs in distributed computing environments. A set of specifically designed experiments was performed in-house and used to comparatively evaluate the performance of PyMS and three widely used software packages for GC-MS data processing (AMDIS, AnalyzerPro, and XCMS). PyMS is a novel software package for the processing of raw GC-MS data, particularly suitable for scripting of customized processing pipelines and for data processing in batch mode. PyMS provides limited graphical capabilities and can be used both for routine data processing and interactive/exploratory data analysis. In real-life GC-MS data processing scenarios PyMS performs as well as or better than leading software packages. We demonstrate data processing scenarios that are simple to implement in PyMS yet difficult to achieve with many conventional GC-MS data processing software packages. Automated sample processing and quantitation with PyMS can provide substantial time savings compared to more traditional interactive software systems that tightly integrate data processing with the graphical user interface.
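
    The MPI-based by-row parallelism credited to PyMS can be sketched generically with mpi4py (this is not PyMS's API): rank 0 scatters blocks of rows, every rank processes its block, and results are gathered back on rank 0. Run under an MPI launcher, e.g. mpiexec -n 4 python script.py.

        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()
        size = comm.Get_size()

        if rank == 0:
            data = np.arange(4.0 * size).reshape(size, -1)  # rows to process
            chunks = list(data)                  # one block of rows per rank
        else:
            chunks = None

        rows = comm.scatter(chunks, root=0)      # distribute rows
        processed = rows * 2.0                   # stand-in per-row processing
        result = comm.gather(processed, root=0)  # collect on rank 0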

  8. Technology and Tool Development to Support Safety and Mission Assurance

    NASA Technical Reports Server (NTRS)

    Denney, Ewen; Pai, Ganesh

    2017-01-01

    The Assurance Case approach is being adopted in a number of safety- and mission-critical application domains in the U.S., e.g., medical devices, defense aviation, automotive systems, and, lately, civil aviation. This paradigm refocuses traditional, process-based approaches to assurance on demonstrating explicitly stated assurance goals, emphasizing the use of structured rationale and concrete product-based evidence as the means for providing justified confidence that systems and software are fit for purpose in safely achieving mission objectives. NASA has also been embracing assurance cases through the concepts of Risk Informed Safety Cases (RISCs), as documented in the NASA System Safety Handbook, and Objective Hierarchies (OHs), as put forth by the Agency's Office of Safety and Mission Assurance (OSMA). This talk will give an overview of the work being performed by the SGT team located at NASA Ames Research Center in developing technologies and tools to engineer and apply assurance cases in customer projects pertaining to aviation safety. We elaborate on how our Assurance Case Automation Toolset (AdvoCATE) has not only extended the state of the art in assurance case research, but also demonstrated its practical utility. We have successfully developed safety assurance cases for a number of Unmanned Aircraft Systems (UAS) operations, which underwent, and passed, scrutiny both by the aviation regulator, i.e., the FAA, and by the applicable NASA boards for airworthiness and flight safety, flight readiness, and mission readiness. We discuss our efforts in expanding AdvoCATE capabilities to support RISCs and OHs under a project recently funded by OSMA under its Software Assurance Research Program. Finally, we speculate on the applicability of our innovations beyond aviation safety to such endeavors as robotic and human spaceflight.

  9. Three-dimensional path planning software-assisted transjugular intrahepatic portosystemic shunt: a technical modification.

    PubMed

    Tsauo, Jiaywei; Luo, Xuefeng; Ye, Linchao; Li, Xiao

    2015-06-01

    This study was designed to report our results with a modified technique of three-dimensional (3D) path planning software-assisted transjugular intrahepatic portosystemic shunt (TIPS) creation. 3D path planning software was recently developed to facilitate TIPS creation by using two carbon dioxide portograms acquired at least 20° apart to generate a 3D path for overlay needle guidance. However, one shortcoming is that puncturing along the overlay would be technically impossible if the angle of the liver access set and the angle of the 3D path are not the same. To solve this problem, a prototype of the 3D path planning software was fitted with a utility to calculate the angle of the 3D path. Using this, we modified the angle of the liver access set accordingly during the procedure in ten patients. Failure for technical reasons occurred in three patients (unsuccessful wedged hepatic venography in two cases, software technical failure in one case). The procedure was successful in the remaining seven patients, and only one needle pass was required to obtain portal vein access in each case. The course of puncture was comparable to the 3D path in all patients. No procedure-related complication occurred following the procedures. Adjusting the angle of the liver access set to match the angle of the 3D path determined by the software appears to be a favorable modification to the technique of 3D path planning software-assisted TIPS.

  10. Port-of-entry advanced sorting system (PASS) operational test

    DOT National Transportation Integrated Search

    1998-12-01

    In 1992 the Oregon Department of Transportation undertook an operational test of the Port-of-Entry Advanced Sorting System (PASS), which uses a two-way communication automatic vehicle identification system, integrated with weigh-in-motion, automatic ...

  11. SU-E-T-418: Exploring the Sensitivity of Planar Quality Assurance to MLC Errors with Different Beam Complexities in Intensity-Modulated Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, J; Peng, J; Xie, J

    2015-06-15

    Purpose: The purpose of this study is to investigate the sensitivity of planar quality assurance to MLC errors with different beam complexities in intensity-modulated radiation therapy. Methods: Sixteen patients' planar quality assurance (QA) plans at our institution were enrolled in this study, including 10 dynamic MLC (DMLC) IMRT plans measured by Portal Dosimetry and 6 static MLC (SMLC) IMRT plans measured by MapCHECK. The gamma pass rate was calculated using the vendor's software. The field numbers were 74 and 40 for DMLC and SMLC, respectively. A random error was generated and introduced into these fields. The modified gamma pass rate was calculated by comparing the original measured fluence with the fluence of the modified fields. The decrease in gamma pass rate was obtained by subtracting the modified gamma pass rate from the original gamma pass rate. Eight complexity scores were calculated in MATLAB based on the fluence and MLC sequence of these fields. The complexity scores include fractal dimension, monitor units of the field, modulation index, fluence map complexity, weighted average of field area, weighted average of field perimeter, and small aperture ratio (<5 cm² and <50 cm²). The Spearman's rank correlation coefficient was used to analyze the correlation between these scores and the decrease in gamma pass rate. Results: The relation between the decrease in gamma pass rate and field complexity was insignificant for most complexity scores. The most significant complexity score was fluence map complexity for SMLC, with ρ=0.4274 (p-value=0.0063). For DMLC, the most significant complexity score was fractal dimension, with ρ=−0.3068 (p-value=0.0081). Conclusions: According to the preliminary results of this study, the sensitivity of the gamma pass rate was not strongly related to field complexity.

  12. Sci-Thur AM: YIS – 08: Automated Imaging Quality Assurance for Image-Guided Small Animal Irradiators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnstone, Chris; Bazalova-Carter, Magdalena

    Purpose: To develop quality assurance (QA) standards and tolerance levels for image quality of small animal irradiators. Methods: A fully automated in-house QA software for image analysis of a commercial microCT phantom was created. Quantitative analyses of CT linearity, signal-to-noise ratio (SNR), uniformity and noise, geometric accuracy, modulation transfer function (MTF), and CT number evaluation were performed. Phantom microCT scans from seven institutions acquired with varying parameters (kVp, mA, time, voxel size, and frame rate) and five irradiator units (Xstrahl SARRP, PXI X-RAD 225Cx, PXI X-RAD SmART, GE explore CT/RT 140, and GE Explore CT 120) were analyzed. Multi-institutional data sets were compared using our in-house software to establish pass/fail criteria for each QA test. Results: CT linearity (R2>0.996) was excellent at all but Institution 2. Acceptable SNR (>35) and noise levels (<55HU) were obtained at four of the seven institutions; failing scans were acquired with less than 120 mAs. Acceptable MTF (>1.5 lp/mm at MTF=0.2) was obtained at all but Institution 6, due to the largest scan voxel size (0.35 mm). The geometric accuracy passed (<1.5%) at five of the seven institutions. Conclusion: Our QA software can be used to rapidly perform quantitative imaging QA for small animal irradiators, accumulate results over time, and display possible changes in imaging functionality from its original performance and/or from the recommended tolerance levels. This tool will aid researchers in maintaining high image quality, enabling precise conformal dose delivery to small animals.

  13. SEMICONDUCTOR INTEGRATED CIRCUITS: An asymmetric MOSFET-C band-pass filter with on-chip charge pump auto-tuning

    NASA Astrophysics Data System (ADS)

    Fangxiong, Chen; Min, Lin; Heping, Ma; Hailong, Jia; Yin, Shi; Forster, Dai

    2009-08-01

    An asymmetric MOSFET-C band-pass filter (BPF) with on-chip charge pump auto-tuning is presented. It is implemented in UMC (United Microelectronics Corporation) 0.18 μm CMOS process technology. The filter system with auto-tuning uses a master-slave technique for continuous tuning, in which the charge pump outputs 2.663 V, much higher than the power supply voltage, to improve the linearity of the filter. The main filter, with third-order low-pass and second-order high-pass properties, is an asymmetric band-pass filter with a bandwidth of 2.730-5.340 MHz. The in-band third-order harmonic input intercept point (IIP3) is 16.621 dBm, with 50 Ω as the source impedance. The input-referred noise is about 47.455 μVrms. The main filter dissipates 3.528 mW while the auto-tuning system dissipates 2.412 mW from a 1.8 V power supply. The filter with the auto-tuning system occupies 0.592 mm2 and can be utilized in GPS (global positioning system) and Bluetooth systems.

  14. Development of Standard Station Interface for Comprehensive Nuclear Test Ban Treaty Organistation Monitoring Networks

    NASA Astrophysics Data System (ADS)

    Dricker, I. G.; Friberg, P.; Hellman, S.

    2001-12-01

    Under contract with the CTBTO, Instrumental Software Technologies, Inc. (ISTI) has designed and developed a Standard Station Interface (SSI) - a set of executable programs and application programming interface libraries for acquisition, authentication, archiving and telemetry of seismic and infrasound data for stations of the CTBTO nuclear monitoring network. SSI (written in C) is fully supported under both the Solaris and Linux operating systems and will be shipped with fully documented source code. SSI consists of several interconnected modules. The Digitizer Interface Module maintains a near-real-time data flow between multiple digitizers and the SSI. The Disk Buffer Module is responsible for local data archival. The Station Key Management Module is a low-level tool for data authentication and verification of incoming signatures. The Data Transmission Module supports packetized near-real-time data transmission from the primary CTBTO stations to the designated Data Center. The AutoDRM module allows transport of signed seismic and infrasound data via electronic mail (auxiliary station mode). The Command Interface Module is used to pass remote commands to the digitizers and other modules of SSI. A station operator has access to state-of-health information and waveforms via the Operator Interface Module. The modular design of SSI will allow painless extension of the software system within and outside the boundaries of CTBTO station requirements. Currently, an alpha version of SSI is undergoing extensive tests in the lab and on site.

  15. Advanced Monitoring to Improve Combustion Turbine/Combined Cycle Reliability, Availability & Maintainability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leonard Angello

    2005-09-30

    Power generators are concerned with the maintenance costs associated with the advanced turbines that they are purchasing. Since these machines do not have fully established Operation and Maintenance (O&M) track records, power generators face financial risk due to uncertain future maintenance costs. This risk is of particular concern as the electricity industry transitions to a competitive business environment in which unexpected O&M costs cannot be passed through to consumers. These concerns have accelerated the need for intelligent software-based diagnostic systems that can monitor the health of a combustion turbine in real time and provide valuable information on the machine's performance to its owner/operators. EPRI, Impact Technologies, Boyce Engineering, and Progress Energy have teamed to develop a suite of intelligent software tools integrated with a diagnostic monitoring platform that, in real time, interpret data to assess the 'total health' of combustion turbines. The 'Combustion Turbine Health Management System' (CTHMS) will consist of a series of 'Dynamic Link Library' (DLL) programs residing on a diagnostic monitoring platform that accepts turbine health data from existing monitoring instrumentation. CTHMS interprets sensor and instrument outputs, correlates them to a machine's condition, provides interpretative analyses, projects servicing intervals, and estimates remaining component life. In addition, CTHMS enables real-time anomaly detection and diagnostics of performance and mechanical faults, enabling power producers to more accurately predict critical component remaining useful life and turbine degradation.

  16. Development of the ITER magnetic diagnostic set and specification.

    PubMed

    Vayakis, G; Arshad, S; Delhom, D; Encheva, A; Giacomin, T; Jones, L; Patel, K M; Pérez-Lasala, M; Portales, M; Prieto, D; Sartori, F; Simrock, S; Snipes, J A; Udintsev, V S; Watts, C; Winter, A; Zabeo, L

    2012-10-01

    ITER magnetic diagnostics are now in their detailed design and R&D phase. They have passed their conceptual design reviews and a working diagnostic specification has been prepared aimed at the ITER project requirements. This paper highlights specific design progress, in particular, for the in-vessel coils, steady state sensors, saddle loops and divertor sensors. Key changes in the measurement specifications, and a working concept of software and electronics are also outlined.

  17. Port-of-entry Advanced Sorting System (PASS) operational test : final report

    DOT National Transportation Integrated Search

    1998-12-01

    In 1992 the Oregon Department of Transportation undertook an operational test of the Port-of-Entry Advanced Sorting System (PASS), which uses a two-way communication automatic vehicle identification system, integrated with weigh-in-motion, automatic ...

  18. SU-F-T-584: Investigating Correction Methods for Ion Recombination Effects in OCTAVIUS 1000 SRS Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knill, C; Wayne State University School of Medicine, Detroit, MI; Snyder, M

    Purpose: PTW’s Octavius 1000 SRS array performs IMRT QA measurements with liquid-filled ionization chambers (LICs). Collection efficiencies of LICs have been shown to change during IMRT delivery as a function of LINAC pulse frequency and pulse dose, which affects QA results. In this study, two methods were developed to correct changes in collection efficiencies during IMRT QA measurements, and the effects of these corrections on QA pass rates were compared. Methods: For the first correction, Matlab software was developed that calculates pulse frequency and pulse dose for each detector, using measurement and DICOM RT Plan files. Pulse information is converted to collection efficiency, and measurements are corrected by multiplying detector dose by ratios of calibration to measured collection efficiencies. For the second correction, the MU/min in the daily 1000 SRS calibration was chosen to match the average MU/min of the VMAT plan. The usefulness of the derived corrections was evaluated using 6MV and 10FFF SBRT RapidArc plans delivered to the OCTAVIUS 4D system using a TrueBeam equipped with an HD-MLC. Effects of the two corrections on QA results were examined by performing 3D gamma analysis comparing predicted to measured dose, with and without corrections. Results: After complex Matlab corrections, average 3D gamma pass rates improved by [0.07%, 0.40%, 1.17%] for 6MV and [0.29%, 1.40%, 4.57%] for 10FFF using [3%/3mm, 2%/2mm, 1%/1mm] criteria. Maximum changes in gamma pass rates were [0.43%, 1.63%, 3.05%] for 6MV and [1.00%, 4.80%, 11.2%] for 10FFF using [3%/3mm, 2%/2mm, 1%/1mm] criteria. On average, pass rates with the simple daily calibration correction were within 1% of the complex Matlab corrections. Conclusion: Ion recombination effects can potentially be clinically significant for OCTAVIUS 1000 SRS measurements, especially for higher pulse dose unflattened beams when using tighter gamma tolerances. Matching the daily 1000 SRS calibration MU/min to the average planned MU/min is a simple correction that greatly reduces ion recombination effects, improving measurement accuracy and gamma pass rates. This work was supported by PTW.
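
    The first correction amounts to a per-detector rescaling: corrected dose = measured dose × (collection efficiency at calibration) / (collection efficiency during measurement). A minimal sketch follows, with a placeholder efficiency model; the paper's empirical mapping from pulse dose and pulse frequency to efficiency is not reproduced here.

        # Minimal sketch of the efficiency-ratio correction described above.
        # `collection_efficiency` is a hypothetical monotone model with
        # placeholder coefficients, standing in for the paper's empirical fit.
        def collection_efficiency(pulse_dose_mgy, pulse_freq_hz,
                                  k_dose=0.02, k_freq=1e-5):
            """Efficiency drops with pulse dose and pulse frequency (placeholder)."""
            return 1.0 / (1.0 + k_dose * pulse_dose_mgy + k_freq * pulse_freq_hz)

        def corrected_dose(measured_dose, pulse_dose_mgy, pulse_freq_hz,
                           cal_pulse_dose_mgy, cal_pulse_freq_hz):
            f_cal = collection_efficiency(cal_pulse_dose_mgy, cal_pulse_freq_hz)
            f_meas = collection_efficiency(pulse_dose_mgy, pulse_freq_hz)
            return measured_dose * f_cal / f_meas

        # Example: a detector seeing a higher pulse dose than at calibration
        # under-collects charge, so its reading is revised upward.
        print(corrected_dose(2.00, pulse_dose_mgy=1.5, pulse_freq_hz=180,
                             cal_pulse_dose_mgy=0.5, cal_pulse_freq_hz=360))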

  19. The Pancreatitis Activity Scoring System predicts clinical outcomes in acute pancreatitis: findings from a prospective cohort study.

    PubMed

    Buxbaum, James; Quezada, Michael; Chong, Bradford; Gupta, Nikhil; Yu, Chung Yao; Lane, Christianne; Da, Ben; Leung, Kenneth; Shulman, Ira; Pandol, Stephen; Wu, Bechien

    2018-05-01

    The Pancreatitis Activity Scoring System (PASS) has been derived by an international group of experts via a modified Delphi process. Our aim was to perform an external validation study to assess for concordance of the PASS score with high face validity clinical outcomes and determine specific meaningful thresholds to assist in application of this scoring system in a large prospectively ascertained cohort. We analyzed data from a prospective cohort study of consecutive patients admitted to the Los Angeles County Hospital between March 2015 and March 2017. Patients were identified using an emergency department paging system and electronic alert system. Comprehensive characterization included substance use history, pancreatitis etiology, biochemical profile, and detailed clinical course. We calculated the PASS score at admission, discharge, and at 12 h increments during the hospitalization. We performed several analyses to assess the relationship between the PASS score and outcomes at various points during hospitalization as well as following discharge. Using multivariable logistic regression analysis, we assessed the relationship between admission PASS score and risk of severe pancreatitis. PASS score performance was compared to established systems used to predict severe pancreatitis. Additional inpatient outcomes assessed included local complications, length of stay, development of systemic inflammatory response syndrome (SIRS), and intensive care unit (ICU) admission. We also assessed whether the PASS score at discharge was associated with early readmission (re-hospitalization for pancreatitis symptoms and complications within 30 days of discharge). A total of 439 patients were enrolled, their mean age was 42 (±15) years, and 53% were male. Admission PASS score >140 was associated with moderately severe and severe pancreatitis (OR 3.5 [95% CI 2.0, 6.3]), ICU admission (OR 4.9 [2.5, 9.4]), local complications (3.0 [1.6, 5.7]), and development of SIRS (OR 2.9 [1.8, 4.5]) as well as prolongation of hospitalization by a mean of 1.5 (1.3-1.7) days. For the prediction of moderately severe/severe pancreatitis, the PASS score (AUC = 0.71) was comparable to the more established Ranson's (AUC = 0.63), Glasgow (AUC = 0.72), Panc3 (AUC = 0.57), and HAPS (AUC = 0.54) scoring systems. Discharge PASS score >60 was associated with early readmission (OR 5.0 [2.4, 10.7]). The PASS score is associated with important clinical outcomes in acute pancreatitis. The ability of the score to forecast important clinical events at different points in the disease course suggests that it is a valid measure of activity in patients with acute pancreatitis.

  20. TOUGH2_MP: A parallel version of TOUGH2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Keni; Wu, Yu-Shu; Ding, Chris

    2003-04-09

    TOUGH2_MP is a massively parallel version of TOUGH2. It was developed for running on distributed-memory parallel computers to solve large simulation problems that cannot be handled by the standard, single-CPU TOUGH2 code. The new code implements an efficient massively parallel scheme while preserving the full capacity and flexibility of the original TOUGH2 code. The new software uses the METIS software package for grid partitioning and the AZTEC software package for linear-equation solving. The standard message-passing interface is adopted for communication among processors. The numerical performance of the current version has been tested on CRAY T3E and IBM RS/6000 SP platforms. In addition, the parallel code has been successfully applied to real field problems of multi-million-cell simulations for three-dimensional multiphase and multicomponent fluid and heat flow, as well as solute transport. In this paper, we review the development of TOUGH2_MP and discuss its basic features, modules, and their applications.
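
    The communication pattern behind such domain-decomposed simulators is nearest-neighbour ghost-cell exchange over MPI. A minimal mpi4py sketch of that pattern is below; TOUGH2_MP itself is Fortran with METIS/AZTEC, so this only illustrates the message-passing idea, not its actual code.

        # Minimal mpi4py sketch: each rank owns a slice of the grid and
        # exchanges one-cell "ghost" boundaries with its neighbours each
        # step, as domain-decomposed simulators do.
        # Run with: mpiexec -n 4 python ghost_exchange.py  (assumed filename)
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        local = np.full(10, float(rank))        # this rank's slice of the field
        left = rank - 1 if rank > 0 else MPI.PROC_NULL
        right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

        # Combined send+receive avoids deadlock at the domain boundaries.
        ghost_left = comm.sendrecv(local[0], dest=left, source=left)
        ghost_right = comm.sendrecv(local[-1], dest=right, source=right)

        print(f"rank {rank}: ghosts = ({ghost_left}, {ghost_right})")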

  1. LSDCat: Detection and cataloguing of emission-line sources in integral-field spectroscopy datacubes

    NASA Astrophysics Data System (ADS)

    Herenz, Edmund Christian; Wisotzki, Lutz

    2017-06-01

    We present a robust, efficient, and user-friendly algorithm for detecting faint emission-line sources in large integral-field spectroscopic datacubes together with the public release of the software package Line Source Detection and Cataloguing (LSDCat). LSDCat uses a three-dimensional matched filter approach, combined with thresholding in signal-to-noise, to build a catalogue of individual line detections. In a second pass, the detected lines are grouped into distinct objects, and positions, spatial extents, and fluxes of the detected lines are determined. LSDCat requires only a small number of input parameters, and we provide guidelines for choosing appropriate values. The software is coded in Python and capable of processing very large datacubes in a short time. We verify the implementation with a source insertion and recovery experiment utilising a real datacube taken with the MUSE instrument at the ESO Very Large Telescope. The LSDCat software is available for download at http://muse-vlt.eu/science/tools and via the Astrophysics Source Code Library at http://ascl.net/1612.002
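
    The detection scheme LSDCat describes (3-D matched filtering, then thresholding in signal-to-noise) can be sketched in a few lines. The Gaussian template, its widths, and the 8σ threshold below are illustrative stand-ins for LSDCat's configurable parameters, not its real implementation.

        # Minimal sketch of matched-filter line detection in a datacube:
        # smooth with a 3-D template, divide by the propagated noise, and
        # threshold the resulting signal-to-noise cube.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(1)
        cube = rng.normal(0.0, 1.0, (200, 64, 64))   # (wavelength, y, x)
        cube[100, 32, 32] += 50.0                    # inject a faint line source

        sigma = (2.0, 1.5, 1.5)                      # template widths (z, y, x)
        filtered = gaussian_filter(cube, sigma)

        # For unit-variance white noise the filtered rms is the kernel's L2
        # norm; estimate it empirically here for simplicity.
        noise_rms = np.std(gaussian_filter(rng.normal(size=cube.shape), sigma))
        snr = filtered / noise_rms

        detections = np.argwhere(snr > 8.0)          # detection threshold in S/N
        print(len(detections), "voxels above threshold; peak at",
              np.unravel_index(np.argmax(snr), snr.shape))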

  2. Statistical Analysis of an Infrared Thermography Inspection of Reinforced Carbon-Carbon

    NASA Technical Reports Server (NTRS)

    Comeaux, Kayla

    2011-01-01

    Each piece of flight hardware being used on the shuttle must be analyzed and pass NASA requirements before the shuttle is ready for launch. One tool used to detect cracks that lie within flight hardware is Infrared Flash Thermography. This is a non-destructive testing technique which uses an intense flash of light to heat up the surface of a material after which an Infrared camera is used to record the cooling of the material. Since cracks within the material obstruct the natural heat flow through the material, they are visible when viewing the data from the Infrared camera. We used Ecotherm, a software program, to collect data pertaining to the delaminations and analyzed the data using Ecotherm and University of Dayton Log Logistic Probability of Detection (POD) Software. The goal was to reproduce the statistical analysis produced by the University of Dayton software, by using scatter plots, log transforms, and residuals to test the assumption of normality for the residuals.

  3. A New Generation of Telecommunications for Mars: The Reconfigurable Software Radio

    NASA Technical Reports Server (NTRS)

    Adams, J.; Horne, W.

    2000-01-01

    Telecommunications is a critical component for any mission at Mars, as it is an enabling function that provides connectivity back to Earth and a means for conducting science. New developments in telecommunications, specifically in software-configurable radios, expand the possible approaches for science missions at Mars. These radios provide a flexible and re-configurable platform that can evolve with the mission and that provides an integrated approach to communications and science data processing. Deep space telecommunication faces challenges not normally faced by terrestrial and near-Earth communications. Radiation, thermal constraints, highly constrained mass, volume, packaging, and reliability are all significant issues. Additionally, once the spacecraft leaves Earth, there is no way to go out and upgrade or replace radio components. The reconfigurable software radio is an effort to provide not only a product that is immediately usable in the harsh space environment but also a radio that will stay current as the years pass and technologies evolve.

  4. A single-board NMR spectrometer based on a software defined radio architecture

    NASA Astrophysics Data System (ADS)

    Tang, Weinan; Wang, Weimin

    2011-01-01

    A single-board software defined radio (SDR) spectrometer for nuclear magnetic resonance (NMR) is presented. The SDR-based architecture, realized by combining a single field programmable gate array (FPGA) and a digital signal processor (DSP) with peripheral radio frequency (RF) front-end circuits, makes the spectrometer compact and reconfigurable. The DSP, working as a pulse programmer, communicates with a personal computer via a USB interface and controls the FPGA through a parallel port. The FPGA accomplishes digital processing tasks such as a numerically controlled oscillator (NCO), digital down converter (DDC) and gradient waveform generator. The NCO, with agile control of phase, frequency and amplitude, is part of a direct digital synthesizer that is used to generate an RF pulse. The DDC performs quadrature demodulation, multistage low-pass filtering and gain adjustment to produce a bandpass signal (receiver bandwidth from 3.9 kHz to 10 MHz). The gradient waveform generator is capable of outputting shaped gradient pulse waveforms and supports eddy-current compensation. The spectrometer directly acquires an NMR signal up to 30 MHz in the case of baseband sampling and is suitable for low-field (<0.7 T) application. Due to the featured SDR architecture, this prototype has flexible add-on ability and is expected to be suitable for portable NMR systems.
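
    The NCO/DDC chain described here is straightforward to model offline: the NCO is a complex exponential, quadrature demodulation is a multiply, and the DDC's multistage low-pass filtering and decimation narrow the band. A minimal sketch with illustrative frequencies follows; these are not the spectrometer's actual operating point.

        # Minimal DDC sketch: NCO mixing, then two decimation stages that
        # low-pass filter and reduce the rate, recovering the offset tone.
        import numpy as np
        from scipy import signal

        fs = 60e6                        # sample rate (illustrative)
        f_rf, f_nco = 20.003e6, 20e6     # input tone and NCO tuning (illustrative)
        n = np.arange(200_000)

        rf = np.cos(2 * np.pi * f_rf * n / fs)        # digitized input
        nco = np.exp(-2j * np.pi * f_nco * n / fs)    # NCO: quadrature LO
        mixed = rf * nco                              # quadrature demodulation

        # Multistage low-pass filtering + decimation down to a narrow baseband.
        baseband = signal.decimate(mixed, 10, ftype='fir')
        baseband = signal.decimate(baseband, 10, ftype='fir')

        offset = (np.angle(baseband[1:] * np.conj(baseband[:-1])).mean()
                  * (fs / 100) / (2 * np.pi))
        print(f"output rate: {fs/100:.0f} Hz; detected offset ~ {offset:.0f} Hz")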

  5. CAREER DEVELOPMENT

    EPA Science Inventory

    The Baltimore Summit Project on Career Development/PERFORMS Enhancement/360 Evaluations for All has made some progress. We have identified the fact that we cannot change the current Pass/Fail PERFORMS system to a tiered system. The current pass/fail system does not have a mechani...

  6. Ultrasonic sensing of GMAW: Laser/EMAT defect detection system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlson, N.M.; Johnson, J.A.; Larsen, E.D.

    1992-08-01

    In-process ultrasonic sensing of welding allows detection of weld defects in real time. A noncontacting ultrasonic system is being developed to operate in a production environment. The principal components are a pulsed laser for ultrasound generation and an electromagnetic acoustic transducer (EMAT) for ultrasound reception. A PC-based data acquisition system determines the quality of the weld on a pass-by-pass basis. The laser/EMAT system interrogates the area in the weld volume where defects are most likely to occur. This area of interest is identified by computer calculations on a pass-by-pass basis using weld planning information provided by the off-line programmer. The absence of a signal above the threshold level in the computer-calculated time interval indicates a disruption of the sound path by a defect. The ultrasonic sensor system then provides an input signal to the weld controller about the defect condition. 8 refs.

  7. Design of dual ring wavelength filters for WDM applications

    NASA Astrophysics Data System (ADS)

    Sathyadevaki, R.; Shanmuga sundar, D.; Sivanantha Raja, A.

    2016-12-01

    Wavelength division multiplexing plays a prime role in optical communication due to its advantages such as easy network expansion, longer span lengths, etc. In this work, photonic crystal filters with dual rings are proposed, acting as a band-pass filter (BPF) and a channel drop filter (CDF); such filters have found massive application in the C- and L-bands for wavelength selection and noise filtering at erbium-doped fiber amplifiers and in dense wavelength division multiplexing operation. These filters are formulated on a square lattice with crystal rods of silicon (refractive index 3.4) in an air background (refractive index 1). The dual-ring double filters (band-pass filter and channel drop filter) on a single layout pass and drop bands of wavelengths in two distinct arrangements, with entire-band quality factors of 92.09523 & 505.263 and 124.85019 & 456.8633 for the pass and drop filters of the initial and amended setups, respectively. These filters have high quality factors with broad and narrow bandwidths of 16.8 nm & 3.04 nm and 12.85 nm & 3.3927 nm. The transmission spectra and band gaps of the desired filters are analyzed using the Optiwave software suite. The two dual-ring filters incorporated on a single layout occupy a size of 15×11 μm and can also be used in integrated photonic chips for the ultra-compact unification of devices.

  8. Customized altitude-azimuth mount for a raster-scanning Fourier transform spectrometer

    NASA Astrophysics Data System (ADS)

    Durrenberger, Jed E.; Gutman, William M.; Gammill, Troy D.; Grover, Dennis H.

    1996-10-01

    Applications of the Army Research Laboratory Mobile Atmospheric Spectrometer Remote Sensing Rover required development of a customized computer-controlled mount to satisfy a variety of requirements within a limited budget. The payload was designed to operate atop a military electronics shelter mounted on a 4-wheel drive truck to be above most atmospheric ground turbulence. Pointing orientation in altitude is limited by constraints imposed by use of a liquid nitrogen detector Dewar in the spectrometer. Stepper motor drives and control system are compatible with existing custom software used with other instrumentation for controlled incremental raster stepping. The altitude axis passes close to the center of gravity of the complete payload to minimize load eccentricity and drive torque requirements. Dovetail fixture mounting enables quick service and fine adjustment of balance to minimize stepper/gearbox drive backlash through the limited orientation range in altitude. Initial applications to characterization of remote gas plumes have been successful.

  9. Parallelization of the Coupled Earthquake Model

    NASA Technical Reports Server (NTRS)

    Block, Gary; Li, P. Peggy; Song, Yuhe T.

    2007-01-01

    This Web-based tsunami simulation system allows users to remotely run a model on JPL's supercomputers for a given undersea earthquake. At the time of this reporting, tsunami prediction over the Internet had never been done before. This new code directly couples the earthquake model and the ocean model on parallel computers and improves simulation speed. Seismometers can only detect information from earthquakes; they cannot detect whether or not a tsunami may occur as a result of the earthquake. When earthquake-tsunami models are coupled with the improved computational speed of modern, high-performance computers and constrained by remotely sensed data, they are able to provide early warnings for those coastal regions at risk. The software is capable of testing NASA's satellite observations of tsunamis. It has been successfully tested for several historical tsunamis, has passed all alpha and beta testing, and is well documented for users.

  10. Physical and hydrologic characteristics of Matlacha Pass, southwestern Florida

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kane, R.L.; Russell, G.M.

    1994-03-01

    Matlacha Pass is part of the connected inshore waters of the Charlotte Harbor estuary in southwestern Florida. Bathymetry indicates that depths in the main channel of the pass range from 4 to 14 feet below sea level. The channel averages about 8 feet deep in the northern part of the pass and about 5 feet deep in the southern part. Additionally, depths average about 4 feet in a wide section of the middle of the pass and about 2 feet along the mangrove swamps near the shoreline. Tidal flow within Matlacha Pass varies depending on aquatic vegetation densities, oyster beds, and tidal flats. Surface-water runoff occurs primarily during the wet season (May to September), with most of the flow entering Matlacha Pass through two openings in the spreader canal system near the city of Matlacha. Freshwater flow into the pass from the north Cape Coral spreader canal system averaged 113 cubic feet per second from October 1987 to September 1992. Freshwater inflow from the Aries Canal of the south Cape Coral spreader canal system averaged 14.1 cubic feet per second from October 1989 to September 1992. Specific conductance throughout Matlacha Pass ranged from less than 1,000 to 57,000 microsiemens per centimeter. Specific conductance, collected from a continuous monitoring data logger in the middle of the pass from February to September 1992, averaged 36,000 microsiemens per centimeter at 2 feet below the water surface and 40,000 microsiemens per centimeter at 2 feet above the bottom. During both the wet and dry seasons, specific conductance indicated that the primary mixing of tidal waters and freshwater inflow occurs in the mangrove swamps along the shoreline.

  11. Users manual for the Chameleon parallel programming tools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gropp, W.; Smith, B.

    1993-06-01

    Message passing is a common method for writing programs for distributed-memory parallel computers. Unfortunately, the lack of a standard for message passing has hampered the construction of portable and efficient parallel programs. In an attempt to remedy this problem, a number of groups have developed their own message-passing systems, each with its own strengths and weaknesses. Chameleon is a second-generation system of this type. Rather than replacing these existing systems, Chameleon is meant to supplement them by providing a uniform way to access many of these systems. Chameleon's goals are to (a) be very lightweight (low overhead), (b) be highly portable, and (c) help standardize program startup and the use of emerging message-passing operations such as collective operations on subsets of processors. Chameleon also provides a way to port programs written using PICL or Intel NX message passing to other systems, including collections of workstations. Chameleon is tracking the Message-Passing Interface (MPI) draft standard and will provide both an MPI implementation and an MPI transport layer. Chameleon provides support for heterogeneous computing by using p4 and PVM. Chameleon's support for homogeneous computing includes the portable libraries p4, PICL, and PVM and vendor-specific implementations for Intel NX, IBM EUI (SP-1), and Thinking Machines CMMD (CM-5). Support for Ncube and PVM 3.x is also under development.

  12. A programming environment for distributed complex computing. An overview of the Framework for Interdisciplinary Design Optimization (FIDO) project. NASA Langley TOPS exhibit H120b

    NASA Technical Reports Server (NTRS)

    Townsend, James C.; Weston, Robert P.; Eidson, Thomas M.

    1993-01-01

    The Framework for Interdisciplinary Design Optimization (FIDO) is a general programming environment for automating the distribution of complex computing tasks over a networked system of heterogeneous computers. For example, instead of manually passing a complex design problem between its diverse specialty disciplines, the FIDO system provides for automatic interactions between the discipline tasks and facilitates their communications. The FIDO system networks all the computers involved into a distributed heterogeneous computing system, so they have access to centralized data and can work on their parts of the total computation simultaneously in parallel whenever possible. Thus, each computational task can be done by the most appropriate computer. Results can be viewed as they are produced and variables changed manually for steering the process. The software is modular in order to ease migration to new problems: different codes can be substituted for each of the current code modules with little or no effect on the others. The potential for commercial use of FIDO rests in the capability it provides for automatically coordinating diverse computations on a networked system of workstations and computers. For example, FIDO could provide the coordination required for the design of vehicles or electronics or for modeling complex systems.

  13. Personal Access Satellite System (PASS) study. Fiscal year 1989 results

    NASA Technical Reports Server (NTRS)

    Sue, Miles K. (Editor)

    1990-01-01

    The Jet Propulsion Laboratory is exploring the potential and feasibility of a personal access satellite system (PASS) that will offer the user greater freedom and mobility than existing or currently planned communications systems. Studies performed in prior years resulted in a strawman design and the identification of technologies that are critical to the successful implementation of PASS. The study efforts in FY-89 were directed towards alternative design options with the objective of either improving the system performance or alleviating the constraints on the user terminal. The various design options and system issues studied this year and the results of the study are presented.

  14. SpcAudace: Spectroscopic processing and analysis package of Audela software

    NASA Astrophysics Data System (ADS)

    Mauclaire, Benjamin

    2017-11-01

    SpcAudace processes long-slit spectra with automated pipelines and performs astrophysical analysis of the resulting data. These powerful pipelines do all the required steps in one pass: standard preprocessing, masking of bad pixels, geometric corrections, registration, optimized spectrum extraction, wavelength calibration, and instrumental response computation and correction. Both high- and low-resolution long-slit spectra are managed for stellar and non-stellar targets. Many types of publication-quality figures can be easily produced: pdf and png plots or annotated time series plots. Astrophysical quantities can be derived from individual spectra or large collections of spectra with advanced functions: from line profile characteristics to equivalent width and periodogram. More than 300 documented functions are available and can be used in TCL scripts for automation. SpcAudace is based on the Audela open source software.

  15. Description of the U.S. Geological Survey Geo Data Portal data integration framework

    USGS Publications Warehouse

    Blodgett, David L.; Booth, Nathaniel L.; Kunicki, Thomas C.; Walker, Jordan I.; Lucido, Jessica M.

    2012-01-01

    The U.S. Geological Survey has developed an open-standard data integration framework for working efficiently and effectively with large collections of climate and other geoscience data. A web interface accesses catalog datasets to find data services. Data resources can then be rendered for mapping and dataset metadata are derived directly from these web services. Algorithm configuration and information needed to retrieve data for processing are passed to a server where all large-volume data access and manipulation takes place. The data integration strategy described here was implemented by leveraging existing free and open source software. Details of the software used are omitted; rather, emphasis is placed on how open-standard web services and data encodings can be used in an architecture that integrates common geographic and atmospheric data.

  16. Dosimetric validation for an automatic brain metastases planning software using single-isocenter dynamic conformal arcs.

    PubMed

    Liu, Haisong; Li, Jun; Pappas, Evangelos; Andrews, David; Evans, James; Werner-Wasik, Maria; Yu, Yan; Dicker, Adam; Shi, Wenyin

    2016-09-08

    An automatic brain-metastases planning (ABMP) software has been installed in our institution. It is dedicated to treating multiple brain metastases with radiosurgery on linear accelerators (linacs) using a single setup isocenter with noncoplanar dynamic conformal arcs. This study is to validate the calculated absolute dose and dose distribution of ABMP. Three types of measurements were performed to validate the planning software: (1) dual micro ion chambers were used with an acrylic phantom to measure the absolute dose; (2) a 3D cylindrical phantom with a dual diode array was used to evaluate 2D dose distribution and point dose for smaller targets; and (3) a 3D pseudo-in vivo patient-specific phantom filled with polymer gels was used to evaluate the accuracy of 3D dose distribution and radiation delivery. Micro chamber measurement of two targets (volumes of 1.2 cc and 0.9 cc, respectively) showed that the percentage differences of the absolute dose at both targets were less than 1%. The averaged GI passing rate of five different plans measured with the diode array phantom was above 98%, using criteria of 3% dose difference, 1 mm distance to agreement (DTA), and 10% low-dose threshold. 3D gel phantom measurement results demonstrated a 3D displacement of nine targets of 0.7 ± 0.4 mm (range 0.2 ~ 1.1 mm). The averaged two-dimensional (2D) GI passing rate for several regions of interest (ROIs) on axial slices, each encompassing one of the nine targets, was above 98% (5% dose difference, 2 mm DTA, and 10% low-dose threshold). The measured D95, the minimum dose that covers 95% of the target volume, of the nine targets was 0.7% less than the calculated D95. Three different types of dosimetric verification methods were used and proved that the dose calculation of the new automatic brain metastases planning (ABMP) software was clinically acceptable. The 3D pseudo-in vivo patient-specific gel phantom test also served as an end-to-end test, validating not only the dose calculation but the treatment delivery accuracy as well. © 2016 The Authors.
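
    The gamma-index (GI) passing rates quoted above combine a dose-difference criterion with a distance-to-agreement (DTA) criterion; a point passes when the minimum combined metric is ≤ 1. A minimal brute-force 2-D global gamma sketch on synthetic grids follows; commercial QA software is far more optimized and interpolates between grid points.

        # Minimal 2-D global gamma index: each measured point searches
        # nearby calculated points for the minimum combined
        # dose-difference / distance-to-agreement metric.
        import numpy as np

        def gamma_pass_rate(calc, meas, spacing_mm, dd=0.03, dta_mm=3.0,
                            low_dose_cut=0.10):
            norm = meas.max()                       # global normalization
            ys, xs = np.indices(calc.shape)
            passed, total = 0, 0
            search = int(np.ceil(2 * dta_mm / spacing_mm))
            for (i, j), d_m in np.ndenumerate(meas):
                if d_m < low_dose_cut * norm:
                    continue                        # below low-dose threshold
                sl = (slice(max(i - search, 0), i + search + 1),
                      slice(max(j - search, 0), j + search + 1))
                dist2 = ((ys[sl] - i) ** 2 + (xs[sl] - j) ** 2) * spacing_mm ** 2
                ddose2 = ((calc[sl] - d_m) / (dd * norm)) ** 2
                gamma2 = dist2 / dta_mm ** 2 + ddose2
                passed += gamma2.min() <= 1.0
                total += 1
            return passed / total

        calc = np.random.default_rng(0).normal(100, 2, (40, 40))
        meas = calc + np.random.default_rng(1).normal(0, 1, calc.shape)
        print(f"gamma pass rate: {gamma_pass_rate(calc, meas, 2.5):.1%}")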

  17. An intelligent subsurface buoy design for measuring ocean ambient noise

    NASA Astrophysics Data System (ADS)

    Li, Bing; Wang, Lei

    2012-11-01

    A type of ultra-low-power subsurface buoy system is designed to measure and record ocean ambient noise data. The buoy utilizes a vector hydrophone (pass band 20 Hz-1.2 kHz) and a 6-element vertical hydrophone array (pass band 20 Hz-2 kHz) to measure ocean ambient noise. The acoustic signals are passed through an automatically adjusted gain stage, a band-pass filter, and an analog-to-digital (A/D) conversion module. They are then stored in high-capacity flash memory. In order to identify the direction of the noise source, the vector sensor measuring system integrates an electromagnetic compass. The system provides a low-rate underwater acoustic communication link, used to report the buoy state information, and a high-speed USB interface, used to retrieve the recorded data on deck. The whole system weighs about 125 kg and can operate autonomously for more than 72 hours. The system's main architecture and the sea-trial test results are provided in this paper.

  18. Alternate biomass harvesting systems using conventional equipment

    Treesearch

    Bryce J. Stokes; William F. Watson; I. Winston Savelle

    1985-01-01

    Three harvesting methods were field tested in two stand types. Costs and stand utilization rates were developed for a conventional harvesting system, without energy wood recovery; a two-pass roundwood and energy wood system; and a one-pass system that harvests roundwood and energy wood. The systems harvested 20-acre test blocks in two pine pulpwood plantations and in a...

  19. TU-AB-202-06: Quantitative Evaluation of Deformable Image Registration in MRI-Guided Adaptive Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mooney, K; Zhao, T; Green, O

    Purpose: To assess the performance of the deformable image registration algorithm used for MRI-guided adaptive radiation therapy using image feature analysis. Methods: MR images were collected from five patients treated on the MRIdian (ViewRay, Inc., Oakwood Village, OH), a three-head Cobalt-60 therapy machine with a 0.35 T MR system. The images were acquired immediately prior to treatment with a uniform 1.5 mm resolution. Treatment sites were as follows: head/neck, lung, breast, stomach, and bladder. Deformable image registration was performed using the ViewRay software between the first-fraction MRI and the final-fraction MRI, and the DICE similarity coefficient (DSC) for the skin contours was reported. The SIFT and Harris feature detection and matching algorithms identified point features in each image separately, then found matching features in the other image. The target registration error (TRE) was defined as the vector distance between matched features on the two image sets. Each deformation was evaluated based on comparison of average TRE and DSC. Results: Image feature analysis produced between 2000-9500 points for evaluation on the patient images. The average (± standard deviation) TRE for all patients was 3.3 mm (±3.1 mm), and the passing rate of TRE<3 mm was 60% on the images. The head/neck patient had the best average TRE (1.9 mm±2.3 mm) and the best passing rate (80%). The lung patient had the worst average TRE (4.8 mm±3.3 mm) and the worst passing rate (37.2%). DSC was not significantly correlated with either TRE (p=0.63) or passing rate (p=0.55). Conclusions: Feature matching provides a quantitative assessment of deformable image registration, with a large number of data points for analysis. The TRE of matched features can be used to evaluate the registration of many objects throughout the volume, whereas DSC mainly provides a measure of gross overlap. We have a research agreement with ViewRay Inc.
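
    The TRE statistic used here is simply the per-pair Euclidean distance between matched feature locations in the two image sets. A minimal sketch with synthetic placeholder coordinates:

        # Minimal TRE sketch: mean, spread, and passing rate (<3 mm) of the
        # per-pair distances between matched feature points.
        import numpy as np

        def tre_stats(matched_a_mm, matched_b_mm, tol_mm=3.0):
            """matched_a_mm, matched_b_mm: (N, 3) matched feature coordinates."""
            tre = np.linalg.norm(matched_a_mm - matched_b_mm, axis=1)
            return tre.mean(), tre.std(), (tre < tol_mm).mean()

        rng = np.random.default_rng(0)
        pts = rng.uniform(0, 300, (5000, 3))              # features, in mm
        warped = pts + rng.normal(0, 2.0, pts.shape)      # registration residual
        mean, sd, passing = tre_stats(pts, warped)
        print(f"TRE = {mean:.1f} +/- {sd:.1f} mm; passing(<3 mm) = {passing:.0%}")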

  20. Automatic centring and bonding of lenses

    NASA Astrophysics Data System (ADS)

    Krey, Stefan; Heinisch, J.; Dumitrescu, E.

    2007-05-01

    We present an automatic bonding station which is able to center and bond individual lenses or doublets to a barrel with sub-micron centring accuracy. The complete manufacturing cycle includes glue dispensing and UV curing. During the process, the state of centring is continuously controlled by the vision software, and the final result is recorded to a file for process statistics. Simple pass or fail results are displayed to the operator at the end of the process.

  1. Use of a hardware token for Grid authentication by the MICE data distribution framework

    NASA Astrophysics Data System (ADS)

    Nebrensky, JJ; Martyniak, J.

    2017-10-01

    The international Muon Ionization Cooling Experiment (MICE) is designed to demonstrate the principle of muon ionisation cooling for the first time. Data distribution and archiving, batch reprocessing, and simulation are all carried out using the EGI Grid infrastructure, in particular the facilities provided by GridPP in the UK. To prevent interference - especially accidental data deletion - these activities are separated by different VOMS roles. Data acquisition, in particular, can involve 24/7 operation for a number of weeks, so a valid, VOMS-enabled Grid proxy must be made available continuously over that time in order to move data out of the MICE Local Control Room at the experiment. The MICE "Data Mover" agent is now using a robot certificate stored on a hardware token (Feitian ePass2003), from which a cron job generates a “plain” proxy to which the VOMS authorisation extensions are added in a separate transaction. A valid short-lifetime proxy is thus continuously available to the Data Mover process. The Feitian ePass2003 was chosen because it was both significantly cheaper and easier to actually purchase than the token commonly referred to in the community at that time; however, there was no software support for the hardware. This paper describes the software packages, process, and commands used to deploy the token into production.

  2. A multicenter prospective study of surgical audit systems.

    PubMed

    Haga, Yoshio; Ikejiri, Koji; Wada, Yasuo; Takahashi, Tadateru; Ikenaga, Masakazu; Akiyama, Noriyoshi; Koike, Shoichiro; Koseki, Masato; Saitoh, Toshihiro

    2011-01-01

    This study was undertaken to evaluate a modified form of the Estimation of Physiologic Ability and Surgical Stress (E-PASS) system for surgical audit, comparing it with other existing models. Although several scoring systems have been devised for surgical audit, no nation-wide survey had yet been performed. We modified our previous E-PASS surgical audit system by computing the weights of 41 procedures, using data from 4925 patients who underwent elective digestive surgery, and designated it mE-PASS. Subsequently, a prospective cohort study was conducted in 43 national hospitals in Japan from April 1, 2005, to April 8, 2007. Variables for E-PASS and the American Society of Anesthesiologists (ASA) status-based model were collected for 5272 surgically treated patients. Of the 5272 patients, we also collected data for the Portsmouth modification of the Physiologic and Operative Severity Score for the enUmeration of Mortality and morbidity (P-POSSUM) in 3128 patients. The area under the receiver operating characteristic curve (AUC) was used to evaluate discrimination performance in detecting in-hospital mortality. The ratio of observed to estimated in-hospital mortality rates (OE ratio) was defined as a measure of quality. The numbers of variables required were 10 for E-PASS, 7 for mE-PASS, 20 for P-POSSUM, and 4 for the ASA status-based model. The AUC (95% confidence interval) values were 0.86 (0.79-0.93) for E-PASS, 0.86 (0.79-0.92) for mE-PASS, 0.81 (0.75-0.88) for P-POSSUM, and 0.73 (0.63-0.83) for the ASA status-based model. The OE ratios for mE-PASS among large-volume hospitals correlated significantly with those for E-PASS (R = 0.93, N = 9, P = 0.00026), P-POSSUM (R = 0.96, N = 6, P = 0.0021), and the ASA status-based model (R = 0.83, N = 9, P = 0.0051). Because of its ease of use, accuracy, and generalizability, mE-PASS is a candidate for a nation-wide survey.
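
    The two headline statistics of such an audit, discrimination (AUC for in-hospital mortality) and quality (observed-to-expected mortality ratio), can be sketched as follows on synthetic placeholder data, not study data:

        # Minimal sketch: AUC via scikit-learn plus the OE ratio, on
        # simulated risks and outcomes.
        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        predicted_risk = rng.beta(1, 30, 5000)            # model's mortality risk
        died = rng.random(5000) < predicted_risk          # simulated outcomes

        auc = roc_auc_score(died, predicted_risk)
        oe_ratio = died.sum() / predicted_risk.sum()      # observed / expected

        print(f"AUC = {auc:.2f}, OE ratio = {oe_ratio:.2f}")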

  3. 40 CFR 205.171-8 - Passing or failing under SEA.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    40 CFR 205.171-8 - Passing or failing under SEA. (a) A failing exhaust system is one which, when installed on any motorcycle which is in... in Column A, the sample passes. (c) Pass or failure of a SEA takes place when a decision that an...

  4. Three-Dimensional Path Planning Software-Assisted Transjugular Intrahepatic Portosystemic Shunt: A Technical Modification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsauo, Jiaywei, E-mail: 80732059@qq.com; Luo, Xuefeng, E-mail: luobo-913@126.com; Ye, Linchao, E-mail: linchao.ye@siemens.com

    2015-06-15

    Purpose: This study was designed to report our results with a modified technique of three-dimensional (3D) path planning software-assisted transjugular intrahepatic portosystemic shunt (TIPS). Methods: 3D path planning software was recently developed to facilitate TIPS creation by using two carbon dioxide portograms acquired at least 20° apart to generate a 3D path for overlay needle guidance. However, one shortcoming is that puncturing along the overlay would be technically impossible if the angle of the liver access set and the angle of the 3D path are not the same. To solve this problem, a prototype 3D path planning software was fitted with a utility to calculate the angle of the 3D path. Using this, we modified the angle of the liver access set accordingly during the procedure in ten patients. Results: Failure for technical reasons occurred in three patients (unsuccessful wedged hepatic venography in two cases, software technical failure in one case). The procedure was successful in the remaining seven patients, and only one needle pass was required to obtain portal vein access in each case. The course of puncture was comparable to the 3D path in all patients. No procedure-related complication occurred following the procedures. Conclusions: Adjusting the angle of the liver access set to match the angle of the 3D path determined by the software appears to be a favorable modification to the technique of 3D path planning software-assisted TIPS.

  5. The evaluation of a 2D diode array in "magic phantom" for use in high dose rate brachytherapy pretreatment quality assurance.

    PubMed

    Espinoza, A; Petasecca, M; Fuduli, I; Howie, A; Bucci, J; Corde, S; Jackson, M; Lerch, M L F; Rosenfeld, A B

    2015-02-01

    High dose rate (HDR) brachytherapy is a treatment method that is used increasingly worldwide. The development of a sound quality assurance program for the verification of treatment deliveries can be challenging due to the high source activity utilized and the need for precise measurements of dwell positions and times. This paper describes the application of a novel phantom, based on a 2D 11 × 11 diode array detection system, named "magic phantom" (MPh), to accurately measure plan dwell positions and times, compare them directly to the treatment plan, determine errors in treatment delivery, and calculate absorbed dose. The magic phantom system was CT scanned and a 20 catheter plan was generated to simulate a nonspecific treatment scenario. This plan was delivered to the MPh and, using a custom developed software suite, the dwell positions and times were measured and compared to the plan. The original plan was also modified, with changes not disclosed to the primary authors, and measured again using the device and software to determine the modifications. A new metric, the "position-time gamma index," was developed to quantify the quality of a treatment delivery when compared to the treatment plan. The MPh was evaluated to determine the minimum measurable dwell time and step size. The incorporation of the TG-43U1 formalism directly into the software allows for dose calculations to be made based on the measured plan. The estimated dose distributions calculated by the software were compared to the treatment plan and to calibrated EBT3 film, using the 2D gamma analysis method. For the original plan, the magic phantom system was capable of measuring all dwell points and dwell times and the majority were found to be within 0.93 mm and 0.25 s, respectively, from the plan. By measuring the altered plan and comparing it to the unmodified treatment plan, the use of the position-time gamma index showed that all modifications made could be readily detected. The MPh was able to measure dwell times down to 0.067 ± 0.001 s and planned dwell positions separated by 1 mm. The dose calculation carried out by the MPh software was found to be in agreement with values calculated by the treatment planning system within 0.75%. Using the 2D gamma index, the dose map of the MPh plane and measured EBT3 were found to have a pass rate of over 95% when compared to the original plan. The application of this magic phantom quality assurance system to HDR brachytherapy has demonstrated promising ability to perform the verification of treatment plans, based upon the measured dwell positions and times. The introduction of the quantitative position-time gamma index allows for direct comparison of measured parameters against the plan and could be used prior to patient treatment to ensure accurate delivery. © 2015 American Association of Physicists in Medicine.
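
    The position-time gamma index introduced here adapts the dose-gamma idea to the dwell domain: position error is normalized by a position tolerance and dwell-time error by a time tolerance, and a dwell passes when the combined index is ≤ 1. A minimal sketch under that reading follows; the paper's exact formulation may differ, and the tolerances below merely echo the reported ~1 mm and ~0.25 s resolutions.

        # Minimal sketch of a position-time gamma index for HDR dwells:
        # each measured dwell takes the minimum combined normalized
        # distance to the planned dwells.
        import numpy as np

        def position_time_gamma(planned, measured, tol_mm=1.0, tol_s=0.25):
            """planned, measured: arrays of (position_mm, dwell_time_s)."""
            gammas = []
            for pos_m, t_m in measured:
                g2 = (((planned[:, 0] - pos_m) / tol_mm) ** 2
                      + ((planned[:, 1] - t_m) / tol_s) ** 2)
                gammas.append(np.sqrt(g2.min()))
            return np.array(gammas)

        plan = np.array([[0.0, 2.0], [5.0, 3.0], [10.0, 2.5]])
        meas = np.array([[0.4, 2.1], [5.2, 3.0], [11.5, 2.5]])
        g = position_time_gamma(plan, meas)
        print("gamma:", np.round(g, 2), "pass rate:", (g <= 1).mean())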

  6. Systems and methods for circuit lifetime evaluation

    NASA Technical Reports Server (NTRS)

    Heaps, Timothy L. (Inventor); Sheldon, Douglas J. (Inventor); Bowerman, Paul N. (Inventor); Everline, Chester J. (Inventor); Shalom, Eddy (Inventor); Rasmussen, Robert D. (Inventor)

    2013-01-01

    Systems and methods for estimating the lifetime of an electrical system in accordance with embodiments of the invention are disclosed. One embodiment of the invention includes iteratively performing Worst Case Analysis (WCA) on a system design with respect to different system lifetimes, using a computer, to determine the lifetime at which the worst-case performance of the system indicates the system will pass with zero margin or fail within a predetermined margin for error, given the environment experienced by the system during its lifetime. In addition, performing WCA on a system with respect to a specific system lifetime includes identifying subcircuits within the system; performing Extreme Value Analysis (EVA) with respect to each subcircuit to determine whether the subcircuit fails EVA for the specific system lifetime; when the subcircuit passes EVA, determining that the subcircuit does not fail WCA for the specified system lifetime; when a subcircuit fails EVA, performing at least one additional WCA process that provides a tighter bound on the WCA than EVA to determine whether the subcircuit fails WCA for the specified system lifetime; determining that the system passes WCA with respect to the specific system lifetime when all subcircuits pass WCA; and determining that the system fails WCA when at least one subcircuit fails WCA.
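
    The iteration described in the claim — sweep candidate lifetimes, run cheap EVA per subcircuit first, and escalate to a tighter, more expensive WCA only on EVA failure — can be sketched as follows. The analysis functions are placeholders, since real EVA/WCA depend on the circuit models.

        # Minimal sketch of the EVA-then-tighter-WCA lifetime search.
        def system_lifetime(subcircuits, lifetimes, eva, tight_wca):
            """Return the longest lifetime at which every subcircuit passes WCA."""
            best = None
            for t in sorted(lifetimes):
                ok = all(eva(sc, t) or tight_wca(sc, t) for sc in subcircuits)
                if ok:
                    best = t        # system passes WCA at this lifetime
                else:
                    break           # degradation is monotone: longer lifetimes fail too
            return best

        # Toy stand-ins: EVA is conservative (passes less often); the tighter
        # WCA recovers some margin.
        eva = lambda sc, t: t <= sc["eva_limit"]
        tight = lambda sc, t: t <= sc["wca_limit"]
        parts = [{"eva_limit": 7, "wca_limit": 10},
                 {"eva_limit": 12, "wca_limit": 15}]
        print(system_lifetime(parts, range(1, 20), eva, tight))   # -> 10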

  7. SAGA: A project to automate the management of software production systems

    NASA Technical Reports Server (NTRS)

    Campbell, Roy H.; Beckman-Davies, C. S.; Benzinger, L.; Beshers, G.; Laliberte, D.; Render, H.; Sum, R.; Smith, W.; Terwilliger, R.

    1986-01-01

    Research into software development is required to reduce its production cost and to improve its quality. Modern software systems, such as the embedded software required for NASA's space station initiative, stretch current software engineering techniques. The requirement to build large, reliable, and maintainable software systems increases with time. Much theoretical and practical research is in progress to improve software engineering techniques. One such technique is to build a software system or environment which directly supports the software engineering process; the SAGA project comprises the research necessary to design and build a software development environment which automates the software engineering process. Progress under SAGA is described.

  8. Estimation of physiologic ability and surgical stress (E-PASS) scoring system could provide preoperative advice on whether to undergo laparoscopic surgery for colorectal cancer patients with a high physiological risk

    PubMed Central

    Zhang, Ao; Liu, Tingting; Zheng, Kaiyuan; Liu, Ningbo; Huang, Fei; Li, Weidong; Liu, Tong; Fu, Weihua

    2017-01-01

    Abstract: Laparoscopic colorectal surgery has been widely used for colorectal cancer patients and shows a favorable outcome with respect to the postoperative morbidity rate. We attempted to evaluate the physiological status of patients by means of the Estimation of Physiologic Ability and Surgical Stress (E-PASS) system and to analyze how the postoperative morbidity rates of open and laparoscopic colorectal cancer surgery differ in patients with different physiological status. In total, 550 colorectal cancer patients who underwent surgical treatment were included. E-PASS and some conventional scoring systems were reviewed to examine their mortality prediction ability. The preoperative risk score (PRS) in the E-PASS system was used to evaluate the physiological status of patients. The difference in postoperative morbidity rate between open and laparoscopic colorectal cancer surgeries was analyzed separately in patients with different physiological status. E-PASS had better prediction ability than the other conventional scoring systems in colorectal cancer surgeries. Postoperative morbidities developed in 143 patients. The parameters in the E-PASS system had positive correlations with postoperative morbidity. The overall postoperative morbidity rate of laparoscopic surgeries was lower than that of open surgeries (19.61% vs 28.46%), but the postoperative morbidity rate of laparoscopic surgeries increased more steeply than that of open surgeries as PRS increased. When PRS exceeded 0.7, the postoperative morbidity rate of laparoscopic surgeries would exceed that of open surgeries. The E-PASS system was capable of evaluating the physiological and surgical risk of colorectal cancer surgery. PRS could assist preoperative decision-making on the surgical method. Colorectal cancer patients assessed with a low physiological risk by PRS would be safe to undergo laparoscopic surgery. On the contrary, surgeons should decide prudently on the operation method for patients with a high physiological risk. PMID:28816959
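
    As a structural illustration of how the PRS feeds the decision rule reported above (laparoscopy considered safe when PRS ≤ 0.7), a minimal sketch follows. The coefficients are placeholders, not the published E-PASS weights; only the factor list (age, severe heart disease, severe pulmonary disease, diabetes mellitus, performance status index, ASA class) follows the E-PASS literature. Consult Haga et al. for the actual regression equation.

        # Structural sketch of the E-PASS preoperative risk score (PRS).
        # The w_* values are PLACEHOLDERS, not the published coefficients.
        def preoperative_risk_score(age, heart_disease, pulmonary_disease,
                                    diabetes, performance_status, asa_class):
            w0, w_age, w_hd, w_pd, w_dm, w_ps, w_asa = (
                -0.07, 0.003, 0.3, 0.2, 0.15, 0.15, 0.07)   # illustrative only
            return (w0 + w_age * age + w_hd * heart_disease
                    + w_pd * pulmonary_disease + w_dm * diabetes
                    + w_ps * performance_status + w_asa * asa_class)

        prs = preoperative_risk_score(age=72, heart_disease=1, pulmonary_disease=0,
                                      diabetes=1, performance_status=2, asa_class=3)
        approach = "laparoscopic OK" if prs <= 0.7 else "decide cautiously"
        print(f"PRS = {prs:.2f}: {approach}")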

  9. Resistance is Futile: STScI's Science Planning and Scheduling Team Switches From VMS to Unix Operations

    NASA Astrophysics Data System (ADS)

    Adler, D. S.

    2000-12-01

    The Science Planning and Scheduling Team (SPST) of the Space Telescope Science Institute (STScI) has historically operated exclusively under VMS. Due to diminished support for VMS-based platforms at STScI, SPST is in the process of transitioning to Unix operations. In the summer of 1999, SPST selected Python as the primary scripting language for the operational tools and began translation of the VMS DCL code. As of October 2000, SPST has installed a utility library of 16 modules consisting of 8000 lines of code and 80 Python tools consisting of 13000 lines of code. All tasks related to calendar generation have been switched to Unix operations. Current work focuses on translating the tools used to generate the Science Mission Specifications (SMS). The software required to generate the Mission Schedule and Command Loads (PASS), maintained by another team at STScI, will take longer to translate than the rest of the SPST operational code. SPST is planning on creating tools to access PASS from Unix in the short term. We are on schedule to complete the work needed to fully transition SPST to Unix operations (while remotely accessing PASS on VMS) by the fall of 2001.

  10. Computer-aided discovery of biological activity spectra for anti-aging and anti-cancer olive oil oleuropeins.

    PubMed

    Corominas-Faja, Bruna; Santangelo, Elvira; Cuyàs, Elisabet; Micol, Vicente; Joven, Jorge; Ariza, Xavier; Segura-Carretero, Antonio; García, Jordi; Menendez, Javier A

    2014-09-01

    Aging is associated with common conditions, including cancer, diabetes, cardiovascular disease, and Alzheimer's disease. The type of multi-targeted pharmacological approach necessary to address a complex multifaceted disease such as aging might take advantage of pleiotropic natural polyphenols affecting a wide variety of biological processes. We have recently postulated that the secoiridoids oleuropein aglycone (OA) and decarboxymethyl oleuropein aglycone (DOA), two complex polyphenols present in health-promoting extra virgin olive oil (EVOO), might constitute a new family of plant-produced gerosuppressant agents. This paper describes an analysis of the biological activity spectra (BAS) of OA and DOA using PASS (Prediction of Activity Spectra for Substances) software. PASS can predict thousands of biological activities, as the BAS of a compound is an intrinsic property that is largely dependent on the compound's structure and reflects pharmacological effects, physiological and biochemical mechanisms of action, and specific toxicities. Using Pharmaexpert, a tool that analyzes the PASS-predicted BAS of substances based on thousands of "mechanism-effect" and "effect-mechanism" relationships, we illuminate hypothesis-generating pharmacological effects, mechanisms of action, and targets that might underlie the anti-aging/anti-cancer activities of the gerosuppressant EVOO oleuropeins.

  11. Inquiry into the Indigenous, Cultural and Traditional Astronomical Knowledge: A case of the Lamba land of Zambia

    NASA Astrophysics Data System (ADS)

    Simpemba, Prospery C.

    2015-08-01

    Indigenous astronomy in the context of Zambia is the oral astronomical knowledge, culture, and beliefs relating to celestial bodies, astronomical events, and related behaviour that are held by elderly persons and passed on to younger generations. Little of it is written down, and with the passing away of its custodians this knowledge is threatened with extinction. A mini study of the astronomical beliefs and culture of the ancient Zambian community during the International Year of Astronomy (IYA) 2009 revealed that such knowledge existed. A comprehensive study assesses cultural and traditional knowledge on astronomy and ascertains how much of this knowledge has been passed on to the younger generations. Open-ended interviews were conducted using questionnaires and focus group discussions. Respondents were identified by snowball sampling of the elderly and random sampling of the middle-aged and young. Nine randomly sampled districts of the Copperbelt Province were considered. The collected data have been analysed using MAXQDA software. Knowledge of traditional astronomy is high among the elderly and progressively lower in younger generations, hence the need for documenting it and introducing it into the school curriculum and regular public discourse.

  12. Evaluation of the utility of the Estimation of Physiologic Ability and Surgical Stress score for predicting post-operative morbidity after orthopaedic surgery.

    PubMed

    Nagata, Takehiro; Hirose, Jun; Nakamura, Takayuki; Tokunaga, Takuya; Uehara, Yusuke; Mizuta, Hiroshi

    2015-11-01

    The purpose of this study was to investigate the utility of the Estimation of Physiologic Ability and Surgical Stress (E-PASS) scoring system for predicting post-operative morbidity. We included 1,883 patients (mean age, 52.1 years) who underwent orthopaedic surgery. The post-operative complications were classified as surgical site and non-surgical site complications, and the relationship between the E-PASS scores and post-operative morbidity was investigated. The incidence of post-operative complications (n = 274) significantly increased with an increase in E-PASS scores (p < 0.001). The areas under the curve for the comprehensive risk score of the E-PASS scoring system for overall and non-surgical site complications were 0.777 and 0.794, respectively. The E-PASS scoring system showed some utility in predicting post-operative morbidity after general orthopaedic surgery. However, creating a new risk score that is more suitable for orthopaedic surgery will be challenging.

  13. Software Safety Risk in Legacy Safety-Critical Computer Systems

    NASA Technical Reports Server (NTRS)

    Hill, Janice; Baggs, Rhoda

    2007-01-01

    Safety-critical computer systems must be engineered to meet system and software safety requirements. For legacy safety-critical computer systems, software safety requirements may not have been formally specified during development. When process-oriented software safety requirements are levied on a legacy system after the fact, where software development artifacts don't exist or are incomplete, the question becomes 'how can this be done?' The risks associated with only meeting certain software safety requirements in a legacy safety-critical computer system must be addressed should such systems be selected as candidates for reuse. This paper proposes a method for ascertaining formally, a software safety risk assessment, that provides measurements for software safety for legacy systems which may or may not have a suite of software engineering documentation that is now normally required. It relies upon the NASA Software Safety Standard, risk assessment methods based upon the Taxonomy-Based Questionnaire, and the application of reverse engineering CASE tools to produce original design documents for legacy systems.

  14. Modernizing Systems and Software: How Evolving Trends in Systems and Software Technology Bode Well for Advancing the Precision of Technology

    DTIC Science & Technology

    2009-04-23

    Search excerpt (table-of-contents fragments): "...of Code"; "Need for increased functionality will be a forcing function to bring the fields of software and systems engineering..."; "...of Software-Intensive Systems is Increasing"; "How Evolving Trends in Systems and Software Technologies Bode Well for Advancing the Precision of ...Engineering in Continued Partnership".

  15. Sensitivity of 3D Dose Verification to Multileaf Collimator Misalignments in Stereotactic Body Radiation Therapy of Spinal Tumor.

    PubMed

    Xin-Ye, Ni; Ren, Lei; Yan, Hui; Yin, Fang-Fang

    2016-12-01

    This study aimed to assess the sensitivity of the Delta4 system to ordinary-field multileaf collimator misalignments, system misalignments, random misalignments, and misalignments caused by gravity acting on the multileaf collimator in stereotactic body radiation therapy. (1) Two field sizes, 2.00 cm (X) × 6.00 cm (Y) and 7.00 cm (X) × 6.00 cm (Y), were set, and the X1 and X2 leaves of the multileaf collimator were opened simultaneously. (2) Three stereotactic body radiation therapy cases of spinal tumor were used. The dose to the planning target volume was 1800 cGy in 3 fractions. The 4 simulated types were as follows: (1) the X1 and X2 leaves of the multileaf collimator were opened simultaneously, (2) only X1 of the multileaf collimator and the unilateral leaf were opened, (3) the X1 and X2 leaves of the multileaf collimator were opened randomly, and (4) a gravity effect was simulated, in which the X1 and X2 leaves of the multileaf collimator shifted in the same direction. The difference between the 3-dimensional dose distribution measured by the Delta4 and the dose distribution of the original plan from the treatment planning system was analyzed with γ index criteria of 3.0 mm/3.0%, 2.5 mm/2.5%, 2.0 mm/2.0%, 2.5 mm/1.5%, and 1.0 mm/1.0%. (1) For the 2.00 cm (X) × 6.00 cm (Y) field, the γ pass rate of the original was 100% with 2.5 mm/2.5% as the statistical standard. The pass rate decreased to 95.9% and 89.4% when the X1 and X2 leaves of the multileaf collimator were opened within 0.3 and 0.5 mm, respectively. For the 7.00 cm (X) × 6.00 cm (Y) field with 1.5 mm/1.5% as the statistical standard, the pass rate of the original was 96.5%. After X1 and X2 of the multileaf collimator were opened within 0.3 mm, the pass rate decreased to below 95%; it remained above 90% within a 3 mm opening. (2) For spinal tumor, the change in the planning target volume V18 under the various modes calculated using the treatment planning system was within 1%. However, the maximum dose deviation of the spinal cord was high. With a gravity shift of -0.25 mm, the maximum spinal cord dose deviation changed minimally, increasing by 6.8% relative to the original; at the largest opening of 1.00 mm, the deviation increased by 47.7% relative to the original. Moreover, the pass rate of the original determined with the Delta4 was 100% with 3 mm/3% as the statistical standard. The pass rate was 97.5% for the 0.25 mm opening and above 95% for the 0.5 mm opening A, the 0.25 mm opening A, the whole gravity series, and the 0.20 mm random opening. The pass rate was above 90% with 2.0 mm/2.0% as the statistical standard for the original and the 0.25 mm gravity case. The differences in pass rates among the -0.25 mm gravity, 0.25 mm opening A, 0.20 mm random opening, and original cases were not statistically significant, as calculated using SPSS 11.0 software (P > .05). Different Delta4 analysis standards were examined in different field sizes to improve the detection sensitivity to multileaf collimator position on the basis of a 90% pass rate. In stereotactic body radiation therapy of spinal tumor, the 2.0 mm/2.0% standard can reveal dosimetric differences caused by minor multileaf collimator position errors that the 3.0 mm/3.0% statistical standard cannot. However, some position deviations of the misalignments that deliver high dose to the spinal cord still cannot be detected. © The Author(s) 2015.
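
    The γ-index analysis used throughout this abstract combines a dose-difference criterion with a distance-to-agreement criterion and reports the fraction of points with γ ≤ 1 as the pass rate. As a rough illustration of the computation, here is a brute-force 1D sketch with made-up dose profiles; it is not the Delta4 vendor algorithm:

        import numpy as np

        def gamma_1d(ref, ev, spacing_mm, dta_mm=3.0, dd_pct=3.0):
            """Minimal 1D global gamma index (illustrative, not a clinical tool)."""
            dd_abs = dd_pct / 100.0 * ref.max()           # global normalization
            x = np.arange(len(ref)) * spacing_mm
            gamma = np.empty(len(ev), dtype=float)
            for i, (xi, de) in enumerate(zip(x, ev)):
                dist2 = ((x - xi) / dta_mm) ** 2          # spatial term
                dose2 = ((ref - de) / dd_abs) ** 2        # dose-difference term
                gamma[i] = np.sqrt((dist2 + dose2).min()) # Low's gamma: min over ref points
            return gamma

        x = np.linspace(0, 60, 121)                       # 0.5 mm grid
        ref = np.exp(-((x - 30) / 10) ** 2) * 100         # toy Gaussian "field"
        ev = np.exp(-((x - 30.4) / 10) ** 2) * 101        # 0.4 mm shift, 1% scale error
        g = gamma_1d(ref, ev, spacing_mm=0.5)
        print(f"gamma pass rate (3 mm/3%): {100 * (g <= 1).mean():.1f}%")

    Tightening dta_mm and dd_pct in this sketch mimics moving from the 3.0 mm/3.0% to the 2.0 mm/2.0% standard discussed above.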

  16. An Architecture, System Engineering, and Acquisition Approach for Space System Software Resiliency

    NASA Astrophysics Data System (ADS)

    Phillips, Dewanne Marie

    Software intensive space systems can harbor defects and vulnerabilities that may enable external adversaries or malicious insiders to disrupt or disable system functions, risking mission compromise or loss. Mitigating this risk demands a sustained focus on the security and resiliency of the system architecture, including software, hardware, and other components. Robust software engineering practices contribute to the foundation of a resilient system so that the system "can take a hit to a critical component and recover in a known, bounded, and generally acceptable period of time". Software resiliency must be a priority and addressed early in life cycle development to contribute to a secure and dependable space system. Those who develop, implement, and operate software intensive space systems must determine the factors and systems engineering practices to address when investing in software resiliency. This dissertation offers methodical approaches for improving space system resiliency through software architecture design, systems engineering, and increased software security, thereby reducing the risk of latent software defects and vulnerabilities. By giving greater attention to the early life cycle phases of development, we can alter the engineering process to help detect, eliminate, and avoid vulnerabilities before space systems are delivered. To achieve this objective, this dissertation identifies knowledge, techniques, and tools that engineers and managers can use to recognize how vulnerabilities are produced and discovered, so that they can learn to circumvent them in future efforts. We conducted a systematic review of existing architectural practices, standards, security and coding practices, and the various threats, defects, and vulnerabilities that impact space systems, drawing on hundreds of relevant publications and interviews of subject matter experts. We expanded on the system-level body of knowledge for resiliency and identified a new software architecture framework and acquisition methodology to improve the resiliency of space systems from a software perspective, with an emphasis on the early phases of the systems engineering life cycle. This methodology involves seven steps: 1) Define technical resiliency requirements, 1a) Identify standards/policy for software resiliency, 2) Develop a request for proposal (RFP)/statement of work (SOW) for resilient space systems software, 3) Define software resiliency goals for space systems, 4) Establish software resiliency quality attributes, 5) Perform architectural tradeoffs and identify risks, 6) Conduct architecture assessments as part of the procurement process, and 7) Ascertain space system software architecture resiliency metrics. Data illustrate that software vulnerabilities can lead to opportunities for malicious cyber activities, which could degrade the space mission capability for the user community. Reducing the number of vulnerabilities by improving architecture and software system engineering practices can contribute to making space systems more resilient. Since cyber-attacks are enabled by shortfalls in software, robust software engineering practices and an architectural design are foundational to resiliency, a quality that allows the system to "take a hit to a critical component and recover in a known, bounded, and generally acceptable period of time".
To achieve software resiliency for space systems, acquirers and suppliers must identify relevant factors and systems engineering practices to apply across the lifecycle, in software requirements analysis, architecture development, design, implementation, verification and validation, and maintenance phases.

  17. Development of a portable remote sensing system for measurement of diesel emissions from passing diesel trucks.

    DOT National Transportation Integrated Search

    2011-04-08

    A wireless remote-sensing system has been developed for measurement of NOx and particulate matters (PM) emissions from passing diesel trucks. The NOx measurement system has a UV light source with quartz fiber optics that focused the light source into...

  18. SU-F-T-619: Dose Evaluation of Specific Patient Plans Based On Monte Carlo Algorithm for a CyberKnife Stereotactic Radiosurgery System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piao, J; PLA 302 Hospital, Beijing; Xu, S

    2016-06-15

    Purpose: This study uses Monte Carlo methods to simulate the CyberKnife system and develops a third-party tool to evaluate the dose verification of specific patient plans in the TPS. Methods: By simulating the treatment head using the BEAMnrc and DOSXYZnrc software, calculated and measured data were compared to determine the beam parameters. The dose distributions calculated by the Ray-tracing and Monte Carlo algorithms of the TPS (Multiplan Ver4.0.2) and by the in-house Monte Carlo simulation method were analyzed for 30 patient plans, comprising 10 head, 10 lung, and 10 liver cases. A γ analysis with the combined 3 mm/3% criteria was introduced to quantitatively evaluate the difference in accuracy between the three algorithms. Results: More than 90% of the global error points were less than 2% in the comparison of the PDD and OAR curves after determining the mean energy and FWHM; a relatively ideal Monte Carlo beam model was thus established. Based on the quantitative evaluation of dose accuracy for the three algorithms, the γ analysis showed good PTV passing rates across the 30 plans between the Monte Carlo simulation and the TPS Monte Carlo algorithm (84.88±9.67% for head, 98.83±1.05% for liver, 98.26±1.87% for lung). The PTV passing rates in head and liver plans between the Monte Carlo simulation and the TPS Ray-tracing algorithm were also good (95.93±3.12% and 99.84±0.33%, respectively). However, the difference in DVHs in lung plans between the Monte Carlo simulation and the Ray-tracing algorithm was obvious, and the γ passing rate (51.263±38.964%) was poor. It is feasible to use Monte Carlo simulation for verifying the dose distribution of patient plans. Conclusion: The Monte Carlo simulation algorithm developed for the CyberKnife system in this study can serve as a reference third-party tool, playing an important role in dose verification of patient plans. This work was supported in part by a grant from the Chinese Natural Science Foundation (Grant No. 11275105). Thanks for the support from Accuray Corp.

  19. Ultrasonic sensing of GMAW: Laser/EMAT defect detection system. [Gas Metal Arc Welding (GMAW), Electromagnetic acoustic transducer (EMAT)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlson, N.M.; Johnson, J.A.; Larsen, E.D.

    1992-01-01

    In-process ultrasonic sensing of welding allows detection of weld defects in real time. A noncontacting ultrasonic system is being developed to operate in a production environment. The principal components are a pulsed laser for ultrasound generation and an electromagnetic acoustic transducer (EMAT) for ultrasound reception. A PC-based data acquisition system determines the quality of the weld on a pass-by-pass basis. The laser/EMAT system interrogates the area in the weld volume where defects are most likely to occur. This area of interest is identified by computer calculations on a pass-by-pass basis using weld planning information provided by the off-line programmer. The absence of a signal above the threshold level in the computer-calculated time interval indicates a disruption of the sound path by a defect. The ultrasonic sensor system then provides an input signal to the weld controller about the defect condition. 8 refs.
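
    The pass/fail logic described above (no signal above threshold inside the computer-calculated time interval implies a disrupted sound path) can be sketched as a simple gate check. The function name, sampling parameters, and wave speed below are illustrative assumptions, not the published system:

        import numpy as np

        def defect_detected(signal, fs_hz, path_len_mm, velocity_mm_per_us,
                            window_us=2.0, threshold=0.1):
            """Flag a defect if no echo exceeds `threshold` inside the expected
            time-of-flight window (illustrative gating logic only)."""
            tof_us = path_len_mm / velocity_mm_per_us           # expected arrival time
            t0 = int((tof_us - window_us / 2) * fs_hz * 1e-6)   # window start sample
            t1 = int((tof_us + window_us / 2) * fs_hz * 1e-6)   # window end sample
            gate = np.abs(signal[max(t0, 0):t1])
            # Absence of a supra-threshold signal in the gate => path disrupted.
            return gate.size == 0 or gate.max() < threshold

        fs = 50e6                            # assumed 50 MHz digitizer
        sig = np.zeros(2000)
        sig[937] = 1.0                       # simulated echo near 18.75 us
        print(defect_detected(sig, fs, path_len_mm=60.0, velocity_mm_per_us=3.2))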

  20. Protocol standards and implementation within the digital engineering laboratory computer network (DELNET) using the universal network interface device (UNID). Part 2

    NASA Astrophysics Data System (ADS)

    Phister, P. W., Jr.

    1983-12-01

    Development of the Air Force Institute of Technology's Digital Engineering Laboratory Network (DELNET) was continued with an initial draft of a protocol standard for all seven layers specified by the International Standards Organization's (ISO) Reference Model for Open Systems Interconnection. This effort centered on restructuring the Network Layer to perform datagram routing and conform to the developed protocol standards, and on developing the software modules for the upper four protocol layers residing within the DELNET Monitor (Zilog MCZ 1/25 computer system). Within the guidelines of the ISO Reference Model, the Transport Layer was developed by combining the Internet Header Format (IHF) with the Transport Control Protocol (TCP) to create a 128-byte datagram. A limited Application Layer was also created to pass the Gettysburg Address through the DELNET. This study formulated a first draft of the DELNET Protocol Standard and designed, implemented, and tested the Network, Transport, and Application Layers to conform to these protocol standards.
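
    As a rough illustration of the fixed 128-byte datagram idea, the sketch below packs a toy header and payload with zero padding. The header fields and their layout are our assumptions for illustration, not the DELNET standard:

        import struct

        HEADER_FMT = "!BBHHH"                      # version, ttl, length, src, dst (assumed fields)
        HEADER_LEN = struct.calcsize(HEADER_FMT)   # 8 bytes
        DATAGRAM_LEN = 128                         # fixed size, as in the DELNET design

        def build_datagram(src, dst, payload, ttl=16):
            """Pack a fixed 128-byte datagram: header + payload + zero padding."""
            if HEADER_LEN + len(payload) > DATAGRAM_LEN:
                raise ValueError("payload too large for a 128-byte datagram")
            header = struct.pack(HEADER_FMT, 1, ttl, HEADER_LEN + len(payload), src, dst)
            return header + payload + bytes(DATAGRAM_LEN - HEADER_LEN - len(payload))

        # A long text such as the Gettysburg Address would be split across many datagrams.
        dg = build_datagram(src=0x0001, dst=0x0002, payload=b"Four score and seven years ago...")
        assert len(dg) == 128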

  1. Observing Tsunamis in the Ionosphere Using Ground Based GPS Measurements

    NASA Technical Reports Server (NTRS)

    Galvan, D. A.; Komjathy, A.; Song, Y. Tony; Stephens, P.; Hickey, M. P.; Foster, J.

    2011-01-01

    Ground-based Global Positioning System (GPS) measurements of ionospheric Total Electron Content (TEC) show variations consistent with atmospheric internal gravity waves caused by ocean tsunamis following recent seismic events, including the Tohoku tsunami of March 11, 2011. We observe fluctuations correlated in time, space, and wave properties with this tsunami in TEC estimates processed using JPL's Global Ionospheric Mapping Software. These TEC estimates were band-pass filtered to remove ionospheric TEC variations with periods outside the typical range of internal gravity waves caused by tsunamis. Observable variations in TEC appear correlated with the Tohoku tsunami near the epicenter, at Hawaii, and near the west coast of North America. Disturbance magnitudes are 1-10% of the background TEC value. Observations near the epicenter are compared to estimates of expected tsunami-driven TEC variations produced by Embry Riddle Aeronautical University's Spectral Full Wave Model, an atmosphere-ionosphere coupling model, and found to be in good agreement. The potential exists to apply these detection techniques to real-time GPS TEC data, providing estimates of tsunami speed and amplitude that may be useful for future early warning systems.
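
    A band-pass filter of the kind described, isolating periods typical of tsunami-driven internal gravity waves from a TEC time series, might look like the following sketch; the 10-30 minute band and the synthetic series are illustrative choices, not the JPL processing chain:

        import numpy as np
        from scipy.signal import butter, filtfilt

        def bandpass(series, fs_hz, f_lo_hz, f_hi_hz, order=4):
            """Zero-phase Butterworth band-pass (cutoff band assumed for illustration)."""
            nyq = 0.5 * fs_hz
            b, a = butter(order, [f_lo_hz / nyq, f_hi_hz / nyq], btype="band")
            return filtfilt(b, a, series)

        fs = 1.0 / 30.0                                   # 30 s GPS sampling
        t = np.arange(0, 6 * 3600, 30.0)
        tec = 20 + 0.5 * np.sin(2 * np.pi * t / 900) + 0.01 * t   # toy TEC series, TECU
        # Keep roughly 10-30 minute periods; slow trends and fast noise are removed.
        residual = bandpass(tec, fs, 1.0 / (30 * 60), 1.0 / (10 * 60))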

  2. Development of a Phasor Diagram Creator to Visualize the Piston and Displacer Forces in an Advanced Stirling Convertor

    NASA Technical Reports Server (NTRS)

    Saha, Dipanjan; Lewandowski, Edward J.

    2013-01-01

    The steady-state, nearly sinusoidal behavior of the components in a free-piston Stirling engine allows for visualization of the forces in the system using phasor diagrams. Based on Newton's second law, F = ma, any phasor diagrams modeling a given component in a system should close if all of the acting forces have been considered. Since the Advanced Stirling Radioisotope Generator (ASRG), currently being developed for future NASA deep space missions, is made up of such nearly sinusoidally oscillating components, its phasor diagrams would also be expected to close. A graphical user interface (GUI) has been written in MATLAB (MathWorks), which takes user input data, passes it to Sage (Gedeon Associates), a one-dimensional thermodynamic modeling program used to model the Stirling convertor, runs Sage, and then automatically plots the phasor diagrams. Using this software tool, the effect of varying different Sage inputs on the phasor diagrams was determined. The parameters varied were piston amplitude, hot-end temperature, cold-end temperature, operating frequency, and displacer spring constant. These phasor diagrams offer useful insight into convertor operation and performance.
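
    The closure property that underlies these diagrams follows directly from F = ma: in steady sinusoidal operation each force is a complex phasor, and the force phasors must sum to the inertia phasor. A minimal numeric sketch follows, with all mass, stiffness, and damping values invented for illustration; this is not the Sage/MATLAB tool itself:

        import numpy as np

        omega = 2 * np.pi * 102.0        # operating frequency, rad/s (assumed)
        m = 0.5                          # moving mass, kg (assumed)
        X = 5e-3                         # piston amplitude phasor, m (reference phase)

        accel = -omega**2 * X            # a = -w^2 x for x(t) = Re{X e^{jwt}}
        F_inertia = m * accel

        F_spring  = -2.0e4 * X                           # gas-spring term (assumed)
        F_damping = -1j * omega * 30.0 * X               # damping, in quadrature (assumed)
        F_drive   = F_inertia - (F_spring + F_damping)   # whatever force closes the polygon

        closure = F_spring + F_damping + F_drive - F_inertia
        print(abs(closure))              # ~0: the phasor diagram closes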

  3. Developing a model-based decision support system for call-a-ride paratransit service problems.

    DOT National Transportation Integrated Search

    2011-02-01

    Paratransit is the transportation service that supplements larger public transportation : systems by providing individualized rides without fixed routes or timetables. In 1990, : the Americans with Disabilities Act (ADA) was passed which allows passe...

  4. Study of fault tolerant software technology for dynamic systems

    NASA Technical Reports Server (NTRS)

    Caglayan, A. K.; Zacharias, G. L.

    1985-01-01

    The major aim of this study is to investigate the feasibility of using systems-based failure detection isolation and compensation (FDIC) techniques in building fault-tolerant software and extending them, whenever possible, to the domain of software fault tolerance. First, it is shown that systems-based FDIC methods can be extended to develop software error detection techniques by using system models for software modules. In particular, it is demonstrated that systems-based FDIC techniques can yield consistency checks that are easier to implement than acceptance tests based on software specifications. Next, it is shown that systems-based failure compensation techniques can be generalized to the domain of software fault tolerance in developing software error recovery procedures. Finally, the feasibility of using fault-tolerant software in flight software is investigated. In particular, possible system and version instabilities, and functional performance degradation that may occur in N-Version programming applications to flight software are illustrated. Finally, a comparative analysis of N-Version and recovery block techniques in the context of generic blocks in flight software is presented.

  5. Two-pass imputation algorithm for missing value estimation in gene expression time series.

    PubMed

    Tsiporkova, Elena; Boeva, Veselka

    2007-10-01

    Gene expression microarray experiments frequently generate datasets with multiple values missing. However, most of the analysis, mining, and classification methods for gene expression data require a complete matrix of gene array values. Therefore, the accurate estimation of missing values in such datasets has been recognized as an important issue, and several imputation algorithms have already been proposed to the biological community. Most of these approaches, however, are not particularly suitable for time series expression profiles. In view of this, we propose a novel imputation algorithm, which is specially suited for the estimation of missing values in gene expression time series data. The algorithm utilizes Dynamic Time Warping (DTW) distance in order to measure the similarity between time expression profiles, and subsequently selects for each gene expression profile with missing values a dedicated set of candidate profiles for estimation. Three different DTW-based imputation (DTWimpute) algorithms have been considered: position-wise, neighborhood-wise, and two-pass imputation. These have initially been prototyped in Perl, and their accuracy has been evaluated on yeast expression time series data using several different parameter settings. The experiments have shown that the two-pass algorithm consistently outperforms, in particular for datasets with a higher level of missing entries, the neighborhood-wise and the position-wise algorithms. The performance of the two-pass DTWimpute algorithm has further been benchmarked against the weighted K-Nearest Neighbors algorithm, which is widely used in the biological community; the former algorithm has appeared superior to the latter one. Motivated by these findings, indicating clearly the added value of the DTW techniques for missing value estimation in time series data, we have built an optimized C++ implementation of the two-pass DTWimpute algorithm. The software also provides for a choice between three different initial rough imputation methods.
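
    The DTW distance at the core of the approach is the classic dynamic-programming recurrence. A compact sketch (a textbook implementation, not the published DTWimpute code) and its use for ranking candidate donor profiles:

        import numpy as np

        def dtw_distance(a, b):
            """Dynamic Time Warping distance, O(len(a)*len(b)) recurrence."""
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        # Candidate donors for imputation are the profiles closest in DTW distance
        # to the profile that contains missing values.
        profile = np.array([0.1, 0.4, 0.9, 0.7, 0.2])
        candidates = [np.array([0.0, 0.5, 1.0, 0.6, 0.1]),
                      np.array([0.9, 0.8, 0.1, 0.2, 0.3])]
        ranked = sorted(candidates, key=lambda c: dtw_distance(profile, c))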

  6. Fractal analysis of the ischemic transition region in chronic ischemic heart disease using magnetic resonance imaging.

    PubMed

    Michallek, Florian; Dewey, Marc

    2017-04-01

    To introduce a novel hypothesis and method to characterise pathomechanisms underlying myocardial ischemia in chronic ischemic heart disease by local fractal analysis (FA) of the ischemic myocardial transition region in perfusion imaging. Vascular mechanisms to compensate ischemia are regulated at various vascular scales, with their superimposed perfusion pattern being hypothetically self-similar. Dedicated FA software ("FraktalWandler") has been developed. The fractal dimensions during first-pass, FD(first-pass), and recirculation, FD(recirculation), are hypothesised to indicate the predominating pathomechanism and ischemic severity, respectively. Twenty-six patients with evidence of myocardial ischemia in 108 ischemic myocardial segments on magnetic resonance imaging (MRI) were analysed. The 40th and 60th percentiles of FD(first-pass) were used for pathomechanical classification, assigning lesions with FD(first-pass) ≤ 2.335 to predominating coronary microvascular dysfunction (CMD) and ≥ 2.387 to predominating coronary artery disease (CAD). The optimal classification point in ROC analysis was FD(first-pass) = 2.358. FD(recirculation) correlated moderately with per cent diameter stenosis in invasive coronary angiography in lesions classified CAD (r = 0.472, p = 0.001) but not CMD (r = 0.082, p = 0.600). The ischemic transition region may provide information on the pathomechanical composition and severity of myocardial ischemia. FA of this region is feasible and may improve diagnosis compared to traditional noninvasive myocardial perfusion analysis. • A novel hypothesis and method is introduced to pathophysiologically characterise myocardial ischemia. • The ischemic transition region appears a meaningful diagnostic target in perfusion imaging. • Fractal analysis may characterise pathomechanical composition and severity of myocardial ischemia.
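
    Fractal dimension estimation of this kind is commonly done by box counting. The sketch below estimates the dimension of a binary 2D mask; this is a generic method for illustration only, and the paper's local FA of grayscale perfusion data differs in detail:

        import numpy as np

        def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
            """Estimate the fractal dimension of a non-empty binary 2D mask."""
            counts = []
            for s in sizes:
                h = mask.shape[0] // s * s
                w = mask.shape[1] // s * s
                blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
                counts.append(blocks.any(axis=(1, 3)).sum())   # boxes touching the set
            # D is the slope of log N(s) versus log(1/s).
            slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
            return slope

        yy, xx = np.mgrid[:256, :256]
        disk = (xx - 128) ** 2 + (yy - 128) ** 2 < 100 ** 2
        print(round(box_counting_dimension(disk), 2))          # ~2 for a filled region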

  7. Research and development of asymmetrical heat transfer augmentation method in radial channels of blades for high temperature gas turbines

    NASA Astrophysics Data System (ADS)

    Shevchenko, I. V.; Rogalev, A. N.; Garanin, I. V.; Vegera, A. N.; Kindra, V. O.

    2017-11-01

    Serpentine one-and-a-half-pass cooling channel systems are primarily used in blades fabricated by the lost-wax casting process. Heat transfer turbulators such as cross-sectional or angled ribs used in channels of the midchord region fail to eliminate the temperature irregularity between the suction and pressure sides, which reaches 200°C for a first-stage blade of the high-pressure turbine of an aircraft engine. This paper presents the results of a numerical and experimental study of an advanced heat transfer augmentation system in radial channels, developed to align the temperature fields of the suction and pressure sides. A numerical simulation of three-dimensional coolant flow for a wide range of Reynolds numbers was carried out using ANSYS CFX software, and the effect of geometrical parameters on the heat removal asymmetry was determined. Tests of a blade with the proposed intensification system, conducted in a liquid-metal thermostat, confirmed the accuracy of the calculations. Based on the experimental data, correlations for calculating heat transfer coefficients to the cooling air in the studied blade were obtained.

  8. Accelerating Climate and Weather Simulations through Hybrid Computing

    NASA Technical Reports Server (NTRS)

    Zhou, Shujia; Cruz, Carlos; Duffy, Daniel; Tucker, Robert; Purcell, Mark

    2011-01-01

    Unconventional multi- and many-core processors (e.g. IBM (R) Cell B.E.(TM) and NVIDIA (R) GPU) have emerged as effective accelerators in trial climate and weather simulations. Yet these climate and weather models typically run on parallel computers with conventional processors (e.g. Intel, AMD, and IBM) using Message Passing Interface. To address challenges involved in efficiently and easily connecting accelerators to parallel computers, we investigated using IBM's Dynamic Application Virtualization (TM) (IBM DAV) software in a prototype hybrid computing system with representative climate and weather model components. The hybrid system comprises two Intel blades and two IBM QS22 Cell B.E. blades, connected with both InfiniBand(R) (IB) and 1-Gigabit Ethernet. The system significantly accelerates a solar radiation model component by offloading compute-intensive calculations to the Cell blades. Systematic tests show that IBM DAV can seamlessly offload compute-intensive calculations from Intel blades to Cell B.E. blades in a scalable, load-balanced manner. However, noticeable communication overhead was observed, mainly due to IP over the IB protocol. Full utilization of IB Sockets Direct Protocol and the lower latency production version of IBM DAV will reduce this overhead.

  9. Small Business Innovations

    NASA Technical Reports Server (NTRS)

    1994-01-01

    A Small Business Innovation Research (SBIR) contract resulted in a series of commercially available lasers, which have application in fiber optic communications, difference frequency generation, fiber optic sensing and general laboratory use. Developed under a Small Business Innovation Research (SBIR) contract, the Phase Doppler Particle Analyzer is a non-disruptive, highly accurate laser-based method of determining particle size, number density, trajectory, turbulence and other information about particles passing through a measurement probe volume. The system consists of an optical transmitter and receiver, signal processor and computer with data acquisition and analysis software. A variety of systems are offered for applications including spray characterization for paint, and agricultural and other sprays. The Microsizer, a related product, is used in medical equipment manufacturing and analysis of contained flows. High frequency components and subsystems produced by Millitech Corporation are marketed for both research and commercial use. These systems, which operate in the upper portion of the millimeter wave, resulted from a number of Small Business Innovation Research (SBIR) projects. By developing very high performance mixers and multipliers, the company has advanced the state of the art in sensitive receiver technology. Components are used in receivers and transceivers for monitoring chlorine monoxides, ozone, in plasma characterization and in material properties characterization.

  10. Status of the calibration and alignment framework at the Belle II experiment

    NASA Astrophysics Data System (ADS)

    Dossett, D.; Sevior, M.; Ritter, M.; Kuhr, T.; Bilka, T.; Yaschenko, S.; Belle Software Group, II

    2017-10-01

    The Belle II detector at the SuperKEKB e+e- collider plans to take first collision data in 2018. The monetary and CPU time costs associated with storing and processing the data mean that it is crucial for the detector components at Belle II to be calibrated quickly and accurately. A fast and accurate calibration system would allow the high level trigger to increase the efficiency of event selection, and can give users analysis-quality reconstruction promptly. A flexible framework to automate the fast production of calibration constants is being developed in the Belle II Analysis Software Framework (basf2). Detector experts only need to create two components from C++ base classes in order to use the automation system. The first collects data from Belle II event data files and outputs much smaller files to pass to the second component. This runs the main calibration algorithm to produce calibration constants ready for upload into the conditions database. A Python framework coordinates the input files, order of processing, and submission of jobs. Splitting the operation into collection and algorithm processing stages allows the framework to optionally parallelize the collection stage on a batch system.
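
    The collector/algorithm split can be pictured with a schematic analogue: stage one reduces event files to small summaries, and stage two turns the summaries into constants destined for the conditions database. The Python below mimics that calling pattern only; it is not the basf2 API, whose calibration components are C++ base classes:

        import json, statistics

        class OffsetCollector:
            """Stage 1: reduce an event data file to a small summary file."""
            def run(self, event_file, out_file):
                with open(event_file) as fh:
                    residuals = [float(line) for line in fh]   # toy "event" data
                with open(out_file, "w") as fh:
                    json.dump({"residuals": residuals}, fh)

        class OffsetAlgorithm:
            """Stage 2: turn collected summaries into a calibration constant."""
            def run(self, collected_files):
                residuals = []
                for name in collected_files:
                    with open(name) as fh:
                        residuals += json.load(fh)["residuals"]
                return {"alignment_offset": statistics.mean(residuals)}

        with open("run1.txt", "w") as fh:
            fh.write("0.12\n-0.05\n0.03\n")
        OffsetCollector().run("run1.txt", "run1.json")
        print(OffsetAlgorithm().run(["run1.json"]))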

  11. 48 CFR 652.237-71 - Identification/Building Pass.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    48 Federal Acquisition Regulations System, 2010. As prescribed in 637.110(b), insert the following clause: Identification/Building Pass. (1) The contractor shall obtain a Department of State building pass for all employees performing...

  12. Plastic mechanism of multi-pass double-roller clamping spinning for arc-shaped surface flange

    NASA Astrophysics Data System (ADS)

    Fan, Shuqin; Zhao, Shengdun; Zhang, Qi; Li, Yongyi

    2013-11-01

    Compared with the conventional single-roller spinning process, the double-roller clamping spinning (DRCS) process can effectively prevent sheet metal surface wrinkling and improve the production efficiency and shape precision of the final spun part. Based on the ABAQUS/Explicit nonlinear finite element software, a finite element model of multi-pass DRCS for sheet metal is established, and the material model, contact definition, mesh generation, loading trajectory and other key technical problems are solved. Simulations of multi-pass DRCS of an ordinary Q235A steel cylindrical part with an arc-shaped surface flange are carried out. The effects of the number of spinning passes on the production efficiency, the spinning moment, the shape error of the workpiece, and the wall thickness distribution of the final part are obtained. The results indicate that as the number of spinning passes increases, the geometrical precision of the spun part increases while the production efficiency decreases. Moreover, the variations of the spinning forces and the distributions of stresses, strains, and wall thickness during the multi-pass DRCS process are revealed. The radial force is the largest, and the whole deformation area shows tangential tensile strain and radial compressive strain, while the thickness strain changes along the generatrix direction from compressive strain on the outer edge of the flange to tensile strain on the inner edge. Based on the G-CNC6135 NC lathe, a three-axis-linkage computer-controlled experimental device for DRCS, driven by AC servo motors, was developed. Using this device, Q235A cylindrical parts with an arc-shaped surface flange were formed by DRCS. The simulation results show good consistency with the experimental results, verifying the feasibility of the DRCS process and the reliability of the finite element model.

  13. Using GDAL to Convert NetCDF 4 CF 1.6 to GeoTIFF: Interoperability Problems and Solutions for Data Providers and Distributors

    NASA Astrophysics Data System (ADS)

    Haran, T. M.; Brodzik, M. J.; Nordgren, B.; Estilow, T.; Scott, D. J.

    2015-12-01

    An increasing number of new Earth science datasets are being produced by data providers in self-describing, machine-independent file formats including Hierarchical Data Format version 5 (HDF5) and Network Common Data Form version 4 (netCDF-4). Furthermore, data providers may be producing netCDF-4 files that follow the conventions for Climate and Forecast metadata version 1.6 (CF 1.6) which, for datasets mapped to a projected raster grid covering all or a portion of the earth, include the Coordinate Reference System (CRS) used to define how latitude and longitude are mapped to grid coordinates, i.e. columns and rows, and vice versa. One problem that users may encounter is that their preferred visualization and analysis tool may not yet include support for one of these newer formats. Moreover, data distributors such as NASA's NSIDC DAAC may not yet include support for on-the-fly conversion of data files for all data sets produced in a new format to a preferred older distributed format. There do exist open source solutions to this dilemma in the form of software packages that can translate files in one of the new formats to one of the preferred formats. However, these software packages require that the file to be translated conform to the specifications of its respective format. Although an online CF-Convention compliance checker is available from cfconventions.org, a recent NSIDC user services incident described here in detail involved an NSIDC-supported data set that passed the (then current) CF Checker Version 2.0.6, but was in fact lacking two variables necessary for conformance. This problem was not detected until GDAL, a software package which relied on the missing variables, was employed by a user in an attempt to translate the data into a different file format, namely GeoTIFF. In light of this incident, testing a candidate data product with one or more software packages written to accept the advertised conventions is proposed as a practice which improves interoperability. Differences between data file contents and software package expectations are exposed, affording an opportunity to improve conformance of software, data, or both. The incident can also serve as a demonstration that data providers, distributors, and users can work together to improve data product quality and interoperability.
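
    For reference, a minimal netCDF-4/CF to GeoTIFF conversion with GDAL's Python bindings looks like the sketch below. The file and variable names are placeholders, and the explicit CRS check mirrors the failure mode in the incident, where GDAL could not recover the projection because CF grid-mapping information was incomplete:

        from osgeo import gdal

        gdal.UseExceptions()                      # fail loudly instead of returning None
        # GDAL's netCDF driver addresses one variable as a subdataset:
        src = gdal.Open('NETCDF:"sea_ice.nc":ice_concentration')
        if src.GetProjectionRef() == "":
            # Symptom of the incident described above: the CF grid_mapping
            # variables are missing, so GDAL cannot recover the CRS.
            raise RuntimeError("no CRS found; check the CF grid_mapping variable")
        gdal.Translate("sea_ice.tif", src, format="GTiff")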

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bachan, John

    Chisel is a new open-source hardware construction language developed at UC Berkeley that supports advanced hardware design using highly parameterized generators and layered domain-specific hardware languages. Chisel is embedded in the Scala programming language, which raises the level of hardware design abstraction by providing concepts including object orientation, functional programming, parameterized types, and type inference. From the same source, Chisel can generate a high-speed C++-based cycle-accurate software simulator, or low-level Verilog designed to pass on to standard ASIC or FPGA tools for synthesis and place and route.

  15. Exploring the Presence of microDNAs in Prostate Cancer Cell Lines, Tissue, and Sera of Prostate Cancer Patients and its Possible Application as Biomarker

    DTIC Science & Technology

    2016-04-01

    Search excerpt: "Sequence tags were mapped on the human reference genome using the Novoalign software. Only those..."; "...ends of the linear islands to create a novel junctional sequence that does not exist in the genome. Thus the PE-sequence of a fragment that breaks at..."; "...genome (Fig. 3b). Those PE-tags where one tag maps uniquely to an island and the other remains unmapped, but passes the sequence quality filter, ..."

  16. Specification of Fenix MPI Fault Tolerance library version 1.0.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gamble, Marc; Van Der Wijngaart, Rob; Teranishi, Keita

    This document provides a specification of Fenix, a software library compatible with the Message Passing Interface (MPI) to support fault recovery without application shutdown. The library consists of two modules. The first, termed process recovery, restores an application to a consistent state after it has suffered a loss of one or more MPI processes (ranks). The second specifies functions the user can invoke to store application data in Fenix-managed redundant storage, and to retrieve it from that storage after process recovery.
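
    The store/retrieve usage pattern of the second module can be pictured schematically as below. The class and method names are ours, not the Fenix API, and the in-process dictionary merely stands in for Fenix-managed cross-rank redundant storage:

        class RedundantStore:
            """Conceptual stand-in for a redundant-storage module (not Fenix)."""
            def __init__(self):
                self._snapshots = {}                     # stand-in for cross-rank copies

            def member_store(self, group_id, data):
                self._snapshots[group_id] = dict(data)   # snapshot the state

            def member_restore(self, group_id):
                return dict(self._snapshots[group_id])   # recover the snapshot

        store = RedundantStore()
        state = {"iteration": 42, "field": [0.1, 0.2, 0.3]}
        store.member_store("solver_state", state)
        # ... a rank fails; after process recovery the application resumes:
        state = store.member_restore("solver_state")
        assert state["iteration"] == 42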

  17. Relay Forward-Link File Management Services (MaROS Phase 2)

    NASA Technical Reports Server (NTRS)

    Allard, Daniel A.; Wallick, Michael N.; Hy, Franklin H.; Gladden, Roy E.

    2013-01-01

    This software provides the service-level functionality to manage the delivery of files from a lander mission repository to an orbiter mission repository for eventual spacelink relay by the orbiter asset on a specific communications pass. It provides further functions to deliver and track a set of mission-defined messages detailing lander authorization instructions and orbiter data delivery state. All of the information concerning these transactions is persisted in a database providing a high level of accountability of the forward-link relay process.

  18. 8755 Emulator Design

    DTIC Science & Technology

    1988-12-01

    Search excerpt (table-of-contents and OCR fragments): Break Control (p. 51); 8755 I/O Control (p. 54); Z-100 Control Software (p. 55); Pass User Memory...; "...the emulator SRAM and the other is for the target SRAM. If either signal is replaced by the NACK signal the host computer displays an error message..."; Block Diagram (p. 69); Figure 2b, Schematic Diagram (p. 71).

  19. NASA's Space Launch System: Systems Engineering Approach for Affordability and Mission Success

    NASA Technical Reports Server (NTRS)

    Hutt, John J.; Whitehead, Josh; Hanson, John

    2017-01-01

    NASA is working toward the first launch of a new, unmatched capability for deep space exploration, with launch readiness planned for 2018. The initial Block 1 configuration of the Space Launch System will more than double the mass and volume to Low Earth Orbit (LEO) of any launch vehicle currently in operation - with a path to evolve to the greatest capability ever developed. The program formally began in 2011. The vehicle successfully passed Preliminary Design Review (PDR) in 2013, Key Decision Point C (KDP-C) in 2014 and Critical Design Review (CDR) in October 2015 - nearly 40 years since the last CDR of a NASA human-rated rocket. Every major SLS element has completed components of test and flight hardware. Flight software has completed several development cycles. RS-25 hotfire testing at NASA Stennis Space Center (SSC) has successfully demonstrated the space shuttle-heritage engine can perform to SLS requirements and environments. The five-segment solid rocket booster design has successfully completed two full-size motor firing tests in Utah. Stage and component test facilities at Stennis and NASA Marshall Space Flight Center are nearing completion. Launch and test facilities, as well as transportation and other ground support equipment are largely complete at NASA's Kennedy, Stennis and Marshall field centers. Work is also underway on the more powerful Block 1B variant with successful completion of the Exploration Upper Stage (EUS) PDR in January 2017. NASA's approach is to develop this heavy lift launch vehicle with limited resources by building on existing subsystem designs and existing hardware where available. The systems engineering and integration (SE&I) of existing and new designs introduces unique challenges and opportunities. The SLS approach was designed with three objectives in mind: 1) Design the vehicle around the capability of existing systems; 2) Reduce work hours for non-hardware/software activities; 3) Increase the probability of mission success by focusing effort on more critical activities.

  20. AVE-SESAME program for the REEDA System

    NASA Technical Reports Server (NTRS)

    Hickey, J. S.

    1981-01-01

    The REEDA system software was modified and improved to process the AVE-SESAME severe storm data. A random access file system for the AVE storm data was designed, tested, and implemented. The AVE/SESAME software was modified to incorporate the random access file input and to interface with the new graphics hardware/software now available on the REEDA system. Software was developed to graphically display the AVE/SESAME data in the convention normally used by severe storm researchers. Software documentation was provided for existing AVE/SESAME programs, outlining functional flow charts and interactive prompts. All AVE/SESAME data sets were converted to random access format so that the developed software could access the entire AVE/SESAME data base. The existing software was modified to allow processing of different AVE/SESAME data set types, including satellite, surface, and radar data.

  1. A satellite-based personal communication system for the 21st century

    NASA Technical Reports Server (NTRS)

    Sue, Miles K.; Dessouky, Khaled; Levitt, Barry; Rafferty, William

    1990-01-01

    Interest in personal communications (PCOMM) has been stimulated by recent developments in satellite and terrestrial mobile communications. A personal access satellite system (PASS) concept, which has many attractive user features including service diversity and a handheld terminal, was developed at the Jet Propulsion Laboratory (JPL). Significant technical challenges addressed in formulating the PASS space and ground segments are discussed. The PASS system concept and basic design features, high-risk enabling technologies, an optimized multiple access scheme, alternative antenna coverage concepts, the use of non-geostationary orbits, user terminal radiation constraints, and the user terminal frequency reference are covered.

  2. Lessons Learned During Implementation and Early Operations of the DS1 Beacon Monitor Experiment

    NASA Technical Reports Server (NTRS)

    Sherwood, Rob; Wyatt, Jay; Hotz, Henry; Schlutsmeyer, Alan; Sue, Miles

    1998-01-01

    A new approach to mission operations will be flight validated on NASA's New Millennium Program Deep Space One (DS1) mission which launched in October 1998. The Beacon Monitor Operations Technology is aimed at decreasing the total volume of downlinked engineering telemetry by reducing the frequency of downlink and the volume of data received per pass. Cost savings are achieved by reducing the amount of routine telemetry processing and analysis performed by ground staff. The technology is required for upcoming NASA missions to Pluto, Europa, and possibly some other missions. With beacon monitoring, the spacecraft will assess its own health and will transmit one of four beacon messages each representing a unique frequency tone to inform the ground how urgent it is to track the spacecraft for telemetry. If all conditions are nominal, the tone provides periodic assurance to ground personnel that the mission is proceeding as planned without having to receive and analyze downlinked telemetry. If there is a problem, the tone will indicate that tracking is required and the resulting telemetry will contain a concise summary of what has occurred since the last telemetry pass. The primary components of the technology are a tone monitoring technology, AI-based software for onboard engineering data summarization, and a ground response system. In addition, there is a ground visualization system for telemetry summaries. This paper includes a description of the Beacon monitor concept, the trade-offs with adapting that concept as a technology experiment, the current state of the resulting implementation on DS1, and our lessons learned during the initial checkout phase of the mission. Applicability to future missions is also included.
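
    The four-tone selection logic can be sketched as a mapping from summarized onboard alarms to an urgency level. The tone names, severity scale, and thresholds below are illustrative assumptions, not the DS1 flight rules:

        from enum import Enum

        class BeaconTone(Enum):          # four urgency levels, per the DS1 concept
            NOMINAL = 1                  # no tracking needed
            INTERESTING = 2              # track when convenient (label assumed)
            IMPORTANT = 3                # track soon (label assumed)
            URGENT = 4                   # track now (label assumed)

        def select_tone(alarms):
            """Map summarized engineering alarms to a beacon tone
            (thresholds invented for illustration)."""
            worst = max((a["severity"] for a in alarms), default=0)
            if worst == 0:
                return BeaconTone.NOMINAL
            if worst == 1:
                return BeaconTone.INTERESTING
            if worst == 2:
                return BeaconTone.IMPORTANT
            return BeaconTone.URGENT

        print(select_tone([{"severity": 0}]))            # BeaconTone.NOMINAL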

  3. Development of advanced manufacturing technologies for low cost hydrogen storage vessels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leavitt, Mark; Lam, Patrick

    2014-12-29

    The U.S. Department of Energy (DOE) defined a need for low-cost gaseous hydrogen storage vessels at 700 bar to support cost goals aimed at 500,000 units per year. Existing filament winding processes produce a pressure vessel that is structurally inefficient, requiring more carbon fiber, for manufacturing reasons, than would otherwise be necessary. Carbon fiber is the greatest cost driver in building a hydrogen pressure vessel. The objective of this project is to develop new methods for manufacturing Type IV pressure vessels for hydrogen storage, with the purpose of lowering the overall product cost through an innovative hybrid process that optimizes composite usage by combining traditional filament winding (FW) and advanced fiber placement (AFP) techniques. A number of vessels were manufactured in this project. The latest vessel design passed all the critical tests of the hybrid design per the European Commission (EC) 79-2009 standard except the extreme temperature cycle test. The tests passed include the burst test, cycle test, accelerated stress rupture test, and drop test. It was discovered that the location where AFP and FW overlap for load transfer could be weakened during hydraulic cycling at 85°C. To design a vessel that passed these tests, the in-house modeling software was updated to add the capability to start and stop fiber layers to simulate the AFP process; the original in-house software was developed for filament winding only. Alternative fiber was also investigated in this project, but the added mass impacted the vessel cost negatively due to the lower performance of the alternative fiber. Overall the project succeeded in showing that the hybrid design is a viable solution for reducing fiber usage, thus driving down the cost of fuel storage vessels. Based on DOE's baseline vessel size of 147.3 L and 91 kg, the 129 L vessel (scaled to the DOE baseline) in this project shows a 32% composite savings and 20% cost savings when comparing the Vessel 15 hybrid design and the Quantum baseline all-filament-wound vessel. Due to project timing, there was no additional time available to fine-tune the design to improve the load transfer between AFP and FW. Further design modifications will likely help pass the extreme temperature cycle test, the remaining test critical to the hybrid design.

  4. Performance Analysis of Distributed Object-Oriented Applications

    NASA Technical Reports Server (NTRS)

    Schoeffler, James D.

    1998-01-01

    The purpose of this research was to evaluate the efficiency of a distributed simulation architecture that creates individual modules made self-scheduling through a message-based communication system, used to request input data from the module that is the source of those data. To make the architecture as general as possible, the message-based communication architecture was implemented using standard remote object architectures (Common Object Request Broker Architecture (CORBA) and/or Distributed Component Object Model (DCOM)). A series of experiments were run in which different systems are distributed in a variety of ways across multiple computers and the performance evaluated. The experiments were duplicated in each case so that the overhead due to message communication and data transmission can be separated from the time required to actually perform the computational update of a module each iteration. The software used to distribute the modules across multiple computers was developed in the first year of the current grant and was modified considerably to add a message-based communication scheme supported by the DCOM distributed object architecture. The resulting performance was analyzed using a model created during the first year of this grant which predicts the overhead due to CORBA and DCOM remote procedure calls and includes the effects of data passed to and from the remote objects. A report covering the distributed simulation software and the results of the performance experiments has been submitted separately. The above report also discusses possible future work to apply the methodology to dynamically distribute the simulation modules so as to minimize overall computation time.

  5. Parallel algorithm of VLBI software correlator under multiprocessor environment

    NASA Astrophysics Data System (ADS)

    Zheng, Weimin; Zhang, Dong

    2007-11-01

    The correlator is the key signal processing equipment of a Very Long Baseline Interferometry (VLBI) synthetic aperture telescope. It receives the mass of data collected by the VLBI observatories and produces the visibility function of the target, which can be used for spacecraft positioning, baseline length measurement, synthesis imaging, and other scientific applications. VLBI data correlation is a data-intensive and computation-intensive task. This paper presents the algorithms of two parallel software correlators for multiprocessor environments. A near real-time correlator for spacecraft tracking adopts pipelining and thread-parallel technology, and runs on SMP (Symmetric Multiple Processor) servers. Another high-speed prototype correlator using a mixed Pthreads and MPI (Message Passing Interface) parallel algorithm is realized on a small Beowulf cluster platform. Both correlators have the characteristics of flexible structure, scalability, and 10-station data correlating ability.
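
    The core of such a correlator, independent of the threading strategy, is cross-correlating station streams to form visibilities. Below is an FX-style sketch (FFT each segment, cross-multiply, average) on synthetic data; it omits the delay and fringe corrections a real correlator applies per baseline:

        import numpy as np

        def fx_correlate(s1, s2, nfft=1024):
            """Toy FX correlator: per-segment FFT, cross-multiply, average."""
            nseg = min(len(s1), len(s2)) // nfft
            acc = np.zeros(nfft, dtype=complex)
            for k in range(nseg):
                f1 = np.fft.fft(s1[k * nfft:(k + 1) * nfft])
                f2 = np.fft.fft(s2[k * nfft:(k + 1) * nfft])
                acc += np.conj(f1) * f2          # cross-power spectrum of the baseline
            return acc / nseg                    # averaged visibility spectrum

        rng = np.random.default_rng(0)
        common = rng.standard_normal(1 << 16)    # signal seen by both stations
        x = common + 0.5 * rng.standard_normal(common.size)
        y = np.roll(common, 3) + 0.5 * rng.standard_normal(common.size)  # 3-sample delay
        vis = fx_correlate(x, y)
        lag = int(np.argmax(np.abs(np.fft.ifft(vis))))
        print("recovered delay (samples):", lag)  # expect ~3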

  6. Software Design Improvements. Part 2; Software Quality and the Design and Inspection Process

    NASA Technical Reports Server (NTRS)

    Lalli, Vincent R.; Packard, Michael H.; Ziemianski, Tom

    1997-01-01

    The application of assurance engineering techniques improves the duration of failure-free performance of software. The totality of features and characteristics of a software product are what determine its ability to satisfy customer needs. Software in safety-critical systems is very important to NASA. We follow the System Safety Working Group's definition of system safety software: 'The optimization of system safety in the design, development, use and maintenance of software and its integration with safety-critical systems in an operational environment.' 'If it is not safe, say so' has become our motto. This paper goes over methods that have been used by NASA to make software design improvements by focusing on software quality and the design and inspection process.

  7. Using the Eclipse Parallel Tools Platform to Assist Earth Science Model Development and Optimization on High Performance Computers

    NASA Astrophysics Data System (ADS)

    Alameda, J. C.

    2011-12-01

    Development and optimization of computational science models, particularly on high performance computers, and with the advent of ubiquitous multicore processor systems, practically on every system, has been accomplished with basic software tools, typically, command-line based compilers, debuggers, performance tools that have not changed substantially from the days of serial and early vector computers. However, model complexity, including the complexity added by modern message passing libraries such as MPI, and the need for hybrid code models (such as openMP and MPI) to be able to take full advantage of high performance computers with an increasing core count per shared memory node, has made development and optimization of such codes an increasingly arduous task. Additional architectural developments, such as many-core processors, only complicate the situation further. In this paper, we describe how our NSF-funded project, "SI2-SSI: A Productive and Accessible Development Workbench for HPC Applications Using the Eclipse Parallel Tools Platform" (WHPC) seeks to improve the Eclipse Parallel Tools Platform, an environment designed to support scientific code development targeted at a diverse set of high performance computing systems. Our WHPC project to improve Eclipse PTP takes an application-centric view to improve PTP. We are using a set of scientific applications, each with a variety of challenges, and using PTP to drive further improvements to both the scientific application, as well as to understand shortcomings in Eclipse PTP from an application developer perspective, to drive our list of improvements we seek to make. We are also partnering with performance tool providers, to drive higher quality performance tool integration. We have partnered with the Cactus group at Louisiana State University to improve Eclipse's ability to work with computational frameworks and extremely complex build systems, as well as to develop educational materials to incorporate into computational science and engineering codes. Finally, we are partnering with the lead PTP developers at IBM, to ensure we are as effective as possible within the Eclipse community development. We are also conducting training and outreach to our user community, including conference BOF sessions, monthly user calls, and an annual user meeting, so that we can best inform the improvements we make to Eclipse PTP. With these activities we endeavor to encourage use of modern software engineering practices, as enabled through the Eclipse IDE, with computational science and engineering applications. These practices include proper use of source code repositories, tracking and rectifying issues, measuring and monitoring code performance changes against both optimizations as well as ever-changing software stacks and configurations on HPC systems, as well as ultimately encouraging development and maintenance of testing suites -- things that have become commonplace in many software endeavors, but have lagged in the development of science applications. We view that the challenge with the increased complexity of both HPC systems and science applications demands the use of better software engineering methods, preferably enabled by modern tools such as Eclipse PTP, to help the computational science community thrive as we evolve the HPC landscape.

  8. Advanced software development workstation project: Engineering scripting language. Graphical editor

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Software development is widely considered to be a bottleneck in the development of complex systems, both in terms of development and in terms of maintenance of deployed systems. Cost of software development and maintenance can also be very high. One approach to reducing costs and relieving this bottleneck is increasing the reuse of software designs and software components. A method for achieving such reuse is a software parts composition system. Such a system consists of a language for modeling software parts and their interfaces, a catalog of existing parts, an editor for combining parts, and a code generator that takes a specification and generates code for that application in the target language. The Advanced Software Development Workstation is intended to be an expert system shell designed to provide the capabilities of a software part composition system.
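
    A toy rendition of the parts-composition idea (a part model, a catalog, and a generator emitting target-language code) might look like the following; all names are invented for illustration and do not reflect the workstation's actual design:

        from dataclasses import dataclass

        @dataclass
        class Part:
            name: str
            body: str                       # expression template for the generator

        CATALOG = {
            "scale": Part("scale", "{x} * 2.0"),
            "offset": Part("offset", "{x} + 1.0"),
        }

        def generate(pipeline, arg="value"):
            """Compose catalog parts into the source of a generated function."""
            expr = arg
            for part_name in pipeline:
                expr = CATALOG[part_name].body.format(x=expr)
            return f"def composed({arg}):\n    return {expr}\n"

        src = generate(["scale", "offset"])
        namespace = {}
        exec(src, namespace)                     # materialize the generated code
        print(src, namespace["composed"](3.0))   # (3 * 2.0) + 1.0 = 7.0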

  9. PyMS: a Python toolkit for processing of gas chromatography-mass spectrometry (GC-MS) data. Application and comparative study of selected tools

    PubMed Central

    2012-01-01

    Background Gas chromatography–mass spectrometry (GC-MS) is a technique frequently used in targeted and non-targeted measurements of metabolites. Most existing software tools for processing of raw instrument GC-MS data tightly integrate data processing methods with graphical user interface facilitating interactive data processing. While interactive processing remains critically important in GC-MS applications, high-throughput studies increasingly dictate the need for command line tools, suitable for scripting of high-throughput, customized processing pipelines. Results PyMS comprises a library of functions for processing of instrument GC-MS data developed in Python. PyMS currently provides a complete set of GC-MS processing functions, including reading of standard data formats (ANDI-MS/NetCDF and JCAMP-DX), noise smoothing, baseline correction, peak detection, peak deconvolution, peak integration, and peak alignment by dynamic programming. A novel common ion single quantitation algorithm allows automated, accurate quantitation of GC-MS electron impact (EI) fragmentation spectra when a large number of experiments are being analyzed. PyMS implements parallel processing for by-row and by-column data processing tasks based on Message Passing Interface (MPI), allowing processing to scale on multiple CPUs in distributed computing environments. A set of specifically designed experiments was performed in-house and used to comparatively evaluate the performance of PyMS and three widely used software packages for GC-MS data processing (AMDIS, AnalyzerPro, and XCMS). Conclusions PyMS is a novel software package for the processing of raw GC-MS data, particularly suitable for scripting of customized processing pipelines and for data processing in batch mode. PyMS provides limited graphical capabilities and can be used both for routine data processing and interactive/exploratory data analysis. In real-life GC-MS data processing scenarios PyMS performs as well or better than leading software packages. We demonstrate data processing scenarios simple to implement in PyMS, yet difficult to achieve with many conventional GC-MS data processing software. Automated sample processing and quantitation with PyMS can provide substantial time savings compared to more traditional interactive software systems that tightly integrate data processing with the graphical user interface. PMID:22647087
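
    The smoothing and peak-detection stages mentioned can be illustrated with generic SciPy calls on a synthetic chromatogram; this sketch is not the PyMS API itself:

        import numpy as np
        from scipy.signal import find_peaks, savgol_filter

        # Synthetic total-ion-current trace with two peaks plus noise.
        t = np.linspace(0, 100, 2000)                        # retention time, s
        tic = (np.exp(-((t - 30) / 1.5) ** 2) * 50 +
               np.exp(-((t - 62) / 2.0) ** 2) * 80)
        tic += np.random.default_rng(1).normal(0, 1.0, t.size)

        smoothed = savgol_filter(tic, window_length=21, polyorder=3)  # noise smoothing
        baseline = np.percentile(smoothed, 10)                        # crude baseline
        peaks, props = find_peaks(smoothed - baseline, height=10, distance=50)
        print("apex retention times:", t[peaks])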

  10. Enhancing requirements engineering for patient registry software systems with evidence-based components.

    PubMed

    Lindoerfer, Doris; Mansmann, Ulrich

    2017-07-01

    Patient registries are instrumental for medical research. Often their structures are complex and their implementations use composite software systems to meet the wide spectrum of challenges. Commercial and open-source systems are available for registry implementation, but many research groups develop their own systems. Methodological approaches to the selection of software as well as the construction of proprietary systems are needed. We propose an evidence-based checklist, summarizing essential items for patient registry software systems (CIPROS), to accelerate the requirements engineering process. Requirements engineering activities for software systems follow traditional software requirements elicitation methods, general software requirements specification (SRS) templates, and standards. We performed a multistep procedure to develop a specific evidence-based CIPROS checklist: (1) a systematic literature review to build a comprehensive collection of technical concepts, (2) a qualitative content analysis to define a catalogue of relevant criteria, and (3) a checklist to construct a minimal appraisal standard. CIPROS is based on 64 publications and covers twelve sections with a total of 72 items. CIPROS also defines software requirements. Comparing CIPROS with traditional software requirements elicitation methods, SRS templates, and standards shows broad consensus but also differences on registry-specific aspects. Using an evidence-based approach to requirements engineering for registry software adds aspects to the traditional methods and accelerates the software engineering process for registry software. The method we used to construct CIPROS serves as a potential template for creating evidence-based checklists in other fields. The CIPROS list supports developers in assessing requirements for existing systems and formulating requirements for their own systems, while strengthening the reporting of patient registry software system descriptions. It may be a first step toward standards for patient registry software system assessments.

  11. Prediction of Trace Element based Energizing Sensor Control System using PWM

    NASA Astrophysics Data System (ADS)

    Zukri, Mohammad Nizar Bin Mohamed; Abu Bakar, Elmi Bin; Uchiyama, Naoki; Abdullah, Mohamad Nazir Bin

    2018-05-01

    A real-time system for field monitoring of wastewater laden with heavy metals in industrial discharge over a wireless communication network was developed. The monitoring system poses an interesting challenge: determining which metal ions are present in the solution, whereas previous work considered only total dissolved ions. This paper aims to distinguish metal ions based on their characteristic reactions in solution. The control algorithm generates the voltage input that energizes the conductivity sensor, since the voltage corresponding to oxidation and reduction reactions follows the standard reduction potential. An ATmega2560 microcontroller controls the voltage fed to the sensor by controlling the PWM duty cycle. A PID controller was designed on the microcontroller (Arduino) platform, with manual tuning, to identify the reaction process and supply sufficient input voltage. The experimental results show that the proposed controller has excellent tracking and measurement performance. A low-pass filter was applied in software so the system can recognize when the signal has stabilized. The hardware and software of the closed-loop system enhance measurement performance and offer high feasibility for SME companies from an economic point of view. The desired objective is a system with stable measurement and sufficient voltage supply, providing accurate and precise control without costly components or complicated circuitry.
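
    The control idea in this record, a PID loop setting a PWM duty cycle with a low-pass filter deciding when the sensed signal has stabilized, can be sketched generically. Gains, setpoint, and the toy first-order plant below are illustrative assumptions, not values from the paper, and the real implementation runs on an ATmega2560 rather than in Python.

    ```python
    # Discrete PID loop driving a PWM duty cycle, plus a first-order IIR
    # low-pass filter used to judge signal stability. All numbers invented.
    class PID:
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral, self.prev_err = 0.0, 0.0

        def step(self, setpoint, measured):
            err = setpoint - measured
            self.integral += err * self.dt
            deriv = (err - self.prev_err) / self.dt
            self.prev_err = err
            out = self.kp * err + self.ki * self.integral + self.kd * deriv
            return min(max(out, 0.0), 1.0)          # clamp to a valid duty cycle

    def low_pass(prev, sample, alpha=0.1):
        """First-order IIR low-pass: y[n] = y[n-1] + alpha * (x[n] - y[n-1])."""
        return prev + alpha * (sample - prev)

    pid = PID(kp=1.0, ki=2.0, kd=0.01, dt=0.01)
    voltage, filtered = 0.0, 0.0
    for _ in range(1000):
        duty = pid.step(setpoint=1.5, measured=voltage)
        voltage += 0.05 * (5.0 * duty - voltage)    # toy first-order sensor/plant
        filtered = low_pass(filtered, voltage)
    print(f"filtered sensor voltage after 10 s: {filtered:.3f} V (target 1.5 V)")
    ```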

  12. Using MATLAB Software on the Peregrine System | High-Performance Computing

    Science.gov Websites

    NREL: Using MATLAB Software on the Peregrine System. Learn how to use MATLAB software on the Peregrine high-performance computing system, including running MATLAB in batch mode on a node and understanding the available MATLAB software versions and licenses.

  13. Teaching computer interfacing with virtual instruments in an object-oriented language.

    PubMed Central

    Gulotta, M

    1995-01-01

    LabVIEW is a graphic object-oriented computer language developed to facilitate hardware/software communication. LabVIEW is a complete computer language that can be used like Basic, FORTRAN, or C. In LabVIEW one creates virtual instruments that aesthetically look like real instruments but are controlled by sophisticated computer programs. There are several levels of data acquisition VIs that make it easy to control data flow, and many signal processing and analysis algorithms come with the software as premade VIs. In the classroom, the similarity between virtual and real instruments helps students understand how information is passed between the computer and attached instruments. The software may be used in the absence of hardware so that students can work at home as well as in the classroom. This article demonstrates how LabVIEW can be used to control data flow between computers and instruments, points out important features for signal processing and analysis, and shows how virtual instruments may be used in place of physical instrumentation. Applications of LabVIEW to the teaching laboratory are also discussed, and a plausible course outline is given. PMID:8580361

  14. Teaching computer interfacing with virtual instruments in an object-oriented language.

    PubMed

    Gulotta, M

    1995-11-01

    LabVIEW is a graphic object-oriented computer language developed to facilitate hardware/software communication. LabVIEW is a complete computer language that can be used like Basic, FORTRAN, or C. In LabVIEW one creates virtual instruments that aesthetically look like real instruments but are controlled by sophisticated computer programs. There are several levels of data acquisition VIs that make it easy to control data flow, and many signal processing and analysis algorithms come with the software as premade VIs. In the classroom, the similarity between virtual and real instruments helps students understand how information is passed between the computer and attached instruments. The software may be used in the absence of hardware so that students can work at home as well as in the classroom. This article demonstrates how LabVIEW can be used to control data flow between computers and instruments, points out important features for signal processing and analysis, and shows how virtual instruments may be used in place of physical instrumentation. Applications of LabVIEW to the teaching laboratory are also discussed, and a plausible course outline is given.

  15. Introducing Risk Management Techniques Within Project Based Software Engineering Courses

    NASA Astrophysics Data System (ADS)

    Port, Daniel; Boehm, Barry

    2002-03-01

    In 1996, USC switched its core two-semester software engineering course from a hypothetical-project, homework-and-exam course based on the Bloom taxonomy of educational objectives (knowledge, comprehension, application, analysis, synthesis, and evaluation) to a real-client team-project course based on the CRESST model of learning objectives (content understanding, problem solving, collaboration, communication, and self-regulation). We used the CRESST cognitive demands analysis to determine the student skills required for software risk management and the other major project activities, and have been refining the approach over the last 5 years of experience, including revised versions for one-semester undergraduate and graduate project courses at Columbia. This paper summarizes our experiences in evolving the risk management aspects of the project course. These experiences have helped us mature more general techniques such as risk-driven specifications, domain-specific simplifier and complicator lists, and the schedule-as-an-independent-variable (SAIV) process model. The largely positive results in terms of course pass/fail rates, client evaluations, product adoption rates, and hiring manager feedback are summarized as well.

  16. Low-Noise Band-Pass Amplifier

    NASA Technical Reports Server (NTRS)

    Kleinberg, L.

    1982-01-01

    Circuit uses standard components to overcome a common limitation of JFET amplifiers. Low-noise band-pass amplifier employs a JFET and an operational amplifier. High gain and band-pass characteristics are achieved with a suitable choice of resistances and capacitances. Circuit should find use as a low-noise amplifier, for example as the first stage of instrumentation systems.
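
    For orientation, the band edges of a simple RC band-pass stage follow from the corner-frequency formula f = 1/(2*pi*R*C). The component values in this sketch are illustrative, not those of the circuit in the report.

    ```python
    # Corner frequencies of an RC band-pass: illustrative values only.
    from math import pi

    def corner_hz(r_ohms, c_farads):
        return 1.0 / (2 * pi * r_ohms * c_farads)

    f_low = corner_hz(160e3, 1e-6)     # high-pass corner, roughly 1 Hz
    f_high = corner_hz(16e3, 1e-9)     # low-pass corner, roughly 10 kHz
    print(f"pass band: {f_low:.1f} Hz to {f_high:.0f} Hz")
    ```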

  17. SAGA: A project to automate the management of software production systems

    NASA Technical Reports Server (NTRS)

    Campbell, R. H.; Badger, W.; Beckman, C. S.; Beshers, G.; Hammerslag, D.; Kimball, J.; Kirslis, P. A.; Render, H.; Richards, P.; Terwilliger, R.

    1984-01-01

    The project to automate the management of software production systems is described. The SAGA system is a software environment designed to support most of the software development activities that occur in a software lifecycle. The system can be configured to support specific software development applications using given programming languages, tools, and methodologies. Meta-tools are provided to ease configuration. Several major components of the SAGA system have been completed in prototype form. The construction methods are described.

  18. The CCD/Transit Instrument (CTI) data-analysis system

    NASA Technical Reports Server (NTRS)

    Cawson, M. G. M.; Mcgraw, J. T.; Keane, M. J.

    1995-01-01

    The automated software system for archiving, analyzing, and interrogating data from the CCD/Transit Instrument (CTI) is described. The CTI collects up to 450 Mbytes of image data each clear night in the form of a narrow strip of sky observed in two colors. The large data volumes and the scientific aims of the project make it imperative that the data are analyzed within the 24-hour period following the observations. To this end a fully automatic and self-evaluating software system has been developed. The data are collected from the telescope in real time and then transported to Tucson for analysis. Verification is performed by visual inspection of random subsets of the data, and obvious cosmic rays are detected and removed before permanent archival to optical disc. The analysis phase is performed by a pair of linked algorithms, one operating on the absolute pixel values and the other on the spatial derivative of the data. In this way both isolated and merged images are reliably detected in a single pass. In order to isolate the latter algorithm from the effects of noise spikes, a 3x3 Hanning filter is applied to the raw data before the analysis is run. The algorithms reduce the input pixel data to a database of measured parameters for each image found. A contrast filter is applied in order to assign a detection probability to each image, and then x-y calibration and intensity calibration are performed using known reference stars in the strip. These are supplemented as necessary by secondary standards bootstrapped from the CTI data itself. The final stages involve merging the new data into the CTI Master-list and History-list and the automatic comparison of each new detection with a set of pre-defined templates in parameter space to find interesting objects such as supernovae, quasars, and variable stars. Each stage of the processing, from verification to interesting-image selection, is performed under a data-logging system which both controls the pipelining of data through the system and records key performance-monitor parameters built into the software. Furthermore, the data from each stage are stored in databases to facilitate evaluation, and all stages offer the facility to enter keyword-indexed free-format text into the data-logging system. In this way a large measure of certification is built into the system to provide the necessary confidence in the end results.
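
    The dual detection pass described above can be sketched in a few lines: smooth the frame with a small Hanning-like kernel to suppress noise spikes, then flag pixels that are bright in absolute value or have a strong spatial derivative, so merged images are still caught. The kernel weights and thresholds below are illustrative, not the CTI pipeline's actual values.

    ```python
    # Smooth with a 3x3 Hanning-like kernel, then combine an absolute-value
    # test with a gradient-magnitude test. Thresholds are invented.
    import numpy as np
    from scipy.ndimage import convolve

    HANNING_3x3 = np.array([[1, 2, 1],
                            [2, 4, 2],
                            [1, 2, 1]], dtype=float) / 16.0

    def detect(frame, abs_thresh, grad_thresh):
        smoothed = convolve(frame, HANNING_3x3, mode="nearest")
        gy, gx = np.gradient(smoothed)                  # spatial derivative
        grad_mag = np.hypot(gx, gy)
        return (smoothed > abs_thresh) | (grad_mag > grad_thresh)

    rng = np.random.default_rng(0)
    frame = rng.normal(100.0, 5.0, size=(64, 64))       # sky background + noise
    frame[30:33, 40:43] += 200.0                        # a point source
    mask = detect(frame, abs_thresh=150.0, grad_thresh=40.0)
    print("candidate pixels:", int(mask.sum()))
    ```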

  19. Resilience Engineering in Critical Long Term Aerospace Software Systems: A New Approach to Spacecraft Software Safety

    NASA Astrophysics Data System (ADS)

    Dulo, D. A.

    Safety-critical software systems permeate spacecraft, and in a long-term venture like a starship they would be pervasive in every system of the spacecraft. Yet software failure today continues to plague both the systems and the organizations that develop them, resulting in the loss of life, time, money, and valuable system platforms. A starship cannot afford this type of software failure on long journeys away from home: a single software failure could have catastrophic results for the spaceship and the crew onboard. This paper offers a new approach to developing safe, reliable software systems, focusing not on the traditional safety/reliability engineering paradigms but on a new paradigm: resilience and failure-obviation engineering. The foremost objective of this approach is the obviation of failure, coupled with the ability of a software system to prevent or adapt to complex, changing conditions in real time as a safety valve should failure occur, to ensure safe system continuity. Through this approach, safety is ensured through foresight, anticipating failure and adapting to risk in real time before failure occurs. In a starship, this type of software engineering is vital. With software developed in a resilient manner, a starship would have reduced or eliminated software failure and would have the ability to rapidly adapt should a software system become unstable or unsafe. As a result, long-term software safety, reliability, and resilience would be present for a successful long-term starship mission.

  20. 75 FR 11918 - Hewlett Packard Company, Business Critical Systems, Mission Critical Business Software Division...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-12

    ... Packard Company, Business Critical Systems, Mission Critical Business Software Division, OpenVMS Operating... Business Software Division, OpenVMS Operating System Development Group, Including an Employee Operating Out... Company, Business Critical Systems, Mission Critical Business Software Division, OpenVMS Operating System...

  1. Software cost/resource modeling: Software quality tradeoff measurement

    NASA Technical Reports Server (NTRS)

    Lawler, R. W.

    1980-01-01

    A conceptual framework for treating software quality from a total system perspective is developed. Examples are given to show how system quality objectives may be allocated to hardware and software; to illustrate trades among quality factors, both hardware and software, to achieve system performance objectives; and to illustrate the impact of certain design choices on software functionality.

  2. Micro-optical-mechanical system photoacoustic spectrometer

    DOEpatents

    Kotovsky, Jack; Benett, William J.; Tooker, Angela C.; Alameda, Jennifer B.

    2013-01-01

    All-optical photoacoustic spectrometer sensing systems (PASS systems) and methods include all the hardware needed to analyze the presence of a large variety of materials (solid, liquid, and gas). Some of the all-optical PASS systems require only two optical fibers to communicate with the opto-electronic power and readout systems that exist outside of the material environment. Methods for improving the signal-to-noise ratio are provided and enable micro-scale systems and methods for operating such systems.

  3. A semi-automated method of monitoring dam passage of American Eels Anguilla rostrata

    USGS Publications Warehouse

    Welsh, Stuart A.; Aldinger, Joni L.

    2014-01-01

    Fish passage facilities at dams have become an important focus of fishery management in riverine systems. Given the personnel and travel costs associated with physical monitoring programs, automated or semi-automated systems are an attractive alternative for monitoring fish passage facilities. We designed and tested a semi-automated system for eel ladder monitoring at Millville Dam on the lower Shenandoah River, West Virginia. A motion-activated eel ladder camera (ELC) photographed each yellow-phase American Eel Anguilla rostrata that passed through the ladder. Digital images (with date and time stamps) of American Eels allowed for total daily counts and measurements of eel TL using photogrammetric methods with digital imaging software. We compared physical counts of American Eels with camera-based counts; TLs obtained with a measuring board were compared with TLs derived from photogrammetric methods. Data from the ELC were consistent with data obtained by physical methods, thus supporting the semi-automated camera system as a viable option for monitoring American Eel passage. Time stamps on digital images allowed for the documentation of eel passage time—data that were not obtainable from physical monitoring efforts. The ELC has application to eel ladder facilities but can also be used to monitor dam passage of other taxa, such as crayfishes, lampreys, and water snakes.

  4. Autonomous Lawnmower using FPGA implementation.

    NASA Astrophysics Data System (ADS)

    Ahmad, Nabihah; Lokman, Nabill bin; Helmy Abd Wahab, Mohd

    2016-11-01

    Nowadays, various types of robots have been invented for multiple purposes. Robots have special capabilities that surpass human ability and can operate in extreme environments that humans cannot endure. In this paper, an autonomous robot is built to imitate the action of a human cutting grass. A Field Programmable Gate Array (FPGA) is used to control the movements and process all data and information. Very High Speed Integrated Circuit (VHSIC) Hardware Description Language (VHDL) is used to describe the hardware, using the Quartus II software. The robot is able to avoid obstacles using an ultrasonic sensor and uses two DC motors for its movement: forward, backward, and turning left and right. The path of the automatic lawn mower is based on a path-planning technique. Four Global Positioning System (GPS) points are plotted to create a boundary, ensuring that the lawn mower operates within the area given by the user. Every action of the lawn mower is controlled by the FPGA DE board (Cyclone II) with the help of the sensors. Furthermore, SketchUp software was used to design the structure of the lawn mower. The autonomous lawn mower was able to operate efficiently and smoothly, returning to its coordinated paths after passing an obstacle. It uses 25% of the total pins available on the board and 31% of the total Digital Signal Processing (DSP) blocks.
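
    The four-point GPS boundary can be enforced with a standard point-in-polygon test: the mower only advances while its next position stays inside the polygon defined by the plotted corners. The sketch below uses invented coordinates and a ray-casting test; the actual design implements its logic in VHDL on the FPGA.

    ```python
    # Ray-casting point-in-polygon test used to keep a mower inside a
    # boundary defined by four GPS points. Coordinates are invented.
    def inside(point, polygon):
        """Count crossings of a horizontal ray from the point; odd = inside."""
        x, y = point
        crossings = 0
        n = len(polygon)
        for i in range(n):
            (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
            if (y1 > y) != (y2 > y):  # edge straddles the ray
                x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                if x_cross > x:
                    crossings += 1
        return crossings % 2 == 1

    boundary = [(0.0, 0.0), (20.0, 0.0), (20.0, 10.0), (0.0, 10.0)]  # 4 GPS points
    position, heading = (5.0, 5.0), (1.0, 0.0)
    next_pos = (position[0] + heading[0], position[1] + heading[1])
    print("forward" if inside(next_pos, boundary) else "turn")
    ```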

  5. Analyzing Software Requirements Errors in Safety-Critical, Embedded Systems

    NASA Technical Reports Server (NTRS)

    Lutz, Robyn R.

    1993-01-01

    This paper analyzes the root causes of safety-related software errors in safety-critical, embedded systems. The results show that software errors identified as potentially hazardous to the system tend to be produced by different error mechanisms than non-safety-related software errors. Safety-related software errors are shown to arise most commonly from (1) discrepancies between the documented requirements specifications and the requirements needed for correct functioning of the system and (2) misunderstandings of the software's interface with the rest of the system. The paper uses these results to identify methods by which requirements errors can be prevented. The goal is to reduce safety-related software errors and to enhance the safety of complex, embedded systems.

  6. OAST Space Theme Workshop. Volume 3: Working group summary. 4: Software (E-4). A. Summary. B. Technology needs (form 1). C. Priority assessment (form 2)

    NASA Technical Reports Server (NTRS)

    1976-01-01

    Only a few efforts are currently underway to develop an adequate technology base for the various themes. Particular attention must be given to software commonality and evolutionary capability, to increased system integrity and autonomy; and to improved communications among the program users, the program developers, and the programs themselves. There is a need for quantum improvement in software development methods and increasing the awareness of software by all concerned. Major thrusts identified include: (1) data and systems management; (2) software technology for autonomous systems; (3) technology and methods for improving the software development process; (4) advances related to systems of software elements including their architecture, their attributes as systems, and their interfaces with users and other systems; and (5) applications of software including both the basic algorithms used in a number of applications and the software specific to a particular theme or discipline area. The impact of each theme on software is assessed.

  7. Software Design Methods for Real-Time Systems

    DTIC Science & Technology

    1989-12-01

    This module describes the concepts and methods used in the software design of real-time systems. It outlines the characteristics of real-time systems, describes the role of software design in real-time system development, surveys and compares some software design methods for real-time systems, and ...

  8. SU-F-T-236: Comparison of Two IMRT/VMAT QA Systems Using Gamma Index Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dogan, N; Denissova, S

    2016-06-15

    Purpose: The goal of this study is to assess differences in Gamma index pass rates between two commercial QA systems and to provide optimum Gamma index parameters for pre-treatment patient-specific QA. Methods: Twenty-two VMAT cases consisting of prostate, lung, head and neck, spine, brain, and pancreas were included in this study. The verification plans were calculated using the AcurosXB (V11) algorithm for different dose grids (1.5mm, 2.5mm, 3mm). The measurements were performed on a TrueBeam (Varian) accelerator using both EPID (S1000) portal imager and ArcCheck (Sun Nuclear Corp) devices. Gamma index criteria of 3%/3mm, 2%/3mm, and 2%/2mm, and threshold (TH) doses of 5% to 50%, were used in the analysis. Results: The differences in Gamma pass rates between the two devices are not statistically significant for 3%/3mm, yielding pass rates higher than 95%. Increasing the low-dose threshold reduced pass rates for both devices; ArcCheck's more pronounced effect can be attributed to the larger contribution of the low-dose region. As expected, tightening the criteria to 2%/2mm (TH: 10%) decreased Gamma pass rates below 95%. The higher EPID pass rates (92%) compared with ArcCheck (86%) are probably due to better spatial resolution. Portal Dosimetry results showed lower Gamma pass rates for composite plans compared to individual-field pass rates. This may be due to the expansion of the analyzed region, which includes pixels not included in the separate-field analysis. Decreasing the dose grid size from 2.5mm to 1.5mm did not show statistically significant (p<0.05) differences in Gamma pass rates for either QA device. Conclusion: Overall, both systems' measurements agree well with the calculated dose when using Gamma index criteria of 3%/3mm for a variety of VMAT cases. Variability between the two systems increases with different dose grid, threshold, and tighter gamma criteria, and must be carefully assessed prior to clinical use.
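
    For readers unfamiliar with the metric, a minimal global gamma-index calculation for two 2D dose planes looks like the following brute-force sketch (3%/3mm criteria, 10% low-dose threshold). Commercial QA systems use faster and more refined algorithms; the dose planes here are synthetic.

    ```python
    # Brute-force global 2D gamma analysis: for each measured point above
    # the low-dose threshold, minimize dd^2 + dr^2 over a local window and
    # pass the point if the minimum (gamma squared) is <= 1.
    import numpy as np

    def gamma_pass_rate(ref, meas, px_mm, dose_pct=3.0, dta_mm=3.0, thresh_pct=10.0):
        dose_tol = dose_pct / 100.0 * ref.max()          # global normalization
        r = int(np.ceil(2 * dta_mm / px_mm))             # search radius, pixels
        ny, nx = meas.shape
        passed = total = 0
        for j in range(ny):
            for i in range(nx):
                if meas[j, i] < thresh_pct / 100.0 * ref.max():
                    continue                             # below low-dose threshold
                total += 1
                best = np.inf                            # min of gamma squared
                for dj in range(-r, r + 1):
                    for di in range(-r, r + 1):
                        jj, ii = j + dj, i + di
                        if 0 <= jj < ny and 0 <= ii < nx:
                            dd = (meas[j, i] - ref[jj, ii]) / dose_tol
                            dr = px_mm * np.hypot(di, dj) / dta_mm
                            best = min(best, dd * dd + dr * dr)
                passed += best <= 1.0
        return 100.0 * passed / max(total, 1)

    ref = np.fromfunction(
        lambda j, i: np.exp(-((i - 32) ** 2 + (j - 32) ** 2) / 200.0), (64, 64))
    meas = ref * 1.02                                    # 2% global overdose
    print(f"pass rate: {gamma_pass_rate(ref, meas, px_mm=1.0):.1f}%")
    ```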

  9. Taking the Observatory to the Astronomer

    NASA Astrophysics Data System (ADS)

    Bisque, T. M.

    1997-05-01

    Since 1992, Software Bisque's Remote Astronomy Software has been used by the Mt. Wilson Institute to allow interactive control of a 24" telescope and digital camera via modem. Software Bisque now introduces a comparable, relatively low-cost observatory system that allows powerful, yet "user-friendly" telescope and CCD camera control via the Internet. Utilizing software developed for the Windows 95/NT operating systems, the system offers point-and-click access to comprehensive celestial databases, extremely accurate telescope pointing, rapid download of digital CCD images by one or many users and flexible image processing software for data reduction and analysis. Our presentation will describe how the power of the personal computer has been leveraged to provide professional-level tools to the amateur astronomer, and include a description of this system's software and hardware components. The system software includes TheSky Astronomy Software, CCDSoft CCD Astronomy Software, TPoint Telescope Pointing Analysis System software, Orchestrate and, optionally, the RealSky CDs. The system hardware includes the Paramount GT-1100 Robotic Telescope Mount, as well as third-party CCD cameras, focusers and optical tube assemblies.

  10. Spacecraft control center automation using the generic inferential executor (GENIE)

    NASA Technical Reports Server (NTRS)

    Hartley, Jonathan; Luczak, Ed; Stump, Doug

    1996-01-01

    The increasing requirement to dramatically reduce the cost of mission operations has led to increased emphasis on automation technology. The expert system technology used at the Goddard Space Flight Center (MD) is currently being applied to the automation of spacecraft control center activities. The Generic Inferential Executor (GENIE) is a tool for constructing pass automation applications. Pass script templates encode the tasks necessary to mimic the flight operations team's interactions with the spacecraft during a pass; these templates can be configured with data specific to a particular pass. Animated graphical displays illustrate progress during the pass. The first GENIE application automates passes of the Solar, Anomalous and Magnetospheric Particle Explorer (SAMPEX) spacecraft.

  11. Implementing Software Safety in the NASA Environment

    NASA Technical Reports Server (NTRS)

    Wetherholt, Martha S.; Radley, Charles F.

    1994-01-01

    Until recently, NASA did not consider allowing computers total control of flight systems. Human operators, via hardware, have constituted the ultimate safety control. In an attempt to reduce costs, NASA has come to rely more and more heavily on computers and software to control space missions. (For example, software is now planned to control most of the operational functions of the International Space Station.) Thus the need for systematic software safety programs has become crucial for mission success. Concurrent engineering principles dictate that safety should be designed into software up front, not tested into the software after the fact. 'Cost of Quality' studies have statistics and metrics to prove the value of building quality and safety into the development cycle. Unfortunately, most software engineers are not familiar with designing for safety, and most safety engineers are not software experts. Software written to specifications which have not been safety analyzed is a major source of computer related accidents. Safer software is achieved step by step throughout the system and software life cycle. It is a process that includes requirements definition, hazard analyses, formal software inspections, safety analyses, testing, and maintenance. The greatest emphasis is placed on clearly and completely defining system and software requirements, including safety and reliability requirements. Unfortunately, development and review of requirements are the weakest link in the process. While some of the more academic methods, e.g. mathematical models, may help bring about safer software, this paper proposes the use of currently approved software methodologies, and sound software and assurance practices to show how, to a large degree, safety can be designed into software from the start. NASA's approach today is to first conduct a preliminary system hazard analysis (PHA) during the concept and planning phase of a project. This determines the overall hazard potential of the system to be built. Shortly thereafter, as the system requirements are being defined, the second iteration of hazard analyses takes place, the systems hazard analysis (SHA). During the systems requirements phase, decisions are made as to what functions of the system will be the responsibility of software. This is the most critical time to affect the safety of the software. From this point, software safety analyses as well as software engineering practices are the main focus for assuring safe software. While many of the steps proposed in this paper seem like just sound engineering practices, they are the best technical and most cost effective means to assure safe software within a safe system.

  12. System for determining aerodynamic imbalance

    NASA Technical Reports Server (NTRS)

    Churchill, Gary B. (Inventor); Cheung, Benny K. (Inventor)

    1994-01-01

    A system is provided for determining tracking error in a propeller or rotor driven aircraft by determining differences in the aerodynamic loading on the propeller or rotor blades of the aircraft. The system includes a microphone disposed relative to the blades during the rotation thereof so as to receive separate pressure pulses produced by each of the blades during the passage thereof by the microphone. A low pass filter filters the output signal produced by the microphone, the low pass filter having an upper cut-off frequency set below the frequency at which the blades pass by the microphone. A sensor produces an output signal after each complete revolution of the blades, and a recording display device displays the outputs of the low pass filter and sensor so as to enable evaluation of the relative magnitudes of the pressure pulses produced by passage of the blades by the microphone during each complete revolution of the blades.
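
    The patent's signal chain can be imitated in a few lines: synthesize blade-passage pulses with one unequally loaded blade, then low-pass the microphone signal with a cutoff below the blade-passage frequency so the once-per-revolution variation stands out. Rotor speed, blade count, pulse shape, and sample rate below are invented for the demo.

    ```python
    # Low-pass a synthetic microphone signal below the blade-pass frequency
    # to expose per-revolution loading differences. All numbers invented.
    import numpy as np
    from scipy.signal import butter, filtfilt

    fs, rev_hz, n_blades = 5000.0, 20.0, 4
    blade_pass_hz = rev_hz * n_blades               # 80 Hz blade-passage frequency

    t = np.arange(0.0, 0.5, 1.0 / fs)
    mic = 0.05 * np.random.randn(t.size)            # broadband background noise
    amps = [1.0, 1.0, 1.3, 1.0]                     # third blade loaded differently
    for rev in range(int(0.5 * rev_hz)):
        for k, amp in enumerate(amps):
            t0 = (rev + k / n_blades) / rev_hz      # passage time of blade k
            mic += amp * np.exp(-0.5 * ((t - t0) / 0.001) ** 2)

    b, a = butter(4, 0.8 * blade_pass_hz / (fs / 2), btype="low")
    smoothed = filtfilt(b, a, mic)                  # cutoff below blade-pass rate
    print("residual once-per-rev variation (peak-to-peak):",
          round(float(np.ptp(smoothed)), 3))
    ```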

  13. Efficient Data Generation and Publication as a Test Tool

    NASA Technical Reports Server (NTRS)

    Einstein, Craig Jakob

    2017-01-01

    A tool to facilitate the generation and publication of test data was created to test the individual components of a command and control system designed to launch spacecraft. Specifically, this tool was built to ensure messages are properly passed between system components. The tool can also be used to test whether the appropriate groups have access (read/write privileges) to the correct messages. The messages passed between system components take the form of unique identifiers with associated values. These identifiers are alphanumeric strings that identify the type of message and the additional parameters that are contained within the message. The values that are passed with the message depend on the identifier. The data generation tool allows for the efficient creation and publication of these messages. A configuration file can be used to set the parameters of the tool and also specify which messages to pass.
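
    A tool of this kind reduces to a small loop: read a config describing message identifiers and value ranges, generate values, and publish them to the component under test. In the sketch below the config keys and the publish hook are hypothetical stand-ins, not the actual tool's format or bus API.

    ```python
    # Config-driven generation and "publication" of identifier/value test
    # messages. The config schema and publish hook are hypothetical.
    import json, random

    CONFIG = json.loads("""
    {
      "messages": [
        {"id": "GNC_TANK1_PRESSURE", "type": "float", "min": 0.0, "max": 150.0},
        {"id": "FLUIDS_VALVE3_STATE", "type": "enum", "values": ["OPEN", "CLOSED"]}
      ]
    }
    """)

    def generate(spec):
        if spec["type"] == "float":
            return random.uniform(spec["min"], spec["max"])
        return random.choice(spec["values"])

    def publish(identifier, value):
        print(f"PUBLISH {identifier} = {value}")   # stand-in for the real bus API

    for spec in CONFIG["messages"]:
        publish(spec["id"], generate(spec))
    ```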

  14. Investigation of wind behaviour around high-rise buildings

    NASA Astrophysics Data System (ADS)

    Mat Isa, Norasikin; Fitriah Nasir, Nurul; Sadikin, Azmahani; Ariff Hairul Bahara, Jamil

    2017-09-01

    A study of wind behaviour around high-rise buildings was carried out through wind tunnel experiments and computational fluid dynamics. High-rise buildings are buildings or structures with more than 12 floors. Wind is invisible to the naked eye; thus, it is hard to see and analyse its flow around and over buildings without proper methods, such as a wind tunnel and computational fluid dynamics software. The study was conducted on buildings located in Presint 4, Putrajaya, Malaysia, namely the Ministry of Rural and Regional Development, the Ministry of Information, Communications and Culture, the Ministry of Urban Wellbeing, Housing and Local Government, and the Ministry of Women, Family, and Community, by making scaled models of the buildings. The study parameters are four wind velocities, chosen based on the seasonal monsoons, and the wind direction. The ANSYS Fluent workbench software is used to compute the simulations. The computational fluid dynamics results are validated against the wind tunnel experiment. From the computational results, the study identifies the characteristics of wind around the buildings, including the boundary layer of the buildings, flow separation, and the wake region. An analysis is then conducted of the flow features produced as the wind passes the buildings, based on the velocity difference before and after the wind passes the buildings.

  15. The ATLAS high level trigger steering

    NASA Astrophysics Data System (ADS)

    Berger, N.; Bold, T.; Eifert, T.; Fischer, G.; George, S.; Haller, J.; Hoecker, A.; Masik, J.; Nedden, M. Z.; Reale, V. P.; Risler, C.; Schiavi, C.; Stelzer, J.; Wu, X.

    2008-07-01

    The High Level Trigger (HLT) of the ATLAS experiment at the Large Hadron Collider receives events which pass the LVL1 trigger at ~75 kHz and has to reduce the rate to ~200 Hz while retaining the most interesting physics. It is a software trigger and performs the reduction in two stages: the LVL2 trigger and the Event Filter (EF). At the heart of the HLT is the Steering software. To minimise processing time and data transfers it implements the novel event selection strategies of seeded, step-wise reconstruction and early rejection. The HLT is seeded by regions of interest identified at LVL1. These and the static configuration determine which algorithms are run to reconstruct event data and test the validity of trigger signatures. The decision to reject the event or continue is based on the valid signatures, taking into account pre-scale and pass-through. After the EF, event classification tags are assigned for streaming purposes. Several new features for commissioning and operation have been added: comprehensive monitoring is now built in to the framework; for validation and debugging, reconstructed data can be written out; the steering is integrated with the new configuration (presented separately), and topological and global triggers have been added. This paper will present details of the final design and its implementation, the principles behind it, and the requirements and constraints it is subject to. The experience gained from technical runs with realistic trigger menus will be described.
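
    The steering strategy, step-wise processing with early rejection plus prescale and pass-through, can be caricatured in a few lines. The steps, thresholds, and rates below are invented for illustration and bear no relation to real ATLAS trigger menus.

    ```python
    # Toy trigger chain: run steps in order, reject as soon as one fails
    # (early rejection), with prescale and pass-through applied on top.
    import random

    def run_chain(event, steps, prescale=1, pass_through=0.0):
        if random.random() < pass_through:
            return True                          # accepted regardless, for monitoring
        if random.randrange(prescale) != 0:
            return False                         # prescaled away
        for step in steps:                       # seeded, step-wise reconstruction
            if not step(event):
                return False                     # early rejection: stop processing
        return True

    steps = [lambda e: e["l1_roi_et"] > 20.0,    # cheap check on the LVL1 seed
             lambda e: e["track_pt"] > 15.0,     # more expensive tracking step
             lambda e: e["isolated"]]            # final, most expensive selection

    accepted = sum(run_chain({"l1_roi_et": random.gauss(25, 10),
                              "track_pt": random.gauss(18, 8),
                              "isolated": random.random() > 0.3},
                             steps, prescale=2) for _ in range(10000))
    print("accept rate:", accepted / 10000)
    ```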

  16. 14 CFR 417.123 - Computing systems and software.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 4 2012-01-01 2012-01-01 false Computing systems and software. 417.123... systems and software. (a) A launch operator must document a system safety process that identifies the... systems and software. (b) A launch operator must identify all safety-critical functions associated with...

  17. 14 CFR 417.123 - Computing systems and software.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 4 2013-01-01 2013-01-01 false Computing systems and software. 417.123... systems and software. (a) A launch operator must document a system safety process that identifies the... systems and software. (b) A launch operator must identify all safety-critical functions associated with...

  18. 14 CFR 417.123 - Computing systems and software.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 4 2011-01-01 2011-01-01 false Computing systems and software. 417.123... systems and software. (a) A launch operator must document a system safety process that identifies the... systems and software. (b) A launch operator must identify all safety-critical functions associated with...

  19. 14 CFR 417.123 - Computing systems and software.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Computing systems and software. 417.123... systems and software. (a) A launch operator must document a system safety process that identifies the... systems and software. (b) A launch operator must identify all safety-critical functions associated with...

  20. 14 CFR 417.123 - Computing systems and software.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 4 2014-01-01 2014-01-01 false Computing systems and software. 417.123... systems and software. (a) A launch operator must document a system safety process that identifies the... systems and software. (b) A launch operator must identify all safety-critical functions associated with...

  1. Advanced software techniques for data management systems. Volume 1: Study of software aspects of the phase B space shuttle avionics system

    NASA Technical Reports Server (NTRS)

    Martin, F. H.

    1972-01-01

    An overview of the executive system design task is presented. The flight software executive system, software verification, phase B baseline avionics system review, higher order languages and compilers, and computer hardware features are also discussed.

  2. SU-E-T-83: A Study On Evaluating the Directional Dependency of 2D Seven 29 Ion Chamber Array Clinically with Different IMRT Plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, Syam; Aswathi, C.P.

    Purpose: To evaluate the directional dependency of the 2D Seven29 ion chamber array clinically with different IMRT plans. Methods: 25 patients already treated with IMRT plans were selected for the study. Verification plans were created for each treatment plan in the Eclipse 10 treatment planning system using the AAA algorithm with the 2D array and the Octavius CT phantom. Verification plans were created twice for each patient: a first plan with real gantry angles (plan-related approach) and a second plan with zero-degree gantry angle (field-related approach). Measurements were performed on a Varian Clinac iX linear accelerator equipped with a Millennium 120 multileaf collimator. Fluence was measured for all the delivered plans and analyzed using the VeriSoft software. Comparison was done between the fluence delivered at static (zero-degree) gantry and IMRT with real gantry angles. Results: The gamma pass percentage is greater than 97% for all IMRT delivered with zero gantry angle and between 95% and 98% for real gantry angles. The dose difference between the TPS-calculated and measured dose was between 0.03 and 0.06 Gy for IMRT delivered with zero gantry angle and between 0.02 and 0.05 Gy with real gantry angles. There is a significant difference in the gamma analysis between the zero-degree and true-angle deliveries, with a significance of 0.002. The standard deviation of the gamma pass percentage was 0.68 for IMRT plans with zero gantry angle and 0.74 for IMRT with true gantry angles. Conclusion: The gamma analysis for IMRT with zero-degree gantry angles shows a higher pass percentage than IMRT delivered with true gantry angles. Verification plans delivered with true gantry angles lower the verification accuracy when the 2D array is used for measurement.

  3. MO-D-213-05: Sensitivity of Routine IMRT QA Metrics to Couch and Collimator Rotations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alaei, P

    Purpose: To assess the sensitivity of gamma index and other IMRT QA metrics to couch and collimator rotations. Methods: Two brain IMRT plans with couch and/or collimator rotations in one or more of the fields were evaluated using the IBA MatriXX ion chamber array and its associated software (OmniPro-I'mRT). The plans were subjected to routine QA by 1) creating a composite planar dose in the treatment planning system (TPS) with the couch/collimator rotations and 2) creating the planar dose after "zeroing" the rotations. Plan deliveries to MatriXX were performed with all rotations set to zero on a Varian 21EX linear accelerator. This in effect created TPS-generated planar doses with an induced rotation error. Point dose measurements for the delivered plans were also performed in a solid water phantom. Results: The IMRT QA of the plans with couch and collimator rotations showed clear discrepancies in the planar dose and 2D dose profile overlays. The gamma analysis, however, did pass with the criteria of 3%/3mm (for 95% of the points), albeit with a lower percentage pass rate, when one or two of the fields had a rotation. Similar results were obtained with tighter criteria of 2%/2mm. Other QA metrics such as percentage difference or distance-to-agreement (DTA) histograms produced similar results. The point dose measurements did not clearly indicate the error, due to the location of the dose measurement (on the central axis) and the size of the ion chamber used (0.6 cc). Conclusion: Relying on gamma analysis, percentage difference, or DTA to determine the passing of an IMRT QA may miss critical errors in the plan delivery due to couch/collimator rotations. A combination of analyses for composite QA plans, or per-beam analysis, would detect these errors.

  4. Toward the Reliable Diagnosis of DSM-5 Premenstrual Dysphoric Disorder: The Carolina Premenstrual Assessment Scoring System (C-PASS)

    PubMed Central

    Eisenlohr-Moul, Tory A.; Girdler, Susan S.; Schmalenberger, Katja M.; Dawson, Danyelle N.; Surana, Pallavi; Johnson, Jacqueline L.; Rubinow, David R.

    2016-01-01

    Objective Despite evidence for the validity of premenstrual dysphoric disorder (PMDD) and its recent inclusion in DSM-5, variable diagnostic practices compromise the construct validity of the diagnosis and threaten the clarity of efforts to understand and treat its underlying pathophysiology. In an effort to hasten and streamline the translation of the new DSM-5 criteria for PMDD into terms compatible with existing research practices, we present the development and initial validation of the Carolina Premenstrual Assessment Scoring System (C-PASS). The C-PASS is a standardized scoring system for making DSM-5 PMDD diagnoses from two or more menstrual cycles of daily symptom ratings on the Daily Record of Severity of Problems (DRSP). Method Two hundred women recruited for retrospectively-reported premenstrual emotional symptoms provided 2–4 menstrual cycles of daily symptom ratings on the DRSP. Diagnoses were made by an expert clinician and by the C-PASS. Results Agreement of C-PASS diagnosis with expert clinical diagnosis was excellent; overall correct classification by the C-PASS was estimated at 98%. Consistent with previous evidence, retrospective reports of premenstrual symptom increases were a poor predictor of prospective C-PASS diagnosis. Conclusions The C-PASS (available as a worksheet, Excel macro, and SAS macro) is a reliable and valid companion protocol to the DRSP that standardizes and streamlines the complex, multilevel diagnosis of DSM-5 PMDD. Consistent use of this robust diagnostic method would result in more clearly-defined, homogeneous samples of women with PMDD, thereby improving the clarity of studies seeking to characterize or treat the underlying pathophysiology of the disorder. PMID:27523500
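
    The core cycle-level computation behind a C-PASS-style diagnosis can be sketched as comparing mean premenstrual-week DRSP ratings against the postmenstrual week and flagging large elevations. The 30%-of-scale-range threshold and the ratings below are stand-ins; the published C-PASS applies additional multi-level rules (symptom counts, impairment, and cycle-level criteria).

    ```python
    # Flag a symptom when its premenstrual-week mean exceeds the
    # postmenstrual-week mean by a fraction of the scale range.
    # Threshold and ratings are illustrative, not the exact C-PASS rule.
    SCALE_MIN, SCALE_MAX = 1, 6                   # DRSP items are rated 1-6

    def symptom_flagged(pre_week, post_week, rel_change=0.30):
        elevation = (sum(pre_week) / len(pre_week)
                     - sum(post_week) / len(post_week))
        return elevation >= rel_change * (SCALE_MAX - SCALE_MIN)

    cycle = {
        "depressed_mood": ([5, 5, 4, 6, 5, 4, 5], [1, 2, 1, 1, 2, 1, 1]),
        "irritability":   ([2, 3, 2, 2, 3, 2, 2], [2, 2, 1, 2, 2, 2, 2]),
    }
    for symptom, (pre, post) in cycle.items():
        print(symptom, "meets elevation criterion:", symptom_flagged(pre, post))
    ```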

  5. Network, system, and status software enhancements for the autonomously managed electrical power system breadboard. Volume 1: Project summary

    NASA Technical Reports Server (NTRS)

    Mckee, James W.

    1990-01-01

    This volume (1 of 4) summarizes the original AMPS software system configuration, points out some of the problem areas in the original software design that this project addresses, and collects, in the appendix, all the bimonthly status reports. The purpose of AMPS is to provide a self-reliant system to control the generation and distribution of power in the space station. The software in the AMPS breadboard can be divided into three levels: the operating environment software, the protocol software, and the station-specific software. This project deals only with the operating environment software and the protocol software. The present station-specific software will not change except as necessary to conform to new data formats.

  6. Software system safety

    NASA Technical Reports Server (NTRS)

    Uber, James G.

    1988-01-01

    Software itself is not hazardous, but since software and hardware share common interfaces there is an opportunity for software to create hazards. Further, these software systems are complex, and proven methods for the design, analysis, and measurement of software safety are not yet available. Some past software failures, future NASA software trends, software engineering methods, and tools and techniques for various software safety analyses are reviewed. Recommendations to NASA are made based on this review.

  7. Software And Systems Engineering Risk Management

    DTIC Science & Technology

    2010-04-01

    Software and Systems Engineering Risk Management. John Walz, VP Technical and Conferences Activities, IEEE Computer Society; Vice-Chair Planning, Software & Systems Engineering Standards Committee, IEEE Computer Society; US TAG to ISO TMB Risk Management Working Group. Standards referenced include the COSO Enterprise Risk Management Framework (2004), ISO/IEC 16085 Risk Management Process (2006), ISO/IEC 12207 Software Lifecycle Processes (2008), and a 2009 ISO/IEC ...

  8. A Real-Time Data Acquisition and Processing Framework Based on FlexRIO FPGA and ITER Fast Plant System Controller

    NASA Astrophysics Data System (ADS)

    Yang, C.; Zheng, W.; Zhang, M.; Yuan, T.; Zhuang, G.; Pan, Y.

    2016-06-01

    Measurement and control of the plasma in real time are critical for advanced Tokamak operation and require high-speed real-time data acquisition and processing. ITER has designed the Fast Plant System Controllers (FPSC) for these purposes. At the J-TEXT Tokamak, a real-time data acquisition and processing framework has been designed and implemented using standard ITER FPSC technologies. The main hardware components of this framework are an Industrial Personal Computer (IPC) with a real-time operating system and FPGA-based FlexRIO devices. With FlexRIO devices, data can be processed by the FPGA in real time before being passed to the CPU. The software elements are based on a real-time framework which runs under Red Hat Enterprise Linux MRG-R and uses the Experimental Physics and Industrial Control System (EPICS) for monitoring and configuration, in keeping with standard ITER FPSC technology. With this framework, any kind of data acquisition and processing FlexRIO FPGA program can be configured as an FPSC. An application using the framework has been implemented for the polarimeter-interferometer diagnostic system on J-TEXT. The application extracts phase-shift information from the intermediate-frequency signal produced by the polarimeter-interferometer diagnostic and calculates the plasma density profile in real time. Different algorithm implementations on the FlexRIO FPGA are compared in the paper.
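
    One conventional way to extract a phase shift from an intermediate-frequency signal, the quantity the J-TEXT application needs, is quadrature demodulation: mix with in-phase and quadrature references, average away the double-frequency terms, and take atan2. The sketch below illustrates the arithmetic in NumPy; the actual system does this on the FlexRIO FPGA, and all numbers are invented.

    ```python
    # Quadrature (IQ) demodulation of a synthetic IF signal to recover a
    # phase shift. Sample rate, IF frequency, and phase are invented.
    import numpy as np

    fs, f_if = 1.0e6, 50.0e3                      # sample rate, IF frequency (Hz)
    t = np.arange(0, 2e-3, 1 / fs)                # exactly 100 IF cycles
    true_phase = 0.7                              # radians, quantity of interest
    probe = np.cos(2 * np.pi * f_if * t + true_phase)

    i_mix = probe * np.cos(2 * np.pi * f_if * t)   # in-phase mixing product
    q_mix = -probe * np.sin(2 * np.pi * f_if * t)  # quadrature mixing product
    # Averaging low-passes the 2*f_if terms, leaving 0.5*cos(phi), 0.5*sin(phi).
    phase = np.arctan2(q_mix.mean(), i_mix.mean())
    print(f"recovered phase: {phase:.3f} rad")
    ```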

  9. Study the velocity and pressure exerted in front of the filter surface in the kitchen hood system by using ANSYS

    NASA Astrophysics Data System (ADS)

    Asmuin, Norzelawati; Pairan, M. Rasidi; Isa, Norasikin Mat; Sies, Farid

    2017-04-01

    Commercial kitchen hood ventilation systems capture and filter the plumes from cooking activities in the kitchen area. They are widely used in industrial settings such as restaurants and hotels to help provide hygienic food. This study focuses on the KSA filter installed in the kitchen hood system; the purpose is to identify the critical region, indicated by the velocity and pressure of the plumes exerted at the KSA filter. Knowing the critical location on the KSA filter is important for installing the nozzle that helps increase the filtration effectiveness. The ANSYS 16.1 (FLUENT) software is used as a tool to simulate the kitchen hood system, which includes the KSA filter. The commercial kitchen hood model has dimensions of 700 mm width, 1600 mm length, and 555 mm height. The system has two inlets and one outlet. The velocity of the plumes is set to 0.235 m/s and the velocity of the inlet capture jet to 1.078 m/s. The KSA filter is placed at 45 degrees from the y-axis. The results show that the plumes have a greater tendency to flow through the bottom part of the KSA filter.

  10. 78 FR 50405 - Amended Application for Presidential Permit; Northern Pass Transmission LLC

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-19

    ... project would adversely affect the operation of the U.S. electric power supply system under normal and... proposed project. Northern Pass is wholly owned by NU Transmission Ventures, Inc., a wholly-owned..., that would meet the needs of the Project.'' On July 1, 2013, Northern Pass submitted an amended...

  11. 75 FR 48278 - Defense Federal Acquisition Regulation Supplement; Excessive Pass-Through Charges

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-10

    ... DEPARTMENT OF DEFENSE Defense Acquisition Regulations System 48 CFR Parts 215, 231, and 252 [DFARS Case 2006-D057] Defense Federal Acquisition Regulation Supplement; Excessive Pass-Through Charges AGENCY: Defense Acquisition Regulations System, Department of Defense (DoD). ACTION: Final rule. SUMMARY...

  12. 76 FR 26712 - Privacy Act of 1974; System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-09

    ... Defense (DoD) Pentagon Building Pass Files (September 11, 2008, 73 FR 52840). Changes: * * * * * System... completion date, access level, previous facility pass issuances, and authenticating official.'' Authority for...) months after expiration or return to PFPA. Verification records are maintained for 3-5 years and then...

  13. Complex pendulum biomass sensor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoskinson, Reed L.; Kenney, Kevin L.; Perrenoud, Ben C.

    A complex pendulum system biomass sensor having a plurality of pendulums. The plurality of pendulums allows the system to detect biomass height and density. Each pendulum has an angular deflection sensor and a deflector at a unique height. The pendulums are passed through the biomass and readings from the angular deflection sensors are fed into a control system. The control system determines whether adjustment of machine settings is appropriate and either displays an output to the operator or automatically adjusts the machine settings, such as the speed at which the pendulums are passed through the biomass. In an alternate embodiment, an entanglement sensor is also passed through the biomass to determine the amount of biomass entanglement. This measure of entanglement is also fed into the control system.
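
    Conceptually, the controller's job reduces to mapping a vector of deflection readings at known heights to a height/density estimate and a speed decision. The thresholds, readings, and speed rule in this sketch are invented; the patent does not specify them.

    ```python
    # Map per-height deflection readings to crop height, a density proxy,
    # and a ground-speed setting. All values are illustrative.
    heights_m = [0.2, 0.4, 0.6, 0.8, 1.0]             # deflector heights
    deflection_deg = [38.0, 31.0, 22.0, 6.0, 1.0]     # sensor readings

    TOUCH = 5.0                                       # deg; pendulum met biomass
    touched = [h for h, d in zip(heights_m, deflection_deg) if d > TOUCH]
    crop_height = max(touched) if touched else 0.0
    density_proxy = sum(deflection_deg) / len(deflection_deg)

    speed = 8.0 if density_proxy < 15 else 5.0        # km/h, toy adjustment rule
    print(f"height ~{crop_height} m, density index {density_proxy:.1f}, "
          f"speed {speed} km/h")
    ```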

  14. UPmag: MATLAB software for viewing and processing u channel or other pass-through paleomagnetic data

    NASA Astrophysics Data System (ADS)

    Xuan, Chuang; Channell, James E. T.

    2009-10-01

    With the development of pass-through cryogenic magnetometers and the u channel sampling method, large volumes of paleomagnetic data can be accumulated within a short time period. It is often critical to visualize and process these data in "real time" as measurements proceed, so that the measurement plan can be dictated accordingly. We introduce new MATLAB™ software (UPmag) that is designed for easy and rapid analysis of natural remanent magnetization (NRM) and laboratory-induced remanent magnetization data for u channel samples or core sections. UPmag comprises three MATLAB™ graphic user interfaces: UVIEW, UDIR, and UINT. UVIEW allows users to open and check through measurement data from the magnetometer as well as to correct detected flux jumps in the data, and to export files for further treatment. UDIR reads the *.dir file generated by UVIEW, automatically calculates component directions using selectable demagnetization range(s) with anchored or free origin, and displays vector component plots and stepwise intensity plots for any position along the u channel sample. UDIR can also display data on equal area stereographic projections and draw virtual geomagnetic poles on various map projections. UINT provides a convenient platform to evaluate relative paleointensity (RPI) estimates using the *.int files that can be exported from UVIEW. Two methods are used for RPI estimation: the calculated slopes of the best fit line between the NRM and the respective normalizer (using paired demagnetization data for both parameters) and the averages of the NRM/normalizer ratios. Linear correlation coefficients (of slopes) and standard deviations (of ratios) can be calculated simultaneously to monitor the quality of the RPI estimates. All resulting data and plots from UPmag can be exported into various file formats. UPmag software, data format files, and test data can be downloaded from http://earthref.org/cgi-bin/er.cgi?s=erda.cgi?n=985.
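
    The two RPI estimators described above are simple to state numerically: the slope of a best-fit line through paired NRM and normalizer demagnetization data (with its linear correlation coefficient as a quality check), and the mean of step-wise NRM/normalizer ratios (with its standard deviation). The demagnetization values below are invented, and the sketch is in Python rather than UPmag's MATLAB.

    ```python
    # Two relative-paleointensity estimators: slope of NRM vs. normalizer
    # across demagnetization steps, and mean of step-wise ratios.
    import numpy as np

    nrm = np.array([8.1, 6.9, 5.6, 4.4, 3.1, 2.2])   # NRM left at each AF step
    arm = np.array([9.0, 7.6, 6.3, 4.9, 3.5, 2.4])   # normalizer at same steps

    slope, intercept = np.polyfit(arm, nrm, 1)        # free-origin best fit
    r = np.corrcoef(arm, nrm)[0, 1]                   # quality of the fit

    ratios = nrm / arm
    print(f"RPI (slope)  = {slope:.3f}, r = {r:.4f}")
    print(f"RPI (ratios) = {ratios.mean():.3f} +/- {ratios.std(ddof=1):.3f}")
    ```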

  15. SU-F-T-300: Impact of Electron Density Modeling of ArcCHECK Cylindricaldiode Array On 3DVH Patient Specific QA Software Tool Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patwe, P; Mhatre, V; Dandekar, P

    Purpose: 3DVH software is a patient specific quality assurance tool which estimates the 3D dose to the patient specific geometry with the help of Planned Dose Perturbation algorithm. The purpose of this study is to evaluate the impact of HU value of ArcCHECK phantom entered in Eclipse TPS on 3D dose & DVH QA analysis. Methods: Manufacturer of ArcCHECK phantom provides CT data set of phantom & recommends considering it as a homogeneous phantom with electron density (1.19 gm/cc or 282 HU) close to PMMA. We performed this study on Eclipse TPS (V13, VMS) & trueBEAM STx VMS Linac &more » ArcCHECK phantom (SNC). Plans were generated for 6MV photon beam, 20cm×20cm field size at isocentre & SPD (Source to phantom distance) of 86.7 cm to deliver 100cGy at isocentre. 3DVH software requires patients DICOM data generated by TPS & plan delivered on ArcCHECK phantom. Plans were generated in TPS by assigning different HU values to phantom. We analyzed gamma index & the dose profile for all plans along vertical down direction of beam’s central axis for Entry, Exit & Isocentre dose. Results: The global gamma passing rate (2% & 2mm) for manufacturer recommended HU value 282 was 96.3%. Detector entry, Isocentre & detector exit Doses were 1.9048 (1.9270), 1.00(1.0199) & 0.5078(0.527) Gy for TPS (Measured) respectively.The global gamma passing rate for electron density 1.1302 gm/cc was 98.6%. Detector entry, Isocentre & detector exit Doses were 1.8714 (1.8873), 1.00(0.9988) & 0.5211(0.516) Gy for TPS (Measured) respectively. Conclusion: Electron density value assigned by manufacturer does not hold true for every user. Proper modeling of electron density of ArcCHECK in TPS is essential to avoid systematic error in dose calculation of patient specific QA.« less

  16. 22 CFR 121.8 - End-items, components, accessories, attachments, parts, firmware, software, and systems.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ..., parts, firmware, software, and systems. 121.8 Section 121.8 Foreign Relations DEPARTMENT OF STATE...-items, components, accessories, attachments, parts, firmware, software, and systems. (a) An end-item is.... Firmware includes but is not limited to circuits into which software has been programmed. (f) Software...

  17. 22 CFR 121.8 - End-items, components, accessories, attachments, parts, firmware, software and systems.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ..., parts, firmware, software and systems. 121.8 Section 121.8 Foreign Relations DEPARTMENT OF STATE...-items, components, accessories, attachments, parts, firmware, software and systems. (a) An end-item is.... Firmware includes but is not limited to circuits into which software has been programmed. (f) Software...

  18. 22 CFR 121.8 - End-items, components, accessories, attachments, parts, firmware, software and systems.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ..., parts, firmware, software and systems. 121.8 Section 121.8 Foreign Relations DEPARTMENT OF STATE...-items, components, accessories, attachments, parts, firmware, software and systems. (a) An end-item is.... Firmware includes but is not limited to circuits into which software has been programmed. (f) Software...

  19. 22 CFR 121.8 - End-items, components, accessories, attachments, parts, firmware, software and systems.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ..., parts, firmware, software and systems. 121.8 Section 121.8 Foreign Relations DEPARTMENT OF STATE...-items, components, accessories, attachments, parts, firmware, software and systems. (a) An end-item is.... Firmware includes but is not limited to circuits into which software has been programmed. (f) Software...

  20. 22 CFR 121.8 - End-items, components, accessories, attachments, parts, firmware, software and systems.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ..., parts, firmware, software and systems. 121.8 Section 121.8 Foreign Relations DEPARTMENT OF STATE...-items, components, accessories, attachments, parts, firmware, software and systems. (a) An end-item is.... Firmware includes but is not limited to circuits into which software has been programmed. (f) Software...

  1. Single-pass incremental force updates for adaptively restrained molecular dynamics.

    PubMed

    Singh, Krishna Kant; Redon, Stephane

    2018-03-30

    Adaptively restrained molecular dynamics (ARMD) allows users to perform more integration steps in a given wall-clock time by switching positional degrees of freedom on and off. This article presents new single-pass incremental force-update algorithms to efficiently simulate a system using ARMD. We assessed different algorithms through speedup measurements and implemented them in the LAMMPS MD package. We validated the single-pass incremental force-update algorithm on four different benchmarks using diverse pair potentials. The proposed algorithm allows us to perform simulation of a system faster than traditional MD in both NVE and NVT ensembles. Moreover, ARMD using the new single-pass algorithm speeds up the convergence of observables in wall-clock time. © 2017 Wiley Periodicals, Inc.
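
    The core idea of a single-pass incremental update — revisit only pairs touched by active particles, swapping old force contributions for new ones — can be sketched as follows (an illustrative Python fragment with an ordinary Lennard-Jones pair force and invented names; the published algorithms are implemented in LAMMPS and are more elaborate):

        import numpy as np

        def incremental_force_update(pos, old_pos, forces, pairs, active,
                                     eps=1.0, sigma=1.0):
            """Single-pass incremental update: only pairs containing at least
            one active particle are revisited; each such pair's old force
            contribution is subtracted and its new one added, so forces on
            fully restrained pairs are never recomputed."""
            def lj_force(rij):
                # Lennard-Jones pair force on particle i from particle j.
                r2 = np.dot(rij, rij)
                sr6 = (sigma * sigma / r2) ** 3
                return 24.0 * eps * (2.0 * sr6 * sr6 - sr6) / r2 * rij

            for i, j in pairs:
                if not (active[i] or active[j]):
                    continue  # both particles restrained: contribution unchanged
                df = lj_force(pos[i] - pos[j]) - lj_force(old_pos[i] - old_pos[j])
                forces[i] += df
                forces[j] -= df
            return forces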

  2. SU-E-T-651: Quantification of Dosimetric Accuracy of Respiratory Gated Stereotactic Body Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thiyagarajan, Rajesh; Vikraman, S; Maragathaveni, S

    2015-06-15

    Purpose: To quantify the dosimetric accuracy of respiratory gated stereotactic body radiation therapy delivery using a dynamic thorax phantom. Methods: Three patients with mobile targets (2 lung, 1 liver) were chosen. Retrospective 4DCT image sets were acquired using the Varian RPM system. An in-house MATLAB program was designed for MIP, MinIP and AvgIP generation. The ITV was contoured on the MIP image set for the lung patients and on the MinIP for the liver patient. Dynamic IMRT plans were generated on the selected phase-bin image set in the Eclipse (v10.0) planning system. A CIRS dynamic thorax phantom was used to perform the dosimetric quality assurance. The patient breathing pattern file from the RPM system was converted to a phantom-compatible file by an in-house MATLAB program, and this respiratory pattern was fed to the CIRS dynamic thorax phantom. A 4DCT image set was acquired for this phantom using the patient breathing pattern. Verification plans were generated using the patient gating window and delivered on the phantom. Measurements were carried out with an ion chamber and EBT2 film. Exposed films were analyzed and evaluated in FilmQA software. Results: The stability of the gated output in comparison with the un-gated output was within 0.5%. The ion chamber measured and TPS calculated doses were compared for all the patients. The differences observed were 0.45%, −0.52% and −0.54% for Patients 1, 2 and 3 respectively. Gamma values evaluated from EBT film show pass rates from 92.41% to 99.93% for 3% dose difference and 3mm distance to agreement criteria. Conclusion: The accuracy of respiratory gated SBRT delivery for lung and liver was dosimetrically acceptable. The ion chamber measured dose was within 0.203±0.5659% of the expected dose. Gamma pass rates were within 96.63±3.84%.

  3. Software Architecture for Big Data Systems

    DTIC Science & Technology

    2014-03-27

    Presentation in the Carnegie Mellon University "Software Architecture: Trends and New Directions" series (2014) on software architecture for big data systems.

  4. A Probabilistic Software System Attribute Acceptance Paradigm for COTS Software Evaluation

    NASA Technical Reports Server (NTRS)

    Morris, A. Terry

    2005-01-01

    Standard software requirement formats are written from top-down perspectives only, that is, from an ideal notion of a client's needs. Despite the exactness of the standard format, software and system errors in designed systems have abounded. Bad and inadequate requirements have resulted in cost overruns, schedule slips and lost profitability. Commercial off-the-shelf (COTS) software components are even more troublesome than designed systems because they are often provided as is and subsequently delivered with unsubstantiated validation of described capabilities. For COTS software, there needs to be a way to express the client's software needs in a consistent and formal manner using software system attributes derived from software quality standards. Additionally, the format needs to be amenable to software evaluation processes that integrate observable evidence garnered from historical data. This paper presents a paradigm that effectively bridges the gap between what a client desires (top-down) and what has been demonstrated (bottom-up) for COTS software evaluation. The paradigm addresses the specification of needs before the software evaluation is performed and can be used to increase the shared understanding between clients and software evaluators about what is required and what is technically possible.

  5. Designing robust watermark barcodes for multiplex long-read sequencing.

    PubMed

    Ezpeleta, Joaquín; Krsticevic, Flavia J; Bulacio, Pilar; Tapia, Elizabeth

    2017-03-15

    To attain acceptable sample-misassignment rates, current approaches to multiplex single-molecule real-time sequencing require upstream quality improvement, which is obtained from multiple passes over the sequenced insert and significantly reduces the effective read length. In order to fully exploit the raw read length in multiplex applications, robust barcodes capable of dealing with the full single-pass error rates are needed. We present a method for designing sequencing barcodes that can withstand a large number of insertion, deletion and substitution errors and are suitable for use in multiplex single-molecule real-time sequencing. The manuscript focuses on the design of barcodes for full-length single-pass reads, impaired by challenging error rates on the order of 11%. The proposed barcodes can multiplex hundreds or thousands of samples while achieving sample-misassignment probabilities as low as 10^-7 under the above conditions, and are designed to be compatible with chemical constraints imposed by the sequencing process. Software tools for constructing watermark barcode sets and demultiplexing barcoded reads, together with example sets of barcodes and synthetic barcoded reads, are freely available at www.cifasis-conicet.gov.ar/ezpeleta/NS-watermark . ezpeleta@cifasis-conicet.gov.ar. © The Author 2016. Published by Oxford University Press. All rights reserved.
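
    The paper's watermark codes are purpose-built, but the demultiplexing problem they solve — assigning an error-laden read prefix to the nearest barcode under insertions, deletions and substitutions — can be illustrated with a generic edit-distance decoder (a Python sketch; the threshold and tie-breaking rule are our assumptions, not the NS-watermark algorithm):

        def levenshtein(a, b):
            """Edit distance counting insertions, deletions and substitutions."""
            prev = list(range(len(b) + 1))
            for i, ca in enumerate(a, 1):
                cur = [i]
                for j, cb in enumerate(b, 1):
                    cur.append(min(prev[j] + 1,                 # deletion
                                   cur[j - 1] + 1,              # insertion
                                   prev[j - 1] + (ca != cb)))   # substitution
                prev = cur
            return prev[-1]

        def demultiplex(read_prefix, barcodes, max_dist=3):
            """Assign a read to its nearest barcode; reject ambiguous calls."""
            scored = sorted((levenshtein(read_prefix, bc), bc) for bc in barcodes)
            (d1, best), (d2, _) = scored[0], scored[1]
            if d1 <= max_dist and d1 < d2:
                return best
            return None  # leave the read unassigned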

  6. Educational support for specialist international medical graduates in anaesthesia.

    PubMed

    Higgins, Niall S; Taraporewalla, Kersi; Edirippulige, Sisira; Ware, Robert S; Steyn, Michael; Watson, Marcus O

    2013-08-19

    To measure specialist international medical graduates' (SIMGs) level of learning through participation in guided tutorials, face-to-face or through videoconferencing (VC), and the effect of tutorial attendance and quality of participation on success in specialist college examinations. Tutorials were conducted at the Royal Brisbane and Women's Hospital between 19 September 2007 and 23 August 2010, and delivered through VC to participants at other locations. Tutorials were recorded and transcribed, and speaker contributions were tagged and ranked using content analysis software. Summary examination results were obtained from the Australian and New Zealand College of Anaesthetists. Tutorial participation and attendance, and college examination pass and fail rates. Transcripts were obtained for 116 tutorials. The median participation percentage for those who subsequently failed the college examinations was 1% (interquartile range [IQR], 0%-1%), while for those who passed the exams it was 5% (IQR, 2%-8%; P < 0.001). There was also an association between attendance and exam success; the median (IQR) attendance of those who failed was 24% (IQR, 14%-39%), while for those who passed it was 59% (IQR, 39%-77%; P < 0.001). Use of VC technology was found to be a feasible method to assist SIMGs to become aware of the requirements of the exam and to prepare more effectively.

  7. NASA Tech Briefs, December 2007

    NASA Technical Reports Server (NTRS)

    2007-01-01

    Topics include: Ka-Band TWT High-Efficiency Power Combiner for High-Rate Data Transmission; Reusable, Extensible High-Level Data-Distribution Concept; Processing Satellite Imagery To Detect Waste Tire Piles; Monitoring by Use of Clusters of Sensor-Data Vectors; Circuit and Method for Communication Over DC Power Line; Switched Band-Pass Filters for Adaptive Transceivers; Noncoherent DTTLs for Symbol Synchronization; High-Voltage Power Supply With Fast Rise and Fall Times; Waveguide Calibrator for Multi-Element Probe Calibration; Four-Way Ka-Band Power Combiner; Loss-of-Control-Inhibitor Systems for Aircraft; Improved Underwater Excitation-Emission Matrix Fluorometer; Metrology Camera System Using Two-Color Interferometry; Design and Fabrication of High-Efficiency CMOS/CCD Imagers; Foam Core Shielding for Spacecraft; CHEM-Based Self-Deploying Planetary Storage Tanks; Sequestration of Single-Walled Carbon Nanotubes in a Polymer; PPC750 Performance Monitor; Application-Program-Installer Builder; Using Visual Odometry to Estimate Position and Attitude; Design and Data Management System; Simple, Script-Based Science Processing Archive; Automated Rocket Propulsion Test Management; Online Remote Sensing Interface; Fusing Image Data for Calculating Position of an Object; Implementation of a Point Algorithm for Real-Time Convex Optimization; Handling Input and Output for COAMPS; Modeling and Grid Generation of Iced Airfoils; Automated Identification of Nucleotide Sequences; Balloon Design Software; Rocket Science 101 Interactive Educational Program; Creep Forming of Carbon-Reinforced Ceramic-Matrix Composites; Dog-Bone Horns for Piezoelectric Ultrasonic/Sonic Actuators; Benchtop Detection of Proteins; Recombinant Collagenlike Proteins; Remote Sensing of Parasitic Nematodes in Plants; Direct Coupling From WGM Resonator Disks to Photodetectors; Using Digital Radiography To Image Liquid Nitrogen in Voids; Multiple-Parameter, Low-False-Alarm Fire-Detection Systems; Mosaic-Detector-Based Fluorescence Spectral Imager; Plasmoid Thruster for High Specific-Impulse Propulsion; Analysis Method for Quantifying Vehicle Design Goals; Improved Tracking of Targets by Cameras on a Mars Rover; Sample Caching Subsystem; Multistage Passive Cooler for Spaceborne Instruments; GVIPS Models and Software; Stowable Energy-Absorbing Rocker-Bogie Suspensions.

  8. WE-F-16A-05: Use of 3D-Printers to Create a Tissue Equivalent 3D-Bolus for External Beam Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burleson, S; Baker, J; Hsia, A

    2014-06-15

    Purpose: The purpose of this project is to demonstrate that an inexpensive 3D printer can be used to manufacture a 3D bolus for external beam therapy. The printed bolus can then be modeled in our treatment planning system to ensure accurate dose delivery to the patient. Methods: We developed a simple method to manufacture a patient-specific custom 3D bolus. The bolus is designed using the Eclipse Treatment Planning System, contoured onto the patient's CT images. The bolus file is exported from Eclipse to the 3D-printer software and then printed using a 3D printer. Various tests were completed to determine the properties of the printing material. Percent depth dose curves in this material were measured with electron and photon beams for comparison to other materials. In order to test the validity of the 3D-printed bolus for treatment planning, a custom bolus was printed and tested on the Rando phantom using film for a dose-plane comparison. We compared the dose plane measured on the film to the same dose plane exported from our treatment planning system using Film QA software. The gamma dose-distribution tool was used in our film analysis. Results: We compared point measurements throughout the dose plane and achieved a greater than 95% passing rate at 3% dose difference and 3 mm distance to agreement, which meets our department's accepted gamma criteria. Conclusion: The printed 3D bolus has proven to be accurately modeled in our treatment planning system; it is more conformal to the patient surface and more durable than other boluses currently used (wax, Superflab, etc.). It is also more convenient and less costly than comparable boluses from milling-machine companies.
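
    The Eclipse-to-printer step amounts to turning a contoured structure into a surface mesh the printer software accepts. A hedged sketch of that conversion (assuming a voxelized bolus mask and the scikit-image marching-cubes routine; not the workflow used in the study):

        import numpy as np
        from skimage import measure

        def bolus_mask_to_stl(mask, spacing, path):
            """Turn a 3-D binary bolus mask (voxelized TPS structure) into an
            ASCII STL surface mesh for slicer/printer software.

            mask    : 3-D boolean array marking bolus voxels
            spacing : (dz, dy, dx) voxel size in mm"""
            verts, faces, normals, _ = measure.marching_cubes(
                mask.astype(np.uint8), level=0.5, spacing=spacing)
            with open(path, "w") as f:
                f.write("solid bolus\n")
                for tri in faces:
                    n = normals[tri].mean(axis=0)  # average vertex normal per facet
                    f.write("  facet normal %e %e %e\n" % tuple(n))
                    f.write("    outer loop\n")
                    for v in verts[tri]:
                        f.write("      vertex %e %e %e\n" % tuple(v))
                    f.write("    endloop\n  endfacet\n")
                f.write("endsolid bolus\n")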

  9. Configuration management and software measurement in the Ground Systems Development Environment (GSDE)

    NASA Technical Reports Server (NTRS)

    Church, Victor E.; Long, D.; Hartenstein, Ray; Perez-Davila, Alfredo

    1992-01-01

    A set of functional requirements for software configuration management (CM) and metrics reporting for Space Station Freedom ground systems software is described. This report is one of a series from a study of the interfaces among the Ground Systems Development Environment (GSDE), the development systems for the Space Station Training Facility (SSTF) and the Space Station Control Center (SSCC), and the target systems for SSCC and SSTF. The focus is on the CM of the software following delivery to NASA and on the software metrics that relate to the quality and maintainability of the delivered software. The CM and metrics requirements address specific problems that occur in large-scale software development. Mechanisms to assist in the continuing improvement of mission operations software development are described.

  10. Independent calculation-based verification of IMRT plans using a 3D dose-calculation engine.

    PubMed

    Arumugam, Sankar; Xing, Aitang; Goozee, Gary; Holloway, Lois

    2013-01-01

    Independent monitor unit verification of intensity-modulated radiation therapy (IMRT) plans requires detailed 3-dimensional (3D) dose verification. The aim of this study was to investigate using a 3D dose engine in a second commercial treatment planning system (TPS) for this task, facilitated by in-house software. Our department has XiO and Pinnacle TPSs, both with IMRT planning capability and modeled for an Elekta Synergy 6MV photon beam. These systems allow the transfer of computed tomography (CT) data and RT structures between them but do not allow IMRT plans to be transferred. To provide this connectivity, an in-house computer programme was developed to convert radiation therapy prescription (RTP) files as generated by many planning systems into either XiO or Pinnacle IMRT file formats. Utilization of the technique and software was assessed by transferring 14 IMRT plans from XiO and Pinnacle onto the other system and performing 3D dose verification. The accuracy of the conversion process was checked by comparing the 3D dose matrices and dose volume histograms (DVHs) of structures for the recalculated plan on the same system. The developed software successfully transferred IMRT plans generated by one planning system into the other. Comparison of planning target volume (PTV) DVHs for the original and recalculated plans showed good agreement; a maximum difference of 2% in mean dose, −2.5% in D95, and 2.9% in V95 was observed. Similarly, a DVH comparison of organs at risk showed a maximum difference of +7.7% between the original and recalculated plans for structures in both high- and medium-dose regions. However, for structures in low-dose regions (less than 15% of prescription dose), a difference in mean dose of up to +21.1% was observed between XiO and Pinnacle calculations. A dose matrix comparison of original and recalculated plans in the XiO and Pinnacle TPSs was performed using gamma analysis with 3%/3mm criteria. The mean and standard deviation of pixels passing gamma tolerance for XiO-generated IMRT plans were 96.1 ± 1.3, 96.6 ± 1.2, and 96.0 ± 1.5 in axial, coronal, and sagittal planes respectively. Corresponding results for Pinnacle-generated IMRT plans were 97.1 ± 1.5, 96.4 ± 1.2, and 96.5 ± 1.3 in axial, coronal, and sagittal planes respectively. © 2013 American Association of Medical Dosimetrists.
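
    Since the comparison leans on 3%/3 mm gamma analysis, a brute-force sketch of a global 2D gamma computation may help; this is a generic Python illustration (the function name and the uniform-grid assumption are ours, not the study's software):

        import numpy as np

        def gamma_pass_rate(ref, ev, pixel_mm, dose_tol=0.03, dta_mm=3.0):
            """Fraction of reference pixels passing a global gamma test.

            ref, ev  : 2-D dose arrays on the same uniform grid
            pixel_mm : grid spacing in mm
            The inner minimum is gamma squared; comparing it to 1 is
            equivalent to comparing gamma to 1."""
            norm = dose_tol * ref.max()               # global dose criterion
            reach = int(np.ceil(dta_mm / pixel_mm))   # search radius in pixels
            ny, nx = ref.shape
            passed = 0
            for y in range(ny):
                for x in range(nx):
                    best = np.inf
                    for dy in range(-reach, reach + 1):
                        for dx in range(-reach, reach + 1):
                            yy, xx = y + dy, x + dx
                            if 0 <= yy < ny and 0 <= xx < nx:
                                r2 = (dy * dy + dx * dx) * pixel_mm ** 2
                                d = ev[yy, xx] - ref[y, x]
                                best = min(best, r2 / dta_mm ** 2 + (d / norm) ** 2)
                    passed += best <= 1.0
            return passed / (ny * nx)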

  11. Debugging and Performance Analysis Software Tools for Peregrine System

    Science.gov Websites

    Learn about the debugging and performance analysis software tools available for use with the Peregrine high-performance computing system at NREL, such as Allinea.

  12. 75 FR 5146 - Hewlett Packard Company Business Critical Systems, Mission Critical Business Software Division...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-01

    ... Packard Company Business Critical Systems, Mission Critical Business Software Division, OpenVMS Operating... Software Division, OpenVMS Operating System Development Group, Including an Employee Operating Out of the..., Mission Critical Business Software Division, OpenVMS Operating System Development Group, including...

  13. Band-pass filtering algorithms for adaptive control of compressor pre-stall modes in aircraft gas-turbine engine

    NASA Astrophysics Data System (ADS)

    Kuznetsova, T. A.

    2018-05-01

    Methods for increasing the adaptability of gas-turbine aircraft engines (GTE) to interference, based on enhancing their automatic control systems (ACS), are analyzed. Flow pulsations in the suction and discharge lines of the compressor, which may cause stall, are considered as the interference. An algorithmic solution to the problem of controlling GTE pre-stall modes adapted to the stability boundary is proposed. The aim of the study is to develop band-pass filtering algorithms that provide detection of compressor pre-stall modes for the GTE ACS. The characteristic feature of the pre-stall effect is an increase of the pressure-pulsation amplitude over the impeller at multiples of the rotor frequencies. The method is based on a band-pass filter combining low-pass and high-pass digital filters. The impulse response of the high-pass filter is determined from a known low-pass filter impulse response by spectral inversion. The resulting transfer function of the second-order band-pass filter (BPF) corresponds to a stable system. Two circuit implementations of the BPF are synthesized. The designed band-pass filtering algorithms were tested in the MATLAB environment. Comparative analysis of the amplitude-frequency responses of the proposed implementations allows choosing the BPF scheme that provides the best quality of filtering. The BPF reaction to a periodic sinusoidal signal, simulating the experimentally obtained pressure-pulsation function in the pre-stall mode, was considered. The results of the model experiment demonstrated the effectiveness of applying band-pass filtering algorithms as part of the ACS to identify the pre-stall mode of the compressor by detecting the pressure-fluctuation peaks that characterize the compressor's approach to the stability boundary.
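
    The construction described — a high-pass obtained from a low-pass prototype by spectral inversion, cascaded with a low-pass to form a band-pass — can be sketched as follows (a Python/SciPy illustration with assumed cutoff frequencies and sample rate; the paper's own implementation is in MATLAB):

        import numpy as np
        from scipy.signal import firwin

        def bandpass_by_inversion(numtaps, f_lo, f_hi, fs):
            """Cascade a low-pass FIR (cutoff f_hi) with a high-pass FIR made
            from a low-pass prototype (cutoff f_lo) by spectral inversion:
            negate the taps and add 1 at the center sample."""
            assert numtaps % 2 == 1, "inversion needs an odd, symmetric FIR"
            h_lp = firwin(numtaps, f_hi, fs=fs)      # passes below f_hi
            h_proto = firwin(numtaps, f_lo, fs=fs)   # low-pass prototype
            h_hp = -h_proto
            h_hp[numtaps // 2] += 1.0                # spectral inversion
            return np.convolve(h_lp, h_hp)           # band-pass f_lo..f_hi

        # Illustrative use: isolate pulsation components between 50 and 400 Hz
        # in a signal sampled at 2 kHz (cutoffs and rate are assumptions).
        h_bp = bandpass_by_inversion(129, 50.0, 400.0, fs=2000.0)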

  14. Software Development to Assist in the Processing and Analysis of Data Obtained Using Fiber Bragg Grating Interrogation Systems

    NASA Technical Reports Server (NTRS)

    Hicks, Rebecca

    2010-01-01

    A fiber Bragg grating is a portion of the core of a fiber optic strand that has been treated to affect the way light travels through the strand. Light within a certain narrow range of wavelengths will be reflected along the fiber by the grating, while light outside that range will pass through the grating mostly undisturbed. Since the range of wavelengths that can penetrate the grating depends on the grating itself as well as on temperature and mechanical strain, fiber Bragg gratings can be used as temperature and strain sensors. This capability, along with the lightweight nature of the fiber optic strands in which the gratings reside, makes fiber optic sensors an ideal candidate for flight testing and monitoring in which temperature and wing strain are factors. A team of NASA Dryden engineers has been working to advance fiber optic sensor technology since the mid-1990s. The team has been able to improve the dependability and sample rate of fiber optic sensor systems, making them more suitable for real-time wing shape and strain monitoring and capable of rivaling traditional strain gauge sensors in accuracy. The sensor system was recently tested on the Ikhana unmanned aircraft and will be used on the Global Observer unmanned aircraft. Since a fiber Bragg grating sensor can be placed every half-inch on each optic fiber, and since fibers of approximately 40 feet in length each are to be used on the Global Observer, each of these fibers will have approximately 1,000 sensors. A total of 32 fibers are to be placed on the Global Observer aircraft, to be sampled at a rate of about 50 Hz, meaning about 1.6 million data points will be taken every second. The fiber optic sensor system is capable of producing massive amounts of potentially useful data; however, methods to capture, record, and analyze all of this data in a way that makes the information useful to flight test engineers are currently limited. The purpose of this project is to research the availability of software capable of processing massive amounts of data in both real-time and post-flight settings, and to produce software segments that can be integrated to assist in the task as well. The selected software must be able to: (1) process massive amounts of data (up to 4GB) at a speed useful in real-time settings (small fractions of a second); (2) process data in post-flight settings to allow test reproduction or further data analysis; (3) produce, or make easier to produce, three-dimensional plots/graphs to make the data accessible to flight test engineers; and (4) be customized to allow users to use their own processing formulas or functions and display the data in formats they prefer. Several software programs were evaluated to determine their utility in completing the research objectives. These programs include: OriginLab, Graphis, 3D Grapher, Visualization Sciences Group (VSG) Avizo Wind, Interactive Analysis and Display System (IADS), SigmaPlot, and MATLAB.
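
    For reference, the wavelength selectivity described here follows the standard fiber Bragg grating relations (textbook formulas, not taken from this record). The reflected Bragg wavelength is

        \lambda_B = 2\, n_{\mathrm{eff}}\, \Lambda

    and strain and temperature shift it approximately as

        \frac{\Delta\lambda_B}{\lambda_B} \approx (1 - p_e)\,\varepsilon + (\alpha + \xi)\,\Delta T

    where n_eff is the effective refractive index of the core, \Lambda the grating period, p_e the effective photo-elastic coefficient, \varepsilon the axial strain, \alpha the thermal-expansion coefficient, and \xi the thermo-optic coefficient. This dependence is what lets each grating act simultaneously as a strain and temperature sensor.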

  15. The evaluation of a 2D diode array in “magic phantom” for use in high dose rate brachytherapy pretreatment quality assurance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Espinoza, A.; Petasecca, M.; Fuduli, I.

    2015-02-15

    Purpose: High dose rate (HDR) brachytherapy is a treatment method that is used increasingly worldwide. The development of a sound quality assurance program for the verification of treatment deliveries can be challenging due to the high source activity utilized and the need for precise measurements of dwell positions and times. This paper describes the application of a novel phantom, based on a 2D 11 × 11 diode array detection system, named “magic phantom” (MPh), to accurately measure plan dwell positions and times, compare them directly to the treatment plan, determine errors in treatment delivery, and calculate absorbed dose. Methods: The magic phantom system was CT scanned and a 20 catheter plan was generated to simulate a nonspecific treatment scenario. This plan was delivered to the MPh and, using a custom developed software suite, the dwell positions and times were measured and compared to the plan. The original plan was also modified, with changes not disclosed to the primary authors, and measured again using the device and software to determine the modifications. A new metric, the “position–time gamma index,” was developed to quantify the quality of a treatment delivery when compared to the treatment plan. The MPh was evaluated to determine the minimum measurable dwell time and step size. The incorporation of the TG-43U1 formalism directly into the software allows for dose calculations to be made based on the measured plan. The estimated dose distributions calculated by the software were compared to the treatment plan and to calibrated EBT3 film, using the 2D gamma analysis method. Results: For the original plan, the magic phantom system was capable of measuring all dwell points and dwell times, and the majority were found to be within 0.93 mm and 0.25 s, respectively, of the plan. By measuring the altered plan and comparing it to the unmodified treatment plan, the use of the position–time gamma index showed that all modifications made could be readily detected. The MPh was able to measure dwell times down to 0.067 ± 0.001 s and planned dwell positions separated by 1 mm. The dose calculation carried out by the MPh software was found to be in agreement with values calculated by the treatment planning system within 0.75%. Using the 2D gamma index, the dose map of the MPh plane and measured EBT3 were found to have a pass rate of over 95% when compared to the original plan. Conclusions: The application of this magic phantom quality assurance system to HDR brachytherapy has demonstrated promising ability to perform the verification of treatment plans, based upon the measured dwell positions and times. The introduction of the quantitative position–time gamma index allows for direct comparison of measured parameters against the plan and could be used prior to patient treatment to ensure accurate delivery.
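
    For reference, the TG-43U1 dose-rate equation that the software incorporates has the standard two-dimensional form (the published AAPM formalism, not reproduced from this record):

        \dot{D}(r,\theta) = S_K \, \Lambda \, \frac{G_L(r,\theta)}{G_L(r_0,\theta_0)} \, g_L(r) \, F(r,\theta)

    where S_K is the source air-kerma strength, \Lambda the dose-rate constant, G_L the line-source geometry function, g_L(r) the radial dose function, F(r,\theta) the 2D anisotropy function, and (r_0, \theta_0) = (1 cm, \pi/2) the reference point.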

  16. Achieving Better Buying Power through Acquisition of Open Architecture Software Systems for Web-Based and Mobile Devices

    DTIC Science & Technology

    2015-05-01

    Presentation by Walt Scacchi and Thomas ... (2015) on achieving Better Buying Power (BBP) through acquisition of open architecture (OA) software systems, covering emerging challenges in achieving BBP via OA software systems for Web-based and mobile devices.

  17. Computer software.

    PubMed

    Rosenthal, L E

    1986-10-01

    Software is the component in a computer system that permits the hardware to perform the various functions that a computer system is capable of doing. The history of software and its development can be traced to the early nineteenth century. All computer systems are designed to utilize the "stored program concept" as first developed by Charles Babbage in the 1850s. The concept was lost until the mid-1940s, when modern computers made their appearance. Today, because of the complex and myriad tasks that a computer system can perform, there has been a differentiation of types of software. There is software designed to perform specific business applications. There is software that controls the overall operation of a computer system. And there is software that is designed to carry out specialized tasks. Regardless of type, software is the most critical component of any computer system. Without it, all one has is a collection of circuits, transistors, and silicon chips.

  18. Software Management for the NOνA Experiment

    NASA Astrophysics Data System (ADS)

    Davies, G. S.; Davies, J. P.; C Group; Rebel, B.; Sachdev, K.; Zirnstein, J.

    2015-12-01

    The NOvA software (NOνASoft) is written in C++ and built on the Fermilab Computing Division's art framework, which uses the ROOT analysis software. NOνASoft makes use of more than 50 external software packages, is developed by more than 50 developers, and is used by more than 100 physicists from over 30 universities and laboratories on 3 continents. The software builds are handled by Fermilab's custom version of Software Release Tools (SRT), a UNIX-based software management system for large collaborative projects that is used by several experiments at Fermilab. The system provides software version control with SVN configured in a client-server mode and is based on code originally developed by the BaBar collaboration. In this paper, we present efforts towards distributing the NOvA software via the CernVM File System distributed file system. We also describe our recent work to use a CMake build system and Jenkins, the open-source continuous integration system, for NOνASoft.

  19. Engineering Complex Embedded Systems with State Analysis and the Mission Data System

    NASA Technical Reports Server (NTRS)

    Ingham, Michel D.; Rasmussen, Robert D.; Bennett, Matthew B.; Moncada, Alex C.

    2004-01-01

    It has become clear that spacecraft system complexity is reaching a threshold where customary methods of control are no longer affordable or sufficiently reliable. At the heart of this problem are the conventional approaches to systems and software engineering based on subsystem-level functional decomposition, which fail to scale in the tangled web of interactions typically encountered in complex spacecraft designs. Furthermore, there is a fundamental gap between the requirements on software specified by systems engineers and the implementation of these requirements by software engineers. Software engineers must perform the translation of requirements into software code, hoping to accurately capture the systems engineer's understanding of the system behavior, which is not always explicitly specified. This gap opens up the possibility for misinterpretation of the systems engineer's intent, potentially leading to software errors. This problem is addressed by a systems engineering methodology called State Analysis, which provides a process for capturing system and software requirements in the form of explicit models. This paper describes how requirements for complex aerospace systems can be developed using State Analysis and how these requirements inform the design of the system software, using representative spacecraft examples.

  20. Software archeology: a case study in software quality assurance and design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Macdonald, John M; Lloyd, Jane A; Turner, Cameron J

    2009-01-01

    Ideally, quality is designed into software, just as quality is designed into hardware. However, when dealing with legacy systems, demonstrating that the software meets required quality standards may be difficult to achieve. As the need to demonstrate the quality of existing software was recognized at Los Alamos National Laboratory (LANL), an effort was initiated to uncover and demonstrate that legacy software met the required quality standards. This effort led to the development of a reverse-engineering approach referred to as software archaeology. This paper documents the software archaeology approaches used at LANL to document legacy software systems. A case study for the Robotic Integrated Packaging System (RIPS) software is included.
